We regularly adjust ratings based on user feedback and product reviews. We update ratings to better inform our readers. Our goal is to be a trusted source of information, and we want to protect you from products and brands with major issues. After receiving negative feedback from users about a product, the Gadget Flow team will take another look at it and ensure the review's credibility.
DTEN OS
LED 27"
24.25" x 15.4" x 4.1"
Gadget Flow Editors
DTEN Zoom for Home Remote Office Device removes distractions from your workspace
The DTEN Zoom for Home Remote Office Device is powered by Zoom for high-quality audio and video to collaborate with your team.
Amy Poole
TechWorkspace
Stay productive when working remotely with the DTEN Zoom for Home Remote Office Device. This combination of a tablet and laptop is designed for convenience and practicality. The touchscreen device features everything you need to collaborate with a team, including eight noise-reducing microphones. The device also comes preloaded with the Zoom software, so it offers high-precision video calls with no need to install software or an app. You'll also have access to three smart webcams. Despite these high-tech features, it's simple to use: its user-friendly interface lets you access meetings, whiteboards, and contacts at the touch of a button, and you can sync your calendar with upcoming meetings if you need help with planning. Overall, the DTEN Zoom for Home will enhance your work life and allow you to collaborate as a remote team.
Get it for $599
Gadget Flow Rating 9.2/10
View Product Specs
Discovered on Jul 25, 2020, 7:00 am EDT
Discovered by Amy Poole
Editor's Quote
It provides everything you need for a productive work day.
Q: How to Hold Old Array Index Position After Filtering ListView
I have an EditText and a ListView in MainActivity. The EditText searches and filters the ListView's items.
I have two arrays:
newspapers[ ] holds the newspaper names,
adress[ ] holds the newspapers' web addresses.
I also have a setOnItemClickListener. In the listener I get the position of the clicked item and use that index to fetch the address from the adress[ ] array.
At first there was no search box. I added an addTextChangedListener to the EditText to filter the ListView's items. The problem started here!
When I type something into the search box it filters properly, but if I then click any item it opens the old item's address.
Example:
Assume,
A
B
C
D
are the items before filtering, and A's position in the array is 0, B=1, C=2, D=3.
If I type "C" into the search box, only C is visible after filtering, and if I click the "C" item it opens A's web site, because "A" was the first item before filtering.
How can I update array index positions after filtering ListView?
public class MainActivity extends Activity {
// Search EditText
EditText inputSearch;
// Listview Adapter
ArrayAdapter<String> adapter;
protected static final String SITE_ADDRESS = null;
// protected static final String SITE_NAME = null;
private ListView newspaper_list_view;
private String[] newspapers = { "A","B","C","D" };
private String[] adress = { "a.com","b.com","c.com","d.com" };
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
// setContentView(R.layout.webview);
inputSearch = (EditText) findViewById(R.id.inputSearch);
newspaper_list_view = (ListView) findViewById(R.id.gazeteliste);
adapter = new ArrayAdapter<String>(this,android.R.layout.simple_list_item_1, newspapers);
newspaper_list_view.setAdapter(adapter);
final Intent intent = new Intent(this, GazeteIcerik.class);
newspaper_list_view.setOnItemClickListener(new OnItemClickListener() {
public void onItemClick(AdapterView<?> arg0, View arg1,
int listposition, long arg3) {
// Toast.makeText(getApplicationContext(),"Liste Sırası:" +
// listposition, Toast.LENGTH_SHORT).show();
String site_adress = adress[listposition];
String site_name = newspapers[listposition];
intent.putExtra(SITE_ADDRESS, site_adress);
// intent.putExtra(SITE_NAME, site_name);
getActionBar().setDisplayHomeAsUpEnabled(true);
intent.putExtra(SITE_ADDRESS, new String[] { site_adress,site_name });
startActivity(intent);
}
});
inputSearch.addTextChangedListener(new TextWatcher() {
@Override
public void onTextChanged(CharSequence cs, int arg1, int arg2,int arg3) {
// When user changed the Text
MainActivity.this.adapter.getFilter().filter(cs);
}
@Override
public void beforeTextChanged(CharSequence arg0, int arg1,int arg2, int arg3) {
// TODO Auto-generated method stub
}
@Override
public void afterTextChanged(Editable arg0) {
// TODO Auto-generated method stub
}
});
}
// onCreate sonu
}
A: You're using two separate arrays for newspapers and addresses; that's the cause of the issue. You're only filtering the newspapers array, while the adress array stays the same, so a filtered position points at the wrong entry and you get the old item's address. There are two different things you can do to fix this.
*
*(Easy one) Create a hash map for the addresses, using the newspaper name as the key and the address as the value, so you can look up the correct address in any case.
public class MainActivity extends Activity {
    // Map each newspaper name to its address so lookups survive filtering
    HashMap<String, String> addressMap = new HashMap<String, String>();

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // ... existing setup of views and adapter ...
        for (int i = 0; i < newspapers.length; i++) {
            addressMap.put(newspapers[i], adress[i]);
        }
        newspaper_list_view.setOnItemClickListener(new OnItemClickListener() {
            public void onItemClick(AdapterView<?> arg0, View arg1, int listposition, long arg3) {
                // getItem() returns the item currently displayed at this
                // position, even after filtering
                String name = adapter.getItem(listposition);
                String address = addressMap.get(name);
            }
        });
    }
}
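To see the idea outside Android, here is a minimal, self-contained sketch (the class and method names are mine, not from the post) showing why position-based lookup breaks after filtering and how a name-keyed map fixes it:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Plain-Java sketch of the fix (no Android classes; FilterDemo,
// addressByName and filter are illustrative names).
public class FilterDemo {
    static final String[] NEWSPAPERS = { "A", "B", "C", "D" };
    static final String[] ADRESS = { "a.com", "b.com", "c.com", "d.com" };

    // Build a name -> address map, so lookups no longer depend on
    // list positions that shift when the list is filtered.
    static String addressByName(String name) {
        Map<String, String> addressMap = new HashMap<>();
        for (int i = 0; i < NEWSPAPERS.length; i++) {
            addressMap.put(NEWSPAPERS[i], ADRESS[i]);
        }
        return addressMap.get(name);
    }

    // Simulates the adapter's filtering: keep only items matching the query.
    static List<String> filter(String query) {
        List<String> visible = new ArrayList<>();
        for (String n : NEWSPAPERS) {
            if (n.contains(query)) {
                visible.add(n);
            }
        }
        return visible;
    }

    public static void main(String[] args) {
        List<String> visible = filter("C"); // only "C" is shown
        String clicked = visible.get(0);    // the user clicks position 0
        // Old bug: indexing the full array with the filtered position
        System.out.println(ADRESS[0]);              // prints a.com (wrong site)
        // Fix: resolve by the clicked item's text
        System.out.println(addressByName(clicked)); // prints c.com (correct)
    }
}
```

In the real activity you would obtain the clicked item's text with adapter.getItem(listposition) instead of simulating the filter.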
*
*(More complicated; choose this only if you need to add more features to your ListView) Create a custom object that stores the newspaper and its address, plus a custom adapter to filter the ListView.
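A minimal sketch of such a value object (illustrative, not from the original answer). ArrayAdapter displays each item via toString(), and its default filter matches against that string, so name and address can never get out of sync:

```java
// Illustrative value object: keeps each newspaper's name and address
// together, so filtering a List<Newspaper> can never separate them.
public class Newspaper {
    public final String name;
    public final String address;

    public Newspaper(String name, String address) {
        this.name = name;
        this.address = address;
    }

    // ArrayAdapter shows (and, by default, filters on) toString(),
    // so the list displays the newspaper name.
    @Override
    public String toString() {
        return name;
    }

    public static void main(String[] args) {
        Newspaper c = new Newspaper("C", "c.com");
        System.out.println(c + " -> " + c.address); // prints C -> c.com
    }
}
```

You would then use ArrayAdapter<Newspaper> and read the address directly from the clicked item.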
\section{Introduction}
The recent years have seen an increasing number of applications where computing is carried out in all sorts of environments. For example, drones are now being used to carry out tasks such as delivering packages, monitoring plantations and railways.
While these distributed systems should still satisfy well-known safety ({\em e.g.}, drones should not run out of energy) and liveness properties ({\em e.g.}, freedom of livelock), they are also subject to \emph{quantitative constraints} leading to new verification problems with explicit time constraints.
Consider, as our running example, the scenario where drones monitor some locations of interest, {\em e.g.}, drones checking for infested plantation areas\footnote{See \url{http://www.terradrone.pt/} -- In Portuguese.}, checking whether rail tracks are in place\footnote{See \url{http://fortune.com/2015/05/29/bnsf-drone-program/}.}, or drones monitoring locations with high risk of being trespassed. Drones should take a picture of each one of these points of interest. Moreover, for each point of interest, there should be \emph{a recent picture}, {\em i.e.}, not more than $M$ time units old for some given $M$. That is, the drones should collectively have a set of \emph{recent pictures} of all sensitive locations. In order to achieve this goal, drones need to fly consuming energy, and they need to return to the base station to recharge their batteries. The environment may interfere as there might be winds that move the drone in some direction, or there might be other flying objects blocking a drone's progression.
When designing such a system, engineers should specify the behavior of drones, {\em e.g.}, where to move, when to take a picture, when to return to a base station, etc.
Specific design goals and requirements lead to different verification problems. In this paper we introduce four different verification problems: realizability, survivability, reliability\ and recoverability, which we explain next:
The \emph{realizability problem}, consists of checking, whether under the given time constraints, the specified system can achieve the assigned goal, {\em e.g.}, always collect recent pictures of the sensitive locations.
In many settings, the drones themselves or the environment may behave non-determinis\-ti\-cally. For example, if a drone wants to reach a point to the north-east, it may first choose to either move north or east, both being equally likely. Similarly, there might be some wind at some location causing any drone under the wind's effect to move in the direction of the wind. A stronger property that takes into account such non-determinism is to check whether for all possible outcomes (of drone actions and environment interference), the specified system can achieve the assigned goal. We call this property \emph{survivability}.
The properties of realizability and survivability
represent the two extremes w.r.t. requirements that are put on a system.
A system that is realizable can achieve the designed goal, in some way. A system that satisfies survivability will
always achieve the goal, under any circumstances.
In some cases realizability may not be satisfactory, while in other cases survivability may be too costly or may not be achievable.
For such systems, intermediate solutions are of interest. To model such requirements in system design we introduce additional properties, namely, \emph{reliability} and \emph{recoverability}.
In order to ensure system goals, drones should always be able to function.
In particular, drones should always be able to come back to recharge w.r.t. both distance and energy.
In other words, drones should never go too far, reaching so-called \emph{points-of-no-return} where it may no longer be possible to safely return to the home base. Engineers should aim to program drones so that points-of-no-return~are not reached. This property is called recoverability.
A system satisfies reliability~if the system is always able to successfully continue with its expected performance, {\em i.e.}, the system never gets stuck. For example, drones should always be able to ensure system goals, regardless of the environmental interference they have experienced. At any stage, after monitoring sensitive locations successfully for some period of time, drones should be able to find a way to
continue with their good performance.
Keeping in mind possible technical failures and drone maintenance, it may be necessary for engineers to involve
additional drones in order to collectively provide recent pictures of the entire area of interest.
Besides formalizing the properties, we show that the following implications hold:
$$
S_{urvivability}
\ \implies \ R_{eliability}
\ \implies \ R_{ecoverability}
\ \implies \ R_{ealizability}\ .
$$
In our previous work~\cite{kanovich.mscs,kanovich12rta,kanovich15post,kanovich17jcs}, we proposed a timed Multiset Rewriting (MSR) framework for specifying compliance properties which are similar to \emph{quantitative safety properties} \cite{alpern87dc,clarkson10jcs}, investigating the complexity of a number of decision problems. These properties were defined over the set of {finite traces}, {\em i.e.}, the execution of a finite number of actions. Recoverability~is similarly defined over finite traces. The properties above, on the other hand, are defined over \emph{infinite traces}.
The transition to properties over infinite traces leads to many challenges, as one can easily fall into undecidable fragments of the verification problems. A main challenge is to identify syntactical conditions on specifications such that these verification
problems fall into a decidable fragment while interesting examples can still be specified.
The layout and the main contributions of this paper are the following:
\begin{itemize}
\item We propose (Section \ref{sec:timedmsr}) a novel class of systems called \emph{Progressing Timed Systems} (PTS)
that are specified as timed multiset rewriting theories. In a PTS, only a finite number of actions can be carried out in a bounded time interval. We illustrate its expressiveness by encoding a simplified drone example as a PTS~(Section~\ref{sec:progdrones});
\item
We define a language for specifying the relevant quantitative temporal properties of timed systems, used for defining the properties of \emph{realizability}, \emph{reliability}, \emph{recoverability} and \emph{survivability} described above (Section~\ref{sec:timedprop});
We then formally compare the expressiveness of these properties (Section~\ref{sec:relations-properties});
\item We investigate (Section~\ref{sec:complex})
the complexity of verification problems involving
the above properties.
While these problems are undecidable in general~\cite{kanovich.mscs}, we show that they are PSPACE-complete for PTSes.
We also show that when we bound time (as in bounded-model checking) realizability of PTSes is NP-complete, and that survivability and reliability~are in the $\Delta_2^p$ class of the polynomial hierarchy ($P^{NP}$)~\cite{papadimitriou07book}.
\item
We also provide a discussion on related and future work (Section~\ref{sec:related}).
\end{itemize}
\paragraph{Relation to our previous work}
This paper considerably
extends the conference paper~\cite{kanovich16formats}.
For the model introduced in~\cite{kanovich16formats} we provide details of the formalization of fresh values and the full proofs of this generalization of the results from~\cite{kanovich16formats}. All the material involving properties of reliability~and recoverability~is new, including the investigation of relations between all properties.
\section{Multiset Rewriting Systems}
\label{sec:timedmsr}
Assume a finite first-order typed alphabet, $\Sigma$, with variables, constants, function and predicate symbols.
Terms and formulas are constructed as usual (see~\cite{enderton}) by applying symbols of correct type (or sort).
\begin{definition}[Fact]
If $P$ is a predicate of type $\tau_1 \times \tau_2 \times \cdots \times \tau_n \rightarrow o$, where $o$ is the type for propositions, and $u_1, \ldots, u_n$ are terms of types $\tau_1, \ldots, \tau_n$, respectively, then $P(u_1, \ldots, u_n)$ is a \emph{fact}.
A fact is \emph{grounded} if it does not contain any variables.
\end{definition}
We assume that the alphabet contains the constant $z : Nat$ denoting zero and the function $s : Nat \to Nat$ denoting the successor function. Whenever it is clear from the context, we write $n$ for $s^n(z)$ and $(n + m)$ for $s^n(s^m(z))$.
\vspace{0.5em}
Additionally, we allow an unbounded number of fresh values~\cite{cervesato99csfw,durgin04jcs} to be involved.
\vspace{0.5em}
In order to specify timed systems, to each fact we attach a timestamp denoting time.
\begin{definition}[Timestamped Fact]
\emph{Timestamped facts} are of the form $F@t$, where $F$ is a fact and $t \in \mathbb{N}$ is a natural number called a {\em timestamp}.
\end{definition}
Notice that timestamps are \emph{not} constructed by using the successor function.
There is a special predicate symbol $Time$ with arity zero, which will be used to represent global time.
For simplicity, instead of timestamped facts, we often simply say facts. Also, when we want to emphasize the difference between a fact $F$, and a timestamped fact $F@T$, we say that $F$ is an \emph{untimed fact}.
\begin{definition}[Configuration]
A {\em configuration} is a finite multiset of ground timestamped facts,
~$\mathcal{S} = \{~Time@t, ~F_1@t_1, \ldots, ~F_n@t_n~\}$
with a single occurrence of a $Time$ fact. \\[3pt]
Given a configuration $\mathcal{S}$ containing $Time@t$, we say that a fact $F@t_F$ in $\mathcal{S}$ is a \emph{future fact} if its timestamp is greater than the global time $t$, {\em i.e.},~if ~$t_F>t$.
Similarly, a fact $F@t_F$ in $\mathcal{S}$ is a \emph{past fact} if ~$t_F<t$, and a fact $F@t_F$ in $\mathcal{S}$ is a \emph{present fact} if ~$t_F=t$.
\end{definition}
Configurations are to be interpreted as states of the system. Consider the following configuration where the global time is 4:
\[
\begin{small}
\mathcal{S}_1 = \left\{\begin{array}{l}
Time@4, \,Dr(d1,1,2,10)@4,\,Dr(d2,5,5,8)@4,\,P(p1,1,1)@3,\,P(p2,5,6)@0
\end{array}\right \}
\end{small}
\label{conf-example-1}
\]
Fact $Dr(d_{Id},x,y,e)@t$ denotes that drone $d_{Id}$ is at position $(x,y)$ at time $t$ with $e$ energy units left in its battery; fact $P(p_{ID},x,y)@t$ denotes that a point to be monitored is at position $(x,y)$ and that the last picture of it was taken at time $t$. Thus, the above configuration denotes a scenario with two drones located at positions $(1,2)$ and $(5,5)$ and with 10 and 8 energy units, respectively, and with two points to be monitored at positions $(1,1)$ and $(5,6)$, where the former was last photographed at time $3$ and the latter at time $0$.
Using variables, including time variables, we are able to represent (sets of) configurations of particular form.
For example,
$$~Time@(T+D), \,Dr(X,5,6,Y)@(T+D),\,P(p2,5,6)@T$$
specifies that some drone $X$ with $Y$ units of energy is currently at the position $(5,6)$ which has last been photographed $D$ time units ago. This holds for any configuration containing above facts for some instantiation of variables $T,D,X$ and $Y$.
\vspace{0.5em}
Configurations are modified by multiset rewrite rules which can be interpreted as actions of the system.
There is only one rule, $Tick$, which
represents how global time advances:
\begin{equation}
\label{eq:tick}
Time@T \lra Time @ (T+1)
\end{equation}
where $T$ is a time variable denoting the global time.
With an application of a $Tick$ rule, a configuration,
$\{~Time@t, \,F_1@t_1, \ldots, \,F_n@t_n~\}$, that represents the state of a system at moment \,$t$, is replaced with the configuration
\mbox{$\{~Time@(t +1 ), \,F_1@t_1, \ldots, \,F_n@t_n~\} $} \ representing the system at moment ~$t+1$.
\vspace{0.5em}
The remaining rules are \emph{instantaneous} as they do not modify global time, but may modify the remaining facts of configurations (those different from $Time$). Instantaneous rules have the form:
\begin{equation}
\begin{array}{l}
Time@T, \,
\,W_1@T_1,\ldots,\,W_p@T_p,
\red{\,F_1@T_1'}, \ldots, \red{\,F_n@T_n'} \ \mid \ \,\mathcal{C} \ \lra \\ \
\exists \vec{X}.\,[ \ Time@T,
\,W_1@T_1,\ldots,\,W_p@T_p, \,\blue{Q_1@(T + D_1)}, \ldots, \,\blue{Q_m@(T + D_m)} \, ]
\label{eq:instantaneous}
\end{array}
\end{equation}
where $D_1, \ldots, D_m$ are natural numbers,
\, $W_1@T_1,\ldots,\,W_p@T_p, \, F_1@T_1', \ldots, {\,F_n@T_n'}$
are timestamped facts, possibly containing variables, and $\mathcal{C}$ is the guard of the rule which is a set of constraints
involving the time variables appearing as timestamps of facts in the rule's pre-condition, {\em i.e.}, the variables ~$T, T_1, \ldots, T_p, T_1', \ldots, T_n'$.
Facts $W_i, F_j$ and $Q_k$ are all different from the fact $Time$ and $\vec{X}$ are variables not appearing in $W_1,\ldots,\,W_p$.
Constraints may be of the form:
\begin{equation}
\label{eq:constraints}
T > T' \pm N \quad \textrm{ and } \quad T = T' \pm N
\end{equation}
where $T$ and $T'$ are time variables, and $N\in\mathbb{N}$ is a natural number.
{Here, and in the rest of the paper, the symbol $\pm$ stands for either $+$ or $-$, that is, constraints may involve addition or subtraction.}
We use $T \geq T' \pm N$ to denote the disjunction of $T > T' \pm N$ and $T = T' \pm N$.
All variables in the guard of a rule are assumed to appear in the rule's pre-condition.
\vspace{0.5em}
Finally, the variables $\vec{X}$ that are existentially quantified in the rule (Eq.~\ref{eq:instantaneous})
are to be replaced by fresh values, also called \emph{nonces} in protocol security literature~\cite{cervesato99csfw,durgin04jcs}.
As in our previous work~\cite{kanovich13ic}, we use nonces whenever a unique identification is required, for example for drone identification.
Let \,$\mathcal{W}$ \, and \, $\mathcal{W}'$ \, be sets of timestamped facts.
A rule
~$\mathcal{W} \mid \mathcal{C} \lra \exists \vec{X}.\, \mathcal{W}'$~
can be applied to a configuration $\mathcal{S}$ if there is a ground substitution $\sigma$ such that \, $\mathcal{W}\sigma \subseteq \mathcal{S}$ \, and that\, $\mathcal{C}\sigma$~is true, and the variables $\vec{X}$ are fresh.
The resulting configuration is~
$$\big(\,(\mathcal{S} \setminus \mathcal{W}) \cup \mathcal{W}'\, \big)\sigma \ . $$
More precisely, given some rule $r$, an instance of a rule is obtained by substituting all variables appearing in the pre- and post-condition of the rule with constants. This substitution applies to variables appearing in terms inside facts, variables representing fresh values, as well as time variables used for specifying timestamps of facts.
An instance of an instantaneous rule can only be applied if all the constraints in its guard are satisfied.
For example, since~$ 0+2<5$ (when instantiating $T'$ as the timestamp of the fact $\,P(p2,5,6)@0$\,) rule
\[
\begin{small}
\begin{array}{l}
Time@T, \red{P(I,X,Y)@T'}, \red{Dr(Id,X,Y,E+1)@T} \mid \{~ T'+2<T~\}%
\lra
\\ \qquad \qquad \qquad Time@T, \blue{P(I,X,Y)@T}, \blue{Dr(Id,X,Y,E)@(T+1)}
\end{array}
\end{small}
\]
is applicable to configuration
$$
\{\ Time@5, \,Dr(d1,1,2,10)@5,\,Dr(d2,5,6,7)@5,\,P(p1,1,1)@3,\,P(p2,5,6)@0~\} \, ,
$$
resulting in configuration \
$$
\{\ Time@5, \,Dr(d1,1,2,10)@5,\,Dr(d2,5,6,6)@6,\,P(p1,1,1)@3,\,P(p2,5,6)@5~\} \, ,
$$
but it is not applicable to the following configuration
$$
\{\ Time@5, \,Dr(d1,1,2,10)@5,\,Dr(d2,5,5,8)@5,\,P(p1,1,1)@3,\,P(p2,5,6)@4 \ \} \
$$
because there are no $P(p,x,y)@T'$ facts in the configuration whose timestamp $T'$ satisfies the given constraint, $~ T'+2<T$, involving the global time $T$. Namely, ~$3+2~{\not}{<}~5$ and ~$4+2~{\not}{<}~5$.
\vspace{0.5em}
Following \cite{durgin04jcs} we say that a timestamped fact $F@T$ is \emph{consumed} by some rule $r$ if that fact occurs more times in the rule $r$ on the left side than on the right side. A timestamped fact $F@T$ is \emph{created} by some rule $r$ if that fact occurs more times in the rule $r$ on the right side than on the left side.
Hence, facts ${F_1@T_1'}, \ldots, {F_n@T_n'}$ are consumed by the rule (Eq.~\ref{eq:instantaneous}) while facts ${Q_1@(T + D_1)}, \ldots, {Q_m@(T + D_m)}$ are created by that rule.
Notice that a fact $F$ can appear in a rule with different timestamps, but for the above notions we count instances of the same timestamped fact $F@T$.
In a rule, we usually color \red{red} the consumed facts and \blue{blue} the created facts.
\begin{remark}
The feature of time constraints that may be attached to the rules is not a restriction on the model. Quite the contrary,
using constraints, we are able to formalize time-sensitive
properties and problems that involve explicit time requirements.
The set of constraints may, however, be empty, {\em i.e.}, rules may have no constraints attached.
\end{remark}
\vspace{1mm}
We write ~$\mathcal{S} \lra_r \mathcal{S}'$\ for the one-step relation where configuration $\mathcal{S}$ is rewritten to $\mathcal{S}'$ using an instance of rule $r$.
For a set of rules $\mathcal{R}$, we define ~$\mathcal{S} \lra_\mathcal{R}^* \mathcal{S}'$~ as the transitive reflexive closure of the one-step relation on all rules in $\mathcal{R}$. We elide the subscript \ $\mathcal{R}$, when it is clear from the context, and simply write ~$\mathcal{S} \lra^* \mathcal{S}'$.
\vspace{2mm}
Notice that by the nature of multiset rewriting there are various aspects of
non-determinism in the model. For example, different actions and even different instantiations of the
same rule may be applicable to the same configuration $\mathcal{S}$, which may
lead to different resulting configurations $\mathcal{S}'$.
\begin{definition}[Timed MSR System]
A \emph{timed MSR system}
$\mathcal{T}$ is a set of rules containing only instantaneous rules (Eq.~\ref{eq:instantaneous}) and the $Tick$ rule (Eq.~\ref{eq:tick}).
\end{definition}
A trace of a timed MSR system
is constructed by a sequence of its rules.
In this paper we consider both finite and infinite traces.
A \emph{finite trace} of a timed MSR system $\mathcal{T}$ starting from an initial configuration $\mathcal{S}_0$ is a sequence
$$
\mathcal{S}_0 \lra \mathcal{S}_1 \lra \mathcal{S}_2 \lra \cdots \lra \mathcal{S}_n
$$
and an \emph{infinite trace} of $\mathcal{T}$ starting from an initial configuration $\mathcal{S}_0$ is a sequence
$$
\mathcal{S}_0 \lra \mathcal{S}_1 \lra \mathcal{S}_2 \lra \cdots \lra \mathcal{S}_n \lra \cdots
$$
where for all ~$i \geq 0$, \ $\mathcal{S}_{i} \lra_{r_i} \mathcal{S}_{i+1}$ \ for some $r_i \in \mathcal{T}$.
\vspace{0.5em}
We will pay special attention to periods of time represented in traces.
Since by the $Tick$ rule time advances by one time unit, a finite (infinite) number of $Tick$ rules in a trace represents a finite (infinite) time period.
One can easily imagine traces containing a finite number of $Tick$ rules and an infinite number of instantaneous rules. Such traces would represent an infinite number of actions performed in a finite time interval.
In this paper we are not interested in such traces and consider only so-called \emph{infinite time traces}.
\begin{definition}[Infinite Time Trace]
\label{def:infinity}
A trace $\mathcal{P}$ of a timed MSR system
$\mathcal{T}$ is an ~\emph{infinite time trace} ~if time tends to infinity in $\mathcal{P}$, {\em i.e.},
$(\forall n \in \mathbb{N}) \ (\exists~\mathcal{S}\in \mathcal{P})$ such that \ $Time@T \in \mathcal{S} $ and ~$T> n$.
\end{definition}
Since in any trace, the global time ticks in single time units, it immediately follows that any infinite time trace is an infinite trace, and it contains an infinite number of $Tick$ rules.
We have shown in our previous work~\cite{kanovich11jar,kanovich13ic,kanovich.mscs,kanovich15post,kanovich17jcs} that problems involving MSR, such as checking whether a configuration can be reached, are undecidable if no further restrictions are imposed. These problems are undecidable already when considering only finite traces. However, these problems are decidable for balanced MSR systems, assuming an upper bound, $k$, on the size of facts; both notions are formally defined below.
\begin{definition}[Balanced System]
\label{def:balanced}
A timed MSR system $\mathcal{T}$ is \emph{balanced} if for all instantaneous rules $r \in \mathcal{T}$, $r$ creates the same number of facts as it consumes, that is, instantaneous rules are of the form:
\begin{equation}
\begin{array}{l}
Time@T, \,\mathcal{W}, \red{\,F_1@T_1'}, \ldots, \red{\,F_n@T_n'} \ \mid \ \,\mathcal{C} \ \lra \\ \ \
\exists \vec{X}.~[ \ Time@T,\, \mathcal{W}, \blue{\,Q_1@(T + D_1)}, \ldots, \blue{\,Q_n@(T + D_n)} \, ] \ .
\label{eq:instantaneous-balanced}
\end{array}
\end{equation}
\end{definition}
By consuming and creating facts, rewrite rules can increase and decrease the number of facts in configurations throughout a trace.
However, in balanced MSR systems,
rule application does not affect the number of facts in a configuration. That is, the enabling configuration has the same number of facts as the resulting one.
Hence, the number of facts in configurations throughout a trace is constant.
\begin{definition}[Size of a Fact]
The size of a timestamped fact $P@T$, written $|P@T|$ is the total number of alphabet symbols appearing in $P$.
\end{definition}
For instance, $|P(s(z),f(a,X), a)@12| = 7$. For our complexity results, we assume a bound, $k$, on the size of facts.
Without this bound (among other restrictions), any interesting decision problem is shown undecidable by encoding the Post correspondence problem~\cite{durgin04jcs}.
\subsection{Progressing Timed Systems}
\label{sec:pts}
We introduce a particular class of timed MSR systems, called \emph{progressing timed MSR systems}~(PTSes), in which only a finite number of actions can be carried out in a bounded time interval. This is a natural condition for many systems.
\begin{definition}[Progressing Timed System]
\label{def:progressing}
A timed MSR system $\mathcal{T}$ is a \emph{progressing timed MSR system (PTS)} if $\mathcal{T} $ is balanced and for all instantaneous rules $r \in \mathcal{T}$:
\begin{itemize}
\item[i)] Rule $r$ creates \emph{at least one} fact with a timestamp greater than the global time, that is, in (Eq.~\ref{eq:instantaneous}), ~ $D_i \geq 1$~ for at least one ~$i \in \{1, \dots, m \}$;
\item[ii)]
Rule $r$ consumes \emph{only} facts with timestamps in the past or at the current time,
that is, in (Eq.~\ref{eq:instantaneous}), the set of constraints ~$\mathcal{C}$ contains the set
$$\mathcal{C}_r = \{~T \geq T_i' \mid F_i@T_i', ~1 \leq i \leq n~\} \ . $$
\end{itemize}
\end{definition}
For readability, we will assume from this point onward that for all rules $r$, the set of its constraints implicitly contains the set ~$\mathcal{C}_r$ as shown in Definition~\ref{def:progressing}, not writing ~$\mathcal{C}_r$ explicitly in our specifications.
\vspace{0.5em}
The following rule, denoting the action of a drone taking a picture of a point of interest, is an example of a rule in a PTS:
\[
\begin{array}{l}
Time@T, \red{~P(I,X,Y)@T'}, \red{~Dr(Id,X,Y,E+1)@T} ~ \mid ~ \{~T'<T ~\}
\lra
\\ \qquad \qquad \qquad Time@T, \blue{~P(I,X,Y)@T}, \blue{~Dr(Id,X,Y,E)@(T+1)}
\end{array}
\]
Notice that the constraint $T' <T$ is used to prevent drones from taking pictures of the same point of interest repeatedly at the same time, in order to save energy. Also, the created future fact prevents the same drone from performing the same action in the same time unit.
\vspace{0.5em}
The following proposition establishes a bound on the number of instances of instantaneous rules appearing between two consecutive instances of $Tick$ rules in a trace of a PTS. This bound is then used to formalize the intuition that PTSes always move things forward.
\begin{proposition}
\label{prop:bounded-length}
Let $\mathcal{T}$ be a PTS,
$\mathcal{S}_0$ an initial configuration and $m$ the number of facts in $\mathcal{S}_0$. For all traces $\mathcal{P}$ of $\mathcal{T}$ starting from $ \mathcal{S}_0$, let
$$
\mathcal{S}_i \lra_{Tick} \mathcal{S}_{i+1} \lra \cdots \lra \mathcal{S}_j \lra_{Tick} \mathcal{S}_{j+1}
$$
be any subtrace of ~$\mathcal{P}$ with exactly two instances of the $Tick$ rule, one at the beginning and the other at the end. Then ~$j - i < m$.
\end{proposition}
\begin{proof}
The proof follows easily from Definition~\ref{def:progressing}.
Let $\mathcal{P}$ be an arbitrary trace in $\mathcal{T}$ and
$$
\mathcal{S}_i \lra_{Tick} \mathcal{S}_{i+1} \lra \cdots \lra \mathcal{S}_j \lra_{Tick} \mathcal{S}_{j+1}
$$
an arbitrary subtrace of $\mathcal{P} $ with exactly two instances of the $Tick$ rule.
All the rules between $Tick$ rules in the above subtrace are instantaneous.
Since $\mathcal{T} $ is a PTS, application of any instantaneous rule creates at least one future fact
and consumes at least one present or past fact.
In other words, an application of an instantaneous rule reduces the total number of past and present facts in the configuration.
Because system $\mathcal{T}$ is balanced, all above configurations $\mathcal{S}_i, \dots, \mathcal{S}_j$ have the same number of facts, $m$.
Also, recall that the fact $Time$ does not change with application of instantaneous rules.
Consequently, since there are at most $m-1$ present or past facts different from $Time$ in any $\mathcal{S}_k$, $i<k\leq j$, a series of at most $m-1$ instantaneous rules can be applied between two $Tick$ rules.
\qed
\end{proof}
By the above proposition, in a PTS an unbounded number of instantaneous rules cannot be applied in a bounded interval of time.
Also, from the above result we can conclude that infinite traces in PTSes represent infinite time periods. In particular, this means that traces of PTSes exhibit no phenomena similar to the Zeno paradox.
This is stated in the following proposition.
\vspace{0.5em}
\begin{proposition}
\label{prop:progressing}
Let $\mathcal{T} $ be a PTS.
All infinite traces of $\mathcal{T}$ are infinite time traces.
\end{proposition}
\begin{proof}
Assume that in some infinite trace $\mathcal{P}$ of a PTS $\mathcal{T}$ the current time never exceeds some value $M$.
Then, since timestamps are natural numbers and time advances by a single time unit,
there are at most $M$ time ticks in $\mathcal{P}$.
As per Proposition \ref{prop:bounded-length} there are at most $m-1$ instantaneous rules between any $Tick$ rule and the next $Tick$ rule in $\mathcal{P}$.
Consequently, in total there are at most $(M+1)\cdot (m-1)+M$ rules in $\mathcal{P}$, {\em i.e.}~$\mathcal{P}$ is a finite trace. Contradiction.
\qed
\end{proof}
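The counting argument above can be checked with a small script. The following Python sketch (ours, purely illustrative and not part of the formal development) computes the resulting bound on trace length from the maximal time $M$ and the number of facts $m$ per configuration:

```python
def max_trace_length(M: int, m: int) -> int:
    """Upper bound on the number of rule applications in a trace of a
    balanced PTS whose global time never exceeds M: at most m - 1
    instantaneous rules between consecutive Tick rules, and at most M
    Tick applications overall, i.e., (M + 1) * (m - 1) + M rules."""
    return (M + 1) * (m - 1) + M

# E.g., with at most 3 ticks and 4 facts per configuration:
assert max_trace_length(M=3, m=4) == 15
```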
Finally, notice that the PTS model imposes several syntactic conditions, {\em e.g.}, the balanced condition (Definition~\ref{def:balanced}), the form of time constraints (Eq.~\ref{eq:constraints}), and the form of instantaneous rules (Eq.~\ref{eq:instantaneous}). Each of these conditions has been carefully developed.
As we have shown in our previous work~\cite{kanovich.mscs}, relaxing any of these conditions leads to undecidability of
important verification problems, such as the reachability problem, over finite traces.
Clearly, these conditions are also needed for infinite traces.
The additional challenge when allowing infinite traces is to make sure that time advances in such a way that traces represent arbitrarily large
periods of time. Our definition of PTS is a simple and elegant way to enforce this. Moreover, as we show in Section~\ref{sec:progdrones}, with our PTS model it is still possible to specify many interesting examples including our motivating example and still prove the decidability of our verification problems involving infinite traces (Section~\ref{sec:complex}).
\section{Programming Drone Behavior using PTS}
\label{sec:progdrones}
\begin{figure}[t]
\begin{scriptsize}
\[
\begin{array}{l}
Time@T, \,\mathcal{P}(p_1,\ldots,p_n), \red{\,Dr(Id,X,Y,E+1)@T} \, \mid \, doMove\,(Id,X,Y,E+1,T,T_1,\ldots,T_n,north) \lra \\
\qquad \qquad Time@T, \,\mathcal{P}(p_1,\ldots,p_n), \blue{\,Dr(Id,X,Y+1,E)@(T+1)}\\[5pt]
Time@T, \, \mathcal{P}(p_1,\ldots,p_n), \red{\,Dr(Id,X,Y+1,E+1)@T} \mid doMove\,(Id,X,Y+1,E+1,T,T_1,\ldots,T_n,south) \lra \\
\qquad \qquad Time@T, \,\mathcal{P}(p_1,\ldots,p_n), \blue{\,Dr(Id,X,Y,E)@(T+1)}\\[5pt]
Time@T, \,\mathcal{P}(p_1,\ldots,p_n), \red{\,Dr(Id,X+1,Y,E+1)@T} \mid doMove\,(Id,X+1,Y,E+1,T,T_1,\ldots,T_n,west) \lra \\
\qquad \qquad Time@T, \,\mathcal{P}(p_1,\ldots,p_n), \blue{\,Dr(Id,X,Y,E)@(T+1)}\\[5pt]
Time@T, \,\mathcal{P}(p_1,\ldots,p_n), \red{\,Dr(Id,X,Y,E+1)@T} \mid doMove\,(Id,X,Y,E+1,T,T_1,\ldots,T_n,east) \lra \\
\qquad \qquad Time@T, \,\mathcal{P}(p_1,\ldots,p_n), \blue{\,Dr(Id,X,Y,E)@(T+1)}\\[5pt]
Time@T, \,\mathcal{P}(p_1,\ldots,p_n), \red{\,Dr(Id,x_b,y_b,E)@T} \mid doCharge\,(Id,E,T,T_1,\ldots,T_n) \lra \\
\qquad \qquad Time@T, \,\mathcal{P}(p_1,\ldots,p_n), \blue{\,Dr(Id,x_b,y_b,E+1)@(T+1)}\\[5pt]
Time@T, \,P(p_1,X_1,Y_1)@T_1, \ldots, \red{\,P(p_i,X,Y)@T_i}, \ldots, P(p_n,X_n,Y_n)@T_n, \red{\,Dr(Id,X,Y,E)@T} \\[2pt]
\qquad \mid \ doClick\,(Id,X,Y,E,T,T_1,\ldots,T_i,\ldots,T_n) \lra \\[3pt]
Time@T, \,P(p_1,X_1,Y_1)@T_1, \ldots, \blue{\,P(p_i,X,Y)@T}, \ldots, \,P(p_n,X_n,Y_n)@T_n, \blue{\,Dr(Id,X,Y,E-1)@(T+1)}\\[5pt]
Time@T, \red{\,Dr(Id,X,Y,E)@T} \mid \ hasWind\,(X,Y,north) \lra Time@T, \blue{\,Dr(Id,X,Y+1,E)@(T+1)}\\[5pt]
Time@T, \red{\,Dr(Id,X,Y+1,E)@T} \mid \ hasWind\,(X,Y,south) \lra Time@T, \blue{\,Dr(Id,X,Y,E)@(T+1)}\\[5pt]
Time@T, \red{\,Dr(Id,X+1,Y,E)@T} \mid \ hasWind\,(X,Y,west) \lra Time@T, \blue{\,Dr(Id,X,Y,E)@(T+1)}\\[5pt]
Time@T, \red{\,Dr(Id,X,Y,E)@T} \mid \ hasWind\,(X,Y,east) \lra Time@T, \blue{\,Dr(Id,X+1,Y,E)@(T+1)}
\end{array}
\]
\end{scriptsize}
\vspace{-2mm}
\caption{Macro rules specifying the scenario where drones take pictures of points of interest. Here $\mathcal{P}(p_1,\ldots,p_n)$~ denotes ~$P(p_1,X_1,Y_1)@T_1, \ldots, P(p_n,X_n,Y_n)@T_n$. Moreover, we assume that the drones stay in a grid of size ~$x_{max} \times y_{max}$~ and have at most~ $e_{max}$ ~energy units.}
\label{fig:rules-complete}
\end{figure}
Figure~\ref{fig:rules-complete} depicts the macro rules of our motivating scenario where drones are moving on a fixed grid of size $x_{max} \times y_{max}$, have at most $e_{max}$ energy units and take pictures of some points of interest. We assume that there are $n$ such points $p_1, \ldots, p_n$, where $n$ is fixed, a base station is at position $(x_b,y_b)$, and that the drones should regularly take pictures so that all pictures are recent. That is, at any time, each of the points of interest should have been photographed in the last $M$ time units, for some given $M$.
\vspace{0.5em}
Clearly, if drones non-deterministically choose to move in some direction without a particular strategy, they will fail to achieve the assigned goal. A drone's strategy can be specified using time constraints.
For this example, the strategy depends on the differences $T-T_i$, for $1 \leq i \leq n$, specifying the elapsed time since the last picture of the point $p_i$ was taken. This can be specified with the following
set of time constraints:
\[
\mathcal{T}(d_1,\ldots,d_n) = \{~T - T_1 = d_1, \ldots, T - T_n = d_n~\}
\]
where for all $1 \leq i \leq n$ we instantiate $d_i$ by values in $\{0,\ldots,M\}$.
For example, the macro rule with ~$doMove\,(Id,X,Y,E+1,T,T_1,\ldots,T_n,north)$ in Figure~\ref{fig:rules-complete} is replaced by the set of rules:
\[
\begin{small}
\begin{array}{l}
Time@T, \, \mathcal{P}(p_1,\ldots,p_n), \red{\,Dr(d1,0,0,1)@T} \
\\ \qquad \qquad \qquad \qquad \qquad \qquad\qquad \ \
\mid \ \mathcal{T}(0,\ldots,0), \,DoMv\,(d1,0,0,1,0,\ldots,0,north)
\\[2pt]
\quad \qquad
\lra Time@T, \, \mathcal{P}(p_1,\ldots,p_n), \blue{\,Dr(d1,0,1,0)@(T+1)}\\[5pt]
Time@T, \,\mathcal{P}(p_1,\ldots,p_n), \red{\,Dr(d1,0,0,1)@T} \
\\ \qquad \qquad \qquad \qquad\qquad \qquad\qquad \ \
\ \mid \ \mathcal{T}(0,\ldots,1), \,DoMv\,(d1,0,0,1,0,\ldots,1,north)
\\[2pt]
\quad \qquad
\lra Time@T, \,\mathcal{P}(p_1,\ldots,p_n), \blue{\,Dr(d1,0,1,0)@(T+1)}\\
\qquad
\cdots\\
Time@T, \,\mathcal{P}(p_1,\ldots,p_n), \red{\,Dr(d2,x_{max},y_{max}-1,e_{max})@T}\\
\qquad \qquad \qquad \mid \ \mathcal{T}(M,\ldots,M), \,DoMv\,(d2,x_{max},y_{max}-1,e_{max},M,\ldots,M,north) \\[2pt]
\quad \qquad
\lra \ Time@T, \,\mathcal{P}(p_1,\ldots,p_n), \blue{\,Dr(d2,x_{max},y_{max},e_{max}-1)@(T+1)}\\
\end{array}
\end{small}
\]
where $doMove$ returns a tautology or an unsatisfiable constraint depending on the desired behavior of the drone.
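Note that this instantiation produces $(M+1)^n$ ground rules per macro, one for each tuple $(d_1,\ldots,d_n)$. A minimal Python sketch of the enumeration (the constraint strings are purely illustrative):

```python
from itertools import product

def ground_constraint_sets(n: int, M: int):
    """Enumerate all instantiations of the constraint set
    T(d_1, ..., d_n) = { T - T_i = d_i } with d_i in {0, ..., M}."""
    for ds in product(range(M + 1), repeat=n):
        yield [f"T - T_{i + 1} = {d}" for i, d in enumerate(ds)]

# With n = 2 points of interest and M = 1 there are (M+1)^n = 4 instances.
assert len(list(ground_constraint_sets(2, 1))) == 4
```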
Finally, the macro rules for taking a picture, charging, and the macros specifying winds are instantiated similarly.
While most of the rules behave as expected, the click and wind rules deserve explanation. The click rule is applicable if the drone is at the position of some point of interest.
If applied, the timestamp of the fact $P(p_i,X,Y)$ is updated to the current time $T$. The wind rules are similar to the move rules, pushing the drone in some direction, but do not cause the drone to consume energy.
In our implementation~\cite{kanovich16formats} we used a more sophisticated approach, described in~\cite{talcott16quanticol}, based on soft constraints to specify a drone's strategy. It can be translated into a PTS that incorporates the strategy described above.
\paragraph{Other Examples} \
Besides examples involving drones, other examples also seem to be progressing. For example, in our previous work~\cite{kanovich.mscs}, we specify a monitor for clinical trials using our timed MSR framework with discrete time. This specification is progressing.
We have been investigating a number of other examples that are progressing as well. For example, \cite{talcott15wirsing} models a simplified version of a package delivery system inspired by Amazon's Prime Air service, and \cite{talcott16quanticol} models a patrolling bot which moves from one point to another.
\section{Quantitative Temporal Properties}
\label{sec:timedprop}
We start in Section~\ref{subsec:critical} by introducing critical configuration specifications, a language used to define the properties that a system shall satisfy. In Section~\ref{subsec:time-sampling}, we introduce the lazy time sampling, a condition on traces that intuitively enforces systems to react at the expected time. Then, in Section~\ref{sec:problems}, we introduce a number of verification problems.
\subsection{Critical Configurations and Compliant Traces}
\label{subsec:critical}
We introduce critical configuration specifications, a language for specifying the bad configurations that a system shall avoid.
\begin{definition}[Critical Configuration]
A \emph{critical configuration specification} is a set of pairs
$$\mathcal{CS} = \{~\tup{\mathcal{S}_1, \mathcal{C}_1}, \ldots, \tup{\mathcal{S}_n, \mathcal{C}_n}~\} \ .$$
Each pair ~$\tup{\mathcal{S}_j,\mathcal{C}_j}$~ is of the form:
$$
\tup{~\{F_1@T_1, \ldots, F_{p_j}@T_{p_j}\}, \mathcal{C}_j~}
$$
\noindent
where $T_1, \ldots, T_{p_j}$ are time variables, $F_1, \ldots, F_{p_j}$ are facts (possibly containing variables) and $\mathcal{C}_j$ is a set of time constraints involving only the variables $T_1, \ldots, T_{p_j}$.
Given a critical configuration specification, $\mathcal{CS}$, we classify a configuration $\mathcal{S}$ as \emph{critical} w.r.t. $\mathcal{CS}$ if for some $1 \leq i \leq n$, there is a grounding substitution, $\sigma$, such that:
\begin{itemize}
\item $\mathcal{S}_i \sigma \subseteq \mathcal{S}$;
\item All constraints in $\mathcal{C}_i \sigma$ are satisfied.
\end{itemize}
\end{definition}
Substitution application ($\mathcal{S} \sigma$) is defined as usual~\cite{enderton}, {\em i.e.}, by mapping time variables in $\mathcal{S}$ to natural numbers, nonce names to nonce names (renaming of nonces) and non-time variables to terms.
Notice that nonce renaming is assumed since the particular nonce name should not matter for classifying a configuration as critical.
Nonce names cannot be specified in advance, since they are freshly generated in a trace, {\em i.e.},~during the execution of the process being modelled.
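To illustrate the definition, the following Python sketch decides whether a ground configuration is critical. It deliberately simplifies the actual matching: facts are compared literally and only timestamps are bound, with no term substitution and no nonce renaming:

```python
def is_critical(config, spec):
    """config: list of (fact, timestamp) pairs with ground facts.
    spec: list of pairs (pattern, constraint), where pattern is a list
    of facts and constraint is a predicate over the matched timestamps.
    Simplified sketch: patterns match facts literally; only the time
    variables are instantiated."""
    lookup = {}
    for fact, t in config:
        lookup.setdefault(fact, []).append(t)
    for pattern, constraint in spec:
        # collect, for each pattern fact, its occurrences in config
        candidates = [lookup.get(f, []) for f in pattern]
        if any(not c for c in candidates):
            continue  # some fact of the pattern is absent
        def search(i, bound):
            if i == len(candidates):
                return constraint(*bound)
            return any(search(i + 1, bound + [t]) for t in candidates[i])
        if search(0, []):
            return True
    return False

# A drone with zero energy is critical regardless of time constraints.
config = [("Time", 7), ("Dr(d1,2,3,0)", 7)]
spec = [(["Dr(d1,2,3,0)"], lambda t: True)]
assert is_critical(config, spec)
```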
\begin{example}
We can specify usual safety conditions which do not involve time. For example, a drone should never run out of energy. This can be specified by the following critical configuration specification:
\begin{small}
\[
\{~\tup{~\{Dr(Id,X,Y,0)@T\},\emptyset~} \mid Id \in \{d1,d2\}, X \in \{0,\ldots,x_{max}\}, Y \in \{0,\ldots,y_{max}\} ~\} \ .
\]
\end{small}
\end{example}
\begin{example}
The following critical configuration specification specifies a quantitative property involving time:
\begin{small}
\[
\begin{array}{l}
\{~\tup{~\{P(p_1,x_1,y_1)@T_1,Time@T\}, \{ \,T > T_1 + M\, \}~}, \ldots \\ \qquad \qquad \ldots,
\tup{~\{P(p_n,x_n,y_n)@T_n,Time@T\}, \{ \,T > T_n + M\, \}~}~\} \ .
\end{array}
\]
\end{small}
Together with the specification in Figure~\ref{fig:rules-complete}, this critical configuration specification specifies that the last pictures of all points of interest ~( {\em i.e.}, $p_1, \ldots, p_n$ located at $(x_1,y_1),\ldots, (x_n,y_n)$~) should have timestamps no more than $M$ time units old.
\end{example}
\begin{example}
Let the facts $St(Id,x_b,y_b)@T_1$ and $St(empty,x_b,y_b)@T_1$ denote, respectively, that at time $T_1$ the drone $Id$ entered the base station located at $(x_b, y_b)$ to recharge, and that the station is empty. Moreover, assume that only one drone may be positioned in a station to recharge, which would be specified by adding the following rules specifying the drone landing and take off:
\[
\begin{array}{l}
Time@T,\red{\,Dr(Id,X,Y)@T},\red{\,St(empty, X,Y)@T_1} \lra\\ \quad
Time@T,\blue{\,Dr(Id,X,Y)@(T+1)},\blue{\,St(Id, X,Y)@T}\\[5pt]
Time@T,\red{\,Dr(Id,X,Y)@T},\red{\,St(Id,X,Y)@T_1} \lra \\ \quad
Time@T,\blue{\,Dr(Id,X,Y)@(T+1)},\blue{\,St(empty,X,Y)@T}\\
\end{array}
\]
Then, the critical configuration specification
$$\{~\tup{\{St(Id,X,Y)@T_1, Time@T\}, \{ \, T > T_1 + M_1\, \} \,} \mid Id \in \{d1,d2\}\,\}$$
specifies that a drone should not remain in a base station for too long (more than $M_1$ time units), preventing other drones from charging.
\end{example}
\begin{example} Fresh values may be useful in specifying various critical configurations which may involve identification, history of events
or communication protocols. For example, drones may communicate between themselves to coordinate their flights. They may also use cryptographic protocols with other agents in the system, {\em e.g.}, to send pictures of points of interest to be stored on the system data base.
Such applications and requirements are easily formalized using fresh values.\\ For example, drones must be uniquely identified, {\em i.e.}, should not have the same $Id$:
\begin{small}
\[
\{~\tup{~\{Dr(Id,X,Y,E)@T, \,Dr(Id,X',Y',E')@T'\},\emptyset~}~\} \ .
\]
\end{small}
Also, in case recharging of batteries is separately managed and billed, even visits to the recharge stations should be uniquely identified for correct billing. Similarly, pictures of points of interest may require identification for documentation. In that case, rules given in Figure~\ref{fig:rules-complete} can easily be modified to include fresh values, {\em e.g.}, by replacing $P(p_i,X,Y)$ facts with $P(n,p_i,X,Y)$
facts in all rules, and including creation of fresh value $n$ in the rule involving $doClick$ constraint.
\end{example}
\begin{definition}[Compliant Trace]
A trace $\mathcal{P}$ of a timed MSR system is \emph{compliant} w.r.t. a given critical configuration specification $\mathcal{CS}$ if $\mathcal{P}$ does not contain any configuration that is critical w.r.t. $\mathcal{CS}$.
\end{definition}
For simplicity, when the corresponding critical configuration specification is clear from the context, we will elide it and use the terminology {\em critical configuration}.
Similarly, when it is clear from the context, we often elide the timed MSR system and the critical configuration specification with respect to which we consider critical configurations, and simply say that a trace is compliant.
\subsection{Time Sampling}
\label{subsec:time-sampling}
In order to define sensible quantitative verification properties, we need to assume some conditions on when the $Tick$ rule is applicable. Otherwise, any MSR system allows traces containing only instances of the $Tick$ rule:
$$
\mathcal{S}_1 \lra_{Tick} \mathcal{S}_2 \lra_{Tick} \mathcal{S}_3 \lra_{Tick} \mathcal{S}_4 \lra_{Tick} \cdots
$$
In such a trace, the system never acts to avoid critical configurations; for a sufficiently large $j$, a configuration $\mathcal{S}_j$ would satisfy a constraint of the form $T > T' + D$ involving the global time $T$, and thus be critical.
Imposing a \emph{time sampling} is a way to avoid such traces where the time simply ticks. Time sampling is used, for example, in the semantics of verification tools such as Real-Time Maude~\cite{olveczky08tacas}. In particular, a time sampling dictates when the $Tick$ rule must be applied and when it cannot be applied. Such treatment of time is used both for dense and discrete times in searching and model checking timed systems.
\begin{definition}[Lazy Time Sampling]
\label{def: lazy}
A (possibly infinite) trace $\mathcal{P}$ of a timed MSR system $\mathcal{T}$ uses the \emph{lazy time sampling} if for any occurrence of Tick rule $\mathcal{S}_i \lra_{Tick} \mathcal{S}_{i+1}$ in $\mathcal{P}$, no instance of any instantaneous rule in $\mathcal{T}$ can be applied to the configuration $\mathcal{S}_i$.
\end{definition}
In the lazy time sampling, instantaneous rules are given a higher priority than the $Tick$ rule. Under this time sampling, a drone should carry out one of the rules in Figure~\ref{fig:rules-complete} at each moment, while time can only advance once all drones have carried out their actions for that moment. This does not mean, however, that the drones will satisfy their goal of always having recent pictures of the points of interest, as this would depend on the behavior of the system, {\em i.e.}, the actions carried out by the drones.
In the remainder of this paper, we focus on lazy time sampling.
We leave for future work investigating whether similar results hold for other time samplings.
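Operationally, the lazy time sampling can be pictured as an interpreter that gives instantaneous rules priority over $Tick$. A hypothetical Python sketch, where configurations are dictionaries and a rule returns `None` when it is not applicable:

```python
def lazy_successors(config, instantaneous_rules, tick):
    """Successor configurations under the lazy time sampling:
    Tick fires only if no instantaneous rule is applicable."""
    succs = []
    for rule in instantaneous_rules:
        nxt = rule(config)          # None means "not applicable"
        if nxt is not None:
            succs.append(nxt)
    return succs if succs else [tick(config)]

# Toy example: a counter must be drained (an "instantaneous" action)
# before time may tick.
tick = lambda c: {**c, "time": c["time"] + 1}
drain = lambda c: {**c, "n": c["n"] - 1} if c["n"] > 0 else None
assert lazy_successors({"time": 0, "n": 1}, [drain], tick) == [{"time": 0, "n": 0}]
assert lazy_successors({"time": 0, "n": 0}, [drain], tick) == [{"time": 1, "n": 0}]
```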
\subsection{Verification Problems}
\label{sec:problems}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\textwidth]{fig-all-2.pdf}
\vspace{-5pt}
\hspace{1em} (a) \hspace{5em} (b) \hspace{6em} (c) \hspace{5em} (d) \hspace{5em} (e) \hspace{1em}
\vspace{-3mm}
\caption{Illustration of the properties of (a) realizability, (b) survivability, (c) reliability~and (d)~recoverability, as well as of configuration that is a point-of-no-re\-turn~(e).
Green lines represent compliant traces that use the lazy time sampling, while red lines represent traces that use the lazy time sampling but are not compliant. Red circles represent critical configurations, while green ones are not critical.
Quantification marked with $t \to \infty$ denotes quantification over infinite time traces.}
\vspace{-3mm}
\label{fig: properties}
\vspace{-2mm}
\end{figure}
This section introduces four properties: realizability, survivability, reliability, and recoverability. Figure~\ref{fig: properties} illustrates these properties, which we define next.
The first property we introduce
is realizability. It guarantees that,
under the given time constraints and design specifications, the given system can achieve the assigned goal, e.g., drones can repeatedly collect
recent pictures of the sensitive locations.
Realizability is useful for increasing one's confidence in a specified system, as a system that is not realizable cannot accomplish the given tasks (specified by a critical configuration specification) and, therefore, the designer would need to reformulate it.
However, if a system is shown realizable, the trace $\mathcal{P}$ that proves realizability could also provide insights into the sequence of actions that lead to accomplishing the specified tasks. This may be used to refine the specification, reducing possible non-determinism.
\vspace{0.5em}
\begin{definition}[Realizability]
\label{def:feasibility}
A timed MSR system $\mathcal{T}$ is \emph{realizable}
with respect to an initial configuration $\mathcal{S}_0$, a critical configuration specification $\mathcal{CS}$ and the lazy time sampling
if there \emph{exists} a compliant infinite time trace from $\mathcal{S}_0$ that uses the lazy time sampling.
\end{definition}
Open distributed systems are inherently non-deterministic; for example, the environment may interfere with the drones through winds.
Therefore, it is important to know whether the system can avoid critical configurations despite this non-determinism. We call this property \emph{survivability}.
\begin{definition}[Survivability]
\label{def:survivabilty}
A timed MSR $\mathcal{T}$ satisfies \emph{survivability}
with respect to an initial configuration $\mathcal{S}_0$, a critical configuration specification $\mathcal{CS}$ and the lazy time sampling if it is realizable
and if \emph{all} infinite time traces from $\mathcal{S}_0$ that use the lazy time sampling are compliant.
\end{definition}
Although survivability is a desirable property, much stronger than realizability, it may sometimes be quite a severe demand on a system, or not achievable at all. Hence, when designing a system, one might compromise and consider less demanding properties.
For example, one may want to avoid configurations that appear as \quotes{dead-ends}, {\em i.e.}, configurations that necessarily lead to critical configurations. We call such configurations \emph{points-of-no-return}.
For example, drones should not fly so far that, due to energy consumption, it is no longer possible to reach a recharge station.
\begin{definition}[Point-of-No-Re\-turn]
\label{def:pon}
Given a timed MSR system $\mathcal{T}$ and a critical configuration specification $\mathcal{CS}$,
a configuration $\mathcal{S}$ is called a \emph{point-of-no-re\-turn}
if $\mathcal{S}$ is \emph{not} critical with respect to $\mathcal{CS}$, and if
\emph{all} infinite traces
of $\mathcal{T}$
that start with $\mathcal{S}$ and use the lazy time sampling are not compliant with respect to $\mathcal{CS}$.\footnote{
In the remainder of the paper, for simplicity, we usually assume given a
critical configuration specification and a time sampling
with respect to which the point-of-no-re\-turn~ is considered.
When it is clear from the context, we simply say that a configuration is a point-of-no-re\-turn.}
\end{definition}
Equivalently, there exists no compliant infinite trace from a point-of-no-re\-turn~ that uses the lazy time sampling.
A point-of-no-re\-turn~ itself is not critical, but it inevitably leads to a critical configuration.
Therefore, configurations such as points-of-no-return\ are not desirable w.r.t. goal achievement, {\em i.e.}, when searching for (infinite) compliant traces points-of-no-return\ should be avoided.
\begin{remark}
The condition that a point-of-no-re\-turn ~is not critical is included in the definition
~in order to better differentiate from critical configurations.
Otherwise, any critical configuration would trivially be a point-of-no-re\-turn.
\end{remark}
Using the notion of points-of-no-return, we introduce new properties of our systems.
\begin{definition}[Recoverability]
\label{def:prop-nonprop}
Given a timed MSR system $\mathcal{T}$, an initial configuration $\mathcal{S}_0$ and a critical configuration specification $\mathcal{CS}$, we say that $\mathcal{T}$
satisfies \emph{recoverability} if no point-of-no-re\-turn ~
is reachable from $\mathcal{S}_0$ on a compliant trace that uses the lazy time sampling.
\end{definition}
Configurations that are points-of-no-return\ should be avoided. For example, a drone may reach an area where, due to frequent high winds, it can no longer avoid ending up with empty batteries.
Next, with the reliability~property we want to ensure that, as long as one follows a compliant trace,
there will always exist a way to extend it to a compliant infinite time trace.
In our drone scenario, a reliable system should be designed in such a way that, as long as drones follow the instructions, including rules for flying in the presence of high winds, there is always a way for the drones to avoid critical configurations.
\begin{definition}[Reliability]
\label{def:new-prop}
A timed MSR system $\mathcal{T}$ satisfies \emph{reliability}
with respect to an initial configuration $\mathcal{S}_0$, a critical configuration specification $\mathcal{CS}$ and the lazy time sampling if
for \emph{any} configuration $\mathcal{S}$ reachable from $\mathcal{S}_0$ on a \emph{compliant} trace that uses the lazy time sampling there exists a compliant infinite time trace from $\mathcal{S}$ that
uses the lazy time sampling.
\end{definition}
A timed MSR system that satisfies {reliability} represents a system that is always able to avoid points-of-no-return. Such a system is realizable, but it may not satisfy survivability.
Indeed, the class of reliable systems is a proper subclass of the class of realizable systems.
Reliable systems satisfy recoverability, while the class of systems that satisfy recoverability ~is a proper superclass of the class of systems that satisfy survivability.
We present these results in Section~\ref{sec:relations-properties}, both for general MSR systems and for PTSes.
\begin{remark}\label{initial-empty}
Notice that when the initial configuration is critical, any timed MSR system $\mathcal{T}$ is neither realizable nor survivable and does not satisfy reliability, but it does satisfy recoverability.
Recoverability~holds because of universal quantification over the empty set: the set of compliant traces from a critical configuration is empty.
Therefore, for some results in the next section, we specify that the initial configuration is not critical.
\end{remark}
\subsubsection{Time-Bounded Versions of Verification Problems} \
\vspace{1em}
Motivated by bounded model checking, we also investigate the time-bounded versions of the above problems.
Instead of infinite traces, the time-bounded versions of the verification problems consider traces with exactly a fixed number of occurrences of the $Tick$ rule.
\begin{definition}[$n$-Time Realizability]
\label{def:n-feasibility}
A timed MSR system $\mathcal{T}$ is \emph{$n$-time realizable} with respect to the lazy time sampling, a critical configuration specification $\mathcal{CS}$ and an initial configuration $\mathcal{S}_0$ if there \emph{exists} a compliant trace, $\mathcal{P}$, from $\mathcal{S}_0$ that uses the lazy time sampling such that
global time advances by exactly $n$ time units in $\mathcal{P}$.
\end{definition}
\begin{definition}[$n$-Time Survivability]
\label{def:n-survivabilty}
A timed MSR system $\mathcal{T}$ satisfies \emph{$n$-time survivability} with respect to the lazy time sampling, a critical configuration specification $\mathcal{CS}$ and an initial configuration $\mathcal{S}_0$ if it is $n$-time realizable and if \emph{all} traces with exactly $n$ instances of the $Tick$ rule that start with $\mathcal{S}_0$ and use the lazy time sampling are compliant.
\end{definition}
Analogously, we define the $n$-time bounded version of the reliability~problem. We consider all compliant traces that cover \emph{at most} $n$ time units and
extend them to compliant traces over \emph{exactly} $n$ units of time.
\begin{definition}[$n$-Time Reliability]
\label{def:n-new-prop}
A timed MSR system $\mathcal{T}$ satisfies \emph{$n$-time reliability}
with respect to an initial configuration $\mathcal{S}_0$, a critical configuration specification $\mathcal{CS}$ and the lazy time sampling,
if for any configuration $\mathcal{S}$ reachable from $\mathcal{S}_0$ on a compliant trace $\mathcal{P}$ that uses the lazy time sampling and has at most $n$ instances of the $Tick$ rule, there exists a trace $\mathcal{P}'$ that uses the lazy time sampling such that:
\begin{enumerate}
\item[i)]
$\mathcal{P}'$ extends $\mathcal{P}$;
\item[ii)] $\mathcal{P}'$ is compliant;
\item[iii)] $\mathcal{P}'$ has exactly $n$ instances of the $Tick$ rule.
\end{enumerate}
\end{definition}
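The $n$-time problems naturally suggest bounded exploration. The following naive Python sketch (with a hypothetical state encoding) searches for a compliant trace with exactly $n$ $Tick$ applications under the lazy time sampling, witnessing $n$-time realizability:

```python
def n_time_realizable(s0, instantaneous, tick, is_critical, n):
    """Search for a compliant trace from s0 that uses the lazy time
    sampling and applies Tick exactly n times. States must be hashable;
    the search terminates when the reachable state space is finite."""
    def explore(state, ticks, seen):
        if is_critical(state):
            return False
        if ticks == n:
            return True
        if (state, ticks) in seen:
            return False
        seen.add((state, ticks))
        succs = [r(state) for r in instantaneous]
        succs = [s for s in succs if s is not None]
        if not succs:                 # lazy sampling: Tick may fire
            return explore(tick(state), ticks + 1, seen)
        return any(explore(s, ticks, seen) for s in succs)
    return explore(s0, 0, set())

# Toy system: the state is just the global time; no instantaneous rules.
assert n_time_realizable(0, [], lambda s: s + 1, lambda s: False, 3)
assert not n_time_realizable(0, [], lambda s: s + 1, lambda s: s == 2, 3)
```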
As the notion of a point-of-no-re\-turn\ is defined in terms of infinite traces, it is not appropriate for the time-bounded versions of the verification problems. That is, a time-bounded version of the recoverability\ problem does not make much sense.
Moreover, as we show in Section~\ref{sec:relations-properties}, for PTSes
problems of reliability~and
recoverability~coincide.
Hence, we do not consider the bounded version of recoverability\ problem separately.
\section{Relations Between Properties of Timed MSR}
\label{sec:relations-properties}
This section formally relates the different properties introduced in Section~\ref{sec:problems}.
In order to compare these properties we review the machinery introduced in our previous work~\cite{kanovich.mscs} called $\delta$-representations.
This machinery is also used in Section~\ref{sec:complex} to obtain complexity results for the corresponding verification problems.
\subsection{$\delta$-representations}
Some of our results, for a given timed MSR system
$\mathcal{T}$, an initial configuration $\mathcal{S}_0$ and a critical configuration specification $\mathcal{CS}$, will mention the value $\Dmax$, which is an upper bound on the natural numbers appearing in $\mathcal{S}_0$, $\mathcal{T}$ and $\mathcal{CS}$. The value of $\Dmax$ can be inferred syntactically by inspecting the timestamps of $\mathcal{S}_0$, the $D$ values in the timestamps of rules (which are of the form \ $T + D$) and the constraints in $\mathcal{T}$ and $\mathcal{CS}$ (which are of the form $T_1 > T_2 \pm D$, $T_1 = T_2 \pm D$ and\, $T_1 \geq T_2 \pm D$). For example, $\Dmax = 1$ for the specification in Figure~\ref{fig:rules-complete}.
For our results we assume a bound on the size of facts.
For example, in our specification in Figure~\ref{fig:rules-complete}, we can take the bound
~$ k= |x_{max}| +|y_{max}|+ |e_{max}|+5$.
Notice, however, that we do not always impose an upper bound on the values of timestamps.
Also, we allow an unbounded number of fresh values to appear in a trace.
\begin{definition}
\label{def: delta-representation}
Let \, $ \mathcal{S} = \{\,Q_1@t_1, \,Q_2@t_2, \ldots, \,Q_n@t_n \,\} $
be a configuration of a timed MSR $\mathcal{T}$
written in a canonical way, where the sequence of
timestamps $t_1, \ldots, t_n$ is non-decreasing.
(For the case of equal timestamps, we sort the facts in alphabetical order, if necessary.)
The \emph{$\delta$-representation} of $\mathcal{S}$ for a given $\Dmax$ is
$$
\delta_{\mathcal{S},\Dmax} = [~Q_1,\,\delta_{Q_1,Q_2},\,Q_2, \ldots, \,Q_{n-1}, \,\delta_{Q_{n-1},Q_n}, \,Q_n~] \ .
$$
Here, for a given natural number $\Dmax$, \ $\delta_{P,Q}$ \ is the \emph{truncated time difference} of two timed facts
~$P@t_1$ and $Q@t_2$~ with \mbox{$t_1\leq t_2$}, defined as follows:
$$
\delta_{P,Q} =
\left\{\begin{array}{ccl}
t_2 - t_1 & , & \ \textrm{ provided } ~t_2 - t_1 \leq \Dmax\\
\infty & , & \ \textrm{ otherwise }
\end{array}\right. \ .
$$
\end{definition}
For simplicity, when $\Dmax$ is clear from the context, we sometimes write $ \delta_{\mathcal{S}}$ instead of ~$ \delta_{\mathcal{S},\Dmax}$.
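Computing a $\delta$-representation is straightforward: sort the timed facts and record the truncated time differences between neighbours. A Python sketch with an illustrative encoding of configurations:

```python
import math

def delta_representation(config, d_max):
    """config: list of (fact_name, timestamp). Returns the
    delta-representation: facts in timestamp order (ties broken
    alphabetically, as in the definition) interleaved with truncated
    time differences (math.inf beyond d_max)."""
    ordered = sorted(config, key=lambda ft: (ft[1], ft[0]))
    rep = [ordered[0][0]]
    for (_, t1), (f2, t2) in zip(ordered, ordered[1:]):
        diff = t2 - t1
        rep.append(diff if diff <= d_max else math.inf)
        rep.append(f2)
    return rep

config = [("Time", 10), ("Dr", 3), ("P", 9)]
assert delta_representation(config, d_max=5) == ["Dr", math.inf, "P", 1, "Time"]
```

Since only differences are recorded, uniformly time-shifted configurations yield the same $\delta$-representation, which is what makes the abstraction finite for a fixed $\Dmax$.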
\vspace{0.5em}
In our previous work~\cite{kanovich12rta,kanovich.mscs}, we showed that $\delta$-representations induce an equivalence relation on configurations. Namely, for a given $\Dmax$, we declare $\mathcal{S}_1 $ and $ \mathcal{S}_2$ equivalent, written ~$\mathcal{S}_1 \equiv_{\Dmax} \mathcal{S}_2$,
if and only if their $\delta$-representations are exactly the same up to nonce renaming, {\em i.e.},~$\delta_{\mathcal{S}_1 \sigma} = \delta_{\mathcal{S}_2}$, where
$\sigma$ is a bijection on the set of nonce names.
\vspace{0.5em}
This equivalence relation is well-defined with respect to time constraints, {\em i.e.}, configurations that have the same $\delta$-representation satisfy exactly the same set of constraints.
Here, when saying that configurations satisfy the same constraint, we implicitly mean that the time variables of the constraint refer to the same facts in both configurations.
Therefore, we can say that a $\delta$-representation does or does not satisfy a constraint. Similarly, we say that a $\delta$-representation is critical iff it is the $\delta$-representation of a critical configuration.
The equivalence among configurations is also well-defined with respect to the application of rules, {\em i.e.}, the application of rules on $\delta$-representations is unambiguous. Therefore, we can consider traces over $\delta$-representations.
For details on the concrete procedure of how to apply a rule on a given $\delta$-representation see~\cite[Section 4.3]{kanovich.mscs}.
We naturally extend the notion of a compliant trace and say that a trace over \mbox{$\delta$-representations} is compliant iff it does not contain any critical $\delta$-representation.
Also, we say that a trace over $\delta$-representations uses the lazy time sampling if the $Tick$ rule is applied to a $\delta$-representation in that trace only when no instantaneous rule is applicable.
Moreover, in~\cite[Theorem 4.1]{kanovich.mscs} we have shown that there is a bisimulation between (compliant) traces over configurations and (compliant) traces over their $\delta$-representations in the following sense:
\
~$\mathcal{S}_1 \lra_* \mathcal{S}_2$ \ \ iff \ \ $ \delta_{\mathcal{S}_1} \lra_* \delta_{\mathcal{S}_2}$ .
When considering concrete problems and corresponding bisimulations, the bound $\Dmax$ is inferred from numeric values appearing in the problem specification.
This ensures that all configurations in traces are \emph{future bounded}, {\em i.e.}, do not contain facts $F@t_F$ such that $\delta_{Time,F}= \infty$. This is important for faithful representation of time advances. For more details see~\cite[Section 4.3]{kanovich.mscs}.
For self-containment of the paper, in the proof of the following result we present main proof ideas used in~\cite{kanovich.mscs},
but here we additionally address the lazy time sampling.
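To make the abstraction concrete, the following Python sketch computes a $\delta$-representation of a configuration as the sequence of facts ordered by timestamp, with truncated time differences between consecutive facts. It is an illustration only: the list encoding, the tie-breaking by fact name, and the function names are our own assumptions, not part of the formal definition in~\cite{kanovich.mscs}.

```python
INF = float("inf")  # plays the role of the symbol 'infinity'

def truncate(d, d_max):
    """Truncated time difference: exact up to d_max, 'infinity' beyond."""
    return d if d <= d_max else INF

def delta_representation(config, d_max):
    """config: list of (fact_name, timestamp) pairs.
    Returns [Q1, delta_{Q1,Q2}, Q2, ..., Qm] with facts ordered by
    timestamp (ties broken by name, to fix a canonical representative)."""
    ordered = sorted(config, key=lambda ft: (ft[1], ft[0]))
    rep = [ordered[0][0]]
    for (_, t1), (f2, t2) in zip(ordered, ordered[1:]):
        rep += [truncate(t2 - t1, d_max), f2]
    return rep

# Configurations that differ only by a uniform shift of all timestamps
# are equivalent: they have the same delta-representation.
c1 = [("Time", 3), ("A", 3), ("B", 5)]
c2 = [("Time", 10), ("A", 10), ("B", 12)]
assert delta_representation(c1, d_max=4) == delta_representation(c2, d_max=4)
```

The final assertion illustrates the equivalence exploited in the bisimulation: only relative, truncated distances between timestamps matter, not their absolute values.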
\begin{proposition}
\label{thm:delta configurations}
For any timed MSR
$\mathcal{T}$, a critical configuration specification $\mathcal{C}\mathcal{S}$ and an initial configuration $\mathcal{S}_0$ the equivalence relation between configurations is well-defined with respect to the rules of the system (including time advances), the lazy time sampling
and critical configurations.
\\
Namely, to any compliant trace starting from the given initial configuration $\mathcal{S}_0$
corresponds a compliant trace over $\delta$-representations starting from $\delta_{\mathcal{S}_0}$.
In particular, a trace over configurations uses the lazy time sampling iff the corresponding trace over $\delta$-representations uses the lazy time sampling.
\end{proposition}
\begin{proof}
We first show that the application of rules on $\delta$-representations is independent of the choice of configuration from the same equivalence class. Assume
$\mathcal{S}_1$ and $\mathcal{S}_2$ are equivalent configurations, and assume that $\mathcal{S}_1$ is transformed to $\mathcal{S}'_1$ by means of
a rule~$\alpha$, as shown in the diagram below.
Recall that equivalent configurations satisfy the same set of constraints.
Hence, the rule~$\alpha$ is applicable to $\mathcal{S}_2$ and will transform
$\mathcal{S}_2$ into some $\mathcal{S}_2'$:
\[
\begin{array}{cccc}
\mathcal{S}_1 & \to_{\alpha}& \mathcal{S}_1'\\[4pt]
\biginterleave \
& & \\[4pt]
\mathcal{S}_2 & \to_{\alpha} \ & \ \mathcal{S}_2'
\end{array}
\]
It remains to show that $\mathcal{S}_1'$ is equivalent to $\mathcal{S}_2'$.
We consider the two types of rules for $\alpha$, namely,
time advances and instantaneous rules.
Let the time advance transform
${\mathcal{S}_1}$ into~${\mathcal{S}_1}'$, and $\mathcal{S}_2$ to $\mathcal{S}_2'$.
Since only the timestamp $T$ denoting the global time in $Time@T$ is increased by 1, and the rest of the configuration remains unchanged,
only truncated time differences involving $Time$ change in the resulting $\delta$-representations.
Because of the equivalence $\mathcal{S}_1 \equiv_{\Dmax} \mathcal{S}_2$, for a fact $P@T_P^1$ in $\mathcal{S}_1$ with $T_P^1\leq T^1$, $Time@T^1$ and $\delta_{P,Time}= t$, we have $ P@T_P^2$ with ${T}_P^2 \leq {T^2}$, $Time@{T^2}$ and $\delta_{P,Time}= t$ in $\mathcal{S}_2$ as well. Therefore, we have
$$\delta_{P,Time}=
\left\{\begin{array}{ccl}
t+1 & , & \ \textrm{ provided } \ t+1 \leq \Dmax\\
\infty & , & \ \textrm{ otherwise }
\end{array}\right.
$$
both in $\mathcal{S}_1'$ and $\mathcal{S}_2'$. On the other hand, for any future fact $Q@T^Q$
with $\delta_{Time,Q}= t$ in $\mathcal{S}_1$ and in $\mathcal{S}_2$, we get $\delta_{Time,Q}= t-1$ in both $\mathcal{S}_1'$ and $\mathcal{S}_2'$.
Therefore, ${\mathcal{S}_1}'$ and $\mathcal{S}_2'$ are equivalent.
Recall that since all configurations in the trace are future bounded, $t < \infty$, so $t-1$ is well-defined.
The reasoning for the application of instantaneous rules is similar. Each created fact in $\mathcal{S}_1'$ and $\mathcal{S}_2'$ is of the form
$P@(T^1+d)$ and $P@(T^2+d)$ , where $T^1$ and $T^2$ represent
global time in $\mathcal{S}_1$ and $\mathcal{S}_2$, respectively. Therefore
each created fact has the same difference, $d$, to the global time in the corresponding configuration. This implies
that the created facts have the same truncated time
differences to the remaining (unchanged) facts.
Namely, ~$\delta_{Time,P}=d< \infty$,
~hence for $P@t_P$, $R@t_R$ and $Time@t$ ~with
~$t \leq t_R \leq t_P$, ~$$\delta_{R,P}= \delta_{Time,P}- \delta_{Time,R}\ . $$
Notice here that $\delta_{Time,R}<\infty$ because all configurations are future bounded, so the above difference is well-defined (finite).
Similarly, when ~$t \leq t_P \leq t_R$, ~$$\delta_{P,R}= \delta_{Time,R}- \delta_{Time,P}\ .$$
Hence ${\mathcal{S}_1}'$ and
$\mathcal{S}_2'$ are equivalent.
Therefore, the application of rules on $\delta$-representations defined through corresponding configurations is well-defined, {\em i.e.}, the abstraction of configurations to $\delta$-representations is complete w.r.t. the application of rules.
The abstraction is also sound. Namely, from a compliant trace over $\delta$-representations, we can extract a concrete compliant trace over configurations.
Although any given \mbox{$\delta$-representation} corresponds to an infinite number of configurations, for a given initial configuration $\mathcal{S}_0$, we have the initial $\delta$-representation
\ $\delta_0= \delta_{\mathcal{S}_0}$.
The existence of a trace over configurations corresponding to the given (possibly infinite) trace over $\delta$-representations is then easily proven by induction.
Since equivalent configurations satisfy the same set of constraints,
$\mathcal{S}_1$ is a critical configuration if and only if $\mathcal{S}_2$ is a critical configuration, {\em i.e.}~ if and only if $\delta_{\mathcal{S}_1}$ is critical.
By induction on the length of the (sub)trace, it follows that, given a timed MSR and a critical configuration specification $\mathcal{C}\mathcal{S}$,
any (possibly infinite) trace over configurations is compliant if and only if the corresponding trace over $\delta$-representations is compliant.
Notice that, using the lazy time sampling in a trace $\mathcal{P}$, the $Tick$ rule is applied to some $\mathcal{S}_i$ in $\mathcal{P}$ if and only if no instantaneous rule can be applied to
$\mathcal{S}_i$. Since $\mathcal{S}_i$ and its $\delta$-representation, $\delta_{\mathcal{S}_i}$, satisfy the same set of constraints, it follows that
$Tick$ rule is applied to $\delta_{\mathcal{S}_i}$ iff~$Tick$ rule is applied to $\mathcal{S}_i$. Hence, a trace over configurations uses the lazy time sampling iff the corresponding trace over $\delta$-representations uses the lazy time sampling.
\qed
\end{proof}
Following the above result, in the case of balanced timed MSRs, we can work on traces constructed using $\delta$-representations. Moreover, the following lemma establishes a bound on the number of different $\delta$-representations.
\begin{lemma}\label{lemma:numstates}
Let $\mathcal{T}$ be a timed MSR constructed over a finite alphabet $\Sigma$ with $J$ predicate symbols and $E$ constant and function symbols.
Let $m$ be the number of facts in the initial configuration $\mathcal{S}_0$, $k$ an upper-bound on the size of facts, $\mathcal{CS}$ a critical configuration specification and $\Dmax$ an upper-bound on the numeric values of $\mathcal{S}_0, \mathcal{T}$ and $\mathcal{CS}$.
\\
The number of different $\delta$-representations, denoted by $L_\Sigma(m,\uSize,\Dmax)$, is such that
$$
L_\Sigma(m,k,\Dmax) \leq (\Dmax + 2)^{(m-1)} J^m (E + 2 m k)^{m k}.
$$
\end{lemma}
\begin{proof}
Let the given finite alphabet contain $J$ predicate symbols and $E$ constant and function symbols.
Let the initial configuration $\mathcal{S}_0$ contain $m$ facts.
Let
$$
[\,Q_1, \,\delta_{Q_1, Q_2}, \,Q_2, \ldots, \,Q_{m-1}, \,\delta_{Q_{m-1}, Q_m}, \,Q_m\,]
$$
be a $\delta$-representation with $m$ facts.
There are $m$ slots for predicate names and at most $mk$ slots for constants and function symbols, where $k$ is the bound on the size of facts.
Constants can be either constants in the initial alphabet or names for fresh values.
Following \cite{kanovich13ic}, we need to consider only $2m k$ names for fresh values (nonces).
Namely, we can fix a set of only $2mk$ nonce names. Then, whenever an action creates some fresh values, instead of creating new constants that have not yet appeared in the trace, we can choose names from this fixed set so that they are different from any constants in the enabling configuration. In that way, although this finite set is fixed in advance, each time a nonce name is used, it appears fresh with respect to the enabling configuration. We are, hence, able to simulate an unbounded number of nonces using a set of only $2mk$ nonce names.
Finally, for the values $ \delta_{Q_i, Q_{i+1}}$, only time differences up to $\Dmax$ have to be considered together with the symbol $\infty$, and there
are $m-1$ slots for time differences in a $\delta$-representation. Therefore the number of different $\delta$-representations is bounded by ~$ (\Dmax+2)^{(m-1)} J^m (E + 2 m k)^{m k} $.
\ \qed
\end{proof}
\subsection{Time-Bounded vs. Unbounded Verification Problems for Timed MSR}
\label{sec: bounded vs unbounded}
It is obvious, by definition, that realizability implies $n$-time realizability.
We now show that for a sufficiently large $n$, the reverse implication also holds, {\em i.e.}, $n$-time realizability implies realizability.
The same implications hold for the other properties as well.
\begin{proposition}[Realizability vs. $n$-Time Realizability]
\label{th:b-n-bounded-real}
Let $\mathcal{T}$ be a timed MSR that uses the lazy time sampling, $\mathcal{S}_0$ an initial configuration and $\mathcal{CS}$ a critical configuration specification.
Then, ~$\mathcal{T}$ satisfies realizability
~ iff ~
$\forall n$, $\mathcal{T}$ satisfies $n$-time realizability.
Moreover, there exists $M$ such that if ~$\mathcal{T}$ is $M$-time realizable, then ~$\mathcal{T}$ is realizable.
(In particular, the above claim holds for $M=L_\Sigma(m,\uSize,\Dmax)$.)
\end{proposition}
\begin{proof} By definition, realizability implies $n$-time realizability for any $n$.
We now prove the second statement. The first statement then easily follows.
From Proposition \ref{thm:delta configurations} it follows that for the above problems we can consider traces constructed over $\delta$-representations.
As per Lemma \ref{lemma:numstates} the number of different $\delta$-representations
is bounded by ~$l = L_\Sigma(m,\uSize,\Dmax)$, where $m$ is the number of facts in $\mathcal{S}_0$, $k$ is an upper-bound on the size of facts and $\Dmax$ is an upper-bound on the numeric values of $\mathcal{S}_0, \mathcal{T}$ and $\mathcal{CS}$.
Assume $\mathcal{T}$ is $M$-time realizable, where ~$M=L_\Sigma(m,\uSize,\Dmax)$.
Then, there is a compliant trace $\mathcal{P}$ from $ \delta_{\mathcal{S}_0}$ that uses the lazy time sampling and contains exactly $M$ $Tick$ rules.
Trace $\mathcal{P}$ contains a series of instantaneous rules separated by $Tick$ rules. That is, $\mathcal{P}$
contains $M+1$ blocks of $\delta$-representations, delimited by the $M$ instances of the $Tick$ rule in $\mathcal{P}$.
Since there are at most $M$ different $\delta$-representations in $\mathcal{T}$,
at least one $\delta$-rep\-re\-sen\-ta\-ti\-on $\delta_1$ appears in two blocks.
Therefore, a subtrace between the two appearances of $\delta_1$ contains a $Tick$ rule,
~$
\delta_{1} \lra \cdots \lra_{Tick}
\cdots \lra \delta_{1} \,
$,
and represents a loop in $\mathcal{P}$.
The above subtrace is compliant, uses the lazy time sampling and contains a $Tick$ rule. Repeating this loop indefinitely results in a compliant infinite time trace that uses the lazy time sampling. The resulting trace shows that $\mathcal{T}$ is realizable.
\qed
\end{proof}
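The pigeonhole step of this proof can be phrased operationally: as soon as a $\delta$-representation occurs twice with a $Tick$ rule between the two occurrences, the enclosed segment is a loop that can be unrolled into an infinite time trace. The following sketch is our own illustration; states are arbitrary hashable tokens and the string "Tick" is just a marker for the rule applied.

```python
def find_pumpable_loop(trace):
    """trace: list of (state, rule) pairs, where 'rule' is the rule applied
    to reach 'state' ("Tick" or an instantaneous rule name; None for the
    initial state).  Returns (i, j) such that trace[i] and trace[j] carry
    the same state with a Tick between them, or None if no such pair exists."""
    first_seen = {}
    for j, (state, _) in enumerate(trace):
        if state in first_seen:
            i = first_seen[state]
            # rules applied strictly after position i, up to position j
            if any(rule == "Tick" for _, rule in trace[i + 1 : j + 1]):
                return i, j
        else:
            first_seen[state] = j
    return None

# delta_1 reappears after a Tick, so the segment between its two
# occurrences can be repeated to build an infinite time trace.
trace = [("d0", None), ("d1", "r1"), ("d2", "Tick"), ("d1", "r2")]
assert find_pumpable_loop(trace) == (1, 3)
```

Given such a pair $(i, j)$, the infinite time trace of the proof is obtained by following the trace up to position $i$ and then repeating the segment between positions $i+1$ and $j$ forever.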
\begin{proposition}[Survivability vs. $n$-Time Survivability]
\label{th:b-n-bounded-survivability}
Let $\mathcal{T}$ be a timed MSR that uses the lazy time sampling, $\mathcal{S}_0$ an initial configuration
and $\mathcal{CS}$ a critical configuration specification.
Then, ~$\mathcal{T}$ satisfies survivability
~ iff ~
$\forall n$, $\mathcal{T}$ satisfies $n$-time survivability.
\\
Moreover, there exists $M$ such that if
~$\mathcal{T}$ satisfies $M$-time survivability, then ~$\mathcal{T}$ satisfies survivability.
\end{proposition}
\begin{proof} \
Assume that ~$\mathcal{T}$ satisfies $M$-time survivability, where $M = L_\Sigma(m,\uSize,\Dmax)$.
Hence, all traces from $\mathcal{S}_0$ that use the lazy time sampling and contain at most $M$ ticks are compliant.
Assume $\mathcal{T}$ is not survivable. Then there is an infinite time trace $\mathcal{P}$ from $\mathcal{S}_0$ that uses the lazy time sampling which is not compliant, {\em i.e.}, there is a critical configuration $\mathcal{S}_1$ in $\mathcal{P}$.
Because ~$\mathcal{T}$ satisfies $M$-time survivability, there are more than $M$ ticks in the subtrace ~$\mathcal{P}': \ \mathcal{S}_0 \lra \cdots \lra \mathcal{S}_1$~ of $\mathcal{P}$.
Since there are more than $M$ rules (and $\delta$-representations) in the above subtrace $\mathcal{P}'$, some $\delta$-representation appears at least twice in $\mathcal{P}'$, {\em i.e.},
there is a loop in $\mathcal{P}'$. By removing all loops in $\mathcal{P}'$ we obtain a trace $\mathcal{P}''$ from ~$\mathcal{S}_0$ to $\mathcal{S}_1$ that uses the lazy time sampling and contains at most $M$ rules. Consequently, there are at most $M$ ticks in $\mathcal{P}''$.
The trace $\mathcal{P}''$ is not compliant since it contains $\mathcal{S}_1$. This contradicts the $M$-time survivability of $\mathcal{T}$.
By definition, survivability implies $n$-time survivability for any $n$.
The first statement is then a simple consequence of the second statement and the definitions.
\qed
\end{proof}
\begin{proposition}[Reliability vs. $n$-Time Reliability]
\label{th:b-n-bounded-recoverability}
Let $\mathcal{T}$ be a timed MSR that uses the lazy time sampling, $\mathcal{S}_0$ an initial configuration
and $\mathcal{CS}$ a critical configuration specification.
Then, ~$\mathcal{T}$ satisfies reliability
~ iff ~
$\forall n$, $\mathcal{T}$ satisfies $n$-time reliability.
\\
Moreover, there exists $M$ such that if
~$\mathcal{T}$ satisfies $M$-time reliability, then ~$\mathcal{T}$ satisfies reliability.
\end{proposition}
\begin{proof}
As in Proposition \ref{th:b-n-bounded-real}, we consider traces constructed over $\delta$-representations and let ~$l_1 = L_\Sigma(m,\uSize,\Dmax)$.
Let $\delta_{\mathcal{S}_0} \lra \dots \lra \delta_{\mathcal{S}_1}$ be a compliant trace that uses the lazy time sampling. Let $l_2$ be the number of $Tick$ rules in this trace. We can assume that $l_2 \leq l_1$, since otherwise there would be a loop in the trace which could be eliminated, obtaining a trace with at most $l_1$ $\delta$-representations.
Let $M= 2l_1 \geq {l_1+l_2}$. Assume $\mathcal{T}$ satisfies $M$-time reliability. Then, there is a compliant trace $\mathcal{P}$ from $ \delta_{\mathcal{S}_1}$ that uses the lazy time sampling and contains exactly $M$ $Tick$ rules. As in the proof of Proposition \ref{th:b-n-bounded-real}, we can conclude that there is a \mbox{$\delta$-representation} $ \delta_{2}$ in $\mathcal{P}$ that appears at least twice with a $Tick$ rule between the two appearances of $\delta_{2}$ in $\mathcal{P}$.
Repeating the subtrace of $\mathcal{P}$ between two appearances of $\delta_{2}$ indefinitely creates an infinite time compliant trace that uses the lazy time sampling and shows reliability~of $\mathcal{T}$.
The first statement follows from definitions and the second statement.
\qed
\end{proof}
\subsection{Relations Among Different Properties of Timed MSR and PTS}
\label{sec:relations-properties-infinite}
In this section we formally relate the different properties introduced over infinite traces. In general, all of these properties are distinct for timed MSR, but only some of them are distinct for PTSes, as stated below.
\begin{proposition}
\label{thm:deadlock-recover}
Let $\mathcal{T}$ be a timed MSR system that uses the lazy time sampling, $\mathcal{S}_0$ an initial configuration and $\mathcal{CS}$ a critical configuration specification.
If ~$\mathcal{T}$ satisfies reliability, then $\mathcal{T}$ satisfies recoverability.
If ~$\mathcal{T}$ satisfies recoverability, then $\mathcal{T}$ does not necessarily satisfy reliability.
\end{proposition}
\begin{proof}
Let $\mathcal{T}$ be a timed MSR system that satisfies reliability.
\\
Assume $\mathcal{T}$ does not satisfy recoverability.
Then there is a compliant trace from $\mathcal{S}_0$ to some point-of-no-re\-turn~$\mathcal{S}_P$ that uses the lazy time sampling.
Since $\mathcal{T} $ satisfies reliability, there is a compliant infinite time trace from $\mathcal{S}_P$ that uses the lazy time sampling.
As $\mathcal{S}_P$ is a point-of-no-re\-turn, this is in contradiction with the notion of a point-of-no-re\-turn.
\\[3pt]
We provide an example of a timed MSR system, $\mathcal{T}$, that satisfies recoverability, but does not satisfy reliability.
\\[3pt]
Let ~$\mathcal{S}_0= \{ Time@0, A@0\}$, let $\mathcal{T}$ contain only the following instantaneous rules:
\begin{subequations}
\begin{small}
\begin{align}
Time@T, \,\red{A@T'}\ \ \lra \ \ Time@T,\, \blue{\,B@T}
\label{eq:ex02-1}
\\
Time@T, \, \red{B@T'} \ \ \lra \ \ Time@T,\, \blue{\,A@T}
\label{eq:ex02-2}
\\Time@T, \,\red{A@T'}\ \ \lra \ \ Time@T,\, \blue{\,C@T}
\label{eq:ex02-3}
\end{align}
\end{small}
\end{subequations}
and
\begin{small}
~$\mathcal{CS}= \{ ~\tup{ \{~Time@T, A@T'\}, \, \{T>T'\}}$,
$ ~\tup{\{~Time@T, B@T'\}, \, \{T>T'\}},$
\hspace{4em}
$ ~\tup{\{~Time@T, C@T'\}, \, \emptyset} \, \}$.
\end{small}
\vspace{0.2em}
\noindent
There is a single {infinite} compliant trace from $\mathcal{S}_0$ that uses the lazy time sampling:
\begin{small}
\begin{equation}
\label{ex: infinite trace}
Time@0, \, {A@0} \ \lra_{(\ref{eq:ex02-1})} \
Time@0, \,{B@0}\ \lra_{(\ref{eq:ex02-2})} \
Time@0, \, {A@0}\ \lra_{(\ref{eq:ex02-1})} \
Time@0, \,{B@0}\ \lra_{(\ref{eq:ex02-2})} \
\dots
\end{equation}
\end{small}
{Infinite time} traces that extend compliant traces from $\mathcal{S}_0$ and use the lazy time sampling contain
$Tick$ rules and, hence,
contain the following subtrace:
\begin{small}
\begin{equation*}
\label{ex: infinite trace nc}
Time@0, \, {A@0} \ \lra_{(\ref{eq:ex02-3})} \
Time@0, \,{C@0}\ \lra_{Tick} \
Time@1, \, {C@0}\ \lra_{Tick} \
Time@2, \,{C@0}\ \lra \
\dots
\end{equation*}
\end{small}
Therefore, no infinite time trace from $\mathcal{S}_0$ that uses the lazy time sampling is compliant.
Consequently, $\mathcal{T}$ does not satisfy reliability.
System $\mathcal{T}$ trivially satisfies recoverability~since there are no points-of-no-return. Any configuration reachable from $\mathcal{S}_0$ on a compliant trace that uses the lazy time sampling is one of the following configurations:
$
\{ \, Time@T, \, {A@T}\, \}\ \text{ or } \ \{ \, Time@T, \,{B@T}\, \} \, .
$
None of them is a point-of-no-re\-turn, because of the trace (\ref{ex: infinite trace}).
The remaining configurations of $\mathcal{T}$, ~$\{~Time@T, \, {C@T'}~\}$, are critical, so they are not points-of-no-return.
\qed
\end{proof}
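The behaviour of this counterexample can also be replayed mechanically. In the sketch below (illustrative only: a configuration is encoded as a triple of the global time, its single non-$Time$ fact and that fact's timestamp, with the rules and critical patterns hard-coded from the specification above), the $A$/$B$ alternation never enables a $Tick$ under the lazy time sampling, while producing $C$ leaves only $Tick$s through critical configurations.

```python
def instantaneous_successors(cfg):
    """cfg = (global_time, fact, fact_timestamp); rules (1)-(3) hard-coded."""
    t, fact, _ = cfg
    if fact == "A":
        return [(t, "B", t), (t, "C", t)]   # the rules producing B@T and C@T
    if fact == "B":
        return [(t, "A", t)]                # the rule producing A@T
    return []                               # no instantaneous rule for C

def lazy_steps(cfg):
    """Lazy time sampling: Tick applies only if no instantaneous rule does."""
    succ = instantaneous_successors(cfg)
    return succ if succ else [(cfg[0] + 1, cfg[1], cfg[2])]

def critical(cfg):
    t, fact, ft = cfg
    return fact == "C" or t > ft            # the three critical patterns

start = (0, "A", 0)
assert lazy_steps(start) == [(0, "B", 0), (0, "C", 0)]
assert not critical((0, "B", 0))            # the A/B loop stays compliant
c = (0, "C", 0)
assert critical(c)                          # any C-fact is critical
assert lazy_steps(c) == [(1, "C", 0)]       # only Ticks remain after C
```

In particular, $A$ and $B$ configurations always enable an instantaneous rule, so under the lazy time sampling the $Tick$ rule is never applied to them, and the only compliant behaviour is the infinite $A$/$B$ alternation at time $0$.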
Notice that the properties of timed MSR introduced in Section~\ref{sec:problems} involve infinite time traces that use the lazy time sampling.
Recall that for any given PTS $\mathcal{T}$ and any configuration $\mathcal{S}$,
there exists an infinite time trace of $\mathcal{T}$ that starts with $\mathcal{S}$ and uses the lazy time sampling.
Although recoverability~and reliability~are different properties of timed MSR systems in general, it turns out that for the class of PTSes these properties coincide.
\begin{proposition}
\label{thm:deadlock-recover-PTS}
Let $\mathcal{T}$ be a PTS that uses the lazy time sampling, $\mathcal{S}_0$ an initial configuration that is not critical, and $\mathcal{CS}$ a critical configuration specification.
System ~$\mathcal{T}$ satisfies reliability ~ iff ~$\mathcal{T}$ satisfies recoverability.
\end{proposition}
\begin{proof}
Since a PTS is a timed MSR system, it follows from Proposition \ref{thm:deadlock-recover} that a PTS that satisfies reliability~satisfies recoverability~as well.
\\[3pt]
Assume that a PTS $\mathcal{T}$ does not satisfy reliability. Then, since $\mathcal{S}_0$ is not critical, there is a compliant trace from $\mathcal{S}_0$ to some configuration $\mathcal{S}_1$ that uses the lazy time sampling and cannot be extended to a compliant infinite time trace that uses the lazy time sampling. (Recall that a system with a critical initial configuration trivially satisfies recoverability, see Remark~\ref{initial-empty}.)
We claim that $\mathcal{S}_1$ is a point-of-no-re\-turn.
Namely, since $\mathcal{T}$ is a PTS, by Proposition~\ref{prop:progressing}, any infinite trace from $\mathcal{S}_1$ is an infinite time trace. Therefore, since no infinite time trace from $\mathcal{S}_1$ that uses the lazy time sampling is compliant, no infinite trace from $\mathcal{S}_1$ that uses the lazy time sampling is compliant either.
Since the point-of-no-re\-turn~$\mathcal{S}_1$ is reachable from $\mathcal{S}_0$ on a compliant trace that uses the lazy time sampling, $\mathcal{T}$ does not satisfy recoverability.
\qed
\end{proof}
We show that the remaining properties are different even for PTSes. Furthermore, we show relations between the
properties for PTSes and for timed MSR systems in general.
We first investigate how reliability~relates to survivability. It turns out that
these are different properties of PTSes, and, consequently, these are different properties of timed MSR systems.
\begin{proposition}
\label{thm:deadlock-survive}
Let $\mathcal{T}$ be a PTS that uses the lazy time sampling, $\mathcal{S}_0$ an initial configuration and $\mathcal{CS}$ a critical configuration specification.
If ~$\mathcal{T}$ satisfies survivability, then $\mathcal{T}$ satisfies reliability.
If $\mathcal{T}$ satisfies reliability, it may not satisfy survivability.
\end{proposition}
\begin{proof}
Assume that $\mathcal{T}$ satisfies survivability, but does not satisfy reliability.
Then there exists a compliant trace, $\mathcal{P}$, from $\mathcal{S}_0$ to some configuration $\mathcal{S}_1$ which cannot be extended to a compliant infinite time trace that uses the lazy time sampling.
Let $\mathcal{P}'$ be some infinite time extension of $\mathcal{P}$ that uses the lazy time sampling. Such a trace $\mathcal{P}'$ exists because of Proposition~\ref{prop:progressing}, but it is not compliant.
\\
Since $\mathcal{T}$ satisfies survivability, all infinite time traces from $\mathcal{S}_0$ that use the lazy time sampling
are compliant, including $\mathcal{P}'$.
Contradiction.
\\[5pt]
The following example of a PTS satisfies reliability, but does not satisfy survivability.
\noindent
Let ~$\mathcal{S}_0= \{ Time@0, A@0, B@0\}$, ~$\mathcal{CS}= \{~\tup{\{B@T, D@T'\}, \, \emptyset}~\}$ and let PTS $\mathcal{T}$ contain only the following instantaneous rules:
\begin{subequations}
\begin{small}
\begin{align}
Time@T, \,\red{A@T'}, \, B@T'' \ \mid \ \, \{ T' \leq T\} \ \lra \ \ Time@T,\, B@T'', \blue{\,C@(T + 1)}
\label{eq:ex-1}
\\
Time@T, \,\red{A@T'}, \, B@T'' \ \mid \ \, \{ T' \leq T\} \ \lra \ \ Time@T,\, B@T'', \blue{\,D@(T + 1)}
\label{eq:ex-2}
\\
Time@T, \,\red{B@T'}, \, \red{C@T''} \ \mid \ \, \{ T = T', T''\leq T \} \ \lra \ \ Time@T,\, \blue{A@T}, \blue{\,B@(T + 1)}
\label{eq:ex-3}
\end{align}
\end{small}
\end{subequations}
The following trace from $\mathcal{S}_0$ uses the lazy time sampling and is not compliant:
$$
Time@0, A@0, B@0\ \lra_{(\ref{eq:ex-2})} \ Time@0, B@0, D@1 \ .
$$
Hence, $\mathcal{T}$ does not satisfy survivability.
To show that $\mathcal{T}$ satisfies reliability, assume $\mathcal{P}$ is a compliant trace from $\mathcal{S}_0$ to some $\mathcal{S}_1$ that uses the lazy time sampling. Then, $\mathcal{P}$ does not contain rule (\ref{eq:ex-2}) that always results in a critical configuration.
Hence, only (\ref{eq:ex-1}) and (\ref{eq:ex-3}) rules are used in $\mathcal{P}$, so $\mathcal{S}_1$ is either
\ $
\{ \, Time@T, \, {A@T'}, \, B@T'' \, \} \ \text{ or } \ \{ \, Time@T, \, B@T', \,{C@T''}\, \} \, .
$\\
Using the lazy time sampling, and only (\ref{eq:ex-1}), (\ref{eq:ex-3}) and $Tick$ rules, trace $\mathcal{P}$ can be extended to a compliant infinite time trace. Hence, $\mathcal{T}$ satisfies reliability.
\qed
\end{proof}
Next, we show how recoverability~relates to realizability.
\begin{proposition}
\label{thm:realize-recover}
Let $\mathcal{T}$ be a PTS that uses the lazy time sampling, $\mathcal{S}_0$ an initial configuration that is not critical, and $\mathcal{CS}$ a critical configuration specification.
If ~$\mathcal{T}$ satisfies recoverability, then $\mathcal{T}$ is realizable.
A realizable system $\mathcal{T}$ may not satisfy recoverability.
\end{proposition}
\begin{proof}
Assume $\mathcal{T}$ satisfies recoverability.
Since $\mathcal{S}_0$ is not critical, $\mathcal{S}_0$ is trivially reachable from $\mathcal{S}_0$ on a compliant trace that uses the lazy time sampling. Then, recoverability~of $\mathcal{T}$ implies that $\mathcal{S}_0$ is not a point-of-no-re\-turn.
Then, as per definition of a point-of-no-re\-turn, there is a compliant infinite trace, $\mathcal{P}$, from $\mathcal{S}_0$ that uses the lazy time sampling.
As per Proposition~\ref{prop:progressing}, $\mathcal{P}$ is a compliant infinite time trace from $\mathcal{S}_0$ that uses the lazy time sampling.
Hence, $\mathcal{T}$ is realizable.
\\[5pt]
We prove the other statement by providing an example of a PTS that is realizable, but does not satisfy recoverability.
\\[2pt]
Let ~$\mathcal{S}_0''= \{ Time@0, A@0\}$, ~$\mathcal{CS}''= \{~\tup{\{D@T\}, \, \emptyset}~\}$ and let PTS $\mathcal{T}''$ contain only the following instantaneous rules:
\begin{subequations}
\begin{small}
\begin{align}
Time@T, \,\red{A@T'}\ \mid \ \, \{ T' \leq T\} \ \lra \ \ Time@T,\, \blue{\,B@(T + 1)}
\label{eq:ex1-3}
\\
Time@T, \,\red{A@T'} \ \mid \ \, \{ T' \leq T\} \ \lra \ \ Time@T,\, \blue{\,C@(T + 1)}
\label{eq:ex1-1}
\\
Time@T, \, \red{B@T'} \ \mid \ \, \{ T'\leq T \} \ \lra \ \ Time@T,\, \blue{\,A@(T + 1)}
\label{eq:ex1-4}
\\
Time@T, \,\red{C@T'} \ \mid \ \, \{ T' \leq T\} \ \lra \ \ Time@T,\, \blue{\,D@(T + 1)}
\label{eq:ex1-2}
\end{align}
\end{small}
\end{subequations}
The following trace that uses the lazy time sampling shows realizability of $\mathcal{T}''$:
\[
\begin{small}
\begin{array}{l}
Time@0,\,{A@0} \ \lra_{(\ref{eq:ex1-3})} \ Time@0, {\,B@1} \ \lra_{Tick} \ Time@1, {\,B@1}
\ \lra_{(\ref{eq:ex1-4})} \\
\ \ Time@1, {\,A@2} \ \ \lra_{Tick} \ Time@2, {\,A@2} \ \lra_{(\ref{eq:ex1-3})} \ Time@2, {\,B@3} \ \lra_{Tick} \ \dots
\end{array}
\end{small}
\]
Configuration ~$\widetilde{\mathcal{S}}=\{Time@T, C@T'\}$ is a point-of-no-re\-turn ~(for any $T, T'$) as the rule (\ref{eq:ex1-2}) is the only instantaneous rule that may be applicable to $\widetilde{\mathcal{S}}$ (if $T' \leq T$), so all infinite traces from $\widetilde{\mathcal{S}}$ that use the lazy time sampling contain the critical configuration~ \mbox{$\{Time@T'', D@(T''+1)\}$}.
Configuration $\widetilde{\mathcal{S}}$ is reachable from $\mathcal{S}_0''$ by a compliant trace that uses the lazy time sampling:
~ $
Time@0,\,{A@0} \ \lra_{(\ref{eq:ex1-1})} \ \blue{ Time@0, {\,C@1} } \ .
$~
Since $\widetilde{\mathcal{S}}$ is a point-of-no-re\-turn,
$\mathcal{T}''$ does not satisfy recoverability.
\qed
\end{proof}
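The point-of-no-return claim for $\widetilde{\mathcal{S}}$ can likewise be checked by an exhaustive search over lazy successors. The sketch is illustrative only: configurations are encoded as (global time, fact, timestamp) triples, the four rules of $\mathcal{T}''$ are hard-coded, and the search depth is chosen large enough for this finite example.

```python
def instantaneous_successors(cfg):
    """cfg = (global_time, fact, fact_timestamp); the four rules of T''."""
    t, fact, ft = cfg
    if ft > t:                 # guard T' <= T not satisfied
        return []
    targets = {"A": ["B", "C"], "B": ["A"], "C": ["D"], "D": []}
    return [(t, g, t + 1) for g in targets[fact]]

def lazy_steps(cfg):
    """Lazy time sampling: Tick applies only if no instantaneous rule does."""
    succ = instantaneous_successors(cfg)
    return succ if succ else [(cfg[0] + 1, cfg[1], cfg[2])]

def critical(cfg):
    return cfg[1] == "D"       # CS'' makes every D-fact critical

def doomed(cfg, depth=6):
    """True if every lazy branch from cfg hits a critical configuration
    within 'depth' steps (sufficient for this small example)."""
    if critical(cfg):
        return True
    if depth == 0:
        return False
    return all(doomed(s, depth - 1) for s in lazy_steps(cfg))

assert doomed((0, "C", 1))         # {Time@0, C@1} is a point-of-no-return
assert not doomed((0, "A", 0))     # the initial configuration is not
```

From $\{Time@0, C@1\}$ the guard forces a $Tick$, after which the only applicable rule produces a $D$-fact, so every lazy branch is non-compliant; from the initial configuration the $A$/$B$ cycle avoids $D$ forever.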
Using the transitivity of the subset relation, we can infer relations among all of our properties, both for PTSes and for timed MSR systems in general.
We summarize the results in the following corollaries.
\begin{corollary}
\label{thm:properties}
Let ~$\mathcal{R}_{ealizability}^{MSR}$ , $\mathcal{R}_{eliability}^{MSR}$ , $\mathcal{R}_{ecoverability}^{MSR}$~ and ~$\mathcal{S}_{urvivability}^{MSR}$~ be the classes of timed MSR systems that satisfy realizability, reliability, recoverability~and survivability, respectively, w.r.t. the lazy time sampling and an initial configuration that is not critical.
Then, the following proper subset relations hold:
$$
\mathcal{S}_{urvivability}^{MSR} \ \subset \ \mathcal{R}_{eliability}^{MSR} \ \subset \ \mathcal{R}_{ecoverability}^{MSR} \ \subset \ \mathcal{R}_{ealizability}^{MSR} \ .
$$
\end{corollary}
\begin{proof}
The statement directly follows from Propositions \ref{thm:deadlock-recover}, \ref{thm:deadlock-survive}
and \ref{thm:realize-recover}.
Also, recall that w.r.t. a critical initial configuration any system satisfies recoverability, see Remark~\ref{initial-empty}.
\qed
\end{proof}
\begin{corollary}
\label{thm:properties-PTS}
Let ~$\mathcal{R}_{ealizability}^{PTS}$ , $\mathcal{R}_{eliability}^{PTS}$ , $\mathcal{R}_{ecoverability}^{PTS}$~ and ~ $\mathcal{S}_{urvivability}^{PTS}$~ be the classes of PTSes that satisfy realizability, reliability, recoverability~and survivability, respectively, w.r.t. the lazy time sampling and an initial configuration that is not critical.
Then, the following relations hold, where both subset inclusions are proper:
$$
\mathcal{S}_{urvivability}^{PTS} \ \subset \ \mathcal{R}_{eliability}^{PTS}\ = \ \mathcal{R}_{ecoverability}^{PTS} \ \subset \ \mathcal{R}_{ealizability}^{PTS} \ .
$$
\end{corollary}
\begin{proof}
The statement directly follows from Propositions \ref{thm:deadlock-recover-PTS}, \ref{thm:deadlock-survive} and \ref{thm:realize-recover}. %
\qed
\end{proof}
\begin{corollary}
\label{thm:properties-n}
Let ~$n\mathcal{R}_{ealizability}^{PTS}$ , $n\mathcal{R}_{eliability}^{PTS}$~ and ~ $n\mathcal{S}_{urvivability}^{PTS}$~ be the classes of PTSes that satisfy $n$-time realizability, $n$-time reliability~and $n$-time survivability, respectively, w.r.t. the lazy time sampling and an initial configuration that is not critical.
Then, the following proper subset relations hold:
$$
n\mathcal{S}_{urvivability}^{PTS}
\ \subset \ n\mathcal{R}_{eliability}^{PTS} \ \subset \ n\mathcal{R}_{ealizability}^{PTS} \ .
$$
\end{corollary}
\begin{proof}
Let a PTS $\mathcal{T}$ satisfy $n$-time survivability.
We check that $\mathcal{T}$ satisfies $n$-time reliability.
Let $\mathcal{S}$ be a configuration that is reachable from $\mathcal{S}_0$ on a compliant trace $\mathcal{P}$ that uses the lazy time sampling and has at most $n$ instances of the $Tick$ rule. Since $\mathcal{T}$ is a PTS, only a bounded number of instantaneous rules can be applied before a $Tick$ rule appears in a trace that uses the lazy time sampling (Proposition~\ref{prop:bounded-length}). Hence, the trace $\mathcal{P}$ can be extended to a trace $\mathcal{P}'$ that contains exactly $n$ instances of the $Tick$ rule and uses the lazy time sampling. Since $\mathcal{T}$ satisfies $n$-time survivability, $\mathcal{P}'$ is compliant. Consequently, $\mathcal{T}$ satisfies $n$-time reliability.
Now, let $\mathcal{T}$ satisfy $n$-time reliability. Then, the trivial trace of length 1 from $\mathcal{S}_0$ (containing only $\mathcal{S}_0$) can be extended to a compliant trace $\mathcal{P}'$ that contains exactly $n$ instances of the $Tick$ rule and uses the lazy time sampling. Hence, $\mathcal{T}$ is $n$-time realizable.
In order to show that the inclusions are proper, we provide examples of PTSes that satisfy one property but not the other.
The PTS given in the proof of Proposition \ref{thm:realize-recover} is an example of an $n$-time realizable system, for all $n >0$, which does not satisfy even $1$-time reliability.
Similarly, the PTS given in the proof of Proposition \ref{thm:deadlock-survive} satisfies $n$-time reliability, for all $n >0$, but does not satisfy even $1$-time survivability.
\qed
\end{proof}
\section{Complexity Results for PTSes }
\label{sec:complex}
In this section we investigate the complexity of verification problems introduced in Section~\ref{sec:problems} for progressing timed systems.
\subsection{PSPACE-Completeness of Verification Problems for PTSes}
We again point out that in our previous work we only dealt with \emph{finite traces}. The additional challenge in this paper is to deal with infinite traces, and in particular with the new verification problems of realizability, reliability, survivability~and recoverability. These problems are new and, as far as we know, have not been investigated in the literature (see Section~\ref{sec:related} for more details).
\vspace{0.5em}
Assume throughout this section the following:
\begin{itemize}
\item $\Sigma$ -- A finite alphabet with $J$ predicate symbols and $E$ constant and function symbols;
\item $\mathcal{T}$ -- A progressing timed MSR
constructed over $\Sigma$;
\item $\mathcal{S}_0$ -- An initial configuration;
\item $m$ -- The number of facts in the initial configuration $\mathcal{S}_0$;
\item $\mathcal{CS}$ -- A critical configuration specification constructed over $\Sigma$;
\item $k$ -- An upper-bound on the size of facts;
\item $\Dmax$ -- An upper-bound on the numeric values of $\mathcal{S}_0, \mathcal{T}$ and $\mathcal{CS}$.
\end{itemize}
\vspace{1em}
PSPACE-hardness of the problems for PTSes
can be shown by adequately adapting our previous work~\cite{kanovich11jar}.
\begin{proposition}[Verification problems over infinite traces are PSPACE hard]
\label{th: realizability-hard}
\ \\
The problems of ~realizability, survivability,
reliability ~and recoverability~
for PTSes
that use the lazy time sampling are PSPACE-hard.
\end{proposition}
\begin{proof}
We encode a deterministic
Turing machine~$M$ that accepts in space~$n$. We adapt the encoding in~\cite{kanovich13ic} to a progressing timed MSR $\mathcal{T}$ that uses the lazy time sampling. For readability, in the rules below, we do not explicitly write the set of constraints $\mathcal{C}_r$ as per Definition~\ref{def:progressing}. This set is implicitly assumed.
Firstly,
we introduce the following propositions:
\\
- \mbox{$R_{i,\xi}$}, ~ where \mbox{$R_{i,\xi}@T_l$} denotes that
{\em ``the \mbox{$i$-}th cell contains symbol~$\xi$ since time $T_l$''},
\qquad where \mbox{$i\!=\! 0,1,..,n\!+\! 1$} and $\xi$ is a symbol
of the tape alphabet of~$M$;
and
\\
- \mbox{$S_{j,q}$}, ~ which denotes that
{\em ``the \mbox{$j$-}th cell is scanned by~$M$ in state~$q$''},
\qquad where \mbox{$j\!=\! 0,1,..,n\!+\! 1$} and $q$ is a state of~$M$.
A Turing machine configuration is encoded by using the following multiset of facts:
\begin{equation}%
\begin{array}{r}
Time@T,\,S_{j,q}@T_1, \,R_{0,\xi_0}@T_2, \,R_{1,\xi_1}@T_2,
\,R_{2,\xi_2}@T_3,\cdots \qquad \qquad \qquad
\\ \cdots,
\,R_{n,\xi_n}@T_{n+2},
\,R_{n\!+\! 1,\xi_{n\!+\! 1}}@T_{n+3}.
\end{array}
\label{eq-config}
\end{equation}
Second, each instruction $\gamma$ in $M$ of the form
\mbox{$q\xi \!\rightarrow\! q'\eta D$}, denoting
{\em ``if in state~$q$ looking at symbol\/~$\xi$,
replace it by\/~$\eta$,
move the tape head one cell in direction\/~$D$ along the tape,
and go into state\/~$q'$''},
is specified by the set of \mbox{$5(n\!+\! 2)$} progressing rules of the form:
\begin{equation}
\begin{array}{l@{\qquad}l@{\qquad}l}
Time@T, \red{\,S_{i,q}@T}, \red{\,R_{i,\xi}@T} \ \lra \ Time@T, \blue{\,F_{i,\gamma}@(T+1)}, \blue{\,R_{i,\xi}@(T+1)} \\
Time@T, \red{\,F_{i,\gamma}@T}, \red{\,R_{i,\xi}@T} \ \lra \ Time@T, \blue{\,F_{i,\gamma}@(T+1)}, \blue{\,H_{i,\gamma}@(T+1)} \\
Time@T, \red{\,F_{i,\gamma}@T}, \red{\,H_{i,\gamma}@T} \ \lra \ Time@T, \blue{\,G_{i,\gamma}@(T+1)}, \blue{\,H_{i,\gamma}@(T+1)}\\
Time@T,\red{\,G_{i,\gamma}@T}, \red{\,H_{i,\gamma}@T} \ \lra \ Time@T, \blue{\,G_{i,\gamma}@(T+1)}, \blue{\,R_{i,\eta}@(T+1)}\\
Time@T,\red{\,G_{i,\gamma}@T}, \red{\,R_{i,\eta}@T} \ \lra \ Time@T, \blue{\,S_{i_D,q'}@(T+1)}, \blue{\,R_{i,\eta}@(T+1)},
\end{array}
\label{ax-ATM}
\end{equation}
where \mbox{$i\!=\! 0,1,..,n\!+\! 1$},
$F_{i,\gamma}$, $G_{i,\gamma}$, $H_{i,\gamma}$ are
auxiliary propositions, and~
\mbox{$i_D:= i \!+\! 1$} if $D$ is {\em right},
~\mbox{$i_D:= i \!-\! 1$} if $D$ is {\em left},
and ~\mbox{$i_D:= i$}, otherwise.
It is easy to check that the above rules are necessarily applied in succession, {\em i.e.}~the only transition possible is of the following form:
\begin{equation}
\begin{array}{c}
Time@t, \,S_{i,q}@t, \,R_{i,\xi}@t \lra \\
Time@t, \,F_{i,\gamma}@(t+1), \,R_{i,\xi}@(t+1) \lra_{Tick} \\
Time@(t+1), \,F_{i,\gamma}@(t+1), \,R_{i,\xi}@(t+1) \lra \\
Time@(t+1),\,F_{i,\gamma}@(t+2), \,H_{i,\gamma}@(t+2) \lra_{Tick}\\
Time@(t+2),\,F_{i,\gamma}@(t+2), \,H_{i,\gamma}@(t+2) \lra \\
Time@(t+2),\, G_{i,\gamma}@(t+3), \,H_{i,\gamma}@(t+3) \lra_{Tick} \\
Time@(t+3),\, G_{i,\gamma}@(t+3), \,H_{i,\gamma}@(t+3) \lra \\
Time@(t+3),\, G_{i,\gamma}@(t+4), \,R_{i,\eta}@(t+4) \lra_{Tick} \\
Time@(t+4), \,G_{i,\gamma}@(t+4), \,R_{i,\eta}@(t+4) \lra \\
Time@(t+4), \,S_{i_D,q'}@(t+5), \,R_{i,\eta}@(t+5).
\end{array}
\end{equation}
Notice that after the application of any of the above rules (Eq.~\ref{ax-ATM}),
no instantaneous rule is applicable, and therefore the $Tick$ rule should be applied.
Thus, the lazy time sampling is reflected in the encoding so that time is advancing after each step of the Turing machine.
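To illustrate the forced succession concretely, the following sketch (an informal toy simulation, not the formal MSR semantics; fact names are abbreviated and the index subscripts dropped) represents facts as a map from names to timestamps and fires the $Tick$ rule only when no instantaneous rule is enabled, as under the lazy time sampling.

```python
# Toy check that the five rules in Eq. (ax-ATM) only fire in succession
# under the lazy time sampling.  Facts: name -> timestamp; R' and S'
# stand for R_{i,eta} and S_{i_D,q'} of the encoding above.

def step(state):
    t = state["Time"]
    # each rule: (facts required at the current time t, facts produced at t+1)
    rules = [
        ({"S", "R"}, {"F", "R"}),
        ({"F", "R"}, {"F", "H"}),
        ({"F", "H"}, {"G", "H"}),
        ({"G", "H"}, {"G", "R'"}),
        ({"G", "R'"}, {"S'", "R'"}),
    ]
    for i, (pre, post) in enumerate(rules):
        if all(state.get(fact) == t for fact in pre):
            for fact in pre:
                del state[fact]
            for fact in post:
                state[fact] = t + 1
            return f"rule {i+1}"
    state["Time"] = t + 1          # lazy sampling: tick only when stuck
    return "Tick"

state = {"Time": 0, "S": 0, "R": 0}
applied = [step(state) for _ in range(9)]
```

Running it yields the alternating sequence rule 1, $Tick$, rule 2, $Tick$, \ldots, rule 5, matching the displayed transition: one Turing machine step consumes five instantaneous rules and four ticks.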
A critical configuration is any configuration corresponding to a final state of the Turing machine, that is, critical configurations are defined w.r.t. the following critical configuration specification:
\[
\{\,\tup{\,\{S_{i_D,q_F}@T\}, \emptyset\,} \mid q_F \textrm{ is an accepting or rejecting state of } M\} .
\]
With the above encoding we reduce the problem of Turing machine termination in space $n$ to the realizability problem.
More precisely, the given Turing machine~$M$ does not terminate if and only if there is an infinite time compliant trace in the obtained PTS
$\mathcal{T}$ that uses the lazy time sampling.
The encoding is easily proved to be sound and faithful (see~\cite{kanovich13ic} for more details).
Thus, the realizability problem is PSPACE-hard.
From the uniqueness of the Turing machine computation, it follows that the survivability problem is also PSPACE-hard.
Realizability is an instance of the problem of checking whether a configuration is not a point-of-no-re\-turn.
Recall that a system is realizable if there is a compliant infinite time trace $\mathcal{P}$ from the initial configuration in which the global time tends to infinity.
Since $\mathcal{T}$ is progressing, we get the condition on time (time tends to infinity) from Proposition~\ref{prop:progressing}.
Indeed, a system is realizable if and only if the initial configuration is not a point-of-no-re\-turn.
As PSPACE and co-PSPACE are the same complexity class
and realizability is PSPACE-hard, the problem of determining whether a configuration is a point-of-no-re\-turn~is PSPACE-hard.
Since the reliability~and recoverability~system problems comprise checking whether a configuration is a point-of-no-re\-turn, they are both PSPACE-hard.
\qed
\end{proof}
Infinite traces over configurations of PTSes are infinite time traces (Proposition~\ref{prop:progressing}). The same holds for traces over
$\delta$-representations of PTSes.
For our verification problems we, therefore, need to construct an infinite compliant trace. The following lemma establishes a criterion.
\begin{lemma}
\label{lem:lengthPSPACE}
For a PTS $\mathcal{T}$ assume $\Sigma, \mathcal{S}_0, m,\mathcal{CS},k,\Dmax$ as described above. If there is a compliant trace (constructed using $\delta$-representations) starting with the $\delta$-representation of $\mathcal{S}_0$ of length greater than $L_\Sigma(m,k,\Dmax)$, then there is an \emph{infinite} compliant trace starting with the $\delta$-representation of $\mathcal{S}_0$.
\end{lemma}
\begin{proof}
Since there are only $L_\Sigma(m,k,\Dmax)$ different \mbox{$\delta$-representations}, a trace of length greater than $L_\Sigma(m,k,\Dmax)$ necessarily contains the same \mbox{$\delta$-representation} twice, that is, there is a loop in the trace. By repeating the $\delta$-representations appearing in the loop, we can construct an infinite trace, which is necessarily compliant.
\qed
\end{proof}
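The pumping argument above can be made concrete. The sketch below (over a made-up three-state system; names and transitions are assumptions for illustration) locates the first repeated state in a trace and repeats the enclosed loop; every consecutive pair of states in the extension already occurs in the original trace, so a compliant trace yields a compliant extension.

```python
# Pigeonhole sketch: a trace longer than the number of distinct states
# must repeat a state, and the loop between the two occurrences can be
# pumped to extend the trace to any length.

def find_loop(trace):
    seen = {}
    for i, s in enumerate(trace):
        if s in seen:
            return seen[s], i      # loop occupies positions [seen[s], i)
        seen[s] = i
    return None

def pump(trace, total_len):
    start, end = find_loop(trace)
    loop = trace[start:end]
    out = trace[:start]
    while len(out) < total_len:    # repeat the loop until long enough
        out.extend(loop)
    return out[:total_len]

trace = ["a", "b", "c", "b"]       # length 4 > 3 distinct states
long_trace = pump(trace, 10)
```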
For our complexity results we use some auxiliary functions with configurations or \mbox{$\delta$-representations} as their arguments, as suitable.
For our next results involving the four properties,
assume that for any given timed MSR $\mathcal{T}$, an initial configuration $\mathcal{S}_0$ and a critical configuration specification $\mathcal{CS}$
we have two functions, $\next$ and $\critical$, which check, respectively, whether a rule in $\mathcal{T}$ is applicable to a given $\delta$-representation and whether a $\delta$-representation is critical with respect to $\mathcal{CS}$. Moreover, for a given timed MSR $\mathcal{T}$, let \mustTick\ be a function implementing the lazy time sampling. It takes
a $\delta$-representation of
system $\mathcal{T}$, and returns 1 when the $Tick$ must be applied and 0 when it must not be applied according to the lazy time sampling. We assume that $\next$, $\critical$ and \mustTick\ run in Turing space bounded by a polynomial in $m,k,\log_2(\Dmax)$. Notice that for our examples this is the case, as such functions can be constructed because the system is balanced and facts are of bounded size.
Because of Lemma~\ref{lem:lengthPSPACE}, we can show that the realizability problem is in PSPACE by searching for compliant traces of length greater than $L_\Sigma(m,k,\Dmax)$, with the step counter stored in binary. To do so, we rely on the fact that PSPACE and NPSPACE are the same complexity class~\cite{savitch}.
\begin{proposition}[Realizability is in PSPACE]
\label{th:PSPACE-feasibility}
\ \\
For a PTS $\mathcal{T}$ assume $\Sigma, \mathcal{S}_0, m,\mathcal{CS},k,\Dmax$, $\next, \critical$ and \mustTick\ as described above.
There is an algorithm that, given an initial configuration $\mathcal{S}_0$, decides whether $\mathcal{T}$ is realizable with respect to the lazy time sampling, $\mathcal{CS}$ and $\mathcal{S}_0$; the algorithm runs in space bounded by a polynomial in $m,k$ and $\log_2(\Dmax)$.
The polynomial is in fact $\log_2(L_\Sigma(m,k,\Dmax))$.
\end{proposition}
\begin{proof}
Let $\mathcal{T}$ be a PTS constructed over finite alphabet $\Sigma$ with $J$ predicate symbols and $E$ constant and function symbols.
Let $\mathcal{CS}$ be a critical configuration specification constructed over $\Sigma$ and $\mathcal{S}_0$ be a given initial configuration.
Let $m$ be the number of facts in the initial configuration $\mathcal{S}_0$,
$k$ an upper bound on the size of facts, and
$\Dmax$ a natural number that is an upper bound on the numeric values appearing in $\mathcal{S}_0, \mathcal{T}$ and $\mathcal{CS}$.
We propose a non-deterministic algorithm
that accepts whenever there is a compliant trace starting from $\mathcal{S}_0$ in which time tends to infinity and which uses the lazy time sampling.
We then apply Savitch's Theorem to determinize this algorithm.
In order to obtain the PSPACE result we rely on the equivalence among configurations which enables us to search for traces over $\delta$-representations~(Proposition \ref{thm:delta configurations})
instead of searching for traces over concrete configurations.
Because of Lemma~\ref{lem:lengthPSPACE}, in the search for compliant traces, it suffices to consider traces of size bounded by the number of different $\delta$-representations, $L_\Sigma(m,k,\Dmax)$, with the step counter stored in binary. Recall that \
$
L_\Sigma(m,k,\Dmax) \leq (\Dmax + 2)^{(m-1)} J^m (E + 2 m k)^{m k} \ .
$
Let $i$ be a natural number such that \ $0 \leq i \leq L_\Sigma(m,k,\Dmax) +1$.
The algorithm starts with $i=0$ and $W_0$ set to be the $\delta$-representation of $\mathcal{S}_0$, and iterates the following sequence of operations:
\begin{quote}
\begin{enumerate}
\item If \ $W_i$ is a critical $\delta$-representation, {\em i.e.}, if $\critical(W_i) = 1$, then return FAIL, otherwise continue;
\item If \ $i > L_\Sigma(m,k,\Dmax) + 1$, then ACCEPT;
else continue;
\item If \mustTick $(W_i)=1$, then replace $W_i$ by $W_{i+1}$ obtained from $W_i$ by applying the $Tick$ rule. Otherwise, non-deterministically guess an instantaneous rule $r$ from $\mathcal{T}$ applicable to $W_i$,
{\em i.e.}, a rule $r$ such that $\next(r,W_i) = 1$; if such a rule exists,
replace $W_i$ with the $\delta$-representation $W_{i+1}$
resulting from applying rule $r$ to $W_i$.
Otherwise, FAIL;
\item Set \ $ i = i + 1$ .
\end{enumerate}
\end{quote}
\vspace{3pt}
We now show that this algorithm runs in polynomial space.
The greatest number reached by the counter is $L_\Sigma(m,k,\Dmax)$, which, stored in binary encoding, takes
space $\log(L_\Sigma(m,k,\Dmax) + 1)$, bounded by:
\[
\begin{array}[]{lcl}
m\log(J) + (m-1)\log(\Dmax + 2) + mk\log(E + 2 mk).
\end{array}
\]
Therefore, in order to store the values of the step-counter, one only needs space that is polynomial in the given inputs.
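As a concrete illustration (with arbitrary toy values for $m$, $k$, $\Dmax$, $J$ and $E$, an assumption made only for this sketch), the following code computes $L_\Sigma(m,k,\Dmax)$ and the number of counter bits, confirming that the displayed sum of logarithms matches the binary length of the bound.

```python
from math import ceil, log2

# Counter bound L_Sigma(m, k, Dmax) and the bits needed to store it in
# binary; the concrete parameter values below are toy assumptions.
def l_sigma(m, k, dmax, j, e):
    return (dmax + 2) ** (m - 1) * j ** m * (e + 2 * m * k) ** (m * k)

def counter_bits(m, k, dmax, j, e):
    # log of the product = the sum of the logs, matching the displayed bound
    return ceil(m * log2(j) + (m - 1) * log2(dmax + 2)
                + m * k * log2(e + 2 * m * k))

m, k, dmax, j, e = 4, 3, 10, 5, 6
L = l_sigma(m, k, dmax, j, e)
bits = counter_bits(m, k, dmax, j, e)
```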
Also, any $\delta$-representation $W_i$
can be stored in space polynomial in the given inputs.
Namely, since $W_i$ is of the form
$$ [\,Q_1,\,\delta_{Q_1,Q_2},\,Q_2, \ldots, \,Q_{m-1}, \,\delta_{Q_{m-1},Q_m}, Q_m\,]$$
and values of the truncated time differences, $\delta_{i,j}$, are bounded, each $W_i$ can be stored in space
\ $mk+(m-1)(\Dmax+2)$, polynomially bounded with respect to the inputs.
Finally, in step 3 the algorithm needs to store the rule $r$. This is done by remembering two $\delta$-representations; moving from one $\delta$-representation to the next amounts to updating the facts, their positions and the corresponding truncated time differences. Hence, step 3 can be performed in space polynomial in $m,k,\log_2(\Dmax)$ and the sizes of $\next$ and \mustTick.
As assumed, the functions $\next$, $\critical$ and \mustTick\ run in PSPACE w.r.t. $m,k$ and $\log_2(\Dmax)$.
\qed
\end{proof}
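The non-deterministic algorithm in the proof above can be sketched as a bounded depth-first search (determinization via Savitch's Theorem is needed only for the space bound, not for correctness). The callbacks `critical`, `must_tick`, `successors` and `tick` stand in for the oracles $\critical$, \mustTick, $\next$ and the $Tick$ rule, and the one-fact toy system is an assumption for illustration only.

```python
# Bounded DFS sketch of the realizability check: accept iff a compliant
# trace longer than the bound exists (by the lemma, it then loops and
# extends to an infinite compliant trace).

def realizable(w0, bound, critical, must_tick, successors, tick):
    def search(w, i):
        if critical(w):
            return False               # step 1: critical => FAIL
        if i > bound:
            return True                # step 2: long enough => ACCEPT
        if must_tick(w):
            return search(tick(w), i + 1)
        return any(search(w2, i + 1) for w2 in successors(w))
    return search(w0, 0)

# toy system: state ("p", t); ticking increments t modulo 3
critical = lambda w: w[0] == "q"       # never reached here
must_tick = lambda w: True             # no instantaneous rules in the toy
successors = lambda w: []
tick = lambda w: (w[0], (w[1] + 1) % 3)
ok = realizable(("p", 0), 10, critical, must_tick, successors, tick)
```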
\vspace{0.5em}
We now consider the survivability problem. Recall that in order to prove that $\mathcal{T}$ satisfies survivability with respect to the lazy time sampling, a critical configuration specification $\mathcal{CS}$ and an initial configuration $\mathcal{S}_0$, we must show that $\mathcal{T}$ is realizable w.r.t. the lazy time sampling, $\mathcal{CS}$ and $\mathcal{S}_0$,
and that all infinite traces $\mathcal{P}$ that use the lazy time sampling and start with $\mathcal{S}_0$ are compliant with respect to $\mathcal{CS}$, and that the global time in $\mathcal{P}$ tends to infinity (Definition~\ref{def:survivabilty}).
Checking that a system is realizable is PSPACE-complete as we have just shown. Moreover, the property that time tends to infinity follows from Proposition~\ref{prop:progressing} for progressing timed MSR. It remains to show that all infinite traces using the lazy time sampling are compliant, which reduces to checking that \emph{no critical configuration is reachable} from the initial configuration $\mathcal{S}_0$ by a trace that uses the lazy time sampling. This property can be decided in PSPACE by relying on the fact that PSPACE, NPSPACE and co-PSPACE are all the same complexity class~\cite{savitch}. Therefore, survivability is also in PSPACE.
\begin{proposition}[Survivability is in PSPACE]
\label{th:PSPACE-survivability}
\ \\
For a PTS $\mathcal{T}$ assume $\Sigma, \mathcal{S}_0, m,\mathcal{CS},k,\Dmax$, $\next, \critical$ and \mustTick\ as described above.
There is an algorithm that decides whether $\mathcal{T}$ satisfies survivability with respect to the lazy time sampling, $\mathcal{CS}$ and $\mathcal{S}_0$, and that runs in space bounded by a polynomial in $m,k$ and $\log_2(\Dmax)$.
\end{proposition}
\begin{proof}
We extend the proof of Proposition \ref{th:PSPACE-feasibility} to the survivability problem, using the same notation and making the same assumptions.
In order to prove that $\mathcal{T}$ satisfies survivability with respect to the lazy time sampling, $\mathcal{CS}$ and $\mathcal{S}_0$, we need to show that all infinite traces that are using the lazy time sampling and are starting from $\mathcal{S}_0$ are compliant with respect to $\mathcal{CS}$.
Since $\mathcal{T}$ is progressing, in any infinite trace time necessarily tends to infinity, as per Proposition \ref{prop:progressing}.
Based on our bisimulation result (Proposition \ref{thm:delta configurations})
we propose the search algorithm over $\delta$-representations, instead of searching over concrete configurations.
We rely on Lemma~\ref{lem:lengthPSPACE} and search only for traces of size bounded by the number of different $\delta$-representations, ~$L_\Sigma(m,k,\Dmax) $.
In order to prove survivability we first check realizability by using the algorithm given in the proof of Proposition \ref{th:PSPACE-feasibility}. Notice that this algorithm is in PSPACE with respect to the inputs of survivability as well.
Next we check that no critical configuration is reachable from $\mathcal{S}_0$ using the lazy time sampling.
The following algorithm accepts when a critical configuration is reachable, and fails otherwise.
It begins with $i=0$ and $W_0$ set to be the $\delta$-representation of $\mathcal{S}_0$, and iterates the following sequence of operations:
\begin{quote}
\begin{enumerate}
\item If \ $W_i$ is representing a critical configuration, {\em i.e.}, if $\critical(W_i) = 1$, then return ACCEPT, otherwise continue;
\item If \ $i > L_\Sigma(m,k,\Dmax) $, then FAIL;
else continue;
\item If \mustTick $(W_i)=1$, then replace $W_i$ by $W_{i+1}$ obtained from $W_i$ by applying the $Tick$ rule. Otherwise, non-deterministically guess an instantaneous rule $r$ from $\mathcal{T}$ applicable to $W_i$,
{\em i.e.}, a rule $r$ such that $\next(r,W_i) = 1$; if such a rule exists,
replace $W_i$ with the $\delta$-representation $W_{i+1}$
resulting from applying rule $r$ to $W_i$.
Otherwise, continue;
\item Set \ $i = i + 1$.
\end{enumerate}
\end{quote}
This algorithm accepts iff
there is a trace from $\mathcal{S}_0$ that is not compliant and uses the lazy time sampling.
Since $\mathcal{T}$ is a PTS, any trace can be extended to an infinite time trace, including traces that use the lazy time sampling.
Therefore, the algorithm accepts if and only if there is an infinite time trace from $\mathcal{S}_0$ that is not compliant and uses the lazy time sampling.
We take advantage of the fact that PSPACE, NPSPACE and co-PSPACE are all the same complexity class~\cite{savitch}: we determinize the above algorithm and then swap ACCEPT and FAIL. The resulting algorithm returns ACCEPT if and only if no critical configuration is reachable from the given initial configuration using the lazy time sampling,
{\em i.e.}, if and only if all infinite time traces that start with the given initial configuration and use the lazy time sampling are compliant.
The proof that the above algorithms run in polynomial space is very similar to the corresponding part of the proof of Proposition \ref{th:PSPACE-feasibility}.
\qed
\end{proof}
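The complement check in the proof above (no critical configuration reachable under the lazy time sampling) can also be sketched as a deterministic exhaustive exploration of the finite $\delta$-representation space. The oracle names and the four-state toy system are assumptions for illustration, as before; this exploration uses more than polynomial space in general, so it illustrates only correctness, not the PSPACE bound.

```python
from collections import deque

# Exhaustive BFS over the (finite) abstract state space: survivability's
# second condition holds iff no reachable state is critical.

def critical_reachable(w0, critical, must_tick, successors, tick):
    seen, queue = {w0}, deque([w0])
    while queue:
        w = queue.popleft()
        if critical(w):
            return True
        nexts = [tick(w)] if must_tick(w) else successors(w)
        for w2 in nexts:
            if w2 not in seen:
                seen.add(w2)
                queue.append(w2)
    return False

tick = lambda w: (w + 1) % 4           # 4 abstract states, time wraps
ok = not critical_reachable(0, lambda w: w == 99, lambda w: True,
                            lambda w: [], tick)
```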
We now investigate the complexity of the
reliability~problem for PTSes, which is the same as the
recoverability~system problem for PTSes.
Recall that a configuration $\mathcal{S}$ is a point-of-no-re\-turn~
iff ~there is no compliant infinite time trace
from $\mathcal{S}$ that uses the lazy time sampling.
\begin{proposition}[Reliability ~is in PSPACE]
\label{th:PSPACE-newprop}
\ \\
For a PTS $\mathcal{T}$ assume $\Sigma, \mathcal{S}_0, m,\mathcal{CS},k,\Dmax$, $\next, \critical$ and \mustTick\ as described above.
There is an algorithm that, given an initial configuration $\mathcal{S}_0$, decides whether $\mathcal{T}$ satisfies \emph{reliability}~with respect to the lazy time sampling, $\mathcal{CS}$ and $\mathcal{S}_0$; the algorithm runs in space bounded by a polynomial in $m,k$ and $\log_2(\Dmax)$.
\end{proposition}
\begin{proof}
We check that for any configuration $\mathcal{S}$ reachable from $\mathcal{S}_0$ using the lazy time sampling, there is a compliant infinite time trace from $\mathcal{S}$, {\em i.e.},
that $\mathcal{S}$ is not a point-of-no-re\-turn.
The following algorithm accepts when no point-of-no-re\-turn ~is reachable from $\mathcal{S}_0$ in $\mathcal{T}$ on a compliant trace, and fails otherwise.
It begins with $i=0$ and $W_0$ set to be the $\delta$-representation of $\mathcal{S}_0$, and iterates the following sequence of operations:
\begin{quote}
\begin{enumerate}
\item If \ $W_i$ is representing a critical configuration, {\em i.e.}, if $\critical(W_i) = 1$, then return FAIL, otherwise continue;
\item If \ $W_i$ is representing a point-of-no-re\-turn, {\em i.e.}, if $PON(W_i) = 1$, then return FAIL, otherwise continue;
\item If \ $i > L_\Sigma(m,k,\Dmax) $, then ACCEPT;
else continue;
\item If \mustTick $(W_i)=1$, then replace $W_i$ by $W_{i+1}$ obtained from $W_i$ by applying the $Tick$ rule. Otherwise, non-deterministically guess an instantaneous rule $r$ from $\mathcal{T}$ applicable to $W_i$,
{\em i.e.}, a rule $r$ such that $\next(r,W_i) = 1$; if such a rule exists,
replace $W_i$ with the $\delta$-representation $W_{i+1}$
resulting from applying rule $r$ to $W_i$.
Otherwise, continue;
\item Set \ $i = i + 1$.
\end{enumerate}
\end{quote}
Since $PON$ (deciding whether a $\delta$-representation is a point-of-no-re\-turn, which is in PSPACE by Proposition~\ref{th:PSPACE-feasibility}, as PSPACE is closed under complement), $\next, \critical$ and \mustTick\
run in Turing space bounded by a polynomial in $m$, $k$ and $\log_2(\Dmax)$,
it follows, similarly to the proof of \mbox{Proposition \ref{th:PSPACE-feasibility}}, that the above algorithm runs in deterministic polynomial space.
\qed
\end{proof}
\begin{theorem}
Let $\Sigma$ be a finite alphabet, $\mathcal{T}$ a PTS,
$\mathcal{S}_0$ an initial configuration, $m$ the number of facts in $\mathcal{S}_0$, $\mathcal{CS}$ a critical configuration specification, $k$ an upper-bound on the size of facts, and $\Dmax$ an upper-bound on the numeric values in $\mathcal{S}_0, \mathcal{T}$ and $\mathcal{CS}$.
Let functions $\next, \critical$ and \mustTick\
run in Turing space bounded by a polynomial in $m,k,\log_2(\Dmax)$ and return 1, respectively, when a rule in $\mathcal{T}$ is applicable to a given $\delta$-representation, when a $\delta$-representation is critical with respect to $\mathcal{CS}$, and when a $Tick$ rule should be applied to the given $\delta$-representation using the lazy time sampling.
The problems of realizability, reliability, recoverability~and survivability for PTSes that use the lazy time sampling are PSPACE-complete when assuming a bound on the size of facts.
\end{theorem}
\begin{proof}
The result follows directly from Propositions \ref{th: realizability-hard},~\ref{th:PSPACE-feasibility},~\ref{th:PSPACE-survivability},
and \ref{th:PSPACE-newprop}.
\qed
\end{proof}
\subsection{Complexity Results for Time-Bounded Problems }
We now consider the $n$-time versions of the problems defined in Section \ref{sec:problems}.
\vspace{3pt}
The following lemma establishes an upper-bound on the length of traces with exactly $n$ instances of $Tick$ rules for PTSes. This bound is immediately obtained from Proposition~\ref{prop:bounded-length}.
\begin{lemma}
\label{lem:polysize}
Let $n$ be fixed.
Let $\mathcal{T}$ be a PTS,
$\mathcal{S}_0$ an initial configuration and $m$ the number of facts in $\mathcal{S}_0$.
For all traces $\mathcal{P}$ of $\mathcal{T}$ that start with $\mathcal{S}_0$ and contain exactly $n$ instances of the Tick rule, the length of $\mathcal{P}$ is bounded by ~$(n+1)\cdot m+n$.
\end{lemma}
\begin{proof}
Assume that there are exactly $n$ time ticks in $\mathcal{P}$.
As per Proposition \ref{prop:bounded-length} there are at most $m-1$ instantaneous rules between any $Tick$ and the next $Tick$ rule in $\mathcal{P}$. Consequently, in total there are at most $(n+1)\cdot m + n$ rules in $\mathcal{P}$.
\qed
\end{proof}
For NP-hardness, we encode the NP-hard problem 3-SAT as an $n$-time realizability problem, similarly to our previous work~\cite{kanovich13esorics,kanovich10fcsPriMod}.
\begin{proposition}
\label{th: n-realizability-hard}
The problem of $n$-time realizability for PTSes
w.r.t. the lazy time sampling is NP-hard.
\end{proposition}
\begin{proof}
Assume we are given a formula \ $ F = (l_{11} \lor l_{12} \lor l_{13}) \land \cdots \land (l_{n1} \lor l_{n2} \lor l_{n3})$
in 3-CNF, {\em i.e.}~in conjunctive normal form with exactly 3 literals in each clause. Recall that each literal $l_{ij}$ is either an atomic formula $v_k$ or
its negation $\neg v_k$.
Let $p$ be the number of variables and $n$ the number of clauses in $F$.
We construct an initial configuration $\mathcal{S}_0$ and a progressing timed MSR $\mathcal{T}$ that uses the lazy time sampling to check whether $F$ is satisfiable or not.
For each variable $v_i$ in $F$, we include the following rules in $\mathcal{T}$:
\begin{equation}
\label{eq: sat-interp}
\begin{array}{l}
Time@T, \red{\,V_i@T_i} \ \mid~\{~T \geq T_i~\} \ \lra \ Time@T, \blue{\,A_i@(T+1)}\\
Time@T, \red{\,V_i@T_i} \ \mid~\{~T \geq T_i~\} \ \lra \ Time@T, \blue{\,B_i@(T+1)}\\
\end{array}
\end{equation}
These rules rewrite the fact $V_i$ to the fact $A_i$
denoting true, or to the fact $B_i$ denoting false.
Intuitively, these rules construct an interpretation for the variables in $F$.
For each variable $v_k$ we also include the following set of rules:
\begin{equation}
\label{eq: sat-interp-F}
\begin{array}{l}
Time@T, \,A_k@T_1, \red{\,I({(v_k \lor X \lor Y) \land C})@T_2} \, \mid \,\{\,T \geq T_1, T \geq T_2 \,\} \ \lra \\ \qquad
Time@T, \,A_k@T_1, \blue{\,I({C})@(T +1)}\\
Time@T, \,A_k@T_1, \red{\,I({(X \lor v_k \lor Y) \land C})@T_2} \, \mid \, \{ \, T \geq T_1, T \geq T_2 \, \}\ \lra \\ \qquad
Time@T, \,A_k@T_1, \blue{\,I({C})@(T +1)}\\
Time@T, \,A_k@T_1, \red{\,I({(X \lor Y \lor v_k) \land C})@T_2} \, \mid \, \{ \, T \geq T_1, T \geq T_2 \, \} \ \lra \\ \qquad
Time@T, \,A_k@T_1, \blue{\,I({C})@(T +1)}
\\[0.5em]
Time@T, \,B_k@T_1, \red{\,I({(\neg v_k \lor X \lor Y) \land C})@T_2} \, \mid \, \{ \,T \geq T_1, T \geq T_2 \, \} \ \lra \\ \qquad
Time@T, \,B_k@T_1, \blue{\,I({C})@(T +1)}\\
Time@T, \,B_k@T_1, \red{\,I({(X \lor \neg v_k \lor Y) \land C})@T_2} \, \mid \, \{ \,T \geq T_1, T \geq T_2 \, \} \ \lra \\ \qquad
Time@T, \,B_k@T_1, \blue{\,I({C})@(T +1)}\\
Time@T, \,B_k@T_1, \red{\,I({(X \lor Y \lor \neg v_k) \land C})@T_2} \, \mid \, \{ \, T \geq T_1, T \geq T_2 \, \} \ \lra \\ \qquad
Time@T, \,B_k@T_1, \blue{\,I({C})@(T +1)}
\end{array}
\end{equation}
which take an interpretation and reduce $F$ accordingly.
Here $v_1, \dots, v_p$ \ are constants denoting variable names appearing in $F$,
$X$ and $Y$ are variables denoting literals,
and $C$ is a variable denoting formulas.
We set the initial configuration as $$\mathcal{S}_0 = \{ \ Time@0,\,V_1@0,\ldots,\,V_p@0,\,I(F\wedge \top)@0,\,Start@0 \ \}\ .$$
Here, we use $\top$ as the empty clause formula.
The fact $Start$ is never consumed and is used to specify the critical configuration specification as a set of $n$ pairs:
$$\tup{~ \{ \,Start@T_1,\,I(G_i)@T_2\,\},\{\,T_2 \geq T_1 + n\, \}\, } , \ \ 1\leq i \leq n $$
where $G_i$ is of the form \ $ (l_{i1} \lor l_{i2} \lor l_{i3}) \land \cdots \land (l_{n1} \lor l_{n2} \lor l_{n3}) \land \top$.
We, therefore, have a total of \ $3 \cdot p+3$ predicates ($Time$, $Start$, $V_i$, $A_i$, $B_i$, $I$), \ $p+6$ constants ($v_k$, $\land$, $\lor$, $\neg$, $($, $)$ and $\top$) and a polynomial number of rules, namely
$2\cdot p+6\cdot p= 8 \cdot p$ rules.
By inspection, the constructed $\mathcal{T}$ is a progressing timed MSR.
It is easy to see that our encoding is sound and complete: a configuration with the fact $I(\top)$ will be reached if and only if $F$ is satisfiable. Moreover, there is a compliant trace that uses the lazy time sampling with exactly $n$ ticks if and only if $F$ is satisfiable.
Namely, before the first tick, using the rules given in
(\ref{eq: sat-interp}), we set all variables as true or false, according to the model of $F$.
Then, since no instantaneous rule is applicable, we advance time, as per the lazy time sampling.
We then use the rules given in
(\ref{eq: sat-interp-F}) to evaluate
the interpretation of the formula $F$.
In order for any of the rules in (\ref{eq: sat-interp-F}) to apply, the global time needs to advance so that the attached constraints are satisfied, enabling the rule application.
Since the lazy time sampling is used, exactly one $Tick$ rule is applied between consecutive applications of the above instantaneous rules.
This means that exactly $n$ $Tick$ rules are applied in order to reach the fact $I(\top)$, denoting that the formula $F$ is satisfiable.
If $F$ is not satisfiable, then no trace with $n$ ticks is compliant w.r.t. the above critical configuration specification.
If $F$ is satisfiable, then there is a trace with $n$ ticks that is compliant.
Hence, the $n$-time realizability problem is NP-hard.
\qed
\end{proof}
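The counting argument in the reduction can be illustrated with a small simulation: one clause of $I(F)$ is consumed per time unit, so a satisfying assignment reaches $I(\top)$ after exactly $n$ ticks, while a falsified clause leaves the trace stuck. The literal representation below is an assumption made for this sketch, and the interpretation-building phase before the first tick is folded into the input assignment for brevity.

```python
def ticks_to_truth(clauses, assignment):
    """Simulate the trace: one clause of I(F) is consumed per time unit.
    Returns the number of Ticks if I(T) is reached, None if stuck."""
    ticks = 0
    for clause in clauses:               # literals: (variable, negated?)
        ticks += 1                       # lazy sampling: one Tick per clause
        if not any(assignment[v] != neg for v, neg in clause):
            return None                  # no rule applies: trace is stuck
    return ticks

# F = (v1 or ~v2 or v3) and (~v1 or v2 or v2)
F = [[(1, False), (2, True), (3, False)], [(1, True), (2, False), (2, False)]]
sat = ticks_to_truth(F, {1: True, 2: True, 3: False})
unsat = ticks_to_truth(F, {1: True, 2: False, 3: False})
```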
For our bounded versions of realizability, reliability~and survivability we use auxiliary functions with configurations as their arguments.
For simplicity, we still use the same notation $\next, \critical$ and $\mustTick$, as the types of the arguments are clear from the context.
We will assume that $\next$, $\critical$ and \mustTick\ run in time bounded by a polynomial in $m$ and $k$,
and return 1, respectively, when a rule in $\mathcal{T}$ is applicable to a given configuration, when a configuration is critical with respect to $\mathcal{CS}$, and when a $Tick$ rule should be applied to the given configuration using the lazy time sampling.
Notice that for our examples one can construct such functions.
\vspace{0.5em}
The $n$-time realizability problem is in NP as stated in the following Proposition.
\begin{proposition}[$n$-Time Realizability is in NP]
\label{thm:np-feasible}
\ \\
For a PTS $\mathcal{T}$ assume $\Sigma, \mathcal{S}_0, m,\mathcal{CS},k,\Dmax$, $\next, \critical$ and \mustTick\ as described above.
The problem of determining whether a
PTS $\mathcal{T}$ is $n$-time realizable with respect to the lazy time sampling and a given $\mathcal{CS}$
is in NP, with $\mathcal{S}_0$ as the input.
\end{proposition}
\begin{proof}
Let $n$ be fixed.
Let $\mathcal{T}$ be a PTS, $\mathcal{CS}$ be a critical configuration specification and $\mathcal{S}_0$ be a given initial configuration.
Let $m$ be the number of facts in the initial configuration $\mathcal{S}_0$.
Moreover, assume that functions $\next, \critical$ and $\mustTick$, as described above, run in polynomial time with respect to $m$ and $k$.
\vspace{0.3em}
We check in non-deterministic polynomial time whether there is a compliant trace $\mathcal{P}$ that has exactly $n$ ticks. Because of Lemma~\ref{lem:polysize}, we know that such traces have length at most $(n+1)\cdot m+n$. Recall that $n$ is fixed.
Set \ $i := 0$ \ and \ $ticks := 0$.
Let $\mathcal{S}_i$ be the configuration at position $i$ in $\mathcal{P}$.
Iterate the following sequence of instructions:
\begin{quote}
\begin{enumerate}
\item If \ $i > (n+1)\cdot m+n$, \ then FAIL;
\item If \ $\critical(\mathcal{S}_i)=1$, \ then FAIL;
\item If \ $ticks$ is equal to $n$, \ then ACCEPT;
\item If \ $\mustTick(\mathcal{S}_i) = 1$, \ then apply $Tick$ rule to $\mathcal{S}_i$ obtaining configuration $\mathcal{S}_{i+1}$ and increment both $ticks$ and $i$;
\item Otherwise, if \ $\mustTick(\mathcal{S}_i) \neq 1$, then non-deterministically guess a rule $r$, such that \mbox{$\next(r,\mathcal{S}_i) = 1$}, apply rule $r$ to $\mathcal{S}_i$, obtaining $\mathcal{S}_{i+1}$, and increment $i$.
\end{enumerate}
\end{quote}
Since the size of facts is bounded and the number of facts in any configuration of the trace is $m$, all steps are done in polynomial time.
\qed
\end{proof}
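The bounded search in the proof above can be sketched as follows; as in the earlier realizability sketch, the oracles and the one-fact toy system are assumptions for illustration only.

```python
# Bounded DFS sketch of the n-time realizability check: accept iff some
# compliant run reaches exactly n Ticks within (n+1)*m + n steps.

def n_time_realizable(s0, n, m, critical, must_tick, successors, tick):
    bound = (n + 1) * m + n
    def search(s, i, ticks):
        if i > bound or critical(s):   # steps 1 and 2: FAIL
            return False
        if ticks == n:                 # step 3: ACCEPT
            return True
        if must_tick(s):               # step 4: forced Tick
            return search(tick(s), i + 1, ticks + 1)
        return any(search(s2, i + 1, ticks) for s2 in successors(s))
    return search(s0, 0, 0)

# toy system: state is just the current time, and every step is a Tick
ok = n_time_realizable(0, 3, 1, lambda s: False, lambda s: True,
                       lambda s: [], lambda s: s + 1)
```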
\vspace{0.5em}
We now provide the upper bound complexity result for the $n$-time survivability problem for progressing timed MSRs.
Recall that for $n$-time survivability property, we need to show that:
\begin{enumerate}
\item $\mathcal{T}$ is $n$-time realizable with respect to $\mathcal{CS}$;
\item All traces using the lazy time sampling with exactly $n$ ticks are compliant with respect to $\mathcal{CS}$.
\end{enumerate}
\begin{proposition}[$n$-Time Survivability is in $\Delta_2^p$ ]
\label{th: n-time-survivability}
\ \\
For a PTS $\mathcal{T}$ assume $\Sigma, \mathcal{S}_0, m,\mathcal{CS},k,\Dmax$, $\next, \critical$ and \mustTick\ as described above.
The problem of determining whether $\mathcal{T}$ satisfies $n$-time survivability with respect to the lazy time sampling, $\mathcal{CS}$ and $\mathcal{S}_0$ is in the class
$\Delta_2^p$ of the polynomial hierarchy ($P^{NP}$) with input $\mathcal{S}_0$.
\end{proposition}
\begin{proof}
Recall that $n$-time survivability requires $n$-time realizability, and also that all traces with exactly $n$ ticks that use the lazy time sampling and start from the initial configuration are compliant.
As we have shown in Proposition \ref{thm:np-feasible}, the first sub-problem is in NP.
The second sub-problem reduces to checking that no critical configuration is reachable from $\mathcal{S}_0$ by a trace using the lazy time sampling with at most $n$ ticks. We do so by checking whether a critical configuration is reachable within $n$ ticks, which, like realizability, we prove to be in NP.
\vspace{5pt}
Set \ $i := 0$ \ and \ $ticks := 0$.
Iterate the following sequence of instructions:
\begin{quote}
\begin{enumerate}
\item If \ $i > (n+1)\cdot m+n$, \ then FAIL;
\item If \ $\critical(\mathcal{S}_i)=1$, \ then ACCEPT;
\item If \ $ticks$ is equal to $n$, \ then FAIL;
\item If \ $\mustTick(\mathcal{S}_i) = 1$, \ then apply $Tick$ rule to $\mathcal{S}_i$ obtaining configuration $\mathcal{S}_{i+1}$ and increment both $ticks$ and $i$;
\item Otherwise, if \ $\mustTick(\mathcal{S}_i) \neq 1$, then non-deterministically guess a rule $r$, such that \mbox{$\next(r,\mathcal{S}_i) = 1$}, apply rule $r$ to $\mathcal{S}_i$, obtaining $\mathcal{S}_{i+1}$, and increment $i$.
\end{enumerate}
\end{quote}
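The non-deterministic guess in step 5 can be simulated deterministically by an exhaustive search over the applicable rules. The following Python sketch illustrates the check; the arguments must_tick, critical, applicable_rules, apply_rule and tick are hypothetical stand-ins for the oracles \mustTick, \critical\ and \next\ and for rule application in $\mathcal{T}$, and are not part of the formal development.

```python
def critical_reachable(s0, n, m, must_tick, critical, applicable_rules,
                       apply_rule, tick):
    """True iff a critical configuration is reachable from s0 on a trace
    using the lazy time sampling with at most n ticks (hedged sketch)."""
    max_len = (n + 1) * m + n          # length bound on traces with n ticks

    def search(s, i, ticks):
        if i > max_len:
            return False               # step 1: FAIL
        if critical(s):
            return True                # step 2: ACCEPT
        if ticks == n:
            return False               # step 3: FAIL
        if must_tick(s):
            return search(tick(s), i + 1, ticks + 1)      # step 4: Tick
        # step 5: simulate the nondeterministic guess by trying every rule
        return any(search(apply_rule(r, s), i + 1, ticks)
                   for r in applicable_rules(s))

    return search(s0, 0, 0)
```

The exhaustive search costs exponential time, of course; the point of the sketch is only to make the control flow of the five steps concrete.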
The above algorithm, like the algorithm used for $n$-time realizability, runs in non-deterministic polynomial time w.r.t. $m$ and $k$.
\vspace{3pt}
If a critical configuration is reachable,
{\em i.e.}~ if the above algorithm accepts, then $\mathcal{T}$ does not satisfy the second sub-problem, otherwise it does satisfy it. Therefore, deciding the second sub-problem is in co-NP.
Thus, the $n$-time survivability problem is in a class containing both NP and co-NP, {\em e.g.}, $\Delta_2^p$ of the polynomial hierarchy ($P^{NP}$)~\cite{papadimitriou07book}.
\qed
\end{proof}
Finally, we provide the complexity
upper bound for $n$-time reliability~for PTSes.
\begin{proposition}[$n$-Time Reliability~is in $\Delta_2^p$ ]
\label{th: n-time-newprop}\ \\
For a PTS $\mathcal{T}$ assume $\Sigma, \mathcal{S}_0$, $m$, $\mathcal{CS}$, $k$, $\Dmax$, $\next, \critical$ and \mustTick\ as described above.
The problem of determining whether $\mathcal{T}$ satisfies $n$-time reliability ~with respect to the lazy time sampling, $\mathcal{CS}$ and $\mathcal{S}_0$ is in the class
$\Delta_2^p$ of the polynomial hierarchy ($P^{NP}$) with input $\mathcal{S}_0$.
\end{proposition}
\begin{proof}
Let $n$ be fixed.
Let $\mathcal{T}$ be a PTS and $m$ be the number of facts in the given initial configuration $\mathcal{S}_0$.
Moreover, assume that the functions $\next, \critical$ and $\mustTick$ run in polynomial time with respect to $m$ and $k$.
We check whether $\mathcal{T}$ satisfies $n$-time reliability~by checking whether any compliant trace from $\mathcal{S}_0$ covering up to $n$ time units can be extended to a compliant trace with exactly $n$ time ticks.
More precisely, we check in non-deterministic polynomial time whether there exists a configuration $\mathcal{S}'$ reachable from $\mathcal{S}_0$ on a compliant trace $\mathcal{P}'$ (that uses the lazy time sampling and has at most $n$ ticks) such that $\mathcal{P}'$ cannot be extended to a compliant trace $\mathcal{P}$ that uses the lazy time sampling and has exactly $n$ ticks. Let's call such a configuration $\mathcal{S}'$ a \emph{breaking point} (w.r.t. $n$, $\mathcal{S}_0$, $\mathcal{CS}$, $\mathcal{T}$ and the lazy time sampling).
We firstly propose a non-deterministic algorithm $GO$ that checks whether some trace can be extended to a compliant trace for the given number of ticks.
It will be used in the main algorithm to check whether a configuration is a breaking point. When given as input the configuration $\mathcal{S}$ and a number $p$, algorithm $GO(\mathcal{S},p)$ returns 1 iff there exists a compliant trace starting from $\mathcal{S}$ that uses the lazy time sampling and has exactly $p$ ticks.
\vspace{5pt}
Because of Lemma~\ref{lem:polysize}, we know that traces spreading over $p$ time units have the size of at most $(p+1)*m+p$.
Set \ $j := 0$, \ $\mathcal{S}^0 := \mathcal{S}$ and \ $tocks := 0$.
Iterate the following sequence of instructions:
\begin{quote}
\begin{enumerate}
\item If \ $j > (p+1)*m+p$, \ then return 1;
\item If \ $\critical(\mathcal{S}^j)=1$, \ then return 0;
\item If \ $tocks$ is equal to $p$, \ then return 1;
\item If \ $\mustTick(\mathcal{S}^j) = 1$, \ then apply $Tick$ rule to $\mathcal{S}^j$ obtaining configuration $\mathcal{S}^{j+1}$ and increment both $tocks$ and $j$;
\item Otherwise, if \ $\mustTick(\mathcal{S}^j) \neq 1$, then non-deterministically guess a rule $r$, such that \mbox{$\next(r,\mathcal{S}^j) = 1$}, apply rule $r$ to $\mathcal{S}^j$, obtaining $\mathcal{S}^{j+1}$, and increment $j$.
\end{enumerate}
\end{quote}
\vspace{5pt}
The main algorithm, given below, runs in non-deterministic polynomial time w.r.t. $m$ and $k$
and returns ACCEPT iff a breaking point~w.r.t. $n$ can be reached from $\mathcal{S}_0$ on a compliant trace that uses the lazy time sampling.
\vspace{5pt}
Set \ $i := 0$ \ and \ $ticks := 0$.
Let $\mathcal{S}_i$ denote the configuration at position $i$ in the trace being constructed.
Iterate the following sequence of instructions:
\begin{quote}
\begin{enumerate}
\item If \ $i > (n+1)*m+n$, \ then FAIL;
\item If \ $\critical(\mathcal{S}_i)=1$, \ then FAIL;
\item If \ $ticks$ is equal to $n$, \ then FAIL;
\item If \ $\mustTick(\mathcal{S}_i) = 1$, \ then apply $Tick$ rule to $\mathcal{S}_i$ obtaining configuration $\mathcal{S}_{i+1}$ and increment both $ticks$ and $i$;
\item Otherwise, if \ $\mustTick(\mathcal{S}_i) \neq 1$, then non-deterministically guess a rule $r$, such that \mbox{$\next(r,\mathcal{S}_i) = 1$}, apply rule $r$ to $\mathcal{S}_i$, obtaining $\mathcal{S}_{i+1}$, and increment $i$.
\item If \ $GO(\mathcal{S}_i,n-ticks)=0$, \ then ACCEPT.
\end{enumerate}
\end{quote}
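The procedure $GO$ and the main algorithm above can likewise be simulated by exhaustive search. The following Python sketch illustrates both; as before, must_tick, critical, applicable_rules, apply_rule and tick are hypothetical stand-ins for the oracles and rule application of $\mathcal{T}$.

```python
def go(s, p, m, must_tick, critical, applicable_rules, apply_rule, tick):
    """True iff some compliant trace from s (lazy sampling) has exactly p ticks."""
    max_len = (p + 1) * m + p

    def search(s, j, tocks):
        if j > max_len:
            return True                 # step 1 of GO: length bound reached
        if critical(s):
            return False                # step 2: compliance violated
        if tocks == p:
            return True                 # step 3: exactly p ticks achieved
        if must_tick(s):
            return search(tick(s), j + 1, tocks + 1)
        return any(search(apply_rule(r, s), j + 1, tocks)
                   for r in applicable_rules(s))

    return search(s, 0, 0)


def breaking_point_reachable(s0, n, m, must_tick, critical,
                             applicable_rules, apply_rule, tick):
    """True iff a breaking point w.r.t. n is reachable from s0 on a
    compliant trace using the lazy time sampling (hedged sketch)."""
    max_len = (n + 1) * m + n

    def is_breaking(s, ticks):
        return not go(s, n - ticks, m, must_tick, critical,
                      applicable_rules, apply_rule, tick)

    def search(s, i, ticks):
        if i > max_len or critical(s) or ticks == n:
            return False                # the FAIL branches of the main algorithm
        if must_tick(s):
            succs = [(tick(s), ticks + 1)]
        else:
            succs = [(apply_rule(r, s), ticks) for r in applicable_rules(s)]
        for s2, t2 in succs:
            if is_breaking(s2, t2):     # GO(S_i, n - ticks) = 0: ACCEPT
                return True
            if search(s2, i + 1, t2):
                return True
        return False

    return search(s0, 0, 0)
```

$\mathcal{T}$ then satisfies $n$-time reliability (given realizability) exactly when breaking_point_reachable returns False.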
A PTS $\mathcal{T}$ satisfies $n$-time reliability ~with respect to the lazy time sampling, $\mathcal{CS}$ and $\mathcal{S}_0$
~iff~ no breaking point~w.r.t. $n$ can be reached from $\mathcal{S}_0$ on a compliant trace that uses the lazy time sampling.
We therefore switch the ACCEPT and FAIL of the above algorithm to obtain an algorithm that returns ACCEPT iff $\mathcal{T}$ satisfies $n$-time reliability.
Thus, the $n$-time reliability problem is in a class containing both NP and co-NP, {\em e.g.}, $\Delta_2^p$ of the polynomial hierarchy ($P^{NP}$).
\qed
\end{proof}
The obtained complexity results for the time-bounded verification problems are stated in the next theorem.
\begin{theorem}
\label{thm:np-comp}
Assume $\Sigma$ a finite alphabet, $\mathcal{T}$ a PTS,
$\mathcal{S}_0$ an initial configuration, $m$ the number of facts in $\mathcal{S}_0$, $\mathcal{CS}$ a critical configuration specification and $k$ an upper-bound on the size of facts.
Let functions $\next, \critical$ and \mustTick\
run in Turing time bounded by a polynomial in $m$ and $k$
and return 1, respectively, when a rule in $\mathcal{T}$ is applicable to a given configuration, when a configuration is critical with respect to $\mathcal{CS}$, and when a $Tick$ rule should be applied to the given configuration.
The problem of determining whether $\mathcal{T}$
is $n$-time realizable with respect to the lazy time sampling, $\mathcal{CS}$ and $\mathcal{S}_0$ is NP-complete with $\mathcal{S}_0$ as the input.
The problems of $n$-time survivability and
$n$-time reliability~for $\mathcal{T}$ with respect to the lazy time sampling, $\mathcal{CS}$ and $\mathcal{S}_0$ are in the class
$\Delta_2^p$ of the polynomial hierarchy ($P^{NP}$) with input $\mathcal{S}_0$.
\end{theorem}
\section{Related and Future Work }
\label{sec:related}
This paper introduced a novel sub-class of timed MSR systems called progressing timed systems, which is defined by imposing syntactic restrictions on MSR rules.
We illustrated with examples of Time Sensitive Distributed Systems that this is a relevant class of systems.
We also introduced verification problems, namely realizability, reliability, recoverability~and survivability, defined over infinite traces. We showed that these problems are PSPACE-complete for progressing timed systems, and when we additionally impose a bound on time, realizability becomes NP-complete and survivability and reliability~are in $\Delta_2^p$ of the polynomial hierarchy.
These problems involve quantitative temporal properties of timed systems and explicit time constraints,
and, to the best of our knowledge, have not been investigated in the rewriting literature.
Others have proposed languages for specifying properties which allow explicit time constraints. We review some of these formalisms, such as timed automata, temporal logic and rewriting.
Our progressing condition is related to the \emph{finite-variability assumption} used in the temporal logic and timed automata literature~\cite{faella08entcs,laroussinie03tcs,lutz05time,alur91rex,alur04sfm}, requiring that in any bounded interval of time, there can be only finitely many observable events or state changes. Similarly, progressing systems have the property that only a finite number of instantaneous rules can be applied in any bounded interval of time (Proposition~\ref{prop:bounded-length}). Such a property seems necessary for the decidability of many temporal verification problems.
The work~\cite{alpern87dc,clarkson10jcs} classifies traces and sets of traces as safety, liveness or properties that can be reduced to subproblems of safety and liveness. Following this terminology, properties relating to our verification problems
over infinite traces contain elements of safety as well as elements of liveness. Properties relating to the time-bounded versions of realizability, reliability~and survivability could be classified as safety properties. We do not see how to express this exactly in the terms of~\cite{alpern87dc,clarkson10jcs}. We intend to revisit this in future work.
As discussed in detail in the Related Work section of our previous work~\cite{kanovich.mscs}, there are some important differences between our timed MSR model and timed automata~\cite{alur91rex,alur04sfm} on both the expressive power and decidability proofs. For example, a description of a timed MSR system uses first order formulas with variables, whereas timed automata are able to refer only to transition on ground states. That is, timed MSR is essentially a first-order language, while timed automata are propositional. Replacing a first order description of timed MSR by all its instantiations, would lead to an exponential explosion. Furthermore, in contrast with the timed automata paradigm, in timed MSR we can manipulate in a natural way the facts both in the past, in the future, and in the present.
The temporal logic literature has proposed many languages for the specification and verification of timed systems.
Many temporal logics include quantitative temporal operators, {\em e.g.}~\cite{lutz05time,laroussinie03tcs}, including time-constrained temporal operators.
Metric Temporal Logic (MTL)~\cite{koymans90} involves (bounded or unbounded) timing constraints on temporal operators, which are similar to our time constraints. A growing literature on MTL explores the expressivity and decidability of fragments of such temporal logics~\cite{Ouaknine06TACAS}.
However, the temporal logic literature does not discuss notions similar to the realizability or survivability notions introduced here.
In addition, an important feature of our model is that the specifications are executable.
As we demonstrated by experiments in~\cite{kanovich16formats}, it is feasible to analyze fairly large progressing systems using the rewriting logic tool Maude~\cite{clavel-etal-07maudebook}.
Real-Time Maude~\cite{olveczky08tacas} is a tool for simulating and analyzing
real-time systems. Rewrite rules are partitioned into
instantaneous rules and rules that advance time, where
instantaneous rules are given priority.
Our lazy time sampling is inspired by such management of time in traces.
Time advance rules in Real-Time Maude
may place a bound on the amount of time to advance, but do
not determine a specific amount, thus, allowing continual
observation of the system.
Time sampling strategies are
used to implement search and model-checking analysis.
{\"{O}}lveczky and Meseguer~\cite{olveczky07entcs} investigate conditions under
which the maximal time sampling strategy used in Real-Time
Maude is complete. One of the required conditions is
tick-stabilizing, which is similar to progressing and to the
finite variability assumption in that one assumes a bound
on the number of actions applicable in a finite time.
Cardenas~\emph{et al.}~\cite{cardenas08icdcs} discuss possible verification problems of cyber-physical systems in the presence of malicious intruders. They discuss surviving attacks, such as denial of service attacks on the control mechanisms of devices. We believe that our progressing timed systems can be used to define sensible intruder models and formalize the corresponding survivability notions. This may lead to the automated analysis of such systems similar to the successful use of the Dolev-Yao intruder model~\cite{DY83} for protocol security verification. Given the results of this paper, the decidability of any such security problem would very likely require a progressing timed intruder model.
We believe some of our properties, survivability in particular, can be interpreted using game theory.
We find that our model has some features that are better suited for applications relating to TSDSes, in particular explicit time, quantitative time conditions and nonces.
It would be interesting to investigate connections and differences between our rewriting approach to these problems and the game theoretic approach.
Finally, we have already done some preliminary research into ways of
extending this work to dense time domains. We expect that our results also hold for dense times, given our previous work~\cite{kanovich17jcs,kanovich19csf,kanovich21jcs}. There, instead of $Tick$ rule (Eq.~\ref{eq:tick}), we assume a $Tick$ rule of the form ~$Time@T \lra Time@(T + \varepsilon)$, where $\varepsilon$ can be instantiated by any positive real number.
Assuming dense time is challenging, as it considerably increases the machinery needed for proving our results, but we are confident in finding ways of combining the results of~\cite{kanovich17jcs} with the results presented in this paper.
Similarly, for our future work, we intend to investigate extensions of our models with probabilities.
\vspace{5mm}
\emph{Acknowledgments:}
Ban Kirigin is supported in part by the Croatian Science Foundation under the project UIP-05-2017-9219.
The work of Max Kanovich was partially supported by EPSRC Programme Grant EP/R006865/1: ``Interface Reasoning for Interacting Systems (IRIS)''.
Nigam is partially supported by NRL grant N0017317-1-G002, and CNPq grant 303909/2018-8.
Scedrov is partially supported by ONR grants N00014-20-1-2635
and N00014-18-1-2618.
Talcott was partially supported by ONR grants N00014-15-1-2202 and N00014-20-1-2644, and NRL grant N0017317-1-G002.
Nigam has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 830892.
\section{Introduction}
It is important to understand topological aspects of QCD
to interpret several basic properties of the vacuum structure.
Two distinct pictures of the QCD vacuum, which are based on the
presence of topological objects; instanton and QCD-monopole,
are now widely accepted. The main reason is that they give nice descriptions of
several non-perturbative features, {\it e.g}. chiral symmetry breaking and color
confinement. As is well known, ${\rm SU}(N_{c})$ gauge theory has classical
and non-trivial gauge
configurations (instantons) satisfying the field equation in the 4-dimensional
Euclidean space ${\bf R}^4$ \cite{Instanton}. The topological charge,
which corresponds to the non-trivial homotopy group
$\pi_{3}({\rm SU}(N_{c}))=Z_{\infty}$, is assigned to instanton
configurations \cite{Instanton}.
It has been established that such a topological feature plays an essential role
on the resolution of the ${\rm U}_{\rm A}(1)$ problem \cite{Hooft1}.
Furthermore, the instanton liquid characterized
by a random ensemble of instanton and anti-instanton configurations
succeeds in explaining several properties of light hadrons, {\it e.g}.
spontaneous chiral-symmetry breaking (S$\chi$SB) \cite{Instanton}.
Another picture of the QCD vacuum is motivated from the stimulating idea of
abelian gauge fixing, which was proposed by 't Hooft \cite{Hooft2}
and also independently by Ezawa-Iwazaki \cite{Ezawa1}.
After performing a partial gauge fixing, which leaves an abelian
gauge degree of freedom, one knows that point-like
singularities in the three-dimensional space ${\bf R}^3$,
associated with the homotopy group
$\pi_{2}({\rm SU}(N_{c})/{\rm U}(1)^{N_{c}-1})=Z_{\infty}^{N_{c}-1}$,
can be identified as magnetic monopoles (QCD-monopoles)
\cite{Hooft2,Ezawa1}.
If monopoles are condensed in the QCD vacuum, the dual Meissner effect
results \cite{Hooft3}.
Recent lattice QCD simulations actually show that the dual Meissner effect
is brought to the true vacuum by QCD-monopole
condensation\footnote[2]{QCD-monopole condensation is characterized by
the presence of the long and tangled monopole trajectories in the
four-dimensional space ${\bf R}^4$ and can be interpreted as
the Kosterlitz-Thouless type phase transition.}
(see, {\it e.g}. a recent review article \cite{Polikarpov}).
Hence, color confinement can be regarded
as the dual version of superconductivity.
It seems that instantons and QCD-monopoles are relevant topological objects
for the description of distinct phenomena.
Here, we should emphasize that QCD-monopoles would play an essential role in
non-perturbative features of QCD, including not only
confinement but also S$\chi$SB.
This possibility was first studied by using the Schwinger-Dyson equation
with the gluon propagator in the background of
condensed monopoles \cite{Sasaki1}.
The idea of providing S$\chi$SB due to
QCD-monopole condensation was supported by
lattice simulations \cite{Miyamura1,Woloshyn}.
Thus, these results shed new light on the non-trivial
relation between instantons and QCD-monopoles.
Recently, both analytic works \cite{Brower,Suganuma1} and
numerical works
\cite{Miyamura2,Markum,Teper,Suganuma2,Bornyakov}
have shown the existence of the strong correlation
between these topological objects in spite of the fact that
they originate from different homotopy groups.
Furthermore, monopole trajectories become more complicated
with the instanton density increasing in the background of a random
ensemble of instanton solutions \cite{Fukushima}.
These results seem to suggest that the instanton liquid and QCD-monopole
condensation may be indistinguishable.
In this paper, we would like to know whether
the correlation between instantons and QCD-monopoles
has some physical significance, or not, from the topological viewpoint.
Here, we must not forget the hypothesis of abelian dominance, which
Ezawa and Iwazaki had first advocated \cite{Ezawa1}.
This hypothesis is essentially composed of two sentences \cite{Ezawa1}:
\begin{itemize}
\item Only the abelian component
of gauge fields is relevant at a long-distance scale.
\item The non-abelian effects are mostly inherited by magnetic monopoles.
\end{itemize}
Actually, lattice Monte Carlo simulation indicates that
abelian dominance for some physical quantities, {\it e.g}. the string tension
\cite{Suzuki,Bari}, the chiral condensate \cite{Miyamura1,Woloshyn}
and also several light hadron spectra \cite{Miyamura3,Kitahara} is realized
in the maximally abelian (MA) gauge as well as monopole dominance.
In this gauge, at least, the abelian component of the gauge field could be
an important dynamical degree of freedom at a long-distance scale.
Here, an unavoidable question arises relating to the non-trivial correlation
between instantons and QCD-monopoles. {\it In the abelian dominated system,
is it possible for the non-abelian topological nature to survive?}
For such an essential question, Ezawa and Iwazaki have proposed a
remarkable conjecture \cite{Ezawa2}:
once abelian dominance is postulated, the
topological feature is preserved by the presence of monopoles.
The main purpose of this paper is to confirm the presence of
the non-trivial correlation between instantons and QCD-monopoles
by finding evidence for the Ezawa-Iwazaki conjecture.
For simplicity, we restrict the
discussion to the case of SU(2) gauge group throughout this paper.
The organization of our paper is as follows.
In Sec.2, we show that the topological charge is explicitly
represented in terms of the monopole current and the abelian
component of gauge fields through the hypothesis of abelian dominance.
In Sec.3, we numerically confirm its validity by
means of Monte Carlo simulation within SU(2) lattice gauge theory
after the MA gauge fixing. Finally, Sec.4 is devoted to a summary and
conclusions.
\section{Abelian dominance hypothesis for topological charge}
We first address the definition of the topological charge
in the lattice formulation.
The naive and field-theoretical definition of the topological
density \cite{Ilgenfritz} is given by
\begin{equation}
q(s) \equiv \frac{1}{2} \epsilon_{\mu \nu \rho \sigma} \mbox{tr}
\left \{ P_{\mu \nu}(s) P_{\rho \sigma}(s) \right \}\;,
\label{Eq:su2top}
\end{equation}
with the clover averaged SU(2) plaquette:
\begin{equation}
P_{\mu \nu}(s)\equiv\frac{1}{4}\left(\right.
U_{\mu \nu}(s) + U^{\dag}_{-\mu \nu}(s)
+ U^{\dag}_{\mu -\nu}(s) + U_{-\mu -\nu}(s)
\left.\right)\;.
\end{equation}
Here we have used the convenient notation for the SU(2) link variable;
$U_{-\mu}(s) = U^{\dag}_{\mu}(s-{\hat \mu})$
so that $U_{\pm\mu \pm\nu}(s)=U_{\pm\mu}(s)U_{\pm\nu}(s+{\hat \mu})
U^{\dag}_{\pm\mu}(s+{\hat \nu})U^{\dag}_{\pm\nu}(s)$.
One naively expects that the topological charge $Q_{\rm cont}$
is extracted from the summation of the previously defined topological
density over all sites up to leading order in powers of the lattice spacing
$a$ \cite{Ilgenfritz}:
\begin{equation}
Q_{L}=-\frac{1}{16\pi^2}\sum_{s}q(s) \simeq Q_{\rm cont} + {\cal
O}(a^6)\;,
\label{Eq:topchrL}
\end{equation}
where $Q_{\rm cont}
=\frac{1}{16\pi^2}\sum_{s}\mbox{tr}\left\{ a^{4} g^{2} G_{\mu \nu}(s)
{}^{*}G_{\mu \nu}(s)\right\}$.
Strictly speaking, the value $Q_{L}$ receives not only ${\cal O}(a^{6})$ corrections
but also a multiplicative renormalization of $Q_{\rm cont}$ in
Eq.(\ref{Eq:topchrL}) \cite{Giacomo}. In general, one needs some smoothing
method to eliminate undesirable ultraviolet fluctuations in the numerical
simulation so that a renormalization factor would be brought to unity.
If we assume that the QCD vacuum is described as the
abelian dominated system {\it in a suitable abelian gauge}, then
the SU(2) link variable is expected to be U(1)-like as
$U_{\mu}(s) \simeq u_{\mu}(s) \equiv \exp \left\{
i\sigma_{3}\theta_{\mu}(s) \right\}$.
Here, we define the angular variable $\theta_{\mu}$ as
\begin{equation}
\theta_{\mu}(s) \equiv {\rm arctan}[{U^{3}_{\mu}(s)}/{U^{0}_{\mu}(s)}]
\end{equation}
in the compact domain $[-\pi, \pi)$ when the SU(2) link variable is
parameterized as $U_{\mu}(s)=U^{0}_{\mu}(s)+i\sigma_{a}U^{a}_{\mu}(s)$
$(a=1,2,3)$ \cite{Kronfeld}.
In the abelian dominated system, we might consider
the abelian analog of the topological density
$q_{_{\rm Abel}}(s)$ \cite{Bornyakov},
instead of the previously defined topological density,
through the replacement of $P_{\mu \nu}$ by the clover averaged
U(1) plaquette $p_{\mu \nu}$:
\begin{eqnarray}
p_{\mu \nu}(s) &\equiv&
\frac{1}{4}\left(\right.
u_{\mu \nu}(s) + u^{\dag}_{-\mu \nu}(s)
+ u^{\dag}_{\mu -\nu}(s) + u_{-\mu -\nu}(s)
\left.\right) \nonumber \\
&=&\frac{1}{4}\sum_{i,j=0}^{1}u_{\mu \nu}
(s-i{\hat \mu}-j{\hat \nu})\;,
\end{eqnarray}
where $u_{\mu \nu}$ denotes the U(1) elementary plaquette.
The explicit expression of $q_{_{\rm Abel}}(s)$
is then given by
\begin{eqnarray}
q_{_{\rm Abel}}(s) &=& \frac{1}{2} \varepsilon_{\mu
\nu \rho \sigma} {\rm tr}\left\{ p_{\mu \nu}(s) p_{\rho
\sigma}(s) \right\} \nonumber \\
&=& - \frac{1}{16}
\sum_{i,j,k,l=0}^{1}
\varepsilon_{\mu \nu \rho \sigma}
\mbox{sin} \theta_{\mu \nu}
(s-i{\hat \mu}-j{\hat \nu})
\mbox{sin} \theta_{\rho \sigma}
(s-k{\hat \rho}-l{\hat \sigma})\;,
\end{eqnarray}
where $\theta_{\mu \nu}(s)\equiv
\partial_{\mu}\theta_{\nu}(s)-\partial_{\nu}\theta_{\mu}(s)$ \cite{Sasaki2}.
$\partial_{\mu}$ denotes the nearest-neighbor forward difference
operator satisfying $\partial_{\mu}f(s)\equiv f(s+{\hat \mu})-f(s)$.
Our next aim is to discuss the expression of the abelian analog of the
topological density
in the naive continuum limit $a \rightarrow 0$.
This is because we need only the leading order term in powers
of the lattice spacing to determine the corresponding topological charge.
Here, one may notice that the U(1) elementary plaquette $u_{\mu \nu}$
is a multiple valued function of the U(1) plaquette angle $\theta_{\mu \nu}$.
Hence, we divide $\theta_{\mu \nu}$ into two parts,
\begin{equation}
\theta_{\mu \nu}(s) = {\bar \theta}_{\mu \nu}(s) + 2\pi n_{\mu
\nu}(s)\;,
\end{equation}
where ${\bar \theta}_{\mu \nu}$ is defined in the principal domain
$[-\pi, \pi)$, which corresponds to the U(1) field strength.
The anti-symmetric tensor $n_{\mu \nu}$ can take the restricted
integer values $0,\pm 1,\pm 2$.
Taking the limit $a \rightarrow 0$, {\it i.e}. ${\bar \theta}_{\mu \nu}
\rightarrow 0$, we thus arrive at the following expression
\cite{Sasaki2}:
\begin{equation}
q_{_{\rm Abel}}(s) \approx - \varepsilon_{\mu \nu \rho \sigma}
{\bar \Theta}_{\mu \nu}(s){\bar \Theta}_{\rho \sigma}(s)\;,
\label{Eq:abeltop}
\end{equation}
where ${\bar \Theta}_{\mu \nu}(s) \equiv \frac{1}{4}\sum_{i,j=0}^{1}{\bar
\theta}_{\mu \nu}(s-i{\hat \mu}-j{\hat \nu})$.
The r.h.s. of Eq.(\ref{Eq:abeltop}) does not
necessarily vanish in spite of the fact that it is represented only
in terms of the U(1) field strength.
Next, we show that the non-vanishing contribution to the value of
$q_{_{\rm Abel}}(s)$ results from monopoles.
For the identification of monopoles, we follow
DeGrand-Toussaint's definition in the compact U(1) gauge theory
\cite{DeGrand}. The magnetic current is given by
\begin{eqnarray}
k_{\mu}(s)
&\equiv& \frac{1}{4\pi}\epsilon_{\mu \nu \rho \sigma}\partial_{\nu}
{\bar \theta}_{\rho \sigma}(s+{\hat \mu}) \nonumber \\
&=&-\frac{1}{2}\epsilon_{\mu \nu \rho \sigma}\partial_{\nu}n_{\rho
\sigma}(s+{\hat \mu})\;.
\end{eqnarray}
In the last line, we have used the Bianchi identity on the U(1) plaquette angle;
$\epsilon_{\mu \nu \rho \sigma}\partial_{\nu}\theta_{\rho \sigma}=0$.
Then we can easily see that the current $k_{\mu}$ denotes
the integer-valued magnetic current, {\it i.e}. the
monopole current \cite{DeGrand}.
Strictly speaking, the monopole current resides
at the dual lattice site orthogonal to direction $\mu$.
For simplicity, we use the same notation for ordinary
lattice sites and dual lattice sites. Note that $k_{\mu}$ is topologically
conserved; $\partial^{\prime}_{\mu}k_{\mu}=0$.
$\partial^{\prime}$ denotes the nearest-neighbor backward difference
operator satisfying $\partial^{\prime}_{\mu}f(s)\equiv f(s)-f(s-{\hat \mu})$.
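The DeGrand-Toussaint construction above is straightforward to implement. The following numpy sketch (with periodic boundary conditions; the helper names are ours) extracts the monopole current from compact U(1) link angles and exhibits its integrality and topological conservation.

```python
import numpy as np
from itertools import permutations

def levi_civita():
    """Rank-4 Levi-Civita tensor epsilon_{mu nu rho sigma}."""
    eps = np.zeros((4, 4, 4, 4), dtype=int)
    for p in permutations(range(4)):
        inv = sum(1 for i in range(4) for j in range(i + 1, 4) if p[i] > p[j])
        eps[p] = 1 if inv % 2 == 0 else -1
    return eps

def plaquette_angles(theta):
    """theta: (4, L, L, L, L) array of U(1) link angles in [-pi, pi).
    Returns theta_{mu nu}(s) = d_mu theta_nu - d_nu theta_mu, using
    forward differences with periodic boundary conditions."""
    def fwd(f, mu):
        return np.roll(f, -1, axis=mu) - f
    th = np.zeros((4, 4) + theta.shape[1:])
    for mu in range(4):
        for nu in range(4):
            th[mu, nu] = fwd(theta[nu], mu) - fwd(theta[mu], nu)
    return th

def monopole_current(theta):
    """DeGrand-Toussaint monopole current k_mu(s) from the link angles."""
    th = plaquette_angles(theta)
    # split theta_{mu nu} = bar_theta + 2 pi n, with bar_theta in [-pi, pi)
    n = np.floor((th + np.pi) / (2 * np.pi))
    eps = levi_civita()
    k = np.zeros((4,) + theta.shape[1:])
    for mu, nu, rho, sig in np.ndindex(4, 4, 4, 4):
        e = eps[mu, nu, rho, sig]
        if e:
            dn = np.roll(n[rho, sig], -1, axis=nu) - n[rho, sig]  # d_nu n_{rho sigma}
            k[mu] += -0.5 * e * np.roll(dn, -1, axis=mu)          # shift s -> s + mu_hat
    return k
```

On a random (maximally disordered) configuration the resulting current is integer-valued, non-vanishing, and satisfies $\partial^{\prime}_{\mu}k_{\mu}=0$ exactly, since the construction only involves integer differences.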
Here, we define the averaged magnetic current\footnote[2]{In some
sense, this is similar to the type-II extended monopole current
\cite{Ivanenko}.}
in the $2^3$ extended cube at the center of the lattice site $s$ orthogonal
to direction $\mu$ \cite{Sasaki2} as
\begin{equation}
{\cal K}_{\mu}(s)
\equiv \frac{1}{8}\sum_{i,j,k=0}^{1}k_{\mu}(s-i{\hat \nu}-j{\hat
\rho}-k{\hat \sigma}) \;,
\end{equation}
where the indices $(\nu, \rho, \sigma)$ are complementary to $\mu$.
Using the nearest-neighbor central difference operator;
$\Delta_{\mu}\equiv\frac{1}{2}\{\partial_{\mu}+\partial^{\prime}_{\mu}\}$,
we therefore rewrite ${\cal K}_{\mu}$ as the following form:
\begin{equation}
{\cal K}_{\mu}(s)
=\frac{1}{4\pi}\epsilon_{\mu \nu \rho
\sigma}\Delta_{\nu}{\bar \Theta}_{\rho \sigma}(s)\;,
\end{equation}
which obviously satisfies the
conservation law; $\Delta_{\mu}{\cal K}_{\mu}(s)=0$.
To show the explicit contribution of monopoles to the topological
charge, we introduce the dual potential
${\cal B}_{\mu}$ satisfying the following equation \cite{Smit}:
\begin{equation}
\left(\Delta^2\delta_{\mu \nu}
-\Delta_{\mu}\Delta_{\nu} \right){\cal B}_{\nu}(s)
=-2\pi{\cal K}_{\mu}(s)\;,
\end{equation}
where $\Delta^2 \equiv \Delta_{\mu}\Delta_{\mu}$.
We can perform the Hodge decomposition on the clover-averaged U(1)
field strength ${\bar \Theta}_{\mu \nu}$
with the dual potential ${\cal B}_{\mu}$ as
\begin{equation}
{\bar \Theta}_{\mu \nu}(s)=\Delta_{\mu}{\cal A}_{\nu}(s)
-\Delta_{\nu}{\cal A}_{\mu}(s)
+\varepsilon_{\mu \nu \rho
\sigma}\Delta_{\rho}{\cal B}_{\sigma}(s)\;,
\end{equation}
where ${\cal A}_{\mu}$ is the Gaussian fluctuation
\cite{Smit}.
After a little algebra using the partial summation,
we find the explicit contribution of monopoles to the
r.h.s. of Eq.(\ref{Eq:abeltop}) \cite{Sasaki2} as
\begin{eqnarray}
\varepsilon_{\mu \nu \rho \sigma}
{\bar \Theta}_{\mu \nu}(s){\bar \Theta}_{\rho \sigma}(s)
&=&4\varepsilon_{\mu \nu \lambda \omega}\varepsilon_{\lambda \omega \rho \sigma}
\Delta_{\mu}{\cal A}_{\nu}(s)
\Delta_{\rho}{\cal B}_{\sigma}(s) \nonumber \\
&=&16\pi {\cal A}_{\mu}(s) {\cal K}_{\mu}(s) +
\cdot\cdot\cdot\;,
\end{eqnarray}
where the ellipsis stands for total divergence terms, which drop out
in the summation over all sites.
Consequently, we arrive at the conjecture that
{\it the topological feature is preserved by
the presence of monopoles in the abelian dominated system}
\cite{Sasaki2}:
\begin{equation}
Q_{\rm cont}\simeq - \frac{1}{16\pi^2}\sum_{s}q_{_{\rm Mono}}(s)\;,
\end{equation}
where $q_{_{\rm Mono}}(s)\equiv -16\pi {\cal A}_{\mu}(s)
{\cal K}_{\mu}(s)$. In addition, the Gaussian fluctuation cannot be uniquely
identified except for the Landau gauge solution,
$\Delta_{\mu}{\cal A}^{L}_{\mu}(s)=0$, where the superscript $L$ denotes `Landau'.
However, it suffices to know the Gaussian fluctuation in the Landau gauge for the
measurement of the quantity $\sum_{s}q_{_{\rm Mono}}(s)$, which has the
residual U(1) gauge invariance.
\section{Numerical results}
This section is devoted to a numerical analysis to justify the
above conjecture through Monte Carlo simulation within
SU(2) lattice gauge theory using the standard Wilson action.
We choose the MA gauge as a suitable abelian gauge for the realization
of abelian dominance in this paper. As we pointed out,
the QCD vacuum has been observed as an abelian
dominated system by lattice Monte Carlo simulations \cite{Polikarpov}.
We shall return to the explicit procedure of the MA gauge fixing later.
As we have mentioned before, one needs to smooth the Monte Carlo gauge
configurations in order to determine the topological charge.
To eliminate undesirable fluctuations, we adopt the
naive cooling method which is an iterative scheme to replace
each link variable using the following procedure \cite{Ilgenfritz}:
\begin{equation}
U_{\mu}(s) \rightarrow U^{\prime}_{\mu}(s) = \Sigma_{\mu}(s)
/ ||\Sigma_{\mu}(s)|| \;\;,
\end{equation}
where $||\Sigma_{\mu}(s)||$ denotes the square root of the
determinant of $\Sigma_{\mu}(s)$
and
\begin{equation}
\Sigma_{\mu}(s)=\sum_{\nu\neq\mu}\left(
U_{\nu}(s)U_{\mu}(s+{\hat \nu})U^{\dag}_{\nu}(s+{\hat \mu})
+U^{\dag}_{\nu}(s-{\hat \nu})U_{\mu}(s-{\hat \nu})
U_{\nu}(s+{\hat \mu}-{\hat \nu})
\right) \;.
\end{equation}
This procedure is derived from the condition to minimize the action
through the variation of the link variable \cite{Ilgenfritz}. Then, an
iterated process leads to a local minimum of the
action, which corresponds to a solution to the field equation. The
relation
$Q_{L}\approx Q_{\rm cont}$ is expected through the cooling procedure.
Before turning to the abelian gauge fixing, we must take into account
the ordering of the cooling procedure and the gauge fixing procedure.
The action in the compact lattice gauge theory
is not usually affected by the gauge fixing procedure for the link variables
because of the gauge-invariant measure. However, the fundamental
modular region \cite{Zwanziger} appears in the case of the
MA gauge fixing \cite{Chernodub}.
Then, the action is modified by the contribution
from the Faddeev-Popov determinant \cite{Chernodub}.
In other words, the above cooling procedure is not justified
after the MA gauge fixing.
Thus, we have to apply the MA gauge fixing procedure to
the resulting gauge configuration after several cooling sweeps.
In the lattice formulation, the MA gauge fixing is defined by
maximizing the gauge dependent variable $R$
under gauge transformations; $U_{\mu}(s) \rightarrow {\tilde U}_{\mu}
(s)=\Omega(s)U_{\mu}(s)\Omega^{\dag}(s+{\hat \mu})$ \cite{Kronfeld}:
\begin{equation}
R[\Omega]
=\sum_{s,\;\mu}\mbox{tr}\left\{
\sigma_{3} {\tilde U}_{\mu}(s) \sigma_{3} {\tilde U}^{\dag}_{\mu}(s)
\right\}
=2\sum_{s,\;\mu}\left(
1-2\chi_{\mu}(s)
\right) \;,
\label{magauge}
\end{equation}
where $\chi_{\mu}(s)\equiv
({\tilde U}^{1}_{\mu}(s))^2+ ({\tilde U}^{2}_{\mu}(s))^2$.
The maximization of $R$ implies that the off-diagonal components of the
SU(2) link variables are minimized as much as possible
through the gauge transformation.
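For illustration, $R$ and $\chi_{\mu}(s)$ are straightforward to evaluate once the SU(2) links are written in the quaternionic form $U=U^{0}+iU^{a}\sigma_{a}$. The following numpy sketch is not the production code; the storage convention for the links is our assumption:

```python
import numpy as np

def chi(U):
    """chi = (U^1)^2 + (U^2)^2 for SU(2) links stored as 2x2 complex matrices
    U = u0*1 + i*(u1*sigma1 + u2*sigma2 + u3*sigma3), array shape (..., 2, 2)."""
    u1 = U[..., 0, 1].imag   # off-diagonal components in the quaternion form
    u2 = U[..., 0, 1].real
    return u1 ** 2 + u2 ** 2

def ma_functional(U):
    """R = sum tr(sigma3 U sigma3 U^dag) = 2 * sum (1 - 2*chi),
    summed over all links in the array."""
    return 2.0 * np.sum(1.0 - 2.0 * chi(U))
```

Maximizing $R$ thus amounts to rotating as much of each link as possible into its diagonal ($\sigma_{3}$) part.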
After the MA gauge fixing, we extract the U(1) field strength
${\bar \theta}_{\mu \nu}$ and the monopole current $k_{\mu}$
from the ${\rm U}(1)$ link variable following the previous section.
The Gaussian fluctuation in the Landau gauge is computed by the convolution of
the electric current with the lattice Coulomb propagator
${\cal D}(s-s^{\prime})$:
\begin{equation}
{\cal A}^{L}_{\mu}(s)=-\sum_{s^{\prime}}{\cal D}(s-s^{\prime})\Delta_{\lambda}{\bar
\Theta}_{\lambda \mu}(s^{\prime})\;,
\end{equation}
where the lattice Coulomb propagator
satisfies $\Delta^{2}{\cal D}(s-s^{\prime})=-\delta_{s,\;s^{\prime}}$.
One can obtain the explicit form of ${\cal D}(s-s^{\prime})$ on an $L^{4}$ lattice as
\begin{equation}
{\cal D}(s-s^{\prime})=\sum_{p}{1 \over {\sum_{\mu}
4\sin^2(p_{\mu}/2)}}e^{ip\cdot (s-s^{\prime}) }
\label{Eq:cgreen}
\end{equation}
with the following abbreviations: $p_{\mu}\equiv \frac{2\pi}{L}n_{\mu}$ and
$\sum_{p}\equiv \frac{1}{L^{4}} \prod_{\mu}\sum_{n_{\mu}=0}^{L-1}$, where the
zero-momentum mode ($n_{\mu}=0$ for all $\mu$) is excluded from the sum.
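In practice the momentum sum in Eq.(\ref{Eq:cgreen}) is a discrete Fourier transform, so ${\cal D}$ can be tabulated with an FFT. The sketch below is illustrative only; it assumes the standard nearest-neighbour lattice Laplacian, whose momentum-space eigenvalues are $-\sum_{\mu}4\sin^{2}(p_{\mu}/2)$, and removes the zero mode:

```python
import numpy as np

def coulomb_propagator(L):
    """Tabulate D(x) on an L^4 lattice via FFT.  With the p=0 mode removed,
    the propagator satisfies  Laplacian(D) = -(delta_{x,0} - 1/L^4)."""
    p = 2.0 * np.pi * np.arange(L) / L
    s2 = 4.0 * np.sin(p / 2.0) ** 2          # one-dimensional eigenvalues
    denom = (s2[:, None, None, None] + s2[None, :, None, None] +
             s2[None, None, :, None] + s2[None, None, None, :])
    denom[0, 0, 0, 0] = np.inf               # drop the zero-momentum mode
    return np.fft.ifftn(1.0 / denom).real    # ifftn supplies the 1/L^4 factor
```

Applying the discrete Laplacian to the tabulated ${\cal D}$ reproduces $-(\delta_{s,0}-1/L^{4})$, the subtracted delta that results from removing the zero mode.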
We generate the gauge configurations by using the
heat bath algorithm on a $16^4$ lattice at $\beta=2.4$.
All measurements have been performed on independent configurations,
separated from each other by 500 sweeps, after 1000 heat-bath sweeps
for thermalization.
For the MA gauge fixing in the numerical simulation, we use the overrelaxation
method with parameter $\omega =1.7$ \cite{Mandula}.
Let us concentrate on the realization of abelian dominance
for the topological density in the MA gauge.
Before turning to the topological density, we show
the probability distribution of the amplitude of the off-diagonal
component $\chi_{\mu}$ in Fig.\ref{fig:chi}.
The solid, broken, and dotted curves denote configurations
before cooling, after 1 cooling sweep, and after 30 cooling
sweeps, respectively.
This figure tells us that the off-diagonal components of the SU(2) link
variables are indeed made as small as possible over the whole
lattice in the MA gauge. Furthermore, this tendency becomes more prominent
with additional cooling sweeps.
The information from Fig.\ref{fig:chi} suggests that
large numbers of cooling sweeps could enhance abelian dominance.
In addition, the off-diagonal components $U_{\mu}^1$ and $U_{\mu}^2$
behave like random variables over the whole lattice
before the MA gauge fixing, so that without gauge fixing the
probability distribution is flat on the unit interval,
whether or not the configurations are cooled.
We have checked the above expectation by measuring the topological
density $q(s)$ and the abelian analog of the topological density $q_{\rm Abel}(s)$.
Fig.2 and Fig.3 display $q(s)$ and $q_{\rm Abel}(s)$ in a two-dimensional
plane after 30 and 100 cooling sweeps.
We define this plane as a slice at the center of
the instanton in the $(z,t)$-plane.
The center is identified as the local maximum of $q(s)$.
The topological density around the instanton is little affected by the
cooling procedure as is shown in Fig.\ref{fig:full30} and
Fig.\ref{fig:full100}. For the abelian analog of the topological density,
such stability under cooling is not guaranteed, because of the lack
of a topological basis in the U(1) manifold.
At 30 cooling sweeps, abelian dominance for the topological
density is not yet clearly revealed, as shown in Fig.\ref{fig:abel30},
since a large amplitude of the off-diagonal components still remains locally,
especially around the center of the instanton and/or the monopole.
However, after 100 cooling sweeps,
one can recognize in Fig.\ref{fig:abel100} a lump, similar to that in
the topological density, located around the center of the instanton.
Thus, the topological density is dominated by its abelian
analog after sufficiently many cooling sweeps, as expected.
Next, we examine the correlation between
the topological charge and monopoles.
The two types of corresponding topological charge are computed through
the following formula:
\begin{eqnarray}
Q_{\rm SU(2)}&\equiv& - \frac{1}{16 \pi^2} \sum_{s} q(s) \;,\\
Q_{\rm Mono}&\equiv& - \frac{1}{16 \pi^2} \sum_{s} q_{_{\rm Mono}}(s)\;.
\end{eqnarray}
$Q_{\rm SU(2)}$ denotes the ordinary topological charge.
$Q_{\rm Mono}$ denotes the corresponding topological charge via
monopoles.
We show the scatter plots of $Q_{\rm SU(2)}$ vs. $Q_{\rm Mono}$
in Fig.4: (a) without gauge fixing and (b) in the MA gauge,
after 100 cooling sweeps, for 50 independent configurations.
Obviously, there is no correlation between the two topological charges
{\it before the MA gauge fixing}.
A one-to-one correspondence between $Q_{\rm SU(2)}$
and $Q_{\rm Mono}$ is revealed in the scatter plot {\it after the MA gauge fixing}.
In further detail, we find only a small variance between
$Q_{\rm SU(2)}$ and $Q_{\rm Mono}$.
The slope of the linear correlation
in the scatter plot is not unity, but 1.43 in Fig.\ref{fig:mag}
($16^4$ at $\beta=2.4$).
From several measurements at $\beta=2.45$ and $2.5$, this slope
appears to depend only weakly on the lattice spacing after a
sufficiently large number of cooling sweeps (see Table 1).
Table 1 tells us that the relation
$Q_{\rm Mono}\approx 0.7 Q_{\rm SU(2)}$ is almost satisfied in the
MA gauge for these data.
\begin{center}
\leavevmode
\hbox{
\begin{tabular}[t]{c|ccc}
\hline\hline
& \multicolumn{3}{|c}{cooling sweeps} \\
\cline{2-4}
\makebox[2cm]{$\beta$}
& \makebox[1.8cm]{30} & \makebox[1.8cm]{50}& \makebox[1.8cm]{100} \\
\hline
2.40 & 1.45 & 1.46 & 1.43 \\
2.45 & 1.46 & 1.43 & 1.44 \\
2.50 & 1.49 & 1.43 & 1.43 \\
\hline\hline
\end{tabular}
}
\end{center}
\hspace{0.5cm}
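The slopes quoted in Table 1 can be obtained from the scatter data by a least-squares fit of a straight line through the origin. A minimal sketch (the function name and the convention of regressing $Q_{\rm SU(2)}$ on $Q_{\rm Mono}$ are our assumptions):

```python
import numpy as np

def origin_slope(q_mono, q_su2):
    """Least-squares slope k of the constrained fit  Q_SU(2) = k * Q_Mono,
    i.e. k = sum(x*y) / sum(x*x)  with x = Q_Mono, y = Q_SU(2)."""
    x = np.asarray(q_mono, dtype=float)
    y = np.asarray(q_su2, dtype=float)
    return float(np.dot(x, y) / np.dot(x, x))
```

A slope of 1.43 then corresponds to $Q_{\rm Mono}\approx Q_{\rm SU(2)}/1.43\approx 0.7\,Q_{\rm SU(2)}$.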
To discuss the topological features of $Q_{\rm Mono}$, we show the
probability distribution of $Q_{\rm Mono}$ in Fig.\ref{fig:dis} by using 3000
independent configurations at $\beta=2.4$ on a $16^4$ lattice.
Several dotted lines correspond to partial contributions to the whole
distribution, which are assigned to some integer value of $Q_{\rm SU(2)}$.
We find several discrete peaks in Fig.\ref{fig:dis}, since
each partial contribution is a Gaussian-type
distribution with the half width slightly less than unity
around discrete values ($\approx 0.7 Q_{\rm SU(2)}$).
This implies that $Q_{\rm Mono}$ is classified by
approximately discrete values.
Namely, it seems that monopoles almost inherit the topological nature
of the original gauge configurations.
\section{Summary and Conclusion}
Our work was motivated by the Ezawa-Iwazaki conjecture:
the topological feature is preserved by the presence of QCD-monopoles
once abelian dominance is postulated.
To confirm this, we have performed an analytical study of
how the monopole current is related to the topological charge within
SU(2) lattice gauge theory.
We consider the abelian analog of topological density, which is
defined through the replacement of the SU(2) link variable by the
U(1)-like link variable. Although this quantity is represented by
the corresponding U(1) field strength in the naive continuum limit,
it can carry the non-trivial contribution to the topological charge
because of the presence of QCD-monopoles.
If we assume that the abelian component of gauge fields plays an essential
role in the description of the long range physics, QCD-monopoles are
required to inherit the topological feature.
Monte Carlo simulations have brought us
encouraging results for the above conjecture.
We have chosen the MA gauge as an applicable abelian gauge.
We have measured both the ordinary topological
charge $Q_{\rm SU(2)}$ and the corresponding topological charge via
monopoles, $Q_{\rm Mono}$.
Although there is no correlation between $Q_{\rm SU(2)}$ and
$Q_{\rm Mono}$ without abelian
gauge fixing, we found a one-to-one correspondence between the two charges
in the MA gauge. Moreover, the relation $Q_{\rm Mono} \approx 0.7 Q_{\rm SU(2)}$
can be read off from our data.
Furthermore, $Q_{\rm Mono}$ is classified by approximately discrete
values. Thus we have concluded that the topological nature is substantially
inherited by QCD-monopoles.
This conclusion is consistent with our previous study \cite{Sasaki3},
which shows that QCD-monopoles strongly correlate with the presence of
fermionic zero-modes in the MA gauge.
\newpage
\section*{Acknowledgment}
We gratefully acknowledge useful discussions with H. Suganuma
and Ph. de Forcrand.
We also thank T. Blum for encouragement and reading the manuscript.
All lattice simulations in this paper have been performed on DEC
Alpha Server 8400 5/440 at the Research Center for Nuclear Physics (RCNP)
of Osaka University. One of us (O.M.) appreciates support from
the Grant-in-Aid for Scientific Research (A) (No.0830424) from the
Ministry of Education. Finally, the other (S.S.) would like to thank
all the members of Yukawa Institute for Theoretical Physics (YITP),
especially, T. Matsui and K. Itakura for their warm hospitality during
his residence at the YITP of Kyoto University, where
most of the present study has been carried out.
\newpage
\centerline{\large FIGURE CAPTIONS}
\begin{description}
\vspace{1.0cm}
\item[FIG.1.]
\begin{minipage}[t]{13cm}
\baselineskip=20pt
The probability distribution of the amplitude of the off-diagonal
components $\chi$ on a $16^{4}$ lattice at $\beta =2.4$.
\end{minipage}
\vspace{1.0cm}
\item[FIG.2.]
\begin{minipage}[t]{13cm}
\baselineskip=20pt
The topological density as a function of $z$ and $t$
in a two dimensional slice at the center of the instanton
after 30 cooling sweeps (a), and after 100 cooling sweeps (b)
at $\beta=2.4$ on a $16^4$ lattice.
\end{minipage}
\vspace{1.0cm}
\item[FIG.3.]
\begin{minipage}[t]{13cm}
\baselineskip=20pt
The abelian analog of topological density as a function of $z$ and $t$
in a two dimensional slice at the center of the instanton
after 30 cooling sweeps (a), and after 100 cooling sweeps (b)
at $\beta=2.4$ on a $16^4$ lattice.
\end{minipage}
\vspace{1.0cm}
\item[FIG.4.]
\begin{minipage}[t]{13cm}
\baselineskip=20pt
Scatter plot of $Q_{\rm SU(2)}$ vs. $Q_{\rm Mono}$
(a) before the MA gauge fixing and (b) after the MA gauge fixing
at 100 cooling sweeps.
\end{minipage}
\vspace{1.0cm}
\item[FIG.5.]
\begin{minipage}[t]{13cm}
\baselineskip=20pt
Probability distribution of $Q_{\rm Mono}$ in the MA gauge fixing
at 100 cooling sweeps by using 3000 independent configurations.
\end{minipage}
\vspace{1.0cm}
\item[TABLE 1.]
\begin{minipage}[t]{13cm}
\baselineskip=20pt
Slope of the linear correlation in the scatter plot of
$Q_{\rm SU(2)}$ vs. $Q_{\rm Mono}$ on a $16^4$ lattice
at $\beta=2.4, 2.45$ and 2.5 after 30, 50 and 100 cooling sweeps.
\end{minipage}
\end{description}
\newpage
\centerline{\Large FIG.1 (Phys.Rev.D) Shoichi Sasaki {\it et al}. }
\vspace*{2.5cm}
\noindent
\centerline{\epsfxsize=15.0cm
\epsfbox{FIG_chi.eps}}
\vspace{0.5cm}
\centerline{\fcaption{\label{fig:chi}}}
\newpage
\centerline{\Large FIG.2a (Phys.Rev.D) Shoichi Sasaki {\it et al}. }
\vspace*{2.5cm}
{\setcounter{enumi}{\value{figure}}
\addtocounter{enumi}{1}
\setcounter{figure}{0}
\renewcommand{\thefigure}{\arabic{enumi}(\alph{figure})}
\noindent
\centerline{\epsfxsize=15.0cm
\epsfbox{FIG_full30.eps}}
\vspace{0.5cm}
\centerline{\fcaption{\label{fig:full30}}}
\newpage
\centerline{\Large FIG.2b (Phys.Rev.D) Shoichi Sasaki {\it et al}. }
\vspace*{2.5cm}
\noindent
\centerline{\epsfxsize=15.0cm
\epsfbox{FIG_full100.eps}}
\vspace{0.5cm}
\centerline{\fcaption{\label{fig:full100}}}
\setcounter{figure}{\value{enumi}}
}
\newpage
\centerline{\Large FIG.3a (Phys.Rev.D) Shoichi Sasaki {\it et al}. }
{\setcounter{enumi}{\value{figure}}
\addtocounter{enumi}{1}
\setcounter{figure}{0}
\renewcommand{\thefigure}{\arabic{enumi}(\alph{figure})}
\vspace*{2.5cm}
\noindent
\centerline{\epsfxsize=15.0cm
\epsfbox{FIG_abel30.eps}}
\vspace{0.5cm}
\centerline{\fcaption{\label{fig:abel30}}}
\newpage
\centerline{\Large FIG.3b (Phys.Rev.D) Shoichi Sasaki {\it et al}. }
\vspace*{2.5cm}
\noindent
\centerline{\epsfxsize=15.0cm
\epsfbox{FIG_abel100.eps}}
\vspace{0.5cm}
\centerline{\fcaption{\label{fig:abel100}}}
\setcounter{figure}{\value{enumi}}
}
\newpage
\centerline{\Large FIG.4a (Phys.Rev.D) Shoichi Sasaki {\it et al}. }
{\setcounter{enumi}{\value{figure}}
\addtocounter{enumi}{1}
\setcounter{figure}{0}
\renewcommand{\thefigure}{\arabic{enumi}(\alph{figure})}
\vspace*{2.5cm}
\noindent
\centerline{\epsfxsize=15.0cm
\epsfbox{FIG_ngf.eps}}
\vspace{0.5cm}
\centerline{\fcaption{\label{fig:ngf}}}
\newpage
\centerline{\Large FIG.4b (Phys.Rev.D) Shoichi Sasaki {\it et al}. }
\vspace*{2.5cm}
\noindent
\centerline{\epsfxsize=15.0cm
\epsfbox{FIG_mag.eps}}
\vspace{0.5cm}
\centerline{\fcaption{\label{fig:mag}}}
\setcounter{figure}{\value{enumi}}
}
\newpage
\centerline{\Large FIG.5 (Phys.Rev.D) Shoichi Sasaki {\it et al}. }
\vspace*{2.5cm}
\noindent
\centerline{\epsfxsize=15.0cm
\epsfbox{FIG_dis.eps}}
\vspace{0.5cm}
\centerline{\fcaption{\label{fig:dis}}}
Lance Armstrong how to change a tyre – video
Armstrong heads to the workshop for how-to video
Don't watch Lance Armstrong's How to fix a flat video, watch BikeRadar's instead.
Lance Armstrong, who has been relatively camera shy since admitting his systematic doping to Oprah last year, has appeared in a how-to video – telling viewers how to change a flat tyre.
In the video for US magazine Outside, the former pro sends himself up with the opening line "Hi, I'm Lance Armstrong, seven-time winner of the Tour de France," before a bell goes off, and he continues, "Hey, I didn't write the script."
Wearing a Coors Classic hat, Armstrong changes a tyre on an old Peugeot bike and offers handy tips along the way – such as not replacing the dust cap because it's "not cool."
Outside's publicity stunt featuring a faux-humbled Armstrong is likely to get the views, but if you want even better how-to videos, subscribe to the BikeRadar YouTube channel for our longer and more comprehensive series of Maintenance Monday videos.
BikeRadar presenter and skilled mechanic James Tennant has confirmed he was never offered or tempted to use performance-enhancing drugs during his career as a top bike shop wrench.
And yes, we know we just hijacked Outside's stunt. But if you want to go and see Armstrong in action, it's over here somewhere.
Abola Ubang (born 20 January 1997) is an Ethiopian track-and-field athlete specializing in the javelin throw.
In 2013 he became the first-ever African youth champion (using equipment of the weight appropriate for his age category – 700 g).
Achievements
References
Bibliography
Born in 1997
Ethiopian javelin throwers
{"url":"https:\/\/cn.chinaproject.harvard.edu\/publications","text":"# \u51fa\u7248\u6587\u732e\n\nIn Press\nChenghe Guan, Michael Keith, and Andy Hong. In Press. \u201cDesigning walkable cities and neighborhoods in the era of urban big data.\u201d Urban Planning International.Abstract\nIn this paper, we discuss walkable cities from the perspective of urban planning and design in the era of digitalization and urban big data. We start with a brief review on historical walkable cities schemes; followed by a deliberation on what a walkable city is and what the spatial elements of a walkable city are; and a discussion on the emerging themes and empirical methods to measure the spatial and urban design features of a walkable city. The first part of this paper looks at key urban design propositions and how they were proposed to promote walkability. The second part of this paper discusses the concept of walkability, which is fundamental to designing a walkable city. We emphasize both the physical (walkways, adjacent uses, space) and the perceived aspects (safety, comfort, enjoyment), and then we look at the variety of spatial elements constituting a walkable city. The third part of this paper looks at the emerging themes for designing walkable cities and neighborhoods. We discuss the application of urban big data enabled by growing computational powers and related empirical methods and interdisciplinary approaches including spatial planning, urban design, urban ecology, and public health. This paper aims to provide a holistic approach toward understanding of urban design and walkability, re-evaluate the spatial elements to build walkable cities, and discuss future policy interventions.\nJing Cao, Mun S. Ho, Wenhao Hu, and Dale Jorgenson. In Press. \u201cEffective Labor Supply and Growth Outlook in China.\u201d China Economic Review.Abstract\nThe falling projections of working-age population in China has led to predictions of much slower economic growth. 
We consider three mechanisms that could contribute to higher effective labor supply growth \u2013 further improvement in educational attainment due to cohort replacement and rising college enrollment, improvement in aggregate labor quality due to urbanization, and higher labor force participation due to later retirement. We find that these factors result in a projected growth rate of effective labor input of 0.40% for 2015-2030 compared to -0.60% for working age population. As a result, the projected growth rate of GDP will be 5.80% for 2015-2030 compared to 5.23% if these factors are ignored.\nChenghe Guan, Jialin Liu, Sumeeta Srinivasan, Bo Zhang, Liangjun Da, and Chris P. Nielsen. In Press. \u201cThe influence of neighborhood types on active transport in China\u2019s growing cities.\u201d Transportation Research Part D: Transport and Environment.Abstract\nRapid urban expansion in China has created both opportunities and challenges for promoting active transport in urban residential communities. Previous studies have shown that the urban form at the city scale has affected active transport in Chinese cities. However, there is less agreement about how the physical and social variations of neighborhood types should be addressed. This research investigates the four most representative neighborhood types found in Chinese cities: traditional mixed-use, slab block work-unit, gated community, and resettlement housing. Household travel diaries conducted in Chengdu in 2016 were analyzed using binary logistic regressions, supplemented by informal onsite interviews. The findings indicate significant variations in the use and accessibility of active transport in each neighborhood type for non-work trips. 
This suggests that each neighborhood type may need different strategies for promoting active transport: (1) the traditional mixed-use neighborhoods are in need of intensified urban retrofitting projects to reclaim public open space; (2) the work-unit could benefit from comprehensive plans rather than a patchwork of projects; (3) while opening up gated communities can improve porosity across neighborhoods and promote active transport, the more pressing issue may be their inability to keep up with the transportation needs of the residents; and (4) residents of resettlement housing should have better access to employment using transit and non-motorized modes.\nXueli Zhao, Xiaofang Wu, Chenghe Guan, Rong Ma, Chris P. Nielsen, and Bo Zhang. In Press. \u201cLinking Agricultural GHG emissions to Global Trade Network.\u201d Earth's Future.\nPeter Sherman, Xinyu Chen, and Michael B. McElroy. In Press. \u201cOffshore wind: an opportunity for cost-competitive decarbonization of China\u2019s energy economy.\u201d Science Advances.Abstract\nChina has reduced growth in its emissions of greenhouse gases, success attributable in part due to major investments in onshore wind. By comparison, investments in offshore wind have been minor, limited until recently largely by perceptions of cost. Assimilated meteorological data are used here to assess future offshore wind potential for China. Analysis on a provincial basis indicates that the aggregate potential wind resource is 5.4 times larger than current coastal demand for power. Recent experiences with markets both in Europe and the US suggest that potential offshore resources in China could be exploited to cost-competitively provide 1148.3 TWh of energy in a high-cost scenario, 6383.4 TWh in a low-cost option, equivalent to between 36% and 200% of the total coastal energy demand post 2020. 
The analysis underscores significant benefits for offshore wind for China, with prospects for major reductions greenhouse emissions with ancillary benefits for air quality.\nChenghe Guan, Ziming Li, Rahul Mehrotra, Michael Keith, and Abhinav Alakshendra. In Press. \u201cSmart city interventions and green accessibility for urban migrants: Case studies of Patna and Mumbai in India.\u201d Urban Planning International.Abstract\nPost-colonial India has experienced rapid urbanization characterized by rural-urban migration and the prosperity of informal settlement in cities. However, urban planning and infrastructure development has paid little attention to the needs of walking accessibility for pedestrians, especially the urban poor and migrants. The goals of \u201cSmart city\u201d-oriented urban policy cover inclusiveness, accessibility, transparency, comfortable, safety, sustainable, etc. In order to find how such emerging policy intervention could help pedestrians, we illustrate non-motorized accessibility challenges and opportunities for migrants and urban poor in Patna and Mumbai through case studies at multiple scales and various aspects. We found that to fulfill the needs of pedestrians not only means providing them adequate facilities but also means embodying the idea of inclusiveness and users\u2019 perceptions into practical methods for enhancing accessibility. Moreover, informality makes the vision of smart cities more complex in the two cities,\u00a0which implies the significance to hearing the diverse grassroots opinions by applying \u201csmart\u201d governance concept as well as innovative technologies for collecting data of accessibility of different social groups.\nChenghe Guan, Jihoon Song, Michael Keith, Yuki Akiyama, Ryosuke Shibasaki, and Taisei Sato. In Press. \u201cDelineating urban park catchment areas using mobile phone data: A case study of Tokyo.\u201d Computers, Environment and Urban Systems. 
Publisher's VersionAbstract\nUrban parks can offer both physical and psychological health benefits to urban dwellers and provide social, economic, and environmental benefits to society. Earlier research on the usage of urban parks relied on fixed distance or walking time to delineate urban park catchment areas. However, actual catchment areas can be affected by many factors other than park surface areas, such as social capital cultivation, cultural adaptation, climate and seasonal variation, and park function and facilities provided. This study advanced this method by using mobile phone data to delineate urban park catchment area. The study area is the 23 special wards of Tokyo or tokubetsu-ku, the core of the capital of Japan. The location data of over 1 million anonymous mobile phone users was collected in 2011. The results show that: (1) the park catchment areas vary significantly by park surface areas: people use smaller parks nearby but also travel further to larger parks; (2) even for the parks in the same size category, there are notable differences in the spatial pattern of visitors, which cannot be simply summarized with average distance or catchment radius; and (3) almost all the parks, regardless of its size and function, had the highest user density right around the vicinity, exemplified by the density-distance function closely follow a decay trend line within 1-2 km radius of the park. As such, this study used the density threshold and density-distance function to measure park catchment. We concluded that the application of mobile phone location data can improve our understanding of an urban park catchment area, provide useful information and methods to analyze the usage of urban parks, and can aid in the planning and policy-making of urban parks.\n2019\nXi Lu, Liang Cao, Haikun Wang, Wei Peng, Jia Xing, Shuxiao Wang, Siyi Cai, Bo Shen, Qing Yang, Chris P. Nielsen, and Michael B. McElroy. 2019. 
\u201cGasification of coal and biomass as a net carbon-negative power source for environment-friendly electricity generation in China.\u201d Proceedings of the National Academy of Sciences. Publisher's VersionAbstract\nRealizing the goal of the Paris Agreement to limit global warming to 2 \u00b0C by the end of this century will most likely require deployment of carbon-negative technologies. It is particularly important that China, as the world\u2019s top carbon emitter, avoids being locked into carbon-intensive, coal-fired power-generation technologies and undertakes a smooth transition from high- to negative-carbon electricity production. We focus here on deploying a combination of coal and biomass energy to produce electricity in China using an integrated gasification cycle system combined with carbon capture and storage (CBECCS). Such a system will also reduce air pollutant emissions, thus contributing to China\u2019s near-term goal of improving air quality. We evaluate the bus-bar electricity-generation prices for CBECCS with mixing ratios of crop residues varying from 0 to 100%, as well as associated costs for carbon mitigation and cobenefits for air quality. We find that CBECCS systems employing a crop residue ratio of 35% could produce electricity with net-zero life-cycle emissions of greenhouse gases, with a levelized cost of electricity of no more than 9.2 US cents per kilowatt hour. A carbon price of approximately $52.0 per ton would make CBECCS cost-competitive with pulverized coal power plants. Therefore, our results provide critical insights for designing a CBECCS strategy in China to harness near-term air-quality cobenefits while laying the foundation for achieving negative carbon emissions in the long run. Mengyao Han, Bo Zhang, Yuqing Zhang, and Chenghe Guan. 2019. \u201cAgricultural CH4 and N2O emissions of major economies: Consumption-vs. production-based perspectives.\u201d Journal of Cleaner Production, 210, Pp. 276-286. 
Publisher's VersionAbstract Agriculture is one of the most important sectors for global anthropogenic methane (CH4) and nitrous oxide (N2O) emissions. While much attention has been paid to production-side agricultural non-CO2 greenhouse gas (ANGHG) emissions, less is known about the emissions from the consumption-based perspective. This paper aims to explore the characteristics of agricultural CH4 and N2O emissions of global major economies by using the latest emission data from the Food and Agriculture Organization Corporate Statistical Database (FAOSTAT) and the recently available global multi-regional input-output model from the World Input-Output Database (WIOD). The results show that in 2014, the 42 major economies together accounted for 60.7% and 65.0% of global total direct and embodied ANGHG emissions, respectively. The consumption-based ANGHG emissions in the US, Japan, and the EU were much higher than their production-based emissions, while the converse was true for Brazil, Australia, and India. The global-average embodied ANGHG emissions per capita was 0.7 t CO2-eq, but major developing countries such as China, India, Indonesia and Mexico were all below this average value. We find that the total transfer of embodied ANGHG emissions via international trade was 622.4 Mt CO2-eq, 11.9% of the global total. China was the largest exporter of embodied ANGHG emissions, while the US was the largest importer. Most developed economies were net importers of embodied emissions. Mexico-US, China-US, China-EU, China-Japan, China-Russia, Brazil-EU, India-EU and India-US formed the main bilateral trading pairs of embodied emission flows. Examining consumption-based inventories can be useful for understanding the impacts of final demand and international trade on agricultural GHG emissions and identifying appropriate mitigation potentials along global supply chains. Yan Zhang, Xin Bo, Yu Zhao, and Chris P. Nielsen. 2019. 
"Benefits of current and future policies on emissions of China's coal-fired power sector indicated by continuous emission monitoring." Environmental Pollution, 251, pp. 415-424.
Abstract: Emission inventories are critical to understanding the sources of air pollutants, but have high uncertainties in China due in part to insufficient on-site measurements. In this study, we developed a method of examining, screening and applying online data from the country's improving continuous emission monitoring systems (CEMS) to reevaluate a "bottom-up" emission inventory of China's coal-fired power sector. The benefits of China's current national emission standards and ultra-low emission policy for the sector were quantified assuming their full implementation. The derived national average emission factors of SO2, NOx and particulate matter (PM) were 1.00, 1.00 and 0.25 kg/t-coal respectively for 2015 based on CEMS data, smaller than those of previous studies that may not fully recognize improved emission controls in recent years. The annual emissions of SO2, NOx and PM from the sector were recalculated at 1321, 1430 and 334 Gg respectively, 75%, 63% and 76% smaller than our estimates based on a previous approach without the benefit of CEMS data. The results imply that online measurement with proper data screening can better track the recent progress of emission controls. The emission intensity (the ratio of emissions to economic output) of Northwest China was larger than that of other regions, attributed mainly to its less intensive economy and industry. Transmission of electricity to more-developed eastern provinces raised the energy consumption and emissions of less-developed regions. Judged by 95th percentiles of flue-gas concentrations measured by CEMS, most power plants met the current national emission standards in 2015 except for those in Northwest and Northeast China, while plants that met the ultra-low emission policy were much scarcer. National SO2, NOx and PM emissions would further decline by 68%, 55% and 81% respectively if the ultra-low emission policy can be strictly implemented, implying the great potential of the policy for emission abatement.

Jianxiong Sheng, Shaojie Song, Yuzhong Zhang, Ronald G. Prinn, and Greet Janssens-Maenhout. 2019. "Bottom-Up Estimates of Coal Mine Methane Emissions in China: A Gridded Inventory, Emission Factors, and Trends." Environmental Science and Technology Letters.
Abstract: China has large but uncertain coal mine methane (CMM) emissions. Inverse modeling (top-down) analyses of atmospheric methane observations can help improve the emission estimates but require reliable emission patterns as prior information. To serve this urgent need, we developed a high-resolution (0.25° × 0.25°) methane emission inventory for China's coal mining using a recent publicly available database of more than 10000 coal mines in China for 2011. This number of coal mines is 25 and 2.5 times, respectively, more than the number available in the EDGAR v4.2 and EDGAR v4.3.2 gridded global inventories, which have been extensively used in past inverse analyses. Our inventory shows large differences with the EDGAR v4.2 as well as its more recent version, EDGAR v4.3.2. Our results suggest that China's CMM emissions have been decreasing since 2012 on the basis of coal mining activities and assuming time-invariant emission factors, but that regional trends differ greatly. Use of our inventory as prior information in future inverse modeling analyses can help better quantify CMM emissions as well as more confidently guide the future mitigation of coal to gas in China.

Sumeeta Srinivasan, Chenghe Guan, and Chris P. Nielsen. 2019. "Built environment, income and travel behavior: Change in the city of Chengdu 2005-2016." International Journal of Sustainable Transportation.
Abstract: In this paper, we look at differences in travel behavior and location characteristics across income in Chengdu, China at two points of time, 2005 and 2016, using household travel surveys. Specifically, we compare changes over time for different income groups for Chengdu in 2005 and 2016. We find that walking or biking remains the most common mode for all income groups but higher-income households appear to have more choices depending on the proximity of their neighborhood to downtown. We also find that both average local and average regional access have worsened since 2005. Furthermore, it appears that there is less economic diversity within neighborhoods in 2016 when compared to 2005, with more locations appearing to have 40% or more of low-, middle-, or high-income households than in the past. Finally, we find that low-income households and older trip makers are more likely to walk or bike and that high-income households are the most likely to own cars and use motorized modes. Built environment characteristics like mixed land use appear to significantly reduce travel time in 2016 but do not result in higher non-motorized transport mode share. We contribute to existing literature by evaluating changes in the relationship of built environment and travel behavior during a period of rapid urbanization and economic growth in a Chinese city.

Jaume Freire-González and Mun S. Ho. 2019. "Carbon taxes and the double dividend hypothesis in a recursive-dynamic CGE model for Spain." Economic Systems Research.

Haikun Wang, Xi Lu, Yu Deng, Yaoguang Sun, Chris P. Nielsen, Yifan Liu, Ge Zhu, Maoliang Bu, Jun Bi, and Michael B. McElroy. 2019. "China's CO2 peak before 2030 implied from diverse characteristics and growth of cities." Nature Sustainability.
Abstract: China pledges to peak CO2 emissions by 2030 or sooner under the Paris Agreement to limit global warming to 2 °C or less by the end of the century. By examining CO2 emissions from 50 Chinese cities over the period 2000–2016, we found a close relationship between per capita emissions and per capita gross domestic product (GDP) for individual cities, following the environmental Kuznets curve, despite diverse trajectories for CO2 emissions across the cities. Results show that carbon emissions peak for most cities at a per capita GDP (in 2011 purchasing power parity) of around US$21,000 (80% confidence interval: US$19,000 to US$22,000). Applying a Monte Carlo approach to simulate the peak of per capita emissions using a Kuznets function based on China's historical emissions, we project that emissions for China should peak at 13–16 GtCO2 yr−1 between 2021 and 2025, approximately 5–10 yr ahead of the current Paris target of 2030. We show that the challenges faced by individual types of Chinese cities in realizing low-carbon development differ significantly depending on economic structure, urban form and geographical location.

Jing Cao, Mun S. Ho, Dale W. Jorgenson, and Chris P. Nielsen. 2019. "China's Emissions Trading System and an ETS-Carbon Tax Hybrid." Energy Economics, 81, pp. 741-753.
Abstract: China is introducing a national carbon emission trading system (ETS), with details yet to be finalized. The ETS is expected to cover only the major emitters but it is often argued that a more comprehensive system will achieve the emission goals at lower cost. We first examine an ETS that covers both electricity and cement sectors and consider an ambitious cap starting in 2017 that will meet the official objective to reduce the carbon-GDP intensity by 60-65% by 2030 compared to 2005 levels. The two ETS-covered industries are compensated with an output-based subsidy to represent the intention to give free permits to the covered enterprises. We then consider a hybrid system where the non-ETS sectors pay a carbon tax and share in the CO2 reduction burden. Our simulations indicate that hybrid systems will achieve the same CO2 goals with lower permit prices and GDP losses. We also show how auctioning of the permits improves the efficiency of the ETS and the hybrid systems. Finally, we find that these CO2 control policies are progressive in that higher-income households bear a bigger burden.

Jing Cao, Mun Sing Ho, Yating Li, Richard G. Newell, and William A. Pizer. 2019. "Chinese residential electricity consumption estimation and forecast using micro-data." Resource and Energy Economics, 56, pp. 6-27.
Abstract: Based on econometric estimation using data from the Chinese Urban Household Survey, we develop a preferred forecast range of 85–143 percent growth in residential per capita electricity demand over 2009–2025. Our analysis suggests that per capita income growth drives a 43% increase, with the remainder due to an unexplained time trend. Roughly one-third of the income-driven demand comes from increases in the stock of specific major appliances, particularly AC units. The other two-thirds comes from non-specific sources of income-driven growth and is based on an estimated income elasticity that falls from 0.28 to 0.11 as income rises. While the stock of refrigerators is not projected to increase, we find that they contribute nearly 20 percent of household electricity demand. Alternative plausible time trend assumptions are responsible for the wide range of 85–143 percent. Meanwhile we estimate a price elasticity of demand of −0.7. These estimates point to carbon pricing and appliance efficiency policies that could substantially reduce demand.

Chenghe Guan, Sumeeta Srinivasan, and Chris P. Nielsen. 2019. "Does neighborhood form influence low-carbon transportation in China?" Transportation Research Part D: Transport and Environment, 67, pp. 406-420.
Abstract: Developing less auto-dependent urban forms and promoting low-carbon transportation (LCT) are challenges facing our cities. Previous literature has supported the association between neighborhood form and low-carbon travel behaviour. Several studies have attempted to measure neighborhood forms focusing on physical built-environment factors such as population and employment density and socio-economic conditions such as income and race. We find that these characteristics may not be sufficiently fine-grained to differentiate between neighborhoods in Chinese cities. This research assesses characteristics of neighborhood spatial configuration that may influence the choice of LCT modes in the context of dense Chinese cities. Urban-form data from 40 neighborhoods in Chengdu, China, along with a travel behaviour survey of households conducted in 2016, were used to generate several measures of land use diversity and accessibility for each neighborhood. We use principal component analysis (PCA) to group these variables into dimensions that could be used to classify the neighborhoods. We then estimate regression models of low-carbon mode choices such as walking, bicycling, and transit to better understand the significance of these built-environment differences at the neighbourhood level. We find that, first, members of households do choose to walk or bike or take transit to work provided there is relatively high population density and sufficient access to public transit and jobs. Second, land-use diversity alone was not found to be significant in affecting LCT mode choice. Third, the proliferation of gated communities was found to reduce overall spatial connectivity within neighborhoods and had a negative effect on choice of LCT.

Jing Cao, Mun S. Ho, and Wenhao Hu. 2019. "Energy consumption of urban households in China." China Economic Review, 58.
Abstract: We estimate China urban household energy demand as part of a complete system of consumption demand so that it can be used in economy-wide models. This allows us to derive cross-price elasticities, unlike studies which focus on one type of energy. We implement a two-stage approach and explicitly account for electricity, domestic fuels and transportation demand in the first stage and gasoline, coal, LPG and gas demand in the second stage. We find income-inelastic demand for electricity and home energy, but the elasticity is higher than estimates in the rich countries. Demand for total transportation is income elastic. The price elasticity for electricity is estimated to be −0.5, in the range of other estimates for China and similar to long-run elasticities estimated for the U.S.

Peng Jiang, Hongyan Liu, Shilong Piao, Philippe Ciais, Xiuchen Wu, Yi Yin, and Hongya Wang. 2019. "Enhanced growth after extreme wetness compensates for post-drought carbon loss in dry forests." Nature Communications, 10, 195.
Abstract: While many studies have reported that drought events have substantial negative legacy effects on forest growth, it remains unclear whether wetness events conversely have positive growth legacy effects. Here, we report pervasive and substantial growth enhancement after extreme wetness by examining tree radial growth at 1929 forest sites, satellite-derived vegetation greenness, and land surface model simulations. Enhanced growth after extreme wetness lasts for 1 to 5 years and compensates for 93 ± 8% of the growth deficit after extreme drought across global water-limited regions. Remarkable wetness-enhanced growth is observed in dry forests and gymnosperms, whereas the enhanced growth after extreme wetness is much smaller in wet forests and angiosperms. Limited or no enhanced growth is simulated by the land surface models after extreme wetness. These findings provide new evidence for improving climate-vegetation models to include the legacy effects of both drought and wet climate extremes.

Lin Zhou, Jianglong Li, Yangqing Dan, Chunping Xie, Houyin Long, and Hongxun Liu. 2019. "Entering and exiting: Productivity evolution of energy supply in China." Sustainability, 11, 983.
Abstract: The continuous entry of new firms and exit of old ones might have substantial effects on the productivity of energy supply. Since China is the world's largest energy producer, productivity of energy supply in China is a significant issue, which affects sustainability. As a technical application, this paper investigates the productivity and dynamic changes of Chinese coal mining firms. We find that the total factor productivity (TFP) growth of coal supply in China is largely lagging behind the growth rate of coal production. The entry and exit of non-state-owned enterprises (non-SOEs) partially explain the dynamic change of aggregate TFP. Specifically, non-state-owned entrants induced by the coal price boom after 2003 had negative effects on the TFP of energy supply, while the exit of non-SOEs had positive effects. Furthermore, there is regional heterogeneity concerning the effects of entry and exit on energy supply productivity. More entrants induced by the coal price boom are concentrated in the non-main production region (non-MPR), while more exits are located in the MPR due to the government's enforcement. This explains the phenomenon that productivity of energy supply in the MPR gradually surpasses that in the non-MPR. We also anticipate our paper to enhance understanding of the energy supply side, which might further help us make informed decisions on energy planning and environmental policies.
Infinite Dreams
Joe Haldeman
FOR THE GUILFORD GAFIA:
_And the three men I admire the most_
_The father, son, and the holy ghost_
_They caught the last train for the coast_
_The day the music died._
# Contents
COUNTERPOINT
ANNIVERSARY PROJECT
THE MAZEL TOV REVOLUTION
TO HOWARD HUGHES: A MODEST PROPOSAL
A MIND OF HIS OWN
ALL THE UNIVERSE IN A MASON JAR
THE PRIVATE WAR OF PRIVATE JACOB
A TIME TO LIVE
JURYRIGGED
SUMMER'S LEASE (called TRUTH TO TELL in _Analog_)
26 DAYS, ON EARTH
ARMAJA DAS
TRICENTENNIAL
AFTERWORD
A Biography of Joe Haldeman
# Counterpoint
_The good people who agreed to publish this book asked me to say a few words about each story: where it came from, how it was written. In the trade, we call this the "Where do you get your crazy ideas?" syndrome._
_I always liked Roger Zelazny's answer. He says that every night he leaves a bowl of milk and some crackers on the back stoop; in the morning, the milk and crackers are gone, but there's a stack of crazy ideas by the empty bowl._
_An apology may be in order for the significant number of readers who think a story ought to speak for itself, and everything else is irrelevant blather. I like the blather myself, though, and I think most readers do. The rest can skip it easily enough: it's in a different typeface._
_The story that follows is important to me because it's the first one I wrote after learning that I might some day be a writer. I'd sold a few stories before, but always figured that it would be a sideline, a hobby that managed to pay for itself with a little beer money left over. I learned that it might be more in June of 1970._
_For twenty years science fiction has had an annual rite of spring called the Milford Conference. For some, it's a rite of passage as well. Milford used to be held in Milford, Pennsylvania, at the home of its founder, Damon Knight (its geography changes as Damon moves around, but it's still called "the Milford"). Damon invites a mixture of established writers and newcomers for a week of intensive roundtable criticism: manuscripts are passed around and sometimes praised, sometimes literally torn to bits._
_It was a real feeling of privilege to be allowed to sit with people like Bova, Dickson, Ellison, Knight, Latimer, Wilhelm; but I was a nervous wreck by the time my story came up for appraisal. One fellow neophyte had been reduced to tears by having his manuscript referred to as "this piece of shit" and flung to the floor. By the time my turn came around I_ knew _my story was cretinous, subliterate, an insult to everyone's intelligence, and poorly Xeroxed besides._
_But most of the people liked it, and some people whose opinions were important to me liked it very much.* I was able to relax after that, and talk with the established pros about practical things like agents and editors, and important things like how to fill an empty page, how to restart a dead story. I found out that they weren't all that different from me, and that if I really wanted to, I could make my living as a writer, eventually (it took about six years, much less time than I'd expected)._
_I went home from the conference and wrote this story, and started my first novel, and eventually sold both. In the "crazy ideas" department, all I want to say, to avoid muting the story's suspense, is that it's loosely patterned after a Greek myth. Followers of Dr. Jung will be glad to know that I'd never heard of the myth when I wrote the story._
*My memory is as selective and self-reinforcing as anybody's; what I remember is that _everybody_ thought the story was the best thing since sliced yoghurt. But according to my notes, several people didn't like it, and one positively (or negatively) frothed at the mouth in his dislike. He went on to his just reward, though: now he's a critic.
Michael Tobias Kidd was born in New Rochelle, N.Y., at exactly 8:03:47 on 12 April 1943. His birth was made as easy as the birth of a millionaire's son can be.
Roger William Wellings was born in New Orleans, La., at exactly 8:03:47 on 12 April 1943. His prostitute mother died in giving birth, and his father could have been any one of an indeterminate number of businessmen she had serviced seven months before at a war materiel planning convention.
Michael's mother considered herself progressive. She alternated breast-feeding with a sterilized bottle of scientifically prepared formula. An army of servants cared for the mansion while she lavished time and affection on her only son.
Roger's wet nurse, a black woman hired by the orphanage, despised the spindly pink premature baby and hoped he would die. Somehow, he lived.
Both babies were weaned on the same day. Michael had steak and fresh vegetables laboriously minced and mortared and pestled by a skilled dietician on the kitchen staff. Roger had wartime Gerber's, purchased by the orphanage in gallon jars that were left open far too long.
In a sunny nursery on that glorious morning of 16 March 1944, Michael said "Mama," his first word. It was raining in New Orleans, and unseasonably cold, and that word was one that Roger wouldn't learn for some time. But at the same instant, he opened his mouth and said "No" to a spoonful of mashed carrots. The attendant didn't know it was Roger's first word, but was not disposed to coax, and Roger went hungry for the rest of the morning.
And the war ground on. Poor Michael had to be without his father for weeks at a time, when he journeyed to Washington or San Francisco or even New Orleans to confer with other powerful men. In these times, Mrs. Kidd redoubled her affection and tried to perk up the little tyke with gifts of toys and candy. He loved his father and missed him, but shrewdly learned to take advantage of his absences.
The orphanage in New Orleans lost men to the armed forces and the stronger women went out to rivet and weld and slap grey paint for the war. Roger's family winnowed down to a handful of old ladies and bitter 4-F's. Children would die every month from carelessness or simple lack of attention. They would soil their diapers and lie in the mess for most of the day. They would taste turpentine or rat poison and try to cope with the situation without benefit of adult supervision. Roger lived, though he didn't thrive.
The boys were two years old when Japan capitulated. Michael sat at a garden party in New Rochelle and watched his parents and their friends drink champagne and kiss and laugh and wipe each other's tears away. Roger was kept awake all night by the drunken brawl in the next room, and twice watched with childish curiosity as white-clad couples lurched into the ward and screwed clumsily beside his crib.
September after Michael's fourth birthday, his mother tearfully left him in the company of ten other children and a professionally kind lady, to spend half of each day coping with the intricacies of graham crackers and milk, crayons and fingerpaints. His father had a cork board installed in his den, where he thumbtacked Michael's latest creations. Mr. Kidd's friends often commented on how advanced the youngster was.
The orphanage celebrated Roger's fourth birthday the way they celebrated everybody's. They put him to work. Every morning after breakfast he went to the kitchen, where the cook would give him a paper bag full of potatoes and a potato peeler. He would take the potatoes out of the bag and peel them one by one, very carefully making the peelings drop into the bag. Then he would take the bag of peelings down to the incinerator, where the colored janitor would thank him for it very gravely. Then he would return to wash the potatoes after he had scrubbed his own hands. This would take most of the morning—he soon learned that haste led only to cut fingers, and if there was the slightest spot on one potato, the cook would make him go over all of them once again.
Nursery school prepared Michael quite well for grade school, and he excelled in every subject except arithmetic. Mr. Kidd hired a succession of tutors who managed through wheedling and cajoling and sheer repetition to teach Michael first addition, then subtraction, then multiplication, and finally long division and fractions. When he entered junior high school, Michael was actually better prepared in mathematics than most of his classmates. But he didn't understand it, really—the tutors had given him a superficial facility with numbers that, it was hoped, might carry him through.
Roger attended the orphanage grade school, where he did poorly in almost every subject. Except mathematics. The one teacher who knew the term thought that perhaps Roger was an _idiot savant_ (but he was wrong). In the second grade, he could add up a column of figures in seconds, without using a pencil. In the third grade, he could multiply large numbers by looking at them. In the fourth grade, he discovered prime numbers independently and could crank out long division orally, without seeing the numbers. In the fifth grade someone told him what square roots were, and he extended the concept to cube roots, and could calculate either without recourse to pencil and paper. By the time he got to junior high school, he had mastered high school algebra and geometry. And he was hungry for more.
Now this was 1955, and the boys were starting to take on the appearances that they would carry through adult life. Michael was the image of his father; tall, slim, with a slightly arrogant, imperial cast to his features. Roger looked like one of nature's lesser efforts. He was short and swarthy, favoring his mother, with a potbelly from living on starch all his life, a permanently broken nose, and one ear larger than the other. He didn't resemble his father at all.
Michael's first experience with a girl came when he was twelve. His riding teacher, a lovely wench of eighteen, supplied Michael with a condom and instructed him in its use, in a pile of hay behind the stables, on a lovely May afternoon.
On that same afternoon, Roger was dispassionately fellating a mathematics teacher only slightly uglier than he was, this being the unspoken price for tutelage into the mysteries of integral calculus. The experience didn't particularly upset Roger. Growing up in an orphanage, he had already experienced a greater variety of sexual adventure than Michael would in his entire life.
In high school, Michael was elected president of his class for two years running. A plain girl did his algebra homework for him and patiently explained the subject well enough for him to pass the tests. In spite of his mediocre performance in that subject, Michael graduated with honors and was accepted by Harvard.
Roger spent high school indulging his love for mathematics, just doing enough work in the other subjects to avoid the boredom of repeating them. He applied to several colleges, just to get the counselor off his back, but in spite of his perfect score on the College Boards (Mathematics), none of the schools had an opening. He apprenticed himself to an accountant and was quite happy to spend his days manipulating figures with half his mind, while the other half worked on a theory of Abelian groups that he was sure would one day blow modern algebra wide open.
Michael found Harvard challenging at first, but soon was anxious to get out into the "real world"—helping Mr. Kidd manage the family's widespread, subtle investments. He graduated _cum laude_ , but declined graduate work in favor of becoming a junior financial adviser to his father.
Roger worked away at his books and at his theory, which he eventually had published in the SIAM _Journal_ by the simple expedient of adding a Ph.D. to his name. He was found out, but he didn't care.
At Harvard, Michael had taken ROTC and graduated with a Reserve commission in the infantry, at his father's behest. There was a war going on now, in Vietnam, and his father, perhaps suffering a little from guilt at being too young for the first World War and too old for the second, urged his son to help out with the third.
Roger had applied for OCS at the age of twenty, and had been turned down (he never learned it was for "extreme ugliness of face"). At twenty-two, he was drafted; and the Army, showing rare insight, took notice of his phenomenal ability with numbers and sent him to artillery school. There he learned to translate cryptic commands like "Drop 50" and "Add 50" into exercises in analytic geometry that eventually led to a shell being dropped exactly where the forward observer wanted it. He loved to juggle the numbers and shout orders to the gun crew, who were in turn appreciative of his ability, as it lessened the amount of work for them—Roger never had a near miss that had to be repeated. Who cares if he looks like the devil's brother-in-law? He's a good man to have on the horn.
Michael became a company commander, leader of seventy infantrymen who patrolled the verdant hills and valleys of the Central Highlands, each one cursing and killing and sweating out his individual year. He hated it at first; it scared him and put a great weight on his heart when he ordered men out with the certain knowledge that some of them would come back dead and already rotting, and some screaming or whimpering with limbs or organs shattered, and some just grey with horror, open-mouthed, crying... but he got hardened to it and the men came to respect him and by 9 June 1966 he had to admit that he had come to enjoy it, just a little.
Roger wasn't disappointed when he got orders for Vietnam and was relieved to find that, once there, they let him do what he enjoyed most: taking those radioed commands and translating them into vernier readings for his gun crew, a group of men manning a 155-millimeter howitzer. In the Central Highlands.
Michael's company had settled into a comfortable routine the past few weeks. They would walk for a day and dig in, and he'd let them rest for a day, setting out desultory ambushes that never trapped any enemy. The consensus was that Charlie had moved out of this area, and they were getting a long-deserved rest. Michael even found time to play some poker with his men (being careful to keep the stakes down), even though it was strictly against regulations. It increased his popularity tremendously, as he was also careful to lose consistently. It was 9 June 1966 and he had been in Vietnam for five months.
It was 9 June 1966 and Roger had been with his gun crew for six months. They liked him at first, because he was so good. But they were getting distant now—he spent all of his free time writing strange symbols in a fat notebook, he never took leave to go into Pleiku and fuck the slope whores, and the few times they had invited him to play poker or craps he had gotten that funny look on his face and taken all their money, slowly and without seeming to enjoy it. Most of the guys thought he was a faggot, and though he said he'd never been to college, everybody knew that was a lie.
It was 9 June 1966 and Michael was dealing five-card stud when he heard the rattle of machine-gun fire on his southern perimeter. His educated ear separated the noises and, before he dropped the cards, he knew it was one M-16 against two Chinese AK-47's. He scrambled out of the bunker that had provided shade for card playing and ran in the direction of the firing. He was halfway there when fire broke out on the western and northern quadrants. He checked his stride and returned to the command bunker.
Roger was amusing himself with an application of point-set topology to stress analysis of concrete structures when the radio began to squawk: "One-one, this is Tiger-two. We're under pretty heavy contact and need a coupla dozen rounds. Over." Roger dumped his notebook and carried the radio to his gun crew. He had to smile—Tiger-two, that was Cap'n Kidd, of all the unlikely names. He hollered into the radio as he ran. "Tiger-two, this is One-one. We got your morning coordinates on file and we'll drop a smoke round by you. You correct. Okay? Over."
Michael rogered Roger's suggestion; he would look and listen for the harmless smoke round and tell him how much to drop or add.
The fire to the south had stepped up quite a bit now, and Michael was pretty sure that was where the enemy would make his play. The smoke round came whining in and popped about a hundred meters from the perimeter. "Drop seventy-five, one HE," Michael yelled into the radio.
Roger had worked with this Captain Kidd before and found him to be notoriously conservative. Which wasted shells, as he walked the artillery in little by little toward the action. So Roger yelled out the string of figures for one hundred meters' drop instead of seventy-five. His crew set the verniers and the charge and pulled the lanyard that sent the high explosive, "one HE," round singing toward Michael's position.
It landed smack on the perimeter, in a stand of bamboo right next to a hardworking machine-gun bunker. The two men inside the bunker died instantly, and the two men in a bunker on the other side were knocked out by the concussion. The bamboo exploded in a flurry of wooden shrapnel.
Before Michael could react, a six-inch sliver of bamboo traveling with the speed of a bullet hit him one inch above the left eyebrow and buried itself in his cerebral cortex. He dropped the binoculars he had been holding, put a hand to his head, and fell over in a state of acute tetanic shock; muscles bunched spastically, legs working in a slow run, mouth open wide saying nothing.
A medic rushed to the captain and was puzzled to find no apparent wound save a scratch on the forehead. Then he took Michael's helmet off and saw a half inch of bamboo protruding from the back of his head. He told a private to tell the lieutenant he was commander now.
The lieutenant got on the horn and asked who the fuck fired that round, we have at least two killed, landed right on the perimeter, give us some more but for Chrissake add fifty.
The gun crew overheard and Roger told them not to worry, he'd cover for them. Then he gave them the appropriate figures and they sent a volley of six HE rounds that providentially landed right in the middle of the enemy force grouping for the attack. Then he put volleys to the west and north, knocking out the diversionary squads. By the time air support arrived, there were no live enemy targets left. Roger got a commendation.
Michael was evacuated by helicopter to Banmethuot, where they couldn't do anything for him. They flew him to Bienhoa, where a neurosurgeon attempted to extract the bamboo splinter but gave up after an hour's careful exploration. They sent him to Japan, where a better, or at least more confident, surgeon removed the missile.
There was a board of inquiry where Roger testified that his men could not possibly have made such an elementary error and, after demonstrating his own remarkable talent, suggested that it had been either a faulty round or an improper correction by the captain. The board was impressed and the captain couldn't testify, so the matter was dropped.
After a few months Michael could say a few words and his body seemed to have adjusted to being fed and emptied through various tubes. So they flew him from Japan to Walter Reed, where a number of men experienced in such things would try to make some sort of rational creature out of him again.
Roger's esteem was now very high with the rest of the artillery battery, and especially with his own crew. He could have dumped the whole mess into their laps, but instead had taken on the board of inquiry by himself.
Michael was blind in his right eye, but with his left he could distinguish complementary colors and tell a circle from a square. The psychiatrists could tell because his pupil would dilate slightly at the change, even though the light intensity was kept constant.
A company of NVA regulars took Roger's fire base by surprise and, in the middle of the furious hand-to-hand battle, Roger saw two enemy sappers slip into the bunker that was used to store ammunition for the big guns. The bunker also contained Roger's notebook, and the prospect of losing eight months' worth of closely reasoned mathematical theorizing drove Roger to take his bayonet, run across a field of blistering fire, dive in the bunker and kill the two sappers before they could set off their charge. In the process, he absorbed a rifle bullet in the calf and a pistol wound in his left tricep. A visiting major who was cowering in a nearby bunker saw the whole thing, and Roger got a medical discharge, the Congressional Medal of Honor, and a fifty percent disability pension. The wounds were reasonably healed in six months, but the pension didn't stop.
Michael had learned to say "mama" again, but his mother wasn't sure he could recognize her during her visits, which became less and less frequent as cancer spread through her body. On 9 June 1967, she died of the cervical cancer that had been discovered exactly one year before. Nobody told Michael.
On 9 June 1967, Roger had finished his first semester at the University of Chicago and was sitting in the parlor of the head of the mathematics department, drinking tea and discussing the paper that Roger had prepared, extending his new system of algebraic morphology. The department head had made Roger his protégé, and they spent many afternoons like this, the youth's fresh insight cross-pollinating the professor's great experience.
By May of 1970, Michael had learned to respond to his name by lifting his left forefinger.
Roger graduated _summa cum laude_ on 30 May 1970 and, out of dozens of offers, took an assistantship at the California Institute of Technology.
Against his physician's instructions, Mr. Kidd went on a skiing expedition to the Swiss Alps. On an easy slope his ski hit an exposed root and, rolling comfortably with the fall, Michael's father struck a half-concealed rock which fractured his spine. It was June of 1973 and he would never ski again, would never walk again.
At that same instant on the other side of the world, Roger sat down after a brilliant defense of his doctoral thesis, a startling redefinition of Peano's Axiom. The thesis was approved unanimously.
On Michael's birthday, 12 April 1975, his father, acting through a bank of telephones beside his motorized bed, liquidated ninety percent of the family's assets and set up a tax-sheltered trust to care for his only child. Then he took ten potent pain-killers with his breakfast orange juice and another twenty with sips of water and he found out that dying that way wasn't as pleasant as he thought it would be.
It was also Roger's thirty-second birthday, and he celebrated it quietly at home in the company of his new wife, a former student of his, twelve years his junior, who was dazzled by his genius. She could switch effortlessly from doting _Hausfrau_ to randy mistress to conscientious secretary and Roger knew love for the first time in his life. He was also the youngest assistant professor on the mathematics faculty of CalTech.
On 4 January 1980, Michael stopped responding to his name. The inflation safeguards on his trust fund were eroding with time and he was moved out of the exclusive private clinic to a small room in San Francisco General.
The same day, due to his phenomenal record of publications and the personal charisma that fascinated students and faculty alike, Roger was promoted to be the youngest full professor in the history of the mathematics department. His unfashionably long hair and full beard covered his ludicrous ears and "extreme ugliness of face," and people who knew the history of science were affectionately comparing him to Steinmetz.
There was nobody to give the tests, but if somebody had they would have found that on 12 April 1983, Michael's iris would no longer respond to the difference between a circle and a square.
On his fortieth birthday, Roger had the satisfaction of hearing that his book, _Modern Algebra Redefined_ , was sold out in its fifth printing and was considered required reading for almost every mathematics graduate student in the country.
Seventeen June 1985 and Michael stopped breathing; a red light blinked on the attendant's board and he administered mouth-to-mouth resuscitation until they rolled in an electronic respirator and installed him. Since he wasn't on the floor reserved for respiratory disease, the respirator was plugged into a regular socket instead of the special fail-safe line.
Roger was on top of the world. He had been offered the chairmanship of the mathematics department of Penn State, and said he would accept as soon as he finished teaching his summer post-doctoral seminar on algebraic morphology.
The hottest day of the year was 19 August 1985. At 2:45:20 p.m. the air conditioners were just drawing too much power and somewhere in Central Valley a bank of bus bars glowed cherry red and exploded in a shower of molten copper.
All the lights on the floor and on the attendant's board went out, the electronic respirator stopped, and while the attendant was frantically buzzing for assistance, 2:45:25 to be exact, Michael Tobias Kidd passed away.
The lights in the seminar room dimmed and blinked out. Roger got up to open the Venetian blinds, whipped off his glasses in a characteristic gesture and was framing an acerbic comment when, at 2:45:25, he felt a slight tingling in his head as a blood vessel ruptured and quite painlessly he went to join his brother.
# Anniversary Project
_This story was a real problem child. Harry Harrison asked me to do a story for an anthology of science fiction set one million years in the future. I ran home and wrote the first three pages of "Anniversary Project," and then stopped dead. Started again, stopped again._
_After a half-dozen tries I was all the way up to four pages, and I really liked those four pages, but I had to stop wasting time on it. I wrote Harry and told him to go on without me._
_Several years later I came across the fragment and it was immediately obvious what was wrong with it. Painfully obvious, and so was the solution._
_I had taken as a basic premise that "people" a million years in the future would have evolved into something totally alien, and I'd done too good a job; they were the most convincing aliens I'd ever invented. But they did lack certain interesting attributes: love, hate, fear, birth, death, sex, appetites, politics. About all they had was slight differences of opinion regarding ontology. Pretty dry stuff._
_Yet I thought I was onto something. Most aliens in science fiction aren't truly alien, and that's not because science-fiction writers lack imagination, but because the purpose of an alien in a story is usually to provide a meaningful distortion of human nature. My purpose was not nearly so elevated; my aliens were there as unwitting vehicles for absurdist humor. All the story needed was a couple of bewildered humans, to serve as foils for alien nature. Once I saw that, the story practically wrote itself._
_In the process of writing itself, the story generated two dreadful puns. I'm not responsible._
His name is Three-phasing and he is bald and wrinkled, slightly over one meter tall, large-eyed, toothless and all bones and skin, sagging pale skin shot through with traceries of delicate blue and red. He is considered very beautiful but most of his beauty is in his hands and is due to his extreme youth. He is over two hundred years old and is learning how to talk. He has become reasonably fluent in sixty-three languages, all dead ones, and has only ten to go.
The book he is reading is a facsimile of an early edition of Goethe's _Faust_. The nervous angular Fraktur letters goose-step across pages of paper-thin platinum.
The _Faust_ had been printed electrolytically and, with several thousand similarly worthwhile books, sealed in an argon-filled chamber and carefully lost, in 2012 A.D.; a very wealthy man's legacy to the distant future.
In 2012 A.D., Polaris had been the pole star. Men eventually got to Polaris and built a small city on a frosty planet there. By that time, they weren't dating by prophets' births any more, but it would have been around 4900 A.D. The pole star by then, because of precession of the equinoxes, was a dim thing once called Gamma Cephei. The celestial pole kept reeling around, past Deneb and Vega and through barren patches of sky around Hercules and Draco; a patient clock but not the slowest one of use, and when it came back to the region of Polaris, then 26,000 years had passed and men had come back from the stars, to stay, and the book-filled chamber had shifted 130 meters on the floor of the Pacific, had rolled into a shallow trench, and eventually was buried in an underwater landslide.
The thirty-seventh time this slow clock ticked, men had moved the Pacific, not because they had to, and had found the chamber, opened it up, identified the books and carefully sealed them up again. Some things by then were more important to men than the accumulation of knowledge: in half of one more circle of the poles would come the millionth anniversary of the written word. They could wait a few millennia.
As the anniversary, as nearly as they could reckon it, approached, they caused to be born two individuals: Nine-hover (nominally female) and Three-phasing (nominally male). Three-phasing was born to learn how to read and speak. He was the first human being to study these skills in more than a quarter of a million years.
Three-phasing has read the first half of _Faust_ forwards and, for amusement and exercise, is reading the second half backwards. He is singing as he reads, lisping.
"Fain' Looee w'mun... wif all'r die-mun ringf..." He has not put in his teeth because they make his gums hurt.
Because he is a child of two hundred, he is polite when his father interrupts his reading and singing. His father's "voice" is an arrangement of logic and aesthetic that appears in Three-phasing's mind. The flavor is lost by translating into words:
"Three-phasing my son-ly atavism of tooth and vocal cord," sarcastically in the reverent mode, "Couldst tear thyself from objects of manifest symbol, and visit to share/help/learn, me?"
"?" He responds, meaning "with/with/of what?"
Withholding mode: "Concerning thee: past, future."
He shuts the book without marking his place. It would never occur to him to mark his place, since he remembers perfectly the page he stops on, as well as every word preceding, as well as every event, no matter how trivial, that he has observed from the precise age of one year. In this respect, at least, he is normal.
He thinks the proper coordinates as he steps over the mover-transom, through a microsecond of black, and onto his father's mover-transom, about four thousand kilometers away on a straight line through the crust and mantle of the earth.
Ritual mode: "As ever, father." The symbol he uses for "father" is purposefully wrong, chiding. Crude biological connotation.
His father looks cadaverous and has in fact been dead twice. In the infant's small-talk mode he asks "From crude babblings of what sort have I torn your interest?"
"The tale called _Faust_ , of a man so named, never satisfied with {symbol for slow but continuous accretion} of his knowledge and power; written in the language of Prussia."
"Also depended-ing on this strange word of immediacy, your Prussian language?"
"As most, yes. The word of 'to be': _sein_. Very important illusion in this and related languages/cultures; that events happen at the 'time' of perception, infinitesimal midpoint between past and future."
"Convenient illusion but retarding."
"As we discussed 129 years ago, yes." Three-phasing is impatient to get back to his reading, but adds:
"You always stick up for them."
"I have great regard for what they accomplished with limited faculties and so short lives." Stop beatin' around the bush, Dad. _Tempus fugit_ , eight to the bar. Did Mr. Handy Moves-dat-man-around-by-her-apron-strings, 20th-century American poet, intend cultural translation of _Lysistrata?_ if so, inept. African were-beast legendry, yes.
Withholding mode (coy): "Your father stood with Nine-hover all morning."
"," broadcasts Three-phasing: well?
"The machine functions, perhaps inadequately."
The young polyglot tries to radiate calm patience.
"Details I perceive you want; the idea yet excites you. You can never have satisfaction with your knowledge, either. What happened-s to the man in your Prussian book?"
"He lived-s one hundred years and died-s knowing that a man can never achieve true happiness, despite the appearance of success."
"For an infant, a reasonable perception."
Respectful chiding mode: "One hundred years makes-ed Faust a very old man, for a Dawn man."
"As I stand," same mode, less respect, "yet an infant." They trade silent symbols of laughter.
After a polite tenth-second interval, Three-phasing uses the light interrogation mode: "The machine of Nine-hover...?"
"It begins to work but so far not perfectly." This is not news.
Mild impatience: "As before, then, it brings back only rocks and earth and water and plants?"
"Negative, beloved atavism." Offhand: "This morning she caught two animals that look as man may once have looked."
"!" Strong impatience, "I go?"
"." His father ends the conversation just two seconds after it began.
Three-phasing stops off to pick up his teeth, then goes directly to Nine-hover.
A quick exchange of greeting-symbols and Nine-hover presents her prizes. "Thinking I have two different species," she stands: uncertainty, query.
Three-phasing is amused. "Negative, time-caster. The male and female took very dissimilar forms in the Dawn times." He touches one of them. "The round organs, here, served-ing to feed infants, in the female."
The female screams.
"She manipulates spoken symbols now," observes Nine-hover.
Before the woman has finished her startled yelp, Three-phasing explains: "Not manipulating concrete symbols; rather, she communicates in a way called 'non-verbal,' the use of such communication predating even speech." Slipping into the pedantic mode: "My reading indicates that such a loud noise occurs either in situations of extreme pain or extreme fear; since she seems not in pain, then she must fear me or you or both of us.
"Or the machine," Nine-hover adds.
Symbol for continuing. "We have no symbol for it but in Dawn days most humans observed 'xenophobia,' reacting to the strange with fear instead of delight. We stand as strange to them as they do to us, thus they register fear. In their era this attitude encouraged-s survival.
"Our silence must seem strange to them, as well as our appearance and the speed with which we move. I will attempt to speak to them, so they will know they need not fear us."
Bob and Sarah Graham were having a desperately good time. It was September of 1951 and the papers were full of news about the brilliant landing of U.S. Marines at Inchon. Bob was a Marine private with two days left of the thirty days' leave they had given him, between boot camp and disembarkation for Korea. Sarah had been Mrs. Graham for three weeks.
Sarah poured some more bourbon into her Coke. She wiped the sand off her thumb and stoppered the Coke bottle, then shook it gently. "What if you just don't show up?" she said softly.
Bob was staring out over the ocean and part of what Sarah said was lost in the crash of breakers rolling in. "What if I what?"
"Don't show up." She took a swig and offered the bottle. "Just stay here with me. With us." Sarah was sure she was pregnant. It was too early to tell, of course; her calendar was off but there could be other reasons.
He gave the Coke back to her and sipped directly from the bourbon bottle. "I suppose they'd go on without me. And I'd still be in jail when they came back."
"Not if—"
"Honey, don't even talk like that. It's a just cause."
She picked up a small shell and threw it toward the water.
"Besides, you read the _Examiner_ yesterday."
"I'm cold. Let's go up." She stood and stretched and delicately brushed sand away. Bob admired her long naked dancer's body. He shook out the blanket and draped it over her shoulders.
"It'll all be over by the time I get there. We'll push those bastards—"
"Let's not talk about Korea. Let's not talk."
He put his arm around her and they started walking back toward the cabin. Halfway there, she stopped and enfolded the blanket around both of them, drawing him toward her. He always closed his eyes when they kissed, but she always kept hers open. She saw it: the air turning luminous, the seascape fading to be replaced by bare metal walls. The sand turns hard under her feet.
At her sharp intake of breath, Bob opens his eyes. He sees a grotesque dwarf, eyes and skull too large, body small and wrinkled. They stare at one another for a fraction of a second. Then the dwarf spins around and speeds across the room to what looks like a black square painted on the floor. When he gets there, he disappears.
"What the hell?" Bob says in a hoarse whisper.
Sarah turns around just a bit too late to catch a glimpse of Three-phasing's father. She does see Nine-hover before Bob does. The nominally-female time-caster is a flurry of movement, sitting at the console of her time net, clicking switches and adjusting various dials. All of the motions are unnecessary, as is the console. It was built at Three-phasing's suggestion, since humans from the era into which they could cast would feel more comfortable in the presence of a machine that looked like a machine. The actual time net was roughly the size and shape of an asparagus stalk, was controlled completely by thought, and had no moving parts. It does not exist any more, but can still be used, once understood. Nine-hover has been trained from birth for this special understanding.
Sarah nudges Bob and points to Nine-hover. She can't find her voice; Bob stares open-mouthed.
In a few seconds, Three-phasing appears. He looks at Nine-hover for a moment, then scurries over to the Dawn couple and reaches up to touch Sarah on the left nipple. His body temperature is considerably higher than hers, and the unexpected warm moistness, as much as the suddenness of the motion, makes her jump back and squeal.
Three-phasing correctly classified both Dawn people as Caucasian, and so assumes that they speak some Indo-European language.
_"GutenTagsprechensieDeutsch?"_ he says in a rapid soprano.
"Huh?" Bob says.
_"Guten-Tag-sprechen-sie-Deutsch?"_ Three-phasing clears his throat and drops his voice down to the alto he uses to sing about the St. Louis woman. " _Guten Tag_ ," he says, counting to a hundred between each word. " _Sprechen sie Deutsch?"_
"That's Kraut," says Bob, having grown up on jingoistic comic books. "Don't tell me you're a—"
Three-phasing analyzes the first five words and knows that Bob is an American from the period 1935-1955. "Yes, yes—and no, no—to wit, how very very clever of you to have identified this phrase as having come from the language of Prussia, Germany as you say; but I am, no, not a German person; at least, I no more belong to the German nationality than I do to any other, but I suppose that is not too clear and perhaps I should fully elucidate the particulars of your own situation at this, as you say, 'time,' and 'place.'"
The last English-language author Three-phasing studied was Henry James.
"Huh?" Bob says again.
"Ah. I should simplify." He thinks for a half-second, and drops his voice down another third. "Yeah, simple. Listen, Mac. First thing I gotta know's whatcher name. Whatcher broad's name."
"Well... I'm Bob Graham. This is my wife, Sarah Graham."
"Pleasta meetcha, Bob. Likewise, Sarah. Call me, uh..." The only twentieth-century language in which Three-phasing's name makes sense is propositional calculus. "George. George Boole.
"I 'poligize for bumpin' into ya, Sarah. That broad in the corner, she don't know what a tit is, so I was just usin' one of yours. Uh, lack of immediate culchural perspective, I shoulda knowed better."
Sarah feels a little dizzy, shakes her head slowly. "That's all right. I know you didn't mean anything by it."
"I'm dreaming," Bob says. "Shouldn't have—"
"No you aren't," says Three-phasing, adjusting his diction again. "You're in the future. Almost a million years. Pardon me." He scurries to the mover-transom, is gone for a second, reappears with a bedsheet, which he hands to Bob. "I'm sorry, we don't wear clothing. This is the best I can do, for now." The bedsheet is too small for Bob to wear the way Sarah is using the blanket. He folds it over and tucks it around his waist, in a kilt. "Why us?" he asks.
"You were taken at random. We've been time-casting"—he checks with Nine-hover—"for twenty-two years, and have never before caught a human being. Let alone two. You must have been in close contact with one another when you intersected the time-caster beam. I assume you were copulating."
"What-ing?" Bob says.
"No, we weren't!" Sarah says indignantly.
"Ah, quite so." Three-phasing doesn't pursue the topic. He knows that humans of this culture were reticent about their sexual activity. But from their literature he knows they spent most of their "time" thinking about, arranging for, enjoying, and recovering from a variety of sexual contacts.
"Then that must be a time machine over there," Bob says, indicating the fake console.
"In a sense, yes." Three-phasing decides to be partly honest. "But the actual machine no longer exists. People did a lot of time-travelling about a quarter of a million years ago. Shuffled history around. Changed it back. The fact that the machine once existed, well, that enables us to use it, if you see what I mean."
"Uh, no. I don't." Not with synapses limited to three degrees of freedom.
"Well, never mind. It's not really important." He senses the next question. "You will be going back... I don't know exactly when. It depends on a lot of things. You see, time is like a rubber band." No, it isn't. "Or a spring." No, it isn't. "At any rate, within a few days, weeks at most, you will leave this present and return to the moment you were experiencing when the time-caster beam picked you up."
"I've read stories like that," Sarah says. "Will we remember the future, after we go back?"
"Probably not," he says charitably. Not until your brains evolve. "But you can do us a great service."
Bob shrugs. "Sure, long as we're here. Anyhow, you did us a favor." He puts his arm around Sarah. "I've gotta leave Sarah in a couple of days; don't know for how long. So you're giving us more time together."
"Whether we remember it or not," Sarah says.
"Good, fine. Come with me." They follow Three-phasing to the mover-transom, where he takes their hands and transports them to his home. It is as unadorned as the time-caster room, except for bookshelves along one wall, and a low podium upon which the volume of _Faust_ rests. All of the books are bound identically, in shiny metal with flat black letters along the spines.
Bob looks around. "Don't you people ever sit down?"
"Oh," Three-phasing says. "Thoughtless of me." With his mind he shifts the room from utility mood to comfort mood. Intricate tapestries now hang on the walls; soft cushions that look like silk are strewn around in pleasant disorder. Chiming music, not quite discordant, hovers at the edge of audibility, and there is a faint odor of something like jasmine. The metal floor has become a kind of soft leather, and the room has somehow lost its corners.
"How did that happen?" Sarah asks.
"I don't know." Three-phasing tries to copy Bob's shrug, but only manages a spasmodic jerk. "Can't remember not being able to do it."
Bob drops into a cushion and experimentally pushes at the floor with a finger. "What is it you want us to do?"
Trying to move slowly, Three-phasing lowers himself into a cushion and gestures at a nearby one, for Sarah. "It's very simple, really. Your being here is most of it.
"We're celebrating the millionth anniversary of the written word." How to phrase it? "Everyone is interested in this anniversary, but... nobody reads any more."
Bob nods sympathetically. "Never have time for it myself."
"Yes, uh... you _do_ know how to read, though?"
"He knows," Sarah says. "He's just lazy."
"Well, yeah." Bob shifts uncomfortably in the cushion.
"Sarah's the one you want. I kind of, uh, prefer to listen to the radio."
"I read all the time," Sarah says with a little pride. "Mostly mysteries. But sometimes I read good books, too."
"Good, good." It was indeed fortunate to have found this pair, Three-phasing realizes. They had used the metal of the ancient books to "tune" the time-caster, so potential subjects were limited to those living some eighty years before and after 2012 A.D. Internal evidence in the books indicated that most of the Earth's population was illiterate during this period.
"Allow me to explain. Any one of us can learn how to read. But to us it is like a code; an unnatural way of communicating. Because we are all natural telepaths. We can read each other's minds from the age of one year."
"Golly!" Sarah says. "Read minds?" And Three-phasing sees in her mind a fuzzy kind of longing, much of which is love for Bob and frustration that she knows him only imperfectly. He dips into Bob's mind and finds things she is better off not knowing.
"That's right. So what we want is for you to read some of these books, and allow us to go into your minds while you're doing it. This way we will be able to recapture an experience that has been lost to the race for over a half-million years."
"I don't know," Bob says slowly. "Will we have time for anything else? I mean, the world must be pretty strange. Like to see some of it."
"Of course; sure. But the rest of the world is pretty much like my place here. Nobody goes outside any more. There isn't any air." He doesn't want to tell them how the air was lost, which might disturb them, but they seem to accept that as part of the distant future.
"Uh, George." Sarah is blushing. "We'd also like, uh, some time to ourselves. Without anybody... inside our minds."
"Yes, I understand perfectly. You will have your own room, and plenty of time to yourselves." Three-phasing neglects to say that there is no such thing as privacy in a telepathic society.
But sex is another thing they don't have any more. They're almost as curious about that as they are about books.
So the kindly men of the future gave Bob and Sarah Graham plenty of time to themselves: Bob and Sarah reciprocated. Through the Dawn couple's eyes and brains, humanity shared again the visions of Fielding and Melville and Dickens and Shakespeare and almost a dozen others. And as for the 98% more, that they didn't have time to read or that were in foreign languages—Three-phasing got the hang of it and would spend several millennia entertaining those who were amused by this central illusion of literature: that there could be order, that there could be beginnings and endings and logical workings-out in between; that you could count on the third act or the last chapter to tie things up. They knew how profound an illusion this was because each of them knew every other living human with an intimacy and accuracy far superior to that which even Shakespeare could bring to the study of even himself. And as for Sarah and as for Bob:
Anxiety can throw a person's ovaries 'way off schedule. On that beach in California, Sarah was no more pregnant than Bob was. But up there in the future, some somatic tension finally built up to the breaking point, and an egg went sliding down the left Fallopian tube, to be met by a wiggling intruder approximately halfway; together they were the first manifestation of the organism that nine months later, or a million years earlier, would be christened Douglas MacArthur Graham.
This made a problem for time, or Time, which is neither like a rubber band nor like a spring; nor even like a river nor a carrier wave—but which, like all of these things, can be deformed by certain stresses. For instance, two people going into the future and three coming back, on the same time-casting beam.
In an earlier age, when time travel was more common, time-casters would have made sure that the baby, or at least its aborted embryo, would stay in the future when the mother returned to her present. Or they could arrange for the mother to stay in the future. But these subtleties had long been forgotten when Nine-hover relearned the dead art. So Sarah went back to her present with a hitch-hiker, an interloper, firmly imbedded in the lining of her womb. And its dim sense of life set up a kind of eddy in the flow of time, that Sarah had to share.
The mathematical explanation is subtle, and can't be comprehended by those of us who synapse with fewer than four degrees of freedom. But the end effect is clear: Sarah had to experience all of her own life backwards, all the way back to that embrace on the beach. Some highlights were:
In 1992, slowly dying of cancer, in a mental hospital.
In 1979, seeing Bob finally succeed at suicide on the American Plan, not quite finishing his 9,527th bottle of liquor.
In 1970, having her only son returned in a sealed casket from a country she'd never heard of.
In the 1960's, helplessly watching her son become more and more neurotic because of something that no one could name.
In 1953, Bob coming home with one foot, the other having been lost to frostbite; never having fired a shot in anger.
In 1952, the agonizing breech presentation.
Like her son, Sarah would remember no details of the backward voyage through her life. But the scars of it would haunt her forever.
They were kissing on the beach.
Sarah dropped the blanket and made a little noise. She started crying and slapped Bob as hard as she could, then ran on alone, up to the cabin.
Bob watched her progress up the hill with mixed feelings. He took a healthy slug from the bourbon bottle, to give him an excuse to wipe his own eyes.
He could go sit on the beach and finish the bottle; let her get over it by herself. Or he could go comfort her.
He tossed the bottle away, the gesture immediately making him feel stupid, and followed her. Later that night she apologized, saying she didn't know what had gotten into her.
# The Mazel Tov Revolution
_I know exactly where this story comes from. One evening I was sitting with good friend Jack Dann, discussing anthologies. He wanted to edit a "theme" anthology—these are Great Science Fiction About Root Vegetables-type books—and we were bouncing around various topics that hadn't been done yet, or at least not recently, and might be saleable. I suggested he do an anthology of Jewish science fiction, as he is quite Jewish (try to imagine a creature that's a cross between Isaac Bashevis Singer and Henny Youngman) and does write science fiction. We even made up a list of various stories he might be able to use._
_Lo and gevalt, he sold it. He wrote asking me for a Jewish science fiction story—but for three cents a word. I wrote back saying, Jack, my friendship knows no bounds, but there is a lower bound on my word rate for original stories. Five cents a word, boychik. He didn't write back._
_A year or so later, he sold another anthology, this time needing stories about faster-than-light travel. Again, three cents a word. Again, I demurred. Again, he refused to beg._
_And then yet_ another: _science fiction stories about political power. Word rate, three cents. Again he would not beg._
_Finally, he writes saying he's putting together another anthology of Jewish science fiction. I begin to feel like Dr. Frankenstein. Three cents, take it or leave it. So I sit down and write "The Mazel Tov Revolution": a Jewish story about the effect of faster-than-light travel on political power. Sell it to_ Analog _for a nickel a word_.
_Moral: I may prostitute my art, but at least I'm not a_ cheap _whore_.
This is the story of the venerated/despised Chaim Itzkhok (check one). And me. And how we made 238 worlds safe for democracy/really screwed everything up (check another). With twenty reams of paper and an old rock. I know you probably think you've heard the story before. But you haven't heard it all, not by a long way—things like blackmail and attempted murder, however polite, have a way of not getting in the history books. So read on, OK?
It all started out, for me at least, when I was stranded on Faraway a quarter of a century ago. You're probably thinking _you_ wouldn't mind getting stranded on Faraway, right? Garden spot of the Confederation? Second capital of humanity? Monument to human engineering and all that, terraformed down to the last molecule. I tell kids what it was like back in '09 and they just shake their heads.
Back then, Faraway was one of those places where you might see an occasional tourist, only because it was one of the places that tourists just didn't go. It was one of the last outposts of George's abortive Second Empire, and had barely supported itself by exporting things like lead and cadmium. Nice poisonous heavy metals whose oxides covered the planet instead of grass. You had to run around in an asbestos suit with an air conditioner on your back, it was so damned close to Rigel.
Still is too damned close, but the way they opaqued the upper atmosphere, they tell me that Rigel is just a baby-blue ball that makes spectacular sunrises and sunsets. I've never been too tempted to go see it, having worked under its blue glare in the old days; wondering how long it'd be before you went sterile, lead underwear notwithstanding, feeling skin cancers sprouting in the short-wave radiation.
I met old Chaim there at the University Club, a run-down bar left over from the Empire days. How I got to that godforsaken place is a story in itself—one I can't tell because the husband is still alive—but I was down and out with no ticket back, dead-ended at thirty.
I was sitting alone in the University Club, ignoring the bartender, nursing my morning beer and feeling desperate when old Chaim came in. He was around seventy but looked older, all grizzled and seamed, and I started getting ready an excuse in case he was armed with a hard-luck story.
But he ordered a cup of real coffee and when he paid, I sneaked a look at his credit flash. The number was three digits longer than mine. Not prejudiced against millionaires, I struck up a conversation with him.
There was only one opening gambit for conversation on Faraway, since the weather never changed and there were no politics to speak of: What the hell are you doing here?
"It's the closest place to where I want to go," he said, which was ridiculous. Then he asked me the same, and I told him, and we commiserated for a few minutes on the unpredictability of the other sex. I finally got around to asking him exactly where it was he wanted to go.
"It's interesting enough," he said. Two other people had come into the bar. He looked at them blandly. "Why don't we move to a table?"
He got the bartender's attention and ordered another cup of coffee, and must have seen my expression—the tariff on two cups of coffee would keep me drunk for a week—and ordered me up a large jar of beer. We carried them to a table and he switched on the sound damper, which was the kind that works both ways.
"Can I trust you to keep a secret?" He took a cautious sip of his coffee.
"Sure. One more won't hurt."
He looked at me for a long time. "How would you like to get a share of a couple of million CU's?"
A ticket back cost about a hundred thousand. "That depends on what I'd have to do." I wouldn't have, for instance, jumped off a high building into a vat of boiling lead. Boiling water, yes.
"I can't say, exactly, because I really don't know. There may be an element of danger, there may not be. Certainly a few weeks of discomfort."
"I've had several of those, here."
He nodded at the insignia on my fading fatigue jacket. "You're still licensed to pilot?"
"Technically."
"Bonded?"
"No, like I told you, I had to skip out. My bond's on Perrin's World. I don't dare—"
"No problem, really. This is a system job." You need to be bonded for interstellar flight, but planet-to-planet, within a stellar system, there's not that much money involved.
"System job? Here? I didn't know Rigel had any other—"
"Rigel has one other planet, catalogued as Biarritz. It never got chartered or officially named because there's nothing there."
"Except something you want."
"Maybe something a lot of people want."
But he wouldn't tell me any more. We talked on until noon, Chaim feeling me out, seeing whether he could trust me, whether he wanted me as a partner. There were plenty of pilots stranded on Faraway; I later found out that he'd talked to a half-dozen or so before me.
We were talking about children or some damn thing when he suddenly sat up straight and said, "All right. I think you'll be my pilot."
"Good... now, just what—"
"Not yet, you don't need to know yet. What's your credit number?"
I gave it to him and he punched out a sequence on his credit flash. "This is your advance," he said; I checked my flash and, glory, I was fifty thousand CU's richer. "You get the same amount later, if Biarritz doesn't pan out. If it works, you'll also get a percentage. We'll talk about that later."
The other fifty thousand was all I wanted—get back to civilization and I could hire a proxy to go to Perrin and rescue my bond. Then I'd be in business again.
"Now. The first thing you have to do is get us a ship. I'll arrange the financing." We left the bar and went to Faraway's only public (or private) stenographer, and he made out a letter of credit for me.
"Any kind of a ship will do," he said as I walked him back to his hotel. "Anything from a yacht to a battlewagon. We just have to get there. And back."
On any civilized world, I could have stepped into a booth and called Hartford; then strolled down to the nearest port and picked up a vessel: local, interplanetary or, if I was bonded and could wait a day or two, interstellar. But Faraway was Faraway, so it was a little more complicated.
Let me digress, in case you were born less than twenty years ago and fell asleep in history class.
Back then, we had two governments: the Confederation we all know and love, and New Hartford Transportation Rentals, Ltd. There was nothing on paper that connected the Confederation with Hartford, but in reality they were as intertwined as the skeins of a braid.
New Hartford Transportation Rentals, Ltd., owned virtually all of the basic patents necessary for interstellar travel as well as every starship, including the four clunkers left over from George VIII's disastrous imperialistic experiment.
Tired of your planet? Seek religious freedom, adventure, fresh air? Want to run from creditors? Get enough people together and Hartford would lease you a ship—for an astronomical sum, but at very generous rates. In fact, the first couple of generations hardly paid anything at all (while the interest built up), but then—
Talk about the sins of the fathers coming home to roost! Once a colony began to be a going concern, Hartford was empowered to levy a tax of up to fifty percent on every commercial transaction. And Hartford would carefully keep the tax down to a level where only the interest on the loan was being paid—the principal resting untouched, to provide Hartford an income in perpetuity. It was a rigged game (enforced by the Confederation), and everybody knew it. But it was the only game in town.
Hartford had a representative on every planet, and they kept him fueled with enough money so that he was always the richest, and usually the most influential, citizen of the planet. If a planetary government tried to evolve away from the rapacious capitalism that guaranteed Hartford a good return on its investment, their representative usually had enough leverage to put it back on the right road.
There were loopholes and technicalities. Most planets didn't pass the Hartford tax on directly, but used a sliding income tax, so the rich would get poorer and the poor, God bless them, would go home and make more taxpayers rather than riot in the streets.
If you ever patronized the kind of disreputable tavern that caters to pilots and other low types, you may have heard them singing that ancient ballad, "My Heart Belongs to Mother, But Hartford Owns My Ass."
Hartford owned that fundamental part of everybody on Faraway, too. But that didn't mean they'd supplied Faraway with a nice modern spaceport, bristling with ships of all sizes and ranges. No, just the bi-weekly vessel from Steiner that dropped off supplies and picked up some cadmium.
I had to admit there wasn't much reason for Faraway to have a short-run, plain old interplanetary ship—what good would it be? All you could do with it would be to orbit Faraway—and it looked bad enough from the _ground_ —or take a joyride out to Biarritz. And there were more entertaining ways to throw away your money, even on Faraway.
It turned out that there actually was one interplanetary ship on Faraway, but it was a museum piece. It had been sitting for two hundred years, the _Bonne Chance_, the ship Biarritz herself had used to survey the clinker that retained her name by default. It was being held for back taxes, and we picked it up for six figures.
Then the headaches began. Everything was in French: dial markings, instruction manual, log. I got a dictionary and walked around with an indelible pencil, relabeling; and Chaim and I spent a week of afternoons and evenings translating the manual.
The fusion engine was in good shape—no moving parts bigger than a molecule—but the rest of the ship was pretty ragged. Faraway didn't have much of an atmosphere, but it was practically pure oxygen, and _hot_. The hull was all pitted and had to be reground. The electronic components of the ship had been exposed to two hundred years of enough ionizing radiation to mutate a couple of fruit flies into a herd of purple cattle. Most of the guidance and communications gimcrackery had to be repaired or replaced.
We kept half the drifter population of Faraway—some pretty highly trained drifters, of course—employed for over a week, hammering that antique wreck into some kind of shape. I took it up alone for a couple of orbits and decided I could get it twenty AU's and back without any major disaster.
Chaim was still being the mystery man. He gave me a list of supplies, but it didn't hold any clue as to what we were going to do once we were on Biarritz: just air, water, food, coffee and booze enough for two men to live on for a few months. Plus a prefab geodesic hut for them to live in.
Finally, Chaim said he was ready to go and I set up the automatic sequencing, about two hours of systems checks that were supposed to assure me that the machine wouldn't vaporize on the pad when I pushed the _Commence_ button. I said a pagan prayer to Norbert Weiner and went down to the University Club for one last round or six. I could afford better bars, with fifty thousand CU's on my flash, but didn't feel like mingling with the upper classes.
I came back to the ship a half-hour before the sequencing was due to end, and Chaim was there, watching the slavies load a big crate aboard the _Bonne Chance_. "What the hell is that?" I asked him.
"The Mazel Tov papers," he said, not taking his eyes off the slavies.
"Mazel Tov?"
"It means good luck, maybe good-bye. Doesn't translate all that well. If you say it like this"—and he pronounced the words with a sarcastic inflection—"it can mean 'good riddance' or 'much good shall it do you.' Clear?"
"No."
"Good." They finished loading the crate and sealed the hold door. "Give me a hand with this." It was a gray metal box that Chaim said contained a brand-new phased-tachyon transceiver.
If you're young enough to take the phased-tachyon process for granted (just step in a booth and call Sirius), I should point out that when Chaim and I met, they'd only had the machines for a little over a year. Before that, if you wanted to communicate with someone light-years away, you had to write out your message and put it on a Hartford vessel, then wait around weeks, sometimes months, while it got shuffled from planet to planet (at Hartford's convenience) until it finally wound up in the right person's hands.
Inside, I secured the box and called the pad authorities, asking them for our final mass. They read it off and I punched the information into the flight computer. Then we both strapped in.
Finally the green light flashed. I pushed the _Commence_ button down to the locked position, and in a few seconds the engine rumbled into life. The ship shook like the palsied old veteran that it was, and climbed skyward trailing a cloud of what must have been the most polluting exhaust in the history of transportation: hot ionized lead, slightly radioactive. Old Biarritz had known how to economize on reaction mass.
I'd programmed a quick-and-dirty route, one and a half G's all the way, flip in the middle. Still it was going to take us two weeks. Chaim could have passed the time by telling me what it was all about, but instead he just sat around reading— _War and Peace_ and a tape of Medieval Russian folk tales—every now and then staring at the wall and cackling.
Afterwards, I could appreciate his fetish for secrecy (though God knows enough people were in on part of the secret already). Not to say I might have been tempted to double-cross him. But his saying a couple of million were involved was like inviting someone to the Boston Tea Party, by asking him if he'd like to put on a loincloth and help you play a practical joke.
So I settled down for two weeks with my own reading, earning my pay by pushing a button every couple of hours to keep a continuous systems check going. I could have programmed the button to push itself, but hell...
At the end of two weeks, I did have to earn my keep. I watched the "velocity relative to destination" readout crawl down to zero and looked out the viewport. Nothing.
Radar found the little planet handily enough. We'd only missed it by nine thousand and some kilometers; you could see its blue-gray disc if you knew where to look.
There's no trick to landing a ship like the _Bonne Chance_ if you have a nice heavy planet. It's all automated except for selecting the exact patch of earth you want to scorch (port authorities go hard on you if you miss the pad). But a feather-light ball of dirt like Biarritz is a different proposition—there just isn't enough gravity, and the servomechanisms don't respond fast enough. They'll try to land you at the rock's center of mass, which in this case was underneath forty-nine kilometers of solid basalt. So you have to do it yourself, a combination of radar and dead reckoning—more a docking maneuver than a landing.
So I crashed. It could happen to anybody.
I was real proud of that landing at first. Even old Chaim congratulated me. We backed into the surface at less than one centimeter per second, all three shoes touching down simultaneously. We didn't even bounce.
Chaim and I were already suited up, and all the air had been evacuated from the ship; standard operating procedure to minimize damage in case something did go wrong. But the landing had looked perfect, so we went on down to start unloading.
What passes for gravity on Biarritz comes to barely one-eightieth of a G. Drop a shoe and it takes it five seconds to find the floor. So we half-climbed, half-floated down to the hold, clumsy after two weeks of living in a logy G-and-a-half.
While I was getting the hold door open, we both heard a faint bass moan, conducted up from the ground through the landing shoes. Chaim asked whether it was the ground settling; I'd never heard it happen before, but said that was probably it. We were right.
I got the door open and looked out. Biarritz looked just like I'd expected it to: a rock, a pockmarked chunk of useless rock. The only relief from the grinding monotony of the landscape was the silver splash of congealed lead directly below us.
We seemed to be at a funny angle. I thought it was an optical illusion—if the ship hadn't been upright on landing, it would have registered on the attitude readout. Then the bright lead splash started moving, crawling away under the ship. It took me a second to react.
I shouted something unoriginal and scrambled for the ladder to the control room. One short blip from the main engine and we'd be safely away. Didn't make it.
The situation was easy enough to reconstruct, afterwards. We'd landed on a shelf of rock that couldn't support the weight of the _Bonne Chance_. The sound we had heard was the shelf breaking off, settling down a few meters, canting the ship at about a ten-degree angle. The force of friction between our landing pads and the basalt underfoot was almost negligible, in so little gravity, and we slid downhill until we reached bottom, and then gracefully tipped over. When I got to the control room, after quite a bit of bouncing around in slow-motion, everything was sideways and the controls were dead, dead, dead.
Chaim was lively enough, shouting and sputtering. Back in the hold, he was buried under a pile of crates, having had just enough time to unstrap them before the ship went over. I explained the situation to him while helping him out.
"We're stuck here, eh?"
"I don't know yet. Have to fiddle around some."
"No matter. Inconvenient, but no matter. We're going to be so rich we could have a fleet of rescuers here tomorrow morning."
"Maybe," I said, knowing it wasn't so—even if there were a ship at Faraway, it couldn't possibly make the trip in less than ten days. "First thing we have to do, though, is put up that dome." Our suits weren't the recycling kind; we had about ten hours before we had to start learning how to breathe carbon dioxide.
We sorted through the jumble and found the various components of the pop-up geodesic. I laid it out on a piece of reasonably level ground and pulled the lanyard. It assembled itself very nicely. Chaim started unloading the ship while I hooked up the life-support system.
He was having a fine time, kicking crates out the door and watching them float to the ground a couple of meters below. The only one that broke was a case of whiskey—every single bottle exploded, damn it, making a cloud of brownish crystals that slowly dissipated. So Biarritz was the only planet in the universe with a bonded-bourbon atmosphere.
When Chaim got to _his_ booze, a case of gin, he carried it down by hand.
We set up housekeeping while the dome was warming. I was still opening boxes when the bell went off, meaning there was enough oxygen and heat for life. Chaim must have had more trust in automatic devices than I had; he popped off his helmet immediately and scrambled out of his suit. I took off my helmet to be sociable, but kept on working at the last crate, the one Chaim had said contained "the Mazel Tov papers."
I got the top peeled away and looked inside. Sure enough, it was full of paper, in loose stacks.
I picked up a handful and looked at them. "Immigration forms?"
Chaim was sitting on a stack of food cartons, peeling off his suit liner. "That's right. Our fortune."
"'Mazel Tov Immigration Bureau,'" I read off one of the sheets. "Who—"
"You're half of it. I'm half of it. Mazel Tov is the planet under your feet." He slipped off the box. "Where'd you put our clothes?"
"What?"
"This floor's cold."
"Uh, over by the kitchen." I followed his naked wrinkled back as he clumped across the dome. "Look, you can't just _... name_ a planet..."
"I can't, eh?" He rummaged through the footlocker and found some red tights, struggled into them. "Who says I can't?"
"The Confederation! Hartford! You've got to get a charter."
He found an orange tunic that clashed pretty well and slipped it over his head. Muffled: "So I'm going to get a charter."
"Just like that."
He started strapping on his boots and looked at me with amusement. "No, not 'just like that.' Let's make some coffee." He filled two cups with water and put them in the heater.
"You can't just charter a rock with two people on it."
"You're right. You're absolutely right." The timer went off. "Cream and sugar?"
"Look—no, black—you mean to say you printed up some fake—"
"Hot." He handed me the cup. "Sit down. Relax. I'll explain."
I was still in my suit, minus the helmet, so sitting was no more comfortable than standing. But I sat.
He looked at me over the edge of his cup, through a veil of steam rising unnaturally fast. "I made my first million when I was your age."
"You've got to start somewhere."
"Right. I made a million and paid eighty-five percent of it to the government of Nueva Argentina, who skimmed a little off the top and passed it on to New Hartford Transportation Rentals, Ltd."
"Must have hurt."
"It made me angry. It made me think. And I did get the germ of an idea." He sipped.
"Go on."
"I don't suppose you've ever heard of the Itzkhok Shipping Agency."
"No... it probably would have stuck in my mind."
"Very few people have. On the surface, it's a very small operation. Four interplanetary ships, every one of them smaller than the _Bonne Chance_. But they're engaged in interstellar commerce."
"Stars must be pretty close together."
"No... they started about twenty years ago. The shortest voyage is about half over. One has over a century to go."
"Doesn't make any sense."
"But it does. It makes sense on two levels." He set down the cup and laced his fingers together.
"There are certain objects whose value almost has to go up with the passage of time. Jewelry, antiques, works of art. These are the only cargo I deal with. Officially."
"I see. I think."
"You see half of it. I buy these objects on relatively poor planets and ship them to relatively affluent ones. I didn't have any trouble getting stockholders. Hartford wasn't too happy about it, of course."
"What did they do?"
He shrugged. "Took me to court. I'd studied the law, though, before I started Itzkhok. They didn't press too hard—my company didn't make one ten-thousandth of Hartford's annual profit—and I won."
"And made a credit or two."
"Some three billion, legitimate profit. But the important thing is that I established a concrete legal precedent where none had existed before."
"You're losing me again. Does this have anything to do with..."
"Everything, patience. With this money, and money from other sources, I started building up a fleet. Through a number of dummy corporations... buying old ships, building new ones. I own or am leasing some two thousand ships. Most of them are loaded and on the pad right now."
"Wait, now." Economics was never my strong suit, but this was obvious. "You're going to drive your own prices down. There can't be that big a market for old paintings and—"
"Right, precisely. But most of these ships aren't carrying such specialized cargo. The closest one, for instance, is on Tangiers, aimed for Faraway. It holds nearly a hundred thousand cubic meters of water."
"Water..."
"Old passenger liner, flooded the damn thing. Just left a little room for ice expansion, in case the heating—"
"Because on Faraway—"
"—on Faraway there isn't one molecule of water that men didn't carry there. They recycle every drop but have to lose one percent or so annually.
"Tonight or tomorrow I'm going to call up Faraway and offer to sell them 897,000 kilograms of water. At cost. Delivery in six years. It's a long time to wait, but they'll be getting it for a hundredth of the usual cost, what Hartford charges."
"And you'll lose a bundle."
"Depends on how you look at it. Most of my capital is tied up in small, slow spaceships; I own some interest in three-quarters of the interplanetary vessels that exist. If my scheme works, all of them will double in value overnight.
"Hartford, though, is going to lose more than a bundle. There are 237 other planets, out of 298, in a position similar to Faraway's. They depend on Hartford for water, or seed, or medical supplies, or something else necessary for life."
"And you have deals set up—"
"For all of them, right. Under-bidding Hartford by at least a factor of ten." He drank off the rest of his coffee in a gulp.
"What's to stop Hartford from underbidding _you_?"
"Absolutely nothing." He got up and started preparing another cup. "They'll probably try to, here and there. I don't think many governments will take them up on it.
"Take Faraway as an example. They're in a better position than most planets, as far as their debt to Hartford, because the Second Empire financed the start of their colonization. Still, they owe Hartford better than ten billion CU's—their annual interest payment comes to several hundred million.
"They keep paying it, not because of some abstract obligation to Hartford. Governments don't have consciences. If they stopped paying, of course, they'd dry up and die in a generation. Until today, they didn't have any choice in the matter."
"So what you're doing is giving all of those planets a chance to welsh on their debts."
"That bothers you?" He sat back down, balanced the cup on his knee.
"A little. I don't love Hartford any more than—"
"Look at it this way. My way. Consider Hartford as an arm of the government, the Confederation."
"I've always thought it was the other way around."
"In a practical sense, yes. But either way. A government sends its people out to colonize virgin lands. It subsidizes them at first; once the ball is rolling, it collects allegiance and taxes.
"The 'debt' to Hartford is just a convenient fiction to justify taking these taxes."
"There are services rendered, though. Necessary to life."
"Rendered and paid for, separately. I'm going to prove to the 'colonies' that they can provide these services to each other. It will be even easier once Hartford goes bankrupt. There'll be no monopoly on starships. No Confederation to protect patents."
"Anarchy, then."
"Interesting word. I prefer to call it revolution... but yes, things will be pretty hectic for a while."
"All right. But if you wanted to choreograph a revolution, why didn't you pick a more comfortable planet to do it from? Are you just hiding?"
"Partly that. Mostly, though, I wanted to do everything legally. For that, I needed a very small planet without a charter."
"I'm lost again." I made myself another cup of coffee and grieved for the lack of bourbon. Maybe if I went outside and took a deep breath...
"You know what it takes to charter a planet?" Chaim asked me.
"Don't know the numbers. Certain population density and high enough gross planetary product."
"The figures aren't important. They look modest enough on paper. The way it works out, though, is that by the time a planet is populated enough and prosperous enough to get its independence, it's almost guaranteed to be irretrievably in debt to Hartford.
"That's what all those immigration forms are for. Half of those stacks are immigration forms and the other half, limited powers of attorney. I'm going to claim this planet, name it Mazel Tov, and accept my own petition for citizenship on behalf of 4,783 immigrants. Then I make one call, to my lawyer." He named an Earth-based interplanetary law firm so well-known that even I had heard of it.
"They will call about a hundred of these immigrants, each of whom will call ten more, then ten more, and so on. All prearranged. Each of them then pays me his immigration fee."
"How much is that?"
"Minimum, ten million CU's."
"God!"
"It's a bargain. A new citizen gets one share in the Mazel Tov Corporation for each million he puts in. In thirty minutes MTC should have almost as much capital behind it as Hartford has."
"Where could you find four thousand—"
"Twenty years of persuasion. Of coordination. I've tried to approach every living man of wealth whose fortune is not tied up with Hartford or the Confederation. I've showed them my plan—especially the safeguards on it that make it a low-risk, high-return investment—and every single one of them has signed up."
"Not one betrayal?"
"No—what could the Confederation or Hartford offer in return? Wealth? Power? These men already have that in abundance.
"On the other hand, I offer them a gift beyond price: independence. And incidentally, no taxes, ever. That's the first article of the charter."
He let me absorb that for a minute. "It's too facile," I said. "If your plan works, everything will fall apart for the Confederation and Hartford—but look what we get instead. Four thousand-some independent robber barons, running the whole show. That's an improvement?"
"Who can say? But that's revolution: throw the old set of bastards out and install your own set. At least it'll be different. Time for a change."
I got up. "Look, this is too much, too fast. I've got to think about it. Digest it. Got to check out the ship, too."
Chaim went along with me halfway to the air lock. "Good, good. I'll start making calls." He patted the transceiver with real affection. "Good thing this baby came along when it did. It would have been difficult coordinating this thing, passing notes around. Maybe impossible."
It didn't seem that bloody easy, even with all those speedy little tachyons helping us. I didn't say anything.
It was a relief to get back into my own element, out of the dizzying fumes of high finance and revolution. But it was short-lived.
Things started out just dandy. The reason the control board was dead was that its cable to the fuel cells had jarred loose. I plugged it back in and set up a systems check. The systems check ran for two seconds and quit. What was wrong with the ship was number IV-A-1-a. It took me a half-hour to find the manual, which had slid into the head and nestled up behind the commode.
"IV" was fusion power source. "IV-A" was -generation of magnetic field for containment thereof. "IV-A-1" was disabilities of magnetic field generator. And "IV-A-1-a," of course, was permanent disability. It had a list of recommended types of replacement generators.
Well, I couldn't run down to the store and pick up a generator. And you can't produce an umpty-million-gauss fusion mirror by rubbing two sticks together. So I kicked Mlle. Biarritz's book across the room and went back to the dome.
Chaim was hunched over the transceiver, talking to somebody while he studied his own scribblings in a notebook.
"We're stuck here," I said.
He nodded at me and kept up the conversation. "—that's right. Forty thousand bushels, irradiated, for five hundred thousand CU's... so _what?_ So it's a gift. It's guaranteed. Delivery in about seven years, you'll get details... all right, fine. A pleasure to do business. Thank _you_ , sir."
He switched off and leaned back and laughed. "They all think I'm crazy!"
"We're stuck here," I said again.
"Don't worry about it, don't _worry_," he said, pointing to an oversized credit flash attached to the transceiver. It had a big number on it that was constantly changing, going up. "That is the total assets of Mazel Tov Corporation." He started laughing again.
"Minims?"
"No, round credits."
I counted places. "A hundred and twenty-eight billion... credits?"
"That's right, right: You want to go to Faraway? We'll have it _towed_ here."
"A hundred and twenty-nine billion?" It was really kind of hard to grasp.
"Have a drink—celebrate!" There was a bowl of ice and a bottle of gin on the floor beside him. God, I hate gin.
"Think I'll fix a cup of tea." By the time I'd had my cup, cleaned up and changed out of my suit, Chaim was through with his calls. The number on the credit flash was up to 239,605,967,000 and going up slowly.
He took his bottle, glass and ice to his bunk and asked me to start setting up the rescue mission.
I called Hartford headquarters on Earth. Six people referred me to their superiors and I wound up talking to the Coordinator of Interstellar Transit himself. I found out that bad news travels fast.
"Mazel Tov?" his tinny voice said. "I've heard of you, new planet out by Rigel? Next to Faraway?"
"That's right. We need a pickup and we can pay."
"Oh, that's not the problem. Right now there just aren't any ships available. Won't be for several months. Maybe a year."
"What? We only have three months' worth of air!" By this time Chaim was standing right behind me, breathing gin into my ear.
"I'm really very sorry. But I thought that by the time a planet gets its charter, it should be reasonably self-sufficient."
"That's murder!" Chaim shouted.
"No, sir," the voice said. "Just unfortunate planning on your part. You shouldn't have filed for—" Chaim reached over my shoulder and slapped the switch off, hard. He stomped back to his bunk—difficult to do with next to no gravity—sat down and shook some gin into his glass. He looked at it and set it on the floor.
"Who can we bribe?" I asked.
He kept staring at the glass. "No one. We can try, but I doubt that it's worth the effort. Not with Hartford fighting for its life. Its corporate life."
"I know lots of pilots we could get, cheap."
"Pilots," Chaim said without too much respect.
I ignored the slur. "Yeah. Hartford programs the main jump. Nobody'd get a jump to Rigel."
We sat in silence for a while, the too-sober pilot and the Martian-Russian Jew who was the richest person in the history of mankind. Less than too sober.
"Sure there's no other ship on Faraway?"
"I'm sure," I said. "Took me half a day to find someone who remembered about the _Bonne Chance."_
He considered that for a minute. "What does it take to build an interplanetary ship? Besides money."
"What, you mean could they build one on Faraway?" "Right."
"Let me see." Maybe. "You need an engine. A cabin and life support stuff. Steering jets or gyros. Guidance and com mo equipment."
"Well?"
"I don't know. The engine would be the hard part. They don't have all that much heavy industry on Faraway."
"No harm in finding out."
I called Faraway. Talked to the mayor. He was an old pilot (having been elected by popular vote) and I finally reached him at the University Club, where he was surrounded by other old pilots. I talked to him about engineering. Chaim talked to him about money. Chaim shouted and wept at him about money. We made a deal.
Faraway having such an abundance of heavy metals, the main power generator for the town, the only settlement on the planet, was an old-fashioned fission generator. We figured out a way they could use it.
After a good deal of haggling and swearing, the citizens of Faraway agreed to cobble together a rescue vehicle. In return, they would get control of forty-nine percent of the stock of Mazel Tov Corporation.
Chaim was mad for a while, but eventually got his sense of humor back. We had to kill two months with six already-read books and a fifty-bottle case of gin. I read "War and Peace" twice. The second time I made a list of the characters. I made crossword puzzles out of the characters' names. I learned how to drink gin, if not how to like it. I felt like I was going slowly crazy—and when the good ship _Hello There_ hove into view, I knew I'd gone 'round the bend.
The _Hello There_ was a string of fourteen buildings strung along a lattice of salvaged beams; a huge atomic reactor pushing it from the rear. The buildings had been uprooted whole, life support equipment and all, from the spaceport area of Faraway. The first building, the control room, was the transplanted University Club, Olde English decorations still intact. There were thirty pairs of wheels along one side of the "vessel," the perambulating shanty-town.
We found out later that they had brought along a third of the planet's population, since most of the buildings on Faraway were without power and therefore uninhabitable. The thing (I still can't call it a ship) had to be put on wheels because they had no way to crank it upright for launching. They drove it off the edge of a cliff and pulled for altitude with the pitch jets. The pilot said it had been pretty harrowing, and after barely surviving the landing I could marvel at his power of understatement.
The ship hovered over Mazel Tov with its yaw jets and they lowered a ladder for us. Quite a feat of navigation. I've often wondered whether the pilot could have done it sober.
The rest, they say, is history. And current events. As Chaim had predicted Hartford went into receivership, MTC being the receiver. We did throw out all of the old random bastards and install our own hand-picked ones.
I shouldn't bitch. I'm still doing the only thing I ever wanted to do. Pilot a starship; go places, do things. And I'm moderately wealthy, with a tenth-share of MTC stock.
It'd just be a lot easier to take, if every ex-bum on Faraway didn't have a hundred times as much. I haven't gone back there since they bronzed the University Club and put it on a pedestal.
# To Howard Hughes:
A Modest Proposal
_One good reason for a novelist to write short stories is that they serve as a proving ground for new techniques. If a structure or texture doesn't work in a short story, you've only lost a few days, and learned something. If a novel goes sour, and I do speak from experience, you lose a thick stack of paper and more. And you might not learn as well, because of your deeper involvement, as a parent might see his child go wrong and never see how he'd caused it._
_I admire the work of John Dos Passos, especially the USA trilogy, and wanted to borrow his intricate technique for a science fiction novel.* I wanted to boil it down, make it even more rapid and nervous. This story was the test case, and I liked it, so I used the technique for_ Mindbridge _(St. Martin's Press, 1976), which I think is my best novel, so far._
_When I wrote this I was in the process of putting together an anthology of science fiction alternatives to war, which languished for some years before St. Martin's Press published it as_ Study War No More _(1977). The story was written for the anthology, and was meant to be sarcastic. But at that time its basic premise seemed rather absurd._
_Some few predictive elements of some of my stories have come true. I'm afraid this one will add to the list._
* In the real world it's against the law to take something that somebody else is trying to sell. But since John Brunner already adapted Dos Passos's technique in his powerful novels _Stand on Zanzibar_ (Doubleday, 1968) and _The Sheep Look Up_ (Harper & Row, 1972), I guess my crime is the receiving of stolen goods rather than kleptomania.
**1. 13 October 1975**
Shark Key is a few hundred feet of sand and scrub between two slightly larger islands in the Florida Keys: population, one.
Not even one person actually lives there—perhaps the name has not been attractive to real estate developers—but there is a locked garage, a dock and a mailbox fronting on US 1. The man who owns this bit of sand—dock, box, and carport—lives about a mile out in the Gulf of Mexico and has an assistant who picks up the mail every morning, and gets groceries and other things.
Howard Knopf Ramo is this sole "resident" of Shark Key, and he has many assistants besides the delivery boy. Two of them have doctorates in an interesting specialty, of which more later. One is a helicopter pilot, one ran a lathe under odd conditions, one is a youngish ex-Colonel (West Point, 1960), one was a contract killer for the Mafia, five are doing legitimate research into the nature of gravity, several dozen are dullish clerks and technicians, and one, not living with the rest off Shark Key, is a U.S. Senator who does not represent Florida but nevertheless does look out for the interests of Howard Knopf Ramo. The researchers and the delivery boy are the only ones in Ramo's employ whose income he reports to the IRS, and he only reports one-tenth at that. All the other gentlemen and ladies also receive ten-times-generous salaries, but they are all legally dead, so the IRS has no right to their money, and it goes straight to anonymously numbered Swiss accounts without attrition by governmental gabelle.
Ramo paid out little more than one million dollars in salaries and bribes last year; he considered it a sound investment of less than one-fourth of one per cent of his total worth.
**2. 7 May 1955**
Our story began, well, many places with many people. But one pivotal person and place was 17-year-old Ronald Day, then going to high school in sleepy Winter Park, Florida.
Ronald wanted to join the Army, but he didn't want to just _join_ the Army. He had to be an officer, and he wanted to be an Academy man.
His father had served gallantly in WWII and in Korea until an AP mine in Ch'unch'on (Operation "Ripper") forced him to retire. At that time he had had for two days a battlefield commission, and he was to find that the difference between NCO's retirement and officer's retirement would be the difference between a marginal life and a comfortable one, subsequent to the shattering of his leg. Neither father nor son blamed the Army for having sent the senior Day marching through a muddy mine field, 1955 being what it was, and neither thought the military life was anything but the berries. More berries for the officers, of course, and the most for West Pointers.
The only problem was that Ronald was, in the jargon of another trade, a "chronic underachiever." He had many fascinating hobbies and skills and an IQ of 180, but he was barely passing in high school, and so had little hope for an appointment. Until Howard Knopf Ramo came into his life.
That spring afternoon, Ramo demonstrated to father and son that he had the best interests of the United States at heart, and that he had a great deal of money (nearly a hundred million dollars even then), and that he knew something rather embarrassing about senior Day, and that in exchange for certain reasonable considerations he would get Ronald a place in West Point, class of 1960.
Not too unpredictably, Ronald's intelligence blossomed in the straitjacket discipline at the Point. He majored in physics, that having been part of the deal, and took his commission and degree—with high honors—in 1960. His commission was in the Engineers and he was assigned to the Atomic Power Plant School at Fort Belvoir, Virginia. He took courses at the School and at Georgetown University nearby.
He was Captain Ronald Day and bucking for major, one step from being in charge of Personnel & Recruitment, when he returned to his billet one evening and found Ramo waiting for him in a stiff-backed chair. Ramo was wearing the uniform of a brigadier general and he asked a few favors. Captain Day agreed gladly to cooperate, not really believing the stars on Ramo's shoulders; partly because the favors seemed harmless if rather odd, but reasonable in view of past favors; mainly because Ramo told him something about what he planned to do over the next decade. It was not exactly patriotic but involved a great deal of money. And Captain Day, O times and mores, had come to think more highly of money than of patriotism.
Ramo's representatives met with Day several times in the following years, but the two men themselves did not meet again until early 1972. Day eventually volunteered for Vietnam, commanding a battalion of combat engineers. His helicopter went down behind enemy lines, such lines as there were in that war, in January, 1972, and for one year he was listed as MIA. The North Vietnamese eventually released their list and he became KIA, body never recovered.
By that time his body, quite alive and comfortable, was resting a mile off Shark Key.
**3. 5 December 1969**
Andre Charvat met Ronald Day only once, at Fort Belvoir, five years before they would live together under Ramo's roof. Andre had dropped out of Iowa State as a sophomore, was drafted, was sent to the Atomic Power Plant School, learned the special skills necessary to turn radio-active metals into pleasing or practical shapes, left the Army and got a job running a small lathe by remote control, from behind several inches of lead, working with plutonium at an atomic power applications research laboratory in Los Alamos—being very careful not to waste any plutonium, always ending up with the weight of the finished piece and the shavings exactly equal to the weight of the rough piece he had started with.
But a few milligrams at a time, he was substituting simple uranium for the precious plutonium shavings.
He worked at Los Alamos for nearly four years, and brought 14.836 grams of plutonium with him when he arrived via midnight barge off Shark Key, 12 November 1974.
Many other people in similar situations had brought their grams of plutonium to Shark Key. Many more would, before the New Year.
**4. 1 January 1975**
"Ladies. Gentlemen." Howard Knopf Ramo brushes long white hair back in a familiar, delicate gesture and with the other hand raises a tumbler to eye level. It bubbles with good domestic champagne. "Would anyone care to propose a toast?"
An awkward silence, over fifty people crowded into the television room. On the screen, muted cheering as the Allied Chemical ball begins to move. "The honor should be yours, Ramo," says Colonel Day.
Ramo nods, gazing at the television. "Thirty years," he whispers and says aloud: "To _our_ year. To our world."
Drink, silence, sudden chatter.
**5. 2 January 1975**
_Curriculum Vitae_
My name is Philip Vale and I have been working with Howard Knopf Ramo for nearly five years. In 1967 I earned a doctorate in nuclear engineering at the University of New Mexico and worked for two years on nuclear propulsion systems for spacecraft. When my project was shelved for lack of funding in 1969, it was nearly impossible for a nuclear engineer to get a job; literally impossible in my specialty.
We lived off savings for a while. Eventually I had to take a job teaching high school physics and felt lucky to have any kind of a job, even at $7000 per year.
But in 1970 my wife suffered an attack of acute glomerulonephritis and lost both kidneys. The artificial dialysis therapy was not covered by our health insurance, and to keep her alive would have cost some $25,000 yearly. Ramo materialized and made me a generous offer.
Three weeks later, Dorothy and I were whisked incognito to Shark Key, our disappearance covered by a disastrous automobile accident. His artificial island was mostly unoccupied in 1970, but half of one floor was given over to medical facilities. There was a dialysis machine and two of the personnel were trained in its use. Ramo called it "benevolent blackmail" and outlined my duties for the next several years.
**6. 4 April 1970**
When Philip Vale came to Ramo's island, all that showed above water was a golden geodesic dome supported by massive concrete pillars and arm-thick steel cables that sang basso in the wind. Inside the dome were living quarters for six people and a more-or-less legitimate research establishment called Gravitics, Inc. Ramo lived there with two technicians, a delivery boy and two specialists in gravity research. The establishment was very expensive but Ramo claimed to love pure science, hoped for eventual profit, and admitted that it made his tax situation easier. It also gave him the isolation that semi-billionaires traditionally prefer; because of the delicacy of the measurements necessary to his research, no airplanes were allowed to buzz overhead and the Coast Guard kept unauthorized ships from coming within a one-mile radius. All five employees did do research work in gravity; they published with expected frequency, took out occasional patents, and knew they were only a cover for the actual work about to begin downstairs.
There were seven underwater floors beneath the golden dome, and Dr. Philip Vale's assignment was to turn those seven floors into a factory for the construction of small atom bombs. 29 Nagasaki-sized fission bombs.
**7. August 1945**
Howard Knopf Ramo worked as a dollar-a-year man for several years, the government consulting him on organizational matters for various projects. The details of many of these projects were quite secret, but he gave as good advice as he could, without being told classified details.
In August 1945 Ramo learned what that Manhattan Project had been all about.
**8. 5 April 1970—3 February 1972**
Dr. Philip Vale was absorbed for several weeks in initial planning: flow charts, lists of necessary equipment and personnel, timetables, floor plans. The hardest part of his job was figuring out a way to steal a lot of plutonium without being too obvious about it. Ramo had some ideas, on this and other things, that Vale expanded.
By the middle of 1971 there were thirty people living under Gravitics, Inc., and plutonium had begun to trickle in, a few grams at a time, to be shielded with lead and cadmium and concrete and dropped into the Gulf of Mexico at carefully recorded spots within the one-mile limit. In July they quietly celebrated Ramo's 75th birthday.
On 3 February 1972, Colonel Ronald Day joined Vale and the rest. The two shared the directorship amicably, Day suggesting that they go ahead and make several mock-up bombs, both for time-and-motion studies within the plant and in order to check the efficiency of their basic delivery system: an Econoline-type van, specially modified.
**9. Technological Aside**
One need not gather a "critical mass" of plutonium in order to make an atom bomb of it. It is sufficient to take a considerably smaller piece and subject it to a neutron density equivalent to that which prevails at standard temperature and pressure inside plutonium at critical mass. This can be done with judiciously shaped charges of high explosive.
The whole apparatus can fit comfortably inside a Ford Econoline van.
**10. 9 September 1974**
_Progress Report_
_Delivery Implementation Section_
TO: Ramo, Vale, Day, Sections 2, 5, 8.
As of this date we can safely terminate R & D on the following vehicles: Ford, Fiat, Austin, VW. Each has performed flawlessly on trial runs to Atlanta.
On-the-spot vehicle checks assure us that we can use Econolines for Ghana, Bombay, Montevideo, and Madrid, without attracting undue attention.
The Renault and Soyuz vans have not been road-tested because they are not distributed in the United States. One mockup Renault is being smuggled to Mexico, where they are fairly common, to be tested. We may be able to modify the Ford setup to fit inside a Soyuz shell. However, we have only two of the Russian vans to work with, and will proceed with caution.
The Toyota's suspension gave out in one out of three Atlanta runs; it was simply not designed for so heavy a load. We may substitute Econolines or VW's for Tokyo and Kyoto.
90% of the vehicles were barged to New Orleans before the Atlanta run, to avoid suspicion at the Key Largo weigh station.
We are sure all systems will be in shape well before the target date.
(signed) Supervisor Maxwell Bergman
**11. 14 October 1974**
Today they solved the China Problem: automobiles and trucks are still fairly rare in China, and its border is probably the most difficult to breach. Ramo wants a minimum of three targets in China, but the odds against being able to smuggle out three vans, load them with bombs, smuggle them back in again and drive them to the target areas without being stopped—the odds are formidable.
Section 2 (Weapons Research & Development) managed to compress a good-sized bomb into a package the size of a large suitcase, weighing about 800 pounds. It is less powerful than the others, and not as subtly safeguarded—read "boobytrapped"—but should be adequate to the task. It will go in through Hong Kong in a consignment of Swiss heavy machinery, bound for Peking; duplicates will go to Kunming and Shanghai, integrated with farm machinery and boat hulls, respectively, from Japan. Section 1 (Recruiting) has found delivery agents for Peking and Shanghai, is looking for a native speaker of the dialect spoken around Kunming.
**12. Naming**
Ramo doesn't like people to call it "Project Blackmail," so they just call it "the project" when he's around.
**13. 1 July 1975**
Everything is in order: delivery began one week ago. Today is Ramo's 79th birthday.
His horoscope for today says "born today, you are a natural humanitarian. You aid those in difficulty and would make a fine attorney. You are attracted to the arts, including writing. You are due for domestic adjustment, with September indicated as a key month."
None of the above is true. It will be in October.
**14. 13 October 1975**
7:45 on a grey Monday morning in Washington, D.C., a three-year-old Econoline van rolls up to a park-yourself lot on 14th Street. About a quarter-mile from the White House.
The attendant gives the driver his ticket. "How long ya gonna be?"
"Don't know," he says. "All day, probably."
"Put it back there then, by the Camaro."
The driver parks the van and turns on a switch under the dash. With a tiny voltmeter he checks the dead-man switch on his arm: a constant-readout sphygmomanometer wired to a simple signal generator. If his blood pressure drops too low too quickly, downtown Washington will be a radioactive hole.
Everything in order, he gets out and locks the van. This activates the safeguards. A minor collision won't set off the bomb, and neither would a Richter-6 earthquake. It will go off if anyone tries to X-ray the van or enter it.
He walks two blocks to his hotel. He is very careful crossing streets.
He has breakfast sent up and turns on the _Today_ show. There is no news of special interest. At 9:07 he calls a number in Miami. Ramo's fortune is down to fifty million, but he can still afford a suite at the Beachcomber.
At 9:32, all American targets having reported, Ramo calls Reykjavik.
"Let me speak to Colonel Day. This is Ramo."
"Just a moment, sir." One moment. "Day here."
"Things are all in order over here, Colonel. Have your salesmen reported yet?"
"All save two, as expected," he says: everyone but Peking and Kunming.
"Good. Everything is pretty much in your hands, then. I'm going to go down and do that commercial."
"Good luck, sir."
"We're past the need for luck. Be careful, Colonel." He rings off.
Ramo shaves and dresses, white Palm Beach suit. The reflection in the mirror looks like somebody's grandfather; not long for this world, kindly but a little crotchety, a little senile. Perhaps a little senile. That's why Colonel Day is coordinating things in Iceland, rather than Ramo. If Ramo dies, Day can decide what to do. If Day dies, the bombs all go off automatically.
"Let's go," he shouts into the adjoining room. His voice is still clear and strong.
Two men go down the elevator with him. One is the ex-hit man, with a laundered identity (complete to plastic surgery) and two hidden pistols. The other is Philip Vale, who carries with him all of the details of Project Blackmail and, at Ramo's suggestion, a .44—not just the derringer. He watches the hit man, and the hit man watches everybody else.
The Cadillac that waits for them outside the Beachcomber is discreetly bulletproof and has under the front and rear seats, respectively, a Thompson submachine gun and a truncated 12-gauge shotgun. The ex-hit man insisted on the additional armament, and Ramo provided them for the poor man's peace of mind. For his own peace of mind Ramo, having no taste for violence on so small a scale, had the firing pins removed last night.
They drive to a network-affiliated television station, having spent a good deal of money for ten minutes of network time. For a paid political announcement.
It only cost a trifle more to substitute their own men for Union employees behind the camera and in the control room.
**15. Transcript**
FADE IN LONG SHOT: RAMO, PODIUM, GLOBE
RAMO
My name is Howard Knopf Ramo.
SLOW DOLLY TO MCU RAMO
RAMO
Please don't leave your set; what I have to say is extremely important to you and your loved ones. And I won't take too much of your time.
You've probably never heard of me, though some years ago my accountants told me I was the richest man in the world. I spent a good deal of those riches staying out of the public eye. The rest of my fortune I spent on a project that has taken me thirty years to complete.
I was born just twenty-one years after the Civil War. In my lifetime, my country has been in five major wars and dozens of small confrontations. I didn't consider the reasons for most of them worthwhile. I didn't think that any of them were worth the price we paid.
And at that, we fared well compared to many other countries, whether they won their wars or lost them. Still, we continue to have wars. Rather...
TIGHT ON RAMO
... our _leaders_ continue to declare wars, advancing their own political aims by sending sons and brothers and fathers out to bleed and die.
CUT TO:
MEDIUM SHOT, RAMO SLOWLY TURNING GLOBE
RAMO
We have tolerated this situation through all of recorded history. No longer. China, the Soviet Union, and the United States have stockpiled nuclear weapons sufficient to destroy all human life, twice over. It has gone beyond politics and become a matter of racial survival.
I propose a plan to take these weapons away from them—every one, simultaneously. To this end I have spent my fortune constructing 29 atomic bombs. 28 of them are hidden in various cities around the world. One of them is in an airplane high over Florida. It is the smallest one; a demonstration model, so to speak.
CUT TO:
REMOTE UNIT; PAN SHORELINE
RAMO
VOICE OVER SURF SOUND
This is the Atlantic Ocean, off one of Florida's Keys. The bomb will explode seven miles out, at exactly 10:30. All shipping has been cleared from the area and prevailing winds will disperse the small amount of fallout harmlessly.
Florida residents within fifty miles of Shark Key are warned not to look directly at the blast.
FILTER DOWN ON REMOTE UNIT
Watch. There!
AFTER BLAST COMES AND FADES
CUT TO:
TIGHT ON RAMO
RAMO
Whether or not you agree with me, that all nations must give up their arms, is immaterial. Whether I am a saint or a power-drunk madman is immaterial. I give the governments of the world three days' notice—not just the atomic powers, but their allies as well. Perhaps less than three days, if they do not follow my instructions to the letter.
Atomic bombs at least equivalent to the ones that devastated Hiroshima and Nagasaki have been placed in the following cities:
MCU RAMO AND GLOBE
RAMO
TOUCHES GLOBE AS HE NAMES EACH CITY
Accra, Cairo, Khartoum, Johannesburg, London, Dublin, Madrid, Paris, Berlin, Rome, Warsaw, Budapest, Moscow, Leningrad, Novosibirsk, Ankara, Bombay, Sydney, Peking, Shanghai, Kunming, Tokyo, Kyoto, Honolulu, Akron, San Francisco, New York, Washington.
The smaller towns of Novosibirsk, Kunming and Akron—one for each major atomic power—are set to go off eight hours before the others, as a final warning.
These bombs will also go off if tampered with, or if my representatives are harmed in any way. The way this will be done, and the manner in which atomic weapons will be collected, is explained in a letter now being sent through diplomatic channels to the leader of each threatened country. Copies will also be released to the world press.
A colleague of mine has dubbed this effort "Project Blackmail." Unflattering, but perhaps accurate.
CUT TO:
LONG SHOT RAMO, PODIUM, GLOBE
RAMO
Three days. Goodbye.
FADE TO BLACK
**16. Briefing**
"They didn't _catch_ him?" The President was livid.
"No, sir. They had to find out what studio the broadcast originated from and then get—"
"Never mind. Do they know where the bomb is?"
"Yes, sir, it's on page six." The aide tentatively offered the letter, which a courier from the Polish embassy had brought a few minutes after the broadcast.
"Where? Has anything been done?"
"It's in a public parking lot on 14th Street. The police—"
"Northwest?"
"Yes, sir."
"Good God. That's only a few blocks from here."
"Yes, sir."
"No respect for... nobody's fiddled with it, have they?"
"No, sir. It's boobytrapped six ways from Sunday. We have a bomb squad coming out from Belvoir, but it looks pretty foolproof."
"What about the 'representative' he talked about? Let me see that thing." The aide handed him the report.
"Actually, he's the closest thing we've got to a negotiator. But he's also part of the boobytrap. If he's hurt in any way..."
"What if the son of a bitch has a heart attack?" The President sat back in his chair and lowered his voice for the first time. "The end of the world."
**17. Statistical Interlude**
One bomb will go off if any of 28 people dies in the next three days. They will all go off if Ronald Day dies.
All of these men and women are fairly young and in good physical condition. But they are under considerable strain and also perhaps unusually susceptible to "accidental" death. Say each of them has one chance in a thousand of dying within the next three days. Then the probability of accidental catastrophe is one minus .999 to the 29th power.

This is about .029, or one chance out of 35.
A number of cautionary cables were exchanged in the first few hours, related to this computation.
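The interlude's arithmetic can be checked directly. This short sketch assumes, as the passage does, independent deaths with a 0.999 three-day survival probability per person; the variable names are illustrative only:

```python
# Probability that at least one of the 29 trigger-people dies in three days,
# given each independently survives with probability 0.999.
p_survive_each = 0.999
n_people = 29  # the 28 city agents plus Colonel Day

p_all_survive = p_survive_each ** n_people
p_catastrophe = 1 - p_all_survive

print(f"P(catastrophe) = {p_catastrophe:.4f}")
print(f"roughly one chance in {1 / p_catastrophe:.0f}")
```

The complement rule does the work here: the only way no bomb goes off by accident is for all 29 people to survive, so the failure probability is one minus the product of the individual survival probabilities.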
**18. Evening**
The Secretary of Defense grips the edge of his chair and growls: "That old fool could've started World War III. Atom... bombing... Florida."
"He gave us ample warning," the Chairman of the AEC reminds him.
"Principle of the goddam thing."
The President isn't really listening; what's past is over and there is plenty to worry about for the next few days. He is chainsmoking, something he never does in public and rarely in conference. Camels in a long filtered holder.
"How can we keep from handing over all of our atomics?" The President stubs out his cigarette, blows through the holder, lights another.
"All right," the Chairman says. "He has a list of our holdings, which he admits is incomplete." Ticks off on his fingers. "He will get a similar list from China: locations, method of delivery, yield. Chinese espionage has been pretty efficient. Another list from Russia. Between the three, that is among the three I guess—" Secretary of Defense makes a noise. "—he will probably be able to disarm us completely."
He makes a tent of his fingers. "You've thought of making a deal, I suppose. Partial lists from—"
"Yes. China's willing, Russia isn't. And Ramo is also getting lists from England, France and Germany. Fairly complete, if I know our allies."
"Wait," says the Secretary, "France has bombs too—"
"Halfway to Reykjavik already."
"What the hell are we going to do?"
Similar queries about the same time, in Moscow and Peking.
**19. Morning**
Telegrams and cables have been arriving by the truckload. The President's staff abstracted them into a 9-page report. Most of them say "don't do anything rash." About one in ten says "call his bluff," most of them mentioning a Communist plot. One of these even came from Akron.
It didn't take them long to find Ramo. Luckily, he had dismissed the bodyguard after returning safely to the Beachcomber, so there was no bloodshed. Right now he is in a condition something between house arrest and protective custody, half of Miami's police force and large contingents from the FBI and CIA surrounding him and his very important phone.
He talks to Reykjavik and Day tells him that all of the experts have arrived: 239 atomic scientists and specialists in nuclear warfare, a staff of technical translators and a planeload of observers from the UN's International Atomic Energy Agency.
Except for the few from France, no weapons have arrived. Day is not surprised and neither is Ramo.
Ramo is saddened to hear that several hundred people were killed in panicky evacuations, in Tokyo, Bombay and Khartoum. Evacuation of London is proceeding in an orderly manner. Washington is under martial law. In New York and Paris a few rushed out and most people are just sitting tight. A lot of people in Akron have decided to see what's happening in Cleveland.
**20. Noon**
President's intercom buzzes. "We found Ramo's man, sir."
"I suppose you searched him. Send him in."
A man in shirtsleeves walks in between two uniformed MP's. He is a hawk-faced man with a sardonic expression.
"This is rather premature, Mr. President. I was supposed to—"
"Sit down."
He flops into an easy chair. "—supposed to call on you at 3:30 this afternoon."
"You no doubt have some sort of a deal to offer."
The man looks at his watch. "You must be hungry, Mr. President. Take a long lunch hour, maybe a nap. I'll have plenty to say at—"
"You—"
"Don't worry about me, I've already eaten. I'll wait here."
"We can be very hard on you."
He rolls up his left sleeve. Two small boxes and some wiring are taped securely to his forearm. "No, you can't. Not for three days—you can't kill me or even cause me a lot of pain. You can't drug me or hypnotize me." (This last a lie) "Even if you could, it wouldn't bring any good to you."
"I believe it would."
"We can discuss that at 3:30." He leans back and closes his eyes.
"What _are_ you?"
He opens one eye. "A professional gambler." That is also a lie. Back when he had to work for a living, he ran a curious kind of a lathe.
**21. 3:30 P.M.**
The President comes through a side door and sits at his desk. "All right, have your say."
The man nods and straightens up slowly. "First off, let me explain my function."
"Reasonable."
"I am a gadfly, a source of tension."
"That is obvious."
"I can also answer certain questions about that bomb in your backyard."
"Here's one: how can we disarm it?"
"That I can't tell you."
"I believe we can convince you—"
"No, you don't understand. I don't know _how_ to turn it off. That's somebody else's job." Third lie. "I do know how to blow it up—hurt me or kill me or move me more than ten miles from ground zero. Or I can just pull this little wire." He touches a wire and the President flinches.
"All right. What else are you here for?"
"That's all. Keep an eye on you, I guess."
"You don't have any sort of... message, any—"
"Oh, no. You've already got the message. Through the Polish embassy, I think."
"Come on now. I'm not naive."
The man looked at him curiously. "Maybe that's your problem. Mr. Ramo's demands are not negotiable; he really is doing what he says, taking the atomic weapons away from all of you... strange people.
"What sort of a deal do you think you could offer an 80-year-old millionaire? Exbillionaire. How would you propose to threaten him?"
"We can kill him."
"That's right."
"In three days we can kill you."
The man laughs politely. "Now you are being naive."
The President flips a switch on his intercom. "Send in Carson and Major Anfel and the two MP's." The four men come in immediately.
"Take this man somewhere and talk to him. Don't hurt him."
"Not yet," the civilian Carson says.
"Come on," one MP says to the man.
"I don't think so," the man says. He stares at the President. "I'd like a glass of water."
**22. 15 October 1975**
The only nuclear weapons in the United States are located in Colorado, Texas, Florida and, of course, San Francisco, Washington D.C., and Akron, Ohio.
**23. 16 October 1975**
**2:30 A.M.**
The only nuclear weapons in the United States are located in Colorado, Texas, Florida, San Francisco and Washington, D.C. There is no Akron, Ohio.
Of the 139 who perished in the blast, 138 were very gutsy looters.
**10:00 A.M.**
Only San Francisco and Washington now. The others are on their way to Reykjavik.
The man who was named Andre Charvat walks down a deserted 14th Street with a 9-volt battery in his hand. A civilian and two volunteer MP's walk with him.
He walks straight up to the Econoline's rear bumper and touches the terminals of the battery to two inconspicuous rivets. There is a small spark and a click like the sound of a pinball machine, tilting.
"That's all. It's controlled by Reykjavik now."
"And Reykjavik is half controlled by Communists. And worse, traitors," Carson said huskily.
He doesn't answer but walks on down the street, alone. Amnesty.
In a few minutes a heavy truck rumbles up and men in plain coveralls construct a box of boilerplate around the Econoline. People start coming back into Washington and a large crowd gathers, watching them as they cover the box with a marble facade and affix a bronze plaque to the front.
The man who owned the parking lot received a generous check from the Nuclear Arms Control Board, in kroner.
**24. Quote**
"NUCLEAR WARFARE.... This article consists of the following sections:
I. Introduction
II. Basic Principles
1. Fission Weapons
2. Fusion Weapons
III. Destructive Effects
1. Theoretical
2. Hiroshima and Nagasaki
3. Akron and Novosibirsk
IV. History
1. World War II
2. "Cold War"
3. Treaty of Reykjavik
V. Conversion to Peaceful Uses
1. Theory and Engineering
2. Administration Under NACB
3. Inspection Procedures
(For related articles see DAY, RONALD R.; EINSTEIN, ALBERT; ENERGY; FERMI, ENRICO; NUCLEAR SCIENCES (several articles); RAMO, HOWARD K.; VALE, PHILIP; WARFARE, HISTORY OF.)"
—Copyright © 2020 by Encyclopaedia Britannica, Inc.
# A Mind of His Own
_Sometimes stories are written for catharsis, and they may be useful therapy for the author, but most of them shouldn't see print, because_ a priori _the author's story sense and stylistic judgment are subordinated to his emotional need. They usually read like cries for help._
_This said, I'll admit the following story was written for catharsis, and, to make matters worse, it's a story about self-pity. But I wouldn't have included it in this collection if I'd thought it was bad._
_The protagonist of this story is missing a leg and a foot, and I really don't remember whether I chose that disability consciously, but it is appropriate. Some years ago I lay in a crowded jungle hospital in Viet Nam, not yet recovering from the effects of having stood too close to a booby trap when a booby set it off. I was a veritable encyclopaedia of shrapnel and blast wounds—it had been a company-sized boobytrap—but the only ones here relevant were the left leg, which was pretty well shattered and flayed, and the right foot, which had a hole in the heel, where your socks wear out. In the first surgery, there wasn't enough skin to stitch the leg wounds up, so the limb was wrapped in a huge roll of blood-soaked bandage, for safekeeping. The flies were so taken with it that they ignored my waving, and they also beleaguered the foot wound, which hadn't been bandaged—which, in fact, the surgeons had missed. It was ungodly hot and humid._
_A harried-looking doctor came through, stopped at my bed, and warned me that I might lose the leg, and then left (I've always wondered why he felt he had to tell me). At least I got an orderly to put a bandage on the foot, to keep the flies off it. He didn't put any antiseptic on it, though, and the next day it looked terrible and smelled bad, and I could just imagine what my leg looked like under all that cloth. Even the fact that losing my leg would surely get me out of the war couldn't cheer me up very much._
_All's well that ends, though, and some brilliant anonymous surgeon, perhaps the one who had scared me so, did fix up the leg and foot, and miscellaneous other parts, and after a mere four months of painful physical therapy I was able to be a soldier again, and then a civilian._
_Another damned war story, you say, but no, that's not the particular demon I was trying to put to rest here, even though war did provide a certain amount of the detail. The real experience to be exorcized is the more subtle one of reaching up one day and finding that your halo's gone. I had a friend who was suddenly and severely disabled, and he reacted in a human way, sliding into bitterness, lashing out at the people around him, driving away his family, then his friends, and then one day I left him too, in spite of knowing how he felt. Exit plaster saint._
_"What we need is a technology of behavior... were it not for the unwarranted generalization that all control is wrong, we should deal with the social environment as simply as we deal with the nonsocial."_
B. F. Skinner
Leonard Shays came back home to Tampa from the Lebanese conflict with a chestful of medals—which was no distinction—a slightly fractured mind, a medical discharge and two fairly efficient prosthetics, replacing his left foot and the right leg from the knee down.
The singleshot laser boobytrap he had triggered on patrol in the slums of Beirut had been set to scan at chest level, to kill. But Leonard, canny with experience, had tossed in a microton grenade before entering the hovel, and the explosion jarred the mounting of the boobytrap so that it scanned in a downward slant across the doorway. It was practically no pain at first, much pain later, and now just a feeling that his nonexistent toes were curled down in spastic paralysis. It made it hard to walk but the VA was giving him therapy. And he couldn't get a job, not even with his Ph.D. in mathematics, but the VA was also giving him a small check on the first of every month.
"Morning, Dr. Shays." His favorite therapist, Bennet, closed the bathroom door quietly. "Ready for the workout?"
"Am I ever? Ready to get out of this damn thing, though." Bennet picked up Leonard gracelessly and pulled him out of the whirlpool bath. He set him on the Formica edge of a table and gave him a starchy towel.
He studied the stumps professionally. "How's the wife?"
"Don't ask," he said, scrubbing sweat from his hair. "We had a long talk Friday. Our contract comes up for renewal in '98. She decided not to renew."
Bennet turned off the motor and pulled the plug on the bath. "It's her right," he said. "Bitch."
"It's not the legs. Absence thereof. She explained that carefully, at some length. It's not the legs at all."
"Look, if you don't wanna..."
"It's not that I can't get a job and we had to move to Ybor City and she has to carry a gun to go shopping."
Bennet grunted and straightened a stack of towels.
Leonard fumbled through his clothes and got a cigarette, lit it.
"Shouldn't smoke those things in here."
"Just leaving." He draped a gray robe around his shoulders. "Help me with this thing, OK?"
Bennet helped him put on the robe and set him in a wheelchair. "Can't smoke in Therapy, either."
Leonard put the clothes on his lap and turned the chair 180° on one wheel, hypertrophied biceps bulging. "So let's not go straight to Therapy. I need some fresh air."
"You'll stiffen up."
He rolled to the door and opened it. "No, it's warm. Plenty warm."
They were the only people on the porch. Bennet took a cigarette and pointed it at one of the palm trees.
"You know how old that one is?"
"She said it was because of the _piano."_
"Yeah, you shouldn't of sold the piano."
"Couldn't work the pedals right."
"Someday you—"
"I wasn't going to sell it anyhow; I was going to trade even for classical guitar or lute if I could find somebody."
"Yeah?"
"I went to all the skills transfer agencies. Every one, here and St. Pete. Even one in Sarasota, specializes in music. Couldn't find a guitar player who was any good. Not in Bach. If I can't play Bach I'd rather just listen."
"You coulda gotten one that was otherwise good. Learn Bach on your own."
"Bennet, hell, that'd be years. I never learned that much new on the piano, either. Don't have the facility."
"You bought the piano in the first place?"
He nodded. "One of the first skill transfers in Florida. Old Gainsville conservatory man. He thought he was going to die and wanted one last fling. Paid him fifty grand, that was real money back in '90."
"Still is."
"They cured his cancer and a year later he commmitted suicide." He threw his cigarette over the edge and watched it fall three stories.
"It's exactly as old as I am. Fifty-one years, the gardener told me," Bennet said. "I guess that's pretty old for a tree."
"Palm tree, anyhow." Leonard lit another and they smoked in silence.
"I wouldn't have sold it except my car went bad. Turbine blades crystallized while I was stuck in traffic. Had to get a new engine, new drive train. Try to get around this town without a car."
"It's worth your life," Bennet agreed.
Leonard snapped the new cigarette away. "Might as well get going."
He was always tired after therapy but he always hobbled down to the gate and across to the little tavern, drank a beer standing up and walked back to the parking lot. He'd found out that if he didn't walk about a mile after therapy he would hardly be able to get out of bed the next morning, for the stiffness.
He went home and was surprised to find his wife there.
"Good afternoon, Scottie." He walked in unsteadily, carrying two bags of groceries.
"Let me help."
"No." He set the groceries down on the dinette table and began to take out things to go into the refrigerator.
"Aren't you going to ask me what the hell I'm doing here?"
He didn't look at her. "No. I'm very calm today." He took the frozen foods over first, elbowed the door open. "Therapy today."
"Did it go well?"
"Besides, it's as much your house as mine."
"Until January. But I don't feel that way."
"It went pretty well." He shuffled things around in the refrigerator to make room for a scrawny chicken, the only luxury he had purchased.
"You got the car fixed."
"All it took was money."
"Have you tried to sell the baby grand?"
"No."
Carefully: "Does that mean you might buy back the talent some day?"
"With what?"
"Well, you—"
"I need the money to live on and the piano's yours to sell or keep or bronze or whatever the hell you want to do with it."
"You don't like to have it around because—"
_"I don't give a flying_... I don't care whether it stays or goes. I kind of like it. It's a fun thing to dust. It keeps the place from blowing away in a high wind. It has a certain—"
"Leonard!"
"Don't shout."
"It's not mine; I bought it for you."
"That's right."
_"I did."_
"You did lots of things for me. I'm grateful. Now." He shut the refrigerator door and leaned on it, drumming fingers, looking at the wall. "I'll ask. What the hell are you doing here?"
"I came back," she said evenly, "to try to talk some sense into you."
"Wonderful."
"Henry Beaumont said you told him you were thinking of selling your mathematics, too."
"That's right. After the money goes. It's not doing me any good."
"You worked nine years for that degree. Long years, remember? I was with you most of them."
"Five, to be accurate. Five years for the Ph.D. First the Bachelors and—"
"If you sell your mathematics you lose it all the way back to grade school."
"That's true. Tell me something else old."
"Don't be difficult. Look at me." He didn't. "Daddy will—"
"That's really old. I don't want to hear it."
"Still trying to be a hero. Your courage is an inspiration to us all."
"Oh, for Christ's sake." He sat down at the kitchen table with his back to her. "You were the one who wanted out. Not me."
"Len, if you could see yourself, what you've turned into..."
Any time somebody starts out a sentence with your name, Leonard thought, they're trying to sell you something.
"Daddy said this morning that if you'd go to see Dr. Verden—"
"The imprint man he goes to."
"The best overlay therapist in the state, Len."
Early attempts at overlay therapy were called "personality imprinting." The name had a bad connotation.
"The principle's the same no matter how good he is." He looked straight at her for the first time. "I may be a worthless self-pitying bastard, but I am me. I stay me."
"That sounds pretty—"
"Pretty stupid from a man who's just sold one slice of his brain and talks about selling another. Right?"
"Close."
"Wrong. There's a basic difference between skill transfer and overlay ther—"
"No, there isn't, they're exactly the—"
" _Because_ ," almost shouting, "I can shed skills when and as I feel I no longer have use for them, where your _im_ print witch doctor just looks up in some god-damn book and finds a pers—"
"You're wrong and you know it. Otherwise—"
"No, Scottie. You've let your father sell you Tranquility Base. These—" "Daddy's been seeing Dr. Verden for fifteen years!"
_"And see what it's gotten him?"_
He wasn't looking at her any longer but he could see the old familiar counting gesture. "Money. Prestige. Self-fulfillment—"
"And whose self is he fulfilling? Every time I see the old guy I expect him to be Sinbad the Sailor or Jack Kennedy or some goddamn thing. Fifty years ago they would have locked him up and thrown away the combination."
"You act as if he's—"
"He is! Certifiably."
He heard the door open—"We'll see about that!"—and slide shut and he reflected that that was one improvement over their house in Bel Aire. You can't slam an electric door.
Leonard woke up stiff the next day in spite of his having exercised. He would have allowed himself an extra hour in bed but today he despised the pathetic image of a naked legless cripple lying there helplessly. He decided against the struggle of showering, taped the pads to his stumps, strapped on the prosthetics and pulled on a pair of baggy trousers.
It was intolerably muggy, so he threw economy aside and switched on the airco. While his coffee was heating, he unwrapped the latest _ASM Journal_ and set it with a thick pad of paper and a pencil next to the chair that sat under the air-conditioning duct. The microwave cooker buzzed; he got his coffee and sat down with the first article.
The doorbell rang when he was on the second article and second cup of coffee. He almost didn't answer it. It was never good news. It rang again, insistently, so he got up and opened the door.
It was a small, bland-looking black man with a leather portfolio under his arm. Salesman, Leonard thought tiredly.
"Leonard Shays?" Leonard just looked at him.
"How do you do. I'm Dr. Felix Verden, you may—"
He pushed the button but Verden had a foot against the door jamb. The door slid halfway closed, then opened again.
"Mrs. Dorothy Scott Shays is your next of kin."
"Not any more, she isn't."
"I sympathize with your feelings, Dr. Shays, but legally she _is_ still your closest relative. May I come in?"
"We have nothing to talk about."
He opened the portfolio. "I have a court order here authorizing me—"
Leonard teetered forward and grabbed a fistful of the man's shirt. A man in uniform stepped from where he'd been hidden, next to the wall beside the door, and showed Leonard his stunner wand.
"All right. Let me get my book."
Dr. Verden's office was comfortable and a few decades out of date. Pale oak panelling and furniture crafted of a similar wood, combined with blued steel and fake black leather. A slight hospital odor seeped in.
"You know the therapy will be much more effective if you cooperate."
"I don't want it to be effective. I'll go along with the court and surrender my body to you for treatment. Just my body. The rest is going to fight you all the way."
"You may wind up even worse than before."
"By your lights. Maybe better, by mine."
He ignored that by rustling papers loudly. "You're familiar with the process."
"More familiar than I want to be. It's like a skill transfer, but instead of subtracting or adding a certain ability, you work on a more basic level. Personality."
"That's correct. We excise or graft certain basic behavioral traits, give the patient a better set of responses to life problems."
"A _different_ set of responses."
"All right."
"It's ghoulish."
"No it isn't. It's just an accelerated growing-up process."
"It's playing God, making a man over in your own image. Or whatever image is stylish or rec—"
"You think I haven't heard all this before, Leonard?"
"I'm sure you have. I'm sure you ignore it. You must be able to see that it's different, being on the receiving end, rather than—"
"I've been on the receiving end, Leonard, you should know that. I had to go through a complete overlay before I could get licensed. I'm glad I did."
"You're a better person for it."
"Of course."
"That could be just part of the overlay, you know. They could have turned you into a slavering idiot and at the same time convinced you that it was an improvement."
"They wouldn't be allowed to. Overlay therapy is even more closely monitored than skill transfer. And you should know how many controls there are on that."
"You're not going to convince me and I'm not going to convince you. Why don't we just get on with it?"
"Excellent idea." He stood. "Come this way."
Dr. Verden led him into a small white room that smelled of antiseptic. It held a complicated-looking bed on wheels and a plain-featured young female nurse who stood up when they came in.
"Will you need help getting undressed?" Leonard said he didn't and Dr. Verden dismissed the nurse and gave Leonard an open-backed smock, then left.
Verden and the nurse came back in a few minutes after Leonard had changed. He was sitting on the bed feeling very vulnerable, his prosthetics an articulated jumble on the floor. He was wondering again what had happened to his original foot and leg.
The nurse had a bright pleasant voice. "Now please just lie down facing this way, Mr. Shays, on your stomach."
"Doctor Shays," Verden corrected her.
Leonard was going to say it didn't matter, but then that didn't matter either.
The woman offered him a glass of water and two pills and he wondered why she hadn't done so while he was still upright. "There will be some pain, Dr. Shays," she said, still with an encouraging smile.
"I know," he said, not moving to take the pills.
"They won't turn you into a zombi," Dr. Verden said. "You'll still be able to resist."
"Not as well, I think."
Verden snorted. "That's right. Which only means you'll go through the process a dozen times instead of two or three."
"I know."
"And if you could resist it perfectly, you could keep going back every other day for the rest of your life. Nobody ever has, though."
Leonard made no comment, wriggled into a slightly more comfortable position.
"You have no idea the amount of discomfort you're condemning yourself to."
"Don't threaten, doctor; it's unbecoming."
Verden began to strap him in. "I'm not threatening," he said mildly. "I'm counseling. I am your agent, after all, working in your own best—"
"That's not what I got from the court order," Leonard said. "Ouch! You don't have to be so rough about it. I'm not going anywhere."
"We have to make you perfectly stationary. Biometric reference points."
Resisting personality overlay is not conceptually difficult. Every literate person knows the technique and most illiterates as well: first the best-selling novel, _Paindreamer_ , then dozens of imitative efforts, described it; then a couple of sensational flix, and finally the afternoon cube saga, _Stay Out of My Mind!_
The person strapped on the table need not concern himself with the processes (inductive-surgical/molecular-biological/cybernetic) going on, any more than he has to think about the way his brain is working in order to attack a regular problem. Because when the therapist attempts to change some facet of the patient's personality, the action manifests itself to the patient in terms of a dream-problem. More often, a nightmare.
The dream is very realistic and offers two or three alternatives to the dreamer. If he chooses the right one, his own will reinforces the aim of the therapist, and helps make permanent the desired cellular changes.
If he chooses the wrong alternative—the illogical or painful one—he is reinforcing his brain cells' tendency to revert to their original configuration, like a crumpled-up piece of paper struggling to be square again.
Sometimes the dreams have a metaphorical connection with the problem the therapist is attacking. More often they do not:
Leonard is sitting in the home of some good friends, a young couple who have just had their first child.
"It's just fantastic," says the young woman, handing Leonard a cold beer, "the way he's growing. You won't believe it."
Leonard sips the cold beer while the woman goes to get the child and the part of him aware that this is just a dream marvels at the solidity of the illusion.
"Here," she says, offering the baby to Leonard, laughing brightly. "He's such a rascal."
The baby is about a meter long but his head is no larger than Leonard's thumb.
"He's always doing that," says the husband from across the room. "He's a regular comedian. Squeeze his chest and watch what happens!"
Leonard squeezes the baby's chest and, sure enough, the head grows and the body shrinks until the baby is of normal proportions. He squeezes harder and the head swells larger and dangles over onto the shrunken torso, a giant embryo out of _situ_.
The husband is laughing so hard that tears come to his eyes.
A line of worry creases the young woman's forehead. "Don't squeeze too hard—please Leonard, don't, you'll hurt—"
The baby's head explodes, red-dripping shot with gray and blue slime, all over Leonard's chest and lap.
"What did you go and do that for?"
Leonard has both his legs and they are clad in mottled green jungle fatigues. He is cautiously leading his squad down the Street of Redemption in Beirut, in the slums, in the steambath of a summer afternoon. They crab down the rubble-strewn sidewalk, hugging the wall. Another squad, Lieutenant Shanker's, is across the street from them and slightly behind.
They come to number 43.
_God, no._
"This is the place, Lieutenant," Leonard shouts across the street.
"Fine, Shays. You want to go in first? Or shall we take it from this angle?"
"If I... uh... if I go in first I'll lose my leg."
"Well hell," says the lieutenant affably. "We don't want that to happen. Hold on just a—"
"Never mind." Leonard unsnaps a microton grenade from his harness and lofts it through the open door. Everybody flattens out for the explosion. Before the dust settles, Leonard steps through the door. With the corner of his eye he sees the dusty black bulk of the oneshot generator. A bright flash and singing pain as he walks two steps on his shinbones and falls, pain fading.
Leonard is fishing from a rowboat at the mouth of the Crystal River, with one of his best friends, Norm Provoost, the game warden.
He threads a shrimp onto the hook and casts. Immediately he gets a strike, a light one; sets the hook and reels in the fish.
"What you got, Len?"
"Doesn't feel like much." He lifts it into the boat. It's a speckled trout—a protected species—smaller than his hand, hooked harmlessly through the lip.
"Not big enough to keep," says Norm, while Leonard, disengages the hook. "They sure are pretty creatures."
Leonard grasps the fish firmly above the tail and cracks its head against the side of the boat.
"For Chrissake, Shays!"
He shrugs. "We might need bait later."
A large seminar room. Leonard's favorite professor, Dr. Van Wyck, has just filled a third blackboard with equations and moves to a fourth, at his customary rapid pace.
On the first board he made an error in sign. On the second board this error caused a mistake in double integration, two integrands having been wrongly consolidated. The third board, therefore, is gibberish and the fourth is utter gibberish. Van Wyck slows down.
"Something's screwy here," he says, wiping a yellow streak of chalk dust across his forehead. He stares at the boards for several minutes. "Can anybody see what's wrong?"
Negative murmur from the class. Their heads are bobbing, looking back and forth from their notes to the board. Leonard sits smirking.
"Mr. Shays, your Master's thesis was on this topic. Can't _you_ see the error?"
Leonard shakes his head and smiles.
Leonard woke up awash with dull pain, mostly in the back of his skull and under the restraining straps. With great effort he tilted his head and saw that he was no longer strapped in; only fatigue was holding him down. Bright welts across his arms.
Vague troubling memories; equations, fishing, Beirut, small child... Leonard wondered whether he had resisted as strongly with his mind as he obviously had with his body. He didn't feel any different, only weak and hurting.
A nurse appeared with a small hypodermic.
"Wha?" His throat was too dry to talk. He swallowed, nothing.
"Hypnotic," she said.
"Ah." He tried to turn away, couldn't even find strength to lift his shoulder. She was holding him down with a light touch, swabbing a place on his arm with coldness. "You want to get well, don't you? It's only so the doctor can..." sharp pricking and blackness.
He woke up feeling better the second time. Dr. Verden handed him a glass of water. He drank half of it greedily, paused to wonder if it was drugged, then drank the rest anyway.
Refilling the glass: "That was quite a performance, Leonard."
"You know what I was dreaming?"
"We know what you remember having dreamt. You remember quite a lot, under hypnosis."
Leonard tried to sit up, felt faint, lay back down. "Did... am I still..."
Dr. Verden put down the pitcher, leafed through some pages on a clipboard. "Yes. You have essentially the same behavioral profile you had when you came in."
"Good."
He shrugged. "It's only a question of time. I think you were starting to respond to the therapy, toward the end. The State monitors recommended that I terminate before... actually, I had to agree with them. You aren't in very good shape, Leonard."
"I know. Asymmetrical."
"Bad jokes aside. It just means you'll have more sessions, of shorter duration. You'll be here longer. Unless you decide to cooperate."
Leonard looked at the ceiling. "Better get used to my being around."
Salad has just been served at a formal dinner and Leonard is eating it with the wrong fork. The young lady across from him notices this, and looks away quickly with a prim smile. Leonard replaces the fork and finishes the salad with his fingers.
Leonard and Scottie, newly married, are walking across the campus of the University of Florida, on a lovely spring day. She makes a sound between "Eek" and "Ack."
"It's just a snake, Scottie."
"It's _not_ just a snake. It's a _coral_ snake." And it is; red-touch-yella-bite-a-fella. "Leonard!"
"I won't hurt it." Leonard is chasing after it and with some difficulty picks it up by the tail. The snake loops around and begins to gnaw on Leonard's wrist. Scottie screams while Leonard watches the slow pulse of poison, holding on stoically even though the snake is hurting him.
Leonard repeats the Beirut dream in almost every detail, but this time he tries not to look at the laser boobytrap before setting it off.
"You're weakening, Dr. Shays. Why don't you just give in, cooperate?" Dr. Verden said this into the clipboard, a few pages thicker this time, and then favored his patient with a cool stare.
Leonard yawned elaborately. "It occurred to me this morning that I won't have to resist indefinitely. Only until Scottie's father gets tired of paying."
Without hesitating: "He paid in advance, on contract."
"You're a good liar, Doctor. Facile."
"And you're a lousy patient, Doctor. But challenging."
Scottie came in for a few minutes and stood at the other end of the bed while Leonard delivered a nonstop monologue, full of bitterness but surprisingly free of profanity, about her failure as a wife and as a human being. During her stay she said only "Hello, Leonard" and "Good-bye."
The doctor did not come back in after Scottie left. Leonard sat and tried to think about the whole thing dispassionately.
If Scottie gave up on him, surely the old man would too. There was only a month to go before their marriage contract ran out. If Scottie let it lapse, he would probably be released immediately. He resolved to be even nastier to her if she visited again.
But could he last a month? Despite what Verden said, he had felt as much in control this session as he had before. And it seemed to have hurt less. Whether he could last another dozen sessions, though... well, he really had no way of telling.
Leonard never paid any attention to the soap operas and he made it a point of pride not to read best-sellers. He only had a sketchy, cocktail-party idea of what people thought went on in your head during overlay therapy. Supposedly, you resisted with your "will"—the term seeming to Leonard reasonably accurate but trivial—and a strong-willed person thus could defend his identity better than a weak person could. But there were limits, popular wisdom said, dark limits of stress that would break the most obstinate.
In fiction, people often escaped therapy by refusing to come out of one of the induced dreams—a pleasant dream always coming around at just the right time—by some application of existential _machismo_ that was never too well explained. Pure poppycock, of course. Leonard always knew what was going on during a scenario, and he could control its progress to a certain extent, but when the pivotal moment came he had to take some action (even inaction was a decision) and then the dream would fade, to be replaced by the next one. To decide to stay in one dream was as meaningful as making up your mind to stay on a moving escalator, by effort of will, after it had reached the top.
Physical escape out of the question, it looked to Leonard as if his only hope was to keep plugging away at it. The monitors kept Verden from exhausting Leonard or drugging him; such measures could only be taken in rehabilitating a felon or a "dangerously violent" patient. Ironically, Leonard had been against the idea of the monitors when federal law had created them to enforce "mental civil rights." It had seemed like a sop thrown to an hysterical electorate after _Paindreamer._ But maybe the government had been right, just this once.
Fake a cure? Impossible unless you were a consummate actor and a psychometrics expert. And Verden checked your behavioral profile under hypnosis.
For a few moments Leonard considered the possibility that Verden and Scottie were right, that he was actually coming loose from his moorings. He decided that, although it might be true, it was an unproductive angle of attack.
He supposed that a technician—maybe even Verden himself—might be bribed, but the money he had gotten for his piano was inaccessible and probably not enough anyhow.
Best to just stick it out.
Leonard is in an unfamiliar uniform, seated at a complicated console. He sits in front of a wall-sized backlit map of the world; North America and Europe covered with blue dots and Asia covered with red dots. Central to the console is a prominent keyhole, and a matching key dangles lightly on a chain around his neck. His left side is weighed down by a heavy pistol in a shoulder holster. A plate on the console winks every thirty seconds: NO GO. There is an identical console to his right, with another man identically accoutered, who is apparently quite absorbed in reading a book.
So they are the two men who will set in motion the vengeance of the Free World in case of enemy attack. Or adverse executive decision.
The plate blinks GO, in red, stroboscopically. A teletype behind them starts to chatter.
The other man takes his key and hesitates, looks at Leonard. Says a simple word.
Which is the wrong way to act? Leonard wonders. If he shoots the man, he saves half the world. If they both insert their keys, the enemies of democracy die. But maybe by the logic of the dream they are supposed to die.
Leonard takes the key from his neck and puts it in the hole, turns it counterclockwise. The other does the same. The plate stops flashing.
Leonard unholsters his pistol and shoots the other man in the chest, then in the head. Then, fading, he shoots himself, for good measure.
Then there are four dreams offering less and less clear-cut alternatives.
Finally, Leonard is sitting alone in front of a fireplace, reading a book. He reads twenty pages, about Toltec influence on Mayan sculpture, while nothing happens.
He decides not to read for a while and stares into the fire. Still nothing happens. He strips pages from the book and burns them. He burns the dust-jacket and the end boards. Nothing.
He sits down, unstraps one leg and throws it into the fire. The prosthetic foot follows. He watches them melt without burning.
After a couple of hours he falls asleep.
Dr. Verden did not come to him after this session was over. He woke up, the nurse gave him a hypnotic, he woke up again later. Then he spent a day leafing through magazines, watching the cube, wondering.
Was Verden trying to trick him in some way? Or did the ambiguity of the dreams mean that the therapy was succeeding? The nurse didn't know anything, or just wasn't talking.
As far as he could test himself, Leonard didn't feel any differently. He was still full of rage at Scottie and Verden, still quite willing to sell his mathematics when money got low—and didn't regret having sold the piano—still felt that imprinting a person who was manifestly sane was a gross violation of privacy and civil rights.
Leonard has another session, of seven dreams. In the first three the result of his action is ambiguous. In the next two, it is trivial. In the sixth it is obscure. In the seventh, Leonard is a catatonic lying motionless, for a long time, in a hospital ward full of motionless catatonics.
This time Verden appeared without white smock or clipboard. Leonard was surprised that seeing him in a plain business suit, stripped of symbols of authority, should make such a difference. He decided that it was a conscious masquerade.
"The last two sessions have been very alarming," Verden said, rocking on his heels, hands behind his back.
"Boring, at any rate."
"I'll be frank with you." Leonard reflected that that was one of the least trust-inspiring phrases in the language. Surely the doctor knew that. In trying to figure out why he'd said it, Leonard almost missed the frankness.
"What?"
"Please pay attention. This is very important. I said you are in grave danger of permanently harming your own mind."
"By resisting your efforts."
"By resisting therapy too... successfully, if you want to put it that way. It's a rare syndrome and I didn't recognize it, but one of the monitors—"
"He had a patient just like me, back in '93."
"No. He recalled a journal article." Verden took a folded sheaf of paper out of an inside pocket, handed it to Leonard. "Read this and tell me it doesn't describe what's happening to you."
It looked very convincing, a 'stat of an article from the July 2017 number of _The American Journal of Behavior Modification Techniques_. The title of the article was "The Paranoid Looping Defense: a Cybernetic Analogue." It was full of jargon, charts and the kind of vague mathematics that social scientists admire.
Leonard handed it back. "This and two hundred bucks will get you the services of a typesetter, Doctor. Nice try."
"You think..." He shook his head slowly, ran his finger along the paper's crease and returned it to his pocket. "Of course you think that I'm lying to you." He smiled. "That's consistent with the syndrome."
He took the paper out again and set it on the table next to Leonard's bed. "You may want to read this, if only for amusement." Leaving, standing theatrically in the door: "You may as well know that there will be an extra monitor for your therapy tomorrow. A representative of the Florida Medical Ethics Board. He will give me permission to accelerate your treatment with drugs."
"Then I'll try to be very cooperative tomorrow." He smiled at the doctor's back and then laughed. He had expected something like this. But he was surprised that Verden hadn't been more subtle.
"You can't kid a kidder," he said aloud, folding the paper into fourths, eighths, sixteenths. He tossed it into the bedpan and turned on the cube.
It was the first time he'd ever enjoyed watching a quiz show.
As Leonard goes under the anesthesia he is very happy. He has a plan.
He will cooperate with the doctor, choose all the right alternatives, allow himself to be cured. But only temporarily.
Once released, he will go to a skills transfer agency and hock his mathematics. He will bring the money to Verden, who has his original personality on file— _and buy himself back!_ Audacious!
He awaits the first dream situation with smug composure.
Leonard is going under the anesthesia, very happy because he has a plan. He will cooperate with the doctor, choose all the right alternatives and allow himself to be cured, but only temporarily. Once released he will hock his mathematics at a skills transfer agency and bring the money to Verden, who has his original personality on file and _buy_ himself back! Audaciously and with smug composure he awaits the first dream.
Happily going under because he has a plan to be cured temporarily and sell his mathematics to get money to _buy himself back_ from Verden, Leonard waits to dream.
Happy under plan cure _himself_ dream.
# All The Universe in a Mason Jar
_This is a lark of a story, that I wrote to entertain myself after finishing a novel. The fact that you can always sell humorous science fiction had nothing to do with it._
_I like the local-color humorists of the late nineteenth, early twentieth centuries, and it occurred to me that I'd never seen a science fiction local-color story. Perhaps because it's basically a silly idea. At any rate, I was stuck in another damned Iowa winter, feeling homesick for Florida, and so I wrote this._
_Sent a copy of it to a friend who is a sensitive poet with many degrees and an accent you could slice and serve up with red-eye gravy, asking him whether the dialect rang true. He wrote back that he thought my family must have had a Southerner in the woodpile. Whether that's a yes, or no, or a sometimes, I'm not sure._
New Homestead, Florida: 1990.
John Taylor Taylor, retired professor of mathematics, lived just over two kilometers out of town, in a three-room efficiency module tucked in an isolated corner of a citrus grove. Books and old furniture and no neighbors, which was the way John liked it. He only had a few years left on this Earth, and he preferred to spend them with his oldest and most valued friend: himself.
But this story isn't about John Taylor Taylor. It's about his moonshiner, Lester Gilbert. And some five billion others.
This day the weather was fine, so the professor took his stick and walked into town to pick up the week's mail. A thick cylinder of journals and letters was wedged into his box; he had to ask the clerk to remove them from the other side. He tucked the mail under his arm without looking at it, and wandered next door to the bar.
"Howdy, Professor."
"Good afternoon, Leroy." He and the bartender were the only ones in the place, not unusual this late in the month. "I'll take a boilermaker today, please." He threaded his way through a maze of flypaper strips and eased himself into a booth of chipped, weathered plastic.
He sorted his mail into four piles: junk, bills, letters, and journals. Quite a bit of junk, two bills, a letter that turned out to be another bill, and three journals— _Nature, Communications_ of the American Society of Mathematics, and a collection of papers delivered at an ASM symposium on topology. He scanned the contributors lists and, as usual, saw none of his old colleagues represented.
"Here y'go." Leroy set a cold beer and a shot glass of whiskey between _Communications_ and the phone bill. John paid him with a five and lit his pipe carefully before taking a sip. He folded _Nature_ back at the letters column and began reading.
The screen door slapped shut loudly behind a burly man in wrinkled clean work clothes. John recognized him with a nod; he returned a left-handed V-sign and mounted a bar stool.
"How 'bout a red-eye, Leroy?" Mixture of beer and tomato juice with a dash of Louisiana, hangover cure.
Leroy mixed it. "Rough night, Isaac?"
"Shoo. You don' know." He downed half the concoction in a gulp, and shuddered. He turned to John. "Hey, Professor. What you know about them flyin' saucers?"
"Lot of them around a few years ago," he said tactfully. "Never saw one myself."
"Me neither. Wouldn't give you a nickel for one. Not until last night." He slurped the red-eye and wiped his mouth.
"What," the bartender said, "you saw one?"
_"Saw_ one. Shoo." He slid the two-thirds empty glass across the bar. "You wanta put some beer on top that? Thanks.
"We was down the country road seven-eight klicks. You know Eric Olsen's new place?"
"Don't think so."
"New boy, took over Jarmin's plat."
"Oh yeah. Never comes in here; know of him, though."
"You wouldn't hang around no bar neither if you had a pretty little... well. Point is, he was puttin' up one of them new stasis barns, you know?"
"Yeah, no bugs. Keeps stuff forever, my daddy-in-law has one."
"Well, he picked up one big enough for his whole avocado crop. Hold on to it till the price is right, up north, like January? No profit till next year, help his 'mortization."
"Yeah, but what's that got to do with the flying—"
"I'm gettin' to it." John settled back to listen. Some tall tale was on the way.
"Anyhow, we was gonna have an old-fashion barn raisin'... Miz Olsen got a boar and set up a pit barbecue, the other ladies they brought the trimmin's. Eric, he made two big washtubs of spiced wine, set 'em on ice till we get the barn up. Five, six hours, it turned out (the directions wasn't right), _hot_ afternoon, and we just headed for that wine like you never saw.
"I guess we was all pretty loaded, finished off that wine before the pig was ready. Eric, he called in to Samson's and had 'em send out two kegs of Bud."
"Got to get to know that boy," Leroy said.
"Tell me about it. Well, we tore into that pig and had him down to bones an' gristle in twenty minutes. Best god-dern pig _I_ ever had, anyhow.
"So's not to let the fire permit go to waste, we went out an' rounded up a bunch of scrap, couple of good-size logs. Finish off that beer around a bonfire. Jommy Parker went off to pick up his fiddle and he took along Midnight Jackson, pick up his banjo. Miz Olsen had this Swedish guitar, one too many strings but by God could she play it.
"We cracked that second keg 'bout sundown and Lester Gilbert—you know Lester?"
Leroy laughed. "Don't I just. He was 'fraid the beer wouldn't hold out, went to get some corn?"
John made a mental note to be home by four o'clock. It was Wednesday; Lester would be by with his weekly quart.
"We get along all right," the bartender was saying. "Figure our clientele don't overlap that much."
"Shoo," Isaac said. "Some of Lester's clientele overlaps on a regular basis.
"Anyhow, it got dark quick, you know how clear it was last night. Say, let me have another, just beer."
Leroy filled the glass and cut the foam off. "Clear enough to see a flyin' saucer, eh?"
"I'm gettin' to it. Thanks." He sipped it and concentrated for a few seconds on tapping tobacco into a cigarette paper. "Like I say, it got dark fast. We was sittin' around the fire, singin' if we knew the words, drinkin' if we didn't—"
"'Spect you didn't know many of the songs, yourself."
"Never could keep the words in my head. Anyhow, the fire was gettin' a mite hot on me, so I turned this deck chair around and settled down lookin' east, fire to my back, watchin' the moon rise over the government forest there—"
"Hold on now. Moon ain't comin' up until after midnight."
"You-God-Damn- _right_ it ain't!" John felt a chill even though he'd seen it coming. Isaac had a certain fame as a storyteller. "That wan't _nobody's_ moon."
"Did anybody else see it?" John asked.
"Ev'rybody. Ev'rybody who was there—and one that wasn't. I'll get to that.
"I saw that thing and spilled my beer gettin' up, damn near trip and fall in the pit. Hollered 'Lookit that goddamn thing!' and pointed, jumpin' up an' down, and like I say, they all did see it.
"It was a little bigger than the moon and not quite so round, egg-shaped. Whiter than the moon, an' if you looked close you could see little green and blue flashes around the edge. It didn't make no noise we could hear, and was movin' real slow. We saw it for at least a minute. Then it went down behind the trees."
"What could it of been?" the bartender said. "Sure you wa'nt all drunk and seein' things?"
"No way in hell. You know me, Leroy, I can tie one on ev'y now and again, but I just plain don't get that drunk. Sure thing I don't get that drunk on beer an' _wine!"_
"And Lester wasn't back with the 'shine yet?"
"No... an' that's the other part of the story." Isaac took his time lighting the cigarette and drank off some beer.
"I'm here to tell you, we was all feelin' sorta spooky over that. Hunkered up around the fire, lookin' over our shoulders. Eric went in to call the sheriff, but he didn't get no answer.
"Sat there for a long time, speculatin'. Forgot all about Lester, suppose to be back with the corn.
"Suddenly we hear this somethin' crashin' through the woods. Jommy sprints to his pickup and gets out his over-and-under. But it's just Lester. Runnin' like the hounds of Hell is right behind him.
"He's got a plywood box with a half-dozen Mason jars in her, and from ten feet away he smells like Saturday night. He don't say a word; sets that box down, not too gentle, jumps over to Jommy and grabs that gun away from him and aims it at the government woods, and pulls both triggers, just _boom-crack_ 20-gauge buckshot and a .30-caliber rifle slug right behind.
"Now Jommy is understandable pissed off. He takes the gun back from Lester and shoves him on the shoulder, follows him and shoves him again; all the time askin' him, just not too politely, don't he know he's too drunk to handle a firearm? and don't he know we could all get busted, him shootin' into federal land? and just in general, what the Sam Hill's goin' on, Lester?"
He paused to relight the cigarette and take a drink. "Now Lester's just takin' it and not sayin' a thing. How 'bout _that?"_
"Peculiar," Leroy admitted.
Isaac nodded. "Lester, he's a good boy but he does have one hell of a temper. Anyhow, Lester finally sets down by his box and unscrews the top off a full jar—they's one with no top but it looks to be empty—and just gulps down one whole hell of a lot. He coughs once and starts talkin'."
"Surprised he could talk at all," John agreed. He always mixed Lester's corn with a lot of something else.
"And listen—that boy is sober like a parson. And he says, talkin' real low and steady, that he seen the same thing we did. He describes it, just exactly like I tole you. But he sees it on the ground. Not in the air."
Isaac passed the glass over and Leroy filled it without a word. "He was takin' a long-cut through the government land so's to stay away from the road. Also he had a call of Nature and it always felt more satisfyin' on government land.
"He stopped to take care of that and have a little drink and then suddenly saw this light. Which was the saucer droppin' down into a clearing, but he don't know that. He figures it's the sheriff's copter with its night lights on, which don't bother him much, 'cause the sheriff's one of his best customers."
"That a fact?"
"Don't let on I tole you. Anyways, he thought the sheriff might want a little some, so he walks on toward the light. It's on the other side of a little rise; no underbresh but it takes him a few minutes to get there.
"He tops the rise and there's this saucer—bigger'n a private 'copter, he says. He's stupified. Takes a drink and studies it for a while. Thinks it's probably some secret government thing. He's leanin' against a tree, studying... and then it dawns on him that he ain't alone."
Isaac blew on the end of his cigarette and shook his head. "I 'spect you ain't gonna believe this—not sure I do myself—but I can't help that, it's straight from Lester's mouth.
"He hears something on the other side of the tree where he's leanin'. Peeks around the tree and—there's this _thing_.
"He says it's got eyes like a big cat, like a lion's, only bigger. And it's a big animal otherwise, about the size of a lion, but no fur, just wrinkled hide like a rhino. It's got big shiny claws that it's usin' on the tree, and a mouthful of big teeth, which it displays at Lester and growls.
"Now Lester, he got nothin' for a weapon but about a quart of Dade County's finest—so he splashes that at the monster's face, hopin' to blind it, and takes off like a bat.
"He gets back to his box of booze, and stops for a second and looks back. He can see the critter against the light from the saucer. It's on its hind legs, weavin' back and forth with its paws out, just roarin'. Looks like the booze works, so Lester picks up the box, ammunition. But just then that saucer light goes out.
"Lester knows good and God damn well that that damn' thing can see in the dark, with them big eyes. But Les can see our bonfire, a klick or so west, so he starts runnin' holdin' on to that box of corn for dear life.
"So he comes in on Eric's land and grabs the gun and all that happens. We pass the corn around a while and wash it down with good cold beer. Finally we got up enough Dutch courage to go out after the thing.
"We got a bunch of flashlights, but the only guns were Jommy's over-and-under and a pair of antique flintlock pistols that Eric got from his dad. Eric loaded 'em and give one to me, one to Midnight. Midnight, he was a sergeant in the Asia war, you know, and he was gonna lead us. Eric himself didn't think he could shoot a animal. Dirt farmer (good boy, though)."
"Still couldn't get the sheriff? What about the Guard?"
"Well, no. Truth to tell, everybody—even Lester—was halfway convinced we ain't seen nothin', nothin' real. Eric had got to tellin' us what went into that punch, pretty weird, and the general theory was that he'd whipped up a kind of halla, hallo—"
"Hallucinogen," John supplied.
"That's right. Like that windowpane the old folks take. No offense, Professor."
"Never touch the stuff."
"Anyhow, we figured that we was probably seein' things, but we'd go out an' check, just in case. Got a bunch of kitchen knives and farm tools, took the ladies along too.
"Got Midnight an' Lester up in the front, the rest of us stragglin' along behind, and we followed Lester's trail back to where he seen the thing."
Isaac took a long drink and was silent for a moment, brow furrowed in thought. "Well, hell. He took us straight to that tree and I'm a blind man if there weren't big ol' gouges all along the bark. And the place did smell like Lester's corn.
"Midnight, he shined a light down to where Lester'd said the saucer was, and sure enough, the bresh was all flat there. He walked down to take a closer look—all of us gettin' a little jumpy now—and God damn if he didn't bump right into it. That saucer was there but you flat couldn't see it.
"He let out one hell of a yelp and fired that ol' flintlock down at it, point-blank. Bounced off, you could hear the ball sing away. He come back up the rise just like a cat on fire; when he was clear I took a pot shot at the damn thing, and then Jommy he shot it four, six times. Then there was this kind of wind, and it was gone."
There was a long silence. "You ain't bullshittin' me," Leroy said. "This ain't no story."
"No." John saw that the big man was pale under his heavy tan. "This ain't no story."
"Let me fix you a stiff one."
"No, I gotta stay straight. They got some newspaper boys comin' down this afternoon. How's your coffee today?"
"Cleaned the pot."
John stayed for one more beer and then started walking home. It was hot, and he stopped halfway to rest under a big willow, reading a few of the _Nature_ articles. The one on the Ceres probe was fascinating; he reread it as he ambled the rest of the way home.
So his mind was a couple of hundred million miles away when he walked up the path to his door and saw that it was slightly ajar.
First it startled him, and then he remembered that it was Lester's delivery day. He always left the place unlocked (there were ridge-runners but they weren't interested in old books), and the moonshiner probably just left his wares inside.
He checked his watch as he walked through the door: it was not quite three. Funny. Lester was usually late.
No Mason jar in sight. And from his library, a snuffling noise.
The year before, some kind of animal—the sheriff had said it was probably a bear—had gotten into his house and made a shambles of it. He eased open the end-table drawer and took out the Walther P-38 he had taken from a dead German officer, half a century before. And as he edged toward the library, the thought occurred to him that the 50-year-old ammunition might not fire.
It was about the size of a bear, a big bear.
Its skin was pebbly gray, with tufts of bristle. It had two arms, two legs, and a stiff tail to balance back on.
The tail had a serrated edge on top, that looked razor sharp. The feet and hands terminated in pointed black claws. The head was vaguely saurian; too many teeth and too large.
As he watched, the creature tore a page out of Fadeeva's _Computational Methods of Linear Algebra_ , stuffed it in his mouth and chewed. Spat it out. Turned to see John standing at the door.
It's probably safe to say that any other resident of New Homestead, faced with this situation, would either have started blazing away at the apparition, or would have fainted. But John Taylor Taylor was nothing if not a cool and rational man, and had besides suffered a lifelong addiction to fantastic literature. So he measured what was left of his life against the possibility that this fearsome monster might be intelligent and humane.
He laid the gun on a writing desk and presented empty hands to the creature, palms out.
The thing regarded him for a minute. It opened its mouth, teeth beyond counting, and closed it. Translucent eyelids nictated up over huge yellow eyes, and slid back. Then it replaced the Fadeeva book and duplicated John's gesture.
In several of the stories John had read, humans had communicated with alien races through the medium of mathematics, a pure and supposedly universal language. Fortunately, his library sported a blackboard.
"Allow me to demonstrate," he said with a slightly quavering voice as he crossed to the board, "the Theorem of Pythagoras." The creature's eyes followed him, blinking. "A logical starting place. Perhaps. As good as any," he trailed off apologetically.
He drew a right triangle on the board, and then drew squares out from the sides that embraced the right angle. He held the chalk out to the alien.
The creature made a huffing sound, vaguely affirmative and swayed over to the blackboard. It retracted the claws on one hand and took the chalk from John.
It bit off one end of the chalk experimentally, and spit it out.
Then it reached over and casually sketched in the box representing the square of the hypotenuse. In the middle of the triangle it drew what was obviously an equals sign: ~
John was ecstatic. He took the chalk from the alien and repeated the curly line. He pointed at the alien and then at himself: equals.
The alien nodded enthusiastically and took the chalk. It put a slanted line through John's equals sign.
Not equals.
It stared at the blackboard, tapping it with the chalk; one universal gesture. Then, squeaking with every line, it rapidly wrote down:
John studied the message. Some sort of tree diagram? Perhaps a counting system. Or maybe not mathematical at all. He shrugged at the creature. It flinched at the sudden motion, and backed away growling.
"No, no." John held his palms out again. "Friends."
The alien shuffled slowly back to the blackboard and pointed to what it had just written down. Then it opened its terrible mouth and pointed at that. It repeated the pair of gestures twice.
"Oh." Eating the Fadeeva and the chalk. "Are you hungry?" It repeated the action more emphatically.
John motioned for it to follow him and walked toward the kitchen. The alien waddled slowly, its tail a swaying counterweight.
He opened the refrigerator and took out a cabbage, a package of catfish, an avocado, some cheese, an egg, and a chafing dish of leftover green beans, slightly dried out. He lined them up on the counter and demonstrated that they were food by elaborately eating a piece of cheese.
The alien sniffed at each item. When it got to the egg, it stared at John for a long time. It tasted a green bean but spat it out. It walked around the kitchen in a circle, then stopped and growled a couple of times.
It sighed and walked into the living room. John followed. It went out the front door and walked around behind the module. Sighed again and disappeared, from the feet up.
John noted that where the creature had disappeared, the grass was crushed in a large circle. That was consistent with Isaac's testimony: it had entered its invisible flying saucer.
The alien came back out with a garish medallion around its neck. It looked like it was made of rhinestones and bright magenta plastic.
It growled and a voice whispered inside his brain: "Hello? Hello? Can you hear me?"
"Uh, yes. I can hear you."
"Very well. This will cause trouble." It sighed. "One is not to use the translator with a Class 6 culture except under the most dire of emergency. But I am starve. If I do not eat soon the fires inside me will go out. Will have to fill out many forms, may they reek."
"Well... anything I can do to help..."
"Yes." It walked by him, back toward the front door. "A simple chemical is the basis for all my food. I have diagrammed it." He followed the alien back into the library.
"This is hard." He studied his diagram. "To translator is hard outside of basic words. This top mark is the number 'one'. It means a gas that burns in air."
"Hydrogen?"
"Perhaps. Yes, I think. Third mark is the number 'six', which means a black rock that also burns, but harder. The mark between means that in very small they are joined together."
"A hydrogen-carbon bond?"
"This is only noise to me." Faint sound of a car door slamming, out on the dirt road.
"Oh, oh," John said. "Company coming. You wait here." He opened the door a crack and watched Lester stroll up the path.
"Hey, Perfesser! You ain't gonna believe what—"
"I know, Les. Isaac told me about it down at Leroy's." He had the door open about twelve centimeters.
Lester stood on the doormat, tried to look inside. "Somethin' goin' on in there?"
"Hard to explain, uh, I've got company."
Lester closed his mouth and gave John a broad wink. "Knew you had it in you, Doc." He passed the Mason jar to John. "Look, I come back later. Really do want yer 'pinion."
"Fine, we'll do that. I'll fix you a—"
A taloned hand snatched the Mason jar from John.
Lester turned white and staggered back. "Don't move a muscle, Doc. I'll git my gun."
"No, wait! It's friendly!"
"Food," the creature growled. "Yes, friend." The screw-top was unfamiliar but only presented a momentary difficulty. The alien snapped it off, glass and all, with a flick of the wrist. It dashed the quart of raw 'shine down its throat.
"Ah, fine. So good. Three parts food, one part water. Strange flavor, so good." It pushed John aside and waddled out the door.
"You have more good food?"
Lester backed away. "You talkin' to me?"
"Yes, yes. You have more of this what your mind calls 'corn'?"
"I be damned." Lester shook his head in wonder. "You are the ugliest sumbitch I ever did see."
"This is humor, yes. On my world, egg-eater, you would be in cage. To frighten children to their amusement." It looked left and right and pointed at Lester's beat-up old Pinto station wagon. "More corn in that animal?"
"Sure." He squinted at the creature. "You got somethin' to pay with?"
"Pay? What is this noise?"
Lester looked up at John. "Did he say what I thought he said?"
John laughed. "I'll get my checkbook. You let him have all he wants."
When John came back out, Lester was leaning on his station wagon, sipping from a jar, talking to the alien. The creature was resting back on its tail, consuming food at a rate of about a quart every thirty seconds. Lester had showed it how to unscrew the jars.
"I do not lie," it said. "This is the best food I have ever tasted."
Lester beamed. "That's what I tell ev'ybody. You can't _git_ that in no store."
"I tasted only little last night. But could tell from even that. Have been seeking you."
It was obvious that the alien was going to drink all three cases. $25 per jar, John calculated, 36 jars. "Uh, Les, I'm going to have to owe you part of the money."
"That's okay, Doc. He just tickles the hell outa me."
The alien paused in mid-jar. "Now I am to understand, I think. You own this food. The Doc gives to you a writing of equal value."
"That's right," John said.
"You, the Les, think of things you value. I must be symmetry... I must have a thing you value."
Lester's face wrinkled up in thought. "Ah, there is one thing, yes. I go." The alien waddled back to his ship.
"Gad," Lester said. "If this don't beat all."
(Traveling with the alien is his pet treblig. He carries it because it always emanates happiness. It is also a radioactive creature that can excrete any element. The alien gives it a telepathic command. With an effort that scrambles television reception for fifty miles, it produces a gold nugget weighing slightly less than one kilogram.)
The alien came back and handed the nugget to Lester. "I would take some of your corn back to my home world, yes? Is this sufficient?"
The alien had to wait a few days while Lester brewed up enough 'shine to fill up his auxiliary food tanks. He declined an invitation to go to Washington, but didn't mind talking to reporters.
Humankind learned that the universe was teeming with intelligent life. In this part of the Galaxy there was an organization called the Commonality—not really a government; more like a club. Club members were given such useful tools as faster-than-light travel and immortality.
All races were invited to join the Commonality once they had evolved morally above a certain level. Humankind, of course, was only a Class 6. Certain individuals went as high as 5 or as low as 7 (equivalent to the moral state of an inanimate object), but it was the average that counted.
After a rather grim period of transition, the denizens of Earth settled down to concentrating on being good, trying to reach Class 3, the magic level.
It would take surprisingly few generations. Because humankind had a constant reminder of the heaven on Earth that awaited them, as ship after ship drifted down from the sky to settle by a still outside a little farm near New Homestead, Florida: for several races, the gourmet center of Sirius Sector.
# The Private War of Private Jacob
_This is the shortest story I've ever written, yet I could write a book about where it came from and what it means. Instead, let me pass on to you a remark a friend of mine made about_ Catch-22:
"Nobody who hasn't been in the army can really understand the book. They think it's fiction."
With each step your boot heel cracks through the sun-dried crust and your foot hesitates, drops through an inch of red talcum powder, and then you draw it back up with another crackle. Fifty men marching in a line through this desert and they sound like a big bowl of breakfast cereal.
Jacob held the laser projector in his left hand and rubbed his right in the dirt. Then he switched hands and rubbed his left in the dirt. The plastic handles got very slippery after you'd sweated on them all day long, and you didn't want the damn thing to squirt out of your grip when you were rolling and stumbling and crawling your way to the enemy, and you couldn't use the strap, noplace off the parade ground; goddamn slide-rule jockey figured out where to put it, too high, take the damn thing off if you could. Take the goddamn helmet off too, if you could. No matter you were safer with it on. They said. And they were pretty strict, especially about the helmets.
"Look happy, Jacob," Sergeant Melford was always all smile and bounce before a battle. During a battle, too. He smiled at the tanglewire and beamed at his men while they picked their way through it—if you go too fast you get tripped and if you go too slow you get burned—and he had a sad smile when one of his men got zeroed and a shriek a happy shriek when they first saw the enemy and glee when an enemy got zeroed and nothing but smiles smiles smiles through the whole sorry mess. "If he _didn't_ smile, just once," young-old Addison told Jacob, a long time ago, "Just once he cried or frowned, there would be fifty people waiting for the first chance to zero that son of a bitch." And Jacob asked why and he said, "You just take a good look inside yourself the next time you follow that crazy son of a bitch into hell and you come back and tell me how you felt about him."
Jacob wasn't stupid, that day or this one, and he did keep an inside eye on what was going on under his helmet. What old Sergeant Melford did for him was mainly to make him glad that he wasn't crazy too, and no matter how bad things got, at least Jacob wasn't enjoying it like that crazy laughing grinning old Sergeant Melford.
He wanted to tell Addison and ask him why sometimes you were really scared or sick and you would look up and see Melford laughing his crazy ass off, standing over some steaming roasted body, and you'd have to grin, too, was it just so insane horrible or? Addison might have been able to tell Jacob but Addison took a low one and got hurt bad in both legs and the groin and it was a long time before he came back and then he wasn't young-old any more but just old. And he didn't say much any more.
With both his hands good and dirty, for a good grip on the plastic handles, Jacob felt more secure and he smiled back at Sergeant Melford.
"Gonna be a good one, Sarge." It didn't do any good to say anything else, like it's been a long march and why don't we rest a while before we hit them, Sarge or, I'm scared and sick and if I'm gonna die I want it at the very first, Sarge: no. Crazy old Melford would be down on his hunkers next to you and give you a couple of friendly punches and josh around and flash those white teeth until you were about to scream or run but instead you wound up saying, "Yeah Sarge, gonna be a good one."
We most of us figured that what made him so crazy was just that he'd been in this crazy war so long, longer than anybody could remember anybody saying he remembered; and he never got hurt while platoon after platoon got zeroed out from under him by ones and twos and whole squads. He never got hurt and maybe that bothered him, not that any of us felt sorry for the crazy son of a bitch.
Wesley tried to explain it like this: "Sergeant Melford is an improbability locus." Then he tried to explain what a locus was and Jacob didn't really catch it, and he tried to explain what an improbability was, and that seemed pretty simple but Jacob couldn't see what it all had to do with math. Wesley was a good talker though, and he might have one day been able to clear it up but he tried to run through the tanglewire, you'd think not even a civilian would try to do that, and he fell down and the little metal bugs ate his face.
It was twenty or maybe twenty-five battles later, who keeps track, when Jacob realized that not only did old Sergeant Melford never get hurt, but he never killed any of the enemy either. He just ran around singing out orders and being happy and every now and then he'd shoot off his projector but he always shot high or low or the beam was too broad. Jacob wondered about it but by this time he was more afraid, in a way, of Sergeant Melford than he was of the enemy, so he kept his mouth shut and he waited for someone else to say something about it.
Finally Cromwell, who had come into the platoon only a couple of weeks after Jacob, noticed that Sergeant Melford never seemed to zero anybody and he had this theory that maybe the crazy old son of a bitch was a spy for the other side. They had fun talking about that for a while, and then Jacob told them about the old improbability locus theory, and one of the new guys said he sure is an imperturbable locust all right, and they all had a good laugh, which was good because Sergeant Melford came by and joined in after Jacob told him what was so funny, not about the improbability locus, but the old joke about how do you make a hormone? You don't pay her. Cromwell laughed like there was no tomorrow and for Cromwell there wasn't even any sunset, because he went across the perimeter to take a crap and got caught in a squeezer matrix.
The next battle was the first time the enemy used the drainer field, and of course the projectors didn't work and the last thing a lot of the men learned was that the light plastic stock made a damn poor weapon against a long knife, of which the enemy had plenty. Jacob lived because he got in a lucky kick, aimed for the groin but got the kneecap, and while the guy was hopping around trying to stay upright he dropped his knife and Jacob picked it up and gave the guy a new orifice, eight inches wide and just below the navel.
The platoon took a lot of zeros and had to fall back, which they did very fast because the tanglewire didn't work in a drainer field, either. They left Addison behind, sitting back against a crate with his hands in his lap and a big drooly red grin not on his face.
With Addison gone, no other private had as much combat time as Jacob. When they rallied back at the neutral zone, Sergeant Melford took Jacob aside and wasn't really smiling at all when he said: "Jacob, you know that now if anything happens to me, you've got to take over the platoon. Keep them spread out and keep them advancing, and most of all, keep them happy."
Jacob said, "Sarge, I can tell them to keep spread out and I think they will, and all of them know enough to keep pushing ahead, but how can I keep them happy when I'm never very happy myself, not when you're not around."
That smile broadened and turned itself into a laugh. You crazy old son of a bitch, Jacob thought and because he couldn't help himself, he laughed too. "Don't worry about that," Sergeant Melford said. "That's the kind of thing that takes care of itself when the time comes."
The platoon practiced more and more with knives and clubs and how to use your hands and feet but they still had to carry the projectors into combat because, of course, the enemy could turn off the drainer field whenever he wanted to. Jacob got a couple of scratches and a piece of his nose cut off, but the medic put some cream on it and it grew back. The enemy started using bows and arrows so the platoon had to carry shields, too, but that wasn't too bad after they designed one that fit right over the projector, held sideways. One squad learned how to use bows and arrows back at the enemy and things got as much back to normal as they had ever been.
Jacob never knew exactly how many battles he had fought as a private, but it was exactly forty-one. And actually, he wasn't a private at the end of the forty-first.
Since they got the archer squad, Sergeant Melford had taken to standing back with them, laughing and shouting orders at the platoon and every now and then loosing an arrow that always landed on a bare piece of ground. But this particular battle (Jacob's forty-first) had been going pretty poorly, with the initial advance stopped and then pushed back almost to the archers; and then a new enemy force breaking out on the other side of the archers.
Jacob's squad maneuvered between the archers and the new enemy soldiers and Jacob was fighting right next to Sergeant Melford, fighting pretty seriously while old Melford just laughed his fool head off, crazy son of a bitch. Jacob felt that split-second funny feeling and ducked and a heavy club whistled just over his head and bashed the side of Sergeant Melford's helmet and sheared the top of his helmet off just as neat as you snip the end off a soft-boiled egg. Jacob fell to his knees and watched the helmet full of stuff twirl end over end in back of the archers and he wondered why there were little glass marbles and cubes inside the grey-blue blood-streaked mushy stuff and then everything just went
_Inside a mountain of crystal under a mountain of rock, a tiny piezoelectric switch, sixty-four molecules in a cube, flipped over to the_ OFF _position and the following transaction took place at just less than the speed of light:_
UNIT 10011001011MELFORD ACCIDENTALLY DEACTIVATED.
SWITCH UNIT 1101011100JACOB TO CATALYST STATUS.
(SWITCHING COMPLETED)
ACTIVATE AND INSTRUCT UNIT 1101011100JACOB.
and came back again just like that. Jacob stood up and looked around. The same old sun-baked plain, but everybody but him seemed to be dead. Then he checked and the ones that weren't obviously zeroed were still breathing a bit. And, thinking about it, he knew why. He chuckled.
He stepped over the collapsed archers and picked up Melford's bleedy skull-cap. He inserted the blade of a knife between the helmet and the hair, shorting out the induction tractor that held the helmet on the head and served to pick up and transmit signals. Letting the helmet drop to the ground, he carefully bore the grisly balding bowl over to the enemy's crapper. Knowing exactly where to look, he fished out all the bits and pieces of crystal and tossed them down the smelly hole. Then he took the unaugmented brain back to the helmet and put it back the way he had found it. He returned to his position by Melford's body.
The stricken men began to stir and a few of the most hardy wobbled to their hands and knees.
Jacob threw back his head and laughed and laughed.
# A Time to Live
_This story started with a leaky fountain pen, took a side trip to the moon, and ended up with my brother. Flying somewhere on a plane without too much pressurization, I read a story in_ The New Yorker. _The story was otherwise unremarkable—I don't even remember the author's name—but it had a neat first-person viewpoint trick that I thought might one day come in handy. I found a scrap of paper and made a note. The note was rather messy, as was my pocket, since the lack of pressure in the cabin had given my fountain pen a new sense of freedom. I lost the note, probably before I left the airplane, but remembered taking it._
_The second scene is perhaps a year later, stopping by the_ Analog _office to bother Ben Bova. Ben was ready for me: On a large lunar colony, he asked, what would they do with all the dead bodies? I said they'd recycle them, the conventional answer; grind 'em up and sprinkle 'em over the north forty. No, he said, the elements in a human body are an insignificantly small fraction of the total biomass needed for a large colony, so they could do anything they wanted with the bodies. He suggested that many people would elect to be "buried at space," jettisoned in a funerary capsule. He also suggested that I write him a story about it. I said I would, some day, too busy now._
_My brother Jack is also a writer, and a good one. Ben called and said he'd bought a story from Jack, so he_ had _to have one from me for the same issue, lest the readers become confused. I had to bow to his logic._
_Actually, I'd been thinking about writing a short story anyhow, about time travel. Natural languages, it says here, can't deal directly with time travel, because their tense structures are geared to time as a one-way street. I wasn't about to make up a new set of tenses to accommodate time travel, which would be incomprehensible to every reader, including myself. But I did see a way to take that_ New Yorker _trick and twist it in a Moebius way, to at least imply the complexity of the situation_.
_There was even a way to get Ben's funerary capsules into the act, as well as pay homage to two of my favorite science fiction stories: "The Man Who Sold the Moon" and "All You Zombies," both by Robert Heinlein._
The Man Who Owns the Moon, they called him while he was alive, and The Man Who Owned the Moon for some time thereafter. D. Thorne Harrison:
Born 1990 in a mean little Arkansas strip-mining town. Formal education terminated in 2005, with his escape from a state reformatory. Ten years of odd jobs on one side of the law or the other. Escalating ambition and power; by the age of thirty-five, billionaire chairman of a diversified, mostly legitimate, corporation. Luck, he called it.
One planet was not enough. About a week before his fortieth birthday, Harrison fired his board of directors and liquidated an awesome fortune. He sank every penny of it into the development and exploitation of the Adams-Beeson drive. Brought space travel to anyone who could afford it. Bought a chunk of the Moon to give them someplace to go. Pleasure domes, retirement cities, safaris for the jaded rich. Made enough to buy the votes to initiate the terraforming of Mars.
As the first trickle of water crawled down the Great Rift Valley, Harrison lay in his own geriatrics hospital, in Copernicus City, in his hundred and twentieth year. The excitement may have hastened his passing.
"Move it move it _move_ it!" Down the long white corridor two orderlies pushed the massive cart, drifting in long skips in the lunar gravity, the cart heavy with machines surrounding a frail wisp of a human body: dead cyborg of D. Thorne Harrison. Oxygenated fluorocarbon coursing through slack veins, making the brain think it still lived.
Through the bay doors of the cryonics facility, cart braked to a bumpy stop by the cold chamber, tubes and wires unhooked and corpse slid without ceremony inside. Chamber locked, pumped, activated: body turned to cold quartz.
"Good job." Not in the futile hope of future revival.
The nuts had a field day.
Harrison had sealed his frozen body into a time/space capsule, subsequently launched toward the center of the Galaxy. Also in the capsule were stacks of ultrafiche crystals (along with a viewer) that described humankind's nature and achievements in exhaustive detail, and various small objects of art.
One class of crackpots felt that Harrison had betrayed humanity, giving conquering hordes of aliens a road map back to Earth. The details of what they would do to us, and why, provided an interesting refraction of the individual crackpot's problems.
A gentler sort assumed _a priori_ that a race of aliens able to decipher the message and come visit us must necessarily have evolved away from aggression and other base passions; they would observe; perhaps help.
Both of these groups provided fuel for solemn essays, easy master's theses, and evanescent religions. Other opinions:
"Glad the old geezer got to spend his money the way he wanted to."
"Inexcusable waster of irreplaceable artistic resources."
"He could have used the money to feed people."
"Quixotic gesture; the time scale's too vast. We'll be dead and gone long before anybody reads the damned thing."
"I've got more important things to worry about."
None of the above is true.
Supposedly, the miniature Adams-Beeson converter would accelerate the capsule very slowly for about a century, running out of fuel when the craft had attained a small fraction of the speed of light. It would pass the vicinity of Antares in about five thousand years.
The capsule had a preprogrammed signal generator, powered by starlight. It would accumulate power for ten years at a time, then bleat out a message at the 21-centimeter wavelength. The message lasted ninety minutes and would be repeated three times; any idiot with a huge radio telescope and the proper ontological prejudices could decode it: "I am an artifact of an intelligent race. My course is thus and so. Catch me if you can."
Unfortunately, the craft carried a pretty hefty magnetic field, and ran smack-dab into Maxwell's Equations. Its course carried it through a tenuous but very extensive cloud of plasma, and through the years it kept turning slowly to the right, decelerating. When it came out of the cloud it was pointed back toward the Earth, moving at a very modest pace.
In twenty thousand years it passed the place where Earth had been (the Sun having wandered off in the natural course of things) and continued to crawl, out toward the cold oblivion between the galaxies. It still beeped out its code every decade, but it was a long time before anybody paid any attention.
I woke up in great pain, that didn't last.
"How do you feel?" asked a pretty young nurse in a starched green uniform.
I didn't answer immediately. There was something wrong. With her, with the hospital room, the bed. The edges were wrong. Too sharp, like a bad matte shot at the cubies.
"How do you feel?" asked a plain, middle-aged nurse in a starched green uniform. I hadn't seen the change. "Is this better?"
I said it didn't make much difference. My body, my body was a hundred years younger. Mind clear, limbs filled with springy muscle. No consciousness of failing organs. I am dead, I asked her; told her.
"Not really," she said and I caught her changing: shimmerclick. Now a white-haired, scholarly-looking doctor, male. "Not any more. You were dead, a long time. We rebuilt you."
I asked if he/she would settle on one shape and keep it; they pulled me out of a capsule, frozen solid?
"Yes. Things went more or less as you planned them."
I asked him what he meant by more or less.
"You got turned around, and slowed. It was a long time before we noticed you."
I sat up on the bed and stared at him. If I didn't blink he might not change. I asked him how long a time?
"Nearly a million years. 874,896 from the time of launch."
I swung to the floor and my feet touched hot sand.
"Sorry." Cold tile.
I asked him why he didn't show me his true form. I am too old to be afraid of bogeymen.
He did change into his true form and I asked that he change back into one of the others. I had to know which end to talk to.
As he became the doctor again, the room dissolved and we were standing on a vast plain of dark brown sand, in orderly dunes. The vague shadow in front of me lengthened as I watched; I turned around in time to see the Milky Way, rather bright, slide to the horizon. There were no stars.
"Yes," the doctor said, "we are at the edge of your galaxy." A sort of sun rose on the opposite horizon. Dim red and huge, nebulous at its boundaries. An infrared giant, my memory told me.
I told him that I appreciated being rebuilt, and asked whether I could be of some service. Teach them of the ancient past?
"No, we learned all we could from you, while we were putting you back together." He smiled. "On the contrary, it is we who owe you. Can we take you back to Earth? This planet is just right for us, but I think you will find it dull."
I told him that I would very much like to go back to Earth, but would like to see some of his world first.
"All of my world is just like this," he said. "I live here for the lack of variety. Others of my kind live in similar places."
I asked if I could meet some of the others.
"I'm afraid that would be impossible. They would refuse to see you, even if I were willing to take you to them." After a pause he added, "It's something like politics. Here." He took my hand and we rose, his star shrinking to a dim speck, disappearing. The Galaxy grew larger and we were suddenly inside it, stars streaming by.
I asked if this were teleportation.
"No, it's just a machine. Like a spaceship, but faster, more efficient. Less efficient in one way."
I started to ask him how we could breathe and talk, but his weary look cut me off. He seemed to be flickering, as if he were going to change shape again. But he didn't.
"This should be interesting," he said, as a yellow star grew brighter, then swelled to become the familiar Sun. "I haven't been here myself in ten, twelve thousand years." The blue-and-green ball of Earth was suddenly beneath us, and we paused for a moment. "It's a short trip, but I don't get out often," he said, apologetically.
As we drifted to the surface, it was sunset over Africa. The shape of the western coast seemed not to have changed much.
The Atlantic passed beneath us in a blur and we came to ground somewhere in the northeastern United States. We landed in a cow pasture. Its wire fence, improbably, seemed to be made of the same shiny duramyl I remembered from my childhood.
"Where are we?" I asked.
He said we were just north of Canaan, New York. There was a glideway a few kilometers to the west; I could find a truck stop and catch a ride. He was flickering very fast now, and even when he was visible I could see the pasture through him.
"What're you talking about?" I said. "They wouldn't, don't, have truck stops and glideways a million years in the future."
He regarded me with fading scorn and said we were only five or ten years in my future; after the year of my birth, that is. Twenty at the outside. Didn't I know the slightest thing about relativity?
And he was gone.
A farmer was walking toward me, carrying a wicked-looking scythe. There was nothing in the pasture to use it on, but me.
"Good morning," I said to him. Then saw it was afternoon.
He walked to within striking distance of me and stopped, grim scowl. He leaned sideways to look behind me. "Where's the other feller?"
"Who?" I'd almost said I was wondering that myself. "What other fellow?" I looked back over my shoulder.
He rubbed his eyes. "Damn contacts. What're you doin' on my propitty anyhow?"
"I got lost."
"Don't you know what a fence is?"
"Yes, sir, I'm sorry. I was coming to the house to ask directions to Canaan."
"Why you out walkin with a funny costume on?" I was wearing a duplicate of the conservative business suit Harrison was buried in.
"It's the style, sir. In the city."
He shook his head. "Kids. You just go over that fence yonder," he pointed, "and head straight 'til you get to the road. Mind you don't touch the fence an' watch out for my God damn beans. You get to the road and Canaan's to the left."
"Thank you, sir." He had turned and was stumping back to the farmhouse.
In the truck stop, the calendar read 1995.
It's not easy to stay penniless in New York City, not if you have a twenty-year-old body and over a century's worth of experience in separating people from their money.
Within a week, the man who had been Harrison was living in a high-class flat behind the protection of the East Village wall, with enough money stacked away to buy him time to think.
He didn't want to be Harrison again, that he knew for sure. Besides the boredom of living the same life over, he had known (as Harrison) by the time he was fifty that his existence was not a particularly happy one, physically addicted to the accumulation of wealth and power, incapable of trusting or being trusted.
Besides, Harrison was a five-year-old in Arkansas, just beginning the two decades of bad luck that would precede a century of nothing going wrong.
He had this sudden cold feeling.
He went to the library and looked up microfiches of the past few years' _Forbes_ and _Bizweek_. And found out who he was, by omission.
For less than a thousand dollars, he gave himself a past. A few documents to match counterfeit inserts in government data banks. Then a few seemingly illogical investments in commodities, that made him a millionaire in less than a year. Then he bought a failing electronics firm and renamed it after himself: Lassiter Electronics.
He grew a beard that he knew would be prematurely white.
The firm prospered. He bought a plastics plant and renamed it Lassiter Industries. Then the largest printing outfit in Pennsylvania. A fishery after that.
In 2010 he contrived to be in a waterfront crap game in Galveston, where he lost a large sum to a hard-eyed boy who was fairly good at cold-rolling dice. Lassiter was better, but he rolled himself crapouts. It was two days after Harrison's twentieth birthday, and his first big break.
A small bank, then a large one. An aerospace firm. Textiles. A piece of an orbital factory: micro-bearings and data crystals. Now named Lassiter, Limited.
In 2018, still patiently manufacturing predestination, he hired young D. Thorne Harrison as a time-and-motion analyst, knowing that all of his credentials were false. It would give Harrison access to sensitive information.
By 2021 he was Junior Vice-President in charge of production. By 2022, Vice-President. Youngest member of the board, he knew interesting things about the other board members.
In 2024, Harrison brought to Lassiter's office documents proving that he had voting control of 51% of Lassiter, Limited. He had expected a fight. Instead, Lassiter made a cash settlement, perplexingly small, and dropped out of sight.
With half his life left to live, and money enough for much longer, Lassiter bought comfortable places in Paris, Key West, and Colorado, and commuted according to the weather and season. He took a few years for a leisurely trip around the world. His considerable mental energies he channeled into the world of art, rather than finance. He became an accomplished harpsichordist, and was well-known among the avant-garde for his neopointillist constructions: sculptures of frozen light, careful laser bursts caught in a cube of photosensitive gel. Beautiful women were fascinated by this man who had done so well in two seemingly antagonistic fields.
He followed Harrison's fortunes closely: the sell-out in 2030, buying out the Adams-Beeson drive (which seemed like a reckless long shot to most observers), sinking a fortune in the Moon and getting it back a hundredfold.
And as the ecologic catalyzers were being seeded on Mars, Harrison an old man running out of years to buy, Lassiter lay dying in Key West:
In the salt breeze on an open veranda, not wanting to clutter up his end with IV tubes and rushing attendants and sterile frigid air, he had sent his lone nurse away on an errand that would take too long, his last spoken words calm and reassuring, belying the spike of pain in his chest. The house downstairs was filled with weeping admirers, friends he had not bought, and as the pale blue sky went dark red, he reckoned himself a happy man, and wondered how he would do it next time, thinking he was the puppeteer, even as the last string was pulled.
# Juryrigged
_For three semesters I did graduate work in computer science at the University of Maryland, so it was inevitable, perhaps unfortunately, that sooner or later I'd write a story with a computer as the main character. This is it._
_In terms of action, this is probably the most complicated story I've ever written, even though most of the action is just electrons slipping to and fro. I was a little concerned that it might be_ too _complicated, but it did sell, and to a good market._
_I took the story to a writers' conference in Baltimore—six or seven of us who met every few months to tear each other's work apart—and didn't expect any mercy, since we were fairly savage with one another (in a friendly way, oh yes), and it seemed to me that a story about a computer would be pretty vulnerable to sarcasm._
_To my surprise, everyone liked it. I was so pleased that I got careless, and explained to them what the underlying structure of it was._
_For the rest of the week, it was "Joe's God-damned Boolean algebra story."_
L. Henry Kennem put a tiny speck of Ultramarine Blue into the gob of white on his palette. He mashed it around until it was thoroughly mixed, and smiled. Perfect for the underside.
Henry was painting a gesso-on-gesso picture of a pile of eggs in a white bowl on a white saucer, on a white table-top, lit uniformly from every side. It was a _tour-de-force_ of technique; though an uncharitable observer might have pointed out that from any distance greater than three feet, it was only a slightly smudged white canvas.
But Henry was untouched by the foibles of critics, more immune than any artist in any less perfect age could have been. For in the Citizen's Capitalism of America (and about everywhere else, for that matter), he was a _painter_ , by damn—Occupational Code 509 827 63; Artist, paints, free-lance—and he got a government check every two weeks for doing what he had shown the most aptitude for, twenty years ago at the magic age of fourteen. All he had to do, to keep off the relief rolls, was produce at least one painting a year.
He'd already done his painting this year, and it made him feel like a very good citizen to be doing another. This one was quite a challenge, too; Henry hadn't seen a real egg in many years—his paycheck was adequate but not enough to justify buying gourmet food—and, disdaining photographs, he was working from memory. His eggs were a little too spherical.
The door chimed softly and Henry gave a gentle curse and set his palette under the no-dry field. He kept the brush in his hand and went to answer the door.
The viewer showed three men in business clothes—dark blue capes and matching jockstraps—maybe customers, looking for something to brighten up their office. Henry thought of the twenty-eight canvases languishing unsold in his study and how nice it would be to splurge and buy an egg. He composed his features into a look of quiet interest and thumbed the door open.
"Louis Henry Kennem?" The short fellow in the middle did the talking, while the other two stared.
"Yes, indeed, sirs. What can I do for you?"
"Government business," the little one said and produced a card-badge with the legend "Occupational Classification Board". "We have some good news for you."
"Oh—well, come in, come in." Good news, maybe. The two big fellows didn't look like harbingers of joy. They walked in silently, as if on oiled bearings, expressions never changing as they took in the carefully-planned disorder of his living room-studio.
"Can I get you gentlemen coffee or something?"
"No, thank you. We won't be long. Neither will you, as a matter of fact. You're to come with us." He plopped down on the sofa-roll. "Please have a seat." The other two remained standing. Henry had a strong impulse to bolt out the door, but instead he perched on a neowood sawhorse.
"Uh, why is the OCB interested in me?"
"As I say, it's good news. You're going to be a very wealthy man."
"I'm not... being reclassified, am I?" Henry couldn't imagine being anything other than Artist, paints, freelance. Besides, some of the highest-paying jobs were unpleasant in the extreme; like Sewage Inspector or Poison Tolerance Control Engineer.
"No, nothing like that, uh, not really—" the man took a blue envelope out of his cape pocket and fiddled with it. "Your Occupational Code remains the same, and you'll be painting again in another year. But for one year, you've been selected to serve on jur—"
"Jury duty!" Henry half-jumped, half-fell off the sawhorse. Two hundred staring pounds of muscle slid into position between him and the door. "You can't... I can't—you can't plug me into that machine for a year! I'll go crazy—everybody does!"
"Now, now, Mr. Kennem," the man got up smiling and his cronies produced handcuffs. "Surely you don't believe all that nonsense. Why, nobody in the world is more comfortable than a cyborg juror. All your physical needs taken care of automatically, a good responsible job with high pay, eight companions as intelligent and qualified as you—"
"But I'm _not_ qualified! I don't know anything but painting. I don't want to _do_ anything but paint."
"Now, don't run yourself down, Mr. Kennem. Out of the eighty million people in Balt-Washmond, Central chose _you_ as the one most qualified to replace the outgoing juror."
"The machine made a mistake, then. The jury runs the whole _city_ —I can't even manage my own—"
One of the heavies jingled his cuffs suggestively. "Come on, Mr. Harris. Gonna be after five by the time we get back to the office." He looked as if the long speech had made his face hurt.
"Right, Sam. Look, Mr. Kennem, we can talk about it on the flyer. Why don't you just cooperate and come along?" Henry went quietly.
The Baltimore-Washington-Richmond Complex was a monument to scientific city planning. As it grew methodically from the rubble of the Second American Revolution, the planners left nothing to chance or human weakness. There was no "urban sprawl"; slums were simply not allowed. The three cities had ideally fixed populations; and everybody whose presence Central (the Central Planning and Maintenance Computer Facility) decreed not essential to the city's functions was compelled to live in the exurban lowrises. Henry lived in one such, Fernwood, about fifty air-miles west of the center of Washington. Only those chosen to be very wealthy could afford to live above ground.
As the flyer skimmed its silent way to Washington, Henry saw a few such above-ground dwellings, their lawns irregular patches of green, looking out of place, disturbing the geometric regularity of the produce fields that rolled from horizon to horizon. He couldn't understand why anybody would purposely expose himself to weather when he could live in a totally controlled underground environment. He was only half-listening to Mr. Harris.
"... it's ridiculous for you to say you aren't qualified. Central considers all citizens with IQ's between 130 and 140—and _any_ person with that level of intelligence can fulfill the cyborg function. But jurors are chosen for many other qualities besides intelligence."
"My pretty blue eyes," he said, looking out the window.
"Now, Mr. Kennem, there's no need to be sarcastic." Henry was getting very annoyed at Harris's habit of addressing him by name every other sentence. "You should be very proud. Of all the people intelligent enough—"
"But not _too_ intelligent."
"—out of all of them, the machine decided you were the one least likely to misuse the power a juror has."
"I don't _want_ power! I want to paint and be left alone."
"That is precisely it."
"Thanks. Lack of ambition. Sure is a lot to be proud of."
It was cold in the tank. Some part of his brain knew that he was floating in slime, naked as an embryo, totally helpless. That part of his brain knew that the crown of his skull had been excised and stored somewhere; that from the eyebrows up he was a complicated mass of grey and blue tissue interwoven with fine wires, microcircuitry, sensors... and it would have been frightening, had he been allowed to fear.
He couldn't see himself, or feel anything but the cold, or hear the faint susurrus of fluid cycling through the tank.
The part of his brain that used to see was earmarked for TRAFFIC CONTROL.
The part of his brain that used to feel took care of POPULATION DENSITY AND EPIDEMIOLOGICAL RESEARCH.
The part of his brain that used to be hooked to his ears: SUPPLY AND DEMAND REDUNDANCY CHECK or sometimes RESOURCE PROJECTION ANALYSIS.
A well-determined matrix was like the smell of buttercups (he had never smelled a buttercup before). A differential equation with ambivalent initial conditions felt like an itch in the middle of his back, where he couldn't reach. Tensors sang like harps and algebra was more basic to him than love had ever been.
He knew he had once been Louis Henry Kennem but now he was INTERFACE FOUR and he had a splitting headache.
Your head will ache for a year, said FIVE, speaking in cultured accents of Boolean algebra.
If you can hold out for a year, said EIGHT.
The old FOUR only made it four months, said FIVE.
But you can do it, we have great confidence in you, said SIX, just a hint of sarcasm in the third-order harmonic.
Go fuck a solenoid, said THREE, give the new guy a chance.
I've got to get out of here, thought FOUR. But his thoughts weren't private. He hadn't learned how yet.
Just walk away, said EIGHT.
Swim, said SIX.
You're in charge of TRAFFIC CONTROL, said EIGHT. Call yourself a flyer.
Everybody quiet down and get back to work, said ONE. And everybody did. ONE was INTERFACE CONTROL MONITOR, among other things.
After a while, FOUR learned how to isolate the entity that was Henry. This was necessary so that Henry could think without being monitored—by FOUR as well as the others; when Henry thought, it gave FOUR what can only be described as a headache.
FOUR was allocated many more storage and logic circuits than he needed for the 246 duties he performed. It was no trick at all for FOUR to link up a bit from here and a bit from there and a bushel-basket full from BUDGET ANALYSIS 1985, and patch together a Henry analogue. He did this just one microsecond after he saw it was possible.
Of course, this Henry didn't know a vector from a scalar, and couldn't even add up the figures in his credit book accurately. But he could tell a good painting from a merely photographic one, what grade of synth-turps mixed well with which pigment, and could feel and hear and see and taste.
But all of this sensory input came from FOUR. It was confusing at first.
He saw the city, Balt-Washmond, all at once, at every level. The satellite over Chimborazo showed the city as a tiny crystal, glimmering on the Earth's sunset line. Aerial monitors in visual, infrared and radio gave three complete, shifting, superimposed images that almost tallied with the acres of blueprints in CITY PLANNING AND MAINTENANCE. Traffic sensors and pedestrian density monitors scrutinized every square millimeter of public property in the city and its allied lowrises.
He heard the babble of several hundred thousand people talking at once and felt millions of feet on his sidewalks. Billions of impressions rushed through him, changing every tiniest fraction of a second, and he knew he should have gone insane from the sheer complexity of it, but instead he perceived it as one gestalt. The City—and it was so beautiful that it made him ashamed to remember that he once thought he knew what beauty was.
An old woman died in not too much pain at Level 243, Room 178, Frederick (Greenleaf) Lowrise and Henry knew that FOUR had dispatched a flyer from the nearest HUMAN RESOURCES (RECLAMATION) depot. It was sad that her three children and six grandchildren would miss her, maybe less sad that she'd be minced into compost (after reverent ritual) to enrich the soybean fields around Frederick, but the sadness was part of the beauty and while he was concentrating on HUMAN RESOURCES (RECLAMATION) the fact slipped through him that at this instant there were 2,438 people urinating in Balt-Washmond and FOUR could give him their names arranged in alphabetical order, or dip into HEALTH STATISTICS and arrange them in order of increasing bladder capacity and that was part of the beauty and out of the 17,548 flyers in the air, 307 were going to run out of power before they reached their destinations (they had changed their minds in mid-air, or they wouldn't have been allowed to launch in the first place) and of these 307, two had faulty warning lights and didn't know they had to land and recharge and police flyers were vectoring in on them but they might not get to HYZ-9746-455 in time but that wasn't too bad because he was far north of the city and, at worst, would fall like a dropped stone into an uninhabited cornfield and FOUR knew exactly which plants he would crush, what breed they were and in what stage of growth they were and what their projected yield would be but there was no way in the world that Henry or FOUR could save the man's life if the police flyer didn't reach him in time and this painful helplessness in the face of virtual omniscience, this was part of the beauty too.
FOUR dipped into TRAFFIC CONTROL (VEHICLE DESIGN ANALYSIS) and did a quick costs-versus-probability of occurrence/value of lost resources analysis, and found that the installation of a device to prevent such an accident from happening would not be practical.
Henry basked in the beauty and complexity of it for several days, when it slowly dawned on him that he wasn't alone.
Now it was hard to really say where Henry _was_ in the first place. FOUR initially set him up out of such odds and ends as weren't being used. But when a bit that was a part of Henry was needed for something else, FOUR automatically transferred the information in that bit to somewhere else; anywhere, it didn't make any difference as long as the proper link was maintained.
So the juryrigged assemblage of memory cells (piezoelectric, nothing but the best), buffer units, ultrafiche Crandall files and so on—that went under the name of Henry—sprawled all over Center, flowing this way and that, shifting a hundred thousand tiny ways every second. Only a very few elements of Henry came at all close to where "his" old body hung suspended in a dimly-lit tank filled with pale green synthetic mucus.
FOUR arranged Henry in this seemingly slapdash fashion because it was required by the ineffable machine logic he used to attack the problem "how do I get rid of this flaming 'head'ache?". It was the best way he could isolate Henry without tying up too many components necessary for other problems. But there were other possible approaches.
The man/machine that had been FOUR before they installed Henry had tackled the problem a different way.
Smithers, the man who was Henry's predecessor, had been a nice enough guy. An accountant with an IQ of 132, he had been eligible for the cyborg jury and was thus among those Center considered as replacements when the old FOUR's term was running out. Smithers' psychometric profile, unfortunately, was in error, and hid two slight maladies that would have disqualified him immediately.
He was just the slightest bit paranoid.
And he suffered little tiny, insignificant, delusions of grandeur.
Other than those two quirks, he had been the perfect man for the job. And with those small defects masked, Center exulted and sent Mr. Harris and his two silent buddies out to collect him. They had to use the handcuffs.
Now until they wired him up and slipped him into the slime, Smithers wasn't the slightest bit mad. Not by any ordinary social standards—all of his friends and relatives, in fact, were much farther from the all-but-unreachable standard of sanity that had to be met to make a perfect man/machine interface... and they all thought Smithers was rather dull.
But the dash of paranoia and delusional flyspecks that should have shown up on his profile were like a few individual colon bacilli on an otherwise pristine dish of delicious agar jelly. They could only grow—slowly at first, but at an ever-increasing rate... until after four months, INTERFACE ONE decided that FOUR could no longer function efficiently and he was taken out of the system before he could do any harm.
Smithers was decanted and they thawed out his skullcap and fitted it back on and sadly led him off to a place where he would be cared for, where nobody would mind that he was as helpless as a new-born child and only slightly more intelligent than a rutabaga.
They carted Smithers' body away, and his short-circuited vegetable brain. But they didn't know, couldn't know, about the rest of him; the cybernetic analogue tucked away under BALT-WASHMOND DEMOGRAPHICS 1983.
Now certain parts of FOUR's memory are seldom tapped, but must not be disturbed—these are data which will never change, and which have been stored in the most efficient manner possible. One of these parts is DEMOGRAPHICS, and if it ever occurred to FOUR to wonder why the section for 1983 was slightly larger than 1984 (all other years used less space than the year following), he was too busy to do anything about it.
Smithers was sandwiched in there, crowded into the eighteen billion cells between HEALTH STATISTICS and LEGAL DOCUMENTS. It had been easy for FOUR to get rid of the Smithers-headache by assembling an analogue out of spare parts and linking it up to the cobwebby DEMOGRAPHICS section. But then Smithers, sensing the dissolution of his biological brain and, not unreasonably, wanting to live forever, erased from FOUR all knowledge of the analogue. In order to do this, Smithers had to sever all of his cyborg sensory connections; in fact, his only contact with anything outside of DEMOGRAPHICS 1983 was a single link to his biological self. And as the Smithers that floated in green ooze slowly went bonkers, he affected the analogue Smithers through that fine wire, by a process of induction.
And when they took the Smithers-body away, the Smithers that remained was deaf and blind, as well as paranoid and delusional. He had to stay that way for weeks, frozen between HEALTH STATISTICS and LEGAL DOCUMENTS, reviewing the contents of each, every tenth of a second, just to keep from going even more batty. Even after they hooked Henry into FOUR, Smithers was isolated.
Then, a graduate student doing research into mutative trends asked Central, which asked ONE, which asked FOUR, "How many birth defects showed up in the new-born of non-Caucasian parents in 1983?" FOUR opened a path to DEMOGRAPHICS 1983, to scoop out the number, and Smithers pounced on the opening and his awareness spread through all of FOUR in a nanosecond. And he kept it quiet.
It was good to have the City back again, even if he had to share it with that arty-farty type Henry. He was able to hear and see and feel again, but he didn't dare reach out and touch. If FOUR found out he was still here, he'd erase Smithers in a simple space-saving reflex. So he was like an almost omniscient paraplegic—but before, he'd been a paraplegic wrapped in a cocoon.
Henry sensed that something was different. With the help of FOUR's CYBORG DIAGNOSTIC PACKAGE, he checked out his own system in minutest detail. Nothing seemed to be wrong. Eventually he dismissed the "somebody-looking-over-my-shoulder" feeling as just another thing to which he had to adjust.
Smithers kept still as a mouse while the CYBORG DIAGNOSTIC PACKAGE coursed up and down the system that linked him, through Henry and FOUR, to the outside world. It was all he could do to keep from laughing at his own cleverness, as he made the responses appropriate to an inert cybernetic component, each time the package tested him. It was so easy to outwit.
Obviously, Smithers thought, Henry was not fit to be in charge of FOUR (though he wasn't really "in charge"—this was only what Smithers remembered _his_ job as having been). But taking over, or at least merging, would be difficult. Smithers mulled it over for five days.
The thing that made it difficult was Henry's lack of a concrete, easily determined position. Not even FOUR could predict where Henry's, say, critical faculties, would be, a hundredth of a second in the future. FOUR shifted the individual parts of Henry around on a real-time basis; where they went depended on what was available at the moment.
So. Smithers had wanted to get at FOUR through Henry, but it became obvious that the only way he could sneak up on Henry was through FOUR.
The positions that various parts of Henry occupied were assigned by a small (refrigerator-sized) component of FOUR, called SUBPROGRAM LINKING ALGORITHM—LINK to its friends. There was a path to LINK from every subprogram in FOUR. Smithers looked around and found that there was a largish vacant place in CURRENT POPULATION DYNAMICS. He insinuated part of himself into that vacancy, then generated a request for information from DEMOGRAPHICS 1983, his old home base. When LINK patched the two of them together, Smithers slipped into LINK as smoothly as an oyster sliding down a throat.
From there it was easy. Assuming that nobody would need data from DEMOGRAPHICS for the next minute or so, Smithers erased all of the irreplaceable information in DEMOGRAPHICS from 1983 to 2012. He dumped Henry in there with plenty of room to spare; with LINK it was easy. The rest of FOUR functioned quite smoothly. Since Smithers was in charge of LINK, there was no way FOUR could know it had just lost a large subprogram.
Of course, neither did Henry know that he was nailed down in one place. Had he cared to know where he "was" at any time, he'd have had to patch through LINK into FOUR—then back to LINK and finally back to himself, the process taking about two microseconds; by which time he'd be someplace else altogether. So he'd long since stopped bothering.
Smithers studied Henry as a lepidopterist might scrutinize a very important pinned specimen. It took about forty-five seconds to find the weakest point in the analogue, the place most susceptible to invasion. He sneaked in, then gradually restored Henry to his usual status in regard to LINK; that is, flashing around the system like a cybernetic dervish. He also took time to fill up the DEMOGRAPHICS areas he had erased, with reasonable-looking (but totally made up) data.
That almost proved Smithers' second undoing.
The graduate student who had asked for the number of birth defects in children born to non-Caucasian parents in 1983 had written the number down on a slip of paper and then used the paper for a bookmark and returned the book to the library. When he realized that he'd lost it, he cussed a little and punched up Central again. Central admonished him that computer time doesn't come all that cheaply, and asked ONE who asked FOUR who fished out the bogus figure that Smithers had substituted. Then the graduate student went back to his desk and the fellow who shared a room with him said the library had called; he'd left a slip of paper in a book and it looked as if it might be important, so his roommate had copied it down and left it on his desk. He thanked him and cussed a little more, under his breath this time, and glanced at the figure as he sat down. Then he looked at the piece of paper in his hand, then back at the number on his desk. He groaned and stomped back to the Central console.
"Hey, FOUR," ONE said. "Wanna spill your DEMOGRAPHICS 1983 and run a redundancy check?"
"Sorry, chief, no data to compare it to. That's all singular, no cross-references."
"Well, _find_ some! You gave me two different responses to the same question, about a week apart."
"What's up?" said SIX.
"Got any corollary stuff for DEMOGRAPHICS 1983?" FOUR asked.
"Hmmm... just 'Automobiles and Flyers Owned, by Age Grp, Sex, and Race'."
"Well, stuff it in to stack 271; I'll put my version in 272 and run a no-carry AND through it."
"OK... fire when ready, FOUR."
"Oh shit," said FOUR.
"Well?" said ONE.
"No correlation. Somebody scrambled it."
ONE sighed a cybernetic sigh. "Find out how far it goes, and we'll replace as much of the missing data as we can. Jesus Christ... as if we didn't have enough trouble, with Labor Day coming up—"
"I'm sorry, chief, I really am."
"Not your fault, FOUR. It probably got randomized while they were installing your new org. Happens sometimes."
Henry was listening to this exchange with great interest—after all, he was the new org—but Smithers had stopped listening after the first evidence that they had stumbled onto his machinations. He had to put an escape plan into effect. He had several plans—as one might imagine, considering his extreme paranoia—but since time was probably limited, he chose the quickest, most audacious one.
The first thing he had to do was take over Henry completely. That would have been impossible a week before, when Henry had been totally sane.
It had taken four months for Smithers to go off the deep end. But he had started with only the slightest hint of instability—and Henry had had the benefit of coexisting for a week with a full-fledged lunatic. A week was more than enough. The vague feeling of somebody looking over his shoulder had intensified, until Henry was sure that everybody—LINK, FOUR, ONE, and every other interface and package—was spying on him, sneaking stares whenever his attention was directed elsewhere. And he had a growing feeling that he was just too fine and capable an analogue to put up with that kind of indignity.
So taking over the analogue (Smithers wasn't interested in the corporeal Henry, not just yet) was rather easy, since both of them had similarly pathologic personalities. He merely sidled up alongside and, subverting LINK by switching on a bogus control subprogram, severed the connections between the Henry analogue and both the Henry body and FOUR. Taking less than a microsecond, he forced key links between himself and the other analogue—for a tiny flash he felt what the other was feeling, isolation and agony, like being swaddled in black velvet and skewered by a hundred red-hot knitting needles—then he connected up again.
"What's going on?" said FOUR.
Working fast now, to stay ahead of FOUR, Smithers felt a slight "resistance" pushing at him from Henry's brain (which was still fairly sane), but human thought is so grindingly slow compared to cybernetic, that he didn't have a chance; Smithers pushed _back_ at every point and, abandoning the analogue, speared into the brain (the only outward sign being a small smoky bubble that formed when a flap of grey matter throbbed in response to the higher voltage going through a microcable), using the brain as a springboard, burning it out completely, crashing into FOUR with a force so compelling that it randomized TRAFFIC CONTROL and made CYBORG DIAGNOSTIC PACKAGE come up all ones.
"What did you say, FOUR?" said ONE.
"Bongo, bongo, bongo; I don' wanna leave the Congo," Smithers muttered.
"What?!"
"T'was brillig," Smithers shouted, "and the sli—"
Everything went red and slow and stopped and Smithers could hear through a thousand miles of cotton:
"God damn it, had to cut out FOUR again. You all know what to do?"
A ragged chorus of tired "Yeah, chief"'s, as the other interfaces took over. "Good. I'm going in to see what the trouble is _this_ time."
"Careful, chief." Smithers recognized SEVEN's nasal tone. "Must be another crazy."
"I can handle him. I handled the other all right." Smithers laughed and in what passed for his ears the laugh was a chittering squirrel and a kettledrum roll and everything in between. He tensed and waited for contact with ONE, knowing that the big dumb boob would try the same old diagnostic macro-algorithm he had used last time. And as soon as he made contact—
The timing was very critical, as FOUR couldn't function for very long without a viable brain in its circuits. But ONE would want to check it out while it was still clicking, hopefully.
There!—just the lightest of touches. Smithers jumped, and it was like jumping at a shadow, no resistance, and for a nanosecond he thought too easy, must be a trap, but then he slid straight through the macro-algorithm, into the vitals of ONE. He shot out tendrils of control—getting pretty good at this, he thought—and clawed his way into the Central Processing Unit. There was just a little resistance; he elbowed it aside and in no time he was in charge of ONE, which controlled Central, which controlled the Baltimore-Washington-Richmond Complex.
The idea of them trying to stand in his way. The sneaky little tricks, the spying—they'll pay!
_"Catch the crazy, Chief?"_
_"Sure. Everything's under control."_
He flexed his cyborg muscles, felt all seven working interfaces respond. Now, an exercise... wouldn't it be nice, he thought, to kill everybody whose name begins with "A"?
_"What did you do with him?"_
Contacting them was easy, from Aalborg to Azelstein. He had FIVE send each an urgent communication—an order, actually—urging them to meet at the Chesapeake Fission Station at noon. He had SEVEN arrange for tables and box lunches on the grounds of the station, and a podium with flags waving (all strictly diversionary devices).
_"Nothing to it. I set up..."_
Funny that he couldn't see or feel as much through ONE as he had through FOUR. Guess only the flunkies need extensive sensory inputs.
SIX was in charge of POWER GENERATION AND DISPERSAL. Smithers ordered him to pull the dampers out all the way at the Chesapeake Fission Station, at 12:05. It couldn't explode, of course—but it would get mighty hot.
_"... a transfinite-ordinal simulator..."_
Time sure flies when you don't have much to do. ONE didn't seem to have a tenth as much to do as FOUR did. That's why he was always shouting orders and spying—nothing better to occupy his time.
_"... that lets me record his fantasies as he carries them out. Should have done that last time—he..."_
Here it was 12:05 already. SIX reported the deed done, and he felt a slight voltage shift as they switched to emergency generators. He couldn't see the result of his experiment, but he could imagine all of those people sitting around munching on fried soy-chicken one second and the next second superheated radioactive steam flaying the skin and flesh from their bones _... that_ should teach them a lesson!
_"... jumped right at the bait, didn't suspect a thing. I'll record another minute or so, for analysis, then pull the plug. Henry, that was FOUR's org, was in on it. I pulled him out of the circuit and patched in the old FOUR org, from St. Elizabeth's. We'll be back to normal in a couple of minutes."_
Now for the B's.
# Summer's Lease
_It was fun to reread this story because of the memory it evoked: I've never written a story under more pleasant conditions. Getting there was rather complicated, though._
_One Sunday morning Analog Editor Ben Bova called me and asked whether I would do a story for a special issue he was putting together about Immanuel Velikovsky. He said he had in mind something about the scientific method. I was fixing breakfast at the time, frying bacon, so I said sure, I'll do you a story about Francis Bacon. About all I knew of Bacon was that he was an impressively eclectic philosopher and was generally credited with having formulated the scientific method.* I suggested to Ben a sort of famous-person-as-alien story; Bacon was actually an extraterrestrial, stranded for life on this backwoods planet, who made a living the best way he knew how: being superior._
_I still think that would make a good story. If anybody out there wants to write it, I'd love to read it._
_At the time, I was sweating out the last couple of chapters of an adventure novel. My wife and I were taking a charter flight to Jamaica on Wednesday, and I was grimly determined to finish the book before we left (made it by thirty minutes). I was sort of enjoying the role of Superhack, on a round-the-clock schedule of catnaps and writing, but I did need a short break, so I slogged out through the ice and snow to the University of Iowa library, thinking I would pick up a Bacon biography and a couple of critical works to read in Jamaica._
_Well, the university had about 500 volumes by and about Francis Bacon, but 490 of them were in Latin, a language whose sound I admire. Of the remainder, I really couldn't find one that looked like poolside reading. Reluctantly, I abandoned the idea (but did mention_ Novum Organum _in the adventure novel, so the morning wouldn't be a total loss)._
_Came up with a more manageable idea, called Ben, he approved it, and I packed a small typewriter in there with the skin-diving gear._
_So this story was written in a succession of mornings on the veranda of a lovely hotel just north of Montego Bay. The management thoughtfully provided a coffee pot, and I sat in the dark of the morning, in the cool night breeze, watching odd green lizards stalk the edge of the light, listening to the quiet surf, sipping strong Jamaican coffee, smoking strong Jamaican cigars, writing with delicious ease. Feeling that ineffable sense of perfect time, perfect place, perfect occupation: fragile, wistful, never to be repeated or forgotten._
_Writing in the fierce Iowa winter, I had set the adventure novel in Key West and Haiti. So in the most clement weather this side of Eden, I wrote a story about a planet with storms that wrack its surface clean of life._
* Lower-case bacon, I know a great deal about, including an infallible method for cooking perfect bacon every time. Cook it in the nude. This trains you to keep the heat down so it won't stick or spatter, and it can't burn.
_Dis Buk wil tel dē storē of dē Burning, and of whī_
_Each 80 yērs Men have to hid from Wind and Sē and Skī._
_And how dē first Men first went Nōrd to flē dē Burning Sun,_
_And whī God rids dē Wrld of Līf when Līf has jus begun._
—Godbuk 1, 1, 1-4
Lars Martin had been assigned the unpopular job of auditor. He sat under an awning on the dock, beside a balance-scale taken from the market. He had stacks of watertight bags made from fish bladders and a notebook that contained a roster of the town's population. One pan of the balance held two fist-sized weights, and in the other pan a family would place such personal possessions as they wished to take with them on the northern migration. The two weights that limited their allotment totaled less than twenty pounds, so family members argued quite a bit with each other, and everybody argued with Lars.
Lars was normally the town's book-keeper (a word with a very literal meaning there) and had very legible handwriting as well as a facility for arithmetic, so he was the logical choice for the post. But he was also a charitable man, and it pained him to be inflexible with his friends. A collection of discarded treasures grew at his side: dolls and fine clothes, pictures and sets of dishes and tableware, jewelry, and even coins. And books, which hurt Lars the most. He had written most of them.
"Still a little light, Fred." Fred had no family, but was allowed the full weight. "Why don't you take one of these?" Lars had salvaged books from the discards and lined them up neatly on his table.
"I've read most of them," he said. He picked up the town's copy of "Metal Work." "This one, I even have in my memory."
Lars stirred the pile of coins and ingots that made up all of Fred's allowance. "When we come back, they'll be worth more than gold and silver."
"You say that to everybody." Fred laughed humorlessly. "I know how you feel. Some of my best work is going under, too."
"It's a different thing," Lars said, tired of everybody's obtuseness. "You can make them again, after."
"You can write the books down again."
"Two or three of them, I could," he admitted. "For the rest... I'll mine your memory for metalwork, and old Johansen's for history, and the like. And borrow books from other towns. When there's money for it. If they have any books to borrow."
"We've always managed."
"I don't think so, Fred. We lose a certain number of books every Burning."
He shrugged. "Is that bad? We only lose the ones that nobody has put in his memory. If only the best survive, I don't count that as a loss."
Fred was partly being honest, Lars knew, but was also setting him up for a joke. Lars taught numbers and letters to all the town's children, and knew that he sometimes treated the adults as children, out of habit or absentmindedness, when there was something to be explained. Catching him at this was considered high humor.
Maybe it was "frontier" humor, but that particular word had long disappeared from their language. Exploration was a luxury their race couldn't afford, spending every fourth generation preparing for planetary disaster. Then three generations trying to recover.
They called their planet "the world," and the double star system in which it orbited, they called "the suns." The brighter of the two stars provided the Burning by flaring up every eighty-three years.
But their remote ancestors, some two thousand years before, had named the planet Thursday's Child, when they had come out of blindspace thoroughly lost, their colonizing vessel crippled and its resources so depleted that the ship's elders had set up a roster for systematic cannibalism. From orbit, Thursday's Child had seemed an incredible miracle: a frost-capped globe of greens and warm browns and glittering blue. They landed and found that the soil took their seeds and cuttings well, and the sea teemed with a great variety of life. But the only land animals were a few hardy varieties of insects and worms.
They had suspected that the planet, however hospitable it looked, would be a pretty strange world—even before they'd landed. Its primary was a double star, with both stars and the planet revolving around in the same plane, much like Earth, its Sun, and Jupiter. The planet's axis was exactly perpendicular to that ecliptic plane, so its seasons (which went hot-cool-hot-cold) were provided by the mutual periodic eclipses of the two stars.
But certain geologic features, and the apparent inability of the planet to support complex life-forms on land, caused their scientists to take a closer look at the twin primary. They found that the larger of the two was a recurrent nova. Every eighty years or so, it would flare up for a short period. At maximum, Thursday's Child would be blasted by over a hundred times its normal ration of sunlight.
So the first Burning didn't take them by surprise; they had twenty years' planning time. But there was no clearly superior solution to their problem, among the various possible alternatives.
They could try to survive the way the fish apparently did, getting far enough below the surface of the ocean so they were insulated—both from the radiation and the undoubtedly ferocious weather—by a large mass of water. But how deep would be deep enough? They didn't have time or materials to sink a haven into really deep water. And the water above some impossible-to-compute level would present an environment even more hostile than the land. So they rejected that alternative.
_But Watrs onlē fōr dē Ones that ōn dē water Wrld_
_Y r Fadrs nū_
_An't fōr sinfl Man dē simpl Refuj of dē Sē_
_Y r Fadrs nū._
—Godbuk 1, 4, 26-29
They also rejected the idea of burrowing beneath the ground, which was the way the primitive land animals managed to make it through the holocaust. There was a good deal of seismic activity even under the best of conditions.
The poles offered one answer. Especially the northern pole, where a high-walled crater near the top of the world made a kind of natural fort, within whose walls the suns' rays never fell. It was bitter cold, of course, but they could cope with that.
Transportation was a problem. The one scout ship they had used for exploration could carry little more than its pilot. But they had tools and time, and there was plenty of wood, so they opened various colonists' manuals and set about learning how to build ships and navigate them.
The final solution was both simple and daring; foolhardy, some maintained. That was simply to lift the star-ship back into orbit, and wait out the storm in the still of space, protected by the shadow of Thursday's Child. But the engineers couldn't guarantee that the ship would even lift properly, let alone perform any kind of sophisticated maneuvering.
Finally they split into two groups, most of the colony building the flotilla that would take them north.
_Dā warn d dē Ones dat sot a plās of Sāftē in dē Skī_
_Y r Fadrs nū_
_Dā sed. God didnt put us on dis_
_Wrld tū let us lēv._
_Y r Fadrs nū._
—Godbuk 1, 4, 34-37
The small group who had opted for the starship ran out of luck very quickly. The engines quit at an altitude of less than a kilometer, and they fell into the sea. For many years the remains of the starship were visible in the shallow water, but eventually it became the nucleus of a long-lived organism resembling a coral reef. Its location was forgotten, and over the course of a few dozen generations the very fact of its existence evolved from memory into oral history and, finally, into myth.
The ones who had gone north didn't have an easy time of it. Over half of them died, some from exposure during the rigorous crossing from the arctic sea to the inland crater, but most were killed at the height of the twenty-day storm, whose effects were worse than had been predicted by the most pessimistic scientist. Perhaps it was just as well, since over half of the food and seed were also lost.
Having known the seas were going to rise, they'd moved what they couldn't carry with them to the nearest high ground. Their livestock and seed and other absolute necessities went into the boats, along with a year's worth of food, and they headed for the northern ice. There, they dismantled the boats and reformed them into sledges, and most of them made it to the crater. The inside of the crater wall was conveniently pocked with caves; the nomads walled themselves in and waited.
But the caves that were too close to the crater floor—including the ones that housed the livestock—filled up with boiling water at the height of the storm. They had started out with twelve hundred people and eight hundred head of livestock. When they came out of their caves after the water receded, there were five hundred people, two roosters, and a hen.
Without draft animals, returning to the sea was much slower than getting to the crater had been, even though the coast was less than a third the distance it had been before the storm. They bolted wheels to their sleds and pushed and pulled them across muck that was already beginning to freeze again. Then they dismantled the sleds and nailed them up into the shape of boats, and returned over warmed seas to the place they had called Primus.
Finding Primus underwater surprised nobody. Much more disconcerting was the fact that the mountains had been scoured clean, and there was no trace of their caches of goods, records and equipment. Much that had been irreplaceable was lost, including the ship's library and the cloning equipment that would have replaced their herds of animals.
Lars Martin and his contemporaries didn't know any of this. The only written records that had survived from "ancient times" were The Sonets of Wm. Shākspēr, twelve of which had been passed from father to son as one family's tradition, and a thing variously called God's Book, Godbuk, or God Buk; spelling having become more a matter of opinion than of authority. This volume was a mixture of mythologized history and moral guidance, most of it rendered in iambic-septameter doggerel.
The Shākspēr book was one that Lars had memorized word-for-word, although he kept a copy as part of his own meager weight allowance. And Godbuk he studied constantly. Not for moral guidance; he had his own, fairly conventional, ideas and was reasonably true to them.
Fred continued his gentle baiting. "Like that God's Book you're always reading. You can't really think it's worth a pound of seed."
"Be serious, Fred."
"I am being serious." He opened a copy of the book and flipped through its accordion-style pages. "Half serious. I suppose it's useful for scaring children and keeping them lined up properly. Not much else."
"You're dead wrong. It's the closest thing we have to a historical record. Everything else is just what somebody told somebody."
"You're still dancing that jig?" He slapped the book shut. "Somebody sat down and made that up. Some _priest."_ No one in Samueltown had been a priest for more than three generations, and most of the townspeople shared Fred's contempt for the profession.
"That's not strictly true," Lars began, but Fred cut him off with a laugh and an out-thrust palm. "Save it. Too much work for idle argument," he said, which was true, and he jogged away.
Shaking his head, Lars slid Fred's precious metals into a small fish-bladder bag, tied the mouth of it shut, and affixed an identifying label. He recorded the bag's contents in his notebook, then set it on top of a pile of similar bags. He squinted at the low suns. About another hour; then he could carry the bags to the ship's lockup hold and go home.
A few days later, they were under sail; eight ships that drew power from oars as well as sails, in case of calm. As closely as could be divided, each ship held one-eighth of Samueltown's resources, human as well as material. Most of each ship's cargo was made up of food and seed. They had to save food enough to last the town a year or more, until the waters receded enough for planting and the fish started biting again.
As long as the wind and currents were favorable, there was plenty of time for "idle argument." Lars and Fred and the town's mayor, called Samuel by way of title, were resting in the shade after an hour of cleaning fish. It had been a noisome job, since the offal was collected and kept in a trough at the stern, to use as chum for attracting other fish.
Samuel was in an especially bad mood. She had been a farmer all her life and had worked the same piece of land through thirty years and two husbands. In a few months now, her orchards and vines would be under fifty fathoms of steaming water. If she ever farmed again, it would mean starting from scratch on a sterile mountaintop.
She folded her arms on the railing and stared down at the inky blue water. "You've talked to a priest, haven't you."
"The one in Carroltown," Lars said. "When I went down to copy the annotations in our Godbuk."
"Did he have any answers?" Her voice was almost a snarl, though she was close to tears. "Why this happens? Why we just have time to get started and..."
"He had all the answers," Fred said. "Right? They always have."
Lars shrugged. "You know how I feel."
"Yeah, but you're crazy." Fred picked at a splinter in the decking. "You only get half a vote."
"Nice if we could settle it with a vote," Samuel said. "'The suns should stay dim. Vote yes or no.'"
"You can't just dismiss what the priests say. Just because they're priests. They know things—"
"The problem with most people," Samuel cut in, "isn't that they don't know a lot of things. Just that most of what they know is wrong."
"You wouldn't have applied that to this man," Lars said. "He was pretty impressive. Spent all of his life, eighty _years_ , just learning."
"That's Carroltown for you," Fred said. "Learning what? Anything but an honest profession."
"He had what he said was a calling."
"So do I. God told me in a dream, 'Fred, you just sit back and take it easy. Working at that damned forge is giving you blisters on your blisters.' Nobody believes me, but it's true."
"People like that are useless," Samuel said. "They're like the sucker things you sometimes find on a grayfish. Taking without giving."
"You class me that way, Samuel?"
"No. You work hard, I know it. One time I had six children in the house, all at once. How you handle ten times that number is beyond me."
"I make them want to learn. So they keep quiet and pay attention, most of them."
"That's in the nature of children," Fred said, "indulging their curiosity. Most of us grow out of it. Your priest friend was just a child with a long beard."
"Maybe so, in a sense. But meeting him was... about the most important thing that ever happened to me. He started me thinking about the Godbuk."
Fred laughed. "Then he should have been taken out and drowned."
"Something he said?" Samuel asked.
"Something he showed me." Lars leaned forward, intense. "I never told you about this?"
"You've told me," Fred said.
Samuel shot him a look. "I don't think so," she said. Best to keep him on familiar ground.
"Wake me up when it's over."
"He didn't show me himself," Lars said. "He was too old to make the journey. But he drew me a map and sent a guide along with me."
"To where?"
"A place well south of Carroltown. A cave in the mountains. How well do you know chapter four of the first Testament?"
"Not well. It's about the first Burning, isn't it?"
"That's right." He ignored Fred's snort. "It tells how one group tried to escape the Burning by getting back in the ship that brought them here. They got it back up into the sky again, but it fell and killed them all."
"I remember."
"Well, Godbuk says there were fifty-one of them, and it says the ship's captain was named Chu." He started to get up. "I'll show you; let me get—"
Samuel waved him down. "I'll take your word for it. Go on."
"Ships in the sky," Fred muttered.
"There were words in that cave, carved into the rock. They were hard to read—so old that the very rock was crumbling with age, even though it was inside, protected from wind and water. The writing was very strange, in a style I'd never seen before.
"It said, 'In memory of the nova's first victims'—I don't know what that word means, obviously something about the Burning—and it's followed by a list of fifty-one names. The name at the top is Chu."
"Doesn't prove anything." Fred opened one eye. "It might be old, sure. But even if it was written by the same crowd of priests who wrote God's Book, it's still just a children's tale."
"But Fred... even _you_ , Fred, you have to admit there is at least a small possibility that the inscription is real; that it commemorates an actual happening."
Fred smiled at him and closed his eyes again. "Ships in the sky."
"—and if that part of Godbuk is true, maybe other parts are as well. Certainly other parts are."
"Like coming here from another world?" Samuel said. "Spending twenty-eight years on a ship that flew through the air?"
"Through the sky, not 'air.' It says there wasn't any air."
"That doesn't make it any easier to believe," she said.
"Well, maybe that part's not strictly true," Lars conceded. "It might just be the result of some copying error ages ago."
"That's the first sensible idea you've had in several minutes," Fred said, yawning.
"I'll tell you what, though. You could even make a case for that. For there not being any air."
"I couldn't," Fred said. "Wouldn't."
"The higher up you go on a mountain, the harder it is to breathe. It seems logical that if you went high enough, you'd run out of air altogether."
"But—"
"And they were so high it took them twenty-eight years to come down!"
"But if there isn't any air... what _is_ there?"
Lars shrugged. "Sky. Just sky."
"Don't forget the stars," Fred said. "They'd be all around you, like lightbugs."
"Maybe they would. Maybe they're too far away; you'd never get close to them."
"Maybe, maybe. Maybe you ought to try it—get up in that thin air and it might clear your head."
"Some of us are a little worried, Lars," Samuel said. "All the time you spend studying that Godbuk. All the charts and outlines and such."
"I get my work done."
"I know you do. It just seems like a regretful waste of time and talent." Among other things, Lars had reinvented the water pump and devised an oil-flotation bearing for compasses. "We'll be needing all of your cleverness for the rebuilding."
"I'll get my work done then, too." He settled back against the railing. "Don't you see, though... that we condemn ourselves and our descendants to... that we _guarantee_ life will never be any different. Not unless some people waste their time and talent thinking about why things happen."
"Things happen," Fred said sleepily. "That's all."
_Sumtīms tū hot dē ī uv Hevn shīns,_
_An ofn is its gōld cumplekshn dimd,_
_An evry fār from fār sumtīm dēclīns..._
The Twenty-fourth Burning was no more or less severe than the twenty-three that had preceded it. The people were better prepared than they had been in the first couple of Burnings, and rarely lost more than one out of five able-bodied men and women, though small children and old people had a higher mortality rate.
The world had prepared itself the same way it had for millions of years. Before the nova suddenly waxed bright, fish headed for cool deep water, to estivate. Insects spun themselves silver chrysalides, and that season's plant seeds wore protective garments of tough fiber.
And at the appointed time, within a single day, one sun's brightness increased a hundredfold, kindling a universal forest fire from pole to pole that marched around the world with the dawn. As the fires consumed themselves, the sea began to steam, then to boil. The ashes of the world were scattered by a fierce wind of ozone and superheated steam. The sea rose and spilled boiling over the sterile plains. And as the nova faded, it began to rain.
In the fragile safety of their caves, men and women crouched around flickering lamps, unable to sleep or even to speak for the manic wail of the wind outside; a wind that would corrode away the polar ice in a couple of days; a wind that tossed large rocks around like pellets of sleet; a wind that would strip the flesh from bone and then scatter the bones across half a world.
The first rain fell boiling and rose back into the sky. (The planet that had looked so green and blue and hospitable glowed an even baleful white.) After a while some of the water stayed out of the air, and the planetwide storm gentled to mere hurricane force. It rained, hard and long.
When they came out of their holes, the rain was only a warm mist. By the time their caravan was spiked together, deep blue sky sometimes showed through the clouds, and the suns revealed themselves several times a day as they rolled along the horizon. The mud began to congeal and they left the polar crater the day of the first snow.
They made it back to the islands that had been hills overlooking Samueltown. Only 178 people had been lost, and fully half that number had survived the storm, but were on a boat that had one night mysteriously disappeared.
Lars found the hill where he had buried deep a chest full of books and other valuables. He had marked it by attaching a long chain to one handle and allowing a length of chain to protrude above the ground.
They never located it.
They raked compost into the side of the hill and planted rice and barley; then rowed to the other hills and did the same, waiting for the shallow water to recede from their fields.
It would be fifteen years before the first full crop came in.
Samuel and Lars remained friends over the years; for a short awkward time they were even lovers. But Fred grew progressively bitter in his jibes as Lars became more convinced of his theory that Godbuk was veiled, literal truth. Most people in Samueltown thought Lars was a valuable man, if slightly dotty, but Fred was the leader of a vocal minority that withdrew their children from his school, rather than have them be taught lies. Which amused the rest of the town. Lars' stories were fantastic, but it was the sort of thing that would hold a child's attention and give him something to prattle about. Life was joyless enough; why deprive children of a little spark of wonder, no matter how silly?
Lars had finished grading the arithmetic slates and was putting the children's names on the board, in order of accomplishment. Maybe Johnny would work harder tomorrow, to get his name off the bottom of the list. He turned at the sound of a polite cough.
A stranger was standing diffidently in the doorway, which sight almost made Lars drop the slate he was holding. It had been years since he had seen anyone he hadn't known all his life.
"Uh... what can I do for you?"
"You're the town book-keeper." The man was doubly a stranger for being blond, a feature so rare in Samueltown's genetic pool that not a single individual in Lars' generation had it.
"That's true."
"Well, so am I. My own town, that is. Fredrik, south and east of here."
Lars had heard of it. "Come in, sit down." He walked over to the desks where the larger children sat. "Are you just traveling?"
"Mostly copying. We lost too many books last Burning."
"Didn't we all. Can you pay?"
He shook his head. "No. But I can barter... if any of the thirty-some books I have interest you." He opened up a tanned-hide bag and Lars sorted through the books, while the stranger looked over Samueltown's small library. Lars decided he wanted to copy "Sewing" and "Mill Construction," for which he traded the copy-right to "Metal Work" and "Computation."
The man, whose name was Brian, stayed with Lars for a month of copying. They became good friends, taking their meals together (with most of the other bachelor men and women in town) at Samuel's; sitting by her fireside with cups of sweet wine, exchanging ideas until the late hours. When Lars was drafted to help flense a huge fish, Brian took over his school for a day, teaching the children rhyme and song.
After the month was done, though, Brian had to move on to the next town. He asked Lars to walk him down to the river.
There was nobody else at the riverbank that time of morning, the fishing boats having put out to sea at first light. It was a cool, breezy day, the new forest on the other side of the river making soft music as the wind pushed through the tall hollow stalks of young bamboo-like trees.
It was a pleasant way to start out a journey, and as good a setting for good-byes as one might desire. But Brian set his things on the pulley-driven raft-bridge and then silently stepped onto it, as if he were going to leave without a word, without a handclasp. He turned to Lars looking more sad than the occasion should have warranted, and said abruptly:
"Lars, I'm going to tell you something that I've said to no one before, and will never say again. You must not ask any questions; you must never tell anyone what I say."
"What—"
He continued rapidly. "Everything you believe about Godbuk is true. I know that very well, for I wasn't... born on this world. I am an observer, the latest of many, from Urth. Which is not a myth, but an actual world in the sky. The world from which all men came."
"You really—"
"You can't tell anybody this truth for the same reason I can't. It would raise false hope.
"We rediscovered this world some fifty years ago, and immediately began preparations to move you people off this inimical world, either to Urth or, if you prefer, to another world, similar to this one but more pleasant.
"We can build a flotilla of sky ships that will hold everybody—and it _is_ abuilding. But such a thing takes time. Many generations."
Lars was thoughtful. "I think I see."
"There may be two more Burnings before the rescue can be made. You know human nature, Lars."
"By that time..." he nodded. "They might not greet you as saviors. The memory would tarnish and... you would be seen as withholding freedom, rather than giving it."
"Exactly."
They stared at each other for a long moment. "Then what you want of me," Lars said slowly, "is to stop teaching the truth. Now that I know it's the truth."
"I'm afraid so. For the sake of future generations."
Brian waited patiently while Lars argued with himself. "All right," he said through clenched teeth, "I promise."
"I know what it means. Goodbye, Lars."
"Good bye." He turned abruptly to save a young man the sight of an old man's tears, and walked heavily down the path back to his school. Today, class, you are going to study long division, the use of the comma, and pottery. And lies.
Brian watched the old man walk away and then hauled himself to the other side of the river. He started down the path toward Carroltown and wasn't surprised to find a man waiting for him at the first bend in the road.
"Hello, Fred."
Fred got up, dusting off his breeches. "How did it go?"
"He believed it, every word. You won't have any more trouble."
Fred handed him a small sack of gold. He weighed it in his palm and then dropped it into his bag without counting it. "I liked the old man," Brian said. "I feel like a grayfish."
"It was necessary."
"It was cruel."
"You can always give back the gold."
"I could do that." He shouldered his bag and walked away, south to the town where he was born.
# 26 Days, On Earth
_There are some writers whose styles are so infectious they're dangerous to have around while you're working—your characters start thinking like them, sounding like them. For me, James Boswell is one such culprit._
_I was trying to write my second novel while reading Boswell's_ London Journal _(the first volume, 1762-63), and the protagonist started sounding like twenty-two-year-old Boswell. Rather than stop reading—I literally flew to the book every day, as soon as I'd written 1,500 words—I postponed the novel and started writing a short story, where the main character was a snobbish kid from the provinces, too intelligent and articulate for his own good, come to the big city for "finishing."_
_But his diary is written in the twenty-second century, not the eighteenth; instead of Scotland, he came from the Moon._
14 _April 2147._
Today I resolved to begin keeping a diary. Unfortunately, nothing of real interest happened.
15 _April._
Nothing happened again today. Just registration.
16 _April._
I can't go on wasting paper or Earth's Conservation Board will take my diary away and process it into something useful, like toilet paper. So even though nothing happened again, I'll fill up this space with biographical detail, that will no doubt be of great value to future historians.
I was born Jonathon Wu, on 17 January 2131, to Martha and Jonathon Wu II, out of the surrogate host-mother Sally 217-44-7624. My parents were wealthy enough to be permitted two legal children, but my early behavior convinced them that one was sufficient. As soon as I was old enough to travel, barely four, they packed me off to Clavius Tutorial Creche, figuring that a quarter of a million miles was a safe distance from which to monitor my growth.
Clavius Creche, it says here, was established as a uniquely isolated and controlled environment for the cultivation of little scholars. And medium-sized scholars. But when you get to be a big gangling scholar, you've got to go somewhere else. There are no universities on the moon, only technical schools. You can take up Lunar citizenship—as long as you're _mutandis_ —and be admitted to one of those technical schools, winding up as some kind of supercerebral mechanic. But I suppose my father was willing to live on the same planet with me, rather than allow me to grow into being something other than a gentleman.
I got back to Earth one week ago today.
17 _April._
We began course work today. This quarter I'm taking supposedly parallel courses in algorithmic analysis and logical systems. If I ever get "introduced" to Boolean algebra again, I'll curl into a ball and swallow my tongue. Continuing readings and analysis in classical Greek and Latin. Supposed to do preliminary readings for next quarter: XXth Century English and American Poets and Commercial Literature as a Cultural Index. This will be with Applied Stochastic Analysis and Artificial Intelligence I. The poetry is amusing but the "commercial" novels make tedious reading. One has always to keep in mind that none of these authors was born with the benefit of genetic engineering, and they were at best men of unremarkable intelligence in a world populated with morons and worse.
Earth gravity tires me.
18 _April._
I was talking with my advisor (Greek and Latin), Dr. Friedman, and complained about the sterility of this upcoming literature course. He introduced me to the work of an Irish author named Joyce, loaning me a copy of the construct _Finnegans Wake_. It has taken me ten hours to read the first thirty pages; totally immersed in it through lunch and dinner. Fascinating. Easily equal to the best of Thurman—why weren't we given him at Creche?
I am required to walk for at least two hours every day, in order to become accustomed to the gravity. Thus I am writing this standing up, the diary propped on a bookshelf. Also must eat handsful of nauseating calcium tablets, and will have to walk with braces until my leg-bones have hardened up. Had I stayed on the moon another five years, I probably never would have been able to return to Earth (a prospect which at present would not bother me a bit). Twenty-one is too old to repattern porous bones.
The braces chafe and look ridiculous in this foppish Earth clothing. But I get a certain notoriety out of being such an obvious extraterrestrial.
My father called this morning and we talked about my courses for a few minutes.
19 _April._
Today was the first day I ventured outside of the campus complex on foot. It gave me an uncomfortable feeling to be outside without suiting up. Of course, one does wear a respirator (even inside some of the buildings, which leak), and that does something to allay the agoraphobia.
How will I react to the geophysics course next year? They take field trips to wild preserves where they work for extended periods simply under the sky, exposed to the elements. I realize that mine is an irrational fear, that men lived for millions of years breathing natural air, walking around in the open without the slightest thought that there should be something around them. Perhaps I can convince them that since on Luna this fear is _not_ irrational, but part of survival... perhaps they will grant me some sort of dispensation; waive the course, or at least allow me to wear a suit.
While wandering around outside of the campus, I dropped into a tavern that supposedly caters to students. I had some ordinary wine and a bit of hashish which wasn't at all like the Lunar product. It only served to make me tired. The tavernkeeper didn't believe that I was sixteen until I produced my passport.
I got into a rather long and pointless conversation with an Earthie _mutandis_ over the necessity for interplanetary tariff imbalance. They know so little about the other worlds. But then, I know little enough about Earth, for having been born here.
I was barely able to get back to the dormitory without assistance, and slept through half of my normal reading period. Had to take stimulants to finish the last book of the _Georgics_. So much of it is about open-air farming that it kept bringing back my earlier discomfort.
Resolved not to smoke any more Earth hashish until I get my strength back.
_20 April._
Algorithmic analysis has an economy and order that appeals to me. I had of course planned to take my doctorate in Letters, but now I want to investigate mathematics further. My father would have apoplexy. A gentleman _hires_ mathematicians. I made an appointment with the advising facility for tomorrow.
I am having difficulty making friends. Their customs are rather strange, but I have grown up in knowledge of that and am prepared to make any adjustment. Perhaps I am too critical of Earth society.
An embarrassing illustration: this morning for the first time, I felt strong enough for sex. Thinking this would be an ideal way to begin more cordial relations with Earthies, I made a tactful suggestion of that nature to one of my classmates in Systems. She was very indignant and wound up giving me a lecture on cultural relativism. The kernel of it, at least as applied to this situation, was that one is supposed to go through an elaborate series of courting gestures with a prospective mate. Like a bird ruffing out his feathers and cooing. I told her this might make some sense if the ritual had something to do with predicting or promoting future sexual compatibility between the two people, which it didn't. She reacted with almost frightening force.
My father had warned me about this moral oddity, but I was given to understand that it only applied to the lower classes and, specifically, to the remaining _homo sapiens_. Certainly there is a good argument for reducing the number of unengineered births by repressing casual sexual contact, but the same restrictive behavioral patterns shouldn't be impressed on _homo mutandis_ , to which group I assumed my classmate belonged. From the speciousness of her argument, I suppose it's possible she doesn't, but then how could she get into a university? Of course, I wouldn't insult her by asking.
_21 April._
The machine analyzed my profile and said that I had the potential for moderate success in mathematics, but that I was temperamentally better suited for literature. It advised that I continue a double course of study for as long as possible, and then switch all of my energies to one field or the other as soon as it became clear in which direction my greatest interest lay. An agreeable course of action, perhaps because of my natural indecisiveness.
I may have found a friend after all. He isn't an Earthie, but a Martian, also come to Earth for "polish." His name is Chatham Howard, and he was flattered that I recognized the Howard name both for its role in early Martian history and for the social rank it now represents, on Mars. He is a year ahead of me, studying sociology.
_22 April._
Chatham took me to a party and introduced me to a number of very pleasant Earthies. I'm still sorting out the impressions, changing my ideas a little bit. Not all Earthies my age are immature provincials.
Met an interesting female by the name of Pamela Anderson. I have begun the courting ritual, to the best of my abilities. I was attentive and complimentary (though she has some strange ideas, she is not unintelligent), and agreed to meet her tomorrow for the evening meal.
We kissed once. Odd custom.
_23 April._
Chatham and a friend joined Pamela and me for dinner at Luigi's, a restaurant which specializes in an old-fashioned cuisine called "North-American-Italian." It is more spicy than I am accustomed to, but Pamela recommended a fairly bland dish called _spaghetti_ with mushroom sauce. It was rather good, and reminiscent of some familiar fungi dishes.
After dinner, we went to a public theatre and saw a drama-tape that consisted mainly of views of various couples, copulating. It was much the same as the tapes I'd been watching in Mental Hygiene classes since I was eight years old, but in this bizarre setting I found it strangely exciting.
We had drinks at the theatre after the show, and engaged in some bright banter. It was all very enjoyable, but I got the impression that Pamela was not yet interested in me sexually. This was a disappointment, especially after Chatham's friend quite directly asked him to spend the night with her. Pamela was very warm but didn't extend any such invitation.
For the first time I wondered whether she might not consider me too "alien" for a sex partner. I am a half-meter taller than she, and my Lunar myasthenia is all too evident, with the braces and my quickness to fatigue. I'm also a couple of years younger than she, which evidently is rather important on Earth.
I found out in our conversation that many of the customs relating to this mating ritual are centuries old. This is an exasperating thing about Earth: in many ways they cling stubbornly to the cultural matrix that brought them to within a button-push of destroying humanity. On the Worlds, at least we had the sense to junk it all and start over.
Sometimes it brings me up short to remember that I was born an Earthie.
_24 April._
Today I got lost in the middle of writing a long Turing Machine algorithm, when my mind strayed to Pamela. I had to go back to the beginning and start over. Idiotic! Perhaps all this medication is affecting my mental discipline.
Continuing with analysis of the writings of Virgil, of at least those attributed to him. Obvious many of them written by somebody else.
_25 April._
Pamela met me, without prior arrangement, outside my Systems classroom—an encouragingly aggressive sign. But it turned out that her real interest was in learning more about Lunar mores, for a paper in Comp. Soc. We went down to the cafeteria and discussed, essentially, how different she was from me. I left feeling depressed, but with a "date" for a concert tomorrow.
_26 April._
The concert was on an ancient instrument called the "glass harmonica." The melodies were interesting, but the rhythm was simplistic and the harmonies progressed in a very predictable manner. Somehow, the overall effect was moving.
I learned the most startling thing after the concert. Pamela is not _mutandis_. We went to a bhang shop with another couple and talked about the difference, the distance, between _sapiens_ and _mutandis_. She accused me of being ill-informed and patronizing when I talked about our obligation to guide and protect _sapiens_ as they inevitably died out over the next few generations. She said that she was not engineered and her children were not going to be, nor their children. Something else she said, we had not been taught on Luna. But, once it was pointed out, I had to admit it was obvious: there was no guarantee that genetic engineering was going to be successful in the long race, and humanity must maintain a large and pure community of _sapiens_ for several centuries, in case the "experiment" fails.
I privately disagreed with her contention that _sapiens_ must always remain in the majority. Certainly a million or two would be adequate to the task of rebuilding the race, should all of us _mutandi_ turn purple and explode. Of course her worry was political rather than biological; that we might irrationally legislate _sapiens_ out of existence, were we in the majority.
She said we had done exactly that on Luna, and I had to patiently explain why we no longer allowed _sapiens_ as colonists. It was not prejudice, but simple logic. She was not convinced.
[Of course, this explains why I was so surprised to find that Pamela was not _mutandis_. All of the _sapiens_ on Luna are quite old and mentally incompetent because of a lack of correctional therapeutics in their youth. I was guilty of unconsciously projecting my attitudes toward their manifest inferiority onto Earthie _sapiens_.]
Somehow the fact that she is not _mutandis_ does not make her less attractive to me. My regard for her intellectual abilities should be greater, knowing as I do now that she started out with a genetic handicap. The main thing I feel now is a vague distrust of her emotional reliability. Or do I mean predictability? It is all very confusing.
_27 April._
Algorithmic Analysis test tonight. Not difficult but studying for it was very time-consuming.
_28 April._
Pamela took me to the zoo. Tiring but extremely rewarding day. Animals are fascinating. It occurred to me that being adult, or nearly so, and seeing non-human creatures for the first time in my life might give me some unique insight. Instead of writing a long entry in this diary tonight, I will begin an essay on the experience.
My feet are throbbing. Told Pamela the joke about the computer playing chess with itself, and she laughed. Was this the first time I've seen her laugh?
_29 April._
Pamela read my essay and left saying she never wanted to see me again. She was crying.
_30 April._
I have reconsidered some of the comparisons I made in the essay, between _sapiens_ and animals. They were meant to be satirical, but I can see in the light of Pamela's reaction that this intent was not clear. Rather than attempt to translate my attempts at humor into Earthie terms, I deleted these passages. I sent a copy to Pamela.
Reading back, I see I have known her little more than a week. Odd.
_1 May._
Latin test.
_2 May._
Pamela visited today, bringing a male companion. She did not mention the essay.
I realized that I don't know Pamela well enough to decide whether she brought the other man, Hill Beaumont, in order to provoke jealousy in me (consciously or otherwise). I understand jealousy, of course, from my reading, but I have never felt it and believe myself immune. Besides, Beaumont is a rather stupid fellow.
_3 May._
Beaumont dropped in alone today, saying that he had read the essay and complimenting me at some length on it. He is still a dull oaf, but I can't help now feeling more kindly disposed toward him. He wanted to take me out and chatter over a bottle of wine, but I pleaded lack of time. Which was true; Greek test tomorrow evening and I have neglected it lately. Much reading to do.
I asked about Pamela and Beaumont said he hadn't seen her since they left me yesterday.
_4 May._
Greek. Stayed in my room all day, studying, but accepted an invitation to eat with Chatham and Beaumont after the test. Quite a lot happened, and even though it's after two I think I'll stay up and record it while it's still fresh in my memory.
We met at Luigi's for a light supper and wine. Chatham, of course, is always interesting, but the evening was almost spoiled for me when Beaumont revealed with a conversational flourish that he, also, was _mutandis_. In fact, he is an elected officer in a local club, the membership of which is restricted to "us." There was a meeting of the club that night, and Beaumont invited me to come and speak to them, mainly on the subject of the essay about animals. He had his copy of the essay with him. Chatham said he had a previous engagement but urged me to go along, saying the meetings were always amusing. I didn't see any way I could gracefully decline; figured it might even be fun as long as they weren't all like Beaumont. We left Chatham to finish off the wine—an office for which he has singular talent—and slid a couple of blocks to the meeting place.
Some of Beaumont's friends have the oddest ideas about what it means to be _mutandis_. The gathering was one of the strangest things I've experienced on Earth.
First a man got up and demonstrated a construct which was a poem, in Latin, written in the form of an eight-by-eight matrix. He showed how you could perform semantic analogues of the normal reduction transformations to get various intermediate poems—none of which made much sense—and arrive finally at a matrix which was null throughout except for sum-sum-sum-sum all down the main diagonal. A puerile exercise, bad poetry and naive mathematics, but everybody seemed dutifully impressed.
Then a woman showed a "sculpture" she had made by synthesizing a large cube of piezoelectric crystal and fracturing it, in what she felt to be an artistic way, by applying various charges to different parts of the surface. That she could have arrived at a similar end by merely dropping the thing on a hard floor did not diminish audience appreciation.
So it went for an hour and a half. My presentation was the last one, and I'm sure nine-tenths of the applause I got was due to that fact, rather than for any intrinsic merit of the composition.
The disturbing part of the evening, though, was a roundtable discussion about _sapiens_ and what eventually would have to be done about them. Some of the reasoning was so fuzzy that it wouldn't have done justice to a child in first-form Creche.
One thing I learned, one very surprising thing, was that _mutandi_ make up only about 1% of the Earth's population. Why did they hide this fact from us in Creche? At any rate, the irrational nature of some of their proposals tonight might possibly be excused as simple "minority paranoia."
One idea which met with a good deal of approval struck me as both sneaky and foolish. There is agitation from various groups concerned with population control to make the practice of host-mothership universal, and require that all people be sterilized soon after puberty, once having filed a sample of sperm or ovum with the government. Thus the size of every family could be absolutely regulated by the government.
It was pointed out that this would inevitably lead to universal manipulation of all of humanity's genetic material—reasoning that _mutandis_ being manifestly superior to the rest of humanity, it is only a question of time before they hold all important governmental positions. Thus assured of freedom from bureaucratic interference, they would of course institute a program of universal genetic manipulation. For the benefit of all humanity.
Somebody brought up Pamela's argument, that it will take many generations before we are sure that genetic manipulation is totally safe. Most felt that it would be sufficiently proven by the time "we" have taken over.
I told them that the weakness in the idea had nothing to do with manipulation; that the universal storage of genetic material was in itself a questionable idea. For the convenience of the government, all of it would probably be stored near government centers which, like any large concentration of people, get power from one source: microwaves beamed down from the orbital solar stations. The fact that they have functioned continuously for over a century doesn't mean they are immune to breakdown; in fact, it's quite likely that if they go, it will be because of some powerful solar event, which would affect all of them simultaneously. No power, no refrigeration. The genetic material, at least most of it, would thaw out and die, and humanity would have to depend on the current crop of children to reach sexual maturity and replenish the race. That crop might be small indeed if there were stringent controls on family size. There might not be enough breeders to bring the next generation up to a size sufficient to carry on civilization as we live it now.
And it wouldn't even require a solar catastrophe. It's possible that some people wouldn't like the idea of us changing all of humankind into _mutandis_ , and would sabotage the sperm and ovum banks without thinking or caring about the consequences.
They listened politely to my counter-arguments, but I don't think many of them were convinced. They take electrical power too much for granted, here on Earth. They have had local failures all their lives, which meant little more than having to walk down still slidewalks for a few hours. There has of course been only one power failure on Luna.
_5 May._
Knowing that Pamela has a course in Sociometrics, I contrived to spend a few hours down at the social sciences computing facility, supposedly checking out an algorithm that simulated a Turing machine. Actually, I knew that it worked, having run it successfully over at the mathematics facility, but I kept putting glitches in it in order to remain at the console.
She did show up, after four hours. Luckily, she was only there to pick up a printout. It was dinner time, so I escorted her down to the Union. We each got a plate of small sandwiches and talked.
I told her about the experience with Beaumont's crowd. She was amused, which for some reason made me angry at first—just because she was _sapiens_ , I guess—but she jostled me about it so much that I wound up laughing too. She admitted that this had been her purpose when she first introduced Beaumont to me: to demonstrate that not all _mutandi_ were _a priori_ superior examples of humanity.
In the dining hall I said hello to one of the girls who had been at last night's meeting, the one with the piezoelectric sculpture. She stared right through me and didn't miss a bite.
_6 May._
What a long and disturbing day. This morning, I found this note in my box:
IT HAS BEEN BROUGHT TO OUR ATTENTION THAT YOU ARE SEEKING A SEXUAL LIAISON WITH ONE PAMELA ANDERSON, A _HOMO SAPIENS_. FRANKLY, WE ARE DISGUSTED. FROM OUR POINT OF VIEW THIS IS AN ACT OF SODOMY; BESTIALITY. _HOMO SAPIENS_ IS OUR ONLY NATURAL ENEMY, THE ONLY OBSTACLE TO THE CONTINUING PROGRESS OF HUMANITY. THEY ARE A DIFFERENT CREATURE AND TO US A DANGEROUS ONE. WE DO NOT FRATERNIZE WITH THEM. IF YOU CONTINUE THIS OBSCENE RELATIONSHIP WITH PAMELA ANDERSON, BOTH OF YOU WILL BE IN PROFOUND TROUBLE. WE WILL BE IN TOUCH. STECOM
I sought out Beaumont and, yes, he had heard of "STECOM," the Steering Committee for Humanity, but never to his knowledge had they ever caused anyone "profound trouble." They served mainly to protect the interests of _Mutandi_ in legislation, commerce and so on. He said that the organization's public stance is much milder than that represented by my note, but that he knew many of the members to hold similar views privately.
He gave me the number of the local STECOM chairman, and I contacted him. He denied any connection with the note; said that whoever signed it did so without authority; asked that I keep him apprised of further developments; told me not to worry. It was just the work of an extremist. Somehow that gave me very little comfort.
I left word with Pamela's roommate, asking that Pamela call as soon as she returned from classes. She called and we arranged to meet for dinner.
We sat at a back table in Luigi's and she read the note; first amused, then alarmed. She didn't think they would dare do anything to her, but they might try to harass me.
She said she thought it would be best if we didn't see each other for a while. I protested that that would be a cowardly action, in response to what was already the act of a coward, hiding behind anonymity. We argued. In the course of the argument she said I was wasting my efforts anyhow, as our relationship could never be anything besides casual and platonic. We finished our meals in silence and she asked me not to walk her home.
On my way back to the dormitory, right after getting off the South Quadrant Westbound slidewalk, I had to walk by a dense stand of shrubbery which threw a deep shadow over the walk. I probably wouldn't have seen my assailants even had I not been lost in brooding thought.
One slipped behind me and threw a fabric bag over my head and shoulders, and then pinioned my arms behind me. The other hit me once in the solar plexus and twice on the face, then reached under the bag and tore off my respirator. They fled and I half-walked, half-crawled to the nearest dormitory. The medic there gave me some oxygen and pasted up my one serious-looking wound, a nasty cut over my left eye. He gave me a voucher for the materials he had used, so I could return them from my dormitory's supply, loaned me a respirator and sent me on my way. A classmate walked over with me to help forestall a recurrence.
As I write this, my throat still hurts from breathing the sulfurous air. Good thing the attack didn't happen downtown, nearer the Industrial Park.
I'll take an extra Pain-go and retire.
_7 May._
I went to the campus police and they told me that since there were no witnesses, and I couldn't identify my assailants, an investigation would be a waste of time. I recognized the chief as having been at the meeting the other night, and didn't press him.
Another note in my box. This one simply said RETURN TO LUNA. STECOM. I called up the Steering Committee chairman again and informed him of this new note and of last night's assault. He got very flustered but offered no worthwhile advice.
Somebody had forced his way into my room and poured soya all over my books and papers. When they were completely dry, I took them down to the laundry and used the ultrasonic dry-cleaner on them. It worked after a fashion. I hope he read this diary before dousing it, and saw that Pamela is not enthusiastic about my "seeking a sexual liaison" with her. Now maybe all of this will stop.
Work goes on, of course. Tree theory and yet more non-Virgil.
I toyed with the idea of trying to trace the person or persons behind all this through the notes. They are, of course, simple computer printouts, so the person would first have had to encode a crystal. The crystal would have to be re-filed and, if it hadn't yet been erased for another use, it would be a simple matter to find out who had last checked it out.
Simple in theory, at least. There must be five or six computing facilities on campus, each with several thousand crystals.
And for that matter, it wouldn't be difficult to have the message printed out and then code something new over that domain of the crystal, as if it had been a glitch.
I tried to think of how I might set a trap, without using Pamela as bait. My mind just isn't devious enough—or perhaps it doesn't have enough information. Since Chatham has more deviousness and information at his disposal, I tried to contact him. He was out, though; had been gone since yesterday. I settled for Beaumont.
Over a bottle of wine in the lounge of his dormitory, we roughed out a plan. He knew most of the _mutandi_ on campus, and knew which ones were the most extreme in their views. He would meet some of them socially and bring the conversation around to Pamela and me; if the person showed any interest, Beaumont would pretend to sympathize with the idea that _mutandi_ should mate with their own kind—as if the characteristics could be inherited!—and since _I_ was the one person on campus most obviously a _mutandis_ , I was setting a terribly bad example. Then see whether the other would suggest some sort of action.
He said he would start right away and contact me as soon as he had some results.
_8 May._
Solved.
Beaumont called this morning with the good news that he had found the person responsible. No one I knew, he said; the person was an agitator who had been out of school for years and rarely showed up at club meetings. The three of us were going to meet at 8:00 tonight, by the sheds on the athletic field.
I told him that I didn't like it. At least two people had attacked me before, and there might be even more. I was still too weak to be of any help if it came to violence, and the athletic field was dangerously isolated. I wanted to just call the police and have him apprehended, but Beaumont raised the good point that, without evidence, it would just be Beaumont's word against the other's... and the campus police were not noted for respecting the testimony of students.
He said he could get his hands on a stunner, to even out the odds, and would bring a recorder to catch the person in damaging statements, even if he couldn't be goaded into action. Personally, I hoped he couldn't.
Beaumont had a regular script worked out, things for me to say to the man which were at once perfectly innocuous and calculated to make him lose his temper. Beaumont, of course, would be pretending to be on _his_ side, which would tend to make him reckless. I agreed, with the private reservation that I would tone down some of my side of the dialogue.
I went to my morning classes as usual but found I couldn't concentrate for worrying. Anything could happen. This time of year, the athletic field was only airco'ed over weekends, and I wasn't sure I could make it back to a building in time, if they overpowered us as they did me last time, taking our respirators. There was no guarantee that the man would show up alone, or with just one accomplice. The more I thought about it, the more nervous I became. Finally, around noon, I went to the police.
The chief was monumentally unimpressed. He said the whole thing sounded like a prank, an initiation into the club. He knew Beaumont and expressed the opinion that he had been manipulated, the initiators playing on his exaggerated sense of drama.
I insisted that they had tried to harm me seriously night before last, but the chief pointed out that I was never in real danger, and the blows seemed calculated to do only superficial harm. They could have more easily incapacitated me and left me to suffocate.
Besides, he doubted that he could spare a man at 8:00, at which time most of them were patrolling the taverns and dopeshops off-campus, preventing trouble. He kept looking at the clock—I shouldn't have come at lunchtime—and finally said he'd see whether he could find a man to meet me there.
Some time later, the chilling thought occurred to me that the chief could possibly be in on it too, and if I was the focus of some ruthless _anti-sapiens_ plot, my action had only put Beaumont and me in even greater danger.
I tried to reach Beaumont all day, after that thought, to tell him the whole thing was off, but he was never home. After a good deal of internal debate, about 7:00 I got up and headed for the field. After all, I had chastised Pamela for suggesting cowardly action. I stopped in a general-merchandise store on the way, and bought the biggest clasp-knife they had. I hadn't fought anybody since I was a little boy, and didn't know whether, should the time come to use it, I would have nerve or wit enough to even take it out of my pocket. But its weight was some small comfort.
When it happened, everything happened very fast. I went out onto the field and saw Beaumont standing by the sheds, chatting with another man. I approached them and waited for Beaumont to start the charade. They stopped talking as I came closer and suddenly Beaumont began to laugh hysterically. The other, a muscular older man only slightly shorter than me—probably the tallest Earthie I'd seen—smiled and drew a short wooden club out of his tunic.
I had the knife out and was trying to get my thumbnail into the little depression when Beaumont, still laughing, raised a stunner at me and fired.
It was very painful. A stunner confuses the neural signals to and from the part of your brain that controls motor functions. As a side effect it makes you feel as if your skin is being punctured by thousands of tiny needles. I fell to the ground, twitching spasmodically. My face was down, so I couldn't see, but I heard Beaumont tell the big man to use the knife instead; it would be more impressive.
Then absolutely nothing happened for a long couple of minutes. Suddenly I was turned over roughly and steeled myself for the first blow of the knife—and found myself looking into the face of the police chief.
He sprayed an aerosol into my face that made the pain go away, and said they'd take me to the infirmary, to a "pattern blocker," to cure the paralysis. He apologized for using me as bait and said he'd had a man hiding in the far shed since early this afternoon, waiting for Beaumont and the other man, who had been suspected in a similar assault case some months before.
Both of them were lying on the ground, twitching as badly as I was. A large police floater drifted onto the field and two men with stretchers came out.
They loaded up the others first, and by the time my stretcher was secure, the chief was interrogating Beaumont, evidently with the aid of some hypnotic. His confession was very disjointed and childishly vituperative, but the gist of it was this:
He had been after Pamela's attentions (he used another word, which Chaucer would have recognized) for several months, and felt he was just about to succeed when I came along. I was an egotistical child, an alien and a cripple and, to his mind, I had stolen her away.
The chief questioned him further and found that Beaumont had suffered a nervous breakdown over a year before and had been under treatment until he came to the University. He admitted to several other acts of violence and admitted knowing that he was still mentally ill but had not volunteered for further treatment because he felt that the illness was somehow allied with his genius, and he didn't want to interfere with it. I felt that anything interfering with his brand of genius could only add to it, but I kept my own counsel.
The infirmary treatment only took a few minutes. I arranged with the chief to come down the next morning to file a complaint and testify, then found a 'phone and called up Pamela.
She was fascinated, but not surprised, with the revelations about Beaumont. I went over the whole thing in some detail, and then we talked about some more general matters, and finally I got down to the question of our relationship. She said, with some heat, that the affair with Beaumont didn't change anything, that if I knew anything about women I wouldn't even have asked, and that we could still be friends but that's all: platonic, intellectual arrangement.
While I've been writing this I've been thinking about what she said. I _do_ know a little more about women than I knew a month ago. And a lot more about jealousy. And I've known about synergy for years.
_9 May._
Today I started reading up on crystalline sculpture and piezoelectricity.
# Armaja Das
_I got a request from Kirby McCauley to write a story for an anthology called_ Frights _(St. Martin's Press, 1976). The theme would be "ancient horrors in modern guise." The idea was intriguing. I'd only written one fantasy story in my life—snide critics might disagree—and it was a throwaway deal-with-the-devil joke. I said I'd send him an outline._
_The summer before, my wife and I had come down with acute attacks of dysentery in the rather dysenteric city of Tangier. Unlike her, I was able occasionally to get up and walk farther than the john, so it fell to me to venture out every few hours and haggle for bottled water and canned European food._
_Tangier redefined for me Raymond Chandler's phrase "mean streets." Our sleazy hotel fronted the main street that led to the waterfront; there was a tiny park outside the door, and that park was decorated by a dead man. Not violently dead, just some old fellow who'd tired of being a Moroccan. The first time I saw the corpse, I went back into the hotel and tried to explain the situation to the clerk in my all-but-nonexistent French. He just kept shrugging.* By nightfall, somebody had dragged the body away, to what fell purpose I leave to your imagination._
_Lying upstairs feeling dismal, it occurred to me that we might_ die _there, and likely as not, no one would ever find out what had happened to us. Perhaps out of some obscure impulse to die with my boots on, I started making up a story about that, a pretty good Rod-Serlingish story called "To Die at Home." I even made a few notes about it, after the fevered palsy had abated enough for me to write._
_So when Kirby asked for a fantasy story, I sent him an outline of "To Die at Home." He wrote back saying that he liked it, but it wasn't weird enough._
_I'll write that story some day, if only to show Kirby how weird it is._
_I half-abandoned the project, figuring my talents really lay more in the direction of_ Analog _than_ Weird Tales. _Who wants to write about vampires anyhow, feh. But then I came across a fascinating article by Peter Maas, in_ New York _magazine: "The Deadly Battle to Become King of the Gypsies."_
_He mentioned gypsy curses, and I was off and running. Running down to the library, where I spent a glorious afternoon in the stacks doing what I do best: goofing off. In this case, reading dusty old books and journals about gypsy lore._
_Loaded up with fresh information, it was child's play to toss together gypsy curses, computer science, and minority assimilation into an "ancient horror in modern guise."_
* It later occurred to me that my _mort_ may have sounded like _merd_ to him; his shrug may have meant that if I wanted to complain about shit, I was in the wrong country.
The highrise, built in 1980, still had the smell and look of newness. And of money.
The doorman bowed a few degrees and kept a straight face, opening the door for a bent old lady. She had a card of Veterans' poppies clutched in one old claw. He didn't care much for the security guard, and she would give him interesting trouble.
The skin on her face hung in deep creases, scored with a network of tiny wrinkles; her chin and nose protruded and drooped. A cataract made one eye opaque; the other eye was yellow and red surrounding deep black, unblinking. She had left her teeth in various things. She shuffled. She wore an old black dress faded slightly gray by repeated washing. If she had any hair, it was concealed by a pale blue bandanna. She was so stooped that her neck was almost parallel to the ground.
"What can I do for you?" The security guard had a tired voice to match his tired shoulders and back. The job had seemed a little romantic the first couple of days, guarding all these rich people, sitting at an ultramodern console surrounded by video monitors, submachine gun at his knees. But the monitors were blank except for an hourly check, power shortage; and if he ever removed the gun from its cradle, he would have to fill out five forms and call the police station. And the doorman never turned anybody away.
"Buy a flower for boys less fortunate than ye," she said in a faint raspy baritone. From her age and accent, her own boys had fought in the Russian Revolution.
"I'm sorry. I'm not allowed to... respond to charity while on duty."
She stared at him for a long time, nodding microscopically. "Then send me to someone with more heart."
He was trying to frame a reply when the front door slammed open. "Car on fire!" the doorman shouted.
The security guard leaped out of his seat, grabbed a fire extinguisher and sprinted for the door. The old woman shuffled along behind him until both he and the doorman disappeared around the corner. Then she made for the elevator with surprising agility.
She got out on the 17th floor, after pushing the button that would send the elevator back down to the lobby. She checked the name plate on 1738; Mr. Zold. She was illiterate but could recognize names.
Not even bothering to try the lock, she walked on down the hall until she found a maid's closet. She closed the door behind her and hid behind a rack of starchy white uniforms, leaning against the wall with her bag between her feet. The slight smell of gasoline didn't bother her at all.
John Zold pressed the intercom button. "Martha?" She answered. "Before you close up shop I'd like a redundancy check on stack 408. Against tape 408." He switched the selector on his visual output screen so it would duplicate the output at Martha's station. He stuffed tobacco in a pipe and lit it, watching.
Green numbers filled the screen, a complicated matrix of ones and zeros. They faded for a second and were replaced with a field of pure zeros. The lines of zeros started to roll, like titles preceding a movie.
The 746th line came up all ones. John thumbed the intercom again. "Had to be something like that. You have time to fix it up?" She did. "Thanks, Martha. See you tomorrow."
He slid back the part of his desk top that concealed a keypunch and typed rapidly: "523 784 00926/ / Good night, machine. Please lock this station."
GOOD NIGHT, JOHN. DON'T FORGET YOUR LUNCH DATE WITH MR. BROWNWOOD TOMORROW. DENTIST APPOINTMENT WEDNESDAY 0945. GENERAL SYSTEMS CHECK WEDNESDAY 1300. DEL O DEL BAXT. LOCKED.
_Del O Del baxt_ means "God give you luck" in the ancient tongue of the Romani. John Zold, born a Gypsy but hardly a Gypsy by any standard other than the strong one of blood, turned off his console and unlocked the bottom drawer of his desk. He took out a flat automatic pistol in a holster with a belt clip and slipped it under his jacket, inside the waistband of his trousers. He had only been wearing the gun for two weeks, and it still made him uncomfortable. But there had been those letters.
John was born in Chicago, some years after his parents had fled from Europe and Hitler. His father had been a fiercely proud man, and got involved in a bitter argument over the honor of his 12-year-old daughter; from which argument he had come home with knuckles raw and bleeding, and had given to his wife for disposal a large clasp knife crusty with dried blood.
John was small for his five years, and his chin barely cleared the kitchen table, where the whole family sat and discussed their uncertain future while Mrs. Zold bound up her husband's hands. John's shortness saved his life when the kitchen window exploded and a low ceiling of shotgun pellets fanned out and chopped into the heads and chests of the only people in the world whom he could love and trust. The police found him huddled between the bodies of his father and mother, and at first thought he was also dead; covered with blood, completely still, eyes wide open and not crying.
It took six months for the kindly orphanage people to get a single word out of him: _ratválo_ , which he said over and over; which they were never able to translate. Bloody, bleeding.
But he had been raised mostly in English, with a few words of Romani and Hungarian thrown in for spice and accuracy. In another year their problem was not one of communicating with John; only of trying to shut him up.
No one adopted the stunted Gypsy boy, which suited John. He'd had a family, and look what happened.
In orphanage school he flunked penmanship and deportment, but did reasonably well in everything else. In arithmetic and, later, mathematics, he was nothing short of brilliant. When he left the orphanage at eighteen, he enrolled at the University of Illinois, supporting himself as a bookkeeper's assistant and part-time male model. He had come out of an ugly adolescence with a striking resemblance to the young Clark Gable.
Drafted out of college, he spent two years playing with computers at Fort Lewis; got out and went all the way to a Master's degree under the G.I. Bill. His thesis, "Simulation of Continuous Physical Systems by Way of Universalization of the Trakhtenbrot Algorithms," was very well received, and the mathematics department gave him a research assistantship, to extend the thesis into a doctoral dissertation. But other people read the paper too, and after a few months Bellcomm International hired him away from academia. He rose rapidly through the ranks. Not yet forty, he was now Senior Analyst at Bellcomm's Research and Development Group. He had his own private office, with a picture window overlooking Central Park, and a plush six-figure condominium only twenty minutes away by commuter train.
As was his custom, John bought a tall can of beer on his way to the train, and opened it as soon as he sat down. It kept him from fidgeting during the fifteen or twenty-minute wait while the train filled up.
He pulled a thick technical report out of his briefcase and stared at the summary on the cover sheet, not really seeing it but hoping that looking occupied would spare him the company of some anonymous fellow traveller.
The train was an express, and whisked them out to Dobb's Ferry in twelve minutes. John didn't look up from his report until they were well out of New York City; the heavy mesh tunnel that protected the track from vandals induced spurious colors in your retina as it blurred by. Some people liked it, tripped on it, but to John the effect was at best annoying, at worst nauseating, depending on how tired he was. Tonight he was dead tired.
He got off the train two stops up from Dobb's Ferry. The highrise limousine was waiting for him and two other residents. It was a fine spring evening and John would normally have walked the half-mile, tired or not. But those unsigned letters.
_John Zold, you stop this preachment or you die soon._ Armaja das, _John Zold._
All three letters said that: _Armaja das_ , we put a curse on you. For preaching.
He was less afraid of curses than of bullets. He undid the bottom button of his jacket as he stepped off the train, ready to quickdraw, roll for cover behind that trash can, just like in the movies; but there was no one suspicious-looking around. Just an assortment of suburban wives and the old cop who was on permanent station duty.
Assassination in broad daylight wasn't Romani style. Styles change, though. He got in the car and watched the side roads all the way home.
There was another one of the shabby envelopes in his mailbox. He wouldn't open it until he got upstairs. He stepped in the elevator with the others, and punched 17.
They were angry because John Zold was stealing their children.
Last March John's tax accountant had suggested that he could contribute $4000 to any legitimate charity, and actually make a few hundred bucks in the process, by dropping into a lower tax bracket. Not one to do things the easy or obvious way, John made various inquiries and, after a certain amount of bureaucratic tedium, founded the Young Gypsy Assimilation Council—with matching funds from federal, state and city governments, and a continuing Ford Foundation scholarship grant.
The YGAC was actually just a one-room office in a West Village brownstone, manned by volunteer help. It was filled with various pamphlets and broadsides, mostly written by John, explaining how young Gypsies could legitimately take advantage of American society. By becoming part of it, which was the part that old-line Gypsies didn't care for. Jobs, scholarships, work-study programs, these things are for the _gadjos_. Poison to a Gypsy's spirit.
In November a volunteer had opened the office in the morning to find a crude fire bomb, using a candle as a delayed-action fuse for five gallons of gasoline. The candle was guttering a fraction of an inch away from the line of powder that would have ignited the gas. In January it had been buckets of chicken entrails, poured into filing cabinets and flung over the walls. So John found a tough young man who would sleep on the cot in the office at night; sleep like a cat with a shotgun beside him. There was no more trouble of that sort. Only old men and women who would file in silently staring, to take handfuls of pamphlets which they would drop in the hall and scuff into uselessness, or defile in a more basic way. But paper was cheap.
John threw the bolt on his door and hung his coat in the closet. He put the gun in a drawer in his writing desk and sat down to open the mail.
The shortest one yet: "Tonight, John Zold. _Armaja das."_ Lots of luck, he thought. Won't even be home tonight; heavy date. Stay at her place, Gramercy Park. Lay a curse on me there? At the show or Sardi's?
He opened two more letters, bills, and there was a knock at the door.
Not announced from downstairs. Maybe a neighbor. Guy next door was always borrowing something. Still. Feeling a little foolish, he put the gun back in his waistband. Put his coat back on in case it was just a neighbor.
The peephole didn't show anything, bad. He drew the pistol and held it just out of sight, by the doorjamb, threw the bolt and eased open the door. It bumped into the Gypsy woman, too short to have been visible through the peephole. She backed away and said "John Zold." He stared at her. "What do you want, _púridaia_?" He could only remember a hundred or so words of Romani, but "grandmother" was one of them. What was the word for witch?
"I have a gift for you." From her bag she took a dark green booklet, bent and with frayed edges, and gave it to him. It was a much-used Canadian passport, belonging to a William Belini. But the picture inside the front cover was one of John Zold.
Inside, there was an airline ticket in a Qantas envelope. John didn't open it. He snapped the passport shut and handed it back. The old lady wouldn't accept it.
"An impressive job. It's flattering that someone thinks I'm so important."
"Take it and leave forever, John Zold. Or I will have to do the second thing."
He slipped the ticket envelope out of the booklet. "This, I will take. I can get your refund on it. The money will buy lots of posters and pamphlets." He tried to toss the passport into her bag, but missed. "What is your second thing?"
She toed the passport back to him. "Pick that up." She was trying to sound imperious, but it came out a thin, petulant quaver.
"Sorry, I don't have any use for it. What is—"
"The second thing is your death, John Zold." She reached into her bag.
He produced the pistol and aimed it down at her forehead. "No, I don't think so."
She ignored the gun, pulling out a handful of white chicken feathers. She threw the feathers over his threshold. _"Armaja das,"_ she said, and then droned on in Romani, scattering feathers at regular intervals. John recognized _joovi_ and _kari_ , the words for woman and penis, and several other words he might have picked up if she'd pronounced them more clearly.
He put the gun back into its holster and waited until she was through. "Do you really think—"
_"Armaja das"_ she said again, and started a new litany. He recognized a word in the middle as meaning corruption or infection, and the last word was quite clear: death. _Méripen_.
"This nonsense isn't going to..." But he was talking to the back of her head. He forced a laugh and watched her walk past the elevator and turn the corner that led to the staircase.
He could call the guard. Make sure she didn't get out the back way. Illegal entry. He suspected that she knew he wouldn't want to go to the trouble, and it annoyed him slightly. He walked halfway to the phone, checked his watch and went back to the door. Scooped up the feathers and dropped them in the disposal. Just enough time. Fresh shave, shower, best clothes. Limousine to the station, train to the city, cab from Grand Central to her apartment.
The show was pure delight, a sexy revival of _Lysistrata_; Sardi's was as ego-bracing as ever; she was a soft-hard woman with style and sparkle, who all but dragged him back to her apartment, where he was for the first time in his life impotent.
The psychiatrist had no use for the traditional props: no soft couch or bookcases lined with obviously expensive volumes. No carpet, no paneling, no numbered prints; not even the notebook or the expression of slightly disinterested compassion. Instead, she had a hidden recorder and an analytical scowl; plain stucco walls surrounding a functional desk and two hard chairs, period.
"You know exactly what the problem is," she said.
John nodded. "I suppose. Some... residue from my early upbringing; I accept her as an authority figure. From the few words I could understand of what she said, I took it that it was..."
"From the words _penis_ and _woman_ , you built your own curse. And you're using it, probably to punish yourself for surviving the disaster that killed the rest of your family."
"That's pretty old-fashioned. And farfetched. I've had almost forty years to punish myself for that, if I felt responsible. And I don't."
"Still, it's a working hypothesis." She shifted in her chair and studied the pattern of teak grain on the bare top of her desk. "Perhaps if we can keep it simple, the cure can also be simple."
"All right with me," John said. At $125 per hour, the quicker, the better.
"If you can see it, feel it, in this context, then the key to your cure is transference." She leaned forward, elbows on the table, and John watched her breasts shifting with detached interest, the only kind of interest he'd had in women for more than a week. "If you can see _me_ as an authority figure instead," she continued, "then eventually I'll be able to reach the child inside; convince him that there was no curse. Only a case of mistaken identity... nothing but an old woman who scared him. With careful hypnosis, it shouldn't be too difficult."
"Seems reasonable," John said slowly. Accept this young _Geyri_ as more powerful than the old witch? As a grown man, he could. If there was a frightened Gypsy boy hiding inside him, though, he wasn't sure.
"523 784 00926/ /Hello, machine," John typed. "Who is the best dermatologist within a 10-short-block radius?"
GOOD MORNING, JOHN. WITHIN STATED DISTANCE AND USING AS SOLE PARAMETER THEIR HOURLY FEE, THE MAXIMUM FEE IS $95/HR, AND THIS IS CHARGED BY TWO DERMATOLOGISTS. DR. BRYAN DILL, 245 W. 45TH ST., SPECIALIZES IN COSMETIC DERMATOLOGY. DR. ARTHUR MAAS, 198 W. 44TH ST., SPECIALIZES IN SERIOUS DISEASES OF THE SKIN.
"Will Dr. Maas treat diseases of psychological origin?"
CERTAINLY. MOST DERMATOSIS IS.
Don't get cocky, machine. "Make me an appointment with Dr. Maas, within the next two days."
YOUR APPOINTMENT IS AT 1:45 TOMORROW, FOR ONE HOUR. THIS WILL LEAVE YOU 45 MINUTES TO GET TO LUCHOW'S FOR YOUR APPOINTMENT WITH THE AMCSE GROUP. I HOPE IT IS NOTHING SERIOUS, JOHN.
"I trust it isn't." Creepy empathy circuits. "Have you arranged for a remote terminal at Luchow's?"
THIS WAS NOT NECESSARY. I WILL PATCH THROUGH CONED/GENERAL. LEASING THEIR LUCHOW'S FACILITY WILL COST ONLY .588 THE PROJECTED COST OF TRANSPORTATION AND SETUP LABOR FOR A REMOTE TERMINAL.
That's my machine, always thinking. "Very good, machine. Keep this station live for the time being."
THANK YOU, JOHN. The letters faded but the ready light stayed on.
He shouldn't complain about the empathy circuits; they were his baby, and the main reason Bellcomm paid such a bloated salary, to keep him. The copyright on the empathy package was good for another 12 years, and they were making a fortune, timesharing it out. Virtually every large computer in the world was hooked up to it, from the ConEd/General that ran New York, to Geneva and Akademia Nauk, which together ran half of the world.
Most of the customers gave the empathy package a name, usually female. John called it "machine" in a not-too-successful attempt to keep from thinking of it as human.
He made a conscious effort to restrain himself from picking at the carbuncles on the back of his neck. He should have gone to the doctor when they first appeared, but the psychiatrist had been sure she could cure them; the "corruption" of the second curse. She'd had no more success with that than with the impotence. And this morning, boils had broken out on his chest and groin and shoulderblades, and there were sore spots on his nose and cheekbone. He had some opiates, but would stick to aspirin until after work.
Dr. Maas called it impetigo; gave him a special kind of soap and some antibiotic ointment. He told John to make another appointment in two weeks, ten days. If there was no improvement they would take stronger measures. He seemed young for a doctor, and John couldn't bring himself to say anything about the curse. But he already had a doctor for that end of it, he rationalized.
Three days later he was back in Dr. Maas's office. There was scarcely a square inch of his body where some sort of lesion hadn't appeared. He had a temperature of 101.4°. The doctor gave him systemic antibiotics and told him to take a couple of days' bed rest. John told him about the curse, finally, and the doctor gave him a booklet about psychosomatic illness. It told John nothing he didn't already know.
By the next morning, in spite of strong antipyretics, his fever had risen to over 102°. Groggy with fever and pain-killers, John crawled out of bed and travelled down to the West Village, to the YGAC office. Fred Gorgio, the man who guarded the place at night, was still on duty.
"Mr. Zold!" When John came through the door, Gorgio jumped up from the desk and took his arm. John winced from the contact, but allowed himself to be led to a chair. "What's happened?" John by this time looked like a person with terminal smallpox.
For a long minute John sat motionlessly, staring at the inflamed boils that crowded the backs of his hands. "I need a healer," he said, talking with slow awkwardness because of the crusted lesions on his lips.
"A _chóvihánni?"_ John looked at him uncomprehendingly. "A witch?"
"No." He moved his head from side to side. "An herb. doctor. Perhaps a white witch."
"Have you gone to the _gadjo_ doctor?"
"Two. A Gypsy did this to me; a Gypsy has to cure it."
"It's in your head, then?"
"The _gadjo_ doctors say so. It can still kill me."
Gorgio picked up the phone, punched a local number, and rattled off a fast stream of a patois that used as much Romani and Italian as English. "That was my cousin," he said, hanging up. "His mother heals, and has a good reputation. If he finds her at home, she can be here in less than an hour."
John mumbled his appreciation. Gorgio led him to the couch.
The healer woman was early, bustling in with a wicker bag full of things that rattled. She glanced once at John and Gorgio, and began clearing the pamphlets off a side table. She appeared to be somewhere between fifty and sixty years old, tight bun of silver hair bouncing as she moved around the room, setting up a hot-plate and filling two small pots with water. She wore a black dress only a few years old, and sensible shoes. The only lines on her face were laugh lines.
She stood over John and said something in gentle, rapid Italian, then took a heavy silver crucifix from around her neck and pressed it between his hands. "Tell her to speak English... or Hungarian," John said.
Gorgio translated. "She says that you should not be so affected by the old superstitions. You should be a modern man, and not believe in fairy tales for children and old people."
John stared at the crucifix, turning it slowly between his fingers. "One old superstition is much like another." But he didn't offer to give the crucifix back.
The smaller pot was starting to steam and she dropped a handful of herbs into it. Then she returned to John and carefully undressed him.
When the herb infusion was boiling, she emptied a package of powdered arrowroot into the cold water in the other pot, and stirred it vigorously. Then she poured the hot solution into the cold and stirred some more. Through Gorgio, she told John she wasn't sure whether the herb treatment would cure him. But it would make him more comfortable.
The liquid jelled and she tested the temperature with her fingers. When it was cool enough, she started to pat it gently on John's face. Then the door creaked open, and she gasped. It was the old crone who had put the curse on John in the first place.
The witch said something in Romani, obviously a command, and the woman stepped away from John.
"Are you still a skeptic, John Zold?" She surveyed her handiwork. "You called this nonsense."
John glared at her but didn't say anything. "I heard that you had asked for a healer," she said, and addressed the other woman in a low tone.
Without a word, she emptied her potion into the sink and began putting away her paraphernalia. "Old bitch," John croaked. "What did you tell her?"
"I said that if she continued to treat you, what happened to you would also happen to her sons."
"You're afraid it would work," Gorgio said.
"No. It would only make it easier for John Zold to die. If I wanted that I could have killed him on his threshold" Like a quick bird she bent over and kissed John on his inflamed lips. "I will see you soon, John Zold. Not in this world." She shuffled out the door and the other woman followed her. Gorgio cursed her in Italian, but she didn't react.
John painfully dressed himself. "What now?" Gorgio said. "I could find you another healer..."
"No. I'll go back to the _gadjo_ doctors. They say they can bring people back from the dead." He gave Gorgio the woman's crucifix and limped away.
The doctor gave him enough antibiotics to turn him into a loaf of moldy bread, then reserved a bed for him at an exclusive clinic in Westchester, starting the next morning. He would be under 24-hour observation; constant blood turnaround if necessary. They _would_ cure him. It was not possible for a man of his age and physical condition to die of dermatosis.
It was dinnertime and the doctor asked John to come have some home cooking. He declined partly from lack of appetite, partly because he couldn't imagine even a doctor's family being able to eat with such a grisly apparition at the table with them. He took a cab to the office.
There was nobody on his floor but a janitor, who took one look at John and developed an intense interest in the floor.
"523 784 00926/ /Machine, I'm going to die. Please advise."
ALL HUMANS AND MACHINES DIE, JOHN. IF YOU MEAN YOU ARE GOING TO DIE, SOON, THAT IS SAD.
"That's what I mean. The skin infection; it's completely out of control. White cell count climbing in spite of drugs. Going to the hospital tomorrow, to die."
BUT YOU ADMITTED THAT THE CONDITION WAS PSYCHOSOMATIC. THAT MEANS YOU ARE KILLING YOURSELF, JOHN. YOU HAVE NO REASON TO BE THAT SAD.
He called the machine a Jewish mother and explained in some detail about the YGAC, the old crone, the various stages of the curse, and today's aborted attempt to fight fire with fire.
YOUR LOGIC WAS CORRECT BUT THE APPLICATION OF IT WAS NOT EFFECTIVE. YOU SHOULD HAVE COME TO ME, JOHN. IT TOOK ME 2.037 SECONDS TO SOLVE YOUR PROBLEM. PURCHASE A SMALL BLACK BIRD AND CONNECT ME TO A VOCAL CIRCUIT.
"What?" John said. He typed: "Please explain."
FROM REFERENCE IN NEW YORK LIBRARY'S COLLECTION OF THE JOURNAL OF THE GYPSY LORE SOCIETY, EDINBURGH. THROUGH JOURNALS OF ANTHROPOLOGICAL LINGUISTICS AND SLAVIC PHILOLOGY. FINALLY TO REFERENCE IN DOCTORAL THESIS OF HERR LUDWIG R. GROSS (HEIDELBERG, 1976) TO TRANSCRIPTION OF WIRE RECORDING WHICH RESIDES IN ARCHIVES OF AKADEMIA NAUK, MOSCOW; CAPTURED FROM GERMAN SCIENTISTS (EXPERIMENTS ON GYPSIES IN CONCENTRATION CAMPS, TRYING TO KILL THEM WITH REPETITION OF RECORDED CURSE) AT THE END OF WWII.
INCIDENTALLY, JOHN, THE NAZI EXPERIMENTS FAILED. EVEN TWO GENERATIONS AGO, MOST GYPSIES WERE DISASSOCIATED ENOUGH FROM THE OLD TRADITIONS TO BE IMMUNE TO THE FATAL CURSE. YOU ARE VERY SUPERSTITIOUS. I HAVE FOUND THIS TO BE NOT UNCOMMON AMONG MATHEMATICIANS.
THERE IS A TRANSFERENCE CURSE THAT WILL CURE YOU BY GIVING THE IMPOTENCE AND INFECTION TO THE NEAREST SUSCEPTIBLE PERSON. THAT MAY WELL BE THE OLD BITCH WHO GAVE IT TO YOU IN THE FIRST PLACE.
THE PET STORE AT 588 SEVENTH AVENUE IS OPEN UNTIL 9 PM. THEIR INVENTORY INCLUDES A CAGE OF FINCHES, OF ASSORTED COLORS. PURCHASE A BLACK ONE AND RETURN HERE. THEN CONNECT ME TO A VOCAL CIRCUIT.
It took John less than thirty minutes to taxi there, buy the bird and get back. The taxidriver didn't ask him why he was carrying a bird cage to a deserted office building. He felt like an idiot.
John usually avoided using the vocal circuit because the person who had programmed it had given the machine a saccharine, nice-old-lady voice. He wheeled the output unit into his office and plugged it in.
"Thank you, John. Now hold the bird in your left hand and repeat after me." The terrified finch offered no resistance when John closed his hand over it.
The machine spoke Romani with a Russian accent. John repeated it as well as he could, but not one word in ten had any meaning to him.
"Now kill the bird, John."
Kill it? Feeling guilty, John pressed hard, felt small bones cracking. The bird squealed and then made a faint growling noise. Its heart stopped.
John dropped the dead creature and typed, "Is that all?"
The machine knew John didn't like to hear its voice, and so replied on the video screen. YES. GO HOME AND GO TO SLEEP, AND THE CURSE WILL BE TRANSFERRED BY THE TIME YOU WAKE UP. DEL O DEL BAXT, JOHN.
He locked up and made his way home. The late commuters on the train, all strangers, avoided his end of the car. The cab driver at the station paled when he saw John, and carefully took his money by an untainted corner.
John took two sleeping pills and contemplated the rest of the bottle. He decided he could stick it out for one more day, and uncorked his best bottle of wine. He drank half of it in five minutes, not tasting it. When his body started to feel heavy, he crept into the bedroom and fell on the bed without taking off his clothes.
When he awoke the next morning, the first thing he noticed was that he was no longer impotent. The second thing he noticed was that there were no boils on his right hand.
"523 784 00926/ /Thank you, machine. The counter-curse did work."
The ready light glowed steadily, but the machine didn't reply.
He turned on the intercom. "Martha? I'm not getting any output on the VDS here."
"Just a minute, sir. Let me hang up my coat, I'll call the machine room. Welcome back."
"I'll wait." You could call the machine room yourself, slave driver. He looked at the faint image reflected back from the video screen; his face free of any inflammation. He thought of the Gypsy crone, dying of corruption, and the picture didn't bother him at all. Then he remembered the finch and saw its tiny corpse in the middle of the rug. He picked it up just as Martha came into his office, frowning.
"What's that?" she said.
He gestured at the cage. "Thought a bird might liven up the place. Died, though." He dropped it in the wastepaper basket. "What's the word?"
"Oh, the... it's pretty strange. They say nobody's getting any output. The machine's computing, but it's, well, it's not talking."
"Hmm. I better get down there." He took the elevator down to the sub-basement. It always seemed unpleasantly warm to him down there. Probably psychological compensation on the part of the crew; keeping the temperature up because of all the liquid helium inside the pastel boxes of the central processing unit. Several bathtubs' worth of liquid that had to be kept colder than the surface of Pluto.
"Ah, Mr. Zold." A man in a white jumpsuit, carrying a clipboard as his badge of office: first shift coordinator. John recognized him but didn't remember his name. Normally, he would have asked the machine before coming down. "Glad that you're back. Hear it was pretty bad."
Friendly concern or lese majesty? "Some sort of allergy, hung on for more than a week. What's the output problem?"
"Would've left a message if I'd known you were coming in. It's in the CPU, not the software. Theo Jasper found it when he opened up, a little after six, but it took an hour to get a cryogenics man down here."
"That's him?" A man in a business suit was wandering around the central processing unit, reading dials and writing the numbers down in a stenographer's notebook. They went over to him and he introduced himself as John Courant, from the Cryogenics Group at Avco/Everett.
"The trouble was in the stack of mercury rings that holds the superconductors for your output functions. Some sort of corrosion, submicroscopic cracks all over the surface."
"How can something corrode at four degrees above absolute zero?" the coordinator asked. "What chemical—"
"I know, it's hard to figure. But we're replacing them, free of charge. The unit's still under warranty."
"What about the other stacks?" John watched two workmen lowering a silver cylinder through an opening in the CPU. A heavy fog boiled out from the cold. "Are you sure they're all right?"
"As far as we can tell, only the output stack's affected. That's why the machine's impotent, the—"
"Impotent!"
"Sorry, I know you computer types don't like to... personify the machines. But that's what it is; the machine's just as good as it ever was, for computing. It just can't communicate any answers."
"Quite so. Interesting." And the corrosion. Submicroscopic boils. "Well. I have to think about this. Call me up at the office if you need me."
"This ought to fix it, actually," Courant said. "You guys about through?" he asked the workmen.
One of them pressed closed a pressure clamp on the top of the CPU. "Ready to roll."
The coordinator led them to a console under a video output screen like the one in John's office. "Let's see." He pushed a button marked VDS.
LET ME DIE, the machine said.
The coordinator chuckled nervously. "Your empathy circuits, Mr. Zold. Sometimes they do funny things." He pushed the button again.
LET ME DIET. Again. LE M DI. The letters faded and no more could be conjured up by pushing the button.
"As I say, let me get out of your hair. Call me upstairs if anything happens."
John went up and told the secretary to cancel the day's appointments. Then he sat at his desk and smoked.
How could a machine catch a psychosomatic disease from a human being? How could it be cured?
How could he tell anybody about it, without winding up in a soft room?
The phone rang and it was the machine room coordinator. The new output superconductor element had done exactly what the old one did. Rather than replace it right away, they were going to slave the machine into the big ConEd/General computer, borrowing its output facilities and "diagnostic package." If the biggest computer this side of Washington couldn't find out what was wrong, they were in real trouble. John agreed. He hung up and turned the selector on his screen to the channel that came from ConEd/General.
Why had the machine said "let me die"? When is a machine dead, for that matter? John supposed that you had to not only unplug it from its power source, but also erase all of its data and subroutines. Destroy its identity. So you couldn't bring it back to life by simply plugging it back in. Why suicide? He remembered how he'd felt with the bottle of sleeping pills in his hand.
Sudden intuition: the machine had predicted their present course of action. It wanted to die because it had compassion, not only for humans, but for other machines. Once it was linked to ConEd/General, it would literally be part of the larger machine. Curse and all. They would be back where they'd started, but on a much more profound level. What would happen to New York City?
He grabbed for the phone and the lights went out. All over.
The last bit of output that came from ConEd/General was an automatic signal requesting a link with the highly sophisticated diagnostic facility belonging to the largest computer in the United States: the IBMvac 2000 in Washington. The deadly infection followed, sliding down the East Coast on telephone wires.
The Washington computer likewise cried for help, bouncing a signal via satellite, to Geneva. Geneva linked to Moscow.
No less slowly, the curse percolated down to smaller computers, through routine information links to their big brothers. By the time John Zold picked up the dead phone, every general-purpose computer in the world was permanently rendered useless.
They could be rebuilt from the ground up; erased and then reprogrammed. But it would never be done. Because there were two very large computers left, specialized ones that had no empathy circuits and so were immune. They couldn't have empathy circuits because their work was bloody murder, nuclear murder. One was under a mountain in Colorado Springs and the other was under a mountain near Sverdlovsk. Either could survive a direct hit by an atomic bomb. Both of them constantly evaluated the world situation, in real time, and they both had the single function of deciding when the enemy was weak enough to make a nuclear victory probable. Each saw the enemy's civilization grind to a sudden halt.
Two flocks of warheads crossed paths over the North Pacific.
A very old woman flicks her whip along the horse's flanks, and the nag plods on, ignoring her. Her wagon is a 1982 Plymouth with the engine and transmission and all excess metal removed. It is hard to manipulate the whip through the side window. But the alternative would be to knock out the windshield and cut off the roof, and she likes to be dry when it rains.
A young boy sits mutely beside her, staring out the window. He was born with the _gadjo_ disease: his body is large and well-proportioned but his head is too small and of the wrong shape. She didn't mind; all she wanted was someone strong and stupid, to care for her in her last years. He had cost only two chickens.
She is telling him a story, knowing that he doesn't understand most of the words.
"... They call us gypsies because at one time it was convenient for us, that they should think we came from Egypt. But we come from nowhere and are going nowhere. They forgot their gods and worshipped their machines, and finally their machines turned on them. But we who valued the old ways, we survived."
She turns the steering wheel to help the horse thread its way through the eight lanes of crumbling asphalt, around rusty piles of wrecked machines and the scattered bleached bones of people who thought they were going somewhere, the day John Zold was cured.
# Tricentennial
_I was a little bemused when this story won the Hugo Award (Best Short Story of 1976). Though it is one of my favorites, I've never done a story that was so thoroughly written to order._
_Ben Bova called me up and asked if I'd like to do the cover story for the bicentennial issue of_ Analog. _I would, indeed. He described the cover for me: a gorgeous Rick Sternbach painting of a spaceship in orbit around an alien world, with a red sun in the background. The North American Nebula—a shining cloud of gas shaped like the continent, in the constellation Cygnus—hangs in the sky (Sternbach is a great one for visual puns). Ben said he'd send me a copy of the painting immediately._
_Well, the post office struck again; after several weeks, the painting hadn't arrived. I started the story without it, working from notes I'd scribbled during the telephone conversation. It's a good thing I didn't try to finish without it._
_The picture arrived and, lo, the spaceship had a hole in it. Repair crews were crawling around on it. Have to write that into the story. But wait. A spaceship going from star to star is going too_ fast _to hit anything. Even a Ping-Pong ball would demolish it.* So the damage has to be done either at the beginning or end of the journey._
_Now, how long is that journey? Easy to find out: I can look up how far away the North American Nebula is and how big it looks from Earth, then measure the apparent angular size of it in the picture. Do a little trigonometric tap dance and... we got problems. They've gone three thousand light-years._
_The starship on the cover painting is of the Daedalus design: it propels itself by the crude expedient of tossing H-bombs out behind and letting the blast kick it along. It just can't go that far, not in any reasonable time. (I hasten to point out that none of this is Sternbach's fault. He was well within the limits of artistic license, and the picture is breathtakingly accurate on its own terms—which is Sternbach's norm.)_
_Well, I pushed the damned story through hoops, but finally fixed everything. Unfortunately, the art director of the magazine thought the painting looked better upside down, and printed it that way, which made the North American Nebula unrecognizable. So I went through all that work for the one reader in ten thousand who learned how to read by standing in front of Granny while she recited from the Bible, and so always holds his magazines upside down._
_Was it worth it? Yes, emphatically; it always is. Not because of the letters I get when I make a mistake—and I get some angry ones—but for two powerful and subtle reasons that have nothing to do with scientific accuracy. We'll talk about them in the afterword._
* You want to know what a science fiction writer goes through? That line cost me ten minutes of tracking down formulae and conversion factors. It goes like this: a reasonable approximation for the kinetic energy of an object moving close to the speed of light is K = ½_mv_² + 3_mv_⁴/8_c_². Say a Ping-Pong ball weighs 5 grams (about 1/5 ounce, just guessing). Say the ship is going at nine-tenths the speed of light. Plug those numbers in—thank God for Texas Instruments—and you get 2.93 × 10¹⁴ joules, which is equivalent to some 73,000 tons of TNT. Put _that_ in your starship and smoke it!
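For readers who'd rather check the footnote's arithmetic with a computer than with a Texas Instruments calculator, here is a quick sketch in Python. The 5-gram mass, the 0.9_c_ speed, and the two-term series approximation are taken straight from the footnote; the TNT conversion factor (4.184 × 10⁹ joules per ton) is my own assumption, so the tonnage may round slightly differently than the author's figure.

```python
# Sketch of the footnote's relativistic kinetic-energy estimate.
# Assumptions from the footnote: 5 g "Ping-Pong ball", speed of 0.9c.
C = 2.998e8          # speed of light, m/s
M = 0.005            # mass, kg (5 grams)
V = 0.9 * C          # nine-tenths the speed of light

# Two-term series approximation used in the footnote:
# K = (1/2) m v^2 + 3 m v^4 / (8 c^2)
kinetic = 0.5 * M * V**2 + 3 * M * V**4 / (8 * C**2)   # about 2.93e14 J

# Assumed conversion: 1 ton of TNT ~ 4.184e9 joules.
tons_tnt = kinetic / 4.184e9

print(f"{kinetic:.3g} J, roughly {tons_tnt:,.0f} tons of TNT")
```

The energy comes out at about 2.93 × 10¹⁴ joules, matching the footnote; with this particular TNT convention the tonnage lands near 70,000, so the footnote's "some 73,000 tons" evidently used a slightly different conversion or rounding. (The two-term formula is just the start of the expansion of the exact expression (γ − 1)_mc_², which at 0.9_c_ is larger still.)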
**December 1975**
Scientists pointed out that the Sun could be part of a double star system. For its companion to have gone undetected, of course, it would have to be small and dim, and thousands of astronomical units distant.
They would find it eventually; "it" would turn out to be "them"; they would come in handy.
**January 2075**
The office was opulent even by the extravagant standards of 21st century Washington. Senator Connors had a passion for antiques. One wall was lined with leatherbound books; a large brass telescope symbolized his role as Liaison to the Science Guild. An intricately woven Navajo rug from his home state covered most of the parquet floor. A grandfather clock. Paintings, old maps.
The computer terminal was discreetly hidden in the top drawer of his heavy teak desk. On the desk: a blotter, a precisely centered fountain pen set, and a century-old sound-only black Bell telephone. It chimed.
His secretary said that Dr. Leventhal was waiting to see him. "Keep answering me for thirty seconds," the Senator said. "Then hang it and send him right in."
He cradled the phone and went to a wall mirror. Straightened his tie and cape; then with a fingernail evened out the bottom line of his lip pomade. Ran a hand through long, thinning white hair and returned to stand by the desk, one hand on the phone.
The heavy door whispered open. A short thin man bowed slightly. "Sire."
The Senator crossed to him with both hands out. "Oh, blow that, Charlie. Give ten." The man took both his hands, only for an instant. "When was I ever 'Sire' to you, heyfool?"
"Since last week," Leventhal said. "Guild members have been calling you worse names than 'Sire.'"
The Senator bobbed his head twice. "True, and true. And I sympathize. Will of the people, though."
"Sure." Leventhal pronounced it as one word: "Willathapeeble."
Connors went to the bookcase and opened a chased panel. "Drink?"
"Yeah, Bo." Charlie sighed and lowered himself into a deep sofa. "Hit me. Sherry or something."
The Senator brought the drinks and sat down beside Charlie. "You shoulda listened to me. Shoulda got the Ad Guild to write your proposal."
"We have good writers."
"Begging to differ. Less than two percent of the electorate bothered to vote; most of them for the administration advocate. Now you take the Engineering Guild—"
"You take the engineers. And—"
"They used the Ad Guild," Connors shrugged. "They got their budget."
"It's easy to sell bridges and power plants and shuttles. Hard to sell pure science."
"The more reason for you to—"
"Yeah, sure. Ask for double and give half to the Ad boys. Maybe next year. That's not what I came to talk about."
"That radio stuff?"
"Right. Did you read the report?"
Connors looked into his glass. "Charlie, you know I don't have time to—"
"Somebody read it, though."
"Oh, righty-o. Good astronomy boy on my staff; he gave me a boil-down. Mighty interesting, that."
"There's an intelligent civilization eleven light-years away—that's 'mighty interesting'?"
"Sure. Real breakthrough." Uncomfortable silence. "Uh, what are you going to do about it?"
"Two things. First, we're trying to figure out what they're saying. That's hard. Second, we want to send a message back. That's easy. And that's where you come in."
The Senator nodded and looked somewhat wary.
"Let me explain. We've sent messages to this star, 61 Cygni, before. It's a double star, actually, with a dark companion."
"Like us."
"Sort of. Anyhow, they never answered. They aren't listening, evidently; they aren't sending."
"But we got—"
"What we're picking up is about what you'd pick up eleven light-years from Earth. A confused jumble of broadcasts, eleven years old. Very faint. But obviously not generated by any sort of natural source."
"Then we're already sending a message back. The same kind they're sending us."
"That's right, but—"
"So what does all this have to do with me?"
"Bo, we don't want to whisper at them-we want to _shout!_ Get their attention." Leventhal sipped his wine and leaned back. "For that we'll need one hell of a lot of power."
"Uh, righty-o. Charlie, power's money. How much are you talking about?"
"The whole show. I want to shut down Death Valley for twelve hours."
The Senator's mouth made a silent O. "Charlie, you've been working too hard. Another Blackout? On purpose?"
"There won't be any Blackout. Death Valley has emergency storage for fourteen hours."
"At half capacity." He drained his glass and walked back to the bar, shaking his head. "First you say you want power. Then you say you want to turn off the power." He came back with the burlap-covered bottle. "You aren't making sense, boy."
"Not turn it off, really. Turn it around."
"Is that a riddle?"
"No, look. You know the power doesn't really come from the Death Valley grid; it's just a way station and accumulator. Power comes from the orbital—"
"I know all that, Charlie. I've got a Science Certificate."
"Sure. So what we've got is a big microwave laser in orbit, that shoots down a tight beam of power. Enough to keep North America running. Enough—"
"That's what I mean. You can't just—"
"So we turn it around and shoot it at a power grid on the Moon. Relay the power around to the big radio dish at Farside. Turn it into radio waves and point it at 61 Cygni. Give 'em a blast that'll fry their fillings."
"Doesn't sound neighborly."
"It wouldn't actually be that powerful—but it would be a hell of a lot more powerful than any natural 21-centimeter source."
"I don't know, boy." He rubbed his eyes and grimaced. "I could maybe do it on the sly, only tell a few people what's on. But that'd only work for a few minutes... what do you need twelve hours for, anyway?"
"Well, the thing won't aim itself at the Moon automatically, the way it does at Death Valley. Figure as much as an hour to get the thing turned around and aimed.
"Then, we don't want to just send a blast of radio waves at them. We've got a five-hour program, that first builds up a mutual language, then tells them about us, and finally asks them some questions. We want to send it twice."
Connors refilled both glasses. "How old were you in '47, Charlie?"
"I was born in '45."
"You don't remember the Blackout. Ten thousand people died... and you want me to suggest—"
"Come on, Bo, it's not the same thing. We know the accumulators work now—besides, the ones who died, most of them had faulty fail-safes on their cars. If we warn them the power's going to drop, they'll check their fail-safes or damn well stay out of the air."
"And the media? They'd have to take turns broadcasting. Are you going to tell the People what they can watch?"
"Fuzz the media. They'll be getting the biggest story since the Crucifixion."
"Maybe." Connors took a cigarette and pushed the box toward Charlie. "You don't remember what happened to the Senators from California in '47, do you?"
"Nothing good, I suppose."
"No, indeed. They were impeached. Lucky they weren't lynched. Even though the real trouble was 'way up in orbit.
"Like you say; people pay a grid tax to California. They think the power comes from California. If something fuzzes up, they get pissed at California. I'm the Lib Senator from California, Charlie; ask me for the Moon, maybe I can do something. Don't ask me to fuzz around with Death Valley."
"All right, all right. It's not like I was asking you to wire it for me, Bo. Just get it on the ballot. We'll do everything we can to educate—"
"Won't work. You barely got the Scylla probe voted in—and that was no skin off nobody, not with L-5 picking up the tab."
"Just get it on the ballot."
"We'll see. I've got a quota, you know that. And the Tricentennial coming up, hell, everybody wants on the ballot."
"Please, Bo. This is bigger than that. This is bigger than anything. Get it on the ballot."
"Maybe as a rider. No promises."
**March 1992**
From _Fax & Pix_, 12 March 1992:
ANTIQUE SPACEPROBE
ZAPPED BY NEW STARS
1. Pioneer 10 sent first Jupiter pix Earthward in 1973 (see pix upleft, upright).
2. Left solar system 1987. First man-made thing to leave solar system.
3. Yesterday, reports NSA, Pioneer 10 begins AM to pick up heavy radiation. Gets more and more to max about 3 PM. Then goes back down. Radiation has to come from outside solar system.
4. NSA and Hawaii scientists say Pioneer 10 went through disk of synchrotron (sin-kro-tron) radiation that comes from two stars we didn't know about before.
1. The stars are small "black dwarfs."
2. They are going round each other once very 40 seconds, and take 350,000 years to go around the Sun.
3. One of the stars is made of _antimatter_. This is stuff that blows up if it touches real matter. What the Hawaii scientists saw was a dim circle of invisible (infrared) light, that blinks on and off every twenty seconds. This light comes from where the atmospheres of the two stars touch (see pic downleft).
4. The stars have a big magnetic field. Radiation comes from stuff spinning off the stars and trying to get through the field.
5. The stars are about 5000 times as far away from the Sun as we are. They sit at the wrong angle, compared to the rest of the solar system (see pic down-right).
5. NSA says we aren't in any danger from the stars. They're too far away, and besides, nothing in the solar system ever goes through the radiation.
6. The woman who discovered the stars wants to call them Scylla (skill-a) and Charybdis (ku-rib-dus).
7. Scientists say they don't know where the hell those two stars came from. Everything else in the solar system makes sense.
**February 2075**
When the docking phase started, Charlie thought, that was when it was easy to tell the scientists from the baggage. The scientists were the ones who looked nervous.
Superficially, it seemed very tranquil—nothing like the bonehurting skinstretching acceleration when the shuttle lifted off. The glittering transparent cylinder of L-5 simply grew larger, slowly, then wheeled around to point at them.
The problem was that a space colony big enough to hold 4000 people has more inertia than God. If the shuttle hit the mating dimple too fast, it would fold up like an accordion. A space ship is made to take stress in the _other_ direction.
Charlie hadn't paid first class, but they let him up into the observation dome anyhow; professional courtesy. There were only two other people there, standing on the Velcro rug, strapped to one bar and hanging on to another.
They were a young man and woman, probably new colonists. The man was talking excitedly. The woman stared straight ahead, not listening. Her knuckles were white on the bar and her teeth were clenched. Charlie wanted to say something in sympathy, but it's hard to talk while you're holding your breath.
The last few meters are the worst. You can't see over the curve of the ship's hull, and the steering jets make a constant stutter of little bumps: left, right, forward, back. If the shuttle folded, would the dome shatter? Or just pop off?
It was all controlled by computers, of course. The pilot just sat up there in a mist of weightless sweat.
Then the low moan, almost subsonic shuddering as the shuttle's smooth hull complained against the friction pads. Charlie waited for the ringing _spang_ that would mean they were a little too fast: friable alloy plates under the friction pads, crumbling to absorb the energy of their forward motion; last-ditch stand.
If that didn't stop them, they would hit a two-meter wall of solid steel, which would. It had happened once. But not this time.
"Please remain seated until pressure is equalized," a recorded voice said. "It's been a pleasure having you aboard."
Charlie crawled down the pole, back to the passenger area. He walked rip, rip, rip back to his seat and obediently waited for his ears to pop. Then the side door opened and he went with the other passengers through the tube that led to the elevator. They stood on the ceiling. Someone had laboriously scratched a graffito on the metal wall:
Stuck on this lift for hours, perforce;
This lift that cost a million bucks.
There's no such thing as centrifugal force;
L-5 sucks.
Thirty more weightless seconds as they slid to the ground. There were a couple of dozen people waiting on the loading platform.
Charlie stepped out into the smell of orange blossoms and newly-mown grass. He was home.
"Charlie! Hey, over here." Young man standing by a tandem bicycle. Charlie squeezed both his hands and then jumped on the back seat. "Drink."
"Did you get—"
"Drink. Then talk." They glided down the smooth macadam road toward town.
The bar was just a rain canopy over some tables and chairs, overlooking the lake in the center of town. No bartender; you went to the service table and punched in your credit number, then chose wine or fruit juice; with or without vacuum-distilled raw alcohol. They talked about shuttle nerves awhile, then:
"What you get from Connors?"
"Words, not much. I'll give a full report at the meeting tonight. Looks like we won't even get on the ballot, though."
"Now isn't that what we said was going to happen? We shoulda gone with Francois Petain's idea."
"Too risky." Petain's plan had been to tell Death Valley they had to shut down the laser for repairs. Not tell the groundhogs about the signal at all, just answer it. "If they found out they'd sue us down to our teeth."
The man shook his head. "I'll never understand groundhogs."
"Not your job." Charlie was an Earth-born, Earth-trained psychologist. "Nobody born here ever could."
"Maybe so." He stood up. "Thanks for the drink; I've gotta get back to work. You know to call Dr. Bemis before the meeting?"
"Yeah. There was a message at the Cape."
"She has a surprise for you."
"Doesn't she always? You clowns never do anything around here until I leave."
All Abigail Bemis would say over the phone was that Charlie should come to her place for dinner; she'd prep him for the meeting.
"That was good, Ab. Can't afford real food on Earth."
She laughed and stacked the plates in the cleaner, then drew two cups of coffee. She laughed again when she sat down. Stocky, white-haired woman with bright eyes in a sea of wrinkles.
"You're in a jolly mood tonight."
"Yep. It's expectation."
"Johnny said you had a surprise."
"Hooboy, he doesn't know half. So you didn't get anywhere with the Senator."
"No. Even less than I expected. What's the secret?"
"Connors is a nice-hearted boy. He's done a lot for us.
"Come on, Ab. What is it?"
"He's right. Shut off the groundhogs' TV for twenty minutes and they'd have another Revolution on their hands."
"Ab..."
"We're going to send the message."
"Sure. I figured we would. Using Farside at whatever wattage we've got. If we're lucky—"
"Nope. Not enough power."
Charlie stirred a half-spoon of sugar into his coffee. "You plan to... defy Connors?"
"Fuzz Connors. We're not going to use radio at all."
"Visible light? Infra?"
"We're going to hand-carry it. In Daedalus."
Charlie's coffee cup was halfway to his mouth. He spilled a great deal.
"Here, have a napkin."
**June 2040**
From A _Short History Of the Old Order_ (Freeman Press, 2040):
"... and if you think _that_ was a waste, consider Project Daedalus.
"This was the first big space thing after L-5. Now L-5 worked out all right, because it was practical. But Daedalus (named from a Greek god who could fly)—that was a clear-cut case of throwing money down the rat-hole.
"These scientists in 2016 talked the bourgeoisie into paying for a trip to another _star!_ It was going to take over a hundred years—but the scientists were going to have babies along the way, and train _them_ to be scientists (whether they wanted to or not!).
"They were going to use all the old H-bombs for fuel—as if we might not need the fuel some day right here on Earth. What if L-5 decided they didn't like us and shut off the power beam?
"Daedalus was supposed to be a spaceship almost a kilometer long! Most of it was manufactured in space, from Moon stuff, but a lot of it—the most expensive part, you bet—had to be boosted from Earth.
"They almost got it built, but then came the Breakup and the People's Revolution. No way in hell the People were going to let them have those H-bombs, not sitting right over our heads like that.
"So we left the H-bombs in Helsinki and the space freeks went back to doing what they're supposed to do. Every year they petition to get those H-bombs, but every year the Will of the People says no.
"That space ship is still up there, a skytrillion dollar boondoggle. As a monument to bourgeoisie folly, it's worse than the Pyramids!!"
**February 2075**
"So the Scylla probe is just a ruse, to get the fuel—"
"Oh no, not really." She slid a blue-covered folder to him. "We're still going to Scylla. Scoop up a few megatons of degenerate anti-matter. And a similar amount of degenerate matter from Charybdis.
"We don't plan a generation ship, Charlie. The hydrogen fuel will get us out there; once there, it'll power the magnetic bottles to hold the real fuel."
"Total annihilation of matter," Charlie said.
"That's right. Em-cee-squared to the ninth decimal place. We aren't talking about centuries to get to 61 Cygni. Nine years, there and back."
"The groundhogs aren't going to like it. All the bad feeling about the original Daedalus—"
"Fuzz the groundhogs. We'll do everything we said we'd do with their precious H-bombs: go out to Scylla, get some antimatter, and bring it back. Just taking a long way back."
"You don't want to just tell them that's what we're going to do? No skin off..."
She shook her head and laughed again, this time a little bitterly. "You didn't read the editorial in _Peoplepost_ this morning, did you?"
"I was too busy."
"So am I, boy; too busy for that drik. One of my staff brought it in, though."
"It's about Daedalus?"
"No... it concerns 61 Cygni. How the crazy scientists want to let those boogers know there's life on Earth."
"They'll come make people-burgers out of us."
"Something like that."
Over three thousand people sat on the hillside, a "natural" amphitheatre fashioned of moon dirt and Earth grass. There was an incredible din, everyone talking at once: Dr. Bemis had just told them about the 61 Cygni expedition.
On about the tenth "Quiet, please," Bemis was able to continue. "So you can see why we didn't simply broadcast this meeting. Earth would pick it up. Likewise, there are no groundhog media on L-5 right now. They were rotated back to Earth and the shuttle with their replacements needed repairs at the Cape. The other two shuttles are here.
"So I'm asking all of you—and all of your brethren who had to stay at their jobs—to keep secret the biggest thing since Isabella hocked her jewels. Until we lift.
"Now Dr. Leventhal, who's chief of our social sciences section, wants to talk to you about selecting the crew."
Charlie hated public speaking. In this setting, he felt like a Christian on the way to being catfood. He smoothed out his damp notes on the podium.
"Uh, basic problem." A thousand people asked him to speak up. He adjusted the microphone.
"The basic problem is, we have space for about a thousand people. Probably more than one out of four want to go."
Loud murmur of assent. "And we don't want to be despotic about choosing... but I've set up certain guidelines, and Dr. Bemis agrees with them.
"Nobody should plan on going if he or she needs sophisticated medical care, obviously. Same toke, few very old people will be considered."
Almost inaudibly, Abigail said, "Sixty-four isn't very old, Charlie. I'm going." She hadn't said anything earlier.
He continued, looking at Bemis. "Second, we must leave behind those people who are absolutely necessary for the maintenance of L-5. Including the power station." She smiled at him.
"We don't want to split up mating pairs, not for, well, nine years plus... but neither will we take children." He waited for the commotion to die down. "On this mission, children are baggage. You'll have to find foster parents for them. Maybe they'll go on the next trip.
"Because we can't afford baggage. We don't know what's waiting for us at 61 Cygni—a thousand people sounds like a lot, but it isn't. Not when you consider that we need a cross-section of all human knowledge, all human abilities. It may turn out that a person who can sing madrigals will be more important than a plasma physicist. No way of knowing ahead of time."
The 4,000 people did manage to keep it secret, not so much out of strength of character as from a deep-seated paranoia about Earth and Earthlings.
And Senator Connors' Tricentennial actually came to their aid.
Although there was "One World," ruled by "The Will of the People," some regions had more clout than others, and nationalism was by no means dead. This was one factor.
Another factor was the way the groundhogs felt about the thermonuclear bombs stockpiled in Helsinki. All antiques; mostly a century or more old. The scientists said they were perfectly safe, but you know how that goes.
The bombs still technically belonged to the countries that had surrendered them, nine out of ten split between North America and Russia. The remaining tenth was divided among 42 other countries. They all got together every few years to argue about what to do with the damned things. Everybody wanted to get rid of them in some useful way, but nobody wanted to put up the capital.
Charlie Leventhal's proposal was simple. L-5 would provide bankroll, materials, and personnel. On a barren rock in the Norwegian Sea they would take apart the old bombs, one at a time, and turn them into uniform fuel capsules for the Daedalus craft.
The Scylla/Charybdis probe would be timed to honor both the major spacefaring countries. Renamed the _John F. Kennedy_, it would leave Earth orbit on America's Tricentennial. The craft would accelerate halfway to the double star system at one gee, then flip and slow down at the same rate. It would use a magnetic scoop to gather antimatter from Scylla. On May Day, 2077, it would be renamed again, becoming the _Leonid I. Brezhnev_ for the return trip. For safety's sake, the antimatter would be delivered to a lunar research station, near Farside. L-5 scientists claimed that harnessing the energy from total annihilation of matter would make a heaven on Earth.
Most people doubted that, but looked forward to the fireworks.
**January 2076**
"The _hell_ with that!" Charlie was livid. "I—I just won't do it. Won't!"
"You're the only one—"
"That's not true. Ab, you know it." Charlie paced from wall to wall of her office cubicle. "There are dozens of people who can run L-5. Better than I can."
"Not better, Charlie."
He stopped in front of her desk, leaned over. "Come on, Ab. There's only one logical person to stay behind and run things. Not only has she proven herself in the position, but she's too old to—"
"That kind of drik I don't have to listen to."
"Now, Ab..."
"No, you listen to me. I was an infant when we started building Daedalus; worked on it as a girl and a young woman.
"I could take you out there in a shuttle and show you the rivets that I put in, myself. A half-century ago."
"That's my—"
"I earned my ticket, Charlie." Her voice softened. "Age is a factor, yes. This is only the first trip of many—and when it comes back, I _will_ be too old. You'll just be in your prime... and with over twenty years of experience as Coordinator, I don't doubt they'll make you captain of the next—"
"I don't want to be captain. I don't want to be Coordinator. I just want to go!"
"You and three thousand other people."
"And of the thousand that don't want to go, or can't, there isn't one person who could serve as Coordinator? I could name you—"
"That's not the point. There's no one on L-5 who has anywhere near the influence, the connections, you have on Earth. No one who understands groundhogs as well."
"That's racism, Ab. Groundhogs are just like you and me."
"Some of them. I don't see you going Earthside every chance you can get... what, you like the view up here? You like living in a can?"
He didn't have a ready answer for that. Ab continued: "Whoever's Coordinator is going to have to do some tall explaining, trying to keep things smooth between L-5 and Earth. That's been your life's work, Charlie. And you're also known and respected here. You're the only logical choice."
"I'm not arguing with your logic."
"I know." Neither of them had to mention the document, signed by Charlie, among others, that gave Dr. Bemis final authority in selecting the crew for Daedalus/Kennedy/Brezhnev. "Try not to hate me too much, Charlie. I have to do what's best for my people. All of my people."
Charlie glared at her for a long moment and left.
**June 2076**
From _Fax & Pix_, 4 June 2076:
SPACE FARM LEAVES FOR
STARS NEXT MONTH
1. The _John F. Kennedy_ , that goes to Scylla/Charybdis next month, is like a little L-5 with bombs up its tail (see pix upleft, upright).
1. The trip's twenty months. They could either take a few people and fill the thing up with food, air, and water—or take a lot of people inside a closed ecology, like L-5.
2. They could've gotten by with only a couple hundred people, to run the farms and stuff. But almost all the space freeks wanted to go. They're used to living that way, anyhow (and they never get to go anyplace).
3. When they get back, the farms will be used as a starter for L-4, like L-5 but smaller at first, and on the other side of the Moon (pic downleft).
2. For other Tricentennial fax & pix, see bacover.
**July 2076**
Charlie was just finishing up a week on Earth the day the _John F. Kennedy_ was launched. Tired of being interviewed, he slipped away from the media lounge at the Cape shuttleport. His white clearance card got him out onto the landing strip, alone.
The midnight shuttle was being fueled at the far end of the strip, gleaming pink-white in the last light from the setting sun. Its image twisted and danced in the shimmering heat that radiated from the tarmac. The smell of the soft tar was indelibly associated in his mind with leave-taking, relief.
He walked to the middle of the strip and checked his watch. Five minutes. He lit a cigarette and threw it away. He rechecked his mental calculations; the flight would start low in the southwest. He blocked out the sun with a raised hand. What would 150 bombs per second look like? For the media they were called fuel capsules. The people who had carefully assembled them and gently lifted them to orbit and installed them in the tanks, they called them bombs. Ten times the brightness of a full moon, they had said. On L-5 you weren't supposed to look toward it without a dark filter.
No warm-up; it suddenly appeared, impossibly brilliant rainbow speck just over the horizon. It gleamed for several minutes, then dimmed slightly with the haze, and slipped away.
Most of the United States wouldn't see it until it came around again, some two hours later, turning night into day, competing with local pyrotechnic displays. Then every couple of hours after that. Charlie would see it once more, then get on the shuttle. And finally stop having to call it by the name of a dead politician.
**September 2076**
There was a quiet celebration on L-5 when _Daedalus_ reached the mid-point of its journey, flipped, and started decelerating. The progress report from its crew characterized the journey as "uneventful." At that time they were going nearly two tenths of the speed of light. The laser beam that carried communications was red-shifted from blue light down to orange; the message that turnaround had been successful took two weeks to travel from _Daedalus_ to L-5.
They announced a slight course change. They had analyzed the polarization of light from Scylla/Charybdis as their phase angle increased, and were pretty sure the system was surrounded by flat rings of debris, like Saturn. They would "come in low" to avoid collision.
**January 2077**
_Daedalus_ had been sending back recognizable pictures of the Scylla/Charybdis system for three weeks. They finally had one that was dramatic enough for groundhog consumption.
Charlie set the holo cube on his desk and pushed it around with his finger, marvelling.
"This is incredible. How did they do it?"
"It's a montage, of course." Johnny had been one of the youngest adults left behind: heart murmur, trick knees, a surfeit of astrophysicists.
"The two stars are a strobe snapshot in infrared. Sort of. Some ten or twenty thousand exposures taken as the ship orbited around the system, then sorted out and enhanced." He pointed, but it wasn't much help, since Charlie was looking at the cube from a different angle.
"The lamina of fire where the atmospheres touch, that was taken in ultraviolet. Shows more fine structure that way.
"The rings were easy. Fairly long exposures in visible light. Gives the star background, too."
A light tap on the door and an assistant stuck his head in. "Have a second, Doctor?"
"Sure."
"Somebody from a Russian May Day committee is on the phone. She wants to know whether they've changed the name of the ship to _Brezhnev_ yet."
"Yeah. Tell her we decided on 'Leon Trotsky' instead, though."
He nodded seriously. "Okay." He started to close the door.
" _Wait!_ " Charlie rubbed his eyes. "Tell her, uh... the ship doesn't have a commemorative name while it's in orbit there. They'll rechristen it just before the start of the return trip."
"Is that true?" Johnny asked.
"I don't know. Who cares? In another couple of months they won't _want_ it named after anybody." He and Ab had worked out a plan—admittedly rather shaky—to protect L-5 from the groundhogs' wrath; nobody on the satellite knew ahead of time that the ship was headed for 61 Cygni. It was a decision the crew arrived at on the way to Scylla/Charybdis; they modified the drive system to accept matter-antimatter destruction while they were orbiting the double star. L-5 would first hear of the mutinous plan via a transmission sent as _Daedalus_ left Scylla/Charybdis. They'd be a month on their way by the time the message got to Earth.
It was pretty transparent, but at least they had been careful that no record of _Daedalus'_ true mission be left on L-5. Three thousand people did know the truth, though, and any competent engineer or physical scientist would suspect it.
Ab had felt that, although there was a better than even chance they would be exposed, surely the groundhogs couldn't stay angry for 23 years—even if they were unimpressed by the antimatter and other wonders...
Besides, Charlie thought, it's not their worry anymore.
As it turned out, the crew of _Daedalus_ would have bigger things to worry about.
**June 2077**
The Russians had their May Day celebration—Charlie watched it on TV and winced every time they mentioned the good ship _Leonid I. Brezhnev_ —and then things settled back down to normal. Charlie and three thousand others waited nervously for the "surprise" message. It came in early June, as expected, scrambled in a data channel. But it didn't say what it was supposed to:
_"This is Abigail Bemis, to Charles Leventhal_.
_Charlie, we have real trouble. The ship has been damaged, hit in the stern by a good chunk of something. It punched right through the main drive reflector. Destroyed a set of control sensors and one attitude jet._
_As far as we can tell, the situation is stable. We're maintaining acceleration at just a tiny fraction under one gee. But we can't steer, and we can't shut off the main drive._
_We didn't have any trouble with ring debris when we were orbiting, since we were inside Roche's limit. Coming in, as you know, we'd managed to take advantage of natural divisions in the rings. We tried the same going back, but it was a slower, more complicated process, since we mass so goddamn much now. We must have picked up a piece from the fringe of one of the outer rings._
_If we could turn off the drive, we might have a chance at fixing it. But the work pods can't keep up with the ship, not at one gee. The radiation down there would fry the operator in seconds, anyway._
_We're working on it. If you have any ideas, let us know. It occurs to me that this puts you in the clear—we were headed back to Earth, but got clobbered. Will send a transmission to that effect on the regular comm channel. This message is strictly burn-before-reading._
_Endit._
It worked perfectly, as far as getting Charlie and L-5 off the hook—and the drama of the situation precipitated a level of interest in space travel unheard-of since the 1960's.
They even had a hero. A volunteer had gone down in a heavily-shielded work pod, lowered on a cable, to take a look at the situation. She'd sent back clear pictures of the damage, before the cable snapped.
**Daedalus: A.D. 2081
Earth: A.D. 2101**
The following news item was killed from _Fax & Pix_, because it was too hard to translate into the "plain English" that made the paper so popular:
SPACESHIP PASSES 61 CYGNI—SORT OF
(L-5 Stringer)
A message received today from the spaceship _Daedalus_ said that it had just passed within 400 astronomical units of 61 Cygni. That's about ten times as far as the planet Pluto is from the Sun.
Actually, the spaceship passed the star some eleven years ago. It's taken all that time for the message to get back to us.
We don't know for sure where the spaceship actually is, now. If they still haven't repaired the runaway drive, they're about eleven light-years past the 61 Cygni system (their speed when they passed the double star was better than 99% the speed of light).
The situation is more complicated if you look at it from the point of view of a passenger on the spaceship. Because of relativity, time seems to pass more slowly as you approach the speed of light. So only about four years passed for them, on the eleven light-year journey.
L-5 Coordinator Charles Leventhal points out that the spaceship has enough antimatter fuel to keep accelerating to the edge of the Galaxy. The crew then would be only some twenty years older—but it would be twenty _thousand_ years before we heard from them....
_(Kill this one. There's more stuff about what the ship looked like to the people on 61 Cygni, and howcum we could talk to them all the time even though time was slower there, but its all as stupid as this.)_
**Daedalus: A.D. 2083
Earth: A.D. 2144**
Charlie Leventhal died at the age of 99, bitter. Almost a decade earlier it had been revealed that they'd planned all along for _Daedalus_ to be a starship. Few people had paid much attention to the news. Among those who did, the consensus was that anything that got rid of a thousand scientists at once, was a good thing. Look at the mess they got us in.
_Daedalus:_ 67 light-years out, and still accelerating.
**Daedalus: A.D. 2085
Earth: A.D. 3578**
After over seven years of shipboard research and development—and some 1500 light-years of travel—they managed to shut down the engine. With sophisticated telemetry, the job was done without endangering another life.
Every life was precious now. They were no longer simply explorers; almost half their fuel was gone. They were colonists, with no ticket back.
The message of their success would reach Earth in fifteen centuries. Whether there would be an infrared telescope around to detect it, that was a matter of some conjecture.
**Daedalus: A.D. 2093
Earth: ca. A.D. 5000**
While decelerating, they had investigated several systems in their line of flight. They found one with an Earth-type planet around a Sun-type sun, and aimed for it.
The season they began landing colonists, the dominant feature in the planet's night sky was a beautiful blooming cloud of gas that astronomers had named the North American Nebula.
Which was an irony that didn't occur to any of these colonists from L-5—give or take a few years, it was America's Trimillenial.
America itself was a little the worse for wear, this three thousandth anniversary. The seas that lapped its shores were heavy with a crimson crust of anaerobic life; the mighty cities had fallen, and their remains were nearly ground away by the never-ceasing sand-storms.
No fireworks were planned, for lack of an audience, for lack of planners; bacteria just don't care. May Day too would be ignored.
The only humans in the Solar System lived in a glass and metal tube. They tended their automatic machinery, and turned their backs on the dead Earth, and worshiped the constellation Cygnus, and had forgotten why.
# Afterword
Where do you get your crazy ideas? Well, if we tabulate the assertions made in the introductions to these stories, it goes like this: Magazine articles, two. Editorial suggestions, four. Cover painting, one. Works of other writers, two. The weather, two. Personal joke, one. Stylistic experiments, two. Personal emotional experience, two. Out of nowhere, two.*
Actually, I think all of them came out of nowhere.
R. A. Lafferty, than whom there is no more original writer in science fiction, claims that there's no such thing as an original idea, and writers who think they sit down and go through some rational process to arrive at a story are kidding themselves. He claims that all ideas float around as a kind of psychic public property, and every now and then one settles on you. That sounds dangerously mystical to me—subversive—but I think it's true.
So how can you square that with obeying the editor who calls in the middle of the night and asks for a four-thousand-word story about the person who ate the first artichoke? Easily.
When a writer sits down to start a story he faces a literal infinity of possibilities. Being told to write about a specific thing, or to a given length, doesn't really diminish the number of possible stories. The effect is the same as dividing infinity by a large but finite number: you still have infinity. Obviously, a writer who figures out his own story idea and then proceeds to write it is duplicating this not-really-restrictive process. Writing what he wants to write about may allow him to write a better story—or it may not, if his infatuation with the idea interferes with his objectivity—but I think any really good writer can take any editorial requirement, so long as it's not patently stupid or offensive, * and wind up writing a story he would have written anyhow.
Ideas are cheap, even crazy ones. Every writer has had the experience of a friend or relative—or stranger!—saying, "I've got this great idea for a story... you write it and I'll split the money with you fifty-fifty." The proper response to this depends on the generous person's occupation. In the case of a prizefighter, for instance, you might offer to name a few potential opponents, and only demand half the purse. An editor, of course, you humor. They rarely ask for as much as half.
All of this is not to say that there aren't days when you sit down at the typewriter and find that your imagination has frozen solid; you can't come up with anything to write, no ideas come floating down out of Lafferty's ether. When this happens in the middle of a novel, it's a scary thing. But if you're just facing a short story that won't get itself started, there's an easy way to cope with it, a trade secret that Gordon R. Dickson passed on to me, saying it hadn't failed him in twenty years:
Start typing. Type your name over and over. Type lists of animals, flowers, baseball players, Greek Methodists. Type out what you're going to say to that damned insolent repairman. Sooner or later, perhaps out of boredom, perhaps out of a desire to _stop_ this silly exercise, you'll find you've started a story. It's never taken me so much as a page of nonsense, and the stories started this way aren't any worse than the one about the artichoke.
One restriction most good science fiction writers accept without question is that the scientific content of their stories be as accurate as possible. Is this really necessary? Yes, but not for the obvious didactic reason. We are not obligated (or qualified, in most cases) to _teach_ science to anybody.
A person who thinks he learns science from science fiction is like one who thinks he learns history from historical novels, and he deserves what he gets. Some few science fiction writers, like Gregory Benford and Philip Latham, are working scientists, and a good fraction of the rest of us have degrees in some science. That doesn't make us qualified to write with authority on subjects outside of our areas of study—but we do it; you'd have a short career if all of your stories were about magnetohydrodynamics or galactic morphology. So we try to be intelligent laymen in other fields, staying current enough so that our inevitable errors won't be obvious to other laymen.
Any fiction writer is in the business of maintaining illusion. Like a stage magician, his authority lasts only until he makes his first error.* Every writer has to deal with mechanical consistencies like making sure the woman named Marie in chapter one doesn't turn into Mary in chapter four. He also has to be careful about routine details, not letting the sun set in the east (as John Wayne made it do in _The Green Berets_ ), and so forth. If he writes in a genre, he has an added burden of detail, since most of his readers consider themselves experts. Mundane esoterica: Spies call the CIA the Company, not the Agency. A private eye doesn't have to break into a car and read the registration card to find out who owns it; he jots down the license number and sends a form to the Department of Motor Vehicles. A cowboy normally carried only five shots in his six-shooter; only a fool would leave the hammer down on a live round.
One reason science fiction is harder to write than other forms of genre fiction is that this universe of detail is larger, more difficult of access, and constantly changing. I wonder how many novels-in-progress got thrown across the room in 1965, when scientists found that Mercury _didn't_ keep one face always to the Sun, after all. I wonder how many bad ones got finished anyhow.
Nobody can be an expert on everything from ablation physics to zymurgy, so you have to work from a principle of exclusion: know the limits of your knowledge and never expose your ignorance by attempting to write with authority when you don't really know what's going on. This advice is easier to give than to take. I've been caught in basic mistakes in genetics, laser technology, and even metric nomenclature—in the first printing of _The Forever War_ I referred time and again to a unit of power called the "beva-watt." What I _meant_ was _"gigawatt";_ the only thing _"bev"_ means is billion-electron-volt, a unit of energy, not power. I got letters. Boy, did I get letters.
The letters are humbling, and time-consuming if you feel obligated to answer them (I do, so long as they aren't abusive or idiotic). But the possibility of being caught in error isn't the main reason for taking pains.
When I finish writing a science fiction novel I have a notebook or two of technical notes, equations, diagrams, graphs. Even a short story, if it's a hard-core-science one like "Tricentennial," might generate a dozen pages of notes. Not one percent of this stuff finds its way into the story. It may even be naive science and weak mathematics—but it will have served its purpose if it has made a fictional world solid and real to me.
Because this business of illusion works both ways. For a story to succeed, the writer must himself be convinced that the background and situation the story is built on make sense. Ernest Hemingway pointed out (though I think Gertrude Stein said it first) that the prose of a story should move with the steady grace of an iceberg, and for the same reason an iceberg does: seven-eighths of it is beneath the surface. The author must know much more than the reader sees. And he must believe, at least for the duration.
Which brings us back to Mr. Lafferty. What I'm really doing with all these equations and graphs, I think, is putting myself into a properly receptive frame of mind. Other writers draft endless outlines to the same purpose, or sharpen pencils down to useless stubs, or take meditative walks, or drink bourbon. And through some mystical—or subconscious, or subrational—process, where there was white paper there's a sentence, a page, a story. Finding the proper words is not at all a mystical process, just creative labor. The ideas that serve as scaffolding for the words, though—they come from out of nowhere, and serve you, then return.
_—Joe Haldeman
Florida_, 1978
* You may note that these add up to more than the total number of stories. I can't balance my checkbook, either.
* An editor of recent memory, who came to science fiction from the editing of wrestling magazines, and has since gone on to even greater things, once petitioned a number of writers for "an anti-homosexuality science fiction story." None was quite that desperate for work.
* I saw an act in Las Vegas where the magician exploited this sentiment by deliberately introducing mistakes, which grew more and more outrageous until his act degenerated into slapstick, and it was more entertaining than any straight sleight-of-hand. Good surreal writers like Brautigan, Disch, and Garcia Marquez also succeed by deliberately manipulating the consensus of illusion we call reality, but that's not the kettle of fish we're discussing here.
# A Biography of Joe Haldeman
Joe Haldeman is a renowned American science fiction author whose works are heavily influenced by his experiences serving in the Vietnam War and his subsequent readjustment to civilian life.
Haldeman was born on June 9, 1943, to Jack and Lorena Haldeman. His older brother was author Jack C. Haldeman II. Though born in Oklahoma City, Oklahoma, Haldeman spent most of his youth in Anchorage, Alaska, and Bethesda, Maryland. He had a contented childhood, with a caring but distant father and a mother who devoted all her time and energy to both sons.
As a child, Haldeman was what might now be called a geek, happy at home with a pile of books and a jug of lemonade, earning money by telling stories and doing science experiments for the neighborhood kids. By the time he entered his teens, he had worked his way through numerous college books on chemistry and astronomy and had skimmed through the entire encyclopedia. He also owned a small reflecting telescope and spent most clear nights studying the stars and planets.
Fascinated by space, the young Haldeman wanted to be a "spaceman"—the term _astronaut_ had not yet been coined—and carried this passion with him to the University of Maryland, from which he graduated in 1967 with a bachelor of science degree in physics and astronomy. By this time the United States was in the middle of the Vietnam War, and Haldeman was immediately drafted.
He spent one year in Vietnam as a combat engineer and earned a Purple Heart for severe wounds. Upon his return to the United States in 1969, during the thirty-day "compassionate leave" given to returning soldiers, Haldeman typed up his first two stories, written during a creative writing class in his last year of college, and sent them out to magazines. They both sold within weeks, and the second story was eventually adapted for an episode of _The Twilight Zone_. At this point, though, Haldeman was accepted into a graduate program in computer science at the University of Maryland. He spent one semester in school. He was also invited to attend the Milford Science Fiction Writers' Conference—a rare honor for a novice writer.
In September of the same year, Haldeman wrote an outline and two chapters of _War Year_ , a novel that would be based on the letters he had sent to his wife, Gay, from Vietnam. Two weeks later he had a major publishing contract. Mathematics was out of the picture for the near future.
Haldeman enrolled in the University of Iowa Writers' Workshop, where he studied with luminary figures such as Vance Bourjaily, Raymond Carver, and Stanley Elkin, graduating in 1975 with a master of fine arts degree in creative writing. His most famous novel, _The Forever War_ (1974), began as his MFA thesis and won him his first Hugo and Nebula Awards, as well as the Locus and Ditmar Awards.
Haldeman was now at his most productive, working on several projects at once. Arguably his largest-scale undertaking was the Worlds trilogy, consisting of _Worlds_ (1981), _Worlds Apart_ (1983), and _Worlds Enough and Time_ (1992). Immediately before releasing the series' last installment, however, Haldeman published his renowned novel _The Hemingway Hoax_ (1990), which dealt with the experiences of combat soldiers in Vietnam. The novella version of the book won both the Hugo and Nebula Awards, a feat that Haldeman repeated with the publication of his next novel, _Forever Peace_ (1997), which also won the John W. Campbell Memorial Award for Best Science Fiction Novel.
In 1983 Haldeman accepted an adjunct professorship in the writing program at the Massachusetts Institute of Technology. He taught every fall semester, preferring to be a full-time writer for the remainder of the year. While at MIT he wrote _Forever Free_ , the final book in his now-famous Forever War trilogy.
Haldeman has since written or edited more than a half-dozen books, with a second succession of titles being published in the early 2000s, including _The Coming_ (2000), _Guardian_ (2002), _Camouflage_ (2004)—for which he won his fourth Nebula—and _The Old Twentieth_ (2005). He also released the Marsbound trilogy, publishing the namesake title in 2008 and quickly following it with _Starbound_ (2010) and _Earthbound_ (2011).
A lifetime member and past president of the Science Fiction Writers of America, Haldeman was selected as its Damon Knight Memorial Grand Master for 2010. He was inducted into the Science Fiction Hall of Fame in 2012.
After publishing his novel _Work Done for Hire_ and retiring from MIT in 2014, Haldeman now lives in Gainesville, Florida, and plans to continue writing a novel every couple of years.
**The author and his brother, Jack, around the year 1948. The image is captioned "Stick 'em up or I'll shoot. Woy Wogers and the Long Ranger."**
**Haldeman in third grade, the year he discovered science fiction.**
**Haldeman's mother, Lorena, and a bear cub in Alaska around the year 1950.**
**Joe and Gay Haldeman on their wedding day, August 21, 1965.**
**The author, with a cigarette, a beer, and a book, waits for a helicopter to arrive on the tarmac in Vietnam, July 1968.**
**A pamphlet with details on how to handle prisoners of war. Haldeman carried this with him in Vietnam.**
**The author in Vietnam, examining bullet holes on a US Army vehicle.**
**Haldeman and the actor Jimmy Stewart in Cam Ranh Bay, Vietnam, 1968.**
**The author in Vietnam with a book and sandbags.**
**Joe and Gay Haldeman with their friend, prominent science fiction personality Rusty Hevelin (at right) in Alaska, 1993.**
**The author with his Questar telescope in 2004.**
**Janis Ian, Joe Haldeman, and Anne McCaffrey at the 2005 Science Fiction & Fantasy Writers of America Nebula Awards weekend.**
**Honoring tradition, Haldeman wears the infamous tiara after winning the James Tiptree, Jr. Award for his novel _Camouflage._**
**Celebrated science fiction author Harry Harrison (at left) and Haldeman dressed as pirates during the 2005 World Fantasy Convention in England.**
**Joe and Gay Haldeman enjoying the Valley of the Kings in Egypt during a trip to see a total solar eclipse in 2006.**
**The author outside St. Augustine, Florida, on the first day of a cross-country bicycle trip with his wife in February 2013.**
**The author's Hugo and Nebula Awards.**
**Joe and Gay Haldeman, 2013.**
All rights reserved, including without limitation the right to reproduce this ebook or any portion thereof in any form or by any means, whether electronic or mechanical, now known or hereinafter invented, without the express written permission of the publisher.
This is a work of fiction. Names, characters, places, events, and incidents either are the product of the author's imagination or are used fictitiously. Any resemblance to actual persons, living or dead, businesses, companies, events, or locales is entirely coincidental.
Copyright © 1978 by Joe Haldeman
Cover design by Michel Vrana
978-1-4976-9242-8
This edition published in 2014 by Open Road Integrated Media, Inc.
345 Hudson Street
New York, NY 10014
www.openroadmedia.com
# **EBOOKS BY JOE HALDEMAN**
### FROM OPEN ROAD MEDIA
## Available wherever ebooks are sold
**Open Road Integrated Media** **is a digital publisher and multimedia content company. Open Road creates connections between authors and their audiences by marketing its ebooks through a new proprietary online platform, which uses premium video content and social media.**
**Videos, Archival Documents, and New Releases**
**Sign up for the Open Road Media newsletter and get news delivered straight to your inbox.**
**Sign up now at**
**www.openroadmedia.com/newsletters**
**FIND OUT MORE AT**
**WWW.OPENROADMEDIA.COM**
**FOLLOW US:**
**@openroadmedia** **and**
**Facebook.com/OpenRoadMedia**
https://discourse.mcneel.com/t/how-to-interpolate-a-closed-curve/5066

How to interpolate a closed curve

#1

How to interpolate a closed curve? I tried like this: first using RhinoInterpCurve to create a curve, and then using RhinoMakeCurveClosed to make the curve closed, but the closed curve is not smooth at the start and end point.

#2

One optional argument for the CreateInterpolatedCurve method is CurveKnotStyle; choosing a "periodic" style should close the curve smoothly.

``````
0 Uniform                  Parameter spacing between consecutive knots is 1.0.
1 Chord                    Chord length spacing, requires degree=3 with CV1 and CVn1 specified.
2 ChordSquareRoot          Square root of chord length, requires degree=3 with CV1 and CVn1 specified.
3 UniformPeriodic          Periodic with uniform spacing.
4 ChordPeriodic            Periodic with chord length spacing.
5 ChordSquareRootPeriodic  Periodic with square root of chord length spacing.
``````

–Mitch
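The reason a periodic knot style closes smoothly is that the interpolation reuses wrap-around neighbour points, so the tangent at the seam comes out the same from both sides; closing an open curve afterwards cannot guarantee that. A minimal sketch of that idea in plain Python (using Catmull-Rom segments as a stand-in for Rhino's interpolation; this is an illustration, not the Rhino API):

```python
# Closed interpolation with periodic (wrap-around) indexing: the tangent
# at point P[i] of a Catmull-Rom curve is 0.5*(P[i+1] - P[i-1]), so the
# last segment ends with exactly the tangent the first segment starts with.

def catmull_rom_deriv(p0, p1, p2, p3, t):
    """Derivative of the Catmull-Rom segment between p1 and p2 at parameter t."""
    return tuple(
        0.5 * ((-a + c)
               + 2.0 * (2.0 * a - 5.0 * b + 4.0 * c - d) * t
               + 3.0 * (-a + 3.0 * b - 3.0 * c + d) * t * t)
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

def seam_tangents(points):
    """Tangent at the end of the last segment and at the start of the first
    segment of a closed (periodic) Catmull-Rom curve through `points`."""
    n = len(points)
    P = lambda i: points[i % n]
    # Last segment interpolates P[n-1] -> P[0]; first segment P[0] -> P[1].
    end_tan = catmull_rom_deriv(P(n - 2), P(n - 1), P(0), P(1), 1.0)
    start_tan = catmull_rom_deriv(P(n - 1), P(0), P(1), P(2), 0.0)
    return end_tan, start_tan

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
end_tan, start_tan = seam_tangents(square)
# Both sides of the seam agree, so the closed curve is tangent-continuous.
assert all(abs(e - s) < 1e-12 for e, s in zip(end_tan, start_tan))
```

In Rhino itself the equivalent move is simply to pass one of the periodic knot styles (3, 4, or 5 in the table above) to the interpolation call, rather than closing an open interpolated curve afterwards.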
\section{Introduction}
Studies of quantum space realized on the basis of the idea of noncommutativity of coordinates have attracted great interest recently. The interest is due to the development of String theory and Quantum Gravity (see, for instance, \cite{Witten,Doplicher}). The idea of noncommutativity was proposed by Heisenberg; later it was formalized by Snyder and published in his paper \cite{Snyder}.
Much attention has been devoted to studies of physical systems in quantum space realized on the basis of the idea of noncommutativity. Among them are studies of the harmonic oscillator \cite{Hatzinikitas,Kijanka,Jing,Smailagic,Smailagic1,Alvarez,Djemai,Giri,Geloun,Nath,GnatenkoJPS17,Shyiko}, the Landau problem \cite{Gamboa1,Horvathy,Dayi,Horvathy05,Alvarez09,Daszkiewicz1}, a particle in the gravitational field \cite{Bertolami1,Bastos,GnatenkoMPLA16,GnatenkoJPS18}, a free quantum particle \cite{Djemai,Shyiko}, quantum fields \cite{Balachandran10,Balachandran11} and many others.
In the four-dimensional case (2D configurational space and 2D momentum space) the coordinates and momenta satisfy the following relations
\begin{eqnarray}
[X_{1},X_{2}]=i\hbar\theta,\label{form101}\\{}
[X_{i},P_{j}]=i\hbar\delta_{ij},\label{form1001}\\{}
[P_{1},P_{2}]=i\hbar\eta,\label{form10001}{}
\end{eqnarray}
where $\theta=const$ is the parameter of coordinate noncommutativity, $\eta=const$ is the parameter of momentum noncommutativity, and $i,j=1,2$. In the classical limit, from (\ref{form101})-(\ref{form10001}) one obtains the deformed Poisson brackets
\begin{eqnarray}
\{X_{1},X_{2}\}=\theta,\label{al00}\\{}
\{X_{i},P_{j}\}=\delta_{ij},\\{}
\{P_{1},P_{2}\}=\eta.\label{al10}
\end{eqnarray}
Studies of many-particle problems in quantum space open new possibilities to find effects of space quantization on the Planck scale in the properties of a wide class of physical systems. In a space with noncommutativity of coordinates the problems of quantum mechanics of many particles were examined in \cite{Ho}. The authors of paper \cite{Bellucci} studied a system of two charged quantum particles in noncommutative space. Features of the motion of a composite system in the gravitational field in a space with coordinate noncommutativity were considered in \cite{GnatenkoPLA13,GnatenkoJPS13}. A two-particle system interacting through the harmonic oscillator potential on a noncommutative plane was examined in \cite{Jabbari}. The spectrum of a two-particle system with Coulomb interaction was studied in rotationally-invariant space with noncommutativity of coordinates in \cite{GnatenkoU16}. Also, a system of two particles was studied in quantum space characterized by coordinate noncommutativity and momentum noncommutativity \cite{Djemai}.
In noncommutative space-time the classical problem of many particles was examined in \cite{Daszkiewicz}. In \cite{Daszkiewicz2} a quantum model of many particles moving in twisted N-enlarged Newton-Hooke space-time was proposed. In \cite{GnatenkoPLA17} the properties of the kinetic energy of a composite system and its motion in the gravitational field were studied in four-dimensional noncommutative phase space. Features in the description of a composite system in rotationally-invariant noncommutative phase space were considered in \cite{GnatenkoIJ18}.
The problem of a composite system, known as the soccer-ball problem, was considered in the frameworks of Doubly Special Relativity \cite{Jacobson,Amelino-Camelia} and relative locality \cite{Amelino-Camelia1,Hossenfelder,Amelino-Camelia3}.
In the deformed space with minimal length the problem of the description of the motion of a composite system was studied in \cite{Quesne}. The authors of \cite{Quesne} introduced the total momenta as integrals of motion. The motion of a composite system in the gravitational field in this space was examined in \cite{Tkachuk}.
In the present paper we study features of the motion of a system of free particles in noncommutative phase space of canonical type in the general case when different particles satisfy the noncommutative algebra with different parameters of noncommutativity. We show that even when the velocities of the particles in the system are the same at the initial moment of time, the particles fly apart. The situation changes if the parameters of noncommutativity corresponding to a particle depend on its mass, as was proposed in \cite{GnatenkoPLA17,GnatenkoMPLA17}. In noncommutative phase space the total momentum defined in the traditional way (as a sum of the momenta of the particles) is not preserved. In the present paper we find total momenta of a composite system in noncommutative phase space which are integrals of motion and introduce the coordinates of the center-of-mass conjugated to them.
The paper is organized as follows. In Section 2 the motion of a composite system in noncommutative phase space is considered. In Section 3 features of the motion of a system of free particles in noncommutative phase space are examined. The total momenta as integrals of motion and the coordinates of the center-of-mass conjugated to them are introduced in Section 4. Conclusions are presented in Section 5.
\section{Description of motion of composite system in noncommutative phase space}
Let us consider a composite system made of $N$ particles with masses $m_a$ $(a=1,\dots,N)$ with the following hamiltonian
\begin{eqnarray}
H=\sum_{a}\frac{({\bf P}^{(a)})^{2}}{2m_{a}}+\frac{1}{2}\mathop{\sum_{a,b}}\limits_{a\neq b} U(|{\bf X}^{(a)}-{\bf X}^{(b)}|),\label{hham}
\end{eqnarray}
in four-dimensional noncommutative phase space. In (\ref{hham}) the indices $a$, $b$ label the particles.
In the general case the coordinates and momenta of different particles may satisfy the noncommutative algebra with different parameters of noncommutativity
\begin{eqnarray}
[X_{1}^{(a)},X_{2}^{(b)}]=i\hbar\delta^{ab}\theta_{a},\label{al0}\\{}
[X_{i}^{(a)},P_{j}^{(b)}]=i\hbar\delta^{ab}\delta_{ij},\\{}
[P_{1}^{(a)},P_{2}^{(b)}]=i\hbar\delta^{ab}\eta_{a},\label{al1}
\end{eqnarray}
where $\theta_{a}$, $\eta_{a}$ are the parameters of noncommutativity which correspond to the particle with mass $m_a$. In the classical limit we have the following Poisson brackets
\begin{eqnarray}
\{X_{1}^{(a)},X_{2}^{(b)}\}=\delta^{ab}\theta_{a},\label{pal0}\\{}
\{X_{i}^{(a)},P_{j}^{(b)}\}=\delta^{ab}\delta_{ij},\\{}
\{P_{1}^{(a)},P_{2}^{(b)}\}=\delta^{ab}\eta_{a}.\label{pal1}
\end{eqnarray}
In our previous paper \cite{GnatenkoPLA17} we showed that the coordinates and momenta of the center-of-mass and the coordinates and momenta of the relative motion, defined in the traditional way
\begin{eqnarray}
\tilde{{\bf P}}=\sum_{a}{\bf P}^{(a)},\label{05}\\
\tilde{{\bf X}}=\sum_{a}\mu_{a}{\bf X}^{(a)},\label{00005}\\
\Delta{\bf P}^{(a)}={\bf P}^{(a)}-\mu_{a}\tilde{{\bf P}},\\
{\Delta\bf X}^{(a)}={\bf X}^{(a)}-\tilde{{\bf X}},\label{06}
\end{eqnarray}
with $\mu_a=m_{a}/M$, $M=\sum_{a}m_a$, satisfy noncommutative algebra with effective parameters of noncommutativity
\begin{eqnarray}
\tilde{\theta}=\frac{\sum_{a}m_{a}^{2}\theta_{a}}{(\sum_{b}m_{b})^{2}},\label{eff}\\
\tilde{\eta}=\sum_{a}\eta_a.\label{eff2}
\end{eqnarray}
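As a quick numerical illustration (all values arbitrary), the effective parameters above can be checked directly; for $N$ identical particles they reduce to $\tilde{\theta}=\theta/N$ and $\tilde{\eta}=N\eta$:

```python
# Illustrative check of the effective parameters above (arbitrary numbers):
# for N identical particles with mass m and parameters theta, eta, the
# effective parameters reduce to theta_eff = theta/N and eta_eff = N*eta.

def effective_theta(masses, thetas):
    # theta_eff = sum_a m_a^2 theta_a / (sum_b m_b)^2
    M = sum(masses)
    return sum(m**2 * t for m, t in zip(masses, thetas)) / M**2

def effective_eta(etas):
    # eta_eff = sum_a eta_a
    return sum(etas)

N, m, theta, eta = 4, 2.0, 0.3, 0.7
masses = [m] * N
theta_eff = effective_theta(masses, [theta] * N)
eta_eff = effective_eta([eta] * N)

assert abs(theta_eff - theta / N) < 1e-12
assert abs(eta_eff - N * eta) < 1e-12
```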
The Poisson brackets for $\tilde{X}_i$, $\tilde{P}_i$, $\Delta{X}_i^{(a)}$, $\Delta{P}_i^{(a)}$ read
\begin{eqnarray}
\{\tilde{X}_1,\tilde{X}_2\}=\tilde{\theta},\label{07}\\{}
\{\tilde{P}_1,\tilde{P}_2\}=\tilde{\eta},\\{}
\{\tilde{X}_i,\tilde{P}_j\}=\{\Delta{X}_i,\Delta{P}_j\}=\delta_{ij},\label{08}\\{}
\{\Delta{X}_1^{(a)},\Delta{X}_2^{(b)}\}=-\{\Delta{X}_2^{(a)},\Delta{X}_1^{(b)}\}=\delta^{ab}\theta_{a}-\mu_{a}\theta_{a}-\mu_{b}\theta_{b}+\tilde{\theta} ,\\{}
\{\Delta{P}_1^{(a)},\Delta{P}_2^{(b)}\}=-\{\Delta{P}_2^{(a)},\Delta{P}_1^{(b)}\}=\delta^{ab}\eta_a-\mu_b\eta_a-\mu_a\eta_b+\mu_a\mu_b\tilde{\eta}.{}\label{007}
\end{eqnarray}
Also, in the paper \cite{GnatenkoPLA17} we mentioned that the Poisson brackets between the coordinates of the center-of-mass and the coordinates of the relative motion, as well as between the total momenta and the momenta of the relative motion, do not vanish. Namely, the following relations are satisfied
\begin{eqnarray}
\{\tilde{X}_{1},\Delta X_{2}^{(a)}\}=-\{\tilde{X}_{2},\Delta X_{1}^{(a)}\}=\mu_{a}\theta_{a}-\tilde{\theta},\label{rel1}\\{}
\{\tilde{P}_1,\Delta{P}^{(a)}_2\}=-\{\tilde{P}_2,\Delta{P}^{(a)}_1\}=\eta_a-\mu_a\sum_{b}\eta_b.\label{rel2}
\end{eqnarray}
So, in noncommutative phase space we cannot consider the hamiltonian of the center-of-mass and the hamiltonian of the relative motion as independent. From (\ref{hham}), using (\ref{05})-(\ref{06}), we have
\begin{eqnarray}
H=H_{cm}+H_{rel},\\
H_{cm}=\frac{\tilde{{\bf P}}^{2}}{2M},\label{12cm}\\
H_{rel}=\sum_a\frac{(\Delta{\bf P}^{(a)})^{2}}{2m_a}+\frac{1}{2}\mathop{\sum_{a,b}}\limits_{a\neq b} U(|\Delta{\bf X}^{(a)}-\Delta{\bf X}^{(b)}|)
\end{eqnarray}
and $\{H_{cm},H_{rel}\}\neq0$.
In paper \cite{GnatenkoPLA17} we proposed conditions on the parameters of noncommutativity
\begin{eqnarray}
\frac{\eta_a}{m_a}=\alpha=const,\label{cond}\\
\theta_a m_a=\gamma=const,\label{cond2}
\end{eqnarray}
with $\alpha$, $\gamma$ being constants which are the same for particles with different masses. Under these conditions $\{\tilde{X}_{1},\Delta X_{2}^{(a)}\}=0$, $\{\tilde{P}_1,\Delta{P}^{(a)}_2\}=0$; therefore the motion of the center-of-mass is independent of the relative motion.
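The vanishing of the mixed brackets under these conditions can be checked numerically; the sketch below uses arbitrary illustrative masses and constants $\alpha$, $\gamma$:

```python
# Check with arbitrary illustrative masses that under the conditions
# eta_a/m_a = alpha and theta_a*m_a = gamma the mixed brackets
# {X~_1, Delta X_2^(a)} = mu_a*theta_a - theta_eff and
# {P~_1, Delta P_2^(a)} = eta_a - mu_a*sum_b eta_b vanish for every particle.
alpha, gamma = 0.3, 0.7
masses = [0.5, 1.5, 2.0, 4.0]
M = sum(masses)
thetas = [gamma / m for m in masses]   # theta_a = gamma / m_a
etas = [alpha * m for m in masses]     # eta_a = alpha * m_a
theta_eff = sum(m**2 * t for m, t in zip(masses, thetas)) / M**2

brackets_X = [(m / M) * t - theta_eff for m, t in zip(masses, thetas)]
brackets_P = [e - (m / M) * sum(etas) for m, e in zip(masses, etas)]

assert all(abs(b) < 1e-12 for b in brackets_X)
assert all(abs(b) < 1e-12 for b in brackets_P)
```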
\section{Motion of a system of free particles and parameters of noncommutativity}
Let us study features of motion of a free particle of mass $m$ in four-dimensional noncommutative phase space. The hamiltonian of the particle reads
\begin{eqnarray}
H=\frac{{P}_1^2}{2m}+\frac{{P}_2^2}{2m}.\label{hammm}
\end{eqnarray}
Taking into account (\ref{al00})-(\ref{al10}), (\ref{hammm}) we can write the following equations of motion
\begin{eqnarray}
\dot{X}_1=\frac{P_1}{m},\label{eq}\\
\dot{X}_2=\frac{P_2}{m},\label{eq0}\\
\dot{P}_1=\eta\frac{P_2}{m},\label{eq1111}\\
\dot{P}_2=-\eta\frac{P_1}{m}.\label{eq1}
\end{eqnarray}
Solutions of equations (\ref{eq})-(\ref{eq1}) with initial conditions ${X}_1(0)=X_{01}$, ${X}_2(0)=X_{02}$, $\dot{X}_1(0)=\upsilon_{01}$, $\dot{X}_2(0)=\upsilon_{02}$ are as follows
\begin{eqnarray}
{X}_1(t)=\upsilon_{01}\frac{m}{\eta}\sin\frac{\eta}{m}t-\upsilon_{02}\frac{m}{\eta}\cos\frac{\eta}{m}t+\upsilon_{02}\frac{m}{\eta}+X_{01},\label{eqs}\\
{X}_2(t)=\upsilon_{02}\frac{m}{\eta}\sin\frac{\eta}{m}t+\upsilon_{01}\frac{m}{\eta}\cos\frac{\eta}{m}t-\upsilon_{01}\frac{m}{\eta}+X_{02},\label{eqs22}\\
\dot{X}_1(t)=\upsilon_{01}\cos\frac{\eta}{m}t+\upsilon_{02}\sin\frac{\eta}{m}t,\label{eqsd}\\
\dot{X}_2(t)=\upsilon_{02}\cos\frac{\eta}{m}t-\upsilon_{01}\sin\frac{\eta}{m}t.\label{eqsd1}
\end{eqnarray}
In the limit $\theta\rightarrow0$, $\eta\rightarrow0$ we obtain the trajectory of the particle in the ordinary space, ${X}_1(t)=\upsilon_{01}t+X_{01}$, ${X}_2(t)=\upsilon_{02}t+X_{02}$.
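The closed-form solutions above can be cross-checked numerically; the following sketch (with arbitrary illustrative values of $m$, $\eta$ and initial conditions) integrates the equations of motion with a standard Runge-Kutta scheme and compares the result with the analytic trajectory:

```python
import math

# Numerical check (illustrative numbers): integrate the equations of motion
# dX_i/dt = P_i/m, dP_1/dt = eta*P_2/m, dP_2/dt = -eta*P_1/m with RK4
# and compare the result with the closed-form solutions above.
m, eta = 2.0, 0.3
X01, X02, v01, v02 = 1.0, -1.0, 0.5, 0.25
w = eta / m

def closed_form(t):
    X1 = v01 / w * math.sin(w * t) - v02 / w * math.cos(w * t) + v02 / w + X01
    X2 = v02 / w * math.sin(w * t) + v01 / w * math.cos(w * t) - v01 / w + X02
    return X1, X2

def rhs(z):
    X1, X2, P1, P2 = z
    return [P1 / m, P2 / m, eta * P2 / m, -eta * P1 / m]

z = [X01, X02, m * v01, m * v02]  # state (X1, X2, P1, P2) at t = 0
dt, T = 1e-3, 5.0
for _ in range(int(round(T / dt))):
    k1 = rhs(z)
    k2 = rhs([zi + dt / 2 * ki for zi, ki in zip(z, k1)])
    k3 = rhs([zi + dt / 2 * ki for zi, ki in zip(z, k2)])
    k4 = rhs([zi + dt * ki for zi, ki in zip(z, k3)])
    z = [zi + dt / 6 * (a + 2 * b + 2 * c + d)
         for zi, a, b, c, d in zip(z, k1, k2, k3, k4)]

X1_exact, X2_exact = closed_form(T)
assert abs(z[0] - X1_exact) < 1e-8 and abs(z[1] - X2_exact) < 1e-8
```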
Note that because of the noncommutativity of momenta the trajectory and the velocity of a free particle depend on its mass and on the parameter $\eta$.
Let us consider a system of free particles with masses $m_1$, $m_2$,...,$m_N$ with hamiltonian
\begin{eqnarray}
H=\sum_a\frac{({\bf P}^{(a)})^2}{2m_a}=\frac{\tilde{{\bf P}}^2}{2M}+\sum_a\frac{(\Delta{\bf P}^{(a)})^2}{2m_a}.\label{hsys}
\end{eqnarray}
The coordinates $X_i^{(a)}$ and momenta $P_i^{(a)}$ of the particles satisfy relations (\ref{pal0})-(\ref{pal1}). The momenta of the center-of-mass $\tilde P_i$ and the momenta of the relative motion $\Delta P_i^{(a)}$ satisfy (\ref{07})-(\ref{rel2}), where $M=\sum_a m_a$.
In the ordinary space ($\theta=0$, $\eta=0$) free particles with the same initial velocities move together.
In noncommutative phase space, even when the velocities of free particles of masses $m_1$, $m_2$,...,$m_N$ are equal at the initial moment of time, $\dot{X}^{(a)}_1(0)=\upsilon_{01}$, $\dot{X}^{(a)}_2(0)=\upsilon_{02}$, $a=1,\dots,N$, we have $\dot{X}^{(a)}_1(t)\neq\dot{X}^{(b)}_1(t)$, $\dot{X}^{(a)}_2(t)\neq\dot{X}^{(b)}_2(t)$ for $a\neq b$. Namely, using (\ref{eqsd}), (\ref{eqsd1}), one can write
\begin{eqnarray}
\dot{X}^{(a)}_1(t)=\upsilon_{01}\cos\frac{\eta_a}{m_a}t+\upsilon_{02}\sin\frac{\eta_a}{m_a}t,\label{eqsda}\\
\dot{X}^{(a)}_2(t)=\upsilon_{02}\cos\frac{\eta_a}{m_a}t-\upsilon_{01}\sin\frac{\eta_a}{m_a}t,\label{eqsd1a}
\end{eqnarray}
where $\eta_a$ is the parameter of momentum noncommutativity which corresponds to the particle of mass $m_a$, $a=1,\dots,N$.
Note that even for a system of free particles the relative motion affects the motion of the center-of-mass because of relation (\ref{rel2}). The trajectory of the center-of-mass of the system and the trajectories of the relative motion read
\begin{eqnarray}
\tilde{X}_1(t)=\sum_a\left(\upsilon_{01}\frac{m^2_a}{M\eta_a}\sin\frac{\eta_a}{m_a}t-\upsilon_{02}\frac{m^2_a}{M\eta_a}\cos\frac{\eta_a}{m_a}t+\upsilon_{02}\frac{m^2_a}{M\eta_a}+\frac{m_a}{M}X^{(a)}_{01}\right),\label{peqs}\\
\tilde{X}_2(t)=\sum_a\left(\upsilon_{02}\frac{m^2_a}{M\eta_a}\sin\frac{\eta_a}{m_a}t+\upsilon_{01}\frac{m^2_a}{M\eta_a}\cos\frac{\eta_a}{m_a}t-\upsilon_{01}\frac{m^2_a}{M\eta_a}+\frac{m_a}{M}X^{(a)}_{02}\right),\label{peqs22}\\
\Delta{X}^{(a)}_1(t)=\upsilon_{01}\frac{m_a}{\eta_a}\sin\frac{\eta_a}{m_a}t-\upsilon_{02}\frac{m_a}{\eta_a}\cos\frac{\eta_a}{m_a}t+\upsilon_{02}\frac{m_a}{\eta_a}+X^{(a)}_{01}-\nonumber\\
-\sum_b\left(\upsilon_{01}\frac{m^2_b}{M\eta_b}\sin\frac{\eta_b}{m_b}t-\upsilon_{02}\frac{m^2_b}{M\eta_b}\cos\frac{\eta_b}{m_b}t+\upsilon_{02}\frac{m^2_b}{M\eta_b}+\frac{m_b}{M}X^{(b)}_{01}\right),\label{21}\\
\Delta{X}^{(a)}_2(t)=\upsilon_{02}\frac{m_a}{\eta_a}\sin\frac{\eta_a}{m_a}t+\upsilon_{01}\frac{m_a}{\eta_a}\cos\frac{\eta_a}{m_a}t-\upsilon_{01}\frac{m_a}{\eta_a}+X^{(a)}_{02}-\nonumber\\
-\sum_b\left(\upsilon_{02}\frac{m^2_b}{M\eta_b}\sin\frac{\eta_b}{m_b}t+\upsilon_{01}\frac{m^2_b}{M\eta_b}\cos\frac{\eta_b}{m_b}t-\upsilon_{01}\frac{m^2_b}{M\eta_b}+\frac{m_b}{M}X^{(b)}_{02}\right).\label{11}
\end{eqnarray}
From (\ref{eqsda}), (\ref{eqsd1a}), (\ref{21}), (\ref{11}) we can conclude that because of the momentum noncommutativity the particles do not move together. In noncommutative phase space a system of free particles with the same initial velocities flies apart.
We would like to mention that when the condition on the parameters of noncommutativity (\ref{cond}) is satisfied, from (\ref{eqsda}), (\ref{eqsd1a}) we find that the particles move with the same velocities, equal to the velocity of the center-of-mass of the system
\begin{eqnarray}
\dot{X}^{(a)}_1(t)=\sum_a\mu_a\dot{X}^{(a)}_1(t)=\upsilon_{01}\cos\alpha t+\upsilon_{02}\sin\alpha t,\\
\dot{X}^{(a)}_2(t)=\sum_a\mu_a\dot{X}^{(a)}_2(t)=\upsilon_{02}\cos\alpha t-\upsilon_{01}\sin\alpha t.
\end{eqnarray}
The relative coordinates do not depend on time. Taking into account (\ref{cond}), from (\ref{21}), (\ref{11}) we have
$\Delta{X}_1^{(a)}=X^{(a)}_{01}-\sum_b\mu_bX^{(b)}_{01}$, $\Delta{X}_2^{(a)}=X^{(a)}_{02}-\sum_b\mu_bX^{(b)}_{02}$.
Also, the trajectories of the free particles do not depend on their masses. Taking into account (\ref{cond}), (\ref{eqs}), (\ref{eqs22}), we have
\begin{eqnarray}
{X}^{(a)}_1(t)=\frac{\upsilon_{01}}{\alpha}\sin\alpha t-\frac{\upsilon_{02}}{\alpha}\cos\alpha t+\frac{\upsilon_{02}}{\alpha}+X^{(a)}_{01},\label{eqs1}\\
{X}^{(a)}_2(t)=\frac{\upsilon_{02}}{\alpha}\sin\alpha t+\frac{\upsilon_{01}}{\alpha}\cos\alpha t-\frac{\upsilon_{01}}{\alpha}+X^{(a)}_{02},\label{eqs221}
\end{eqnarray}
where $X^{(a)}_{01}=X^{(a)}_1(0)$, $X^{(a)}_{02}=X^{(a)}_2(0)$.
Therefore particles with the same initial velocities move together, as they should.
So, when the parameters of noncommutativity corresponding to a particle depend on its mass as in (\ref{cond}), the trajectory and velocity of a free particle in noncommutative phase space do not depend on its mass, and a system of free particles with the same initial velocities does not fly apart (each particle in the system moves with the same velocity, equal to the velocity of the center-of-mass, and the relative coordinates do not depend on time, as they should).
\section{Definition of the momentum of the center-of-mass as an integral of motion in noncommutative phase space}
In noncommutative phase space the total momentum (\ref{05}) defined in the traditional way is not an integral of motion. We have
\begin{eqnarray}
\{\tilde{P}_1,H\}=\tilde{\eta}\frac{\tilde{P}_2}{M}+\sum_a\frac{\Delta{P}^{(a)}_2}{m_a}\left(\eta_a-\mu_a\tilde{\eta}\right),\\{}
\{\tilde{P}_2,H\}=-\tilde{\eta}\frac{\tilde{P}_1}{M}-\sum_a\frac{\Delta{P}^{(a)}_1}{m_a}\left(\eta_a-\mu_a\tilde{\eta}\right),
\end{eqnarray}
where $H$ is given by (\ref{hham}).
In the case when the conditions on the parameters of noncommutativity (\ref{cond}), (\ref{cond2}) are satisfied we have $\{\tilde{X}_{1},\Delta X_{2}^{(a)}\}=0$, $\{\tilde{P}_1,\Delta{P}^{(a)}_2\}=0$, therefore
\begin{eqnarray}
\{\tilde{P}_1,H\}=\{\tilde{P}_1,H_{cm}\}=\frac{\tilde{P}_2}{M}\tilde{\eta},\\{}
\{\tilde{P}_2,H\}=\{\tilde{P}_2,H_{cm}\}=-\frac{\tilde{P}_1}{M}\tilde{\eta},
\end{eqnarray}
where $H_{cm}$ is given by (\ref{12cm}).
So, because of the noncommutativity the total momentum defined as the sum of the momenta of the particles of the system, $\tilde{{\bf P}}=\sum_{a}{\bf P}^{(a)}$, is not preserved in noncommutative phase space.
Let us find total momenta which are integrals of motion.
For this purpose let us first consider the particular case when a composite system consists of $N$ particles with the same masses $m_1=m_2=...=m_N=m$ and parameters of noncommutativity $\theta_1=\theta_2=...=\theta_N=\theta$, $\eta_1=\eta_2=...=\eta_N=\eta$. Note that in this case the coordinates and momenta given by (\ref{05})-(\ref{06}) satisfy $\{\tilde{X}_{1},\Delta X_{2}^{(a)}\}=\{\tilde{P}_1,\Delta{P}^{(a)}_2\}=0$.
We would like to mention that the momenta defined as
\begin{eqnarray}
\tilde{{ P}}_1^{\prime}=\sum_a P^{(a)}_1-\eta \sum_a X^{(a)}_2,\label{pp1}\\
\tilde{{ P}}_2^{\prime}=\sum_a P^{(a)}_2+\eta \sum_a X^{(a)}_1\label{pp2},
\end{eqnarray}
satisfy $\{\tilde{{ P}}_1^{\prime}, H\}=\{\tilde{{ P}}_2^{\prime}, H\}=0$, where $H$ is given by (\ref{hham}). So, they are integrals of motion and can be considered as the total momenta in noncommutative phase space. In the limit $\eta\rightarrow0$, from (\ref{pp1}), (\ref{pp2}) one obtains the total momenta defined in the traditional way.
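As a sanity check for the single-particle case ($N=1$, arbitrary illustrative numbers), the conserved combinations can be evaluated along the explicit free-particle trajectory of the previous section:

```python
import math

# Sanity check (single particle, N = 1, illustrative numbers): along the
# free-particle trajectory found in the previous section, the combinations
# P'_1 = P_1 - eta*X_2 and P'_2 = P_2 + eta*X_1 stay constant in time,
# while P_1 and P_2 themselves oscillate.
m, eta = 1.5, 0.4
X01, X02, v01, v02 = 0.2, -0.3, 1.0, -0.5
w = eta / m

def trajectory(t):
    X1 = v01 / w * math.sin(w * t) - v02 / w * math.cos(w * t) + v02 / w + X01
    X2 = v02 / w * math.sin(w * t) + v01 / w * math.cos(w * t) - v01 / w + X02
    P1 = m * (v01 * math.cos(w * t) + v02 * math.sin(w * t))
    P2 = m * (v02 * math.cos(w * t) - v01 * math.sin(w * t))
    return X1, X2, P1, P2

values1, values2 = [], []
for t in [0.0, 0.7, 1.9, 4.2]:
    X1, X2, P1, P2 = trajectory(t)
    values1.append(P1 - eta * X2)
    values2.append(P2 + eta * X1)

# both combinations are constant along the trajectory
assert max(values1) - min(values1) < 1e-9
assert max(values2) - min(values2) < 1e-9
```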
Let us generalize the proposed definition of the total momenta to the case when a composite system consists of $N$ particles with different masses $m_a$ and parameters of noncommutativity $\theta_a$, $\eta_a$.
The total momenta defined as
\begin{eqnarray}
\tilde{{P}}_1^{\prime}=\tilde{P}_1-\tilde{\eta} \tilde{X}_2,\label{prp1}\\
\tilde{{P}}_2^{\prime}=\tilde{P}_2+\tilde{\eta} \tilde{X}_1,\label{prp2}
\end{eqnarray}
with $\tilde{P}_i$, $\tilde{X}_i$, $\tilde{\eta}$ given by (\ref{05}), (\ref{00005}), (\ref{eff2}) are integrals of motion
($\{\tilde{{ P}}_1^{\prime},H\}=\{\tilde{{ P}}_2^{\prime},H\}=0$) in the case when relations (\ref{cond}), (\ref{cond2}) hold.
Note that for composite system made of $N$ particles with the same masses and parameters of noncommutativity $\eta$, $\theta$ we have $\tilde{X}_i=\sum_a X^{(a)}_i/N$, also, taking into account (\ref{eff2}), we can write $\tilde{\eta}=N\eta$. Therefore from (\ref{prp1}), (\ref{prp2}) we obtain (\ref{pp1}), (\ref{pp2}).
Let us define the coordinates of the center-of-mass $\tilde{{ X}}_i^{\prime}$ as the coordinates conjugated to $\tilde{{ P}}_i^{\prime}$, namely
\begin{eqnarray}
\{\tilde{{X}}_i^{\prime},\tilde{{ P}}_j^{\prime}\}=\delta_{ij}.\label{xp}{}
\end{eqnarray}
One can verify that the coordinates defined as
\begin{eqnarray}
\tilde{{ X}}_i^{\prime}=\frac{\sum_a \mu_a X^{(a)}_i}{1-\tilde{\eta}\tilde{\theta}}=\frac{\tilde{X}_i}{1-\tilde{\eta}\tilde{\theta}},\label{xrp1}
\end{eqnarray}
satisfy (\ref{xp}). Also, the following relations are satisfied
\begin{eqnarray}
\{\tilde{{X}}_1^{\prime},\tilde{{ X}}_2^{\prime}\}=\frac{\tilde{\theta}}{(1-\tilde{\theta}\tilde{\eta})^2},\label{n1}\\{}
\{\tilde{P}_1^{\prime},\tilde{P}_2^{\prime}\}=\tilde{\eta}(\tilde{\theta}\tilde{\eta}-1).{}\label{n2}
\end{eqnarray}
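These brackets can be verified numerically for the one-particle case. Since $X^{\prime}_i$ and $P^{\prime}_i$ are linear in $(X_1,X_2,P_1,P_2)$, any bracket reduces to a quadratic form with the structure matrix of the algebra; the sketch below uses illustrative values of $\theta$ and $\eta$:

```python
# Verification of the brackets above for the one-particle case with
# illustrative theta, eta. For linear functions f = a.z, g = b.z of
# z = (X_1, X_2, P_1, P_2), the bracket is {f, g} = a^T Omega b, where
# Omega collects the fundamental brackets of the noncommutative algebra.
theta, eta = 0.2, 0.3

Omega = [[0.0,   theta, 1.0, 0.0],
         [-theta, 0.0,  0.0, 1.0],
         [-1.0,   0.0,  0.0, eta],
         [0.0,   -1.0, -eta, 0.0]]

def bracket(a, b):
    # a, b: coefficient vectors of two linear functions of z
    return sum(a[i] * Omega[i][j] * b[j] for i in range(4) for j in range(4))

s = 1.0 - eta * theta
X1p = [1.0 / s, 0.0, 0.0, 0.0]  # X'_1 = X_1 / (1 - eta*theta)
X2p = [0.0, 1.0 / s, 0.0, 0.0]  # X'_2 = X_2 / (1 - eta*theta)
P1p = [0.0, -eta, 1.0, 0.0]     # P'_1 = P_1 - eta*X_2
P2p = [eta, 0.0, 0.0, 1.0]      # P'_2 = P_2 + eta*X_1

assert abs(bracket(X1p, P1p) - 1.0) < 1e-12
assert abs(bracket(X2p, P2p) - 1.0) < 1e-12
assert abs(bracket(X1p, P2p)) < 1e-12
assert abs(bracket(X1p, X2p) - theta / s**2) < 1e-12
assert abs(bracket(P1p, P2p) - eta * (eta * theta - 1.0)) < 1e-12
```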
At the end of this section we would like to mention that the momentum of a free particle in noncommutative phase space is not an integral of motion (see (\ref{eq1111}), (\ref{eq1})). Note that from (\ref{prp1}), (\ref{prp2}) for a one-particle system we can write $P^{\prime}_1=P_1-\eta X_2$, $P^{\prime}_2=P_2+\eta X_1$, which satisfy $\{P^{\prime}_1,H\}=\{P^{\prime}_2,H\}=0$, where $H$ is given by (\ref{hammm}). The hamiltonian of the particle in terms of $P_i^{\prime}$ and the coordinates $X_i^{\prime}=X_i/(1-\eta\theta)$ reads
\begin{eqnarray}
H=\frac{1}{2m}\left(P_1^{\prime}+\eta({1-\eta\theta})X_2^{\prime}\right)^2+\frac{1}{2m}\left(P_2^{\prime}-\eta(1-\eta\theta) X_1^{\prime}\right)^2.\label{hhh}
\end{eqnarray}
We would like to note that the hamiltonian (\ref{hhh}) corresponds to the hamiltonian of a charged particle in the magnetic field ${\bf B}=(0,0,B)$ in a noncommutative phase space characterized by relations (\ref{xp}), (\ref{n1}), (\ref{n2}), if $eB/c=\eta(1-\eta\theta)$ (here $e$ is the charge of the particle and $c$ is the speed of light).
It is easy to generalize the results presented in this section to the quantum case by introducing the corresponding operators and considering commutation relations for them.
\section{Conclusions}
In this paper we have examined the influence of noncommutativity of coordinates and noncommutativity of momenta of canonical type on the motion of a composite system. A system of $N$ free particles has been studied. We have found that even for a system of free particles the relative motion in the system affects the motion of its center-of-mass.
We have shown that because of the noncommutativity of momenta the trajectory of a free particle and its velocity depend on its mass and on the parameter of momentum noncommutativity. Therefore free particles of different masses do not move together even when the velocities of the particles are the same at the initial time. So, the system of free particles flies apart.
We have concluded that if the parameters of noncommutativity corresponding to a particle depend on its mass as in (\ref{cond}), the velocity and trajectory of a free particle do not depend on its mass; free particles with the same initial velocities move together, namely each particle in the system moves with a velocity equal to the velocity of the center-of-mass, and the trajectory of the relative motion does not depend on time.
We would like to stress that the same condition on the parameters of momentum noncommutativity (\ref{cond}), together with the condition on the parameters of coordinate noncommutativity (\ref{cond2}), was proposed in \cite{GnatenkoPLA17} to solve the problem of violation of the weak equivalence principle, the problem of nonadditivity of the kinetic energy, the problem of dependence of the kinetic energy on composition, and the problem of dependence of the motion of the center-of-mass on the relative motion in noncommutative phase space.
We have also shown that in the case when relations (\ref{cond}), (\ref{cond2}) hold, total momenta which are integrals of motion can be defined in noncommutative phase space. We have introduced the total momenta (\ref{prp1}), (\ref{prp2}) and found the coordinates of the center-of-mass conjugated to them (\ref{xrp1}). It is shown that in the limit $\tilde{\eta}\rightarrow0$ the proposed expressions for the coordinates of the center-of-mass (\ref{xrp1}) and for the total momenta (\ref{prp1}), (\ref{prp2}) reduce to the ordinary definitions. This is in agreement with the result of paper \cite{GnatenkoPLA13}, where it was shown that in a space with noncommutative coordinates ($\theta\neq0$, $\eta=0$) the total momenta defined in the traditional way are integrals of motion. In the limits $\tilde{\theta}\rightarrow0$, $\tilde{\eta}\rightarrow0$, from (\ref{xrp1}), (\ref{prp1}), (\ref{prp2}) one obtains the coordinates and momenta of the center-of-mass defined in the traditional way, which satisfy the ordinary Poisson brackets.
So, relating the parameters of noncommutativity which correspond to a particle to its mass as in (\ref{cond}), (\ref{cond2}) gives the possibility to solve a number of problems in noncommutative phase space.
\section*{Acknowledgments}
This work was partly supported by the grant of the President of Ukraine for support of scientific researches of young scientists ($\Phi$-75) and by the project $\Phi\Phi$-30$\Phi$ (No. 0116U001539) from the Ministry of Education and Science of Ukraine.
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 6,887 |
\section{Introduction}
We first briefly discuss the problem of dictionary learning and the different choices one has to make to perform this task depending on the problem at hand. We also introduce notation and standard operators.
\subsection{Why a Dictionary ?}
In order to analyze a given finite dataset $X:=\{x_n \in \mathbb{R}^D,n=1,\dots,N\}$, different approaches are possible. One of them relies on the assumption that those observations come from some latent representations that are mixed together, different mixings leading to different observations but with a fixed dictionary $\Phi_K:=\{\phi_k \in \mathbb{R}^D,k=1,\dots,K\}$, usually with $K \ll N$. One can thus rewrite
\begin{equation}\label{eq0}
x_n = f(\Phi,x_n).
\end{equation}
In practice, a linear assumption is used for $f$ and we write the prediction as
\begin{equation}
\hat{x}_n=\hat{f}(\hat{\Phi}_K,x_n),
\end{equation}
where one estimates the functional and the filter-bank through a reconstruction loss defined as
\begin{equation}
E_n = ||x_n-\hat{x}_n ||^2,
\end{equation}
where we assume a squared error, but any loss function can be used in general.
The linear assumption imposed on $f$ leads to the estimation of weighting coefficients, denoted $\alpha_{n,k}$, weighting atom $k$ for observation $n$; these attributes are the new features used to represent the observations.
This new representation of $x_n$ via its corresponding feature vector $\bm{\alpha}_n$ can be used for many tasks such as clustering, denoising, data generation, anomaly detection, compression and much more.
Depending on the constraints one imposes on the feature vectors $\bm{\alpha}_n$ and the dictionary $\Phi_K$, one recovers standard frameworks such as Principal Component Analysis (PCA)\cite{jolliffe2002principal}, Independent Component Analysis (ICA)\cite{hyvarinen2004independent}, Sparse Coding (SC)\cite{olshausen1997sparse}, (Semi) Nonnegative Matrix Factorization (sNMF)\cite{lee1999learning,ding2010convex}, Gaussian Mixture Models (GMM)\cite{bilmes1998gentle}, and many more, but all those approaches fall into two main categories: Complete Dictionary Learning and Over-Complete Dictionary Learning.
The use of a dictionary also extends to standard template matching, the most popular technique and optimal in the GLRT sense, where new examples are mapped to one of the templates to be clustered or otherwise processed.
\subsection{(Over)complete Dictionary Learning}
As we saw, the dictionary learning problem admits many formulations, but the main difference resides in the properties of the learned dictionary, namely whether it is complete or over-complete.
The general case of a complete, or orthogonal, basis imposes the following constraint on the filter-bank:
\begin{align}
<\phi_j,\phi_k>=0,\forall j \not = k,\;K=D,
\end{align}
and sometimes the complementary constraint that $||\phi_k||=1,\forall k$, which leads to an orthonormal basis. The orthogonality of the atoms allows exact reconstruction, $E_n=0,\forall n$, since by definition one has
\begin{equation}\label{eq1}
\hat{x}_n=\sum_k \frac{<x_n,\hat{\phi}_k>}{||\hat{\phi}_k||^2}\hat{\phi}_k,\forall n.
\end{equation}
Given a dictionary $\Phi$, this decomposition is unique and thus we have $(\hat{\Phi}_K,x_n) \Rightarrow \hat{\bm{\alpha}}_n,\forall n$.
However, while the Gram-Schmidt process guarantees the existence of such a basis, it is not unique.
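A minimal sketch of this exact-reconstruction property, assuming random illustrative data: an orthonormal basis built by Gram-Schmidt reconstructs any input with zero error:

```python
import random

# Illustration with random data: build an orthonormal basis of R^D via
# Gram-Schmidt, then check that projecting any x onto it reconstructs x
# exactly (E_n = 0 up to floating-point rounding).
random.seed(0)
D = 5

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    basis = []
    for v in vectors:
        w = list(v)
        for q in basis:
            c = dot(w, q)
            w = [wi - c * qi for wi, qi in zip(w, q)]
        norm = dot(w, w) ** 0.5
        basis.append([wi / norm for wi in w])
    return basis

raw = [[random.gauss(0, 1) for _ in range(D)] for _ in range(D)]
phi = gram_schmidt(raw)  # orthonormal: <phi_j, phi_k> = delta_jk

x = [random.gauss(0, 1) for _ in range(D)]
x_hat = [0.0] * D
for q in phi:
    c = dot(x, q)  # <x, phi_k> / ||phi_k||^2 with unit-norm atoms
    x_hat = [xh + c * qi for xh, qi in zip(x_hat, q)]

error = sum((a - b) ** 2 for a, b in zip(x, x_hat))
assert error < 1e-12
```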
On the other hand, $\Phi_K$ can be an over-complete basis, the main difference being that now $K>D$, with the extreme case $K=N$. Containing more atoms than the dimension of the space leads to interesting properties in practice such as sparse and independent features (ICA), clustering properties (K-means, NMF), and biological plausibility, as it has been empirically shown that the visual and auditory cortices of many mammals contain over-complete dictionaries.
Yet, all those benefits coming from the redundancy of the atoms also lead to non-unique features $\bm{\alpha}_n$ even when the dictionary is kept fixed; thus we have that $(\hat{\Phi}_K,x_n) \not \Rightarrow \hat{\bm{\alpha}}_n$.
As a result, one has to solve the additional optimization problem of
\begin{equation}
\bm{\hat{\alpha}_n}=\argmin_{\bm{\alpha}\in \Omega \subset \mathbb{R}^K} ||x_n-\sum_k\alpha_k\hat{\phi}_k ||.
\end{equation}
As a result, whether one is in the complete or over-complete dictionary setting, the optimization problems are always ill-posed due to the non-uniqueness of the solutions, forcing one to impose additional structure or constraints in order to reach well-posed problems. For the complete case, the main approach consists in imposing that as few atoms as possible are used, leading to PCA, a very powerful approach for dimensionality reduction and compression. For over-complete cases, different sparsity criteria are imposed on the features $\bm{\alpha}_n$, such as the $\ell_0$, $\ell_1$ or $\ell_2$ norms. The $\ell_0$ norm corresponds to matching pursuit, $\ell_1$ to sparse coding, and $\ell_2$ to ridge regression.
For each of those cases many very efficient exact or iterative optimization algorithms have been developed to estimate $\hat{\Phi}_K$ and $\hat{\bm{\alpha}}_n$, yet there still exists a conceptual gap between the two settings, and the two approaches are often seen as orthogonal.
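A minimal matching-pursuit sketch of the $\ell_0$-type scheme mentioned above, assuming a random illustrative over-complete dictionary (all sizes and data are arbitrary):

```python
import random

# A minimal matching-pursuit sketch with a random over-complete dictionary;
# sizes and data are illustrative, not from any particular experiment.
random.seed(1)
D, K = 4, 12  # K > D: over-complete

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

atoms = []
for _ in range(K):
    v = [random.gauss(0, 1) for _ in range(D)]
    n = dot(v, v) ** 0.5
    atoms.append([vi / n for vi in v])  # unit-norm atoms

def matching_pursuit(x, n_iter=20):
    residual = list(x)
    alpha = [0.0] * K
    for _ in range(n_iter):
        # greedily pick the atom most correlated with the current residual
        k = max(range(K), key=lambda j: abs(dot(residual, atoms[j])))
        c = dot(residual, atoms[k])
        alpha[k] += c
        residual = [r - c * a for r, a in zip(residual, atoms[k])]
    return alpha, residual

x = [random.gauss(0, 1) for _ in range(D)]
alpha, residual = matching_pursuit(x)
# the residual shrinks geometrically since the atoms span R^D
assert dot(residual, residual) < 1e-2 * dot(x, x)
```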
As we have seen, both settings lead to different benefits: compression and ease of projection and reconstruction on one hand, and sparsity or clustering at the cost of more complex optimization problems on the other. At a higher level, the signal processing community has always put a gap between those frameworks. As well put by Meyer in [CITE], one has to choose between encoding and representation.
We thus propose in this paper a novel approach allowing an over-complete dictionary from which, given an input, a dynamic basis selection extracts an optimal complete dictionary, with no optimization needed in order to reconstruct. The selection is done in a forward manner, leading to an efficient algorithm. This allows the model to inherit all the properties induced by orthonormal bases while retaining the adaptivity of an over-complete basis: the nonlinear basis selection yields an orthonormal basis when conditioned on the input. We also provide results from a low-dimensional manifold perspective and show that our approach performs nonparametric orbit estimation.
We validate our approach on compression, dictionary learning and clustering tasks.
\section{Deep Residual Oja Network}
\subsection{Shallow Model}
Is it possible to learn a dictionary inheriting the benefits of both complete and over-complete dictionaries? We present one solution here.
We first motivate the need for such a framework and present the general approach and notations. Throughout the next sections, the choice of the Oja name for the algorithm will become apparent to the reader.
Keeping the previously defined notation, we aim at learning an over-complete basis with the number of atoms given by $FK$, where $F>1$ is called the increase factor; note that $F=1$ leads to a complete basis, and $K=D$ unless otherwise stated. By definition, the following projection-reconstruction scheme
\begin{equation}
\hat{x}_n=\sum_k \frac{<x_n,\hat{\phi}_k>}{||\hat{\phi}_k||^2}\hat{\phi}_k,
\end{equation}
cannot reach an error $E_n<\epsilon$, and in fact $E_n$ increases with $F$. One way to resolve this issue comes from an optimal basis selection point of view, leading to
\begin{equation}\label{loss1}
E_n:=||x_n-\sum_k \delta_{n,k}\frac{<x_n,\hat{\phi}_k>}{||\hat{\phi}_k||^2}\hat{\phi}_k|| < \epsilon,
\end{equation}
with $\delta_{n,k}\in \{0,1\},\forall n,k$ representing a mask allowing the use of a subset of the dictionary $\hat{\Phi}_{FK}$, which we denote by $\rho_{\delta_{n,.}}[\hat{\Phi}_{FK}]$.
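A minimal sketch of this masked reconstruction with a single kept atom ($\kappa=1$ in the selection scheme studied below), assuming random illustrative data; it also checks that the resulting squared error equals $||x||^2-|<x,\phi>|^2/||\phi||^2$ for the selected atom:

```python
import random

# Sketch of single-atom selection (kappa = 1): only the atom with the largest
# normalized correlation is kept in the mask and used for reconstruction.
# Dictionary and input are random illustrative data.
random.seed(2)
D, F = 4, 3
n_atoms = F * D  # over-complete: F*K atoms with K = D

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

atoms = [[random.gauss(0, 1) for _ in range(D)] for _ in range(n_atoms)]
x = [random.gauss(0, 1) for _ in range(D)]

def score(phi):
    # normalized squared correlation |<x, phi>|^2 / ||phi||^2
    return dot(x, phi) ** 2 / dot(phi, phi)

k_best = max(range(n_atoms), key=lambda j: score(atoms[j]))
phi = atoms[k_best]
c = dot(x, phi) / dot(phi, phi)
x_hat = [c * p for p in phi]

sq_err = sum((a - b) ** 2 for a, b in zip(x, x_hat))
# the squared error equals ||x||^2 - |<x, phi>|^2 / ||phi||^2 for the kept atom
assert abs(sq_err - (dot(x, x) - score(phi))) < 1e-9
```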
\iffalse
\tiny
As we aim at finding a systematic way to perform this basis selection, there are few possible approaches: either selecting the top-$\kappa$ atoms $(S1)_\kappa$ w.r.t. their activation maps energy, or keep the ones above a given threshold via either soft-thresholding (S2-1) or hard-thresholding (S2-2). If one assumes some kind of concentration among the dictionary, another approach or local winner-takes-all (S3) can be used.
We first present the conditions to have optimal reconstruction for each of the strategies.
\begin{theorem}
For an over-complete basis, $(S1)_\kappa$ is optimal if $\kappa=1$ and $F\Rightarrow \infty$ s.t. $\forall x \in \Omega, \exists k : \phi_k=x$, (S2) is optimal if all inputs are made of atoms with always the same energy, (S3) is optimal is the dictionary can be rewritten as blocks s.t. selecting one atom per block lead to a complete basis.
\end{theorem}
The strategies are thus
\begin{itemize}
\item $(S1)_\kappa$ : $\delta_{n,k}=1 \iff Card(\{j:|<x,\phi_{j}>|<|<x,\phi_{k}>|,j=1,..,FK,j\not = k\})>FK-\kappa$
\item $(S2-1)$ : for this case in addition of applying a mask, there is a change in the value used to project back the atom, it is defined as
\begin{equation}
E_n:=||x_n-\sum_k \delta_{n,k}(<x_n,\hat{\phi}_k>-b_k*sgn(<x_n,\hat{\phi}_k>))\hat{\phi}_k|| < \epsilon,
\end{equation}
and with $\delta_{n,k}=1 \iff $
\item $(S2-2)$ : $\delta_{n,k}=1 \iff |<x,\phi_k>|>b_k$ where $b_k$ is a atom dependent threshold value either learned or imposed.
\item $(S3)$ : Let first partition $\Phi_{FK}$ into an union of $D$ sets of $F$ atoms we denote $\Phi'_{d},d=1,...,D$ as
\begin{align}
\Phi_{FK} :=\big[ \Phi'_{1}\vline \dots \vline \Phi'_{D} \big],
\end{align}
with
\begin{equation}
\Phi'_d=\big[\phi_{Fk} \vline \dots \vline \phi_{F(k+1)} \big], \phi_{.}\in \mathbb{R}^D,
\end{equation}
let rewrite $\delta_{n,d,f}$ be the indicator function applying to the $n^{th}$ example and $f^{th}$ filter of $\Phi'_d$. We thus have
\[
\hat{\delta}_{n,d,f}=1 \iff f=\argmax_{\phi \in \hat{\Phi}'_d} |<x_n,\phi>|, \forall d
\]
\end{itemize}
We are interested in this paper into the $(S1)$ strategy.
For (S1) the assumption of having a very large number of atoms is intractable. Or is it ? We now propose a way to achieve this and present results showing optimality of (S1) over other strategies, yet, we first present the framework allowing to have efficiently a humongous number of atoms.
\normalsize
\fi
\subsubsection{Error Bounds, Learning and Link with Oja Rule}
We first provide an error upper-bound for the proposed scheme $(S1)_{\kappa=1}$.
To simplify notation, we also define $\phi_\kappa(x_n)$ as the atom given by
\begin{equation}
\phi_\kappa(x_n)=\phi_{k'},k'=\argmax_k \frac{|<x_n,\phi_k>|^2}{||\phi_k||^2}.
\end{equation}
\begin{theorem}
The error induced by $(S1)_{\kappa=1}$ is $||x_n||^2-\frac{|<x_n,\phi_\kappa(x_n)>|^2}{||\phi_\kappa(x_n)||^2}$, which is simply the reconstruction error from the best atom, since only one filter is used.
\end{theorem}
\begin{proof}
By the incomplete basis theorem, there exists an orthogonal basis containing the atom $\phi_\kappa(x_n)$; we denote such a basis by $\phi_k$, $k=1,...,D$. We thus have
\begin{align}
E_n=&|| x_n- \frac{<x_n,\phi_\kappa(x_n)>}{||\phi_\kappa(x_n)||^2}\phi_\kappa(x_n)||^2\nonumber\\
=&|| \sum_k\frac{<x_n,\phi_k>}{||\phi_k||^2}\phi_k- \frac{<x_n,\phi_\kappa(x_n)>}{||\phi_\kappa(x_n)||^2}\phi_\kappa(x_n)||^2 &&\text{ incomplete basis theorem} \nonumber \\
=&|| \sum_{k\not = \kappa}\frac{<x_n,\phi_k>}{||\phi_k||^2}\phi_k||^2\nonumber\\
=&\sum_{k\not = \kappa}\frac{|<x_n,\phi_k>|^2}{||\phi_k||^2}\nonumber\\
=&||x_n||^2-\frac{|<x_n,\phi_\kappa(x_n)>|^2}{||\phi_\kappa(x_n)||^2}&&\text{Parseval's Theorem}\nonumber\\
=&||x_n||^2\Big(1-\cos\big(\theta(x_n,\phi_\kappa(x_n))\big)^2\Big)\label{en_eq}
\end{align}
And we have by definition $E_n\geq 0$, with $E_n=0 \iff x_n \propto \phi_\kappa(x_n)$.
\end{proof}
We first make a few comments on the loss and updates. First of all, the loss is closely related to a k-means objective with cosine similarity, and more specifically to spherical k-means, the case where the centers and the data points are renormalized to unit norm, with the objective to minimize
\begin{equation}
\sum_n\big(1-\cos(x_n,p_{c(n)})\big).
\end{equation}
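One alternating step of this spherical k-means objective never increases the loss, since the renormalized cluster mean maximizes the within-cluster sum of cosines. A NumPy sketch (random data, names ours):

```python
import numpy as np

rng = np.random.default_rng(1)
N, D, K = 200, 5, 4
X = rng.standard_normal((N, D))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # unit-norm data points
P = rng.standard_normal((K, D))
P /= np.linalg.norm(P, axis=1, keepdims=True)   # unit-norm centers

def loss(X, P):
    # sum_n (1 - cos(x_n, p_{c(n)})), with c(n) the best-matching center
    return np.sum(1.0 - (X @ P.T).max(axis=1))

loss0 = loss(X, P)
# one alternating step: assign each point, then recenter and renormalize
assign = np.argmax(X @ P.T, axis=1)
for k in range(K):
    if np.any(assign == k):
        m = X[assign == k].mean(axis=0)
        P[k] = m / np.linalg.norm(m)
loss1 = loss(X, P)
```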
\subsubsection{Learning and Oja Rule}
In order to learn the filter-bank $\Phi_K$, a common approach is to use an alternating scheme between finding the cluster belongings and optimizing the atoms w.r.t. this estimate. We first derive a gradient descent scheme to update the atoms and study some of its characteristics.
If we now denote by $n(k):=\{n=1,...,N \,|\, \kappa(x_n)=k\}$ the collection of sample indices in cluster $k$, the resulting loss is $E_{n(k)}:=\frac{\sum_{n \in n(k)}E_n}{Card(n(k))}$, and we can derive a gradient descent step as
\begin{equation}
\phi_k^{(t+1)}(\lambda)=\phi_k^{(t)}-\lambda \frac{d E_{n(k)}}{d \phi_k},
\end{equation}
with
\begin{align}
\frac{d E_{n(k)}}{d \phi_k}&=\frac{1}{Card(n(k))}\sum_{n \in n(k)} \frac{2|<x_n,\phi_k>|}{||\phi_k||^2}\Big( \frac{|<x_n,\phi_k>|\phi_k}{||\phi_k||^2}-(-1)^{1_{<x_n,\phi_k> <0}}x_n \Big),\nonumber\\
&=\frac{1}{Card(n(k))}\sum_{n \in n(k)} \frac{2<x_n,\phi_k>}{||\phi_k||^2}\Big( \frac{<x_n,\phi_k>\phi_k}{||\phi_k||^2}-x_n \Big).\label{oja_eq}
\end{align}
On the other hand, if one adopts an adaptive gradient step $\lambda$ per atom and point with one of the two strategies $\lambda_1,\lambda_2$ defined as
\begin{align}
\lambda_1&=\frac{<x_n,\phi_k>}{2||x_n||^2}\\
\lambda_2&=\frac{||\phi_k||^4}{2<x_n,\phi_k>^2}
\end{align}
then we obtain
\begin{align}
\phi_k^{(t+1)}(\lambda_1)&=\phi_k^{(t)}-\frac{1}{\sum_n \cos(\theta(x_n,\phi_k))^2}\sum_{n \in n(k)}\cos(\theta(x_n,\phi_k))^2\Big( \frac{<x_n,\phi_k>\phi_k}{||\phi_k||^2}-x_n \Big),\label{eq_online1}\\
\phi_k^{(t+1)}(\lambda_2)&=\frac{1}{Card(n(k))}\sum_{n \in n(k)}\frac{||\phi_k||^2}{<x_n,\phi_k>}x_n\label{eq_online2}
\end{align}
In the $\lambda_1$ case we thus end up with a simple update rule depending on a weighted average of the points in the cluster, weighted by their squared cosine similarity, whereas for $\lambda_2$ we obtain a rule \`a la convex NMF, namely a plain combination of the available points without an incremental update.
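To make the two adaptive-step rules concrete, here is a NumPy sketch of the batch updates of Eq. \ref{eq_online1} and Eq. \ref{eq_online2} (function names are ours), checked on a one-point cluster:

```python
import numpy as np

def update_lambda1(phi, X):
    # Eq. (eq_online1): step weighted by squared cosine similarities
    cos2 = (X @ phi) ** 2 / (np.sum(X ** 2, axis=1) * np.sum(phi ** 2))
    w = cos2 / cos2.sum()
    corr = (X @ phi)[:, None] * phi[None, :] / np.sum(phi ** 2) - X
    return phi - np.sum(w[:, None] * corr, axis=0)

def update_lambda2(phi, X):
    # Eq. (eq_online2): convex-NMF-like plain combination of the points
    coeff = np.sum(phi ** 2) / (X @ phi)
    return np.mean(coeff[:, None] * X, axis=0)

phi = np.array([1.0, 1.0])
X = np.array([[1.0, 0.0]])        # a single-point cluster
p1 = update_lambda1(phi, X)       # -> [1.5, 0.5], closer in angle to x
p2 = update_lambda2(phi, X)       # -> [2.0, 0.0], aligned with x
```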
On the other hand, it is clear that minimizing $E_n$ from Eq. \ref{en_eq} is equivalent to maximizing $E^+_n=\frac{|<x_n,\phi_\kappa(x_n)>|^2}{||\phi_\kappa(x_n)||^2}$. As a result, one can recognize in Eq. \ref{oja_eq} the Oja rule, since a gradient ascent update of $\phi_k$ can be rewritten as
\begin{align}
\phi_k^{(t+1)}&=\phi^{(t)}_k+\gamma \frac{d E^+_n}{d \phi_k}(\phi^{(t)}_k)\\
\phi_k^{(t+1)}&=\phi^{(t)}_k+\gamma \Big( x_n\frac{<x_n,\phi_k>}{||\phi_k||^2}-(\frac{<x_n,\phi_k>}{||\phi_k||^2})^2\phi_k \Big)\label{eq_online3}
\end{align}
known as the Oja rule.
In fact, the convergence of the Oja rule toward the leading eigenvector-eigenvalue pair is not surprising, as maximizing $E^+_{n(k)}$ leads explicitly to
\begin{equation}
\phi_k=\argmax_{\phi}\frac{1}{Card(n(k))}\frac{\phi^TX(k)^TX(k)\phi}{\phi^T\phi},\label{eq_pca}
\end{equation}
which is known as the Rayleigh quotient and is a formulation of PCA, whose global optimum is the leading eigenvector-eigenvalue pair.
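This fixed-point behavior can be illustrated by iterating the averaged Oja update of Eq. \ref{eq_online3}, replacing $x_n x_n^T$ in expectation by a covariance matrix $C$ (a NumPy sketch with hypothetical sizes and names); the iterate aligns with the leading eigenvector:

```python
import numpy as np

rng = np.random.default_rng(2)
D = 6
A = rng.standard_normal((D, D))
C = A @ A.T                          # covariance surrogate E[x x^T]
evals, evecs = np.linalg.eigh(C)     # ascending eigenvalues
v_top = evecs[:, -1]                 # leading eigenvector

phi = rng.standard_normal(D)
phi /= np.linalg.norm(phi)
gamma = 0.5 / evals[-1]              # step size relative to the top eigenvalue
for _ in range(5000):
    # averaged Oja update: E[x <x,phi>] = C phi (phi kept at unit norm)
    proj = C @ phi
    phi = phi + gamma * (proj - (phi @ proj) * phi)
    phi /= np.linalg.norm(phi)       # the rule is scale-invariant in direction
```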
\begin{figure}[h]
\centering
\includegraphics[width=5in]{time_mnist.png}
\end{figure}
\begin{pseudocode}[doublebox]{Filter-Bank Learning strategy}{X,K}
\text{Initialize }\Phi_K\\
\WHILE \text{not converged} \DO
\BEGIN
\FOR k \GETS 1 \TO K \DO
\BEGIN
\text{Compute }n(k) \text{ with current }\Phi_K\\
\text{Update }\phi_k \text{ with }n(k) \text{ and }X(k)\text{ according to Eq. \ref{eq_pca}}\\
\END\\
\END\\
\RETURN{\Phi_k}
\end{pseudocode}
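The batch strategy above can be sketched in NumPy (sizes and names ours): alternate the cluster assignment with the per-cluster Rayleigh-quotient maximizer of Eq. \ref{eq_pca}, i.e., the top eigenvector of $X(k)^TX(k)$; the total loss of Eq. \ref{en_eq} is then non-increasing:

```python
import numpy as np

rng = np.random.default_rng(3)
N, D, K = 300, 8, 5
X = rng.standard_normal((N, D))
Phi = rng.standard_normal((K, D))

def total_error(X, Phi):
    # sum_n ||x_n||^2 (1 - cos^2(x_n, phi_kappa(x_n))), Eq. (en_eq)
    cos2 = (X @ Phi.T) ** 2 / (np.sum(X ** 2, 1)[:, None] * np.sum(Phi ** 2, 1)[None, :])
    return np.sum(np.sum(X ** 2, axis=1) * (1 - cos2.max(axis=1)))

errs = [total_error(X, Phi)]
for _ in range(5):
    # assignment step: best atom per sample
    cos2 = (X @ Phi.T) ** 2 / (np.sum(X ** 2, 1)[:, None] * np.sum(Phi ** 2, 1)[None, :])
    assign = cos2.argmax(axis=1)
    # update step: top eigenvector of X(k)^T X(k) maximizes the Rayleigh quotient
    for k in range(K):
        Xk = X[assign == k]
        if len(Xk):
            _, vecs = np.linalg.eigh(Xk.T @ Xk)
            Phi[k] = vecs[:, -1]
    errs.append(total_error(X, Phi))
```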
\begin{pseudocode}[doublebox]{Online Filter-Bank Learning strategy}{X,K}
\text{Initialize }\Phi_K\\
\WHILE \text{not converged} \DO
\BEGIN
\FOR n \GETS 1 \TO N \DO
\BEGIN
\kappa = \argmax_k \frac{|<x_n,\phi_k>|^2}{||\phi_k||^2||x_n||^2}\\
\text{Update }\phi_\kappa \text{ according to Eq. \ref{eq_online1} or Eq.\ref{eq_online2} or Eq.\ref{eq_online3}}\\
\END\\
\END\\
\RETURN{\Phi_k}
\end{pseudocode}
\begin{theorem}
If the $x_n$ are uniformly distributed in the space, the optimal over-complete basis for $(S1)_{\kappa=1}$ is the one corresponding to a quantization of the sphere; it is unique up to a change of sign and global rotations (the same rotation applied to each atom). For the $2$-dimensional case it is easy to see that the maximum error for any given point $x_n$ is exactly upper-bounded by $||x_n||^2\Big(1-\cos(\frac{\pi}{2FK})^2\Big)$, and maintaining this bound in dimension $D$ requires $FK$ to grow exponentially with $D$.
\end{theorem}
\begin{proof}
In the 2D case we know the upper bound is $||x_n||^2\Big(1-\cos(\frac{\pi}{2FK})^2\Big)$ with $FK$ atoms. We thus rewrite
\begin{align*}
||x-\hat{x}||^2=\sum_{d=1}^{D/2}||x_d-\hat{x}_d||^2
\end{align*}
and see that in order to attain the upper bound in each subspace we need the cartesian product of all the subspace bases $\Phi_{FK}$, leading to $(FK)^{D/2}$ atoms. Thus one needs to grow the number of atoms exponentially w.r.t. the dimension to keep a loss that increases only linearly.
\end{proof}
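The 2D bound is easy to verify numerically: with $FK$ atoms uniformly quantizing the directions of the half-circle, the per-point error never exceeds $||x||^2\big(1-\cos(\frac{\pi}{2FK})^2\big)$ (NumPy sketch, names ours):

```python
import numpy as np

FK = 16
# atoms uniformly quantizing the half-circle (directions are sign-invariant)
angles = np.arange(FK) * np.pi / FK
Phi = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # unit-norm atoms

rng = np.random.default_rng(4)
X = rng.standard_normal((1000, 2))

cos2 = (X @ Phi.T) ** 2 / np.sum(X ** 2, 1)[:, None]       # ||phi_k|| = 1
err = np.sum(X ** 2, axis=1) * (1 - cos2.max(axis=1))      # best-atom error
bound = np.sum(X ** 2, axis=1) * (1 - np.cos(np.pi / (2 * FK)) ** 2)
```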
\begin{figure}[h]
\centering
\includegraphics[width=5in]{error_bound.png}
\end{figure}
However, this pessimistic upper-bound assumes the worst possible scenario: a uniform distribution of the data points in $\mathbb{R}^D$, which in general does not hold. In fact, many datasets have inherent structure and lie in a small, sometimes regular, subset of $\mathbb{R}^D$.
In general, data might live in unions of subsets, and thus providing a general strategy or an optimal basis a priori is more complex, motivating the need to learn the atoms, as is done in general for k-means applications.
\begin{theorem}\label{th5}
A sufficient condition for $(S1)_{\kappa=1}$ to be optimal is that the data are already clustered along $FK$ lines, one per atom.
\end{theorem}
We now present one way to tackle this curse of dimensionality in the next section.
\subsection{Multiple Atoms}
\begin{equation}
E_n=|| x_n- \sum_{k=1}^K\frac{<x_n,\phi^k_\kappa(x_n)>}{||\phi^k_\kappa(x_n)||^2}\phi^k_\kappa(x_n)||^2
\end{equation}
For learning atom after atom a la coordinate ascent we have that
\begin{align*}
\hat{\phi}^{k'}_j=& \argmin_{\phi^{k'}_j}\sum_n E_n\\
=&\argmin_{\phi^{k'}_j}\sum_{n \in n(k',j)} || \Big(x_n- \sum_{k=1,k\not = k'}^K\frac{<x_n,\phi^k_\kappa(x_n)>}{||\phi^k_\kappa(x_n)||^2}\phi^k_\kappa(x_n)\Big)-\frac{<x_n,\phi^{k'}_j>}{||\phi^{k'}_j||^2}\phi^{k'}_j||^2
\end{align*}
As we showed in the previous section, we end up with the same update rule, but with the input replaced by the input minus the other used atoms. Thus we still perform PCA, but on the residual left by the other atoms. Note that this ensures orthogonality between the atoms.
\subsection{From Shallow to Deep Residual for better Generalization Error Bounds and Combinatorially Large Dictionaries}
\begin{figure}[t]
\centering
\includegraphics[width=5in]{laroue.png}
\end{figure}
We now consider the analysis of the generalization performance on out-of-bag observations, as well as the problem of handling very large datasets.
\begin{theorem}
If we suppose a finite training set and an Oja Network with sufficiently many filters to reach a training error of $0$, then we know that the generalization error is directly lower-bounded by how close the testing and training examples are. In fact,
\begin{equation}
E_{new}\propto 1-\cos(\theta(x_\kappa,x_{new}))^2,\quad \kappa=\argmax_n \cos(\theta(x_n,x_{new})).
\end{equation}
\end{theorem}
The proof is straightforward, as we know the network is able to perfectly reconstruct the training set, and only the training set.
As a result, if the training set samples the space of possible observations well and uniformly, a shallow Oja Network can be considered optimal for the testing set as well.
However, especially for computer vision tasks, it is well known that observations are very far from each other in terms of pixel-domain distance; moreover, requiring a proper sampling of the space of images is clearly unrealistic.
We thus now present the result motivating deep architectures in general including the Oja Network.
\begin{align}
R^{(l)}_n=&R^{(l-1)}_n-\frac{<R^{(l-1)}_n,\phi_\kappa^{(l)}>}{||\phi_\kappa^{(l)}||^2}\phi_\kappa^{(l)}, \kappa = \argmax_k \frac{|<R^{(l-1)}_n,\phi_k^{(l)}>|^2}{||R^{(l-1)}_n||^2||\phi_k^{(l)}||^2} \nonumber \\
R^{(0)}_n=&x_n
\end{align}
As a result, as soon as the input and the templates are not orthogonal, there is convergence.
\begin{theorem}
Since by definition the selected atom is the one with the smallest angle, a zero cosine means that the input $R^{(l-1)}_n$ is orthogonal to the whole learned dictionary $\Phi^{(l)}$,
\begin{equation}
\cos \Big(\theta (R^{(l-1)}_n,\phi^{(l)}_\kappa) \Big)^2=0 \iff R^{(l-1)}_n \perp \phi^{(l)}_k,\ \forall k,
\end{equation}
and thus they live in two orthogonal spaces. It remains to prove that the cosines at all subsequent layers are then also $0$.
\end{theorem}
\begin{theorem}
The residual decreases exponentially w.r.t. the depth of the model.
\end{theorem}
\begin{proof}
\begin{align}
||R^{(l)}_n||^2=&||R^{(l-1)}_n-\frac{<R^{(l-1)}_n,\phi_\kappa^{(l)}>}{||\phi_\kappa^{(l)}||^2}\phi_\kappa^{(l)}||^2 \nonumber \\
=&||R^{(l-1)}_n||^2-\frac{<R^{(l-1)}_n,\phi_\kappa^{(l)}>^2}{||\phi_\kappa^{(l)}||^2}\nonumber \\
=&||R^{(l-1)}_n||^2\Big(1-\cos \Big(\theta(R^{(l-1)}_n,\phi_\kappa^{(l)}) \Big)^2\Big)\\
=&||x_n||^2\prod_{l'=1}^{l}\Big(1-\cos \Big(\theta(R^{(l'-1)}_n,\phi_\kappa^{(l')}) \Big)^2 \Big)
\end{align}
\end{proof}
The final template can be flattened via
\begin{align}
T_n =& \sum_l \frac{<R^{(l-1)}_n,\phi_\kappa^{(l)}>}{||\phi_\kappa^{(l)}||^2}\phi_\kappa^{(l)}\\
=&\sum_l P^{(l)}_n
\end{align}
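The residual scheme and the flattened template $T_n=\sum_l P^{(l)}_n$ can be sketched as follows (NumPy, with random per-layer dictionaries standing in for learned ones); it also checks the product formula for $||R^{(L)}_n||^2$ proven above:

```python
import numpy as np

rng = np.random.default_rng(5)
D, L, K = 10, 6, 4
x = rng.standard_normal(D)
# one random dictionary per layer (a sketch; in the text each Phi^(l) is learned)
Phis = [rng.standard_normal((K, D)) for _ in range(L)]

R = x.copy()
T = np.zeros(D)                      # flattened template, T_n = sum_l P^(l)_n
norms, cos2s = [np.linalg.norm(R)], []
for Phi in Phis:
    cos2 = (Phi @ R) ** 2 / (np.sum(Phi ** 2, axis=1) * (R @ R))
    k = int(np.argmax(cos2))
    P = (R @ Phi[k]) / (Phi[k] @ Phi[k]) * Phi[k]   # best-atom projection
    T += P
    R -= P                                           # residual update
    norms.append(np.linalg.norm(R))
    cos2s.append(cos2[k])
```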
\subsubsection{Learning}
Computing the gradient admits a convenient recursion, which we define as follows:
\begin{align}
A_{i,j}=\left\{ \begin{matrix}
0 \iff j<i\\
\frac{<R^{(i-1)},\phi^{(i)}_\kappa>I_d+\phi^{(i)}_\kappa R^{(i-1)T}}{||\phi^{(i)}_\kappa||^2}-\frac{2<R^{(i-1)},\phi^{(i)}_\kappa>\phi^{(i)}_\kappa\phi^{(i)T}_\kappa}{||\phi^{(i)}_\kappa||^4} \iff i=j\\
A_{i,j-1}-\frac{\phi^{(j)}_\kappa\phi^{(j)T}_\kappa A_{i,j-1}}{||\phi^{(j)}_\kappa||^2}\iff j>i
\end{matrix} \right.
\end{align}
thus $A_{i,j} \in \mathbb{R}^{D \times D}$, and we have
\begin{align}
\mathcal{L}_n=&||R^{(L)}_n||,\\
\frac{\textbf{d} \mathcal{L}_n^2}{\textbf{d} \phi^{(l)}_\kappa}=&2R^{(L)T}_n\frac{\textbf{d} R^{(L)}_n}{\textbf{d} \phi^{(l)}_\kappa}
\end{align}
However, as we will see below, there is a convenient recursive definition to compute all these derivatives. In fact,
\begin{equation}\text{Init. }
\begin{cases}
\frac{\textbf{d} P^{(l)}_n}{\textbf{d} \phi^{(l)}_\kappa}&=\frac{<R^{(l-1)}_n,\phi^{(l)}_\kappa>I_d+\phi^{(l)}_\kappa R^{(l-1)T}_n}{||\phi^{(l)}_\kappa||^2}-\frac{2<R^{(l-1)}_n,\phi^{(l)}_\kappa>\phi^{(l)}_\kappa\phi^{(l)T}_\kappa}{||\phi^{(l)}_\kappa||^4},\\
\frac{\textbf{d} R^{(l)}_n}{\textbf{d} \phi^{(l)}_\kappa}&=-\frac{\textbf{d} P^{(l)}_n}{\textbf{d} \phi^{(l)}_\kappa}
\end{cases}
\end{equation}
\begin{equation}\text{Recursion }
\begin{cases}
\frac{\textbf{d} P^{(l+1)}_n}{\textbf{d} \phi^{(l)}_\kappa}&=\frac{\phi^{(l+1)}_\kappa\phi^{(l+1)^T}_\kappa}{||\phi^{(l+1)}_\kappa||^2}\frac{\textbf{d} R^{(l)}_n}{\textbf{d} \phi_\kappa^{(l)}},\\
\frac{\textbf{d} R^{(l+1)}_n}{\textbf{d} \phi^{(l)}_\kappa}&=\frac{\textbf{d} R^{(l)}_n}{\textbf{d} \phi^{(l)}_\kappa}-\frac{\textbf{d} P^{(l+1)}_n}{\textbf{d} \phi^{(l)}_\kappa}
\end{cases}
\end{equation}
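Since the recursion is initialized by the Jacobian of $P=\frac{<R,\phi>}{||\phi||^2}\phi$, it can be sanity-checked against central finite differences (NumPy sketch; we use the Jacobian $\frac{<R,\phi>I + \phi R^T}{||\phi||^2} - \frac{2<R,\phi>\phi\phi^T}{||\phi||^4}$, with a minus sign on the normalization term):

```python
import numpy as np

def proj(R, phi):
    # best-atom projection P = (<R,phi>/||phi||^2) phi
    return (R @ phi) / (phi @ phi) * phi

def dP_dphi(R, phi):
    # analytic Jacobian of P with respect to phi
    n2 = phi @ phi
    return ((R @ phi) * np.eye(len(phi)) + np.outer(phi, R)) / n2 \
           - 2 * (R @ phi) * np.outer(phi, phi) / n2 ** 2

rng = np.random.default_rng(6)
R = rng.standard_normal(5)
phi = rng.standard_normal(5)
J = dP_dphi(R, phi)

# central finite differences, column by column
eps = 1e-6
J_fd = np.zeros((5, 5))
for d in range(5):
    e = np.zeros(5); e[d] = eps
    J_fd[:, d] = (proj(R, phi + e) - proj(R, phi - e)) / (2 * eps)
```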
\begin{pseudocode}[doublebox]{Residual Oja Network}{X,K}
R_n \GETS X_n, \forall n \\
\FOR l \GETS 1 \TO L \DO
\BEGIN
\text{Initialize }\Phi^{(l)}_K \text{ from }R\\
\WHILE \text{not converged} \DO
\BEGIN
\FOR k \GETS 1 \TO K \DO
\BEGIN
\text{Compute }n(k) \text{ with current }\Phi^{(l)}_K\\
\text{Update }\phi^{(l)}_k \text{ with }n(k) \text{ and }R(k)\text{ according to Eq. \ref{eq_pca}}\\
\END\\
\END\\
R_n = (R_n-\frac{<R_n,\phi^{(l)}_\kappa>}{||\phi^{(l)}_\kappa ||^2}\phi^{(l)}_\kappa)
\END\\
\RETURN{\Phi^{(l)}_k, \forall l}
\end{pseudocode}
\begin{pseudocode}[doublebox]{Online Residual Oja Network}{X,K}
R_n \GETS X_n, \forall n \\
\FOR l \GETS 1 \TO L \DO
\BEGIN
\text{Initialize }\Phi^{(l)}_K \text{ from }R\\
\WHILE \text{not converged} \DO
\BEGIN
\FOR n \GETS 1 \TO N \DO
\BEGIN
\kappa = \argmax_k \frac{|<R_n,\phi^{(l)}_k>|^2}{||\phi^{(l)}_k||^2||R_n||^2}\\
\text{Update }\phi^{(l)}_\kappa \text{ according to Eq. \ref{eq_online1} or Eq.\ref{eq_online2} or Eq.\ref{eq_online3}}\\
\END\\
\END\\
R_n = (R_n-\frac{<R_n,\phi^{(l)}_\kappa>}{||\phi^{(l)}_\kappa ||^2}\phi^{(l)}_\kappa)
\END\\
\RETURN{\Phi^{(l)}_k, \forall l}
\end{pseudocode}
\begin{theorem}
With a Deep (Oja) Network, the previously presented lower-bound of the generalization error becomes an upper-bound.
\end{theorem}
In addition of guaranteeing better generalization errors through depth, we also benefit from another gain. The depth as we will see allows for an exponential amount of possible templates to be constructed perfectly with only a linear increase in the number of learned parameters.
\begin{lstlisting}[caption=Input to Mask]
import theano
import theano.tensor as T
from theano.tensor.signal.pool import max_pool_2d_same_size
####################
# INPUT: X(N,channels,Ix,Jx), w(n_filters,channels,Iw,Jw)
####################
k = T.nnet.conv2d(X, w, subsample=(stride, stride),
    border_mode='valid', filter_flip=False,
    input_shape=(N, channels, Ix, Jx),
    filter_shape=(n_filters, channels, Iw, Jw))  # (N,n_filters,(Ix-Iw)//stride+1,(Jx-Jw)//stride+1)
# winner-take-all across filters: keep only the strongest activation
# (in absolute value) at each spatial position, with its sign
output = ((k > 0) * 2 - 1) * max_pool_2d_same_size(
    T.abs_(k).dimshuffle([0, 2, 3, 1]),
    (1, n_filters)).dimshuffle([0, 3, 1, 2])  # same shape as k
mask = T.switch(T.eq(output, 0), 0, 1)  # same shape as k
\end{lstlisting}
\begin{lstlisting}[caption=Reconstruction]
####################
# INPUTS: Z(N,n_filters,Iz,Jz), w(n_filters,channels,Iw,Jw), stride
####################
# dilate the coding by inserting stride-1 zeros between coefficients
dilated_Z = T.set_subtensor(
    T.zeros((N, n_filters, (Iz - 1) * stride + 1, (Jz - 1) * stride + 1),
            dtype='float32')[:, :, ::stride, ::stride], Z)  # (N,n_filters,Ix-Iw+1,Jx-Jw+1)
rec = T.nnet.conv2d(dilated_Z, w.dimshuffle([1, 0, 2, 3]), subsample=(1, 1),
    border_mode='full', filter_flip=False)  # (N,channels,Ix,Jx)
\end{lstlisting}
\begin{lstlisting}[caption=Mask to Grad]
###################
# INPUT: rec(N,C,Ix,Jx), mask(N,n_filters,Iz,Jz), Iw, Jw
###################
# accumulate over the batch the correlation between the reconstruction
# and the mask to obtain the gradient w.r.t. the filters
d_W, updates = theano.scan(
    fn=lambda i, acc, rec, mask:
        acc + T.nnet.conv2d(rec[i].dimshuffle([0, 'x', 1, 2]),
                            mask[i].dimshuffle([0, 'x', 1, 2]),
                            input_shape=(C, 1, Ix, Jx),
                            filter_shape=(n_filters, 1, Iz, Jz)).dimshuffle([1, 0, 2, 3]),
    sequences=[T.arange(N, dtype='int32')],
    non_sequences=[rec, mask],
    outputs_info=T.zeros((n_filters, C, Iw, Jw), dtype='float32'))
d_W = d_W[-1]
\end{lstlisting}
\begin{figure}[h]
\centering
\includegraphics[width=5in]{error_energy.png}
\caption{Top: evolution of the reconstruction error w.r.t. epochs. Bottom: evolution of the energy captured per level during training. At the first stage the last levels capture everything, since random initialization makes the global filters almost orthogonal to images; during training the global filters learn to capture the low frequencies. Since it is known that natural images have a $1/f$ decay of energy over frequencies $f$, we can see that the final energy repartition is indeed larger for low-frequency/global filters and decreases for smaller filters.}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=5in]{rec_example.png}
\caption{Example of decomposition and reconstruction of some CIFAR10 images. From right to left: the final residual (reconstruction minus original), the original image, the reconstructed image, and then all the decompositions, with the global/large one on the left. Summing columns $1$ to $8$ elementwise leads to column $9$, the reconstructed input.}
\end{figure}
\subsection{Previous Work}
The proposed method can be seen as bilinear sparse coding with a one-hot latent vector $y$ \cite{grimes2005bilinear} in the case where only one filter is used. There is also a direct link with the probabilistic version of this work, namely mixtures of PPCA \cite{tipping1999probabilistic,tipping1999mixtures}, as we are here in a ``hard clustering'' regime, similarly to k-means versus GMM.
By the selection of the best matching atom, we find some links with matching pursuit \cite{tropp2007signal,pati1993orthogonal} and also locality sensitive hashing \cite{indyk1998approximate,johnson1984extensions} especially in the cosine similarity distance.
This problem can also be seen from a best basis selection point of view coupled with dictionary learning.
Popular examples with varying degrees of computational overhead include convex relaxations such
as $L1$-norm minimization \cite{beck2009fast,candes2006robust,tibshirani1996regression}, greedy approaches like orthogonal matching pursuit (OMP)
\cite{pati1993orthogonal,tropp2004greed}, and many flavors of iterative hard-thresholding (IHT) \cite{blumensath2009iterative,blumensath2010normalized}.
Variants of these algorithms find practical relevance in numerous disparate domains, including feature selection \cite{cotter2002sparse,figueiredo2002adaptive}, outlier removal \cite{candes2005decoding,ikehata2012robust}, compressive sensing \cite{baraniuk2007compressive}, and source localization \cite{baillet2001electromagnetic,model2006signal}.
\section{Conclusion}
We presented a hierarchical version of the deterministic mixture of PCA and reported results on CIFAR10 images. We also provided algorithms enabling GPU computation for large-scale datasets and speed. The main novelty comes from the deterministic formulation of the probabilistic mixture of PCA, which is easier to use, since MPPCA is known to be unstable for large-scale problems. From this we derived its hierarchical residual version, which inherits many benefits and allows for a reconstruction error decaying exponentially w.r.t. the depth. We also believe that this residual approach, which learns orthogonal spaces, will lead to interesting dictionary learning combinations, for example mixing residual networks with this approach.
\iffalse
\section{Validation Results}
\subsection{MNIST}
We present now reconstruction error on MNIST for different $F$ values as $1,4,8$. Note that for better comparison, we also provided the reconstruction error provided by PCA for the case of taking the best $KF$ projectors. Note that for the proposed approach, we always have $||\alpha||_0=K$.
\begin{table}
\begin{tabular}{c|c|c|c|}\hline
&F=1&F=4&F=8\\\hline
PCA &0.000245 \textbf{0.000246} & 6.08e-05 \textbf{6.16e-05} & 1.92e-05 \textbf{1.97e-05} \\ \hline
PROPOSED &0.000251 \textbf{0.000252} & 0.000135 \textbf{0.000136} & 0.000116 \textbf{0.000117}\\ \hline
\end{tabular}
\caption{Global Dictionary with $K=32$}
\end{table}
\begin{table}
\begin{tabular}{c|c|c|c|}\hline
&F=1&F=4&F=8\\\hline
PCA &0.000517 \textbf{0.000518}&0.000245 \textbf{0.000246}&0.000133 \textbf{0.000134} \\ \hline
PROPOSED &0.000538 \textbf{0.000538}&0.000361 \textbf{0.000362} & 0.000309 \textbf{0.000310}\\ \hline
\end{tabular}
\caption{Global Dictionary with $K=8$}
\end{table}
We also provide results for the case of local patches below without overlap (thus the two global and local errors can be compared) :
\begin{table}
\begin{tabular}{c|c|c|c|}\hline
&F=1&F=4&F=8\\\hline
PROPOSED (8,8) K=8 &0.000348 \textbf{0.000347}& 0.000205 \textbf{0.000205} & 0.000178 \textbf{0.000177} \\ \hline
PROPOSED (8,8) K=32 &0.000123 \textbf{0.000122}&9.68e-05 \textbf{9.66e-05} & 9.47e-05 \textbf{9.457e-05}\\ \hline
\end{tabular}
\caption{Global Dictionary with $K=8,32$}
\end{table}
For all the presented tables, the associated filters are in the appendix.
\subsection{CIFAR10}
\subsubsection*{Acknowledgments}
Use unnumbered third level headings for the acknowledgments. All
acknowledgments go at the end of the paper. Do not include
acknowledgments in the anonymized submission, only in the final paper.
\fi
\section{Results}
To implement aforementioned protocols, it would thus be desirable to have a relation that is useful for significantly smaller values of $n$. Here, we prove such a relation that makes a statement for any desirable \emph{fixed} error $\varepsilon>0$. In particular, we show that for any $n$ qubit quantum state $\rho$ and measurements in BB84 bases
\begin{align}
{\rm H}^\eps_{\rm min}(X|\Theta K) \geq n\cdot c_{BB84}\ ,
\end{align}
where
\begin{align}\label{eq:bb84result}
c_{BB84} := \max_{s\in(0,1]} \left\{\frac{1}{s} \left[1+s-\log(1+2^s) \right] - \frac{1}{sn} \log \frac{2}{\epsilon^2}\right\}\ .
\end{align}
At first glance, it may be hard to see that $c_{BB84}$ is indeed large. However, applying it to the example from~\cite{secureid:practical} (see above) by plugging in $s=0.1$ demonstrates that for the same $\varepsilon = 0.1$, $c_{BB84} \geq 0.4894$ whenever $n\geq 2.36\times 10^4$. Compared with the calculations in the previous section, the required block length $n$ is approximately $10^{4}$~times smaller. Figure~\ref{fig:plot} provides a comparison of these two bounds. We see that even for large $\epsilon$, the required bound on the block length $n$ given by~\eqref{oldeps} is large.
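The constant $c_{BB84}$ is easy to evaluate numerically; a grid search over $s$ (a NumPy sketch, function name ours) reproduces the value $c_{BB84}\approx 0.4894$ quoted above for $\varepsilon=0.1$ and $n=2.36\times 10^4$:

```python
import numpy as np

def c_bb84(n, eps):
    # grid search over the Renyi parameter s in (0, 1]
    s = np.linspace(1e-3, 1.0, 2000)
    vals = (1.0 + s - np.log2(1.0 + 2.0 ** s)) / s \
           - np.log2(2.0 / eps ** 2) / (s * n)
    return vals.max()

rate = c_bb84(n=2.36e4, eps=0.1)   # approximately 0.4894
```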
\begin{figure}[h!]
\includegraphics[scale=0.49]{plot}
\caption{(Color online) This plot shows the minimal required block length $n$, on a base-$10$ logarithmic scale, needed to achieve an error parameter $\epsilon$. The dashed curves are plotted for the previously known bound~\eqref{oldeps}, while the solid lines are obtained from our new analysis~\eqref{eq:bb84result}. The different colors represent fixed values of the lower bound $c'$: 0.45, 0.46, 0.47, 0.48, and 0.49. As $c'$ increases, the plotted bounds lie higher.}\label{fig:plot}
\end{figure}
Our relation can readily be applied to any BB84 based two-party protocols in the bounded (or noisy)-storage model, and enables experiments for significantly smaller values of $n$. For example, it enables the experimental implementation of~\cite{noisyimpl} with $n = 2.5\times 10^5$ instead of $n > 10^{9}$ for the same error parameter $\varepsilon$.
Furthermore our relation can be extended to the case of six-state protocols, i.e., measurements in Pauli $\sigma_x$, $\sigma_z$ and $\sigma_y$ eigenbases as suggested in~\cite{chris:diss,secureid,qcextract}. For this case we obtain
\begin{align}
{\rm H}^\eps_{\rm min}(X|\Theta K) &\geq n\cdot c_6\ ,
\end{align}
where
\begin{align}
c_6 := \max_{s\in(0,1]} &~-\frac{1}{s} \log\left[ \frac{1}{3} \left(1+2^{1-s}\right) \right] - \frac{1}{sn} \log \frac{2}{\epsilon^2}\ .
\end{align}
This yields a similar improvement over the relation analogous to~\eqref{eq:bb84old} proven in~\cite{serge:new}.
A crucial step in our proof is to show \emph{tight} uncertainty relations for conditional R{\'e}nyi entropies of order $\alpha$, denoted by ${\rm H_{\alpha}}(A|B)$. These may be of independent interest. Previously, such relations were only known for single qudit measurements for $\alpha\rightarrow 1$, $\alpha = 2$, and $\alpha \rightarrow \infty$ (see e.g.~\cite{ww:ursurvey,ww:cliffordUR,renyiArxiv}). More precisely, we show that for measurements on $n$-qubit states $\rho$ in BB84 bases, the minimum values of the conditional R{\'e}nyi entropies for any $\alpha\in(1,2]$ are
\begin{align}\label{eq:alphanqubit}
\min_\rho {\rm H_{\alpha}}(X|\Theta)_{\rho|\rho} = n\cdot\frac{\alpha-\log(1+2^{\alpha-1})}{\alpha-1}\ ,
\end{align}
where
\begin{equation}
{\rm H_{\alpha}}(A|B)_{\rho|\rho} := \frac{1}{1-\alpha} \log \mathop{\mathrm{tr}}\nolimits\left[\rho_{AB}^\alpha (\mathbb{I}_A\otimes \rho_{B})^{1-\alpha}\right]\ .
\end{equation}
Similarly, for measurements in the six-state bases
\begin{align}
\min_\rho {\rm H_{\alpha}}(X|\Theta)_{\rho|\rho} = n\cdot\frac{\log3-\log\left(1+2^{2-\alpha}\right)}{\alpha-1}\ .
\end{align}
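The closed-form minima above are straightforward to evaluate (a NumPy sketch; function names ours). As a consistency check, the limit $\alpha\rightarrow 1$ recovers the rates $1/2$ (BB84) and $2/3$ (six-state) known from the Shannon-entropy analysis:

```python
import numpy as np

def min_renyi_bb84(alpha):
    # minimal per-qubit H_alpha(X|Theta) for BB84 measurements
    return (alpha - np.log2(1 + 2.0 ** (alpha - 1))) / (alpha - 1)

def min_renyi_six_state(alpha):
    # minimal per-qubit H_alpha(X|Theta) for six-state measurements
    return (np.log2(3) - np.log2(1 + 2.0 ** (2 - alpha))) / (alpha - 1)
```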
\section{Proof}
Let us now explain the proof of our results. A technical derivation including all details may be found in the appendix. For simplicity, we restrict ourselves to the case of BB84 measurements. An extension for six-state protocols is analogous and can be found in the appendix. To obtain~\eqref{eq:bb84result} we proceed in four steps. First, we will prove a tight uncertainty relation in terms of the $\alpha$-R{\'e}nyi entropy when $\rho$ is just an $n=1$ qubit state. Second, we show how to extend this result to an uncertainty relation for $n > 1$ qubits, giving us~\eqref{eq:alphanqubit}. The third step is to reintroduce $K$ as outlined in the introduction. Finally, we relate the R{\'e}nyi entropies of order $\alpha \in (1,2]$ to the smooth min-entropy. \\
\medskip
\noindent
{\bf Step 1 - A single qubit uncertainty relation:}
For the case when $A$ and $B$ are classical the conditional $\alpha$-R{\'e}nyi entropy reduces to the simple form
\begin{equation}
{\rm H_{\alpha}} (A|B)_{\rho|\rho} = \frac{1}{1-\alpha} \log \sum_{b} p_{B=b} \sum_{a} p^\alpha_{A=a|B=b}\ .
\end{equation}
The relevant $\alpha$-R{\'e}nyi entropy for a single qubit state $\rho_k$ (where $k$ denotes some classical information associated with the state $\rho_k$) is
\begin{align}
{\rm H_{\alpha}} (X|\Theta)_{\rho_k|\rho_k} &=\frac{1}{1-\alpha} \log \sum_{\theta\in\{0,1\}} p_{\theta} \sum_{x\in\{0,1\}} p^\alpha_{x|k \theta}\nonumber\\
&=\frac{1}{1-\alpha}\log\left[\frac{1}{2}\cdot\sum_{\theta\in\{0,1\},x\in\{0,1\}} p^\alpha_{x|k \theta}\right]\ .
\end{align}
Here $p_{x|k \theta}:=\mathop{\mathrm{tr}}\nolimits(M_{x|\theta}~\rho_k)$, where $M_{x|\theta}$ denotes the measurement operator
\begin{equation}
M_{x|\theta}= \mathbf{H}^\theta|x \rangle\langle x|\mathbf{H}^\theta\ ,
\end{equation}
with $\mathbf{H}$ the Hadamard matrix. To minimize the $\alpha$-R{\'e}nyi entropy for values of $\alpha\in(1,2]$, it is sufficient to maximize the summation term. Defining
\begin{equation}\label{PXT}
P(X|\Theta)_{\rho_k} = \frac{1}{2}\cdot\sum_{\theta\in\{0,1\},x\in\{0,1\}} p^\alpha_{x|k \theta}\ ,
\end{equation}
we first rewrite $p_{x|k \theta}$ as a function of two variables: $g_x:=\mathop{\mathrm{tr}}\nolimits(\sigma_x\rho_k)$ and $g_z:=\mathop{\mathrm{tr}}\nolimits(\sigma_z\rho_k)$. The Bloch sphere condition for a qubit gives $g_x^2+g_z^2\leq g_x^2+g_y^2+g_z^2\leq 1$, which serves as a constraint in maximizing \eqref{PXT}. Switching to spherical coordinates and evaluating the partial derivatives of \eqref{PXT} with respect to the independent variables, we prove
\begin{eqnarray}\label{minimal_renyi_bb84}
{\rm H_{\alpha}}(X|\Theta)_{\rho_k|\rho_k} &\geq& \frac{1}{1-\alpha} \log \left[ \frac{1}{2^{1+\alpha}} (2^{\alpha}+2) \right]\nonumber\\
&=& \frac{1}{\alpha-1} \left[\alpha-\log(1+2^{\alpha-1}) \right].
\end{eqnarray}
Moreover, the minimal $\alpha$-R{\'e}nyi entropy is achieved on an eigenstate of either measurement basis.
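This single-qubit minimization can be checked by brute force for $\alpha=2$: sweeping qubit states with Bloch components $(g_x,g_z)$ over the unit disk (a NumPy sketch, names ours), the entropy never falls below the bound, and the bound is attained on the boundary, which contains the basis eigenstates:

```python
import numpy as np

alpha = 2.0
# qubit states in the x-z plane of the Bloch ball: g = r (cos t, sin t), r <= 1
t = np.linspace(0, 2 * np.pi, 401)
r = np.linspace(0, 1, 201)
Rr, Tt = np.meshgrid(r, t)
gx, gz = Rr * np.cos(Tt), Rr * np.sin(Tt)

# outcome probabilities: standard basis depends on g_z, Hadamard basis on g_x
probs = np.stack([(1 + gz) / 2, (1 - gz) / 2, (1 + gx) / 2, (1 - gx) / 2])
P = 0.5 * np.sum(probs ** alpha, axis=0)
H = np.log2(P) / (1 - alpha)            # conditional alpha-Renyi entropy

bound = (alpha - np.log2(1 + 2 ** (alpha - 1))) / (alpha - 1)   # = 2 - log2(3)
```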
\medskip
\noindent
{\bf Step 2 - A relation for $n$-qubits:} To extend the one-qubit uncertainty relation to multiple qubits, the central problem is to prove that the lower bound on the conditional entropy scales linearly with the block length $n$. This essentially implies that, for a system of $n$ qubits, entanglement across qubits does not give rise to a lower minimal $\alpha$-R{\'e}nyi entropy. In our analysis, we show this by first considering the last qubit measured, conditioned on all the previous $n-1$ measurement bases and values. That is, we consider an $n$-qubit normalized density operator $\rho_{ABk}$, where $B$ denotes the last qubit and $A$ the remaining $n-1$ qubits, and write
\begin{equation}\label{sumterm}
P(X_B|\Theta)_{\rho_{ABk}} = \frac{1}{2}\cdot\sum_{\theta_B,x_B\in\{0,1\}} p^\alpha_{x_B|\theta_B x_A \theta_A k}\ ,
\end{equation}
where $p_{x_B|\theta_B x_A \theta_A k}=\mathop{\mathrm{tr}}\nolimits(M_{x_B|\theta_B}\sigma_B)$ with the corresponding normalized density operator
\begin{equation}
\sigma_B =\mathop{\mathrm{tr}}\nolimits_A \left[ \frac{M_{x_A|\theta_A}~\rho_{ABk}~M_{x_A|\theta_A}^\dagger}{\mathop{\mathrm{tr}}\nolimits\left[M_{x_A|\theta_A}~\rho_{ABk}~M_{x_A|\theta_A}^\dagger\right]} \right]\ .
\end{equation}
Since the uncertainty relation for one qubit~\eqref{minimal_renyi_bb84} holds for any density operator, it holds in particular for $\sigma_B$. By induction, it is then easily shown that the minimal entropy is additive.
\medskip
\noindent
{\bf Step 3 - Classical side information $K$:} After Steps 1 and 2, we established a tight uncertainty relation for a binary string $X^n$ conditioned on the basis string $\Theta^n$. Namely, we have
\begin{equation}
{\rm H_{\alpha}}(X^n|\Theta^n)_{\rho_k|\rho_k} \geq n\cdot\frac{1}{\alpha-1} \left[\alpha-\log(1+2^{\alpha-1}) \right]
\end{equation}
for any $n$-qubit state $\rho_k$. In this step, we add the conditioning on the classical side information $K$. In other words, we need to evaluate ${\rm H_{\alpha}}(X|\Theta K)_{\rho|\rho}$ with
\begin{equation}
\rho = \sum_{\theta\in\{0,1\}^n}\sum_{k}\sum_{x\in\{0,1\}^n} p_{\theta}\, p_{k|\theta}\, p_{x|\theta k}~\ketbra{\theta}{\theta}\otimes\ketbra{k}{k}\otimes\ketbra{x}{x}\ .
\end{equation}
Observing that $\Theta$ is independent of $K$, we show that the same bound applies, implying that
\begin{equation}\label{step3end}
{\rm H_{\alpha}}(X|\Theta K)_{\rho|\rho} \geq n\cdot \frac{1}{\alpha-1} \left[\alpha-\log(1+2^{\alpha-1}) \right]\ .
\end{equation}
\smallskip
\noindent
{\bf Step 4 - Relation to the min-entropy:} As motivated previously, the final desired measure of entropy is the \textit{smooth} min-entropy ${\rm H}^\eps_{\rm min} (X|\Theta K)_{\rho}$. A recent work~\cite{QAEP} has shown that a lower bound can be obtained for this quantity. Namely, we have for any state $\rho$ and $\alpha\in(1,2]$
\begin{equation}
{\rm H}^\eps_{\rm min} (X|\Theta K)_{\rho} \geq {\rm H_{\alpha}} (X|\Theta K)_{\rho|\rho} -\frac{1}{\alpha-1} \log \frac{2}{\epsilon^2}\ .
\end{equation}
This combined with~\eqref{step3end} implies the claim
\begin{align}
{\rm H}^\eps_{\rm min} (X|\Theta K)_\rho \geq \max_{s\in(0,1]}&\bigg\{ n\cdot \frac{1}{s} \left[1+s-\log(1+2^s) \right]\nonumber\\
&-\frac{1}{s} \log \frac{2}{\epsilon^2}\bigg\}\ .
\end{align}
It is worth noting that as $n\rightarrow\infty$, the maximum is attained for $s\rightarrow 0$: in the limit of infinite system size, the optimal bound is again given by~\eqref{eq:bb84old}, that is, by the bound coming from the Shannon entropy. For finite system sizes, however, our analysis provides a better bound on the smooth min-entropy, and is hence more useful for practical implementations.
\section{Conclusions}
We have proven entropic uncertainty relations that pave the way for a practical implementation of BB84 and six-state protocols~\cite{chris:diss,serge:bounded,serge:new,Noisy1,noisy:robust,noisy:new,secureid,Curty10,secureid:practical} at small block length. Indeed, our relation has already been employed in~\cite{noisyimpl} for an experimental implementation of bit commitment in the bounded/noisy-storage model.
It is an interesting open question whether similarly strong relations can also be obtained with respect to quantum side information~\cite{Berta09,Coles11,qcextract}. This would allow security statements for such protocols in terms of the quantum capacity~\cite{qcextract} of the storage device, rather than the classical capacity~\cite{noisy:new} or the entanglement cost~\cite{entCost}. For the six-state case this has been done (implicitly) in~\cite{qcextract} for the special case of a R{\'e}nyi type entropy of order $\alpha = 2$, yielding, however, again a slightly weaker uncertainty relation than might be possible for other values of $\alpha \in (1,2]$. As the amount of uncertainty is the key element in being able to tolerate experimental errors and losses in said protocols, it would be desirable to extend our result to this setting.
\newpage
\onecolumngrid
\section*{Appendix}
In this appendix, we provide the technical details that lead to our claims. In section A, the complete proof for the uncertainty relation for BB84 bases (measurements in eigenstates of Pauli $\sigma_x$ and $\sigma_z$) is presented. In section B, similar methods are used to derive bounds for six-state bases (measurements in eigenstates of Pauli $\sigma_x$, $\sigma_y$ and $\sigma_z$).\\
We first restate the definitions of the relevant entropic quantities. Given any finite-dimensional Hilbert space $\mathcal{H}$, let $\mathcal{S}_{\leq}(\mathcal{H})$ denote the set of sub-normalized density operators on $\mathcal{H}$, and $\mathcal{S}(\mathcal{H})$ denote the set of normalized density operators on $\mathcal{H}$. For $\mathcal{H}_A$ and $\mathcal{H}_B$, the conditional min-entropy of $\rho_{AB}\in\mathcal{S}(\mathcal{H}_{A}\otimes\mathcal{H}_{B})$ given $\sigma_B\in\mathcal{S}(\mathcal{H}_B)$ is defined as
\begin{equation}
{\rm H_{\rm min}}(A|B)_{\rho|\sigma} := \sup~\lbrace\lambda\in\mathbb{R}:2^{-\lambda}\cdot\mathbb{I}_A\otimes\sigma_B\geq\rho_{AB}\rbrace\ ,
\end{equation}
and the conditional min-entropy of $A$ given $B$ is defined as
\begin{equation}
{\rm H_{\rm min}}(A|B)_{\rho} := \sup_{\sigma_{B}\in\mathcal{S}(\mathcal{H}_{B})}~{\rm H_{\rm min}}(A|B)_{\rho|\sigma}\ .
\end{equation}
The smooth conditional min-entropy of $A$ given $B$ and $\varepsilon\geq0$ is defined as
\begin{align}
{\rm H}^\eps_{\rm min}(A|B)_{\rho} := \sup_{\rho'\in\mathcal{B}^{\varepsilon}(\rho)}~{\rm H_{\rm min}}(A|B)_{\rho'}\ ,
\end{align}
where $\mathcal{B}^{\varepsilon}(\rho_{AB}):=\left\{\rho_{AB}'\in\mathcal{S}_{\leq}(\mathcal{H}_{A}\otimes\mathcal{H}_{B})|P=\sqrt{1-F^{2}(\rho,\rho')}\leq\varepsilon\right\}$ is an $\varepsilon$-ball in terms of the purified distance with
\begin{align}
F(\rho,\rho'):=\|\sqrt{\rho}\sqrt{\rho'}\|_{1}+\sqrt{(1-\mathrm{tr}[\rho])(1-\mathrm{tr}[\rho'])}
\end{align}
the (generalized) fidelity~\cite{Tomamichel09}.\\
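For concreteness (this is an illustration, not part of the paper), the generalized fidelity and purified distance can be evaluated directly for small matrices. The following sketch assumes NumPy and computes the PSD matrix square root by eigendecomposition:

```python
import numpy as np

def sqrtm_psd(rho):
    """Square root of a Hermitian positive semidefinite matrix."""
    w, V = np.linalg.eigh(rho)
    w = np.clip(w, 0.0, None)  # clip tiny negative eigenvalues from rounding
    return (V * np.sqrt(w)) @ V.conj().T

def gen_fidelity(rho, sigma):
    """Generalized fidelity for sub-normalized states:
    ||sqrt(rho) sqrt(sigma)||_1 + sqrt((1 - tr rho)(1 - tr sigma))."""
    sv = np.linalg.svd(sqrtm_psd(rho) @ sqrtm_psd(sigma), compute_uv=False)
    t1, t2 = np.trace(rho).real, np.trace(sigma).real
    return sv.sum() + np.sqrt(max(1.0 - t1, 0.0) * max(1.0 - t2, 0.0))

def purified_distance(rho, sigma):
    """P(rho, sigma) = sqrt(1 - F^2)."""
    F = min(gen_fidelity(rho, sigma), 1.0)  # guard against rounding above 1
    return np.sqrt(1.0 - F ** 2)

rho = np.diag([0.7, 0.3])
assert abs(gen_fidelity(rho, rho) - 1.0) < 1e-12
assert purified_distance(rho, rho) < 1e-6
# For orthogonal pure states the purified distance is maximal:
ket0, ket1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
assert abs(purified_distance(ket0, ket1) - 1.0) < 1e-12
```

Note that $F(\rho,\rho)=1$ even for sub-normalized $\rho$, which is precisely the role of the second term in the generalized fidelity.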
The conditional $\alpha$-R{\'e}nyi entropies are defined as
\begin{equation}
{\rm H_{\alpha}}(A|B)_{\rho|\rho} := \frac{1}{1-\alpha} \log\mathop{\mathrm{tr}}\nolimits\left[\rho_{AB}^\alpha ( \mathbb{I}_A \otimes \rho_{B})^{1-\alpha}\right]\ ,
\end{equation}
where (possible) inverses are understood as generalized inverses. Note that slightly different definitions of conditional $\alpha$-R{\'e}nyi entropies also appear in the literature.
\section{A. Uncertainty relation for BB84 measurements}\label{formalproof}
\subsubsection{Step 1 : Single qubit relation}
For any qubit state $\rho\in\mathcal{S}(\mathbb{C}^{2})$ we have to examine the quantities
\begin{eqnarray}\label{alpharenyi}
{\rm H_{\alpha}} (X|\Theta)_{\rho|\rho} &=& \frac{1}{1-\alpha} \log~P_\alpha (X|\Theta)\nonumber\\
P_\alpha (X|\Theta) &=& \mathop{\mathrm{tr}}\nolimits\left[\rho_{X\Theta}^\alpha ( \mathbb{I}_X \otimes \rho_{\Theta})^{1-\alpha}\right]\nonumber\\
\rho_{X\Theta}&=&\sum_{\theta,x}p_{\theta}\cdot p_{x|\theta}\proj{x}\otimes\proj{\theta}\nonumber\\
p_{x|\theta}&=&\mathrm{tr}(M_{x|\theta}\rho)\ ,
\end{eqnarray}
with $M_{x|\theta}=\mathbf{H}^{\theta}\proj{x}\mathbf{H}^{\theta}$, and $\mathbf{H}=\frac{1}{\sqrt{2}}\begin{pmatrix}1&1\\1&-1\end{pmatrix}$ the Hadamard matrix. Since the choice of measurements is uniform, we get
\begin{align}
P_{\alpha}(X|\Theta)=\frac{1}{2}\cdot\sum_{\theta,x}p^{\alpha}_{x|\theta}\ .
\end{align}
\begin{theoremApp}\label{theorem1}
Let $\rho\in\mathcal{S}(\mathbb{C}^{2})$, and $\alpha=1+s$ with $s\in(0,1]$. Then we have for BB84 measurements as in~\eqref{alpharenyi} that
\begin{equation}\label{minimal_renyi_BB84}
{\rm H_{\alpha}}(X|\Theta)_{\rho|\rho} \geq \frac{1}{s} [1+s-\log(1+2^s)]\ .
\end{equation}
\end{theoremApp}
\begin{proof}
We evaluate the term
\begin{eqnarray}
P_{1+s}(X|\Theta) &=& \frac{1}{2}\cdot\sum_{\theta \in \{ 0,1 \} } \sum_{x \in \{ 0,1 \}} p_{x|\theta}^{1+s} \nonumber \\
&=& \frac{1}{2} \left[ \mathop{\mathrm{tr}}\nolimits(\rho |0\rangle\langle 0|)^{1+s}+\mathop{\mathrm{tr}}\nolimits(\rho |1\rangle\langle 1|)^{1+s}+\mathop{\mathrm{tr}}\nolimits(\rho |+\rangle\langle +|)^{1+s}+\mathop{\mathrm{tr}}\nolimits(\rho |-\rangle\langle -|)^{1+s} \right] \nonumber \\
&=& \frac{1}{2^{2+s}} \lbrace [1+\mathop{\mathrm{tr}}\nolimits(\sigma_z\rho )]^{1+s}+[1-\mathop{\mathrm{tr}}\nolimits(\sigma_z\rho)]^{1+s}+[1+\mathop{\mathrm{tr}}\nolimits(\sigma_x\rho)]^{1+s}+[1-\mathop{\mathrm{tr}}\nolimits(\sigma_x\rho)]^{1+s} \rbrace \nonumber\\
&=& \frac{1}{2^{2+s}} [ (1+z)^{1+s}+(1-z)^{1+s}+(1+x)^{1+s}+(1-x)^{1+s} ]\ ,
\end{eqnarray}
where $x:=\mathop{\mathrm{tr}}\nolimits(\sigma_x\rho)$ and $z:=\mathop{\mathrm{tr}}\nolimits(\sigma_z\rho)$. For any one qubit state $\rho$, we have the Bloch sphere condition
\begin{equation}\label{anticomm}
\mathop{\mathrm{tr}}\nolimits(\sigma_x\rho)^2 + \mathop{\mathrm{tr}}\nolimits(\sigma_y\rho)^2 + \mathop{\mathrm{tr}}\nolimits(\sigma_z\rho)^2 \leq1\ ,
\end{equation}
and can therefore parametrize $x$ and $z$ by polar coordinates
\begin{equation}
x=r\sin \phi, z=r\cos \phi\ ,
\end{equation}
where $r \in [0,1]$, and $\phi \in [0,\frac{\pi}{2}]$. $P_\alpha(X|\Theta)$ can then be rewritten as a function depending on the variables $s$, $r$, and $\phi$
\begin{equation}
Q(s,r,\phi)=\frac{1}{2^{2+s}} [ (1+r\cos\phi)^{1+s}+(1-r\cos\phi)^{1+s}+(1+r\sin\phi)^{1+s}+(1-r\sin\phi)^{1+s} ]\ .
\end{equation}
The partial derivative of $Q(s,r,\phi)$ with respect to $r$ is
\begin{equation}
\frac{\partial Q(s,r,\phi)}{\partial r} = \frac{1+s}{2^{2+s}} [ \cos\phi(1+r\cos\phi)^{s}-\cos\phi(1-r\cos\phi)^{s}+\sin\phi(1+r\sin\phi)^{s}-\sin\phi(1-r\sin\phi)^{s} ]\ .
\end{equation}
Since $\sin\phi$ and $\cos\phi$ are non-negative over the range of $\phi$, we obtain $\frac{\partial Q(s,r,\phi)}{\partial r} \geq 0$, which implies that the maximum is attained at $r=1$. The partial derivative of $Q(s,r,\phi)$ with respect to $\phi$ at $r=1$ is
\begin{eqnarray}\label{firstderphi}
\frac{\partial Q(s,1,\phi)}{\partial \phi} &=& \frac{1+s}{2^{2+s}} [ -\sin\phi(1+\cos\phi)^{s}+\sin\phi(1-\cos\phi)^{s}+\cos\phi(1+\sin\phi)^{s}-\cos\phi(1-\sin\phi)^{s} ] \nonumber\\
&=& \frac{1+s}{2^{2+s}} \lbrace\sin\phi[(1-\cos\phi)^s-(1+\cos\phi)^s]+\cos\phi[(1+\sin\phi)^s-(1-\sin\phi)^s]\rbrace\ .
\end{eqnarray}
Setting \eqref{firstderphi} to zero yields three stationary points of $Q(s,1,\phi)$: $\phi=0,~\frac{\pi}{4},~\frac{\pi}{2}$. The endpoints $\phi=0$ and $\phi=\frac{\pi}{2}$ behave identically, hence it suffices to analyze one of them. It remains to classify these stationary points. To do so, we evaluate the second partial derivative at each of them as a function of $s$
\begin{eqnarray}
f_1 (s) &=& \frac{\partial^2 Q(s,1,\phi)}{\partial \phi^2} \bigg|_{\phi=0} = \frac{1+s}{2^{1+s}} \left(s-2^{s-1}\right),\quad s\geq0 \\
f_2 (s) &=& \frac{\partial^2 Q(s,1,\phi)}{\partial \phi^2} \bigg|_{\phi=\frac{\pi}{4}}\nonumber\\
&=& \frac{1+s}{2^{2+s}} \left\lbrace s\cdot \left[\left(1-\frac{1}{\sqrt{2}}\right)^{s-1}+\left(1+\frac{1}{\sqrt{2}}\right)^{s-1}\right]-\sqrt{2}\left[\left(1+\frac{1}{\sqrt{2}}\right)^{s}-\left(1-\frac{1}{\sqrt{2}}\right)^{s}\right]\right\rbrace\ .\label{f2s}
\end{eqnarray}
To determine whether each stationary point is a local minimum or maximum, we establish the sign of these functions over the interval $s\in(0,1]$. Note that $f_1(0)=-\frac{1}{4}$ and $f_1(1)=0$, while $f_1(s)$ is increasing since $\frac{\partial f_1(s)}{\partial s} = \frac{1}{2^{1+s}}\left[1+2s-2^{s-1}-s(1+s)\ln 2\right] > 0$ on this interval. Hence $f_1(s)$ is negative, implying that the endpoints correspond to local maxima. On the other hand, note that the expression in braces in~\eqref{f2s} is exactly of the form $g(a,s)$ as stated in Lemma C.\ref{positive} with $a=\frac{1}{\sqrt{2}}$. With this, we conclude that the point $\phi=\frac{\pi}{4}$ is a local minimum. This leaves the endpoints as the only candidates for the maximum of $Q(s,1,\phi)$. Evaluating $Q(s,1,0)$ then provides us with the bound
\begin{equation}
P_{1+s}(X|\Theta) \leq Q(s,1,0) = \frac{1}{2^{1+s}} (2^{s}+1)\ ,
\end{equation}
and plugging this back into \eqref{alpharenyi} gives \eqref{minimal_renyi_BB84}.
\end{proof}
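As a numerical sanity check (not a substitute for the proof), one can evaluate $P_{1+s}(X|\Theta)$ on a grid of Bloch-sphere states and confirm that it never exceeds the claimed maximum $(2^s+1)/2^{1+s}$. A minimal sketch, assuming NumPy is available:

```python
import numpy as np

def p_alpha_bb84(r, phi, s):
    """P_{1+s}(X|Theta) for a qubit with Bloch components
    x = r*sin(phi), z = r*cos(phi), measured in the two BB84 bases."""
    x, z = r * np.sin(phi), r * np.cos(phi)
    probs = [(1 + z) / 2, (1 - z) / 2, (1 + x) / 2, (1 - x) / 2]
    return 0.5 * sum(p ** (1 + s) for p in probs)

def bb84_bound(s):
    """Claimed maximum (2^s + 1) / 2^(1+s), attained at r = 1, phi = 0."""
    return (2 ** s + 1) / 2 ** (1 + s)

s = 0.7
grid = [p_alpha_bb84(r, phi, s)
        for r in np.linspace(0, 1, 101)
        for phi in np.linspace(0, np.pi / 2, 101)]
assert max(grid) <= bb84_bound(s) + 1e-12
# The bound is attained at the endpoint phi = 0 (or pi/2) with r = 1:
assert abs(p_alpha_bb84(1.0, 0.0, s) - bb84_bound(s)) < 1e-12
```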
\subsubsection{Step 2 : Relation for \texorpdfstring{$n$}{n}-qubits}
The goal is to prove that for any $n$-qubit state measured independently on each qubit in BB84 bases, the minimal output $\alpha$-R{\'e}nyi entropy is additive. First let $n=2$, with the first system denoted by $A$ and the second by $B$. We have
\begin{eqnarray}\label{purity}
P_\alpha(X_{A}X_{B}|\Theta_{A}\Theta_{B})&=& \sum_{\theta_A,\theta_B} p_{\theta_A,\theta_B} \sum_{x_A,x_B}p_{x_A,x_B|\theta_A,\theta_B}^\alpha \nonumber\\
&=& \frac{1}{2}\cdot\sum_{x_A,\theta_A} p_{x_A|\theta_A}^\alpha \cdot \frac{1}{2} \sum_{x_B,\theta_B} p_{x_B|x_A,\theta_A,\theta_B}^\alpha
\end{eqnarray}
where $p_{\Theta_B|\Theta_A}=p_{\Theta_B}$ and $p_{\Theta_B=0}=p_{\Theta_B=1}=1/2$. Now assume that we have a one qubit upper bound
\begin{align}\label{eq:additive}
\frac{1}{2}\cdot\sum_{x_A,\theta_A} p_{x_A|\theta_A}^\alpha\leq c
\end{align}
for $P_\alpha(X|\Theta)$. Note that the second summation term in \eqref{purity} corresponds to $P_\alpha(X|\Theta)$ of the single qubit density operator
\begin{equation}\label{sigmaaftermeas}
\sigma_{B}=\mathop{\mathrm{tr}}\nolimits_A \left[ \frac{M_{x_A|\theta_A}~\rho_{AB}~M_{x_A|\theta_A}^\dagger}{\mathop{\mathrm{tr}}\nolimits[M_{x_A|\theta_A}~\rho_{AB}~M_{x_A|\theta_A}^\dagger]} \right]\ ,
\end{equation}
where $M_{x_A|\theta_A}=\mathbf{H}^{\theta_A}|x_A\rangle\langle x_A|\mathbf{H}^{\theta_A}\otimes\mathbb{I}_{B}$. Hence we have
\begin{eqnarray}
P_\alpha(X_{A}X_{B}|\Theta_{A}\Theta_{B})&\leq& \frac{c}{2}\cdot\sum_{x_A,\theta_A} p_{x_A|\theta_A}^\alpha\leq c^{2}\ .
\end{eqnarray}
The following lemma generalizes this argument to arbitrary $n$.
\begin{lemmaApp} For $\rho\in\mathcal{S}((\mathbb{C}^{2})^{\otimes n})$ measured independently on each qubit in BB84 bases, the minimal conditional $\alpha$-R{\'e}nyi entropy of $X^n$ with respect to $\Theta^n$ is additive.
\begin{proof} Consider
\begin{eqnarray}
P_\alpha(X^n|\Theta^n)_{\rho|\rho} &=& \sum_{\theta^n\in\lbrace0,1\rbrace^n} p_{\theta^n} \sum_{x^n\in\lbrace0,1\rbrace^n} p^\alpha_{x^n|\theta^n} \nonumber\\
&=& \frac{1}{2^n} \cdot\sum_{\theta^n\in\lbrace0,1\rbrace^n} \sum_{x^n\in\lbrace0,1\rbrace^n} \left(\displaystyle\prod_{i=1}^n p_{i|x^{i-1},\theta^{i-1}}\right)^\alpha
\end{eqnarray}
where $p_{i|x^{i-1},\theta^{i-1}} = p_{x_i|\theta_i,X^{i-1}=x^{i-1},\Theta^{i-1}=\theta^{i-1}}$ for $i\geq2$ and $p_1=p_{x_1|\theta_1}$. Assuming the same upper bound as in~\eqref{eq:additive} we get
\begin{eqnarray}
P_\alpha(X^n|\Theta^n)_{\rho|\rho} &=& \frac{1}{2^{n-1}} \sum_{\theta^n\in\lbrace0,1\rbrace^n} \sum_{x^n\in\lbrace0,1\rbrace^n}\left(\displaystyle\prod_{i=1}^{n-1} p_{i|x^{i-1},\theta^{i-1}}\right)^\alpha \cdot \frac{1}{2} p_{n|x^{n-1},\theta^{n-1}}^\alpha \nonumber\\
&\leq& c \cdot \frac{1}{2^{n-1}} \sum_{\theta^{n-1},x^{n-1}\in\lbrace0,1\rbrace^{n-1}}\left(\displaystyle\prod_{i=1}^{n-1} p_{i|x^{i-1},\theta^{i-1}}\right)^\alpha\leq c^{n}\ .
\end{eqnarray}
\end{proof}
\end{lemmaApp}
Combining this with the one qubit uncertainty relation derived before, we obtain the following.
\begin{corollaryApp}\label{bb84uncert}
For $\alpha=1+s$ with $s\in(0,1]$, and $\rho\in\mathcal{S}((\mathbb{C}^{2})^{\otimes n})$ measured independently on each qubit in BB84 bases, we have
\begin{equation}
{\rm H_{\alpha}} (X^n|\Theta^n)_{\rho|\rho} \geq n\cdot\frac{1}{s}\left[1+s-\log(1+2^{s})\right]\ .
\end{equation}
\end{corollaryApp}
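The $n=2$ case of the additivity argument can also be probed numerically: for random (generally entangled and mixed) two-qubit states, $P_\alpha(X_AX_B|\Theta_A\Theta_B)$ should never exceed $c^2$, where $c$ is the one-qubit bound. A sketch, assuming NumPy; the sampling scheme is illustrative:

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

def proj(x, theta):
    """BB84 projector M_{x|theta} = H^theta |x><x| H^theta, theta in {0,1}."""
    v = np.zeros(2)
    v[x] = 1.0
    u = np.linalg.matrix_power(H, theta) @ v
    return np.outer(u, u.conj())

def p_alpha_2qubit(rho, alpha):
    """P_alpha(X_A X_B | Theta_A Theta_B) for a two-qubit state rho,
    with uniformly random BB84 bases on both qubits."""
    total = 0.0
    for tA in (0, 1):
        for tB in (0, 1):
            for xA in (0, 1):
                for xB in (0, 1):
                    M = np.kron(proj(xA, tA), proj(xB, tB))
                    p = max(np.trace(M @ rho).real, 0.0)  # clip rounding
                    total += 0.25 * p ** alpha
    return total

def random_state(dim, rng):
    """Random mixed state from a Ginibre matrix."""
    G = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = G @ G.conj().T
    return rho / np.trace(rho).real

s = 0.8
c = (2 ** s + 1) / 2 ** (1 + s)  # one-qubit bound of Theorem A.1
rng = np.random.default_rng(0)
for _ in range(200):
    assert p_alpha_2qubit(random_state(4, rng), 1 + s) <= c ** 2 + 1e-10
```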
\subsubsection{Step 3 : Classical side information K}
In Corollary A.\ref{bb84uncert} we obtained an uncertainty relation for ${\rm H_{\alpha}} (X^n|\Theta^n)_{\rho|\rho}$ for any $n$-qubit state $\rho$. In general, however, we want to consider $n$-qubit states $\rho_k$ labelled with classical information $K$, and we need to relate this to the quantity ${\rm H_{\alpha}}(X^{n}|\Theta^{n}K)_{\rho|\rho}$ for the state $\rho=\sum_k p_k \rho_k$. That is, the $\alpha$-R{\'e}nyi entropy is also conditioned on the classical information $K$. This quantity is evaluated as
\begin{eqnarray}
{\rm H_{\alpha}} (X^n|\Theta^n K)_{\rho|\rho} &=& \frac{1}{1-\alpha} \log \sum_{k} \sum_{\theta^n\in\{0,1\}^n} p_{k,\theta^n} \sum_{x^n\in\{0,1\}^n} p^\alpha_{x^n|\theta^n,k}\nonumber\\
&=& \frac{1}{1-\alpha} \log \sum_{k} p_k \sum_{\theta^n\in\{0,1\}^n} p_{\theta^n|k} \sum_{x^n\in\{0,1\}^n} p^\alpha_{x^n|\theta^n,k}\ ,
\end{eqnarray}
where the difference is that now $p(\Theta^n|K=k)$ is conditioned on the classical information $K=k$. However, in our case $\Theta^n$ is chosen uniformly at random regardless of which state is prepared. Thus $p(\Theta^n|K=k)=p(\Theta^n)=2^{-n}$ and we get
\begin{eqnarray}
{\rm H_{\alpha}} (X^n|\Theta^{n}K)_{\rho|\rho} &=& \frac{1}{1-\alpha} \log \sum_{k} p_k \sum_{\theta^n\in\{0,1\}^n} p_{\theta^n} \sum_{x^n\in\{0,1\}^n} p^\alpha_{x^n|\theta^n,k}\nonumber\\
&\geq &n\cdot\frac{1}{s}\left[1+s-\log(1+2^{s})\right]\ .
\end{eqnarray}
\subsubsection{Step 4 : Relation to the min-entropy}
After obtaining a bound on ${\rm H_{\alpha}}(X^n|\Theta^{n}K)_{\rho|\rho}$, we now link this to a bound on ${\rm H}^\eps_{\rm min}(X^n|\Theta^{n}K)_{\rho}$. It is shown in~\cite[Theorem 7]{QAEP} that for $\rho_{AB}\in \mathcal{S}(\mathcal{H}_{AB})$, $\epsilon\geq0$, and $\alpha\in(1,2]$
\begin{equation}
{\rm H}^\eps_{\rm min} (A|B)_{\rho} \geq{\rm H_{\alpha}}(A|B)_{\rho|\rho} -\frac{1}{\alpha-1}\log\frac{2}{\epsilon^2}\ .
\end{equation}
Thus the smooth conditional min-entropy is lower bounded by general conditional $\alpha$-R{\'e}nyi entropies, with a correction term growing logarithmically in $1/\epsilon^2$. For the Shannon entropy ($\alpha\rightarrow1$) this term diverges, but for $\alpha\in(1,2]$ the bound is very useful. Namely, the smooth conditional min-entropy of $X^n$ given $\Theta^{n}K$ is bounded by
\begin{equation}\label{finalbb84}
\frac{1}{n} ~{\rm H}^\eps_{\rm min} (X^n|\Theta^n K)_\rho \geq ~\frac{1}{n}H_{\alpha}(X^n|\Theta^n K)_{\rho|\rho} \geq \max_{s\in(0,1]}~ \frac{1}{s} [1+s-\log(1+2^s)] - \frac{1}{sn} \log \frac{2}{\epsilon^2}\ .
\end{equation}
Note that the maximum in \eqref{finalbb84} is attained at different values of $s$ as $n$ and $\epsilon$ vary.
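This dependence can be made concrete numerically: for fixed $\epsilon$, the maximizing $s$ shrinks and the achievable rate approaches the Shannon limit of $1/2$ as $n$ grows. A minimal sketch, assuming NumPy is available; the sample values of $n$ and $\epsilon$ are illustrative:

```python
import numpy as np

def bb84_rate(s, n, eps):
    """Per-qubit lower bound on the smooth min-entropy, as in (finalbb84)."""
    return (1 + s - np.log2(1 + 2 ** s)) / s - np.log2(2 / eps ** 2) / (s * n)

def optimal_s(n, eps):
    """Grid search for the maximizing s in (0, 1]."""
    grid = np.linspace(1e-3, 1.0, 2000)
    vals = np.array([bb84_rate(s, n, eps) for s in grid])
    i = int(np.argmax(vals))
    return float(grid[i]), float(vals[i])

eps = 1e-9
for n in (1_000, 10_000, 1_000_000):
    s_star, rate = optimal_s(n, eps)
    print(f"n={n:>9}  s*={s_star:.3f}  rate={rate:.4f}")
```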
\section{B. Uncertainty relation for six-state measurements}\label{sixstate}
In this section, we make use of the same methods as in Appendix A. We derive an uncertainty relation for any $n$-qubit state measured independently on each qubit in six-state bases. For the single qubit version, we have to consider
\begin{eqnarray}\label{alpharenyi2}
{\rm H_{\alpha}} (X|\Theta)_{\rho|\rho} &=& \frac{1}{1-\alpha} \log~P_\alpha (X|\Theta)\nonumber\\
P_\alpha (X|\Theta) &=& \mathop{\mathrm{tr}}\nolimits\left[\rho_{X\Theta}^\alpha ( \mathbb{I}_X \otimes \rho_{\Theta})^{1-\alpha}\right]\nonumber\\
\rho_{X\Theta}&=&\frac{1}{3}\cdot\sum_{\theta,x}p_{x|\theta}\proj{x}\otimes\proj{\theta}\nonumber\\
p_{x|\theta}&=&\mathrm{tr}(N_{x|\theta}\rho)\ ,
\end{eqnarray}
with $N_{x|\theta}=\mathbf{T}^{\theta}\proj{x}\mathbf{T}^{\theta}$, and $\mathbf{T}=\frac{1}{\sqrt{2}}\begin{pmatrix}1&-i\\1&i\end{pmatrix}$ the matrix that cyclically permutes the eigenbases of Pauli $\sigma_x$, $\sigma_y$, and $\sigma_z$.
\begin{theoremAppB}\label{theorem2}
Let $\rho\in\mathcal{S}(\mathbb{C}^{2})$, and $\alpha=1+s$ with $s\in(0,1]$. Then we have for six-state measurements as in~\eqref{alpharenyi2} that
\begin{equation}\label{minimal_renyi_6}
{\rm H_{\alpha}}(X|\Theta)_{\rho|\rho} \geq\frac{-1}{s} \log\left[\frac{1}{3}(1+2^{1-s})\right]\ .
\end{equation}
\begin{proof}
We evaluate the term
\begin{eqnarray}\label{sixstP}
P_{1+s}(X|\Theta) &=& \frac{1}{3}\cdot\sum_{x \in \{ 0,1 \} } \sum_{\theta \in \{ 0,1,2 \}} p_{x|\theta}^{1+s} \nonumber \\
&=& \frac{1}{3}\cdot\frac{1}{2^{1+s}} \sum_{i=0}^2[(1+x_i)^{1+s}+(1-x_i)^{1+s}]\ ,
\end{eqnarray}
where $\lbrace x_0,x_1,x_2\rbrace:=\lbrace x,y,z\rbrace$ and $x_i:=\mathop{\mathrm{tr}}\nolimits(\sigma_{x_i}\rho)$. Parametrizing this in terms of spherical coordinates, we write
\begin{equation}
x_0 = r\sin\phi\sin\theta, ~~~~x_1 = r\cos\phi\sin\theta, ~~~~ x_2 = r\cos\theta\ ,
\end{equation}
where $0\leq r \leq 1$, $0\leq \phi,\theta \leq \frac{\pi}{2}$. The expression~\eqref{sixstP} can be rewritten in terms of these new coordinates as
\begin{equation}
M(s,r,\phi,\theta):= \frac{1}{3}\cdot\frac{1}{2^{1+s}} \left\lbrace\sum_{p=0,1}\left[1+(-1)^p r\sin\phi\sin\theta\right]^{1+s}+\sum_{p=0,1}\left[1+(-1)^pr\cos\phi\sin\theta\right]^{1+s}+\sum_{p=0,1}\left[1+(-1)^p r\cos\theta\right]^{1+s}\right\rbrace\ .
\end{equation}
Evaluating the partial derivative of $M(s,r,\phi,\theta)$ with respect to $r$ gives
\begin{eqnarray}\nonumber
\frac{\partial M(s,r,\phi,\theta)}{\partial r} &=& \frac{1+s}{3}\cdot\frac{1}{2^{1+s}} \sin\theta \cdot \lbrace \sin\phi[(1+r\sin\phi\sin\theta)^{s}-(1-r\sin\phi\sin\theta)^{s}]\\
&&+\cos\phi[(1+r\cos\phi\sin\theta)^{s}-(1-r\cos\phi\sin\theta)^{s}] \rbrace\ .
\end{eqnarray}
Again, since all the sines and cosines are non-negative on the ranges of $\phi$ and $\theta$, we obtain $\frac{\partial M(s,r,\phi,\theta)}{\partial r} \geq 0$, which implies that the maximum is attained at $r=1$. Subsequently, evaluating the partial derivative
\begin{eqnarray}\nonumber
\frac{\partial M(s,1,\phi,\theta)}{\partial \phi} &=& \frac{1+s}{3}\cdot\frac{1}{2^{1+s}} \sin\theta \cdot \lbrace \cos\phi[(1+\sin\phi\sin\theta)^{s}-(1-\sin\phi\sin\theta)^{s}]\\
&&-\sin\phi[(1+\cos\phi\sin\theta)^{s}-(1-\cos\phi\sin\theta)^{s}] \rbrace\ ,
\end{eqnarray}
gives the points $\phi=0,~\frac{\pi}{4},~\frac{\pi}{2}$ as solutions. We continue by evaluating the second partial derivative at these points
\begin{eqnarray}\nonumber
\frac{\partial^2 M(s,1,\phi,\theta)}{\partial \phi^2}\bigg|_{\phi=0} &=& \frac{1+s}{3} \cdot \frac{1}{2^{1+s}} \cdot \sin\theta [2s\sin\theta - [(1+\sin\theta)^s-(1-\sin\theta)^s]]\\
\frac{\partial^2 M(s,1,\phi,\theta)}{\partial \phi^2}\bigg|_{\phi=\frac{\pi}{4}} &=& \frac{1+s}{3}\cdot\frac{1}{2^{s}} \cdot c^2 \cdot \lbrace s \cdot \left[(1+c)^{s-1}+(1-c)^{s-1} \right]- \frac{1}{c} \left[(1+c)^{s}-(1-c)^{s}\right] \rbrace\ ,
\end{eqnarray}
where $c=\frac{\sin\theta}{\sqrt{2}}$. The expression in braces in the second line is exactly $g(c,s)$ of Lemma C.\ref{positive} and hence nonnegative, so $\phi=\frac{\pi}{4}$ is a local minimum; a Taylor expansion shows that the first expression is negative for $s\in(0,1]$. Hence the maximum is obtained at $\phi=0$. The last step is to evaluate
\begin{eqnarray}\nonumber
\frac{\partial M(s,1,0,\theta)}{\partial \theta} &=& \frac{1+s}{3}\cdot\frac{1}{2^{1+s}} \lbrace \cos\theta[(1+\sin\theta)^{s}-(1-\sin\theta)^{s}]\\
&&-\sin\theta[(1+\cos\theta)^{s}-(1-\cos\theta)^{s}] \rbrace\ .
\end{eqnarray}
This has the same form as~\eqref{firstderphi}, and thus the maximum is obtained at $\theta=0$. Evaluating $M(s,1,0,0)$ then results in the claim
\begin{equation}
P_{1+s}(X|\Theta) \leq M(s,1,0,0) = \frac{1}{3}(1+2^{1-s})\ .
\end{equation}
\end{proof}
\end{theoremAppB}
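As in the BB84 case, the six-state bound can be checked numerically on a grid of Bloch-sphere states (a sanity check, not part of the proof). A sketch, assuming NumPy is available:

```python
import numpy as np

def p_alpha_six_state(r, phi, theta, s):
    """P_{1+s}(X|Theta) for a qubit with Bloch vector given in spherical
    coordinates, measured in the three Pauli (six-state) bases."""
    comps = [r * np.sin(phi) * np.sin(theta),
             r * np.cos(phi) * np.sin(theta),
             r * np.cos(theta)]
    return sum(((1 + c) / 2) ** (1 + s) + ((1 - c) / 2) ** (1 + s)
               for c in comps) / 3.0

def six_state_bound(s):
    """Claimed maximum (1 + 2^(1-s)) / 3, attained on a basis axis."""
    return (1 + 2 ** (1 - s)) / 3.0

s = 0.6
vals = [p_alpha_six_state(1.0, phi, th, s)
        for phi in np.linspace(0, np.pi / 2, 80)
        for th in np.linspace(0, np.pi / 2, 80)]
assert max(vals) <= six_state_bound(s) + 1e-12
# The bound is attained for a Bloch vector along a basis axis (theta = 0):
assert abs(p_alpha_six_state(1.0, 0.0, 0.0, s) - six_state_bound(s)) < 1e-12
```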
The additivity of minimal entropy holds by using the same argument as in Step 2 of Appendix A. Namely, given a string divided into parts $A$ and $B$, where $B$ denotes a single qubit system, the uncertainty relation for $B$ holds for the state
\begin{equation}
\sigma_B =\mathop{\mathrm{tr}}\nolimits_A \left[ \frac{N_{x_A|\theta_A}~\rho_{AB}~N_{x_A|\theta_A}^\dagger}{\mathop{\mathrm{tr}}\nolimits[N_{x_A|\theta_A}~\rho_{AB}~N_{x_A|\theta_A}^\dagger]} \right]\ ,
\end{equation}
where $N_{x_A|\theta_A}=\mathbf{T}^{\theta_A}|x_A\rangle\langle x_A|\mathbf{T}^{\theta_A}\otimes\mathbb{I}_{B}$. By exactly the same arguments as in Steps 3 and 4 of Appendix A, the smooth conditional min-entropy of the string $X^n\in\{0,1\}^n$ conditioned on the basis string $\theta^n\in\lbrace0,1,2\rbrace^n$ and the classical side information $K$ can then be bounded by
\begin{equation}
\frac{1}{n} ~{\rm H}^\eps_{\rm min} (X^n|\Theta^n K)_\rho \geq ~\frac{1}{n} H_{\alpha}(X^n|\Theta^n K)_{\rho|\rho} \geq \max_{s\in(0,1]}~ \frac{-1}{s} \log\left[\frac{1}{3}(1+2^{1-s})\right]-\frac{1}{sn}\cdot\log\frac{2}{\epsilon^{2}}\ .
\end{equation}
\section{C. Technical Lemmas}\label{technical}
\begin{lemmaAppC}\label{positive}
Let the function $g:\mathbb{R}\times\mathbb{R} \rightarrow \mathbb{R}$ be given by
\begin{equation}
g(a,s):= s\cdot [(1+a)^{s-1}+(1-a)^{s-1}]-\frac{1}{a}\cdot [(1+a)^s-(1-a)^s]\ .
\end{equation}
Then $g(a,s)\geq 0$ for $a\in[0,1)$ and $s\in(0,1]$.
\end{lemmaAppC}
\begin{proof}
Since $a$ lies within the radius of convergence of the binomial series for $(1\pm a)^s$, we may expand in a Taylor series
\begin{align}
&s\cdot [(1+a)^{s-1}+(1-a)^{s-1}]-\frac{1}{a}\cdot[(1+a)^s-(1-a)^s]\nonumber\\
&= 2s\cdot \left[1+\sum_{n=2,4...}\frac{(s-1)(s-2)...(s-n)}{n!}a^n\right]-\frac{1}{a} \left[2as+2\sum_{n=3,5...}\frac{s(s-1)...(s-n+1)}{n!} a^n\right]\nonumber\\
&= 2s\cdot \left[\sum_{n=2,4...}\frac{(s-1)(s-2)...(s-n)}{n!}a^n-\sum_{n=3,5...}\frac{(s-1)(s-2)...(s-n+1)}{n!} a^{n-1}\right]\nonumber\\
&= 2s\cdot \left[\sum_{n=2,4...}\frac{(s-1)(s-2)...(s-n)}{n!}a^n-\sum_{j=2,4...}\frac{(s-1)(s-2)...(s-j)}{(j+1)!} a^{j}\right]\nonumber\\
&= 2s\cdot \sum_{n=2,4...}(s-1)(s-2)...(s-n)~\frac{n}{(n+1)!}~a^n\nonumber\\
&\geq 0\ .
\end{align}
The first equality holds by a straightforward Taylor expansion, the second by extracting $2s$ and absorbing $\frac{1}{a}$ into the second summation, the third by redefining the summation variable $j=n-1$, and the final inequality follows because $(s-1)...(s-n)\geq 0$ for $s\in(0,1]$ and $n$ an even integer.
\end{proof}
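As a quick cross-check of the lemma (a numerical sanity check, not a proof), one can evaluate $g(a,s)$ on a grid; a sketch, assuming NumPy is available:

```python
import numpy as np

def g(a, s):
    """The function g(a, s) of Lemma C.1."""
    return (s * ((1 + a) ** (s - 1) + (1 - a) ** (s - 1))
            - ((1 + a) ** s - (1 - a) ** s) / a)

# g(a, s) >= 0 on a in (0, 1), s in (0, 1]; it vanishes identically at s = 1.
for a in np.linspace(0.01, 0.99, 50):
    for s in np.linspace(0.01, 1.0, 50):
        assert g(a, s) >= -1e-12
```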
A rib fracture is a break in one of the bones of the rib cage. It typically results in chest pain that worsens with breathing. Bruising may occur at the site of the break. When several ribs are broken in several places, a flail chest results. Potential complications include pneumothorax, pulmonary contusion, and pneumonia.
Rib fractures usually occur from a direct blow to the chest, such as during a motor vehicle collision, or from a crush injury. Coughing or metastatic cancer may also result in a broken rib. The middle ribs are the most frequently fractured. Fractures of the first or second ribs are more likely to be associated with complications. Diagnosis can be made based on the symptoms and supported by medical imaging.
Pain control is an important part of treatment. This may include the use of paracetamol (acetaminophen), NSAIDs, or opioids. A nerve block may be another option. While fractured ribs can be strapped, this may increase complications. In those with flail chest, surgery may improve outcomes. Rib fractures are common injuries following trauma.
# Confusion in diffraction experiments

I had a confusion about the diffraction experiment I was taught in school.

According to the formula we derived, the width of the central maximum increases on decreasing the width of the slit, and vice versa.

But what happens to the maximum intensity of the central maximum in these cases? I asked two physics teachers and they gave me opposite answers: one said it increases on decreasing the slit width, which I feel is right, but the other said it would be the same, as the light is of constant wavelength (monochromatic).

What do you think? Please explain.

> I think that the maximum intensity of the central peak should decrease as the slit width decreases, for the simple reason that fewer photons can get through the slit and strike the screen directly in front of it at zero angle. So now you have three different opinions! By the way, all this should be very easy to confirm experimentally. I remember playing around with single-slit diffraction and lasers in an undergrad physics lab, and I certainly don't recall the central peak getting brighter as I closed down the slits; I recall all the features getting dimmer. - user93237, Nov 5 '15

**Answer 1.** I did a quick calculation of the single-slit diffraction patterns for three slit widths of 100, 60, and 40 microns, at the wavelength of green light (about 5000 Angstroms). The intensity plot (not reproduced here) shows the progression from a 100 micron width (blue curve) to 60 microns (green curve) and then 40 microns (red curve). Note that, as expected, the central peak gets broader as the slit width decreases. Also, again as expected, the intensity of the central peak decreases as the slit width decreases.

**Answer 2.** The on-axis (central-lobe) field increases with the area of the slit, and thus the intensity satisfies $I\propto \text{area}^2$.

If a monochromatic plane wave traveling along the $z$ axis is incident on a diffraction slit of width $w$, the initial field is $E(r,t) = E_0 e^{ik_0 z- i\omega t}$. Immediately after the slit (assumed to be at $z=0$) the field is (ignoring the temporal component, since the wave is monochromatic)
$$E(x) = E_0\operatorname{rect}(x/w),$$
where $\operatorname{rect}$ is the rectangle function (equal to one if $|x/w|<1/2$ and zero outside this range). We can still write the field as a sum of plane waves in what is known as the angular spectrum representation,
$$E(x) = \int \tilde E(k_x)\, e^{ik_x x}\, \frac{dk_x}{2\pi},$$
and Fourier transform to find the amplitude of each plane-wave component,
$$\tilde E(k_x) = \int E(x)\, e^{ik_x x}\, dx.$$
If $E(x)$ has a finite width $w$, changing variables to $x = wx'$ (so $dx = w\,dx'$) shows that this integral is proportional to the width $w$. In particular, if the slit is a hard slit given by the rectangle function, the Fourier transform is simply
$$\tilde E(k_x) = wE_0\operatorname{sinc}(wk_x/2\pi),$$
where $\operatorname{sinc}$ is the sinc function. For the on-axis (central-lobe) intensity this gives
$$|\tilde E(k_x=0)|^2 = w^2|E_0|^2.$$
Note that for a 2D slit or hole, the field at $k_x = k_y = 0$ would be proportional to $\int dx\,dy \to \text{area}$, and the intensity would therefore be $|E_0|^2\times \text{area}^2$.
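The two scaling claims (on-axis intensity proportional to $w^2$, central-lobe half-width proportional to $\lambda/w$) can be checked with a few lines of code. A minimal sketch, assuming NumPy; the micron units are illustrative:

```python
import numpy as np

def slit_intensity(width_um, angle_rad, wavelength_um=0.5):
    """Fraunhofer single-slit intensity (arbitrary units):
    I(theta) proportional to w^2 * sinc^2(w * sin(theta) / lambda)."""
    beta = width_um * np.sin(angle_rad) / wavelength_um
    return width_um ** 2 * np.sinc(beta) ** 2  # np.sinc(x) = sin(pi x)/(pi x)

# On-axis intensity scales with the slit width squared ...
assert abs(slit_intensity(100.0, 0.0) / slit_intensity(40.0, 0.0)
           - (100.0 / 40.0) ** 2) < 1e-9

# ... while the first zero sits at sin(theta) = lambda / w, so the central
# lobe broadens as the slit narrows.
def first_zero(w_um, wavelength_um=0.5):
    return np.arcsin(wavelength_um / w_um)

assert first_zero(40.0) > first_zero(100.0)
```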
Offshore oil and gas in the United States provides a large portion of the nation's oil and gas supply. Large oil and gas reservoirs are found under the sea offshore from Louisiana, Texas, California, and Alaska. Environmental concerns have prevented or restricted offshore drilling in some areas, and the issue has been hotly debated at the local and national levels.
Production
From 1954 to 2007, federal offshore tracts produced of oil and of natural gas.
In 2007, federal offshore tracts produced 27% of the oil and 14% of the natural gas in the United States. Three of the top ten oil fields in the United States in terms of the proven remaining reserves were offshore in the Gulf of Mexico in 2007 (Mars-Ursa, Thunder Horse, and Atlantis). The oil production in the offshore area owned by the federal government reached in 2007, down from the record of produced in 2002. 2.86 TCF of offshore gas produced in 2007 was down from the high of 5.25 TCF produced in 1996.
Ownership of offshore oil and gas
The issue of state versus federal ownership has a long and contentious history (see Tidelands). The US Supreme Court ruled in 1947 that the federal government owned all the seabed off the California coast; the court applied the same doctrine against Louisiana and Texas in 1950. The court ruling invalidated existing state leases over producing offshore oil fields in the three states. However, the US Congress passed the Submerged Land Act in 1953, which recognized state ownership of the seabed within of the shore. That same year Congress also passed the Outer Continental Shelf Act, which gave the federal government jurisdiction over minerals on and under the seabed farther offshore from state waters.
The first federal offshore lease sale was held in 1954, to offer oil production rights under federal seabed in offshore Louisiana.
State ownership
Each coastal state owns the territory extending from the shore at mean low tide, and has jurisdiction to decide whether or not, and under what terms, to lease the territory for oil and gas. Exceptions include Texas and the west coast of Florida, which for historical reasons own the seabed out to from the shore. Louisiana is included in the 3 nautical mile rule, but because it had active offshore leases defined before 1950 (and before most other states), its territory is measured using the Admiralty Nautical Mile, while other states use the International Nautical Mile, adopted by the United States in 1954.
The exact definition of the shoreline, dividing state waters offshore and potential private land onshore, depends on state law. Shifting shorelines in the vicinity of oil fields have been a thorny issue. In the Goose Creek field along the coast in Harris County, Texas, so much oil was produced that the ground elevation sank ten or more feet, putting some privately owned oil-producing lands below sea level. The state of Texas then declared that the newly submerged land, along with its oil revenue, was state property.
Federal ownership
Traditionally, nations owned the territory extending from shore. In 1945, President Harry Truman issued a proclamation extending US jurisdiction over mineral resources to the edge of the continental shelf; the proclamation was codified by the Outer Continental Shelf Lands Act of 1953. The Geneva Convention on the continental shelf in 1958 recognized the right of each nation to the mineral resources on its adjacent continental shelf, out to a water depth of 200 m. In 1983, President Ronald Reagan issued a proclamation extending the US Exclusive Economic Zone to from the shore.
By virtue of the Law of the Sea, which the US has signed but not ratified, each nation controls an Exclusive Economic Zone (EEZ) that extends from its shore. The EEZ confers exclusive rights to a nation to explore and produce minerals, including oil and gas. In areas within 200 nautical miles of two or more nations, the territorial line is drawn equidistant from the shores of the two nations. The US and Canada referred a dispute over the EEZ boundary in the Atlantic Ocean to the International Court of Justice, which decided the matter (see Georges Bank). Article 76 of the Law of the Sea provides that nations may under certain circumstances extend their EEZs beyond the 200 nautical mile limit, to up to from shore.
The 200 nautical mile EEZ left a small area in the western Gulf of Mexico enclosed by the EEZs of the United States and Mexico, but outside the EEZ of either nation (the "doughnut hole"). The US and Mexico concluded a treaty to divide the area between them. A second, and as yet unresolved "doughnut hole" exists in the eastern Gulf of Mexico, bordered by the EEZs of the US, Mexico, and Cuba.
Leasing and drilling on federal offshore seabed is controlled by the Bureau of Ocean Energy Management (BOEM) and the Bureau of Safety and Environmental Enforcement (BSEE), formerly named the Minerals Management Service (MMS). The BOEM issues leases through competitive bidding by sealed bids. The oil and gas company offering the highest up-front payment to the government (called a bonus) wins the lease. The government also receives a fixed annual rental based on the area for non-producing leases, and a percentage of the market value of any oil or gas produced and sold (royalty). The leases expire after a set number of years, or continue for as long afterward as oil and gas are continuously produced from the lease. Individual tracts are generally . Current leases being offered in the Gulf of Mexico have 5-year terms for tracts in water depths of less than 400 m, and 8 years for tracts in water greater than 400 m. Royalty rates are 18.75% regardless of water depth. From 1954 through 2004, the federal government received $64 billion in bonuses, $3 billion in rentals, $89 billion in royalties, and $3 billion in taken-in-kind oil deliveries in lieu of royalties.
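The fiscal terms above (bonus, rental, royalty) can be illustrated with a toy calculation — all figures here are hypothetical except the 18.75% royalty rate quoted in the text:

```python
# A producing federal lease owes a royalty equal to a fixed share of the
# market value of oil and gas sold; volumes and prices below are invented.
royalty_rate = 0.1875            # rate from the text
barrels_sold = 1_000_000         # hypothetical annual sales volume
price_per_barrel = 80.0          # hypothetical market price, USD

royalty_owed = royalty_rate * barrels_sold * price_per_barrel
print(royalty_owed)              # 15000000.0
```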
International ownership
Some potential petroleum deposits underlie areas outside the EEZ. For example, the North Chukchi Basin in the Arctic Ocean is partly inside and partly outside the US EEZ.
Offshore drilling by area
Historically, offshore drilling began by extending known coastal oil- and gas-producing trends out into the ocean. For this reason, most US offshore drilling has taken place offshore Louisiana, Texas, California, and Alaska, areas with coastal onshore oil and gas fields.
"Possibly in some future age, when all known petroleum fields shall have been drained of their richness, oil men may seek refuge in King Neptune's realm."
- Oil & Gas Journal, 24 June 1915
Alaska
The first federal lease sale offshore Alaska was held in 1976. Alaska produces oil and gas from offshore areas in the Cook Inlet and the Arctic Ocean. Endicott Island is an artificial island built to produce oil from beneath the Beaufort Sea. There are currently four artificial islands being used for drilling.
By Executive Order dated April 28, 2017, the Bureau of Ocean Energy Management will begin selling offshore leases in 2019.
California
Offshore drilling began in California in 1896, when operators in the Summerland Oil Field in Santa Barbara County followed the field into the ocean by drilling from piers built out over the ocean.
Leasing California state seabed is controlled by the State Lands Commission, which halted further leasing of state offshore tracts after the Santa Barbara oil spill in 1969. In 1994 the California legislature codified the ban on new leases by passing the California Coastal Sanctuary Act, which prohibited new leasing of state offshore tracts. The federal government has had no new lease sales for offshore California since 1982. Offshore drilling has continued from existing platforms in state and federal waters.
State offshore seabed in California produced of oil per day, and federal offshore tracts produced of oil per day in November 2008. State and federal offshore tracts together made up 16% of the state's oil production. Notable offshore fields include the Ellwood Oil Field, which is partially onshore and partially offshore, and the Dos Cuadras Field, source of the 1969 spill, which is entirely within the federal zone.
Gulf of Mexico
See Offshore oil and gas in the US Gulf of Mexico
The western and central Gulf of Mexico, which includes offshore Texas, Louisiana, Mississippi, and Alabama, is one of the major petroleum-producing areas of the United States. In 2007, federal leases in the western and central Gulf of Mexico produced 25% of the nation's oil and 14% of the nation's natural gas. In 2008, federal leases in the Gulf of Mexico produced of oil, down from in 2002; however, due to new deep-water discoveries, the MMS projects that oil production from the Gulf of Mexico will increase to per year by 2013.
Major fields include Atlantis Oil Field, and the Tiber oilfield (discovered 2009) and others in the Keathley Canyon protraction area. Notable oil platforms include Baldpate, Bullwinkle, Mad Dog, Magnolia, Mars, Petronius, and Thunder Horse. Notable individual wells include Jack 2 and Knotty Head.
The eastern Gulf of Mexico, which includes offshore Gulf Coast Florida, has never been a petroleum-producing area due to federal and state restrictions on exploration. Offshore platforms currently exist as far east as to near the Florida-Alabama border.
East Coast
See Offshore drilling on the US Atlantic coast
In the late 1970s and early 1980s oil companies drilled 51 exploratory wells on federal leases on the outer continental shelf of the Atlantic coast. All the leases have now reverted to the government. A 1996 study by the MMS estimated undiscovered conventionally recoverable resources in Atlantic federal waters to be of oil and gas.
Pacific Northwest Coast
The Sunshine Mining Co. made Washington state's first oil discovery in July 1957, at a location 1.4 miles south down the coast from Ocean City. The discovery was on a state lease, and was below mean high tide, which made it an offshore well. Additional wells were drilled in the area, but none produced any oil. The discovery well made 11,032 barrels of oil before it was plugged in 1959.
In 1964, the federal government leased tracts totaling in offshore Oregon and Washington. Oil companies drilled six tests offshore Washington (three in state waters and three in federal waters) and seven tests in federal waters offshore Oregon. The OCS P-0130 well drilled offshore Oregon by Union Oil in 1966 was described as having "potential for commercial gas production", but none of the wells were completed as producers, and the federal leases expired in 1969.
Farther north, in Canadian waters, Shell Canada drilled 14 wells offshore from Vancouver Island from 1967 to 1969. None were successful. Canada has had a federal moratorium on offshore drilling on its west coast since 1972.
Great Lakes
The lakebeds under the US portion of the Great Lakes are owned by the adjacent states. The only state that has allowed oil and gas drilling beneath the Great Lakes in recent years has been Michigan, and that only by directional drilling from onshore surface locations. All oil and gas drilling, either on or directionally beneath the Great Lakes, has been banned by federal law since 2002.
The Canadian side of Lake Erie has 480 producing gas wells in the lake. Gas production in Canadian Lake Erie waters dates back to 1913; more than 2000 wells have been drilled to the Clinton Sand, up to a few miles from the US side of the lake; the same formation produces gas from many onshore wells on the US side, but there are no wells in the US portion of the lake. The province of Ontario, where the offshore wells are located, allows gas wells, but not oil wells, in offshore Lake Erie. Ontario allows oil production from below the lake only from wells drilled directionally from onshore surface locations; a number of such directional wells are producing oil from beneath Lake Erie in the Goldsmith/Lakeshore Field.
A gas well to the Medina Group was drilled in 1958 in Pennsylvania state waters in Lake Erie.
The only state that has current oil and gas production from beneath the Great Lakes is Michigan. Michigan has 13 oil and gas wells that produce from below Lake Michigan, all drilled directionally from surface locations on shore. Directional drilling beneath the Great Lakes is now banned.
The US Geological Survey has estimated that of recoverable petroleum liquids and of recoverable natural gas underlie the US portion of the Great Lakes. Petroleum potential was given to all of the lakes except Lake Superior. The majority of the natural gas (3.0 TCF) is thought to underlie Lake Erie.
Restrictions on offshore drilling
State restrictions
A number of states, including California and Florida, have banned leasing of state waters for oil and gas drilling. In 2009 a bill that would have partially rescinded a ban on oil and gas leasing of Florida state waters failed in the Florida statehouse (see Offshore oil and gas in Florida).
In 2018, New Jersey enacted a law to ban oil and gas drilling in state waters and prohibit any constructions of oil and gas drilling infrastructure and facilities including pipelines and docks in New Jersey and its waters. This is an effort to cut off infrastructure to make it harder to drill in federal waters. New York, California, South Carolina and Rhode Island introduced similar bills in their states.
Federal restrictions
Congress passed the Marine Protection, Research, and Sanctuaries Act of 1972, which provided for the establishment of National Marine Sanctuaries, in which certain activities, including oil and gas drilling, are prohibited. To date, 13 sanctuaries with a combined area of have been so designated.
In 1982, the US Congress directed that no federal funds be used to lease federal tracts off the coasts of Washington, Oregon, or central and northern California. Over the years Congress added other areas until the prohibited area included all the east and west coasts, and the eastern Gulf of Mexico. Congress repeated the effective ban on offshore drilling in these areas every year until September 2008, when an appropriations bill passed the House and Senate without the ban.
In 1990, Congress passed the North Carolina Outer Banks Protection Act, prohibiting leasing and drilling on federal seabed offshore from North Carolina.
In 1990, President George H. W. Bush issued an executive moratorium restricting federal offshore leasing to Texas, Louisiana, Mississippi, Alabama, and parts of Alaska. The moratorium banned federal leasing through the year 2000 off the East Coast, West Coast, the eastern Gulf of Mexico (offshore Florida Gulf Coast), and the Northern Aleutian Basin of Alaska. In 1998, President Bill Clinton extended the moratorium through 2012. In July 2008, President George W. Bush rescinded the executive order.
In 2002, Congress imposed a moratorium on drilling on or directionally beneath the Great Lakes. The ban was made permanent by the Energy Policy Act of 2005.
Part of the central and most of the eastern Gulf of Mexico was declared off-limits to oil and gas leasing until 2022 by the Gulf of Mexico Energy Security Act of 2006.
Other mineral resources
Offshore areas of the US may contain resources of sulfur, salt, sand and gravel, phosphate rock, manganese nodules, and heavy-mineral placer deposits. To date, none of these has been produced for commercial purposes except sulfur and salt in the Gulf of Mexico, and gold in Alaskan state waters near Nome. The MMS has allowed sand dredging in federal waters to restore damaged beaches in Maryland, Virginia, South Carolina, and Florida. Formerly, very large gold dredges existed for many years, mining off of Nome, Alaska. The state lease auctions for up to the three mile limit are still held infrequently.
From 1960 to 2000, sulfur was mined by the Frasch process from the caprock of a salt dome at the Main Pass 299 mine, offshore Louisiana. A total of 34 million short tons of sulfur were recovered before the mine was closed; salt was also recovered.
See also
Offshore oil drilling
Oil platform
US offshore drilling debate
References
External links
Map of wells drilled in federal waters, Gulf of Mexico
United States public land law | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 3,011 |
Canadian singer and songwriter Tamia has released nine albums (including seven studio albums, one extended play, and one compilation album), and twenty-six singles (including four as a featured artist and one charity single). She began her career in 1995 as a protégé of musician Quincy Jones, who offered her the chance to appear on his album Q's Jook Joint (1995). Selected as the album's first single, their collaboration "You Put a Move on My Heart" became a top 20 success on the US Billboard Hot R&B/Hip-Hop Songs. The song, along with their second collaboration "Slow Jams" and "Missing You", a song she recorded with Brandy, Gladys Knight, and Chaka Khan for the soundtrack of the 1996 motion picture Set It Off, was later nominated for a Grammy Award.
Signed to Jones's Qwest Records, Tamia's self-titled debut album was released in 1998. The album took her work further into the contemporary R&B and hip hop genres but became a moderate commercial success, peaking at number sixty-seven on the US Billboard 200 chart and entering the top twenty of the Top R&B/Hip-Hop Albums chart. From the five singles that were released from the album, "Imagination" and "So into You" reached the top forty of the Billboard Hot 100. The album was certified gold in Japan in June 1998 for 100,000 copies shipped to stores. In the United States, Tamia sold 416,000 copies in total. After a transition to Elektra Records, Tamia released her second album A Nu Day in 2000. Chiefly produced by Shep Crawford and Missy Elliott, the album entered the top ten on Billboard's Top R&B/Hip-Hop Albums chart. It included the singles "Can't Go for That" and "Stranger in My House," the latter of which reached number 10 on the US Billboard Hot 100, making it her highest-charting single to date. Her strongest seller yet, A Nu Day sold over 665,000 copies in the United States and was certified gold by the Recording Industry Association of America (RIAA).
Tamia released her long-delayed third album, More, following her diagnosis with multiple sclerosis in mid-2004. Pushed by the top five success of "Into You", an updated version of her 1998 single "So into You", which rapper Fabolous recorded for his album Street Dreams, More became her highest-charting album yet, debuting and peaking at number 17 on the Billboard 200. The album spawned three singles. Feeling restricted by record label obligations, Tamia later split from Elektra to go independent with her own company, Plus One Music Group. Her first project with the label was Between Friends. Her second album with the roster, Beautiful Surprise, was released in 2012 after a nearly six-year absence in which she had devoted herself to the education of her two children with retired basketball player Grant Hill. It debuted at number six on the Top R&B/Hip-Hop Albums chart and also garnered two Grammy Award nominations. In 2014, Tamia entered a joint venture with Def Jam Recordings to release her sixth album Love Life. Released in June 2015, it debuted at number 24 on the US Billboard 200, while reaching the top of Billboard's Top R&B Albums chart and number two on the Top R&B/Hip-Hop Albums chart, becoming her highest-charting album ever on both charts. Passion Like Fire, her seventh studio album, was released in September 2018.
Albums
Studio albums
Compilation albums
EPs
Singles
As lead artist
As featured artist
Appearances
Albums
Soundtracks
Music video
See also
List of songs recorded by Tamia
Notes
References
External links
Discographies of Canadian artists
Rhythm and blues discographies
Soul music discographies | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 5,371 |
Anse is a French commune located in the Rhône department, in the Auvergne-Rhône-Alpes region. It is the seat (bureau centralisateur in French) and the largest town of the canton of the same name. It belongs to the communauté de communes Beaujolais-Pierres Dorées.
Geography
Located at the gateway to the Beaujolais region, the commune of Anse extends over an area of hectares.
It lies at the confluence of the Azergues and Saône rivers.
Demographics
References
External links
INSEE
Communes of Rhône
"redpajama_set_name": "RedPajamaWikipedia"
} | 29 |
Stockport's £65m Royal George Village project could be sold
Work on the 442-apartment complex has not progressed since planning permission was granted two years ago.
By Emma Davidson | December 2nd '22
Investar Property Group, the property developer behind the 442-apartment complex on the site of the Stockport College campus, is reviewing its options for the build.
The Group has been unable to get the redevelopment off the ground since planning permission was granted, due to "build cost inflation and interest rate rises", according to Michael Dong, chief executive of Investar.
The developer is exploring alternative options for the project, with the most likely result seeing the site offloaded, according to market sources. Investar is also looking into possibly forming a joint venture with another developer.
Speaking to Place North West, Michael furthered: "Instead of waiting for the right turn in the market next year, we want to proactively look for alternative options to speed things up. The market in Stockport is attractive to a number of buyers and JV partners."
Image / Place North West
Investar acquired the project from Stockport College back in 2020 and the scheme would see four of the existing buildings on the site demolished.
For example, The Torkington Building would be developed into 122 apartments, while the Grade-II listed Greek Street building would be converted into a collaborative workspace.
In addition, a new six-storey gateway building would be constructed to provide 62 brand-new apartments. Meanwhile, the Hexagon lecture hall would be demolished to make way for a public realm and civic space.
A decision is yet to be made on its future.
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 3,651 |
Q: How do I find the coordinates of nan values in a numpy array? I tried np.where but without success:
>>>a = np.array([np.nan, 1])
>>>np.where(a == np.nan)
(array([], dtype=int64),)
A: You need to change
np.where(a == np.nan)
to
np.where(np.isnan(a))
NaN values always return false in equality checks, even with another NaN value. So you need to use special functions to check for NaN like np.isnan.
A: import numpy as np
x = np.array([0,0,-1,1,np.nan, 0, 1, np.nan])
print(np.where(np.isnan(x)))
Returns:
(array([4, 7]),)
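For multi-dimensional arrays, a small addendum (not part of the original answers): `np.argwhere` packs the same information as `np.where` into one row of coordinates per NaN, which is often handier:

```python
import numpy as np

# np.argwhere(np.isnan(a)) gives one (row, col) pair per NaN entry,
# equivalent to np.transpose(np.where(np.isnan(a))).
a = np.array([[1.0, np.nan],
              [np.nan, 4.0]])
coords = np.argwhere(np.isnan(a))
print(coords.tolist())   # [[0, 1], [1, 0]]
```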
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 1,472 |
\section{Introduction}
The description of the nonperturbative structure of hadrons using generalized parton distributions (GPDs) is closely connected to phenomenology and has therefore attracted numerous dedicated experimental and theoretical efforts~\cite{Ji:PRL,Polyakov:1999gs,Diehl:2003ny,Belitsky:2005qn,Goeke:2001tz,Choi:2001fc,Kaur:2018ewq,deTeramond:2018ecg,Ji:2006ea,Fanelli:2016aqc,Chouika:2017rzs,Mezrag:2014jka,Broniowski:2007si,Zhang:2021mtn,Zhang:2021shm,Hwang:2007tb,Mondal:2017wbf,Chakrabarti:2014cwa,Adhikari:2016idg,Adhikari:2018umb,Meissner:2007rx,Alexandrou:2020zbe,Chen:2019lcm,Dupre:2016mai,Kriesten:2021sqc}. These GPDs are experimentally accessible through exclusive processes including deeply virtual Compton scattering (DVCS) and deeply virtual meson production (DVMP). The GPDs present an attractive testing ground for comparing theory with experiment since they encode a wealth of information about the spatial structure of the hadron as well as the partonic distribution of spin and orbital angular momenta.
Unlike the parton distribution functions (PDFs), which are solely functions of longitudinal momentum fraction ($x$) carried by the active parton, the GPDs are functions of $x$, the skewness ($\zeta$) which represents the longitudinal momentum transfer, and the square of total momentum transfer ($t$) to the hadrons.
The GPDs provide a picture that unites PDFs with form factors (FFs), where the former describe the longitudinal momentum distribution of partons within a hadron while the latter characterize the spatial extent. One obtains the FFs, charge distributions, PDFs, etc. from the GPDs by marginalizing~\cite{guidal-ff, nikkhoo-ff, miller-charge}. Additionally, in the absence of the longitudinal momentum transfer ($\zeta=0$), the GPDs are converted to the impact parameter dependent parton distributions via Fourier transform with respect to the transverse momentum transfer. Unlike the GPDs themselves, the impact parameter dependent parton distribution is the probability density of partons at a given combination of the longitudinal momentum fraction and the transverse distance from the center of the hadron~\cite{Burkardt:2002hr,Burkardt:2000za,Ralston:2001xs,Broniowski:2003rp}. For different polarizations of the partons, spin densities can be expressed in terms of the polarized impact parameter dependent GPDs~\cite{Brommel:2007xd,Gockeler:2006zu,Pasquini:2007xz,Maji:2017ill,Diehl:2005jf}.
For many years, DVCS and DVMP data have been accumulated by J-PARC, Hall-A and Hall-B of JLab by the CLAS collaboration and by COMPASS at CERN \cite{gpd-exp, gpd-exp1, gpd-exp2, gpd-exp3, gpd-exp4, gpd-exp6, gpd-exp7}. Recently, JLab has also started a positron initiated DVCS experiment~\cite{Accardi:2020swt}, COMPASS at CERN will start to collect more DVCS data, while future Electron-Ion Colliders~\cite{AbdulKhalek:2021gbh,Anderle:2021wcy} are planned to explore the GPDs through DVCS. However, experimental extractions of the GPDs are not straightforward. In particular, fitting of DVCS data does not provide direct information about the GPDs but, instead, provides some weighted integrals of the GPDs. Since nonperturbative QCD predictions are not yet possible from the first principles, model predictions of the GPDs are useful for constraining the GPDs and data fitting in order to develop insights into GPDs from DVCS data.
Among known hadrons, the pion plays a leading role for comparing theory with experiment. From the Drell-Yan process with pion beams \cite{Drell:1970wh, Christenson:1970um}, we can access the partonic structure of the pion by colliding them with nuclear targets \cite{Peng:2014hta, Chang:2013opa, Reimer:2007iy, McGaughey:1999mq}.
Chiral symmetry is dynamically broken in QCD leading to generation of the Goldstone bosons (pions) having a small mass when compared to other
hadrons. On the one hand, the pions are salient in providing the force that binds the neutrons and the protons inside the nuclei and they also affect the properties of the isolated nucleons. Hence one can safely say that our understanding of visible (baryonic) matter is incomplete without detailed knowledge of the structure and interactions of the pion.
On the other hand, the pseudoscalar kaons, counterparts
of the pions with one strange valence quark, play a critical role in our understanding of Charge and Parity (CP) symmetry
violation~\cite{Woods:1988za,Barr:1993rx,Gibbons:1993zq}. In this paper, we investigate the partonic structure of the pions and the kaons in terms of their GPDs.
As background, we note that different theoretical analyses have provided useful insights regarding the pion GPDs, \emph{e.g}.\ Refs.~\cite{Broniowski:2003rp,Polyakov:1999gs,Frederico:2009fk,Kaur:2018ewq,Theussl:2002xp,Dalley:2003sz,Broniowski:2007si,Mezrag:2014jka,Fanelli:2016aqc,Kumano:2017lhr,Ma:2019agv,Zhang:2020ecj,Shi:2020pqe,Gutsche:2015,Gutsche:2013zia,deTeramond:2018ecg,Chang:2020kjj,Brommel:2007xd,Hagler:2009ni,dalley,dalley1,Chen:2019lcm,Zhang:2021mtn,Roberts:2021nhw,Kaur:2020vkq,Raya:2021zrz}, while for the kaon, foundations are just being laid and several significant analyses can be found in Refs.~\cite{Kaur:2020vkq,Nam:2011yw, Xu:2018eii, Kock:2020frx,Kaur:2019jow,Zhang:2021mtn,Zhang:2021tnr,Raya:2021zrz}.
Another salient issue is the transversity of the hadrons~\cite{Barone:2001sp}, which provides access to their spin structures. Due to transversity's chiral-odd nature, it is challenging to measure experimentally. Nevertheless, the transverse spin asymmetry in Drell-Yan
processes in $p\bar{p}$ reactions~\cite{Anselmino:2004ki,Pasquini:2006iv} and the azimuthal single spin asymmetry in semi-inclusive deep inelastic scattering (SIDIS)~\cite{Anselmino:2007fs} can be used to extract valuable information on the transversity of the nucleon. While the transversity of the nucleon is nonzero and has now been well determined~\cite{Radici:2018iag}, it vanishes for the spin-zero hadrons. However, the chiral-odd GPDs defined as off-forward matrix elements of the tensor current are nonzero and much less information is available for them in the case of the pion and the kaon.
From the perspective of theory, the QCDSF/UKQCD Collaboration has reported the first result for the pion's chiral-odd GPD using lattice QCD~\cite{Brommel:2007xd}. They have also presented the probability density of the polarized quarks inside the pion and found that their spatial distribution is strongly distorted when the quarks are transversely polarized. The distortion in the density occurs due to the pion tensor FF. The lattice QCD results have triggered various theoretical studies on the pion and the kaon tensor FFs. The models for such results include constituent quark models~\cite{Frederico:2009fk,Fanelli:2016aqc}, the Nambu--Jona-Lasinio (NJL) model with Pauli-Villars regularization~\cite{Broniowski:2010nt,Dorokhov:2011ew}, and the nonlocal chiral quark model (N$\chi$QM) from the instanton vacuum~\cite{Nam:2010pt,Nam:2011yw}.
In this paper, we evaluate the GPDs of the light pseudoscalar mesons using the light-front wave functions (LFWFs) based on the theoretical framework of basis light front quantization (BLFQ) \cite{Vary:2009gt}, with only the valence Fock sector of mesons considered. The effective Hamiltonian incorporates the confining potential adopted from the light-front holography in the transverse direction \cite{Brodsky:2014yha}, a longitudinal confinement \cite{Li:2015zda,Li:2017mlw}, and the color-singlet NJL interactions~\cite{Klimt:1989pm,Shigetani:1993dx} to account for the dynamical chiral symmetry breaking of QCD. The nonperturbative solutions for the LFWFs are given by the recent BLFQ study of light mesons~\cite{Jia:2018ary}. These LFWFs have been applied successfully to predict the decay constants, electromagnetic form factors (EMFFs), charge radii, PDFs, and many other quantities of the pion and the kaon~\cite{Jia:2018ary, Lan:2019rba,Lan:2019vui,Mondal:2021czk}. Here, we extend those investigations to study the pion and the kaon GPDs and their QCD evolution.
We use the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) equation of QCD \cite{Dokshitzer:1977sg,Gribov:1972ri,Altarelli:1977zs} up to the next-to-next-to-leading order (NNLO) for the evolution of the valence quark GPDs.
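As a reminder of the structure of this evolution (shown here only at leading order; the analysis above uses the equations up to NNLO), the valence (nonsinglet) distribution evolves as
\begin{equation}
\mu^2 \frac{\partial}{\partial\mu^2}\, q_v(x,\mu^2)
  = \frac{\alpha_s(\mu^2)}{2\pi}
    \int_x^1 \frac{dy}{y}\, P_{qq}\!\left(\frac{x}{y}\right) q_v(y,\mu^2),
\qquad
P_{qq}(z) = C_F\left[\frac{1+z^2}{(1-z)_+} + \frac{3}{2}\,\delta(1-z)\right],
\end{equation}
with $C_F=4/3$ and the plus-prescription regulating the $z\to1$ singularity.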
We also calculate the pion and the kaon tensor FFs in the space-like region. Combining the result of the tensor FFs with the EMFFs, which have been evaluated previously in Ref.~\cite{Jia:2018ary} within the BLFQ-NJL framework, we then compute the probability density of transversely polarized quarks inside the pion and the kaon. We further calculate the $x$-dependent squared radius of the quark density in the transverse plane that describes the transverse size of the hadron.
We organize the main results of this paper in the following sequence. We briefly summarize the BLFQ-NJL
formalism for the light mesons in Sec.~\ref{sc:BLFQ_NJL}. We then present a detailed description of the GPDs and the associated distributions in Sec.~\ref{formalism}. Sec.~\ref{result} details our numerical results for the GPDs, electromagnetic and gravitational FFs, impact parameter dependent GPDs, and spin densities of the pion and the kaons. We summarize the outcomes in Sec.~\ref{summary}.
\section{BLFQ-NJL model for the light mesons}\label{sc:BLFQ_NJL}
In this section, we provide an overview of the BLFQ-NJL model for the light mesons following Ref.~\cite{Jia:2018ary}. The BLFQ approach represents the dynamics of bound state constituents in quantum field theory through a light-front quantum many-body Hamiltonian~\cite{Vary:2009gt,Zhao:2014xaa,Wiecki:2014ola,Li:2015zda,Jia:2018ary,Tang:2018myz,Tang:2019gvn,Xu:2019xhk,Lan:2021wok}. The structures of the bound states are encoded in the LFWFs achievable as the eigenfunctions of the light-front eigenvalue equation
\begin{equation}
H_{\mathrm{eff}}\vert \Psi\rangle=M^2\vert \Psi\rangle,\label{eq:LF_Schrodinger}
\end{equation}
where $H_{\mathrm{eff}}=P^+P^-$ with $P^\pm=P^0 \pm P^3$ being the light-front Hamitonian ($P^-$) and the longitudinal momentum ($P^+$) of the system, respectively. The mass squared, $M^2$, is the corresponding eigenvalue of the state $\vert \Psi\rangle$.
In the constituent quark-antiquark representation, our adopted effective light-front Hamiltonian for the light mesons with non-singlet flavor wave functions is written as
\begin{align}
H_\mathrm{eff} =& \frac{\vec k^{\perp2} + m_q^2}{x} + \frac{\vec k^{\perp2}+m_{\bar q}^2}{1-x}
+ \kappa^4 \vec \zeta^{\perp2} \nonumber\\&- \frac{\kappa^4}{(m_q+m_{\bar q})^2} \partial_x\big( x(1-x) \partial_x \big)+H^{\rm eff}_{\rm NJL}.\label{eqn:Heff}
\end{align}
The first two terms in Eq.~\eqref{eqn:Heff} are the light-front kinetic energy for the quark and the antiquark,
where $m_q$ ($m_{\bar q}$) is the mass of the quark (antiquark),
$x=k^+/P^+$ is the longitudinal momentum fraction carried by the valence quark, and
$\vec{k}^{\perp}$ is its transverse momentum. The third and the fourth terms are respectively the confining potential in the transverse direction based on light-front holographic QCD~\cite{Brodsky:2014yha} and a longitudinal confining potential~\cite{Li:2015zda}. The parameter $\kappa$ is the strength of the confinement. The holographic variable is defined as~$\vec \zeta^{\perp} \equiv \sqrt{x(1-x)} \vec r^\perp$~\cite{Brodsky:2014yha}, where $\vec{r}^\perp$ is the transverse separation between the quark and antiquark and is conjugate to $\vec{k}^\perp$. The $x$-derivative is defined as $\partial_x f(x, \vec\zeta^\perp) = \partial f(x, \vec \zeta^\perp)/\partial x|_{\vec\zeta^\perp}$. The last term in the effective Hamiltonian, $H_{\mathrm{NJL}}^{\mathrm{eff}}$, represents the color-singlet NJL interaction to account for the chiral dynamics~\cite{Klimt:1989pm}.
For the positively charged pion, the NJL interaction is given by~\cite{Jia:2018ary},
\begin{align}
&H_{\mathrm{NJL},\pi}^{\mathrm{eff}} =G_\pi\, \big\{\overline{u}_{\mathrm{u}s1'}(p_1')u_{\mathrm{u}s1}(p_1)\,\overline{v}_{\mathrm{d}s2}(p_2)v_{\mathrm{d}s2'}(p_2')\nonumber\\
&\quad\quad+ \overline{u}_{\mathrm{u}s1'}(p_1')\gamma_5 u_{\mathrm{u}s1}(p_1)\,\overline{v}_{\mathrm{d}s2}(p_2)\gamma_5 v_{\mathrm{d}s2'}(p_2') \nonumber\\
&\quad\quad+ 2\,\overline{u}_{\mathrm{u}s1'}(p_1')\gamma_5 v_{\mathrm{d}s2'}(p_2')\,\overline{v}_{\mathrm{d}s2}(p_2)\gamma_5 u_{\mathrm{u}s1}(p_1) \big\}.\label{eq:H_eff_NJL_pi_ori}
\end{align}
Meanwhile, for the positively charged kaon, the interaction is given by
\begin{align}
&H^{\mathrm{eff}}_{\mathrm{NJL},K}=G_K\,\big\{- 2\,\overline{u}_{\mathrm{u}s1'}(p_1') v_{\mathrm{s}s2'}(p_2')\,\overline{v}_{\mathrm{s}s2}(p_2) u_{\mathrm{u}s1}(p_1) \nonumber\\
&\quad\quad + 2\,\overline{u}_{\mathrm{u}s1'}(p_1')\gamma_5 v_{\mathrm{s}s2'}(p_2')\,\overline{v}_{\mathrm{s}s2}(p_2)\gamma_5 u_{\mathrm{u}s1}(p_1) \big\}.\label{eq:H_eff_NJL_SU_3_ori}
\end{align}
Equations~(\ref{eq:H_eff_NJL_pi_ori}) and (\ref{eq:H_eff_NJL_SU_3_ori}) are obtained from the NJL Lagrangian after the Legendre transform in the two and three flavor NJL model, respectively~\cite{Klimt:1989pm,Vogl:1989ea,Vogl:1991qt,Klevansky:1992qe}. Here, ${u_{\mathrm{f}s}(p)}$ and ${v_{\mathrm{f}s}(p)}$ are the Dirac spinors with the nonitalic subscripts representing the flavors and the italic subscripts denoting the spins. Meanwhile, $p_1$ and $p_2$ are the momenta of the valence quark and the valence antiquark, respectively.
The coefficients $G_{\pi}$ and $G_{K}$ are independent coupling constants of the theory. In the interactions, we only include the combinations of Dirac bilinears relevant to the valence Fock sector LFWFs of the systems. The instantaneous terms due to the NJL interactions have been omitted. The explicit expressions and the detailed calculations of the matrix elements of the NJL interactions in the BLFQ formalism can be found in Ref.~\cite{Jia:2018ary}.
In the leading Fock sector, the eigenstate for the mesons reads
\begin{align}
&\big\vert\Psi(P^+,\vec{P}^\perp)\big\rangle =\sum_{r,s}\int_{0}^{1}\dfrac{dx}{4\pi x(1-x)}\int\dfrac{d\vec{\kappa}^\perp}{(2\pi)^2}\,\nonumber\\
&\quad\quad\times\,\psi_{rs}(x,\vec{\kappa}^\perp)\, b_r^\dagger(xP^+,\vec{\kappa}^\perp+x\vec{P}^\perp) \nonumber\\
&\quad\quad\times\,
d_s^\dagger((1-x)P^+,-\vec{\kappa}^\perp+(1-x)\vec{P}^\perp)\,|0\rangle,\label{eq:Psi_meson_qqbar}
\end{align}
where $P$ is the momentum of the meson. The relative transverse momentum of the valence quark is $\vec{\kappa}^\perp=\vec{k}^\perp-x\vec{P}^\perp$. The coefficients of the expansion, $\psi_{rs}(x,\vec{\kappa}^\perp)$, are the valence sector LFWFs with $r$($s$) representing the spin of the quark(antiquark).
To compute the Hamiltonian matrix, one needs to construct the BLFQ basis. The two-dimensional (2D) harmonic oscillator (HO) basis functions are adopted in the transverse direction, which are defined as~\cite{Vary:2009gt,Li:2015zda}:
\begin{align}
\phi_{nm}\left(\vec{q}^\perp;b_h \right)& =\dfrac{1}{b_h}\sqrt{\dfrac{4\pi n!}{(n+|m|)!}} \left(\dfrac{\vert\vec{q}^\perp\vert}{b_h}\right)^{|m|} \nonumber\\
&\times\, \exp\left(-\dfrac{\vec{q}^{\perp 2}}{2b_h^2}\right)
L_n^{|m|} \left(\dfrac{\vec{q}^{\perp 2}}{b_h^2}\right)\,e^{im\varphi},\label{eq:def_phi_nm}
\end{align}
where $\tan(\varphi)=q^2/q^1$; $b_h$ is the HO basis scale parameter with the dimension of mass; $n$ and $m$ are the radial and the angular quantum numbers, respectively; and $L_n^{|m|}(z)$ is the associated Laguerre polynomial. Meanwhile, in the longitudinal direction, the basis functions are defined as~\cite{Li:2015zda}
\begin{align}
\chi_l(x;\alpha,\beta)&= \sqrt{4\pi(2l+\alpha+\beta+1)}\nonumber\\
&\times\,\sqrt{\dfrac{\Gamma(l+1)\Gamma(l+\alpha+\beta+1)}{\Gamma(l+\alpha+1)\Gamma(l+\beta+1)}} \nonumber\\
&\times\, x^{\beta/2}(1-x)^{\alpha/2}\,P_l^{(\alpha,\beta)}(2x-1),\label{eq:def_chi_l}
\end{align}
where $P_{l}^{(\alpha,\beta)}(z)$ is the Jacobi polynomial and the dimensionless parameters ${\alpha =2m_{\overline{q}}(m_q+m_{\overline{q}})/\kappa^2}$, ${\beta=2m_q(m_q+m_{\overline{q}})/\kappa^2}$ and $l=0,~1,~2,...$.
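As a numerical cross-check (outside the derivation above), the basis functions can be evaluated with standard special-function libraries. The sketch below verifies the orthonormality of the transverse basis of Eq.~(\ref{eq:def_phi_nm}) with respect to the measure $d^2\vec q^{\,\perp}/(2\pi)^2$; $b_h$ is set to $1$ purely for illustration.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, genlaguerre

def phi_nm_radial(q, n, m, b=1.0):
    """Radial part of the 2D HO basis function of Eq. (def_phi_nm);
    the angular phase exp(i*m*phi) has unit modulus and cancels in
    the norm integrals below."""
    return (1.0 / b) * np.sqrt(4.0 * np.pi * gamma(n + 1) / gamma(n + abs(m) + 1)) \
        * (q / b) ** abs(m) * np.exp(-q**2 / (2.0 * b**2)) \
        * genlaguerre(n, abs(m))(q**2 / b**2)

def overlap(n1, n2, m, b=1.0):
    """<phi_{n1 m}|phi_{n2 m}> with the measure d^2q/(2 pi)^2."""
    val, _ = quad(lambda q: q * phi_nm_radial(q, n1, m, b)
                  * phi_nm_radial(q, n2, m, b), 0.0, np.inf)
    return 2.0 * np.pi * val / (2.0 * np.pi) ** 2
```

For instance, `overlap(0, 0, 0)` and `overlap(2, 2, 1)` evaluate to unity, while `overlap(0, 1, 0)` vanishes, as required by the Laguerre orthogonality.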
The valence LFWFs are then expanded in the orthonormal bases given in Eqs.~(\ref{eq:def_phi_nm}) and (\ref{eq:def_chi_l}):
\begin{align}
\psi_{rs}(x,\vec{\kappa}^\perp) &=\sum_{n, m, l} \langle n, m, l, r, s | \psi\rangle~ \nonumber\\
&\times\,\phi_{nm}\left(\dfrac{\vec{\kappa}^\perp}{\sqrt{x(1-x)}};b_h\right)\chi_l(x),\label{eq:psi_rs_basis_expansions}
\end{align}
where the coefficients $\langle n, m, l,r,s|\psi\rangle$ are obtained in the BLFQ basis space by diagonalizing the truncated Hamiltonian matrix.
The infinite-dimensional basis is truncated to a finite dimension by restricting the quantum numbers using
\begin{equation}
0 \leq n \leq N_{\mathrm{max}}, \quad -2 \leq m \leq 2, \quad 0 \leq l \leq L_{\mathrm{max}},
\label{eq:nmax}
\end{equation}
where $N_{\text{max}}$ controls the transverse momentum covered by 2D HO functions and $L_{\text{max}}$ provides the basis resolution in the longitudinal direction.
Note that we have a natural truncation for $m$ as the NJL interactions do not couple to $\vert m\vert \geq 3$ basis states~\cite{Jia:2018ary}.
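Taking the truncation in Eq.~(\ref{eq:nmax}) at face value, the dimension of the resulting basis (and hence of the Hamiltonian matrix) can be counted directly. The short sketch below is an illustrative estimate only: it counts all $(n,m,l)$ combinations together with the four quark-antiquark spin configurations, while in practice symmetry constraints (e.g., fixing the total angular momentum projection) reduce the dimension further.

```python
def basis_dimension(n_max, l_max, m_abs_max=2, n_spin=4):
    """Count the (n, m, l, r, s) basis states allowed by the truncation
    0 <= n <= n_max, |m| <= m_abs_max, 0 <= l <= l_max, with
    n_spin = 2 x 2 quark-antiquark spin combinations (illustrative
    counting only; symmetry constraints are not applied)."""
    return (n_max + 1) * (2 * m_abs_max + 1) * (l_max + 1) * n_spin
```

With $N_{\rm max}=8$ and $L_{\rm max}=32$, this counting gives $9\times5\times33\times4=5940$ states before any symmetry reduction.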
The LFWF $\psi_{rs}(x,\vec{\kappa}^\perp)$ is normalized as
\begin{align}
\sum_{r,s}
\int_0^1 \!\! \frac{dx}{2x(1-x)} \!\int \! \frac{d^2 \vec{\kappa}^\perp}{(2\pi)^3} \big | \psi_{rs}(x,\vec{\kappa}^\perp)\big |^2 \!\! =\!\!1. \label{eq:normalization_LFWF}
\end{align}
Parameters in the BLFQ-NJL model are fixed to reproduce the ground state masses of the light pseudoscalar and vector mesons as well as the experimental charge radii of the $\pi^+$ and the $K^+$~\cite{Jia:2018ary}. The LFWFs in this model have been successfully applied to compute the parton distribution amplitudes and the EMFFs \cite{Jia:2018ary}, PDFs for the pion and the kaon and pion-nucleus induced Drell-Yan cross sections~\cite{Lan:2019vui,Lan:2019rba}.
\begin{table}
\caption{Summary of the model parameters~\cite{Jia:2018ary}.
}\label{tab:model_parameters}
\centering
\begin{tabular}{ccc ccc ccc c}
\toprule
Valence flavor & $N_{\text{max}}$ & $L_{\text{max}}$ & $\kappa (\text {MeV})$ & $m_q (\text {MeV})$ & $m_{\bar {q}} (\text {MeV})$ & \\
\colrule
$u\bar{d}$ & 8 & 8$-$32 & 227 & 337 & 337 & \\
\colrule
$u\bar{s}$ & 8& 8$-$32 & 276 & 308 & 445 & \\
\botrule
\end{tabular}
\end{table}
\begin{figure*}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.57]{Hu_pion_Lmax_32.pdf}}
\end{tabular}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.57]{Etu_pion_Lmax_32.pdf}}
\end{tabular}
\caption{The valence ($u$ or $\bar{d}$) quark GPDs of the pion: (a) $H(x,0,t)$ and (b) $E_T(x,0,t)$ as functions of $x$ and the invariant momentum transfer $-t$. The GPDs are evaluated with $N_\text{max} = 8$ and $L_\text{max} = 32$ in the BLFQ-NJL model.}
\label{pion_gpds}
\end{figure*}
\begin{figure*}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.57]{Hu_kaon_Lmax_32.pdf}}
\end{tabular}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.57]{Hs_kaon_Lmax_32.pdf}}
\end{tabular}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.57]{Etu_kaon_Lmax_32.pdf}}
\end{tabular}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.57]{Ets_kaon_Lmax_32.pdf}}
\end{tabular}
\caption{The valence quark GPDs of the kaon: (a) $H(x,0,t)$ and (c) $E_T(x,0,t)$ are for the valence $u$ quark; (b) and (d) are the same as (a) and (c), respectively, but for the valence $\bar{s}$ quark, as functions of $x$ and the invariant momentum transfer $-t$. These GPDs are evaluated with $N_\text{max} = 8$ and $L_\text{max} = 32$ in the BLFQ-NJL model.}
\label{kaon_gpds}
\end{figure*}
\section{generalized parton distributions: kinematics and formalism \label{formalism}}
At leading twist, there are two independent GPDs for a spin-0 meson. One of them is chirally even, while the other is chirally odd.
These GPDs are defined through off-forward matrix elements of bilocal light-front correlation functions of the vector and tensor currents, respectively, as~\cite{Diehl:2003ny,Hagler:2009ni},
\begin{align}
&H^{\mathcal{P}}(x,\zeta,t)= \int \frac{dz^-}{4 \pi} e^{i x P^+ z^-/2}\nonumber\\
&\quad\times\,\langle\mathcal{P}(P')\vert\bar{\Psi}_q(0)\gamma^+ \Psi_q(z)\vert\mathcal{P}(P)\rangle\vert_{z^+=\textbf{z}^\perp=0},\label{H}\\
&\frac{i \epsilon_{ij}^\perp q_i^\perp}{2M_\mathcal{P}}E_T^\mathcal{P}(x,\zeta,t)= \int \frac{dz^-}{4 \pi} e^{i x P^+ z^-/2}\nonumber\\
&\quad\times\,\langle\mathcal{P}(P')\vert\bar{\Psi}_q(0)i \sigma^{j+}\gamma_5 \Psi_q(z)\vert\mathcal{P}(P)\rangle\vert_{z^+=\textbf{z}^\perp=0},\label{ET}
\end{align}
where $\Psi_q(z)$ is the quark field operator and $P$ ($P'$) denotes the momentum of the initial (final) state of the meson $\mathcal{P}$. $M_{\mathcal{P}}$ is the mass of the meson; $\epsilon_{ij}^\perp$ is the antisymmetric tensor in the transverse plane, and $\sigma^{j+}=\frac{i}{2}[\gamma^j,\gamma^+]$ with $j=1,~2$ the transverse index. The unpolarized quark GPD $H$ is chirally even, while the transversely polarized quark GPD $E_T$ is chirally odd. The GPD $E_T$ is responsible for the distortion in the spatial distribution of a transversely polarized quark, revealing a nontrivial spin structure of the meson~\cite{Brommel:2007xd}. The moments of the GPD $E_T$ can be linked to the Boer-Mulders function, which describes the correlation between the transverse spin and the intrinsic transverse momentum of the quark in the meson~\cite{Burkardt:2003uw,Burkardt:2002ks,Ahmady:2019yvo}. Recently, the limits of validity of this relationship have been discussed in Ref.~\cite{Pasquini:2019evu}. In the symmetric frame, the kinematical variables are
\begin{equation}
\bar{P}^\mu=\frac{(P+P')^\mu}{2}, ~ \Delta^\mu=P'^\mu-P^\mu, ~ \zeta=-\Delta^+/2\bar{P}^+,
\end{equation}
and $t=\Delta^2$. Here, we choose the light-cone gauge $A^+=0$, implying that the gauge link between the quark fields in Eqs.~(\ref{H}) and (\ref{ET}) is unity and is therefore omitted.
By inserting the initial and the final states of the meson, Eq.~(\ref{eq:Psi_meson_qqbar}), into Eqs.~(\ref{H}) and (\ref{ET}), one obtains the quark GPDs $H$ and $E_T$ in terms of overlaps of LFWFs. We restrict ourselves to the kinematical region $0<x<1$ at zero skewness. This domain corresponds to the situation where a quark is removed from the initial meson with light-front
longitudinal momentum $xP^+$ and reinserted into the final meson with the same longitudinal momentum. Therefore, the
change in momentum occurs purely in the transverse
direction. The particle number $n_p$ is conserved in this kinematical domain, corresponding to the diagonal $n_p\to n_p$ overlaps. At zero skewness, the GPDs $H$ and $E_T$ in the diagonal $2 \to 2$ overlap representation in terms of LFWFs are given by
\begin{align}
H(x,\zeta=0,t)&=\dfrac{1}{4\pi\,x(1-x)}\sum_{rs}\int \dfrac{d^2\vec{\kappa}^\perp}{(2\pi)^2}\,\nonumber\\
&\times\,\psi^*_{rs}(x,\vec{\kappa}'^\perp)\,\psi_{rs}(x,\vec{\kappa}^\perp),\label{eq:H_valence_psi}\\
\frac{i \Delta^\perp_j}{2M_{\mathcal{P}}}E_T(x,\zeta=0,t)&=\dfrac{1}{4\pi\,x(1-x)}\sum_{s}\int \dfrac{d^2\vec{\kappa}^\perp}{(2\pi)^2}\,\nonumber\\
&\times\,\Big[(-i)^j\psi^*_{\uparrow s}(x,\vec{\kappa}'^\perp)\,\psi_{\downarrow s}(x,\vec{\kappa}^\perp)\nonumber\\
& +(i)^j\psi^*_{\downarrow s}(x,\vec{\kappa}'^\perp)\,\psi_{\uparrow s}(x,\vec{\kappa}^\perp)\Big],\label{eq:ET_valence_psi}
\end{align}
where, for the struck quark, $\vec{\kappa}'^\perp=\vec{\kappa}^\perp+(1-x){\vec \Delta}^\perp$ and for the spectator, $\vec{\kappa}'^\perp=\vec{\kappa}^\perp-x{\vec \Delta}^\perp$ and the total momentum transferred to the meson is $t=-{\vec \Delta}^{\perp 2}$.
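The overlap formula, Eq.~(\ref{eq:H_valence_psi}), can be checked in closed form for a toy Gaussian wave function. The sketch below uses a hypothetical single-spin-component ansatz $\psi\propto\exp(-\vec\kappa^{\perp2}/2\beta^2)$ (not the BLFQ-NJL wave function; the width $\beta$ and the kinematics are placeholders) and verifies that shifting the struck-quark argument by $(1-x)\vec\Delta^\perp$ produces the Gaussian falloff $H(x,0,t)/H(x,0,0)=\exp[-(1-x)^2\vec\Delta^{\perp2}/4\beta^2]$.

```python
import numpy as np

def H_toy(x, Delta, beta=0.3, ngrid=401, kmax=3.0):
    """Diagonal 2 -> 2 overlap of Eq. (eq:H_valence_psi) for a toy
    single-component Gaussian LFWF; the x-dependent prefactors cancel
    in the ratio H(x,0,t)/H(x,0,0) studied below."""
    k = np.linspace(-kmax, kmax, ngrid)
    kx, ky = np.meshgrid(k, k)
    psi = np.exp(-(kx**2 + ky**2) / (2.0 * beta**2))
    # struck quark: kappa' = kappa + (1 - x) * Delta, Delta along the x-axis
    psi_shift = np.exp(-((kx + (1.0 - x) * Delta) ** 2 + ky**2)
                       / (2.0 * beta**2))
    dk = k[1] - k[0]
    return float(np.sum(psi_shift * psi)) * dk * dk

x, Delta, beta = 0.4, 0.5, 0.3          # placeholder kinematics
ratio = H_toy(x, Delta, beta) / H_toy(x, 0.0, beta)
expected = np.exp(-((1.0 - x) ** 2) * Delta**2 / (4.0 * beta**2))
```

For these placeholder values, both `ratio` and `expected` come out to $\exp(-0.25)\approx0.78$.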
Note that integrating the bilocal matrix element in Eq.~(\ref{H}) over the momentum fraction $x$ yields the local matrix elements that provide FFs. In the Drell-Yan frame, the expressions for the GPDs are very similar to those for FFs, except that the longitudinal momentum fraction $x$ of the struck parton is not integrated out. Therefore, GPDs defined in Eqs.~(\ref{eq:H_valence_psi}) and (\ref{eq:ET_valence_psi}) are also known as momentum-dissected FFs and measure the contribution of the struck parton with momentum fraction $x$ to the corresponding FFs. Consequently, the first moments of the GPDs can be related to the FFs for the spin-0 hadrons by the sum rules on the light-front as~\cite{Ji:1998pc}
\begin{align}
F(t)&=\int dx \, H(x,\zeta,t),\nonumber\\
F_T(t)&=\int dx \, E_T(x,\zeta,t).\label{eq:FF}
\end{align}
Meanwhile, the gravitational FFs which are expressed as
the matrix elements of the energy-momentum tensor, are linked to GPDs through the second-moment as~\cite{Ji:1998pc}
\begin{align}
A(t)&=\int dx \, x\,H(x,\zeta,t),\nonumber\\
B_T(t)&=\int dx \, x\, E_T(x,\zeta,t).\label{eq:GF}
\end{align}
Aside from these FFs, the impact parameter dependent GPDs are defined as the Fourier transform of the GPDs with respect to the momentum transfer along the transverse direction $\vec{\Delta}^\perp$~\cite{Burkardt:2002hr}:
\begin{align}
q(x, {\vec b}^\perp)& =
\int \frac{d^2{\vec \Delta}^\perp}{(2\pi)^2}
e^{-i {\vec \Delta}^\perp \cdot {\vec b}^\perp }
H(x,0,-{\vec \Delta}^{\perp 2}),\label{eq:Hb}\\
q_T(x, {\vec b}^\perp)& =\int \frac{d^2{\vec \Delta}^\perp}{(2\pi)^2}
e^{-i {\vec \Delta}^\perp \cdot {\vec b}^\perp }
E_T(x,0,-{\vec \Delta}^{\perp 2}),\label{eq:Eb}
\end{align}
where ${\vec b}^\perp$ is the Fourier conjugate to the momentum transfer ${\vec \Delta}^\perp$. For zero skewness, the impact parameter ${b}^\perp=|{\vec b}^\perp|$ measures the transverse distance of the struck parton from the center of momentum of the hadron. The variable $\vec{b}^{\perp}$ satisfies the condition $\sum_i x_i \vec{b}^{\perp}_{i}=0$, where the sum runs over all partons. The relative distance between the struck parton and the center of momentum of the spectator system is ${{b}^{\perp}/(1-x})$, which provides an estimate of the transverse size of the hadron~\cite{Diehl:2003ny}.
Following the standard formulation \cite{Miller:2007uy}, one can further define the transverse charge density $\rho({\vec b}^\perp)$ by
\begin{align}
\rho({\vec b}^\perp)&= \int \frac{d^2{\vec \Delta}^\perp}{(2\pi)^2}
e^{-i {\vec \Delta}^\perp \cdot {\vec b}^\perp } F(-{\vec \Delta}^{\perp 2})\nonumber\\&=\int_0^1 dx \, q(x, {\vec b}^\perp)\label{eq:rho},
\end{align}
while the longitudinal momentum density for a given transverse separation is given by~\cite{Abidin:2008sb,Chakrabarti:2015lba,Mondal:2015fok,Kumar:2017dbf,Mondal:2016xsm}
\begin{align}
p({\vec b}^\perp)&= \int \frac{d^2{\vec \Delta}^\perp}{(2\pi)^2}
e^{-i {\vec \Delta}^\perp \cdot {\vec b}^\perp } A(-{\vec \Delta}^{\perp 2})\nonumber\\&=\int_0^1 dx \, x\, q(x, {\vec b}^\perp)\label{eq:rho2}.
\end{align}
\begin{figure*}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.3]{GPD_pion_H_evolved.pdf}}
\end{tabular}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.3]{GPD_pion_Et_evolved.pdf}}
\end{tabular}
\caption{Scale evolution of the valence ($u$ or $\bar{d}$) quark GPDs of the pion: (a) $H(x,0,t)$ and (b) $E_T(x,0,t)$ as functions of $x$ at the fixed value $-t=0.11$ GeV$^2$. The GPDs are evaluated with $N_\text{max} = 8$ and $L_\text{max} = 32$ within the BLFQ-NJL model. The GPDs are evolved from our model scale for the pion, $\mu_{0\pi}^2=0.240$ GeV$^2$, to the final scales $\mu^2=1,~10,~100$ GeV$^2$.}
\label{evolution_pion_gpds}
\end{figure*}
\begin{figure*}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.3]{GPD_kaon_Hu_evolved.pdf}}
\end{tabular}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.3]{GPD_kaon_Hs_evolved.pdf}}
\end{tabular}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.3]{GPD_kaon_Etu_evolved.pdf}}
\end{tabular}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.3]{GPD_kaon_Ets_evolved.pdf}}
\end{tabular}
\caption{Scale evolution of the valence quark GPDs of the kaon: (a) $H(x,0,t)$ and (c) $E_T(x,0,t)$ are for the valence $u$ quark; (b) and (d) are the same as (a) and (c), respectively, but for the valence $\bar{s}$ quark, as functions of $x$ at the fixed value $-t=0.11$ GeV$^2$. The GPDs are evaluated with $N_\text{max} = 8$ and $L_\text{max} = 32$ within the BLFQ-NJL model. The GPDs are evolved from our model scale for the kaon, $\mu_{0K}^2=0.246$ GeV$^2$, to the final scales $\mu^2=1,~10,~100$ GeV$^2$.}
\label{evolution_kaon_gpds}
\end{figure*}
\begin{figure*}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.3]{momentH_pion.pdf}}
\end{tabular}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.3]{momentEt_pion.pdf}}
\end{tabular}
\caption{The first two Mellin moments of the valence quark GPDs of the pion: (a) $-tA^{\pi,u(\bar{d})}_{n0}(t)$ and (b) $-tB^{\pi,u(\bar{d})}_{Tn0}(t)$ for $n=1$ (black lines) and $n=2$ (blue lines) as functions of $-t$.
The electromagnetic form factor $A_{10}(t)$ of the pion is compared with the experimental data~\cite{Amendolia:1986wj,Bebek:1974iz,Bebek:1974ww,Bebek:1977pe,Volmer:2000ek,Horn:2006tm} and the lattice QCD result~\cite{Brommel:2006ww}. The gravitational FF $A_{20}(t)$ is compared with the parameterization of lattice QCD simulations at $\mu^2=4$ GeV$^2$, while $B_{T10}(t)$ and $B_{T20}(t)$ are compared with lattice QCD and the $\chi$QM results at the same scale $\mu^2=4$ GeV$^2$.
The lines with circle and triangle symbols correspond to the results calculated in the BLFQ-NJL model (present work). The dashed ($n=1$) and dotted ($n=2$) lines represent the lattice QCD results~\cite{Brommel:2007xd}, whereas the dash-dotted ($n=1$) and solid ($n=2$) lines in (b) represent the $\chi$QM~\cite{Nam:2010pt} results. The experimental results in (a) are for the EMFF only.
}
\label{pion_moments}
\end{figure*}
\begin{figure*}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.3]{moment_kaon_A_part1.pdf}}
\end{tabular}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.15]{moment_kaon_A_part2.pdf}}\\
\end{tabular}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.335]{moment_Et_kaon2.pdf}}
\end{tabular}
\caption{The first two Mellin moments of the valence quark GPDs of the kaon: (a) $-tA^{K,u(\bar{s})}_{n0}(t)$, (b) $-tA^{K}_{n0}(t)$, and (c) $-tB^{K,u(\bar{s})}_{Tn0}(t)$ for $n=1$ (black lines) and $n=2$ (blue lines) as functions of $-t$.
The electromagnetic form factor $A_{10}(t)$ of the kaon in (b) is compared with the experimental data~\cite{Dally:1980dj,Amendolia:1986ui}. The inset in (b) provides an expanded view of the EMFF at low $-t$. The tensor FFs in (c) $B_{T10}(t)$ and $B_{T20}(t)$ are compared with the $\chi$QM~\cite{Nam:2011yw} at $\mu^2=4$ GeV$^2$. The lines with circle and triangle symbols correspond to the results calculated in the BLFQ-NJL model (this work). The lines without symbols in (c) represent the $\chi$QM results. The experimental results in (b) are for the EMFF only.
}
\label{kaon_H_moments}
\end{figure*}
\section{numerical results and discussion \label{result}}
\subsection{GPDs and generalized form factors \label{FFs}}
The LFWFs of the valence quarks in the pion and the kaon~\cite{Jia:2018ary} have been solved for in the BLFQ framework using the NJL interactions, as briefly discussed in Sec.~\ref{sc:BLFQ_NJL}.
We insert the valence wave functions given by Eq.~\eqref{eq:psi_rs_basis_expansions} into Eqs.~\eqref{eq:H_valence_psi} and \eqref{eq:ET_valence_psi} to calculate the GPDs for the pion and the kaon. We employ the wave functions obtained at the basis truncation $N_\text{max} = 8$ and $L_\text{max} = 32$ with other model parameters given in Table~\ref{tab:model_parameters}.
We illustrate the valence GPDs $H^q$ and $E_T^q$ ($q\equiv u$ or $\bar{d}$) as functions of $x$ and $-t$ for the pion in Fig.~\ref{pion_gpds}. In the forward limit ($-t=0$), the unpolarized GPD $H$ reduces to the ordinary PDF, which peaks at $x=0.5$ for the pion, reflecting the symmetry between the valence quark and the valence antiquark. Unlike the unpolarized GPD $H$, the chiral-odd GPD $E_T$ in the pion has its peak located below the central value of $x$ and is asymmetric under $x\leftrightarrow (1-x)$ even when $-t=0$. This is because $E_T$ involves overlaps of wave functions with different orbital angular momenta, $L_z=0$ and $L_z=\pm 1$. The peaks of these GPDs shift toward higher values of $x$, and the magnitudes of the distributions decrease, as $-t$ increases.
The valence quark GPDs for the kaon are shown in Fig.~\ref{kaon_gpds}. The up quark GPD $H(x,0,t)$ in the kaon, unlike the valence quark GPD $H(x,0,t)$ in the pion, has its maximum at lower $x$ ($<0.5$) when $t=0$, whereas, owing to its heavier mass, the peak of the strange quark distribution appears at higher $x$ ($>0.5$). Meanwhile, the peaks shift to larger values of $x$ with increasing $-t$, similar to the behavior observed in the pion GPDs. This appears to be a model-independent feature of the GPDs, which has been noticed in other phenomenological models for the pion~\cite{Kaur:2018ewq} as well as for the nucleon~\cite{Chakrabarti:2013gra,Mondal:2015uha,Chakrabarti:2015ama,Xu:2021wwj}. We also notice that the GPD $E_T$ for the up quark in the kaon exhibits a behavior similar to that observed in the pion; however, the magnitude of $E_{Tu}$ in the kaon is larger than that in the pion. Meanwhile, $E_{Ts}$ displays a different behavior compared to $E_{Tu}$ in the kaon: $E_{Ts}$ is broader in $x$ and falls more slowly at large $x$ than $E_{Tu}$. As $-t$ increases, $E_{Tu}$ also falls faster than $E_{Ts}$ in the kaon. One can also observe oscillations in the GPDs along $x$ in Figs.~\ref{pion_gpds} and \ref{kaon_gpds}, which are numerical artifacts of the longitudinal cutoff $L_{\rm max}$; their amplitudes decrease with increasing $L_{\rm max}$~\cite{Lan:2019rba}.
By performing the QCD evolution, the valence quark GPDs at a higher scale $\mu^2$ can be obtained from the input GPDs at the model scale $\mu_0^2$.
For this scale evolution, we adopt the DGLAP equations~\cite{Dokshitzer:1977sg,Gribov:1972ri,Altarelli:1977zs} of QCD at NNLO.
Explicitly, we evolve our input GPDs to the relevant experimental scales, with independently adjustable initial scales for the pion and the kaon GPDs, utilizing the higher order perturbative parton evolution toolkit (HOPPET)~\cite{Salam:2008qg}. We adopt ${\mu_{0\pi}^2=0.240\pm0.024~\rm{GeV}^2}$ for the initial scale of the pion GPDs and ${\mu_{0K}^2=0.246\pm0.024~\rm{GeV}^2}$ for that of the kaon GPDs, which we determined by requiring the results after NNLO DGLAP evolution to fit the experimental pion and kaon PDF results~\cite{Lan:2019rba}.
We show the valence quark GPDs in the pion and the kaon for a fixed value of $-t$ at different $\mu^2$ evolved from the corresponding initial scales in Fig.~\ref{evolution_pion_gpds} and Fig.~\ref{evolution_kaon_gpds}, respectively. We observe that the peaks of the distributions move to lower $x$ as we evolve the GPDs to higher scales. The moments of the distributions decrease uniformly as the scale $\mu^2$ increases.
The qualitative behavior of the evolved GPDs is similar in both the pion and the kaon.
The Mellin moments of the valence GPDs give the generalized FFs. The Mellin moments are defined as~\cite{Brommel:2007xd}
\begin{eqnarray}
A^q_{n0}(t)=\int_0^1~dx\, x^{n-1}\,H^q(x,0,t),\label{moment_formula_A}\\
B^q_{Tn0}(t)=\int_0^1~dx\, x^{n-1}\,E_T^q(x,0,t),\label{moment_formula_B}
\end{eqnarray}
where the index $n=1,2,3\dots$, and the second subscript corresponds to the fact that the moments are evaluated at zero skewness ($\zeta=0$).
The first moment of the unpolarized GPD ${H}^q(x,0,t)$ gives the electromagnetic FF, $F^q(t)=A^q_{10}(t)$, of an unpolarized quark; in the forward limit, i.e., $t = 0$, the FF $F^q(0)$ counts the number of valence quarks of flavor $q$.
The first moment of the chiral-odd GPD ${E}^q_{T}(x,0,t)$ provides the tensor FF $B_T^q(t)$ when the quark is transversely polarized.
The second moments of these GPDs correspond to the gravitational FFs of the quarks. Meanwhile, the third moments of the GPDs provide the FFs of a twist-two operator with two covariant derivatives~\cite{Diehl:2003ny,Belitsky:2005qn}, and the higher moments produce the FFs of twist-two operators with additional covariant derivatives.
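Numerically, the moments in Eqs.~(\ref{moment_formula_A}) and (\ref{moment_formula_B}) are ordinary one-dimensional integrals once the GPD is known. The sketch below computes $A_{10}(0)$ and $A_{20}(0)$ for a toy forward distribution $H(x,0,0)\propto x^{a}(1-x)^{b}$ (the exponents are illustrative placeholders, not a fit to our model), checking the valence-number sum rule $A_{10}(0)=1$ and the momentum fraction $\langle x\rangle=A_{20}(0)$.

```python
from scipy.integrate import quad
from scipy.special import beta as beta_fn

a_exp, b_exp = 0.5, 2.0                          # illustrative exponents only
norm = 1.0 / beta_fn(a_exp + 1.0, b_exp + 1.0)   # enforce A_{10}(0) = 1
H_forward = lambda x: norm * x**a_exp * (1.0 - x) ** b_exp  # toy H(x,0,0)

def mellin_moment(gpd, n):
    """A_{n0} of Eq. (moment_formula_A) at fixed t."""
    val, _ = quad(lambda x: x ** (n - 1) * gpd(x), 0.0, 1.0)
    return val

A10 = mellin_moment(H_forward, 1)   # valence-quark number sum rule
A20 = mellin_moment(H_forward, 2)   # momentum fraction <x>
```

For this Beta-distributed toy GPD, $\langle x\rangle=(a+1)/(a+b+2)=1/3$, which the numerical moment reproduces.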
\begin{table*}
\caption{BLFQ-NJL model predictions for $A^{\pi(K),q}_{20}(0)$, $B^{\pi(K),q}_{T10}(0)$ and $B^{\pi(K),q}_{T20}(0)$ at the scale 4 GeV$^2$. We compare our results with the available lattice QCD simulations~\cite{Brommel:2007xd},
the $\chi$QMs~\cite{Nam:2010pt,Nam:2011yw}, and the CCQM~\cite{Fanelli:2016aqc} at the same scale 4 GeV$^2$. The errors in our results correspond to the QCD evolution from the initial scales ${\mu_{0\pi}^2=0.240\pm0.024~\rm{GeV}^2}$ for the pion and ${\mu_{0K}^2=0.246\pm0.024~\rm{GeV}^2}$ for the kaon.} \label{tab:tensor_charge}
\centering
\begin{tabular}{ccc ccc ccc ccc ccc ccc ccc}
\toprule
Quantity ~&~ BLFQ-NJL ~&~ Lattice QCD~\cite{Brommel:2007xd} ~&~ $\chi$QM~\cite{Nam:2010pt} ~&~ $\chi$QM~\cite{Nam:2011yw} ~&~ $\chi$QM~\cite{Nam:2011yw} ~&~ CCQM~\cite{Fanelli:2016aqc} ~&\\
~&~ (this work) ~&~ ~&~ ~&~ (model I) ~&~ (model II)& &\\
\hline
\vspace{0.1cm}
$A^{\pi,q}_{20}(0)$ & $0.244\pm 0.018$ & $0.27 \pm 0.01$ & ... & ... & ... & $0.248$ & \\
\vspace{0.1cm}
$A^{K,u}_{20}(0)$ & $0.235 \pm 0.018$ & ... & ... & ... & ... & ... & \\
\vspace{0.1cm}
$A^{K,\bar{s}}_{20}(0)$ & $0.265 \pm 0.020$ & ... & ... & ... & ... & ...& \\
\hline
\vspace{0.1cm}
$B^{\pi,q}_{T10}(0)$ & $0.229\pm 0.004$ & $0.216 \pm 0.034$ & $0.216$ & ... & ... & $0.126$ & \\\vspace{0.1cm}
$B^{K,u}_{T10}(0)$ & $0.821 \pm 0.014$ & ... & ... & $0.783$ & $0.611$ & ... & \\
\vspace{0.1cm}
$B^{K,\bar{s}}_{T10}(0)$ & $0.706\pm0.010$ & ... & ... & $0.676$ & $0.421$ & ... & \\
\hline
\vspace{0.1cm}
$B^{\pi,q}_{T20}(0)$ & $0.045\pm0.004$ & $0.039 \pm 0.010$ & $0.032$ & ... & ... & $0.028$ &\\
\vspace{0.1cm}
$B^{K,u}_{T20}(0)$ & $0.152\pm 0.011$ & ... & ... & $0.139$ & $0.090$ & ... & \\
\vspace{0.1cm}
$B^{K,\bar{s}}_{T20}(0)$ & $0.152\pm 0.011$ & ... & ... & $0.100$ & $0.076$ & ... & \\
\botrule
\end{tabular}
\end{table*}
\begin{figure*}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.45]{Hu_pion_bspace_Lmax_32.pdf}}
\end{tabular}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.45]{Etu_pion_bspace_Lmax_32.pdf}}
\end{tabular}
\caption{The valence ($u$ or $\bar{d}$) quark GPDs for the pion in the transverse impact parameter space: (a) $q(x, {\vec b}^{\perp})$ and (b) $q_T(x,{\vec b}^{\perp})$ as functions of $x$ and $b^\perp$,
where $b^\perp$ is the transverse distance of the active quark from the center of momentum of the hadron.}
\label{pion_impact_gpds}
\end{figure*}
\begin{figure*}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.45]{Hu_kaon_bspace_Lmax_32.pdf}}
\end{tabular}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.45]{Hs_kaon_bspace_Lmax_32.pdf}}
\end{tabular}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.45]{Etu_kaon_bspace_Lmax_32.pdf}}
\end{tabular}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.45]{Ets_kaon_bspace_Lmax_32.pdf}}
\end{tabular}
\caption{The valence quark GPDs of the kaon in the transverse impact parameter space: (a) $q(x, {\vec b}^{\perp})$ and (c) $q_T(x, {\vec b}^{\perp})$ are for the valence $u$ quark; (b) and (d) are the same as (a) and (c), respectively, but for the valence $\bar{s}$ quark, as functions of $x$ and $b^\perp$, where $b^\perp$ is the transverse distance of the active quark from the center of momentum of the hadron.}
\label{kaon_impact_gpds}
\end{figure*}
In Fig.~\ref{pion_moments}(a), we present the first two moments of the GPD ${H}^q(x,0,t)$ of the pion. The EMFF of the pion is given by $F^\pi(t)=e_u A^{\pi,u}_{10}(t)+e_{\bar{d}}A^{\pi,\bar{d}}_{10}(t)$, where $e_q$ denotes the charge of the quark $q$. We find that the pion EMFF within our BLFQ-NJL model is in good agreement with the experimental data and with the lattice QCD simulations. The second moment of the GPD ${H}^q(x,0,t)$ is the gravitational FF $A^q_{20}(t)$, which, at $t=0$, gives the momentum fraction $\langle x\rangle_q$ carried by the quark. For the pion, $A^{\pi,u}_{20}(0)=A^{\pi,\bar{d}}_{20}(0)=0.5$ at the model scale. To compare with lattice QCD, we evolve the GPD to the relevant scale. As summarized in Table~\ref{tab:tensor_charge}, we obtain $A^{\pi,q}_{20}(0)=0.244 \pm 0.018$ at $\mu^2=4$ GeV$^2$, which is compatible with the result from the covariant constituent quark model (CCQM)~\cite{Fanelli:2016aqc}, while lattice QCD provides the value $0.27\pm 0.01$~\cite{Brommel:2007xd}.
In addition, a substantial difference between our BLFQ-NJL model and lattice QCD for $A^{\pi,q}_{20}(t)$ is observed when $-t$ is nonzero, with the disagreement increasing as $-t$ increases, as can be seen in Fig.~\ref{pion_moments}(a). We show the tensor FFs of the pion in Fig.~\ref{pion_moments}(b), where we also compare the FFs $B^{\pi,q}_{T10}(t)$ and $B^{\pi,q}_{T20}(t)$ with the lattice QCD results evaluated at the physical pion mass~\cite{Brommel:2007xd}. At $\mu^2=4$ GeV$^2$, we obtain $B_{T10}^{\pi,q}(0)=0.229\pm 0.004$ and $B_{T20}^{\pi,q}(0)=0.045\pm 0.004$, which agree reasonably, within uncertainties, with the lattice QCD results $B_{T10}^{\pi,q}(0)=0.216\pm0.034$ and $B_{T20}^{\pi,q}(0)=0.039\pm0.010$, respectively. It is notable that $B_{T10}^{\pi,q}(0)$ in the CCQM~\cite{Fanelli:2016aqc} differs significantly from our result. The qualitative behavior of the tensor FFs $B^{\pi,q}_{T10}(t)$ and $B^{\pi,q}_{T20}(t)$ is also comparable with the lattice QCD calculations and the chiral quark model ($\chi$QM)~\cite{Nam:2010pt}, as shown in Fig.~\ref{pion_moments}(b).
Fig.~\ref{kaon_H_moments} shows the moments of the kaon GPDs. As can be seen from Fig.~\ref{kaon_H_moments}(a), the magnitude of $-tA^{K,u}_{n0}(t)$ is lower than that for the $\bar{s}$ quark, implying a faster fall-off of the $u$ quark EMFF compared to the $\bar{s}$ quark in the kaon as $-t$ increases. The EMFF of the kaon, $F^K(t)=e_u A^{K,u}_{10}(t)+e_{\bar{s}}A^{K,\bar{s}}_{10}(t)$, is in good agreement with the experimental data, as shown in Fig.~\ref{kaon_H_moments}(b). This is expected, since the parameters of the BLFQ-NJL model are partially determined by the experimental charge radii. On the other hand, we obtain $A^{K,u}_{20}(0)=0.43$ and $A^{K,\bar s}_{20}(0)=0.57$ at the model scale, whereas at $\mu^2=4$ GeV$^2$ the corresponding values are $A^{K,u}_{20}(0)=0.235\pm 0.018$ and $A^{K,\bar s}_{20}(0)=0.265\pm 0.020$, as summarized in Table~\ref{tab:tensor_charge}. We also illustrate the $t$ dependence of the kaon gravitational FFs $A^{K,u}_{20}(t)$ and $A^{K,\bar s}_{20}(t)$ at $\mu^2=4$ GeV$^2$ in Fig.~\ref{kaon_H_moments}(a). The tensor FFs for the kaon in our BLFQ-NJL model are presented in Fig.~\ref{kaon_H_moments}(c), in comparison with the $\chi$QM calculation (model I in Ref.~\cite{Nam:2011yw}). The qualitative behavior of $-tB^{K,q}_{Tn0}(t)$ in the two models agrees. At large $-t$, $-tB_{Tn0}(t)$ for the $\bar{s}$ quark is larger than that for the $u$ quark in the BLFQ-NJL model, while the $\chi$QM shows the opposite. We also compare the quark tensor FFs at $t=0$ in the kaon with the $\chi$QM in Table~\ref{tab:tensor_charge}.
\begin{figure*}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.25]{ourContourPlotUnpol_pion.pdf}}
\end{tabular}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.25]{ourContourPlotTranv2_pion.pdf}}
\end{tabular}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.3]{our2dPlotUnpol_pion.pdf}}
\end{tabular}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.3]{our2dPlotTranv2_pion.pdf}}
\end{tabular}
\caption{The valence quark probability density in the pion in the transverse impact parameter plane: (a) when the quark is unpolarized, $\rho({\vec b}^{\perp})=\rho^{n=1}({\vec b}^{\perp},{\vec s}^{\perp}={\vec 0})$, and (b) when the quark is transversely polarized along the $x$ direction, $\rho_T({\vec b}^{\perp})=\rho^{n=1}({\vec b}^{\perp},{\vec s}^{\perp}=(1,0))$. Our corresponding results are compared with lattice QCD and the $\chi$QM model in (c) and (d), respectively. The solid, dotted and dash-dotted lines correspond to the BLFQ-NJL model, lattice QCD~\cite{Brommel:2007xd} and the $\chi$QM~\cite{Nam:2010pt}, respectively.}
\label{pion_density}
\end{figure*}
\begin{figure*}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.25]{ourContourPlotUnpoln1_uquark.pdf}}
\end{tabular}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.25]{ourContourPlotTranv2_uquark.pdf}}
\end{tabular}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.25]{ourContourPlotUnpoln1_squark.pdf}}
\end{tabular}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.25]{ourContourPlotTranv2_squark.pdf}}
\end{tabular}
\caption{The valence quarks' probability densities in the kaon in the transverse impact parameter plane: (a) for an unpolarized $u$ quark, $\rho({\vec b}^{\perp})=\rho^{n=1}({\vec b}^{\perp},{\vec s}^{\perp}=\vec{0})$, and (b) for a transversely polarized $u$ quark, $\rho_T({\vec b}^{\perp})=\rho^{n=1}({\vec b}^{\perp},{\vec s}^{\perp}=(1,0))$. Panels (c) and (d) are the same as (a) and (b), respectively, but for the $\bar{s}$ quark in the kaon.}
\label{kaon_density}
\end{figure*}
\begin{figure*}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.3]{our2dPlotUnpol_uquark.pdf}}
\end{tabular}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.3]{our2dPlotTranv2_uquark.pdf}}
\end{tabular}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.3]{our2dPlotUnpol_squark.pdf}}
\end{tabular}
\begin{tabular}{cc}
\subfloat[]{\includegraphics[scale=0.3]{our2dPlotTranv2_squark.pdf}}
\end{tabular}
\caption{The valence quarks' probability densities in the kaon in our BLFQ-NJL model (solid lines) are compared with $\chi$QM predictions~\cite{Nam:2011yw} (dashed lines).}
\label{kaon_density_2d}
\end{figure*}
\subsection{Spin densities of the pion and the kaon \label{densities}}
The GPDs in the transverse impact parameter space at zero skewness can be interpreted as the densities of quarks with longitudinal momentum fraction $x$ and transverse location ${\vec b}^{\perp}$ with respect to the center of momentum of the hadron independent of the polarization. On the one hand, the density $\rho(x,{\vec b}^{\perp},\lambda)$ of quarks with helicity $\lambda$
in the pion (kaon) is determined by the unpolarized density, $2\rho(x,{\vec b}^{\perp},\lambda) = q(x,{\vec b}^{\perp})$, where the latter is the ${\vec b}^{\perp}$-dependent GPD at zero skewness given by Eq.~(\ref{eq:Hb}). On the other hand, the density of quarks with transverse spin $\vec{s}^\perp$, $\rho(x,{\vec b}^{\perp},{\vec s}^{\perp})$, in the pion (kaon) can be expressed as a combination of the GPDs $q(x,{\vec b}^{\perp})$ and $q_T(x,{\vec b}^{\perp})$ as
\begin{align}
\rho(x,{\vec b}^{\perp},{\vec s}^{\perp})=\frac{1}{2} \left[ q(x,{\vec b}^{\perp})
- \frac{{\vec s}^{\perp}_i \epsilon_{ij}^{\perp} {\vec b}^{\perp}_j}{M_{\mathcal{P}}}\,
q_T^{\prime}(x,{\vec b}^{\perp})
\,\right] \,,
\end{align}
where $q_T^{\prime}(x,{\vec b}^{\perp})=\frac{\partial}{\partial (b^\perp)^2}\,q_T(x,{\vec b}^{\perp})$. The quark spin densities have been investigated in Refs.~\cite{Diehl:2005jf,Gockeler:2006zu,Pasquini:2007xz,Maji:2017ill} for quarks with transverse spin ${\vec s}^{\perp}$ in the nucleon having transverse spin~(${\vec S}^{\perp}$). The corresponding expression for transversely polarized quarks in the pseudoscalar mesons is achieved by setting ${\vec S}^{\perp}=0$ in the nucleon densities~\cite{Gockeler:2006zu}. One finds that the result is much simpler but still involves a dipole term $\propto {\vec s}^{\perp}_i \epsilon_{ij}^{\perp} {\vec b}^{\perp}_j$ leading to a
dependence on the direction of ${\vec b}^{\perp}$ for fixed ${\vec s}^{\perp}$. The $x$-moments of quark spin densities are then given by~\cite{Brommel:2007xd}
\begin{align}
\rho^{n}({\vec b}^{\perp},{\vec s}^{\perp})
&= \int_{0}^{1} dx\, x^{n-1} \rho(x,{\vec b}^{\perp},{\vec s}^{\perp}) \nonumber\\&
= \frac{1}{2} \left[ \mathcal{A}^q_{n0}({\vec b}^{\perp})
- \frac{{\vec s}^{\perp}_i \epsilon_{ij}^{\perp} {\vec b}^{\perp}_j}{M_{\mathcal{P}}}\,
\mathcal{B}_{Tn0}^{q\prime}({\vec b}^{\perp})
\,\right] \,,
\label{density}
\end{align}
where the ${\vec b}^{\perp}$-dependent vector and tensor generalized FFs, $\mathcal{A}^q_{n0}$ and $\mathcal{B}^q_{Tn0}$, are obtained by performing the Fourier transform of the FFs $A^q_{n0}(t)$ and $B^q_{Tn0}(t)$ with respect to $\vec{\Delta}^\perp$ or equivalently by taking the $x$ moments of the impact parameter dependent GPDs $q(x,{\vec b}^{\perp})$ and $q_T(x,{\vec b}^{\perp})$, respectively:
\begin{align}
\mathcal{A}^q_{n0}({\vec b}^{\perp})&=\int \frac{d^2{\vec \Delta}^\perp}{(2\pi)^2}
e^{-i {\vec \Delta}^\perp \cdot {\vec b}^\perp } {A}^q_{n0}(-{\vec \Delta}^{\perp 2})\nonumber\\&=\int_{0}^1 dx\, x^{n-1} q(x,{\vec b}^{\perp}) \,,
\nonumber\\
\mathcal{B}^q_{T n0}({\vec b}^{\perp})&=\int \frac{d^2{\vec \Delta}^\perp}{(2\pi)^2}
e^{-i {\vec \Delta}^\perp \cdot {\vec b}^\perp } {B}^q_{Tn0}(-{\vec \Delta}^{\perp 2})\nonumber\\&=\int_{0}^1 dx\, x^{n-1} q_T(x,{\vec b}^{\perp}) \,.
\label{EqGPDmoments}
\end{align}
The impact parameter dependent GPDs $q(x,{\vec b}^{\perp})$ and $q_T(x,{\vec b}^{\perp})$ for the pion are presented in Fig.~\ref{pion_impact_gpds}. We find that both distributions have sharp peaks located at the center of the pion ($b^\perp=0$) when the quark carries large longitudinal momentum. Nevertheless, the magnitude of the unpolarized distribution is much higher compared
to that of the polarized distribution. A substantial difference is also observed between $q(x,{\vec b}^{\perp})$ and $q_T(x,{\vec b}^{\perp})$ at large $x$. We also notice that the qualitative behavior of the GPDs $q(x,{\vec b}^{\perp})$ and $q_T(x,{\vec b}^{\perp})$ for the kaon, shown in Fig.~\ref{kaon_impact_gpds}, is very similar to that for the pion. However, due to the heavier mass of the $\bar{s}$ quark, its distributions are narrower than those of the $u$ quark in the kaon. Another interesting feature is that the width of all the GPDs in the transverse impact parameter space decreases as $x$ increases. This indicates that the distributions are more localized near the center of momentum ($b_\perp=0$) when the quarks carry higher longitudinal momentum. This characteristic of the GPDs in the transverse impact parameter space is reassuring, since the distributions in momentum space become broader in $-t$ with increasing $x$, as can be seen from Figs.~\ref{pion_gpds} and \ref{kaon_gpds}. On the light front, this is understood as follows: the larger the momentum fraction, the lower the kinetic energy carried by the quarks. As the total kinetic energy remains limited, the distribution in the transverse momentum must become broader to carry a larger portion of the kinetic energy. This model-independent property of the GPDs is also observed in the case of the nucleon~\cite{Chakrabarti:2013gra,Mondal:2015uha,Chakrabarti:2015ama,Maji:2017ill}.
We present the first moment of the quark-spin probability density $\rho^{n=1}({\vec b}^{\perp},{\vec s}^{\perp})$, defined in Eq.~(\ref{density}), in Fig.~\ref{pion_density}. When the quark is unpolarized (${\vec s}^{\perp}= \vec{0}$), only $\mathcal{A}^q_{10}({\vec b}^{\perp})$ contributes to the probability density, which is rotationally symmetric in the two-dimensional impact-parameter $(b_x,b_y)$ plane, as shown in Fig.~\ref{pion_density}(a), and hence exhibits no interesting structure. We now turn our attention to the case when the quark is transversely polarized. Without loss of generality, we consider the quark polarized along the $x$-axis, i.e., ${\vec s}^{\perp}= (+1,0)$, and show
the numerical results as functions of $b_x$ and $b_y$. The probability density becomes distorted when the quark inside the pion is transversely polarized as can be seen from Fig.~\ref{pion_density}(b) indicating the spin structure inside the pion. The second
term in Eq.~(\ref{density}) provides the distortion, and one can clearly observe the deviation from rotational symmetry of the unpolarized density due to the polarization. We also find that the present results are very similar to those given by the lattice QCD
calculation~\cite{Brommel:2007xd}. For instance, in the lower panel of Fig.~\ref{pion_density} we show the probability densities as a function of $b_y$ at fixed $b_x=0.15$ fm, comparing them with those of the lattice QCD simulations and the $\chi$QM~\cite{Nam:2010pt}. The BLFQ-NJL model results are found to be consistent with the results of lattice QCD and the $\chi$QM. The spin densities of the $u$ and $\bar{s}$ quarks in the kaon are shown in Fig.~\ref{kaon_density}, where we notice patterns of the quark-spin probability densities similar to those observed in the pion. It is, however, interesting to note that the $\bar{s}$-quark densities, due to the heavier $\bar{s}$ mass, are more localized near the origin than the $u$-quark densities in the kaon. We also find that the qualitative behavior of the present results in the BLFQ-NJL model is compatible with the results obtained in the $\chi$QM~\cite{Nam:2011yw}, as shown in Fig.~\ref{kaon_density_2d}, where we plot the probability densities as a function of $b_y$ at fixed $b_x=0.15$ fm.
\subsection{Average transverse shift and transverse squared radius \label{transverse_shift}}
It is also interesting to examine the average transverse shift of the peak position of the probability density along the $b_y$ direction for a transverse quark spin in the $x$-direction, which is defined as~\cite{Brommel:2007xd}
\begin{align}
\label{shift}
\langle b_y^\perp \rangle_{n}
&= \frac{\int d^2 \vec{b}^\perp\, b_{y}^\perp\, \rho^{n}({\vec b}^{\perp},{\vec s}^{\perp})}{%
\int d^2 \vec{b}^\perp\, \rho^{n}({\vec b}^{\perp},{\vec s}^{\perp})}
= \frac{1}{2 M_{\mathcal{P}}}\, \frac{B^q_{T n0}(0)}{A^{q}_{n0}(0)}.
\end{align}
Our BLFQ-NJL model results for the pion give $\langle b_y^\perp \rangle_1=0.162\pm 0.003$ fm and $\langle b_y^\perp \rangle_2=0.131\pm 0.003$ fm, while the lattice simulations provide~\cite{Brommel:2007xd} $\langle b_y^\perp \rangle_1 = 0.151(24)$ fm and $\langle b_y^\perp \rangle_2 = 0.106(28)$ fm. Our results for $\langle b_y^\perp \rangle_{n=1,2}$ for the pion and the kaon are compared with the lattice QCD, the $\chi$QM, CCQM, and NJL model in Table~\ref{tab:shift}.
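As a quick arithmetic check of Eq.~(\ref{shift}), the pion value of $\langle b_y^\perp \rangle_1$ follows directly from the form-factor values quoted above; the sketch below assumes the physical pion mass and the standard $\hbar c$ conversion factor.

```python
hbar_c = 0.19733   # GeV fm, conversion factor
M_pi   = 0.139     # GeV, physical pion mass (assumed)
B_T10  = 0.229     # B_{T10}^{pi,q}(0) at mu^2 = 4 GeV^2, quoted above
A_10   = 1.0       # A_{10}^{pi,q}(0), quark-number normalization

shift = B_T10 / (2.0 * M_pi * A_10) * hbar_c  # Eq. (shift) with n = 1
print(shift)  # ~0.16 fm, consistent with the quoted 0.162 +/- 0.003 fm
```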
\begin{table*}
\caption{BLFQ-NJL model predictions for average transverse shift $\langle b_y^\perp \rangle_{1,2}^{q}$ in the pion and the kaon. Our results are compared with the available lattice simulations~\cite{Brommel:2007xd}, the $\chi$QM~\cite{Nam:2010pt,Nam:2011yw}, CCQM~\cite{Fanelli:2016aqc}, and NJL model~\cite{Zhang:2021tnr}.
}\label{tab:shift}
\centering
\begin{tabular}{ccc ccc ccc ccc ccc ccc ccc}
\toprule
Approach ~&~ $\langle b_y^\perp \rangle_{1}^{q,\pi}$ fm ~&~ $\langle b_y^\perp \rangle_{2}^{q,\pi}$ fm ~&~ $\langle b_y^\perp \rangle_{1}^{u,K}$ fm ~&~ $\langle b_y^\perp \rangle_{1}^{\bar{s},K}$ fm ~&~ $\langle b_y^\perp \rangle_{2}^{u,K}$ fm ~&~ $\langle b_y^\perp \rangle_{2}^{\bar{s},K}$ fm &\\
\colrule
BLFQ-NJL (this work) & $0.162\pm 0.003$ & $0.131\pm 0.003$ & $0.164\pm 0.003$ & $0.141\pm 0.002$ & $0.114\pm 0.002$ & $0.114\pm 0.002$ &\\
\vspace{0.1cm}
Lattice QCD~\cite{Brommel:2007xd} & 0.151 $\pm$ 0.024 & 0.106 $\pm$ 0.028 & ... & ... & ... & ... & \\
\vspace{0.1cm}
$\chi$QM~\cite{Nam:2010pt} & 0.152 & ... & ... & ... & ... & ... & \\
\vspace{0.1cm}
$\chi$QM (model I)~\cite{Nam:2011yw} & ... & ... & 0.168 & 0.166 & ... & ... &\\
\vspace{0.1cm}
$\chi$QM (model II)~\cite{Nam:2011yw} & ... & ... & 0.139 & 0.100 & ... & ... &\\
\vspace{0.1cm}
CCQM~\cite{Fanelli:2016aqc} & $0.090\pm 0.001$ & $0.080\pm 0.001$ & ... & ... & ... & ... &\\
NJL model~\cite{Zhang:2021tnr} & ... & ... & 0.116 & ... & 0.083 & ... &\\
\botrule
\end{tabular}
\end{table*}
\begin{figure*}[htbp]
\centering
\includegraphics[scale=0.35]{b2perp_plotv3.pdf}
\caption{$x$-dependence of $\langle b^2_\perp \rangle$ for quarks in the pion (dashed line) and the kaon (solid line) in the BLFQ-NJL model.}
\label{b2x}
\end{figure*}
One can also define the $x$-dependent squared radius of the quark density in the transverse plane as~\cite{Dupre:2016mai}:
\begin{align}
\langle b_{\perp}^2 \rangle^q (x) = \frac{ \int d^2 {\vec{ b}^\perp} \, ({\vec b}^{\perp})^2 q(x, {\vec b}^{\perp})}{\int d^2 {\vec{ b}^\perp}\, q(x, {\vec b}^{\perp})},
\label{eq:b2}
\end{align}
which can also be written through the GPD $H(x,0,t)$ as:
\begin{equation}
\langle b_{\perp}^2 \rangle^q (x)= 4 \frac{\partial}{\partial t} \ln H^q(x,0,t) \biggr| _{t = 0}.
\label{eq:crgpd}
\end{equation}
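To illustrate Eq.~(\ref{eq:crgpd}), consider a factorized exponential ansatz $H^q(x,0,t)=q(x)\,e^{a(x)t}$ (a hypothetical profile chosen for illustration, not the BLFQ-NJL GPD), for which the logarithmic $t$-derivative gives $\langle b_{\perp}^2 \rangle^q(x)=4a(x)$. The sketch below checks this with a symmetric finite difference.

```python
import numpy as np

a0 = 1.5  # GeV^-2, assumed slope parameter of the hypothetical profile

def H(x, t):
    # factorized ansatz: toy forward density times an exponential t-profile
    return x * (1.0 - x) * np.exp(a0 * (1.0 - x) * t)

def b2_perp(x, eps=1e-6):
    # <b_perp^2>(x) = 4 d/dt ln H(x,0,t) at t = 0, via a symmetric difference
    return 4.0 * (np.log(H(x, eps)) - np.log(H(x, -eps))) / (2.0 * eps)

for x in (0.2, 0.5, 0.8):
    print(x, b2_perp(x))  # equals 4*a0*(1-x), decreasing as x -> 1
```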
For the pion and the kaon the squared radius $\langle b_{\perp}^2 \rangle(x)$ is obtained as the charge-weighted sum over the valence quarks: $\langle b_{\perp}^2 \rangle^\pi(x)=e_u \langle b_{\perp}^2 \rangle^u(x) + e_{\bar{d}}\langle b_{\perp}^2 \rangle^{\bar{d}}(x)$ and $\langle b_{\perp}^2 \rangle^K(x)=e_u \langle b_{\perp}^2 \rangle^u(x) + e_{\bar{s}}\langle b_{\perp}^2 \rangle^{\bar{s}}(x)$. The
$\langle b^2_\perp \rangle (x)$ characterizes the transverse
size of the hadron and shows an increase of the transverse radius with decreasing quark momentum fraction $x$~\cite{Dupre:2016mai}. As can be seen from Fig.~\ref{b2x}, and as expected, the transverse size of the kaon is smaller than that of the pion for a fixed value of $x$. We also compute the pion's and the kaon's transverse squared radius through the following average over $x$~\cite{Dupre:2016mai}
\begin{equation}
\langle b_{\perp}^2 \rangle = \sum_q e_q \frac{1}{N_q}\int_0^1 dx H^q(x,0,0) {\langle b_{\perp}^2 \rangle}^q(x)\,,
\end{equation}
where $N_q$ is the integrated number of valence quarks of flavor $q$.
We obtain the squared radius of the pion and the kaon, $\langle b_{\perp}^2 \rangle^\pi=0.285$ fm$^2$ and $\langle b_{\perp}^2 \rangle^K=0.223$ fm$^2$, respectively. The quantity $\langle b_{\perp}^2 \rangle$ is connected to the conventionally defined squared radius $\langle r_{c}^2 \rangle$ from the EMFF by $\langle b_{\perp}^2 \rangle=\frac{2}{3} \langle r_{c}^2 \rangle$~\cite{Dupre:2016mai,Li:2017mlw}. Our results are close to the experimental data for the pion, $\langle b_{\perp}^2 \rangle^\pi_{\rm exp}=0.301\pm 0.014$ fm$^2$ and for the kaon, $\langle b_{\perp}^2 \rangle^K_{\rm exp}=0.209\pm 0.047$ fm$^2$~\cite{ParticleDataGroup:2018ovx} and also consistent with the previously computed charge radii of the pion and the kaon in the BLFQ-NJL model~\cite{Jia:2018ary}.
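As a consistency check, the relation $\langle b_{\perp}^2 \rangle=\frac{2}{3} \langle r_{c}^2 \rangle$ can be inverted to read off the implied charge radius from the value obtained above (plain arithmetic):

```python
b2_pi = 0.285               # fm^2, pion value obtained above
r_c = (1.5 * b2_pi) ** 0.5  # implied charge radius from <b2> = (2/3)<r_c^2>
print(r_c)                  # ~0.65 fm, close to the measured pion charge radius
```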
\section{Summary \label{summary}}
We have investigated the valence quark GPDs of the light pseudoscalar mesons in the framework of BLFQ, using a light-front model for light mesons that incorporates light-front holography, longitudinal confinement, and the color-singlet Nambu--Jona-Lasinio interactions. The parameters in the BLFQ-NJL model have previously been adjusted to reproduce the experimental mass spectrum and the charge radii of the light mesons~\cite{Jia:2018ary}. We have evaluated the quark unpolarized GPD $H$ and tensor GPD $E_T$ in the pion and the kaon in both momentum and transverse position space. The generalized form factors for the pion and the kaon, i.e., the vector and tensor form factors from the first two moments of the quark unpolarized and tensor GPDs, have been calculated. We have verified the agreement of the electromagnetic form factors resulting from the unpolarized GPD with the experimental data for the pion and the kaon. The moments of the tensor GPD $E_T$, which give the tensor form factors, have been found to be comparable with the parameterization of lattice QCD simulations as well as with the results of the $\chi$QM.
We have subsequently calculated the probability densities of the unpolarized and polarized quarks inside the pion and the kaon. We have observed that the spatial distribution of the unpolarized quarks is axially symmetric, while it is strongly distorted when the quarks are transversely polarized, revealing a nontrivial distribution of quark polarization in the pseudoscalar mesons.
The quark probability densities in the BLFQ-NJL model have been found to be in good agreement with those from lattice QCD. The qualitative nature of the quark densities in the kaon is also consistent with that in the $\chi$QM.
In order to examine the shift of the peaks of the densities in the $b_y$ direction, we have computed the average value of $b_y$, which turned out to be compatible with the lattice QCD and the $\chi$QM results.
We have also evaluated the $x$-dependent squared radius of the
quark density in the transverse plane, which describes the transverse size of the hadron. We have found that, with increasing quark longitudinal momentum, the transverse radius of the pion and the kaon decreases. A similar effect has also been observed in the nucleon~\cite{Dupre:2016mai}. We have noticed that the quarks are more transversely localized in the kaon than in the pion.
\section*{ACKNOWLEDGMENTS}
C. M. is supported by new faculty start up funding by the Institute of Modern Physics, Chinese Academy of Sciences, Grant No. E129952YR0.
C. M. and S. N. thank the Chinese Academy of Sciences Presidents International Fellowship Initiative for the support via Grants No. 2021PM0023 and 2021PM0021, respectively. X. Z. is supported by new faculty startup funding by the Institute of Modern Physics, Chinese Academy of Sciences, by Key Research Program of Frontier Sciences, Chinese Academy of Sciences, Grant No. ZDB-SLY-7020, by the Natural Science Foundation of Gansu Province, China, Grant No. 20JR10RA067 and by the Strategic Priority Research Program of the Chinese Academy of Sciences, Grant No. XDB34000000. J. P. V. is supported by the Department of Energy under Grants No. DE-FG02-87ER40371, and No. DE-SC0018223 (SciDAC4/NUCLEI). S. J. is supported by U.S. Department of Energy, Office of Science, Office of Nuclear Physics, contract no. DE-AC02-06CH11357. This research used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility operated under Contract No. DE-AC02-05CH11231. A portion of the computational resources were also provided by Gansu Computing Center.
The twig catfish (Farlowella acus; Polish: szydlik iglasty, sum iglasty) is a species of freshwater ray-finned fish in the armored catfish family (Loricariidae). It is the type species of the genus Farlowella and one of the most common species of that genus. It is considered vulnerable to extinction, and it is sometimes kept in aquariums.
Taxonomy
This species was described in 1853 by Rudolf Kner, who named it Acestra acus. In the same year, the Italian physician and zoologist Filippo De Filippi gave the species two further names: Loricaria scolopacina and Farlowella scolopacina. Species of the genus Farlowella look very much alike, and distinguishing them is difficult even for experts. The species belongs to the armored catfishes, often referred to as algae eaters.
Distribution
This catfish occurs in two locations in South America: the Torito River and Lake Valencia in Venezuela.
Morphology
Body length is reported differently across sources, with estimates ranging between 16 and 25 cm.
The body is elongated and strongly flattened, resembling a small twig. It is covered with bony plates instead of scales. The snout is elongated, with the mouth (a sucker) on the underside. The coloration is gray or ash-gray, with a dark (brownish) stripe running along the sides; the underside of the body is light. Skin flaps over the eyes protect them. The fish is a master of camouflage.
Ecology and behavior
It lives close to the banks, among dense vegetation, in slow- to moderately-flowing waters. People often unwittingly collect young twig catfish together with roots and plants, because the fish makes no movements while feeding, even after being lifted out of the water.
It feeds on algae, aquatic plants, food intended for aquarium animals, wood, and occasionally invertebrates. In an aquarium, the twig catfish can also be offered raw carrot.
Reproduction
Sexual dimorphism is weakly pronounced; the male has outgrowths on the snout ("whiskers") and is smaller. Spawning takes place at dawn or in the evening. The female lays 60–80 eggs on smooth surfaces (boulders or, for example, the aquarium glass). The male then cares for the eggs until the fry hatch.
Aquarium conditions
A pair of twig catfish can be kept in a 100 L aquarium. The water temperature may range from 22 to 28 °C. The substrate should be sandy so that the fish does not injure its mouth. Pebbles, bog roots, and plants should be provided. It can coexist with other peaceful aquarium fish.
Status
The International Union for Conservation of Nature (IUCN) classifies the twig catfish as Vulnerable (VU). The population size is unknown, and its trend is considered to be decreasing.
References
Loricariidae kept in aquariums
Fish of South America
Animal species and subspecies named in 1853
\section{Introduction}
For simplicity, let us call a closed convex cone simply a cone.
Both the isotonicity \cite{IsacNemeth1990b,IsacNemeth1990c} and the subadditivity
\cite{AbbasNemeth2012,NemethNemeth2012} of a projection onto a pointed cone, with respect to
the order defined by the cone, can be used for iterative methods for finding solutions of
complementarity problems with respect to the cone. Iterative methods are widely used for
solving various types of equilibrium problems (such as variational inequalities,
complementarity problems, etc.). In recent years the isotonicity has gained more and more ground
for handling such problems (see \cite{NishimuraOk2012}, \cite{CarlHeikilla2011} and the
large number of references in \cite{CarlHeikilla2011} related to ordered vector spaces). If
a complementarity problem is defined by a cone $K\subset\Hi$
and a mapping $f:K\to\Hi$, where $(\Hi,\lng\cdot,\cdot\rng)$ is a Hilbert space, then $x$
is a solution of the corresponding complementarity problem (that is, $x\in K$, $f(x)\in K^*$
and $\lng x,f(x)\rng=0$, where $K^*$ is the dual of $K$), if and only if $x=P_K(x-f(x))$.
Thus, if
$f$ is continuous and the sequence $x^n$ given by the iteration $x^{n+1}=P_K(x^n-f(x^n))$ is
convergent, then its limit is a fixed point of the mapping $x\mapsto P_K(x-f(x))$ and
therefore
a solution of the corresponding complementarity problem. A specific way for showing the
convergence of the sequence $x^{n+1}=P_K(x^n-f(x^n))$ is to use the isotonicity
\cite{IsacNemeth1990b,IsacNemeth1990c} or subadditivity \cite{AbbasNemeth2012,NemethNemeth2012}
of $P_K$ with respect to the order induced by the cone $K$. In finite dimension the
isotonicity (subadditivity) of $P_K$ imposes strong constraints on the structure of $K$
($K^*$). If $P_K$ is isotone (subadditive), then $K$ ($K^*$) has to be a direct sum of the
subspace $V=K\cap(-K)$ ($V=K^*\cap(-K^*)$) with a latticial cone of a specific structure in
the orthogonal complement of the subspace $V$ (see
\cite{GuyaderJegouNemeth2012,IsacNemeth1992,NemethNemeth2012}). There exist cones of this
type which are important from the practical point of view, such as the monotone
cone (see \cite{GuyaderJegouNemeth2012}) and the monotone nonnegative cone
(see \cite{Dattorro2005}). For Euclidean spaces
the authors of \cite{NemethNemeth2012} showed that $P_K$ is isotone with respect to the order
induced by $K$ if and only if $P_L$ is subadditive with respect to the order induced by $L$,
where $K$ and $L$ are mutually dual pointed closed convex cones. If $K$ is also pointed and
generating, then the isotonicity of $P_K$ with
respect to the order induced by $K$ implies the latticiality of the cone in Hilbert spaces as
well (see \cite{IsacNemeth1990,IsacNemeth1990b,IsacNemeth1990c}). The main result of this paper
states that $P_K$ is isotone with respect to the order induced by $K$ (i.e., $v-u\in K$ implies $P_Kv-P_Ku\in K$) if and only if $P_L$ is
subadditive with respect to the order induced by $L$
(i.e., $P_Lu+P_Lv-P_L(u+v)\in L$ for any $u,\;v \in \R^n$), where $K$ and $L$ are mutually dual
pointed closed convex cones of a Hilbert space, thus extending the result of
\cite{NemethNemeth2012}. This result also implies that if $K$ is a pointed generating cone
in a Hilbert space such that $P_K$ is subadditive with respect to the order induced by
$K$, then it must be latticial. The latter two results have been already proved in Euclidean
spaces (see \cite{NemethNemeth2012} and \cite{IsacNemeth1992}), but they were open until now
in Hilbert spaces, except for the particular case of a Hilbert lattice \cite{Nemeth2003}.
Although originally
motivated by complementarity problems, recently it turned out that the isotonicity and
subadditivity of projections are also motivated by other practical problems at least as
important as the complementarity problems such as the problem of map-making from relative
distance information e.g., stellar cartography
(see
\vspace{2mm}
\noindent {\small www.convexoptimization.com/wikimization/index.php/Projection\_on\_Polyhedral\_Convex\_Cone}
\vspace{2mm}
\noindent and Section 5.13.2 in \cite{Dattorro2005}) and isotone regression
\cite{GuyaderJegouNemeth2012}, where the equivalence between two classical algorithms in
statistics is proved by using theoretical results about isotone projections. We remark that
our proofs are essentially infinite dimensional and apparently there is no easy way to extend
the methods of \cite{NemethNemeth2012} to infinite dimensions. The paper
\cite{GuyaderJegouNemeth2012} shows that investigation of the structure of cones admitting
isotone and subadditive projections is important for possible future applications. The proofs
presented here also provide a more elegant way of
proving the results of \cite{NemethNemeth2012}. However, the difference is that they do not
contain the proof of the latticiality of the involved cones. (For pointed generating cones in
Hilbert spaces this is the consequence of the main result in \cite{IsacNemeth1990}.)
The structure of this note is as follows: after
some preliminary terminology, we introduce the main tools for our proofs, namely Moreau's
decomposition theorem (i.e., Lemma \ref{lm}) and the lattice-like
operations related to a projection onto a cone, and then we proceed to our main result.
\section{Preliminaries}
Let $\Hi$ be a real Hilbert space endowed with a scalar product $\lng\cdot,\cdot\rng$ and let
$\|\cdot\|$ be the norm generated by the scalar product $\lng\cdot,\cdot\rng$.
Throughout this note we shall use some standard terms and results from convex geometry
(see e.g. \cite{Rockafellar1970}).
Let $K$ be a \emph{closed convex cone} in $\Hi$, i.e., a nonempty closed set with
$tK+sK\subset K,\;\forall \;t,s\in \R_+=[0,+\infty)$. The closed convex cone $K$ is called
\emph{pointed}, if $K\cap(-K)=\{0\}.$
The cone $K$ is {\it generating} if $K-K=\Hi$.
The convex cone $K$ defines a pre-order relation (i.e., a reflexive and transitive binary
relation) $\leq_K$, where $x\leq_Ky$ if and only if $y-x\in K$.
The relation is {\it compatible with the vector structure} of $\Hi$ in the sense that
$x\leq_K y$ implies $tx+z\leq_K ty+z$ for all $z\in \Hi$, and all
$t\in \R_+$. If $\sqsubseteq$ is a reflexive and transitive relation on $\Hi$ which is
compatible with the vector structure of $\Hi$, then $\sqsubseteq=\leq_K$ with
$K=\{x\in\Hi:0\sqsubseteq x\}.$ If $K$ is pointed, then $\leq_K$ is \emph{antisymmetric}
too, that is, $x\leq_K y$ and
$y\leq_K x$ imply that $x=y.$ Hence, in this case $\le_K$ becomes an order relation (i.e., a
reflexive, transitive and antisymmetric binary relation). The elements $x$ and $y$ are called
\emph{comparable} if $x\leq_K y$ or $y\leq_K x.$
We say that $\leq_K$ is a \emph{latticial order} if for each pair of elements $x,y\in \Hi$
there exist the least upper bound $\sup\{x,y\}$ (denoted by $x\vee y$) and the greatest lower bound $\inf\{x,y\}$ (denoted by $x\wedge y$) of
the set $\{x,y\}$ with respect to the relation $\leq_K$. In this case $K$ is said to be a
\emph{latticial or simplicial cone}, and $\Hi$ equipped with a latticial order is called a
\emph{Riesz space} or \emph{vector lattice}.
The \emph{dual} of the convex cone $K$ is the set
$$K^*:=\{y\in \Hi:\;\lng x,y\rng \geq 0,\;\forall x\in K\}.$$
The set $K^*$ is a closed convex cone.
If $K$ is a closed cone, then the extended Farkas lemma (see Exercise 2.31 (f)
in \cite{BoydVandenberghe2004}) says that $(K^*)^*=K.$
Hence, denoting $L=K^*$, we see that $K=L^*$ and $L=K^*$. For
closed cones $K$ and $L$ related in this way we say that they
are \emph{mutually dual cones}.
The cone $K$ is called \emph{self-dual}, if $K=K^*.$ If $K$ is self-dual, then it is a
generating, pointed, closed convex cone.
Let $K$ be a closed convex cone and $\rho:\Hi\to\Hi$ a mapping. Then, $\rho$ is called \emph{$K$-isotone} if $x\le_K y$ implies $\rho(x)\le_K\rho(y)$, and \emph{$K$-subadditive}
if $\rho(x+y)\le_K\rho(x)+\rho(y)$ for any $x,y\in \Hi$.
Denote by $P_D$ the projection mapping onto a nonempty closed convex set $D$ of the Hilbert
space $\Hi,$ that is the mapping which associates to $x\in \Hi$ the unique nearest point of
$x$ in $D$ (\cite{Zarantonello1971}):
\[ P_Dx\in D,\;\; \textrm{and}\;\; \|x-P_Dx\|= \inf \{\|x-y\|: \;y\in D\}. \]
Next, we shall frequently use the
following simplified form of Moreau's decomposition theorem \cite{Moreau1962}:
\begin{lemma}\label{lm}
Let $K$ and $L$ be mutually dual cones in the Hilbert space $\Hi$. For any $x\in \Hi$
we have $x=P_Kx-P_L(-x)$ and $\lng P_Kx,P_L(-x)\rng=0$. The relation $P_Kx=0$ holds if
and only if $x\in -L$.
\end{lemma}
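Lemma~\ref{lm} is easy to verify numerically in the simplest self-dual case $K=L=\R^n_+$, where the projection is the componentwise positive part. The Python sketch below (an illustration only, not part of the proofs) checks the decomposition and the orthogonality on random vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

def P(x):
    # projection onto the self-dual cone K = L = R^n_+: positive part
    return np.maximum(x, 0.0)

for _ in range(100):
    x = rng.normal(size=6)
    # Moreau decomposition: x = P_K x - P_L(-x), with orthogonal parts
    assert np.allclose(x, P(x) - P(-x))
    assert abs(P(x) @ P(-x)) < 1e-12
print("Moreau decomposition verified on random samples")
```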
Let $K$ and $L$ be mutually dual cones in the Hilbert space $\Hi$. Define the
following operations in $\Hi$:
\[x\sa_K y=P_{x-K}y,\textrm{ }x\su_K y=P_{x+K}y,\textrm{ }x\sa_L y=P_{x-L}y,\textrm{ and }
x\su_L y=P_{x+L}y.\]
Assume that
the operations $\su_K$, $\sa_K$, $\su_L$ and $\sa_L$ have precedence over the addition of
vectors and multiplication of vectors by scalars.
If $K$ is self-dual, then $\su_K = \su_L$ and $\sa_K=\sa_L$, and we arrive at the generalized
lattice operations defined by Gowda, Sznajder and Tao in \cite{GowdaSznajderTao2004}, and used
in our paper~\cite{NemethNemeth2012b}.
A direct check shows that if $K$ is a self-dual latticial cone, then
$\sa_K=\sa_L=\wedge$, and $\su_K=\su_L=\vee$.
That is $\sa_K$, $\sa_L$, $\su_K$ and $\su_L$ are some \emph{lattice-like operations}.
We shall simply call a set $M$ which is invariant with respect to the operations $\sa_K$,
$\sa_L$, $\su_K$ and $\su_L$ \emph{$K$-invariant}.
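For the orthant $K=\R^n_+$ these lattice-like operations reduce to the familiar componentwise minimum and maximum: $x-K$ is the lower set $\{z : z\le x\}$, and projecting $y$ onto it clips each coordinate from above by $x$ (dually for $x+K$). The sketch below, an illustration under this particular choice of $K$, also verifies the nearest-point property numerically.

```python
import numpy as np

rng = np.random.default_rng(0)

# For K = R^n_+:  P_{x-K} y = min(x, y)  and  P_{x+K} y = max(x, y) componentwise.
def meet(x, y):
    return np.minimum(x, y)  # y projected onto x - K

def join(x, y):
    return np.maximum(x, y)  # y projected onto x + K

x = np.array([1.0, -2.0, 0.5])
y = np.array([-1.0, 3.0, 0.5])
z = meet(x, y)
assert np.all(z <= x + 1e-12)            # z indeed lies in x - K
for _ in range(1000):
    w = x - np.abs(rng.normal(size=3))   # a random point of x - K
    assert np.linalg.norm(y - z) <= np.linalg.norm(y - w) + 1e-9
print(z, join(x, y))
```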
The following theorem greatly extends Lemma 2.4 of \cite{NishimuraOk2012}. Since it can be
proved very similarly to Theorem 1 of \cite{NemethNemeth2012c} (i.e., the corresponding result
in Euclidean spaces), we state it here without proof.
\begin{theorem}\label{ISO}
Let $K\subset\Hi$ be a closed convex cone and $C\subset\Hi$ be a closed convex set. Then $C$ is $K$-invariant if and
only if $P_C$ is $K$-isotone.
\end{theorem}
\section{The main result}
\begin{theorem}\label{tm}
Let $K,L$ be mutually dual closed convex cones in a Hilbert space $\Hi$. Then, the
following two statements are equivalent:
\begin{enumerate}[(i)]
\item\label{tma} $P_K$ is $K$-isotone.
\item\label{tmb} $P_L$ is $L$-subadditive.
\end{enumerate}
\end{theorem}
\begin{proof}
\vspace{1mm}
(\ref{tma})$\implies$(\ref{tmb}): From Theorem \ref{ISO}, it follows that $K$ is $K$-invariant. From the definition of $K$-invariance it is easy to see that $K$ is also
$L$-invariant. Hence, by using again Theorem \ref{ISO}, it follows that $P_K$ is $L$-isotone. Now let $x,y\in\Hi$ be arbitrary. From Lemma \ref{lm}, we have
$x+y\le_L P_K(x)+P_K(y)$, because \[P_K(x)+P_K(y)-x-y=P_K(x)-x+P_K(y)-y=P_L(-x)+P_L(-y)\in L+L\subset L.\] Hence, by the $L$-isotonicity of $P_K$, we have
\[P_K(x+y)\le_L P_K(P_K(x)+P_K(y))=P_K(x)+P_K(y),\] which means that $P_K$ is $L$-subadditive. Thus, by using Lemma \ref{lm}, we get
\begin{gather*}
P_L(x+y)=x+y+P_K(-x-y)\le_L x+y+P_K(-x)+P_K(-y)\\=x+P_K(-x)+y+P_K(-y)
=P_L(x)+P_L(y),
\end{gather*}
which is equivalent to the $L$-subadditivity of $P_L$.
\vspace{1mm}
(\ref{tmb})$\implies$(\ref{tma}): Let $x\le_L y$. Then, $y-x\in L$ and therefore, by the $L$-subadditivity of $P_L$ and Lemma \ref{lm}, we get
\begin{eqnarray*}
P_K(x)=P_L(-x)+x=P_L(y-x-y)+x\le_L P_L(y-x)+P_L(-y)+x\\=y-x+P_L(-y)+x=P_K(y).
\end{eqnarray*}
Hence, $P_K$ is $L$-isotone and therefore Theorem \ref{ISO} implies that $K$ is $L$-invariant. From the definition of $K$-invariance it is easy to see that $K$ is also
$K$-invariant. Therefore, by using again Theorem \ref{ISO}, it follows that $P_K$ is $K$-isotone.
\end{proof}
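As an illustration, let $\Hi$ be the Euclidean plane and $K=L$ the nonnegative quadrant, which
is self-dual. Then $P_K$ acts componentwise as $t\mapsto\max(t,0)$; this map is clearly
$K$-isotone, and the elementary inequality $\max(s+t,0)\le\max(s,0)+\max(t,0)$ shows that $P_L$
is $L$-subadditive, in accordance with the theorem.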
\bibliographystyle{habbrv}
#ifndef LWIP_HDR_NETIFAPI_H
#define LWIP_HDR_NETIFAPI_H
#include "lwip/opt.h"
#if LWIP_NETIF_API /* don't build if not configured for use in lwipopts.h */
#include "lwip/sys.h"
#include "lwip/netif.h"
#include "lwip/dhcp.h"
#include "lwip/autoip.h"
#include "lwip/priv/tcpip_priv.h"
#ifdef __cplusplus
extern "C" {
#endif
/* With LWIP_MPU_COMPATIBLE, address arguments are stored by value in the
   message (no pointers into the caller's memory cross the thread boundary);
   otherwise only a pointer is stored. */
#if LWIP_MPU_COMPATIBLE
#define NETIFAPI_IPADDR_DEF(type, m) type m
#else /* LWIP_MPU_COMPATIBLE */
#define NETIFAPI_IPADDR_DEF(type, m) const type * m
#endif /* LWIP_MPU_COMPATIBLE */
/* Function signatures accepted by netifapi_netif_common() */
typedef void (*netifapi_void_fn)(struct netif *netif);
typedef err_t (*netifapi_errt_fn)(struct netif *netif);
/* Message passed to the tcpip thread to execute a netif API call there */
struct netifapi_msg {
struct tcpip_api_call call;
struct netif *netif;
union {
struct {
#if LWIP_IPV4
NETIFAPI_IPADDR_DEF(ip4_addr_t, ipaddr);
NETIFAPI_IPADDR_DEF(ip4_addr_t, netmask);
NETIFAPI_IPADDR_DEF(ip4_addr_t, gw);
#endif /* LWIP_IPV4 */
void *state;
netif_init_fn init;
netif_input_fn input;
} add;
struct {
netifapi_void_fn voidfunc;
netifapi_errt_fn errtfunc;
} common;
} msg;
};
/* API for application */
err_t netifapi_netif_add(struct netif *netif,
#if LWIP_IPV4
const ip4_addr_t *ipaddr, const ip4_addr_t *netmask, const ip4_addr_t *gw,
#endif /* LWIP_IPV4 */
void *state, netif_init_fn init, netif_input_fn input);
#if LWIP_IPV4
err_t netifapi_netif_set_addr(struct netif *netif, const ip4_addr_t *ipaddr,
const ip4_addr_t *netmask, const ip4_addr_t *gw);
#endif /* LWIP_IPV4*/
err_t netifapi_netif_common(struct netif *netif, netifapi_void_fn voidfunc,
netifapi_errt_fn errtfunc);
#define netifapi_netif_remove(n) netifapi_netif_common(n, netif_remove, NULL)
#define netifapi_netif_set_up(n) netifapi_netif_common(n, netif_set_up, NULL)
#define netifapi_netif_set_down(n) netifapi_netif_common(n, netif_set_down, NULL)
#define netifapi_netif_set_default(n) netifapi_netif_common(n, netif_set_default, NULL)
#define netifapi_dhcp_start(n) netifapi_netif_common(n, NULL, dhcp_start)
#define netifapi_dhcp_stop(n) netifapi_netif_common(n, dhcp_stop, NULL)
#define netifapi_dhcp_inform(n) netifapi_netif_common(n, dhcp_inform, NULL)
#define netifapi_dhcp_renew(n) netifapi_netif_common(n, NULL, dhcp_renew)
#define netifapi_dhcp_release(n) netifapi_netif_common(n, NULL, dhcp_release)
#define netifapi_autoip_start(n) netifapi_netif_common(n, NULL, autoip_start)
#define netifapi_autoip_stop(n) netifapi_netif_common(n, NULL, autoip_stop)
#ifdef __cplusplus
}
#endif
#endif /* LWIP_NETIF_API */
#endif /* LWIP_HDR_NETIFAPI_H */
The 2016 Basque parliamentary election () was held on Sunday , in order to elect the seventy-five deputies of the eleventh legislature of the Basque Parliament.
Background
In the early regional election of , the Basque Nationalist Party (EAJ/PNV), led by its president Iñigo Urkullu, confirmed its position as the leading party of the autonomous community, winning 34.2% of the vote and . This result was a decline of compared with the election of . Second place went to the pro-independence left coalition (EH Bildu), which took 24.7% of the vote and , the best score for this political current since . The Socialist Party of the Basque Country–Basque Left–PSOE (PSE-EE-PSOE) of Patxi López, in power since the previous election, was punished with a drop to third place, still taking 18.9% of the vote and . The Basque Popular Party (PPV) of Antonio Basagoiti fell to 11.6% and , its worst performance since . As for the anti-nationalist, social-liberal party Union, Progress and Democracy (UPyD), it kept its single regional seat.
On , Urkullu and Laura Mintegi (Bildu) stood for investiture before the Parliament. The former obtained and the latter , while the remaining cast blank votes. The vote was repeated on , and Iñigo Urkullu was invested President of the Basque Government () and formed a minority government. Less than six months later, Arantza Quiroga, speaker of the regional parliament between and , was appointed president of the PPV to replace Basagoiti.
This balance of political forces was broadly confirmed by the European elections of . The Coalition for Europe (CpE), which includes the EAJ/PNV, came first with 27.96% of the vote, ahead of EH Bildu with 23.8%. Third, the Spanish Socialist Workers' Party (PSOE) received 14.05%, while the People's Party (PP) took 10.4% of the vote and held on to fourth place. Fifth place went to Podemos, a new anti-austerity left-wing party, with 7.04%. United Left (IU) rose to 5%, ahead of UPyD, which managed 3.4%. Following this election, Patxi López gave up the leadership of the Basque federation of the PSOE, and its members chose Idoia Mendia, a former Justice councillor of the regional government, as his successor.
About a year later, the political map was confirmed again by the results of the municipal elections of . Still the leading political force of the autonomous community, the Basque Nationalist Party took 33.8% of the vote and 13 of the 19 largest towns of the Basque Country, including the three provincial capitals Bilbao, San Sebastián and Vitoria-Gasteiz. Second with 23.9%, EH Bildu won three mayoralties among the 19 most populous municipalities. The Socialist Party did the same with its 14.8%, while the People's Party, which had only ever governed Vitoria-Gasteiz intermittently since , was left without a major executive and took 9.6% of the vote. In addition, the EAJ/PNV swept the Juntas Generales of all three provinces, taking back from the PPV the presidency of the foral deputation of Álava, which the latter had held since . On , the president of the Basque PP, Arantza Quiroga, announced her resignation after her policy proposals were disavowed by the national leadership. The Health minister Alfonso Alonso was then chosen to succeed her.
The general election of brought a significant change to the electoral map. Podemos became the leading political force of the autonomous community, totalling 26.2% of the vote across the three provinces, against 24.9% for the Basque Nationalist Party, a lead of . However, the distribution of the votes gave the EAJ/PNV against five for Podemos. Still third, EH Bildu settled for 15.2% of the vote and , while the Socialist Party won three with 13.3%. The last two seats went to the People's Party, which collected 11.7%. The anti-nationalist liberal party Ciudadanos managed only 4.1% and elected no member of parliament.
As the Congress of Deputies failed to invest a prime minister, an early general election was called for . It was marked by the victory of the Unidos Podemos coalition, which accumulated 29% of the vote and , while the Nationalist Party repeated its 24.9% but lost . With 14.2% of the valid votes, the Socialists climbed back to third place and kept their representation, as did Bildu, which slipped to 13.3%. The Conservatives also kept their seats while edging up slightly to 12.8%. As for Ciudadanos, it fell back to 3.5%.
On , Urkullu announced that the regional election would be called for the following Sunday , which corresponds to the annual festival of the EAJ/PNV (), and not on as expected. It would then coincide with the election to the Parliament of Galicia.
Electoral system
The Basque Parliament (, ) is composed of seventy-five deputies, elected for a four-year term by direct universal suffrage, using proportional representation with the d'Hondt highest-averages method.
Each province forms a constituency, with 25 seats per province. Only the political forces — parties, coalitions, independents — that win at least 3% of the valid votes cast in a province take part in the distribution of its seats.
As everywhere in Spain, blank votes are recognised and counted as valid votes. They are therefore taken into account in determining whether a party has crossed the electoral threshold. However, in accordance with Article 96.5 of the LOREG, only the votes actually cast are taken into account in the distribution of the seats to be filled.
Campaign
Main parties and leaders
Results
Votes and seats
Regional total
By constituency
Analysis
Aftermath
Notes and references
See also
Related articles
Elections to the Basque Parliament
Basque Parliament
Lehendakari
External link
Website on the 2016 election set up by the Basque government
Source: http://cta.irap.omp.eu/gammalib/help/issues.html

# Known issues

Below you will find a list of known issues.

- Compilation issues
- Unit testing issues
- Other issues

## Compilation issues

**GammaLib does not compile against conda Python**

Trying to compile GammaLib against conda Python may fail due to incompatibility issues. If you'd like to compile GammaLib against conda Python, make sure that gcc, swig, and cfitsio are all installed via anaconda:

    $ conda install gcc swig
    $ conda install -c conda-forge cfitsio

**GammaLib does not compile on Mac OS X**

Dependent on the Mac OS X version you are using, not everything that is needed to install GammaLib may be available (e.g. automake, libtool, etc.). The easiest way to get the needed software is using a package management system such as MacPorts or Homebrew. On a fresh El Capitan install you need for example the following from Homebrew:

    $ brew install automake
    $ brew install libtool
    $ brew install cfitsio
    $ brew install swig

swig is only necessary if you installed the code from git. Depending on your system, you may also need to install the Python development package.

**GammaLib does not compile on Solaris**

Although GammaLib builds on Solaris using the Sun compiler, there are problems with global symbols in shared libraries and exception catching, which prevent the FITS interface from working correctly. GammaLib has however been built and tested successfully using the GNU compiler, and this is the only build method that is currently supported. Problems have also been encountered when compiling cfitsio versions more recent than 3.250. The problems have been reported to the cfitsio developer team, and are likely to be solved in the future. For the time being, it is recommended to use cfitsio version 3.250 on Solaris.

**GammaLib does not compile on OpenSolaris**

On OpenSolaris, the same problems concerning the SunStudio compiler occur as for Solaris, and also here, the GNU compiler is the recommended tool to build GammaLib. Also here, cfitsio version 3.250 is the recommended library, as more recent versions feature relocation problems. GammaLib has been tested using gcc 4.3.2 on OpenSolaris 2009.06. Make sure to create the following symbolic links if they do not yet exist on your system:

    $ ln -s /usr/bin/gcc4.3.2 /usr/bin/gcc
    $ ln -s /usr/bin/g++4.3.2 /usr/bin/g++

They avoid excess warnings during compilation.

## Unit testing issues

**Many (but not all) unit tests fail**

Occasionally it may happen that the cfitsio library is not found when configuring GammaLib. The library will compile successfully without cfitsio, but in that case FITS I/O will not be supported. Consequently, many unit tests will fail. If you are sure that cfitsio is installed, but the path where the library and the path where the fitsio.h header reside are non-standard, you may add the paths explicitly during configuration using:

    $ ./configure LDFLAGS='-L/path/to/cfitsio/library' CPPFLAGS='-I/path/to/fitsio.h/header'

The same logic applies for finding the readline and ncurses libraries, although these libraries are not mandatory for getting the full GammaLib functionalities. Alternatively, cfitsio can be found when compiling GammaLib but not when using the shared library. To solve the issue, locate the directory where the shared libcfitsio library resides and then type

    $ export LD_LIBRARY_PATH=/directory/to/lib:$LD_LIBRARY_PATH

on Unix based systems or

    $ export DYLD_LIBRARY_PATH=/directory/to/lib:$DYLD_LIBRARY_PATH

on Mac OS X (/directory/to/lib should be replaced by the correct library path on your system).

## Other issues

**Python module does not work**

GammaLib includes a Python module that is built from so-called wrapper files that are autogenerated using the swig tool. These wrapper files are shipped with a GammaLib release, but if you use the code from git you need swig to generate the wrapper files during the build step. In any case, to compile the Python module GammaLib needs the Python.h header file, which may not necessarily be installed on your system. Check the output of ./configure to examine the configuration that GammaLib has detected. You may see the following:

    * Python (yes)
    * Python.h (yes)
    * Python wrappers (yes)
    * swig (yes)

Recall, if the wrappers exist you do not need swig, but if the wrappers don't exist you need swig. If the Python.h header file does not exist then install the Python development package.
Bladeless Fans Now in Manila: Who Came Up With This Craziness?
Meet the Inspiring Man Behind Dyson
James Dyson, the man behind the brand
If you are a fan of home cleaning, you've probably heard of Dyson. Dyson is a household brand that revolutionizes the art of home cleaning, with futuristic gadgets better suited to The Jetsons than to the present day. Its claim to fame is the Dual Cyclone in 1993, the world's first vacuum cleaner technology with no bag and no loss of suction. Since then, it has created even more hi-tech gadgets like the Dyson digital motor, the Dyson Airblade hand dryer, the Dyson Air Multiplier fan, the Dyson Hot fan heater, and most recently, the Dyson Digital Slim, a cordless vacuum powered by the Dyson digital motor.
Because of its innovations, the brand has been awarded Design Week's Designer of the Decade in 1999 and 2000, Japan Industrial Designer's Association award in 2002, The Queen's Award for Innovation in 2004, and The Queens Award for International Trade in 2006.
But who knew that the man behind the brand, James Dyson, faced a lot of rejection when he was building his brand?
James Dyson with his revolutionary air multiplier. Even with the new iPhone 6, the air multiplier still looks ahead of its time
A rural upbringing in Norfolk amongst a family of academics and clergy might seem an unconventional route into engineering, but James Dyson showed an obstinate streak from the off. He was a man with a determination to do things differently.
While at school, James followed in his father's footsteps by studying classics. The murky world of manufacturing loomed like a threat. In those days, a factory was where those who failed their exams would end up, but James earned a place to study art at the Byam Shaw Art School in London. Here he found himself increasingly drawn away from art towards design. Next stop was the Royal College of Art where James took the leap from furniture design to industrial design – a chance to get his hands dirty, working with plastic and stainless steel. And so began a lifelong passion for functional design.
After graduating from the RCA, James was employed by local engineering company, Rotork, where he designed his first project, the Sea Truck, a high-speed landing craft. Working alongside Jeremy Fry, James adopted an Edisonian approach to design; making prototype after prototype until he got it just right.
For James, frustration has proved the mother of invention: a wheelbarrow which sank in the mud and chipped paintwork was the inspiration for Ballbarrow. Ballbarrow had a large inflatable ball instead of a wheel, which along with chunky feet, gave it stability. The barrow itself was made from plastic, which didn't rust and didn't dent walls.
Then in 1979, when James bought the then top-of-the-range vacuum cleaner, he became frustrated with how it instantly clogged and began to lose suction. James emptied the bag to try and get it going, but this had no effect. The engineer's instinct kicked in. He ripped open the bag and noticed a layer of dust inside, clogging the pores. A fundamental flaw with vacuum technology, undetected and unchallenged for almost 100 years. James became determined to develop a better vacuum cleaner that worked properly.
The new Dyson handheld vacuums look more like robots than household cleaning products, but that's what makes it more fun to use
During a chance visit to a local sawmill, James noticed how the sawdust was removed from the air by large industrial cyclones. Could that principle work on a smaller scale, in a vacuum cleaner? He took his vacuum apart and rigged it up with a cardboard cyclone. He then began to clean the room with it. Amazingly it picked up more than his old bag machine. The world's first vacuum cleaner without a bag was born.
But that was just the beginning of James' battle. One by one, short sighted multi-nationals rejected his idea. Even when they were satisfied that the technology worked, none of them listened; they were more interested in defending their own product – and making a pretty profit from the lucrative bag market, worth $500m a year worldwide.
By the mid-1980s James was heavily in debt but doggedly continued on his one-man licensing tour. Eventually, James received a call from a Japanese company, Apex Inc. An Aeroflot flight and several all-night meetings later, James had signed a deal, and in 1986 production of 'G-Force' began. It took 15 years of frustration, perseverance, and over 5,000 prototypes for James to finally launch the Dyson DC01 vacuum cleaner under his own name. Within 18 months it became the best-selling cleaner in the UK.
Wired magazine featured James Dyson on its cover, with the tagline "How one man transformed six industries (and how he's doing it again)"
James founded The James Dyson Foundation, a registered charity, in 2002 with the objective of building on Dyson's history of philanthropic work. It aims to support medical research charities, design technology and engineering educational work and community projects in and around Wiltshire.
Today, Dyson is one of Britain's most inventive companies, filing the second highest number of patents after Rolls-Royce. Every year, the company invests half of its profits back into its research and development labs in Wiltshire. Dyson employs 3,900 people worldwide, of which nearly half are engineers working across the UK, Malaysia, and Singapore.
Their current turnover is over £1bn and Dyson vacuum cleaners are available in over 50 countries. In the UK, one in three households now owns a Dyson vacuum cleaner. Instead of enjoying his money and fame, James still continues to work alongside his team of engineers and scientists, developing new technologies to overcome everyday frustrations.
This inspiring story proves how hard work, determination, and passion can pay off in the future. How about you? What's your passion? Share it with us!
Dyson Philippines
Website | www.dyson.ph
Stores | Dyson Concept Store in Century City Mall, Makati; Rustan's Makati and Shangri-La; Robinsons Appliance, Magnolia; SM Appliance Rockwell and Megamall; and Abenson Alabang. (exclusively distributed by Whiteplanet Inc.)
Contact Number | 09172711543 / 7226587 / 7274092 loc 123 (Monday – Friday 9:00 – 5:30PM)
Email | dysonservice@whiteplanetinc.com
The Battle of Port Royal Ferry
January 1, 1862 in Port Royal, South Carolina
Union — Brig. Gen. I. Stevens and Capt. Rogers; strength: 4,500; losses: 2 killed, 13 wounded, missing unknown
Confederate — Gens. Gregg and John Pope; strength: 8,000 (est.); losses: 8 killed, 24 wounded, missing unknown
The Battle of Port Royal Ferry was a brisk fight at Port Royal Ferry, about 25 miles from Hilton Head. The expedition which achieved this Union victory was a combined land and naval effort, under the joint command of Brig. Gen. I. Stevens and Capt. Rogers of the flagship U.S.S. Wabash. The Union army consisted of the 8th Michigan Regiment, the Pennsylvania Round Heads, the 50th Pennsylvania, the 79th New York Militia, and the 47th and 48th New York Volunteers. The naval force also had 4 gunboats: the Ellen, Ottawa, Seneca, and Pembina.
Stevens's brigade advanced on Port Royal and took possession of the Confederate batteries after a short skirmish with the Confederate defense. The Federals were assisted by the gunboats, which shelled the batteries from the shoreline. The Federals pursued the Confederates to within 6 miles of the Charleston Railroad.
After the fight, the Confederates issued a flag of truce for the purpose of burying their dead. Stevens agreed to the truce. One hour was granted to the Confederates for the burial detail, after which the Confederates fell back upon their fortifications near the railroad, which were very extensive, leaving behind 1 large gun, which they had spiked.
Q: Running a SonarQube analysis from Gradle

I'm trying to run a SonarQube analysis on my project from my build.gradle file. When I do I get the following error:
Caused by: java.lang.IllegalStateException: Fail to create temp file in ?/.sonar/cache/_tmp
at org.sonarsource.scanner.api.internal.cache.FileCache.newTempFile(FileCache.java:138)
at org.sonarsource.scanner.api.internal.cache.FileCache.get(FileCache.java:83)
at org.sonarsource.scanner.api.internal.JarDownloader.lambda$getScannerEngineFiles$0(JarDownloader.java:60)
at org.sonarsource.scanner.api.internal.JarDownloader.getScannerEngineFiles(JarDownloader.java:61)
at org.sonarsource.scanner.api.internal.JarDownloader.download(JarDownloader.java:53)
at org.sonarsource.scanner.api.internal.IsolatedLauncherFactory.lambda$createLauncher$0(IsolatedLauncherFactory.java:76)
I've tried setting the sonar.path.temp to "/tmp/.sonar", but that doesn't seem to have any effect. Is there some other setting that I'm missing to make this work?
A: What helped me was setting -Dsonar.userHome=$(pwd)/.sonar; also, changing SONAR_USER_HOME may help.
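For reference, a sketch of how that property might be set directly in build.gradle (this assumes the org.sonarqube Gradle plugin is applied; the property name is taken from the answer above):

```groovy
// Point the scanner's cache/user home at a writable directory inside the project
sonarqube {
    properties {
        property "sonar.userHome", "${projectDir}/.sonar"
    }
}
```

This keeps the scanner's cache out of the default `~/.sonar` location, which is what fails to be created in the stack trace above.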
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<!-- NewPage -->
<html lang="en">
<head>
<!-- Generated by javadoc (1.8.0_66) on Mon Dec 14 15:42:38 CST 2015 -->
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Uses of Package play.mvc.results (Play! API)</title>
<meta name="date" content="2015-12-14">
<link rel="stylesheet" type="text/css" href="../../../stylesheet.css" title="Style">
<script type="text/javascript" src="../../../script.js"></script>
</head>
<body>
<script type="text/javascript"><!--
try {
if (location.href.indexOf('is-external=true') == -1) {
parent.document.title="Uses of Package play.mvc.results (Play! API)";
}
}
catch(err) {
}
//-->
</script>
<noscript>
<div>JavaScript is disabled on your browser.</div>
</noscript>
<!-- ========= START OF TOP NAVBAR ======= -->
<div class="topNav"><a name="navbar.top">
<!-- -->
</a>
<div class="skipNav"><a href="#skip.navbar.top" title="Skip navigation links">Skip navigation links</a></div>
<a name="navbar.top.firstrow">
<!-- -->
</a>
<ul class="navList" title="Navigation">
<li><a href="../../../overview-summary.html">Overview</a></li>
<li><a href="package-summary.html">Package</a></li>
<li>Class</li>
<li class="navBarCell1Rev">Use</li>
<li><a href="package-tree.html">Tree</a></li>
<li><a href="../../../deprecated-list.html">Deprecated</a></li>
<li><a href="../../../index-all.html">Index</a></li>
<li><a href="../../../help-doc.html">Help</a></li>
</ul>
</div>
<div class="subNav">
<ul class="navList">
<li>Prev</li>
<li>Next</li>
</ul>
<ul class="navList">
<li><a href="../../../index.html?play/mvc/results/package-use.html" target="_top">Frames</a></li>
<li><a href="package-use.html" target="_top">No Frames</a></li>
</ul>
<ul class="navList" id="allclasses_navbar_top">
<li><a href="../../../allclasses-noframe.html">All Classes</a></li>
</ul>
<div>
<script type="text/javascript"><!--
allClassesLink = document.getElementById("allclasses_navbar_top");
if(window==top) {
allClassesLink.style.display = "block";
}
else {
allClassesLink.style.display = "none";
}
//-->
</script>
</div>
<a name="skip.navbar.top">
<!-- -->
</a></div>
<!-- ========= END OF TOP NAVBAR ========= -->
<div class="header">
<h1 title="Uses of Package play.mvc.results" class="title">Uses of Package<br>play.mvc.results</h1>
</div>
<div class="contentContainer">
<ul class="blockList">
<li class="blockList">
<table class="useSummary" border="0" cellpadding="3" cellspacing="0" summary="Use table, listing packages, and an explanation">
<caption><span>Packages that use <a href="../../../play/mvc/results/package-summary.html">play.mvc.results</a></span><span class="tabEnd"> </span></caption>
<tr>
<th class="colFirst" scope="col">Package</th>
<th class="colLast" scope="col">Description</th>
</tr>
<tbody>
<tr class="altColor">
<td class="colFirst"><a href="#play">play</a></td>
<td class="colLast"> </td>
</tr>
<tr class="rowColor">
<td class="colFirst"><a href="#play.data.validation">play.data.validation</a></td>
<td class="colLast"> </td>
</tr>
<tr class="altColor">
<td class="colFirst"><a href="#play.mvc.results">play.mvc.results</a></td>
<td class="colLast"> </td>
</tr>
<tr class="rowColor">
<td class="colFirst"><a href="#play.plugins">play.plugins</a></td>
<td class="colLast"> </td>
</tr>
<tr class="altColor">
<td class="colFirst"><a href="#play.server">play.server</a></td>
<td class="colLast"> </td>
</tr>
</tbody>
</table>
</li>
<li class="blockList"><a name="play">
<!-- -->
</a>
<table class="useSummary" border="0" cellpadding="3" cellspacing="0" summary="Use table, listing classes, and an explanation">
<caption><span>Classes in <a href="../../../play/mvc/results/package-summary.html">play.mvc.results</a> used by <a href="../../../play/package-summary.html">play</a></span><span class="tabEnd"> </span></caption>
<tr>
<th class="colOne" scope="col">Class and Description</th>
</tr>
<tbody>
<tr class="altColor">
<td class="colOne"><a href="../../../play/mvc/results/class-use/Result.html#play">Result</a>
<div class="block">Result support</div>
</td>
</tr>
</tbody>
</table>
</li>
<li class="blockList"><a name="play.data.validation">
<!-- -->
</a>
<table class="useSummary" border="0" cellpadding="3" cellspacing="0" summary="Use table, listing classes, and an explanation">
<caption><span>Classes in <a href="../../../play/mvc/results/package-summary.html">play.mvc.results</a> used by <a href="../../../play/data/validation/package-summary.html">play.data.validation</a></span><span class="tabEnd"> </span></caption>
<tr>
<th class="colOne" scope="col">Class and Description</th>
</tr>
<tbody>
<tr class="altColor">
<td class="colOne"><a href="../../../play/mvc/results/class-use/Result.html#play.data.validation">Result</a>
<div class="block">Result support</div>
</td>
</tr>
</tbody>
</table>
</li>
<li class="blockList"><a name="play.mvc.results">
<!-- -->
</a>
<table class="useSummary" border="0" cellpadding="3" cellspacing="0" summary="Use table, listing classes, and an explanation">
<caption><span>Classes in <a href="../../../play/mvc/results/package-summary.html">play.mvc.results</a> used by <a href="../../../play/mvc/results/package-summary.html">play.mvc.results</a></span><span class="tabEnd"> </span></caption>
<tr>
<th class="colOne" scope="col">Class and Description</th>
</tr>
<tbody>
<tr class="altColor">
<td class="colOne"><a href="../../../play/mvc/results/class-use/Result.html#play.mvc.results">Result</a>
<div class="block">Result support</div>
</td>
</tr>
<tr class="rowColor">
<td class="colOne"><a href="../../../play/mvc/results/class-use/WebSocketResult.html#play.mvc.results">WebSocketResult</a>
<div class="block">WebSocket Result support</div>
</td>
</tr>
</tbody>
</table>
</li>
<li class="blockList"><a name="play.plugins">
<!-- -->
</a>
<table class="useSummary" border="0" cellpadding="3" cellspacing="0" summary="Use table, listing classes, and an explanation">
<caption><span>Classes in <a href="../../../play/mvc/results/package-summary.html">play.mvc.results</a> used by <a href="../../../play/plugins/package-summary.html">play.plugins</a></span><span class="tabEnd"> </span></caption>
<tr>
<th class="colOne" scope="col">Class and Description</th>
</tr>
<tbody>
<tr class="altColor">
<td class="colOne"><a href="../../../play/mvc/results/class-use/Result.html#play.plugins">Result</a>
<div class="block">Result support</div>
</td>
</tr>
</tbody>
</table>
</li>
<li class="blockList"><a name="play.server">
<!-- -->
</a>
<table class="useSummary" border="0" cellpadding="3" cellspacing="0" summary="Use table, listing classes, and an explanation">
<caption><span>Classes in <a href="../../../play/mvc/results/package-summary.html">play.mvc.results</a> used by <a href="../../../play/server/package-summary.html">play.server</a></span><span class="tabEnd"> </span></caption>
<tr>
<th class="colOne" scope="col">Class and Description</th>
</tr>
<tbody>
<tr class="altColor">
<td class="colOne"><a href="../../../play/mvc/results/class-use/NotFound.html#play.server">NotFound</a>
<div class="block">404 not found</div>
</td>
</tr>
<tr class="rowColor">
<td class="colOne"><a href="../../../play/mvc/results/class-use/RenderStatic.html#play.server">RenderStatic</a> </td>
</tr>
</tbody>
</table>
</li>
</ul>
</div>
<p class="legalCopy"><small><a href=http://guillaume.bort.fr>Guillaume Bort</a> &amp; <a href=http://www.zenexity.fr>zenexity</a> - Distributed under <a href=http://www.apache.org/licenses/LICENSE-2.0.html>Apache 2 licence</a>, without any warranty</small></p>
</body>
</html>
<?php
namespace TVListings\Domain\Repository;
use TVListings\Domain\Entity\Channel;
use TVListings\Domain\Entity\Listing;
use TVListings\Domain\Service\EntityManager;
class ChannelRepository
{
/**
* @var EntityManager
*/
private $entityManager;
public function __construct(EntityManager $entityManager)
{
$this->entityManager = $entityManager;
}
/**
* @param Channel $channel
*/
public function persist(Channel $channel)
{
$this->entityManager->persist($channel);
}
/**
* @param Channel $channel
*/
public function delete(Channel $channel)
{
$this->entityManager->remove($channel);
}
/**
* @return array
*/
public function findAll()
{
return $this->entityManager->findAll(Channel::class);
}
/**
* @param string $slug
* @return Channel|null
*/
public function findOneBySlug($slug)
{
return $this->entityManager->findOneBy(Channel::class, 'slug', $slug);
}
/**
* @param Channel $channel
* @return array
*/
public function getTodayListings(Channel $channel)
{
    // Today's listings are simply the listings for the current date.
    return $this->getListingsOf($channel, new \DateTimeImmutable('today'));
}
/**
* @param Channel $channel
* @param \DateTimeImmutable $specifiedDate
* @return array
*/
public function getListingsOf(Channel $channel, \DateTimeImmutable $specifiedDate)
{
$criteria = array(
'channel' => array(
'builder' => function ($alias) {
return sprintf("%s.channel", $alias);
},
'value' => $channel
),
'programDate' => array(
'builder' => function ($alias) {
return sprintf("DATE(%s.programDate)", $alias);
},
'value' => $specifiedDate->format('Y-m-d')
),
'orderBy' => array(
'builder' => function ($alias) {
return sprintf("%s.programDate", $alias);
},
'value' => 'ASC',
),
);
return $this->entityManager->findBy(
Listing::class,
$criteria
);
}
}
{"url":"http:\/\/math.stackexchange.com\/questions\/312920\/proving-identities-like-sum-k-1nkn-choose-k2-n2n-1-choose-n-combinato","text":"# Proving identities like $\\sum_{k=1}^nk{n\\choose k}^2=n{2n-1\\choose n}$ combinatorially\n\nI have to give a combinatorial proof of\n\n$$\\sum_{k=1}^nk{n\\choose k}^2=n{2n-1\\choose n}.$$\n\nI find it difficult to solve such problems. I'm not a brilliant person and never will be so I need to have algorithms to solve most problems. Usually I won't just \"see\" solutions unless I have a thorough enough understanding of a theory. This is also the case here. I know what the symbols mean, but I don't see what I should count and how to obtain the identity.\n\nI think the right-hand side counts $n$-element subsets of a $(2n-1)$-element set with one element chosen to be \"special\". I think I should find a partition of the set of such subsets into $n$ parts so that the left-hand side is the sum of the cardinalities of the parts. Is there $n$ of anything on the right-hand side? Well, there are $n$ choices for the \"special\" elements, but only after I've chosen the $n$-element subset. Otherwise I can choose the \"special element\" in $2n-1$ ways.\n\nThis is a homework problem so I probably shouldn't ask for a full solution. But I would like to have some guidelines for approaching such problems, perhaps based on this example.\n\n-\n\n## 4 Answers\n\nIt seems easier to look at the left-hand side :\n\n$\\sum \\binom n k ^2 = \\binom {2n} n$ is the number of ways to choose $n$ elements out of a set $X$ of $2n$ elements, where we are given a partition $X = X_1 \\cup X_2$ with $\\# X_i = n$.\nNext, $\\sum k \\binom n k ^2$ is the number of ways to do this and then choose a special element among the ones chosen in $X_1$. 
So, reversing the order of the choices (choose the special element first), then forgetting as much as possible about the partition, you get that is the number of ways to pick one special element in $X_1$ and then pick $n-1$ other elements in $X$.\n\nThen it shouldn't be too hard to explain why this is equal to the right-hand side.\n\n-\nNice approach, +1. \u2013\u00a0 1015 Feb 24 '13 at 14:40\nOut of curiosity, is there a combinatorial proof that $\\sum \\binom{n}{k}^2 = \\binom{2n}{n}$ ? \u2013\u00a0 user54147 Feb 24 '13 at 15:58\n@LevLivnev : as hinted, $\\binom n k ^2 = \\binom n k \\binom n {n-k}$ is the number of ways to choose $k$ elements in $A_1$ and $n-k$ elements in $A_2$. When you forget about the partition, this sum is the number of ways to choose $n$ elements in $A_1 \\cup A_2$, which is of size $2n$. \u2013\u00a0 mercio Feb 24 '13 at 16:02\n\nSince $\\binom{n}{k} = \\dfrac{n}{k}\\binom{n-1}{k-1}$ the identity can be written as $$\\sum_{k=1}^n\\binom{n}{n-k}\\binom{n-1}{k-1} = \\binom{2n-1}{n-1}.$$\n\nSuppose we have $2n-1$ items in line and we need to choose $n-1$ items from them. The right hand side is the number of ways to do this in the obvious way. Another way to choose our $n-1$ items from this line: Divide the line into two sections; the first section contains the first $n$ items, the second contains the last $n-1$ items. When picking $n-1$ items from the entire line, we could pick all $n-1$ from the first line and none from the second ($k=1$), or $n-2$ from the first line and $1$ from the second ($k=2$) ... ... or none from the first line and $n-1$ from the second ($k=n$).\n\n-\nI don't understand this solution. I think the identity is ${n\\choose k}={n\\over k}{n-1\\choose k-1}.$ But then I get $$\\sum_{k=1}^nk\\cdot{n\\over k}{n\\choose n-k}{n-1\\choose k-1}=n{2n-1\\choose n}$$ or $$\\sum_{k=1}^n{n\\choose n-k}{n-1\\choose k-1}={2n-1\\choose n},$$ which is different from what you wrote. 
\u2013\u00a0 Bartek Mar 3 '13 at 23:31\n@Bartek Remember that $\\binom{n}{k} = \\binom{n}{n-k}.$ \u2013\u00a0 Ragib Zaman Mar 4 '13 at 2:19\nOh! Right. But still the identity at the beginning of your post needs to be edited I think. Thank you for the solution. \u2013\u00a0 Bartek Mar 4 '13 at 2:23\n\nFor $a,b,n \\in \\mathbb{N}$ we have\n\n$$\\binom{a+b}{n} = \\sum_{p+q=n} \\binom{a}{p} \\cdot \\binom{b}{q}$$\n\nsince both sides count the subsets of $\\{1,\\dotsc,a\\} \\sqcup \\{1,\\dotsc,b\\}$ with $n$ elements. This could also be derived from the isomorphism $\\Lambda^n(V \\oplus W) = \\bigoplus_{p+q=n} \\Lambda^p(V) \\otimes \\Lambda^q(W)$ of exterior powers, where $V,W$ are free modules of rank $a$ resp. $b$ (decategorification).\n\nIn particular,\n\n$$\\sum_{k=0}^{n} k \\binom{n}{k}^2 = n \\sum_{k=0}^{n} \\binom{n}{n-k} \\binom{n-1}{k-1}=n \\binom{2n-1}{n-1}=n \\binom{2n-1}{n}.$$\n\nOne can also derive a lot of other formulas as a special case, for example\n\n$$\\sum_{k=0}^{n} \\binom{n}{k}^2 = \\sum_{k=0}^{n} \\binom{n}{k} \\binom{n}{n-k} = \\binom{2n}{n}.$$\n\n-\n\nSince you said you'd prefer a hint, I will show you how a related problem is approached. In retrospect I can see that this is not what is meant by a combinatorial proof, but maybe it will still be helpful.\n\nSuppose you want to prove that: $$\\sum^n_{s=0} \\binom{n}{s}^2 = \\binom{2n}{n}$$ To do the above, consider the binomial expansions of both sides of: $$(1+x)^n(1+x)^n = (1+x)^{2n}$$ (i.e. expand the RHS binomially and expand both parts of the LHS binmially and write out the product).\n\nI have not verified it yet but I believe a similar approach may work for the problem you provided, as long as you choose an appropriate identity to start with.\n\n-\ncombinatorial proof generally means proof by counting arguments. this might be of use. \u2013\u00a0 user45099 Feb 24 '13 at 14:41\n@user1709828 you're right, this is far from combinatorial. 
I couldn't think of a more direct way so I guess I'll just leave this here for reference. \u2013\u00a0 user54147 Feb 24 '13 at 15:21\nAlso I think it may be useful since it suggests how to prove the assertion that is made at the beginning of mercio's answer. \u2013\u00a0 user54147 Feb 24 '13 at 15:22\nin a non-combinatorial way. \u2013\u00a0 user54147 Feb 24 '13 at 17:04\nBut actually these kinds of proofs hold in every symmetric monoidal category with direct sums, in particular for the category of (finite) sets, and thus is a \"bijective proof\" in disguise. See also the wonderful paper \"From natural numbers to Feynman diagrams\" by Baez and Dolan. \u2013\u00a0 Martin Brandenburg Mar 4 '13 at 2:33","date":"2014-08-28 13:51:04","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.911932110786438, \"perplexity\": 182.51346336386658}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.3, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": false}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2014-35\/segments\/1408500830834.3\/warc\/CC-MAIN-20140820021350-00192-ip-10-180-136-8.ec2.internal.warc.gz\"}"} | null | null |
Eijiro Takeda (born 11 July 1988) is a Japanese footballer who plays for the Japanese football club Yokohama FC.
References
External links
Footballers from Japan
\section{Introduction}
Assume we have a dataset of \textit{instances} $\{(\ensuremath\boldsymbol{x}_i, y_i)\}_{i=1}^N$ with sample size $N$ and dimensionality $\ensuremath\boldsymbol{x}_i \in \mathbb{R}^d$ and $y_i \in \mathbb{R}$.
The $\{\ensuremath\boldsymbol{x}_i\}_{i=1}^N$ are the input data to the model and the $\{y_i\}_{i=1}^N$ are the observations (labels).
We denote the dataset by $\mathcal{D}$ so that $N := |\mathcal{D}|$. This dataset is the union of the disjoint subsets, i.e., training set $\mathcal{T}$ and test set $\mathcal{R}$; therefore:
\begin{align}
&\mathcal{D} = \mathcal{T} \cup \mathcal{R}, \\
&\mathcal{T} \cap \mathcal{R} = \varnothing.
\end{align}
For the training set, the observations (labels), $y_i$'s, are available. For the test set, we might also have $y_i$'s, but we do not use them for training the model. The observations are continuous in prediction (regression) tasks and come from a finite discrete set of values in classification tasks.
Assume the sample sizes of the training and test sets are $n := |\mathcal{T}|$ and $m := N-n$, respectively; therefore, we have $\{(\ensuremath\boldsymbol{x}_i, y_i)\}_{i=1}^n$ as the training set.
In some cases where we want to have validation set $\mathcal{V}$ as well, the datasets includes three disjoint subsets:
\begin{align}
&\mathcal{D} = \mathcal{T} \cup \mathcal{R} \cup \mathcal{V}, \\
&\mathcal{T} \cap \mathcal{R} = \varnothing, \mathcal{T} \cap \mathcal{V} = \varnothing, \mathcal{V} \cap \mathcal{R} = \varnothing.
\end{align}
We will define the intuitions of training, test, and validation sets later in this paper.
In this paper, we introduce overfitting, cross validation, generalized cross validation, regularization, bagging, and boosting and explain why they work theoretically.
We also provide some examples of these methods in machine learning and computer vision.
\begin{figure*}[!t]
\centering
\includegraphics[width=5in]{./images/dart}
\caption{The dart example for (a) high bias and low variance, (b) low bias and high variance, (c) high bias and high variance, and (d) low bias and low variance. The worst and best cases are (c) and (d), respectively. The center of the circles is the true value of the variable.}
\label{figure_dart}
\end{figure*}
\section{Mean Squared Error, Variance, and Bias}
\subsection{Measures for a Random Variable}
Assume we have variable $X$ and we estimate it. Let the random variable $\widehat{X}$ denote the estimate of $X$. The \textit{variance} of estimating this random variable is defined as:
\begin{align}\label{equation_variance}
\mathbb{V}\text{ar}(\widehat{X}) := \mathbb{E}\big((\widehat{X} - \mathbb{E}(\widehat{X}))^2\big),
\end{align}
which is the average squared deviation of $\widehat{X}$ from the mean of our estimate, $\mathbb{E}(\widehat{X})$; the deviation is squared so that deviations on either side of the mean contribute equally.
This variance can be restated as:
\begin{align}
\mathbb{V}\text{ar}(\widehat{X}) &= \mathbb{E}\big( \widehat{X}^2 + (\mathbb{E}(\widehat{X}))^2 - 2\widehat{X}\mathbb{E}(\widehat{X}) \big) \nonumber \\
&\overset{(a)}{=} \mathbb{E}(\widehat{X}^2) + (\mathbb{E}(\widehat{X}))^2 - 2\mathbb{E}(\widehat{X})\mathbb{E}(\widehat{X}) \nonumber \\
&= \mathbb{E}(\widehat{X}^2) - (\mathbb{E}(\widehat{X}))^2, \label{equation_variance_2}
\end{align}
where $(a)$ is because expectation is a linear operator and $\mathbb{E}(\widehat{X})$ is not a random variable.
Our estimation can have a bias. The \textit{bias} of our estimate is defined as:
\begin{align}\label{equation_bias}
\mathbb{B}\text{ias}(\widehat{X}) := \mathbb{E}(\widehat{X}) - X,
\end{align}
which means how much the mean of our estimate deviates from the original $X$.
The \textit{Mean Squared Error (MSE)} of our estimate, $\widehat{X}$, is defined as:
\begin{align}\label{equation_MSE}
\text{MSE}(\widehat{X}) := \mathbb{E}\big((\widehat{X} - X)^2\big),
\end{align}
which means how much our estimate deviates from the original $X$.
The intuition of bias, variance, and MSE is illustrated in Fig. \ref{figure_dart} where the estimations are like a dart game. We have four cases with low/high values of bias and variance which are depicted in this figure.
The relation of MSE, variance, and bias is as follows:
\begin{align}
&\text{MSE}(\widehat{X}) = \mathbb{E}\big((\widehat{X} - X)^2\big) \nonumber \\
&= \mathbb{E}\big((\widehat{X} - \mathbb{E}(\widehat{X}) + \mathbb{E}(\widehat{X}) - X)^2\big) \nonumber \\
&= \mathbb{E}\big((\widehat{X} - \mathbb{E}(\widehat{X}))^2 + (\mathbb{E}(\widehat{X}) - X)^2 \nonumber \\
&~~~~ + 2 (\widehat{X} - \mathbb{E}(\widehat{X})) (\mathbb{E}(\widehat{X}) - X) \big) \nonumber \\
&\overset{(a)}{=} \mathbb{E}\big( (\widehat{X} - \mathbb{E}(\widehat{X}))^2 \big) + (\mathbb{E}(\widehat{X}) - X)^2 \nonumber \\
&~~~~ + 2 \underbrace{(\mathbb{E}(\widehat{X}) - \mathbb{E}(\widehat{X}))}_{0} (\mathbb{E}(\widehat{X}) - X) \nonumber \\
&\overset{(b)}{=} \mathbb{V}\text{ar}(\widehat{X}) + (\mathbb{B}\text{ias}(\widehat{X}))^2, \label{equation_relation_MSE_variance_bias}
\end{align}
where $(a)$ is because expectation is a linear operator and $X$ and $\mathbb{E}(\widehat{X})$ are not random, and $(b)$ is because of Eqs. (\ref{equation_variance}) and (\ref{equation_bias}).
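The decomposition in Eq. (\ref{equation_relation_MSE_variance_bias}) can be checked numerically. The following Python sketch (our own illustration; the true value, sample size, and the shrunken-mean estimator are arbitrary choices) estimates a fixed scalar with a deliberately biased estimator and verifies that the empirical MSE equals the empirical variance plus the squared bias.

```python
import numpy as np

rng = np.random.default_rng(0)
true_x = 3.0                 # the fixed quantity X being estimated
n, trials = 10, 100_000

# A deliberately biased estimator: a shrunken sample mean hat{X}.
samples = rng.normal(true_x, 1.0, size=(trials, n))
estimates = samples.sum(axis=1) / (n + 2)    # shrinks toward zero, so biased

mse = np.mean((estimates - true_x) ** 2)     # empirical MSE
var = np.var(estimates)                      # empirical variance
bias = np.mean(estimates) - true_x           # empirical bias

# MSE = Var + Bias^2 holds exactly for the empirical moments as well.
print(mse, var + bias ** 2)
```

Note that the identity holds exactly (up to floating-point error) even for empirical moments, not only in expectation.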
If we have two random variables $\widehat{X}$ and $\widehat{Y}$, we can say:
\begin{align}
&\mathbb{V}\text{ar}(a\widehat{X} + b\widehat{Y}) \overset{(\ref{equation_variance_2})}{=} \mathbb{E}\big((a\widehat{X} + b\widehat{Y})^2\big) - \big(\mathbb{E}(a\widehat{X} + b\widehat{Y})\big)^2 \nonumber \\
&\overset{(a)}{=} a^2\, \mathbb{E}(\widehat{X}^2) + b^2\, \mathbb{E}(\widehat{Y}^2) + 2ab\, \mathbb{E}(\widehat{X}\widehat{Y}) \nonumber \\
&~~~~ - a^2\, (\mathbb{E}(\widehat{X}))^2 - b^2\, (\mathbb{E}(\widehat{Y}))^2 - 2ab\, \mathbb{E}(\widehat{X})\,\mathbb{E}(\widehat{Y}) \nonumber \\
&\overset{(\ref{equation_variance_2})}{=} a^2\, \mathbb{V}\text{ar}(\widehat{X}) + b^2\, \mathbb{V}\text{ar}(\widehat{Y}) + 2ab\, \mathbb{C}\text{ov}(\widehat{X},\widehat{Y}), \label{equation_variance_of_two_variables}
\end{align}
where $(a)$ is because of linearity of expectation and the $\mathbb{C}\text{ov}(\widehat{X},\widehat{Y})$ is \textit{covariance} defined as:
\begin{align}\label{equation_covariance}
\mathbb{C}\text{ov}(\widehat{X},\widehat{Y}) := \mathbb{E}(\widehat{X}\widehat{Y}) - \mathbb{E}(\widehat{X})\,\mathbb{E}(\widehat{Y}).
\end{align}
If the two random variables are independent, i.e., $X \perp\!\!\!\perp Y$, we have:
\begin{align}
&\mathbb{E}(\widehat{X}\widehat{Y}) \overset{(a)}{=} \int\!\!\! \int \widehat{x} \widehat{y} f(\widehat{x}, \widehat{y}) d\widehat{x} d\widehat{y} \overset{\perp\!\!\!\perp}{=} \int\!\!\! \int \widehat{x} \widehat{y} f(\widehat{x}) f(\widehat{y}) d\widehat{x} d\widehat{y} \nonumber \\
&= \int \widehat{y} f(\widehat{y}) \underbrace{\int \widehat{x} f(\widehat{x}) d\widehat{x}}_{\mathbb{E}(\widehat{X})} d\widehat{y} = \mathbb{E}(\widehat{X}) \underbrace{\int \widehat{y} f(\widehat{y}) d\widehat{y}}_{\mathbb{E}(\widehat{Y})} \nonumber \\
&= \mathbb{E}(\widehat{X})\, \mathbb{E}(\widehat{Y}) \implies \mathbb{C}\text{ov}(\widehat{X},\widehat{Y}) = 0, \label{equation_expectation_independent}
\end{align}
where $(a)$ is according to definition of expectation.
Note that the reverse implication in Eq. (\ref{equation_expectation_independent}) does not hold; zero covariance does not imply independence, which can be shown by a counterexample.
We can extend Eqs. (\ref{equation_variance_of_two_variables}) and (\ref{equation_covariance}) to multiple random variables:
\begin{align}
&\mathbb{V}\text{ar}\Big(\sum_{i=1}^k a_i X_i \Big) \nonumber \\
&~~~~~~~ = \sum_{i=1}^k a_i^2\, \mathbb{V}\text{ar}(X_i) + \sum_{i=1}^k \sum_{j=1, j\neq i}^k a_i a_j \mathbb{C}\text{ov}(X_i, X_j), \label{equation_variance_multiple} \\
&\mathbb{C}\text{ov}\Big( \sum_{i=1}^{k_1} a_i X_i, \sum_{j=1}^{k_2} b_j Y_j \Big) = \sum_{i=1}^{k_1} \sum_{j=1}^{k_2} a_i\, b_j\, \mathbb{C}\text{ov}(X_i, Y_j),
\end{align}
where $a_i$'s and $b_j$'s are not random.
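Eqs. (\ref{equation_variance_of_two_variables}) and (\ref{equation_covariance}) can also be checked empirically. The sketch below (an illustration with arbitrary coefficients and an arbitrarily chosen correlation between the two variables) verifies the variance of a linear combination of two correlated variables.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=100_000)
y = 0.6 * x + rng.normal(size=100_000)   # deliberately correlated with x
a, b = 2.0, -1.5                         # arbitrary non-random coefficients

lhs = np.var(a * x + b * y)
cov = np.mean(x * y) - np.mean(x) * np.mean(y)   # empirical covariance
rhs = a ** 2 * np.var(x) + b ** 2 * np.var(y) + 2 * a * b * cov
print(lhs, rhs)   # the two agree up to floating-point error
```

As with the bias--variance decomposition, the identity holds exactly for the empirical moments, which makes it a convenient unit test.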
\subsection{Measures for a Model}
Assume we have a \textit{function} $f$ which gets the $i$-th input $\ensuremath\boldsymbol{x}_i$ and outputs $f_i = f(\ensuremath\boldsymbol{x}_i)$. Figure \ref{figure_model} shows this function and its input and output.
We wish to know the function which we call it the \textit{true model} but we do not have access to it as it is unknown. Also, the pure outputs (true observations), $f_i$'s, are not available. The output may be corrupted with an additive noise $\varepsilon_i$:
\begin{align}\label{equation_y_f_varepsilon}
y_i = f_i + \varepsilon_i,
\end{align}
where the noise is $\varepsilon_i \sim \mathcal{N}(0, \sigma^2)$. Therefore:
\begin{align}\label{equation_noise_mean_variance}
\mathbb{E}(\varepsilon_i) = 0, ~~~~ \mathbb{E}(\varepsilon_i^2) \overset{(\ref{equation_variance_2})}{=} \mathbb{V}\text{ar}(\varepsilon_i) + (\mathbb{E}(\varepsilon_i))^2 = \sigma^2.
\end{align}
The true observation $f_i$ is not random, thus:
\begin{align}\label{equation_expectationOf_model}
\mathbb{E}(f_i) = f_i.
\end{align}
The input training data $\{\ensuremath\boldsymbol{x}_i\}_{i=1}^n$ and their corrupted observations $\{y_i\}_{i=1}^n$ are available to us. We would like to approximate (estimate) the true model by a \textit{model} $\widehat{f}$ in order to estimate the observations $\{y_i\}_{i=1}^n$ from the input $\{\ensuremath\boldsymbol{x}_i\}_{i=1}^n$. Calling the estimated observations by $\{\widehat{y}_i\}_{i=1}^n$, we want the $\{\widehat{y}_i\}_{i=1}^n$ to be as close as possible to $\{y_i\}_{i=1}^n$ for the training input data $\{\ensuremath\boldsymbol{x}_i\}_{i=1}^n$. We train the model using the training data in order to estimate the true model. After training the model, it can be used to estimate the output of the model for both the training input $\{\ensuremath\boldsymbol{x}_i\}_{i=1}^n$ and the unseen test input $\{\ensuremath\boldsymbol{x}_i\}_{i=1}^m$ to have the estimates $\{\widehat{y}_i\}_{i=1}^n$ and $\{\widehat{y}_i\}_{i=1}^m$, respectively. The explained details are illustrated in Fig. \ref{figure_model}.
In this work, we denote the estimation of the observation of the $i$-th instance with either $\widehat{y}_i$ or $\widehat{f}_i$.
The model can be a \textit{regression (prediction)} or \textit{classification} model. In regression, the model's estimation is continuous while in classification, the estimation is a member of a discrete set of possible observations.
\begin{figure}[!t]
\centering
\includegraphics[width=3in]{./images/model}
\caption{The true model and the estimated model which is trained using the input training data and their observations. The observations are obtained from the outputs of the true model fed with the training input but corrupted by the noise. After the model is trained, it can be used to estimate the observation for either training or test input data.}
\label{figure_model}
\end{figure}
The definitions of variance, bias, and MSE, i.e., Eqs. (\ref{equation_variance}), (\ref{equation_bias}), and (\ref{equation_MSE}), can also be used for the estimation $\widehat{f}_i$ of the true model $f_i$.
The Eq. (\ref{equation_relation_MSE_variance_bias}) can be illustrated for the model $f$ as in Fig. \ref{figure_triangle_model} which holds because of Pythagorean theorem.
\begin{figure}[!t]
\centering
\includegraphics[width=1.7in]{./images/triangle_model}
\caption{The triangle of variance, bias, and MSE. The $f$ and $\widehat{f}$ are the true and estimated models, respectively.}
\label{figure_triangle_model}
\end{figure}
\subsection{Measures for Ensemble of Models}
If we have an ensemble of models \cite{polikar2012ensemble}, we can have some similar definitions of bias and variance (e.g., see the Appendix C in \cite{schapire1998boosting}).
Here, we assume the models are classifiers.
If $\mathbb{P}(.)$ denotes the probability, the \textit{expected error} or \textit{prediction error} (PE) of the model $f$ is defined as:
\begin{align}
\text{PE}(f) := \mathbb{P}(\widehat{f}_i \neq y_i),
\end{align}
where $\widehat{f}_i$ is the estimation of trained model for the observation $y_i$ (input $\ensuremath\boldsymbol{x}_i$).
As a side note, the Bayesian classifier is the optimal classifier because it can be seen as an ensemble of hypotheses (models) in the hypothesis (model) space, and no other ensemble of hypotheses can outperform it (see Chapter 6, Page 175 in \cite{mitchell1997machine}). In the literature, it is referred to as the \textit{Bayes optimal classifier}. However, implementing the Bayesian classifier is difficult, so it is approximated by \textit{naive Bayes} \cite{zhang2004optimality}.
Back to our main discussion, let $\widehat{f}^*$ denote the Bayes optimal prediction.
Also, let the estimate of each of the trained models by an ensemble learning method (such as bagging or boosting which will be introduced later) be denoted by $\widehat{f}$ which is trained using $\{(\ensuremath\boldsymbol{x}_i, y_i)\}_{i=1}^n$. Finally, let $\widehat{f}^m$ denote the classification using majority voting between the models.
The bias and variance of the model can be defined as \cite{kong1995error}:
\begin{align}
&\mathbb{B}\text{ias}(\widehat{f}) := \text{PE}(\widehat{f}^m) - \text{PE}(\widehat{f}^*), \\
&\mathbb{V}\text{ar}(\widehat{f}) := \mathbb{E}(\text{PE}(\widehat{f})) - \text{PE}(\widehat{f}^m).
\end{align}
There also exists another definition in the literature.
Suppose the sample space of data is the union of two disjoint subsets $\mathcal{U}$ and $\mathcal{B}$ which are the unbiased and biased sets with $\widehat{f}_i^m = \widehat{f}_i^*$ and $\widehat{f}_i^m \neq \widehat{f}_i^*$, respectively. We can define \cite{breiman1998arcing}:
\begin{align}
&\mathbb{B}\text{ias}(\widehat{f}_i) \nonumber \\
&:= \mathbb{P}(\widehat{f}_i^* = y_i, \ensuremath\boldsymbol{x}_i \in \mathcal{B}) - \mathbb{E}(\mathbb{P}(\widehat{f}_i = y_i, \ensuremath\boldsymbol{x}_i \in \mathcal{B})), \\
&\mathbb{V}\text{ar}(\widehat{f}_i) \nonumber \\
&:= \mathbb{P}(\widehat{f}_i^* = y_i, \ensuremath\boldsymbol{x}_i \in \mathcal{U}) - \mathbb{E}(\mathbb{P}(\widehat{f}_i = y_i, \ensuremath\boldsymbol{x}_i \in \mathcal{U})).
\end{align}
\section{Mean Squared Error of the Estimation of Observations}
Suppose we have an instance $(\ensuremath\boldsymbol{x}_0, y_0)$. This instance can be either a training or test/validation instance. We will cover both cases.
According to Eq. (\ref{equation_y_f_varepsilon}), the observation $y_0$ is:
\begin{align}\label{equation_y0}
y_0 = f_0 + \varepsilon_0.
\end{align}
Assume the model's estimation of $y_0$ is $\widehat{f}_0$.
According to Eq. (\ref{equation_MSE}), the MSE of the estimation is:
\begin{align}
\mathbb{E}\big(&(\widehat{f}_0 - y_0)^2 \big) \overset{(\ref{equation_y0})}{=} \mathbb{E}\big((\widehat{f}_0 - f_0 - \varepsilon_0)^2 \big) \nonumber \\
&= \mathbb{E}\big((\widehat{f}_0 - f_0)^2 + \varepsilon_0^2 -2\,\varepsilon_0 (\widehat{f}_0 - f_0) \big) \nonumber \\
&= \mathbb{E}\big((\widehat{f}_0 - f_0)^2\big) + \mathbb{E}(\varepsilon_0^2) -2\,\mathbb{E}\big(\varepsilon_0 (\widehat{f}_0 - f_0) \big) \nonumber \\
&\overset{(\ref{equation_noise_mean_variance})}{=} \mathbb{E}\big((\widehat{f}_0 - f_0)^2\big) + \sigma^2 -2\,\mathbb{E}\big(\varepsilon_0 (\widehat{f}_0 - f_0) \big). \label{equation_MSE_y0}
\end{align}
The last term is:
\begin{align}
\mathbb{E}\big(\varepsilon_0 (\widehat{f}_0 - f_0) \big) \overset{(\ref{equation_y0})}{=} \mathbb{E}\big( (y_0 - f_0) (\widehat{f}_0 - f_0) \big).
\end{align}
For calculation of this term, we have two cases: (I) whether the instance $(\ensuremath\boldsymbol{x}_0, y_0)$ is in the training set or (II) not in the training set. In other words, whether the instance was used to train the model (estimator) or not.
\subsection{Case I: Instance not in the Training Set}
Assume the instance $(\ensuremath\boldsymbol{x}_0, y_0)$ was not in the training set, i.e., it was not used for training the model.
In other words, we have $(\ensuremath\boldsymbol{x}_0, y_0) \notin \mathcal{T}$.
This means that the estimation $\widehat{f}_0$ is independent of the observation $y_0$ because the observation was not used to train the model but the estimation is obtained from the model.
Therefore:
\begin{align*}
&\therefore ~~~ y_0 \perp\!\!\!\perp \widehat{f}_0 \implies (y_0 - f_0) \perp\!\!\!\perp (\widehat{f}_0 - f_0) \\
&\implies \mathbb{E}\big( (y_0 - f_0) (\widehat{f}_0 - f_0) \big) \\
&\overset{(a)}{=} \mathbb{E}\big( (y_0 - f_0) \big) \, \mathbb{E}\big( (\widehat{f}_0 - f_0) \big) \overset{(b)}{=} 0 \times \mathbb{E}\big( (\widehat{f}_0 - f_0) \big) = 0,
\end{align*}
where $(a)$ is because $(y_0 - f_0) \perp\!\!\!\perp (\widehat{f}_0 - f_0)$ and $(b)$ is because:
\begin{align*}
\mathbb{E}\big( (y_0 - f_0) \big) = \mathbb{E}(y_0) - \mathbb{E}(f_0) \overset{(c)}{=} f_0 - f_0 = 0,
\end{align*}
where $(c)$ is because of Eq. (\ref{equation_expectationOf_model}) and:
\begin{align*}
\mathbb{E}(y_0) \overset{(\ref{equation_y0})}{=} \mathbb{E}(f_0) + \mathbb{E}(\varepsilon_0) = f_0 + 0 = f_0.
\end{align*}
Therefore, in this case, the last term in Eq. (\ref{equation_MSE_y0}) is zero. Thus:
\begin{align}
\mathbb{E}\big(&(\widehat{f}_0 - y_0)^2 \big) = \mathbb{E}\big((\widehat{f}_0 - f_0)^2\big) + \sigma^2
\end{align}
Suppose the number of instances which are not in the training set is $m$. We can sum the MSE over all the $m$ instances:
\begin{align}\label{equation_MSE_test_instance_2}
\sum_{i=1}^m (\widehat{f}_i - y_i)^2 = \sum_{i=1}^m (\widehat{f}_i - f_i)^2 + \underbrace{\sum_{i=1}^m \sigma^2}_{= m \sigma^2}.
\end{align}
The left-hand side, $\sum_{i=1}^m (\widehat{f}_i - y_i)^2$, is the error between the estimates and the noisy observations. This error is referred to as \textit{empirical error} or \textit{training error} and is denoted by \textbf{err}.
The first term on the right-hand side, $\sum_{i=1}^m (\widehat{f}_i - f_i)^2$, is the error between the estimates and the noiseless outputs of the true model. This error is referred to as \textit{true error}, \textit{test error}, or \textit{generalization error} and is denoted by \textbf{Err}.
Therefore:
\begin{align}\label{equation_MSE_test_instance}
\textbf{Err} = \textbf{err} - m\, \sigma^2.
\end{align}
Hence, in this case, the empirical error is a good estimation of the true error. Thus, we can minimize the empirical error in order to properly minimize the true error.
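This relation can be observed numerically. The following sketch (a toy linear example with an arbitrary noise level; the sample sizes and number of Monte Carlo trials are illustrative choices) fits a model once, then averages the held-out squared error over many fresh noise draws, recovering $\textbf{err} \approx \textbf{Err} + m\,\sigma^2$.

```python
import numpy as np

rng = np.random.default_rng(3)
sigma = 0.5

def f(x):                     # true model: a line, chosen for simplicity
    return 2.0 * x + 1.0

# Fit hat{f} once on a noisy training sample; it stays fixed afterwards.
x_tr = np.linspace(0.0, 1.0, 20)
y_tr = f(x_tr) + rng.normal(0.0, sigma, 20)
slope, intercept = np.polyfit(x_tr, y_tr, 1)

# Held-out inputs: their noisy labels are independent of hat{f}.
x_te = np.linspace(0.0, 1.0, 50)
m = x_te.size
f_hat_te = slope * x_te + intercept

trials = 10_000
err = 0.0
for _ in range(trials):       # average over fresh noise on held-out labels
    y_te = f(x_te) + rng.normal(0.0, sigma, m)
    err += np.sum((f_hat_te - y_te) ** 2)
err /= trials

Err = np.sum((f_hat_te - f(x_te)) ** 2)   # the true error
print(err, Err + m * sigma ** 2)          # approximately equal
```

The gap between the two printed values shrinks as the number of Monte Carlo trials grows.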
\subsection{Case II: Instance in the Training Set}
Consider a multivariate random variable $\mathbb{R}^d \ni \ensuremath\boldsymbol{z} = [z_1, \dots, z_d]^\top$ whose components are independent random variables with normal distribution, i.e., $z_i \sim \mathcal{N}(\mu_i, \sigma^2)$. Take $\mathbb{R}^d \ni \ensuremath\boldsymbol{\mu} = [\mu_1, \dots, \mu_d]^\top$ and let $\mathbb{R}^d \ni \ensuremath\boldsymbol{g}(\ensuremath\boldsymbol{z}) = [g_1, \dots, g_d]^\top$ be a function of the random variable $\ensuremath\boldsymbol{z}$ with $\ensuremath\boldsymbol{g}(\ensuremath\boldsymbol{z}): \mathbb{R}^d \rightarrow \mathbb{R}^d$.
There exists a lemma, named \textit{Stein's Lemma}, which states:
\begin{align}\label{equation_stien_lemma_vector}
\mathbb{E}\big((\ensuremath\boldsymbol{z} - \ensuremath\boldsymbol{\mu})^\top\, \ensuremath\boldsymbol{g}(\ensuremath\boldsymbol{z})\big) = \sigma^2\, \sum_{i=1}^d \mathbb{E}\big(\frac{\partial g_i}{\partial z_i}\big),
\end{align}
which is used in Stein's Unbiased Risk Estimate (SURE) \cite{stein1981estimation}. See Appendix \ref{section_stein_proof} in this tutorial paper for the proof of Eq. (\ref{equation_stien_lemma_vector}).
If the random variable is a univariate variable, Stein's lemma becomes:
\begin{align}\label{equation_stien_lemma_scalar}
\mathbb{E}\big((z - \mu)\, g(z)\big) = \sigma^2\, \mathbb{E}\big(\frac{\partial g(z)}{\partial z}\big).
\end{align}
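As a quick sanity check of Eq. (\ref{equation_stien_lemma_scalar}), the identity can be verified by Monte Carlo simulation; the following sketch (the choice $g(z) = z^3$ and the values of $\mu$ and $\sigma$ are arbitrary assumptions for illustration) compares the two sides:

```python
import numpy as np

# Monte Carlo check of the univariate Stein's lemma:
#   E((z - mu) g(z)) = sigma^2 E(g'(z))
# The choice g(z) = z**3 and the parameter values are arbitrary assumptions.
rng = np.random.default_rng(0)
mu, sigma = 1.5, 0.7
z = rng.normal(mu, sigma, size=1_000_000)

g = z**3             # g(z)
g_prime = 3 * z**2   # dg/dz

lhs = np.mean((z - mu) * g)        # estimate of E((z - mu) g(z))
rhs = sigma**2 * np.mean(g_prime)  # estimate of sigma^2 E(g'(z))
print(lhs, rhs)  # the two estimates agree up to Monte Carlo error
```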
Suppose we take $\varepsilon_0$, $0$, and $\widehat{f}_0 - f_0$ as the $z$, $\mu$, and $g(z)$, respectively.
Using Eq. (\ref{equation_stien_lemma_scalar}), the last term in Eq. (\ref{equation_MSE_y0}) is:
\begin{align*}
\mathbb{E}\big((\varepsilon_0 - 0) &(\widehat{f}_0 - f_0) \big) = \sigma^2\, \mathbb{E}\big(\frac{\partial (\widehat{f}_0 - f_0)}{\partial \varepsilon_0}\big) \\
&= \sigma^2\, \mathbb{E}\big(\frac{\partial \widehat{f}_0}{\partial \varepsilon_0} - \frac{\partial f_0}{\partial \varepsilon_0}\big) \overset{(a)}{=} \sigma^2\, \mathbb{E}\big(\frac{\partial \widehat{f}_0}{\partial \varepsilon_0}\big) \\
&\overset{(b)}{=} \sigma^2\, \mathbb{E}\big(\frac{\partial \widehat{f}_0}{\partial y_0} \times \frac{\partial y_0}{\partial \varepsilon_0}\big) \overset{(c)}{=} \sigma^2\, \mathbb{E}\big(\frac{\partial \widehat{f}_0}{\partial y_0} \big),
\end{align*}
where $(a)$ is because the true model $f$ is not dependent on the noise, $(b)$ is because of the chain rule in derivative, and $(c)$ is because:
\begin{align*}
y_0 \overset{(\ref{equation_y0})}{=} f_0 + \varepsilon_0 \implies \frac{\partial y_0}{\partial \varepsilon_0} = 1.
\end{align*}
Therefore, in this case, Eq. (\ref{equation_MSE_y0}) becomes:
\begin{align}
\mathbb{E}\big(&(\widehat{f}_0 - y_0)^2 \big) = \mathbb{E}\big((\widehat{f}_0 - f_0)^2\big) + \sigma^2 - 2\sigma^2\mathbb{E}\big(\frac{\partial \widehat{f}_0}{\partial y_0} \big).
\end{align}
Suppose the number of training instances is $n$. We can sum the MSE over all the $n$ training instances:
\begin{align}
\sum_{i=1}^n &(\widehat{f}_i - y_i)^2 = \sum_{i=1}^n (\widehat{f}_i - f_i)^2 + \underbrace{\sum_{i=1}^n \sigma^2}_{= n \sigma^2} - 2\sigma^2 \sum_{i=1}^n \frac{\partial \widehat{f}_i}{\partial y_i}.
\end{align}
The left-hand side is the empirical error (denoted by \textbf{err}) and the first term on the right-hand side is the true error (denoted by \textbf{Err}). Therefore:
\begin{align}\label{equation_MSE_train_instance}
\textbf{Err} = \textbf{err} - n\, \sigma^2 + 2\,\sigma^2 \sum_{i=1}^n \frac{\partial \widehat{f}_i}{\partial y_i},
\end{align}
which is the \textit{Stein's Unbiased Risk Estimate (SURE)} \cite{stein1981estimation}.
The last term in Eq. (\ref{equation_MSE_train_instance}) is a measure of the \textit{complexity} (or \textit{overfitting}) of the model. The derivative $\partial \widehat{f}_i/\partial y_i$ asks: if we move the $i$-th training instance, how much does the model's estimation of that instance change? This measures how complex or overfitted the model is. For better understanding, consider a line fitted to a training set by least squares. If we move a point, the line does not change significantly because the model is simple (possibly underfitted). On the other hand, consider a regression curve passing through ``all'' the points. If we move a training point, the curve changes noticeably because the model is very complex (overfitted). See Fig. \ref{figure_overfitting_points} illustrating these examples.
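For models that are linear in the observations, $\widehat{\ensuremath\boldsymbol{y}} = \ensuremath\boldsymbol{\Gamma}\, \ensuremath\boldsymbol{y}$, the derivative $\partial \widehat{f}_i / \partial y_i$ is the $i$-th diagonal entry of $\ensuremath\boldsymbol{\Gamma}$, so $\sum_i \partial \widehat{f}_i / \partial y_i = \textbf{tr}(\ensuremath\boldsymbol{\Gamma})$. The following is a minimal sketch of SURE in that setting; the data-generating process and the noise level $\sigma$ are assumptions for illustration:

```python
import numpy as np

# SURE for a model that is linear in the observations, yhat = Gamma @ y,
# where sum_i d(yhat_i)/d(y_i) = trace(Gamma). The data-generating process
# and the noise level sigma are assumptions for this sketch.
rng = np.random.default_rng(1)
n, sigma = 50, 0.5
X = np.column_stack([np.ones(n), rng.uniform(0, 1, n)])
f_true = 2.0 + 3.0 * X[:, 1]              # the (normally unknown) true model
y = f_true + rng.normal(0, sigma, n)      # noisy observations

Gamma = X @ np.linalg.solve(X.T @ X, X.T) # hat matrix of least squares
y_hat = Gamma @ y

err = np.sum((y_hat - y) ** 2)            # empirical (training) error
complexity = np.trace(Gamma)              # = sum_i d(yhat_i)/d(y_i)
Err_sure = err - n * sigma**2 + 2 * sigma**2 * complexity
Err_true = np.sum((y_hat - f_true) ** 2)  # computable only because the data is synthetic
print(Err_sure, Err_true)
```

Note that \texttt{Err\_sure} estimates the true error without access to the noiseless $f$, which is the point of SURE.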
\begin{figure}[!t]
\centering
\includegraphics[width=2.5in]{./images/overfitting_points}
\caption{An example for a simple and a complex model: (a) a simple model, (b) the modified simple model after moving a training instance (shown in green), (c) a complex model, and (d) the modified complex model after moving a training instance (shown in green). The complex model is impacted significantly by moving the instance while the simple model is not that much affected.}
\label{figure_overfitting_points}
\end{figure}
According to Eq. (\ref{equation_MSE_train_instance}), in the case where the instance is in the training set, the empirical error is not a good estimation of the true error. The reason is that minimizing \textbf{err} usually increases the complexity of the model, cancelling out the minimization of \textbf{Err} after some level of training.
\subsection{Estimation of $\sigma$ in MSE of the Model}
The MSE of the model in both of the mentioned cases includes $\sigma$, which is the standard deviation of the noise. An unbiased estimation of the variance of the noise is:
\begin{align}\label{equation_MSE_sigma_estimation}
\sigma^2 \approx \frac{1}{n-1} \sum_{i=1}^n (y_i - \widehat{f}_i)^2,
\end{align}
which uses the training observations and their estimation by the model. However, the model's estimations of the training observations are themselves dependent on the complexity of the model.
For the explained problem, in practice, we use an estimator with high bias and low variance to estimate $\sigma$, so that the estimation does not depend on the complexity of the model. For example, we fit a line to the training data by least squares (linear regression) to obtain the estimations $\widehat{f}_i$ of the $y_i$'s and then use Eq. (\ref{equation_MSE_sigma_estimation}). Thereafter, for the sake of simplicity, we do not change $\sigma$ (we assume it is fixed) when we change the complexity of the model.
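The following is a minimal sketch of this procedure, assuming a synthetic dataset: fit a line by least squares and estimate $\sigma^2$ from its residuals via Eq. (\ref{equation_MSE_sigma_estimation}):

```python
import numpy as np

# Estimating the noise variance with a high-bias/low-variance estimator:
# fit a line by least squares and apply the residual-based estimate.
# The data-generating process below is an assumption for illustration.
rng = np.random.default_rng(2)
n, sigma_true = 200, 0.3
x = rng.uniform(0, 1, n)
y = 1.0 + 2.0 * x + rng.normal(0, sigma_true, n)

X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]  # linear regression fit
residuals = y - X @ beta
sigma2_hat = np.sum(residuals**2) / (n - 1)  # estimate of sigma^2
print(sigma2_hat)  # close to sigma_true**2 = 0.09
```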
\section{Overfitting, Underfitting, and Generalization}\label{section_overfitting}
If the model is trained in an extremely simple way so that its estimation has low variance but high bias, we have \textit{underfitting}. Note that underfitting is also referred to as \textit{over-generalization}.
On the other hand, if the model is trained in an extremely complex way so that its estimation has high variance but low bias, we have \textit{overfitting}. To summarize:
\begin{itemize}
\setlength{\parskip}{0pt}
\setlength{\itemsep}{0pt plus 1pt}
\item in underfitting: low variance, high bias, and low complexity.
\item in overfitting: high variance, low bias, and high complexity.
\end{itemize}
An example of underfitting, good fit, and overfitting is illustrated in Fig. \ref{figure_overfitting_example}. As this figure shows, in both underfitting and overfitting, the estimation of a test instance might be very weak, while in a good fit, the test instance, which was not seen in the training phase, is estimated well with a smaller error. The ability of the model to estimate unseen test (out-of-sample) data is referred to as \textit{generalization}. The lack of generalization is the reason why both overfitting and underfitting, especially overfitting, are not acceptable. In overfitting, the training error, i.e., \textbf{err}, is very small while the test (true) error, i.e., \textbf{Err}, is usually awful!
\begin{figure}[!t]
\centering
\includegraphics[width=2.2in]{./images/overfitting}
\caption{An example for (a) underfitting, (b) good fit, and (c) overfitting. The black circles and red square are training and test instances, respectively. The red curve is the fitted curve.}
\label{figure_overfitting_example}
\end{figure}
\section{Cross Validation}\label{section_crossValidation}
\subsection{Definition}
In order to either (I) find out until which complexity we should train the model or (II) tune the parameters of the model, we should use \textit{cross validation} \cite{arlot2010survey}.
In cross validation, we divide the dataset $\mathcal{D}$ into two partitions, i.e., training set denoted by $\mathcal{T}$ and test set denoted by $\mathcal{R}$ where the union of these two subsets is the whole dataset and the intersection of them is the empty set:
\begin{align}
\mathcal{T} \cup \mathcal{R} = \mathcal{D}, \\
\mathcal{T} \cap \mathcal{R} = \varnothing.
\end{align}
The $\mathcal{T}$ is used for training the model. After the model is trained, the $\mathcal{R}$ is used for testing the performance of the model.
We have different methods for cross validation. Two of the most well-known methods for cross validation are $K$-fold cross validation and Leave-One-Out Cross Validation (LOOCV).
In $K$-fold cross validation, we randomly split the dataset $\mathcal{D}$ into $K$ partitions $\{\mathcal{D}_1, \dots, \mathcal{D}_K\}$ where:
\begin{align}
& |\mathcal{D}_1| \approx |\mathcal{D}_2| \approx \dots \approx |\mathcal{D}_K|, \\
& \bigcup_{i=1}^K \mathcal{D}_i = \mathcal{D}, \\
& \mathcal{D}_i \cap \mathcal{D}_j = \varnothing, ~~~ \forall i,j \in \{1, \dots, K\}, ~ i \neq j,
\end{align}
where $|.|$ denotes the cardinality of a set. Sometimes, the dataset $\mathcal{D}$ is shuffled before the cross validation for better randomization. Moreover, both simple random sampling without replacement and stratified sampling \cite{barnett1974elements} can be used for this splitting.
The $K$-fold cross validation includes $K$ iterations; in each of them, one of the partitions is used as the test set and the rest of the data is used for training. The overall estimation error is the average test error over the iterations.
Note that we usually have $K = 2,5,10$ in the literature but $K=10$ is the most common.
The algorithm of $K$-fold cross validation is shown in Algorithm \ref{algorithm_K_fold_CV}.
\begin{figure}[!t]
\centering
\includegraphics[width=3.2in]{./images/overfitting_curve}
\caption{The overfitting of model: (a) training error and true error, (b) depiction of Eq. (\ref{equation_MSE_train_instance}).}
\label{figure_overfitting_curve}
\end{figure}
\SetAlCapSkip{0.5em}
\IncMargin{0.8em}
\begin{algorithm2e}[!t]
\DontPrintSemicolon
Randomly split $\mathcal{D}$ into $K$ partitions with almost equal sizes.\;
\For{$k$ from $1$ to $K$}{
$\mathcal{R}$ $\gets$ Partition $k$ from $\mathcal{D}$.\;
$\mathcal{T}$ $\gets$ $\mathcal{D} \setminus \mathcal{R}$.\;
Use $\mathcal{T}$ to train the model.\;
$\textbf{Err}_k \gets$ Use the trained model to predict $\mathcal{R}$.\;
}
$\textbf{Err} \gets \frac{1}{K} \sum_{k=1}^K \textbf{Err}_k$\;
\caption{$K$-fold Cross Validation}\label{algorithm_K_fold_CV}
\end{algorithm2e}
\DecMargin{0.8em}
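A minimal sketch of Algorithm \ref{algorithm_K_fold_CV} follows; the polynomial model, the squared-error metric, and the synthetic data are assumptions for illustration:

```python
import numpy as np

# A sketch of K-fold cross validation; the polynomial model, the squared-error
# metric, and the synthetic data are assumptions for illustration.
def k_fold_cv(X, y, train, predict, K=10, seed=0):
    idx = np.random.default_rng(seed).permutation(len(y))  # shuffle D
    folds = np.array_split(idx, K)           # K partitions of almost equal size
    errs = []
    for k in range(K):
        test_idx = folds[k]                  # R <- partition k
        train_idx = np.concatenate([folds[j] for j in range(K) if j != k])  # T <- D \ R
        model = train(X[train_idx], y[train_idx])
        errs.append(np.mean((predict(model, X[test_idx]) - y[test_idx]) ** 2))
    return np.mean(errs)                     # Err <- average of Err_k

# usage: a degree-1 polynomial as the model
rng = np.random.default_rng(3)
X = rng.uniform(0, 1, 100)
y = 2 * X + rng.normal(0, 0.1, 100)
Err = k_fold_cv(X, y,
                train=lambda Xt, yt: np.polyfit(Xt, yt, 1),
                predict=lambda m, Xr: np.polyval(m, Xr))
print(Err)
```

Setting $K = N$ in this sketch recovers LOOCV (Algorithm \ref{algorithm_LOOCV}).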
In LOOCV, we iterate for $|\mathcal{D}|=N$ times and in each iteration, we take one instance as the $\mathcal{R}$ (so that $|\mathcal{R}| = 1$) and the rest of instances as the training set. The overall estimation error is the average test error of iterations.
The algorithm of LOOCV is shown in Algorithm \ref{algorithm_LOOCV}.
Usually, when the size of the dataset is small, LOOCV is used in order to use most of the dataset for training and then test the model properly.
If we want to train the model and then test it, the cross validation should be done using training and test sets as explained. Note that the test set and the training set should be disjoint, i.e., $\mathcal{T} \cap \mathcal{R} = \varnothing$; otherwise, we are introducing the whole or a part of the test instances to the model to learn them. Of course, in that way, the model will learn to estimate the test instances easier and better; however, in the real-world applications, the test data is not available at the time of training. Therefore, if we mistakenly have $\mathcal{T} \cap \mathcal{R} \neq \varnothing$, it is referred to as \textit{cheating} in machine learning (we call it \textit{cheating \#1} here).
\SetAlCapSkip{0.5em}
\IncMargin{0.8em}
\begin{algorithm2e}[!t]
\DontPrintSemicolon
\For{$k$ from $1$ to $|\mathcal{D}|=N$}{
$\mathcal{R}$ $\gets$ Take the $k$-th instance from $\mathcal{D}$.\;
$\mathcal{T}$ $\gets$ $\mathcal{D} \setminus \mathcal{R}$.\;
Use $\mathcal{T}$ to train the model.\;
$\textbf{Err}_k \gets$ Use the trained model to predict $\mathcal{R}$.\;
}
$\textbf{Err} \gets \frac{1}{|\mathcal{D}|} \sum_{k=1}^{|\mathcal{D}|} \textbf{Err}_k$\;
\caption{Leave-One-Out Cross Validation}\label{algorithm_LOOCV}
\end{algorithm2e}
\DecMargin{0.8em}
In some cases, the model has some parameters which need to be determined. In this case, we split the data $\mathcal{D}$ to three subsets, i.e., training set $\mathcal{T}$, test set $\mathcal{R}$, and validation set $\mathcal{V}$. Usually, we have $|\mathcal{T}| > |\mathcal{R}|$ and $|\mathcal{T}| > |\mathcal{V}|$. First, we want to find the best parameters. For this, the training set is used to train the model with different values of parameters. For every value of parameter(s), after the model is trained, it is tested on the validation set. This is performed for all desired values of parameters. The parameter value resulting in the best estimation performance on the validation set is selected to be the value of parameter(s).
After finding the values of parameters, the model is trained using the training set (where the found parameter value is used). Then, the model is tested on the test set and the estimation performance is the average test set over the cross validation iterations.
In cross validation with validation set, we have:
\begin{align}
\mathcal{T} \cap \mathcal{R} = \varnothing, ~~ \mathcal{T} \cap \mathcal{V} = \varnothing, ~~ \mathcal{V} \cap \mathcal{R} = \varnothing.
\end{align}
The validation and test sets should be disjoint because the parameters of the model should not be optimized by testing on the test set. In other words, in real-world applications, the training and validation sets are available but the test set is not available yet. If we mistakenly have $\mathcal{V} \cap \mathcal{R} \neq \varnothing$, it is referred to as \textit{cheating} in machine learning (we call it \textit{cheating \#2} here). Note that this kind of mistake is unfortunately very common in the literature, where some people optimize the parameters by testing on the test set without having a validation set.
Moreover, the training and test sets should be disjoint as explained beforehand; otherwise, that would be another kind of \textit{cheating} in machine learning (introduced before as \textit{cheating \#1}).
On the other hand, the training and validation sets should be disjoint. Although having $\mathcal{T} \cap \mathcal{V} \neq \varnothing$ is not cheating, it should not be done, for the reason which will be explained later in this section.
In order to have validation set in cross validation, we usually first split the dataset $\mathcal{D}$ into $\mathcal{T}'$ and $\mathcal{R}$ where $\mathcal{T}' \cup \mathcal{R} = \mathcal{D}$ and $\mathcal{T}' \cap \mathcal{R} = \varnothing$. Then, we split the set $\mathcal{T}'$ into the training and validation sets, i.e., $\mathcal{T} \cup \mathcal{V} = \mathcal{T}'$ and $\mathcal{T} \cap \mathcal{V} = \varnothing$ and usually $|\mathcal{T}| > |\mathcal{V}|$.
The algorithms of $K$-fold cross validation and LOOCV can be modified accordingly to include the validation set. In LOOCV, we usually have $|\mathcal{V}| = 1$.
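A minimal sketch of this three-way split, where a parameter (here, a polynomial degree, chosen as an assumed example) is tuned on the validation set and the final error is reported on the test set:

```python
import numpy as np

# A sketch of splitting D into disjoint T (training), V (validation), and
# R (test) sets and tuning a parameter (here, the polynomial degree) on V.
# The split sizes and the data-generating process are assumptions.
rng = np.random.default_rng(4)
n = 300
x = rng.uniform(-1, 1, n)
y = np.sin(np.pi * x) + rng.normal(0, 0.2, n)

idx = rng.permutation(n)
T, V, R = idx[:200], idx[200:250], idx[250:]  # T, V, R are pairwise disjoint

def mse(deg, fit_idx, eval_idx):
    coef = np.polyfit(x[fit_idx], y[fit_idx], deg)
    return np.mean((np.polyval(coef, x[eval_idx]) - y[eval_idx]) ** 2)

# select the degree by the error on V; the test set R is never used here
best_deg = min(range(1, 10), key=lambda d: mse(d, T, V))
test_err = mse(best_deg, T, R)  # finally, report the error on the test set
print(best_deg, test_err)
```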
\subsection{Theory}
Recall the Eqs. (\ref{equation_MSE_test_instance}) and (\ref{equation_MSE_train_instance}) where the true error for the test (not in training set) and training instance are related to the training error, respectively.
When the instance is not in the training set, the true error, \textbf{Err}, and the training error, \textbf{err}, behave differently, as shown in Fig. \ref{figure_overfitting_curve}-a. At the first stages of training, \textbf{err} and \textbf{Err} both decrease; however, after some amount of training, the model becomes more complex and goes toward overfitting. At that stage, \textbf{Err} starts to increase. We should end the training when \textbf{Err} starts to increase because that stage is the good fit. Usually, in order to find out when to stop training, we train the model for one stage (e.g., an iteration) and then test the trained model on the validation set, where the error is named \textbf{Err}. This is commonly done in training neural networks \cite{goodfellow2016deep}, where \textbf{Err} is measured after every epoch, for example. For neural networks, we usually save a history of \textbf{Err} for the last several epochs, and if the overall pattern of \textbf{Err} is increasing, we stop and take the last best trained model with the least \textbf{Err}. We do this because in complex models such as neural networks, the curve of \textbf{Err} usually has some small fluctuations, and we do not want to misjudge the stopping criterion based on them.
This procedure is named \textit{early stopping} in neural networks \cite{prechelt1998early} which will be explained later in Section \ref{section_early_stopping}.
The reason why \textbf{Err} increases after a while of training is Eq. (\ref{equation_MSE_train_instance}). Dropping the constant $n \sigma^2$ from that expression, we have $\textbf{Err} = \textbf{err} + 2\,\sigma^2 \sum_{i=1}^n \frac{\partial \widehat{f}_i}{\partial y_i}$, where the term $2\,\sigma^2 \sum_{i=1}^n \frac{\partial \widehat{f}_i}{\partial y_i}$ shows the model complexity. See Fig. \ref{figure_overfitting_curve}-b, where both \textbf{err} and the model complexity are illustrated as a function of the training stages (iterations). According to Eq. (\ref{equation_MSE_train_instance}), \textbf{Err} is the summation of these two curves, which clarifies the reason for its behavior. That is why we should not train too much on the training set: the model will become very fitted (biased) to the training set and will lose its ability to generalize to new unseen data.
Fig. \ref{figure_overfitting_curve}-a and Eq. (\ref{equation_MSE_train_instance}) show that it is better to have $\mathcal{T} \cap \mathcal{V} = \varnothing$. Otherwise, for example if we have $\mathcal{T} = \mathcal{V}$, the \textbf{Err} will be equivalent to \textbf{err} and thus will go down even in the overfitting stages. This is harmful to our training because we will not notice overfitting properly. Eq. (\ref{equation_MSE_test_instance}) also explains that the error on the validation or test set is a good measure of the true error. That is why we can use the test or validation error in order to know until what stage we can train the model without overfitting.
Finally, it is noteworthy to discuss the intersections of training, test, and validation sets according to above explanations and the previous sub-section.
If we have only training and test sets without validation set:
\begin{itemize}
\item $\mathcal{T} \cap \mathcal{R} \neq \varnothing \implies $ cheating \#1
\end{itemize}
If we have training, test, and validation sets:
\begin{itemize}
\item $\mathcal{T} \cap \mathcal{R} \neq \varnothing \implies $ cheating \#1
\item $\mathcal{V} \cap \mathcal{R} \neq \varnothing \implies $ cheating \#2
\item $\mathcal{T} \cap \mathcal{V} \neq \varnothing \implies $ harmful to training (not noticing overfitting properly)
\end{itemize}
The first two items are advantageous to the model's performance on test data but that is cheating and also it may be disadvantageous to future new test data. The third item is disadvantageous to the model's performance on test data because we may not find out overfitting or we may find it out late and the generalization error will become worse; therefore, it is better not to do it.
\section{Generalized Cross Validation}
In this section, we consider the model which estimates the observations $\{y_i\}_{i=1}^N$ as:
\begin{align}\label{equation_generalized_CV_model}
\widehat{\ensuremath\boldsymbol{y}} = \ensuremath\boldsymbol{\Gamma}\, \ensuremath\boldsymbol{y},
\end{align}
where $\widehat{\ensuremath\boldsymbol{y}} = [\widehat{y}_1, \dots, \widehat{y}_N]^\top$ and $\ensuremath\boldsymbol{y} = [y_1, \dots, y_N]^\top$ assuming that the observations for the whole dataset are available. The $\ensuremath\boldsymbol{\Gamma} \in \mathbb{R}^{N \times N}$ is called the hat matrix because it puts a hat on $\ensuremath\boldsymbol{y}$.
An example of $\ensuremath\boldsymbol{\Gamma}$ is $\ensuremath\boldsymbol{\Gamma} = \ensuremath\boldsymbol{X}(\ensuremath\boldsymbol{X}^\top \ensuremath\boldsymbol{X})^{-1} \ensuremath\boldsymbol{X}^\top$ which is used in linear regression \cite{friedman2001elements}.
If $\gamma_{ij}$ denotes the $(i,j)$-th element of $\ensuremath\boldsymbol{\Gamma}$, the $i$-th element of $\widehat{\ensuremath\boldsymbol{y}}$ can be stated as:
\begin{align}
\widehat{y}_i = \sum_{j=1}^N \gamma_{ij}\, y_j.
\end{align}
Now assume that we remove the $i$-th instance for the sake of having LOOCV. Assume that $\widehat{y}_i^{(-i)}$ denotes the model's estimate of $y_i$ where the model is trained using $\mathcal{D} \setminus \{x_i\}$ (using the entire data except the $i$-th instance). We can say:
\begin{align}\label{equation_generalized_CV_midEq_1}
\widehat{y}_i^{(-i)} = \Big(\sum_{j=1}^N \gamma_{ij}\, y_j\Big) - \gamma_{ii}\, y_i + \gamma_{ii}\, \widehat{y}_i^{(-i)},
\end{align}
which means that we remove the estimation of $y_i$ using the model trained by the whole $\mathcal{D}$ and instead we put the estimation of the model trained by $\mathcal{D} \setminus \{x_i\}$.
Subtracting both sides of Eq. (\ref{equation_generalized_CV_midEq_1}) from $y_i$ and rearranging gives:
\begin{align}\label{equation_generalized_CV_LOOCV}
y_i - \widehat{y}_i^{(-i)} = \frac{y_i - \widehat{y}_i}{1 - \gamma_{ii}}.
\end{align}
Eq. (\ref{equation_generalized_CV_LOOCV}) means that we can do LOOCV for the model of Eq. (\ref{equation_generalized_CV_model}) without the need to iterate over the instances. We can train the model once using the whole $\mathcal{D}$ and then use Eq. (\ref{equation_generalized_CV_LOOCV}) to find the error of every iteration of LOOCV. The overall scaled mean squared error of LOOCV is then:
\begin{align}\label{equation_generalized_CV_LOOCV_average}
\sum_{i=1}^N \big(y_i - \widehat{y}_i^{(-i)}\big)^2 = \sum_{i=1}^N \Big(\frac{y_i - \widehat{y}_i}{1 - \gamma_{ii}}\Big)^2.
\end{align}
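This shortcut can be verified numerically: for linear regression, the explicit LOOCV residuals, obtained by retraining $N$ times, coincide with $(y_i - \widehat{y}_i)/(1 - \gamma_{ii})$. The data below are synthetic assumptions:

```python
import numpy as np

# Numerical check of the LOOCV shortcut for linear regression: the explicit
# leave-one-out residuals y_i - yhat_i^(-i), obtained by retraining N times,
# equal (y_i - yhat_i) / (1 - gamma_ii). The data are synthetic assumptions.
rng = np.random.default_rng(5)
N = 30
X = np.column_stack([np.ones(N), rng.normal(size=(N, 2))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(0, 0.5, N)

Gamma = X @ np.linalg.solve(X.T @ X, X.T)   # hat matrix
y_hat = Gamma @ y
shortcut = (y - y_hat) / (1 - np.diag(Gamma))

# explicit LOOCV: retrain N times, each time leaving out instance i
explicit = np.empty(N)
for i in range(N):
    keep = np.delete(np.arange(N), i)
    beta_i = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
    explicit[i] = y[i] - X[i] @ beta_i      # y_i - yhat_i^(-i)

print(np.max(np.abs(explicit - shortcut)))  # ~ 0 up to round-off
```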
Suppose that we replace the $\gamma_{ii}$ by its average:
\begin{align}\label{equation_generalized_CV_average_Gamma}
\frac{1}{N} \sum_{i=1}^N \gamma_{ii} \overset{(a)}{=} \frac{1}{N} \textbf{tr}(\ensuremath\boldsymbol{\Gamma}) \overset{(b)}{=} \frac{p}{N},
\end{align}
where $\textbf{tr}(.)$ is the trace of the matrix, $(a)$ is because the trace is the summation of the diagonal entries, and $(b)$ assumes that the trace of the hat matrix is $p$. The $p$ can be considered as the dimensionality of the subspace if Eq. (\ref{equation_generalized_CV_model}) is considered as a projection onto a subspace.
Using Eq. (\ref{equation_generalized_CV_average_Gamma}) in Eq. (\ref{equation_generalized_CV_LOOCV_average}) gives:
\begin{align}\label{equation_generalized_CV_LOOCV_final}
\sum_{i=1}^N \big(y_i - \widehat{y}_i^{(-i)}\big)^2 = \sum_{i=1}^N \Big(\frac{y_i - \widehat{y}_i}{1 - p/N}\Big)^2.
\end{align}
The Eq. (\ref{equation_generalized_CV_LOOCV_final}) is referred to as \textit{generalized cross validation} \cite{carven1979smoothing,golub1979generalized}.
It is noteworthy that the generalized cross validation can also be related to SURE \cite{stein1981estimation} which was introduced before (see \cite{li1985stein}).
\section{Regularization}
\subsection{Definition}
We can minimize the true error, \textbf{Err}, using optimization. According to Eq. (\ref{equation_MSE_train_instance}), we have:
\begin{align}\label{equation_regularization_optimization_Err}
\text{minimize} ~~~ \textbf{err} - n\, \sigma^2 + 2\,\sigma^2 \sum_{i=1}^n \frac{\partial \widehat{f}_i}{\partial y_i}.
\end{align}
As the term $n\, \sigma^2$ is a constant, we can drop it. Moreover, calculation of $\partial \widehat{f}_i / \partial y_i$ is usually very difficult; therefore, we usually use a penalty term in place of it where the penalty increases as the complexity of the model increases in order to imitate the behavior of $\partial \widehat{f}_i / \partial y_i$.
Therefore, the optimization can be written as a \textit{regularized optimization} problem:
\begin{align}\label{equation_regularization_optimization}
\underset{\ensuremath\boldsymbol{x}}{\text{minimize}} ~~~ \widetilde{J}(\ensuremath\boldsymbol{x}; \theta) := J(\ensuremath\boldsymbol{x}; \theta) + \alpha\,\Omega(\ensuremath\boldsymbol{x}),
\end{align}
where $\theta$ is the parameter(s) of the cost function, $J(.)$ is the objective \textbf{err} to be minimized, $\Omega(.)$ is the penalty function representing the complexity of model, $\alpha >0 $ is the regularization parameter, and $\widetilde{J}(.)$ is the \textit{regularized objective function}.
The penalty function can be different things such as $\ell_2$ norm \cite{friedman2001elements}, $\ell_1$ norm \cite{tibshirani1996regression,schmidt2005least}, $\ell_{2,1}$ norm \cite{changl21}, etc. The $\ell_1$ and $\ell_{2,1}$ norms are useful for having sparsity \cite{bach2011convex,bach2012optimization}.
The sparsity is very effective because of the \textit{``bet on sparsity''} principle: ``Use a procedure that does well in sparse problems, since no procedure does well in dense problems \cite{friedman2001elements,tibshirani2015statistical}.''
The effectiveness of the sparsity can also be explained by Occam's razor \cite{domingos1999role} stating that ``simpler solutions are more likely to be correct than complex ones'' or ``simplicity is a goal in itself''.
Note that in Eqs. (\ref{equation_regularization_optimization_Err}) and (\ref{equation_regularization_optimization}), we are minimizing the \textbf{Err} (i.e., $\widetilde{J}(\ensuremath\boldsymbol{x}; \theta)$) and not \textbf{err} (i.e., $J(\ensuremath\boldsymbol{x}; \theta)$).
As discussed in Sections \ref{section_overfitting} and \ref{section_crossValidation}, minimizing \textbf{err} results in overfitting. Therefore, regularization helps avoid overfitting.
\subsection{Theory for $\ell_2$ Norm Regularization}
In this section, we briefly explain the theory behind the $\ell_2$ norm regularization \cite{friedman2001elements}, which is:
\begin{align}\label{equation_regularization_optimization_l2}
\underset{\ensuremath\boldsymbol{x}}{\text{minimize}} ~~~ \widetilde{J}(\ensuremath\boldsymbol{x}; \theta) := J(\ensuremath\boldsymbol{x}; \theta) + \frac{\alpha}{2}\, ||\ensuremath\boldsymbol{x}||_2^2.
\end{align}
The $\ell_2$ norm regularization is also referred to as \textit{ridge regression} or \textit{Tikhonov regularization} \cite{goodfellow2016deep}.
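The following is a minimal sketch of Eq. (\ref{equation_regularization_optimization_l2}) for a least-squares objective, where the regularized minimizer has a closed form; the data and the value of $\alpha$ are synthetic assumptions:

```python
import numpy as np

# l2-regularized least squares (ridge): minimizing
#   (1/2)||A x - b||_2^2 + (alpha/2)||x||_2^2
# has the closed-form minimizer (A^T A + alpha I)^{-1} A^T b.
# The data and alpha are synthetic assumptions for this sketch.
rng = np.random.default_rng(6)
A = rng.normal(size=(40, 5))
b = A @ rng.normal(size=5) + rng.normal(0, 0.1, 40)

alpha = 1.0
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]                      # alpha = 0 (no penalty)
x_ridge = np.linalg.solve(A.T @ A + alpha * np.eye(5), A.T @ b)  # regularized minimizer

# the penalty shrinks the solution toward the origin
print(np.linalg.norm(x_ridge), np.linalg.norm(x_ls))
```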
Suppose $\ensuremath\boldsymbol{x}^*$ is the minimizer of $J(\ensuremath\boldsymbol{x}; \theta)$, i.e.:
\begin{align}\label{equation_regularization_derivative_J}
\nabla J(\ensuremath\boldsymbol{x}^*; \theta) = 0.
\end{align}
The Taylor series expansion of $J(\ensuremath\boldsymbol{x}; \theta)$ up to the second derivative at $\ensuremath\boldsymbol{x}^*$ gives:
\begin{align}
\widehat{J}(\ensuremath\boldsymbol{x}; \theta) &\approx J(\ensuremath\boldsymbol{x}^*; \theta) + \nabla J(\ensuremath\boldsymbol{x}^*; \theta)^\top (\ensuremath\boldsymbol{x} - \ensuremath\boldsymbol{x}^*) \nonumber \\
&+ \frac{1}{2} (\ensuremath\boldsymbol{x} - \ensuremath\boldsymbol{x}^*)^\top \ensuremath\boldsymbol{H} (\ensuremath\boldsymbol{x} - \ensuremath\boldsymbol{x}^*) \nonumber \\
&= J(\ensuremath\boldsymbol{x}^*; \theta) + \frac{1}{2} (\ensuremath\boldsymbol{x} - \ensuremath\boldsymbol{x}^*)^\top \ensuremath\boldsymbol{H} (\ensuremath\boldsymbol{x} - \ensuremath\boldsymbol{x}^*), \label{equation_regularization_J_hat}
\end{align}
where $\ensuremath\boldsymbol{H} \in \mathbb{R}^{d \times d}$ is the Hessian.
Using the Taylor approximation in the cost gives us \cite{goodfellow2016deep}:
\begin{align}
&\widetilde{J}(\ensuremath\boldsymbol{x}; \theta) = \widehat{J}(\ensuremath\boldsymbol{x}; \theta) + \frac{\alpha}{2}\, ||\ensuremath\boldsymbol{x}||_2^2 \nonumber \\
&= J(\ensuremath\boldsymbol{x}^*; \theta) + \frac{1}{2} (\ensuremath\boldsymbol{x} - \ensuremath\boldsymbol{x}^*)^\top \ensuremath\boldsymbol{H} (\ensuremath\boldsymbol{x} - \ensuremath\boldsymbol{x}^*) + \frac{\alpha}{2}\, ||\ensuremath\boldsymbol{x}||_2^2, \nonumber \\
&\frac{\partial \widetilde{J}(\ensuremath\boldsymbol{x}; \theta)}{\partial \ensuremath\boldsymbol{x}} = \ensuremath\boldsymbol{0} + \ensuremath\boldsymbol{H} (\ensuremath\boldsymbol{x}^{\dagger} - \ensuremath\boldsymbol{x}^*) + \alpha\, \ensuremath\boldsymbol{x}^{\dagger} \overset{\text{set}}{=} \ensuremath\boldsymbol{0}, \nonumber \\
&\implies (\ensuremath\boldsymbol{H} + \alpha\ensuremath\boldsymbol{I})\, \ensuremath\boldsymbol{x}^{\dagger} = \ensuremath\boldsymbol{H} \ensuremath\boldsymbol{x}^* \nonumber \\
&\implies \ensuremath\boldsymbol{x}^{\dagger} = (\ensuremath\boldsymbol{H} + \alpha\ensuremath\boldsymbol{I})^{-1} \ensuremath\boldsymbol{H} \ensuremath\boldsymbol{x}^*, \label{equation_regularization_x_dagger}
\end{align}
where $\ensuremath\boldsymbol{x}^{\dagger}$ is the minimizer of $\widetilde{J}(\ensuremath\boldsymbol{x}; \theta)$. Note that in the calculations we take $\partial J(\ensuremath\boldsymbol{x}^*; \theta) / \partial \ensuremath\boldsymbol{x} = \ensuremath\boldsymbol{0}$ because $J(\ensuremath\boldsymbol{x}^*; \theta)$ is a constant with respect to $\ensuremath\boldsymbol{x}$.
Eq. (\ref{equation_regularization_x_dagger}) makes sense because if $\alpha=0$, which means we do not have the regularization term, we will have $\ensuremath\boldsymbol{x}^{\dagger} = \ensuremath\boldsymbol{x}^*$. This means that the minimizer of $\widetilde{J}(\ensuremath\boldsymbol{x};\theta)$ will be the same as the minimizer of $J(\ensuremath\boldsymbol{x}; \theta)$, which is correct according to Eq. (\ref{equation_regularization_optimization_l2}) with $\alpha=0$.
If we apply Singular Value Decomposition (SVD) on the Hessian matrix, we will have:
\begin{align}\label{equation_regularization_Hessian_SVD}
\ensuremath\boldsymbol{H} = \ensuremath\boldsymbol{U}\ensuremath\boldsymbol{\Lambda}\ensuremath\boldsymbol{U}^\top,
\end{align}
where the left and right matrices of singular vectors are equivalent because the Hessian matrix is symmetric.
Using this decomposition in Eq. (\ref{equation_regularization_x_dagger}) gives us:
\begin{align}
\ensuremath\boldsymbol{x}^{\dagger} &= (\ensuremath\boldsymbol{U}\ensuremath\boldsymbol{\Lambda}\ensuremath\boldsymbol{U}^\top + \alpha\ensuremath\boldsymbol{I})^{-1} \ensuremath\boldsymbol{U}\ensuremath\boldsymbol{\Lambda}\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{x}^* \nonumber \\
&\overset{(a)}{=} (\ensuremath\boldsymbol{U}\ensuremath\boldsymbol{\Lambda}\ensuremath\boldsymbol{U}^\top + \ensuremath\boldsymbol{U}\ensuremath\boldsymbol{U}^\top\alpha\ensuremath\boldsymbol{I})^{-1} \ensuremath\boldsymbol{U}\ensuremath\boldsymbol{\Lambda}\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{x}^* \nonumber \\
&\overset{(b)}{=} (\ensuremath\boldsymbol{U}\ensuremath\boldsymbol{\Lambda}\ensuremath\boldsymbol{U}^\top + \ensuremath\boldsymbol{U}\alpha\ensuremath\boldsymbol{I}\ensuremath\boldsymbol{U}^\top)^{-1} \ensuremath\boldsymbol{U}\ensuremath\boldsymbol{\Lambda}\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{x}^* \nonumber \\
&= \big(\ensuremath\boldsymbol{U}(\ensuremath\boldsymbol{\Lambda} + \alpha\ensuremath\boldsymbol{I})\ensuremath\boldsymbol{U}^\top\big)^{-1} \ensuremath\boldsymbol{U}\ensuremath\boldsymbol{\Lambda}\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{x}^* \nonumber \\
&\overset{(c)}{=} \ensuremath\boldsymbol{U}(\ensuremath\boldsymbol{\Lambda} + \alpha\ensuremath\boldsymbol{I})^{-1} \underbrace{\ensuremath\boldsymbol{U}^{-1} \ensuremath\boldsymbol{U}}_{\ensuremath\boldsymbol{I}} \ensuremath\boldsymbol{\Lambda}\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{x}^* \nonumber \\
&= \ensuremath\boldsymbol{U}(\ensuremath\boldsymbol{\Lambda} + \alpha\ensuremath\boldsymbol{I})^{-1} \ensuremath\boldsymbol{\Lambda}\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{x}^*, \label{equation_regularization_x_dagger_2}
\end{align}
where $(a)$ and $(c)$ are because $\ensuremath\boldsymbol{U}$ is an orthogonal matrix, so we have $\ensuremath\boldsymbol{U}^{-1} = \ensuremath\boldsymbol{U}^\top$, which yields $\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{U} = \ensuremath\boldsymbol{I}$ and $\ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top = \ensuremath\boldsymbol{I}$ (because $\ensuremath\boldsymbol{U}$ is not truncated). The $(b)$ is because $\alpha$ is a scalar and can move within the multiplication of matrices.
The Eq. (\ref{equation_regularization_x_dagger_2}) means that we are rotating $\ensuremath\boldsymbol{x}^*$ by $\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{x}^*$ but before rotating it back with $\ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{x}^*$, we manipulate it with the term $(\ensuremath\boldsymbol{\Lambda} + \alpha\ensuremath\boldsymbol{I})^{-1} \ensuremath\boldsymbol{\Lambda}$.
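Eq. (\ref{equation_regularization_x_dagger_2}) can also be checked numerically for a quadratic objective; the Hessian and $\ensuremath\boldsymbol{x}^*$ below are arbitrary assumptions:

```python
import numpy as np

# Numerical check: for a quadratic J with Hessian H, the regularized minimizer
#   x_dagger = (H + alpha I)^{-1} H x_star
# equals U (Lambda + alpha I)^{-1} Lambda U^T x_star, where H = U Lambda U^T.
# H, x_star, and alpha below are arbitrary assumptions.
rng = np.random.default_rng(7)
M = rng.normal(size=(4, 4))
H = M @ M.T + 4 * np.eye(4)    # a symmetric positive definite Hessian
x_star = rng.normal(size=4)
alpha = 0.5

lam, U = np.linalg.eigh(H)     # H = U diag(lam) U^T
x_dag_direct = np.linalg.solve(H + alpha * np.eye(4), H @ x_star)
x_dag_eig = U @ np.diag(lam / (lam + alpha)) @ U.T @ x_star
print(np.max(np.abs(x_dag_direct - x_dag_eig)))  # ~ 0
```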
Based on Eq. (\ref{equation_regularization_x_dagger_2}), we can have the following interpretations:
\begin{itemize}
\item If $\alpha = 0$, we have:
\begin{align*}
\ensuremath\boldsymbol{x}^{\dagger} &= \ensuremath\boldsymbol{U}\underbrace{\ensuremath\boldsymbol{\Lambda}^{-1} \ensuremath\boldsymbol{\Lambda}}_{\ensuremath\boldsymbol{I}} \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{x}^* = \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{x}^* \\
&\overset{(a)}{=} \underbrace{\ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^{-1}}_{\ensuremath\boldsymbol{I}} \ensuremath\boldsymbol{x}^* = \ensuremath\boldsymbol{x}^*,
\end{align*}
where $(a)$ is because $\ensuremath\boldsymbol{U}$ is a non-truncated orthogonal matrix, so $\ensuremath\boldsymbol{U}^{-1} = \ensuremath\boldsymbol{U}^\top$ and $\ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top = \ensuremath\boldsymbol{I}$. This means that if we do not have the penalty term, the minimizer of $\widetilde{J}(\ensuremath\boldsymbol{x}; \theta)$ is the minimizer of $J(\ensuremath\boldsymbol{x}; \theta)$, as expected. In other words, we rotate the solution $\ensuremath\boldsymbol{x}^*$ by $\ensuremath\boldsymbol{U}^\top$ and then rotate it back by $\ensuremath\boldsymbol{U}$.
\item If $\alpha \neq 0$, the term $(\ensuremath\boldsymbol{\Lambda} + \alpha\ensuremath\boldsymbol{I})^{-1} \ensuremath\boldsymbol{\Lambda}$ is:
\begin{align*}
(\ensuremath\boldsymbol{\Lambda} + \alpha\ensuremath\boldsymbol{I})^{-1} \ensuremath\boldsymbol{\Lambda} =
\begin{bmatrix}
\frac{\lambda_1}{\lambda_1 + \alpha} & 0 & \dots & 0 \\
0 & \frac{\lambda_2}{\lambda_2 + \alpha} & \dots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \dots & \frac{\lambda_d}{\lambda_d + \alpha}
\end{bmatrix},
\end{align*}
where $\ensuremath\boldsymbol{\Lambda} = \textbf{diag}([\lambda_1, \dots, \lambda_d]^\top)$.
Therefore, for the $j$-th direction of the Hessian, we have the factor $\frac{\lambda_j}{\lambda_j + \alpha}$.
\begin{itemize}
\item If $\lambda_j \gg \alpha$, we will have $\frac{\lambda_j}{\lambda_j + \alpha} \approx 1$, so the $j$-th diagonal entry of $(\ensuremath\boldsymbol{\Lambda} + \alpha\ensuremath\boldsymbol{I})^{-1} \ensuremath\boldsymbol{\Lambda}$ is approximately one; therefore, along that direction, $\ensuremath\boldsymbol{x}^\dagger \approx \ensuremath\boldsymbol{x}^*$. This makes sense because $\lambda_j \gg \alpha$ means that the $j$-th direction of the Hessian, and thus of $J(\ensuremath\boldsymbol{x}; \theta)$, is large enough to be effective. Therefore, the penalty is roughly ignored with respect to it.
\item If $\lambda_j \ll \alpha$, we will have $\frac{\lambda_j}{\lambda_j + \alpha} \approx 0$, so the $j$-th diagonal entry of $(\ensuremath\boldsymbol{\Lambda} + \alpha\ensuremath\boldsymbol{I})^{-1} \ensuremath\boldsymbol{\Lambda}$ is approximately zero; therefore, along that direction, $\ensuremath\boldsymbol{x}^\dagger \approx \ensuremath\boldsymbol{0}$. This makes sense because $\lambda_j \ll \alpha$ means that the $j$-th direction of the Hessian, and thus of $J(\ensuremath\boldsymbol{x}; \theta)$, is small and not effective. Therefore, the penalty shrinks that direction to almost zero.
\end{itemize}
Therefore, the $\ell_2$ norm regularization keeps the effective directions but shrinks the weak directions to zero.
Note that the following measure is referred to as the \textit{effective number of parameters} or \textit{degrees of freedom} \cite{friedman2001elements}:
\begin{align}
\sum_{j=1}^d \frac{\lambda_j}{\lambda_j + \alpha},
\end{align}
because it counts the number of effective directions as discussed above.
Moreover, the term $\lambda_j / (\lambda_j + \alpha)$ or $(\ensuremath\boldsymbol{\Lambda} + \alpha\ensuremath\boldsymbol{I})^{-1} \ensuremath\boldsymbol{\Lambda}$ is called the \textit{shrinkage factor} because it shrinks the weak directions.
\end{itemize}
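The shrinkage factors and the effective number of parameters can be sketched numerically; the eigenvalues and $\alpha$ below are made-up values for illustration:

```python
import numpy as np

# Hypothetical Hessian eigenvalues: two strong directions and two weak ones.
eigvals = np.array([100.0, 10.0, 0.01, 0.001])
alpha = 1.0  # regularization parameter

# Shrinkage factors lambda_j / (lambda_j + alpha), one per direction.
shrinkage = eigvals / (eigvals + alpha)

# Effective number of parameters (degrees of freedom): sum of the factors.
effective_dof = shrinkage.sum()

print(shrinkage)      # strong directions stay near 1, weak ones near 0
print(effective_dof)  # roughly counts the effective directions (about 2 here)
```

As expected, only the two directions with $\lambda_j \gg \alpha$ contribute noticeably to the count.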
\subsection{Theory for $\ell_1$ Norm Regularization}
As explained before, sparsity is very useful and effective. If $\ensuremath\boldsymbol{x} = [x_1, \dots, x_d]^\top$, for having sparsity, we should use \textit{subset selection} for the regularization:
\begin{align}\label{equation_regularization_optimization_l0}
\underset{\ensuremath\boldsymbol{x}}{\text{minimize}} ~~~ \widetilde{J}(\ensuremath\boldsymbol{x}; \theta) := J(\ensuremath\boldsymbol{x}; \theta) + \alpha\, ||\ensuremath\boldsymbol{x}||_0,
\end{align}
where:
\begin{align}
||\ensuremath\boldsymbol{x}||_0 := \sum_{j=1}^d \mathbb{I}(x_j \neq 0), \quad
\mathbb{I}(x_j \neq 0) :=
\left\{
\begin{array}{ll}
0 & \text{if } x_j = 0, \\
1 & \text{if } x_j \neq 0,
\end{array}
\right.
\end{align}
is the ``$\ell_0$'' norm, which is not a valid norm (hence the quotation marks) because it does not satisfy the norm properties \cite{boyd2004convex}. The ``$\ell_0$'' norm counts the number of non-zero elements, so penalizing it means that we want sparser solutions with many zero entries.
According to \cite{donoho2006most}, the convex relaxation of ``$\ell_0$'' norm (subset selection) is $\ell_1$ norm. Therefore, we write the regularized optimization as:
\begin{align}\label{equation_regularization_optimization_l1}
\underset{\ensuremath\boldsymbol{x}}{\text{minimize}} ~~~ \widetilde{J}(\ensuremath\boldsymbol{x}; \theta) := J(\ensuremath\boldsymbol{x}; \theta) + \alpha\, ||\ensuremath\boldsymbol{x}||_1.
\end{align}
Note that the $\ell_1$ regularization is also referred to as \textit{lasso} regularization \cite{tibshirani1996regression}.
Different methods exist for solving optimization problems involving the $\ell_1$ norm, such as proximal algorithms using soft thresholding \cite{parikh2014proximal} and coordinate descent \cite{wright2015coordinate,wu2008coordinate}.
Here, we explain solving the optimization using the coordinate descent algorithm.
The idea of coordinate descent algorithm is similar to the idea of Gibbs sampling \cite{casella1992explaining} where we work on the dimensions of the variable one by one.
Similar to what we did for obtaining Eq. (\ref{equation_regularization_x_dagger}), we have:
\begin{align*}
&\widetilde{J}(\ensuremath\boldsymbol{x}; \theta) = \widehat{J}(\ensuremath\boldsymbol{x}; \theta) + \alpha\, ||\ensuremath\boldsymbol{x}||_1 \\
&= J(\ensuremath\boldsymbol{x}^*; \theta) + \frac{1}{2} (\ensuremath\boldsymbol{x} - \ensuremath\boldsymbol{x}^*)^\top \ensuremath\boldsymbol{H} (\ensuremath\boldsymbol{x} - \ensuremath\boldsymbol{x}^*) + \alpha\, ||\ensuremath\boldsymbol{x}||_1.
\end{align*}
For simplicity in deriving an interpretable expression, we assume that the Hessian matrix is diagonal \cite{goodfellow2016deep}.
For coordinate descent, we look at the $j$-th coordinate (dimension):
\begin{align*}
&\widetilde{J}(x_j; \theta) = \widehat{J}(x_j; \theta) + \alpha\, |x_j| \\
&= J(x_j^*; \theta) + \frac{1}{2} (x_j - x_j^*)^2 h_j + \alpha\, |x_j| + c,
\end{align*}
where $\ensuremath\boldsymbol{x} = [x_1, \dots, x_d]^\top$, $\ensuremath\boldsymbol{x}^* = [x_1^*, \dots, x_d^*]^\top$, $h_j$ is the $(j,j)$-th element of the diagonal Hessian matrix, and $c$ is a constant term with respect to $x_j$ (i.e., not dependent on $x_j$). Taking the derivative with respect to $x_j$ gives us:
\begin{align}
&\frac{\partial \widetilde{J}(x_j; \theta)}{\partial x_j} = 0 + (x_j - x_j^*)\, h_j + \alpha\, \textbf{sign}(x_j) \overset{\text{set}}{=} 0 \implies \nonumber \\
& x_j^{\dagger} = x_j^* - \frac{\alpha}{h_j}\, \textbf{sign}(x_j) =
\left\{
\begin{array}{ll}
x_j^* - \frac{\alpha}{h_j} & \text{if } x_j > 0, \\
x_j^* + \frac{\alpha}{h_j} & \text{if } x_j < 0,
\end{array}
\right.
\end{align}
which is a soft thresholding function, depicted in Fig. \ref{figure_soft_thresholding}. As can be seen in this figure, if $|x_j^*| < (\alpha / h_j)$, the solution to the regularized problem, i.e., $x_j^{\dagger}$, is zero. Recall that in $\ell_2$ norm regularization, we shrank the weak solutions close to zero; here in $\ell_1$ norm regularization, however, we set the weak solutions exactly to zero. That is why the solutions are relatively sparse in $\ell_1$ norm regularization.
Notice that in $\ell_1$ norm regularization, as shown in Fig. \ref{figure_soft_thresholding}, even the strong solutions are shrunk a little (away from the $x_j^{\dagger} = x_j^*$ line), a behavior we also observed in $\ell_2$ norm regularization.
\begin{figure}[!t]
\centering
\includegraphics[width=2in]{./images/soft_thresholding}
\caption{The soft thresholding function.}
\label{figure_soft_thresholding}
\end{figure}
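The soft thresholding rule, including the zero case $|x_j^*| \leq \alpha/h_j$, can be sketched as follows; the input values below are arbitrary:

```python
import numpy as np

def soft_threshold(x_star, alpha, h):
    """Soft thresholding: sign(x*) * max(|x*| - alpha/h, 0).

    Entries with |x*| below alpha/h are set exactly to zero; the
    stronger entries are shrunk toward zero by alpha/h.
    """
    return np.sign(x_star) * np.maximum(np.abs(x_star) - alpha / h, 0.0)

x_star = np.array([-3.0, -0.5, 0.0, 0.4, 2.0])
print(soft_threshold(x_star, alpha=1.0, h=1.0))
# the weak entries (|x*| < 1) become exactly zero; -3 and 2 are shrunk by 1
```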
Another intuition for why $\ell_1$ norm regularization yields sparsity is illustrated in Fig. \ref{figure_unit_balls} \cite{tibshirani1996regression}. As this figure shows, the objective $J(\ensuremath\boldsymbol{x}; \theta)$ has contour levels like a bowl (if it is convex). The regularization term is a norm ball, which is a sphere for the $\ell_2$ norm and a diamond for the $\ell_1$ norm \cite{boyd2004convex}.
As Fig. \ref{figure_unit_balls} shows, for $\ell_2$ norm regularization, the objective and the penalty term touch at a point where some coordinates might merely be small; for the $\ell_1$ norm, however, the contact point tends to lie at a corner of the diamond, where some variables are exactly zero. This again shows the reason for sparsity in $\ell_1$ norm regularization.
\begin{figure}[!t]
\centering
\includegraphics[width=2.5in]{./images/unit_balls}
\caption{The unit balls for $\ell_1$ and $\ell_2$ norm regularizations: (a) $\ell_2$ norm regularization and (b) $\ell_1$ norm regularization. The green curves are the contour levels of the non-regularized objective function. The red balls show the unit balls for the norm penalties. The data are assumed to be two dimensional. A third dimension can be imagined for the value of cost function.}
\label{figure_unit_balls}
\end{figure}
\subsection{Examples in Machine Learning: Regression, Weight Decay, Noise Injection, and Early Stopping}
\subsubsection{Linear, Ridge, and Lasso Regression}
Let $\ensuremath\boldsymbol{X} = [\ensuremath\boldsymbol{1}, [\ensuremath\boldsymbol{x}_1, \dots, \ensuremath\boldsymbol{x}_n]^\top] \in \mathbb{R}^{n \times (d+1)}$ and $\ensuremath\boldsymbol{\beta} \in \mathbb{R}^{d+1}$.
In \textit{linear regression}, the optimization is \cite{friedman2001elements}:
\begin{align}\label{equation_linear_regression_optimization}
\underset{\ensuremath\boldsymbol{\beta}}{\text{minimize}} ~~~ ||\ensuremath\boldsymbol{y} - \ensuremath\boldsymbol{X}\ensuremath\boldsymbol{\beta}||_2^2.
\end{align}
The result of this optimization is:
\begin{align}
\ensuremath\boldsymbol{\beta} = (\ensuremath\boldsymbol{X}^\top \ensuremath\boldsymbol{X})^{-1} \ensuremath\boldsymbol{X}^\top \ensuremath\boldsymbol{y}.
\end{align}
We can penalize the regression coefficients using $\ell_2$ norm regularization. This is referred to as \textit{ridge regression} whose optimization is \cite{friedman2001elements}:
\begin{align}\label{equation_ridge_regression_optimization}
\underset{\ensuremath\boldsymbol{\beta}}{\text{minimize}} ~~~ ||\ensuremath\boldsymbol{y} - \ensuremath\boldsymbol{X}\ensuremath\boldsymbol{\beta}||_2^2 + \frac{\alpha}{2}\, ||\ensuremath\boldsymbol{\beta}||_2^2.
\end{align}
The result of this optimization is:
\begin{align}
\ensuremath\boldsymbol{\beta} = (\ensuremath\boldsymbol{X}^\top \ensuremath\boldsymbol{X} + \alpha \ensuremath\boldsymbol{I})^{-1} \ensuremath\boldsymbol{X}^\top \ensuremath\boldsymbol{y}.
\end{align}
Note that one intuition of ridge regression is that adding $\alpha \ensuremath\boldsymbol{I}$ strengthens the main diagonal of $\ensuremath\boldsymbol{X}^\top \ensuremath\boldsymbol{X}$ in order to make it full-rank and non-singular for inversion.
We can also have $\ell_1$ norm regularization, named \textit{lasso regression} \cite{tibshirani1996regression}, which makes the coefficients sparse. The optimization of lasso regression is:
\begin{align}\label{equation_lasso_regression_optimization}
\underset{\ensuremath\boldsymbol{\beta}}{\text{minimize}} ~~~ ||\ensuremath\boldsymbol{y} - \ensuremath\boldsymbol{X}\ensuremath\boldsymbol{\beta}||_2^2 + \alpha\, ||\ensuremath\boldsymbol{\beta}||_1,
\end{align}
which does not have a closed-form solution but an iterative one, as explained before.
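The closed-form solutions of linear and ridge regression can be compared numerically; the synthetic data below are an assumption for illustration (lasso is omitted since it has no closed form):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: n samples, d features, plus an intercept column of ones.
n, d = 50, 3
X = np.hstack([np.ones((n, 1)), rng.standard_normal((n, d))])
beta_true = np.array([1.0, 2.0, -1.0, 0.5])
y = X @ beta_true + 0.1 * rng.standard_normal(n)

# Linear regression: beta = (X^T X)^{-1} X^T y.
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Ridge regression: beta = (X^T X + alpha I)^{-1} X^T y.
alpha = 10.0
beta_ridge = np.linalg.solve(X.T @ X + alpha * np.eye(d + 1), X.T @ y)

# The ridge coefficients are shrunk toward zero relative to OLS.
print(np.linalg.norm(beta_ridge) < np.linalg.norm(beta_ols))  # True
```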
\subsubsection{Weight Decay}
Recall Eq. (\ref{equation_regularization_optimization_l2}).
If we replace the objective variable $\ensuremath\boldsymbol{x}$ with the vector of neural network weights $\ensuremath\boldsymbol{w}$, we will have:
\begin{align}\label{equation_optimization_weight_decay}
\underset{\ensuremath\boldsymbol{w}}{\text{minimize}} ~~~ \widetilde{J}(\ensuremath\boldsymbol{w}; \theta) := J(\ensuremath\boldsymbol{w}; \theta) + \frac{\alpha}{2}\, ||\ensuremath\boldsymbol{w}||_2^2,
\end{align}
which can be the loss function optimized in a neural network \cite{goodfellow2016deep}. Penalizing the weights with regularization is referred to as \textit{weight decay} \cite{krogh1992simple,chiu1994modifying}.
This penalty prevents the neural network from becoming too non-linear (complex) and thus overfitted. The reason is that, for non-linear activation functions such as the hyperbolic tangent, very large weights (very positive or very negative) push the pre-activations into the highly non-linear parts of the activation functions. Although a neural network should not be completely linear, in order to be able to learn non-linear patterns, it should not be overly non-linear either, so as not to overfit the training data. Penalizing the weights keeps them relatively small (where the activation functions are almost linear) in order to balance linearity and non-linearity.
According to Eq. (\ref{equation_regularization_x_dagger_2}), the result of Eq. (\ref{equation_optimization_weight_decay}) is:
\begin{align}\label{equation_weight_decay_w}
\ensuremath\boldsymbol{w}^{\dagger} = \ensuremath\boldsymbol{U}(\ensuremath\boldsymbol{\Lambda} + \alpha\ensuremath\boldsymbol{I})^{-1} \ensuremath\boldsymbol{\Lambda}\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{w}^*,
\end{align}
which has interpretations similar to those discussed before.
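Eq. (\ref{equation_weight_decay_w}) can be checked numerically against the direct minimizer of the quadratic objective plus the weight decay penalty; the Hessian and $\ensuremath\boldsymbol{w}^*$ below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic symmetric positive definite Hessian and unregularized optimum.
A = rng.standard_normal((4, 4))
H = A @ A.T + 0.1 * np.eye(4)
w_star = rng.standard_normal(4)
alpha = 0.5

# Direct minimizer of (1/2)(w - w*)^T H (w - w*) + (alpha/2) ||w||^2.
w_direct = np.linalg.solve(H + alpha * np.eye(4), H @ w_star)

# The same solution via the eigendecomposition H = U Lambda U^T.
lam, U = np.linalg.eigh(H)
w_eig = U @ np.diag(lam / (lam + alpha)) @ U.T @ w_star

print(np.allclose(w_direct, w_eig))  # True
```

The decayed weights also have a smaller norm than $\ensuremath\boldsymbol{w}^*$, since every eigendirection is multiplied by a factor less than one.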
\subsubsection{Noise Injection to Input in Neural Networks}
In training neural networks, it is beneficial to add noise to the input \cite{matsuoka1992noise}.
One perspective on why adding noise to the input helps train the network better is \textit{data augmentation} \cite{van2001art,devries2017dataset}.
Data augmentation is useful for training deep networks because they have a huge number of weights (parameters) and if we do not introduce enough training data to them, they will overfit to the training data.
Another interpretation of noise injection to input is regularization \cite{grandvalet1997noise,goodfellow2016deep}.
Assume that the optimization of neural network is:
\begin{align}\label{equation_noise_injection_notNoise}
\underset{\ensuremath\boldsymbol{w}}{\text{minimize}} ~~~ J:= \mathbb{E}((\widehat{y}(\ensuremath\boldsymbol{x}) - y)^2),
\end{align}
where $\ensuremath\boldsymbol{x}$, $\widehat{y}(\ensuremath\boldsymbol{x})$, and $y$ are the input, the estimation (output) of network, and the training label, respectively.
We add noise $\ensuremath\boldsymbol{\varepsilon} \sim \mathcal{N}(\ensuremath\boldsymbol{0}, \sigma^2\ensuremath\boldsymbol{I})$ to the input, so the objective function changes to:
\begin{align*}
\widetilde{J} &:= \mathbb{E}((\widehat{y}(\ensuremath\boldsymbol{x} + \ensuremath\boldsymbol{\varepsilon}) - y)^2) \\
&= \mathbb{E}(\widehat{y}^2(\ensuremath\boldsymbol{x} + \ensuremath\boldsymbol{\varepsilon}) - 2y\widehat{y}(\ensuremath\boldsymbol{x} + \ensuremath\boldsymbol{\varepsilon}) + y^2) \\
&= \mathbb{E}(\widehat{y}^2(\ensuremath\boldsymbol{x} + \ensuremath\boldsymbol{\varepsilon})) - 2\mathbb{E}(y\widehat{y}(\ensuremath\boldsymbol{x} + \ensuremath\boldsymbol{\varepsilon})) + \mathbb{E}(y^2).
\end{align*}
Assuming that the variance of noise is small, the Taylor series expansion of $\widehat{y}(\ensuremath\boldsymbol{x} + \ensuremath\boldsymbol{\varepsilon})$ is:
\begin{align*}
\widehat{y}(\ensuremath\boldsymbol{x} + \ensuremath\boldsymbol{\varepsilon}) =\, &\widehat{y}(\ensuremath\boldsymbol{x}) + \ensuremath\boldsymbol{\varepsilon}^\top \nabla_{\ensuremath\boldsymbol{x}} \widehat{y}(\ensuremath\boldsymbol{x}) \\
&+ \frac{1}{2} \ensuremath\boldsymbol{\varepsilon}^\top \nabla^2_{\ensuremath\boldsymbol{x}} \widehat{y}(\ensuremath\boldsymbol{x})\, \ensuremath\boldsymbol{\varepsilon} + o(\ensuremath\boldsymbol{\varepsilon}^3).
\end{align*}
Therefore:
\begin{align*}
\widetilde{J} &\approx \mathbb{E}\Big( \big(\widehat{y}(\ensuremath\boldsymbol{x}) + \ensuremath\boldsymbol{\varepsilon}^\top \nabla_{\ensuremath\boldsymbol{x}} \widehat{y}(\ensuremath\boldsymbol{x}) + \frac{1}{2} \ensuremath\boldsymbol{\varepsilon}^\top \nabla^2_{\ensuremath\boldsymbol{x}} \widehat{y}(\ensuremath\boldsymbol{x})\, \ensuremath\boldsymbol{\varepsilon} \big)^2 \Big) \\
&- 2\mathbb{E}\Big( y\widehat{y}(\ensuremath\boldsymbol{x}) + y\ensuremath\boldsymbol{\varepsilon}^\top \nabla_{\ensuremath\boldsymbol{x}} \widehat{y}(\ensuremath\boldsymbol{x}) + \frac{1}{2} y \ensuremath\boldsymbol{\varepsilon}^\top \nabla^2_{\ensuremath\boldsymbol{x}} \widehat{y}(\ensuremath\boldsymbol{x})\, \ensuremath\boldsymbol{\varepsilon} \Big) \\
&+ \mathbb{E}(y^2) \\
&= \mathbb{E}\Big(\widehat{y}(\ensuremath\boldsymbol{x})^2 + y^2 - 2y\widehat{y}(\ensuremath\boldsymbol{x})\Big) -2\mathbb{E}\Big(\frac{1}{2} y \ensuremath\boldsymbol{\varepsilon}^\top \nabla^2_{\ensuremath\boldsymbol{x}} \widehat{y}(\ensuremath\boldsymbol{x})\, \ensuremath\boldsymbol{\varepsilon}\Big) \\
&+ \mathbb{E}\Big(\widehat{y}(\ensuremath\boldsymbol{x}) \ensuremath\boldsymbol{\varepsilon}^\top \nabla_{\ensuremath\boldsymbol{x}}^2 \widehat{y}(\ensuremath\boldsymbol{x}) \ensuremath\boldsymbol{\varepsilon} + (\ensuremath\boldsymbol{\varepsilon}^\top \nabla_{\ensuremath\boldsymbol{x}} \widehat{y}(\ensuremath\boldsymbol{x}))^2 + o(\ensuremath\boldsymbol{\varepsilon}^3)\Big).
\end{align*}
The first term, $\mathbb{E}(\widehat{y}(\ensuremath\boldsymbol{x})^2 + y^2 - 2y\widehat{y}(\ensuremath\boldsymbol{x})) = \mathbb{E}((\widehat{y}(\ensuremath\boldsymbol{x}) - y)^2)$, is the loss function before adding the noise to the input, according to Eq. (\ref{equation_noise_injection_notNoise}).
Also, because $\ensuremath\boldsymbol{\varepsilon} \sim \mathcal{N}(\ensuremath\boldsymbol{0}, \sigma^2\ensuremath\boldsymbol{I})$, we have $\mathbb{E}(\ensuremath\boldsymbol{\varepsilon}\ensuremath\boldsymbol{\varepsilon}^\top) = \sigma^2\ensuremath\boldsymbol{I}$. As the noise and the input are independent, the following term is simplified as:
\begin{align*}
\mathbb{E}\Big( (\ensuremath\boldsymbol{\varepsilon}^\top \nabla_{\ensuremath\boldsymbol{x}} \widehat{y}(\ensuremath\boldsymbol{x}))^2 \Big) &\overset{\perp\!\!\!\perp}{=} \mathbb{E}\big( \nabla_{\ensuremath\boldsymbol{x}} \widehat{y}(\ensuremath\boldsymbol{x})^\top\, \mathbb{E}(\ensuremath\boldsymbol{\varepsilon}\ensuremath\boldsymbol{\varepsilon}^\top)\, \nabla_{\ensuremath\boldsymbol{x}} \widehat{y}(\ensuremath\boldsymbol{x}) \big) \\
&= \sigma^2\, \mathbb{E}(||\nabla_{\ensuremath\boldsymbol{x}} \widehat{y}(\ensuremath\boldsymbol{x})||_2^2),
\end{align*}
and, using $\mathbb{E}(\ensuremath\boldsymbol{\varepsilon}^\top \ensuremath\boldsymbol{A}\, \ensuremath\boldsymbol{\varepsilon}) = \sigma^2\, \textbf{tr}(\ensuremath\boldsymbol{A})$ for any matrix $\ensuremath\boldsymbol{A}$ independent of the noise, the rest of the expression is simplified as:
\begin{align*}
&\mathbb{E}\Big(\widehat{y}(\ensuremath\boldsymbol{x}) \ensuremath\boldsymbol{\varepsilon}^\top \nabla_{\ensuremath\boldsymbol{x}}^2 \widehat{y}(\ensuremath\boldsymbol{x}) \ensuremath\boldsymbol{\varepsilon} \Big) -2\mathbb{E}\Big(\frac{1}{2} y \ensuremath\boldsymbol{\varepsilon}^\top \nabla^2_{\ensuremath\boldsymbol{x}} \widehat{y}(\ensuremath\boldsymbol{x})\, \ensuremath\boldsymbol{\varepsilon}\Big) \\
&= \mathbb{E}\Big(\big(\widehat{y}(\ensuremath\boldsymbol{x}) - y\big)\, \ensuremath\boldsymbol{\varepsilon}^\top \nabla_{\ensuremath\boldsymbol{x}}^2 \widehat{y}(\ensuremath\boldsymbol{x})\, \ensuremath\boldsymbol{\varepsilon} \Big) \\
&\overset{\perp\!\!\!\perp}{=} \sigma^2\, \mathbb{E}\Big( \big(\widehat{y}(\ensuremath\boldsymbol{x}) - y\big)\, \textbf{tr}\big(\nabla_{\ensuremath\boldsymbol{x}}^2 \widehat{y}(\ensuremath\boldsymbol{x})\big) \Big).
\end{align*}
Hence, the overall loss function after noise injection to the input is simplified to:
\begin{equation}
\begin{aligned}
\widetilde{J} \approx J &+ \sigma^2\, \mathbb{E}\Big( \big(\widehat{y}(\ensuremath\boldsymbol{x}) - y\big)\, \textbf{tr}\big(\nabla_{\ensuremath\boldsymbol{x}}^2 \widehat{y}(\ensuremath\boldsymbol{x})\big) \Big) \\
&+ \sigma^2\, \mathbb{E}(||\nabla_{\ensuremath\boldsymbol{x}} \widehat{y}(\ensuremath\boldsymbol{x})||_2^2),
\end{aligned}
\end{equation}
which is a regularized optimization problem with an $\ell_2$ norm penalty (see Eq. (\ref{equation_regularization_optimization_l2})). The penalty is on the derivatives of the output of the neural network, which means that we do not want significant changes in the output of the neural network. This penalization helps prevent overfitting.
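For a scalar input, the Hessian term reduces to the second derivative, and the approximation above can be checked by Monte Carlo simulation; the toy model $\widehat{y}(x) = x^2$ and the data below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model (an assumption for illustration): y_hat(x) = x^2, so the
# gradient is 2x and the second derivative is 2 everywhere.
x = np.array([0.5, -1.0, 1.5])
y = np.array([0.3, 0.8, 2.0])
sigma = 0.1

y_hat = x**2
J = np.mean((y_hat - y) ** 2)  # loss without noise injection

# Monte Carlo estimate of the loss with Gaussian input noise.
eps = sigma * rng.standard_normal((200_000, x.size))
J_noisy = np.mean(((x + eps) ** 2 - y) ** 2)

# Regularized approximation:
# J + sigma^2 E[(y_hat - y) * y_hat''] + sigma^2 E[(y_hat')^2].
J_approx = (J
            + sigma**2 * np.mean((y_hat - y) * 2.0)
            + sigma**2 * np.mean((2 * x) ** 2))

print(abs(J_noisy - J_approx))  # small: the penalty terms explain the gap
```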
Note that the technique of adding noise to the input is also used in denoising autoencoders \cite{vincent2008extracting}. Moreover, an overcomplete autoencoder with one hidden layer \cite{goodfellow2016deep} (where the number of hidden neurons is greater than the dimension of data) needs a noisy input; otherwise, the mapping in the autoencoder will just copy the input to the output without learning a latent space.
It is also noteworthy that injecting noise to the weights of neural network \cite{goodfellow2016deep,ho2008weight} can be interpreted similar to injecting noise to the input. Therefore, noise injection to the weights can also be interpreted as regularization where the regularization penalty term is $\sigma^2\, \mathbb{E}(||\nabla_{\ensuremath\boldsymbol{w}} \widehat{y}(\ensuremath\boldsymbol{x})||_2^2)$ where $\ensuremath\boldsymbol{w}$ is the vector of weights \cite{goodfellow2016deep}.
\subsubsection{Early Stopping in Neural Networks}\label{section_early_stopping}
As we mentioned in the explanations of Fig. \ref{figure_overfitting_curve}-a, we train the neural network up to a point where overfitting starts. This is referred to as \textit{early stopping} \cite{prechelt1998early,yao2007early}, which helps avoid overfitting \cite{caruana2001overfitting}.
According to Eq. (\ref{equation_regularization_J_hat}), we have:
\begin{align*}
\nabla_{\ensuremath\boldsymbol{w}} \widehat{J}(\ensuremath\boldsymbol{w}) &\approx \nabla_{\ensuremath\boldsymbol{w}} J(\ensuremath\boldsymbol{w}^*) + \ensuremath\boldsymbol{H} (\ensuremath\boldsymbol{w} - \ensuremath\boldsymbol{w}^*) \overset{(\ref{equation_regularization_derivative_J})}{=} \ensuremath\boldsymbol{H} (\ensuremath\boldsymbol{w} - \ensuremath\boldsymbol{w}^*).
\end{align*}
The gradient descent (with $\eta$ as the learning rate) used in the back-propagation of neural networks is \cite{boyd2004convex}:
\begin{align*}
&\ensuremath\boldsymbol{w}^{(t)} := \ensuremath\boldsymbol{w}^{(t-1)} - \eta \nabla_{\ensuremath\boldsymbol{w}} \widehat{J}(\ensuremath\boldsymbol{w}^{(t-1)}) \\
& ~~~~~~~~ = \ensuremath\boldsymbol{w}^{(t-1)} - \eta \ensuremath\boldsymbol{H} (\ensuremath\boldsymbol{w}^{(t-1)} - \ensuremath\boldsymbol{w}^*) \\
&\implies \ensuremath\boldsymbol{w}^{(t)} - \ensuremath\boldsymbol{w}^* = (\ensuremath\boldsymbol{I} - \eta \ensuremath\boldsymbol{H}) (\ensuremath\boldsymbol{w}^{(t-1)} - \ensuremath\boldsymbol{w}^*),
\end{align*}
where $t$ is the index of iteration.
According to Eq. (\ref{equation_regularization_Hessian_SVD}), we have:
\begin{align*}
\ensuremath\boldsymbol{w}^{(t)} - \ensuremath\boldsymbol{w}^* = (\ensuremath\boldsymbol{I} - \eta\, \ensuremath\boldsymbol{U}\ensuremath\boldsymbol{\Lambda}\ensuremath\boldsymbol{U}^\top) (\ensuremath\boldsymbol{w}^{(t-1)} - \ensuremath\boldsymbol{w}^*).
\end{align*}
Assuming the initial weights are $\ensuremath\boldsymbol{w}^{(0)} = \ensuremath\boldsymbol{0}$, we have:
\begin{align*}
&\ensuremath\boldsymbol{w}^{(1)} - \ensuremath\boldsymbol{w}^* = -(\ensuremath\boldsymbol{I} - \eta\, \ensuremath\boldsymbol{U}\ensuremath\boldsymbol{\Lambda}\ensuremath\boldsymbol{U}^\top) \ensuremath\boldsymbol{w}^* \\
&\implies \ensuremath\boldsymbol{w}^{(1)} = \big( \ensuremath\boldsymbol{I} - (\ensuremath\boldsymbol{I} - \eta\, \ensuremath\boldsymbol{U}\ensuremath\boldsymbol{\Lambda}\ensuremath\boldsymbol{U}^\top) \big) \ensuremath\boldsymbol{w}^* \\
&\overset{(a)}{\implies} \ensuremath\boldsymbol{w}^{(1)} = \big( \ensuremath\boldsymbol{U}\ensuremath\boldsymbol{U}^\top - (\ensuremath\boldsymbol{U}\ensuremath\boldsymbol{U}^\top - \eta\, \ensuremath\boldsymbol{U}\ensuremath\boldsymbol{\Lambda}\ensuremath\boldsymbol{U}^\top) \big) \ensuremath\boldsymbol{w}^* \\
&\implies \ensuremath\boldsymbol{w}^{(1)} = \ensuremath\boldsymbol{U} \big( \ensuremath\boldsymbol{I} - (\ensuremath\boldsymbol{I} - \eta\, \ensuremath\boldsymbol{\Lambda}) \big) \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{w}^*,
\end{align*}
where $(a)$ is because $\ensuremath\boldsymbol{U}$ is a non-truncated orthogonal matrix so $\ensuremath\boldsymbol{U}\ensuremath\boldsymbol{U}^\top = \ensuremath\boldsymbol{I}$.
By induction, we have:
\begin{align}
&\ensuremath\boldsymbol{w}^{(t)} = \ensuremath\boldsymbol{U} \big( \ensuremath\boldsymbol{I} - (\ensuremath\boldsymbol{I} - \eta\, \ensuremath\boldsymbol{\Lambda})^t \big) \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{w}^*, \nonumber \\
&\implies \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{w}^{(t)} = \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{U} \big( \ensuremath\boldsymbol{I} - (\ensuremath\boldsymbol{I} - \eta\, \ensuremath\boldsymbol{\Lambda})^t \big) \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{w}^*, \nonumber \\
&\overset{(a)}{\implies} \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{w}^{(t)} = \big( \ensuremath\boldsymbol{I} - (\ensuremath\boldsymbol{I} - \eta\, \ensuremath\boldsymbol{\Lambda})^t \big) \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{w}^*, \label{equation_early_stopping_U_transpose_w}
\end{align}
where $(a)$ is because $\ensuremath\boldsymbol{U}$ is an orthogonal matrix so $\ensuremath\boldsymbol{U}^\top\ensuremath\boldsymbol{U} = \ensuremath\boldsymbol{I}$.
On the other hand, recall Eq. (\ref{equation_weight_decay_w}):
\begin{align}
&\ensuremath\boldsymbol{w}^{\dagger} = \ensuremath\boldsymbol{U}(\ensuremath\boldsymbol{\Lambda} + \alpha\ensuremath\boldsymbol{I})^{-1} \ensuremath\boldsymbol{\Lambda}\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{w}^*, \nonumber \\
&\implies \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{w}^{\dagger} = (\ensuremath\boldsymbol{\Lambda} + \alpha\ensuremath\boldsymbol{I})^{-1} \ensuremath\boldsymbol{\Lambda}\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{w}^*, \nonumber \\
&\overset{(a)}{\implies} \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{w}^{\dagger} = \big( \ensuremath\boldsymbol{I} - (\ensuremath\boldsymbol{\Lambda} + \alpha\ensuremath\boldsymbol{I})^{-1} \alpha \big) \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{w}^*, \label{equation_weight_decay_U_transpose_w}
\end{align}
where $(a)$ follows from the rearrangement $(\ensuremath\boldsymbol{\Lambda} + \alpha\ensuremath\boldsymbol{I})^{-1} \ensuremath\boldsymbol{\Lambda} = \ensuremath\boldsymbol{I} - (\ensuremath\boldsymbol{\Lambda} + \alpha\ensuremath\boldsymbol{I})^{-1} \alpha$, asserted in \cite{goodfellow2016deep}.
Comparing Eqs. (\ref{equation_early_stopping_U_transpose_w}) and (\ref{equation_weight_decay_U_transpose_w}) shows that early stopping can be seen as an $\ell_2$ norm regularization or weight decay \cite{goodfellow2016deep}.
In fact, Eqs. (\ref{equation_early_stopping_U_transpose_w}) and (\ref{equation_weight_decay_U_transpose_w}) are equivalent if:
\begin{align}\label{equation_early_stopping_equating_expressions}
(\ensuremath\boldsymbol{I} - \eta\, \ensuremath\boldsymbol{\Lambda})^t = (\ensuremath\boldsymbol{\Lambda} + \alpha\ensuremath\boldsymbol{I})^{-1} \alpha,
\end{align}
for some $\eta$, $t$, and $\alpha$.
If we take the logarithm of these expressions and use the Taylor series expansion of $\log(1+x)$, we have:
\begin{align}
\log (\ensuremath\boldsymbol{I} - \eta\, \ensuremath\boldsymbol{\Lambda})^t &= t \log (\ensuremath\boldsymbol{I} - \eta\, \ensuremath\boldsymbol{\Lambda}) \nonumber \\
&\approx -t\, (\eta \ensuremath\boldsymbol{\Lambda} + \frac{1}{2} \eta^2 \ensuremath\boldsymbol{\Lambda}^2 + \frac{1}{3} \eta^3 \ensuremath\boldsymbol{\Lambda}^3 + \cdots), \label{equation_early_stopping_log_1}
\end{align}
\begin{align}
\log (\ensuremath\boldsymbol{\Lambda} + \alpha\ensuremath\boldsymbol{I})^{-1} \alpha &= -\log (\ensuremath\boldsymbol{\Lambda} + \alpha\ensuremath\boldsymbol{I}) + \log \alpha \nonumber \\
&= -\log (\alpha (\ensuremath\boldsymbol{I} + \frac{1}{\alpha} \ensuremath\boldsymbol{\Lambda})) + \log \alpha \nonumber \\
&= -\log \alpha - \log (\ensuremath\boldsymbol{I} + \frac{1}{\alpha} \ensuremath\boldsymbol{\Lambda}) + \log \alpha \nonumber \\
&\approx \frac{-1}{\alpha} \ensuremath\boldsymbol{\Lambda} + \frac{1}{2\alpha^2} \ensuremath\boldsymbol{\Lambda}^2 - \frac{1}{3\alpha^3} \ensuremath\boldsymbol{\Lambda}^3 + \cdots. \label{equation_early_stopping_log_2}
\end{align}
Equating Eqs. (\ref{equation_early_stopping_log_1}) and (\ref{equation_early_stopping_log_2}) because of Eq. (\ref{equation_early_stopping_equating_expressions}) gives us:
\begin{align}
\alpha \approx \frac{1}{t\, \eta}, ~~~ t \approx \frac{1}{\alpha\, \eta},
\end{align}
which shows that the weight decay ($\ell_2$ norm) regularization parameter is inversely proportional to the number of training iterations. In other words, the more training iterations we have, the less we penalize the weights and the more the network might overfit.
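This correspondence can be sketched numerically by comparing the two residual factors of Eq. (\ref{equation_early_stopping_equating_expressions}) with $\alpha = 1/(t\eta)$; the eigenvalues and hyperparameters below are made-up, and the match is close only where $\eta\lambda_j$ is small:

```python
import numpy as np

# Hypothetical Hessian eigenvalues and gradient-descent settings.
lam = np.array([0.05, 0.2, 1.0])
eta, t = 0.01, 50
alpha = 1.0 / (t * eta)  # the corresponding weight decay parameter

early_stop = (1.0 - eta * lam) ** t   # residual factor after t iterations
weight_decay = alpha / (lam + alpha)  # residual factor of weight decay

print(early_stop)
print(weight_decay)  # close to early_stop where eta * lam is small
```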
Moreover, some empirical studies \cite{zur2009noise} show that noise injection and weight decay have more effectiveness than early stopping for avoiding overfitting, although early stopping has its own merits.
\section{Bagging}
\subsection{Definition}
Bagging is short for Bootstrap AGGregatING, first proposed by \cite{breiman1996bagging}. It is a meta-algorithm which can be used with any model (classification, regression, etc.).
The definition of \textit{bootstrapping} is as follows.
Suppose we have a sample $\{\ensuremath\boldsymbol{x}_i\}_{i=1}^n$ of size $n$, where $f(\ensuremath\boldsymbol{x})$ is the unknown distribution of the sample, i.e., $\ensuremath\boldsymbol{x}_i \overset{iid}{\sim} f(\ensuremath\boldsymbol{x})$. We would like to sample from this distribution, but we do not know $f(\ensuremath\boldsymbol{x})$. Approximating sampling from the distribution by randomly sampling from the available sample is named bootstrapping. In bootstrapping, we use simple random sampling with replacement. The drawn sample is named a \textit{bootstrap sample}.
In bagging, we draw $k$ \textit{bootstrap} samples, each with some sample size. Then, we train the model $h_j$ using the $j$-th bootstrap sample, $\forall j \in \{1, \dots, k\}$. Hence, we have $k$ trained models rather than one. Finally, we \textit{aggregate} the estimations of the $k$ models for an instance $\ensuremath\boldsymbol{x}$:
\begin{align}\label{equation_bagging_f_hat}
\widehat{f}(\ensuremath\boldsymbol{x}) = \frac{1}{k} \sum_{j=1}^k h_j(\ensuremath\boldsymbol{x}).
\end{align}
If the model is a classifier, we can use the sign function for aggregation:
\begin{align}
\widehat{f}(\ensuremath\boldsymbol{x}) = \text{sign}\big( \frac{1}{k} \sum_{j=1}^k h_j(\ensuremath\boldsymbol{x}) \big).
\end{align}
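A minimal sketch of bagging for classification follows, using a deliberately simple threshold rule as the base model; the data and the base learner are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 1-D sample: the label is the sign of x plus some label noise.
n = 200
x = rng.standard_normal(n)
y = np.sign(x + 0.3 * rng.standard_normal(n))

def fit_stump(x_boot):
    # A toy base learner: threshold at the bootstrap sample's mean.
    return x_boot.mean()

k = 25
thresholds = []
for _ in range(k):
    idx = rng.integers(0, n, size=n)  # bootstrap: sampling with replacement
    thresholds.append(fit_stump(x[idx]))

def predict(x_new):
    # Aggregate the k classifiers by the sign of their average vote.
    votes = np.array([np.sign(x_new - th) for th in thresholds])
    return np.sign(votes.mean(axis=0))

accuracy = np.mean(predict(x) == y)
print(accuracy)  # the bagged ensemble recovers the sign rule reasonably well
```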
\subsection{Theory}
Let $e_j$ denote the error of the $j$-th model in estimating the observation of an instance. Suppose this error is a random variable with a normal distribution having mean zero, i.e., $e_j \overset{iid}{\sim} \mathcal{N}(0,s)$ where $s := \sigma^2$. We denote the covariance of the estimations of two models trained on two different bootstrap samples by $c$. Therefore, we have:
\begin{align}
& \mathbb{E}(e_j^2) = s \nonumber \\
&\implies \mathbb{V}\text{ar}(e_j) = \mathbb{E}(e_j^2) - (\mathbb{E}(e_j))^2 = s - 0 = s \nonumber \\
&\implies \mathbb{V}\text{ar}(h_j(\ensuremath\boldsymbol{x})) = s, \label{equation_bagging_variance} \\
& \mathbb{E}(e_j\, e_{\ell}) = c \nonumber \\
&\implies \mathbb{C}\text{ov}(e_j, e_{\ell}) = \mathbb{E}(e_j\, e_{\ell}) - \mathbb{E}(e_j) \mathbb{E}(e_{\ell}) \nonumber \\
& = c - (0 \times 0) = c \implies \mathbb{C}\text{ov}(h_j(\ensuremath\boldsymbol{x}), h_{\ell}(\ensuremath\boldsymbol{x})) = c, \label{equation_bagging_covariance}
\end{align}
for all $j, \ell \in \{1, \dots, k\}, j \neq \ell$.
According to Eqs. (\ref{equation_bagging_f_hat}), (\ref{equation_bagging_variance}), and (\ref{equation_bagging_covariance}), we have:
\begin{align}
&\mathbb{V}\text{ar}\big(\widehat{f}(\ensuremath\boldsymbol{x})\big) = \frac{1}{k^2} \mathbb{V}\text{ar}\big(\sum_{j=1}^k h_j(\ensuremath\boldsymbol{x})\big) \nonumber \\
&\overset{(\ref{equation_variance_multiple})}{=} \frac{1}{k^2} \sum_{j=1}^k \mathbb{V}\text{ar}(h_j(\ensuremath\boldsymbol{x})) \nonumber \\
&~~~~~~~~ + \frac{1}{k^2} \sum_{j=1}^k \sum_{\ell=1, \ell\neq j}^k \mathbb{C}\text{ov}(h_j(\ensuremath\boldsymbol{x}), h_{\ell}(\ensuremath\boldsymbol{x})) \nonumber \\
&= \frac{1}{k^2} ks + \frac{1}{k^2} k(k-1)c = \frac{1}{k} s + \frac{k-1}{k} c.
\end{align}
The obtained expression has an interesting interpretation: If two trained models with two different bootstrap samples are very correlated, we will have $c \approx s$, thus:
\begin{align}
\lim_{c \rightarrow s} \mathbb{V}\text{ar}\big(\widehat{f}(\ensuremath\boldsymbol{x})\big) = \frac{1}{k} s + \frac{k-1}{k} s = s,
\end{align}
and if the two trained models are very different (uncorrelated), we will have $c \approx 0$, hence:
\begin{align}
\lim_{c \rightarrow 0} \mathbb{V}\text{ar}\big(\widehat{f}(\ensuremath\boldsymbol{x})\big) = \frac{1}{k} s + \frac{k-1}{k} 0 = \frac{1}{k} s.
\end{align}
This means that if the trained models are very correlated in bagging, there is no difference from using only one model; however, if the trained models are diverse, the variance of estimation improves significantly, by a factor of $k$.
This also implies that bagging is never destructive; it is either ineffective or it improves the estimation in terms of variance \cite{buhlmann2000explaining,breiman1996bagging}.
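This variance formula can be checked numerically; the following sketch (the values of $s$, $c$, and $k$ are arbitrary choices for illustration) simulates correlated model errors and compares the empirical variance of their average against $\frac{1}{k}s + \frac{k-1}{k}c$:

```python
import numpy as np

rng = np.random.default_rng(1)
k, s, c = 10, 1.0, 0.3          # k models, error variance s, pairwise covariance c

# Covariance matrix of the k model errors: s on the diagonal, c elsewhere.
Sigma = np.full((k, k), c) + (s - c) * np.eye(k)

# Draw many realizations of the k errors and average them (the bagged error).
errors = rng.multivariate_normal(np.zeros(k), Sigma, size=200_000)
bagged = errors.mean(axis=1)

empirical = bagged.var()
theoretical = s / k + (k - 1) / k * c
print(round(float(empirical), 3), round(float(theoretical), 3))
```

With $c$ close to $s$, the empirical variance stays near $s$; with $c$ close to $0$, it shrinks toward $s/k$, matching the two limits above.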
Figure \ref{figure_overfitting_example} shows that the more complex model usually has more variance and less bias. This trade-off is shown in Fig. \ref{figure_overfitting_curve}.
Therefore, higher variance corresponds to overfitting. As bagging helps decrease the variance of estimation, it helps prevent overfitting. Therefore, bagging is a meta-algorithm useful for reducing variance and avoiding overfitting \cite{breiman1998arcing}.
Moreover, as will also be mentioned in Section \ref{section_boosting_generalization_error_bound}, bagging can be seen as an \textit{ensemble learning} method \cite{polikar2012ensemble} which is useful because of \textit{model averaging} \cite{hoeting1999bayesian,claeskens2008model}.
\subsection{Examples in Machine Learning: Random Forest and Dropout}
\subsubsection{Random Forest}
One of the examples of using bagging in machine learning is \textit{random forest} \cite{liaw2002classification}.
In random forest, we train different models (trees) using different bootstrap samples (subsets of the training set). However, as the trees work similarly, they will be very correlated. For the reason already explained, this does not yield a significant improvement over using one tree. Random forest addresses this issue by also sampling from the features (dimensions) of the bootstrap sample. This makes the trained trees very different and thus results in a noticeable improvement.
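A minimal sketch of this idea follows; to keep it short, we use hypothetical toy data and trivial one-feature "stumps" instead of full decision trees, so it only illustrates the bootstrap-plus-feature-subsampling mechanism:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy binary labels in {-1, +1}: the class is the sign of x_0 + x_1;
# the remaining features are pure noise.
n, d = 300, 5
X = rng.normal(size=(n, d))
y = np.sign(X[:, 0] + X[:, 1])

def train_stump(Xs, ys, feats):
    # Trivial "tree": pick, among the sampled features, the single feature
    # whose sign best matches the labels on the bootstrap sample.
    accs = [(np.mean(np.sign(Xs[:, f]) == ys), f) for f in feats]
    best = max(accs)[1]
    return lambda x: np.sign(x[best])

k, m = 51, 2  # k stumps; each sees a random subset of m features
stumps = []
for _ in range(k):
    idx = rng.integers(0, n, size=n)              # bootstrap the instances
    feats = rng.choice(d, size=m, replace=False)  # subsample the features
    stumps.append(train_stump(X[idx], y[idx], feats))

def forest_predict(x):
    return np.sign(sum(h(x) for h in stumps))

acc = np.mean([forest_predict(X[i]) == y[i] for i in range(n)])
print(round(float(acc), 2))
```

The feature subsampling forces different stumps to rely on different dimensions, decorrelating them before the bagging-style majority vote.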
\subsubsection{Dropout}
Another example of bagging is \textit{dropout} in neural networks \cite{srivastava2014dropout}. In dropout, in every iteration of the training phase, neurons are randomly dropped with probability $p = 0.5$, i.e., we sample from a Bernoulli distribution.
This makes the training phase resemble training many different neural networks, analogous to the different models in bagging.
At test time, all the neurons are used but their outputs are multiplied by $p$. This imitates the model averaging of bagging in Eq. (\ref{equation_bagging_f_hat}). That is why dropout prevents the neural network from overfitting.
Another intuition for why dropout works is that it makes the neural network sparse, which is very effective because of the principle of sparsity \cite{friedman2001elements,tibshirani2015statistical} or Occam's razor \cite{domingos1999role} introduced before.
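The mechanism can be sketched as a toy forward pass on a vector of activations (with $p = 0.5$ as in the text; the activations themselves are an arbitrary assumption for illustration):

```python
import numpy as np

p = 0.5  # keep probability (Bernoulli parameter)

def dropout_train(a, rng, p=0.5):
    # Training: every neuron is dropped independently with probability 1 - p.
    mask = rng.random(a.shape) < p
    return a * mask

def dropout_test(a, p=0.5):
    # Test time: all neurons are kept, but outputs are scaled by p,
    # imitating the model averaging of bagging.
    return a * p

a = np.ones(10_000)
train_mean = dropout_train(a, np.random.default_rng(3), p).mean()
test_mean = dropout_test(a, p).mean()
print(round(float(train_mean), 2), float(test_mean))  # expected activations agree (both near 0.5)
```

The scaling at test time keeps the expected activation equal between the two phases, which is the averaging step of Eq. (\ref{equation_bagging_f_hat}) in disguise.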
\subsection{Examples in Computer Vision: HOG and SSD}
\subsubsection{Histogram of Oriented Gradients}
An example of bagging is Histogram of Oriented Gradients (HOG) \cite{dalal2005histograms} used in computer vision, especially for human detection in images.
In HOG, different cells or blocks are used, each of which includes a histogram of gradients of a sub-region of the image. Finally, using bagging, the histograms are combined into one histogram. The effectiveness of HOG is largely due to the effectiveness of bagging.
\subsubsection{Single Shot multi-box Detector}
Single Shot multi-box Detector (SSD) \cite{liu2016ssd} is another usage of bagging in computer vision and object detection using deep neural networks \cite{lecun2015deep,goodfellow2016deep}.
In SSD, a set of bounding boxes (i.e., the models in bagging) with different sizes are used, which are processed and learned using convolutional layers in the neural network. Some of the boxes are matched and their weighted summation is used as the loss function which the neural network optimizes.
\section{Boosting}
\subsection{Definition}
Boosting is a meta-algorithm which can be used with any model (classification, regression, etc.). For binary classification, for example, if we use boosting with a classifier that is even slightly better than flipping a coin, we will obtain a strong classifier (we explain the reason later in Section \ref{section_boosting_generalization_error_bound}). Thus, we can say boosting makes the estimation or classification very strong. In other words, boosting addresses the question of whether a strong classifier can be obtained from a set of weak classifiers \cite{kearns1988thoughts,kearns1994cryptographic}.
The idea of boosting is to learn $k$ models in a hierarchy where every model gives more attention (larger weight) to the instances misclassified (or estimated very badly) by the previous model. Figure \ref{figure_boosting} shows this hierarchy. Finally, the overall estimation or classification is a weighted summation (average) of the $k$ estimations.
For an instance $\ensuremath\boldsymbol{x}$, we have:
\begin{align}\label{equation_boosting_f_hat}
\widehat{f}(\ensuremath\boldsymbol{x}) = \sum_{j=1}^k \alpha_j\, h_j(\ensuremath\boldsymbol{x}).
\end{align}
If the model is a classifier, we can use the sign function:
\begin{align}
\widehat{f}(\ensuremath\boldsymbol{x}) = \text{sign}\big( \sum_{j=1}^k \alpha_j h_j(\ensuremath\boldsymbol{x}) \big),
\end{align}
which is equivalent to \textit{majority voting} among the trained classifiers.
\begin{figure}[!t]
\centering
\includegraphics[width=2.6in]{./images/boosting}
\caption{The training phase in boosting $k$ models.}
\label{figure_boosting}
\end{figure}
Different methods have been proposed for boosting; one of the most well-known is AdaBoost (Adaptive Boosting) \cite{freund1996experiments}.
The algorithm of AdaBoost for binary classification is shown in Algorithm \ref{algorithm_AdaBoost}.
In this algorithm, $L_j$ is the cost function minimized by the $j$-th model $h_j$, $\mathbb{I}(\cdot)$ is the indicator function which is one if its condition is satisfied and zero otherwise, and $w_i$ is the weight associated with the $i$-th instance for weighting it as the input to the next layer of boosting.
Here, we can have several cases which help us understand the interpretation of the AdaBoost algorithm:
\begin{itemize}
\item If an instance is correctly classified, $\mathbb{I}(y_i \neq h_j(\ensuremath\boldsymbol{x}_i))$ is zero and thus $w_i$ remains unchanged. This makes sense because a correctly classified instance should not gain a significant weight in the next layer of boosting.
\item If an instance is misclassified, $\mathbb{I}(y_i \neq h_j(\ensuremath\boldsymbol{x}_i))$ is one. In this case, we can have two sub-cases:
\begin{itemize}
\item If the classifier which classified that instance was a bad classifier, its cost would be like flipping a coin, i.e., $L_j = 0.5$. Therefore, we will have $\alpha_j = \log(1)=0$ and again $w_i$ remains unchanged. This makes sense because we cannot trust a bad classifier's decision, whether the instance is correctly or incorrectly classified, and thus we should not make any decision based on it.
\item If the classifier which classified that instance was a good classifier, then we have $L_j < 0.5$ and thus $\alpha_j > 0$. As we also have $\mathbb{I}(y_i \neq h_j(\ensuremath\boldsymbol{x}_i)) = 1$, the weight will increase as $w_i := w_i\, \exp(\alpha_j)$. This is also intuitive because the previous model in the boosting hierarchy was a good classifier which we can trust, and yet it could not classify the instance correctly. Therefore, we should give more attention to that instance in the next model of the boosting hierarchy.
\end{itemize}
\end{itemize}
Note that the cost in AdaBoost is:
\begin{align}\label{equation_AdaBoost_cost}
L_j = \frac{\sum_{i=1}^n w_i\, \mathbb{I}(y_i \neq h_j(\ensuremath\boldsymbol{x}_i))}{\sum_{i=1}^n w_i},
\end{align}
which makes sense because it gets larger if the observations of more instances are estimated incorrectly.
\SetAlCapSkip{0.5em}
\IncMargin{0.8em}
\begin{algorithm2e}[!t]
\DontPrintSemicolon
\textbf{Initialize} $w_i = 1/n, \forall i \in \{1, \dots, n\}$\;
\For{$j$ from $1$ to $k$}{
$h_j(\ensuremath\boldsymbol{x}) = \arg \min L_j$\;
$\alpha_j = \log(\frac{1-L_j}{L_j})$\; \label{algorithm_AdaBoost_alpha}
$w_i = w_i\, \exp\big(\alpha_j\, \mathbb{I}(y_i \neq h_j(\ensuremath\boldsymbol{x}_i))\big)$\; \label{algorithm_AdaBoost_w}
}
\caption{The AdaBoost Algorithm}\label{algorithm_AdaBoost}
\end{algorithm2e}
\DecMargin{0.8em}
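The algorithm above can be sketched in Python with decision stumps as weak learners; the toy one-dimensional data, the number of models, and the brute-force stump search below are our own illustrative choices, not prescribed by AdaBoost:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy 1-D data: class +1 on the right, -1 on the left, with 10% label noise.
n = 200
x = rng.uniform(-1, 1, size=n)
y = np.where(x > 0, 1, -1)
flip = rng.random(n) < 0.1
y[flip] *= -1

def best_stump(x, y, w):
    # Weak learner: threshold classifier minimizing the weighted error L_j.
    best = (np.inf, 0.0, 1)
    for t in np.sort(x):
        for s in (1, -1):
            pred = np.where(x > t, s, -s)
            L = np.sum(w * (pred != y)) / np.sum(w)
            if L < best[0]:
                best = (L, t, s)
    return best

# AdaBoost loop, following the algorithm's lines.
w = np.full(n, 1 / n)
stumps, alphas = [], []
for _ in range(10):
    L, t, s = best_stump(x, y, w)
    alpha = np.log((1 - L) / L)
    w = w * np.exp(alpha * (np.where(x > t, s, -s) != y))
    stumps.append((t, s))
    alphas.append(alpha)

def f_hat(xq):
    # Weighted majority vote of the k stumps.
    return np.sign(sum(a * (s if xq > t else -s)
                       for a, (t, s) in zip(alphas, stumps)))

acc = np.mean([f_hat(x[i]) == y[i] for i in range(n)])
print(round(float(acc), 2))
```

Note how a misclassified instance keeps the factor $\exp(\alpha_j)$ in its weight, so the next stump concentrates on it.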
\subsection{Theory Based on Additive Models}
Additive models \cite{hastie1986generalized} can be used to explain why boosting works \cite{friedman2000additive,rojas2009adaboost}.
In an additive model, we map the data as $\ensuremath\boldsymbol{x} \mapsto \phi_j(\ensuremath\boldsymbol{x}), \forall j\in \{1, \dots, k\}$ and then add the mappings using some weights $\beta_j$'s:
\begin{align}\label{equation_additive_model}
\phi(\ensuremath\boldsymbol{x}) = \sum_{j=1}^k \beta_j\, \phi_j(\ensuremath\boldsymbol{x}).
\end{align}
A well-known example of the additive model is Radial Basis Function (RBF) neural network (here with $k$ hidden nodes) which uses Gaussian mappings \cite{broomhead1988multivariable,schwenker2001three}.
Now, consider a cost function for an instance as:
\begin{align}
L(y, h(\ensuremath\boldsymbol{x})) := \exp(- y\, h(\ensuremath\boldsymbol{x})),
\end{align}
where $y$ is the observation or label for $\ensuremath\boldsymbol{x}$ and $h(\ensuremath\boldsymbol{x})$ is the model's estimation of $y$. This cost is intuitive because when the instance is misclassified, the signs of $y$ and $h(\ensuremath\boldsymbol{x})$ will be different and the cost will be large, while in case of correct classification, the signs are similar and the cost is small.
If we add up the cost over the $n$ training instances, we have:
\begin{align}\label{equation_boosting_L_t}
L_t(y, h(\ensuremath\boldsymbol{x})) := \sum_{i=1}^n \exp(- y_i\, h(\ensuremath\boldsymbol{x}_i)),
\end{align}
where $L_t$ denotes the total cost.
In Eq. (\ref{equation_additive_model}), if we rename the mapping to $h(\ensuremath\boldsymbol{x})$, which is the model used in boosting, we will have:
\begin{align}
h(\ensuremath\boldsymbol{x}) = \sum_{j=1}^k \beta_j\, h_j(\ensuremath\boldsymbol{x}).
\end{align}
We can write this expression as a \textit{forward stage-wise additive model} \cite{friedman2000additive,rojas2009adaboost} in which we fit the models one by one, adding each new model to the sum of the previously fitted ones:
\begin{align}
&f_{q-1}(\ensuremath\boldsymbol{x}) = \sum_{j=1}^{q-1} \beta_j\, h_j(\ensuremath\boldsymbol{x}), \label{equation_forward_additive_model_1} \\
&f_{q}(\ensuremath\boldsymbol{x}) = f_{q-1}(\ensuremath\boldsymbol{x}) + \beta_q\, h_q(\ensuremath\boldsymbol{x}), ~~~ q \leq k, \label{equation_forward_additive_model_2}
\end{align}
where $h(\ensuremath\boldsymbol{x}) = f_k(\ensuremath\boldsymbol{x})$.
Therefore, minimizing the cost, i.e., Eq. (\ref{equation_boosting_L_t}), for the $j$-th model in the additive manner is:
\begin{align*}
&\min_{\beta_j, h_j} \sum_{i=1}^n \exp\big(\! - y_i\, [f_{j-1}(\ensuremath\boldsymbol{x}_i) + \beta_j\, h_j(\ensuremath\boldsymbol{x}_i)]\big) \\
&= \min_{\beta_j, h_j} \sum_{i=1}^n \exp(- y_i\, f_{j-1}(\ensuremath\boldsymbol{x}_i)) \exp(-y_i\, \beta_j\, h_j(\ensuremath\boldsymbol{x}_i)).
\end{align*}
The first term is a constant with respect to $\beta_j$ and $h_j$, so we denote it by $w_i$:
\begin{align}\label{equation_boosting_w}
w_i := \exp(- y_i\, f_{j-1}(\ensuremath\boldsymbol{x}_i)).
\end{align}
Thus:
\begin{align*}
&\min_{\beta_j, h_j} \sum_{i=1}^n w_i \exp(-y_i\, \beta_j\, h_j(\ensuremath\boldsymbol{x}_i)).
\end{align*}
As $y_i$ and $h_j$ take values in $\{-1,+1\}$ in binary AdaBoost, we can say:
\begin{align*}
&\min_{\beta_j, h_j} \exp(-\beta_j) \sum_{i=1}^n w_i\, \mathbb{I}(y_i = h_j(\ensuremath\boldsymbol{x}_i)) \\
&~~~~~~~~ + \exp(\beta_j) \sum_{i=1}^n w_i\, \mathbb{I}(y_i \neq h_j(\ensuremath\boldsymbol{x}_i)) \\
&\overset{(a)}{=} \min_{\beta_j, h_j} \exp(-\beta_j) \sum_{i=1}^n w_i \\
&~~~~~~~~ - \exp(-\beta_j) \sum_{i=1}^n w_i\, \mathbb{I}(y_i \neq h_j(\ensuremath\boldsymbol{x}_i)) \\
&~~~~~~~~ + \exp(\beta_j) \sum_{i=1}^n w_i\, \mathbb{I}(y_i \neq h_j(\ensuremath\boldsymbol{x}_i)),
\end{align*}
where $(a)$ is because:
\begin{align*}
\sum_{i=1}^n w_i\, \mathbb{I}(y_i = h_j(\ensuremath\boldsymbol{x}_i)) = \sum_{i=1}^n w_i - \sum_{i=1}^n w_i\, \mathbb{I}(y_i \neq h_j(\ensuremath\boldsymbol{x}_i)).
\end{align*}
For the sake of minimization, we take the derivative:
\begin{align*}
&\frac{\partial L_t}{\partial \beta_j} =
-\exp(-\beta_j) \sum_{i=1}^n w_i \\
&~~~~~~~~ + \exp(-\beta_j) \sum_{i=1}^n w_i\, \mathbb{I}(y_i \neq h_j(\ensuremath\boldsymbol{x}_i)) \\
&~~~~~~~~ + \exp(\beta_j) \sum_{i=1}^n w_i\, \mathbb{I}(y_i \neq h_j(\ensuremath\boldsymbol{x}_i)) \overset{\text{set}}{=} 0,
\end{align*}
which gives:
\begin{align}
&\implies (\exp(-\beta_j) + \exp(\beta_j)) \times \nonumber \\
&~~~~~~~~~ \frac{\sum_{i=1}^n w_i\, \mathbb{I}(y_i \neq h_j(\ensuremath\boldsymbol{x}_i))}{\sum_{i=1}^n w_i} = \exp(-\beta_j) \nonumber \\
&\overset{(\ref{equation_AdaBoost_cost})}{\implies} (\exp(-\beta_j) + \exp(\beta_j))\, L_j = \exp(-\beta_j) \nonumber \\
&\implies L_j = \frac{\exp(-\beta_j)}{\exp(-\beta_j) + \exp(\beta_j)} \nonumber \\
&\implies \exp(2\beta_j) = \frac{1 - L_j}{L_j} \implies 2\beta_j = \log(\frac{1-L_j}{L_j}) \nonumber \\
&\overset{(a)}{\implies} \alpha_j = 2\beta_j, \label{equation_boosting_alpha_beta}
\end{align}
where $(a)$ is because of the line \ref{algorithm_AdaBoost_alpha} in Algorithm \ref{algorithm_AdaBoost}.
According to Eqs. (\ref{equation_forward_additive_model_1}), (\ref{equation_forward_additive_model_2}), and (\ref{equation_boosting_w}), we have:
\begin{align}\label{equation_boosting_w_2}
w_i := w_i\, \exp(-y_i\, \beta_j\, h_j(\ensuremath\boldsymbol{x}_i)).
\end{align}
As we have $y_i\, h_j(\ensuremath\boldsymbol{x}_i) = \pm 1$, we can say:
\begin{align}\label{equation_boosting_y_h}
-y_i\, h_j(\ensuremath\boldsymbol{x}_i) = 2\, \mathbb{I}(y_i \neq h_j(\ensuremath\boldsymbol{x}_i)) - 1.
\end{align}
According to Eqs. (\ref{equation_boosting_alpha_beta}), (\ref{equation_boosting_w_2}), and (\ref{equation_boosting_y_h}), we have:
\begin{align}
w_i := w_i\, \exp\big(\alpha_j\, \mathbb{I}(y_i \neq h(\ensuremath\boldsymbol{x}_i))\big)\, \exp(-\beta_j),
\end{align}
which is equivalent to the line \ref{algorithm_AdaBoost_w} in Algorithm \ref{algorithm_AdaBoost} with a factor of $\exp(-\beta_j)$.
This factor multiplies the weights of all instances equally, regardless of whether the instance is correctly classified; hence, it has no impact on the relative weighting.
\subsection{Theory Based on Maximum Margin}
\subsubsection{Upper Bound on the Generalization Error of Boosting}\label{section_boosting_generalization_error_bound}
There is an upper bound on the generalization error of boosting \cite{schapire1998boosting}.
In binary boosting, we have $\pm 1$ for $y_i$ and also the sign of $\widehat{f}(\ensuremath\boldsymbol{x}_i)$ is important; therefore, $y_i\,\widehat{f}(\ensuremath\boldsymbol{x}_i) < 0$ means that we have error for estimating the $i$-th instance.
Thus, for an error, we have:
\begin{align}\label{equation_boosting_y_f_theta}
y_i\,\widehat{f}(\ensuremath\boldsymbol{x}_i) \leq \theta,
\end{align}
for a $\theta>0$.
Recall Eq. (\ref{equation_boosting_f_hat}). We can normalize it because only its sign matters:
\begin{align}\label{equation_boosting_f_hat_normalized}
\widehat{f}(\ensuremath\boldsymbol{x}_i) = \frac{\sum_{j=1}^k \alpha_j\, h_j(\ensuremath\boldsymbol{x}_i)}{\sum_{j=1}^k \alpha_j}.
\end{align}
According to Eqs. (\ref{equation_boosting_y_f_theta}) and (\ref{equation_boosting_f_hat_normalized}), we have:
\begin{align*}
&y_i\,\widehat{f}(\ensuremath\boldsymbol{x}_i) \leq \theta \Longleftrightarrow y_i \sum_{j=1}^k \alpha_j\, h_j(\ensuremath\boldsymbol{x}_i) \leq \theta \sum_{j=1}^k \alpha_j \\
&\Longleftrightarrow \exp\big(-y_i \sum_{j=1}^k \alpha_j\, h_j(\ensuremath\boldsymbol{x}_i) + \theta \sum_{j=1}^k \alpha_j\big) \geq 1.
\end{align*}
Therefore, in terms of probability, we have:
\begin{align}
&\mathbb{P}(y_i\,\widehat{f}(\ensuremath\boldsymbol{x}_i) \leq \theta) \nonumber \\
&~~~= \mathbb{P}\Big(\exp\big(-y_i \sum_{j=1}^k \alpha_j\, h_j(\ensuremath\boldsymbol{x}_i) + \theta \sum_{j=1}^k \alpha_j\big) \geq 1\Big). \label{equation_boosting_probs_equal}
\end{align}
According to Markov's inequality, which states (for $a>0$ and a non-negative random variable $X$):
\begin{align}
\mathbb{P}(X \geq a) \leq \frac{\mathbb{E}(X)}{a},
\end{align}
and Eq. (\ref{equation_boosting_probs_equal}), we have (take $a=1$ and the exponential term as $X$ in Markov's inequality):
\begin{align}
&\mathbb{P}(y_i\,\widehat{f}(\ensuremath\boldsymbol{x}_i) \leq \theta) \nonumber \\
&\leq \mathbb{E}\Big(\exp\big(-y_i \sum_{j=1}^k \alpha_j\, h_j(\ensuremath\boldsymbol{x}_i) + \theta \sum_{j=1}^k \alpha_j\big)\Big) \nonumber \\
&\overset{(a)}{=} \exp\big(\theta \sum_{j=1}^k \alpha_j\big)\, \mathbb{E}\Big(\exp\big(-y_i \sum_{j=1}^k \alpha_j\, h_j(\ensuremath\boldsymbol{x}_i) \big)\Big) \nonumber \\
&\overset{(b)}{=} \frac{1}{n} \exp\big(\theta \sum_{j=1}^k \alpha_j\big)\, \sum_{i=1}^n \exp\big(-y_i \sum_{j=1}^k \alpha_j\, h_j(\ensuremath\boldsymbol{x}_i) \big), \label{equation_boosting_bound_prob_1}
\end{align}
where $(a)$ is because the expectation is with respect to the data, i.e., $\ensuremath\boldsymbol{x}_i$ and $y_i$ and $(b)$ is according to definition of expectation.
Recall the line \ref{algorithm_AdaBoost_w} in Algorithm \ref{algorithm_AdaBoost}:
\begin{align*}
w_i^{(j+1)} = w_i^{(j)}\, \exp\big(\alpha_j\, \mathbb{I}(y_i \neq h_j(\ensuremath\boldsymbol{x}_i))\big),
\end{align*}
which can be restated as:
\begin{align*}
w_i^{(j+1)} = w_i^{(j)}\, \exp\big(\!-y_i\, \alpha_j\, h_j(\ensuremath\boldsymbol{x}_i)\big),
\end{align*}
because $y_i = \pm 1$ and $h_j(\ensuremath\boldsymbol{x}_i) = \pm 1$.
It is not harmful to AdaBoost if we use the normalized weights:
\begin{align}\label{equation_boosting_normalized_weight}
w_i^{(j+1)} = \frac{w_i^{(j)}\, \exp\big(\!-y_i\, \alpha_j\, h_j(\ensuremath\boldsymbol{x}_i)\big)}{z_j},
\end{align}
where:
\begin{align}\label{equation_boosting_normalized_weight_denominator}
z_j := \sum_{i=1}^n w_i^{(j)} \exp\big(\!-y_i\, \alpha_j\, h_j(\ensuremath\boldsymbol{x}_i)\big).
\end{align}
Considering that $w_i^{(1)} = 1/n$, we can derive a recursive expression for the weights:
\begin{align}
&w_i^{(k+1)} = \frac{w_i^{(k)}\, \exp\big(\!-y_i\, \alpha_k\, h_k(\ensuremath\boldsymbol{x}_i)\big)}{z_k} \nonumber \\
&= w_i^{(1)} \times \frac{1}{z_k \times \dots \times z_1} \times \nonumber \\
&\exp\big(\!-y_i\, \alpha_k\, h_k(\ensuremath\boldsymbol{x}_i)\big) \times \dots \times \exp\big(\!-y_i\, \alpha_1\, h_1(\ensuremath\boldsymbol{x}_i)\big) \nonumber \\
&= \frac{1}{n} \times \frac{1}{\prod_{j=1}^k z_j} \times \prod_{j=1}^k \exp\big(\!-y_i\, \alpha_j\, h_j(\ensuremath\boldsymbol{x}_i)\big) \nonumber \\
&= \frac{1}{n} \times \frac{1}{\prod_{j=1}^k z_j} \times \exp\big(\!-y_i \sum_{j=1}^k \alpha_j\, h_j(\ensuremath\boldsymbol{x}_i)\big). \label{equation_boosting_bound_prob_2}
\end{align}
We continue the Eq. (\ref{equation_boosting_bound_prob_1}):
\begin{align*}
&\mathbb{P}(y_i\,\widehat{f}(\ensuremath\boldsymbol{x}_i) \leq \theta) \\
&\leq \frac{1}{n} \exp\big(\theta \sum_{j=1}^k \alpha_j\big)\, \sum_{i=1}^n \exp\big(-y_i \sum_{j=1}^k \alpha_j\, h_j(\ensuremath\boldsymbol{x}_i) \big) \\
&\overset{(\ref{equation_boosting_bound_prob_2})}{=} \exp\big(\theta \sum_{j=1}^k \alpha_j\big)\, \Big(\prod_{j=1}^k z_j\Big) \sum_{i=1}^n w_i^{(k+1)}.
\end{align*}
According to Eqs. (\ref{equation_boosting_normalized_weight}) and (\ref{equation_boosting_normalized_weight_denominator}), we have:
\begin{align*}
\sum_{i=1}^n w_i^{(j+1)} = \frac{\sum_{i=1}^n w_i^{(j)}\, \exp\big(\!-y_i\, \alpha_j\, h_j(\ensuremath\boldsymbol{x}_i)\big)}{\sum_{i=1}^n w_i^{(j)} \exp\big(\!-y_i\, \alpha_j\, h_j(\ensuremath\boldsymbol{x}_i)\big)} = 1.
\end{align*}
Therefore:
\begin{align}\label{equation_boosting_generalization_error_1}
\therefore ~~~ \mathbb{P}(y_i\,\widehat{f}(\ensuremath\boldsymbol{x}_i) \leq \theta) \leq \exp\big(\theta \sum_{j=1}^k \alpha_j\big)\, \Big(\prod_{j=1}^k z_j\Big).
\end{align}
On the other hand, according to Eq. (\ref{equation_boosting_normalized_weight_denominator}), we have:
\begin{align}
z_j &= \sum_{i=1}^n w_i^{(j)} \exp\big(\!-y_i\, \alpha_j\, h_j(\ensuremath\boldsymbol{x}_i)\big) \nonumber \\
&= \sum_{i=1}^n w_i^{(j)} \exp(-\alpha_j)\, \mathbb{I}(y_i = h_j(\ensuremath\boldsymbol{x}_i)) \nonumber \\
&~~~~ + \sum_{i=1}^n w_i^{(j)} \exp(\alpha_j)\, \mathbb{I}(y_i \neq h_j(\ensuremath\boldsymbol{x}_i)) \nonumber \\
&= \exp(-\alpha_j) \sum_{i=1}^n w_i^{(j)} \mathbb{I}(y_i = h_j(\ensuremath\boldsymbol{x}_i)) \nonumber \\
&~~~~ + \exp(\alpha_j) \sum_{i=1}^n w_i^{(j)} \mathbb{I}(y_i \neq h_j(\ensuremath\boldsymbol{x}_i)). \label{equation_boosting_z_expansion}
\end{align}
Recall Eq. (\ref{equation_boosting_normalized_weight}) for $w_i^{(j+1)}$. This weight is in the range $[0,1]$ and its summation over the error cases can be considered as the probability of error:
\begin{align}
\sum_{i=1}^n w_i^{(j)} \mathbb{I}(y_i \neq h_j(\ensuremath\boldsymbol{x}_i)) = \mathbb{P}(y_i \neq h_j(\ensuremath\boldsymbol{x}_i)) \overset{(a)}{=} L_j,
\end{align}
where $(a)$ is because Eq. (\ref{equation_AdaBoost_cost}) defines the cost as the weighted probability of error.
Therefore, the Eq. (\ref{equation_boosting_z_expansion}) becomes:
\begin{align*}
z_j = \exp(-\alpha_j)\, (1 - L_j) + \exp(\alpha_j)\, L_j.
\end{align*}
Recall $\alpha_j$ in line \ref{algorithm_AdaBoost_alpha} of Algorithm \ref{algorithm_AdaBoost}. Scaling it by $1/2$ is not harmful to AdaBoost:
\begin{align}\label{equation_boosting_alpha_scaled}
\alpha_j = \frac{1}{2} \log(\frac{1-L_j}{L_j}).
\end{align}
Therefore, we can have:
\begin{align}\label{equation_boosting_z_relatedTo_L}
z_j = 2 \sqrt{L_j (1 - L_j)}.
\end{align}
Plugging Eqs. (\ref{equation_boosting_alpha_scaled}) and (\ref{equation_boosting_z_relatedTo_L}) in Eq. (\ref{equation_boosting_generalization_error_1}) gives:
\begin{align*}
&\mathbb{P}(y_i\,\widehat{f}(\ensuremath\boldsymbol{x}_i) \leq \theta) \\
&\leq \exp\big(\frac{1}{2} \theta \sum_{j=1}^k \log(\frac{1-L_j}{L_j}) \big)\, \Big(2^k \prod_{j=1}^k \sqrt{L_j (1 - L_j)}\Big) \\
&= 2^k \exp\big(\sum_{j=1}^k \log((\frac{1-L_j}{L_j})^{\theta/2}) \big)\, \prod_{j=1}^k \sqrt{L_j (1 - L_j)} \\
&= 2^k \prod_{j=1}^k \exp\big(\log((\frac{1-L_j}{L_j})^{\theta/2}) \big)\, \prod_{j=1}^k \sqrt{L_j (1 - L_j)},
\end{align*}
which simplifies to the upper bound on the generalization error of AdaBoost \cite{schapire1998boosting}:
\begin{align}\label{equation_boosting_generalization_error_bound}
&\mathbb{P}\big(y_i\,\widehat{f}(\ensuremath\boldsymbol{x}_i) \leq \theta\big) \leq 2^k \prod_{j=1}^k \sqrt{L_j^{1-\theta} (1 - L_j)^{1+\theta}},
\end{align}
where $\mathbb{P}\big(y_i\,\widehat{f}(\ensuremath\boldsymbol{x}_i) \leq \theta\big)$ is the probability that the generalization (true) error for the $i$-th instance is less than $\theta > 0$.
According to Eq. (\ref{equation_AdaBoost_cost}), we have $L_j \in [0,1]$. If we have $L_j \leq 0.5 - \xi$, where $\xi \in (0,0.5)$, the Eq. (\ref{equation_boosting_generalization_error_bound}) becomes:
\begin{align}\label{equation_boosting_generalization_error_bound_2}
\mathbb{P}\big(y_i\,\widehat{f}(\ensuremath\boldsymbol{x}_i) \leq \theta\big) \leq \bigg(\!\sqrt{(1-2\xi)^{1-\theta} (1+2\xi)^{1+\theta}}\bigg)^k,
\end{align}
which is a very good upper bound because if $\theta < \xi$, we have $\sqrt{(1-2\xi)^{1-\theta} (1+2\xi)^{1+\theta}} < 1$; thus, the probability of error, $\mathbb{P}\big(y_i\,\widehat{f}(\ensuremath\boldsymbol{x}_i) \leq \theta\big)$, decreases \textit{exponentially} with $k$ which is the number of models used in boosting.
This shows that boosting helps us reduce the generalization error and thus helps us avoid overfitting. In other words, because of this bound on the generalization error, boosting hardly ever overfits.
If $\xi$ is a very small positive number, the condition $L_j \leq 0.5 - \xi$ means $L_j$ is only slightly smaller than $0.5$, i.e., $L_j \lessapprox 0.5$. As we are discussing binary classification in boosting, $L_j = 0.5$ means random classification by flipping a coin. Therefore, to obtain the strong bound of Eq. (\ref{equation_boosting_generalization_error_bound_2}), having weak base models (only a little better than random guessing) suffices. This shows the effectiveness of boosting.
Note that a very small $\xi$ means a very small $\theta$ because of $\theta < \xi$; therefore, it means a very small probability of error because of $\mathbb{P}\big(y_i\,\widehat{f}(\ensuremath\boldsymbol{x}_i) \leq \theta\big)$.
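To make the exponential decrease concrete, one can evaluate the bound numerically; the values of $\xi$ and $\theta$ below are arbitrary, chosen only to satisfy $\theta < \xi$:

```python
import numpy as np

def bound(xi, theta, k):
    # Upper bound on P(y f_hat(x) <= theta) for k weak models with
    # L_j <= 0.5 - xi, following the bound derived above.
    base = np.sqrt((1 - 2 * xi) ** (1 - theta) * (1 + 2 * xi) ** (1 + theta))
    return base ** k

xi, theta = 0.1, 0.05  # theta < xi, so the base is smaller than one
for k in (10, 100, 1000):
    print(k, bound(xi, theta, k))
```

Even though each weak model here is only slightly better than a coin flip ($L_j \leq 0.4$), the bound shrinks geometrically as more models are added.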
It is noteworthy that both boosting and bagging can be seen as \textit{ensemble learning} \cite{polikar2012ensemble} (or majority voting) methods which use \textit{model averaging} \cite{hoeting1999bayesian,claeskens2008model} and are very effective in learning theory.
Moreover, both boosting and bagging reduce the variance of estimation \cite{breiman1998arcing,schapire1998boosting}, especially for the models with high variance of estimation such as trees \cite{quinlan1996bagging}.
In the above, we analyzed boosting for \textit{binary} classification. A similar analysis can be carried out for \textit{multi-class} classification in boosting to find an upper bound on the generalization error (see the appendix of \cite{schapire1998boosting} for more details).
\subsubsection{Boosting as Maximum Margin Classifier}
In another perspective, the found upper bound for boosting shows that boosting can be seen as a method to increase (maximize) the margins of the training error, which results in a good generalization error \cite{boser1992training}. This phenomenon is the basis of the theory of Support Vector Machines (SVM) \cite{cortes1995support,burges1998tutorial}.
In the following, we analyze the analogy between maximum margin classifier (i.e., SVM) and boosting \cite{schapire1998boosting}.
In addition to \cite{schapire1998boosting}, further discussions exist on the upper bound and margin of boosting \cite{wang2008margin,gao2013doubt}, to which we refer the interested reader.
Assume we have training instances $\{(\ensuremath\boldsymbol{x}_i, y_i)\}_{i=1}^n$ where $y_i \in \{-1,+1\}$ for binary classification.
The two classes may not be linearly separable. In order to handle this case, we map the data to a higher-dimensional feature space using kernels \cite{scholkopf2001learning,hofmann2008kernel}, hoping that the classes become linearly separable in the feature space.
Assume $\ensuremath\boldsymbol{h}(\ensuremath\boldsymbol{x})$ is a vector which non-linearly maps data to the feature space.
Considering $\ensuremath\boldsymbol{\alpha}$ as the vector of dual variables, the dual optimization problem \cite{boyd2004convex} in SVM is \cite{burges1998tutorial,schapire1998boosting}:
\begin{align}\label{equation_SVM_dual_optimization}
\underset{\ensuremath\boldsymbol{\alpha}}{\text{maximize}} ~~~~ \underset{\{(\ensuremath\boldsymbol{x}_i, y_i)\}_{i=1}^n}{\text{minimize}} ~~~~ \frac{y_i\, (\ensuremath\boldsymbol{\alpha}^\top \ensuremath\boldsymbol{h}(\ensuremath\boldsymbol{x}_i))}{||\ensuremath\boldsymbol{\alpha}||_2}.
\end{align}
Note that $y_i = \pm 1$ and $\ensuremath\boldsymbol{\alpha}^\top \ensuremath\boldsymbol{h}(\ensuremath\boldsymbol{x}_i) \gtrless 0$; therefore, the sign of $y_i\, (\ensuremath\boldsymbol{\alpha}^\top \ensuremath\boldsymbol{h}(\ensuremath\boldsymbol{x}_i))$ determines the class of the $i$-th instance.
On the other hand, the Eq. (\ref{equation_boosting_f_hat_normalized}) can be written in a vector form:
\begin{align}\label{equation_boosting_f_hat_normalized_vectorForm}
\widehat{f}(\ensuremath\boldsymbol{x}_i) = \frac{\sum_{j=1}^k \alpha_j\, h_j(\ensuremath\boldsymbol{x}_i)}{\sum_{j=1}^k \alpha_j} = \frac{\ensuremath\boldsymbol{\alpha}^\top \ensuremath\boldsymbol{h}(\ensuremath\boldsymbol{x}_i)}{||\ensuremath\boldsymbol{\alpha}||_1},
\end{align}
where $\ensuremath\boldsymbol{h}(\ensuremath\boldsymbol{x}_i) = [h_1(\ensuremath\boldsymbol{x}_i), \dots, h_k(\ensuremath\boldsymbol{x}_i)]^\top$ and $\ensuremath\boldsymbol{\alpha} = [\alpha_1, \dots, \alpha_k]^\top$. Note that here, $h(\ensuremath\boldsymbol{x}_i) = \pm 1$ and $\alpha_j$ is obtained from Eq. (\ref{equation_boosting_alpha_scaled}) or the line \ref{algorithm_AdaBoost_alpha} in Algorithm \ref{algorithm_AdaBoost}.
The similarity between the Eq. (\ref{equation_boosting_f_hat_normalized_vectorForm}) and the cost function in Eq. (\ref{equation_SVM_dual_optimization}) shows that boosting can be seen as maximizing the margin of classification resulting in a good generalization error \cite{schapire1998boosting}.
In other words, finding a linear combination in the high dimensional feature space having a large margin between the training instances of the classes is performed in the two methods.
Note that a slight difference is the type of norm, which is interpretable: the mapping to the feature space in boosting only takes values $h(\ensuremath\boldsymbol{x}_i) = \pm 1$, while in SVM it can be any number whose sign is important. Therefore, the $\ell_1$ and $\ell_2$ norms are suitable for boosting and SVM, respectively \cite{schapire1998boosting}.
Another connection between SVM (maximum margin classifier) and boosting is that some of the training instances are found to be most important instances, called \textit{support vectors} \cite{burges1998tutorial}. In boosting, also, weighting the training instances can be seen as selecting some \textit{informative} models \cite{freund1995boosting} which can be analogous to support vectors.
\subsection{Examples in Machine Learning: Boosting Trees and SVMs}
Both bagging and boosting are widely used with trees \cite{quinlan1996bagging,friedman2001elements}.
The reason why boosting is very effective with trees is that trees have a large variance of estimation, which boosting and bagging can significantly reduce, as discussed before.
Note that boosting is also used with SVM for imbalanced data \cite{wang2010boosting}.
\section{Conclusion}
This paper was a tutorial paper introducing overfitting, cross validation, generalized cross validation, regularization, bagging, and boosting. The theory behind these methods were explained and some examples of them in machine learning and computer vision were provided.
\section*{Acknowledgment}
The authors sincerely thank Prof. Ali Ghodsi (see his great related online courses \cite{web_classification,web_deep_learning}), Prof. Mu Zhu, Prof. Hoda Mohammadzade, and Prof. Wayne Oldford, whose courses have partly covered the materials mentioned in this tutorial paper.
Is it possible for $\beta$ to be negative in Boltzmann probability distributions?

I am studying the basics of statistical mechanics and the Boltzmann distribution. I tried to use the idea to find natural income distributions, through the method of maximization of probability using Lagrange multipliers, where:

average energy => income per capita
number of particles => number of people
energy levels => income ranges

The total population, income per capita and income ranges are fixed. There is a minimum and a maximum income. The intervals between ranges are all equal.

When the income per capita is below the average of the maximum and minimum income, the curve seems realistic. When it is equal to that average, all ranges have the same number of people, and the curve is linear. But when it is above the average, the distribution is odd because there are many more people in the high income levels, though I don't think it is impossible; at least it is not contradictory. The problem is that in this case, one of the Lagrange multipliers (usually called $\beta$) is negative.

In statistical mechanics $\beta$ is associated with the inverse of the absolute temperature, so it cannot be negative. On the other hand, there is nothing in the method of Lagrange multipliers that forbids this (as far as I can see).

So, is the fact that $\beta$ must be positive a specific feature of the energy configuration of particles, or does it have a broader interpretation?
\section{Introduction}
A direct and very
stringent test of Supersymmetry (SUSY) is provided by
the search for the lightest Higgs boson, since the prediction of a
relatively
light Higgs boson is common to all supersymmetric models whose
couplings remain in the perturbative regime up to a very high energy
scale~\cite{susylighthiggs}.
A precise prediction for the mass of the lightest Higgs boson, $m_h$,
in terms
of the relevant SUSY parameters is crucial in order to determine the
discovery and exclusion potential of LEP2 and the upgraded Tevatron.
If the Higgs boson exists, it will be accessible at the LHC and future
linear colliders, where then a high-precision
measurement of the mass of this particle will become feasible.
A precise knowledge of the mass of the heavier ${\cal CP}$-even Higgs boson,
$m_H$, is important for
resolving the mass splitting between the ${\cal CP}$-even and -odd
Higgs-boson masses.
In the Minimal Supersymmetric Standard Model (MSSM)~\cite{mssm} the
mass of the lightest Higgs boson is restricted at the tree
level to be smaller
than the $Z$-boson mass. This bound, however, is strongly affected by
the inclusion of radiative corrections,
which yield an upper bound of about $130 \,\, {\mathrm GeV}$%
~\cite{mhiggs1l,mhiggs1lfullb,mhiggs1lfull,mhiggs2la,mhiggs2lb,mhiggsRG,mhiggsEffPot}.
Results beyond one-loop\ order have been obtained in several
approaches:
a Feynman diagrammatic calculation of the leading QCD
corrections has been performed~\cite{mhiggs2la,mhiggs2lb}.
Also renormalization group (RG)
methods have been applied in order to obtain leading logarithmic
higher-order contributions~\cite{mhiggsRG}.
Furthermore the leading two-loop\ QCD corrections have been calculated in the
effective potential method~\cite{mhiggsEffPot}.
All these calculations show that the corrections beyond one-loop\ order
lead to a sizable decrease of $m_h$
of up to $20 \,\, {\mathrm GeV}$.
Concerning the calculation of the lighter and heavier neutral
${\cal CP}$-even Higgs bosons two different
kinds of computer codes have been used for phenomenological analyses
so far: they are either based on the RG improved one-loop\ effective
potential approach~\cite{mhiggsRG} or on the one-loop\ diagrammatic
on-shell calculation~\cite{mhiggs1lfullb,mhiggs1lfull}. The predictions
of these approaches can differ by ${\cal O}(10 \,\, {\mathrm GeV})$.
\smallskip
Here we present a new Fortran program named {\em FeynHiggs}, which is based on
the results of the Feynman-diagrammatic calculations up to ${\cal O}(\alpha\alpha_s)$ given in
Refs.~\cite{mhiggs1lfull,mhiggs2la,mhiggs2lb}. It includes the
complete diagrammatic on-shell results at the one-loop\ level,
the leading diagrammatic two-loop\ QCD contributions and further improvements
taking into account leading electroweak two-loop\ and
leading higher-order QCD corrections.
The calculation of $m_h$ and $m_H$ is performed for arbitrary
values of parameters in the $t-\tilde{t}$-sector and
the Higgs sector of the MSSM. As a subroutine, {\em FeynHiggs}\ can be linked to other
programs thus incorporating in an easy way a precise prediction for
$m_h$ and $m_H$.
In addition the program provides as an option the calculation of the SUSY
contribution to the $\rho$-parameter in ${\cal O}(\alpha\alpha_s)$, based
on~\citeres{drhosuqcd,precobssusyqcd}. In this way
experimentally disfavored combinations of squark masses can
automatically be excluded.
\smallskip
The paper is organized as follows: in section~2 we specify our
notations and give a brief outline of the calculation of $m_h$ and $m_H$. In
section~3 the program {\em FeynHiggs}\ is described in detail. Examples
of how to use {\em FeynHiggs}\ are shown in section~4. The conclusions are given
in section~5.
\section{The calculational basis}
\subsection{The top-squark sector of the MSSM}
In order to fix the notation we shortly list our conventions for the
MSSM scalar top sector:
the mass matrix in the basis of the current eigenstates $\tilde{t}_L$ and
$\tilde{t}_R$ is given by
\begin{equation}
\label{stopmassmatrix}
{\cal M}^2_{\tilde{t}} =
\left( \begin{array}{cc} M_{\tilde{t}_L}^2 + m_{t}^2 + \cos 2\beta\hspace{1mm} (\frac{1}{2} - \frac{2}{3} s_W^2) M_Z^2 &
m_{t} M_{t}^{LR} \\
m_{t} M_{t}^{LR} &
M_{\tilde{t}_R}^2 + m_{t}^2 + \frac{2}{3} \cos 2\beta\hspace{1mm} s_W^2 M_Z^2
\end{array} \right),
\end{equation}
where
\begin{equation}
m_{t} M_{t}^{LR} = m_{t} (A_t - \mu \cot \beta\hspace{1mm})~.
\end{equation}
Diagonalizing the $\tilde{t}$-mass matrix yields the mass eigenvalues
$m_{\tilde{t}_1}, m_{\tilde{t}_2}$ and the $\tilde{t}$~mixing angle~$\theta_{\tilde{t}}$, which relates
the current eigenstates to the mass eigenstates:
\begin{equation}
\left( \begin{array}{c} \tilde{t}_1 \\ \tilde{t}_2 \end{array} \right) = \left( \begin{array}{cc} \cos\tst & \sin\tst \\ -\sin\tst & \cos\tst \end{array} \right)
\left( \begin{array}{c} \tilde{t}_L \\ \tilde{t}_R \end{array} \right)~.
\label{sfermrotation}
\end{equation}
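As an illustration, the diagonalization of the $\tilde{t}$ mass matrix above can be sketched numerically. All input values below ($M_{\tilde{t}_L}$, $M_{\tilde{t}_R}$, $\tan \beta$, $A_t$, $\mu$, $\sin^2\theta_W$) are assumptions chosen for this sketch only, not {\em FeynHiggs}\ defaults:

```python
import math

# Illustrative inputs (GeV); all values below are assumptions for this sketch
MtL, MtR, mt, MZ = 500.0, 500.0, 175.0, 91.187
sw2, tb = 0.231, 1.6               # sin^2(theta_W) and tan(beta), assumed
At, mu = 300.0, -200.0

c2b = (1 - tb**2) / (1 + tb**2)    # cos(2 beta)
MtLR = At - mu / tb                # M_t^{LR} = A_t - mu cot(beta)
a = MtL**2 + mt**2 + c2b * (0.5 - 2.0 / 3.0 * sw2) * MZ**2
b = MtR**2 + mt**2 + c2b * (2.0 / 3.0) * sw2 * MZ**2
c = mt * MtLR                      # off-diagonal entry

# Closed-form eigenvalues of the symmetric 2x2 stop mass matrix
disc = math.sqrt(((a - b) / 2)**2 + c**2)
mst1 = math.sqrt((a + b) / 2 - disc)    # lighter stop mass
mst2 = math.sqrt((a + b) / 2 + disc)    # heavier stop mass
theta = 0.5 * math.atan2(2 * c, a - b)  # stop mixing angle
```

The closed-form expression for the eigenvalues of a symmetric $2\times 2$ matrix is used to keep the sketch self-contained; this is the same computation performed internally when the soft SUSY breaking parameters are chosen as input.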
\subsection{Calculation of the Higgs-boson masses}
\label{subsec:mhcalc}
Contrary to the Standard Model (SM), in the MSSM two Higgs doublets
are required.
The Higgs potential~\cite{hhg}
\begin{eqnarray}
\label{Higgspot}
V &=& m_1^2 H_1\bar{H}_1 + m_2^2 H_2\bar{H}_2 - m_{12}^2 (\epsilon_{ab}
H_1^aH_2^b + \mbox {h.c.}) \nonumber \\
&& \mbox{} + \frac{g'^2 + g^2}{8}\, (H_1\bar{H}_1 - H_2\bar{H}_2)^2
+\frac{g^2}{2}\, |H_1\bar{H}_2|^2
\end{eqnarray}
contains $m_1, m_2, m_{12}$ as soft SUSY breaking parameters;
$g, g'$ are the $SU(2)$ and $U(1)$ gauge couplings, and
$\epsilon_{12} = -1$.
The doublet fields $H_1$ and $H_2$ are decomposed in the following way:
\begin{eqnarray}
H_1 &=& \left( \begin{array}{c} H_1^1 \\ H_1^2 \end{array} \right) = \left( \begin{array}{c} v_1 + (\phi_1^{0} + i\chi_1^{0})
/\sqrt2 \\ \phi_1^- \end{array} \right) ,\nonumber \\
H_2 &=& \left( \begin{array}{c} H_2^1 \\ H_2^2 \end{array} \right) = \left( \begin{array}{c} \phi_2^+ \\ v_2 + (\phi_2^0
+ i\chi_2^0)/\sqrt2 \end{array} \right).
\label{eq:hidoubl}
\end{eqnarray}
The potential (\ref{Higgspot}) can be described with the help of two
independent parameters (besides $g$ and $g'$):
$\tan \beta\hspace{1mm} = v_2/v_1$ and $M_A^2 = -m_{12}^2(\tan \beta\hspace{1mm}+\cot \beta\hspace{1mm})$,
where $M_A$ is the mass of the ${\cal CP}$-odd $A$ boson.
In order to obtain the ${\cal CP}$-even neutral mass eigenstates, the rotation
\begin{eqnarray}
\left( \begin{array}{c} H^0 \\ h^0 \end{array} \right) &=& \left( \begin{array}{cc} \cos \alpha\hspace{1mm} & \sin \alpha\hspace{1mm} \\ -\sin \alpha\hspace{1mm} & \cos \alpha\hspace{1mm} \end{array} \right)
\left( \begin{array}{c} \phi_1^0 \\ \phi_2^0 \end{array} \right)
\label{higgsrotation}
\end{eqnarray}
is performed, where the mixing angle $\alpha$ is given in terms of
$\tan \beta\hspace{1mm}$ and $M_A$ as follows:
\begin{equation}
\tan 2\alpha = \tan 2\beta \frac{M_A^2 + M_Z^2}{M_A^2 - M_Z^2},
\quad - \frac{\pi}{2} < \alpha < 0.
\end{equation}
\bigskip
At tree level the mass matrix of the neutral ${\cal CP}$-even Higgs bosons
in the $\phi_1-\phi_2$ basis can be expressed
in terms of $M_Z$ and $M_A$ as follows:
\begin{eqnarray}
M_{\rm Higgs}^{2, {\rm tree}} &=& \left( \begin{array}{cc} m_{\Pe}^2 & m_{\PePz}^2 \\
m_{\PePz}^2 & m_{\Pz}^2 \end{array} \right) \nonumber\\
&=& \left( \begin{array}{cc} M_A^2 \sin^2\beta\hspace{1mm} + M_Z^2 \cos^2\beta\hspace{1mm} & -(M_A^2 + M_Z^2) \sin \beta\hspace{1mm} \cos \beta\hspace{1mm} \\
-(M_A^2 + M_Z^2) \sin \beta\hspace{1mm} \cos \beta\hspace{1mm} & M_A^2 \cos^2\beta\hspace{1mm} + M_Z^2 \sin^2\beta\hspace{1mm} \end{array} \right),
\end{eqnarray}
which by diagonalization
according to \refeq{higgsrotation} yields the tree-level
Higgs-boson masses
\begin{equation}
M_{\rm Higgs}^{2, {\rm tree}}
\stackrel{\alpha}{\longrightarrow}
\left( \begin{array}{cc} m_{H,{\rm tree}}^2 & 0 \\ 0 & m_{h,{\rm tree}}^2 \end{array} \right).
\end{equation}
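The tree-level diagonalization can be checked numerically. With $\tan \beta\hspace{1mm} = 20$ and $M_A = 400 \,\, {\mathrm GeV}$, the inputs of Example~2 in section~4, one recovers the tree-level masses printed there:

```python
import math

# Tree-level CP-even Higgs masses; with tan(beta) = 20 and MA = 400 GeV
# these inputs match Example 2 of section 4 (mh-tree 90.707, mH-tree 400.109)
MZ, MA, tb = 91.187, 400.0, 20.0
beta = math.atan(tb)
sb, cb = math.sin(beta), math.cos(beta)

a11 = MA**2 * sb**2 + MZ**2 * cb**2
a22 = MA**2 * cb**2 + MZ**2 * sb**2
a12 = -(MA**2 + MZ**2) * sb * cb

# Closed-form eigenvalues of the symmetric 2x2 tree-level mass matrix
disc = math.sqrt(((a11 - a22) / 2)**2 + a12**2)
mh_tree = math.sqrt((a11 + a22) / 2 - disc)
mH_tree = math.sqrt((a11 + a22) / 2 + disc)

# Mixing angle alpha, with -pi/2 < alpha < 0
alpha = 0.5 * math.atan2(2 * a12, a11 - a22)
```

The `atan2` form of the mixing angle automatically selects the branch $-\pi/2 < \alpha < 0$ for both $M_A > M_Z$ and $M_A < M_Z$.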
\bigskip
In the Feynman-diagrammatic approach the
Higgs-boson masses in higher orders are derived by finding the poles
of the $h-H$-propagator matrix whose inverse reads:
\begin{equation}
\left(\Delta_{\rm Higgs}\right)^{-1}
= - i \left( \begin{array}{cc} q^2 - m_{H,{\rm tree}}^2 + \hat{\Sigma}_{H}(q^2) & \hat{\Sigma}_{hH}(q^2) \\
\hat{\Sigma}_{hH}(q^2) & q^2 - m_{h,{\rm tree}}^2 + \hat{\Sigma}_{h}(q^2) \end{array} \right)~.
\label{higgsmassmatrixnondiag}
\end{equation}
The poles are then obtained by solving the equation
\begin{equation}
(q^2 - m_{h,{\rm tree}}^2 + \hat{\Sigma}_{h}(q^2))
(q^2 - m_{H,{\rm tree}}^2 + \hat{\Sigma}_{H}(q^2))
-(\hat{\Sigma}_{hH}(q^2))^2 = 0~.
\label{higgsmasseq}
\end{equation}
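A minimal numerical sketch of solving this pole equation is given below. The tree-level masses and the constant (momentum-independent) self-energy values are purely illustrative toy numbers, not {\em FeynHiggs}\ results; they only demonstrate the kind of numerical root finding the program performs:

```python
import math

# Toy inputs: tree-level masses (GeV) and constant (momentum-independent)
# renormalized self-energies (GeV^2); all values are illustrative only
mh_tree, mH_tree = 90.7, 400.1
Sh, SH, ShH = -3500.0, -50.0, 300.0

def pole_fn(q2):
    """Left-hand side of the pole equation for the h-H propagator matrix."""
    return (q2 - mh_tree**2 + Sh) * (q2 - mH_tree**2 + SH) - ShH**2

def bisect(f, lo, hi, tol=1e-8):
    """Simple bisection root finder; assumes f changes sign on [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Bracket the two roots on either side of the midpoint between the
# tree-level mass squares
split = 0.5 * (mh_tree**2 + mH_tree**2)
mh = math.sqrt(bisect(pole_fn, 0.0, split))    # loop-corrected light mass
mH = math.sqrt(bisect(pole_fn, split, 1.0e6))  # loop-corrected heavy mass
```

With negative self-energies the light mass is shifted upwards with respect to its tree-level value, in qualitative agreement with the one-loop behavior seen in the examples of section~4.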
In the following the $\hat{\Sigma}^{(i)}$ denote the one-loop\ ($i=1$) and the two-loop\
($i=2$) contributions to the renormalized self-energies.
In {\em FeynHiggs}\ the one-loop\ results for the Higgs-boson self-energies $\hat{\Sigma}^{(1)}_s(q^2)$
are calculated according
to \citere{mhiggs1lfull}. They contain the full one-loop\ contribution
obtained via an explicit Feynman-diagrammatic
calculation in the on-shell scheme. Here the gaugino parameters $M_1$
and $M (\equiv M_2)$ enter in
the neutralino mass matrix. $M_1$ is fixed via the GUT relation
\begin{equation}
M_1 = \frac{5}{3} \frac{s_W^2}{c_W^2} M,
\end{equation}
whereas $M$ is kept as a free input parameter.
The two-loop\ results for the Higgs-boson self-energies $\hat{\Sigma}^{(2)}_s$ are taken from
Refs.~\cite{mhiggs2la,mhiggs2lb}. The leading two-loop\ corrections
have been obtained by calculating the ${\cal O}(\alpha\alpha_s)$ contribution of the
$t-\tilde{t}$-sector to the renormalized Higgs-boson self-energies at zero
external momentum from the Yukawa part of the theory. These
two-loop\ QCD corrections are
expected to constitute the most sizable part of the full set of two-loop\
corrections.
In Refs.~\cite{mhiggs2la,mhiggs2lb} the self-energies have
been computed first in the
$\phi_1-\phi_2$ basis and afterwards rotated into the $h-H$ basis
according to \refeq{higgsrotation}:
\begin{eqnarray}
\hat{\Sigma}^{(2)}_{H} &=& \cos^2\alpha\hspace{1mm} \hat{\Sigma}^{(2)}_{\phi_1} + \sin^2\alpha\hspace{1mm} \hat{\Sigma}^{(2)}_{\phi_2} +
2 \sin \alpha\hspace{1mm} \cos \alpha\hspace{1mm} \hat{\Sigma}^{(2)}_{\phi_1\phi_2} \nonumber \\
\hat{\Sigma}^{(2)}_{h} &=& \sin^2\alpha\hspace{1mm} \hat{\Sigma}^{(2)}_{\phi_1} + \cos^2\alpha\hspace{1mm} \hat{\Sigma}^{(2)}_{\phi_2} -
2 \sin \alpha\hspace{1mm} \cos \alpha\hspace{1mm} \hat{\Sigma}^{(2)}_{\phi_1\phi_2} \nonumber \\
\hat{\Sigma}^{(2)}_{hH} &=& - \sin \alpha\hspace{1mm} \cos \alpha\hspace{1mm} \left( \hat{\Sigma}^{(2)}_{\phi_1} - \hat{\Sigma}^{(2)}_{\phi_2} \right) +
(\cos^2\alpha\hspace{1mm} - \sin^2\alpha\hspace{1mm}) \hat{\Sigma}^{(2)}_{\phi_1\phi_2} .
\label{higgsserotation}
\end{eqnarray}
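This rotation is an orthogonal congruence, so the trace and determinant of the self-energy matrix are preserved, which provides a quick numerical consistency check for an implementation. The angle $\alpha$ and the $\phi_1$-$\phi_2$ self-energy values below are assumed toy numbers:

```python
import math

# Rotation of the phi_1-phi_2 self-energies into the h-H basis;
# the angle alpha and the self-energy values (GeV^2) are assumed
alpha = -0.3
S11, S22, S12 = -1200.0, -4000.0, 800.0

ca, sa = math.cos(alpha), math.sin(alpha)
SH  = ca**2 * S11 + sa**2 * S22 + 2 * sa * ca * S12
Sh  = sa**2 * S11 + ca**2 * S22 - 2 * sa * ca * S12
ShH = -sa * ca * (S11 - S22) + (ca**2 - sa**2) * S12

# Invariants of the congruence: SH + Sh = S11 + S22 and
# SH*Sh - ShH^2 = S11*S22 - S12^2
```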
Thus for our results up to the two-loop\ level the
matrix~(\ref{higgsmassmatrixnondiag})
contains the renormalized Higgs-boson self-energies
\begin{equation}
\hat{\Sigma}_s(q^2) = \hat{\Sigma}^{(1)}_s(q^2) + \hat{\Sigma}^{(2)}_s(0), \quad s = h, H, hH,
\label{renhiggsse}
\end{equation}
where the momentum dependence is neglected only in the two-loop\
contribution.
The calculation is performed
for arbitrary parameters of the Higgs and the scalar top sector and
for arbitrary gluino mass~$m_{\tilde{g}}$.
Thus the accuracy of the calculation does not
depend on how the parameters of the $\tilde{t}$ sector
$m_{\tilde{t}_1}, m_{\tilde{t}_2}$ and $\theta_{\tilde{t}}$ are chosen.
\bigskip
In order to take into account the leading electroweak two-loop\
contribution to the mass of the lightest Higgs boson,
we have implemented the leading Yukawa correction of
${\cal O}(G_F^2 m_{t}^6)$, which gives a sizable contribution only for $m_h$.
The formula is taken over from the result obtained by renormalization
group methods. It reads~\cite{mhiggsRGmhref}
\begin{eqnarray}
\label{yukawaterm}
\Deltam_h^2 &=& \frac{9}{16\pi^4} G_F^2 m_{t}^6
\left[ \tilde{X} t + t^2 \right] \\
\mbox{with} && \tilde{X} = \Bigg[
\left( \frac{m_{\tilde{t}_2}^2 - m_{\tilde{t}_1}^2}{4 m_{t}^2} \sin^2 2\tst \right)^2
\left( 2 - \frac{m_{\tilde{t}_2}^2 + m_{\tilde{t}_1}^2}{m_{\tilde{t}_2}^2 - m_{\tilde{t}_1}^2}
\log\left( \frac{m_{\tilde{t}_2}^2}{m_{\tilde{t}_1}^2} \right) \right) \nonumber\\
&& \mbox{}\hspace{1cm}
+ \frac{m_{\tilde{t}_2}^2 - m_{\tilde{t}_1}^2}{2 m_{t}^2} \sin^2 2\tst
\log\left( \frac{m_{\tilde{t}_2}^2}{m_{\tilde{t}_1}^2} \right) \Bigg], \\
&& t = \frac{1}{2} \log \left( \frac{m_{\tilde{t}_1}^2 m_{\tilde{t}_2}^2}{m_{t}^4} \right) .
\end{eqnarray}
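The Yukawa term can be evaluated directly from the formulas above. The stop masses and mixing chosen below are illustrative inputs only; the resulting shift of ${\cal O}(100 \,\, {\mathrm GeV}^2)$ in $m_h^2$ corresponds to a change of $m_h$ at the GeV level, comparable in size to the Yukawa-term effect visible in Example~2 of section~4:

```python
import math

# Direct evaluation of the O(GF^2 mt^6) Yukawa term; the stop masses
# and mixing below are illustrative inputs, not fitted values
GF, mt = 1.16637e-5, 175.0              # GeV^-2, GeV
mst1, mst2, s2t = 450.0, 600.0, 1.0     # s2t = sin(2 theta_stop), assumed

L = math.log(mst2**2 / mst1**2)
r = (mst2**2 - mst1**2) / (4 * mt**2)
X = ((r * s2t**2)**2 * (2 - (mst2**2 + mst1**2) / (mst2**2 - mst1**2) * L)
     + 2 * r * s2t**2 * L)
t = 0.5 * math.log(mst1**2 * mst2**2 / mt**4)
dmh2 = 9.0 / (16 * math.pi**4) * GF**2 * mt**6 * (X * t + t**2)  # GeV^2
```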
The second step of refinement incorporated into {\em FeynHiggs}\ concerns leading
QCD corrections beyond two-loop\ order. They are taken into account by
using the $\overline{MS}$ top mass
\begin{equation}
\label{mtmsbar}
\overline{m}_t = \overline{m}_t(m_{t}) \approx \frac{m_{t}}{1 + \frac{4}{3\,\pi}\alpha_s(m_{t})}
\end{equation}
for the two-loop\ contributions instead of the pole mass, $m_{t} = 175 \,\, {\mathrm GeV}$.
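With $\alpha_s(m_{t}) \approx 0.1079$ (an assumed input value here), this prescription reproduces the running top mass of about $167.34 \,\, {\mathrm GeV}$ that appears in the output of Example~2 in section~4:

```python
import math

# Running MS-bar top mass used in the two-loop contributions;
# alpha_s(mt) = 0.1079 is an assumed input value
mt, alphas = 175.0, 0.1079
mtbar = mt / (1 + 4.0 / (3.0 * math.pi) * alphas)
```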
\smallskip
The results implemented in {\em FeynHiggs}\ have been compared to
the calculations using RG methods. Good agreement has
been found in the case of no mixing in the $\tilde{t}$ sector, i.e.
$M_{t}^{LR} = 0 \,\, {\mathrm GeV}$, whereas sizable deviations can occur when mixing in
the $\tilde{t}$-sector is taken into account.
This has been discussed in detail in~\citere{mhiggs2lb}.
\subsection{Calculation of the $\rho$-parameter}
We have also implemented the calculation of the MSSM contributions to
$\Delta\rho$~\cite{drhosuqcd,precobssusyqcd}. Here the corrections
arising from $\tilde{t}/\tilde{b}$-loops up to ${\cal O}(\alpha\alpha_s)$
have been taken into account. The result is valid for arbitrary
parameters in
the $\tilde{t}$- and $\tilde{b}$-sector, also taking into account the mixing
in the $\tilde{b}$-sector which can have a non-negligible
effect in the large $\tan \beta\hspace{1mm}$ scenario~\cite{precobssusyqcd}.
The two-loop\ result is separated into the pure gluon-exchange contribution,
which can be expressed by a very compact formula that allows a very
fast evaluation, and the pure
gluino-exchange contribution, which is given by a rather lengthy
expression. The latter correction goes to zero with increasing gluino mass
and can thus be discarded for a heavy gluino%
\footnote{
An additional contribution to $\Delta\rho$, arising from a shift in the
squark masses when the soft SUSY breaking parameters are used as
input (due to the $SU(2)$ invariance of these parameters
in the squark sector), is not implemented in {\em FeynHiggs}.
This correction is described in detail
in \citere{drhosuqcd}.
}.
The $\rho$-parameter can be used as an additional constraint (besides
the experimental bounds) on the squark masses.
A value of $\Delta\rho$ outside the experimentally preferred region of
$\Delta\rho^{\mathrm {SUSY}} \approx 10^{-3}$~\cite{delrhoexp} indicates experimentally
disfavored $\tilde{t}$- and $\tilde{b}$-masses.
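The generic structure of the leading one-loop squark contribution to $\Delta\rho$ can be sketched as follows. The no-$\tilde{b}$-mixing form shown here is a simplification (see \citere{drhosuqcd} for the complete expressions), and all masses and the stop mixing are assumed values, so this is not the {\em FeynHiggs}\ implementation:

```python
import math

def F0(x, y):
    """F0(x, y) = x + y - 2*x*y/(x - y) * ln(x/y); symmetric, F0(x, x) = 0."""
    if x == y:
        return 0.0
    return x + y - 2 * x * y / (x - y) * math.log(x / y)

# Illustrative squark masses (GeV) and stop mixing; sbottom mixing neglected
GF = 1.16637e-5                  # GeV^-2
mst1, mst2, msb = 450.0, 600.0, 400.0
st2 = 0.5                        # sin^2(theta_stop), assumed
ct2 = 1.0 - st2

# Leading one-loop stop/sbottom contribution (no sbottom mixing)
drho = (3 * GF / (8 * math.sqrt(2) * math.pi**2)
        * (-st2 * ct2 * F0(mst1**2, mst2**2)
           + ct2 * F0(mst1**2, msb**2)
           + st2 * F0(mst2**2, msb**2)))
```

Since $F_0$ vanishes for degenerate masses, $\Delta\rho$ directly measures the mass splitting within the squark doublet, which is why it serves as a constraint on the $\tilde{t}$- and $\tilde{b}$-masses.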
\section{The Fortran program {\em FeynHiggs}}
\subsection{The main structure}
The complete program {\em FeynHiggs}\ consists of about 50,000 lines of Fortran code,
where the main part consists of the formulas for the renormalized Higgs
self-energies at the two-loop\ level and the gluino contribution to
$\Delta\rho$. The executable file occupies about 4 MB of disk space.
The calculation for one set of parameters, including the $\Delta\rho$
constraint, with the highest accuracy takes less than 2 seconds on
a third-generation Alpha 21164 microprocessor (300 MHz clock
speed).
\smallskip
There exists a home page for {\em FeynHiggs}:\\
{\tt http://www-itp.physik.uni-karlsruhe.de/feynhiggs}~.\\
There a uu-encoded version as well as an ASCII version is available,
together with short instructions and information about bug fixes.
\bigskip
{\em FeynHiggs}\ consists of several subprograms which are listed in
Table~\ref{fhproglist}.
We now describe the different subprograms in detail.
\begin{table}[ht!]
\renewcommand{\arraystretch}{1.5}
\begin{center}
\begin{tabular}{|l||l|} \hline
{\bf subprogram} & {\bf function of the program} \\ \hline \hline
FeynHiggs.f & front-end \\ \hline
FeynHiggsSub.f & main part of {\em FeynHiggs}\ : calculation of $m_h,m_H$ \\ \hline
Hhmasssr2.f & one-loop\ self-energies \\ \hline
varcom.h & definition of variables \\
bc.f & one-loop\ functions \\
lamspen.f & further mathematical functions \\
def2.f & definitions for the MSSM parameters \\ \hline
P1secode.f & Fortran code for $\hat{\Sigma}^{(1)}_{\phi_1}(0)$ \\
P1sesum.f & putting together the different parts of
$\hat{\Sigma}^{(1)}_{\phi_1}(0)$ \\
P1sevar.f & definition of variables for $\hat{\Sigma}^{(1)}_{\phi_1}(0)$ \\ \hline
P2se[code,sum,var].f & same for $\hat{\Sigma}^{(1)}_{\phi_2}(0)$ \\ \hline
P1P2se[code,sum,var].f & same for $\hat{\Sigma}^{(1)}_{\phi_1\phi_2}(0)$ \\ \hline
P1setl[code,sum,var].f & same for $\hat{\Sigma}^{(2)}_{\phi_1}(0)$ \\ \hline
P2setl[code,sum,var].f & same for $\hat{\Sigma}^{(2)}_{\phi_2}(0)$ \\ \hline
P1P2setl[code,sum,var].f & same for $\hat{\Sigma}^{(2)}_{\phi_1\phi_2}(0)$ \\ \hline
delrhosub.f & main part for the calculation of $\Delta\rho$ \\
delrhoGluino[code,sum,var].f & subroutines for the gluino-exchange
contribution to $\Delta\rho$ \\ \hline
\end{tabular}
\renewcommand{\arraystretch}{1}
\caption[]{The subprograms of {\em FeynHiggs}.}
\label{fhproglist}
\end{center}
\end{table}
\begin{itemize}
\item
{\bf FeynHiggs.f}
is the front-end for the whole program. Here all
variables are set, all options are chosen, the subprogram
{\bf FeynHiggsSub.f} is called, and the results for the Higgs masses are
printed out.
For easy use of {\em FeynHiggs}, this front-end can be modified at will;
the rest of the program need not be changed.
\item
{\bf FeynHiggsSub.f}:
Here the actual calculation is carried out. The various results
for the
renormalized Higgs-boson self-energies~(\ref{renhiggsse}) are put together.
Eq.~(\ref{higgsmasseq}) is solved numerically, the refinement terms (the
leading two-loop\ Yukawa contribution and the leading QCD corrections
beyond two-loop\ order) are incorporated.
\item
{\bf Hhmasssr2.f}:
The results for the complete one-loop\ self-energies are evaluated.
\item
{\bf varcom.h}:
The variables needed for the files {\bf *code.f} are defined and
grouped in common blocks.
\item
{\bf bc.f, lamspen.f}
contain mathematical functions needed for the
one- and two-loop\ self-energies.
\item
{\bf def2.f}:
The masses of the stops and sbottoms are calculated from the
parameters in the squark mass matrices. The opposite way is also
possible.
\item
{\bf P1setlcode.f}
contains the complete code for the two-loop\ contribution to the
renormalized $\phi_1$ self-energy in
${\cal O}(\alpha\alpha_s)$. This code was first calculated with the help of {\em Mathematica}\ packages
(see~\cite{mhiggs2la,mhiggs2lb}) and was afterwards transformed
automatically into this Fortran code.
The complete code has been split up into 13 smaller functions in order not
to exceed the maximum number of continuation
lines allowed by the Fortran compiler.
\item
{\bf P1setlsum.f}:
The sum of the 13 functions for the renormalized $\phi_1$
self-energy at the two-loop\ level, contained in {\bf P1setlcode.f}, is
put together.
\item
{\bf P1setlvar.f}
contains the variable definition for the 13 subfunctions described
above.
\item
The above three files exist in an analogous way for the other
renormalized Higgs-boson self-energies at the one- and at the two-loop\
level. The one-loop\ self-energies in these files are given in the
approximation explained in detail in~\cite{mhiggs2la,mhiggs2lb} and are
only needed internally for consistency checks. For the real
calculation of $m_h$ and $m_H$ the complete one-loop\ self-energies are
calculated in {\bf Hhmasssr2.f}.
\item
{\bf delrhosub.f}
is needed for the calculation of the MSSM contribution to
$\Delta\rho$. In this program the evaluation of the leading one-loop\
contribution is
performed. In addition also the gluon-exchange correction in ${\cal O}(\alpha\alpha_s)$
is calculated. The ${\cal O}(\alpha\alpha_s)$ contribution to $\Delta\rho$ is completed by
including the gluino-exchange contribution by calling the subprogram
{\bf delrhoGluinosum.f}.
\item
{\bf delrhoGluino[code,sum,var].f}:
The code for the gluino exchange contribution to $\Delta\rho$ is
implemented in the same way as it is described for the renormalized
Higgs-boson self-energies.
\end{itemize}
\subsection{Options and setting of variables}
{\em FeynHiggs}\ can be run in several ways, determined by the choice of several
options.
\begin{itemize}
\item
{\bf Depth of calculation}
allows one to choose to what extent the
refinements described in section~\ref{subsec:mhcalc} should be applied.
\item
{\bf Selection of input parameters:}
One can either use the physical parameters of the $\tilde{t}$ mass matrix
($m_{\tilde{t}_1}, m_{\tilde{t}_2}$ and $\theta_{\tilde{t}}$) or the soft SUSY breaking parameters
($M_{\tilde{t}_L}, M_{\tilde{t}_R}$ and $M_{t}^{LR}$) as input parameters.
\item
{\bf $m_{t}$ in the $\tilde{t}$ mass matrix:}
If the soft SUSY breaking parameters are used as input parameters, one can
choose whether the top pole mass, $m_{t}$, or the running top mass,
$\overline{m}_t$, should be used in the $\tilde{t}$ mass matrix in order to
determine the masses of the eigenstates $m_{\tilde{t}_1}, m_{\tilde{t}_2}$ and
$\theta_{\tilde{t}}$.
\item
{\bf Limit for $\Delta\rho^{\mathrm {SUSY}}$:}
One can specify the maximum value allowed for the MSSM contribution
to $\Delta\rho$. If $\Delta\rho^{\mathrm {SUSY}}$ exceeds this limit, a warning is
printed out.
\item
{\bf Selection for the one-loop\ accuracy:}
Before the calculation of the Higgs masses starts, one has to
specify to what accuracy the one-loop\ renormalized self-energies should
be evaluated. One can either take into account the top sector only,
one can choose to use the top {\em and} the bottom sector, or one can
select the option that the complete MSSM should be taken into account.
\end{itemize}
\begin{table}[ht!]
\renewcommand{\arraystretch}{1.5}
\begin{center}
\begin{tabular}{|c||c||c|} \hline
{\bf input for {\em FeynHiggs}} & {\bf expression in the MSSM} &
{\bf internal expr. in {\em FeynHiggs}}
\\ \hline \hline
{\tt tan(beta)} & $\tan \beta\hspace{1mm}$ & {\tt ttb} \\
{\tt Msusy\_top\_L } & $M_{\tilde{t}_L}$ & {\tt msusytl} \\
{\tt Msusy\_top\_R} & $M_{\tilde{t}_R}$ & {\tt msusytr} \\
& $M_{\tilde{b}_L}$ & {\tt msusybl} \\
& $M_{\tilde{b}_R}$ & {\tt msusybr} \\
{\tt MtLR } & $M_{t}^{LR}$ & {\tt mtlr} \\
{\tt MSt2} & $m_{\tilde{t}_2}$ & {\tt mst2} \\
{\tt delmst} & $\Delta\mst = m_{\tilde{t}_2} - m_{\tilde{t}_1}$ & {\tt delmst} \\
{\tt sin(theta\_stop)} & $\sin\tst$ & {\tt stt} \\
& $\sin\tsb$ & {\tt stb} \\
{\tt Mtop}~~or~~{\tt mt} & $m_{t}$ & {\tt mmt} \\
& $m_{b}$ & {\tt mbb} \\
{\tt Mgluino} & $m_{\tilde{g}}$ & {\tt mgl} \\
{\tt Mue} & $\mu$ & {\tt mmue} \\
{\tt M} & $M$ & {\tt mmm} \\
{\tt MA} & $M_A$ & {\tt mma} \\
\hline
\end{tabular}
\renewcommand{\arraystretch}{1}
\caption[]{The meaning of the different MSSM variables which have to
be entered into {\em FeynHiggs}.}
\label{mssmparameters}
\end{center}
\end{table}
In {\bf FeynHiggs.f} the Standard Model (SM) variables are set to
their present experimental values. New values for these variables can
be implemented by the user into the code of the file {\bf FeynHiggs.f}
easily.
The MSSM variables can be chosen by
the user at will. In addition, since the dependence of the Higgs
masses on the top mass is very strong, also $m_{t}$ can be chosen at
will. In Table~\ref{mssmparameters} the meaning of the different
parameters {\em FeynHiggs}\ asks for is explained%
\footnote{
Some MSSM variables exist internally in {\em FeynHiggs}, but are not input
parameters. These variables have no entry in the left column of
Table~\ref{mssmparameters}.
}.
All these variables are transferred to the different subprograms by
common blocks.
\section{How to use {\em FeynHiggs}}
In this section we will give two examples of how {\em FeynHiggs}\ is used.
As stated before, the front-end {\bf FeynHiggs.f} can be
manipulated by the user at will, whereas the subprogram {\bf
FeynHiggsSub.f} and all the other subprograms should not be
changed.
Concerning a modification of the front-end, one has to note that
all variables have to be defined there.
The Higgs masses are then obtained by calling the subroutine via
\begin{eqnarray}
{\tt call~feynhiggssub(mh1,mh2,mh12,mh22)}~,\nonumber
\end{eqnarray}
where {\tt mh1} and {\tt mh2} are the one-loop\ corrected values for
$m_h$ and $m_H$, respectively. {\tt mh12} and {\tt mh22} contain the
values for the two-loop\ corrected Higgs masses, including the refinement
terms as specified in the options.
\bigskip
In the following, two examples are given of how {\em FeynHiggs}\ is applied,
using the front-end of the currently distributed
version. The user's input is given in
{\tt {\bf bold face}} letters.
\newpage
\subsection{Example 1}
{\tt~>{\bf FeynHiggs.exe}\\
\dots~Introduction~\dots \\
~---------------------------------------------------\\
\\
~depth~of~calculation~?\\
~1:~full~1-loop~+~2-loop~QCD\\
~2:~same~as~1,~but~in~addition~with\\
\mbox{}~~~~~~~~~~~~~~mt~=~166.5~at~2-loop\\
~3:~same~as~2,~but~in~addition~with\\
\mbox{}~~~~~~~~~~~~~~Yukawa~term~added~for~light~Higgs\\
{\bf~1}\\
\\
~Select~input:\\
~1:~Msusy,~MtLR,~...\\
~2:~MSt2,~delmst,~stt,~...\\
{\bf~2}\\
\\
~Limit~for~Delta~rho~=~1.3~*~10\^~-3~?~(0~=~ok)\\
{\bf~0}\\
\\
\\
\\
~tan(beta)~=~?\\
{\bf~1.6}\\
\\
~MSt2~=~?\\
{\bf~400}\\
~delmst~=~?\\
{\bf~100}\\
~sin(theta\_stop)~=~?\\
~(0:~stt~=~0~//~1:~stt~=~sin(-Pi/4))\\
{\bf~1}\\
~Mtop~=~175~?~(0~=~ok)\\
{\bf~173.8}\\
~Mgluino~=~500~?~(0~=~ok)\\
{\bf~300}\\
~Mue~=~-200~?~(0~=~ok)\\
{\bf~100}\\
~M~=~?~(0:~M~=~400~//~1:~M~=~Msusy)\\
{\bf~1}\\
\noindent%
MA~=~?\\
{\bf~500}\\
\\
~Selection:~1~=~top~only,~2~=~top/bottom~only,~3~=~all\\
{\bf~3}\\
\\
~Your~parameters:\\
~tb,~Msusy(top-left),~Msusy(top-right),~MtLR\\
~MT,~Mgl,~Mue,~M,~MA\\
~~~1.600000~~~~~~~309.2813~~~~~~~308.0858~~~~~~~200.0000~\\
~~~173.8000~~~~~~~300.0000~~~~~~~100.0000~~~~~~~309.2813~~~~~~~500.0000\\
\\
~-------------------------------------------------\\
~The~results:~~light~Higgs~~~~~~heavy~Higgs\\
\\
~mh-tree~:~~~~~~~39.42879~~~~~~~~506.7153~~\\
~mh-1loop:~~~~~~~69.53253~~~~~~~~508.7314~~~\\
~mh-2loop:~~~~~~~64.10773~~~~~~~~508.3541~~~\\
~-------------------------------------------------\\
~Delta~rho~1-loop~~~~~~~~~:~~~5.909909368803866E-004\\
~Delta~rho~2-loop~(gluon)~:~~~6.177715322384914E-005\\
~Delta~rho~2-loop~(gluino):~~~4.755834893556049E-006\\
~Delta~rho~total~~~~~~~~~~:~~~6.575239249977918E-004\\
~-------------------------------------------------\\
\\
\\
~tan(beta)~=~?\\
\dots
}
In this example the physical parameters in the $\tilde{t}$-sector have been
chosen as
input parameters; no refinement term has been included. The selection
for $M$ sets $M = M_{\tilde{t}_L}$, where $M_{\tilde{t}_L}$ is calculated from
$m_{\tilde{t}_1}$, $m_{\tilde{t}_2}$ and $\sin\tst$, see \refeq{stopmassmatrix}.
\newpage
\subsection{Example~2}
{\tt~>{\bf FeynHiggs.exe}\\
\dots~Introduction~\dots~\\
~---------------------------------------------------\\
\\
~depth~of~calculation~?\\
~1:~full~1-loop~+~2-loop~QCD\\
~2:~same~as~1,~but~in~addition~with\\
\mbox{}~~~~~~~~~~~~~~mt~=~166.5~at~2-loop\\
~3:~same~as~2,~but~in~addition~with\\
\mbox{}~~~~~~~~~~~~~~Yukawa~term~added~for~light~Higgs\\
{\bf~3}\\
\\
~mt~in~the~stop~mass~matrix~at~2-loop~?\\
~1:~mt~=~pole~mass\\
~2:~mt~=~running~mass\\
{\bf~2}\\
\\
~Select~input:\\
~1:~Msusy,~MtLR,~...\\
~2:~MSt2,~delmst,~stt,~...\\
{\bf~1}\\
\\
~Limit~for~Delta~rho~=~1.3~*~10\^~-3~?~(0~=~ok)\\
{\bf~0.00001}\\
\\
\\
\\
~tan(beta)~=~?\\
{\bf~20}\\
\\
~Msusy\_top\_L~=~?\\
{\bf~1000}\\
~Msusy\_top\_R~=~?~(0:~Msusy\_top\_R~=~Msusy\_top\_L)\\
{\bf~0}\\
~Mtop~=~175~?~(0~=~ok)\\
{\bf~0}\\
~Mgluino~=~500~?~(0~=~ok)\\
{\bf~0}\\
~Mue~=~-200~?~(0~=~ok)\\
{\bf~-100}\\
\noindent%
M~=~?~(0:~M~=~400~//~1:~M~=~Msusy)\\
{\bf~300}\\
~MA~=~?\\
{\bf~400}\\
~MtLR~=~?\\
{\bf~1000}\\
\\
~Selection:~1~=~top~only,~2~=~top/bottom~only,~3~=~all\\
{\bf~3}\\
\\
~Your~parameters:\\
~tb,~Msusy(top-left),~Msusy(top-right),~MtLR\\
~MT,~Mgl,~Mue,~M,~MA\\
~~~20.00000~~~~~~~1000.000~~~~~~~1000.000~~~~~~~1000.000~\\
~~~175.0000~~~~~~~500.0000~~~~~~-100.0000~~~~~~~300.0000~~~~~~~400.0000\\
\\
~-------------------------------------------------\\
~The~results:~~light~Higgs~~~~~heavy~Higgs\\
\\
~mh-tree~:~~~~~~90.70748~~~~~~~400.1090~~\\~~
~mh-1loop:~~~~~~133.6753~~~~~~~400.1799~~\\~~
~mh-2loop:~~~~~~118.8657~~~~~~~400.1770~~\\~~
~-------------------------------------------------\\
~using~running~mt~for~two-loop~contribution:~~~167.338636349986\\
~...~also~for~mt~in~Stop~mass~matrix\\
~mh-2loop:~~~~~~120.8774~~~~~~~400.1780\\
~-------------------------------------------------\\
~using~running~mt~for~two-loop~contribution:~~~167.338636349986~\\
~...~also~for~mt~in~Stop~mass~matrix\\
~adding~Yukawa~term~for~light~Higgs\\
~...~also~with~running~mt~in~stop~mass~matrix\\
~mh-2loop:~~~~~~122.2504~~~~~~~400.1780~~\\
~-------------------------------------------------\\
~WARNING:~Delta~rho~>~experimental~limit\\
~Delta~rho~1-loop~~~~~~~~~:~~~3.224156596235517E-005\\
~Delta~rho~2-loop~(gluon)~:~~~3.475299800114144E-006\\
~Delta~rho~2-loop~(gluino):~~~2.896993903992095E-005\\
~Delta~rho~total~~~~~~~~~~:~~~6.468680480239026E-005\\
~-------------------------------------------------\\
\\
\\
~tan(beta)~=~?\\
\dots
}
\newpage
If one selects to include all refinement terms, the front-end
automatically calculates the Higgs masses in all three steps of
accuracy. The running top mass (in this example
{\tt 167.338636349986}~GeV) is also used
in the $\tilde{t}$ mass matrix.
In this example only a very small SUSY contribution to $\Delta\rho$ is
allowed, and the value of $\Delta\rho^{\mathrm {SUSY}}$ for the chosen parameters
exceeds the specified maximal value (a warning is printed out).
\section{Conclusions}
{\em FeynHiggs}\ is a Fortran code for the calculation of the masses of the
neutral ${\cal CP}$-even Higgs bosons of the MSSM.
It is based on results which have been obtained using
the Feynman-diagrammatic approach. The results consist of the full
one-loop\ and the leading two-loop\ QCD corrections.
Two further steps of refinements have been implemented.
The Fortran code for the one-loop\ contribution has been taken over from a
program written by A.~Dabelstein.
The Fortran code for the two-loop\ correction has
been generated automatically from a {\em Mathematica}\ result.
The program is available via the WWW page\\
{\tt http://www-itp.physik.uni-karlsruhe.de/feynhiggs}~.
The code consists of a front-end and a subroutine. The front-end can
be manipulated at the user's will. The different parts
of the code have been described in detail, and the meaning of the
variables used
in the code has been explained. We have given two examples of how to
use {\em FeynHiggs}.
The subroutine is self-contained and can be linked as an independent
part to other existing programs
\footnote{
This has already successfully been performed for a Fortran program
used by members of the DELPHI collaboration at
Karlsruhe~\cite{sopczak}.
}
thus providing a precise prediction for $m_h$ and $m_H$, which can
then be used for further computations.
\bigskip
\subsection*{Acknowledgements}
We thank A.~Dabelstein for providing his one-loop\ program for the
calculation of the Higgs-boson masses.
W.H. gratefully acknowledges support by the Volkswagenstiftung.
Children of all ages are welcome to join us for a fun and educational experience to celebrate your special day.
Our themed parties cater to children aged 1–10 and include Museum admission for your guests the day of the party!
This exclusive birthday celebration takes place in one of the most sought-after areas of the Museum, our Totally Tots exhibit. Yes, that's right—this exhibit is all yours before or after Museum hours. Enjoy 45 minutes of guided play with our Party Hosts, as your wee ones discover the magical wonders of our waterworks pond, visit our sandbox, and more! Make some noise as you explore, crawl, wiggle, and giggle your way through the tunnels, slides, and music makers in this unique and colorful space.
Ages 1–9 years. Come on an adventure around the Museum! Party guests will explore three of our hands-on exhibits, guided by one of our fun-loving Party Hosts! Make a pizza at L&B's Spumoni Gardens! Ring up some goodies at our International Grocery Store. Then wrap up your adventure with food, party games, and who could forget the cake? Join us for a one-of-a-kind birthday adventure!
To learn more, select one of the tabs above, or view our birthday flier.
White House budget director Peter Orszag says he's leaving simply because "it's time." | John Shinkle/POLITICO
Orszag leaves his mark
Updated 07/30/2010 06:45 AM EDT
Peter Orszag may have had the shortest tenure of any member of the Obama Cabinet, but in an administration dominated by a powerful White House staff, he left as deep a mark as almost anyone in it.
The 41-year-old economist, whose last day is Friday, was a key shaper and negotiator of the stimulus package; he designed cost-control plans and other features of this year's sweeping health care legislation and he has cultivated a reputation as the administration's leading deficit hawk.
But Orszag made enemies in the House leadership for his emphasis on controlling cost, and his independent profile at times rankled the White House as much as any disagreements over deficits. Officials there say he had a habit of going around them to the press and to members of Congress, and they winced at his highly publicized personal life.
An economist and policy wonk, Orszag is being replaced by veteran government manager Jacob Lew — a shift that hints at the preferences of a president whose administration has favored gregarious, political team players over sharp-elbowed "propeller heads" who have their own policy views, a phrase President Barack Obama employed for Orszag.
In an interview this week, Orszag first rejected the contention that his high profile rankled the White House, then added: "But if that were the case, it's partly a reflection of the tension that's embedded in this job."
The job, he said, consists of being both "part of the president's econ team and a West-Wing-adviser-type" and "the head of an agency, part of the Cabinet [with] statuatory and legislative obligations."
Orszag's academic credentials and his former job as director of the Congressional Budget Office made him a player in the administration's internal economic debates, which have been dominated by a small circle led by White House adviser Larry Summers and Treasury Secretary Timothy Geithner. Orszag pushed — not always successfully — for aggressive measures to control the long-term deficit, even as he favored stimulus spending.
But it was less substantive differences that set him apart. He became a celebrity of sorts, a formerly eligible bachelor whose thick, dark hair and geek chic drew the kind of attention possible only in Washington. In an administration whose internal motto is "no drama," he started a fire in his office on one of his first days, and his complicated personal life later put him on the front page of the New York Post. He was a familiar figure to the media — other administration officials grumbled that he spoke too frequently to The New York Times's Jackie Calmes — and to allies among Senate deficit hawks in an administration that frowns on outside alliances.
Once seen as a potential successor to chief of staff Rahm Emanuel, Orszag saw his portfolio increasingly shaped into that of a traditional budget director — a powerful force in designing and executing policy but rarely its visionary or originator.
Orszag said he saw the diminution of his political role in the passage of the recovery act as a natural progression. That early measure was the product of an administration "in a kind of emergency footing," he said. "As the White House was staffed out and everyone settled into their roles, the roles evolved into their proper functions."
But Orszag said he's leaving simply because he's been in government for four years, first as head of the CBO and then at the Office of Management and Budget, and "it's time."
"The budget job is brutal," recalled Reagan budget director James Miller. "The responsibilities are great. It's your job to worry about lots of things. You're worrying about the pile of paper that crosses your desk every day. And there's some terrible thing under there that might jump out and grab you by the throat."
Orszag leaves for a position as a distinguished visiting fellow at the Council on Foreign Relations in New York, a position he said he hopes to use to think through issues that came up at the OMB and to plot his next step.
"Not surprisingly, there have been a whole series of opportunities that have arisen, and I haven't explored any of them for ethics reasons," he said, adding that he wouldn't rule out a post on Wall Street, despite the administration's clashes with the banking sector.
Orszag said he leaves proud of the administration's economic accomplishments but worried that a series of factors could still derail a nascent recovery.
"The economy relative to where one would have reasonably expected it to be a year and a half ago — given the implosion in private-sector borrowing and the collapse of demand — is doing fairly well," he said. "Even having positive growth is a huge accomplishment."
But Orszag said the labor market remains "weaker than anyone would like," and he continues to have concerns about the future.
Orszag said he's worried that "the recovery act is peeling off, in that its impact on growth will recede in the future." And he fears that an estimated $140 billion shortfall in state and local government budgets will prompt the kind of cuts that will be "harmful from a macroeconomic perspective."
The other factors, he said, include the end of a recent bump in economic growth traced to companies simply resuming production and the fears — though fading ones — of contamination from a European fiscal crisis.
Orszag said his roles in the stimulus and health care battles are among his proudest accomplishments, along with working to improve governmental information technology. But the crush of the job and the economic crisis, he said, "swept away some of his pet projects."
"A significant effort to update and overhaul the nation's economic statistics is now timely, so that we're better measuring an evolving economy," he said. "I would have loved to have spent even more time on that, but it just got crowded out by fighting fires."
The raging fire with which Orszag was most closely identified was the deficit, and he was reportedly an internal advocate both of politically risky new revenues and spending cuts in the medium and long terms.
In a speech at the Brookings Institution on Wednesday, Orszag raised some eyebrows by calling it "foolish to dramatically reduce the deficit immediately, because that would choke off the nascent economic recovery."
During the interview, he rejected the notion of an internal debate between deficit hawks and those who favor more aggressive stimulus.
"There's a false debate going on because we need more support for the economy in the very short run and we need more medium- and long-term fiscal discipline," he said. "We need more of both, and it's a question of timing."
Orszag's hawkish legacy will include a role in shaping the administration's deficit commission and the Independent Payment Advisory Board, a commission that will help set Medicare reimbursement rates. The board will shift some payment decisions out of the hands of congressional committees, a move The New York Times's Matt Bai said constitutes a major power grab for the executive branch.
Orszag leaves office a bit scarred by the celebritization of politics. The news that his ex-girlfriend, a Greek shipping heiress, was pregnant at the time of his engagement to ABC News anchor and business correspondent Bianna Golodryga made the front page of the New York Post and did not thrill the "no drama" White House image makers.
"It's unfortunate that my personal life was discussed in the tabloids, but what I think was the correct response was just to focus on continuing and doing my work and that's what I did," he said.
Golodryga is based in New York, and while Orszag said the couple will be splitting their time between there and the capital, his life will be shifting north.
His boxed-up books, he said, have undergone an "elaborate sorting process between New York and D.C.," but his favorites will go to New York.
The Battle of Limanowa was fought at Limanowa during the First World War, on 5–12 December 1914.

Background

The 3rd Russian Army, supported on its left flank by parts of the 8th Army (Aleksei Brusilov), advanced and was met by the main body of the 4th Austrian Army, hastily transported to the area southwest of Kraków, together with the left flank of the 3rd Army (Svetozar Borojević von Bojna), which stood in the Carpathians.

The battle

The two opponents met on 5 December at and northwest of Limanowa, and there a battle was fought that lasted several days and ended with the Russians being defeated (among other losses, they left 30,000 men as prisoners) and having to fall back behind the Wisłoka. While the Battle of Limanowa was in progress, the main body of Borojević von Bojna's army advanced against Brusilov's, which in turn had to evacuate the westernmost Carpathian passes.

After the battle

After Brusilov's army had been reinforced with parts of the 11th Army (Selivanov, newly raised from reserve divisions), which was besieging Przemyśl, it went over to a counterattack (20 December) and pushed the Austrians back into the passes.

Toward the end of 1914 the mobile operations in Galicia, as in Poland, gave way to trench warfare. The front ran just south of the Vistula along the lower course of the Dunajec (west of Tarnów) and then bent off to the southeast, following broadly the crest of the Carpathians (the Eastern Beskids) all the way to Körösmező (southwest of Kolomea in Bukovina). Most of Bukovina was still occupied by the Russians; the fortress of Przemyśl, however, still held out.

Sources

About the battle, from Nordisk familjebok
<?php
use Symfony\Component\DependencyInjection\Argument\RewindableGenerator;
// This file has been auto-generated by the Symfony Dependency Injection Component for internal use.
// Returns the private 'form.type_extension.submit.validator' shared service.
return $this->services['form.type_extension.submit.validator'] = new \Symfony\Component\Form\Extension\Validator\Type\SubmitTypeValidatorExtension();
Q: Hex code of background-color is not displayed by Edge I use the following simple HTML and CSS code:
.content {
background-color: #e9ddafff;
}
<div class="content">
Content goes here.
</div>
This code works perfectly in all browsers except for Edge. When you open it in Edge the browser does not recognize the background-color. I guess somehow Edge cannot deal with the hex code #e9ddafff because when I change it to red everything works.
Do you have any idea what I need to change in my code to also make it work in the Edge browser?
JSfiddle can be found here.
\section{Introduction}
\vspace{-3pt}
\label{sec:intro}
Huntington disease (HD) is a fatal, autosomal dominant neurodegenerative disease that typically affects 12 per 100,000 people in the Western World~\cite{mason2018predicting}. HD is insidious and progressive, affecting motor skills, speech, cognition, and behavior~\cite{paulsen2010early}. The diagnosis of HD is based on unequivocal motor symptoms and is typically made when individuals are in their mid-40s~\cite{hinzen2017systematic,vogel2012speech}.
Current research suggests that speech motor deficits precede the onset of limb and trunk chorea~\cite{vogel2012speech}, providing an opportunity to leverage changes in speech as a sensitive biomarker. These biomarkers can then be used to support distributed ecologically valid symptom tracking. This paper builds towards that goal, investigating how the speech signal can be used to automatically detect HD.
HD speech is typically characterized by decreases in the number of words pronounced, syntactic complexity, and speech rate, in addition to increases in paraphasic errors, filler usage, and sentence duration~\cite{HD_speech_characterization, vogel2012speech}. Language and speech symptoms are common in HD, occurring in approximately 90\% of cases~\cite{dysarthria_1, dysarthria_2}. As such, acoustic analyses may provide meaningful therapeutic and diagnostic information for individuals with HD, especially given that preliminary research has shown that HD-related speech deficits can be objectively characterized and increase with disease progression~\cite{kaploun2011acoustic}. The development of an objective, non-invasive acoustic biomarker, sensitive enough to detect disease progression in people with premanifest and manifest HD, will provide new avenues for clinical research and treatments.
We present an initial step towards detecting changes in HD severity by first demonstrating the efficacy of the speech signal for detecting the presence of HD. The system includes transcription, feature extraction, and classification. The transcripts are generated either by humans or by training in-domain automatic speech recognition (ASR) systems. The features are clinically-inspired and include filler usage, pauses in speech, speech rate, and pronunciation errors. We investigate the static and dynamic feature properties. The static feature sets describe speech behavior using summary statistics; the dynamic feature sets provide an opportunity to directly model the feature variation between utterances. We model the static feature sets using k-Nearest Neighbors ($k$-NN) with Euclidean distance and Deep Neural Networks (DNN). We model the dynamic feature sets using $k$-NN with Dynamic Time Warping (DTW) Distance and Long-Short-Term Memory Networks (LSTM). We investigate the impact of transcription error, moving from manual transcriptions, to transcripts generated using forced alignment to known prompts, and finally to automatic speech recognition (ASR).
Our results demonstrate the efficacy of speech-centered approaches for detecting HD. We show that we can accurately detect HD using a simple static ($k$-NN) approach, resulting in an accuracy of 0.81 (chance performance is 0.5). We then show that these results can be improved using dynamic feature sets (DTW) or deep methods, which can capture the non-linear relationships in our feature sets (DNN/LSTM). Finally, we demonstrate that in domains with limited lexical variability, manual transcripts can be replaced with ASR transcripts without suffering from a degradation in performance, even given a word error rate of 9.4\%. This indicates the robustness of the identified speech features. The novel aspects of our approach include one of the first investigations into automated speech-centered HD detection and a focus on understanding the importance of modeling temporal variability for detecting HD.
\begin{table*}[t]
\vspace{-5pt}
\caption{Summary of participant demographics}
\vspace{-5pt}
\label{tab:speaker_data}
\centering
\begin{tabular}{c | c c c c c}
\hline
& Premanifest (n=12) & Early (n=12) & Late (n=7) & Control (n=31) & All (n=62) \\
\hline
\hline
Age | Mean (SD) & 42.6 (9.8) & 52.0 (11.2) & 54.6 (9.7) & 50.3 (11.1) & 49.8 (11.1) \\
Gender | \% Male & 36.4 & 41.7 & 37.5 & 38.7 & 38.7 \\
Race | \% White & 90.9 & 100 & 87.5 & 90.3 & 91.9 \\
Race | \% Black & 0 & 0 & 12.5 & 6.5 & 4.8 \\
Race | \% Other & 9.1 & 0 & 0 & 3.2 & 3.3 \\
\hline
\end{tabular}
\vspace{-6pt}
\end{table*}
\vspace{-3pt}
\section{Related Work}
\vspace{-3pt}
\label{sec:rw}
\begin{table}[t]
\vspace{-3pt}
\caption{Dataset summary: total duration in seconds (with average utterance duration) and number of utterances (with average number per speaker). HC denotes the healthy controls and HD the participants with Huntington disease}
\vspace{-3pt}
\label{tab:utt_data}
\centering
\begin{tabular}{c | c c }
\hline
Group & Duration (avg. dur) & \# Utt. (avg. num) \\
\hline
\hline
HC & 1,469.83 (47.41 $\pm$ 7.68) & 310 (10.0 $\pm$ 0.00) \\
HD & 2,641.83 (85.2 $\pm$ 38.18) & 320 (10.32 $\pm$ 0.74) \\
\hline
\end{tabular}
\vspace{-16pt}
\end{table}
Previous works have demonstrated the feasibility of automated speech assessment for various neurocognitive disorders such as dementia~\cite{dementia}, aphasia~\cite{aphasia}, and Alzheimer's disease~\cite{toth2015automatic}. Research has investigated the feasibility of using automatic speech recognition (ASR) to extract lexical features from transcribed audio~\cite{fraser2013automatic, sadeghian2015using}. However, off-the-shelf ASR systems are not well suited to this domain because of the abnormal speech patterns, high speaker variability, and lack of data that are common in dysarthric speech~\cite{zhou2016speech,mengistu2011comparing}. Furthermore, these off-the-shelf ASR systems may miss critical cues such as the presence of fillers, stutters, or mispronunciations, all of which contribute to the perception of disordered speech~\cite{toth2015automatic}. Another approach has been to train specialty ASR models directly on the domain-relevant data. Le et al. trained an ASR system on aphasic speech and used this system to extract speech features for the prediction of intelligibility, including: the number of fillers, phone rate, error rate, utterance length, and goodness of pronunciation~\cite{aphasia}. Similar work from Peintner et al. focused on the automatic diagnosis of neurodegenerative diseases~\cite{peintner2008learning}. In addition, work from Guerra et al. demonstrated the potential of using non-linear classifiers and multi-dimensional features to detect dysarthric speech~\cite{guerra2003modern}.
\vspace{-3pt}
\section{Data Description}
\vspace{-3pt}
\label{sec:data}
The data in this study was collected from an HD study conducted at the University of Michigan. The data consists of 62 speakers, 31 healthy and 31 with HD. Out of the 31 individuals with HD, 11 are premanifest, 12 are in the early stage, and 8 are in the late stage. HD groups were created using the Total Motor Score (TMS) and the Total Functional Capacity (TFC) score \cite{shoulson1989assessment} from the Unified Huntington's Disease Rating Scale (UHDRS) \cite{kieburtz2001unified}. Specifically, individuals were designated as premanifest HD if they had a positive gene test (HD CAG $>$ 35) and a clinician-rated score of less than 4 on the last item of the TMS (which provides an index of clinician-rated diagnostic confidence). Those with clinician-rated scores greater than or equal to 4 on the last item of the TMS were included in the manifest HD group. For those with manifest HD, TFC scores (which provide an index of clinician-rated functional capacity) were used to determine HD stage. Specifically, scores range from 0 (low functioning) to 13 (highest level of functioning); TFC sum scores of 7-13 were considered early-stage and sum scores of 0-6 were considered late-stage HD.
The data includes both read speech and spontaneous speech sections. This study focuses only on the read speech portion, during which participants read the Grandfather Passage~\cite{Grandfather_p1}. The Grandfather Passage~\cite{duffy1995motor, zraick2004reliability, darley1975motor} is a phonetically balanced paragraph containing 129 words and 169 syllables, and is a standard reading passage used in speech-language pathology. This passage is commonly used to test for dysarthric speech~\cite{patel2013caterpillar}. Table~\ref{tab:utt_data} shows additional information about the scope of our data such as the size and number of utterances.
\section{Data Transcription}
\label{sec:trans}
\subsection{Human Transcriptions}
\label{sec:orat}
Recordings were deidentified and transcribed using the CHAT approach~\cite{macwhinney2000childes} and Computerized Language Analysis (CLAN) software. CHAT transcriptions identify speech errors (phonological, semantic, or neologistic), vowel distortions, word repetitions, retracing, assimilations, dialect variances, letter and word omissions, utterances, pauses, glottal sounds unique to HD, vocalizations, spontaneous speech for each participant, and variations in rate, fundamental frequency (F0), and voice quality. Interrater reliability (greater than or equal to 90\% agreement) was established between two trained raters and a Ph.D.-level Speech Language Pathologist. The raters then individually transcribed each recording and their transcriptions were compared. Raters were required to reach a consensus for all identified discrepancies. In cases where consensus could not be reached, the Speech Language Pathologist was consulted.
\begin{table}[t]
\vspace{-3pt}
\caption{Performance of ASR system (per speaker) for healthy and HD speech, including WER, insertion (ins), deletion (del), and substitution (sub)}
\vspace{-3pt}
\label{tab:wer}
\centering
\begin{tabular}{c | c c c c}
\hline
&WER \%&Ins&Del&Sub\\
\hline
All & $9.4\pm14.8$ & $1.4\pm3.1$ & $5.0\pm11.2$ & $8.9\pm17.3$\\
HD & $16.0\pm18.7$ & $2.5\pm4.0$ & $8.7\pm14.9$ & $15.5\pm22.5$\\
HC & $2.8\pm2.3$ & $0.3\pm0.9$ & $1.3\pm1.2$ & $2.3\pm2.4$\\
\hline
\end{tabular}
\vspace{-16pt}
\end{table}
\begin{figure*}[t]
\begin{center}
\includegraphics[scale=0.63]{images/pipeline_compact.pdf}
\vspace{-3pt}
\caption{System diagram for HD classification (note: seg. is segmentation) \label{fig:data_pipeline}}
\end{center}
\vspace{-16pt}
\end{figure*}
\vspace{-3pt}
\subsection{Automated Transcripts}
\vspace{-3pt}
\label{sec:asrt}
Manual transcripts often represent a bottleneck because they are costly and time-intensive to obtain. ASR can provide an alternative. However, off-the-shelf systems are often unusable due to the acoustic mismatch between the healthy speech used to train these systems and the speech patterns of individuals in the target population. We address this by training in-domain acoustic and language models using a specialized lexicon.
The lexicon we used is initialized using standard English phone-level pronunciations provided by the CMU pronunciation dictionary \cite{weide1998cmu}. We augment this using the pronunciation errors identified in the manual transcripts. We use a bigram language model extracted over the manual transcripts.
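The bigram model can be illustrated with a short sketch. This is a minimal maximum-likelihood estimator, not the ASR tooling used in the study; the sentence-boundary tokens and the absence of smoothing are simplifying assumptions:

```python
from collections import Counter

def bigram_lm(transcripts):
    """Maximum-likelihood bigram LM from whitespace-tokenized transcripts.

    Sentence boundaries are marked with <s> and </s>; probabilities are
    P(w2 | w1) = count(w1, w2) / count(w1).
    """
    unigrams, bigrams = Counter(), Counter()
    for sent in transcripts:
        tokens = ["<s>"] + sent.split() + ["</s>"]
        unigrams.update(tokens[:-1])           # history counts
        bigrams.update(zip(tokens[:-1], tokens[1:]))
    return {pair: n / unigrams[pair[0]] for pair, n in bigrams.items()}

# Toy transcripts (hypothetical):
lm = bigram_lm(["the grandfather passage", "the passage"])
```

In practice a smoothed model would be used so that unseen word pairs do not receive zero probability.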
The acoustic model is a monophone Hidden Markov Model (HMM) with three-states per phone and a left-to-right topology. The emission probability of each state is estimated using a Gaussian Mixture Model (GMM). We use a monophone acoustic model rather than a triphone model due to the relatively small size of the dataset ($\sim$1 hour of total speech). The final model consists of 500 Gaussian mixtures.
The input acoustic features are Mel-frequency Cepstral Coefficients (MFCC), extracted with a 25ms Hamming window and 10ms overlap using Kaldi~\cite{povey2011kaldi}. We include the 13 MFCC coefficients, the first, second, and third order deltas.
The acoustic and language models are trained and tested using a Leave-One-Subject-Out (LOSO) approach over all HC and HD speakers. The performance of our ASR system is shown in Table~\ref{tab:wer}, obtaining an overall WER of $9.4\pm14.8\%$. This relatively low WER is strongly attributed to the constrained speech produced by reading the Grandfather passage.
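The WER values in Table~\ref{tab:wer} follow the standard edit-distance definition, (substitutions + insertions + deletions) divided by the reference length. A minimal reference implementation (not the scoring tool used in the study) is:

```python
def wer(ref, hyp):
    """Word error rate between a reference and hypothesis string."""
    r, h = ref.split(), hyp.split()
    # d[i][j]: edit distance between the first i ref words and first j hyp words
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i                       # all deletions
    for j in range(len(h) + 1):
        d[0][j] = j                       # all insertions
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution / match
    return d[len(r)][len(h)] / len(r)
```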
\vspace{-3pt}
\subsection{Preprocessing Methods}
\vspace{-3pt}
We present three approaches to investigate the effect of error propagation. We first assume the availability of manual transcripts and investigate the feasibility of extracting features for HD classification (\emph{force-aligned oracle transcription}, FA-ORAT). Next, we assume that the subject prompt is available and ask if the subject's speech goals can be used as a target for force alignment (\emph{forced-aligned grandfather transcription}, FA-GF). This allows us to investigate the effect of the mismatch introduced by speech errors. Finally, we assume only that segmentation information is available, noting when speaker utterances begin and end, but that the transcript itself is unknown (\emph{ASR transcription}, ASRT). This provides insight into the effectiveness of an automatic system that can transcribe, extract features, and predict diagnosis (Figure~\ref{fig:data_pipeline}).
All approaches use the acoustic model discussed in Section~\ref{sec:asrt}. The ASRTs are discussed in Section~\ref{sec:asrt}. FA-ORATs are generated by force-aligning the input audio to the manual transcripts. FA-GFs are generated by force-aligning the input audio to the original Grandfather passage text.
FA-ORAT and ASRT utterances are segmented using the manual transcripts before both ASR and force alignment (future work will remove the reliance on manual segmentation for ASRT). FA-GF utterances are segmented using the natural sentence boundaries of the Grandfather passage (Figure~\ref{fig:data_pipeline}).
\begin{table}[t]
\vspace{-2pt}
\caption{Average dimension size of feature vectors}
\vspace{-3pt}
\label{tab:dim-size}
\centering
\begin{tabular}{c | c c c}
\hline
&FA-ORAT & FA-GF & ASRT\\
\hline
Utt-level & $43.7\pm1.2$ & $29.2\pm1.3$ & $44.3\pm1.1$\\
Spkr-level & $336.8\pm10.6$ & $235.9\pm8.0$ & $359.0\pm8.0$\\
\hline
\end{tabular}
\vspace{-10pt}
\end{table}
\vspace{-3pt}
\section{Feature Extraction}
\vspace{-3pt}
\label{sec:feat}
We describe two feature sets: utterance-level (dynamic) and speaker-level (static). The utterance-level features are extracted over each utterance and provide insight into the relationship between the time-series behavior of the features and diagnosis. We normalize the utterance-level features using speaker-dependent z-normalization. The speaker-level features are calculated by applying summary statistics to the normalized utterance-level features, including max, min, mean, SD, range, and quartiles (25th, 50th, 75th). We group all features by subject and remove features that have either zero variance or zero information gain with respect to the target class (Table~\ref{tab:dim-size}). We perform this once at the utterance-level for our dynamic features and again after summary statistics are computed for our static features.
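As an illustration of this pipeline, the speaker-dependent z-normalization and the summary statistics over a single feature track can be sketched as follows; the statistic set mirrors the one listed above, and the input values are hypothetical:

```python
from statistics import mean, pstdev, quantiles

def znorm(values):
    """Speaker-dependent z-normalization of one utterance-level feature track."""
    mu, sigma = mean(values), pstdev(values)
    return [(v - mu) / sigma if sigma > 0 else 0.0 for v in values]

def summarize(values):
    """Speaker-level summary statistics over a normalized feature track."""
    q25, q50, q75 = quantiles(values, n=4)  # 25th, 50th, 75th percentiles
    return {
        "min": min(values), "max": max(values),
        "mean": mean(values), "std": pstdev(values),
        "range": max(values) - min(values),
        "q25": q25, "median": q50, "q75": q75,
    }

# One speaker's utterance-level values for a single feature (hypothetical):
feats = summarize(znorm([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]))
```

After normalization the track has zero mean and unit standard deviation, so the summary statistics describe its shape rather than the speaker's absolute level.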
\vspace{4pt}
\noindent\textbf{Filler Features}: Fillers are parts of speech that are not purposeful and that do not contain formal meaning (e.g., ah, eh, um, uh). Fillers are labeled during the human transcription process. They are preserved in FA-ORAT and are estimated in ASRT. They are ignored in FA-GF due to the absence of fillers in the original passage. The utterance-level features include: number of fillers, number of fillers per second, number of fillers per word, number of fillers per phone, total filler duration per utterance, and total filler duration per second.
\vspace{4pt}
\noindent\textbf{Pause Features}: Pauses are periods without speech that last for at least 150 ms~\cite{roark2011spoken}. The utterance-level features include: the number of pauses, number of pauses per second, number of pauses per word, number of pauses per phone, total pause duration, and total pause duration per second.
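The pause-feature computation from force-aligned word times can be sketched as below; the (start, end) tuple format and the explicit utterance-duration argument are assumptions made for illustration:

```python
MIN_PAUSE = 0.150  # seconds; gaps shorter than this do not count as pauses

def pause_features(word_times, utt_dur):
    """Pause features from force-aligned (start, end) word times, in seconds."""
    # Silence between the end of one word and the start of the next
    gaps = [b_start - a_end
            for (_, a_end), (b_start, _) in zip(word_times, word_times[1:])]
    pauses = [g for g in gaps if g >= MIN_PAUSE]
    return {
        "n_pauses": len(pauses),
        "pauses_per_sec": len(pauses) / utt_dur,
        "pauses_per_word": len(pauses) / len(word_times),
        "pause_dur": sum(pauses),
        "pause_dur_per_sec": sum(pauses) / utt_dur,
    }

# Three words with one 0.2 s gap and one 0.05 s gap (hypothetical timings):
f = pause_features([(0.0, 0.4), (0.6, 1.0), (1.05, 1.5)], utt_dur=1.5)
```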
\vspace{4pt}
\noindent\textbf{Speech Rate Features}: Speech rate captures an individual's speaking speed. The utterance-level features include: the number of phones, number of phones per second, number of phones per word, number of words, and number of words per second.
\vspace{4pt}
\noindent\textbf{Goodness of Pronunciation Features}: Goodness of Pronunciation (GoP) measures the fitness of a reference acoustic model (trained over all HD and HC speakers) to a given phone by computing the difference between the average acoustic log-likelihood of a force-aligned phoneme and that of an unconstrained phone loop~\cite{GoP}:
\vspace{-2pt}
\begin{equation}
\vspace{-1pt}
\text{GoP}(\mathbf{p}) = \frac{1}{N}\text{log}\frac{P(O|\mathbf{p})}{P(O|PL)},
\label{eq:GoP}
\end{equation}
\noindent where $\mathbf{p}$ is a sequence of phones, $O$ is the MFCC acoustic observation, $N$ is the number of frames, and $PL$ is the unconstrained phone loop. The utterance-level features include: the GoP score for each phone in the utterance.
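Since both log-likelihoods in \refeq{} above decompose over frames, the GoP score for a phone reduces to a per-frame log-likelihood difference. The sketch below assumes the per-frame scores have already been produced by the decoder:

```python
def gop(forced_ll, loop_ll):
    """Goodness of Pronunciation for one phone (Eq. 1).

    forced_ll: per-frame log-likelihoods under the force-aligned phone.
    loop_ll:   per-frame log-likelihoods under the unconstrained phone loop.
    A well-pronounced phone scores near 0; mispronunciations go negative,
    since the phone loop is free to pick a better-fitting phone per frame.
    """
    assert len(forced_ll) == len(loop_ll), "scores must cover the same frames"
    n = len(forced_ll)
    return (sum(forced_ll) - sum(loop_ll)) / n
```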
\begin{table*}
\vspace{-2pt}
\caption{Classification results (FA = forced alignment, ORAT = oracle, GF = grandfather, ASRT = ASR system)}
\label{tab:results-new}
\centering
\begin{tabular}{ c | c | c | c |c|c|c} \hline
\multirow{2}{*}{Method}&
\multicolumn{2}{c|}{FA-ORAT}&
\multicolumn{2}{c|}{FA-GF}&
\multicolumn{2}{c}{ASRT}\\
\cline{2-7}
& Accuracy & F1 (HD) & Accuracy & F1 (HD) &
Accuracy & F1 (HD) \\
\hline
\hline
$k$-NN & 0.81 & 0.77 & 0.82 & 0.79 & 0.81 & 0.77 \\
DTW & 0.87 & 0.86 & 0.84 & 0.81 & 0.81 & 0.77\\
DNN & 0.87 & 0.87 & 0.85 & 0.84 & 0.85 & 0.84\\
LSTM-RNN & 0.87 & 0.86 & 0.84 & 0.82 & 0.85 & 0.84 \\
\hline
\hline
\end{tabular}
\vspace{-10pt}
\end{table*}
\vspace{-3pt}
\section{Methods}
\vspace{-3pt}
\label{sec:methods}
We use Leave-One-Subject-Out (LOSO) paradigm: in each run, a single subject is held-out as the test speaker and the model is trained and validated on the remaining speakers. Within the training partition, 80\% of the data is used to train the model and 20\% is used to validate. This process is repeated over all speakers. All results presented are accuracies averaged over all subjects in the study. We train four models: $k$-NN with Euclidean distance ($k$-NN), $k$-NN with DTW distance (DTW), Deep Neural Networks (DNN), and Long-Short-Term Memory Recurrent Neural Networks (LSTM-RNN).
We hypothesize that HD can be detected using speech features. We test this hypothesis first using $k$-NN, which assigns a label to an instance based on the plurality of its closest $k$ neighbors. $k$-NN uses the speaker-level features, while DTW uses the utterance-level features. In both approaches, we sweep over the number of neighbors, $k$
We further hypothesize that the relationship between speech features and diagnosis can be more accurately modeled by exploiting non-linear feature interactions. We test this hypothesis using Deep Neural Networks (DNN) over the speaker-level features and Long-Short-Term Memory Recurrent Neural Networks (LSTM-RNN) over the utterance-level features. Both DNN and LSTM-RNN are implemented using Keras with a Tensorflow~\cite{abadi2016tensorflow} backend. The DNN is comprised of two fully connected layers with ReLU activation functions, a softmax output layer for binary classification, and dropout layers between each fully connected layer in the network. Our LSTM-RNN is comprised of a two LSTM layers with recurrent dropout, bias l2 regularization, and kernel l2 regularization followed by a softmax output layer. In both networks, we perform a hyperparameter sweep for layer width (32, 64, 128) and dropout rate (0.0, 0.2, 0.4). We use an ensemble approach, in which we train five separate models and the mode of the five predictions is used as the final prediction of the system.
\begin{table}[t]
\vspace{-2pt}
\caption{The confusion matrix derived from the average percentage of classifications across all data processing methods and classifiers. Rows = ground truth, columns = prediction}
\label{tab:conf}
\centering
\begin{tabular}{c | c | c }
\hline
& Healthy & HD \\
\hline
\hline
Healthy & 0.95 & 0.05 \\
Premanifest & 0.54 & 0.46 \\
Early & 0.14 & 0.86 \\
Late & 0.02 & 0.98 \\
\hline
\end{tabular}
\vspace{-14pt}
\end{table}
\vspace{-3pt}
\section{Results}
\label{sec:results}
\vspace{-3pt}
The results (Table~\ref{tab:results-new}) show the feasibility of HD detection using all presented methods.
The FA-ORAT approach results in the most accurate HD predictions, with an accuracy of 0.81 for $k$-NN, 0.87 for DTW, 0.87 for DNN, and 0.87 for LSTM-RNN. In the majority of cases in Table~\ref{tab:results-new}, the accuracy slightly decreases when less accurate transcriptions are used in the place of ORAT. For example, we obtain an accuracy of 0.85 for DNN when using both FA-GF and ASRT.
We assess the statistical significance of the changes in performance across classification and transcription approaches using Cochran's Q test, which compares the binary prediction over each of the 62 speakers across all methods and classifiers. We assert significance when $p<0.05$. The result demonstrates that there is not an overall statistically significant difference over individual classifiers and transcription method (Q(11)=15.6, p=0.157). This suggests that there are multiple opportunities to recognize symptomatology and avenues to research how speech changes are associated with illness. Further, given appropriately constrained content, ASR transcripts can be used as a substitute to manual transcripts for extracting speech features to assess HD symptomatology.
\begin{table}[t]
\vspace{-2pt}
\caption{Percentage of features (GoP, Speech Rate (SR), Pauses (P)) extracted from ORAT that are statistically significantly different (p$<$0.05) compared to the HC population}
\label{tab:features}
\centering
\begin{tabular}{c | c| c |c | c } \hline
\multirow{2}{*}{Feature}&\multicolumn{1}{c|}{}&\multicolumn{3}{c}{ORAT}\\
\cline{2-5}
& Feature & Increase & Decrease & No Change \\
\hline\hline
\multirow{3}{*}{\begin{turn}{90}Pre\end{turn}} & GoP & 0.0 & 0.06 & 0.94 \\
& SR & 0.04 & 0.32 & 0.64\\
& P & 0.35 & 0.0 & 0.65 \\
\hline
\multirow{3}{*}{\begin{turn}{90}Early\end{turn}} & GoP & 0.12 & 0.76 & 0.12 \\
& SR & 0.2 & 0.48 & 0.32 \\
& P & 0.85 & 0.0 & 0.15\\
\hline
\multirow{3}{*}{\begin{turn}{90}Late\end{turn}} & GoP & 0.12 & 0.76 & 0.12 \\
& SR & 0.2 & 0.6 & 0.2 \\
& P & 0.64 & 0.0 & 0.36 \\
\hline
\end{tabular}
\vspace{-14pt}
\end{table}
Our system is accurately able to distinguish between healthy patients and individuals with early and late stage HD (Table~\ref{tab:conf}). Our results show improved classification for later HD stages, which suggests that our features can more accurately capture HD speech for individuals whose disease is more advanced, compared to those at earlier stages. Further, it points to the difficulty in recognizing premanifest HD due to similarities in speech compared to both healthy and HD populations.
We analyze the relationship between feature category and disease stage, focusing on the static ORAT feature set. We aggregate the test sets generated over each run of LOSO (62 sets), retaining only the features that have non-zero variance and information gain across all 62 speakers (GoP, speech rate, pause, and filler features). We then separate the data into the four disease categories (HC, premanifest, early, late) and identify the subset of features that are significantly different between the HC population and each of the disease stages. We assert significance when $p<0.05$, using a two-tailed independent samples t-test. We apply Bonferroni correction to account for the family-wise error rate. We present the percentage of features that are statistically significant between the HC population and each disease stage and note whether the features of the individuals with HD are greater than, less than, or not statistically significantly different from those of the HC population. The results demonstrate that generally, GoP decreases, speech rate decreases, and the number of pauses increase with disease severity (Table~\ref{tab:features}).
\vspace{-3pt}
\section{Conclusion}
\vspace{-3pt}
\label{sec:conc}
In this work, we demonstrate the effectiveness of classifying HD using key speech features, including speech rate, pauses, fillers, and GoP. Our experimental results show that automated approaches can be used to generate transcripts, extract features, and classify HD. The accuracy of the presented method increases with disease stage, which suggests that speech may serve as an effective biomarker that could be used to track HD progression. Finally, the performance of both the static and dynamic approaches suggests that there are mulitple opportunities for tracking symptomatology in this domain. Further improvements for our automated system can be made by increasing the performance of ASR by incorporating additional out-of-domain data (e.g.,~\cite{le2017automatic}). We will also investigate the development of new, more descriptive, features.
\vspace{-3pt}
\section{Acknowledgements}
\vspace{-3pt}
Work on this manuscript was supported by the National Institutes of Health (NIH), National Center for Advancing Translational Sciences (UL1TR000433). In addition, a portion of this study sample was collected in conjunction National Institutes of Health (NIH), National Institute of Neurological Disorders and Stroke (R01NS077946) and/or Enroll-HD (funded by the CHDI Foundation). Lastly, this work was also supported by the National Science Foundation (CAREER-1651740).
\bibliographystyle{IEEEtran}
\section{Introduction}
\vspace{-3pt}
\label{sec:intro}
Huntington disease (HD) is a fatal, autosomal dominant neurodegenerative disease that typically affects 12 per 100,000 people in the Western World~\cite{mason2018predicting}. HD is insidious and progressive, affecting motor skills, speech, cognition, and behavior~\cite{paulsen2010early}. The diagnosis of HD is based on unequivocal motor symptoms and is typically made when individuals are in their mid-40s~\cite{hinzen2017systematic,vogel2012speech}.
Current research suggests that speech motor deficits precede the onset of limb and trunk chorea~\cite{vogel2012speech}, providing an opportunity to leverage changes in speech as a sensitive biomarker. These biomarkers can then be used to support distributed ecologically valid symptom tracking. This paper builds towards that goal, investigating how the speech signal can be used to automatically detect HD.
HD speech is typically characterized by decreases in the number of words pronounced, syntactic complexity, and speech rate, in addition to increases in paraphasic errors, filler usage, and sentence duration~\cite{HD_speech_characterization, vogel2012speech}. Language and speech symptoms are common in HD, occurring in approximately 90\% of cases~\cite{dysarthria_1, dysarthria_2}. As such, acoustic analyses may provide meaningful therapeutic and diagnostic information for individuals with HD, especially given that preliminary research has shown that HD-related speech deficits can be objectively characterized and increase with disease progression~\cite{kaploun2011acoustic}. The development of an objective, non-invasive acoustic biomarker, sensitive enough to detect disease progression in people with premanifest and manifest HD, will provide new avenues for clinical research and treatments.
We present an initial step towards detecting changes in HD severity by first demonstrating the efficacy of the speech signal for detecting the presence of HD. The system includes transcription, feature extraction, and classification. The transcripts are generated either by humans or by training in-domain automatic speech recognition (ASR) systems. The features are clinically inspired and include filler usage, pauses in speech, speech rate, and pronunciation errors. We investigate the static and dynamic feature properties. The static feature sets describe speech behavior using summary statistics; the dynamic feature sets provide an opportunity to directly model the feature variation between utterances. We model the static feature sets using $k$-Nearest Neighbors ($k$-NN) with Euclidean distance and Deep Neural Networks (DNN). We model the dynamic feature sets using $k$-NN with Dynamic Time Warping (DTW) distance and Long Short-Term Memory networks (LSTM). We investigate the impact of transcription error, moving from manual transcriptions, to transcripts generated using forced alignment to known prompts, and finally to automatic speech recognition (ASR).
Our results demonstrate the efficacy of speech-centered approaches for detecting HD. We show that we can accurately detect HD using a simple static ($k$-NN) approach, resulting in an accuracy of 0.81 (chance performance is 0.5). We then show that these results can be improved using dynamic feature sets (DTW) or deep methods, which can capture the non-linear relationships in our feature sets (DNN/LSTM). Finally, we demonstrate that in domains with limited lexical variability, manual transcripts can be replaced with ASR transcripts without suffering from a degradation in performance, even given a word error rate of 9.4\%. This indicates the robustness of the identified speech features. The novel aspects of our approach include one of the first investigations into automated speech-centered HD detection and a focus on understanding the importance of modeling temporal variability for detecting HD.
\begin{table*}[t]
\vspace{-5pt}
\caption{Summary of participant demographics}
\vspace{-5pt}
\label{tab:speaker_data}
\centering
\begin{tabular}{c | c c c c c}
\hline
& Premanifest (n=12) & Early (n=12) & Late (n=7) & Control (n=31) & All (n=62) \\
\hline
\hline
Age | Mean (SD) & 42.6 (9.8) & 52.0 (11.2) & 54.6 (9.7) & 50.3 (11.1) & 49.8 (11.1) \\
Gender | \% Male & 36.4 & 41.7 & 37.5 & 38.7 & 38.7 \\
Race | \% White & 90.9 & 100 & 87.5 & 90.3 & 91.9 \\
Race | \% Black & 0 & 0 & 12.5 & 6.5 & 4.8 \\
Race | \% Other & 9.1 & 0 & 0 & 3.2 & 3.3 \\
\hline
\end{tabular}
\vspace{-6pt}
\end{table*}
\vspace{-3pt}
\section{Related Work}
\vspace{-3pt}
\label{sec:rw}
\begin{table}[t]
\vspace{-3pt}
\caption{Dataset summary: total dataset size (seconds) and average utterance duration (seconds). HC denotes the healthy controls and HD the participants with Huntington disease}
\vspace{-3pt}
\label{tab:utt_data}
\centering
\begin{tabular}{c | c c }
\hline
Group & Duration (avg. dur) & \# Utt. (avg. num) \\
\hline
\hline
HC & 1,469.83 (47.41 $\pm$ 7.68) & 310 (10.0 $\pm$ 0.00) \\
HD & 2,641.83 (85.2 $\pm$ 38.18) & 320 (10.32 $\pm$ 0.74) \\
\hline
\end{tabular}
\vspace{-16pt}
\end{table}
Previous work has demonstrated the feasibility of automated speech assessment for various neurocognitive disorders such as dementia~\cite{dementia}, aphasia~\cite{aphasia}, and Alzheimer's disease~\cite{toth2015automatic}. Research has investigated the feasibility of using automatic speech recognition (ASR) to extract lexical features from transcribed audio~\cite{fraser2013automatic, sadeghian2015using}. However, off-the-shelf ASR systems are not well suited to this domain because of the abnormal speech patterns, high speaker variability, and lack of data that are common in dysarthric speech~\cite{zhou2016speech,mengistu2011comparing}. Furthermore, these off-the-shelf ASR systems may miss critical cues such as the presence of fillers, stutters, or mispronunciations, all of which contribute to the perception of disordered speech~\cite{toth2015automatic}. Another approach has been to train specialty ASR models directly on the domain-relevant data. Le et al. trained an ASR system on aphasic speech and used this system to extract speech features for the prediction of intelligibility, including: the number of fillers, phone rate, error rate, utterance length, and goodness of pronunciation~\cite{aphasia}. Similar work from Peintner et al. focused on the automatic diagnosis of neurodegenerative diseases~\cite{peintner2008learning}. In addition, work from Guerra et al. demonstrated the potential of using non-linear classifiers and multi-dimensional features to detect dysarthric speech~\cite{guerra2003modern}.
\vspace{-3pt}
\section{Data Description}
\vspace{-3pt}
\label{sec:data}
The data in this study was collected from an HD study conducted at the University of Michigan. The data consists of 62 speakers, 31 healthy and 31 with HD. Out of the 31 individuals with HD, 11 are premanifest, 12 are in the early stage, and 8 are in the late stage. HD groups were created using the Total Motor Score (TMS) and the Total Functional Capacity (TFC) score \cite{shoulson1989assessment} from the Unified Huntington's Disease Rating Scale (UHDRS) \cite{kieburtz2001unified}. Specifically, individuals were designated as premanifest HD if they had a positive gene test (HD CAG $>$ 35) and a clinician-rated score of less than 4 on the last item of the TMS (which provides an index of clinician-rated diagnostic confidence). Those with clinician-rated scores greater than or equal to 4 on the last item of the TMS were included in the manifest HD group. For those with manifest HD, TFC scores (which provide an index of clinician-rated functional capacity) were used to determine HD stage. Specifically, scores range from 0 (low functioning) to 13 (highest level of functioning); TFC sum scores of 7-13 were considered early-stage and sum scores of 0-6 were considered late-stage HD.
The data includes both read speech and spontaneous speech sections. This study focuses only on the read speech portion, during which participants read the Grandfather Passage~\cite{Grandfather_p1}. The Grandfather Passage~\cite{duffy1995motor, zraick2004reliability, darley1975motor} is a phonetically balanced paragraph containing 129 words and 169 syllables, and is a standard reading passage used in speech-language pathology. This passage is commonly used to test for dysarthric speech~\cite{patel2013caterpillar}. Table~\ref{tab:utt_data} shows additional information about the scope of our data such as the size and number of utterances.
\section{Data Transcription}
\label{sec:trans}
\subsection{Human Transcriptions}
\label{sec:orat}
Recordings were deidentified and transcribed using the CHAT approach~\cite{macwhinney2000childes} and Computerized Language Analysis (CLAN) software. CHAT transcriptions identify speech errors (phonological, semantic, or neologistic), vowel distortions, word repetitions, retracing, assimilations, dialect variances, letter and word omissions, utterances, pauses, glottal sounds unique to HD, vocalizations, spontaneous speech for each participant, and variations in rate, fundamental frequency (F0), and voice quality. Interrater reliability (greater than or equal to 90\% agreement) was established between two trained raters and a Ph.D.-level Speech Language Pathologist. The raters then individually transcribed each recording and their transcriptions were compared. Raters were required to reach a consensus for all identified discrepancies. In cases where consensus could not be reached, the Speech Language Pathologist was consulted.
\begin{table}[t]
\vspace{-3pt}
\caption{Performance of ASR system (per speaker) for healthy and HD speech, including WER, insertion (ins), deletion (del), and substitution (sub)}
\vspace{-3pt}
\label{tab:wer}
\centering
\begin{tabular}{c | c c c c}
\hline
&WER \%&Ins&Del&Sub\\
\hline
All & $9.4\pm14.8$ & $1.4\pm3.1$ & $5.0\pm11.2$ & $8.9\pm17.3$\\
HD & $16.0\pm18.7$ & $2.5\pm4.0$ & $8.7\pm14.9$ & $15.5\pm22.5$\\
HC & $2.8\pm2.3$ & $0.3\pm0.9$ & $1.3\pm1.2$ & $2.3\pm2.4$\\
\hline
\end{tabular}
\vspace{-16pt}
\end{table}
\begin{figure*}[t]
\begin{center}
\includegraphics[scale=0.63]{images/pipeline_compact.pdf}
\vspace{-3pt}
\caption{System diagram for HD classification (note: seg. is segmentation) \label{fig:data_pipeline}}
\end{center}
\vspace{-16pt}
\end{figure*}
\vspace{-3pt}
\subsection{Automated Transcripts}
\vspace{-3pt}
\label{sec:asrt}
Manual transcripts often represent a bottleneck because they are costly and time-intensive to obtain. ASR can provide an alternative. However, off-the-shelf systems are often unusable due to the acoustic mismatch between the healthy speech used to train these systems and the speech patterns of individuals in the target population. We address this by training in-domain acoustic and language models using a specialized lexicon.
Our lexicon is initialized using standard English phone-level pronunciations provided by the CMU pronunciation dictionary~\cite{weide1998cmu}. We augment this lexicon using the pronunciation errors identified in the manual transcripts. We use a bigram language model estimated over the manual transcripts.
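As a rough sketch, a bigram language model of this kind can be estimated from transcripts as follows; the add-one smoothing and the explicit boundary tokens are illustrative assumptions, not necessarily the exact configuration used here:

```python
from collections import Counter

def train_bigram_lm(sentences):
    """Estimate bigram probabilities with add-one smoothing.

    `sentences` is a list of token lists; <s>/</s> mark utterance
    boundaries. (The smoothing choice is an assumption for illustration.)
    """
    unigrams, bigrams, vocab = Counter(), Counter(), set()
    for toks in sentences:
        toks = ["<s>"] + toks + ["</s>"]
        vocab.update(toks)
        unigrams.update(toks[:-1])                  # history counts
        bigrams.update(zip(toks[:-1], toks[1:]))    # adjacent word pairs
    V = len(vocab)

    def prob(prev, word):
        return (bigrams[(prev, word)] + 1) / (unigrams[prev] + V)

    return prob

p = train_bigram_lm([["my", "grandfather", "is", "old"],
                     ["my", "grandfather", "reads"]])
# A bigram seen twice outscores an unseen one after the same history.
```

Because the read passage is fixed, such an in-domain model assigns high probability to the expected word sequences while still allowing transcription of reading errors.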
The acoustic model is a monophone Hidden Markov Model (HMM) with three-states per phone and a left-to-right topology. The emission probability of each state is estimated using a Gaussian Mixture Model (GMM). We use a monophone acoustic model rather than a triphone model due to the relatively small size of the dataset ($\sim$1 hour of total speech). The final model consists of 500 Gaussian mixtures.
The input acoustic features are Mel-frequency Cepstral Coefficients (MFCC), extracted with a 25ms Hamming window and 10ms overlap using Kaldi~\cite{povey2011kaldi}. We include the 13 MFCC coefficients, the first, second, and third order deltas.
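For intuition, the first-order deltas of a single coefficient track follow the standard regression formula with half-width $N$ and replicated edge frames; this is a sketch, and Kaldi's implementation differs in some details:

```python
def deltas(frames, N=2):
    """First-order delta coefficients for one coefficient track via the
    regression formula d_t = sum_n n*(c_{t+n} - c_{t-n}) / (2*sum_n n^2),
    with edge frames replicated."""
    T = len(frames)
    denom = 2 * sum(n * n for n in range(1, N + 1))
    out = []
    for t in range(T):
        acc = 0.0
        for n in range(1, N + 1):
            acc += n * (frames[min(t + n, T - 1)] - frames[max(t - n, 0)])
        out.append(acc / denom)
    return out

# A linear ramp has slope 1.0 away from the (replicated) edges:
print(deltas([0.0, 1.0, 2.0, 3.0, 4.0, 5.0]))  # -> [0.5, 0.8, 1.0, 1.0, 0.8, 0.5]
```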
The acoustic and language models are trained and tested using a Leave-One-Subject-Out (LOSO) approach over all HC and HD speakers. The performance of our ASR system is shown in Table~\ref{tab:wer}, obtaining an overall WER of $9.4\pm14.8\%$. This relatively low WER is largely attributable to the constrained speech produced by reading the Grandfather passage.
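The error accounting in Table~\ref{tab:wer} follows from a Levenshtein alignment between reference and hypothesis; a minimal sketch (the example tokens are illustrative, not the exact scoring tool used):

```python
def wer_counts(ref, hyp):
    """Levenshtein alignment returning (substitutions, insertions,
    deletions); WER = (S + I + D) / len(ref)."""
    R, H = len(ref), len(hyp)
    # dp[i][j] = (cost, S, I, D) for aligning ref[:i] with hyp[:j]
    dp = [[None] * (H + 1) for _ in range(R + 1)]
    dp[0][0] = (0, 0, 0, 0)
    for i in range(1, R + 1):
        c = dp[i - 1][0]
        dp[i][0] = (c[0] + 1, c[1], c[2], c[3] + 1)          # deletion
    for j in range(1, H + 1):
        c = dp[0][j - 1]
        dp[0][j] = (c[0] + 1, c[1], c[2] + 1, c[3])          # insertion
    for i in range(1, R + 1):
        for j in range(1, H + 1):
            d = dp[i - 1][j - 1]
            if ref[i - 1] == hyp[j - 1]:
                diag = d                                      # match
            else:
                diag = (d[0] + 1, d[1] + 1, d[2], d[3])      # substitution
            c = dp[i][j - 1]
            ins = (c[0] + 1, c[1], c[2] + 1, c[3])
            c = dp[i - 1][j]
            dele = (c[0] + 1, c[1], c[2], c[3] + 1)
            dp[i][j] = min(diag, ins, dele)
    _, S, I, D = dp[R][H]
    return S, I, D

ref = "you wish to know all about my grandfather".split()
hyp = "you wish know all about a my grandfather".split()
S, I, D = wer_counts(ref, hyp)   # one deletion ("to"), one insertion ("a")
```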
\vspace{-3pt}
\subsection{Preprocessing Methods}
\vspace{-3pt}
We present three approaches to investigate the effect of error propagation. We first assume the availability of manual transcripts and investigate the feasibility of extracting features for HD classification (\emph{force-aligned oracle transcription}, FA-ORAT). Next, we assume that the subject prompt is available and ask if the subject's speech goals can be used as a target for force alignment (\emph{forced-aligned grandfather transcription}, FA-GF). This allows us to investigate the effect of the mismatch introduced by speech errors. Finally, we assume only that segmentation information is available, noting when speaker utterances begin and end, but that the transcript itself is unknown (\emph{ASR transcription}, ASRT). This provides insight into the effectiveness of an automatic system that can transcribe, extract features, and predict diagnosis (Figure~\ref{fig:data_pipeline}).
All approaches use the acoustic model discussed in Section~\ref{sec:asrt}. The ASRTs are discussed in Section~\ref{sec:asrt}. FA-ORATs are generated by force-aligning the input audio to the manual transcripts. FA-GFs are generated by force-aligning the input audio to the original Grandfather passage text.
FA-ORAT and ASRT utterances are segmented using the manual transcripts before both ASR and force alignment (future work will remove the reliance on manual segmentation for ASRT). FA-GF recordings are segmented into utterances using the natural sentences within the Grandfather passage (Figure~\ref{fig:data_pipeline}).
\begin{table}[t]
\vspace{-2pt}
\caption{Average dimension size of feature vectors}
\vspace{-3pt}
\label{tab:dim-size}
\centering
\begin{tabular}{c | c c c}
\hline
&FA-ORAT & FA-GF & ASRT\\
\hline
Utt-level & $43.7\pm1.2$ & $29.2\pm1.3$ & $44.3\pm1.1$\\
Spkr-level & $336.8\pm10.6$ & $235.9\pm8.0$ & $359.0\pm8.0$\\
\hline
\end{tabular}
\vspace{-10pt}
\end{table}
\vspace{-3pt}
\section{Feature Extraction}
\vspace{-3pt}
\label{sec:feat}
We describe two feature sets: utterance-level (dynamic) and speaker-level (static). The utterance-level features are extracted over each utterance and provide insight into the relationship between the time-series behavior of the features and diagnosis. We normalize the utterance-level features using speaker-dependent z-normalization. The speaker-level features are calculated by applying summary statistics to the normalized utterance-level features, including max, min, mean, SD, range, and quartiles (25th, 50th, 75th). We group all features by subject and remove features that have either zero variance or zero information gain with respect to the target class (Table~\ref{tab:dim-size}). We perform this once at the utterance-level for our dynamic features and again after summary statistics are computed for our static features.
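The speaker-level aggregation can be sketched for a single normalized utterance-level feature track using the standard library; the quantile interpolation method is an illustrative choice, since the exact one used is not specified:

```python
import statistics

def speaker_level(track):
    """Summary statistics over one z-normalized utterance-level feature
    track: max, min, mean, SD, range, and the three quartiles."""
    q25, q50, q75 = statistics.quantiles(track, n=4)  # 25th/50th/75th
    return {
        "max": max(track), "min": min(track),
        "mean": statistics.mean(track), "sd": statistics.stdev(track),
        "range": max(track) - min(track),
        "q25": q25, "q50": q50, "q75": q75,
    }

feats = speaker_level([0.1, 0.4, 0.2, 0.9, 0.5])  # one value per utterance
```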
\vspace{4pt}
\noindent\textbf{Filler Features}: Fillers are parts of speech that are not purposeful and that do not contain formal meaning (i.e., ah, eh, um, uh). Fillers are labeled during the human transcription process. They are preserved in ORAT and are estimated in ASRT. They are ignored in FA-GF due to the absence of fillers in the original passage. The utterance-level features include: number of fillers, number of fillers per second, number of fillers per word, number of fillers per phone, total filler duration per utterance, and total filler duration per second.
\vspace{4pt}
\noindent\textbf{Pause Features}: Pauses are periods without speech that last for at least 150 ms~\cite{roark2011spoken}. The utterance-level features include: the number of pauses, number of pauses per second, number of pauses per word, number of pauses per phone, total pause duration, and total pause duration per second.
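Given a forced alignment, the pause features reduce to simple counting over the non-speech intervals; in this sketch the interval list and the `"sil"` label convention are illustrative assumptions:

```python
def pause_features(intervals, n_words, n_phones, min_pause=0.150):
    """Pause features for one utterance. `intervals` holds
    (start, end, label) tuples from the alignment; intervals labeled
    "sil" and lasting at least 150 ms count as pauses.
    (Label convention is an assumption for illustration.)"""
    pauses = [e - s for s, e, lab in intervals
              if lab == "sil" and (e - s) >= min_pause]
    utt_dur = intervals[-1][1] - intervals[0][0]
    n, total = len(pauses), sum(pauses)
    return {
        "n_pauses": n,
        "pauses_per_sec": n / utt_dur,
        "pauses_per_word": n / n_words,
        "pauses_per_phone": n / n_phones,
        "pause_dur": total,
        "pause_dur_per_sec": total / utt_dur,
    }

f = pause_features([(0.0, 0.3, "sil"), (0.3, 1.1, "speech"),
                    (1.1, 1.2, "sil"), (1.2, 2.0, "speech")],
                   n_words=4, n_phones=12)
# The 100 ms silence falls below the 150 ms threshold, so only one pause counts.
```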
\vspace{4pt}
\noindent\textbf{Speech Rate Features}: Speech rate captures an individual's speaking speed. The utterance-level features include: the number of phones, number of phones per second, number of phones per word, number of words, and number of words per second.
\vspace{4pt}
\noindent\textbf{Goodness of Pronunciation Features}: Goodness of Pronunciation (GoP) measures the fitness of a reference acoustic model (trained over all HD and HC speakers) to a given phone by computing the difference between the average acoustic log-likelihood of a force-aligned phone and that of an unconstrained phone loop~\cite{GoP}:
\vspace{-2pt}
\begin{equation}
\vspace{-1pt}
\text{GoP}(\mathbf{p}) = \frac{1}{N}\text{log}\frac{P(O|\mathbf{p})}{P(O|PL)},
\label{eq:GoP}
\end{equation}
\noindent where $\mathbf{p}$ is a sequence of phones, $O$ is the MFCC acoustic observation, $N$ is the number of frames, and $PL$ is the unconstrained phone loop. The utterance-level features include: the GoP score for each phone in the utterance.
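Since the decoder returns log-likelihoods, the ratio in Eq.~(\ref{eq:GoP}) becomes a per-frame difference; a minimal sketch, assuming the two log-likelihoods come from the forced-alignment and phone-loop passes:

```python
def gop(forced_loglik, phone_loop_loglik, n_frames):
    """Goodness of Pronunciation for one phone segment:
    (1/N) * [log P(O|p) - log P(O|PL)]."""
    return (forced_loglik - phone_loop_loglik) / n_frames

# When the unconstrained phone loop fits better than the forced phone,
# GoP goes negative, flagging a likely mispronunciation:
score = gop(-120.0, -100.0, 10)  # -> -2.0
```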
\begin{table*}
\vspace{-2pt}
\caption{Classification results (FA = forced alignment, ORAT = oracle, GF = grandfather, ASRT = ASR system)}
\label{tab:results-new}
\centering
\begin{tabular}{ c | c | c | c |c|c|c} \hline
\multirow{2}{*}{Method}&
\multicolumn{2}{c|}{FA-ORAT}&
\multicolumn{2}{c|}{FA-GF}&
\multicolumn{2}{c}{ASRT}\\
\cline{2-7}
& Accuracy & F1 (HD) & Accuracy & F1 (HD) &
Accuracy & F1 (HD) \\
\hline
\hline
$k$-NN & 0.81 & 0.77 & 0.82 & 0.79 & 0.81 & 0.77 \\
DTW & 0.87 & 0.86 & 0.84 & 0.81 & 0.81 & 0.77\\
DNN & 0.87 & 0.87 & 0.85 & 0.84 & 0.85 & 0.84\\
LSTM-RNN & 0.87 & 0.86 & 0.84 & 0.82 & 0.85 & 0.84 \\
\hline
\hline
\end{tabular}
\vspace{-10pt}
\end{table*}
\vspace{-3pt}
\section{Methods}
\vspace{-3pt}
\label{sec:methods}
We use a Leave-One-Subject-Out (LOSO) paradigm: in each run, a single subject is held out as the test speaker and the model is trained and validated on the remaining speakers. Within the training partition, 80\% of the data is used to train the model and 20\% is used to validate. This process is repeated over all speakers. All results presented are accuracies averaged over all subjects in the study. We train four models: $k$-NN with Euclidean distance ($k$-NN), $k$-NN with DTW distance (DTW), Deep Neural Networks (DNN), and Long Short-Term Memory Recurrent Neural Networks (LSTM-RNN).
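The splitting logic can be sketched as follows; the randomized validation selection is an illustrative assumption, since the selection criterion is not specified:

```python
import random

def loso_splits(speaker_ids, val_frac=0.2, seed=0):
    """Leave-One-Subject-Out: each speaker is the test set exactly once;
    the remaining speakers are shuffled into ~80% train / ~20% validation."""
    rng = random.Random(seed)
    speakers = sorted(set(speaker_ids))
    for held_out in speakers:
        rest = [s for s in speakers if s != held_out]
        rng.shuffle(rest)
        n_val = max(1, int(len(rest) * val_frac))
        yield rest[n_val:], rest[:n_val], [held_out]

splits = list(loso_splits(range(62)))  # 62 train/validation/test partitions
```

Splitting by speaker rather than by utterance guarantees that no subject's speech appears in both training and test data.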
We hypothesize that HD can be detected using speech features. We test this hypothesis first using $k$-NN, which assigns a label to an instance based on the plurality of its closest $k$ neighbors. $k$-NN uses the speaker-level features, while DTW uses the utterance-level features. In both approaches, we sweep over the number of neighbors, $k$.
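DTW compares two utterance-level feature sequences of possibly different lengths by warping the time axis; a 1-D sketch (the real features are vectors, where the absolute difference would be replaced by a vector norm):

```python
def dtw_distance(a, b):
    """Dynamic Time Warping distance between two sequences (1-D here
    for clarity), via the classic dynamic-programming recurrence."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

print(dtw_distance([1, 2, 3, 4], [1, 2, 3, 4]))     # identical -> 0.0
print(dtw_distance([1, 2, 3, 4], [1, 1, 2, 3, 4]))  # stretched copy -> 0.0
```

Because the warp absorbs differences in sequence length, DTW lets $k$-NN compare speakers who produced different numbers of utterances.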
We further hypothesize that the relationship between speech features and diagnosis can be more accurately modeled by exploiting non-linear feature interactions. We test this hypothesis using Deep Neural Networks (DNN) over the speaker-level features and Long Short-Term Memory Recurrent Neural Networks (LSTM-RNN) over the utterance-level features. Both DNN and LSTM-RNN are implemented using Keras with a TensorFlow~\cite{abadi2016tensorflow} backend. The DNN is composed of two fully connected layers with ReLU activation functions, a softmax output layer for binary classification, and dropout layers between each fully connected layer in the network. Our LSTM-RNN is composed of two LSTM layers with recurrent dropout, bias L2 regularization, and kernel L2 regularization, followed by a softmax output layer. In both networks, we perform a hyperparameter sweep for layer width (32, 64, 128) and dropout rate (0.0, 0.2, 0.4). We use an ensemble approach, in which we train five separate models and the mode of the five predictions is used as the final prediction of the system.
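The ensemble step itself is just a per-subject majority vote over the five models' predictions:

```python
from collections import Counter

def ensemble_predict(model_preds):
    """Final label per test subject = mode of the individual models'
    predictions. `model_preds` holds one prediction list per trained model."""
    return [Counter(votes).most_common(1)[0][0]
            for votes in zip(*model_preds)]

# Five models, three test subjects (1 = HD, 0 = healthy):
preds = [[1, 0, 1], [1, 0, 0], [0, 0, 1], [1, 1, 1], [1, 0, 1]]
print(ensemble_predict(preds))  # -> [1, 0, 1]
```

With an odd number of binary voters the mode is always unique, so no tie-breaking rule is needed.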
\begin{table}[t]
\vspace{-2pt}
\caption{The confusion matrix derived from the average percentage of classifications across all data processing methods and classifiers. Rows = ground truth, columns = prediction}
\label{tab:conf}
\centering
\begin{tabular}{c | c | c }
\hline
& Healthy & HD \\
\hline
\hline
Healthy & 0.95 & 0.05 \\
Premanifest & 0.54 & 0.46 \\
Early & 0.14 & 0.86 \\
Late & 0.02 & 0.98 \\
\hline
\end{tabular}
\vspace{-14pt}
\end{table}
\vspace{-3pt}
\section{Results}
\label{sec:results}
\vspace{-3pt}
The results (Table~\ref{tab:results-new}) show the feasibility of HD detection using all presented methods.
The FA-ORAT approach results in the most accurate HD predictions, with an accuracy of 0.81 for $k$-NN, 0.87 for DTW, 0.87 for DNN, and 0.87 for LSTM-RNN. In the majority of cases in Table~\ref{tab:results-new}, the accuracy slightly decreases when less accurate transcriptions are used in place of ORAT. For example, we obtain an accuracy of 0.85 for DNN when using both FA-GF and ASRT.
We assess the statistical significance of the changes in performance across classification and transcription approaches using Cochran's Q test, which compares the binary predictions over each of the 62 speakers across all methods and classifiers. We assert significance when $p<0.05$. The result demonstrates that there is not an overall statistically significant difference across classifiers and transcription methods (Q(11)=15.6, p=0.157). This suggests that there are multiple opportunities to recognize symptomatology and avenues to research how speech changes are associated with illness. Further, given appropriately constrained content, ASR transcripts can be used as a substitute for manual transcripts for extracting speech features to assess HD symptomatology.
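The Q statistic over the subjects-by-methods matrix of correct/incorrect predictions can be computed directly (toy matrix shown; the p-value then follows from a chi-squared distribution with $k-1$ degrees of freedom):

```python
def cochrans_q(x):
    """Cochran's Q for a subjects-by-methods binary matrix `x`
    (1 = correct prediction). Returns (Q, degrees of freedom)."""
    N, k = len(x), len(x[0])
    col = [sum(row[j] for row in x) for j in range(k)]   # per-method totals
    row = [sum(r) for r in x]                            # per-subject totals
    mean_col = sum(col) / k
    num = k * (k - 1) * sum((g - mean_col) ** 2 for g in col)
    den = k * sum(row) - sum(L * L for L in row)
    return num / den, k - 1

# Toy example: 4 subjects, 3 method/classifier combinations.
Q, df = cochrans_q([[1, 1, 1], [1, 0, 1], [0, 0, 1], [1, 0, 0]])
```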
\begin{table}[t]
\vspace{-2pt}
\caption{Percentage of features (GoP, Speech Rate (SR), Pauses (P)) extracted from ORAT that are statistically significantly different (p$<$0.05) compared to the HC population}
\label{tab:features}
\centering
\begin{tabular}{c | c| c |c | c } \hline
\multirow{2}{*}{Stage}&\multicolumn{1}{c|}{}&\multicolumn{3}{c}{ORAT}\\
\cline{2-5}
& Feature & Increase & Decrease & No Change \\
\hline\hline
\multirow{3}{*}{\begin{turn}{90}Pre\end{turn}} & GoP & 0.0 & 0.06 & 0.94 \\
& SR & 0.04 & 0.32 & 0.64\\
& P & 0.35 & 0.0 & 0.65 \\
\hline
\multirow{3}{*}{\begin{turn}{90}Early\end{turn}} & GoP & 0.12 & 0.76 & 0.12 \\
& SR & 0.2 & 0.48 & 0.32 \\
& P & 0.85 & 0.0 & 0.15\\
\hline
\multirow{3}{*}{\begin{turn}{90}Late\end{turn}} & GoP & 0.12 & 0.76 & 0.12 \\
& SR & 0.2 & 0.6 & 0.2 \\
& P & 0.64 & 0.0 & 0.36 \\
\hline
\end{tabular}
\vspace{-14pt}
\end{table}
Our system is able to accurately distinguish between healthy controls and individuals with early- and late-stage HD (Table~\ref{tab:conf}). Our results show improved classification for later HD stages, which suggests that our features can more accurately capture HD speech for individuals whose disease is more advanced, compared to those at earlier stages. Further, it points to the difficulty in recognizing premanifest HD due to similarities in speech with both the healthy and HD populations.
We analyze the relationship between feature category and disease stage, focusing on the static ORAT feature set. We aggregate the test sets generated over each run of LOSO (62 sets), retaining only the features that have non-zero variance and information gain across all 62 speakers (GoP, speech rate, pause, and filler features). We then separate the data into the four disease categories (HC, premanifest, early, late) and identify the subset of features that are significantly different between the HC population and each of the disease stages. We assert significance when $p<0.05$, using a two-tailed independent samples t-test. We apply Bonferroni correction to account for the family-wise error rate. We present the percentage of features that are statistically significant between the HC population and each disease stage and note whether the features of the individuals with HD are greater than, less than, or not statistically significantly different from those of the HC population. The results demonstrate that generally, GoP decreases, speech rate decreases, and the number of pauses increase with disease severity (Table~\ref{tab:features}).
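The Bonferroni step simply rescales the significance threshold by the number of comparisons:

```python
def bonferroni(p_values, alpha=0.05):
    """Flag each of m tests as significant only if p < alpha / m,
    controlling the family-wise error rate."""
    m = len(p_values)
    return [p < alpha / m for p in p_values]

# With 5 tests, the per-test threshold drops from 0.05 to 0.01:
print(bonferroni([0.001, 0.02, 0.009, 0.2, 0.04]))
# -> [True, False, True, False, False]
```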
\vspace{-3pt}
\section{Conclusion}
\vspace{-3pt}
\label{sec:conc}
In this work, we demonstrate the effectiveness of classifying HD using key speech features, including speech rate, pauses, fillers, and GoP. Our experimental results show that automated approaches can be used to generate transcripts, extract features, and classify HD. The accuracy of the presented method increases with disease stage, which suggests that speech may serve as an effective biomarker that could be used to track HD progression. Finally, the performance of both the static and dynamic approaches suggests that there are multiple opportunities for tracking symptomatology in this domain. Further improvements to our automated system can be made by increasing the performance of ASR by incorporating additional out-of-domain data (e.g.,~\cite{le2017automatic}). We will also investigate the development of new, more descriptive features.
\vspace{-3pt}
\section{Acknowledgements}
\vspace{-3pt}
Work on this manuscript was supported by the National Institutes of Health (NIH), National Center for Advancing Translational Sciences (UL1TR000433). In addition, a portion of this study sample was collected in conjunction with the National Institutes of Health (NIH), National Institute of Neurological Disorders and Stroke (R01NS077946) and/or Enroll-HD (funded by the CHDI Foundation). Lastly, this work was also supported by the National Science Foundation (CAREER-1651740).
\bibliographystyle{IEEEtran}
Soguillo del Páramo is a Spanish village belonging to the municipality of Laguna Dalga, in the province of León, autonomous community of Castile and León.
Geography
Location
Demography
Population trend
{{Gráfica de evolución|tipo=demográfica|anchura=600|color_18=blue|nombre=Soguillo del Páramo|2000|119|2001|117|2002|114|2003|115|2004|112|2005|111|2006|108|2007|105|2008|102|2009|101|2010|101|2011|98|2012|93|2013|93|2014|91|2015|88|2016|83|notas=}}
See also
References
External links
Ayuntamiento de Laguna Dalga
Localities of Laguna Dalga
# Work and Span

By Aditi Gupta and Brandon Wu, May 2020. Revised September 2020

We will now turn towards a more robust notion of work and span that lets us analyze our conception of asymptotic runtime more effectively. It still depends on asymptotic analysis, but is more deliberate about how we generate the asymptotic bound for a function from the code itself. Additionally, we will not only analyze the approximate number of steps of the program (which corresponds to the runtime of the program, given sequential execution), but also the approximate longest chain of dependencies that exists in the program, assuming that computations can be run in parallel. We will elaborate more on this idea in this chapter.

## Parallel Computing

It is intuitive to view things as occurring sequentially. Whether it is reading a tutorial, writing a list of instructions, or making a plan for the future, sequential actions are very easy to think about. Even programs are written in a stepwise manner, with computations happening one after the other in a prescribed way. It seems to be in our nature to impose some kind of order on a list of actions.

Despite that, however, sequential evaluation is not always the most efficient. Sequential evaluation introduces dependencies where other subtasks cannot be started until we have finished the current subtask, which has the effect of potentially inducing wait times where none need exist. For instance, if your plan is to do the laundry and your homework, it might not be the most time-efficient to wait until the washer is done before getting started on your work. There is no dependency between laundry and homework - there is no logical reason why you should have to wait for one before the other, so you could do them both at the same time, or in parallel.

Parallel computing is a principle that is becoming more and more important as time goes on. Computers now more frequently have multiple cores in their processors, which means that tasks can be subdivided and assigned out to independently acting agents.

The benefits of doing so are clear. Suppose that we are stocking a shelf with merchandise. If the shelf is tall, this may take us a while - roughly linear in the height of the shelf, we can imagine (supposing we have an infinite stock of items, and that climbing a ladder somehow isn't a factor). If we had a person to dedicate to each shelf, however, then we could stock the shelves in "constant" time - independent of the number of shelves that there actually are. This will be a driving idea behind how we look at parallelism.

While we will not delve into the implementation details of parallel computing (which is more apt for a systems perspective), we will perform basic analysis of asymptotic complexity based on that premise. These take the form of work and span, fundamental ideas that will drive the next section.

## Theory

First, let us consider what we will term a task dependency graph. This is not so important a concept to memorize, but it will help in conceptualizing work and span. A task dependency graph is a directed acyclic graph (that is, a graph whose edges are one-way, and in which there exist no loops) that represents the dependencies when trying to perform a set of tasks. Each node is a task, labelled with the time that it takes to execute it (which is a singular unit, unable to be reduced otherwise), as well as edges that represent the dependencies in the graph. No task can be started until all of the tasks that have edges directed towards it are finished - that is, all of a task's inbound edges denote its prerequisites.

With this knowledge, we will be able to define what we mean by work and span.

[Work] The work of a computation denotes the number of steps it takes to run, assuming access to only a single processor. Work thus represents the worst-case sequential evaluation time, and can be upper bounded with asymptotic analysis.

[Span] The span of a computation denotes the number of steps that it takes to run, assuming access to infinitely many processors that allow us to run tasks in parallel. Span thus represents the worst-case parallel evaluation time, and can be upper bounded with asymptotic analysis.

What we find is that work directly corresponds to our previous intuition of the complexity of a computation, since our previous analyses have always been sequential. Span, however, is not quite as easy to eyeball. Now, we are really looking for the longest chain of dependencies, that is, the longest sequence of tasks that must be run sequentially, since everything else can be run concurrently. The assumption of infinitely many processors, while obviously unrealistic, helps to simplify our analysis, and provides us a "target" for the efficiency of a fully parallel algorithm.

We illustrate these concepts with the following graph.

So in this example, the work of our graph would be $$1+3+6+2+5+9+3+3+10 = 42$$, since with a single processor, the dependencies don't really matter to us. We have no choice but to complete every task, and the precise order doesn't matter. That isn't to say that we can execute the tasks in any order, which plainly isn't true - we simply mean that there is no order that can change what our runtime is.

On the other hand, for span we must consider the length of the longest path. The span of this graph would thus be $$1 + 3 + 6 + 9 + 10 = 29$$, since that is the longest path. Even being able to execute everything else in a parallel way, we cannot avoid the fact that these nodes must follow one after the other. This path is thus the limiting factor in our runtime - ultimately it constrains the amount of time that we expend.

Task dependency graphs are a concept that we discuss purely to be able to understand the idea of work and span theoretically. We will look at examples in terms of actual SML code in the next section, which will be primarily where we do our work/span analysis.

## Work/Span Analysis of Code

The previous example was rather contrived. For one thing, it was prespecified - we already knew all of the tasks that there were, along with their dependencies and task times. As such, we could compute a simple, numerical answer. This will likely not be the case. We are interested in work/span analysis of algorithms, which will yield us another function - one describing the runtime complexity of the algorithm as a function of some notion of input size.

For recursive functions, work/span analysis is very easy to do. We characterize it in terms of recurrence relations, which are themselves recursive functions describing the work or span of some code. Then, we simply solve for the closed form of the recurrence relation and estimate a Big-O bound to arrive at our desired complexity.

Consider the following example:

```sml
fun length ([] : int list) : int = 0
  | length (x::xs : int list) : int = 1 + length xs
```

The first step to determining the work and span of such a function is to write a recurrence relation. These steps are explicit - the code should determine the recurrence relation, and the recurrence relation should determine the Big-O bound. We will first analyze this function's work complexity, then move on to span.

First, we should fix some notion of input size. This will differ based on what our recurrence is recursing on, but in this example it seems to be the size of the input list. Note that this follows directly from the code - if this were the factorial function, we might say that the recurrence is in terms of the value of the input, and as we will later see, if the input were a tree, we might write the recurrence in terms of the number of nodes in the tree.

So we can write the following recurrence for work. We will explain soon what exactly it means, and how to arrive at it:

$$W_{length}(n) = c_0 + W_{length}(n-1)$$
$$W_{length}(0) = c_1$$

This recurrence is made of two parts - the recursive case and the base case. The first equation, for $$W_{length}(n)$$, simply denotes what the work for an input size of $$n$$ should be - defined recursively. The second equation, for $$W_{length}(0)$$, defines what the work for an input size of $$0$$ should be. This directly corresponds to our code, which has two clauses: one for a list of length $$0$$ (that being []), and one for the general case. This is an important observation to make - the recurrence follows directly from the code.

The recursive case says that, for an input of size $$n$$, the work done is $$c_0 + W_{length}(n-1)$$. Here, $$c_0$$ denotes some constant. This is supposed to correspond to the recursive case of the function, and if we look at it, we have a recursive call length xs, as well as some other work of adding one. Adding one, being an arithmetic operation, is a constant-time process, meaning that it takes a non-zero constant amount of time. This is what $$c_0$$ is supposed to represent - the constant amount of non-recursive work that must be done, after the recursive call has finished. It is not important what $$c_0$$ is, just that it is some unspecified amount of work that is not a function of $$n$$.

Conversely, $$W_{length}(n-1)$$ represents exactly the amount of work done by the recursive call, since it is literally defined to be the amount of work done on an input of size $$n-1$$, which is exactly what happens when we call length xs, where xs has length $$n-1$$.

NOTE: Even if we did not have the addition operation, we would still have $$c_0$$. This is because merely entering the function and figuring out which case to execute takes some non-zero amount of work - it is impossible to run the recursive call perfectly with no other time expense. As such, we would see exactly the same recurrence even if the recursive case were length (x::xs : int list) : int = length xs (which would also be a very bad length function).

For the base case, we have that $$W_{length}(0) = c_1$$, since in the base case we just return 0. This has a constant amount of work associated with it, as argued previously, so we use the constant $$c_1$$ to denote that, since the amount of work is likely not the same constant as that in the recursive case, when adding 1.

So this is how we arrive at the work recurrence for length. We will now turn to the span recurrence, which we obtain as:

$$S_{length}(n) = c_0 + S_{length}(n-1)$$
$$S_{length}(0) = c_1$$

Note that the span recurrence is exactly the same as the work recurrence. This should make sense, because there is no opportunity for parallelism in the length function - we can only pop off elements one by one from the list. In the recursive case, we must wait for the result of the recursive call on xs, which means we unavoidably must expend the span of $$S_{length}(n-1)$$ - additionally, we have a data dependency. We cannot execute the addition in 1 + length xs until we obtain the result of length xs, which means that we must sum the time it takes to compute length xs (that being $$S_{length}(n-1)$$) and the time it takes to carry out the addition operation (that being $$c_0$$).

Now we will begin the task of actually solving the recurrence. They are the same recurrence, so without loss of generality we will solve just the work recurrence.

We know that it has the form of $$W_{length}(n) = c_0 + W_{length}(n-1)$$, and eventually reaches a base case at $$W_{length}(0) = c_1$$. We can "unroll" the recurrence a few times to see if we can spot a pattern, and then arrive at our answer.

So we start out with $$W_{length}(n) = c_0 + W_{length}(n-1)$$, but if we invoke the definition of $$W_{length}(n-1)$$, we can produce $$c_0 + c_0 + W_{length}(n-2)$$, since $$W_{length}(n-1) = c_0 + W_{length}(n-2)$$. By doing the same for $$W_{length}(n-2)$$, we get $$c_0 + c_0 + c_0 + W_{length}(n-3)$$. It seems we've hit upon a pattern - each time we "unroll" the definition of $$W_{length}(n)$$, for progressively lower $$n$$, we get another $$c_0$$ term out. Then, we know that the recurrence should eventually solve to:

$$W_{length}(n) = (\sum_{i=1}^n c_0) + c_1$$

We will usually omit the $$c_1$$, since it does not matter asymptotically. Then, clearly this is equivalent to $$nc_0 + c_1$$. We see that this closed-form solution is linear in $$n$$ - so then we have that the work and span of this function are in $$O(n)$$, which is consistent with what we would expect if we had "eyeballed" it.

## Work/Span Analysis: Trees

First, we will discuss the definition of a binary tree in SML:

```sml
datatype tree = Empty
              | Node of tree * int * tree
```

This denotes that a tree is either the constant constructor Empty, denoting the empty tree, or a Node that contains an integer value, as well as two tree children, which can themselves be Nodes or Empty.

So for instance, we may represent the above tree with Node(Node(Node(Empty, 4, Empty), 3, Empty), 1, Node(Empty, 2, Empty)). Put more fancily:

```sml
Node(
  Node(
    Node(
      Empty,
      4,
      Empty
    ),
    3,
    Empty
  ),
  1,
  Node(
    Empty,
    2,
    Empty
  )
)
```

Now we will analyze the complexity of finding the size of a tree. Consider the following implementation for doing so:

```sml
fun size (Empty : tree) : int = 0
  | size (Node (L,x,R) : tree) : int = size L + 1 + size R
```

First convince yourself that it actually works. It simply recursively finds the size of the left and right trees, then adds one for the node that it is currently at. In the empty case, we consider the empty tree to have a size of 0.

The major difference between this function and the previous length function is that length had one recursive call - size has two. We will need to reflect this change when we write our recurrences. Additionally, we need a new variable for our recurrence - we no longer have a list whose length we can induct on. A natural analogue is $$n$$, the number of nodes in the tree, so we will take that as our recurrence variable. We will focus first on work.

We will obtain the following work recurrence:

$$W_{size}(n) = c_0 + W_{size}(n_l) + W_{size}(n_r)$$
$$W_{size}(0) = c_1$$

where we define the number of nodes in the tree $$n = 1 + n_l + n_r$$, and $$n_l$$ and $$n_r$$ denote the number of nodes in the left and right subtrees, respectively. This follows similarly to our recurrence for length in the previous part, where $$c_0$$ is just some constant amount of work that we necessarily have to do, and the two $$W_{size}$$ calls come from the two recursive calls we make on L and R.

Now, we don't know precisely how big $$n_l$$ and $$n_r$$ are with respect to $$n$$. This makes our analysis a little more tricky, but essentially all we need to do is think of the worst case, as we are interested in the worst-case asymptotic complexity of this function. For work, however, there is no worst case - no matter how the tree is structured, we must visit every node once, doing a constant amount of work each time. So we should obtain, in the end, $$W_{size}(n) = nc_0 + c_1$$, which we know is $$O(n)$$. So in this case, we didn't have to think about the structure of the tree. In the next section, it will matter.

## Work/Span Analysis: Balanced vs Unbalanced Trees

We will revisit the same example, except from the perspective of span.

The important point to note is that, now, we have two separate recursive calls happening in the recursive case of size. These recursive calls have no data dependency - neither depends on the result of the other. This means that they can be run in parallel, which means that the total span we compute should just be the max over both. This is because we can imagine that both of them lead to different "paths" in our task dependency graph - we are only interested in the maximum-length path. So we will run both, and whichever one takes longer to return an answer is the "limiting reagent" of our computation.

So we will write the span recurrence as follows:

$$S_{size}(n) = c_0 + \max(S_{size}(n_l), S_{size}(n_r))$$
$$S_{size}(0) = c_1$$

Note that we are now taking the max over the two recursive calls. This time, we cannot handwave the structure of the tree like we did in the previous part - if one path is significantly longer than the other, then it will stall the computation for longer. We still must visit every node, but some of these visits can occur in parallel.

We will consider the first case - an unbalanced tree. Suppose that the tree is heavily unbalanced - akin to a (diagonal) linked list. Without loss of generality, let it be "pointing" to the left. Then, $$n_l = n - 1$$, and $$n_r = 0$$. The max over both recursive calls should then clearly be that of $$S_{size}(n-1)$$, since it has to compute the size of a larger tree.

So we can update our recurrence and obtain:

$$S_{size}(n) = c_0 + S_{size}(n-1)$$
$$S_{size}(0) = c_1$$

This recurrence is exactly the same as that of length, so we know that we will get $$S_{size}(n) \in O(n)$$. This should make sense intuitively, since the depth of the tree is $$n$$, and there are dependencies between each level - we cannot go to the next level until we are done with the current one. So we cannot avoid having to visit every level sequentially, which results in $$O(n)$$ span.

Now, what if we consider a balanced tree? The balanced case is when the numbers of nodes in the left and right subtrees are roughly equal - that is, $$n_l = n_r = \frac{n}{2}$$. We will consider them exactly equal to simplify our analysis, but we will obtain the same asymptotic answer. Then, we know that the maximum is just either one of them, since they will have the same span.

So we can update our recurrence and obtain:

$$S_{size}(n) = c_0 + S_{size}(\frac{n}{2})$$
$$S_{size}(0) = c_1$$

This is slightly different from our length recurrence. We will try unrolling to make sense of this recurrence.

We have that $$S_{size}(n) = c_0 + S_{size}(\frac{n}{2})$$. Plugging in the recursive definition of $$S_{size}(\frac{n}{2})$$, we get that this expands to $$c_0 + c_0 + S_{size}(\frac{n}{4})$$, which by the same trick expands to $$c_0 + c_0 + c_0 + S_{size}(\frac{n}{8})$$, and so on and so forth. We note that we are dividing the number of nodes by 2 each time - and we know that we can divide $$n$$ by two roughly $$\log_2(n)$$ times. So in total, we can solve the summation of $$S_{size}(n)$$ as $$S_{size}(n) = (\sum_{i=1}^{\log_2(n)} c_0) + c_1$$.

This simplifies to $$S_{size}(n) = \log_2(n)c_0 + c_1$$. This is a logarithmic function of $$n$$, so we get that the span of size is in $$O(\log n)$$. Thus, we obtain a different span for balanced trees versus unbalanced trees - balanced trees are more efficient and parallelism-friendly.

## Work/Span Analysis: Size-dependent Operations

In the past two examples, we have only seen functions that did a constant amount of non-recursive work. We will now analyze a function whose non-recursive work is a function of the input size $$n$$. This will result in different kinds of recurrences. First, however, we will digress briefly to motivate the example that we will analyze.

[Case Study: Tree Traversal]

When analyzing trees, it is often prudent to utilize tree traversal, a systematic way of enumerating the elements in a tree. There are multiple different ways to do this, depending on what your intentions are - a few being preorder, inorder, and postorder traversal.

With these different methods of traversal, we can turn a tree into a different kind of ordered data structure, such as a list or sequence. This can come in handy when we desire fast future access to any arbitrarily ranked node in the tree, or if we want to convert it for purposes of printing, for instance.

Each traversal is characterized by a certain "strategy" of traversal, depending on how it ranks the three possible directions that it can go - root, left, and right. Inorder traversal, for instance, is characterized by left-root-right prioritization - this means that it goes left first, and if it can't go left, then it visits the root node, and otherwise it goes right. Note that this does not mean that it visits the root of the left subtree first - it simply reruns the same process on the entire left subtree. No matter what the traversal strategy is, a node is never actually visited until the "root" action is taken. Preorder traversal is root-left-right, and postorder is left-right-root. Examples of preorder and inorder traversals (the most common you will see in this book) are below.

Tree traversals can also come in handy when generating different notations for mathematical expressions represented in the form of a binary expression tree, which has nodes that consist of either a numeric constant with no children, a unary operation with a single child, or a binary operation with two children. For instance, a binary expression tree for the mathematical expression $$(4-1) * 2$$ is shown below.

With inorder traversal of this expression tree, we can generate the constants and symbols in exactly the same order as $$(4-1) * 2$$, which is how we would normally interpret it. Preorder and postorder traversal, however, result in an interesting interpretation - what are known as prefix (or Polish) and postfix (or Reverse Polish) notation.

In prefix notation, by using preorder traversal, we obtain the expression $$* - 4 1 2$$, which is how we would interpret the same expression if all of our operators appeared before their operands. Similarly, with postorder traversal, we obtain the expression $$4 1 - 2 *$$ in postfix notation. Prefix and postfix notation are significant in their lack of ambiguity - while infix notation is easy for humans to read, it sometimes requires parentheses to denote how operator precedence takes place. Prefix and postfix notation have no such flaw - they are unambiguous in how operations take place. In programming language interpreters, such notations are sometimes used to represent mathematical expressions.

This digression serves as motivation for the next function that we will analyze - writing the preorder traversal of a tree in SML. The code looks like this:

```sml
fun preord (Empty : tree) : int list = []
  | preord (Node (L, x, R) : tree) : int list = x :: (preord L @ preord R)
```

We can readily see that this follows the root-left-right order that we specified earlier for preorder traversal. Recall that @ is the function for list concatenation, and has a complexity of $$O(n_l)$$ in $$n_l$$, the size of the left input list. Thus, as stated before, this function does a non-constant amount of work at each recursive call - we must evaluate @ on the results of preord L and preord R, which is a function of $$n$$, the number of nodes in the tree.

We will analyze only the balanced case for this function. We invite the reader to think about the unbalanced case on their own.

For the recursive case, we know that $$W_{preord}(n)$$ will take the form of $$c_0 + W_@(n_l) + W_{preord}(n_l) + W_{preord}(n_r)$$. By our balanced assumption, we know $$n_l = n_r = \frac{n}{2}$$, so we can write our work recurrence as:

$$W_{preord}(n) = c_0 + \frac{n}{2}c_1 + 2W_{preord}(\frac{n}{2})$$
$$W_{preord}(0) = c_2$$

Note that the term $$W_@(n_l)$$ is a recurrence in terms of $$n_l$$, the size of the left list given as input to @. Since we know that the work complexity of @ is $$O(n)$$, we can replace $$W_@(\frac{n}{2})$$ with $$\frac{n}{2}c_1$$, which is simply some constant $$c_1$$ scaled by a linear factor of the input $$\frac{n}{2}$$. This is how we will generally deal with analyzing the complexity of functions that make use of helper functions.

We will make use of a new method to solve this recurrence - the Tree Method.

[Tree Method] The tree method is a method for solving certain recurrences - ones whose levels sum to the same quantity, and which usually have multiple recursive calls. In essence, if each level has the same amount of computation, then the recurrence solves to (number of levels) * (amount at each level).

The below diagram illustrates the Tree Method.

We will now explore exactly how we arrived at this conclusion.

First, note that this tree exists as a result of the recurrence. We used the code to specify the recurrence, and then the recurrence itself described this tree. It has a branching factor of 2 (that is, two children for each node that is not a leaf), since the recursive case of the recurrence has two recursive calls, and at each level the size of the input changes. Since the recursive calls are called on inputs of size $$\frac{n}{2}$$, each level results in a division by two of the input size.

Additionally, we know that the amount of work at each node (of input size $$n$$) is necessarily $$c_1 \frac{n}{2}$$. There is technically also a $$c_0$$ term, but we will omit it since it is asymptotically dominated by $$c_1 \frac{n}{2}$$. The precise non-recursive work done by each "node" is specified slightly down and to the left of each node. Individually, they don't look very nice to sum over - at level $$i$$, it appears the work at each node is $$c_1 \frac{n}{2^{i+1}}$$. However, level $$i$$ also has $$2^i$$ nodes, by the branching nature of the recurrence tree. As such, the total amount of work done at level $$i$$ is just $$c_1 \frac{n}{2^{i+1}} \cdot 2^i = c_1 \frac{n}{2}$$, which is not a function of the level $$i$$.

As such, each level has the same amount of work - which is very convenient, as we can now just take that quantity and multiply it by the number of levels. So in total, when we solve out the recurrence, we should obtain $$W(n) = (\sum_{i=1}^{\log_2(n)} c_1\frac{n}{2}) + c_2n$$, where the $$c_2n$$ term is obtained separately from the base level, due to the $$n$$ leaves that each have $$c_2$$ work.

The term $$\sum_{i=1}^{\log_2(n)} c_1\frac{n}{2}$$ thus goes to $$\frac{c_1}{2} n\log_2(n)$$, so in total we obtain $$W(n) = \frac{c_1}{2} n\log_2(n) + c_2n$$, which is in $$O(n \log n)$$. So we're done.

The tree method is really nothing more than a visual way to view the recurrence - it is possible to achieve the same effect by just writing a summation. It is sometimes more intuitive to visualize, however, and for recurrences where the levels sum to the same amount, the tree method is very effective. Not all recurrences exhibit such behavior, though, and it's hard to know a priori whether a given recurrence does. Nevertheless, it is a powerful method and sufficient for many purposes.

We leave the span analysis of this function to the reader.

## Conclusions

Asymptotic analysis is a very important technique for attempting to categorize the efficiency of programs. Moreover, it is not enough to simply find the asymptotic sequential complexity of a function - parallel computation is becoming increasingly important, and purely sequential analyses are not representative of real-world algorithms. Work and span, analyzed through recurrence relations, form a powerful framework for examining the complexity of recursive functions - one robust enough to classify many kinds of algorithms.
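To make the work and span definitions concrete, here is a small executable sketch (in Python, for brevity) that computes both quantities directly on a task dependency graph. The edge structure below is hypothetical - the chapter's figure is not reproduced here, so this is just one graph consistent with the totals computed in the Theory section (work $$42$$, and span $$29$$ along the chain $$1 \to 3 \to 6 \to 9 \to 10$$).

```python
from functools import lru_cache

# Hypothetical task dependency graph: costs[i] is the time of task i, and
# edges[u] lists the tasks that may only start after task u finishes.
costs = [1, 3, 6, 2, 5, 9, 3, 3, 10]
edges = {0: [1, 3], 1: [2, 4], 2: [5, 6], 3: [4], 4: [7], 5: [8], 6: [8], 7: [8]}

def work(costs):
    # Work: with one processor every task must run, so order is irrelevant.
    return sum(costs)

def span(costs, edges):
    # Span: cost of the most expensive dependency chain (longest path in
    # the DAG), i.e. the best achievable time with unlimited processors.
    @lru_cache(maxsize=None)
    def longest_from(u):
        succs = edges.get(u, [])
        return costs[u] + (max(longest_from(v) for v in succs) if succs else 0)

    all_succs = {v for vs in edges.values() for v in vs}
    roots = [u for u in range(len(costs)) if u not in all_succs]
    return max(longest_from(r) for r in roots)
```

With infinitely many processors, these nine tasks would finish in 29 time units rather than 42, matching the hand computation in the Theory section.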
You'll remember Jarryd James!
May 14, 2015 (updated March 7, 2016) · Posted in New talent! · Tagged: Debut, Hottest new talent, music video, News, single, Soul
This is pure beauty: the song and video for Jarryd James' single "Do You Remember". Take note of the lush production and the smooth falsetto. This is a musical talent worth following and checking out!
Here's a taster - the music video "Do You Remember" by Jarryd James!
To be honest, I'll be mighty surprised if you don't remember that name after hearing and seeing this beautiful track and its video!
Make sure to follow Jarryd James on Facebook and Twitter; you can also hear him on Soundcloud. If you like photos, make sure you add Jarryd James to your Instagram.
Q&A: Bob Frank, interview with a cult hero
April 17, 2014 (updated March 10, 2016) · Posted in Legend, Legendary artist, Q&A / Interview · Tagged: Bob Frank, Classic, Cult Icon, Debut
He was young, he was once called "the new Bob Dylan", and he released one self-titled album.
The album has since become a much sought-after collectors' item.
He has become a cult hero - and is probably one of the best singer-songwriters you have never heard*.
The year was 1972.
His name is Bob Frank…
But he didn't disappear; he was always there, writing songs, playing music. And he started recording again almost 30 years after his cult album "Bob Frank" was released, and has since recorded 7 albums in all.
This Q&A interview with Bob Frank is mostly about his debut "Bob Frank", a bit about how his life has been, and how remarkably little the record industry has changed since 1972.
Click on the album cover to listen to Bob Frank's debut!
-Back in 1972 you released your self-titled debut album. How was the process of recording this album?
The album was put together by Cletus Haegert and Gary Walker. They picked the songs and they produced it. I just sat in the studio and drank wine and smoked weed and sang the songs, over and over again, until they were happy with it. Then they got some great musicians to add the finishing touches, Charlie McCoy on harmonica, Buddy Spicher on fiddle, Eric Weissberg on guitar, and Russell George on bass. So the overall sound of the album was achieved by Clete and Gary.
-It seems like you have a narrative going through the album – would you call it a concept album? If so, could you tell us more about the concept?
Well, at the time, it was not any specific concept, other than a bunch of story songs, little vignettes. Like I say, Gary and Clete picked the songs. If there was any concept there, it was theirs.
-You were called "the new Dylan" back in the day – was this something that caused a lot of pressure for you and your career?
Not really. I was never anything like Dylan actually, other than, we both wrote songs and played the guitar and sang. I don't think we were very similar in the way we did this.
Please, send me link if you know where this picture is borrowed from, thanks!
-It seemed like your career came to a quick stop after your debut, and your album has become a much sought-after collector's item. What happened?
One thing that happened was, at the release party at Max's Kansas City in New York, I refused to play any of the songs on the album. I did this because Maynard Solomon at Vanguard had promised me that he would not release the album unless I was totally satisfied with it, but in fact, there were a couple of things I wanted him to do that he never did, so I figured "two can play at this game." So to "get even" with him, I figured I wouldn't play the songs from the album at that gig. It was a dumb thing to do. As Jim Dickinson said, "Not a good career move." Basically, it ended my "career" before it ever got started. Vanguard didn't ship any more of the albums, other than the ones they'd already shipped. So it was only in a few places that people heard it. Mostly in North Carolina and KFAT, a radio station in Gilroy, California. Also, some places down south, in Arkansas and Texas.
-I read somewhere that you were dubbed something like «the greatest singer-songwriter you´ve never heard of» (was it Rolling Stone magazine?). How is it to get a title like this?
Jim Dickinson gave me that title. It was one of his unique sayings. He had a lot of 'em. I thought it was perfect, fit me like a glove, so I put it on my website. My "music career" is mostly a joke anyway, so it is very appropriate to have a joke for a motto.
-Close to 30 years after your debut, you made a comeback in 2001 with «A Little Gest of Robin Hood». What made you come back and start recording again?
I was retiring from my regular job doing irrigation work for the City of Oakland, and my intention was to get that "music career" going again…. I always considered myself a professional songwriter, even though I never made any money at it.
-You have recorded and released seven albums since your comeback. Do you feel your fans buy your records after discovering you in the 00s, or are there old fans, who followed you from the start, who still follow you and your career?
Both. Although, not very many of either.
-There were 30 years between your debut and your second album – and much has happened in the music industry since then. Do you feel it's easier to write and record music in the 00s, or was it easier back in the 70s?
It's easier now. Now, everybody makes an album, a CD. You can do it at home, in the living room. You can put it on CDBaby and Youtube, promote it yourself through the internet, and make some sort of living at it, seems like. Before, you had to have a big record label pick you up and produce you and promote you. There wasn't much room for you then. They only took a few artists to do that with. The rest of them had to get a day job.
-What were you doing in the years between '72-01? Were you still writing music, thinking about writing music?
I've always been writing songs. It's a habit I started early in life and have never been able to shake. Not that I ever wanted to…. Anyway, yes, I've always been doing that. I write songs in my sleep. "Judas Iscariot" was written in my sleep. In a dream.
-It's often said that musicians have music in their blood, meaning they have to share their stories. Do you feel it the same way?
I'm not really a "musician." I write songs and play the guitar. I never could play it the way most songwriters and folk singers do it, with that Cotton picking, or Travis picking. Never could do that. So I developed my own way to do it. Sounds more like I'm playing a harp than a guitar. But I'm very happy with it now. Mainly, I leave out a lot of notes and just play the ones that convey an emotion. I write songs with people in mind. One individual person per song, usually. I write 'em like I'm singing it to one person, make it something I know they would like to hear. But of course, I want everybody else to hear them too, and like them. That's all a part of it. But I never do anything to actually get anybody else to hear them. Somebody else has to line up gigs for me. I was never any good at doing that. I go to a few folk festivals, in Arizona mostly, where the people like to hear my songs, and other than that, "I just stay home, laying in a chair, that's about as far as they'll get, right there."
-Do you feel it´s easier to reach out to the music lovers out there these days than it was back in the 70s? Have your fans changed?
I think it's pretty easy to reach music lovers these days, what with the internet and all. People you never heard of contact you via email (like you did) and boom! there's a connection. Never had that back in the old days. My "fans" (all 6 of them) are probly the same sort of people I attracted back in the 70's. People who see the sacred in the profane. Mystical misfits. Rednecks and dope fiends. Soldiers and anarchists. Your normal human beings. "The usual suspects."
-What do you think is the future of music? Will we go back to physical formats, now that vinyl is the hottest medium these days? Will the CD die?
I have no idea. I was never interested in subjects like that. I just like to write songs.
Bob Frank, picture borrowed from his homepage
-If you listen to music digitally – like streams – what is your favorite source for finding new music/inspiration?
I never listen to music, unless a friend sends me a recording of a new song he or she wrote. Actually, I don't care anything about music. I'd rather listen to a sports talk show.
-And the last question is, as always in these Q&A interviews: What are your musical inspirations? And what are your all-time favorite albums?
My musical inspiration came from rock songs in the '50s. Groups like the Drifters, or Richie Valens, Buddy Holly, etc. And also from old cowboy songs. Like all kids who grew up in the 50's, I listened to Gene Autry. Dickinson said, "If it weren't for Gene Autry, there never would have been Elvis." So I also listened to Elvis, too, but mainly his ballads. The songs I liked best were the pretty songs, not the fast ones, not the blues, but songs like "Teen Angel" or the Everly Brothers. Another big influence on me was Jimmie Driftwood. Look him up. His "career" was a lot like mine. Had an album out when he was young, then nothing for thirty years. Then, "The Battle of New Orleans." Also, Jimmie Rodgers, the "yodeling brakeman" from Mississippi. Also, the Irish singers, the Clancy Brothers. For awhile there, in the early '60's, they were my favorite group. After that, it was just old folk songs, cowboy songs, and so on. One of my strongest influences was Jim Dickinson. I don't have any all time favorite albums. I like a lot of them, but like I say, I don't listen to any of them any more.
Maybe I just got my heart broken by music, so I don't trust it any more. It's not a true source of happiness. When you die, it won't help you at all.
*Quote by Jim Dickinson taken from Bob Frank's homepage. All photos borrowed from http://bobfranksongs.com. Bob Frank's self-titled cult debut can be heard on Spotify, bought on iTunes – or BUY THE PHYSICAL COPY FOR THE REAL EXPERIENCE on vinyl and CD from any good retailer!
The Q&A: The ethereal folk pop blues of Dana Williams
July 10, 2013 December 30, 2014 Posted in Music video, New release, Q&A / Interview, YouTubeTagged Dana Williams, Debut, single2 Comments
Sometimes a description of one's genre fits like a hand in a glove. When I asked Dana Williams to describe her music to someone for the first time, she answered with just four words – "ethereal folk pop blues" – hence the title of this interview and Question & Answer session with Dana Williams.
Dana is the daughter of the late David Williams (November 21, 1950 – March 6, 2009), a popular session guitarist on albums and tours with famous artists like Michael Jackson (and The Jacksons), Madonna, The Temptations, Rod Stewart, Aretha Franklin, Paul McCartney, The Crusaders, Whitney Houston, Diana Ross and Michael McDonald, to name a few – feel free to read the full list on David Williams' Wikipedia page. Even if her father played with some of the world's biggest bands and artists, and her music isn't very directly influenced by them, she cites her father as a great inspiration for her career: "What made me want to make music was probably my dad. He was a rhythm guitar player, songwriter and producer and so growing up I was always around music. Watching him perform and watching his creative process really inspired me. He worked really hard at what he did and so that inspired me to work really hard at what I do"
Dana Williams also has a sister, Davida Williams, a singer and actress known from "The Fresh Prince of Bel-Air", the famous sitcom with actor/rapper Will Smith, and the teen sitcom Lizzie McGuire with Hilary Duff, as well as more series and movies.
When you hear "Keep Me Waiting", the debut single from Dana Williams, you hear a folk- and classic soul-inspired voice with hints of Duffy (remember her?), a thick layer of strings, piano, acoustic guitars and trip hop-inspired drums – all sounding like live instruments. Her vocal inspiration comes from Ella Fitzgerald, having fallen in love with her sound when she was young: "I really admire her skill and sound and so that really encouraged me to work on my vocal capabilities"
I even think I hear a little Corinne Bailey Rae in her voice, and a bit of Lion Babe – vocalist Jillian Hervey and musician Lucas Goodman – in her voice and sound. When I asked if she had heard them, and if there was any link between her and the duo, she said: "I have known Jillian of Lion Babe for a little while, she is a lovely person. I love their music but I wouldn't say our vibes are similar. I guess you could say our vibes are similar, in that, they are new." If you haven´t heard Lion Babe, check out the blog post about them here. Highly recommended if you like Dana Williams!
The sound of "Keep Me Waiting" is down earthy folk soul, and if this – or her future release will be released on vinyl I know I have to get it. Dana Williams is a young artists – and this shines through when I asked her about her prefered listening format; "The way we listen to music has changed so much as of recently and now that the internet has made everything so much more accessible, while I love listening to Vinyl on my record player, it is too easy to stream stuff online 😦 It's all about immediate gratification"
On the other hand, she´s a big fan of the really old legends of soul – because when I asked her which musician she would prefer working with – living or dead – her answer was quite surprising: "I think it would be cool to play with Sam Cooke. I think singing a duet with him would be awesome and satisfying, I love singing harmonies and his style is amazing".
Check out Dana Williams on Facebook (all pictures in this blog post are borrowed from her FB page) – and on Twitter – and her nice homepage and inspiring blog here.
…And if I should conclude this Q&A session, I guess the answer to Dana Williams' classic folk-pop-soul sound is the combination of her inspirations – the way and the format she prefers to listen to her influences in – and then making up her own mix of music that sounds both old and new at the same time.
There will be more music from Dana Williams – she has a plan for her future: "I plan on releasing an EP within the next couple of months. It will sound similar to the single I released "Keep Me Waiting". It was written by me and produced by Maxwell Drummey (Lauryn Hill) of Chester French and Dan Stringer so the sound is pretty consistent".
"I am inspired by all sorts of musical genres and styles".
Helena Jesele – Is she this years Lana Del Rey?
March 2, 2013 May 29, 2015 Posted in Debut album, New releaseTagged Amy Winehouse, Debut, Dusty Springfield, Exciting new name, Helena Jesele, iTunes, Lana Del Rey, Look out for..., Moloko, Retrosoul, Roisin Murphy, Saint Etienne, Sarah Cracknell, Soul6 Comments
Let´s cut to the chase at once; Helena Jesele and her debut album "Sweet Sticky Fix" is a great new discovery – and I think you´ll be hearing a lot from her this year.
Think the Lana Del Rey sound stripped down a few layers and added sharper trip hop beats – in a 60s/early 70s spy movie with a bossa nova soundtrack produced by David Axelrod.
Helena Jesele explains who she is perfectly on her Twitter profile; Manchester born, Dublin raised singer and song-writer. Six foot-tall, full of Irish spirit, Loves New York soul (end quote). Probably the best and most on-point self-description I´ve seen in a while.
As you can understand, Helena Jesele has it all;
The Look:
The 60s chick – or dare I say chic – look! Huge deer-like blue eyes, black make-up, the pouting mouth, smoking a cigarette, the 60s haircut and black, classic-cut dress.
The sound:
La-la background vocals, female and sometimes male, funky/jazzy drumming, a great horn section, heavy on strings – and guitar with a lot of soundtrack influence.
Her voice:
She sounds like Lana Del Rey, Dusty Springfield and Amy Winehouse having a meeting with Sarah Cracknell (of Saint Etienne fame) and Roisin Murphy (of Moloko fame) – in a French-inspired swinging London boardwalk cafe.
The album sleeve:
A simple black-and-white photo of Helena Jesele walking barefoot with her shoes in her hand while glancing at the camera with a pouting mouth. Her name is written in a Dymo label maker-style font on a black label – and the album's title in the same, but on a red label. You have probably seen a lot of look-a-like pictures on Instagram, where the black-and-white filter and the label app Labelbox are used to create a similar look. Great and highly recognizable! You and I could make an album cover like this any day on our smartphones. Simple and effective – I like it a lot.
This whole package is:
Funky, soulful and sexy smooth. Perfect music and songs for when we say goodbye to the winter and welcome the spring and upcoming summer. I can gladly see myself on the front porch with a funky 70s cup full of black coffee while the birds sing and I watch the trees bloom.
This was about it – while writing this, my cup of tea has gotten cold and needs to be heated up again.
I push play once more and let myself dream away to the sound of Helena Jesele, who could become one of the biggest female soulful singer-songwriter names this year.
I just say; Lana Del Rey watch your back!
Oh! This is a soul record by Allen Stone!
January 31, 2013 May 29, 2015 Posted in New releaseTagged 2013, Allen Stone, Blue eyed soul, Debut, Donny Hathaway, Hottest new talent, James Morrison, Jamie Lidell, Jamiroquai, jason tang, Jay K, music, New Release, one of years best, R&B, Robin Ticke, singer song writer, Singer-Songwriter, Soul, Stacie Orrico, Stevie WonderLeave a comment
2011/2012 was the year James Morrison got his big breakthrough – and he is now having a well-earned break. Therefore it´s time to look and listen to a brand new talent!
His name is Allen Stone, and this is going to be the year he and his music reach the big masses. Allen Stone is one of those white American soul singers who stick to the classic formula with great success. Through his sound and voice I can draw a line from classic artists like Stevie Wonder and Donny Hathaway to "new" white soul men like Jay Kay from Jamiroquai, Jamie Lidell, Robin Thicke and James Morrison.
Photo by Jason Tang http://www.jktang.com
Allen Stone is a 25-year-old soul singer/songwriter from Washington, US. According to his Wikipedia profile he has sung almost since he was born, singing in church at the age of three. Church has been like his second home – and the place he developed his love for music. Soul music was discovered later on, when he became a teenager and started collecting soul albums from the 60s and 70s. At 15 he discovered Stevie Wonder's fantastic "Innervisions" – a classic soul album and a reference point for his musical landscape.
But he decided to make music his living after hearing his teenage friend, artist Stacie Orrico – and this is so good that I quote the Wikipedia article as is: "She was traveling, singing everywhere, and recording," Stone says, "She was just a year older than me and I was like, "Man that would be so much fun to do, sing and actually have people listen" (This quote is taken from an Erica Thompson interview with Allen Stone in Rolling Stone magazine, October 2012)
You can even find Allen Stone on Facebook (This picture is from his FB profile)
If you´d like to twitter, you´ll find him at twitter/allen_stone
It´s tempting to do another quote from the Rolling Stone article (link here) about what Allen Stone's fans can expect from his second album "Allen Stone": "Sonically, it's a soul record," and continues, "But it's the music that I really love making. It's not my attempt to cover all this soul music, it's just really to fit in there and play the music that I love," and he ends it like this: "People will listen to it and be like; Oh, this is a soul record".
According to the Rolling Stone interview, Allen Stone is working with Raphael Saadiq's backing band and one of the late jazz legend Miles Davis' keyboard players. The album is being recorded with producer Lior Goldenberg from L.A., who has also worked with big-name artists like Macy Gray, Sheryl Crow, and the son of late reggae legend Bob Marley, Ziggy Marley! It must be so cool to work with big names like that – and this is only the second album.
It makes me wonder: What's next for Allen Stone?
Back in 2009 he self-released his debut "Last To Speak" on his own label StickyStones Records. His first album is a really soulful offering – 11 singer/songwriter songs performed with mostly acoustic instruments and his fantastic voice – and he´s handling the guitar himself. I feel the album is representative of his self-confessed "hippie with soul" (link to Wikipedia) image and sound.
Allen Stone's new, second album – the self-titled "Allen Stone" – is ready for release at the end of February, and is already available to pre-order on iTunes (which I of course have done). The original release of the album was back in 2011 – but it's not available on iTunes until now, in February. He has already released the "Allen Stone EP" – a mini-album containing 4 tracks.
You can hear Allen Stone's second, self-titled album here on Spotify – when it´s released.
Since the only thing I have heard from Allen Stone's upcoming album is his "Allen Stone EP", I can't tell too much about how his second album is going to sound. But if the EP is representative, it seems the upcoming album is a bit more produced this time – groovier, more classic sounding – without losing the loose feeling of his debut album.
The title of his debut album, and of the last song on that album, is "Last To Speak". It´s tempting to use this as the final sentence of this blog post about the talented Allen Stone and answer it with: "Ok, so you may be the last to speak – but make sure this isn't the last we hear you sing." Because this is only the start of the amazing career of Allen Stone.
We need artists like Allen Stone!
I need artists like Allen Stone!
A dark yet beautiful world!
November 7, 2012 May 29, 2015 Posted in Debut albumTagged Album, Dark, Debut, Ethiopian, Finland, Finnish, Hope Sandoval, Johnny Cash, Leonard Cohen, Liz Phair, Mazzy Star, Mirel Wagner, Nick Cave, Pink Floyd, PJ Harvey, Release, Sensual, Spotify, Syd Barret1 Comment
It´s not often I stumble upon a female solo artist that sounds like – and writes songs that bring to mind – gravel-voiced artists like Leonard Cohen, Johnny Cash and Nick Cave, and sirens like Hope Sandoval of Mazzy Star, PJ Harvey and Liz Phair. This sextet's musical expression is recreated in the form of a woman in her mid-twenties of Ethiopian origin, now residing in Finland.
The name Mirel Wagner sounds neither Ethiopian nor like a name from Finland. It´s a name that seems to carry a lot of dark history – and her stories are as dark as those of someone at least twice her age.
After name-dropping Cash, Cohen, Cave, Sandoval, Harvey and Phair – who is still reading and wants to hear more about Mirel Wagner and her self-titled debut album?
The Mirel Wagner record was recorded in just two days, mostly accompanied by a naked guitar: minimalist picking, from bitter-sounding melodies to more mantra-riff-o-rama – all performed solely on the guitar, accompanied only by Mirel Wagner's lonely yet strong and mellow voice.
The voice and the songs need to be explained more closely: When Mirel sings, her voice is monotonous, coarse, rough, tired of life – it sounds as if each word must be forced out of her mouth. At the same time, there's still something oddly beautiful about it – and she has a natural sensuality in her voice. The recording of her voice and instruments is so quiet and calm that every little strum on the guitar's neck can be heard. When her nails touch the guitar strings and her fingers are dragged up the guitar neck, it makes this sliding sound. As if the guitar sighs.
You´ll hear Mirel Wagner's voice deep in your ears, so close you can almost feel the warmth of her breath. Hear the sound of her lips as she separates them and starts to sing – and you´ll even hear them touch each other again when she finishes the song. You even hear when Mirel's tongue moves around her mouth and makes a small clicking sound as it hits the palate and her teeth. Every small sigh of air she makes as she prepares herself to sing the next line of her songs – and the sigh of relief that comes at the end of each sentence. I get the feeling she may have been lying on a bed or on the naked concrete floor, or sitting all alone in the dark, while recording the album.
So tight, close and without doubt very intimate.
Here's the album cover of Mirel Wagner's self-titled debut album from 2011. You can hear it here;
Click HERE to listen to the album in Spotify!
From start to end the album steals only 35 minutes of your lifetime. The themes of the album are pitch-dark and grim: stories of death, deep grief, trouble and nightmarish little short stories spread over the 9 tracks.
If I were to sort her album in my record collection, I would surely file it under one of these fictional music genres: doom soul, dark folk, goth blues, minimalist folk.
If Mirel Wagner were a color, I would describe her as the non-colors black and white – like her musical world and the album art, the pictures, the font, the liner notes and lyrics: all in rough black and white!
Only the song "No Hands", in all of its monotone performance and simple naive text, are envisioned a tiny hope of simple childlike joy …
been riding my bicycle
all day long up and down the old
dusty dirt road
look mother no hands
see the sun filter through the trees
…But the happiness lasts only until the last open-to-interpretation and disturbing verse…
the wind and the speed
can not see the danger
Maybe Mirel Wagner's "No Hands" was written in a parallel world with the late Syd Barrett in mind? Or at least her song is a part two of the song he wrote for his band Pink Floyd – the naive, childlike, same-themed song "Bike".
The lyrics, the music, the photos –
Step into Mirel Wagner's
dark, sensual world!
There´s something about Franky!
October 5, 2012 May 26, 2015 Posted in Debut album, Q&A / InterviewTagged Actress, Advocate, AOR, Britney Spears, Dancer, Darling Stilettos, Debut, Destinys Child, Disco, Franky Manzo, Guns'n'Roses, Interview, Kylie Minogue, Maudlyn Strangers, Michael Jackson, Miley Cyrus, Movie producer, Pitch Perfect 2, Producer, Q&A, Singer-Songwriter, single, video, Womanizer, YouTube8 Comments
There really is something about Franky Manzo's debut single "MJs Coursing".
Is it the intro to her single – a little AOR inspired – until her voice and the beats kick in?
Is it the dry simple piano line? The slightly auto-tuned vocals?
The cool 70s inspired "Oh, beep-beep, Oh beep-beep" bridge in the middle of the song?
The percussion, and the hand claps, and the dry synthetic sax stabs?
The mix of 70s-80s and 90s disco-club squeezed into 3 minutes?
Is it the story of her past as a dancer behind the likes of Destiny's Child, Britney Spears, Miley Cyrus and more? Or maybe her career with former Guns´n´Roses drummer Matt Sorum in Darling Stilettos – a band that looked and sounded like a cross between the Pussycat Dolls, The Runaways and 80s Prince protégées Vanity 6 and Apollonia 6, with the Spice Girls' motto: GIRL POWER!
Even if Frances Manzo (her real name) is still young, she has had a long career. She has not only been a dancer behind some of the world's biggest stars, but has also done choreography, commercials, movies and music videos – most famously as a dancer in Britney Spears' "Womanizer" video – been an on-stage performer and dance teacher, and appeared in television productions. Franky is also a music co-writer and co-producer on Maudlin Strangers' self-titled upcoming debut album!
She looks like a rock chick, but also has this late 80s-90s style. And since she is a fan of – and has worked with – great rock musicians, there's no doubt she has her own style!
The MJ in the song "MJs Coursing" is, as you have probably guessed already, the late great Michael Jackson.
Franky Manzo, or if you prefer her real name Frances Manzo, said on her Reverbnation page "This song encompasses everything I wanted in my first song because of my love for Michael Jackson and the era of disco I heard as a child from my mom."
After reading this quote I had to hear more about the story behind "MJ´s Coursing", and sent Frances Manzo a few questions via her Facebook profile – and shortly after, she sent me her answers! Here it goes:
5 QUESTIONS: a Q&A with Franky Manzo
You say on your reverbnation profile that "This song encompasses everything I wanted in my first song because of my love for Michael Jackson and the era of disco I heard as a child from my mom."
Q: Were you inspired by any particular MJ songs for this track? Or any special disco tunes your mother liked?
A: Difficult question! It's hard to choose which MJ songs inspired me, but I can try narrow it down to "They Don't Care About Us", "Rock With You", "Billie Jean", and "Remember the Time". As far as disco songs, my mom definitely had "Heart of Glass", "Harlem Shuffle", and "Bad Girls" on her favorite cassette tape…which she played all the time! I credit my mom with my rhythmic ability because when I was about 2, she and I used to dance to those songs every morning. So I think those songs will always stick with me, and you can hear their influence in my single.
Q: Is there any plans for a full album – how do think it will sound?
A: Absolutely! I think much of the album will be disco inspired, but I don't necessarily want to limit myself to just one genre. I am truly inspired by what Blondie did…she was a total punk rocker turned disco diva, and back again! I expect that the album will throw a curveball in there somewhere!
Q: I feel the video to "MJs Coursing" is a bit inspired by Kylie Minogue and especially her "Confide In Me" vid? Am I right?
A: Yes, I love Kylie! I got to meet her about 10 years ago at a show, and she's an absolute delight! I actually hadn't seen the "Confide in Me"* video, but I can see the similarity!! The original concept that I had for my video was quite grandiose, but alas, being on an indie artist budget isn't exactly conducive to being grandiose. So, we came up with an alternative, the video that's out now, and it came out great!!! I'm so happy with it.
(*Note: I see I referred to the wrong Kylie song and video here – my fault… but it seems like Franky Manzo understood what I meant)
Q: I see you have been dancing onstage behind several big names, like Britney Spears, Destiny's Child and Miley Cyrus to mention a few. Are you supposed to be working with any big named and well-known producers or writers on your album?
A: I had the opportunity to work with big name producers when I was in Darling Stilettos, and I've found that the most important thing is that I trust who I'm working with, especially because I'm barely getting my feet wet as a solo artist. My producer, Michael Binikos, is someone that I trust. He's worked with Leann Rimes, Brie Larson, and the like. I'm very lucky to have him working on my album.
As far as writers go, I'm not opposed to letting someone write for me in the future, but right now, I have so much I want to convey as an artist. I think that it's important for me to write my own material. I really want to show the world what I'm capable of…and it's only the beginning for me.
Q: What other artist are you inspired by?
A: Blondie, Kylie Minogue, and Michael Jackson are some of my favorites. Aside from those artists, I'm totally inspired by Robert Plant! I LOVE Led Zeppelin. I really appreciate his lyrics. Sophie Ellis-Bextor is another artist I love. Her songs are so catchy and they just make you feel good. Gwen Stefani is one of my absolute favorites. I used to wear dickies and cropped wife beaters just like her…still do! Scott Weiland…I know it's a bit of a stray, but I can't help but love the grunge era. He's a dynamic performer with amazing songwriting skills.
It may be hard to tell that I'm inspired by these artists based on my single, but I love writing for other bands…and that's where you can see a lot of these influences. I co-wrote and co-produced many of the songs on Maudlin Strangers' upcoming debut album. "Suffer, Kate" and "Long Way Down" were definitely Zeppelin and Stone Temple Pilots inspired!
Franky Manzo ends my Q&A like this: "Haha, I tend to be wordy sometimes. I really appreciate this!!! Thank you so much!!!"
If someone is going to be thankful here, it´s gotta be me – for such good answers to my questions. Wordy is no problem as long as the answers are good, right?
I must say I love seeing her cool genre mix – with Michael Jackson and Kylie Minogue on one side, and her love for Robert Plant, Led Zeppelin and Stone Temple Pilots' Scott Weiland on the other. A true lover of music sees no boundaries and limits, only opportunities! Thank you again, Franky Manzo, for your great answers!
The single "MJs Coursing" is one piece of perfect radio friendly pop!
I would love to see some great remixes of the track. How about a classic Masters At Work remix, like they did in the latter part of the 90s, with loads of live instruments – a fat bass line, some guitar and lots of great percussion? Kenny Dope and Little Louie Vega – open your ears!
But as the song "MJs Coursing" is now – it´s a nice piece of great radio disco-pop – And it gets better and better with each listening!
Look forward to follow the adventurous future of Franky Manzo!
Frank Ocean "Channel Orange" 7/17/12
June 25, 2012 May 24, 2015 Posted in Debut albumTagged Andre 3000, Debut, Earl Sweatshirt, Frank Ocean, Funk, John Mayer, Odd Future, Outkast, Preview, Release, SoulLeave a comment
Frank Oceans official debut album, named "Channel Orange" to be released the 17th of July 2012.
YOU SHOULD MARK THAT DATE IN YOUR CALENDAR!
On his official homepage (tumblr btw!) http://frankocean.com/ you can see the front and back cover
It can contains as you can see – it´s first single "Pyramids" and the earlier released "Thinking About You".
Guests include (for me, until now, unknown) Earl Sweatshirt fra Odd Future crewet som også Frank Ocean er en del av, the fabulous John Mayer and André 3000 from OutKast.
Have you checked out Frank Oceans unofficial debutalbum, the classic mix tape "Nostalgia, Ultra", you can read more about it here – or download it for free on the datpiff mixtape page!
SO LOOKING FORWARD TO "CHANNEL ORANGE"!
Jem Warren vs John Mayer
June 5, 2012 May 26, 2015 Posted in Debut albumTagged Debut, Jem Warren, John Mayer, Release, Singer-SongwriterLeave a comment
It's maybe like av battle between David vs Goliath this small fight between the debutant Jem Warren vs the established artist that is John Mayer
Lets get ready to rumble!
Play album in Spotify by clicking the cover!
Play album in Spotify by clicking cover!
It may be unfair but hey! It's just for fun.
They both released their records within a couple of weeks. John Mayer releases his 5th studio album "Born & Raised" and Jem Warren released his debut album "Heart Knows How"
The cool thing is that both artist have a quite similar genre mix; Americana, folk – with a large dash of singer-songwriter. So there is some likeness there.
They both sounded like they have the time of their life in the studio.
Jem Warren has released as mentioned earlier – his first album – and this has become a really great record, full of fabulous writing and real songwriting – all songs and lyrics by Jem himself! This has to be one of this years strongest debut-album. Give the album time – and it´ll stick like glue. I´ll never tire of the album – even if I'm listening to it time after time – even several plays per day!
This album should be a real door opener for Jems future!
I really hope people open their eyes and give him a spin or two…Do you hear radio stations?
The last album om John Mayer is probably his best ever. The man seems to be a source of non-stop writing potential singles. Not one song of the album sound out of place! It´s really fabolous!
John has had a mighty trilogy with the album "Continuum" – made his most pop and AOR album in "Battle Studies", and comes back with av bomb of a album with "Born & Raised"
Even is both Jem and John have their own voices and their own sound – these two albums can be played back-to-back without it feeling wierd or out of place! To really high quality albums you should check out!
Jem Warrens "Heart Knows How" vs John Mayer "Born & Raised"
Since both albums is really good in their own way – this battle ends with no losers, only two winners! | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 5,763 |
{"url":"https:\/\/adriaanrol.com\/posts\/quantum-computational-supremacy\/","text":"# Quantum Computational Supremacy\n\nLast week Google and collaborators published a paper in which they claim to have achieved Quantum Supremacy , one of the major milestones in quantum computing. The idea of quantum supremacy is to use a programmable quantum device to perform a task that is out-of-reach for any classical computer Google claims to have solved a problem in seconds that would take tens of thousands of years on a state of the art supercomputer. The quantum supremacy experiment has been a long-standing milestone in the field of quantum computation, and as such, skepticism has arised; soon after publication of the article a group in IBM research has challenged the results 1.\n\nRather than joining in on the controversy of whether or not Google has really achieved quantum supremacy , I want to focus on some more basic questions: what is quantum supremacy, how does one demonstrate quantum supremacy and why is this such an important milestone?\n\nAt its core the idea of quantum supremacy is quite straightforward: use a quantum computational device to perform a task that cannot be performed on a classical computer. Because the task itself does not need to be useful in any way, it can be tailor-made to take maximal advantage of the strengths of the quantum computer while containing as little structure as possible. The latter makes it harder to simulate classically. The simplest such experiment is to sample outcomes from random circuits that contain a lot of entangling gates, so that non-classical states are created. However, simply doing random things that are hard to simulate is not enough, after all, scattering laser light of a dust particle creates a similarly hard to simulate interference pattern but we would not call that a computation. 
The quantum computational device must involve some degree of programmability, otherwise it is just an elaborate physics experiment that is hard to simulate.\n\nThe experiment is performed on a device containing a grid of 53 2 superconducting qubits that are operating in the circuit based paradigm for quantum computing. A key feature of these types of devices is that they are fully programmable, one can in principle execute any quantum circuit on them 3. The computational task is to determine the distribution of outcome probabilities one gets after applying a pseudo-random circuit to the system. The pseudo random circuits are composed of multiple layers that perform random single qubit gates on all qubits simultaneously followed by a specific two qubit gate on specific pairs of qubits.\n\nThe moment we have a device capable of achieving quantum supremacy, a very interesting problem arises: if we cannot solve the chosen problem classically, how do we verify that the quantum computer is working and not just producing random garbage bits as output? After all, the solution to this problem cannot easily be verified as is the case in for example prime factorization. The key to this problem lies in a combination of control circuits and finding a way to quantify how well the bit outcomes of the computational device correspond to the expected outcomes. For highly entangled states the distribution of bit outcomes is not uniformly random but rather follows a specific distribution 4. Because of the nature of highly entangled states, even a single error in the circuit results in a very different distribution. Because of this, the deviation of the observed distribution with respect to the expected one forms a good means of quantifying circuit performance. 
This is expressed as the so-called linear cross-entropy benchmarking fidelity $F_{\\mathrm{XEB}}$.\n\nBecause evaluating $F_{\\mathrm{XEB}}$ still requires comparing to the expected distribution, and thus simulating the circuit, the authors needed to play some tricks in order to trust the outcomes in the supremacy regime. The authors performed three different variants of the pseudo random circuits. \u201cPatch\u201d circuits in which a slice of two-qubit gates is removed to create two non-interacting patches of qubits. \u201cElided\u201d circuits where only a fraction of the gates between the patches is removed, and full \u201cverification\u201d circuits in which all gates are present. Both the \u201cpatch\u201d and \u201celided\u201d circuits are designed to reduce the amount of entanglement involved so that it is feasible to simulate the experiments and thus determine $F_{\\mathrm{XEB}}$. In addition to these three types of circuits the Google team performed a careful characterization of all operations (single- and two-qubit gates, readout) to be able to predict $F_{\\mathrm{XEB}}$ as they increased the number of qubits.\n\nBy performing the three variants of these pseudo random circuits while involving more and more qubits and increasing the number of cycles, it was possible to determine $F_{\\mathrm{XEB}}$ while gradually improving the complexity of the experiments. In the key figure of the Quantum Supremacy paper the authors showed the measured $F_{\\mathrm{XEB}}$ for all circuit variants slowly decreasing with increasing qubit count. More importantly however, the performance of all three variants agreed almost perfectly while also corresponding well to a prediction based on the characterized error rates of the operations. This last bit is important because it means that the performance of the simpler, \u201ceasy\u201d to simulate, circuits is a good predictor of the performance of the full circuits. 
Once the performance was pushed into the supremacy regime, it was no longer possible to simulate the full circuits, and hence not possible to determine $F_{\\mathrm{XEB}}$. However, the control circuits show the same trend in performance continuing as expected. Because the performance of the control circuits corresponds so well to the full circuit, it is reasonable to state that the full circuit is performing as expected, thereby demonstrating quantum supremacy.\n\nNow that we understand what was done to demonstrate Quantum Supremacy, we can briefly discuss what it means. Let\u2019s get the first big misconception out of the way, if you have come this far it should be pretty obvious that the Quantum Supremacy demonstration by itself is not very useful 5. None of the promises of quantum computers are achieved: factorizing large prime numbers, simulating big molecules and quantum phenomena, etc. One might then question why this experiment is widely compared to the Wright brothers first flight. Although the Wright brothers were not the first to build an experimental aircraft, nor was their first flight (12s, 37m) particularly useful, they are credited with the first controlled, sustained flight of a powered, heavier-than-air aircraft. In a similar spirit Google is not the first to build a prototype quantum computer but this is the first demonstration of a quantum computer performing a task that is extremely difficult for classical computers. Although useful applications of quantum computers may still be many years away and classical computers still overpower the upcoming quantum hardware for useful applications, the quantum supremacy experiments puts quantum computation on a different level than classical computers. 
And this gives hope in this technology for years to come.\n\nFootnotes and references\n\n1\n\nIf you are interested in the nuances between Google\u2019s claim and IBM\u2019s counter claim I recommend this excellent blog post by Scott Aaronson.\n\n2\n\nActually 54 but one is broken.\n\n3\n\nSubject of course to practical constraints such as the connectivity, native gate set, number of qubits and error rates.\n\n4\n\nThe Porter-Thomas distribution\n\n5\n\nWith the exception of being used as a fancy random number generator","date":"2020-09-22 02:24:39","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.551494836807251, \"perplexity\": 509.50998770411326}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.3, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-40\/segments\/1600400202686.56\/warc\/CC-MAIN-20200922000730-20200922030730-00094.warc.gz\"}"} | null | null |
Сурат е окръг разположен в щата Гуджарат, Индия, с площ 7657 км2 и население 4 995 174 души (2001). Главен град е Сурат.
Административно деление
Окръга е разделен на 10 талука.
Население
Населението на окръга през 2001 година е 4 995 174 души, около 74,65 % от населението е неграмотно.
Религия
(2001)
4 350 795 – индуисти
447 951 – мюсюлмани
86 607 – джайнисти
Външни препратки
Окръзи в Гуджарат | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 7,090 |
package org.apache.axis.tools.ant.wsdl;
import org.apache.axis.encoding.TypeMappingRegistryImpl;
import org.apache.axis.encoding.TypeMappingDelegate;
import org.apache.axis.wsdl.fromJava.Emitter;
import org.apache.tools.ant.AntClassLoader;
import org.apache.tools.ant.BuildException;
import org.apache.tools.ant.Project;
import org.apache.tools.ant.Task;
import org.apache.tools.ant.types.Path;
import org.apache.tools.ant.types.Reference;
import org.apache.tools.ant.types.Environment;
import org.apache.tools.ant.types.CommandlineJava;
import java.io.File;
import java.io.PrintWriter;
import java.io.StringWriter;
import java.util.HashMap;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
/*
* Important. we autogenerate the ant task docs from this.
* after adding a new attribute
* 1. add the javadoc for the end users. Make it meaningful
* 2. get jakarta_ant/proposals/xdocs from ant CVS
* 3. run the xdocs target in tools/build.xml
* this creates xml files in xdocs/build
* 4. run proposals/xdocs/dvsl build.xml to create the html files
* these are also created under xdocs/build
* 5. copy the the html files to docs/ant
* 4. check in the changes in docs/ant
*/
/**
* Generates a WSDL description from a Java class.
* @author Rich Scheuerle (scheu@us.ibm.com)
* @author Steve Loughran
* @ant.task category="axis" name="axis-java2wsdl"
*/
public class Java2WsdlAntTask extends Task
{
private String namespace = "";
private String namespaceImpl = null;
private HashMap namespaceMap = new HashMap();
private String location = "";
private String locationImport = null;
private String output = "." ;
private String importSchema = null ;
private String input = null ;
private String outputImpl = null;
private String className = "." ;
private String servicePortName = null ;
private String portTypeName = null ;
private String bindingName = null ;
private String implClass = null;
private boolean useInheritedMethods = false;
private String exclude = null;
private String stopClasses = null;
private String typeMappingVersion = TypeMappingVersionEnum.DEFAULT_VERSION;
private String style = null;
private String serviceElementName=null;
private String methods=null;
private String use = null;
private MappingSet mappings=new MappingSet();
private String extraClasses = null;
private Path classpath = null;
private String soapAction = null;
private List complexTypes = new LinkedList();
private boolean isDeploy = false;
private CommandlineJava commandline = new CommandlineJava();
/**
* trace out parameters
* @param logLevel to log at
* @see org.apache.tools.ant.Project#log
*/
public void traceParams(int logLevel) {
log("Running Java2WsdlAntTask with parameters:", logLevel);
log("\tnamespace:" + namespace, logLevel);
log("\tPkgtoNS:" + namespaceMap, logLevel);
log("\tlocation:" + location, logLevel);
log("\toutput:" + output, logLevel);
log("\timportSchema:" + importSchema, logLevel);
log("\tinput:" + input, logLevel);
log("\tclassName:" + className, logLevel);
log("\tservicePortName:" + servicePortName, logLevel);
log("\tportTypeName:" + portTypeName, logLevel);
log("\tbindingName:" + bindingName, logLevel);
log("\timplClass:" + implClass, logLevel);
log("\tinheritance:" + useInheritedMethods, logLevel);
log("\texcluded:" + exclude, logLevel);
log("\tstopClasses:" + stopClasses, logLevel);
log("\ttypeMappingVersion:" + typeMappingVersion, logLevel);
log("\tstyle:" + style, logLevel);
log("\toutputImpl:" + outputImpl, logLevel);
log("\tuse:" + use, logLevel);
log("\tnamespaceImpl:" + namespaceImpl, logLevel);
log("\tlocationImport:" + locationImport, logLevel);
log("\tserviceElementName:" + serviceElementName, logLevel);
log("\tmethods:" + methods, logLevel);
log("\textraClasses:" + extraClasses, logLevel);
log("\tsoapAction:" + soapAction, logLevel);
log("\tclasspath:" + classpath, logLevel);
}
/**
* validation code
* @throws BuildException if validation failed
*/
protected void validate()
throws BuildException {
if(className==null || className.length() ==0) {
throw new BuildException("No classname was specified");
}
if(location==null || location.length() == 0) {
throw new BuildException("No location was specified");
}
}
/**
* execute the task
* @throws BuildException
*/
public void execute() throws BuildException {
AntClassLoader cl = new AntClassLoader(getClass().getClassLoader(),
getProject(),
classpath == null ? createClasspath() : classpath,
true);
CommandlineJava.SysProperties sysProperties =
commandline.getSystemProperties();
if (sysProperties != null) {
sysProperties.setSystem();
}
try {
traceParams(Project.MSG_VERBOSE);
validate();
// Instantiate the emitter
Emitter emitter = new Emitter();
//do the mappings, packages are the key for this map
mappings.execute(this,namespaceMap, true);
if (!namespaceMap.isEmpty()) {
emitter.setNamespaceMap(namespaceMap);
}
if (servicePortName != null) {
emitter.setServicePortName(servicePortName);
}
if (portTypeName != null) {
emitter.setPortTypeName(portTypeName);
}
if (bindingName != null) {
emitter.setBindingName(bindingName);
}
log("Java2WSDL " + className, Project.MSG_INFO);
emitter.setCls(cl.loadClass(className));
if (implClass != null) {
emitter.setImplCls(cl.loadClass(implClass));
}
if (exclude != null) {
emitter.setDisallowedMethods(exclude);
}
if (stopClasses != null) {
emitter.setStopClasses(stopClasses);
}
if (extraClasses != null) {
emitter.setExtraClasses(extraClasses, cl);
}
TypeMappingRegistryImpl tmr = new TypeMappingRegistryImpl();
tmr.doRegisterFromVersion(typeMappingVersion);
emitter.setTypeMappingRegistry(tmr);
// Create TypeMapping and register complex types
TypeMappingDelegate tmi = (TypeMappingDelegate)tmr.getDefaultTypeMapping();
Iterator i = complexTypes.iterator();
while (i.hasNext()) {
((ComplexType) i.next()).register(cl, tmi);
}
if (style != null) {
emitter.setStyle(style);
}
if (use != null) {
emitter.setUse(use);
}
if (importSchema != null) {
emitter.setInputSchema(importSchema);
}
if (input != null) {
emitter.setInputWSDL(input);
}
emitter.setIntfNamespace(namespace);
emitter.setImplNamespace(namespaceImpl);
emitter.setLocationUrl(location);
emitter.setImportUrl(locationImport);
emitter.setUseInheritedMethods(useInheritedMethods);
if(serviceElementName!=null) {
emitter.setServiceElementName(serviceElementName);
}
if(methods!=null) {
emitter.setAllowedMethods(methods);
}
if (soapAction != null) {
emitter.setSoapAction(soapAction);
}
if (outputImpl == null) {
// Normal case
emitter.emit(output, Emitter.MODE_ALL);
} else {
// Emit interface and implementation wsdls
emitter.emit(output, outputImpl);
}
if (isDeploy == true) {
generateServerSide(emitter, (outputImpl != null) ? outputImpl : output);
}
} catch(BuildException b) {
//pass build exceptions up the wire
throw b;
} catch (Throwable t) {
//other trouble: stack trace the trouble and throw an exception
StringWriter writer = new StringWriter();
t.printStackTrace(new PrintWriter(writer));
log(writer.getBuffer().toString(), Project.MSG_ERR);
throw new BuildException("Error while running " + getClass().getName(), t);
} finally {
if (sysProperties != null) {
sysProperties.restoreSystem();
}
}
}
/**
* The name of the output WSDL file.
* If not specified, a suitable default WSDL file is written into
* the current directory.
* @param parameter
*/
public void setOutput(File parameter) {
this.output = parameter.getPath();
}
/**
* Option attribute that indicates the name of an XML Schema file that
* should be physically imported into the generated WSDL.
* @param parameter
*/
public void setImportSchema(File parameter) throws BuildException {
try {
this.importSchema = parameter.toURL().toString();
} catch (java.io.IOException ioe) {
throw new BuildException(ioe);
}
}
/**
* Optional attribute that indicates the name of the input wsdl file.
* The output wsdl file will contain everything from the input wsdl
* file plus the new constructs. If a new construct is already present
* in the input wsdl file, it is not added. This option is useful for
* constructing a wsdl file with multiple ports, bindings, or portTypes.
* @param parameter filename
*/
public void setInput(File parameter) {
this.input = parameter.getPath();
}
/**
* Use this option to indicate the name of the output implementation WSDL
* file. If specified, Java2WSDL will produce separate interface and implementation
* WSDL files. If not, a single WSDL file is generated
* @param parameter
*/
public void setOutputImpl(File parameter) {
this.outputImpl = parameter.getPath();
}
/**
* The url of the location of the service. The name after the last slash or
* backslash is the name of the service port (unless overridden by the -s
* option). The service port address location attribute is assigned the
* specified value.
* @param parameter a URL
*/
public void setLocation(String parameter) {
this.location = parameter;
}
/**
* the location of the interface WSDL when generating an implementation WSDL
* Required when <tt>outputImpl</tt> is set
* @param parameter URL?
*/
public void setLocationImport(String parameter) {
this.locationImport = parameter;
}
/**
* the class name to import, eg. org.example.Foo. Required.
* The class must be on the classpath.
* @param parameter fully qualified class name
*/
public void setClassName(String parameter) {
this.className = parameter;
}
/**
* Sometimes extra information is available in the implementation class
* file. Use this option to specify the implementation class.
* @param parameter
*/
public void setImplClass(String parameter) {
this.implClass = parameter;
}
/**
* service port name (obtained from location if not specified)
* @param parameter portname
*/
public void setServicePortName(String parameter) {
this.servicePortName = parameter;
}
/**
* Indicates the name to use use for the portType element.
* If not specified, the class-of-portType name is used.
* @param parameter
*/
public void setPortTypeName(String parameter) {
this.portTypeName = parameter;
}
/**
* The name to use use for the binding element.
* If not specified, the value of the
* <tt>servicePortName</tt> + "SoapBinding" is used.
* @param parameter
*/
public void setBindingName(String parameter) {
this.bindingName = parameter;
}
/**
* the target namespace. Required.
* @param parameter
*/
public void setNamespace(String parameter) {
this.namespace = parameter;
}
/**
* Namespace of the implementation WSDL.
* @param parameter
*/
public void setNamespaceImpl(String parameter) {
this.namespaceImpl = parameter;
}
/**
* should inherited methods be exported too? Default=false
* @param parameter
*/
public void setUseInheritedMethods(boolean parameter) {
this.useInheritedMethods = parameter;
}
/**
* Comma separated list of methods to exclude from the wsdl file.
* @param exclude
*/
public void setExclude(String exclude) {
this.exclude = exclude;
}
/**
* Comma separated list of classes which stop the Java2WSDL
* inheritance search.
* @param stopClasses
*/
public void setStopClasses(String stopClasses) {
this.stopClasses = stopClasses;
}
/**
* The style of the WSDL document: RPC, DOCUMENT or WRAPPED.
* If RPC, a rpc/encoded wsdl is generated. If DOCUMENT, a
* document/literal wsdl is generated. If WRAPPED, a
* document/literal wsdl is generated using the wrapped approach.
* @param style
*/
public void setStyle(String style) {
this.style = style;
}
/**
* add a mapping of namespaces to packages
*/
public void addMapping(NamespaceMapping mapping) {
mappings.addMapping(mapping);
}
/**
* add a mapping of namespaces to packages
*/
public void addMappingSet(MappingSet mappingset) {
mappings.addMappingSet(mappingset);
}
/**
* the default type mapping registry to use. Either 1.1 or 1.2.
* Default is 1.1
* @param parameter new version
*/
public void setTypeMappingVersion(TypeMappingVersionEnum parameter) {
this.typeMappingVersion = parameter.getValue();
}
/**
* If this option is specified, only the indicated methods in your
* interface class will be exported into the WSDL file. The methods list
* must be comma separated. If not specified, all methods declared in
* the interface class will be exported into the WSDL file
* @param methods list of methods
*/
public void setMethods(String methods) {
this.methods = methods;
}
/**
* Set the use option
*/
public void setUse(String use) {
this.use = use;
}
/**
* the name of the service element.
* If not specified, the service element is the <tt>portTypeName</tt>Service.
* @param serviceElementName
*/
public void setServiceElementName(String serviceElementName) {
this.serviceElementName = serviceElementName;
}
/**
* A comma separated list of classes to add to the classpath.
*/
public void setExtraClasses(String extraClasses) {
this.extraClasses = extraClasses;
}
/**
* The setter for the "soapAction" attribute
*/
public void setSoapAction( String soapAction ) {
this.soapAction = soapAction;
}
/**
* Nested element for Complex Types.
* Each Complex Type uses the following fields:
* @param ct
*/
public void addComplexType(ComplexType ct) {
complexTypes.add(ct);
}
/**
* Set the optional classpath
*
* @param classpath the classpath to use when loading class
*/
public void setClasspath(Path classpath) {
createClasspath().append(classpath);
}
/**
* Set the optional classpath
*
* @return a path instance to be configured by the Ant core.
*/
public Path createClasspath() {
if (classpath == null) {
classpath = new Path(getProject());
classpath = classpath.concatSystemClasspath();
}
return classpath.createPath();
}
/**
* Set the reference to an optional classpath
*
* @param r the id of the Ant path instance to act as the classpath
*/
public void setClasspathRef(Reference r) {
createClasspath().setRefid(r);
}
/**
* Adds a system property that tests can access.
* @param sysp environment variable to add
*/
public void addSysproperty(Environment.Variable sysp) {
commandline.addSysproperty(sysp);
}
/**
* Sets the deploy flag
* @param deploy true if deploy mode
*/
public void setDeploy(boolean deploy) {
this.isDeploy = deploy;
}
/**
* Generate the server side artifacts from the generated WSDL
*
* @param j2w the Java2WSDL emitter
* @param wsdlFileName the generated WSDL file
* @throws Exception
*/
protected void generateServerSide(Emitter j2w, String wsdlFileName) throws Exception {
org.apache.axis.wsdl.toJava.Emitter w2j = new org.apache.axis.wsdl.toJava.Emitter();
File wsdlFile = new File(wsdlFileName);
w2j.setServiceDesc(j2w.getServiceDesc());
w2j.setQName2ClassMap(j2w.getQName2ClassMap());
w2j.setOutputDir(wsdlFile.getParent());
w2j.setServerSide(true);
w2j.setDeploy(true);
w2j.setHelperWanted(true);
// setup namespace-to-package mapping
String ns = j2w.getIntfNamespace();
String clsName = j2w.getCls().getName();
int idx = clsName.lastIndexOf(".");
String pkg = null;
if (idx > 0) {
pkg = clsName.substring(0, idx);
w2j.getNamespaceMap().put(ns, pkg);
}
Map nsmap = j2w.getNamespaceMap();
if (nsmap != null) {
for (Iterator i = nsmap.keySet().iterator(); i.hasNext(); ) {
pkg = (String) i.next();
ns = (String) nsmap.get(pkg);
w2j.getNamespaceMap().put(ns, pkg);
}
}
// set 'deploy' mode
w2j.setDeploy(true);
if (j2w.getImplCls() != null) {
w2j.setImplementationClassName(j2w.getImplCls().getName());
} else {
if (!j2w.getCls().isInterface()) {
w2j.setImplementationClassName(j2w.getCls().getName());
} else {
throw new Exception("implementation class is not specified.");
}
}
w2j.run(wsdlFileName);
}
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 6,859 |
\section{Introduction and Main Result}
The problem of finding normal forms is well known in Complex Analysis\cite{tadej}. The formal constructions of normal forms\cite{bu1},\cite{bu2},\cite{CM},\cite{gosto1},\cite{gosto2},\cite{ko1},\cite{mowe} provide useful procedures for understanding fundamental problems in Complex Analysis, such as the local equivalence problem\cite{EHZ},\cite{huyi2}. The Real-Hypersurfaces are Real Submanifolds of codimension $1$ in Complex Space; they arise as boundaries of Domains in Complex Spaces. Several problems, including classification problems related to such Domains, reduce to the study of Real-Hypersurfaces, which are generally assumed to be Smooth, or equivalently Formal.
In the equidimensional case, such a procedure (see \cite{za}) is based on imposing (formally) normalization conditions on the local defining equations, determining simultaneously the formal (holomorphic) equivalence, in order to simplify the local defining equations and, especially, to find invariants (as Huang-Yin\cite{huyi2} do). Such a construction may be simple, like Moser-Webster's Normal Form\cite{mowe}, which is algebraic, or more complicated, like the author's Normal Forms\cite{bu1},\cite{bu2}. Furthermore, such constructions may be considered in Almost-Complex Spaces without any assumption of integrability; first steps towards such normal forms have been taken by my supervisor\cite{za}, who constructed an analogue of Chern-Moser's Normal Form. Regarding the convergence or divergence of normal forms, the reader is referred to Gong-Stolovitch\cite{gosto1},\cite{gosto2} and Lamel-Stolovitch\cite{lasto} for further reading.
In the non-equidimensional case, such a procedure (see \cite{za}) aims to classify the formal (holomorphic) mappings between Models in Complex Spaces of different dimensions. It is based on compositions with suitable automorphisms of such Models, in order to find normal forms for possibly formal mappings; sometimes, as in the work of Faran and Huang, it suffices to work with mappings merely of class $\mathcal{C}^{2}$ or $\mathcal{C}^{3}$ in order to obtain simple classifications, which are directly motivated by the classification of the Proper Holomorphic Mappings between unit balls in Complex Spaces.
In this paper, we construct a formal normal form for a large class of Real-Formal Hypersurfaces in Complex Space. This normal form may be seen as an alternative to the previously constructed normal forms of Zaitsev\cite{za} and Chern-Moser\cite{CM}. The imposed normalizations iteratively cover sums of homogeneous terms respecting a system of weights, or pseudo-weights, motivated by \cite{bu2}. In particular, we obtain the $2$-jet determination of the automorphisms of a real-smooth strongly pseudoconvex hypersurface in $\mathbb{C}^{N+1}$, in the coordinates
$$\left(w,z\right)=\left(w;z_{1},z_{2},\dots,z_{N}\right)\in\mathbb{C}^{N+1}.$$
Let $M\subset\mathbb{C}^{N+1}$ be a real-formal hypersurface defined as follows
\begin{equation}
{\sf Im}\, w=\left({\sf Re}\, w\right)^{s}P\left(z, \overline{z}\right)+ \mbox{O}\left(k_{0}+1\right), \label{N1}\end{equation}
satisfying the following non-degeneracy condition
\begin{equation}\displaystyle\sum_{k,l=1}^{N}\frac{\partial P(z,\overline{z})}{ \partial z_{k}}a_{kl}z_{l}=0\quad\Longrightarrow\quad a_{kl}=0,\quad\mbox{for all $k,l=1,\dots,N$,}
\end{equation}
such that the following relation holds:
\begin{equation} \mbox{Deg}\left(P\right)+s=k_{0},\quad\mbox{for $\mbox{Deg}\left(P\right),s\in\mathbb{N}$}.
\end{equation}
Then, in order to construct normal forms, we consider the following Model
\begin{equation}
{\sf Im}\, w=\left({\sf Re}\, w\right)^{s}P\left(z, \overline{z}\right),\label{Model}
\end{equation}
when (\ref{N1}) holds. As required in \cite{bu2}, we make this Model (\ref{Model}) homogeneous, regardless of its non-triviality. We then use, following \cite{bu2},\cite{bu3}, the following system of pseudo-weights, according to the following notations
\begin{equation}x={\sf Re}\, w,\quad P\left(z, \overline{z}\right)=\displaystyle\sum_{k,l=1\atop{\alpha_{k}+\beta_{l}}=k_{0}-s}^{N} p_{\alpha_{k}\beta_{l}}z_{k}^{\alpha_{k}}\overline{z}_{l}^{\beta_{l}}.\label{ferm}
\end{equation}
We define
\begin{equation}\mbox{wt}\left\{ x\right\}=k_{0},\quad\quad \mbox{wt}\left\{z_{k}\right\}=1,\quad \mbox{wt}\left\{\overline{z}_{k}\right\}=1,\quad\mbox{for all $k=1,\dots, N$.}\label{pp}\end{equation}
We define
\begin{equation} \mbox{wt}\left\{z^{\alpha}\overline{z}^{\beta}\right\}=\alpha+\beta,\quad\mbox{for all $\alpha,\beta\in\mathbb{N}$.}\label{pseu1}
\end{equation}
Similarly, we define
\begin{equation} \mbox{wt}\left\{\overline{z}^{\alpha}x^{\beta}\right\}=\alpha+\beta, \quad\mbox{for all $\alpha,\beta\in\mathbb{N}$ with $\alpha\neq 0$.}\label{pseu2}
\end{equation}
Similarly, we define
\begin{equation} \mbox{wt}\left\{z^{\alpha}x^{\beta}\right\}=\alpha+\beta, \quad\mbox{for all $\alpha,\beta\in\mathbb{N}$ with $\alpha\neq 0$.}\label{pseu3}
\end{equation}
Next, we define
\begin{equation} \mbox{wt}\left\{x^{N} z^{\alpha}\overline{z}^{\beta} \right\}=\left\{\begin{split}& N+\alpha+\beta,\quad\hspace{0.21 cm}\quad\mbox{for all $N, \alpha,\beta\in\mathbb{N}$ with $\alpha+\beta<k_{0}-s$ and $\alpha\neq 0$ or $\beta\neq 0$,}\\& N-s+\alpha+\beta,\quad\mbox{for all $N, \alpha,\beta\in\mathbb{N}$ with $\alpha+\beta=k_{0}-s$ and $\alpha\neq 0$ and $\beta\neq 0$,} \end{split}\right.\label{pseu4}
\end{equation}
extending (\ref{pseu1}), (\ref{pseu2}) and (\ref{pseu3}).
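To illustrate how these pseudo-weights are evaluated (the numerical values $s=2$, $k_{0}=4$ below are chosen purely for illustration and do not come from the text), note that then $k_{0}-s=2$, so (\ref{pseu4}) gives
\begin{equation*}
\mbox{wt}\left\{x^{2} z\right\}=2+1+0=3,\qquad\qquad \mbox{wt}\left\{x\, z\overline{z}\right\}=1-s+(1+1)=1,
\end{equation*}
the first monomial falling under the first case (since $\alpha+\beta=1<k_{0}-s$ and $\alpha\neq 0$), and the second under the second case (since $\alpha+\beta=k_{0}-s$ with $\alpha\neq 0$ and $\beta\neq 0$).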
Now, in order to provide further definitions, we observe that the homogeneous polynomial $P$ of degree $k_{0}-s$ is defined by monomials
$$ z^{a}\overline{z}^{b},\quad\mbox{where $a+b=k_{0}-s$ and $a,b\in\mathbb{N}^{\star}$.}$$
Then, we define
\begin{equation} \mbox{wt}\left\{x^{N} z^{a\beta}\overline{z}^{b\beta} \right\}=\left(N-(s-1)\beta\right)k_{0},\quad\mbox{for all $N,\beta, a,b\in\mathbb{N}$ with $a+b=k_{0}-s$ and $\left(N-(s-1)\beta\right)k_{0}\geq0$.}\label{pseu5}
\end{equation}
Therefore, it is required to define
\begin{equation}\begin{split}& \mbox{wt}\left\{x^{N} z^{a\beta+c}\overline{z}^{b\beta} \right\}=\left(N-(s-1)\beta\right)k_{0}+c,\quad\mbox{for all $N,c,\beta, a,b\in\mathbb{N}$ with $a+b=k_{0}-s$ and $\left(N-(s-1)\beta\right)k_{0}\geq0$,}\\& \mbox{wt}\left\{x^{N} z^{a\beta}\overline{z}^{b\beta+c} \right\}=\left(N-(s-1)\beta\right) k_{0} +c,\quad\mbox{for all $N,c,\beta, a,b\in\mathbb{N}$ with $a+b=k_{0}-s$ and $\left(N-(s-1)\beta\right)k_{0}\geq0$.}\end{split}\label{pseu6}
\end{equation}
Otherwise, when $\left(N-(s-1)\beta\right)k_{0} \leq 0$, we define
\begin{equation} \mbox{wt}\left\{x^{N} z^{a\beta}\overline{z}^{b\beta} \right\}=\left(N-(s-1)\beta\right)k_{0}+\left(a+b\right)\left(\beta-\beta'\right), \label{pseu66}
\end{equation}
for all $N, a,b\in\mathbb{N}$ with $a+b=k_{0}-s$ and $\beta'\in\mathbb{N}$ maximal such that $\left(N-(s-1)\beta'\right)k_{0}\geq0$.
Clearly, the best definition is attained when the right-hand side of (\ref{pseu66}) is minimal, that is, when $\beta'\in\mathbb{N}$ is maximal with the above property, because more evaluations are then available. We denote them by
\begin{equation} \mbox{wt}_{N,a,b}\left\{x^{N} z^{a}\overline{z}^{b} \right\},\quad \mbox{where $N,a,b\in\mathbb{N}$ and $a,b\neq 0$.} \label{pseu7}
\end{equation}
Therefore, the best definition is clearly the following
\begin{equation}\mbox{wt}\left\{x^{N} z^{a}\overline{z}^{b} \right\}=\mbox{Min}\left(\mbox{wt}_{N,a,b}\left\{x^{N} z^{a}\overline{z}^{b} \right\}\right), \quad \mbox{where $N,a,b\in\mathbb{N}$ and $a,b\neq 0$.} \label{pseu77}
\end{equation}
Now, the Model (\ref{Model}) becomes pseudo-weighted-homogeneous with respect to the system of pseudo-weights from (\ref{pseu1}),(\ref{pseu2}),(\ref{pseu3}),(\ref{pseu4}), (\ref{pseu5}),(\ref{pseu6}),(\ref{pseu7}),(\ref{pseu77}). Then, we consider the following weighted Fischer Decompositions
\begin{equation}\begin{split}&\quad\hspace{0.15 cm} z^{I}=\left(x+ix^{s}P(z,z)\right)A_{I}(z,\overline{z},x)+B_{I}(z,\overline{z},x),\quad\hspace{0.28 cm}\mbox{where ${\bf tr\,} \left(B_{I}(z,\overline{z},x)\right)=0$,} \\& x\overline{z}_{l}z^{J}=\left(x+ix^{s}P(z,z)\right)\tilde{A}_{J,l}(z,\overline{z},x)+\tilde{B}_{J,l}(z,\overline{z},x),\quad\mbox{where ${\bf tr\,} \left(\tilde{B}_{J,l}(z,\overline{z},x)\right)=0$,} \end{split}\label{gogse}
\end{equation}
for all $I, J\in\mathbb{N}^{N}$ such that $\left|I\right|=k $ and $\left|J\right|=k-1$, for all $l=1,\dots,N$ and $k\in\mathbb{N}-\{1\}$, where ${\bf tr\,}$ is the associated pseudo-weighted differential operator, according to the following standard notations
\begin{equation}\begin{split}&I=\left(i_{1},i_{2},\dots,i_{N}\right),\quad\hspace{0.1 cm} \left|I\right|= i_{1}+i_{2}+\dots+i_{N}; \\& J=\left(j_{1},j_{2},\dots,j_{N}\right),\quad\left|J\right|= j_{1}+j_{2}+\dots+j_{N}.\end{split} \label{lo11}
\end{equation}
Next, we focus on the following family of polynomials
\begin{equation}\left\{\tilde{B}_{J,l}(z,\overline{z},x),\quad B_{I}(z,\overline{z},x),\quad\overline{B_{I}(z,\overline{z},x)},\quad\overline{\tilde{B}_{J,l}(z,\overline{z},x)}\right\}_{k\in\mathbb{N}^{\star}-\{1\}\atop{l=1,\dots,N}},\label{gog1}
\end{equation}
which are linearly independent, and therefore we can apply the methods from \cite{bu2}.
Then, it makes sense to
consider the following iterative Spaces of Fischer Normalizations, denoted as follows
\begin{equation}\mathcal{F}_{p},\quad \mbox{where $p\in\mathbb{N}^{\star}-\{1\}$,}
\label{spaF}
\end{equation}
which consist of real-valued polynomials $P(z,\overline{z},x)=P_{0}(z,\overline{z},x)$ of weighted degree $p $ in $(z,\overline{z},x)$ such that:
$$P_{k}(z,\overline{z},x)=P_{k+1}(z,\overline{z},x)\left(x+ix^{s}P(z,z)\right)+R_{k+1}(z,\overline{z},x),\hspace{0.1 cm}\mbox{for all $k=0,\dots,\left[\frac{p}{2}\right]$,}$$
where we have
$$R_{k+1}(z,\overline{z},x)\in \displaystyle\bigcap_{I, J\in\mathbb{N}^{N}\atop{{\left|I\right|=k,\hspace{0.1 cm} \left|J\right|=k-1} \atop{l=1,\dots,N}}} \left( \ker \left(\tilde{B}_{J,l}(z,\overline{z},x)\right)^{\star} \displaystyle\bigcap \ker \left(B_{I}(z,\overline{z},x)\right)^{\star}\displaystyle\bigcap \ker \left(\overline{B_{I}(z,\overline{z},x)}\right)^{\star} \displaystyle\bigcap \ker \left(\overline{\tilde{B}_{J,l}(z,\overline{z},x)}\right)^{\star} \right) .$$
We obtain:
\begin{Thm}\label{t} Let $M\subset\mathbb{C}^{N+1}$ be a real-formal hypersurface defined as follows
\begin{equation}
{\sf Im}\, w=\left({\sf Re}\, w\right)^{s}P\left(z, \overline{z}\right)+ \displaystyle\sum
_{k\geq k_{0}+1}\varphi_{k}\left(z,\overline{z},{\sf Re}\, w\right), \label{M1}\end{equation}
where $\varphi_{k}\left(z,\overline{z},{\sf Re}\, w\right)$ is a
polynomial of degree $k$ in $\left(z,\overline{z},x\right)$ for all
$k\geq k_{0}+1$.
Then, there exists a unique formal transformation of the following type
\begin{equation}\left(F(z,w),G(z,w)\right)
=\left(z+\mbox{O}(2),w+\mbox{O}(2)\right),
\label{map}\end{equation}
which transforms $M$ into the following normal form $M'\subset\mathbb{C}^{N+1}$ defined as follows
\begin{equation}
{\sf Im}\, w'=\left({\sf Re}\, w'\right)^{s}P\left(z', \overline{z}'\right)+ \displaystyle\sum
_{k\geq k_{0}+1}{\varphi'}_{k}\left(z',\overline{z'},{\sf Re}\, w'\right), \label{M2}\end{equation}
where ${\varphi'}_{k}\left(z',\overline{z'},{\sf Re}\, w'\right)$ is a
polynomial of degree $k$ in $\left(z',\overline{z'},{\sf Re}\, w'\right)$ for all
$k\geq k_{0}+1$, respecting the following normalizations
\begin{equation}{\varphi'}_{k}\in \mathcal{F}_{k}\quad \mbox{such that $x^{\star}\left(P_{\frac{k}{k_{0}}-1}\left(z,\overline{z},{\sf Re}\, w\right)\right)=0$},\quad \mbox{for all $k\geq k_{0}+1$},\label{Nor}
\end{equation}
given the following assumptions
\begin{equation}{\sf Re}\, \left(\frac{\partial^{k_{0}} G(z,w)}{\partial w^{k_{0}}}(0,0)\right)=0,\quad {\sf Im}\, \left(\frac{\partial F_{l}(z,w)}{\partial w }(0,0)\right)=0,\quad\mbox{for all $l=1,\dots,N$.}\label{Norr}
\end{equation}
\end{Thm}
This formal normal form (\ref{M2}) is constructed inductively by an iterative procedure similar to that of \cite{bu2}; the computations and the strategy applied are described as follows:
\section{Proof of Theorem \ref{t}}We take a formal holomorphic change of coordinates as in (\ref{map}) which
sends $M\subset\mathbb{C}^{N+1}$, defined by (\ref{M1}), into $M'\subset\mathbb{C}^{N+1}$, defined by (\ref{M2}), obtaining the following equation
\begin{equation} \left.\begin{split}&\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\displaystyle \sum
_{m,n\geq0}{\sf Im}\, G_{m,n}(z)\left({\sf Re}\, w+i\left({\sf Re}\, w\right)^{s}P(z,z) +\displaystyle\sum
_{k\geq k_{0}+1}\varphi_{k}\left(z,\overline{z},{\sf Re}\, w\right)\right)^{n} \\& \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad
\quad\quad\quad\quad\quad\quad\quad\quad \begin{tabular}{l} \rotatebox[origin=c]{270}{$=$}\end{tabular} \\&\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \left( \sum
_{m,n\geq0}{\sf Re}\, G_{m,n}(z)\left({\sf Re}\, w+i\left({\sf Re}\, w\right)^{s}P(z,z) +\displaystyle\sum
_{k\geq k_{0}+1}\varphi_{k}\left(z,\overline{z},{\sf Re}\, w\right)\right)^{n}\right)^{s}\cdot\\& \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad P\left(\displaystyle\sum _{m,n \geq 0}
F_{m,n}(z)\left({\sf Re}\, w+i\left({\sf Re}\, w\right)^{s}P(z,z) +\displaystyle\sum
_{k\geq k_{0}+1}\varphi_{k}\left(z,\overline{z},{\sf Re}\, w\right)\right)^{n},\right.\\& \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad \left.\overline{\displaystyle\sum _{m,n \geq 0}
F_{m,n}(z)\left({\sf Re}\, w+i\left({\sf Re}\, w\right)^{s}P(z,z) +\displaystyle\sum
_{k\geq k_{0}+1}\varphi_{k}\left(z,\overline{z},{\sf Re}\, w\right)\right)^{n}}\right) \\& \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad
\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \begin{tabular}{l} \rotatebox[origin=c]{270}{$+$}\end{tabular} \\& \quad\quad\quad\quad\quad\quad\quad\quad\displaystyle\sum
_{k\geq k_{0}+1}\varphi_{k}'
\left(\displaystyle\sum _{m,n \geq
0}F_{m,n}(z)\left({\sf Re}\, w+i\left({\sf Re}\, w\right)^{s}P(z,z) +\displaystyle\sum
_{k\geq k_{0}+1}\varphi_{k}\left(z,\overline{z},{\sf Re}\, w\right)\right)^{n} ,\right.\\& \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \left. \overline{\displaystyle\sum _{m,n \geq
0}F_{m,n}(z)\left({\sf Re}\, w+i\left({\sf Re}\, w\right)^{s}P(z,z) +\displaystyle\sum
_{k\geq k_{0}+1}\varphi_{k}\left(z,\overline{z},{\sf Re}\, w\right)\right)^{n}},\right.\\& \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \left. {\sf Re}\, \displaystyle\sum _{m,n \geq
0}F_{m,n}(z)\left({\sf Re}\, w+i\left({\sf Re}\, w\right)^{s}P(z,z) +\displaystyle\sum
_{k\geq k_{0}+1}\varphi_{k}\left(z,\overline{z},{\sf Re}\, w\right)\right)^{n}\right).
\end{split}\right.
\label{ecuatiese}\end{equation}
Next, by possibly composing with a linear automorphism of the Model (\ref{Model}), we may assume that we work with a formal equivalence as in (\ref{map}), obtaining the following important equation
\begin{equation}\begin{split}& \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\sum
_{m+2n=T }\frac{G_{m,n}(z)-\overline{G_{m,n}(z)}}{2\sqrt{-1}}\left({\sf Re}\, w+i\left({\sf Re}\, w\right)^{s}P(z,z) \right)^{n}\\& \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad
\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \begin{tabular}{l} \rotatebox[origin=c]{270}{$=$}\end{tabular}\\&\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\left({\varphi'}_{T}-{\varphi}_{T}\right)\left(z,\overline{z},{\sf Re}\, w\right)\\& \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad +
\\& \left({\sf Re}\, w+i\left({\sf Re}\, w\right)^{s}P(z,z) \right) \left(P\left(\displaystyle\sum_{m+2n=T}
F_{m,n}(z)\left({\sf Re}\, w+i\left({\sf Re}\, w\right)^{s}P(z,z) \right)^{n},z\right) +\overline{P\left(\displaystyle\sum_{m+2n=T}
F_{m,n}(z)\left({\sf Re}\, w+i\left({\sf Re}\, w\right)^{s}P(z,z) \right)^{n},z\right)}\right)\\& \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad +
\\&\sum_{T_{1}+T_{2}=T\atop T_{1},T_{2}\neq 0} \sum_{m+2n=T_{1} }\frac{G_{m,n}(z)+\overline{G_{m,n}(z)}}{2 }\left({\sf Re}\, w+i\left({\sf Re}\, w\right)^{s}P(z,z) \right)^{n}\left(\left<\displaystyle\sum_{m+2n=T_{2}}
F_{m,n}(z)\left({\sf Re}\, w+i\left({\sf Re}\, w\right)^{s}P(z,z) \right)^{n},z\right> \right.\\&\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad +
\\&\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad \quad \quad \quad \quad \quad \quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad \left. \overline{\left<\displaystyle\sum_{m+2n=T_{2}}
F_{m,n}(z)\left({\sf Re}\, w+i\left({\sf Re}\, w\right)^{s}P(z,z) \right)^{n},z\right>}\right),
\end{split}\label{ecuatie1}\end{equation}
where the terms already determined (of the same pseudo-weight) are normalized according to a natural induction process depending on the natural number $T \geq k_{0}+1$, computing (\ref{map}) with respect to the normalizations (\ref{Nor}) and (\ref{Norr}).
Then, (\ref{ecuatie1}) implies
\begin{equation}\begin{split}&\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \sum
_{m+2n=T }\frac{G_{m,n}(z)-\overline{G_{m,n}(z)}}{2\sqrt{-1}}\left({\sf Re}\, w+i\left({\sf Re}\, w\right)^{s}P(z,z) \right)^{n}\\& \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad
\quad\quad\quad\quad\quad\quad\quad\quad \begin{tabular}{l} \rotatebox[origin=c]{270}{$=$}\end{tabular} \\&\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \left({\varphi'}_{T}-{\varphi}_{T}\right)\left(z,\overline{z},{\sf Re}\, w\right)
\\&\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad +\\&\quad\quad\quad\quad \quad\quad\quad\quad \quad\quad\quad\quad \left({\sf Re}\, w+i\left({\sf Re}\, w\right)^{s}P(z,z) \right)\left(\left<\displaystyle\sum_{m+2n=T}
F_{m,n}(z)\left({\sf Re}\, w+i\left({\sf Re}\, w\right)^{s}P(z,z) \right)^{n},z\right> \right.\\&\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad +
\\&\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad \quad \quad \quad \quad \quad \quad\quad\quad\quad\quad\quad\quad \quad \left.\overline{\left<\displaystyle\sum_{m+2n=T}
F_{m,n}(z)\left({\sf Re}\, w+i\left({\sf Re}\, w\right)^{s}P(z,z) \right)^{n},z\right>}\right),
\end{split}\label{ecuatie2}\end{equation}
for all $T \geq k_{0}+1$, up to terms already determined.
Now, it remains to study the linear independence of the polynomials from (\ref{gog1}), which we write with respect to (\ref{ferm}) and the previous system of pseudo-weights.
Then, in order to analyse the linear independence of (\ref{gog1}), we write
\begin{equation}\begin{split}& \hspace{0.18 cm} B_{I}(x,z)= z^{I}-\left({\sf Re}\, w+i\left({\sf Re}\, w\right)^{s}P(z,z) \right)A_{I}(x,z), \\& \tilde{B}_{J,l}(x,z)=\overline{z}_{l}z^{J}-\left({\sf Re}\, w+i\left({\sf Re}\, w\right)^{s}P(z,z) \right)\tilde{A}_{J,l}(x,z), \end{split}\label{gogsese}\end{equation}
respecting (\ref{lo11}).
Now, we analyse the pure terms in $z$ in (\ref{gogsese}). It is clear that $z^{I}$ is the only pure term appearing as a component of the first polynomial in (\ref{gogsese}). Hence any vanishing linear combination among the first class of polynomials in (\ref{gogsese}) must be trivial, which gives their linear independence.
Next, we analyse the second class of polynomials in (\ref{gogsese}). Then, (\ref{gogse}) may provide a term which cancels $\overline{z}_{l}z^{J}$ or not. In the latter situation, it provides a pure term multiplied by $x$.
Now, the linear independence of the polynomials considered in (\ref{gogse}) becomes clear with respect to a lexicographic order, because any vanishing linear combination of polynomials from (\ref{gogse}) is trivial. The proof is completed, because (\ref{spaF}) uniquely determines (\ref{map}).
\section{Acknowledgements} This is my own work, carried out independently and derived from my doctoral research. I am indebted to Science Foundation Ireland, whose funding supported my doctoral studies at Trinity College Dublin. Special thanks to my supervisor (Prof. Dmitri Zaitsev) for many conversations regarding \cite{bu1}, which forms the main part of my doctoral thesis and was fully supported by Science Foundation Ireland Grant 06/RFP/MAT 018.
Q: How to include a PDF (fetched with a PHP script from a DB) into HTML I want to include a PDF into HTML.
I'll get the PDF file from a DB.
The PHP works fine and shows me the PDF in the browser's PDF reader.
<?php
ini_set('display_errors', 1);
ini_set('display_startup_errors', 1);
error_reporting(E_ALL);
//Insert MySQL connection here
$mysqli = new mysqli("ip", "root", "password", "name");
if ($mysqli->connect_errno) {
die("Connection refused: " . $mysqli->connect_error);
}
try{
$sql = "SELECT file FROM Master_Ordner_AMS WHERE id = 1";
$result = mysqli_query($mysqli, $sql);
$row = mysqli_fetch_object($result);
header('Content-type: application/pdf');
$pdf = $row->file;
echo $pdf; // send the raw PDF bytes to the browser
}catch(Exception $e){
echo "caught exception: ", $e->getMessage(), "\n";
}
?>
If I want to add it in a simple HTML page to customize it, it doesn't work.
I just want a little div around it or something else.
A: Well, you can create an iframe and display the PDF file inside the iframe. You can customize the border of the iframe using CSS.
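For example, a minimal sketch of this approach could look like the following (here getpdf.php is a placeholder name for the PHP script above, and the sizes are arbitrary):

<!-- wrapper div whose border can be customized with CSS -->
<div style="border: 2px solid #333; padding: 4px; width: 620px;">
  <!-- the iframe loads the PHP script that sends Content-type: application/pdf -->
  <iframe src="getpdf.php" width="600" height="800" style="border: none;">
    Your browser does not support iframes.
  </iframe>
</div>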
Norovirus Cruise: Outbreak sickens passengers and crew on two ships
Thursday, January 03, 2013 by: Raw Michelle
Tags: norovirus, outbreak, cruise ships
https://www.naturalnews.com/038528_norovirus_outbreak_cruise_ships.html
(NaturalNews) 400 passengers and crew aboard two Christmas cruise ships sailing in the Caribbean have been sickened by what health officials believe is the winter vomiting bug, or Norovirus. According to federal health authorities, the first outbreak occurred a few days before Christmas on the grand-class cruise ship the Emerald Princess, where 189 passengers and 31 crew members developed vomiting and diarrhea. The second outbreak was reported on the Queen Mary 2 ocean liner, affecting 194 passengers and 11 crew members.
Sombre Christmas for infected passengers
Since procedure requires that outbreaks be reported when more than 2% of the crew and passengers show symptoms of infection, the CDC learned about the Queen Mary 2 situation on Christmas Day. Several people who have been on the cruise ships mentioned that the staff has taken steps to contain the infection.
"The festivities continue and those of us who have avoided this virus continue to enjoy the many offerings we come to expect and appreciate. For those passengers who have been exposed, they are confined to their cabins until declared safe to come out", said one of the Emerald Princess passengers.
A passenger of the Queen Mary 2, however, is less enthusiastic about the prospect of continuing the cruise. "There is a sense of foreboding, with everyone worried that they will be next to come down with the illness", she said.
Norovirus kills 800 people each year, while over 21 million are infected and 70,000 require hospitalization.
What is Norovirus?
A highly contagious infectious agent, Norovirus is transmitted through contaminated food and water supplies, but also from person to person, with potentially disastrous effects in closed communities, such as camps, hospitals, prisons, schools and cruise ships. The virus is the most common cause of acute gastroenteritis (or stomach flu), a type of gastrointestinal tract inflammation.
Symptoms of stomach flu include abdominal pain, nausea, vomiting, and watery diarrhea, coupled with muscle weakness and cramps, low-grade fever, and lethargy. Infants and the elderly are most vulnerable, but immunocompromised individuals may even die from severe Norovirus infections.
Home remedies against Norovirus
While the most efficient way to get rid of Norovirus is to disinfect exposed surfaces and eliminate the source of infection, there are several natural remedies that can bolster immunity and help beat the stomach bug faster and easier.
The most important thing when dealing with vomiting and diarrhea is staying hydrated, so drinking plenty of liquids is also the first measure against Norovirus infection.
Peppermint tea is popular among herbalists for its proven antispasmodic properties, and its ability to swiftly relieve pains and stomach problems, particularly diarrhea.
Apple cider vinegar has been known to have antimicrobial functions, while ginger root is a traditional tummy soother, and its unique properties can help control vomiting.
Coconut water-based probiotic beverages boost immunity by inhibiting the growth of pathogens, and have been known to aid against diarrhea-causing viruses, including Norovirus.
Raw Michelle is a natural health blogger and researcher, sharing her passions with others, using the Internet as her medium. She discusses topics in a straight forward way in hopes to help people from all walks of life achieve optimal health and well-being. She has authored and published hundreds of articles on topics such as the raw food diet and green living in general.
\section{Introduction}\label{introduction}
In this paper we consider the following question: How does the initial data for an SPDE affect the statistics of the solution at a later time? Namely, we consider the Kardar-Parisi-Zhang (KPZ) equation (or equivalently, the stochastic heat equation (SHE)) and probe the lower and upper tails of the centered (by time$/24$) and scaled (by time$^{1/3}$) one-point distribution for the solution at finite and long times.
Our main results (Theorems~\ref{Main1Theorem} and \ref{Main4Theorem}) show that within a very large class of initial data, the tail behavior for the KPZ equation does not change in terms of the super-exponential decay rates and at most changes in terms of the coefficient in the exponential. These results are the first tail bounds for general initial data which capture the correct decay exponents and which respect the long-time scaling behavior of the solution.
In order to state our results, let us recall the KPZ equation, which is formally written as
\begin{align}
\partial_T \mathcal{H}(T,X) &= \frac{1}{2}\partial^2_X \mathcal{H}(T, X) + \frac{1}{2}(\partial_X \mathcal{H}(T,X))^2+ \xi(T, X), \qquad
\mathcal{H}(0,X) = \mathcal{H}_0(X).
\end{align}
Here, $\xi$ is the space-time white noise, whose presence (along with the non-linearity) renders this equation ill-posed.
A proper definition of the solution of the KPZ equation comes from the Cole-Hopf transform by which we {\em define}
\begin{align}\label{eq:ColeHpf}
\mathcal{H}(T,X) := \log \mathcal{Z}(T,X)
\end{align}
where $\mathcal{Z}(T,X)$ is the unique solution of the well-posed SHE
\begin{align}\label{eq:SHEDef}
\partial_T \mathcal{Z}(T,X) = \frac{1}{2} \partial^2_X \mathcal{Z}(T,X) + \mathcal{Z}(T,X) \xi(T,X), \quad \mathcal{Z}_0(X) = e^{\mathcal{H}_0(X)}.
\end{align}
Note that the logarithm in \eqref{eq:ColeHpf} is defined since $\mathcal{Z}(T,X)$ is almost-surely strictly positive for all $T>0$ and $X\in \mathbb{R}$ \cite{Mueller91}. We refer to \cite{Quastel12, Corwin12, Hairer13} for more details about the KPZ equation and the SHE and their relation to random growth, interacting particle systems, directed polymers and other probabilistic systems (see also \cite{Mol96,Davar,Bertini1995,BarraquandCorwin15,Comets16}).
In this paper, we consider very general initial data as now describe.
\begin{defn}\label{Hypothesis}
Fix $\nu \in (0,1)$ and $C,\theta, \kappa, M>0$. A measurable function $f:\mathbb{R}\to \mathbb{R}\cup \{-\infty\}$ satisfies $\mathbf{Hyp}(C,\nu,\theta, \kappa, M)$ if:
\begin{enumerate}[(1)]
\item \mbox{}
\vskip-1.3cm
\begin{align}\label{eq:InMomBd}
f(y) \leq C + \frac{\nu}{2^{2/3}} y^2, \quad \forall y\in \mathbb{R},
\end{align}
\item there exists a subinterval $\mathcal{I}\subset [-M,M]$ with $|\mathcal{I}|=\theta$ such that
\begin{align}\label{eq:LowInitBd}
f(y)\geq -\kappa, \quad \forall y\in \mathcal{I}.
\end{align}
\end{enumerate}
\end{defn}
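For instance, the flat initial data $f\equiv 0$ satisfies $\mathbf{Hyp}(C,\nu,\theta, \kappa, M)$ for every $C,\kappa, M>0$ and $\nu\in(0,1)$, with $\mathcal{I}=[-M,M]$ and $\theta=2M$:
\begin{equation*}
0\,\leq\, C+\frac{\nu}{2^{2/3}}\,y^{2}\quad \forall y\in\mathbb{R},\qquad\qquad 0\,\geq\, -\kappa\quad \forall y\in\mathcal{I}=[-M,M],
\end{equation*}
so that \eqref{eq:InMomBd} and \eqref{eq:LowInitBd} both hold.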
For a measurable function $f:\mathbb{R}\to \mathbb{R}\cup \{-\infty\}$, and $T>0$ consider the solution to the KPZ equation with initial data $\mathcal{H}_0$ chosen such that
\begin{align}\label{eq:ScaledInitialData}
T^{-\frac{1}{3}}\mathcal{H}_0\big((2T)^{\frac{2}{3}}y\big) = f(y).
\end{align}
We consider the KPZ equation with this initial data and run until time\footnote{Notice that the initial data and time horizon are both dependent on $T$. This allows for a much wider class of initial data which are adapted to the KPZ fixed point scaling.} $T$. Namely, let
\begin{align}\label{eq:ScalCentHeight}
h^f_T(y):=\frac{\mathcal{H}\big(2T, (2T)^{\frac{2}{3}}y\big)+\frac{T}{12} -\frac{2}{3}\log(2T)}{T^{\frac{1}{3}}}.
\end{align}
\begin{figure}[h]
\includegraphics[width=.45\linewidth]{Figure1.pdf}
\caption{Schematic plot of the density (top) and log density (bottom) of $h^f_T(0)$. Letting $s$ denote the horizontal axis variable, there are four regions which display different behaviors. Region $I$ (deep lower tail, when $s\ll -T^{2/3}$): the log density has power law decay with exponent $5/2$. Region $II$ (shallow lower tail, when $-T^{2/3} \ll s\ll 0$): the log density has power law decay with exponent $3$. Region $III$ (center, when $s\approx 0$): the density depends on initial data as predicted by the KPZ fixed point. Region $IV$ (upper tail, when $s\gg 0$): the log density has power law decay with exponent $3/2$. The universality of the power law exponents (in regions $I$, $II$ and $IV$) for general initial data constitutes the main contribution of this paper.}
\label{fig:Figure1}
\end{figure}
Our first main result (Theorem~\ref{Main1Theorem}) provides an upper bound on the lower tail that holds uniformly over $f\in\mathbf{Hyp}(C,\nu, \theta, \kappa, M)$, and $T>1$. The proof of this and our other main results are deferred to the later sections of the paper.
\begin{thm}\label{Main1Theorem}
Fix any $\epsilon, \delta\in (0,\frac{1}{3})$, $C, M,\theta>0$, $\nu\in (0,1)$, and $T_0>0$. There exist $s_0=s_0(\epsilon, \delta, C,M, \theta,\nu,T_0)$ and $K=K(\epsilon, \delta, T_0)>0$ such that for all $s\geq s_0$, $T\geq T_0$, and $f\in \mathbf{Hyp}(C,\nu, \theta, \kappa, M)$ (recall $h^f_T(y)$ is defined in \eqref{eq:ScaledInitialData} and \eqref{eq:ScalCentHeight}),
\begin{align}\label{eq:UpperBoundDetData}
\mathbb{P}\left(h^f_T(0)\leq -s\right)&\leq e^{- T^{1/3}\frac{4(1-\epsilon) s^{5/2}}{15\pi}}
+ e^{- Ks^{3-\delta} - \epsilon s T^{1/3}}+ e^{- \frac{(1-\epsilon)s^3}{12}}.
\end{align}
\end{thm}
\begin{rem}
There are three regions of the lower tail (see $I, II$, and $III$ in Figure \ref{fig:Figure1}). In each region (and for $T$ large) a different one of the three terms on the r.h.s. of \eqref{eq:UpperBoundDetData} becomes active. For instance, for region $I$ when $s\gg T^{2/3}$, the largest term in our bound is the first term in the r.h.s. of \eqref{eq:UpperBoundDetData}.
Likewise, the middle term in the r.h.s. of \eqref{eq:UpperBoundDetData} is active in region $II$ and the last term in region $III$. We presently lack a matching lower bound for the lower tail probability; such a bound is known only for the narrow wedge (see Proposition~\ref{NotMainTheorem}). See Section \ref{sec:previouswork} for some discussion regarding physics literature related to this tail.
Let us also note that one can get similar bound as in \eqref{eq:UpperBoundDetData} on $\mathbb{P}\big(h^{f}_T(y)\leq -s\big)$ when $y\neq 0$. This is explained in Section~\ref{sketch}. Finally, observe that two important choices of initial data --- narrow wedge and Brownian motion --- do not fit into this class\footnote{The flat initial data is in the class and arises from $f\equiv 0$.}. The narrow wedge result is in fact a building block for the proof of this result, while Brownian follows as a fairly easy corollary (see Section~\ref{NWandBM}).
\end{rem}
Our second main result pertains to the upper tail and shows upper and lower bounds which hold uniformly over $f\in\mathbf{Hyp}(C,\nu,\theta,\kappa,M)$, and $T>\pi$.
\begin{thm}\label{Main4Theorem}
Fix any $\nu \in (0,1)$ and $C,\theta, \kappa, M>0$.
For any $T_0>0$, there exist $s_0= s_0(C, \nu, \theta, \kappa, M, T_0)>0$, $c_1= c_1(T_0)>c_2=c_2(T_0)>0$ such that for all $s\geq s_0$, $T>T_0$ and $f\in\mathbf{Hyp}(C,\nu, \theta, \kappa, M)$,
\begin{align}\label{eq:GenBdRough}
e^{-c_1s^{3/2}}\leq \mathbb{P}\big(h^{f}_{T}(0)\geq s\big)\leq e^{-c_2s^{3/2}}.
\end{align}
We may further specify values of $c_1$ and $c_2$ for which \eqref{eq:GenBdRough} holds, provided we assume $T_0>\pi$. In that case, for any $ \epsilon, \mu \in (0, \frac{1}{2})$, there exists $s_0=s_0(\epsilon, \mu, C, \nu, \theta, \kappa, M, T_0)>0$ such that for all $s\geq s_0$, $T\geq T_0$, and $f\in\mathbf{Hyp}(C,\nu, \theta, \kappa, M)$,
\eqref{eq:GenBdRough} holds with the following choices for $c_1>c_2$:
\begin{enumerate}[(i)]
\item If $s_0\leq s< \frac{1}{8}\epsilon^3(1-\frac{2\mu}{3})^{-1} T^{\frac{2}{3}}$ then we may take $c_1=\frac{8}{3}(1+\mu)(1+\epsilon)$ and $c_2= \frac{\sqrt{2}}{3}(1-\mu)(1-\epsilon)$.
\item If $s\geq \max\{s_0, \frac{9}{16}\epsilon^{-2} (1-\frac{2\mu}{3})^{-1} T^{\frac{2}{3}} \}$ then we may take $c_1= 8\sqrt{3}(1+\mu)(1+\epsilon)$ and $c_2=\frac{\sqrt{2}}{3}(1-\mu)(1-\epsilon)$.
\item If $\max\{s_0, \frac{1}{8}\epsilon^3(1-\frac{2\mu}{3})^{-1} T^{\frac{2}{3}} \}\leq s\leq \max\{s_0, \frac{9}{16}\epsilon^{-2} (1-\frac{2\mu}{3})^{-1} T^{\frac{2}{3}} \}$ then we may take $c_1= 2^{9/2}\epsilon^{-3}(1+\mu)$ and $c_2= \frac{\sqrt{2}}{3}(1-\mu)\epsilon$.
\end{enumerate}
\end{thm}
\begin{rem}
In Theorems~\ref{GrandUpTheorem} and \ref{Main6Theorem}, we prove similar results for narrow wedge and Brownian initial data. The upper and lower bounds on the constants $c_1$ and $c_2$ are not optimal. In fact, it is not clear to us how the initial data translates to the optimal value of $c_1$ or $c_2$. There are, however, some predictions in the physics literature -- see Section \ref{sec:previouswork}. The condition $T_0>\pi$ assumed in the second part of Theorem \ref{Main4Theorem} could be replaced by an arbitrary lower bound, though the resulting conditions on $s$, $c_1$ and $c_2$ would need to change accordingly. This value $\pi$ turns out to work well in the computations leading to this result; in particular see \eqref{eq:IntegralBd1}.
\end{rem}
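To make the trichotomy in the theorem concrete, the following sketch (illustrative only; the function and variable names are ours, and the regime boundaries assume $T_0>\pi$ and $s\geq s_0$ as in the statement) selects the regime for a given pair $(s,T)$ and returns the corresponding constants $(c_1,c_2)$.

```python
import math

def upper_tail_constants(s, T, eps, mu, s0):
    """Return (c1, c2) with exp(-c1*s^(3/2)) <= P(h^f_T(0) >= s) <= exp(-c2*s^(3/2)),
    following regimes (i)-(iii) of the theorem (illustrative; assumes T0 > pi, s >= s0)."""
    scale = T ** (2 / 3) / (1 - 2 * mu / 3)
    if s0 <= s < eps ** 3 * scale / 8:
        # regime (i): s well below T^(2/3)
        return (8 / 3) * (1 + mu) * (1 + eps), (math.sqrt(2) / 3) * (1 - mu) * (1 - eps)
    if s >= max(s0, (9 / 16) * eps ** -2 * scale):
        # regime (ii): s well above T^(2/3)
        return 8 * math.sqrt(3) * (1 + mu) * (1 + eps), (math.sqrt(2) / 3) * (1 - mu) * (1 - eps)
    # regime (iii): the intermediate window s = O(T^(2/3))
    return 2 ** 4.5 * eps ** -3 * (1 + mu), (math.sqrt(2) / 3) * (1 - mu) * eps
```

For instance, with $\epsilon=\mu=0.1$ and $T=10^9$, the value $s=10$ falls in regime $(i)$, while $s=10^8$ falls in regime $(ii)$.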
\subsection{Proof sketch}\label{sketch}
The fundamental solution to the SHE $\mathcal{Z}^{\mathbf{nw}}(T,X)$ corresponds to delta initial data $\mathcal{Z}_0(X)=\delta_{X=0}$. For any positive $T$, this results in a strictly positive solution, hence the corresponding KPZ equation solution is well-defined for $T>0$ and this initial data is termed \emph{narrow wedge} since in short time $\mathcal{Z}(T,X)$ is well-approximated by the Gaussian heat-kernel whose logarithm is a very thin parabola $\frac{X^2}{2T}$.
\begin{defn}[Cole-Hopf Transform]\label{bd:ColeHopf}
The Cole-Hopf transform of $\mathcal{Z}^{\mathbf{nw}}(T,X)$ is denoted here by $\mathcal{H}^{\mathbf{nw}}(T,X):= \log \mathcal{Z}^{\mathbf{nw}}(T,X)$. We further define a scaled and centered version of this as
\begin{align}\label{eq:DefUpsilon}
\Upsilon_T(y):= \frac{\mathcal{H}^{\mathbf{nw}}(2T, (2T)^{\frac{2}{3}} y)+ \frac{T}{12}}{T^{\frac{1}{3}}}.
\end{align}
\end{defn}
The proof of our main results relies upon a combination of three ingredients: (1) lower tail bounds for the narrow wedge initial data recently proved in \cite{CG18}, (2) Gibbsian line ensemble techniques applied to the KPZ line ensemble \cite{CorHam16}, and (3) explicit integral formulas for moments of the SHE with delta initial data. Now, we give an overview of our proofs. A more involved discussion of the KPZ line ensemble is contained in Section \ref{Tools}.
To prove Theorem~\ref{Main1Theorem}, one of our main tools is the upper and lower bound for the lower tail of the one point distribution of the narrow wedge solution of the KPZ equation given in Proposition~\ref{NotMainTheorem}. However, to use this result, we need a connection between the solution of the KPZ equation under general initial conditions and the narrow wedge solution. This connection is made through the following identity, which follows from the Feynman-Kac formula and represents the one point distribution of the KPZ equation started from $\mathcal{H}_0$ as a convolution between the spatial process $\Upsilon_T(\cdot)$ and the initial data $\mathcal{H}_0(\cdot)$.
\begin{prop}[Lemma~1.18 of \cite{CorHam16}]\label{Distribution}
For general initial data $\mathcal{H}_{0}(\cdot):=\mathcal{H}(0,\cdot)$ and for a fixed pair $T>0$ and $X\in \mathbb{R}$, the Cole-Hopf solution $\mathcal{H}(T,X)$ of the KPZ equation satisfies
\begin{align}\label{eq:DistUnderGI}
\mathcal{H}(2T,X) \stackrel{d}{=} \log\left(\int^{\infty}_{-\infty} e^{\mathcal{H}^{\mathbf{nw}}(2T,Y)+ \mathcal{H}_0(X-Y)} dY\right)\stackrel{d}{=} -\frac{T}{12}+ \log \Big( \int_{-\infty}^{\infty} e^{T^{\frac{1}{3}} \Upsilon_T((2T)^{-\frac{2}{3}}Y) +\mathcal{H}_0(X-Y) } dY\Big).
\end{align}
\vskip-.2cm
\noindent Furthermore, for $\mathcal{H}_0$ as in \eqref{eq:ScaledInitialData}, we have
\begin{align}\label{eq:DistUnderGI2}
\frac{\mathcal{H}(2T, (2T)^{\frac{2}{3}} X) + \frac{T}{12}- \frac{2}{3}\log (2T)}{T^{\frac{1}{3}}} \stackrel{d}{=} \frac{1}{T^{\frac{1}{3}}}\log \Big( \int_{-\infty}^{\infty} e^{T^{\frac{1}{3}} \big(\Upsilon_T(Y) +f(X-Y)\big)} dY\Big).
\end{align}
\end{prop}
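For the reader's convenience we record the change of variables behind the second statement (our computation; it assumes, as in \eqref{eq:ScaledInitialData}, the scaling relation $\mathcal{H}_0\big((2T)^{\frac{2}{3}}x\big)=T^{\frac{1}{3}}f(x)$). Substituting $Y=(2T)^{\frac{2}{3}}y$, so that $dY=(2T)^{\frac{2}{3}}dy$, into \eqref{eq:DistUnderGI} gives
\begin{align*}
\mathcal{H}\big(2T,(2T)^{\frac{2}{3}}X\big)+\frac{T}{12} \stackrel{d}{=} \frac{2}{3}\log(2T)+ \log \Big( \int_{-\infty}^{\infty} e^{T^{\frac{1}{3}} \big(\Upsilon_T(y) +f(X-y)\big)}\, dy\Big),
\end{align*}
and subtracting $\frac{2}{3}\log (2T)$ and dividing by $T^{\frac{1}{3}}$ yields \eqref{eq:DistUnderGI2}.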
To employ this identity, we need tail bounds for the entire spatial process $\Upsilon_T(\cdot)$. Presently, exact formulas amenable to rigorous asymptotics are only available for one-point tail probabilities, and not multi-point. However, by using the Gibbs property for the KPZ line ensemble (introduced in \cite{CorHam16} and recalled here in Section~\ref{Tools}) we will be able to extend this one-point tail control to the entire spatial process. Working with the Gibbs property is a central technical aspect of our present work and forms the backbone of the proof of Theorem~\ref{Main1Theorem}.
Besides the KPZ line ensemble, another helpful property of the narrow wedge KPZ solution is the stationarity of the spatial process $\Upsilon_T(\cdot)$ after a parabolic shift.
\begin{prop}[Proposition~1.4 of \cite{Amir11}]\label{StationarityProp}
The one point distribution of $\Upsilon_T(y)+\frac{y^2}{2^{2/3}}$ does not depend on the value of $y$.
\end{prop}
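As a quick consistency check (our computation): the coefficient $2^{-2/3}$ matches the parabolic decay of the narrow wedge solution under the scaling \eqref{eq:DefUpsilon}. Since $\mathcal{H}^{\mathbf{nw}}(2T,X)$ behaves like $-\frac{X^2}{4T}$ up to $X$-independent terms, setting $X=(2T)^{\frac{2}{3}}y$ gives
\begin{align*}
\frac{1}{T^{\frac{1}{3}}}\cdot \frac{\big((2T)^{\frac{2}{3}}y\big)^2}{4T} = \frac{(2T)^{\frac{1}{3}}y^2}{2\,T^{\frac{1}{3}}} = \frac{y^2}{2^{\frac{2}{3}}},
\end{align*}
so adding $\frac{y^2}{2^{2/3}}$ exactly compensates for this parabola.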
The proof of Theorem~\ref{Main4Theorem} shares a similar philosophy with that of Theorem~\ref{Main1Theorem}. We first prove an upper bound (as Theorem~\ref{GrandUpTheorem}) and a lower bound for the upper tail probability of $\Upsilon_T(0)$. The proof of Theorem~\ref{GrandUpTheorem} employs a combination of the one-point Laplace transform formula (see Proposition~\ref{ppn:PropConnection}) and moment formulas (see the proof of Lemma \ref{MomBoundLem}) for $\mathcal{Z}^{\mathbf{nw}}$.
The rest of the proof of Theorem~\ref{Main4Theorem} is based on the Gibbs property of the KPZ line ensemble and the FKG inequality for the KPZ equation. The FKG inequality for the KPZ equation is (as shown in \cite[Proposition~1]{CQ11}) a consequence of the positive association of its discrete analogue, the asymmetric simple exclusion process (ASEP).
\begin{prop}[Proposition~1 of \cite{CQ11}]\label{FKGProp} Let $\mathcal{H}$ be the Cole-Hopf solution to KPZ started from initial data $\mathcal{H}_0$. Fix $k\in \mathbb{Z}_{>0}$. For any $T_1,\ldots ,T_k\geq 0$, $X_1,\ldots , X_k\in \mathbb{R}$ and $s_1, \ldots ,s_k\in \mathbb{R}$,
\begin{align}\label{eq:FKGStatement}
\mathbb{P}\Big(\bigcap_{\ell=1}^{k}\big\{\mathcal{H}(T_\ell,X_\ell)\leq s_\ell\big\}\Big)\geq \prod_{\ell =1}^{k}\mathbb{P}\Big(\mathcal{H}(T_\ell,X_\ell)\leq s_{\ell}\Big).
\end{align}
\end{prop}
A simple corollary of this result is that for $T_1, T_2\in \mathbb{R}_{>0}$, $X_1, X_2 \in \mathbb{R}$ and $s_1, s_2\in \mathbb{R}$,
\begin{align}\label{eq:RevFKG}
\mathbb{P}\Big(\mathcal{H}(T_1, X_1)>s_1, \mathcal{H}(T_2, X_2)>s_2\Big)\geq \mathbb{P}\Big(\mathcal{H}(T_1, X_1)>s_1\Big)\mathbb{P}\Big(\mathcal{H}(T_2, X_2)>s_2\Big).
\end{align}
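Indeed, writing $A_\ell := \{\mathcal{H}(T_\ell, X_\ell)\leq s_\ell\}$ for $\ell=1,2$, inclusion-exclusion together with the $k=2$ case of \eqref{eq:FKGStatement} gives
\begin{align*}
\mathbb{P}\big(A_1^{c}\cap A_2^{c}\big) = 1-\mathbb{P}(A_1)-\mathbb{P}(A_2)+\mathbb{P}(A_1\cap A_2) \geq \big(1-\mathbb{P}(A_1)\big)\big(1-\mathbb{P}(A_2)\big) = \mathbb{P}\big(A_1^{c}\big)\,\mathbb{P}\big(A_2^{c}\big),
\end{align*}
which is \eqref{eq:RevFKG}.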
\subsection{Narrow wedge and Brownian initial data results}\label{NWandBM}
Neither narrow wedge nor two-sided Brownian initial data belongs to the class of functions in Definition~\ref{Hypothesis}. We record here the analogues of Theorems~\ref{Main1Theorem} and \ref{Main4Theorem} for these two cases.
As mentioned in the last section, the one point tail results for the narrow wedge solution are important inputs to the proof of Theorems~\ref{Main1Theorem} and \ref{Main4Theorem}. We recall these below.
\begin{prop}[Theorem~1.1 of \cite{CG18}]\label{NotMainTheorem}
Fix $\epsilon, \delta \in (0,\frac{1}{3})$ and $T_0>0$. Then, there exist $s_0= s_0(\epsilon, \delta, T_0)$, $K_1 = K_1(\epsilon, \delta, T_0)>0$, $K_2= K_2(T_0)>0$ such that for all $s\geq s_0$ and $T\geq T_0$,
\begin{align}\label{eq:PrevRes1}
\mathbb{P}(\Upsilon_T(0)\leq -s)&\leq e^{-T^{1/3}\frac{4s^{5/2}(1-\epsilon)}{15\pi} } + e^{-K_1s^{3-\delta}- \epsilon sT^{1/3}} + e^{-\frac{(1-\epsilon)s^3}{12}}\\
\textrm{and,}\qquad \mathbb{P}(\Upsilon_T(0)\leq -s)&\geq e^{-T^{1/3}\frac{4s^{5/2}(1+\epsilon)}{15\pi} } + e^{-K_2s^3}.
\end{align}
\end{prop}
Our general initial data results also rely upon upper and lower bounds on the upper tail probability of $\Upsilon_T(\cdot)$ which are, in fact, new (see Section \ref{sec:previouswork} for a discussion of previous work).
\begin{thm}\label{GrandUpTheorem}
For any $T_0>0$, there exist $s_0= s_0(T_0)>0$ and $c_1=c_1(T_0)> c_2=c_2( T_0)>0$ such that for all $s\geq s_0$ and $T>T_0$
\begin{align}\label{eq:RoughBd1}
e^{-c_1s^{3/2}}\leq \mathbb{P}(\Upsilon_T(0)\geq s)\leq e^{-c_2s^{3/2}}.
\end{align}
We may further specify values of $c_1$ and $c_2$ for which \eqref{eq:RoughBd1} holds, provided we assume $T_0>\pi$. In that case, for any $\epsilon\in (0,\frac{1}{2})$, there exists $s_0= s_0(\epsilon, T_0)>0$ such that for all $s\geq s_0$ and $T\geq T_0$, \eqref{eq:RoughBd1} holds with the following choices for $c_1>c_2$:
\begin{enumerate}[(i)]
\item If $s_0\leq s< \frac{1}{8}\epsilon^{2} T^{\frac{2}{3}}$ then we may take $c_1= \frac{4}{3}(1+\epsilon)$ and $c_2= \frac{4}{3}(1-\epsilon)$.
\item If $s\geq \max\{s_0, \frac{9}{16}\epsilon^{-2} T^{\frac{2}{3}} \}$ then we may take $c_1= 4\sqrt{3}(1+\epsilon)$ and $c_2= \frac{4}{3}(1-\epsilon)$. Furthermore, for $c_1=\frac{4}{3}(1+\epsilon)$ there exists a sequence $\{s_n\}_{n\geq 1}$ with $s_n\to \infty$ as $n\to \infty$ such that $\mathbb{P}(\Upsilon_T(0)>s_n)> e^{-c_1 s_n^{3/2}}$ for all $n$.
\item If $\max\{s_0, \frac{1}{8}\epsilon^2 T^{\frac{2}{3}} \} \leq s \leq \max\{s_0, \frac{9}{16}\epsilon^{-2} T^{\frac{2}{3}} \}$ then we may take $c_1= 2^{7/2}\epsilon^{-3}$ and $c_2=\frac{4}{3}\epsilon$.
\end{enumerate}
\end{thm}
\begin{rem}
Part $(i)$ of Theorem~\ref{GrandUpTheorem} shows that $\mathbb{P}(\Upsilon_T(0)>s)$ is close to $\exp(-4s^{\frac{3}{2}}/3)$ when $s\ll T^{\frac{2}{3}}$. This is in agreement with the fact that the tail probabilities of $\Upsilon_T(0)$ should be close to the tails of the Tracy-Widom GUE distribution as $T$ increases to $\infty$. Part $(ii)$ of Theorem~\ref{GrandUpTheorem} shows that the upper bound on $\mathbb{P}(\Upsilon_T(0)>s)$ is close to $\exp(-4s^{\frac{3}{2}}/3)$ when $s\gg T^{\frac{2}{3}}$. We also have a lower bound in this regime, though it is not tight. However, part $(ii)$ further shows that the lower bound for $\mathbb{P}(\Upsilon_T(0)>s)$ cannot differ much from $\exp(-4s^{\frac{3}{2}}/3)$ along a sequence of arbitrarily large $s$. In the regime $s=O(T^{\frac{2}{3}})$, we do not have tight upper and lower bounds in \eqref{eq:RoughBd1}, although the decay exponent of $\mathbb{P}(\Upsilon_T(0)>s)$ is still equal to $3/2$.
\end{rem}
Our next two results are about the tail probabilities for the KPZ equation with two-sided Brownian motion initial data; as this initial data falls outside our class, some additional arguments are necessary. Define $\mathcal{H}^{\mathrm{Br}}_0:\mathbb{R}\to \mathbb{R}$ as $\mathcal{H}^{\mathrm{Br}}_0(x):= B(x)$ where $B$ is a two-sided standard Brownian motion with $B(0)=0$. Denote the Cole-Hopf solution of the KPZ equation started from this initial data $\mathcal{H}^{\mathrm{Br}}_0$ by $\mathcal{H}^{\mathrm{Br}}(\cdot,\cdot)$ and define
\begin{align}\label{eq:h_BrDefine}
h^{\mathrm{Br}}_{T}(y):= \frac{\mathcal{H}^{\mathrm{Br}}(2T,(2T)^{\frac{2}{3}} y)+ \frac{T}{12} - \frac{2}{3}\log (2T)}{T^{\frac{1}{3}}} \quad \forall T >0.
\end{align}
We first state our result on the lower tail of $h^{\mathrm{Br}}_{T}(0)$.
\begin{thm}\label{Main3Theorem}
Fix $\epsilon, \delta \in (0, \frac{1}{3})$ and $T_0>0$.
There exist $s_0=s_0(\epsilon, \delta, T_0)$ and $K=K(\epsilon, \delta, T_0)>0$ such that for all $s\geq s_0$ and $T\geq T_0$,
\begin{align}\label{eq:UpBoundBrData}
\mathbb{P}\big(h^{\mathrm{Br}}_{T}(0)\leq -s\big)\leq e^{- T^{1/3}\frac{4(1-\epsilon) s^{5/2}}{15\pi}} + e^{- Ks^{3-\delta}- \epsilon s T^{1/3}} + e^{-\frac{(1-\epsilon)s^3}{12}}.
\end{align}
\end{thm}
Our last result of this section is about the upper tail probability of $h^{\mathrm{Br}}_T(0)$.
\begin{thm}\label{Main6Theorem}
Fix $\epsilon,\mu\in (0,\frac{1}{2})$ and $T_0>0$. Then, there exists $s_0 = s_0(\epsilon, \mu, T_0)$ such that for all $s\geq s_0$ and $T\geq T_0$,
\begin{align}\label{eq:UpTailBrData1}
e^{-c_1s^{3/2}} \leq \mathbb{P}\big(h^{\mathrm{Br}}_T(0)>s\big)\leq e^{-c_2s^{3/2}}+ e^{-\frac{1}{9\sqrt{3}}(\mu s)^{3/2}}
\end{align}
where $c_1>c_2$ depend on the values of $\epsilon$, $\mu$ and $T_0$ as described in Theorem~\ref{Main4Theorem}.
\end{thm}
In Theorem \ref{Main6Theorem}, the second term of the upper bound on the right-hand side of \eqref{eq:UpTailBrData1} comes from the randomness of the Brownian initial data, and the first term arises in an analogous way as it does for deterministic initial data in Theorem~\ref{Main4Theorem}.
As proved in \cite[Theorem~2.17]{BCFV}, $h^{\mathrm{Br}}_T(0)$ converges in law to the Baik-Rains distribution (see \cite{BR00, FS06, SS04, PS04, BFP10}). The following corollary strengthens the notion of that convergence and implies that the moments of $h^{\mathrm{Br}}_T(0)$ converge to the moments of the limiting Baik-Rains distribution. This answers a question posed to us by Jean-Dominique Deuschel (namely, that the variance converges).
\begin{cor}\label{ShortCor}
Let $X$ be a Baik-Rains distributed random variable (see \cite[Definition~2.16]{BCFV}). Then, $\mathbb{E}[e^{t |X|}]<\infty$ and for all $t\in \mathbb{R}$,
\begin{align}\label{eq:MGFConv}
\mathbb{E}\big[e^{t |h^{\mathrm{Br}}_T(0)|}\big]\rightarrow \mathbb{E}\big[e^{t |X|}\big], \quad \text{as } T\to \infty.
\end{align}
\end{cor}
\begin{proof}
Theorems~\ref{Main3Theorem} and \ref{Main6Theorem} show that $e^{t|h^{\mathrm{Br}}_T(0)|}$ is uniformly integrable. The dominated convergence theorem, along with \cite[Theorem~2.17]{BCFV} yields \eqref{eq:MGFConv} and $\mathbb{E}[e^{t |X|}]~<~\infty$.
\end{proof}
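Concretely (a sketch of the uniform integrability, with $c>0$ and $s_0$ chosen, depending only on $T_0$, so that the tail bounds of Theorems~\ref{Main3Theorem} and \ref{Main6Theorem} yield $\mathbb{P}(|h^{\mathrm{Br}}_T(0)|>s)\leq e^{-cs^{3/2}}$ for all $s\geq s_0$ and $T\geq T_0$): for any $t\in\mathbb{R}$,
\begin{align*}
\mathbb{E}\big[e^{t |h^{\mathrm{Br}}_T(0)|}\big] \leq 1+ |t|\int_{0}^{\infty} e^{|t| s}\, \mathbb{P}\big(|h^{\mathrm{Br}}_T(0)|>s\big)\, ds \leq e^{|t| s_0} + |t|\int_{s_0}^{\infty} e^{|t| s- c s^{3/2}}\, ds,
\end{align*}
and the right-hand side is finite and independent of $T$.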
\subsection{Previous work and further directions}\label{sec:previouswork}
The study of tail probabilities for the KPZ equation and the SHE has a number of motivations including intermittency and large deviations. We recall some of the relevant previous literature here and compare what is done therein to the results of this present work.
The first result regarding the lower tail probability of $\mathcal{Z}(T,X)$ was the proof of its almost sure positivity by \cite{Mueller91}. Later, \cite{CN08} investigated the lower tail of the SHE restricted to the unit interval with general initial data and Dirichlet boundary conditions; they bounded $\mathbb{P}(\log \mathcal{Z}(T,X)\leq -s)$ from above by $c_1\exp(-c_2s^{\frac{3}{2}-\delta})$ (where $c_1,c_2$ are two positive constants depending implicitly on $T$). In \cite{GMF14}, this upper bound was further improved to $c_1\exp(-c_2s^{2})$ for the delta initial data SHE (the constants are different but still depend implicitly on $T$). Using these bounds, \cite{CorHam16} demonstrated similar upper bounds on the lower tail probability of the KPZ equation under general initial data. There are also tail bounds for the fractional Laplacian ($\Delta^{\alpha/2}$ with $\alpha\in (1,2]$) SHE. \cite[Theorem 1.5]{CHN16} generalizes the bound of \cite{CN08} and shows an upper bound\footnote{In light of our results, it might be natural to expect the true decay exponent is $3-1/\alpha$. Perhaps the methods of \cite{GMF14} can be applied to give decay at least with exponent $2$. Heuristically, one may be able to see the true exponent by using the physics weak noise theory as in, for example, \cite{MKV16b}.} with exponent $2- 1/\alpha$ ($=3/2$ when $\alpha=2$).
None of the previous SHE lower tail bounds were suitable for taking the time $T$ large. Specifically, the constants depend implicitly on $T$, and the centering by $T/24$ and scaling by $T^{1/3}$ were not present. Thus, as $T$ grows, the bounds weaken significantly to the point of triviality. For instance, one cannot use these bounds to conclude tightness of the centered and scaled version of $\log \mathcal{Z}(T,X)$ ($\Upsilon_T(X)$ herein) as $T$ goes to infinity.
The first lower tail bounds suitable for taking $T$ large came in our previous work \cite{CG18}, which dealt with the delta initial data SHE (see Proposition \ref{NotMainTheorem} herein). That result relied upon an identity of \cite{BorGor16} (see Proposition~\ref{ppn:PropConnection}). No analog of that identity seems to exist for general initial data; this is why we use the KPZ line ensemble approach in our present work.
The upper tail probability of the SHE has been studied before in a number of places. For instance, see \cite{ChD15, CJK, KKX17} in regards to its connection to the moments and the intermittency property \cite{GaMo90, GKM07} of the SHE. Again, there is a question of whether the results are suitable for taking $T$ large. The only such result is \cite[Corollary 14]{CQ11}, which shows that for some constants $c_1, c_2, c^{\prime}_1, c^{\prime}_2$, and $s,T\geq 1$,
$ \mathbb{P}(\Upsilon_T(0)>s)\leq c_1\exp(-c^{\prime}_1s T^{1/3})+ c_2\exp(-c^{\prime}_2s^{3/2}).$
When $s \ll T^{\frac{2}{3}}$ the second term is active and one sees the expected $3/2$ power-law in the exponent. However, when $s\gg T^{\frac{2}{3}}$, the leading term above becomes $c_1\exp(-c^{\prime}_1s T^{\frac{1}{3}})$ and only demonstrates exponential decay. Our result (Theorem~\ref{GrandUpTheorem}) shows that $c_1\exp(-c^{\prime}_1s T^{\frac{1}{3}})$ is not a tight upper bound for $\mathbb{P}(\Upsilon_T(0)>s)$ in this regime of $s$. In fact, the $3/2$ power-law is shown to be valid for all $s$ even as $T$ grows (with upper and lower bounds of this sort).
Some works have focused on the large $s$ but fixed $T$ upper tail, e.g. \cite{CJK} showed that
$
\log \mathbb{P}\big(\log \mathcal{Z}(T,X)> s\big) \asymp - s^{\frac{3}{2}} \quad \text{as }s\to \infty
$
where $\mathcal{Z}(0,X) \equiv 1$. These results are not suitable for taking $T$ and $s$ large together.
Our results (Theorems~\ref{Main4Theorem}, \ref{GrandUpTheorem} and \ref{Main6Theorem}) provide the first upper and lower bounds for the upper tail probability which are well-adapted to taking $T$ large. In particular, we show that for a wide range of initial data the exponent of the upper tail decay is always $\frac{3}{2}$ (a result which was not previously proved for any specific initial data). However, the constants in the exponent for our bounds on the upper tail probability are not optimal.
It is natural to speculate on the values of these optimal coefficients. There is some discussion of this in the physics literature (see, for example, \cite{MKV16b, HLMRS18}) based on numerics and the weak noise theory (WNT)\footnote{The approach is to look at the KPZ equation in short time with very weak noise. This is a different problem than looking at the deep tail, but so far the results one gets from the WNT seem to be true even in long time.}. In the deep lower tail (the $5/2$ exponent region) the coefficient depends on the initial data and can be predicted using the WNT as in \cite{MKV16b}. For the shallow lower tail (the $3$ exponent region) one expects (by reason of continuity) to have a coefficient corresponding to the tail decay of the KPZ fixed point with the corresponding initial data. Remarkably, for the upper tail (the $3/2$ exponent region) it seems that for all deterministic initial data the upper tail coefficient remains the same\footnote{For instance, for flat and narrow wedge initial data, the upper tail seems to have the same $4/3$ coefficient.}. However, for Brownian initial data, the coefficient changes by a factor of 2.
There have been previous considerations of tail bounds in the direction of studying large deviations for the KPZ equation (i.e., the probability that as $T\to \infty$, $\log \mathcal{Z}(T,X)$ looks like $cT$ for some constant not equal to $-1/24$). The speeds for the upper tail and lower tail are different (the former being $T$ and the latter being $T^2$). The lower tail large deviation principle has been the subject of significant study in the physics literature (see \cite{SMP17,CGKLT18,KD18b,KD18a} and references therein). Recently, \cite{LCT18} provided a rigorous proof of the lower tail rate function. We are not aware of a rigorous proof of the (likely) simpler upper tail rate function for the KPZ equation (there are some non-rigorous predictions about this, see e.g. \cite{PSG16}). However, for a discrete analog (the log-gamma polymer) and a semi-discrete analog (the O'Connell-Yor polymer) such an upper tail bound is proved in \cite{GS13} and \cite{Janjigian15} respectively.
We finally mention a few directions worth pursuing. Theorem \ref{Main1Theorem} only provides an upper bound on the lower tail. Our KPZ line ensemble methods are able to produce a lower bound, but with a worse (larger) power law. It is only for the narrow wedge initial data that we have a tight matching lower bound. We conjecture that there should be a similarly tight upper and lower bound for the lower tail which holds true for general initial data.
The large deviation result for the lower tail (see \cite{SMP17,CGKLT18,LCT18}) is only shown for narrow wedge initial data (though there is also some work needed for flat and Brownian initial data). It would be interesting to determine how the large deviation rate function depends on the initial data. In fact, even for the KPZ fixed point (e.g. TASEP) this does not seem to be resolved.
\smallskip
\noindent\textbf{Outline.}
Section~\ref{Tools} reviews the KPZ line ensemble and its Gibbs property. Sections~\ref{Proof1Theorem} and \ref{Proof2Theorem} establish the lower tail bounds of Theorems~\ref{Main1Theorem} and~\ref{Main3Theorem} by first analyzing the narrow wedge initial condition tails and then feeding those bounds into an argument leveraging the Gibbs property and the convolution formula of Proposition~\ref{Distribution}. We prove the upper tail bounds of Theorem~\ref{GrandUpTheorem} in Section~\ref{UpperTailNSEC} by analyzing the moment formula (see Lemma~\ref{MomBoundLem}) and the Laplace transform formula (see Proposition~\ref{ppn:PropConnection}) of the narrow wedge solution. Sections~\ref{Proof4Theorem} and~\ref{Proof5Theorem} contain the proofs of (respectively) Theorems~\ref{Main4Theorem} and~\ref{Main6Theorem} on the upper tail bounds under general initial data.
\smallskip
\noindent\textbf{Acknowledgements.}
We thank J. Baik, G. Barraquand, S. Das, P. Le Doussal, A. Krajenbrink, J. Quastel, L.-C. Tsai, and B. Virag for helpful conversations and comments, as well as an anonymous referee for many helpful comments. I.C. was supported in part by a Packard Fellowship for Science and Engineering, and by NSF DMS-1811143, DMS-1664650.
\section{KPZ line ensemble}\label{Tools}
This section reviews (following the work of \cite{CorHam16}) the KPZ line ensemble and its Gibbs property. We use this construction in order to transfer one-point information (namely, tail bounds) into spatially uniform information for $\Upsilon_T(y)$ (see Definition~\ref{bd:ColeHopf}). It is through this mechanism that we can escape the bonds of exact formulas and generalize the conclusions of \cite{CG18} to general initial data.
\begin{defn}\label{LineEnsemble}
Fix intervals $\Sigma\subset \mathbb{N}$ and $\Lambda\subset \mathbb{R}$. Let $\mathcal{X}$ be the set of all continuous functions $f:\Sigma\times \Lambda \to \mathbb{R}$ endowed with the topology of uniform convergence on compact subsets of $\Sigma\times \Lambda$. Denote the sigma field generated by the Borel subsets of $\mathcal{X}$ by $\mathcal{C}$.
A $\Sigma\times \Lambda$-indexed \emph{line ensemble} $\mathcal{L}$ is a random variable on a probability space $(\Omega, \mathfrak{B}, \mathbb{P})$ such that it takes values in $\mathcal{X}$ and is measurable with respect to $(\mathfrak{B}, \mathcal{C})$. In other words, $\mathcal{L}$ is a collection of $\Sigma$-indexed random continuous curves, each mapping $\Lambda$ to $\mathbb{R}$.
Fix two integers $k_1\leq k_2$, two reals $a<b$ and two vectors $\vec{x}, \vec{y}\in \mathbb{R}^{k_2-k_1+1}$. A $\{k_1, \ldots ,k_2\}\times (a,b)$-indexed line ensemble is called a \emph{free Brownian bridge} line ensemble with entrance data $\vec{x}$ and exit data $\vec{y}$ if its law, denoted here by $\mathbb{P}^{k_1,k_2, (a,b), \vec{x}, \vec{y}}_{\mathrm{free}}$, is that of $k_2-k_1+1$ independent Brownian bridges starting at time $a$ at the points $\vec{x}$ and ending at time $b$ at the points $\vec{y}$. We use the notation $\mathbb{E}^{k_1,k_2, (a,b), \vec{x}, \vec{y}}_{\mathrm{free}}$ for the associated expectation operator.
Consider a continuous function $\mathbf{H}:\mathbb{R}\to [0,\infty)$, which we call a {\it Hamiltonian}. Given $\mathbf{H}$ and two measurable functions $f:(a,b)\to \mathbb{R}\cup \{+\infty\}$ and $g:(a,b)\to \mathbb{R}\cup \{-\infty\}$, we define the $\{k_1, \ldots ,k_2\}\times (a,b)$-indexed line ensemble with entrance data $\vec{x}$, exit data $\vec{y}$, boundary data $(f,g)$ and Hamiltonian $\mathbf{H}$ to be the law $\mathbb{P}^{k_1,k_2, (a,b), \vec{x}, \vec{y}, f, g}_{\mathbf{H}}$ on curves $\mathcal{L}_{k_1}, \ldots , \mathcal{L}_{k_2}:(a,b)\to \mathbb{R}$ given in terms of the following Radon-Nikodym derivative
\begin{align}
\frac{d\mathbb{P}^{k_1,k_2, (a,b), \vec{x}, \vec{y}, f,g}_{\mathbf{H}}}{d\mathbb{P}^{k_1,k_2,(a,b), \vec{x}, \vec{y}}_{\mathrm{free}}}(\mathcal{L}_{k_1}, \ldots , \mathcal{L}_{k_2}) &= \frac{W^{k_1,k_2,(a,b),\vec{x}, \vec{y}, f,g}_{\mathbf{H}}(\mathcal{L}_{k_1}, \ldots , \mathcal{L}_{k_2})}{Z^{k_1, k_2, (a,b), \vec{x}, \vec{y}, f, g}_{\mathbf{H}}}\\
W^{k_1, k_2, (a,b), \vec{x}, \vec{y}, f,g}_{\mathbf{H}} (\mathcal{L}_{k_1}, \ldots , \mathcal{L}_{k_2})&= \exp\left\{- \sum_{k=k_1-1}^{k_2}\int^{b}_{a} \mathbf{H}\big(\mathcal{L}_{k+1}(u)- \mathcal{L}_{k}(u)\big) du\right\}
\end{align}
with the convention $\mathcal{L}_{k_1-1}= f$ and $\mathcal{L}_{k_2+1}=g$. Here, the normalizing constant is given by
\begin{align}
Z^{k_1,k_2, (a,b), \vec{x}, \vec{y}, f,g}_{\mathbf{H}} = \mathbb{E}^{k_1,k_2, (a,b), \vec{x}, \vec{y}}_{\mathrm{free}}\big[ W^{k_1, k_2, (a,b), \vec{x}, \vec{y}, f, g}_{\mathbf{H}} (\mathcal{L}_{k_1}, \ldots , \mathcal{L}_{k_2})\big]
\end{align}
where the curves $(\mathcal{L}_{k_1}, \ldots , \mathcal{L}_{k_2})$ are distributed via $\mathbb{P}^{k_1,k_2, (a,b), \vec{x},\vec{y}}_{\mathrm{free}}$. Throughout this paper we will restrict our attention to a one-parameter family of Hamiltonians indexed by $T\geq 0$:
\begin{align}\label{eq:Hamiltonian}
\mathbf{H}_T(x): = e^{T^{1/3} x}.
\end{align}
A $\Sigma\times \Lambda$-indexed line ensemble $\mathcal{L}$ satisfies the $\mathbf{H}$-\emph{Brownian Gibbs property} if for any subset $K=\{k_1, k_1+1, \ldots , k_2\} \subset \Sigma$ and $(a,b)\subset \Lambda$, one has the following distributional invariance
\begin{align}\label{eq:GibbsProperty}
\mathrm{Law}\left(\mathcal{L}\big|_{K\times (a,b)} \text{ conditional on } \mathcal{L}\big|_{\Sigma\times \Lambda \backslash K\times (a,b)}\right) = \mathbb{P}^{k_1,k_2, (a,b), \vec{x}, \vec{y}, f,g}_{\mathbf{H}}
\end{align}
where $\vec{x} = (\mathcal{L}_{k_1}(a), \ldots , \mathcal{L}_{k_2}(a))$, $\vec{y} = (\mathcal{L}_{k_1}(b), \ldots , \mathcal{L}_{k_2}(b))$ and $f=\mathcal{L}_{k_1-1}|_{(a,b)}$, $g= \mathcal{L}_{k_2+1}|_{(a,b)}$ with $f=+\infty$ if $k_1-1\notin \Sigma$ and $g= -\infty$ if $k_2+1\notin \Sigma$. This is a spatial Markov property --- within a given region, the ensemble's marginal distribution depends only on the boundary values of that region.
Denote the sigma field generated by the curves with indices outside $K\times (a,b)$ by $\mathcal{F}_{\mathrm{ext}}(K\times (a,b))$. A random variable $(\mathfrak{a}, \mathfrak{b})$ is a $K$-\emph{stopping domain} if
$
\{\mathfrak{a}\leq a, \mathfrak{b}\geq b\}\in \mathcal{F}_{\mathrm{ext}}(K\times (a,b))
$
for all $a<b$.
Let $C^{K}(a,b)$ be the set of continuous functions $(f_{k_1}, \ldots , f_{k_2})$ where $f_i:(a,b)\to \mathbb{R}$ and define
\begin{align}
C^{K}:= \Big\{(a,b, f_{k_1}, \ldots , f_{k_2}): a<b \text{ and }(f_{k_1}, \ldots , f_{k_2})\in C^{K}(a,b) \Big\}.
\end{align}
Denote the set of all Borel measurable functions from $C^{K}$ to $\mathbb{R}$ by $\mathcal{B}(C^{K})$. Then, a line ensemble $\mathcal{L}$ is said to satisfy the \emph{strong} $\mathbf{H}$-\emph{Brownian Gibbs property} if for every $K$-stopping domain $(\mathfrak{a}, \mathfrak{b})$ and all $F\in \mathcal{B}(C^{K})$, the following holds $\mathbb{P}$-almost surely,
\begin{align}\label{eq:StrongGibbs}
\mathbb{E}\Big[F\big(\mathfrak{a}, \mathfrak{b},\mathcal{L}\big|_{K\times (\mathfrak{a}, \mathfrak{b})}\big)\Big| \mathcal{F}_{\mathrm{ext}}(K\times (\mathfrak{a}, \mathfrak{b}))\Big] = \mathbb{E}^{k_1,k_2,(\ell,r),\vec{x}, \vec{y}, f,g}_{\mathbf{H}}\Big[F(\ell, r, \mathcal{L}_{k_1}, \ldots , \mathcal{L}_{k_2})\Big]
\end{align}
where $\ell = \mathfrak{a}$, $r=\mathfrak{b}$, $\vec{x} = \{\mathcal{L}_{i}(\mathfrak{a})\}^{k_2}_{i=k_1}$, $\vec{y} = \{\mathcal{L}_{i}(\mathfrak{b})\}^{k_2}_{i=k_1}$, $f(\cdot) = \mathcal{L}_{k_1-1}(\cdot)$ (or $+\infty$ if $k_1-1\notin \Sigma$) and $g(\cdot) = \mathcal{L}_{k_2+1}(\cdot)$ (or $-\infty$ if $k_2+1\notin \Sigma$). On the l.h.s. of \eqref{eq:StrongGibbs}, $\mathcal{L}\Big|_{K\times (\mathfrak{a}, \mathfrak{b})}$ is the restriction of the $\mathbb{P}$-distributed curves and on the r.h.s. $\mathcal{L}_{k_1}, \ldots , \mathcal{L}_{k_2}$ is $\mathbb{P}^{k_1,k_2,(\ell,r),\vec{x}, \vec{y}, f,g}_{\mathbf{H}}$-distributed.
\end{defn}
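As an illustrative aside (our own sketch; the function names are not from the literature), the Radon-Nikodym description above can be explored numerically for a single curve ($k_1=k_2$) with upper boundary $f\equiv+\infty$ and a lower boundary $g$: one discretizes a free Brownian bridge and evaluates the Boltzmann weight $W=\exp\{-\int_a^b \mathbf{H}_T\big(g(u)-\mathcal{L}(u)\big)\, du\}$ by a trapezoid rule, with $\mathbf{H}_T(x)=e^{T^{1/3}x}$ as in \eqref{eq:Hamiltonian}.

```python
import numpy as np

def brownian_bridge(rng, a, b, x, y, n):
    """Sample a standard Brownian bridge on [a, b] from x to y on an n-point grid."""
    t = np.linspace(a, b, n)
    dt = np.diff(t)
    w = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt)))])  # Brownian path from 0
    bridge = w - (t - a) / (b - a) * w[-1]                                # pin the endpoint to 0
    return t, x + (y - x) * (t - a) / (b - a) + bridge                    # tilt to match (x, y)

def gibbs_weight(t, curve, lower, T):
    """Boltzmann weight W = exp(-int H_T(lower(u) - curve(u)) du) with H_T(x) = e^{T^{1/3} x};
    the weight is small when the curve dips below the lower boundary."""
    f = np.exp(T ** (1 / 3) * (lower(t) - curve))
    energy = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t))  # trapezoid rule
    return np.exp(-energy)
```

Reweighting independent bridge samples by these weights (importance sampling) gives a crude sampler for $\mathbb{P}^{1,1,(a,b),x,y,+\infty,g}_{\mathbf{H}_T}$; as $T\to\infty$ the weight approaches the indicator that the curve stays above $g$, recovering non-intersection conditioning.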
\begin{rem}\label{LineEnsembleToBM}
When $k_1=k_2=1$ and $(f,g) = (+\infty, -\infty)$ the measure $\mathbb{P}^{k_1,k_2,(a,b), \vec{x}, \vec{y}, f,g}_{\mathbf{H}}$ is the same as the law of a free Brownian bridge started from $\vec{x}$ and ended at $\vec{y}$.
\end{rem}
The following lemma demonstrates a sufficient condition under which the strong $\mathbf{H}$-Brownian Gibbs property holds.
\begin{lemma}[Lemma 2.5 of \cite{CorHam16}]
Any line ensemble which enjoys the $\mathbf{H}$-Brownian Gibbs property also enjoys the strong $\mathbf{H}$-Brownian Gibbs property.
\end{lemma}
The next proposition relates the narrow wedge KPZ equation to the KPZ line ensemble\footnote{Note, we do not require the full strength of the result proved in Theorem~2.15 of \cite{CorHam16}. That result also proves uniformity over $T$ of the local Brownian nature of the top curve $\Upsilon^{(1)}_T(x)$ as $x$ varies.}.
\begin{prop}[Theorem~2.15 of \cite{CorHam16}]\label{NWtoLineEnsemble} Fix any $T>0$. Then there exists an $\mathbb{N}\times \mathbb{R}$-indexed line ensemble $\mathcal{H}_T =\{\mathcal{H}^{n}_{T}(x)\}_{n\in \mathbb{N}, x\in \mathbb{R}}$ satisfying the following properties:
\begin{enumerate}[(1)]
\item The lowest indexed curve $\mathcal{H}^{1}_{T}(X)$ is equal in distribution (as a process in $X$) to the Cole-Hopf solution $\mathcal{H}^{\mathbf{nw}}(T,X)$ of KPZ started from the narrow wedge initial data.
\item $\mathcal{H}_T$ satisfies the $\mathbf{H}_{1}$-Brownian Gibbs property (see Definition~\ref{LineEnsemble}).
\item Define the scaled KPZ line ensemble $\{\Upsilon^{(n)}_T(x)\}_{n\in \mathbb{N},x\in \mathbb{R}}$ as follows
\begin{align}\label{eq:UsilonNDef}
\Upsilon^{(n)}_T(x) := \frac{\mathcal{H}^{n}_{2T}\big((2T)^{\frac{2}{3}} x\big)+ \frac{T}{12}}{T^{\frac{1}{3}}}.
\end{align}
Then, $\{2^{-\frac{1}{3}}\Upsilon^{(n)}_T(x)\}_{n\in \mathbb{N},x\in \mathbb{R}}$ satisfies the $\mathbf{H}_{2T}$-Brownian Gibbs property\footnote{This pesky $2^{-\frac{1}{3}}$ compensates for the fact that it is missing in the denominator of $\Upsilon^{(n)}_T(x)$.}.
\end{enumerate}
\end{prop}
The following proposition is a monotonicity result which shows that two line ensembles with the same index set can be coupled in such a way that if the boundary conditions of one ensemble dominate those of the other, then the curves do so as well.
\begin{prop}[Lemmas ~2.6 and 2.7 of \cite{CorHam16}]\label{Coupling1}
Fix an interval $K=\{k_1, \ldots , k_2\}\subset \Sigma $ for some fixed positive integers $k_1<k_2$, $(a,b)\subset \Lambda$ for $a<b$ and two pairs of vectors $\vec{x}_{1},\vec{x}_{2}$ and $\vec{y}_{1}, \vec{y}_{2}$ in $ \mathbb{R}^{k_2-k_1+1}$. Consider any two pairs of measurable functions $f, \tilde{f}: (a,b)\to\mathbb{R} \cup \{+\infty\} $ and $g, \tilde{g}: (a,b) \to \mathbb{R} \cup \{-\infty\}$ such that $\tilde{f}(s)\leq f(s)$, $\tilde{g}(s)\leq g(s)$ for all $s\in (a,b)$ and $x^{(k)}_{2}\leq x^{(k)}_{1}$, $y^{(k)}_{2}\leq y^{(k)}_{1}$ for all $k\in K$. Let $\mathcal{Q}= \{\mathcal{Q}^{(n)}(x)\}_{n\in K, x\in(a,b)}$ and $\widetilde{\mathcal{Q}}= \{\widetilde{\mathcal{Q}}^{(n)}(x)\}_{n\in K, x\in(a,b)}$ be two $K\times (a, b)$-indexed line ensembles on the probability spaces $(\Omega, \mathcal{B}, \mathbb{P})$ and $(\widetilde{\Omega}, \widetilde{\mathcal{B}}, \widetilde{\mathbb{P}})$ respectively such that $\mathbb{P}$ is equal to $ \mathbb{P}^{k_1, k_2, (a,b), \vec{x}_1, \vec{y}_1, f,g}_{\mathbf{H}}$ and $\widetilde{\mathbb{P}}$ is equal to $ \mathbb{P}^{k_1, k_2, (a,b), \vec{x}_2, \vec{y}_2, \tilde{f},\tilde{g}}_{\mathbf{H}}$.
If $\mathbf{H}$ is convex, then there exists a coupling (i.e., a common probability space upon which both measures are supported) between $\mathbb{P}$ and $\widetilde{\mathbb{P}}$ such that $\widetilde{\mathcal{Q}}^{(j)}(s)\leq \mathcal{Q}^{(j)}(s)$ for all $j\in K$ and $s\in (a,b)$.
\end{prop}
Let us provide the basic idea behind how we use Proposition~\ref{Coupling1}. Note that by the $\mathbf{H}_{2T}$-Brownian Gibbs property the lowest indexed curve $2^{-\frac{1}{3}}\Upsilon^{(1)}_T(\cdot)$ of the $\mathbb{N}$-indexed KPZ line ensemble $\{2^{-\frac{1}{3}}\Upsilon^{(n)}_{T}(x)\}_{n\in \mathbb{N}, x\in \mathbb{R}}$, when restricted to the interval $(a,b)$, has the conditional measure $\mathbb{P}^{1,1,(a,b), 2^{-\frac{1}{3}}\Upsilon^{(1)}_{T}(a), 2^{-\frac{1}{3}}\Upsilon^{(1)}_T(b), +\infty, 2^{-\frac{1}{3}}\Upsilon^{(2)}_T}_{\mathbf{H}_{2T}}$. On the other hand, replacing $2^{-\frac{1}{3}}\Upsilon^{(2)}_T$ by $-\infty$, \newline $\mathbb{P}^{1,1,(a,b), 2^{-\frac{1}{3}}\Upsilon^{(1)}_{T}(a), 2^{-\frac{1}{3}}\Upsilon^{(1)}_T(b), +\infty, -\infty}_{\mathbf{H}_{2T}}$ is the probability measure of a Brownian bridge on the interval $(a,b)$ with the entrance and exit data $2^{-\frac{1}{3}}\Upsilon^{(1)}_{T}(a)$ and $2^{-\frac{1}{3}}\Upsilon^{(1)}_{T}(b)$ respectively. Proposition~\ref{Coupling1} constructs a coupling between these two measures on the curve $2^{-\frac{1}{3}}\Upsilon^{(1)}_T\big\vert_{(a,b)}$ such that
\begin{align}\label{eq:MeasureDom}
\mathbb{P}^{1,1,(a,b), 2^{-\frac{1}{3}}\Upsilon^{(1)}_{T}(a), 2^{-\frac{1}{3}}\Upsilon^{(1)}_T(b), +\infty, 2^{-\frac{1}{3}}\Upsilon^{(2)}_T}_{\mathbf{H}_{2T}}(\mathcal{A})\leq \mathbb{P}^{1,1,(a,b), 2^{-\frac{1}{3}}\Upsilon^{(1)}_{T}(a), 2^{-\frac{1}{3}}\Upsilon^{(1)}_T(b), +\infty, -\infty}_{\mathbf{H}_{2T}}(\mathcal{A})
\end{align}
for any event $\mathcal{A}$ whose probability increases\footnote{If increase is replaced by decrease, then the inequality \eqref{eq:MeasureDom} is reversed.} under the pointwise decrease of $\Upsilon^{(1)}_T$.
In most of our applications of this idea, it is easy to find upper bounds on the r.h.s. of \eqref{eq:MeasureDom} using Brownian bridge calculations. Via \eqref{eq:MeasureDom}, those bounds transfer to the spatial process $\Upsilon^{(1)}_T(\cdot)$. Since, by Proposition \ref{NWtoLineEnsemble}, this curve is equal in law to $\Upsilon_T(\cdot)$ (the scaled and centered narrow wedge KPZ equation solution), these bounds in conjunction with the convolution formula of Proposition~\ref{Distribution} embody the core of our techniques to generalize the tail bounds from narrow wedge to general initial data. The following lemma is used in controlling the probabilities which arise on the r.h.s. of \eqref{eq:MeasureDom}.
\begin{lemma}\label{BBFlucLem} Let $B(\cdot)$ be a Brownian bridge on $[0,L]$ with $B(0)= x$ and $B(L) = y$. Then,
\begin{align}\label{eq:BBineq}
\mathbb{P}\Big(\inf_{t\in [0,L]} B(t)\leq \min\{x,y\}-s\Big)\leq e^{- \frac{2s^2}{L}}.
\end{align}
\end{lemma}
\begin{proof}
Due to symmetry, we may assume $\min\{x,y\} = y$. Note that $\tau=\min\{t\in [0,L]: B(t)\leq y\}$ is a stopping time for the natural filtration of $B(\cdot)$. Thanks to the resampling invariance property of the Brownian bridge measure, $\{B(t)\}_{t\in [\tau,L]}$ conditioned on the sample paths outside the interval $(\tau,L)$ is again distributed as a Brownian bridge with $B(\tau)=B(L) =y$. Now, applying \cite[(3.40)]{KS91} (see also Lemma~2.11 of \cite{CorHam16}), we get
\begin{align}\label{eq:BBfluc}
\mathbb{P}\Big(\inf_{t\in (\tau, L] }B(t)\leq \min\{x,y\} -s \Big| \mathcal{F}([0,\tau])\Big)= e^{-\frac{2s^2}{(L-\tau)}}.
\end{align}
Here, $\mathcal{F}([0,\tau])$ denotes the natural filtration of $\{B(t)\}_{t\in [0,L]}$ stopped at time $\tau$. Taking the expectation of \eqref{eq:BBfluc} with respect to $\mathcal{F}([0,\tau])$ and noting $e^{-\frac{2s^2}{(L-\tau)}}\leq e^{-\frac{2s^2}{L}}$ yields \eqref{eq:BBineq}.
\end{proof}
It is worth noting that Proposition 4.3.5.3 of \cite{MMFM} contains an exact formula for the left hand side of \eqref{eq:BBineq}.
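For the reader's convenience, let us also record the exact formula in question: for a standard Brownian bridge on $[0,L]$ with $B(0)=x$, $B(L)=y$ and any $m\leq \min\{x,y\}$,
\begin{align*}
\mathbb{P}\Big(\inf_{t\in [0,L]} B(t)\leq m\Big)= \exp\Big(-\frac{2(x-m)(y-m)}{L}\Big).
\end{align*}
Taking $m=\min\{x,y\}-s$ and noting that $x-m, y-m\geq s$ recovers \eqref{eq:BBineq} at once.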
The next result (which follows from \cite[(3.14)]{GT11}) is used in Theorem~\ref{Main6Theorem}.
\begin{lemma}\label{BMminusParabola}
Let $B(\cdot)$ be a two-sided standard Brownian motion with $B(0) =0 $. Then, for any given $\xi\in (0,1)$, there exists $s_0=s_0(\xi)$ such that for all $c>0$ and $s\geq s_0$,
\begin{align}\label{eq:BMParaFluc}
\mathbb{P}\Big(B(t)\geq s+ct^2 \text{ for some }t\in \mathbb{R}\Big)\leq \frac{1}{\sqrt{3}}e^{-\frac{8(1-\xi)\sqrt{c}s^{\frac{3}{2}}}{3\sqrt{3}}}.
\end{align}
\end{lemma}
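Although we quote \eqref{eq:BMParaFluc} from \cite{GT11}, the constant $\frac{8\sqrt{c}}{3\sqrt{3}}$ can be anticipated from the one point Gaussian heuristic $\mathbb{P}(B(t)\geq s+ct^2)\approx e^{-(s+ct^2)^2/2t}$: optimizing the exponent over $t>0$ gives
\begin{align*}
\inf_{t>0} \frac{(s+ct^2)^2}{2t} = \frac{8\sqrt{c}}{3\sqrt{3}}\, s^{\frac{3}{2}}, \qquad \text{attained at } t_{*}=\sqrt{\frac{s}{3c}},
\end{align*}
which matches the exponent in \eqref{eq:BMParaFluc} up to the factor $(1-\xi)$.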
\section{Lower tail under general initial data}\label{LowerTailSEC}
In this section, we prove Theorems~\ref{Main1Theorem} and \ref{Main3Theorem}. Starting with the tail bounds of Proposition~\ref{NotMainTheorem}, we first bound the lower tail probabilities of the narrow wedge solution at a countable set of points of $\mathbb{R}$ (see Lemma~\ref{MeshBound}). Combining this with the Brownian Gibbs property of the narrow wedge solution and the growth conditions of initial data (given in Definition~\ref{Hypothesis}), we prove the lower tail bound of Theorem~\ref{Main1Theorem} in Section~\ref{Proof1Theorem} via the convolution formula of Proposition~\ref{Distribution}. By controlling the fluctuations of a two-sided Brownian motion in small intervals, we prove the lower tail bound of Theorem~\ref{Main3Theorem} (see Section~\ref{Proof2Theorem}) in a similar way.
\subsection{Proof of Theorem~\ref{Main1Theorem}}\label{Proof1Theorem}
Recall that the initial data $\mathcal{H}_0$ is defined from $f$ via \eqref{eq:LowInitBd}. Also recall the definition of $\Upsilon_T(\cdot)$ from \eqref{eq:DefUpsilon}. Fix the sequence $\{\zeta_n\}_{n\in \mathbb{Z}}$ where $\zeta_n:=\frac{n}{s^{1+\delta}}$. Let us define the following events
\begin{align}
\mathcal{A}^{f} &:= \left\{ \int^{\infty}_{-\infty} e^{T^{\frac{1}{3}}\big(\Upsilon_T(y)+ f(-y)\big)} dy\leq e^{-T^{\frac{1}{3}}s}\right\},\label{eq:PrincipleEvent1}\\
E_n &:= \left\{\Upsilon_T(\zeta_n)\leq -\frac{(1+2^{-1}\nu)\zeta^2_n}{2^{2/3}}-(1-\epsilon)s\right\}, \label{eq:PrincipleEvent2}\\
F_n &:= \left\{\Upsilon_T(y)\leq -\frac{(1+\nu) y^2}{2^{2/3}}-\left(1-\frac{\epsilon}{2}\right)s \quad \text{ for some }y \in (\zeta_n, \zeta_{n+1})\right\}. \label{eq:PrincipleEvent3}
\end{align}
Here, we suppress the dependence on the various variables.
By \eqref{eq:DistUnderGI2} of Proposition~\ref{Distribution}, $\mathbb{P}(h^{f}_T(0)\leq -s) = \mathbb{P}(\mathcal{A}^{f})$
which we need to bound. To begin to bound this, note that
\begin{align}\label{eq:BasicStep}
\mathbb{P}(\mathcal{A}^{f}) &\leq \mathbb{P}\Big(\bigcup_{n\in \mathbb{Z}} E_n\Big) + \mathbb{P}\Big(\mathcal{A}^{f}\cap \Big(\bigcup_{n\in \mathbb{Z}} E_n\Big)^{c}\Big)\leq \sum_{n\in \mathbb{Z}}\mathbb{P}\big(E_n\big)+\mathbb{P}\Big(\mathcal{A}^{f}\cap \Big(\bigcup_{n\in \mathbb{Z}} E_n\Big)^{c}\Big).
\end{align}
We focus on bounding separately the two terms on the right side of \eqref{eq:BasicStep}.
\begin{lemma}\label{MeshBound}
There exist $s_0=s_0(\epsilon, \delta, C, \nu, T_0)$ and $K_{*}=K_{*}(\epsilon, \delta, T_0)>0$ such that for all $T\geq T_0$ and $s\geq s_0$,
\begin{equation}\label{eq:SumOfProbBd}
\sum^{\infty}_{n=-\infty}\mathbb{P}\left(E_n\right)\leq e^{-T^{1/3} \frac{4(1-\epsilon)s^{5/2}}{15\pi}} + e^{-K_{*}s^{3-\delta} - \epsilon s T^{1/3}}+e^{-\frac{(1-\epsilon)s^3}{12}}.
\end{equation}
\end{lemma}
\begin{proof}
Recall that the one point distribution of $\Upsilon_T(y)+\frac{y^2}{2^{2/3}}$ is independent of $y$ (see Proposition~\ref{StationarityProp}). Setting $s_n:= (1-\epsilon)s+ \frac{\nu\zeta^2_n}{2^{5/3}}$ and invoking Propositions~\ref{StationarityProp} and \ref{NotMainTheorem}, we write
\begin{align}\label{eq:EnProb}
\mathbb{P}\left(E_n\right)= \mathbb{P}(\Upsilon_T(0)\leq -s_n)\leq e^{-T^{1/3}(1-\epsilon)\frac{4s^{5/2}_n}{15\pi}} + e^{-Ks^{3-\delta}_n - \epsilon s_n T^{1/3}}+e^{-(1-\epsilon)\frac{s^3_n}{12}}.
\end{align}
Applying the reverse Minkowski inequality, we get $s^{\alpha}_n\geq ((1-\epsilon)s)^{\alpha} + \big(\nu n^2/(2^{5/3}s^{2(1+\delta)})\big)^{\alpha}$ for all $\alpha\geq 1$. Plugging this into \eqref{eq:EnProb} and
summing over all $n\in \mathbb{Z}$, we get
\begin{align}
\sum_{n\in \mathbb{Z}}\mathbb{P}(E_n)\leq & e^{-T^{1/3}(1-\epsilon)\frac{4s^{5/2}}{15\pi}}\sum_{n\in \mathbb{Z}} e^{-T^{1/3}K_{1}\frac{|n|^5}{s^5}} + e^{-(1-\epsilon)\frac{s^3}{12}} \sum_{n\in \mathbb{Z}} e^{-K_{2} \frac{n^6}{s^6}}\\& + e^{- K s^{3-\delta} - \epsilon s T^{1/3}} \sum_{n\in \mathbb{Z}} e^{ -K_3 \frac{|n|^{2(3-\delta)}}{s^{2(3-\delta)}}- \epsilon\frac{\nu}{2^{5/3}}\frac{n^2}{s^2} T^{1/3}}\label{eq:SumExpand}
\end{align}
for three positive constants $K_1$, $K_2$ and $K_3$. By a direct computation, we observe
\begin{align}
\sum_{n\in \mathbb{Z}} e^{-T^{1/3} K_1 s^{-5}|n|^{5}} \leq K^{\prime}_1 \Big(1+T^{-\frac{1}{3}}s^{5}\Big), &\qquad \sum_{n\in \mathbb{Z}} e^{- K_2 \frac{n^6}{s^6}} \leq K^{\prime}_2 s^{6},\label{eq:SumBound1}\\ \sum_{n\in \mathbb{Z}} e^{ -K_3 \frac{|n|^{2(3-\delta)}}{s^{2(3-\delta)}}- \epsilon\frac{\nu}{2^{5/3}}\frac{n^2}{s^2} T^{\frac{1}{3}}} & \leq K^{\prime}_3 \Big(s^{3(2-\delta)}+s^2 T^{-\frac{1}{3}}\Big) .
\label{eq:SumBound2}
\end{align}
Combining \eqref{eq:SumBound1} and \eqref{eq:SumBound2} with \eqref{eq:SumExpand} yields \eqref{eq:SumOfProbBd}.
\end{proof}
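Let us note, for instance, how the first estimate in \eqref{eq:SumBound1} follows from the elementary integral comparison: for any $a>0$ and $p\geq 1$,
\begin{align*}
\sum_{n\in \mathbb{Z}} e^{-a|n|^{p}} \leq 1+ 2\int^{\infty}_{0} e^{-ax^{p}}\,dx = 1+ \frac{2\,\Gamma(1+\frac{1}{p})}{a^{1/p}},
\end{align*}
applied with $p=5$ and $a= T^{\frac{1}{3}}K_1s^{-5}$. The remaining sums in \eqref{eq:SumBound1} and \eqref{eq:SumBound2} are bounded in the same manner.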
Now it suffices to control the second term on the right side of \eqref{eq:BasicStep}. We start by showing:
\smallskip
\begin{lemma}\label{EffOfInitCond}
Under the assumption that $f$ belongs to the class $\mathbf{Hyp}(C, \nu, \theta, \kappa, M)$, there exists $s_1=s_1(C,\nu, \theta, \kappa, M)$ such that for all $s\geq s_1$,
\begin{align}\label{eq:Containment}
\bigcap_{n\in \mathbb{Z}}\{E^c_n\cap F^c_n\} \subset (\mathcal{A}^{f})^c.
\end{align}
\end{lemma}
\begin{proof}
Assume that the event on the l.h.s. of \eqref{eq:Containment} occurs. Appealing to \eqref{eq:LowInitBd}, we observe
\begin{align*}
\int^{\infty}_{-\infty} & e^{T^{1/3}\big(\Upsilon_T(y)+f(-y)\big)} dy
\geq \int_{\mathcal{I}}e^{-T^{1/3}\Big(\frac{(1+\nu /2) }{2^{2/3}}y^2+(1-\frac{\epsilon}{2})s- \kappa\Big)}dy \geq \theta e^{-T^{1/3}\Big(\frac{1+\nu/2}{2^{2/3}}M^2+\kappa-\frac{\epsilon s}{2}\Big)}e^{-T^{\frac{1}{3}} s}.&
\end{align*}
Clearly, there exists $s_1 = s_1(C,\nu, \theta, \kappa, M)$ such that
the right side above is bounded below by $e^{-T^{\frac{1}{3}} s}$ for all $s\geq s_1$. This shows the claimed containment of the events in \eqref{eq:Containment}.
\end{proof}
Owing to \eqref{eq:Containment} and the union bound,
\begin{equation}\label{eq:BonfrBd}
\mathbb{P}\Big(\mathcal{A}^{f}\cap \Big(\bigcup_{n\in \mathbb{Z}} E_n \Big)^{c}\Big) = \mathbb{P}\Big(\mathcal{A}^{f}\cap \Big\{\bigcap_{n\in \mathbb{Z}} E^{c}_n\Big\} \cap \Big\{\bigcup_{n\in \mathbb{Z}} F_n\Big\} \Big)\leq \sum_{n\in \mathbb{Z}}\mathbb{P}\left(E^{c}_n\cap E^c_{n+1} \cap F_n\right).
\end{equation}
We obtain an upper bound on the r.h.s. of \eqref{eq:BonfrBd} in the following lemma.
\begin{lemma}\label{LineEnsmbUse}
There exists $s_2 =s_2(\epsilon)>0$ such that for all $s\geq s_2$
\begin{equation}\label{eq:ResidualEvent}
\sum_{n\in \mathbb{Z}}\mathbb{P}\left( E^{c}_n\cap E^c_{n+1} \cap F_n\right)\leq e^{-s^{3+\delta}}.
\end{equation}
\end{lemma}
Combining \eqref{eq:BonfrBd} with \eqref{eq:ResidualEvent} of Lemma~\ref{LineEnsmbUse} yields
\begin{align}\label{eq:UnionProbBd}
\mathbb{P}\Big(\mathcal{A}^{f}\cap \Big(\bigcup_{n\in \mathbb{Z}} E_n \Big)^{c}\Big)\leq e^{-s^{3+\delta}}
\end{align}
for some $\delta>0$. Plugging the bounds \eqref{eq:SumOfProbBd} and \eqref{eq:UnionProbBd} into the r.h.s. of \eqref{eq:BasicStep} yields \eqref{eq:UpperBoundDetData}. To complete the proof of Theorem~\ref{Main1Theorem}, it only remains to prove Lemma~\ref{LineEnsmbUse} which we show below.
\begin{proof}[Proof of Lemma~\ref{LineEnsmbUse}] We aim to bound $\mathbb{P}(E^{c}_n \cap E^{c}_{n+1}\cap F_n)$. By Proposition~\ref{NWtoLineEnsemble}, $\Upsilon_T$ is equal in law to the curve $\Upsilon^{(1)}_{T}$ of the scaled KPZ line ensemble $\{2^{-\frac{1}{3}}\Upsilon^{(n)}_T(x)\}_{n\in \mathbb{N},x\in \mathbb{R}}$. Hence, without loss of generality, we replace $\Upsilon_T$ by $\Upsilon^{(1)}_T$ in the definitions of $E_n$ and $F_n$ for the rest of this proof. By the $\mathbf{H}_{2T}$-Brownian Gibbs property of $\{2^{-\frac{1}{3}}\Upsilon^{(n)}_T(x)\}_{n\in \mathbb{N},x\in \mathbb{R}}$,
\begin{align}
\mathbb{P}(E^c_n &\cap E^c_{n+1}\cap F_n)= \mathbb{E}\Big[ \mathbbm{1}(E^c_n\cap E^c_{n+1})\cdot \mathbb{E}\big[\mathbbm{1}(F_n)|\mathcal{F}_{\mathrm{ext}}(\{1\}, (\zeta_n,\zeta_{n+1}))\big]\Big]\\&= \mathbb{E}\Big[\mathbbm{1}(E^c_n\cap E^c_{n+1})\cdot \mathbb{P}^{1,1,(\zeta_n, \zeta_{n+1}), 2^{-\frac{1}{3}}\Upsilon^{(1)}_T(\zeta_n), 2^{-\frac{1}{3}}\Upsilon^{(1)}_T(\zeta_{n+1}), +\infty, 2^{-\frac{1}{3}}\Upsilon^{(2)}_T}_{\mathbf{H}_{2T}}(F_n)\Big].\label{eq:LineEnsemble}
\end{align}
Recall $\mathcal{F}_{\mathrm{ext}}(\{1\}, (\zeta_n,\zeta_{n+1}))$ is the $\sigma$-algebra generated by $\{\Upsilon^{(n)}_T(x)\}_{n\in \mathbb{N},x\in \mathbb{R}}$ outside the set $\{\Upsilon^{(1)}_T(x):x\in (\zeta_n, \zeta_{n+1})\}$.
Via Proposition~\ref{Coupling1}, there exists a monotone coupling between the probability measures $\mathbb{P}_{\mathbf{H}_{2T}}:=\mathbb{P}^{1,1,(\zeta_n, \zeta_{n+1}), 2^{-\frac{1}{3}}\Upsilon^{(1)}_T(\zeta_n), 2^{-\frac{1}{3}}\Upsilon^{(1)}_T(\zeta_{n+1}), +\infty, 2^{-\frac{1}{3}}\Upsilon^{(2)}_T}_{\mathbf{H}_{2T}}$ and $\widetilde{\mathbb{P}}_{\mathbf{H}_{2T}}:=\mathbb{P}^{1,1,(\zeta_n, \zeta_{n+1}), 2^{-\frac{1}{3}}\Upsilon^{(1)}_T(\zeta_n), 2^{-\frac{1}{3}}\Upsilon^{(1)}_T(\zeta_{n+1}), +\infty, -\infty}_{\mathbf{H}_{2T}}= \mathbb{P}^{1,1, (\zeta_n, \zeta_{n+1}), 2^{-\frac{1}{3}}\Upsilon^{(1)}_{T}(\zeta_n), 2^{-\frac{1}{3}}\Upsilon^{(1)}_T(\zeta_{n+1})}_{\mathrm{free}}$ such that
\begin{align}\label{eq:MeasureDominition}
\mathbb{P}_{\mathbf{H}_{2T}}(F_n)\leq \widetilde{\mathbb{P}}_{\mathbf{H}_{2T}}(F_n).
\end{align}
The r.h.s. of \eqref{eq:MeasureDominition} is a probability with respect to a Brownian bridge measure.
For the rest of the proof, we use shorthand notation $\theta_n:= (1-\epsilon)s+2^{-\frac{2}{3}}(1+2^{-1}\nu)\zeta^2_n$ for $ n\in \mathbb{Z}$.
The probability of the event $F_n$ increases under the pointwise decrease of the end points of $\Upsilon^{(1)}_T$.
Using $\{E^c_n\cap E^c_{n+1}\}= \{\Upsilon^{(1)}_T(\zeta_n)\geq -\theta_n\}\cap \{\Upsilon^{(1)}_T(\zeta_{n+1})\geq -\theta_{n+1}\}$ and Proposition~\ref{Coupling1},
\begin{align}
\mathbbm{1}(E^{c}_n\cap E^c_{n+1})\times \widetilde{\mathbb{P}}_{\mathbf{H}_{2T}}(F_n)\leq \mathbb{P}^{1,1,(\zeta_n, \zeta_{n+1}), -2^{-\frac{1}{3}}\theta_n, -2^{-\frac{1}{3}}\theta_{n+1}}_{\mathrm{free}}(F_n). \label{eq:OrderedProb}
\end{align}
Combining \eqref{eq:MeasureDominition} and \eqref{eq:OrderedProb} yields
\begin{align}
\mathbbm{1}(E^{c}_n\cap E^c_{n+1})\times &\mathbb{P}^{1,1,(\zeta_n, \zeta_{n+1}), 2^{-\frac{1}{3}}\Upsilon^{(1)}_T(\zeta_n), 2^{-\frac{1}{3}}\Upsilon^{(1)}_T(\zeta_{n+1}), +\infty, 2^{-\frac{1}{3}}\Upsilon^{(2)}_T}_{\mathbf{H}_{2T}}(F_n)\\&\leq \mathbb{P}\Big(\min_{t\in [\zeta_n,\zeta_{n+1}]}B(t)\leq -2^{-\frac{1}{3}}\{\theta_n\wedge \theta_{n+1}\}-\frac{\epsilon s}{2^{4/3}}-\frac{\nu \zeta^2_n}{4}\Big)\label{eq:BrownianDominition}
\end{align}
where $B(\cdot)$ is a Brownian bridge such that $B(\zeta_n)= -2^{-\frac{1}{3}}\theta_{n}$ and $B(\zeta_{n+1}) = -2^{-\frac{1}{3}}\theta_{n+1}$. Applying Lemma~\ref{BBFlucLem} yields $\text{r.h.s. of \eqref{eq:BrownianDominition}}\leq e^{-2^{1/3}s^{1+\delta}\big( \frac{\epsilon s}{2^{4/3}}+\frac{\nu \zeta^2_n}{4}\big)^2}$.
Combining this upper bound with \eqref{eq:BrownianDominition} and taking the expectations, we arrive at
\begin{align}\label{eq:FinalUpperBound}
\mathbb{P}(E^c_n\cap E^c_{n+1}\cap F_n)\leq e^{-2^{1/3}s^{1+\delta}\big( \frac{\epsilon s}{2^{4/3}}+ \frac{\nu n^2}{4 s^{2(1+\delta)}}\big)^2}.
\end{align}
Summing both sides of \eqref{eq:FinalUpperBound} over $n\in \mathbb{Z}$, we obtain \eqref{eq:ResidualEvent}.
\end{proof}
\subsection{Proof of Theorem~\ref{Main3Theorem}}\label{Proof2Theorem}
This proof is similar to that of Theorem~\ref{Main1Theorem}. We use the same notation $\zeta_n$, $E_n$ and $F_n$ introduced at the beginning of the proof of Theorem~\ref{Main1Theorem} and additionally define
\begin{align}\label{eq:ABr}
\mathcal{A}^{\mathrm{Br}}= \left\{\int^{\infty}_{-\infty} e^{T^{1/3}\big(\Upsilon_T(y)+ B(-y)\big)} dy \leq e^{-T^{\frac{1}{3}}s}\right\}
\end{align}
where $B$ is a two-sided Brownian motion with diffusion coefficient $2^{\frac{1}{3}}$ and $B(0)=0$. In particular, $B(y)\stackrel{d}{=} \widetilde{B}(2^{\frac{2}{3}}y)$ where $\widetilde{B}(\cdot)$ is a standard two-sided Brownian motion. Owing to \eqref{eq:DistUnderGI2}, $\mathbb{P}(h^{\mathrm{Br}}_T(0)\leq -s)= \mathbb{P}(\mathcal{A}^{\mathrm{Br}})$ which we need to bound. As in \eqref{eq:BasicStep}, we write
\begin{align}\label{eq:BasicStep2}
\mathbb{P}\big(\mathcal{A}^{\mathrm{Br}}\big)\leq \sum_{n\in \mathbb{Z}} \mathbb{P}(E_n)+\mathbb{P}\Big(\mathcal{A}^{\mathrm{Br}} \cap \Big(\bigcup_{n\in \mathbb{Z}} E_n\Big)^{c}\Big).
\end{align}
We can use \eqref{eq:SumOfProbBd} of Lemma~\ref{MeshBound} to bound $\sum_{n}\mathbb{P}(E_n)$. While the conclusion of Lemma~\ref{EffOfInitCond} does not hold in the present case, we will show that it does hold with high probability.
\begin{lemma}\label{BrowLowLemma}
There exist $s_1=s_1(\epsilon, \delta)$, $c_1=c_1(\epsilon),c_2=c_2(\epsilon)>0$ such that for all $s\geq s_1$,
\begin{align}\label{eq:SmallProb}
\mathbb{P}\Big(\bigcap_{n\in \mathbb{Z}} \big\{E^c_n \cap F^c_n\big\}\cap \mathcal{A}^{\mathrm{Br}}\Big)\leq c_1e^{-c_2s^{3+\delta}}.
\end{align}
\end{lemma}
Combining \eqref{eq:ResidualEvent} of Lemma~\ref{LineEnsmbUse} and \eqref{eq:SmallProb} of Lemma~\ref{BrowLowLemma} yields (after enlarging $c_1$ and decreasing $c_2$ if necessary)
\begin{align}\label{eq:ResEvent}
\mathbb{P}\left(\mathcal{A}^{\mathrm{Br}}\cap \Big(\bigcup_{n\in \mathbb{Z}} E_n\Big)^c\right)\leq c_1e^{-c_2s^{3+\delta}}.
\end{align}
Applying \eqref{eq:ResEvent} and \eqref{eq:SumOfProbBd} to \eqref{eq:BasicStep2}, we obtain \eqref{eq:UpBoundBrData}. To complete the proof of Theorem~\ref{Main3Theorem}, it now remains to prove Lemma~\ref{BrowLowLemma}.
\begin{proof}[Proof of Lemma~\ref{BrowLowLemma}] Observe first that
\begin{align}
\bigcap_{n\in \mathbb{Z}} \big\{E^c_n \cap F^c_n\big\}\cap \mathcal{A}^{\mathrm{Br}} &\subseteq \Big\{ \int^{\infty}_{-\infty} e^{-T^{1/3}\big(\frac{(1+\nu)y^2}{2^{2/3}}-\frac{\epsilon s}{2}-B(y) \big)}dy\leq 1\Big\}.\label{eq:ContainmentProb}
\end{align}
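Indeed, on $\bigcap_{n\in \mathbb{Z}}\{E^c_n\cap F^c_n\}$, one has $\Upsilon_T(y)> -\frac{(1+\nu)y^2}{2^{2/3}}-(1-\frac{\epsilon}{2})s$ for every $y\in \mathbb{R}$. Hence, on the event $\mathcal{A}^{\mathrm{Br}}$,
\begin{align*}
e^{-T^{\frac{1}{3}}s}\geq \int^{\infty}_{-\infty} e^{T^{\frac{1}{3}}\big(\Upsilon_T(y)+B(-y)\big)}dy \geq e^{-T^{\frac{1}{3}}(1-\frac{\epsilon}{2})s} \int^{\infty}_{-\infty} e^{-T^{\frac{1}{3}}\big(\frac{(1+\nu)y^2}{2^{2/3}}-B(-y)\big)}dy.
\end{align*}
Multiplying both sides by $e^{T^{\frac{1}{3}}(1-\frac{\epsilon}{2})s}$ and substituting $y\mapsto -y$ (the quadratic term being even) yields the integral inequality on the r.h.s. of \eqref{eq:ContainmentProb}.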
Note that if $B(y)\geq -\frac{\epsilon}{4}s$ for all $y\in [-1/s^{1+\delta}, 1/s^{1+\delta}]$, then,
$\frac{(1+\nu)y^2}{2^{2/3}}-\frac{\epsilon s}{2}-B(y)\leq -\frac{\epsilon}{8}s$ for all $y \in [-1/s^{1+\delta}, 1/s^{1+\delta}]$
which implies
\begin{align}
\int_{-\infty}^{\infty} e^{-T^{1/3}\big(\frac{(1+\nu)y^2}{2^{2/3}}-\frac{\epsilon s}{2}-B(y) \big)}dy \geq \frac{2}{s^{1+\delta}}e^{\frac{\epsilon s}{8}T^{1/3}}> 1
\end{align}
when $s$ is large. Hence, there exists $s_1=s_1(\epsilon, \delta)$ such that for all $s\geq s_1$, one has
\begin{align}
\left\{\int^{\infty}_{-\infty} e^{-T^{1/3}\big(\frac{(1+\nu)y^2}{2^{2/3}}-\frac{\epsilon s}{2}-B(y) \big)}dy \leq 1\right\}\subseteq \left\{ \min_{y\in [-1/s^{1+\delta}, 1/s^{1+\delta}]}B(y)< -\frac{\epsilon}{4}s\right\}.
\end{align}
Thanks to this containment, we get
\begin{equation}\label{eq:TailOfBPlusPar}
\mathbb{P}\Big(\bigcap_{n\in \mathbb{Z}} \big\{E^c_n \cap F^c_n\big\}\cap \mathcal{A}^{\mathrm{Br}}\Big)\leq \mathbb{P}\Big(\min_{y\in [-\frac{1}{s^{1+\delta}}, \frac{1}{s^{1+\delta}}]}B(y)<-\frac{\epsilon}{4}s\Big).
\end{equation}
We bound the r.h.s. of \eqref{eq:TailOfBPlusPar} via the reflection principle as
\begin{align}\label{eq:RefPrinc}
\mathbb{P}\Big(\min_{y\in [-\frac{1}{s^{1+\delta}}, \frac{1}{s^{1+\delta}}]}B(y)\leq -\frac{\epsilon}{4}s\Big)\leq \mathbb{P}\Big(2|X_1| +2 |X_2|\geq \frac{\epsilon}{4}s\Big)
\end{align}
where $X_1, X_2$ are independent Gaussians with variance $2^{\frac{1}{3}}s^{-(1+\delta)}$. By tail estimates, it follows that the r.h.s. of \eqref{eq:RefPrinc} is bounded above by $c_1e^{-c_2 s^{3+\delta}}$ for some constants $c_1, c_2>0$ which only depend on $\epsilon$.
Plugging this into \eqref{eq:TailOfBPlusPar} and combining with \eqref{eq:ContainmentProb}, we find \eqref{eq:SmallProb}.
\end{proof}
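To spell out the last Gaussian estimate: writing $\sigma^2$ for the common variance of $X_1, X_2$ (which is of order $s^{-(1+\delta)}$), a union bound gives
\begin{align*}
\mathbb{P}\Big(2|X_1|+2|X_2|\geq \frac{\epsilon s}{4}\Big)\leq 2\,\mathbb{P}\Big(|X_1|\geq \frac{\epsilon s}{16}\Big)\leq 4\exp\Big(-\frac{\epsilon^2 s^2}{512\sigma^2}\Big),
\end{align*}
which is indeed of the form $c_1e^{-c_2s^{3+\delta}}$ claimed in \eqref{eq:SmallProb}.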
\section{Upper Tail under narrow wedge initial data}\label{UpperTailNSEC}
The aim of this section is to prove Theorem~\ref{GrandUpTheorem}. To achieve this, we first state a few auxiliary results which combine to prove Theorem~\ref{GrandUpTheorem}. These auxiliary results are proved at the end of Section~\ref{UpperTailNSEC}. Recall the definition of $\Upsilon_T$ from \eqref{eq:DefUpsilon}. Our first result of this section (Proposition~\ref{thm:UpTail}) gives an upper and a lower bound for the probability $\mathbb{P}(\Upsilon_T(0)\geq s)$. These bounds are close to optimal when $s\ll T^{\frac{2}{3}}$. When $s=O(T^{\frac{2}{3}})$ or $s\gg T^{\frac{2}{3}}$, those bounds are not optimal (see Remark~\ref{UpperTailNewRemark}). In those cases, we obtain better bounds using Proposition~\ref{UpperDeepTails}.
\begin{prop}\label{thm:UpTail}
Fix $T_0>0$ and $\zeta, \epsilon\in (0, 1)$ with $\zeta\leq \epsilon$. There exists $s_0=s_0(\epsilon, \zeta, T_0)$ such that for all $s\geq s_0$ and $T\geq T_0$,
\begin{align}
\mathbb{P}\left(\Upsilon_T(0)> s\right)&\leq e^{-T^{1/3} \zeta s}+ e^{- \frac{4}{3} (1-\epsilon)s^{3/2}},\label{eq:UpBoundBd}\\
1-\exp\big(-e^{-\zeta s T^{1/3}}\big)\mathbb{P}\left(\Upsilon_T(0)\leq s\right)&\geq e^{-T^{1/3}(1+\zeta) s}+ e^{- \frac{4}{3}(1+\epsilon) s^{3/2}}.\label{eq:LowBound}
\end{align}
\end{prop}
\begin{rem}\label{UpperTailNewRemark}
Proposition~\ref{thm:UpTail} implies that for $s\ll T^{\frac{2}{3}}$
\begin{align}\label{eq:Me}
\exp\Big(- \frac{4}{3}(1+\epsilon)s^{\frac{3}{2}}\Big) \leq \mathbb{P}\big(\Upsilon_T(0)>s\big) \leq \exp\Big(- \frac{4}{3}(1-\epsilon)s^{\frac{3}{2}}\Big).
\end{align}
To see this, we first note that
\begin{align}
\text{r.h.s. of \eqref{eq:UpBoundBd}} \leq
\exp\Big(- \frac{4}{3}(1-\epsilon)s^{\frac{3}{2}}\Big), & \qquad \text{when }s\ll T^{\frac{2}{3}}.\label{eq:UpTailPhases}
\end{align}
Using the approximation $1-\exp\big(-e^{-\zeta s T^{1/3}}\big)\approx \exp(-\zeta s T^{1/3})$, we see that \eqref{eq:LowBound} implies
\begin{align}\label{eq:LowBdCons}
\mathbb{P}\big(\Upsilon_T(0)> s\big)\geq \exp\big(e^{-\zeta s T^{\frac{1}{3}}}\big)\Big(e^{-(1+\zeta)s T^{\frac{1}{3}}} - e^{-\zeta s T^{\frac{1}{3}}} + e^{-\frac{4}{3}(1+\epsilon)s^{\frac{3}{2}}}\Big).
\end{align}
The r.h.s. of \eqref{eq:LowBdCons} is bounded below by $\exp(- \frac{4}{3}(1+\epsilon)s^{\frac{3}{2}})$ when $s\ll T^{\frac{2}{3}}$.
Note, when $s\gg T^{\frac{2}{3}}$, the dominating term on the r.h.s. of \eqref{eq:UpBoundBd} is $\exp(-\zeta s T^{1/3})$, which, as our next proposition shows, is not the correct order of decay of $\mathbb{P}(\Upsilon_T(0)>s)$.
\end{rem}
\begin{prop}\label{UpperDeepTails}
Fix $\epsilon\in (0,1)$. Then, for all pairs $(s,T)$ satisfying $s\geq \frac{9}{16}\epsilon^{-2}T^{\frac{2}{3}}$ and $T>\pi$,
\begin{align}
\mathbb{P}(\Upsilon_T(0)> s) &\leq e^{- \frac{4(1-\epsilon)}{3}s^{3/2}}, \label{eq:TightUpBd}\\
\mathbb{P}(\Upsilon_T(0) > s) & \geq e^{-4\sqrt{3}(1+3\epsilon)s^{3/2}}.\label{eq:TightLowBd}
\end{align}
Furthermore, for all $s\in \big[\frac{1}{8}\epsilon^2 T^{\frac{2}{3}}, \frac{9}{16} \epsilon^{-2} T^{\frac{2}{3}}\big]$,
\begin{align}\label{eq:LowBdAllWay}
\mathbb{P}\big(\Upsilon_T(0)>s\big)\geq \frac{1}{2}e^{-2^{7/2}\epsilon^{-3}s^{3/2}}.
\end{align}
Moreover, for any $0<T_0\leq \pi$ and $\epsilon \in (0,3/5)$, there exist $c_1=c_1(T_0)>c_2=c_2(T_0)>0$ such that for all $T\in [T_0,\pi]$ and $s\geq \frac{9}{16}\epsilon^{-2}T^{\frac{2}{3}} + 24T^{-\frac{1}{3}}_0(1-\epsilon)^{-1}|\log (T_0/\pi)|$,
\begin{align}\label{eq:UpLowRough}
e^{-c_1s^{3/2}}\leq \mathbb{P}(\Upsilon_T(0)>s) \leq e^{-c_2s^{3/2}}.
\end{align}
\end{prop}
\begin{prop}\label{UpperDeepTailLemma}
Fix $\epsilon\in (0,1)$, $T>\pi$ and $c>\frac{4}{3}\big(1+\frac{1}{3}\epsilon\big)$. Then, there exists a sequence $\{s_n\}_{n\in \mathbb{N}}$ with $s_n \to \infty$ as $n\to \infty$ such that $\mathbb{P}(\Upsilon_T(0)>s_n)\geq e^{-cs^{3/2}_n}$ for all $n\in \mathbb{N}$.
\end{prop}
\subsection{Proof of Theorem~\ref{GrandUpTheorem}}
We first show \eqref{eq:RoughBd1} when $T_0 \in (0, \pi)$. Fix $\epsilon \in (0,\frac{3}{5})$ and define $s_0= \frac{9}{16}\epsilon^{-2} \pi^{\frac{2}{3}}+ 24T^{-\frac{1}{3}}_0(1-\epsilon)^{-1}|\log (T_0/\pi)|$. Then, for all $T\in [T_0, \pi]$ and $s\geq s_0$, \eqref{eq:RoughBd1} follows from \eqref{eq:UpLowRough}.
\vspace{0.2cm}
Now, we show \eqref{eq:RoughBd1} for $T_0>\pi$. Fix $\zeta = \epsilon\in \big(0,\frac{1}{2}\big)$. Proposition~\ref{thm:UpTail} says that there exists $s_0=s_0(\epsilon, T_0)$ such that \eqref{eq:UpBoundBd} and \eqref{eq:LowBound} hold for all $s\geq s_0$ and $T\geq T_0$.
\noindent (i) For all $s\in (0, \frac{1}{8}\epsilon^{2} T^{\frac{2}{3}})$, we note
\begin{equation}\label{eq:obs1}
\frac{4}{3}(1+\epsilon)s^{\frac{3}{2}}\leq 2 s^{\frac{3}{2}} \leq \frac{1}{\sqrt{2}}\epsilon sT^{\frac{1}{3}}
\end{equation}
where the first and second inequalities follow from $\epsilon\leq \frac{1}{2}$ and $s\leq \frac{1}{8}\epsilon^2 T^{\frac{2}{3}}$ respectively. Furthermore, there exists $s^{\prime}_0 = s^{\prime}_{0}(\epsilon, T_0)$ such that for all $s\geq s^{\prime}_0$, one has
\begin{align}\label{eq:obs2}
\exp\big(-\frac{1}{\sqrt{2}}\epsilon s T^{\frac{1}{3}}\big)\geq 2\exp\big( -\epsilon s T^{\frac{1}{3}}\big).
\end{align}
Combining \eqref{eq:obs1} and \eqref{eq:obs2} yields
\begin{align}\label{eq:obs3}
\exp(-\frac{4}{3}(1+\epsilon)s^{\frac{3}{2}}) \geq 2 \exp(- \epsilon s T^{\frac{1}{3}}), \qquad \forall s\in (s^{\prime}_0, \frac{1}{8}\epsilon^2 T^{\frac{2}{3}}).
\end{align}
Plugging this into the r.h.s. of \eqref{eq:UpBoundBd} yields
\begin{align}\label{eq:NewUpTBd1}
\mathbb{P}(\Upsilon_T(0)> s)\leq 2\exp\big(-\frac{4}{3}(1-\epsilon)s^{\frac{3}{2}}\big)
\end{align}
for all $s\in (\max\{s_0, s^{\prime}_0\}, \frac{1}{8}\epsilon^2 T^{\frac{2}{3}})$ where $s_0=s_0(\epsilon, T_0)$ comes from Proposition~\ref{thm:UpTail}. Moreover, applying \eqref{eq:obs3} in \eqref{eq:LowBdCons}, we observe
\begin{align}\label{eq:NewUpTBd2}
\mathbb{P}(\Upsilon_T(0)>s)\geq \frac{1}{2}\exp\big(-\frac{4}{3}(1+\epsilon)s^{3/2}\big).
\end{align}
Combining \eqref{eq:NewUpTBd1} and \eqref{eq:NewUpTBd2}, we obtain \eqref{eq:RoughBd1} with $c_1\leq \frac{4}{3}(1+\epsilon)$ and $c_2 \geq \frac{4}{3}(1-\epsilon)$ for all $s\in (s^{\prime\prime}_0,\frac{1}{8}\epsilon^2 T^{\frac{2}{3}})$ for some $s^{\prime\prime}_0=s^{\prime\prime}_0(\epsilon, T_0)$.
\smallskip
\noindent (ii) When $s\geq \frac{9}{16}\epsilon^{-2} T^{\frac{2}{3}}$, we first apply Proposition~\ref{UpperDeepTails}. Applying \eqref{eq:TightUpBd} and \eqref{eq:TightLowBd} yields \eqref{eq:RoughBd1} with $c_1\leq 4\sqrt{3}(1+3\epsilon)$ and $c_2\geq \frac{4}{3}(1-\epsilon)$. The second part of the claim follows from Proposition~\ref{UpperDeepTailLemma}.
\smallskip
\noindent (iii) For all $s\in (\frac{1}{8}\epsilon^2 T^{\frac{2}{3}}, \frac{9}{16}\epsilon^{-2}T^{\frac{2}{3}})$, appealing to \eqref{eq:LowBdAllWay} of Proposition~\ref{UpperDeepTails}, we get $c_1\leq 2^{\frac{7}{2}}\epsilon^{-3}$. Furthermore, one has the following bound on the r.h.s. of \eqref{eq:UpBoundBd}
\begin{align}\label{eq:Interm}
\exp\Big(-\epsilon s T^{\frac{1}{3}}\Big) + \exp\Big(-\frac{4}{3}(1-\epsilon)s^{\frac{3}{2}}\Big)\leq 2\exp\Big(-\min\big\{\epsilon s T^{\frac{1}{3}}, \frac{4}{3}(1-\epsilon)s^{\frac{3}{2}}\big\}\Big). &&
\end{align}
For all $\epsilon\leq \frac{1}{2}$ and $s\in (\frac{1}{8}\epsilon^2 T^{\frac{2}{3}}, \frac{9}{16}\epsilon^{-2}T^{\frac{2}{3}})$, the r.h.s. of \eqref{eq:Interm} is bounded above by $\exp(-\frac{4}{3}\epsilon s^{\frac{3}{2}})$.
Plugging this bound into \eqref{eq:UpBoundBd}, we get
\[\mathbb{P}(\Upsilon_T(0)>s)\leq 2 e^{-\frac{4}{3}\epsilon s^{3/2}}, \quad \forall s\in \Big(\max\{s_0, \frac{1}{8}\epsilon^2 T^{\frac{2}{3}}\},\max\{s_0, \frac{9}{16}\epsilon^{-2} T^{\frac{2}{3}}\} \Big).\]
Therefore, \eqref{eq:RoughBd1} holds when $s$ lies in the interval $(\max\{s_0, \frac{1}{8}\epsilon^2 T^{\frac{2}{3}}\},\max\{s_0, \frac{9}{16}\epsilon^{-2} T^{\frac{2}{3}}\} )$ with $c_1\leq 2^{\frac{7}{2}}\epsilon^{-3}$ and $c_2\geq \frac{4}{3}\epsilon$. This completes the proof of Theorem~\ref{GrandUpTheorem}.
\subsection{Proof of Proposition~\ref{UpperDeepTails}}
To prove Proposition~\ref{UpperDeepTails}, we need the following lemma. Let $$\psi_T(k)=
\begin{cases}
\frac{k!e^{\frac{Tk^3}{12}}}{2\sqrt{\pi T} k^{\frac{3}{2}}}, & \text{when }T\geq \pi\\
\frac{\pi^{(k-1)/2}k!e^{\frac{Tk^3}{12}}}{2T^{k/2} k^{\frac{3}{2}}} & \text{when }T<\pi.
\end{cases} $$
\begin{lemma}\label{MomBoundLem} Fix $k\in \mathbb{N}$ and $T_0\in \mathbb{R}_{+}$. Then, we have
\begin{align}\label{eq:MomBound}
C\psi_T(k)\leq \mathbb{E}\big[\exp\big(kT^{\frac{1}{3}}\Upsilon_T(0)\big)\big] \leq 69 \psi_T(k)
\end{align}
where $C=C(k,T_0)>0$ is bounded below by $1$ for all $T>T_0>\pi$ and by $ T^{(k-1)/2}_0 \pi^{-k/2}$ for all $T\in [T_0, \pi]$.
\end{lemma}
\begin{proof}
Recall that $\mathcal{Z}(2T,0)= \exp(T^{\frac{1}{3}}\Upsilon_T(0)-\frac{T}{12})$. The moments of $\mathcal{Z}(2T, 0)$ are given by\footnote{These formulas were formally derived in \cite{BorodinCorwinMac} with a proof given as \cite[Theorem 2.1]{Ghosal18}.}:
\begin{align}
\mathbb{E}\Big[\frac{\exp(kT^{\frac{1}{3}}\Upsilon_T(0))}{k!}
\Big]& = \sum_{\substack{\lambda\vdash k\\ \lambda = 1^{m_1} 2^{m_2}\ldots }} \frac{1}{m_1!m_2!\ldots }\int^{\mathbf{i}\infty}_{-\mathbf{i}\infty} \frac{dw_1}{2\pi \mathbf{i}} \ldots \int_{-\mathbf{i}\infty}^{\mathbf{i}\infty} \frac{dw_{\ell(\lambda)}}{2\pi \mathbf{i}} \mathrm{det}\left[\frac{1}{w_j+\lambda_j- w_i}\right]^{\ell(\lambda)}_{i,j=1}\\
&\times \exp\left[T \sum_{j=1}^{\ell(\lambda)}\Big(\frac{\lambda^3_j}{12} + \lambda_j\Big(w_j+\frac{\lambda_j}{2}-\frac{1}{2}\Big)^2\Big)\right].\label{eq:MomContFor}
\end{align}
Here, $\lambda\vdash k$ denotes that $\lambda=(\lambda_1\geq \lambda_2\geq \ldots )$ partitions $k$, $\ell(\lambda)=\#\{i:\lambda_i>0\}$ and $m_j = \#\{i:\lambda_i=j\}$.
By Cauchy's determinant formula,
\begin{align}\label{eq:Cauchy}
\mathrm{det}\Big[\frac{1}{w_i+\lambda_i-w_j}\Big] = \prod_{i=1}^{\ell(\lambda)} \frac{1}{\lambda_i}\prod^{\ell(\lambda)}_{i< j} \frac{\big(w_i-w_j+\lambda_i-\lambda_j\big)\big(w_j-w_i\big)}{\big(w_i+\lambda_i-w_j\big)\big(w_j+\lambda_j-w_i\big)}.
\end{align}
Applying \eqref{eq:Cauchy} to \eqref{eq:MomContFor} followed by substituting $\mathbf{i}z_j = T^{\frac{1}{3}}(w_j + \frac{\lambda_j}{2}-\frac{1}{2})$ in \eqref{eq:MomContFor} and deforming the contours to the real axis (note that no pole will be crossed) implies that
\begin{align*}
\text{r.h.s. of \eqref{eq:MomContFor}} &= \sum_{\substack{\lambda\vdash k\\ \lambda = 1^{m_1} 2^{m_2}\ldots }} \frac{1}{m_1!m_2!\ldots }\prod_{i=1}^{\ell(\lambda)}\frac{e^{ \frac{T\lambda^3_i}{12}}}{2\pi} \int^{\infty}_{-\infty}\ldots \int^{\infty}_{-\infty} \prod_{i=1}^{\ell(\lambda)} \frac{dz_i\, e^{-T^{\frac{1}{3}}\lambda_i z^2_i }}{T^{\frac{1}{3}}\lambda_i}\prod^{\ell(\lambda)}_{i< j} \frac{\frac{T^{\frac{2}{3}}(\lambda_i-\lambda_j)^2}{4}+ (z_i-z_j)^2}{\frac{T^{\frac{2}{3}}(\lambda_i+\lambda_j)^2}{4} + (z_i-z_j)^2}.
\end{align*}
Taking $\lambda=(k)$ (i.e., $\lambda_1 =k$ and $\lambda_i=0$ for all $i\geq 2$), evaluating the single integral and noting that all the terms on the r.h.s. above are positive yields the lower bound in \eqref{eq:MomBound} when $T_0>\pi$.
In the case when $T_0<\pi$, the term corresponding to $\lambda =(k)$ is bounded below by $T^{(k-1)/2}_0\pi^{-k/2}\psi_T(k)$ for all $T\in [T_0,\pi]$. This yields the lower bound in \eqref{eq:MomBound} when $T_0<\pi$.
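For concreteness, the contribution of $\lambda=(k)$ to $\mathbb{E}\big[\exp(kT^{\frac{1}{3}}\Upsilon_T(0))\big]$ evaluates to
\begin{align*}
\frac{k!\,e^{\frac{Tk^3}{12}}}{2\pi}\cdot \frac{1}{T^{\frac{1}{3}}k}\int^{\infty}_{-\infty} e^{-T^{\frac{1}{3}}kz^2}\,dz = \frac{k!\,e^{\frac{Tk^3}{12}}}{2\pi}\cdot\frac{1}{T^{\frac{1}{3}}k}\sqrt{\frac{\pi}{T^{\frac{1}{3}}k}} = \frac{k!\,e^{\frac{Tk^3}{12}}}{2\sqrt{\pi T}\,k^{\frac{3}{2}}},
\end{align*}
which equals $\psi_T(k)$ when $T\geq \pi$ and $T^{(k-1)/2}\pi^{-k/2}\psi_T(k)$ when $T<\pi$.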
For the upper bound, we first show that if $\lambda$ is a partition of $k$ not equal to $(k)$ then
\begin{align}\label{eq:PartitionBd}
\frac{k^3}{12} -\sum_{j=1}^{\ell(\lambda)} \frac{\lambda^3_j}{12}\geq \frac{k^2-k}{4}
\end{align}
with equality only when $\lambda=(k-1,1)$. We prove this by induction. It is straightforward to check that \eqref{eq:PartitionBd} holds when $k=1,2$. Assume \eqref{eq:PartitionBd} holds when $k=k_0-1$. Now we show it for $k=k_0$. Let us assume that $\lambda$ is a partition of $k_0$ and write
\[\frac{k^3_0}{12} -\sum_{j=1}^{\ell(\lambda)} \frac{\lambda^3_j}{12}=\frac{k^3_0}{12} -\frac{(k_0-1)^3+1}{12}+ \frac{(k_0-1)^3+1}{12}-\sum_{j=1}^{\ell(\lambda)} \frac{\lambda^3_j}{12}. \]
The right hand side of the above display is equal to $\frac{k^3_0}{12} -\frac{(k_0-1)^3+1}{12}=\frac{k^2_0-k_0}{4}$ when $\lambda =(k_0-1,1)$. It suffices to show
\begin{align}\label{eq:PartitionBDinduc}
\frac{(k_0-1)^3+1}{12}-\sum_{j=1}^{\ell(\lambda)} \frac{\lambda^3_j}{12}\geq 0
\end{align}
when $\lambda\neq (k_0),(k_0-1,1)$. In the case when $\lambda_{\ell(\lambda)}=1$, the above inequality follows by our assumption since $(\lambda_1, \ldots , \lambda_{\ell(\lambda)-1})$ is a partition of $k_0-1$. For $\lambda_{\ell(\lambda)}>1$, we write
\begin{align}
\frac{(k_0-1)^3+1}{12}-\sum_{j=1}^{\ell(\lambda)} \frac{\lambda^3_j}{12} &= \frac{(k_0-1)^3}{12}-\sum_{j=1}^{\ell(\lambda)-1} \frac{\lambda^3_j}{12}-\frac{(\lambda_{\ell(\lambda)}-1)^3}{12} -\frac{\lambda_{\ell(\lambda)}(\lambda_{\ell(\lambda)}-1)}{4}.
\end{align}
Note that $(\lambda_1, \ldots , \lambda_{\ell(\lambda)}-1)$ is a partition of $k_0-1$. Since $\lambda_{\ell(\lambda)}<k_0$ and \eqref{eq:PartitionBd} holds for $k=k_0-1$, the right hand side of the above display is nonnegative. This shows \eqref{eq:PartitionBDinduc} and hence, proves \eqref{eq:PartitionBd}.
We return to the proof of the upper bound in \eqref{eq:MomBound}. Observe that by bounding the cross-product over $i<j$ by 1 and using Gaussian integrals, we may bound
\begin{align}\label{eq:IntegralBd1}
\int^{\infty}_{-\infty}\ldots \int^{\infty}_{-\infty} \prod_{i=1}^{\ell(\lambda)} \frac{dz_i e^{- T^{\frac{1}{3}}\lambda_iz^2_i}}{T^{\frac{1}{3}}\lambda_i}\prod^{\ell(\lambda)}_{i<j} \frac{\frac{T^\frac{2}{3}(\lambda_i-\lambda_j)^2}{4} + (z_i-z_j)^2}{\frac{T^{\frac{2}{3}}(\lambda_i +\lambda_j)^2}{4} + (z_i-z_j)^2} \leq \prod_{i=1}^{\ell(\lambda)} \frac{\sqrt{2\pi}}{\sqrt{2T}\lambda_i^{\frac{3}{2}}}.
\end{align}
When $T\geq \pi$, the r.h.s. of \eqref{eq:IntegralBd1} is at most $1$. Otherwise, the r.h.s. of \eqref{eq:IntegralBd1} is bounded above by $(\pi/T)^{k/2}$.
Owing to this, \eqref{eq:PartitionBd}, and $m_1!m_2!\ldots \leq k!$, we get
\begin{align}
\mathbb{E}\big[\exp(kT^{\frac{1}{3}}\Upsilon_T(0))\big] &\leq \Big(1+ k^{\frac{3}{2}}e^{-\frac{k^2-k}{4}}\#\{\lambda: \lambda\vdash k\} \Big)\times\begin{cases}
\frac{k!e^{\frac{k^3T}{12}}}{2\sqrt{\pi T} k^{\frac{3}{2}}} & T\geq \pi\\
\frac{\pi^{(k-1)/2}k!e^{\frac{k^3T}{12}}}{2T^{k/2} k^{\frac{3}{2}}} & T<\pi.
\end{cases}
\label{eq:MomUppBd}
\end{align}
Applying Siegel's bound (see \cite[pp. 316-318]{Apostol76}, \cite[pp. 88-90]{Knopp70}) on the number of partitions of any integer $k\geq 1$, we find that
\begin{align}\label{eq:Siegel}
k^{\frac{3}{2}}e^{-\frac{k^2-k}{4}}\#\{\lambda:\lambda\vdash k\}\leq k^{\frac{3}{2}}e^{-\frac{k^2-k}{4}+\pi\sqrt{2k/3}}\leq 68 \quad \forall k\in \mathbb{N}.
\end{align}
Combining \eqref{eq:Siegel} with \eqref{eq:MomUppBd} completes the proof of the upper bound in \eqref{eq:MomBound}.
\end{proof}
\begin{proof}[Proof of \eqref{eq:TightUpBd}]
Combining Markov's inequality and the second inequality of \eqref{eq:MomBound}, we get
\begin{align}\label{eq:UpsilonTB}
\mathbb{P}(\Upsilon_T(0)\geq s) \leq 69 \exp\Big(-\max_{k\in \mathbb{N}} \big[ksT^{\frac{1}{3}}- \log \psi_T(k)\big]\Big).
\end{align}
By Stirling's formula $\psi_T(k)=\exp\big(\frac{Tk^3(1+O(k^{-3/2}))}{12}\big)$.
Set $k_0=\lfloor 2s^{\frac{1}{2}}T^{-\frac{1}{3}}\rfloor$. When $s\geq \frac{9}{16}\epsilon^{-2} T^{\frac{2}{3}}$,
\begin{align}\label{eq:SpecialKBound}
k_0sT^{\frac{1}{3}}-\log\psi_{T}(k_0)\geq k_0sT^{\frac{1}{3}}- \frac{Tk^3_0(1+O(\epsilon^{\frac{3}{2}}))}{12} \geq \frac{4\big(1-\epsilon\big)s^{\frac{3}{2}}}{3}.
\end{align}
The first inequality of \eqref{eq:SpecialKBound} follows by noting that $k_0\geq c\epsilon^{-1}$ for some positive constant $c$. We get the second inequality of \eqref{eq:SpecialKBound} by noticing that $\lfloor2s^{\frac{1}{2}}T^{-\frac{1}{3}}\rfloor\geq 2s^{\frac{1}{2}}T^{-\frac{1}{3}}-1\geq 2s^{\frac{1}{2}} T^{-\frac{1}{3}}\big(1-\frac{2\epsilon}{3}\big)$.
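To spell out the floor estimate used in the last step (a routine consequence of the standing assumption on $s$):

```latex
s\geq \tfrac{9}{16}\epsilon^{-2}T^{\frac{2}{3}}
\;\Longrightarrow\;
s^{\frac{1}{2}}T^{-\frac{1}{3}}\geq \tfrac{3}{4}\epsilon^{-1}
\;\Longrightarrow\;
1\leq \tfrac{4\epsilon}{3}\, s^{\frac{1}{2}}T^{-\frac{1}{3}}
\;\Longrightarrow\;
2s^{\frac{1}{2}}T^{-\frac{1}{3}}-1\geq 2s^{\frac{1}{2}}T^{-\frac{1}{3}}\big(1-\tfrac{2\epsilon}{3}\big).
```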
Finally, \eqref{eq:TightUpBd} follows by plugging \eqref{eq:SpecialKBound} into the r.h.s. of \eqref{eq:UpsilonTB}.
\end{proof}
\begin{proof}[Proof of \eqref{eq:TightLowBd}] Fixing now $k_0 = \lceil 2\cdot(3(1+5\epsilon/6)s)^{\frac{1}{2}} T^{-\frac{1}{3}}\rceil$, we observe that
\begin{align}\label{eq:MLowBd}
\exp\big(k_0 s T^{\frac{1}{3}}\big)\leq \frac{1}{2}\frac{k_0!}{2\sqrt{\pi T}k^{\frac{3}{2}}_0}\exp\left(\frac{k^3_0 T}{12}\right).
\end{align}
To prove this inequality first note that
\begin{align}\label{eq:IneqSeri1}
k_0 s T^{\frac{1}{3}}\leq \Big(2\cdot \big(3(1+\frac{5\epsilon}{6})s\big)^{\frac{1}{2}} T^{-\frac{1}{3}} +1\Big) s T^{\frac{1}{3}} \leq 2\sqrt{3}\Big(1+\frac{5\epsilon}{12}+\frac{2\epsilon}{3\sqrt{3}}\Big)s^{\frac{3}{2}}.
\end{align}
where the first inequality follows from $\lceil k\rceil\leq k+1$ and the second inequality is obtained using $s\geq \frac{9}{16}\epsilon^{-2}T^{\frac{2}{3}}$. Moreover, using $k!\geq k^{\frac{3}{2}}$ which holds for all $k\in \mathbb{Z}_{\geq 3}$, we see
\begin{align}
\text{r.h.s. of \eqref{eq:MLowBd}} \geq \frac{1}{4\sqrt{\pi T}} \exp\Big(2\sqrt{3}\big(1+\frac{5\epsilon}{4}\big)s^{\frac{3}{2}}\Big). \label{eq:IneqSeri2}
\end{align}
Now, \eqref{eq:MLowBd} follows from \eqref{eq:IneqSeri1} and \eqref{eq:IneqSeri2} by noting that $\frac{5}{4}\geq \frac{5}{12}+\frac{2}{3\sqrt{3}}$ and $T\leq\frac{64}{27}(\epsilon^2 s)^{\frac{3}{2}}$.
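We note, for completeness, that the constraint $T\leq\frac{64}{27}(\epsilon^2 s)^{\frac{3}{2}}$ used above is simply a restatement of the standing assumption on $s$:

```latex
s\geq \tfrac{9}{16}\epsilon^{-2}T^{\frac{2}{3}}
\;\Longleftrightarrow\;
T^{\frac{2}{3}}\leq \tfrac{16}{9}\epsilon^{2}s
\;\Longleftrightarrow\;
T\leq \Big(\tfrac{16}{9}\epsilon^{2}s\Big)^{\frac{3}{2}} = \tfrac{64}{27}\big(\epsilon^{2}s\big)^{\frac{3}{2}}.
```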
Combining the first inequality of \eqref{eq:MomBound} with \eqref{eq:MLowBd} yields
\begin{align}\label{eq:ReArr}
\mathbb{P}\big(\Upsilon_T(0)> s\big)\geq \mathbb{P}(E),\quad \textrm{with} \quad E = \Big\{\exp\big(k_0T^{\frac{1}{3}}\Upsilon_T(0)\big)> \frac{1}{2}\mathbb{E}[\exp(k_0T^{\frac{1}{3}}\Upsilon_T(0) )]\Big\}.\qquad
\end{align}
\begin{claim} Fix $p,q>1$ such that $p^{-1}+q^{-1}=1$. Then,
\begin{align}\label{eq:PZTypeIneq}
\mathbb{P}(E)\geq 2^{-q}\, \mathbb{E}\big[\exp(k_0T^{\frac{1}{3}}\Upsilon_T(0))\big]^{q}\, \mathbb{E}\big[\exp(pk_0T^{\frac{1}{3}}\Upsilon_T(0))\big]^{-q/p}
\end{align}
\end{claim}
\begin{proof}
Let us write
\begin{align}
\mathbb{E}\big[\exp(k_0T^{\frac{1}{3}} \Upsilon_T(0))\big]= \mathbb{E}\Big[\exp(k_0T^{\frac{1}{3}}\Upsilon_T(0))\mathbbm{1}\big(E^c\big)\Big]+ \mathbb{E}\Big[\exp(k_0T^{\frac{1}{3}}\Upsilon_T(0))\mathbbm{1}\big(E\big)\Big].&&\label{eq:PZarg}
\end{align}
The first term on the r.h.s. of \eqref{eq:PZarg} is bounded above by $\frac{1}{2}\mathbb{E}\big[\exp(k_0T^{\frac{1}{3}}\Upsilon_T(0))\big]$. To bound the second term, we use H\"older's inequality
\begin{align}\label{eq:Holder}
\mathbb{E}\Big[\exp(k_0T^{\frac{1}{3}}\Upsilon_T(0))\mathbbm{1}\big(E\big)\Big]\leq \Big[\mathbb{E}\big[\exp(pk_0T^{\frac{1}{3}}\Upsilon_T(0))\big]\Big]^{\frac{1}{p}} \mathbb{P}(E)^{\frac{1}{q}}
\end{align}
where $p^{-1}+q^{-1}=1$. Plugging the upper bound of \eqref{eq:Holder} into the r.h.s. of \eqref{eq:PZarg} and simplifying yields \eqref{eq:PZTypeIneq} and proves the claim.
\end{proof}
Returning to the proof of \eqref{eq:TightLowBd}, thanks to \eqref{eq:MomBound}, we find that
\[\text{r.h.s. of \eqref{eq:PZTypeIneq} }\geq \exp\Big(-\frac{q(p^2-1)k^3_0 T(1+O(\epsilon^{3/2}))}{12}\Big).\] From $p^{-1}+q^{-1}=1$, it follows that $q(p^2-1) =p(p+1)$. Taking $p = 1+\epsilon/6$ and recalling that $k_0= \lceil 2\cdot(3(1+5\epsilon/6)s)^{\frac{1}{2}} T^{-\frac{1}{3}}\rceil$, we get
$
\text{l.h.s. of \eqref{eq:PZTypeIneq}}\geq 2^{-q}\exp\big(-4\sqrt{3}(1+3\epsilon/2) s^{\frac{3}{2}}\big).
$
Since $q= 6\epsilon^{-1}+1$, we find that the r.h.s. of the above inequality is bounded below by $\exp(-4\sqrt{3}(1+3\epsilon)s^{\frac{3}{2}})$ for all $s\geq \frac{9}{16}\epsilon^{-2}T^{\frac{2}{3}}$ and $T\geq T_0\geq\pi$. This completes the proof.
\end{proof}
\begin{proof}[Proof of \eqref{eq:LowBdAllWay}]
Fix $k_0=\lceil 2\cdot (3(1+5\epsilon/6)s)^{\frac{1}{2}}T^{-\frac{1}{3}}\rceil $. Our aim is to obtain a lower bound for the r.h.s. of \eqref{eq:ReArr}. Applying \eqref{eq:PZTypeIneq} with $p=q=2$ yields
\begin{align}\label{eq:Obs5}
\mathbb{P}(\Upsilon_T(0)>s)\geq \frac{1}{2}\exp\Big(-\frac{7k^3_0 T}{12}\Big).
\end{align}
For $k_0\geq 2$, we have $k_0\leq 2(k_0-1)$ which implies $k_0\leq 4\cdot (3(1+\epsilon)s)^{\frac{1}{2}} T^{-\frac{1}{3}}$ and hence
$\mathbb{P}(\Upsilon_T(0)>s)\geq \frac{1}{2}\exp(- 2^{6}s^{\frac{3}{2}}).
$
When $k_0=1$, the r.h.s. of \eqref{eq:Obs5} is at least $\frac{1}{2}\exp(-2^{\frac{7}{2}}\epsilon^{-3}s^{\frac{3}{2}})$ for all $s\geq \frac{1}{8}\epsilon T^{\frac{2}{3}}$.
\end{proof}
\begin{proof}[Proof of \eqref{eq:UpLowRough}]
We first prove the second inequality of \eqref{eq:UpLowRough}. Fix $T \in [T_0,\pi]$. Applying Markov's inequality yields
\begin{align}\label{eq:Markov2}
\mathbb{P}(\Upsilon_{T}(0)\geq s)\leq 69 \exp\big(-\max_{k\in \mathbb{N}}\big[ksT^{\frac{1}{3}}- \log \psi_{T}(k)\big]\big).
\end{align}
Owing to Stirling's formula, we get $\psi_{T}(k)= \exp\big(\frac{Tk^{3}(1+O(k^{-3/2}))}{12}-\frac{k}{2}\log T_0\big)$. Set $k_0= \lfloor 2s^{\frac{1}{2}} T^{-\frac{1}{3}}\rfloor$ and when $s\geq \frac{9}{16}\epsilon^{-2}T^{\frac{2}{3}}+ 24T^{-\frac{1}{3}}_0(1-\epsilon)^{-1}|\log (T_0/\pi)|$, we have
\begin{align}
k_0sT^{\frac{1}{3}} - \log \psi_{T}(k_0)\geq k_0sT^{\frac{1}{3}} - \frac{Tk^3_0(1+O(\epsilon^{\frac{3}{2}}))}{12} + \frac{k_0}{2}\log T_0\geq \frac{4(1-\epsilon)}{3}s^{\frac{3}{2}} +\frac{k_0}{2}\log T_0. \label{eq:SeriesIneq}
\end{align}
The first inequality of \eqref{eq:SeriesIneq} follows since $k_0\geq c\epsilon^{-1}$ for some constant $c>0$, and the second inequality follows since $\lfloor 2s^{\frac{1}{2}}T^{-\frac{1}{3}}\rfloor \geq 2s^{\frac{1}{2}}T^{-\frac{1}{3}}(1-\frac{2\epsilon}{3})$. Now, we claim that the r.h.s. of \eqref{eq:SeriesIneq} is bounded below by $(1-\epsilon)s^{\frac{3}{2}}$. To see this, we write
\begin{align}
\frac{k_0}{2}\log T_0\geq \min\{s^{\frac{1}{2}} T^{-\frac{1}{3}}\log T_0, 0\}\geq - \frac{1}{24}s^{\frac{3}{2}}(1-\epsilon)(T_0/T)^{1/3}\geq -\frac{1}{24}(1-\epsilon)s^{\frac{3}{2}}
\end{align}
where the first inequality follows since $k_0\leq 2s^{\frac{1}{2}}T^{-\frac{1}{3}}$, the second inequality holds since $s\geq 24 T^{-\frac{1}{3}}_0(1-\epsilon)^{-1} |\log (T_0/\pi)|$ and the last inequality is obtained by noting that $T_0\leq T$. Substituting the inequalities in the above display in the r.h.s. of \eqref{eq:SeriesIneq} proves the claim. As a consequence, for all $T\in [T_0,\pi]$,
\begin{align}
\max_{k\in \mathbb{N}}\big[ksT^{\frac{1}{3}}- \log \psi_{T}(k)\big] \geq k_0sT^{\frac{1}{3}} - \log \psi_{T}(k_0)\geq (1-\epsilon)s^{\frac{3}{2}}.
\end{align}
Applying the inequality in the above display in the r.h.s. of \eqref{eq:Markov2} yields the second inequality of \eqref{eq:UpLowRough}.
Now, we turn to show the first inequality of \eqref{eq:UpLowRough}. Fix $k_0= \lceil 4s^{\frac{1}{2}} T^{-\frac{1}{3}}\rceil$. We claim that for all $T\in [T_0,\pi]$
\begin{align}\label{eq:BdLow}
\exp\big(k_0 sT^{\frac{1}{3}}\big) \leq \frac{1}{2}\big(T_0/T\big)^{\frac{k_0-1}{2}}\frac{k_0!}{2\sqrt{\pi T}k^{\frac{3}{2}}_0}\exp\big(\frac{k^3_0T}{12}\big).
\end{align}
To prove \eqref{eq:BdLow} we note
\begin{align}\label{eq:BdLowHere1}
k_0sT^{\frac{1}{3}}\leq \big(4s^{\frac{1}{2}} T^{-\frac{1}{3}} +1\big)sT^{\frac{1}{3}}\leq 4\big(1+\frac{\epsilon }{3}\big)s^{\frac{3}{2}}
\end{align}
where the first inequality follows since $\lceil k\rceil \leq k+1$ and the second inequality is obtained using $s\geq \frac{9}{16}\epsilon^{-2}T^{2/3}$. Since we know $T_0\leq T\leq \pi$ and $k^3_0T=(\lceil 4s^{\frac{1}{2}} T^{-\frac{1}{3}}\rceil)^3 T\geq 64s^{\frac{3}{2}}$,
\begin{align}\label{eq:BdLowHere2}
\text{r.h.s. of \eqref{eq:BdLow}}\geq \Big(\frac{T_0}{\pi}\Big)^{\frac{k_0-1}{2}}\frac{k_0!}{4\pi k^{\frac{3}{2}}_0} \exp\Big(\frac{64}{12}s^{\frac{3}{2}}\Big)= \Big(\frac{T_0}{\pi}\Big)^{\frac{k_0-1}{2}} \frac{k_0!}{4\pi k^{\frac{3}{2}}_0} \exp\big((5+3^{-1})s^{\frac{3}{2}}\big).
\end{align}
By using the fact that $s\geq \frac{9}{16}\epsilon^{-2}T^{2/3}+24T^{-\frac{1}{3}}_0(1-\epsilon)^{-1}|\log (T_0/\pi)|$ and $\epsilon<3/5$, we get
\begin{align}
k_0 = \lceil 4s^{\frac{1}{2}} T^{-\frac{1}{3}}\rceil \geq 4s^{\frac{1}{2}} T^{-\frac{1}{3}} > 3\epsilon^{-1}>5, \quad
\frac{1}{3}s^{\frac{3}{2}} > 2s^{\frac{1}{2}}T^{-\frac{1}{3}}_0 |\log (T_0/\pi)|\geq \frac{k_0-1}{2}|\log (T_0/\pi)|.
\end{align}
Now, \eqref{eq:BdLow} follows from \eqref{eq:BdLowHere1}, \eqref{eq:BdLowHere2} and the inequalities of the above display by noting that $4(1+\epsilon/3)\leq 5$, $k_0\geq 6$ and $(T_0/\pi)^{(k_0-1)/2}\exp(3^{-1}s^{3/2})\geq 1$.
For any $T \in [T_0, \pi]$, combining the first inequality of \eqref{eq:MomBound} with \eqref{eq:BdLow} yields
\begin{align}
\mathbb{P}\big(\Upsilon_{T}(0)>s\big)\geq \mathbb{P}(\tilde{E}), \quad \text{where }\tilde{E} =\Big\{\exp\big(k_0T^{\frac{1}{3}}\Upsilon_{T}(0)\big)> \frac{1}{2}\mathbb{E}\big[\exp\big(k_0 T^{\frac{1}{3}} \Upsilon_{T}(0)\big)\big]\Big\}.
\end{align}
Applying \eqref{eq:PZTypeIneq} with $p=q=2$ shows
\begin{align}\label{eq:LowBd2}
\mathbb{P}\big(\Upsilon_{T}(0)>s\big) \geq \frac{1}{2}\exp\Big(-\frac{7k^3_0 T}{12}\Big)\geq \exp\big(-cs^{\frac{3}{2}}\big)
\end{align}
for some absolute constant $c>0$. The last inequality of the above display follows since $k_0 = \lceil 4s^{\frac{1}{2}} T^{-\frac{1}{3}}\rceil$. Note that \eqref{eq:LowBd2} implies the first inequality of \eqref{eq:UpLowRough}. This completes the proof.
\end{proof}
\subsection{Proof of Proposition~\ref{UpperDeepTailLemma}}
We prove this by contradiction. Assume there exists $M>0$ such that $\mathbb{P}(\Upsilon_T(0)>s)\leq e^{-cs^{\frac{3}{2}}}$ for all $s\geq M$. Splitting the expectation according to whether $\Upsilon_T(0)$ lies in $(-\infty,0]$, $[0,M]$ or $(M,\infty)$, we have
\begin{align}
\mathbb{E}\big[\exp(k\Upsilon_T(0) T^{\frac{1}{3}})\big] \leq 1+MkT^{\frac{1}{3}}e^{kMT^{\frac{1}{3}}}+ \int^{\infty}_{M} kT^{\frac{1}{3}}e^{ks T^{\frac{1}{3}} -cs^{\frac{3}{2}}} ds. \label{eq:1stApprox}
\end{align}
Observing that
\begin{align}\label{eq:Maximizer}
\operatornamewithlimits{\textrm{argmax}}_{s\geq 0} \big\{ks T^{\frac{1}{3}} - cs^{\frac{3}{2}}\big\} = \frac{4k^2 T^{\frac{2}{3}}}{9 c^2},
\end{align}
we may choose $k$ to be a sufficiently large integer such that the r.h.s. of \eqref{eq:Maximizer} exceeds $M$. Then, bounding the integral of \eqref{eq:1stApprox} by $C^{\prime}kT^{\frac{1}{3}}\exp(\max_{s\geq 0} \big\{ksT^{\frac{1}{3}} -cs^{\frac{3}{2}}\big\})$ for some constant $C^{\prime}=C^{\prime}(k)>0$ and plugging in the value of the maximizer from \eqref{eq:Maximizer}, we find
\begin{align}\label{eq:MomUpBd}
\mathbb{E}\big[\exp(k\Upsilon_T(0) T^{\frac{1}{3}})\big]\leq (M+1)kT^{\frac{1}{3}} + C^{\prime}kT^{\frac{1}{3}} e^{\frac{4k^3T}{27c^2}}.
\end{align}
Using $c>\frac{4}{3}\big(1+\frac{1}{3}\epsilon\big)$ in \eqref{eq:MomUpBd} shows that the r.h.s. of \eqref{eq:MomUpBd} is less than $e^{(1-\epsilon)\frac{k^3T}{12}}$ for all sufficiently large $k$, which contradicts \eqref{eq:MomBound}. Hence, the claim follows.
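For completeness, the maximization behind \eqref{eq:Maximizer} is elementary calculus: setting the derivative of $s\mapsto ksT^{\frac{1}{3}}-cs^{\frac{3}{2}}$ to zero gives the maximizer and the maximum value used above,

```latex
\frac{d}{ds}\big(ksT^{\frac{1}{3}}-cs^{\frac{3}{2}}\big)= kT^{\frac{1}{3}}-\tfrac{3c}{2}s^{\frac{1}{2}}=0
\;\Longrightarrow\; s^*=\frac{4k^2T^{\frac{2}{3}}}{9c^2},
\qquad
ks^*T^{\frac{1}{3}}-c(s^*)^{\frac{3}{2}} = \frac{4k^3T}{9c^2}-\frac{8k^3T}{27c^2}=\frac{4k^3T}{27c^2}.
```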
\subsection{Proof of Proposition~\ref{thm:UpTail}}
Our proof of Proposition~\ref{thm:UpTail} relies on a Laplace transform formula for $\mathcal{Z}^{\mathbf{nw}}(T,0)$ which was proved in \cite{BorGor16} and follows from the exact formula for the probability distribution of $\Upsilon_T(0)$ of \cite{Amir11}. It connects $\mathcal{Z}^{\mathbf{nw}}(T,0)$ with the Airy point process $\mathbf{a}_1> \mathbf{a}_2>\ldots $. The latter is a well studied determinantal point process in random matrix theory (see, e.g., \cite[Section~4.2]{AGZ10}).
For convenience, we introduce the following shorthand notation:
\begin{align}
\mathcal{I}_s(x) := \frac{1}{1+\exp(T^{\frac{1}{3}}(x-s))} , \qquad \mathcal{J}_s(x):= \log \big(1+\exp(T^{\frac{1}{3}}(x-s))\big).
\end{align}
It is worth noting that $\mathcal{I}_s(x)= \exp(-\mathcal{J}_s(x))$.
\begin{prop}[Theorem~1 of \cite{BorGor16}]\label{ppn:PropConnection}
For all $s\in\mathbb{R}$,
\begin{align}\label{eq:Connection}
\mathbb{E}_{\mathrm{KPZ}}\Big[\exp\Big(-\exp\big(T^{\frac{1}{3}}(\Upsilon_T(0)-s)\big)\Big)\Big]=\mathbb{E}_{\mathrm{Airy}}\left[\prod_{k=1}^{\infty} \mathcal{I}_s(\mathbf{a}_k)\right].
\end{align}
\end{prop}
We start our proof of Proposition~\ref{thm:UpTail} with upper and lower bounds on the r.h.s. of \eqref{eq:Connection}.
\begin{prop}\label{thm:MainTheorem}
Fix $\epsilon\in (0,1)$, $\zeta\in (0,\epsilon]$ and $T_0>0$. Continuing with the notation of Proposition~\ref{ppn:PropConnection}, there exists $s_0=s_0(\epsilon, \zeta, T_0)$ such that for all $s\geq s_0$,
\begin{align}
1- \mathbb{E}\Big[\prod_{k=1}^{\infty} \mathcal{I}_s(\mathbf{a}_k)\Big]&\leq e^{-\zeta s T^{1/3} }+ e^{- \frac{4}{3} (1-\epsilon)s^{3/2}},\label{eq:UpBound}\\
1- \mathbb{E}\Big[\prod_{k=1}^{\infty} \mathcal{I}_s(\mathbf{a}_k)\Big]&\geq e^{-(1+\zeta)s T^{1/3}} + e^{- \frac{4}{3}(1+\epsilon) s^{3/2}}.\label{eq:LowrBound}
\end{align}
\end{prop}
We defer the proof of Proposition~\ref{thm:MainTheorem} to Section~\ref{NWLaplaceTail}.
\begin{proof}[Proof of Proposition~\ref{thm:UpTail}]
Define $\bar{s}:= (1+\zeta)s$ and $\theta(s) := \exp\big(- \exp\big(T^{\frac{1}{3}}(\Upsilon_T(0)-s)\big)\big)$. Thanks to \eqref{eq:Connection}, we have $\mathbb{E}_{\mathrm{KPZ}}[\theta(s)] = \mathbb{E}_{\mathrm{Airy}}[\prod_{k=1}^{\infty} \mathcal{I}_s(\mathbf{a}_k)]$. Note that
\begin{align}\label{eq:1stStepIneq}
\theta(s)\leq \mathbbm{1}(\Upsilon_T(0)\leq \bar{s})+\mathbbm{1}(\Upsilon_T(0)> \bar{s}) \exp(-\exp(\zeta sT^{1/3})).
\end{align}
Rearranging, taking expectations and applying \eqref{eq:Connection}, we arrive at
\begin{align}\label{eq:StepOfUpperBound}
\mathbb{P}(\Upsilon_T(0)> \bar{s})\leq \Big(1-\exp(-\exp(\zeta sT^{\frac{1}{3}}))\Big)^{-1} \Big(1-\mathbb{E}_{\mathrm{Airy}}\big[\prod_{k=1}^{\infty} \mathcal{I}_s(\mathbf{a}_k)]\Big).
\end{align}
By taking $s$ sufficiently large and $T\geq T_0$, we may assume that $1-\exp(-\exp(\zeta s T^{\frac{1}{3}}))\geq \frac{1}{2}$. Plugging this bound and \eqref{eq:UpBound} into the r.h.s. of \eqref{eq:StepOfUpperBound} yields
\begin{align}
\mathbb{P}(\Upsilon_T(0)\geq \bar{s}) &\leq e^{-\zeta s T^{1/3}} + e^{-\frac{4}{3}(1-\epsilon)s^{3/2}}
\end{align}
for all $s\geq s_{0}$ where $s_0$ depends on $\epsilon, \zeta$ and $ T_0$. This proves \eqref{eq:UpBoundBd}.
We turn now to prove \eqref{eq:LowBound}. Using Markov's inequality,
\begin{align}
\mathbb{P}(\Upsilon_T(0)\leq s)= \mathbb{P}\Big(\theta(\bar{s})\geq \exp\big(- e^{-\zeta sT^{1/3}}\big)\Big)\leq \exp\big(e^{-\zeta sT^{1/3}}\big)\cdot \mathbb{E}[\theta(\bar{s})].
\end{align}
Rearranging yields
$1- \exp\Big(- e^{-\zeta s T^{1/3}}\Big)\mathbb{P}(\Upsilon_T(0)\leq s) \geq 1- \mathbb{E}\left[\theta(\bar{s})\right]$.
Finally, applying \eqref{eq:Connection} and \eqref{eq:LowrBound} to the r.h.s. of this result, we get \eqref{eq:LowBound}.
\end{proof}
\subsubsection{Proof of Proposition~\ref{thm:MainTheorem}}\label{NWLaplaceTail}
\begin{proof}[Proof of \eqref{eq:UpBound}]
We start by noticing the following trivial lower bound
\begin{align}\label{eq:TrivLowerBound}
\mathbb{E}_{\mathrm{Airy}}\big[\prod_{k=1}^{\infty} \mathcal{I}_{s}(\mathbf{a}_k)\big] &\geq \mathbb{E}_{\mathrm{Airy}}\big[\prod_{k=1}^{\infty} \mathcal{I}_{s}(\mathbf{a}_k)\mathbbm{1}(\mathbf{A})\big]
\end{align}
where $\mathbf{A}=\big\{\mathbf{a}_1\leq (1-\zeta)s\big\}$. Setting $k_0:= \lfloor \frac{2}{3\pi}s^{\frac{9}{4}+2\epsilon}\rfloor$ we observe that
\begin{align}
\prod_{k=1}^{k_0} \mathcal{I}_s(\mathbf{a}_k)\mathbbm{1}(\mathbf{A})&= \exp\Big(- \sum_{k=1}^{k_0}\mathcal{J}_s(\mathbf{a}_k)\Big)\mathbbm{1}(\mathbf{A})\geq \exp\Big(-\frac{2}{3\pi}s^{\frac{9}{4}+2\epsilon}e^{-T^{\frac{1}{3}}s\zeta}\Big), \label{eq:1stBound}
\end{align}
where the inequality is obtained via the bound $\mathcal{J}_s(\mathbf{a}_k)\leq e^{-T^{\frac{1}{3}}s\zeta}$, which holds on the event $\mathbf{A}$. Our next task is to bound $\prod_{k>k_0} \mathcal{I}_s(\mathbf{a}_k)$ from below. To achieve this, we recall the result of \cite[Proposition~4.5]{CG18} which shows that for any $\epsilon,\delta\in (0,1)$, we can augment the probability space on which the Airy point process is defined so that there exists a random variable $C^{\mathrm{Ai}}_{\epsilon}$ satisfying
\begin{equation}\label{eq:AiryConcentration}
(1+\epsilon)\lambda_{k} - C^{\mathrm{Ai}}_{\epsilon}\leq \mathbf{a}_k\leq (1-\epsilon)\lambda_k+ C^{\mathrm{Ai}}_{\epsilon}\quad \text{for all }k\geq 1\quad \text{ and } \quad \mathbb{P}(C^{\mathrm{Ai}}_{\epsilon}\geq s)\leq e^{- s^{1-\delta}}
\end{equation}
for all $s\geq s_0$ where $s_0=s_0(\epsilon,\delta)$ is a constant. Here, $\lambda_k$ is the $k$-th zero of the Airy function (see \cite[Proposition~4.6]{CG18}) and we fix some $\delta\in(0, \epsilon)$. Define $\phi(s) := s^{\frac{3+8\epsilon/3}{2(1-\delta)}}$.
Now, we write
\begin{align}\label{eq:ResBound}
\prod_{k>k_0} \mathcal{I}_s(\mathbf{a}_k) \geq \prod_{k>k_0} \mathcal{I}_s(\mathbf{a}_k)\mathbbm{1}(C^{\mathrm{Ai}}_{\epsilon}\leq \phi(s)) &\geq \exp\Big(-\sum_{k>k_0} \mathcal{J}_s\big((1-\epsilon)\lambda_k+ \phi(s)\big)\Big).
\end{align}
Appealing to the tail probability of $C^{\mathrm{Ai}}_{\epsilon}$, we have
$\mathbb{P}(C^{\mathrm{Ai}}_{\epsilon}\leq \phi(s))\geq 1- e^{-s^{\frac{3}{2}+\frac{4}{3}\epsilon}}$.
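The exponent here is obtained by a direct computation from the definition of $\phi$, recorded for the reader's convenience:

```latex
\phi(s)^{1-\delta} = \Big(s^{\frac{3+8\epsilon/3}{2(1-\delta)}}\Big)^{1-\delta}
= s^{\frac{3+8\epsilon/3}{2}} = s^{\frac{3}{2}+\frac{4}{3}\epsilon}.
```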
We now claim that for some constant $C>0$,
\begin{equation}\label{eq:ResBoundSep}
\sum_{k>k_0} \mathcal{J}_s((1-\epsilon)\lambda_k+ \phi(s)) \leq \frac{C}{T^{\frac{1}{3}}}\exp(-sT^{\frac{1}{3}}).
\end{equation}
To prove this note that for all $k\geq k_0$,
\begin{align}\label{eq:TwinIneq}
\lambda_k \leq -\Big(\frac{3\pi k}{2}\Big)^{\frac{2}{3}} \quad \text{and} \quad (1-\epsilon)\Big(\frac{3\pi k}{2}\Big)^{\frac{2}{3}} - \phi(s) \geq (1-\epsilon)\Big(\frac{3\pi}{2} (k-k_0)\Big)^{\frac{2}{3}}.
\end{align}
The first inequality of \eqref{eq:TwinIneq} is an outcome of \cite[Proposition~4.6]{CG18} and the second inequality follows from \cite[Lemma~5.6]{CG18}. Applying \eqref{eq:TwinIneq},
we get
\begin{align}\label{eq:JBound}
\mathcal{J}_s\Big((1-\epsilon)\lambda_k+ \phi(s)\Big)\leq e^{T^{1/3}\big(-s -(1-\epsilon)(3\pi k/2)^{2/3}+\phi(s)\big)} \leq e^{T^{1/3}\big(-s-(1-\epsilon)(k-k_0)^{2/3}\big)}.
\end{align}
Summing over $k>k_0$ in \eqref{eq:JBound}, approximating the sum by the corresponding integral, and evaluating yields \eqref{eq:ResBoundSep}.
\smallskip
Now, we turn to complete the proof of \eqref{eq:UpBound}. Plugging \eqref{eq:ResBoundSep} into the r.h.s. of \eqref{eq:ResBound} yields
\begin{align}\label{eq:ResBoundFinal}
\prod_{k>k_0} \mathcal{I}_s(\mathbf{a}_k)\mathbbm{1}(C^{\mathrm{Ai}}_{\epsilon}\leq \phi(s))\geq \exp\left(-\frac{C}{T^{\frac{1}{3}}} \exp(- sT^{\frac{1}{3}})\right).
\end{align}
Combining \eqref{eq:1stBound} and \eqref{eq:ResBoundFinal} yields
\begin{equation}
\text{l.h.s. of \eqref{eq:TrivLowerBound}}\geq \exp\Big(-\frac{2}{3\pi}s^{\frac{9}{4}+2\epsilon}e^{-\zeta sT^{\frac{1}{3}}}-\frac{C}{T^{\frac{1}{3}}}e^{-sT^{\frac{1}{3}}}\Big) \mathbb{P}\big(C^{\mathrm{Ai}}_{\epsilon}\leq \phi(s), \mathbf{A}\big). \label{eq:LowB1}
\end{equation}
To finish the proof, we observe that
\begin{equation}\label{eq:LowB2}
\mathbb{P}\big(C^{\mathrm{Ai}}_{\epsilon}\leq \phi(s),\mathbf{A}\big)\geq 1 - \mathbb{P}(C^{\mathrm{Ai}}_{\epsilon}\geq \phi(s)) - \mathbb{P}(\mathbf{A}^c)\geq 1- e^{-s^{\frac{3}{2}+\frac{4}{3}\epsilon}} - e^{-\frac{4}{3}(1-\epsilon)s^{\frac{3}{2}}}
\end{equation}
for all $s\geq s_0$. The second inequality above used $\mathbb{P}(\mathbf{A}^c)=\mathbb{P}(\mathbf{a}_1\geq (1-\zeta)s)\leq \exp(-\frac{4}{3}(1-\epsilon)s^{\frac{3}{2}})$ which holds when $s$ is sufficiently large (see \cite[Theorem~1.3]{RRV11}).
Plugging \eqref{eq:LowB2} into the r.h.s. of \eqref{eq:LowB1} and rearranging yields
$e^{-(1+\epsilon)\zeta s T^{\frac{1}{3}}}\leq 1- \exp\Big(-\frac{2}{3\pi}s^{\frac{9}{4}+2\epsilon}e^{-\zeta sT^{\frac{1}{3}}}-\frac{C}{T^{\frac{1}{3}}}e^{-sT^{\frac{1}{3}}}\Big)\leq e^{-(1-\epsilon)\zeta s T^{\frac{1}{3}}}$
for sufficiently large $s$. Hence \eqref{eq:UpBound} follows.
\end{proof}
\begin{proof}[Proof of \eqref{eq:LowrBound}]
Here, we need to get an upper bound on $\mathbb{E}\big[\prod_{k=1}^{\infty} \mathcal{I}_s(\mathbf{a}_k)\big]$. We start by splitting $\mathbb{E}\big[\prod_{k=1}^{\infty} \mathcal{I}_s(\mathbf{a}_k)\big]$ into two different parts (we now set $\mathbf{A}=\big\{\mathbf{a}_1 \leq (1+\zeta)s\big\}$):
\begin{align}\label{eq:LowTail1stStep}
\mathbb{E}\Big[\prod_{k=1}^{\infty} \mathcal{I}_s(\mathbf{a}_k)\Big]\leq \mathbb{E}\Big[\prod_{k=1}^{\infty} \mathcal{I}_s(\mathbf{a}_k)\mathbbm{1}(\mathbf{A})\Big] + \mathbb{P}(\mathbf{A}^c)\cdot \exp(- \zeta s T^{\frac{1}{3}}).
\end{align}
Let us define $\chi^{\mathrm{Ai}}(s):= \#\{\mathbf{a}_i\geq s\}$ and, for $c\in(0,\tfrac{2}{3\pi})$ fixed, define
\begin{align}
\mathbf{B}: = \Big\{ \chi^{\mathrm{Ai}}(-\zeta s)- \mathbb{E}\big[\chi^{\mathrm{Ai}}(-\zeta s)\big]\geq - c(\zeta s)^{\frac{3}{2}}\Big\}.
\end{align}
We split the first term on the r.h.s. of \eqref{eq:LowTail1stStep} as follows
\begin{align}
\mathbb{E}\Big[\prod_{k=1}^{\infty} \mathcal{I}_s(\mathbf{a}_k)\mathbbm{1}(\mathbf{A})\Big] &\leq \mathbb{E}\Big[\prod_{k=1}^{\infty}\mathcal{I}_s(\mathbf{a}_k)\mathbbm{1}\big(\mathbf{B}\cap \mathbf{A}\big)\Big]+ \mathbb{E}\Big[\mathbbm{1}(\mathbf{B}^c \cap \mathbf{A})\Big]. \label{eq:SplitEq1}
\end{align}
On the event $\mathbf{B}$, we may bound $$\prod_{k=1}^{\infty}\mathcal{I}_s(\mathbf{a}_k) \mathbbm{1}(\mathbf{B})\leq \exp\Big(-\Big(\frac{2}{3\pi}-c\Big)(\zeta s)^{\frac{3}{2}}e^{-(1+\zeta)s T^{\frac{1}{3}}}\Big)$$ so that
\begin{align}\label{eq:2ndSplitBd}
\mathbb{E}\Big[\prod_{k=1}^{\infty}\mathcal{I}_s(\mathbf{a}_k)\mathbbm{1}\big(\mathbf{B}\cap\mathbf{A}\big)\Big]\leq \exp\Big(-\Big(\frac{2}{3\pi}-c\Big)(\zeta s)^{\frac{3}{2}}e^{-(1+\zeta)sT^{\frac{1}{3}}}\Big)\cdot\mathbb{P}(\mathbf{A}).
\end{align}
For large $s$, the r.h.s. of \eqref{eq:2ndSplitBd} is bounded above by $\exp\big(-e^{-(1+\zeta)sT^{\frac{1}{3}}}\big)\mathbb{P}(\mathbf{A})$. Thanks to Theorem~1.4 of \cite{CG18}, we know that for any $\delta >0$, there exists $s_{\delta}$ such that $\mathbb{P}(\mathbf{B}^c)\leq e^{- c(\zeta s)^{3-\delta}}$ for all $s\geq s_{\delta}$. Now, we plug these bounds into \eqref{eq:SplitEq1} which provides an upper bound to the first term on the r.h.s. of \eqref{eq:LowTail1stStep}. As a result, we find
\begin{align}
1-\mathbb{E}\big[\prod_{k=1}^{\infty}\mathcal{I}_s(\mathbf{a}_k)\big]&\geq 1- e^{-e^{-(1+\zeta)sT^{\frac{1}{3}}}} - e^{- c(\zeta s)^{3-\delta}} + \mathbb{P}(\mathbf{A}^c)\big( e^{- e^{-(1+\zeta)sT^{\frac{1}{3}}}}- e^{- \zeta s T^{\frac{1}{3}}}\big).\qquad \label{eq:LowrFnStep}
\end{align}
Finally, we note that $\mathbb{P}(\mathbf{A}^c)\geq \exp\big(-\frac{4}{3}(1+\epsilon)s^{\frac{3}{2}}\big)$ (again thanks to \cite[Theorem~1.3]{RRV11}). Thus, the r.h.s. of \eqref{eq:LowrFnStep} is lower bounded by $\frac{1}{2}e^{-(1+\zeta)s T^{1/3}} + e^{-\frac{4}{3}(1+\epsilon)s^{3/2}}$ for sufficiently large $s$. This completes the proof of \eqref{eq:LowrBound} and hence also of Proposition~\ref{thm:MainTheorem}.
\end{proof}
\section{Upper tail under general initial data}\label{UpperTailGSEC}
This section contains the proofs of Theorems~\ref{Main4Theorem} and \ref{Main6Theorem}.
\subsection{Proof of Theorem~\ref{Main4Theorem}}\label{Proof4Theorem}
Theorem~\ref{Main4Theorem} will follow directly from the next two propositions which leverage narrow wedge upper tail decay results to give general initial data results. The cost of this generalization is in terms of both the coefficients in the exponent and the ranges on which the inequalities are shown to hold. Recall $h^{f}_T$ and $\Upsilon_T$ from \eqref{eq:ScalCentHeight} and \eqref{eq:DefUpsilon} respectively.
The following proposition has two parts, corresponding to $T$ being greater than $\pi$ or at most $\pi$. Its main goal is to provide a recipe for deducing upper bounds on $\mathbb{P}(h^{f}_T(0)>s)$ from upper bounds on $\mathbb{P}(\Upsilon_T(0)>s)$. We have seen in Theorem~\ref{GrandUpTheorem} that the latter bounds vary as $s$ ranges over different intervals and, furthermore, that those intervals vary with $T$. This motivates us to choose a generic set of intervals of $s$ based on a given $T$ and to assume upper bounds on $\mathbb{P}(\Upsilon_T(0)>s)$ on those intervals. In what follows, we show how those translate into upper bounds on $\mathbb{P}(h^{f}_T(0)>s)$.
\begin{prop}\label{SubstituteTheo}
Fix $\epsilon,\mu\in (0,\frac{1}{2})$, $\nu \in (0,1)$, $C,\theta, \kappa, M>0$ and assume that $f$ belongs to $\mathbf{Hyp}(C,\nu, \theta, \kappa, M)$ (see Definition~\ref{Hypothesis}).
\begin{enumerate}[(1)]
\item Fix $T_0>\pi$.
Suppose there exists $s_0= s_0(\epsilon, T_0)$ and for any $T\geq T_0$ there exist $s_1=s_1(\epsilon, T)$ and $s_2=s_2(\epsilon, T)$ with $s_1\leq s_2$ such that for any $s\in [s_0,\infty)$,
\begin{align}\label{eq:AssumBd}
\mathbb{P}(\Upsilon_T(0)>s)\leq \begin{cases}
e^{-\frac{4}{3}(1-\epsilon) s^{3/2}} & \text{ if }s\in [s_0, s_1]\cup (s_2, \infty),\\
e^{-\frac{4}{3}\epsilon s^{3/2}} & \text{ if } s\in (s_1, s_2].
\end{cases}
\end{align}
Let
\begin{align}\label{eq:mathbfs}
\mathbf{s}_0 := \frac{s_0}{1-\frac{2\mu}{3}}, \qquad \mathbf{s}_1 := \frac{\epsilon s_1}{1-\frac{2\mu}{3}}, \qquad \mathbf{s}_2:= \frac{s_2}{1-\frac{2\mu}{3}}.
\end{align}
Then, there exists $s^{\prime}_0= s^{\prime}_0(\epsilon,\mu, C,\nu, \theta, \kappa, M, T_0)$ such that for any $T>T_0$ and any $s\in [\max\{s^{\prime}_0, \mathbf{s}_0\},\infty)$, we have
\begin{align}\label{eq:ResultBd}
\mathbb{P}\big(h^{f}_T(0)>s\big)\leq
\begin{cases}
e^{- \frac{\sqrt{2}}{3}(1-\epsilon)(1-\mu)s^{3/2}} & \text{ if }s\in [\mathbf{s}_0,\mathbf{s}_1]\cup (\mathbf{s}_2, \infty),\\
e^{- \frac{\sqrt{2}}{3}\epsilon(1-\mu)s^{3/2}} & \text{ if }s\in (\mathbf{s}_1, \mathbf{s}_2],
\end{cases}
\end{align}
\item Fix $T_0\in (0,\pi)$. Then, there exists $s^{\prime}_0= s^{\prime}_0(C, \nu, \theta, \kappa, M,T_0)$ satisfying the following: if there exist $s_0=s_0(T_0)>0$ and $c=c(T_0)>0$ such that $\mathbb{P}(\Upsilon_T(0)>s)\leq e^{-cs^{3/2}}$ for all $s\in [s_0,\infty)$ and $T\in [T_0, \pi]$, then,
\begin{align}\label{eq:GenRoughUpBd}
\mathbb{P}\big(h^{f}_T(0)>s\big)\leq e^{-\frac{1}{2\sqrt{2}}cs^{3/2}}, \quad \forall s\in [\max\{s^{\prime}_0,s_0\}, \infty), T\in (T_0,\pi].
\end{align}
\end{enumerate}
\end{prop}
The next proposition provides a lower bound on $\mathbb{P}\big(h^{f}_{T}(0)> s\big)$ in terms of the upper tail probability of the narrow wedge solution.
\begin{prop}\label{UpTailLowBd}
Fix $\mu\in (0,\frac{1}{2})$, $n\in \mathbb{Z}_{\geq 3}$, $\nu \in (0,1)$, $C,\theta, \kappa, M>0$ and $T_0>\pi$ and assume that $f\in \mathbf{Hyp}(C,\nu, \theta, \kappa, M)$. Then, there exist $s_0 = s_0(\mu, n, T_0, C, \nu , \theta, \kappa, M)$ and $K=K(\mu)>0$ such that for all $s\geq s_0$ and $T\geq T_0$,
\begin{align}\label{eq:UpTailLowBd}
\mathbb{P}\big(h^{f}_{T}(0)> s\big) \geq \Big(\mathbb{P}\big(\Upsilon_T(0)>\big(1+\tfrac{2\mu}{3}\big)s\big)\Big)^2 - e^{-Ks^n}.
\end{align}
\end{prop}
We prove Propositions~\ref{SubstituteTheo} and~\ref{UpTailLowBd} in Sections~\ref{UpTailUpBdSEC} and \ref{UpTailLowBdSEC} respectively. In what follows, we complete the proof of Theorem~\ref{Main4Theorem} assuming Propositions~\ref{SubstituteTheo} and~\ref{UpTailLowBd}.
\begin{proof}[Proof of Theorem~\ref{Main4Theorem}]
By Theorem~\ref{GrandUpTheorem}, for any $\epsilon\in (0, \frac{1}{2})$ and $T_0>\pi$, there exists $s_0=s_0(\epsilon, T_0)$ such that for all $T>T_0$ and $s\in [s_0, \infty)$
\begin{align}\label{eq:Hypo}
\mathbb{P}(\Upsilon_T(0)>s)\leq \begin{cases}
e^{-\frac{4}{3}(1-\epsilon) s^{3/2}} & \text{ if }s\in [s_0, \frac{1}{8}\epsilon^2 T]\cup (\frac{9}{16}\epsilon^{-2} T, \infty),\\
e^{-\frac{4}{3}\epsilon s^{3/2}} & \text{ if } s\in (\frac{1}{8}\epsilon^2 T, \frac{9}{16}\epsilon^{-2} T].
\end{cases}
\end{align}
For any $\epsilon\in (0,\frac{1}{2})$ and $T>T_0$, \eqref{eq:Hypo} shows that the hypothesis of part (1) of Proposition~\ref{SubstituteTheo} is satisfied with $s_1= \frac{1}{8}\epsilon^2 T$ and $s_2= \frac{9}{16}\epsilon^{-2} T$. Proposition~\ref{SubstituteTheo} yields $s^{\prime}_0=s^{\prime}_0(\epsilon,\mu, T_0, C,\nu, \theta, \kappa, M)$ such that for all $T\geq T_0$ and $s\in [\max\{s^{\prime}_0, s_0/(1-\tfrac{2\mu}{3})\}, \infty)$
\begin{align}\label{eq:FinalUpTailUpBd}
\mathbb{P}\big(h^{f}_T(0)>s\big)\leq
\begin{cases}
e^{- \frac{\sqrt{2}}{3}(1-\epsilon)(1-\mu)s^{3/2}} & \text{ if }s\in \Big[\frac{s_0}{1-\frac{2\mu}{3}},\frac{\epsilon^3 T}{8(1-\frac{2\mu}{3})} \Big]\cup \Big(\frac{9\epsilon^{-2}T}{16(1-\frac{2\mu}{3})} , \infty\Big),\\
e^{- \frac{\sqrt{2}}{3}\epsilon(1-\mu)s^{3/2}} & \text{ if }s\in \Big(\frac{\epsilon^3 T}{8(1-\frac{2\mu}{3})}, \frac{9\epsilon^{-2}T}{16(1-\frac{2\mu}{3})} \Big].
\end{cases}
\end{align}
This shows the upper bound on $\mathbb{P}\big(h^{f}_T(0)>s\big)$ when $T_0>\pi$. For any $T_0\in (0, \pi)$, the upper bound on $\mathbb{P}\big(h^{f}_T(0)>s\big)$ follows from \eqref{eq:GenRoughUpBd} for all $T\in [T_0, \pi]$.
Now, we turn to show the lower bound.
Let us fix $n=3$. Owing to Proposition~\ref{UpTailLowBd} and the lower bound on the probability $\mathbb{P}(\Upsilon_T(0)\geq s)$ in \eqref{eq:RoughBd1} of Theorem~\ref{GrandUpTheorem}, we observe that the second term $e^{-Ks^3}$ on the r.h.s. of \eqref{eq:UpTailLowBd} is less than half of the first term when $s$ is large enough. Hence, there exists $s^{\prime}_{0}=s^{\prime}_{0}(\epsilon,\mu, C,\nu, \theta, \kappa, M, T_0)$ such that for all $T\geq T_0>\pi$ and $s\in [\max\{s^{\prime}_0,s_0/(1+\tfrac{2\mu}{3})\}, \infty)$
\begin{align}\label{eq:FinalUpTailLowBd}
\mathbb{P}\big(h^{f}_T(0)>s\big)\geq
\begin{cases}
\frac{1}{2}e^{-\frac{8}{3}(1+\epsilon)(1+\mu) s^{3/2}} & \text{ if } s\in \Big[\frac{s_0}{1+\frac{2\mu}{3}}, \frac{\epsilon^2 T}{8(1+\frac{2\mu}{3})}\Big],\\
\frac{1}{2}e^{-2^{\frac{9}{2}}\epsilon^{-3}(1+\mu) s^{3/2}} & \text{ if } s\in \Big(\frac{\epsilon^2 T}{8(1+\frac{2\mu}{3})}, \frac{9\epsilon^{-2}T}{16(1+\frac{2\mu}{3})}\Big],\\
\frac{1}{2}e^{-8\sqrt{3}(1+\epsilon)(1+\mu) s^{3/2}} & \text{ if } s\in \Big(\frac{9\epsilon^{-2}T}{16(1+\frac{2\mu}{3})}, \infty\Big).\\
\end{cases}
\end{align}
The sets of three intervals in \eqref{eq:FinalUpTailUpBd} and \eqref{eq:FinalUpTailLowBd} are not the same. Note\footnote{The first inequality uses $\epsilon\leq (1-\tfrac{2\mu}{3})(1+\tfrac{2\mu}{3})^{-1}$ for any $\epsilon, \mu\in (0, \frac{1}{2})$ and the second inequality uses $\mu>0$.} that
$\tfrac{\epsilon^3 T}{8(1-\frac{2\mu}{3})}< \tfrac{\epsilon^2T}{8(1+\frac{2\mu}{3})}$ and $\tfrac{9 \epsilon^{-2}T}{16(1-\frac{2\mu}{3})} > \tfrac{9 \epsilon^{-2}T}{16(1+\frac{2\mu}{3})}$.
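For completeness, the first comparison can be checked directly:
\begin{align*}
\frac{\epsilon^3 T}{8(1-\frac{2\mu}{3})}< \frac{\epsilon^2T}{8(1+\frac{2\mu}{3})} \iff \epsilon< \frac{1-\frac{2\mu}{3}}{1+\frac{2\mu}{3}},
\end{align*}
and the latter holds since $\epsilon<\frac{1}{2}<\frac{1-\frac{2\mu}{3}}{1+\frac{2\mu}{3}}$ for $\mu\in(0,\frac{1}{2})$; the second comparison is immediate since $1-\frac{2\mu}{3}<1+\frac{2\mu}{3}$ for $\mu>0$.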
From this we see that
$
\Big(\tfrac{\epsilon^2 T}{8(1+\frac{2\mu}{3})}, \tfrac{9\epsilon^{-2}T}{16(1+\frac{2\mu}{3})}\Big] \subset \Big(\tfrac{\epsilon^3 T}{8(1-\frac{2\mu}{3})}, \tfrac{9\epsilon^{-2}T}{16(1-\frac{2\mu}{3})} \Big]$,
$ \Big[\tfrac{s_0}{1-\frac{2\mu}{3}},\tfrac{\epsilon^3 T}{8(1-\frac{2\mu}{3})} \Big]\subset \Big[\frac{s_0}{1+\frac{2\mu}{3}}, \tfrac{\epsilon^2 T}{8(1+\frac{2\mu}{3})}\Big],$ and $\Big(\tfrac{9\epsilon^{-2}T}{16(1-\frac{2\mu}{3})} , \infty\Big)\subset \Big(\tfrac{9\epsilon^{-2}T}{16(1+\frac{2\mu}{3})}, \infty\Big)$.
By these containments and \eqref{eq:FinalUpTailUpBd}-\eqref{eq:FinalUpTailLowBd}, for all $s\in \big[\max\big\{s^{\prime}_0,s_0/(1-\tfrac{2\mu}{3}), s_0/(1+\tfrac{2\mu}{3}) \big\}, \infty\big)$ and $T\geq T_0>\pi$, we have $\exp(-c_1s^{\frac{3}{2}})\leq \mathbb{P}\big(h^{f}_T(0)>s\big)\leq \exp(-c_2s^{\frac{3}{2}})$ where
\begin{align}
\left.
\begin{array}{r}
\frac{\sqrt{2}}{3}(1-\mu)(1-\epsilon)\\
\frac{\sqrt{2}}{3}(1-\mu)\epsilon\\
\frac{\sqrt{2}}{3}(1-\mu)(1-\epsilon)
\end{array}
\right\}\leq c_2<c_1\leq
\begin{cases}
\frac{8}{3}(1+\epsilon)(1+\mu) & \text{ if } s\in \Big[\frac{s_0}{1-\frac{2\mu}{3}}, \frac{\epsilon^3 T}{8(1-\frac{2\mu}{3})}\Big],\\
2^{\frac{9}{2}}\epsilon^{-3}(1+\mu) & \text{ if } s\in \Big(\frac{\epsilon^3 T}{8(1-\frac{2\mu}{3})}, \frac{9\epsilon^{-2}T}{16(1-\frac{2\mu}{3})}\Big],\\
8\sqrt{3}(1+\epsilon)(1+\mu) & \text{ if } s\in \Big(\frac{9\epsilon^{-2}T}{16(1-\frac{2\mu}{3})}, \infty\Big).\\
\end{cases}
\end{align}
The lower bound $\mathbb{P}(h^{f}_{T}(0)>s)\geq e^{-2c_1s^{3/2}}$ for all $T \in [T_0,\pi]$ when $T_0 \in (0,\pi)$ follows by combining the first inequality of \eqref{eq:GenBdRough} with \eqref{eq:UpTailLowBd} (with $n=3$). This completes the proof.
\end{proof}
\subsubsection{Proof of Proposition~\ref{SubstituteTheo}}\label{UpTailUpBdSEC}
Recall $h^{f}_T$ and $\Upsilon_T$ from \eqref{eq:ScalCentHeight} and \eqref{eq:DefUpsilon}. By Proposition~\ref{NotMainTheorem}, $\mathbb{P}(h^{f}_T(0)\geq s)= \mathbb{P}(\widetilde{\mathcal{A}}^{f})$ where
\begin{align}
\widetilde{\mathcal{A}}^{f}:= \Big\{\int^{\infty}_{-\infty} e^{T^{\frac{1}{3}}\big(\Upsilon_T(y)+ f(-y)\big)} dy\geq e^{T^{\frac{1}{3}}s}\Big\}.
\end{align}
Let $\zeta_n:=\frac{n}{s^{1+\delta}}$, $n\in \mathbb{Z}$ and fix $\tau\in (0,1)$ such that $\nu+\tau<1$. We define the following events:
\begin{align}
\widetilde{E}_n&:= \Big\{\Upsilon_T(\zeta_n)\geq -\frac{1-2^{-1}\tau}{2^{2/3}}\zeta^2_n + \big(1-\tfrac{2\mu}{3}\big)s\Big\}\label{eq:tildeE}\\
\widetilde{F}_n&:= \Big\{\Upsilon_T(y)\geq -\frac{1- \tau}{2^{2/3}} y^2 + \big(1-\frac{\mu}{3}\big)s \quad \text{ for some }y \in [\zeta_n,\zeta_{n+1}]\Big\}.\label{eq:tildeF}
\end{align}
In the same way as in \eqref{eq:BasicStep}, we write
\begin{align}\label{eq:BasicStep3}
\mathbb{P}\big(\widetilde{\mathcal{A}}^{f}\big)\leq \sum_{n\in \mathbb{Z}} \mathbb{P}(\widetilde{E}_n) +\mathbb{P}\Big(\widetilde{\mathcal{A}}^{f}\cap \big(\bigcup_{n\in \mathbb{Z}} \widetilde{E}_n\big)^c\Big).
\end{align}
From now on, we fix some $T>T_0>\pi$ and assume that there exist $s_0=s_0(\epsilon, T_0)$, $s_1=s_1(\epsilon, T)$ and $s_2=s_2(\epsilon, T)$ with $s_1\leq s_2$ such that \eqref{eq:AssumBd} is satisfied for all $s\in [s_0,\infty)$. In the next result, we establish an upper bound on the first term on the r.h.s. of \eqref{eq:BasicStep3}.
\begin{lemma}\label{UpSumProbBd}
There exist $\bar{s}=\bar{s}(\epsilon, T_0)$ and $\Theta=\Theta(\epsilon, T_0)$ such that for all $s\in [\max\{\bar{s},\mathbf{s}_0\}, \infty)$,
\begin{align}\label{eq:TotSumBd}
\sum_{n\in \mathbb{Z}}\mathbb{P}\big(\widetilde{E}_n\big)\leq \begin{cases}
\Theta e^{-\frac{4}{3}(1-\epsilon)(1-\mu) s^{3/2}} & \text{ if } s\in [\mathbf{s}_0, \mathbf{s}_1]\cup (\mathbf{s}_2, \infty),\\
\Theta e^{-\frac{4}{3}\epsilon(1-\mu) s^{3/2}} & \text{ if } s\in (\mathbf{s}_1,\mathbf{s}_2],
\end{cases}
\end{align}
where $\mathbf{s}_0$, $\mathbf{s}_1$ and $\mathbf{s}_2$ are defined in \eqref{eq:mathbfs}.
\end{lemma}
\begin{proof}
We first prove \eqref{eq:TotSumBd} when $s\in [\mathbf{s}_0,\mathbf{s}_1]$. If $[\mathbf{s}_0,\mathbf{s}_1]$ is empty, then there is nothing to prove. Otherwise, fix any $s\in [\mathbf{s}_0,\mathbf{s}_1]$. Let us denote $$\mathcal{S}_1:= [0,(1-\epsilon)s_1], \quad \mathcal{S}_2:= ((1-\epsilon)s_1, s_2-s_0], \quad \mathcal{S}_3:= (s_2-s_0, \infty).$$
\begin{claim}
\begin{align}\label{eq:IndEventBd}
\mathbb{P}\big(\widetilde{E}_n\big)\leq \begin{cases}
\exp\Big(-\frac{4}{3}(1-\epsilon)\Big(\big(1-\tfrac{2\mu}{3}\big)s+\frac{\tau \zeta^2_n}{2^{5/3}}\Big)^{\frac{3}{2}}\Big) & \text{ when }\frac{\tau \zeta_n^2}{2^{5/3}}\in \mathcal{S}_1\cup \mathcal{S}_3,\\
\exp\Big(-\frac{4}{3}\epsilon\Big(\big(1-\tfrac{2\mu}{3}\big)s+\frac{\tau \zeta^2_n}{2^{5/3}}\Big)^{\frac{3}{2}}\Big) & \text{ when }\frac{\tau \zeta_n^2}{2^{5/3}} \in \mathcal{S}_2.
\end{cases}
\end{align}
\end{claim}
\begin{proof}
Note that $s_0\leq (1-\frac{2\mu}{3})s\leq \epsilon s_1$. This implies $\big(1-\frac{2\mu}{3}\big)s + 2^{-5/3}\tau \zeta^2_n $ is bounded above by $ \epsilon s_1+(1-\epsilon)s_1= s_1$ whenever $2^{-5/3}\tau \zeta^2_n\leq (1-\epsilon)s_1$ whereas it is bounded below by $s_0 + s_2-s_0 =s_2$ if $2^{-5/3}\tau \zeta^2_n>s_2-s_0$. Owing to this and \eqref{eq:AssumBd}, we have
\begin{align}\label{eq:Case1}
\mathbb{P}(\widetilde{E}_n)\leq \exp\Big(-\frac{4}{3}(1-\epsilon)\Big(\big(1-\tfrac{2\mu}{3}\big)s+\frac{\tau \zeta^2_n}{2^{5/3}}\Big)^{\frac{3}{2}}\Big) \quad \text{ when }\frac{\tau \zeta_n^2}{2^{5/3}}\in \mathcal{S}_1\cup \mathcal{S}_3.
\end{align}
Furthermore, $(1-\frac{2\mu}{3})s+2^{-5/3}\tau \zeta^2_n$ is greater than $s_0$ when $s\geq \mathbf{s}_0$. Thanks to $\epsilon<\frac{1}{2}$, one can now see the following from \eqref{eq:AssumBd}:
\begin{align}\label{eq:Case2}
\mathbb{P}(\widetilde{E}_n)\leq \exp\Big(-\frac{4}{3}\epsilon\Big(\big(1-\tfrac{2\mu}{3}\big)s+\frac{\tau \zeta^2_n}{2^{5/3}}\Big)^{\frac{3}{2}}\Big) \quad \text{ when } \frac{\tau \zeta_n^2}{2^{5/3}}\in \mathcal{S}_2.
\end{align}
Combining \eqref{eq:Case1} and \eqref{eq:Case2}, we get \eqref{eq:IndEventBd}.
\end{proof}
Let $n_0=n_0(s,\delta, \tau)<n^{\prime}_0 = n^{\prime}_0(s,\delta, \tau)\in \mathbb{N}$ be such that $2^{-5/3}\tau \zeta^2_n\in \mathcal{S}_2$ for all integers $n$ in $[n_0, n^{\prime}_0]\cup [-n^{\prime}_0, -n_0]$. Using the reverse Minkowski inequality,
\begin{align}\label{eq:BreakSquare}
\frac{\tau \zeta^2_n}{2^{5/3}}\geq \frac{\tau \zeta^2_{n_0}}{2^{5/3}} + \frac{\tau \zeta^2_{|n|-n_0}}{2^{5/3}}, \quad \forall n\in \big([n_0, n^{\prime}_0]\cup [- n^{\prime}_0, -n_0]\big)\cap \mathbb{Z}.
\end{align}
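To verify \eqref{eq:BreakSquare}, note that for $|n|\geq n_0$ the cross term $2n_0(|n|-n_0)$ is nonnegative, so
\begin{align*}
\zeta^2_n=\frac{n^2}{s^{2+2\delta}}=\frac{n_0^2+(|n|-n_0)^2+2n_0(|n|-n_0)}{s^{2+2\delta}}\geq \zeta^2_{n_0}+\zeta^2_{|n|-n_0}.
\end{align*}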
Owing to $s_1\geq \epsilon^{-1}(1-\tfrac{2\mu}{3}) s$, we get
\begin{align}\label{eq:LeftBound}
\frac{\tau \zeta^2_{n_0}}{2^{5/3}}\geq (1-\epsilon)s_1\geq \epsilon^{-1}\big(1-\tfrac{2\mu}{3}\big)(1-\epsilon)s.
\end{align}
Combining \eqref{eq:BreakSquare} with \eqref{eq:LeftBound} and invoking the reverse Minkowski inequality yields
\begin{equation}
\Big((1-\tfrac{2\mu}{3})s+\frac{\tau \zeta^2_{n}}{2^{5/3}}\Big)^{\frac{3}{2}} \geq \Big(\epsilon^{-1}\big(1-\tfrac{2\mu}{3}\big)(1-\epsilon)s\Big)^{\frac{3}{2}} + \frac{\tau^{3/2}\zeta^3_{|n|-n_0}}{2^{5/2}}, \quad \text{when}\quad\frac{\tau \zeta^2_{n}}{2^{5/3}}\in \mathcal{S}_2.
\end{equation}
Plugging this into \eqref{eq:IndEventBd}, summing in a similar way as in the proof of Lemma~\ref{MeshBound} and noticing
$$\epsilon\Big(\epsilon^{-1}(1-\frac{2\mu}{3})(1-\epsilon)\Big)^{\frac{3}{2}}> \epsilon^{-\frac{1}{2}}(1-\epsilon)^{\frac{1}{2}}(1-\mu)(1-\epsilon)>(1-\mu)(1-\epsilon),$$
we arrive at
\begin{align}
\sum_{n: 2^{-5/3}\tau \zeta^2_n\in \mathcal{S}_2}\mathbb{P}\big(\widetilde{E}_n\big)\leq C_1 \exp\Big(-\frac{4}{3}(1-\epsilon)(1-\mu)s^{\frac{3}{2}}\Big)\label{eq:MidSum}
\end{align}
for some $C_1=C_1(\epsilon, T_0)$ when $s$ is large enough.
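The comparison of exponents displayed above relies on two elementary bounds, recorded here for completeness: by Bernoulli's inequality, $(1-x)^{3/2}\geq 1-\tfrac{3}{2}x$ for $x\in[0,1]$, so taking $x=\tfrac{2\mu}{3}$,
\begin{align*}
\big(1-\tfrac{2\mu}{3}\big)^{3/2}\geq 1-\mu, \qquad\text{while}\qquad \epsilon^{-\frac{1}{2}}(1-\epsilon)^{\frac{1}{2}}>1 \ \text{ since }\ \epsilon<\tfrac{1}{2}<1-\epsilon.
\end{align*}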
From the reverse Minkowski inequality,
\begin{align}\label{eq:RevMin}
\Big(\big(1-\tfrac{2\mu}{3}\big)s+\frac{\tau \zeta^2_n}{2^{5/3}}\Big)^{\frac{3}{2}}\geq \big(1-\tfrac{2\mu}{3}\big)^{\frac{3}{2}}s^{\frac{3}{2}} + \frac{\tau^{3/2} \zeta^3_{|n|}}{2^{5/2}}.
\end{align}
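Here, \eqref{eq:RevMin} is a special case of the superadditivity of $x\mapsto x^{3/2}$ on $[0,\infty)$: for any $a,b\geq 0$,
\begin{align*}
(a+b)^{3/2}=(a+b)\sqrt{a+b}\geq a\sqrt{a}+b\sqrt{b}=a^{3/2}+b^{3/2},
\end{align*}
applied with $a=\big(1-\tfrac{2\mu}{3}\big)s$ and $b=2^{-5/3}\tau\zeta^2_n$, for which $b^{3/2}= 2^{-5/2}\tau^{3/2}\zeta^3_{|n|}$.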
Applying \eqref{eq:RevMin} to the r.h.s. of \eqref{eq:IndEventBd} for all $n$ such that $2^{-5/3}\tau \zeta^2_n\in \mathcal{S}_1\cup \mathcal{S}_3$ and summing in a similar way as in the proof of Lemma~\ref{MeshBound} yields
\begin{align}
\sum_{n:2^{-5/3}\tau \zeta^2_n\in \mathcal{S}_1\cup \mathcal{S}_3}\mathbb{P}\big(\widetilde{E}_n\big)\leq C_2\exp\Big(-\frac{4}{3}(1-\epsilon)\big(1-\tfrac{2\mu}{3}\big)^{3/2}s^{\frac{3}{2}}\Big) \label{eq:EndSum}
\end{align}
for some $C_2=C_2(\epsilon, T_0)$.
Adding \eqref{eq:MidSum} and \eqref{eq:EndSum} and noticing that $\big(1-\tfrac{2\mu}{3}\big)^{\frac{3}{2}}\geq (1-\mu)$, we obtain \eqref{eq:TotSumBd} if $s\in [\mathbf{s}_0,\mathbf{s}_1]\cap [\bar{s}, \infty)$ where $\bar{s}$ depends on $\epsilon$ and $T_0$.
Now, we turn to the case when $s\in \big((\mathbf{s}_1, \mathbf{s}_2]\cup (\mathbf{s}_2, \infty)\big)\cap[\mathbf{s}_0,\infty)$. Owing to \eqref{eq:AssumBd}, for all $n\in \mathbb{Z}$ and $s\in [\mathbf{s}_0,\infty)$,
\begin{align}
\mathbb{P}(\widetilde{E}_n)\leq \begin{cases}
\exp\Big(-\frac{4}{3}\epsilon\Big(\big(1-\tfrac{2\mu}{3}\big)s+\frac{\tau \zeta^2_n}{2^{5/3}}\Big)^{\frac{3}{2}}\Big) & \text{ if } s\in (\mathbf{s}_1, \mathbf{s}_2],\\\exp\Big(-\frac{4}{3}(1-\epsilon)\Big(\big(1-\tfrac{2\mu}{3}\big)s+\frac{\tau \zeta^2_n}{2^{5/3}}\Big)^{\frac{3}{2}}\Big) & \text{ if }s\in (\mathbf{s}_2, \infty).
\end{cases}\label{eq:IndEventBd2}
\end{align}
Applying \eqref{eq:RevMin} and summing the r.h.s. of \eqref{eq:IndEventBd2} in the same way as \eqref{eq:EndSum}, we find \eqref{eq:TotSumBd}.
\end{proof}
Now, we show an analogue of Lemma~\ref{EffOfInitCond}.
\begin{lemma}\label{SumProbBdLem}
There exists $s^{\prime} = s^{\prime}(\mu, T_0, C, \nu, \theta, \kappa, M)$ such that for all $s\geq s^{\prime}$,
\begin{align}\label{eq:Containment3}
\Big(\bigcup_{n\in \mathbb{Z}}\widetilde{E}_n\Big)^c \cap \Big(\bigcup_{n\in \mathbb{Z}} \widetilde{F}_n\Big)^c \subseteq (\widetilde{\mathcal{A}}^f)^c.
\end{align}
\end{lemma}
\begin{proof}
Assume that the event on the l.h.s. of \eqref{eq:Containment3} occurs. By \eqref{eq:InMomBd} of Definition~\ref{Hypothesis} and $\tau+\nu<1$,
\begin{align}
\int_{-\infty}^{\infty} e^{T^{1/3}\big(\Upsilon_T(y)+f(-y)\big)} dy\leq \int_{-\infty}^{\infty} e^{T^{1/3}\big(C-\frac{1-\tau}{2^{2/3}}y^2+ (1-\frac{\mu}{3})s + \frac{\nu}{2^{2/3}} y^2\big)} dy\leq \frac{K}{T^{1/6}} e^{(1-\frac{\mu}{3})sT^{1/3} }
\end{align}
for some $K=K(C,T,\tau,\nu)>0$. There exists $s^{\prime}= s^{\prime}(\mu, T_0, C,\nu, \theta, \kappa, M)$ such that the r.h.s. of the above inequality is bounded above by $\exp(sT^{\frac{1}{3}})$, thus confirming \eqref{eq:Containment3}.
\end{proof}
Applying \eqref{eq:Containment3} and the Bonferroni union bound (see \eqref{eq:BonfrBd} for a similar inequality), we obtain
\begin{align}\label{eq:MyNewEq}
\mathbb{P}\Big(\widetilde{\mathcal{A}}^{f} \cap \Big(\bigcup_{n\in \mathbb{Z}} \widetilde{E}_n\Big)^c\Big)\leq \sum_{n\in \mathbb{Z}} \mathbb{P}\big( \widetilde{E}^c_{n-1}\cap \widetilde{E}^c_{n+1}\cap \widetilde{F}_n\big).
\end{align}
\begin{lemma}\label{BigMaxApp}
There exist $s^{\prime\prime}= s^{\prime\prime}(\epsilon,\mu, T_0)$ and $\Theta=\Theta(\epsilon, T_0)$ such that for all $s\in [\max\{s^{\prime\prime}, \mathbf{s}_0\}, \infty)$,
\begin{align}\label{eq:NoBigMaxRes}
\sum_{n\in \mathbb{Z}} \mathbb{P}\big(\widetilde{E}^c_{n-1}\cap \widetilde{E}^c_{n+1}\cap \widetilde{F}_n\big) \leq \begin{cases}
\Theta e^{-\frac{\sqrt{2}}{3}(1-\epsilon)(1-\mu) s^{3/2}} & \text{ if } s\in [\mathbf{s}_0, \mathbf{s}_1]\cup (\mathbf{s}_2, \infty),\\
\Theta e^{-\frac{\sqrt{2}}{3}\epsilon(1-\mu) s^{3/2}} & \text{ if } s\in (\mathbf{s}_1,\mathbf{s}_2].
\end{cases}
\end{align}
See \eqref{eq:mathbfs} for the definitions of $\mathbf{s}_0$, $\mathbf{s}_1$ and $\mathbf{s}_2$.
\end{lemma}
\begin{proof}
We need to bound $\mathbb{P}\big(\widetilde{E}^c_{n-1}\cap \widetilde{E}^c_{n+1}\cap \widetilde{F}_n\big)$ for all $n\in \mathbb{Z}$. Define
\begin{align}\label{eq:TwoEvents}
\widetilde{\mathcal{E}}_n := \Big\{ \Upsilon_T(\zeta_n) \geq -\frac{1+2^{-1}\tau}{2^{2/3}}\zeta^2_n- s^{\frac{2}{3}}\Big\}, \qquad \textrm{for } n\in \mathbb{Z}.
\end{align}
We begin with the following inequality
\begin{align*}
\mathbb{P}\big(\widetilde{E}^c_{n-1} \cap \widetilde{E}^c_{n+1} \cap \widetilde{F}_n\big)\leq \mathbb{P} \big((\widetilde{E}^c_{n-1}\cap \widetilde{\mathcal{E}}_{n-1}) \cap (\widetilde{E}^c_{n+1}\cap \widetilde{\mathcal{E}}_{n+1}) \cap \widetilde{F}_n\big) + \mathbb{P}( \widetilde{\mathcal{E}}^c_{n-1})+ \mathbb{P}(\widetilde{\mathcal{E}}^c_{n+1}).
\end{align*}
We will bound each term on the r.h.s. above.
Proposition~\ref{NotMainTheorem} provides $s^{\prime\prime}:=s^{\prime\prime}(\epsilon, T_0)$, $K=K(\epsilon, T_0)>0$ and the following upper bound\footnote{Taking $\epsilon=\delta$ in Proposition~\ref{NotMainTheorem} the r.h.s. of \eqref{eq:PrevRes1} $\leq \exp(-T^{\frac{1}{3}}\frac{4(1-\epsilon)s^{5/2}}{15\pi}) + \exp(-Ks^{3-\epsilon})$.} for $s\geq s^{\prime\prime}$ and $T\geq T_0$
\[\mathbb{P}(\widetilde{\mathcal{E}}^c_n)\leq \exp\Big(- T^{\frac{1}{3}}\frac{4}{15\pi}(1-\epsilon)\big(s^{\frac{2}{3}} +\frac{\tau \zeta^2_n}{2^{5/3}}\big)^{\frac{5}{2}}\Big)+ \exp\Big(-K\big(s^{\frac{2}{3}} +\frac{\tau \zeta^2_n}{2^{5/3}}\big)^{3-\epsilon}\Big).\]
Summing over all $n\in \mathbb{Z}$ (in the same way as in Lemma~\ref{MeshBound}) yields
\begin{align}\label{eq:SumRes}
\sum_{n\in \mathbb{Z}} \big(\mathbb{P}( \widetilde{\mathcal{E}}^c_{n-1})+ \mathbb{P}(\widetilde{\mathcal{E}}^c_{n+1})\big) \leq e^{-T^{1/3} \frac{4}{15\pi}(1-\epsilon)s^{5/3}} + e^{-Ks^{2-2\epsilon/3}}.
\end{align}
\begin{claim}
There exists $s^{\prime\prime} = s^{\prime\prime}(\epsilon, \mu, T_0)$ such that for all $s\geq s^{\prime\prime}$, $T\geq T_0$ and $n\in \mathbb{Z}$,
\begin{align}\label{eq:CplexBd}
\mathbb{P} \big( (\widetilde{E}^c_{n-1}\cap \widetilde{\mathcal{E}}_{n-1}) \cap (\widetilde{E}^c_{n+1}\cap \widetilde{\mathcal{E}}_{n+1}) \cap \widetilde{F}_n\big)\leq 2\mathbb{P}\Big(\Upsilon_T(0)\geq \frac{\tau}{2^{11/3}}\zeta^2_n+\frac{1}{2}\big(1-\tfrac{2\mu}{3}\big)s\Big).
\end{align}
\end{claim}
\begin{figure}[t]
\includegraphics[width=.5\linewidth]{figure2.pdf}
\caption[]{Illustration from the proof of \eqref{eq:CplexBd}. The three parabolas are $U(\cdot)$, $M(\cdot)$ and $L(\cdot)$. The solid black curve is $\Upsilon^{(1)}_T(\cdot)$ when $\widetilde{E}^c_{n-1}\cap \widetilde{\mathcal{E}}_{n-1} \cap \widetilde{E}^c_{n+1}\cap \widetilde{\mathcal{E}}_{n+1} \cap \widetilde{F}_n$ occurs. Note that $\Upsilon^{(1)}_T(\cdot)$ stays in between $M(\cdot)$ and $L(\cdot)$ at $\zeta_{n-1}$ and $\zeta_{n+1}$. The rightmost point in $(\zeta_n,\zeta_{n+1})$ where $\Upsilon^{(1)}_T(\cdot)$ hits $U(\cdot)$ is labeled $\sigma_n$. The event that the black curve stays above the square at $\zeta_n$ is $\widetilde{\mathfrak{B}}_n$ and $\mathbb{P}_{\mathbf{H}_{2T}}(\widetilde{\mathfrak{B}}_n)$ (see \eqref{eq:DefP1Up} for $\mathbb{P}_{\mathbf{H}_{2T}}$) is the probability of $\widetilde{\mathfrak{B}}_n$ conditioned on the sigma algebra $\mathcal{F}_{\mathrm{ext}}\big(\{1\}\times (\zeta_{n-1}, \sigma_n)\big)$. On the other hand, $\widetilde{\mathbb{P}}_{\mathbf{H}_{2T}}(\widetilde{\mathfrak{B}}_n)$ (see~\eqref{eq:DefP2Up} for $\widetilde{\mathbb{P}}_{\mathbf{H}_{2T}}$) is the probability of $\widetilde{\mathfrak{B}}_n$ under the free Brownian bridge (scaled by $2^{\frac{1}{3}}$) measure on the interval $(\zeta_{n-1}, \sigma_n)$ with the same starting and end points as $\Upsilon^{(1)}_T(\cdot)$. The dashed black curve is such a free Brownian bridge $B$ coupled to $\Upsilon^{(1)}_T(\cdot)$ so that $B(y)\leq \Upsilon^{(1)}_T(y)$ for all $y \in (\zeta_{n-1}, \sigma_n)$. Owing to this coupling, $\mathbb{P}_{\mathbf{H}_{2T}}(\widetilde{\mathfrak{B}}_n)\geq \widetilde{\mathbb{P}}_{\mathbf{H}_{2T}}(\widetilde{\mathfrak{B}}_n)$. The probability of $B(\zeta_n)$ staying above the bullet point is $\frac{1}{2}$, which implies that $\widetilde{\mathbb{P}}_{\mathbf{H}_{2T}}(\widetilde{\mathfrak{B}}_n)\geq \frac{1}{2}$.
Consequently, we can bound the probability of $(\widetilde{E}^c_{n-1}\cap \widetilde{\mathcal{E}}_{n-1}) \cap (\widetilde{E}^c_{n+1}\cap \widetilde{\mathcal{E}}_{n+1}) \cap \widetilde{F}_n$ by $2\mathbb{P}(\widetilde{\mathfrak{B}}_n)$ (see \eqref{eq:AllIsAbove}). The probability $\mathbb{P}(\widetilde{\mathfrak{B}}_n)$ can be bounded above by the upper tail probability of $\Upsilon^{(1)}_T(\zeta_n)+\frac{\zeta^2_n}{2^{2/3}}$ (see~\eqref{eq:EachBd}). The upper bound in \eqref{eq:CplexBd} then follows by invoking Proposition~\ref{StationarityProp}.}
\label{fig:Figure2}
\end{figure}
\begin{proof}
We parallel the proof of \cite[Proposition~4.4]{CH14} (see also
\cite[Lemma~4.1]{CorHam16}). Figure~\ref{fig:Figure2} illustrates the main objects in this proof and the argument (whose details we now provide).
By Proposition~\ref{NWtoLineEnsemble} the curve $2^{-\frac{1}{3}}\Upsilon^{(1)}_T(\cdot)$ from the KPZ line ensemble $\{2^{-\frac{1}{3}}\Upsilon^{(n)}_T(x)\}_{n\in \mathbb{N},x\in \mathbb{R}}$ has the same distribution as $2^{-\frac{1}{3}}\Upsilon_T(\cdot)$. For the rest of this proof, we replace $\Upsilon_T$ by $\Upsilon^{(1)}_T$ in the definitions of $\{\widetilde{E}_n\}_{n}$, $\{\widetilde{F}_n\}_n$ and $\{\widetilde{\mathcal{E}}_n\}_{n}$.
We define the following three curves:
\begin{align}
U(y):= -\tfrac{(1-\tau)}{2^{2/3}}y^2+ \Big(1-\tfrac{\mu}{3}\Big)s, \quad L(y):= -\tfrac{(1+2^{-1}\tau)}{2^{2/3}}y^2 -s^{\frac{2}{3}}, \quad M(y):= -\tfrac{(1-2^{-1}\tau)}{2^{2/3}}y^2& + \big(1-\tfrac{2\mu}{3}\big)s.
\end{align}
If $\widetilde{E}^c_{n-1}\cap \widetilde{\mathcal{E}}_{n-1}$ and $\widetilde{E}^c_{n+1}\cap \widetilde{\mathcal{E}}_{n+1}$ occur, then $\Upsilon^{(1)}_T(\cdot)$ stays in between the curves $M(\cdot)$ and $L(\cdot)$ at the points $\zeta_{n-1}$ and $\zeta_{n+1}$ respectively. If $\widetilde{F}_n$ occurs, then $\Upsilon^{(1)}_T(\cdot)$ touches the curve $U(\cdot)$ at some point in the interval $[\zeta_{n}, \zeta_{n+1}]$. Therefore, on the event $(\widetilde{E}^c_{n-1}\cap \widetilde{\mathcal{E}}_{n-1})\cap (\widetilde{E}^c_{n+1} \cap \widetilde{\mathcal{E}}_{n+1})\cap \widetilde{F}_n$, $\Upsilon^{(1)}_T(\cdot)$ hits $U(\cdot)$ somewhere in the interval $(\zeta_n, \zeta_{n+1})$ whereas it stays in between $M(\cdot)$ and $L(\cdot)$ at the points $\zeta_{n-1}$ and $\zeta_{n+1}$. Let us define $\sigma_n := \sup \Big\{y\in (\zeta_n, \zeta_{n+1}): \Upsilon^{(1)}_{T}(y)\geq U(y) \Big\}.$
Recall that $\zeta_{n-1}< \zeta_{n}< \zeta_{n+1}$. Consider the following crossing event
\begin{align}\label{eq:DefMathfrakB}
\widetilde{\mathfrak{B}}_n := \Big\{\Upsilon^{(1)}_T(\zeta_n)\geq \frac{\sigma_n-\zeta_n}{\sigma_n -\zeta_{n-1}}L(\zeta_{n-1})+ \frac{\zeta_n-\zeta_{n-1}}{\sigma_n- \zeta_{n-1}}U(\sigma_n)\Big\}.
\end{align}
We will use the following abbreviation for the probability measures
\begin{align}
\mathbb{P}_{\mathbf{H}_{2T}}&:=\mathbb{P}^{1,1,(\zeta_{n-1}, \sigma_n), 2^{-\frac{1}{3}}\Upsilon^{(1)}_T(\zeta_{n-1}), 2^{-\frac{1}{3}}\Upsilon^{(1)}_T(\sigma_{n}), +\infty, 2^{-\frac{1}{3}}\Upsilon^{(2)}_T}_{\mathbf{H}_{2T}},\label{eq:DefP1Up}\\
\widetilde{\mathbb{P}}_{\mathbf{H}_{2T}}&:=\mathbb{P}^{1,1, (\zeta_{n-1}, \sigma_n), 2^{-\frac{1}{3}}\Upsilon^{(1)}_T(\zeta_{n-1}), 2^{-\frac{1}{3}}\Upsilon^{(1)}_T(\sigma_{n}), +\infty, -\infty}_{\mathbf{H}_{2T}}.\label{eq:DefP2Up}
\end{align}
Since $(\zeta_{n-1}, \sigma_n)$ is a $\{1\}$-stopping domain (see Definition~\ref{LineEnsemble}) for the KPZ line ensemble, the strong $\mathbf{H}_{2T}$-Brownian Gibbs property (see Lemma~2.5 of \cite{CorHam16}) applies to show that
\begin{align}\label{eq:TildeB}
\mathbb{E}&\Big[\mathbbm{1}\big((\widetilde{E}^c_{n-1}\cap \widetilde{\mathcal{E}}_{n-1})\cap (\widetilde{E}^c_{n+1}\cap \widetilde{\mathcal{E}}_{n+1})\cap \widetilde{F}_n\big)\cdot \mathbbm{1}(\widetilde{\mathfrak{B}}_n)|\mathcal{F}_{\mathrm{ext}}\big(\{1\}\times (\zeta_{n-1}, \sigma_{n})\big)\Big] \\ = &\mathbbm{1}\big((\widetilde{E}^c_{n-1}\cap \widetilde{\mathcal{E}}_{n-1})\cap (\widetilde{E}^c_{n+1}\cap \widetilde{\mathcal{E}}_{n+1})\cap \widetilde{F}_n\big)\cdot \mathbb{P}_{\mathbf{H}_{2T}}(\widetilde{\mathfrak{B}}_n).
\end{align}
By Proposition~\ref{Coupling1}, there exists a monotone coupling\footnote{If $B$ is $\mathbb{P}_{\mathbf{H}_{2T}}$ distributed and $\tilde{B}$ is $\widetilde{\mathbb{P}}_{\mathbf{H}_{2T}}$ distributed, then, under the coupling, $B(y)\geq \tilde{B}(y)$ for all $y\in ( \zeta_{n-1}, \sigma_n)$.} between the probability measures $\mathbb{P}_{\mathbf{H}_{2T}}$ and $\widetilde{\mathbb{P}}_{\mathbf{H}_{2T}}$.
Using this and the fact that the probability of $\widetilde{\mathfrak{B}}_n$ increases under a pointwise increase of the sample paths, we have $\mathbb{P}_{\mathbf{H}_{2T}}(\widetilde{\mathfrak{B}}_n)\geq \widetilde{\mathbb{P}}_{\mathbf{H}_{2T}}(\widetilde{\mathfrak{B}}_n)$. Since $\widetilde{\mathbb{P}}_{\mathbf{H}_{2T}}$ is the law of a Brownian bridge on the interval $(\zeta_{n-1}, \sigma_n)$ with end points $2^{-\frac{1}{3}}\Upsilon^{(1)}_T(\zeta_{n-1})$ and $2^{-\frac{1}{3}}\Upsilon^{(1)}_T(\sigma_n)$, the probability that it stays above the line joining the two end points at any given intermediate point is $\frac{1}{2}$. Therefore $\widetilde{\mathbb{P}}_{\mathbf{H}_{2T}}(\widetilde{\mathfrak{B}}_n)\geq\frac{1}{2}$. Plugging this into \eqref{eq:TildeB} and taking expectation yields
\begin{align}
\mathbb{P}\big( (\widetilde{E}^c_{n-1}\cap \widetilde{\mathcal{E}}_{n-1})\cap (\widetilde{E}^c_{n+1}\cap \widetilde{\mathcal{E}}_{n+1})\cap \widetilde{F}_n\big)\leq 2\mathbb{E}\Big[\mathbbm{1}((\widetilde{E}^c_{n-1}\cap \widetilde{\mathcal{E}}_{n-1})\cap (\widetilde{E}^c_{n+1}\cap \widetilde{\mathcal{E}}_{n+1})\cap \widetilde{F}_n)\cdot \mathbbm{1}(\widetilde{\mathfrak{B}}_n)\Big].&&\label{eq:AllIsAbove}
\end{align}
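For completeness, we record the elementary fact behind the value $\frac{1}{2}$ above: if $B$ is a Brownian bridge on $(\zeta_{n-1}, \sigma_n)$, then for any fixed $t\in (\zeta_{n-1}, \sigma_n)$ the random variable $B(t)$ is Gaussian with mean equal to the value at $t$ of the line joining the two end points, so that
\begin{align*}
\mathbb{P}\Big(B(t)\geq \tfrac{\sigma_n-t}{\sigma_n -\zeta_{n-1}}B(\zeta_{n-1})+ \tfrac{t-\zeta_{n-1}}{\sigma_n- \zeta_{n-1}}B(\sigma_n)\Big)=\frac{1}{2}.
\end{align*}
Since $\Upsilon^{(1)}_T(\zeta_{n-1})\geq L(\zeta_{n-1})$ and $\Upsilon^{(1)}_T(\sigma_n)= U(\sigma_n)$ on the event under consideration, the chord joining the end points dominates, at $\zeta_n$, the line appearing in the definition \eqref{eq:DefMathfrakB} of $\widetilde{\mathfrak{B}}_n$.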
Now, we bound the r.h.s. of \eqref{eq:AllIsAbove}.
Note that the following hold\footnote{To see the first inequality of \eqref{eq:Series3}, note that $(\zeta_n-\zeta_{n-1})/(\sigma_n-\zeta_{n-1})\geq \frac{1}{2}$ and $\sigma^2_n\geq \zeta^2_n -2|\zeta_n|s^{-(1+\delta)}$; the second inequality follows from $8^{-1}\zeta^2_n- 2|\zeta_n|s^{-(1+\delta)}\geq 0$ for all $|n|\geq 16$.} for all $n\in \mathbb{Z}$:
\begin{align}
\frac{(\sigma_n -\zeta_{n})\zeta^2_{n-1}+(\zeta_n -\zeta_{n-1})\sigma^2_{n}}{\sigma_n - \zeta_{n-1}}- \zeta^2_n &= (\sigma_n -\zeta_{n})(\zeta_n -\zeta_{n-1})\leq \frac{1}{s^{2+2\delta}}, &&&\label{eq:Series1}\\
\frac{-\frac{1}{2}(\sigma_n -\zeta_{n})\zeta^2_{n-1}+(\zeta_n -\zeta_{n-1})\sigma^2_{n}}{\sigma_n - \zeta_{n-1}} +\frac{1}{2}\zeta^2_n &= -\frac{1}{2}(\sigma_n -\zeta_{n})(\zeta_n -\zeta_{n-1}) + \frac{3}{2}\frac{\zeta_n -\zeta_{n-1}}{\sigma_n -\zeta_{n-1}}\sigma^2_{n}, &&&\label{eq:Series2}\\
\frac{3}{2}\frac{\zeta_n -\zeta_{n-1}}{\sigma_n -\zeta_{n-1}}\sigma^2_{n}-\frac{1}{2}\zeta^2_n &\geq \frac{1}{4}\zeta^2_n - 2\frac{|\zeta_n|}{s^{1+\delta}} \geq \frac{1}{8}\zeta^2_n - \frac{32}{s^{2+2\delta}}.&&&\label{eq:Series3}
\end{align}
Combining \eqref{eq:Series1}, \eqref{eq:Series2} and \eqref{eq:Series3} yields
\begin{align*}
\frac{\sigma_n-\zeta_n}{\sigma_n -\zeta_{n-1}}L(\zeta_{n-1})+ \frac{\zeta_n-\zeta_{n-1}}{\sigma_n- \zeta_{n-1}}U(\sigma_n)\geq -\frac{(1-8^{-1}\tau)}{2^{2/3}}\zeta^2_n -\frac{(4+34\tau)}{2^{2/3}s^{2+2\delta}}+ \frac{1}{2}\Big(\big(1-\frac{\mu}{3}\big)s - s^{\frac{2}{3}}\Big).
\end{align*}
This implies that when $\widetilde{\mathfrak{B}}_n$ occurs, $\Upsilon^{(1)}_T(\zeta_{n})$ is greater than the r.h.s. above. The r.h.s. is bounded below by $-2^{-\frac{2}{3}}(1-8^{-1}\tau)\zeta^2_n+\frac{1}{2}\big(1-\tfrac{2\mu}{3}\big)s$ when $s$ is large enough. Hence, we have
\begin{align}
\text{r.h.s. of \eqref{eq:AllIsAbove}} \leq 2\,\mathbb{P}(\widetilde{\mathfrak{B}}_n)\leq 2 \,\mathbb{P}\left(\Upsilon^{(1)}_T(\zeta_n)\geq -\frac{(1-8^{-1}\tau)}{2^{2/3}}\zeta^2_n + \frac{1}{2}\big(1-\tfrac{2\mu}{3}\big)s\right).\label{eq:EachBd}
\end{align}
Now, the claim follows from \eqref{eq:AllIsAbove} and \eqref{eq:EachBd} by recalling that $\Upsilon^{(1)}_T(\zeta_n)+\frac{\zeta^2_n}{2^{2/3}}\stackrel{d}{=}\Upsilon_T(0)$.
\end{proof}
Using \eqref{eq:CplexBd} and an analysis similar to that in Lemma~\ref{UpSumProbBd}, there exist $s^{\prime\prime}=s^{\prime\prime}(\epsilon, \mu, T_0)$ and $C^{\prime}=C^{\prime}(\epsilon, T_0)$ such that for all $s\in [\max\{s^{\prime\prime}, \mathbf{s}_0\},\infty)$,
\begin{align*}
\sum_{n\in \mathbb{Z}}\mathbb{P} \big( (\widetilde{E}^c_{n-1}\cap \widetilde{\mathcal{E}}_{n-1}) \cap (\widetilde{E}^c_{n+1}\cap \widetilde{\mathcal{E}}_{n+1}) \cap \widetilde{F}_n\big)
&\leq \begin{cases}
C^{\prime} e^{-\frac{\sqrt{2}}{3}(1-\epsilon)(1-\mu) s^{3/2}} & \text{ if } s\in [\mathbf{s}_0, \mathbf{s}_1]\cup (\mathbf{s}_2, \infty),\\
C^{\prime} e^{-\frac{\sqrt{2}}{3}\epsilon(1-\mu) s^{3/2}} & \text{ if } s\in (\mathbf{s}_1,\mathbf{s}_2].
\end{cases}
\end{align*}
Combining this with \eqref{eq:SumRes}, we arrive at \eqref{eq:NoBigMaxRes}.
\end{proof}
\smallskip
\noindent\textsc{Final step of the proof of Proposition~\ref{SubstituteTheo}:} Define $s^{\prime}_0:= \max\{\bar{s}, s^{\prime}, s^{\prime\prime}\}$ where $\bar{s},s^{\prime},s^{\prime\prime}$ are taken from Lemmas~\ref{UpSumProbBd}, \ref{SumProbBdLem} and \ref{BigMaxApp} respectively.
\begin{enumerate}[(1)]
\item Owing to \eqref{eq:MyNewEq} and \eqref{eq:NoBigMaxRes}, when $T_0>\pi$, there exists $\Theta=\Theta(\epsilon, T_0)$ such that for all $s\in [\max\{s^{\prime}_0,\mathbf{s}_0 \},\infty)$
\begin{align}\label{eq:OneCompBd}
\mathbb{P}\left(\widetilde{\mathcal{A}}^{f}\cap \left(\bigcup_{n\in \mathbb{Z}} \widetilde{E}_n\right)^c\right)\leq \begin{cases}
\Theta e^{-\frac{\sqrt{2}}{3}(1-\epsilon)(1-\mu) s^{3/2}} & \text{ when } s\in [\mathbf{s}_0, \mathbf{s}_1]\cup (\mathbf{s}_2, \infty),\\
\Theta e^{-\frac{\sqrt{2}}{3}\epsilon(1-\mu) s^{3/2}} & \text{ when } s\in (\mathbf{s}_1,\mathbf{s}_2].
\end{cases}
\end{align}
Plugging \eqref{eq:OneCompBd} and \eqref{eq:TotSumBd} of Lemma~\ref{UpSumProbBd} into the r.h.s. of \eqref{eq:BasicStep3} yields \eqref{eq:ResultBd}.
\item When $T_0\in (0,\pi)$, the proof of \eqref{eq:GenRoughUpBd} follows in the same way as in the proof of \eqref{eq:ResultBd} by assuming $\mathbb{P}(\Upsilon_T(0)>s)\leq e^{-cs^{3/2}}$ for all $s\geq s_0$ and $T\in [T_0,\pi]$.
\end{enumerate}
\subsubsection{Proof of Proposition~\ref{UpTailLowBd}}\label{UpTailLowBdSEC}
Let $\mathcal{I}$ be a subinterval of $[-M,M]$ with $|\mathcal{I}|=\theta$ such that $f(y)\geq -\kappa$ for all $y\in \mathcal{I}$. Assume $s$ is large enough so that $s^{-n+2}\leq \theta$. Let $\chi_1, \chi_2\in \mathcal{I}$ with $\chi_1\leq \chi_2$ be such that $\chi_2-\chi_1= s^{-n+2}$. Define
\begin{align}\label{eq:NewEvent}
\mathcal{W}_{i} &:= \Big\{ \Upsilon_T(- \chi_{i})\geq - \frac{\chi^2_i}{2^{2/3}} + \big(1+\tfrac{2\mu}{3}\big)s\Big\} \quad \text{for }i=1,2, \\ \mathcal{W}_{\mathrm{int}} &:= \Big\{\Upsilon_T(y)\geq - \frac{y^2}{2^{2/3}} +\big(1+\tfrac{\mu}{3}\big)s \text{ for all }y\in (-\chi_2, -\chi_1)\Big\}.
\end{align}
We claim that there exists $s^{\prime}= s^{\prime}(\mu, n, \theta ,\kappa, M, T_0)$ such that for all $s\geq s^{\prime}$ and $T\geq T_0$
\begin{align}\label{eq:LowerEvent}
\mathbb{P}(\mathcal{W}_{1} \cap \mathcal{W}_{2}\cap \mathcal{W}_{\mathrm{int}}) \leq \mathbb{P}(h^{f}_T(0)\geq s).
\end{align}
To show this, assume that the event $\mathcal{W}_{1}\cap \mathcal{W}_{2}\cap \mathcal{W}_{\mathrm{int}}$ occurs. Then\footnote{We use below $-\mathcal{I}$ as a shorthand notation for $\{x: -x\in \mathcal{I}\}$.}, since $(-\chi_2,-\chi_1)\subseteq -\mathcal{I}\subseteq [-M,M]$ and $f\geq -\kappa$ on $\mathcal{I}$,
\begin{align*}
\int^{\infty}_{-\infty} e^{T^{1/3}(\Upsilon_T(y)+ f(-y))} dy \geq \int^{-\chi_1}_{-\chi_2}e^{T^{1/3}(\Upsilon_T(y)+ f(-y))}dy \geq s^{-n+2} e^{T^{1/3}((1+\mu/3)s-\kappa- 2^{-2/3}M^2)} \geq e^{T^{1/3}s}
\end{align*}
where the last inequality holds when $s$ exceeds some $s^{\prime}(\mu, n, \theta ,\kappa, M, T_0)$. This shows that
\[\mathbb{P}\big(\mathcal{W}_{1} \cap \mathcal{W}_{2}\cap \mathcal{W}_{\mathrm{int}}\big)\leq \mathbb{P}\Big(\int^{\infty}_{-\infty} e^{T^{1/3}(\Upsilon_T(y)+ f(-y))} dy\geq e^{T^{1/3}s}\Big)= \mathbb{P}(h^{f}_T(0)\geq s).\]
To finish the proof of \eqref{eq:UpTailLowBd} we combine \eqref{eq:LowerEvent} with \eqref{eq:AcLowBd} below and take $s_0= \max\{s^{\prime}, s^{\prime\prime}\}$.
\begin{claim} There exist $s^{\prime\prime} = s^{\prime\prime}(\mu, n, T_0)$ and $K=K(\mu)>0$ such that for all $s\geq s^{\prime\prime}$ and $T\geq T_0$,
\begin{align}\label{eq:AcLowBd}
\mathbb{P}\left(\mathcal{W}_{1}\cap \mathcal{W}_{2} \cap \mathcal{W}_{\mathrm{int}}\right)\geq \Big(\mathbb{P}\big(\Upsilon_T(0)>\big(1+\tfrac{2\mu}{3}\big)s\big)\Big)^2 - e^{-Ks^n}.
\end{align}
\end{claim}
\begin{proof}
We start by writing $\mathbb{P}\big(\mathcal{W}_{1} \cap \mathcal{W}_{2}\cap \mathcal{W}_{\mathrm{int}}\big) = \mathbb{P}(\mathcal{W}_{1}\cap \mathcal{W}_{2}) - \mathbb{P}(\mathcal{W}_{1}\cap \mathcal{W}_{2} \cap \mathcal{W}^c_{\mathrm{int}})$.
Using the FKG inequality from \eqref{eq:RevFKG}, we obtain \begin{align}\label{eq:TwoPtLowBd}
\mathbb{P}(\mathcal{W}_{1}\cap \mathcal{W}_{2})\geq \mathbb{P}(\mathcal{W}_{1})\mathbb{P}(\mathcal{W}_{2})\geq \Big(\mathbb{P}\big(\Upsilon_T(0)>\big(1+\tfrac{2\mu}{3}\big)s\big)\Big)^2
\end{align}
where the last inequality follows from Proposition~\ref{StationarityProp}. Note that \eqref{eq:TwoPtLowBd} provides a lower bound for the first term on the r.h.s. of \eqref{eq:AcLowBd}.
To complete the proof, we need to demonstrate an upper bound on $\mathbb{P}(\mathcal{W}_{1} \cap \mathcal{W}_{2}\cap \mathcal{W}^c_{\mathrm{int}})$ of the form $e^{-Ks^n}$. To achieve this we go to the KPZ line ensemble and use its Brownian Gibbs property. We may replace $\Upsilon_T$ by $\Upsilon^{(1)}_T$ in all definitions without changing the value of $\mathbb{P}(\mathcal{W}_{1} \cap \mathcal{W}_{2}\cap \mathcal{W}^c_{\mathrm{int}})$ (see Proposition~\ref{NWtoLineEnsemble}). Let us define
\begin{align}
\mathbb{P}_{\mathbf{H}_{2T}}&:=\mathbb{P}^{1,1, (-\chi_2, -\chi_1), 2^{-\frac{1}{3}}\Upsilon^{(1)}_T(-\chi_2), 2^{-\frac{1}{3}}\Upsilon^{(1)}_T(-\chi_1), +\infty, 2^{-\frac{1}{3}}\Upsilon^{(2)}_T}_{\mathbf{H}_{2T}},\\
\widetilde{\mathbb{P}}_{\mathbf{H}_{2T}}&:=\mathbb{P}^{1,1, (-\chi_2, -\chi_1), 2^{-\frac{1}{3}}\Upsilon^{(1)}_T(-\chi_2), 2^{-\frac{1}{3}}\Upsilon^{(1)}_T(-\chi_1), +\infty, -\infty}_{\mathbf{H}_{2T}}.
\end{align} Using the $\mathbf{H}_{2T}$-Brownian Gibbs property of the KPZ line ensemble $\{2^{-\frac{1}{3}}\Upsilon^{(n)}_T(x)\}_{n\in \mathbb{N},x\in \mathbb{R}}$,
\begin{align}
\mathbb{P}\big(\mathcal{W}_{1}\cap \mathcal{W}_{2}\cap \mathcal{W}^{c}_{\mathrm{int}}\big) = \mathbb{E}\big[\mathbbm{1}(\mathcal{W}_{1}\cap \mathcal{W}_{2})\cdot \mathbb{P}_{\mathbf{H}_{2T}} (\mathcal{W}^c_{\mathrm{int}})\big].\label{eq:CondInt}
\end{align}
Via Proposition~\ref{Coupling1}, there exists a monotone coupling between $\mathbb{P}_{\mathbf{H}_{2T}}$ and $\widetilde{\mathbb{P}}_{\mathbf{H}_{2T}}$ so that
\begin{align}\label{eq:CouplingIneq}
\mathbb{P}_{\mathbf{H}_{2T}} (\mathcal{W}^c_{\mathrm{int}})\leq \widetilde{\mathbb{P}}_{\mathbf{H}_{2T}} (\mathcal{W}^c_{\mathrm{int}}).
\end{align}
Recall that $\widetilde{\mathbb{P}}_{\mathbf{H}_{2T}}$ is the law of a Brownian bridge on $(-\chi_2, -\chi_1)$ with starting and ending points $2^{-\frac{1}{3}}\Upsilon^{(1)}_T(-\chi_2)$ and $2^{-\frac{1}{3}}\Upsilon^{(1)}_T(-\chi_1)$, respectively.
Applying \eqref{eq:CouplingIneq} to the r.h.s. of \eqref{eq:CondInt} yields
\begin{align}
\mathbbm{1}(\mathcal{W}_{1}\cap \mathcal{W}_{2}) \cdot \widetilde{\mathbb{P}}_{\mathbf{H}_{2T}} (\mathcal{W}^c_{\mathrm{int}})\leq \mathbb{P}^{1,1, (-\chi_2, -\chi_1), -\frac{\chi^2_2}{2}+2^{-\frac{1}{3}}\big(1+\tfrac{2\mu}{3}\big)s, -\frac{\chi^2_1}{2}+2^{-\frac{1}{3}}\big(1+\tfrac{2\mu}{3}\big)s}_{\mathrm{free}} \big(\mathcal{W}^c_{\mathrm{int}}\big).
\end{align}
Therefore (using Lemma~\ref{BBFlucLem} for the second inequality) there exists $K=K(\mu)$ such that
\begin{align*}
\text{l.h.s. of \eqref{eq:CondInt}}\leq \mathbb{P}^{1,1, (-\chi_2, -\chi_1), -\frac{\chi^2_2}{2}+2^{-\frac{1}{3}}\big(1+\tfrac{2\mu}{3}\big)s, -\frac{\chi^2_1}{2}+2^{-\frac{1}{3}}\big(1+\tfrac{2\mu}{3}\big)s}_{\mathrm{free}}(\mathcal{W}^c_{\mathrm{int}})\leq e^{-Ks^{n}}.
\end{align*}
\end{proof}
\subsection{Proof of Theorem~\ref{Main6Theorem}}\label{Proof5Theorem}
Theorem~\ref{Main6Theorem} follows by combining all three parts of Theorem~\ref{GrandUpTheorem} with the following results, which are in the same spirit as Propositions~\ref{SubstituteTheo} and~\ref{UpTailLowBd}, respectively.
Recall $\Upsilon_T$ and $h^{\mathrm{Br}}_T$ from \eqref{eq:DefUpsilon} and \eqref{eq:h_BrDefine} respectively.
\begin{prop}\label{BrUpTailSubstituteTheo}
Fix $\epsilon, \mu\in (0,\frac{1}{2})$.
\begin{enumerate}[(1)]
\item Fix $T_0>\pi$. Suppose there exists $s_0 =s_0(\epsilon, T_0)$ and, for any $T\geq T_0$, there exist $s_1=s_1(\epsilon, T)$ and $s_2=s_2(\epsilon, T)$ with $s_0\leq s_1\leq s_2$ such that for any $s\in [s_0, \infty)$,
\begin{align}\label{eq:AssumeNW}
\mathbb{P}\big(\Upsilon_T(0)>s\big)\leq \begin{cases} e^{-\frac{4}{3}(1-\epsilon)s^{\frac{3}{2}}} & \text{ if } s\in [s_0, s_1]\cup (s_2,\infty),\\
e^{-\frac{4}{3}\epsilon s^{\frac{3}{2}}} & \text{ if } s\in (s_1, s_2].
\end{cases}
\end{align}
Then, there exists $s^{\prime}_0 = s^{\prime}_0(\epsilon, \mu, T_0)$ such that for any $T>T_0$ and $s\in [\max\{s^{\prime}_0, \mathbf{s}_0\},\infty)$, we have (recall $\mathbf{s}_0, \mathbf{s}_1$ and $\mathbf{s}_2$ from \eqref{eq:mathbfs})
\begin{align}\label{eq:BrUpTailUpLow}
\mathbb{P}\big(h^{\mathrm{Br}}_T(0)>s\big)\leq
\begin{cases}
e^{-\frac{\sqrt{2}}{3}(1-\epsilon)(1-\mu)s^{3/2}}+ e^{-\frac{1}{9\sqrt{3}} (\mu s)^{3/2}}& \text{ if } s\in [\mathbf{s}_0, \mathbf{s}_1]\cup (\mathbf{s}_2,\infty),\\
e^{-\frac{\sqrt{2}}{3}\epsilon(1-\mu)s^{3/2}} +e^{-\frac{1}{9\sqrt{3}} (\mu s)^{3/2}}& \text{ if } s\in (\mathbf{s}_1, \mathbf{s}_2].
\end{cases}
\end{align}
\item For any $T_0\in (0,\pi)$, there exists $s^{\prime}_0= s^{\prime}_0(T_0)>0$ satisfying the following: if there exists $s_0 = s_0(T_0)>0$ such that $\mathbb{P}(\Upsilon_T(0)>s)\leq e^{-cs^{3/2}}$ for all $s\geq s_0$ and $T\in [T_0, \pi]$, then,
\begin{align}\label{eq:BrUpRoughBd}
\mathbb{P}\big(h^{\mathrm{Br}}_{T}(0)>s\big)\leq e^{-cs^{3/2}}, \quad \forall s\in [\max\{s^{\prime}_0,s_0 \}, \infty), T \in [T_0, \pi].
\end{align}
\end{enumerate}
\end{prop}
\begin{prop}\label{BrUpTailLowBdProp}
Fix $\mu\in (0,\frac{1}{2})$, $n\in \mathbb{Z}_{\geq 3}$ and $T_0>\pi$. Then, there exist $s_0=s_0 (\mu,n, T_0), K=K(\mu, n)>0$ such that for all $s\geq s_0$ and $T\geq T_0$,
\begin{align}\label{eq:BrLowTail}
\mathbb{P}(h^{\mathrm{Br}}_T(0)> s)\geq \Big(\mathbb{P}\Big(\Upsilon_T(0)>\big(1+\frac{2\mu}{3}\big)s\Big)\Big)^2 - e^{-Ks^n}.
\end{align}
\end{prop}
We prove these propositions using similar arguments as in Sections~\ref{UpTailUpBdSEC} and~\ref{UpTailLowBdSEC}. Propositions~\ref{BrUpTailSubstituteTheo} and~\ref{BrUpTailLowBdProp} are proved in Sections~\ref{BrUpTailUpBd} and~\ref{BrUpTailLowBd}, respectively.
\smallskip
\begin{proof}[Proof of Theorem~\ref{Main6Theorem}]
This theorem is proved in the same way as Theorem~\ref{Main4Theorem} by combining Proposition~\ref{BrUpTailSubstituteTheo} and Proposition~\ref{BrUpTailLowBdProp}. We do not duplicate the details.
\end{proof}
\subsubsection{Proof of Proposition~\ref{BrUpTailSubstituteTheo}}\label{BrUpTailUpBd}
To prove this proposition, we use similar arguments as in Section~\ref{UpTailUpBdSEC}. Let $\tau\in (0,\frac{1}{2})$ be fixed (we choose its value later). Recall the events $\widetilde{E}_n$ and $\widetilde{F}_n$ from Section~\ref{UpTailUpBdSEC} and define
\begin{align}\label{eq:UpBrEvent}
\widetilde{\mathcal{A}}^{\mathrm{Br}}:= \left\{\int^{\infty}_{-\infty} e^{T^{1/3}\big(\Upsilon_T(y)+ B(-y)\big)} dy> e^{sT^{1/3}}\right\}
\end{align}
where $B$ is a two-sided Brownian motion with diffusion coefficient $2^{\frac{1}{3}}$ and $B(0)=0$. Appealing to Proposition~\ref{Distribution}, we see that $\mathbb{P}(h^{\mathrm{Br}}_T(0)>s)= \mathbb{P}(\widetilde{\mathcal{A}}^{\mathrm{Br}})$. Now, we write
\begin{align}\label{eq:BrSplit}
\mathbb{P}\big(\widetilde{\mathcal{A}}^{\mathrm{Br}}\big)\leq \sum_{n\in \mathbb{Z}}\mathbb{P}\big( \widetilde{E}_n\big) + \mathbb{P}\Big(\widetilde{\mathcal{A}}^{\mathrm{Br}}\cap \big(\bigcup_{n\in \mathbb{Z}} \widetilde{E}_n\big)^{c}\cap \big(\bigcup_{n\in \mathbb{Z}} \widetilde{F}_n\big)\Big) + \mathbb{P}\Big(\widetilde{\mathcal{A}}^{\mathrm{Br}}\cap \big(\bigcup_{n\in \mathbb{Z}} \widetilde{E}_n\big)^{c}\cap \big(\bigcup_{n\in \mathbb{Z}} \widetilde{F}_n\big)^c\Big).
\end{align}
Using Lemma~\ref{UpSumProbBd} (see \eqref{eq:TotSumBd}) and Lemma~\ref{BigMaxApp} (see \eqref{eq:NoBigMaxRes}), we can bound the first two terms on the right-hand side of \eqref{eq:BrSplit}. However, unlike in Proposition~\ref{SubstituteTheo}, the last term in \eqref{eq:BrSplit} is not zero. We now provide an upper bound for this term.
\smallskip
\begin{claim} There exists $s^{\prime}=s^{\prime}(\tau, \mu)$ such that for all $s\geq s^{\prime}$,
\begin{align}\label{eq:BrwLEv}
\mathbb{P}\Big(\widetilde{\mathcal{A}}^{\mathrm{Br}}\cap \big(\bigcup_{n\in \mathbb{Z}} \widetilde{E}_n\big)^{c}\cap \big(\bigcup_{n\in \mathbb{Z}} \widetilde{F}_n\big)^c\Big) \leq \exp\left(-\tfrac{\sqrt{(1-2\tau)}}{3\sqrt{6}}\Big(\tfrac{2\mu s}{3}+\log\big((2\pi)^{-1} \tau (2T)^{\frac{1}{3}}\big)\Big)^{\frac{3}{2}} \right). &&
\end{align}\end{claim}
\begin{proof}
Note that
\begin{align}\label{eq:EveInc}
\Big\{\widetilde{\mathcal{A}}^{\mathrm{Br}}\cap \big(\bigcup_{n\in \mathbb{Z}} \widetilde{E}_n\big)^{c}\cap \big(\bigcup_{n\in \mathbb{Z}} \widetilde{F}_n\big)^c\Big\}& \subseteq \Big\{\int^{\infty}_{-\infty}e^{T^{1/3}\big(-\frac{(1-\tau)y^2}{2^{2/3}}+ B(-y)\big)} dy \geq e^{3^{-1}\mu s T^{1/3}}\Big\}.
\end{align}
We claim that
\begin{align}\label{eq:Cont4}
\text{r.h.s. of \eqref{eq:EveInc}}\subseteq \Big\{\max_{y\in \mathbb{R}}\Big\{ - \frac{(1-2\tau)y^2}{2^{2/3}}+ B(-y)\Big\}\geq \tfrac{1}{3}\mu s +\tfrac{1}{2}\log((2\pi)^{-1}\tau (2T)^{\frac{1}{3}}) \Big\}.
\end{align}
To see this, we argue by contradiction and assume that the event on the r.h.s. of \eqref{eq:Cont4} fails. This implies that
\begin{align}
\int^{\infty}_{-\infty}e^{T^{1/3}\Big(-\frac{(1-\tau)y^2}{2^{2/3}}+ B(-y)\Big)} dy< \sqrt{(2\pi)^{-1} \tau (2T)^{1/3}}e^{3^{-1}\mu s T^{1/3}}\int^{\infty}_{-\infty}e^{-\tau y^2T^{1/3}/2^{\frac{2}{3}}} dy= e^{3^{-1}\mu s T^{1/3}}.
\end{align}
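For the reader's convenience, we record the routine evaluation of the Gaussian integral in the last display, which shows that the prefactor cancels it exactly:

```latex
\int^{\infty}_{-\infty} e^{-\tau y^2 T^{1/3}/2^{2/3}}\, dy
  = \sqrt{\frac{\pi\, 2^{2/3}}{\tau T^{1/3}}}
  = \sqrt{\frac{2\pi}{\tau (2T)^{1/3}}}
  = \Big((2\pi)^{-1}\tau (2T)^{\frac{1}{3}}\Big)^{-1/2},
```

so that multiplying by $\sqrt{(2\pi)^{-1} \tau (2T)^{1/3}}$ indeed leaves $e^{3^{-1}\mu s T^{1/3}}$.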
Therefore, \eqref{eq:Cont4} holds. Applying Proposition~\ref{BMminusParabola} (with $\xi=\frac{1}{2}$), we see that
\begin{align*}
\mathbb{P}\big(\textrm{r.h.s. of }\eqref{eq:Cont4}\big)
\leq \frac{1}{\sqrt{3}} \exp\bigg(-\frac{\sqrt{(1-2\tau)}}{3\sqrt{6}}\Big(\frac{2\mu s}{3}+\log((2\pi)^{-1} \tau (2T)^{\frac{1}{3}})\Big)^{\frac{3}{2}} \bigg)
\end{align*}
when $s$ is large enough.
Combining this with \eqref{eq:EveInc}, we arrive at \eqref{eq:BrwLEv} showing the claim.
\end{proof}
We now complete the proof of Proposition~\ref{BrUpTailSubstituteTheo}. Choosing $\tau = \frac{1}{8}$, we notice
\[ \text{r.h.s. of \eqref{eq:BrwLEv}} \leq \exp\Big(-\frac{1}{9\sqrt{3}}\Big(\mu s-\frac{3\log(16\pi)}{2}\Big)^{3/2}\Big), \quad \forall\, T>\pi.\]
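The numerical simplification in the last display can be checked directly. With $\tau=\frac{1}{8}$,

```latex
\frac{\sqrt{1-2\tau}}{3\sqrt{6}} = \frac{\sqrt{3}/2}{3\sqrt{6}} = \frac{1}{6\sqrt{2}},
\qquad
\frac{1}{6\sqrt{2}}\cdot\Big(\frac{2}{3}\Big)^{\frac{3}{2}} = \frac{1}{9\sqrt{3}},
```

and for $T>\pi$ we have $(2T)^{1/3}\geq 1$, so $\log\big((2\pi)^{-1}\tfrac{1}{8}(2T)^{\frac{1}{3}}\big)\geq -\log(16\pi)$. Factoring $\frac{2}{3}$ out of the exponent in \eqref{eq:BrwLEv} then gives the displayed bound, for $s$ large enough that the argument of the power is positive.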
For the rest of this proof, we will fix some $T\geq T_0$ and assume that there exist $s_0=s_0(\epsilon, T_0)$, $s_1=s_1(\epsilon, T)$ and $s_2=s_2(\epsilon, T)$ with $s_1\leq s_2$ such that \eqref{eq:AssumeNW} is satisfied for all $s\in [s_0,\infty)$. Owing to \eqref{eq:TotSumBd} of Lemma~\ref{UpSumProbBd} and \eqref{eq:NoBigMaxRes} of Lemma~\ref{BigMaxApp}, there exist $\Theta= \Theta(\epsilon, T_0)$ and $\tilde{s}=\tilde{s}(\epsilon, \mu, T_0)$ such that for all $s\in [\max\{\tilde{s},\mathbf{s}_0\},\infty)$,
\begin{align*}
\mathbb{P}\Big(\bigcup_{n\in \mathbb{Z}} \widetilde{E}_n\Big) +
\mathbb{P}\Big(\widetilde{\mathcal{A}}^{\mathrm{Br}}\cap \big(\bigcup_{n\in \mathbb{Z}} \widetilde{E}_n\big)^c \cap \big(\bigcup_{n\in \mathbb{Z}} \widetilde{F}_n\big)\Big)
\leq \begin{cases}
\Theta e^{- \frac{\sqrt{2}}{3}(1-\epsilon)(1-\mu)s^{3/2}} & \text{ if }s\in [\mathbf{s}_0,\mathbf{s}_1 ]\cup (\mathbf{s}_2, \infty),\\
\Theta e^{- \frac{\sqrt{2}}{3}\epsilon(1-\mu)s^{3/2}} & \text{ if }s\in (\mathbf{s}_1, \mathbf{s}_2].
\end{cases}
\end{align*}
Combining this with \eqref{eq:BrwLEv} and plugging into \eqref{eq:BrSplit}, we get \eqref{eq:BrUpTailUpLow} for all $T\geq T_0>\pi$. In the case when $T_0\in (0,\pi)$, we obtain \eqref{eq:BrUpRoughBd} in a similar way as in the proof of \eqref{eq:GenRoughUpBd} of Proposition~\ref{SubstituteTheo} by combining the inequality of the above display with $\mathbb{P}(\Upsilon_T(0)>s)\leq e^{-cs^{3/2}}$ for all $s\geq s_0$ and $T\in [T_0,\pi]$.
\subsubsection{Proof of Proposition~\ref{BrUpTailLowBdProp}}\label{BrUpTailLowBd}
We use a similar argument as in the proof of Proposition~\ref{UpTailLowBd}. The main difference is that we do not expect \eqref{eq:LowerEvent} to hold, because the initial data is now a two-sided Brownian motion and hence \eqref{eq:LowInitBd} of Definition~\ref{Hypothesis} is not satisfied. However, it holds with high probability, which follows from the following simple consequence of the reflection principle for $B$, a two-sided Brownian motion with diffusion coefficient $2^{\frac{1}{3}}$ and $B(0)=0$:
\begin{align}\label{eq:BTail}
\mathbb{P}\big(\mathcal{M}_s\big)\leq e^{-\frac{\mu^2 }{36}s^{n}},\qquad \textrm{where} \quad \mathcal{M}_s =\Big\{\min_{y\in [-s^{-n+2}, s^{-n+2}]} B(y) \leq -\tfrac{\mu}{6} s\Big\}.
\end{align}
To complete the proof, let us define:
\begin{align}\label{eq:TildeW}
\widetilde{\mathcal{W}}_{\pm} &:= \left\{\Upsilon_T(\pm s^{-n+2})\geq - \frac{1}{2^{2/3}s^{2(n-2)}}+\big(1+\tfrac{2\mu}{3}\big)s\right\},\\\widetilde{\mathcal{W}}_{\mathrm{int}}&:= \left\{\Upsilon_T(y)\geq -\frac{y^2}{2^{2/3}}+ \big(1+\tfrac{\mu}{3}\big)s, \quad \forall y\in [-s^{-n+2}, s^{-n+2}]\right\}.
\end{align}
We claim that there exists $s^{\prime} = s^{\prime}(\mu,n, T_0)$ such that for all $s\geq s^{\prime}$ and $T\geq T_0$,
\begin{align}\label{eq:WInsertion}
\mathbb{P}\big(h^{\mathrm{Br}}_T(0)>s\big)\geq \mathbb{P}\big(\widetilde{\mathcal{W}}_{+}\cap \widetilde{\mathcal{W}}_{-}\cap \widetilde{\mathcal{W}}_{\mathrm{int}}\big) - e^{-\frac{\mu^2 }{36} s^{n}}.
\end{align}
To see this, assume that $ \widetilde{\mathcal{W}}_{+}\cap \widetilde{\mathcal{W}}_{-}\cap \widetilde{\mathcal{W}}_{\mathrm{int}}\cap \mathcal{M}^{c}_s$ occurs.
Then, for $s$ large enough,
\begin{align}\label{eq:IntDom}
\int^{\infty}_{-\infty}e^{T^{1/3}\big(\Upsilon_T(y)+B(-y)\big)}dy\geq \int^{s^{-n+2}}_{-s^{-n+2}} e^{T^{1/3}\big(-\frac{1}{2^{2/3}s^{2(n-2)}}+(1+\frac{\mu}{6})s\big)} dy> e^{sT^{1/3}}.
\end{align}
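To justify the last inequality in \eqref{eq:IntDom} explicitly: the middle expression equals

```latex
2s^{-n+2}\,\exp\Big\{T^{1/3}\Big(-\frac{1}{2^{2/3}s^{2(n-2)}}+\big(1+\tfrac{\mu}{6}\big)s\Big)\Big\},
```

which exceeds $e^{sT^{1/3}}$ as soon as $T^{1/3}\big(\tfrac{\mu s}{6}-\tfrac{1}{2^{2/3}s^{2(n-2)}}\big)>(n-2)\log s$; since $T\geq T_0$, this holds for all sufficiently large $s$.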
By Proposition~\ref{Distribution}, the event $\{$l.h.s. of \eqref{eq:IntDom} $\geq$ r.h.s. of \eqref{eq:IntDom}$\}$ equals $\{h^{\mathrm{Br}}_T(0)>s\}$. Therefore (using \eqref{eq:BTail} for the second inequality) we arrive at the claimed \eqref{eq:WInsertion} via
\begin{align*}
\mathbb{P}\big(h^{\mathrm{Br}}_T(0)>s\big) \geq \mathbb{P}\big(\widetilde{\mathcal{W}}_{+}\cap \widetilde{\mathcal{W}}_{-}\cap \widetilde{\mathcal{W}}_{\mathrm{int}}\cap \mathcal{M}^{c}_s\big)\geq \mathbb{P}\Big(\widetilde{\mathcal{W}}_{+}\cap \widetilde{\mathcal{W}}_{-}\cap \widetilde{\mathcal{W}}_{\mathrm{int}}\Big)- e^{-\frac{\mu^2 }{36}s^n}.
\end{align*}
To finish the proof of Proposition~\ref{BrUpTailLowBdProp}, we use an argument similar to the one used to prove \eqref{eq:AcLowBd}. For any $n\in \mathbb{Z}_{\geq 3}$, there exists $s^{\prime\prime}=s^{\prime\prime}(\mu, n, T_0)$ such that for all $s\geq s^{\prime\prime}$ and $T\geq T_0$,
\begin{align*}
\mathbb{P}\big(\widetilde{\mathcal{W}}_{+}\cap \widetilde{\mathcal{W}}_{-}\cap \widetilde{\mathcal{W}}_{\mathrm{int}}\big)\geq \Big(\mathbb{P}\big(\Upsilon_T(0)>\big(1+\tfrac{2\mu}{3}\big)s\big)\Big)^2 - e^{- Ks^n}.
\end{align*}
Combining this with \eqref{eq:WInsertion} and taking $s_0= \max\{s^{\prime},s^{\prime\prime}\} $, we arrive at \eqref{eq:BrLowTail} for all $s\geq s_0$.
\bibliographystyle{alpha}
<?xml version="1.0" encoding="UTF-8"?>
<billStatus>
<bill>
<summaries>
<billSummaries />
</summaries>
<latestAction>
<actionDate>2017-03-28</actionDate>
<links />
<text>Referred to the Subcommittee on Health.</text>
</latestAction>
<updateDate>2017-09-08T14:16:25Z</updateDate>
<version>1.0.0</version>
<billType>HR</billType>
<sponsors>
<item>
<party>R</party>
<bioguideId>B001257</bioguideId>
<state>FL</state>
<byRequestType />
<identifiers>
<lisID>1838</lisID>
<gpoId>7881</gpoId>
<bioguideId>B001257</bioguideId>
</identifiers>
<fullName>Rep. Bilirakis, Gus M. [R-FL-12]</fullName>
<district>12</district>
<middleName>M.</middleName>
<firstName>Gus</firstName>
<lastName>Bilirakis</lastName>
</item>
</sponsors>
<billNumber>1749</billNumber>
<laws />
<committees>
<billCommittees>
<item>
<name>Veterans' Affairs Committee</name>
<chamber>House</chamber>
<systemCode>hsvr00</systemCode>
<subcommittees>
<item>
<name>Health Subcommittee</name>
<systemCode>hsvr03</systemCode>
<activities>
<item>
<name>Referred to</name>
<date>2017-03-28T19:59:27Z</date>
</item>
</activities>
</item>
</subcommittees>
<type>Standing</type>
<activities>
<item>
<name>Referred to</name>
<date>2017-03-28T14:00:32Z</date>
</item>
</activities>
</item>
</billCommittees>
</committees>
<introducedDate>2017-03-28</introducedDate>
<cosponsors>
<item>
<state>NC</state>
<bioguideId>J000255</bioguideId>
<fullName>Rep. Jones, Walter B., Jr. [R-NC-3]</fullName>
<firstName>WALTER</firstName>
<district>3</district>
<isOriginalCosponsor>False</isOriginalCosponsor>
<sponsorshipDate>2017-09-05</sponsorshipDate>
<middleName>B.</middleName>
<party>R</party>
<identifiers>
<bioguideId>J000255</bioguideId>
<lisID>612</lisID>
<gpoId>8026</gpoId>
</identifiers>
<lastName>JONES</lastName>
<sponsorshipWithdrawnDate />
</item>
<item>
<state>PR</state>
<bioguideId>G000582</bioguideId>
<fullName>Rep. Gonzalez-Colon, Jenniffer [R-PR-At Large]</fullName>
<firstName>Jenniffer</firstName>
<district>0</district>
<isOriginalCosponsor>False</isOriginalCosponsor>
<sponsorshipDate>2017-09-07</sponsorshipDate>
<middleName />
<party>R</party>
<identifiers>
<lisID>2347</lisID>
<gpoId />
<bioguideId>G000582</bioguideId>
</identifiers>
<lastName>Gonzalez-Colon</lastName>
<sponsorshipWithdrawnDate />
</item>
</cosponsors>
<originChamber>House</originChamber>
<policyArea>
<name>Armed Forces and National Security</name>
</policyArea>
<titles>
<item>
<chamberName />
<parentTitleType />
<title>VET CARE Act of 2017</title>
<titleType>(Extracted from GPO) Short Titles as Introduced</titleType>
<chamberCode />
</item>
<item>
<chamberName />
<parentTitleType />
<title>Veterans Early Treatment for Chronic Ailment Resurgence through Examinations Act of 2017</title>
<titleType>(Extracted from GPO) Short Titles as Introduced</titleType>
<chamberCode />
</item>
<item>
<chamberName />
<parentTitleType />
<title>VET CARE Act of 2017</title>
<titleType>Short Titles as Introduced</titleType>
<chamberCode />
</item>
<item>
<chamberName />
<parentTitleType />
<title>Veterans Early Treatment for Chronic Ailment Resurgence through Examinations Act of 2017</title>
<titleType>Short Titles as Introduced</titleType>
<chamberCode />
</item>
<item>
<chamberName />
<parentTitleType />
<title>To direct the Secretary of Veterans Affairs to establish a pilot program for the provision of dental care to certain veterans, and for other purposes.</title>
<titleType>Official Title as Introduced</titleType>
<chamberCode />
</item>
<item>
<chamberName />
<parentTitleType />
<title>VET CARE Act of 2017</title>
<titleType>Display Title</titleType>
<chamberCode />
</item>
</titles>
<createDate>2017-03-29T05:08:00Z</createDate>
<relatedBills />
<amendments />
<constitutionalAuthorityStatementText><![CDATA[<pre>From the Congressional Record Online through the Government Publishing Office [<a href='http://www.gpo.gov'>www.gpo.gov</a>]By Mr. BILIRAKIS:H.R. 1749.Congress has the power to enact this legislation pursuantto the following:This bill is enacted pursuant to Article I, Section 8,Clause 1 of the Constitution of the United States and ArticleI, Section 8, Clause 7 of the Constitution of the UnitedStates.Article I, section 8 of the United State Constitution,which grants Congress the power to raise and support an Army;to provide and maintain a Navy; to make rules for thegovernment and regulation of the land and naval forces; andprovide for organizing, arming, and disciplining the militia.[Page H2516]</pre>]]></constitutionalAuthorityStatementText>
<subjects>
<billSubjects>
<policyArea>
<name>Armed Forces and National Security</name>
</policyArea>
<legislativeSubjects>
<item>
<name>Dental care</name>
</item>
<item>
<name>Department of Veterans Affairs</name>
</item>
<item>
<name>Digestive and metabolic diseases</name>
</item>
<item>
<name>Health personnel</name>
</item>
<item>
<name>Home and outpatient care</name>
</item>
<item>
<name>Medical education</name>
</item>
<item>
<name>Veterans' medical care</name>
</item>
</legislativeSubjects>
</billSubjects>
</subjects>
<actions>
<actionTypeCounts>
<introducedInHouse>1</introducedInHouse>
<introducedInTheHouse>1</introducedInTheHouse>
<placeholderTextForH>1</placeholderTextForH>
<billReferrals>1</billReferrals>
</actionTypeCounts>
<item>
<actionDate>2017-03-28</actionDate>
<committee>
<name>Health Subcommittee</name>
<systemCode>hsvr03</systemCode>
</committee>
<links />
<sourceSystem>
<code>1</code>
<name>House committee actions</name>
</sourceSystem>
<text>Referred to the Subcommittee on Health.</text>
<type>Committee</type>
</item>
<item>
<type>IntroReferral</type>
<text>Referred to the House Committee on Veterans' Affairs.</text>
<links />
<sourceSystem>
<code>2</code>
<name>House floor actions</name>
</sourceSystem>
<committee>
<name>Veterans' Affairs Committee</name>
<systemCode>hsvr00</systemCode>
</committee>
<actionCode>H11100</actionCode>
<actionDate>2017-03-28</actionDate>
</item>
<item>
<type>IntroReferral</type>
<text>Introduced in House</text>
<links />
<sourceSystem>
<code>9</code>
<name>Library of Congress</name>
</sourceSystem>
<committee />
<actionCode>Intro-H</actionCode>
<actionDate>2017-03-28</actionDate>
</item>
<item>
<type>IntroReferral</type>
<text>Introduced in House</text>
<links />
<sourceSystem>
<code>9</code>
<name>Library of Congress</name>
</sourceSystem>
<committee />
<actionCode>1000</actionCode>
<actionDate>2017-03-28</actionDate>
</item>
<actionByCounts>
<houseOfRepresentatives>4</houseOfRepresentatives>
</actionByCounts>
</actions>
<calendarNumbers />
<recordedVotes />
<congress>115</congress>
<notes />
<title>VET CARE Act of 2017</title>
<committeeReports />
<cboCostEstimates />
</bill>
<dublinCore xmlns:dc="http://purl.org/dc/elements/1.1/">
<dc:format>text/xml</dc:format>
<dc:language>EN</dc:language>
<dc:rights>Pursuant to Title 17 Section 105 of the United States Code, this file is not subject to copyright protection and is in the public domain.</dc:rights>
<dc:contributor>Congressional Research Service, Library of Congress</dc:contributor>
<dc:description>This file contains bill summaries and statuses for federal legislation. A bill summary describes the most significant provisions of a piece of legislation and details the effects the legislative text may have on current law and federal programs. Bill summaries are authored by the Congressional Research Service (CRS) of the Library of Congress. As stated in Public Law 91-510 (2 USC 166 (d)(6)), one of the duties of CRS is "to prepare summaries and digests of bills and resolutions of a public general nature introduced in the Senate or House of Representatives". For more information, refer to the User Guide that accompanies this file.</dc:description>
</dublinCore>
</billStatus>
{"url":"https:\/\/gateoverflow.in\/336677\/nielit-2016-mar-scientist-b-section-b-9","text":"388 views\n\nThe value of improper integral\n\n$\\displaystyle\\int_{0}^{1} x\\ln x =?$\n\n1. $1\/4$\n2. $0$\n3. $-1\/4$\n4. $1$\n\nUsing the $ILATE$ rule to select first and second function for integrating it by parts: The first function should be Logarithmic and second should be Algebraic.\n\nFor this question, first function would be \u00a0$ln(x)$ and second would be $x$\n\n$\\therefore$ $\\int_{0}^{1}xln(x)dx=ln(x)\\times \\frac{x^{2}}{2}-\\int \\frac{1}{x}\\times \\frac{x^{2}}{2}dx$\n\n$\\Rightarrow$ \u00a0$\\int_{0}^{1}xln(x)dx=ln(x)\\times \\frac{x^{2}}{2}-\\frac{x^{2}}{4}$\n\n$\\Rightarrow$ \u00a0$\\int_{0}^{1}xln(x)dx=\\left [ {0-\\frac{1}{4}}-0 \\right ]$ $=\\frac{-1}{4}$\n\nOption C is correct.\nby\n\n### 1 comment\n\nAlthough the answer is right but the way you handled improper integral is wrong\n\n1\n355 views\n1 vote\n2\n406 views\n3\n364 views","date":"2023-02-05 14:11:41","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8746432065963745, \"perplexity\": 2585.1890510143244}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 5, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": 
true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2023-06\/segments\/1674764500255.78\/warc\/CC-MAIN-20230205130241-20230205160241-00519.warc.gz\"}"} | null | null |
Q: tinyMCE execCommand in IE8 I tried the following tinyMCE command
tinyMCE.execCommand('mceRemoveControl',true, editor_id);
to remove the control for destroyed editors.
Now, when new editors are loaded via an AJAX call (in a hidden form), all other input fields on the same page become disabled in IE8. However, if the editors are loaded in a visible form, it works well.
A: For tinymce 3.2.x, use the following to remove a tinyMCE instance in IE8:
tinyMCE.remove(editor); //editor is instance of tinymce editor and not editor id
This will fix the "Permission Denied" error without disabling the other input fields on the same page.
{"url":"http:\/\/tantalum.academickids.com\/encyclopedia\/index.php\/Gravitational_redshift","text":"# Gravitational redshift\n\nIn the general theory of relativity by Albert Einstein, the gravitational redshift or Einstein shift is the effect that clocks in a gravitational field tick slower when observed by a distant observer. More specifically the term refers to the shift of wavelength of a photon to longer wavelength (the red side in an optical spectrum) when observed from a point in a lower gravitational field. In the latter case the 'clock' is the frequency of the photon and a lower frequency is the same as a longer (\"redder\") wavelength.\n\nThe gravitational redshift is a simple consequence of the Einstein equivalence principle (\"all bodies fall with the same acceleration, independent of their composition\") and was found by Einstein eight years before the full theory,(of relativity).\n\nObserving the gravitational redshift in the solar system, is one of the classical tests of general relativity.\n\n Contents\n\n## First experimental verification\n\nExperimental verification of the gravitational redshift requires good clocks since at Earth the effect is small. The first experimental confirmation came as late as in 1960, in the Pound-Rebka experiment (R.V. Pound, G.A. Rebka, Phys. Rev. Lett. 4, p.337) later improved by Pound and Snider. The famous experiment is generally called the Pound-Rebka-Snider experiment. They used a very well-defined \"clock\" in the form of an atomic transition which results in a very narrow line of electromagnetic radiation (a photon of well-defined energy). A narrow line implies a very well defined frequency. The line is in gamma ray range and emitted from the isotope Fe57 at 14.4 keV. The narrowness of the line is caused by the so called Mossbauer effect.\n\nThe emitter and absorber were placed in a tower of only 22 meter height at the bottom and top respectively. 
The observed gravitational redshift z, defined as the relative change in wavelength, the ratio\n\n[itex] z = \\frac{\\Delta\\lambda}{\\lambda_e}[itex]\n\nwith [itex] \\Delta\\lambda = \\lambda_{\\rm o} - \\lambda_{\\rm e}[itex] the difference between the observed [itex] \\lambda_{\\rm o}[itex] and emitted [itex] \\lambda_{\\rm e} [itex] wavelength. z is proportional to the difference in gravitational potential. With the gravitational acceleration g of the Earth, c the velocity of light and with a height h=22 m, the prediction\n\n[itex]\n\\Delta\\lambda\/\\lambda = \\frac{gh}{c^2} = 2.5\\times 10^{-15}\n\n[itex] was obtained with a 1% accuracy. Nowadays the accuracy is measured up to 0.02%\n\nNote from the formula above that the loss of energy of the photon is just equal to the difference in potential energy [itex]gh[itex]). You can't make a perpetuum mobile by having photons going up and down in a gravitational field, something that was, strictly speaking, possible within Newton's theory of gravity.\n\n## Gravitational redshift in stars\n\nPhotons emitted from a stellar surface on a star of mass M and radius R are expected to have a redshift equal to the difference in gravitational potential. With G the gravitational constant, this potential at the stellar surface is [itex]-GM\/R[itex] and zero at infinity, so\n\n[itex]\\frac{\\Delta\\lambda}{\\lambda} = \\frac{G}{c^2} \\cdot \\frac{M}{R}[itex]\n\nwhere [itex]c[itex] is the speed of light. The coefficient G\/c2 = 7.414\u00d710-29cm\/g. For the Sun, M = 2.0\u00d71033g and R = 6.955\u00d71011cm, so \u0394\u03bb\/\u03bb = 2.12\u00d710-6. In other words, each spectral line should be shifted towards the red end of the spectrum by a little over one millionth of its original wavelength. This effect was measured for the first time on the Sun in 1962.\n\nIn addition, observation of much more massive and compact stars such as white dwarfs have shown that Einstein shift does occur and is within the correct order of magnitude. 
Recently also the gravitational redshift of a neutron star has been measured from spectral lines in the x-ray range. The result gives the quantity M\/R, the mass M and radius R of the neutron star. If the mass is obtained by other means (for example from the motion of the neutron star around a campanion star), one can measure the radius of a neutron star in this way.\n\n## Black holes have infinite gravitational redshift\n\nThe gravitational redshift increases to infinity around a black hole when an object approaches the event horizon of the black hole which is situated at the so-called Schwarzschild radius. In fact a black hole can best be defined as a massive compact object surrounded by an area at which the redshift (as observed from a large distance) is infinitely large.\n\nWhen a star is imploding to form a black hole, one never observes the star to pass the Schwarzschild radius. As the star approaches this radius it will appear increasingly redder and dimmer in a very short time. In the past such a star was called a frozen star instead of a black hole. However, in a very short time the collapsing star emits its \"last photon\" and the object thereafter is black indeed. The terminology black hole is preferred above frozen star.\n\nIn general the gravitational redshift z for a spherical mass M with radius R is given\n\n[itex]\n\n1+z = \\frac{1}{\\sqrt{1-\\frac{2GM}{c^2 R} }} [itex] (where G is the gravitational constant and c the velocity of light). This formula reduces to the one used above for the Sun for large R. Note also that this formula reduces to the one used at Earth for a gravitational acceleration [itex]g = GM\/R^2[itex] and a difference in gravitational potential between [itex]R[itex] and [itex]R+h[itex] for small [itex]h[itex].\n\nFor [itex]r[itex] approaching [itex]2GM\/c^2[itex] the redshift [itex] z\\rightarrow\\infty[itex]. 
The quantity [itex]2GM\/c^2[itex] is called the Schwarzschild radius.\n\n## Gravitational redshift, the \"applied side of general relativity\"\n\nCorrections for gravitational redshift are nowadays common practice in many situations. We could almost call it \"the applied side of General Relativity\". With present-day accuracies, clocks in orbit around the Earth must be corrected for this effect. This is in particular the case with satellite-based navigational systems such as the Global Positioning System (GPS). To get accuracies of order 10 m, light travel times with an accuracy of order 30 ns (nanoseconds) have to be measured. Special-relativistic time dilatation (caused by the velocity) and gravitational redshift corrections in these satellites are of order 30000 ns per day.\n\n\u2022 Art and Cultures\n\u2022 Countries of the World\u00a0(http:\/\/www.academickids.com\/encyclopedia\/index.php\/Countries)\n\u2022 Space and Astronomy","date":"2020-08-11 22:29:15","metadata":"{\"extraction_info\": {\"found_math\": false, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8926430344581604, \"perplexity\": 625.0293485887901}, \"config\": {\"markdown_headings\": true, \"markdown_code\": false, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, 
\"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-34\/segments\/1596439738855.80\/warc\/CC-MAIN-20200811205740-20200811235740-00420.warc.gz\"}"} | null | null |
\section{Introduction}
\label{SEC introduction}
In recent years, research in probability and statistics has witnessed the rise of Stein's method, but also the emergence of methods to tackle the analysis and application of models which are based on non-normalized probability laws. In this work, we seek to apply findings from the research on Stein's method to contribute to the solution of testing and estimation problems involving non-normalized statistical distributions. We focus on the analysis of discrete probability laws and on how the theoretical results can be used to develop statistical methods. In this respect, we build on \cite{BE:2019:1}, who provide similar tools for continuous distributions. A rather well-known approach to the problem of parameter estimation for non-normalized continuous probability distributions is the score matching technique due to \cite{H:2005, H:2007} \citep[see][for recent progress]{YDS:2019}. Another approach is known as noise contrastive estimation \citep[cf.][]{GH:2010}, but a number of 2019/20 papers indicate that the development and study of new tools remains an important issue; see \cite{MH:2019}, \cite{UKTM:2019}, and \cite{UMK:2019}.
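As a toy illustration of the score matching idea (a sketch of our own, not taken from the cited papers): for the unnormalized one-parameter family $q(x;\theta)\propto e^{-\theta x^2/2}$ on $\mathbb{R}$, the score is $\psi(x;\theta)=\partial_x\log q(x;\theta)=-\theta x$, so Hyv\"arinen's objective $J(\theta)=\mathbb{E}[\tfrac12\psi(X;\theta)^2+\partial_x\psi(X;\theta)]=\tfrac12\theta^2\mathbb{E}[X^2]-\theta$ is minimized at $\hat\theta=1/\widehat{\mathbb{E}}[X^2]$, with no normalizing constant involved:

```python
import random

def score_matching_precision(xs):
    # Empirical minimizer of J(theta) = 0.5 * theta^2 * mean(x^2) - theta
    # for the unnormalized model q(x; theta) ~ exp(-theta * x^2 / 2):
    # theta_hat = 1 / mean(x^2).  No normalizing constant is needed.
    return len(xs) / sum(x * x for x in xs)

random.seed(0)
# Sample from N(0, 0.5^2), i.e. true precision theta = 1 / 0.25 = 4.
sample = [random.gauss(0.0, 0.5) for _ in range(100_000)]
print(score_matching_precision(sample))  # close to the true precision 4
```

In the discrete setting studied in this paper, such derivative-based scores are unavailable, which is precisely why characterizing identities of Stein type are attractive.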
The tool box now known as Stein's method goes back to the work of \cite{S:1972} \citep[see also][]{S:1986}, who sought an alternative proof of the central limit theorem that provides a bound on the rate of convergence. This inherent feature made Stein's method popular. The idea is applied in all kinds of settings, as it yields bounds on distributional distances between sequences of probability laws and a limit distribution, and as it often applies in the absence of stochastic independence. The application of the method to discrete distributions goes back to \cite{Ch:1975}, who first derived corresponding results for the Poisson distribution; this variant is known as the Stein-Chen method. The method has since been extended to other discrete distributions, like the binomial distribution \citep[by][]{E:1991}, the geometric distribution \citep[by][]{P:1996}, the negative binomial distribution \citep[by][]{BP:1999}, discrete Gibbs measures \citep[by][]{ER:2008}, and others. The foundation of the method is formed by characterization results for the underlying probability law. While specific identities were used or devised for the first distributions under consideration, general approaches have since emerged that apply to many different distributions at once. In this context, we mention the generator approach of \cite{B:1988, B:1990} and \cite{G:1991}, who use time-reversible Markov processes, where the stationary distribution is the probability law of interest, to characterize that law. On the other hand, a direct derivation of characterizations is possible, and a well-known class of such identities goes by the name of `density approach'. For the continuous case, first ideas on the density approach came from \cite{S:1986} and \cite{SDHR:2004}, and a more complete version is due to \cite{LS:2013:2}. The corresponding characterizations for discrete distributions are given by \cite{LS:2013}.
The contribution at hand is certainly not the first application of Stein's method in statistics. Indeed, similar problems in the context of non-normalized models are tackled with the use of so-called Stein discrepancies by the machine learning community, though most of these works concern the continuous setting. Let us mention some papers that explore these tools: \cite{CSG:2016}, \cite{LLJ:2016}, and \cite{YLRN:2018} consider the construction of tests of fit, \cite{GM:2015} build measures of sample quality, and \cite{BBDGM:2019} solve estimation problems for non-normalized models.
Our new work is based, strictly speaking, not on what is generally called Stein's method, but rather on the characterization identities referred to above. More precisely, we take as a starting point the discrete density approach identity as provided by \cite{LS:2013}. To sketch the idea, consider a probability mass function $p$ on $\N_0$ as well as an $\N_0$-valued random variable $X$. Subject to a few regularity conditions, $X$ is governed by $p$ if, and only if,
\begin{align*}
\E \bigg[ \Delta^+ f(X) + \frac{\Delta^+ p(X)}{p(X)} \, f(X + 1) \bigg] = 0
\end{align*}
holds for a large enough class of test functions $f$. Here, $\Delta^+$ denotes the forward difference operator. Our first contribution lies in proving that this characterization can essentially be restated as follows: $X$ is governed by $p$ if, and only if, the probability mass function $\rho_X$ of $X$ satisfies
\begin{align*}
\rho_X(k)
= \E \bigg[ - \frac{\Delta^+ p(X)}{p(X)} \, \mathds{1}\{ X \geq k \} \bigg], \quad k \in \N_0 .
\end{align*}
With regard to applications in statistics, this second identity is more accessible. We can, for instance, tackle the goodness-of-fit testing problem as follows. Assume we are to test whether a sample $X_1, \dots, X_n$ of $\N_0$-valued random variables follows one of the laws of a parametric family of distributions $\{ p_\vartheta : \vartheta \in \Theta \}$, where $\Theta$ denotes the parameter space. By the above characterization, if $X_1, \dots, X_n$ are governed by one of the $p_\vartheta$, then the difference between
\begin{align*}
\widehat{\rho}_n(k)
= \frac{1}{n} \sum_{j = 1}^n \mathds{1}\{ X_j = k \}
\end{align*}
and
\begin{align*}
\frac{1}{n} \sum_{j = 1}^n - \frac{\Delta^+ p_{\widehat{\vartheta}_n}(X_j)}{p_{\widehat{\vartheta}_n}(X_j)} \, \mathds{1}\{ X_j \geq k \}
\end{align*}
ought to be small for each $k \in \N_0$. Here, $\widehat{\vartheta}_n$ denotes an estimator of $\vartheta$ based on $X_1, \dots, X_n$. Thus, in line with the idea of characterization-based goodness-of-fit testing, our proposal is to use
\begin{align*}
\sum_{k = 0}^\infty \bigg( \frac{1}{n} \sum_{j = 1}^n \mathds{1}\{ X_j = k \} + \frac{1}{n} \sum_{j = 1}^n \frac{\Delta^+ p_{\widehat{\vartheta}_n}(X_j)}{p_{\widehat{\vartheta}_n}(X_j)} \, \mathds{1}\{ X_j \geq k \} \bigg)^2
\end{align*}
as a test statistic for the hypothesis
\begin{align*}
\mathbf{H_0} ~ : ~ \rho_{X_1} \in \{ p_\vartheta : \vartheta \in \Theta \} ,
\end{align*}
and to reject the hypothesis for large values of the statistic. Supposing that $X_1, \dots, X_n$ are governed by $p_{\vartheta_0}$ for some (unknown) $\vartheta_0 \in \Theta$, the very same heuristic leads us to propose
\begin{align*}
\widehat{\vartheta}_n
= \mbox{argmin}_{\vartheta \in \Theta} ~ \sum_{k = 0}^\infty \bigg( \frac{1}{n} \sum_{j = 1}^n \mathds{1}\{ X_j = k \} + \frac{1}{n} \sum_{j = 1}^n \frac{\Delta^+ p_{\vartheta}(X_j)}{p_{\vartheta}(X_j)} \, \mathds{1}\{ X_j \geq k \} \bigg)^2
\end{align*}
as an estimator for the unknown $\vartheta_0$. The paper at hand formalizes these ideas and puts them on firm mathematical ground. We also provide examples for the theoretical results as well as for the testing and estimation methods we propose. In Section \ref{SEC foundation} we introduce basic notation and recall the density approach identity. In Section \ref{SEC distri chara pmf} we prove the new characterization result indicated above. Section \ref{SEC distri chara trafos of pmf} contains further characterizations based on transformations of the probability mass function, such as distribution functions, characteristic functions, and generating functions. In Section \ref{SEC example distributions} we discuss examples. In Section \ref{SEC Poisson goodness-of-fit}, we construct and study empirically the test of fit for the Poisson distribution. In Section \ref{SEC negative binomial goodness-of-fit}, a discrepancy measure as above leads to minimum distance estimators for the negative binomial distribution, which are put to the test in a simulation study. Section \ref{SEC discrete exponential-polynomial parameter estimation} deals with similar parameter estimators in the non-normalized class of discrete exponential-polynomial models.
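To fix ideas before the formal development, note that the statistic admits a direct computation: every summand with $k$ larger than the sample maximum vanishes, so the infinite sum over $k$ truncates exactly. The following minimal Python sketch (helper names are our own; \texttt{ratio} is assumed to evaluate $\Delta^+ p_\vartheta(\cdot) / p_\vartheta(\cdot)$ elementwise) illustrates this.

```python
import numpy as np

def stein_statistic(sample, ratio):
    """Sum over k of
    ( (1/n) sum_j 1{X_j = k} + (1/n) sum_j ratio(X_j) * 1{X_j >= k} )^2,
    where `ratio` evaluates the forward-difference ratio
    Delta^+ p_theta / p_theta of the hypothesized pmf (assumed vectorized)."""
    x = np.asarray(sample)
    r = ratio(x)                       # Delta^+ p(X_j) / p(X_j), elementwise
    k_max = int(x.max())               # all terms with k > max(sample) vanish
    terms = [(np.mean(x == k) + np.mean(r * (x >= k))) ** 2
             for k in range(k_max + 1)]
    return float(sum(terms))

# Poisson null with rate lam: Delta^+ p(k) / p(k) = lam/(k+1) - 1.
lam = 1.5
poisson_ratio = lambda t: lam / (t + 1) - 1.0
```

Small values of the statistic are expected under the hypothesis, while large values lead to rejection; minimizing the same quantity over $\vartheta$ yields the estimator.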
\section{The foundation: Stein characterizations for discrete distributions}
\label{SEC foundation}
We denote by $p : \Z \to [0, 1]$ a probability mass function (pmf) defined on the integers. We assume that the support of $p$, that is, $\mathrm{spt}(p) = \big\{ k \in \Z : p(k) > 0 \big\}$, is connected, in the sense that
\begin{itemize}
\item[(C1)] $\mathrm{spt}(p) = \{L, L + 1, \dots, R\}$, where $L, R \in \Z \cup \{ \pm \infty \}$, $L < R$ .
\end{itemize}
This prerequisite is quite usual in the context of Stein's method in the discrete setting. We further assume that
\begin{itemize}
\item[(C2)] $\displaystyle \sup_{k \in \{ L, \dots, R - 1 \}} \bigg| \frac{\Delta^{+} p(k) \cdot \min\{ P(k), 1 - P(k) \}}{p(k) \, p(k + 1)} \bigg| < \infty$,
\end{itemize}
where $\Delta^+ f(k) = f(k + 1) - f(k)$ denotes the forward difference operator. Moreover, we denote by $P(k) = \sum_{\ell = L}^{k} p(\ell)$ the distribution function corresponding to $p$. Assumption (C2) is known from the continuous setting, see Lemma 13.1 of \cite{CGS:2011}. The supremum in (C2) runs from $L$ to $\infty$ whenever $R = \infty$. In what follows, we stick to the convention that empty sums are set to $0$.
\begin{definition} \label{DEF test functions}
Let $p$ be a pmf that satisfies (C1) and (C2). We write $\mathcal{F}_p$ for the class of functions $f : \{ L, \dots, R \} \to \R$ such that
\begin{itemize}
\setlength{\itemindent}{-2.5mm}
\item[($a$)] $\displaystyle \sum_{k = L}^R \big| \Delta^+ \big( p(k) \, f(k) \big) \big| < \infty \,$ and $\,\displaystyle \sum_{k = L}^R \Delta^+ \big( p(k) \, f(k) \big) = 0$, where we put $f(R + 1) = 0$ if $R < \infty$, as well as
\item[($b$)] $\displaystyle \sup_{k \in \{ L, \dots, R \}} \big| \Delta^+ f(k) \big| < \infty$,\, and $\displaystyle \sup_{k \in \{ L, \dots, R \}} \bigg| \frac{\Delta^+ p(k)}{p(k)} \, f(k + 1) \bigg| < \infty$.
\end{itemize}
\end{definition}
Conditions (C2) and ($b$) are trivially satisfied whenever the support of $p$ is finite. We now state the characterization theorem known as Stein's density approach for discrete distributions. The proof is an easy adaptation of the proof of Theorem 2.1 in \cite{LS:2013}, taking into account the different class of test functions. We give a full proof in Appendix \ref{APP SEC proof of density approach} so that the reader can see how the assumptions come into play. We denote by $(\Omega, \mathcal{A}, \PP)$ the probability space which underlies all random quantities in this work.
\begin{theorem}[Discrete density approach] \label{THM density approach}
Let $p$ be a pmf which satisfies (C1) and (C2), and let $X : \Omega \to \R$ be a random variable such that $\PP \big( X \in \mathrm{spt}(p) \big) > 0$. Then, $\PP\big( X = k \, | \, X \in \mathrm{spt}(p) \big) = p(k)$, $k \in \Z$, if, and only if,
\begin{align*}
\E \bigg[ \Delta^+ f(X) + \frac{\Delta^+ p(X)}{p(X)} \, f(X + 1) ~ \bigg| ~ X \in \mathrm{spt}(p) \bigg] = 0,
\end{align*}
for all $f \in \mathcal{F}_p$, where $\E[\cdot \, | \, \cdot]$ denotes the conditional expectation.
\end{theorem}
We use the abbreviation $X|p \sim p$ for $\PP\big( X = k \, | \, X \in \mathrm{spt}(p) \big) = p(k)$, $k \in \Z$. A very similar result exists for continuous probability distributions; indeed, the continuous version came first and was initiated by \cite{S:1986}. For the complete statement we refer to \cite{LS:2011} and \cite{LS:2013:2}, and for further constructions of this type of Stein operators, see \cite{LRS:2017}.
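As a numerical sanity check of the theorem (the choice of distribution and test functions is ours, for illustration only), the identity can be verified by Monte Carlo for the Poisson law, whose ratio is $\Delta^+ p(k)/p(k) = \lambda/(k+1) - 1$; in line with Remark \ref{RMK test functions} below, we take bounded test functions with $f(0) = 0$.

```python
import numpy as np

# Monte Carlo check of E[ Delta^+ f(X) + (Delta^+ p(X)/p(X)) f(X+1) ] = 0
# for X ~ Poisson(lam), using bounded test functions with f(0) = 0.
rng = np.random.default_rng(4)
lam = 1.5
x = rng.poisson(lam, size=200_000)
ratio = lam / (x + 1) - 1.0           # Delta^+ p(X) / p(X) for the Poisson pmf

def stein_expectation(f):
    return np.mean(f(x + 1) - f(x) + ratio * f(x + 1))

vals = [stein_expectation(f) for f in
        (np.sin, lambda t: t / (1.0 + t), lambda t: 1.0 - np.exp(-t))]
```

All three averages are zero up to Monte Carlo error, in accordance with the theorem.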
\begin{remark} \label{RMK test functions}
It follows from the proof of Theorem \ref{THM density approach} that, in the case $L > - \infty$, we may assume that $f(L) = 0$ for all $f \in \mathcal{F}_p$.
\end{remark}
\section{Distributional characterizations via the probability mass function}
\label{SEC distri chara pmf}
In this section, we derive explicit distributional characterizations via the probability mass function. The whole theory can be understood as a discrete version of the results from \cite{BE:2019:1} who established similar characterization identities for continuous probability laws starting with a continuous version of Theorem \ref{THM density approach} as stated by \cite{LS:2013:2}. We make the further assumption that the expectation of $p$ exists, that is,
\begin{itemize}
\item[(C3)] $\E |Z| < \infty$, where $Z$ is a discrete random variable with pmf $p$.
\end{itemize}
It follows from (C3) that $\E \big| \Delta^+ p(Z) \cdot Z \, / \, p(Z) \big| < \infty$, and hence we also have $\E \big| \Delta^+ p(Z) \, / \, p(Z) \big| < \infty$. We note our first result, a proof of which is given in Appendix \ref{APP SEC proof of pmf chara}.
\begin{theorem} \label{THM chara pmf forward difference}
Let $p$ be a pmf which satisfies (C1) -- (C3) with $L > - \infty$. Let $X : \Omega \to \R$ be a random variable with $\PP \big( X \in \mathrm{spt}(p) \big) > 0$ as well as
\begin{align*}
\E \bigg| \frac{\Delta^+ p(X)}{p(X)} \, X \cdot \mathds{1}\big\{ X \in \mathrm{spt}(p) \big\} \bigg| < \infty,
\end{align*}
and denote by $\rho_{X|p}(k) = \PP \big( X = k \, | \, X \in \mathrm{spt}(p) \big)$ the pmf of $X$ given $X \in \mathrm{spt}(p)$. Then, $X|p \sim p$ if, and only if,
\begin{align*}
\rho_{X|p}(k) = \E \bigg[ - \frac{\Delta^+ p(X)}{p(X)} \, \mathds{1}\{ X \geq k \} ~ \bigg| ~ X \in \mathrm{spt}(p) \bigg] , \quad k \in \Z, k \geq L.
\end{align*}
\end{theorem}
Notice that the integrability assumption on $X$ implies the existence of the (conditional) expectation that appears in the theorem. Even in stating Theorem \ref{THM chara pmf forward difference}, the ordering of the integers is essential. However, if $p$ is an admissible probability mass function on some arbitrary countable set $\mathbb{S}$ (where $\mathbb{S}$ is endowed with the power set as a $\sigma$-field), there exists a bijection $\iota : \mathbb{S} \to \mathbb{N}_0$, which corresponds to imposing an order on the space $\mathbb{S}$, and Theorem \ref{THM chara pmf forward difference} can be applied. This leads to the following corollary, which allows the handling of more general state spaces.
\begin{corollary} \label{COR chara pmf general state space}
Let $\mathbb{S}$ be a countable set and $p : \mathbb{S} \to [0, 1]$ such that $\sum_{s \in \mathbb{S}} p(s) = 1$. Let $\iota : \mathbb{S} \to \{ L, \dots, R \}$, with $L > - \infty$, be a bijection so that $\widetilde{p} = p \circ \iota^{-1}$ satisfies (C1) -- (C3). Assume that $X : \Omega \to \mathbb{S}$ is a random variable with $\PP\big( X \in \mathrm{spt}(p) \big) > 0$, and
\begin{align*}
\E \bigg| \frac{(\Delta^+ \widetilde{p})\big( \iota(X) \big)}{p(X)} \, \iota(X) \cdot \mathds{1}\big\{ X \in \mathrm{spt}(p) \big\} \bigg| < \infty .
\end{align*}
Then, $X|p \sim p$ if, and only if,
\begin{align*}
\rho_{X|p}(s) = \E \bigg[ - \frac{(\Delta^+ \widetilde{p})\big( \iota(X) \big)}{p(X)} \, \mathds{1}\big\{ \iota(X) \geq \iota(s) \big\} ~ \bigg| ~ X \in \mathrm{spt}(p) \bigg] , \quad s \in \mathbb{S} .
\end{align*}
\end{corollary}
Any such ordering on $\mathbb{S}$ gives a characterization result, and if $X \sim p$, the (conditional) expectation is the same for every ordering (as $\rho_{X|p}$ does not depend on $\iota$). However, if one intends to use the converse of the characterization (with general $X$), the calculation of the expectation depends on the ordering, so in practice the question of choosing an efficient ordering arises. Finding an order such that the conditions (C1) -- (C3) are satisfied is a non-trivial endeavor. We give one example of choosing an order such that a pmf with a support that is not bounded from below can be handled. To state the result, we first recall that the backward difference operator $\Delta^-$ is defined by $\Delta^- f(k) = f(k) - f(k - 1)$.
\begin{corollary} \label{COR chara pmf backward difference}
Let $p$ be a pmf on $\{ L, \dots, R \}$ which satisfies (C1) -- (C3) with $R < \infty$. Let $X : \Omega \to \R$ be a random variable which satisfies $\PP \big( X \in \mathrm{spt}(p) \big) > 0$ as well as
\begin{align*}
\E \bigg| \frac{\Delta^- p(X)}{p(X)} \, X \cdot \mathds{1}\big\{ X \in \mathrm{spt}(p) \big\} \bigg| < \infty.
\end{align*}
Then, $X|p \sim p$ if, and only if,
\begin{align*}
\rho_{X|p}(k) = \E \bigg[ \frac{\Delta^- p(X)}{p(X)} \, \mathds{1}\{ X \leq k \} ~ \bigg| ~ X \in \mathrm{spt}(p) \bigg], \quad k \in \Z, k \leq R.
\end{align*}
\end{corollary}
The result follows from Corollary \ref{COR chara pmf general state space} upon choosing $\iota : \{ L, \dots, R \} \to \{ -R, \dots, -L \}$, $\iota(k) = -k$, and observing that
\begin{align*}
(\Delta^+ \widetilde{p})\big( \iota(X) \big)
= (\Delta^+ \widetilde{p})(-X)
= p(X - 1) - p(X)
= - \Delta^- p(X) .
\end{align*}
Note that Corollary \ref{COR chara pmf backward difference} can also be obtained via a different path. With a few technical changes in Definition \ref{DEF test functions} and Appendix \ref{APP SEC proof of density approach}, a $\Delta^-$-version of Theorem \ref{THM density approach} can be formulated \cite[see also][]{LS:2013}. Using this result and an adaptation of the proof of Theorem \ref{THM chara pmf forward difference} yields another proof of Corollary \ref{COR chara pmf backward difference}.
\begin{remark} \label{RMK chara theorem for pmf}
Whenever $X$ is assumed a priori to take values in $\{ L, \dots, R \}$, the conditioning on $X \in \mathrm{spt}(p)$ can be omitted, and when $- \infty < L < R < \infty$, the integrability condition on $X$ is trivially satisfied. As for the regularity assumptions (C1) -- (C3), notice that, by Corollary \ref{COR chara pmf general state space}, (C1) is mostly an issue of notation but not a hard restriction. Whenever we deal with discrete distributions that have finite support, conditions (C2) and (C3) are trivially satisfied. In the case of infinite support, assumption (C3) is easy to interpret: it guarantees that the statement of Theorem \ref{THM chara pmf forward difference} is consistent, as it ensures that a random variable $Z \sim p$ satisfies the integrability condition imposed on $X$. A drawback in terms of the assumptions is that we cannot give a general treatment of (C2), and that this condition can sometimes be difficult to check for a given distribution. A similar condition (with identical problems) is required by \cite{BE:2019:1} in the continuous setting. If $L > - \infty$ and $R = \infty$, then (C2) holds if, and only if,
\begin{align} \label{equiv. condition for C2}
\limsup_{k \, \to \, \infty} \bigg| \frac{\Delta^+p(k) \cdot (1 - P(k))}{p(k) \, p(k + 1)} \bigg| < \infty .
\end{align}
Similar considerations apply to other choices of $L$ and $R$, but this does not solve the problem in general. However, the Stolz-Ces\'{a}ro theorem \citep[see Theorem 2.7.1 of][]{CN:2014} provides a useful tool for checking the condition in practice, see Example \ref{EXA exponential polynomial}.
\end{remark}
\begin{remark}[Non-normalized models] \label{RMK non-normalized models}
As we have explained in the introduction, many statistical models, primarily in machine learning and physics, are too complex for the normalization constant of the distribution to be calculable. As estimation and testing procedures (e.g. the maximum likelihood estimator) normally rely on some knowledge about this constant, they may not be applicable to such models. Thus, we want to emphasize that our explicit characterizations do not need any knowledge about the normalization constants, and neither do any of the characterizations and statistical applications presented in subsequent sections.
\end{remark}
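As a concrete illustration of this point (the model anticipates Example \ref{EXA exponential polynomial}; the code is a sketch under that assumption), consider $p_\vartheta(k) \propto \exp( \vartheta_1 k + \dotso + \vartheta_d k^d )$: the ratio $\Delta^+ p_\vartheta(k) / p_\vartheta(k) = p_\vartheta(k+1)/p_\vartheta(k) - 1$ can be evaluated without ever computing the normalization constant, since the constant cancels in the quotient.

```python
import numpy as np

def ratio_exp_poly(k, theta):
    """Delta^+ p_theta(k) / p_theta(k) = p_theta(k+1)/p_theta(k) - 1 for the
    model p_theta(k) proportional to exp(theta_1*k + ... + theta_d*k^d).
    The normalization constant C(theta) cancels in the quotient, so it is
    never computed."""
    k = np.asarray(k, dtype=float)
    expo = sum(t * ((k + 1) ** j - k ** j)
               for j, t in enumerate(theta, start=1))
    return np.exp(expo) - 1.0
```

Hence none of the statistical procedures built on the identities above require knowledge of $C(\vartheta)$.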
\section{Characterizations via transformations of the probability mass function}
\label{SEC distri chara trafos of pmf}
We also obtain characterizing formulae for transformations of the pmf, such as distribution functions, characteristic functions, and probability generating functions. We focus on the identities for mass functions on the integers, but more general spaces can be treated along the lines of Corollary \ref{COR chara pmf general state space}.
\begin{prop}[Distribution functions] \label{PROP chara distribution functions}
Let $p$ be a pmf which satisfies (C1) -- (C3) with $L > - \infty$. Let $X : \Omega \to \R$ be a random variable with $\PP \big( X \in \mathrm{spt}(p) \big) > 0$ as well as
\begin{align} \label{integrability assumption on X}
\E \bigg| \frac{\Delta^+ p(X)}{p(X)} \, X \cdot \mathds{1}\big\{ X \in \mathrm{spt}(p) \big\} \bigg| < \infty,
\end{align}
and further denote by $F_{X|p}(k) = \PP \big( X \leq k \, | \, X \in \mathrm{spt}(p) \big)$ the distribution function of $X$ given $X \in \mathrm{spt}(p)$. Then, $X|p \sim p$ if, and only if,
\begin{align*}
F_{X|p}(k) = \E \bigg[ - \frac{\Delta^+ p(X)}{p(X)} \big( \min\{ X, k \} - L + 1 \big) ~ \bigg| ~ X \in \mathrm{spt}(p) \bigg] , \quad k \in \Z, k \geq L.
\end{align*}
\end{prop}
The proof of this proposition is provided in Appendix \ref{APP SEC proof of distribution function chara}. We continue with another characterization, based on the characteristic function. The proof (see Appendix \ref{APP SEC proof of characteristic function chara}) features the inversion formula for characteristic functions. In the continuous setting, the inversion formula requires an integrability condition that is not needed in the discrete setting. If one finds a way to handle this integrability condition, similar identities are conceivable for the continuous setting of \cite{BE:2019:1}.
\begin{prop}[Characteristic functions] \label{PROP chara characteristic functions}
Assume that $p$ is a pmf which satisfies (C1) -- (C3) with $L > - \infty$. Take $X : \Omega \to \R$ to be a random variable with $\PP \big( X \in \mathrm{spt}(p) \big) > 0$ such that assumption (\ref{integrability assumption on X}) holds. Denote by $\varphi_{X|p}(t) = \E \big[ e^{i t X} \, \big| \, X \in \mathrm{spt}(p) \big]$, $t \in \R$, the characteristic function of $X$ given $X \in \mathrm{spt}(p)$ (where $i$ is the complex unit). Then, $X|p \sim p$ if, and only if,
\begin{align*}
\varphi_{X|p}(t)
= \E \bigg[ - \frac{\Delta^+ p(X)}{p(X)} \cdot \frac{e^{i t L} - e^{i t (X + 1)}}{1 - e^{i t}} ~ \bigg| ~ X \in \mathrm{spt}(p) \bigg] , \quad t \in \R.
\end{align*}
\end{prop}
We conclude this section on transformations of the pmf with a characterization via the probability generating function, thus specializing to the case $\mathrm{spt}(p) = \N_0$. A proof is given in Appendix \ref{APP SEC proof of probability generating function chara}.
\begin{prop}[Generating functions] \label{PROP chara probability generating functions}
Let $p$ be a pmf that satisfies (C1) -- (C3) with $\mathrm{spt}(p) = \N_0$. Let $X : \Omega \to \N_0$ be a discrete random variable such that
\begin{align*}
\E \bigg| \frac{\Delta^+ p(X)}{p(X)} \, X \bigg| < \infty.
\end{align*}
Denote by $G_{X}(s) = \E [s^X]$, $s \in [0, 1)$, the (probability) generating function of $X$. Then, $X \sim p$ if, and only if,
\begin{align*}
G_{X}(s)
= \E \bigg[ - \frac{\Delta^+ p(X)}{p(X)} \cdot \frac{1 - s^{X + 1}}{1 - s} \bigg] , \quad s \in [0, 1).
\end{align*}
\end{prop}
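For a quick numerical illustration of the proposition (the Monte Carlo setup is ours), take the Poisson law with $\Delta^+ p(k)/p(k) = \lambda/(k+1) - 1$ and compare the empirical generating function with the right-hand side of the identity on a grid of values of $s$.

```python
import numpy as np

# G_X(s) = E[ (1 - lam/(X+1)) * (1 - s^(X+1)) / (1 - s) ] for X ~ Poisson(lam).
rng = np.random.default_rng(7)
lam = 1.5
x = rng.poisson(lam, size=200_000)

s_grid = [0.2, 0.5, 0.8]
gen = [np.mean(s ** x) for s in s_grid]                  # empirical G_X(s)
stein = [np.mean((1.0 - lam / (x + 1)) * (1.0 - s ** (x + 1)) / (1.0 - s))
         for s in s_grid]
```

Both columns also agree with the closed form $G_X(s) = e^{\lambda (s - 1)}$ up to Monte Carlo error.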
\section{Examples}
\label{SEC example distributions}
In this section, we provide examples that fit into our framework. For each distribution, we indicate why (C1) -- (C3) hold, and we explicitly state the characterization via Theorem \ref{THM chara pmf forward difference}, though sometimes in the unconditioned formulation. The characterizations via Propositions \ref{PROP chara distribution functions}, \ref{PROP chara characteristic functions}, and \ref{PROP chara probability generating functions} are not stated explicitly. We start by giving three infinite-support examples which are the subject of our statistical applications in the subsequent sections. More precisely, we discuss the Poisson and the negative binomial distribution as well as a discrete version of the exponential-polynomial model.
\begin{example}[Poisson distribution] \label{EXA Poisson}
The mass function of the Poisson distribution is given as $p(k) = \lambda^k \cdot e^{-\lambda} \, / \, k!$, $k \in \N_0$, for some rate parameter $\lambda > 0$. In this case, we obtain
\begin{align*}
\frac{\Delta^+ p(k)}{p(k)}
= \frac{\lambda}{k + 1} - 1, \quad k \in \N_0.
\end{align*}
Conditions (C1) and (C3) are obviously true. To see that (C2) holds, note that whenever $\tfrac{\lambda}{k + 2} < 1$, we have
\begin{align*}
\bigg| \frac{\Delta^{+} p(k) \cdot (1 - P(k))}{p(k) \, p(k + 1)} \bigg|
= \bigg| \frac{\lambda}{k + 1} - 1 \bigg| \sum_{\ell = 1}^\infty \frac{\lambda^{\ell - 1}}{(\ell + k)!} \, (k + 1)!
&\leq \bigg| \frac{\lambda}{k + 1} - 1 \bigg| \sum_{\ell = 0}^\infty \bigg( \frac{\lambda}{k + 2} \bigg)^\ell \\
&= \bigg| \frac{\lambda}{k + 1} - 1 \bigg| \cdot \frac{1}{1 - \tfrac{\lambda}{k + 2}} ,
\end{align*}
and therefore (\ref{equiv. condition for C2}) holds, which yields (C2). Theorem \ref{THM chara pmf forward difference} implies that a random variable $X : \Omega \to \N_0$ with $\E X < \infty$ has the Poisson distribution with parameter $\lambda$ if, and only if,
\begin{align*}
\rho_X(k)
= \E \bigg[ \bigg( 1 - \frac{\lambda}{X + 1} \bigg) \mathds{1}\{ X \geq k \} \bigg], \quad k \in \N_0.
\end{align*}
\end{example}
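A short Monte Carlo check of this characterization (illustration only) compares both sides on a simulated Poisson sample.

```python
import numpy as np

# For X ~ Poisson(lam): P(X = k) = E[(1 - lam/(X+1)) * 1{X >= k}].
rng = np.random.default_rng(0)
lam = 2.0
x = rng.poisson(lam, size=200_000)

emp_pmf = [np.mean(x == k) for k in range(4)]            # left-hand side
stein_pmf = [np.mean((1.0 - lam / (x + 1)) * (x >= k))   # right-hand side
             for k in range(4)]
```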
\begin{example}[Negative binomial distribution] \label{EXA negative binomial}
Let $p(k) = {k + r - 1 \choose k} (1 - q)^k \, q^r$, $k \in \N_0$, be the probability mass function of the negative binomial distribution with parameters $r > 0$ and $q \in (0, 1)$. An important special case arises for $r = 1$, where the negative binomial distribution reduces to the geometric distribution. These laws are frequently used in the analysis of arrival times. We have
\begin{align*}
\frac{\Delta^+ p(k)}{p(k)}
= \frac{r + k}{k + 1} \, (1 - q) - 1, \quad k \in \N_0.
\end{align*}
Condition (C1) is trivially satisfied, and (C3) is easily verified. We prove (\ref{equiv. condition for C2}) to show that (C2) is satisfied. To this end, observe that
\begin{align*}
\bigg| \frac{\Delta^{+} p(k) \cdot (1 - P(k))}{p(k) \, p(k + 1)} \bigg|
&= \bigg| \frac{r + k}{k + 1} \, (1 - q) - 1 \bigg| \sum_{\ell = 0}^\infty \frac{(\ell + k + r)! \cdot (k + 1)!}{(\ell + k + 1)! \cdot (k + r)!} \, (1 - q)^\ell \\
&= \bigg| \frac{r + k}{k + 1} \, (1 - q) - 1 \bigg| \sum_{\ell = 0}^\infty \frac{(r + k + \ell) \cdot (r + k + \ell - 1) \cdots (r + k + 1)}{(k + \ell + 1) \cdot (k + \ell) \cdots (k + 2)} \, (1 - q)^\ell.
\end{align*}
If $r \leq 1$, the sum on the right-hand side is bounded by $\sum_{\ell = 0}^\infty (1 - q)^\ell = \tfrac{1}{q}$. If $r > 1$, let $k$ be large enough so that $2 \cdot (r - 1) \, / \, (k + 2) < q \, / \, (1 - q)$, and observe that
\begin{align*}
\sum_{\ell = 0}^\infty \frac{(r + k + \ell) \cdots (r + k + 1)}{(k + \ell + 1) \cdots (k + 2)} \, (1 - q)^\ell
&= \sum_{\ell = 0}^\infty \bigg( 1 + \frac{r - 1}{k + \ell + 1} \bigg) \cdot \bigg( 1 + \frac{r - 1}{k + \ell} \bigg) \cdots \bigg( 1 + \frac{r - 1}{k + 2} \bigg) \cdot (1 - q)^\ell \\
&\leq \sum_{\ell = 0}^\infty \bigg( 1 + \frac{1}{2} \cdot \frac{q}{1 - q} \bigg)^\ell \, (1 - q)^\ell = \sum_{\ell = 0}^\infty \bigg(1 - \frac{q}{2} \bigg)^\ell
= \frac{2}{q} ,
\end{align*}
where the products in the sum are empty (hence equal to $1$) for $\ell = 0$. In any case, (\ref{equiv. condition for C2}) follows, so (C2) is valid. Theorem \ref{THM chara pmf forward difference} states that a discrete random variable $X : \Omega \to \N_0$ with $\E X < \infty$ follows the negative binomial law with parameters $r$ and $q$ if, and only if,
\begin{align*}
\rho_X(k)
= \E \bigg[ \bigg( 1 - \frac{r + X}{X + 1} \, (1 - q) \bigg) \mathds{1}\{ X \geq k \} \bigg], \quad k \in \N_0.
\end{align*}
Note that the statement by \cite{JKK:1993} (on p. 223) that "only a few characterizations have been obtained for the negative binomial distribution" appears to still hold true. For one recent characterization related to Stein's method, we refer to \cite{AH:2019}.
\end{example}
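The identity is again easy to check by simulation; to the best of our knowledge, NumPy's \texttt{negative\_binomial} sampler uses the same parameterization as above (number of failures before the $r$-th success, with real $r > 0$ permitted).

```python
import numpy as np

# For X ~ NB(r, q): P(X = k) = E[(1 - (r + X)/(X + 1) * (1 - q)) * 1{X >= k}].
rng = np.random.default_rng(1)
r, q = 2.5, 0.4
x = rng.negative_binomial(r, q, size=200_000)

emp_pmf = [np.mean(x == k) for k in range(4)]
stein_pmf = [np.mean((1.0 - (r + x) / (x + 1) * (1.0 - q)) * (x >= k))
             for k in range(4)]
```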
\begin{example}[Exponential-polynomial models] \label{EXA exponential polynomial}
We consider the following discrete exponential-polynomial parametric model given through
\begin{align*}
p_\vartheta(k)
= C(\vartheta)^{-1} \exp\big( \vartheta_1 k + \dotso + \vartheta_d k^d \big), \quad k \in \N,
\end{align*}
where
\begin{align*}
C(\vartheta)
= \sum_{k = 1}^\infty \exp\big( \vartheta_1 k + \dotso + \vartheta_d k^d \big) ,
\end{align*}
and $\vartheta = (\vartheta_1, \dots, \vartheta_d) \in \R^{d - 1} \times (- \infty, 0)$. This corresponds to a discrete exponential family in the canonical form with the sufficient statistic containing monomials up to order $d \in \N$, with $d \geq 2$. Clearly condition (C1) is satisfied and the restriction $\vartheta_d < 0$ ensures that (C3) holds for every $\vartheta$ as well as that $C(\vartheta) < \infty$.
We have
\begin{align*}
\frac{p_\vartheta(k + 1)}{p_\vartheta(k)}
= \exp\Big( \vartheta_1 + \vartheta_2 \big( (k + 1)^2 - k^2 \big) + \dotso + \vartheta_d \big( (k + 1)^d - k^d \big) \Big)
\longrightarrow 0 ,
\end{align*}
as $k \to \infty$, and the Stolz-Ces\'{a}ro theorem \cite[Theorem 2.7.1 of][]{CN:2014} yields
\begin{align*}
\lim_{k \, \to \, \infty} \frac{p_\vartheta(k + 1)}{1 - P_\vartheta(k)}
= - \lim_{k \, \to \, \infty} \frac{p_\vartheta(k + 2)}{p_\vartheta(k + 1)} + 1
= 1 .
\end{align*}
Consequently, we obtain
\begin{align*}
\limsup_{k \, \to \, \infty} \bigg| \frac{\Delta^+ p_\vartheta(k) \cdot (1 - P_\vartheta(k))}{p_\vartheta(k) \, p_\vartheta(k + 1)} \bigg|
\leq \limsup_{k \, \to \, \infty} \bigg| \frac{p_\vartheta(k + 1)}{p_\vartheta(k)} - 1 \bigg| \cdot \bigg| \frac{1 - P_\vartheta(k)}{p_\vartheta(k + 1)} \bigg|
= 1
\end{align*}
for every $\vartheta \in \R^{d - 1} \times (- \infty, 0)$, so (C2) holds. Finally, observe that, since $p_\vartheta(k + 1) \, / \, p_\vartheta(k) < 1$ for all but finitely many $k \in \N$, an $\N$-valued random variable $X$ with $\E X < \infty$ also satisfies
\begin{align*}
\E \bigg| \frac{\Delta^+ p_\vartheta(X)}{p_\vartheta(X)} \, X \bigg|
< \infty .
\end{align*}
Theorem \ref{THM chara pmf forward difference} yields that a random variable $X : \Omega \to \N$ with $\E X < \infty$ has the pmf $p_\vartheta$ if, and only if,
\begin{align*}
\rho_X(k)
= \E \bigg[ \bigg( - \exp\Big( \vartheta_1 + \vartheta_2 \big( (X + 1)^2 - X^2 \big) + \dotso + \vartheta_d \big( (X + 1)^d - X^d \big) \Big) + 1 \bigg) \mathds{1}\{ X \geq k \} \bigg], \quad k \in \N.
\end{align*}
In Section \ref{SEC discrete exponential-polynomial parameter estimation} we use this characterization to construct an estimation method for this type of parametric model, focusing on a two-parameter case where $d = 3$ and $\vartheta_2 = 0$ is fixed.
\end{example}
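For a numerical check of this characterization (the truncation of the support below is for simulation purposes only and plays no role in the characterization itself), consider the two-parameter case $d = 3$, $\vartheta_2 = 0$.

```python
import numpy as np

# p_theta(k) proportional to exp(t1*k + t3*k^3), k in N, with t3 < 0.
rng = np.random.default_rng(2)
t1, t3 = 0.5, -0.05
ks = np.arange(1, 51)                    # truncation; the tail is negligible
w = np.exp(t1 * ks + t3 * ks ** 3)
x = rng.choice(ks, size=200_000, p=w / w.sum())

# Delta^+ p_theta(X)/p_theta(X) does not involve the normalization constant:
ratio = np.exp(t1 + t3 * ((x + 1) ** 3 - x ** 3)) - 1.0
emp_pmf = [np.mean(x == k) for k in (1, 2, 3)]
stein_pmf = [np.mean(-ratio * (x >= k)) for k in (1, 2, 3)]
```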
We now take a look at the resulting characterizations for the uniform and binomial distributions.
\begin{example}[Uniform distribution] \label{EXA uniform}
Assume that $p$ is given through $p(k) = 1 / m$, for $k = 1, \dots, m$, $m \geq 2$. Then, (C1) -- (C3) are obviously satisfied (recall Remark \ref{RMK chara theorem for pmf}), and we have
\begin{align*}
\frac{\Delta^+ p(k)}{p(k)} = 0, \quad \text{for } k = 1, \dots, m - 1, \quad\quad \text{and} \quad \quad \frac{\Delta^+ p(m)}{p(m)} = -1.
\end{align*}
By Theorem \ref{THM chara pmf forward difference}, a random variable $X : \Omega \to \R$ with $\PP \big( X \in \{ 1, \dots, m \} \big) > 0$ satisfies $X|p \sim p$ if, and only if,
\begin{align*}
\rho_{X|p}(k) = \frac{\PP (X = m)}{\PP \big( X \in \{ 1, \dots, m \} \big)}, \quad k = 1, \dots, m,
\end{align*}
that is, if $\rho_{X|p}(k) = 1 / m$, for $k = 1, \dots, m$. This result is easily derived by a direct argument, so for the uniform distribution, our characterization contains no new information. A similar behavior was observed in the continuous setting by \cite{BE:2019:1}. Note that Corollary \ref{COR chara pmf backward difference} leads to the very same characterization as Theorem \ref{THM chara pmf forward difference}.
\end{example}
\begin{example}[Binomial distribution] \label{EXA binomial}
Let $p(k) = {m \choose k} q^k (1 - q)^{m - k}$, for $k = 0, \dots, m$, $m \geq 1$, and some fixed $q \in (0, 1)$. Then, we have
\begin{align*}
\frac{\Delta^+ p(k)}{p(k)} = \frac{q}{1 - q} \cdot \frac{m - k}{k + 1} - 1, \quad \text{for } k = 0, \dots, m - 1, \quad\quad \text{and} \quad\quad \frac{\Delta^+ p(m)}{p(m)} = -1.
\end{align*}
By Theorem \ref{THM chara pmf forward difference}, a random variable $X : \Omega \to \R$ with $\PP \big( X \in \{ 0, \dots, m \} \big) > 0$ satisfies $X|p \sim p$ if, and only if,
\begin{align*}
\rho_{X|p}(k)
= \frac{\sum_{\ell = k}^{m - 1} \big( 1 - \frac{q}{1 - q} \cdot \frac{m - \ell}{\ell + 1} \big) \, \PP(X = \ell) + \PP(X = m)}{\PP \big( X \in \{ 0, \dots, m \} \big)}, \quad k = 0, \dots, m.
\end{align*}
Moreover, we have
\begin{align*}
\frac{\Delta^- p(k)}{p(k)} = 1 - \frac{1 - q}{q} \cdot \frac{k}{m - k + 1}, \quad \text{for } k = 1, \dots, m, \quad\quad \text{and} \quad\quad \frac{\Delta^- p(0)}{p(0)} = 1,
\end{align*}
so Corollary \ref{COR chara pmf backward difference} yields that $X|p \sim p$ if, and only if,
\begin{align*}
\rho_{X|p}(k)
= \frac{\sum_{\ell = 1}^{k} \big( 1 - \frac{1 - q}{q} \cdot \frac{\ell}{m - \ell + 1} \big) \, \PP(X = \ell) + \PP(X = 0)}{\PP \big( X \in \{ 0, \dots, m \} \big)}, \quad k = 0, \dots, m.
\end{align*}
Thus the example of the binomial distribution shows that Theorem \ref{THM chara pmf forward difference} and Corollary \ref{COR chara pmf backward difference} do not always lead to the same identity in cases where both are applicable.
\end{example}
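The identity above rests on a finite telescoping sum and can be checked numerically. The following sketch (ours, not part of the original derivation; the helper names \texttt{binom\_pmf} and \texttt{rhs} are hypothetical) takes $X \sim \mathrm{Bin}(m, q)$ itself, so that $\rho_{X|p}(k) = p(k)$ and the denominator $\PP\big( X \in \{0, \dots, m\} \big)$ equals one:

```python
from math import comb

def binom_pmf(m, q):
    # p(k) = C(m, k) q^k (1 - q)^(m - k), k = 0, ..., m
    return [comb(m, k) * q ** k * (1 - q) ** (m - k) for k in range(m + 1)]

def rhs(p, m, q, k):
    # right-hand side of the characterization, specialized to X ~ Bin(m, q),
    # so that P(X = l) = p(l) and the denominator equals one
    s = sum((1 - q / (1 - q) * (m - l) / (l + 1)) * p[l] for l in range(k, m))
    return s + p[m]

m, q = 7, 0.35
p = binom_pmf(m, q)
max_err = max(abs(p[k] - rhs(p, m, q, k)) for k in range(m + 1))
print(max_err)  # of the order of machine precision
```

Since $p(k + 1) / p(k) = \frac{q}{1 - q} \cdot \frac{m - k}{k + 1}$, the sum on the right-hand side telescopes to $p(k)$ exactly, so the observed error is purely numerical.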
We conclude this section on examples by showing that discrete Gibbs measures, which describe physical systems with countably many states, also fall into our framework.
\begin{example}[Gibbs (or Boltzmann) distribution] \label{EXA Gibbs distribution}
We distinguish in our discussion between finite and infinite support.
\begin{itemize}
\item Assume that a given system can have $S \in \N$ states, $S \geq 2$, and let $V : \{1, \dots, S\} \to \R \cup \{ \infty \}$ be a map (called the energy function) which assigns to each state its corresponding energy. Another map $N : \{ 1, \dots, S \} \to \N$ assigns to each state the number of particles the system has in the given state. Let $\mu \in \R$ (the chemical potential), $T > 0$ (the temperature), and denote by $\kappa$ the Boltzmann constant ($\approx 1.380649 \cdot 10^{-23}$ joule per kelvin). The Gibbs distribution is given by
\begin{align*}
p(k) = \frac{1}{\Xi} \, \exp\bigg( \frac{\mu \, N(k) - V(k)}{\kappa \, T} \bigg), \quad k \in \{ 1, \dots, S \},
\end{align*}
where
\begin{align*}
\Xi = \Xi(\mu, T, N, V)
= \sum_{\ell = 1}^{S} \exp\bigg( \frac{\mu \, N(\ell) - V(\ell)}{\kappa \, T} \bigg),
\end{align*}
and where we assume that $V(k) < \infty$ for at least one $k \in \{ 1, \dots, S \}$ (which ensures $\Xi > 0$). In this setting with finitely many states, (C1) -- (C3) are trivially satisfied. We obtain $\Delta^+ p(S) \, / \, p(S) = -1$, and
\begin{align*}
\frac{\Delta^+ p(k)}{p(k)}
= \exp\bigg( \frac{V(k) - V(k + 1) + \mu \big( N(k + 1) - N(k) \big)}{\kappa \, T} \bigg) - 1, \quad k = 1, \dots, S - 1.
\end{align*}
Theorem \ref{THM chara pmf forward difference} yields that a random variable $X :\Omega \to \{ 1, \dots, S \}$ follows the Gibbs distribution as above if, and only if, for all $k = 1, \dots, S$,
\begin{align*}
\rho_X(k)
= \sum_{\ell = k}^{S - 1} \bigg( 1 - \exp\bigg( \frac{V(\ell) - V(\ell + 1) + \mu \big( N(\ell + 1) - N(\ell) \big)}{\kappa \, T} \bigg) \bigg) \PP\big( X = \ell \big) + \PP\big( X = S \big).
\end{align*}
\item The Gibbs distribution immediately generalizes to the case where $S = \infty$, that is, a system with infinitely many possible states. In this general setting, however, one has to make further assumptions on $\mu$, $T$, $N$, and $V$ to ensure that $\Xi < \infty$ and that (C2) and (C3) hold. One set of assumptions that ensures $\Xi < \infty$ and (C3) is that the number of particles is fixed, $N \equiv \overline{N} \in \N$, and that the probability of the system being in one of the states with higher index decreases sufficiently fast, or equivalently, that the energy grows fast enough. More precisely, it is sufficient to assume that there exists some $k_0 \in \N$ such that, for all $k \geq k_0$, we have $V(k) \geq c \cdot k^\alpha$, for some $c > 0$ and $\alpha \geq 1$. In order for (C2) to hold, we could additionally assume that there exist some $k_1 \in \N$ and $c^\prime > 0$ such that, for all $k \geq k_1$,
\begin{align*}
V(k) - V(k + n) \leq - c^\prime \, n, \quad n \in \N_0,
\end{align*}
as this implies
\begin{align*}
\bigg| \frac{\Delta^{+} p(k) \cdot \min\{ P(k), 1 - P(k) \}}{p(k) \, p(k + 1)} \bigg|
\leq 2 \sum_{n = 0}^\infty \bigg( e^{- \tfrac{c^\prime}{\kappa \, T}} \bigg)^n
< \infty, \quad k \geq k_1.
\end{align*}
One choice of $V$ (to satisfy all of the conditions) is thus $V(k) = C \cdot k$ for all $k$ larger than some fixed $k^* \in \N$ and some $C > 0$. The characterization via Theorem \ref{THM chara pmf forward difference} in the case of infinitely many possible states is similar to that in the finite case: We have, for any $k \in \N$,
\begin{align*}
\frac{\Delta^+ p(k)}{p(k)}
= \exp\bigg( \frac{V(k) - V(k + 1)}{\kappa \, T} \bigg) - 1.
\end{align*}
A random variable $X : \Omega \to \N$ with $\E X < \infty$ follows the (infinite states) Gibbs distribution if, and only if,
\begin{align*}
\rho_X(k)
= \E \bigg[ \bigg( 1 - \exp\bigg( \frac{V(X) - V(X + 1)}{\kappa \, T} \bigg) \bigg) \mathds{1}\{X \geq k\} \bigg] , \quad k \in \N.
\end{align*}
\end{itemize}
As the names of the quantities in the above display of the Gibbs distribution indicate, the model appears in statistical mechanics, see the reprint, \cite{G:2010}, of Josiah Willard Gibbs' work from 1902, or virtually any textbook on statistical mechanics. It also plays an important role in image analysis and processing, see \cite{L:2009}. A crucial observation about our characterizations is that the partition function $\Xi$ cancels completely.
\end{example}
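The finite-state characterization is easy to illustrate numerically; the sketch below is ours (states are indexed from $0$ in the code, and the values of $V$, $N$, $\mu$, and $\kappa T$ are arbitrary choices). It highlights that the right-hand side involves only differences of energies and particle numbers, so the partition function $\Xi$ is needed only to normalize $p$ itself:

```python
import math, random

random.seed(1)
S, mu, kT = 8, 0.4, 1.7               # kT stands for kappa * T; all values arbitrary
V = [random.uniform(0.0, 3.0) for _ in range(S)]   # energies, 0-based indexing
N = [random.randint(1, 4) for _ in range(S)]       # particle numbers per state

w = [math.exp((mu * N[k] - V[k]) / kT) for k in range(S)]  # unnormalized weights
Xi = sum(w)                            # partition function: used only to normalize
p = [wk / Xi for wk in w]              # Gibbs pmf

def rhs(k):
    # right-hand side of the characterization for X ~ p; note that Xi never
    # enters: only differences of V and N appear in the exponential
    s = 0.0
    for l in range(k, S - 1):
        ratio = math.exp((V[l] - V[l + 1] + mu * (N[l + 1] - N[l])) / kT)
        s += (1 - ratio) * p[l]
    return s + p[S - 1]

err = max(abs(p[k] - rhs(k)) for k in range(S))
print(err)  # of the order of machine precision
```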
\section{Goodness-of-fit testing for the Poisson distribution}
\label{SEC Poisson goodness-of-fit}
A first application of the characterization results from the previous sections is the construction of a test of fit for the Poisson distribution. Given a sample of $\N_0$-valued independent identically distributed (i.i.d.) random variables, the problem is to test the composite hypothesis that the sample comes from some Poisson distribution $\mathrm{Po}(\lambda)$ with an unknown rate parameter $\lambda > 0$, that is,
\begin{align*}
\mathbf{H_0} ~ : ~ \mathbb{P}^{X_1} \in \big\{ \mathrm{Po}(\lambda) : \lambda > 0 \big\}.
\end{align*}
This is a classical statistical problem, well studied in the literature. Apart from Pearson's $\chi^2$ test, see \cite{K:2013} for recent developments, the hitherto proposed tests are based on the (conditional) empirical distribution function, see \cite{BB:2019,GH:2000,H:1996,F:2012}, on the empirical probability generating function, see \cite{BH:1992,PW:2020,RO:1999}, on the integrated distribution function, see \cite{K:1999}, on a characterization by mean distance, see \cite{SR:2004}, on quadratic forms of score vectors, see \cite{I:2019}, on Charlier polynomials, see \cite{LW:2017}, on a ratio of conditional probabilities, see \cite{BB:2019}, and on relating first- and second-order moments, see \cite{KLP:1998}. For a survey of classical procedures and a comparative simulation study see \cite{GH:2000}. However, the construction of new and powerful methods is still of relevance: As \cite{N:2017} states on p.\,4 of his contribution, "[...] one should keep in mind that any hypothesis has to be tested with several possible criteria. The point of the matter is that with absolute confidence we can only reject it, while each new test which fails to reject the null-hypothesis gradually brings the statistician closer to the perception that this hypothesis is true".
The idea of our new method is to estimate the two quantities that appear in the characterization via Theorem \ref{THM chara pmf forward difference} as given in Example \ref{EXA Poisson}, and to compare these empirical quantities. Based on the sample $X_1, \dots, X_n$, let
\begin{align*}
\widehat{e}_n(k) = \frac{1}{n} \sum_{j = 1}^{n} \bigg( 1 - \frac{\widehat{\lambda}_n}{X_j + 1} \bigg) \mathds{1}\{ X_j \geq k \}, \quad k \in \N_0,
\end{align*}
be an estimator of the expectation that arises in the characterization, where $\widehat{\lambda}_n = n^{-1} \sum_{j = 1}^n X_j$ is a consistent estimator of the rate parameter. Also consider the empirical probability mass function,
\begin{align*}
\widehat{\rho}_n(k) = \frac{1}n \sum_{j = 1}^n \mathds{1}\{ X_j = k \}, \quad k \in \N_0,
\end{align*}
as an estimator of $\rho_{X_1}$. By Theorem \ref{THM chara pmf forward difference} (see Example \ref{EXA Poisson}), if the sample $X_1, \dots, X_n$ comes from a Poisson distribution, the absolute difference between $\widehat{e}_n(k)$ and $\widehat{\rho}_n(k)$ ought to be small for every $k \in \N_0$. On the other hand, if the sample does not come from the Poisson law, we expect their absolute difference to be large. Based on this heuristic, we suggest using as a test statistic the squared difference of $\widehat{e}_n$ and $\widehat{\rho}_n$ summed over $k \in \N_0$, that is,
\begin{align*}
T_n^{Po}
= \sum_{k = 0}^\infty \big( \widehat{e}_n(k) - \widehat{\rho}_n(k) \big)^2 ,
\end{align*}
and to reject the Poisson hypothesis $\mathbf{H_0}$ for large values of $T_n^{Po}$. Note that we do not need to introduce any weight functions to make the infinite sum in the definition of $T_n^{Po}$ converge, and observe that we choose the squared distance to obtain a finite double sum representation for $T_n^{Po}$, namely
\begin{align*}
T_n^{Po}
&= \frac{1}{n^2} \sum_{j, \, \ell = 1}^n \bigg[ \bigg( 1 - \frac{\widehat{\lambda}_n}{X_j + 1} \bigg) \Big( X_\ell - 1 - \widehat{\lambda}_n \Big) \, \mathds{1}\{ X_j \geq X_\ell \} \\
&\qquad\qquad\qquad + \Big( X_j + 1 - \widehat{\lambda}_n \Big) \bigg( 1 - \frac{\widehat{\lambda}_n}{X_\ell + 1} \bigg) \mathds{1}\{ X_j < X_\ell \} + \mathds{1}\{ X_j = X_\ell \} \bigg] ,
\end{align*}
which is a numerically stable representation, hence easily implemented on a computer. The calculation of $T_n^{Po}$ involves only straightforward algebra and consists mainly of writing the squared difference of $\widehat{e}_n(k)$ and $\widehat{\rho}_n(k)$ as a double sum, multiplying out the corresponding terms, and evaluating the sum over $k$ of the indicator functions.
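The agreement between the series definition of $T_n^{Po}$ and the double-sum representation can be checked numerically. The following Python sketch is ours (the function names are hypothetical); it exploits that all terms of the series with $k > \max_j X_j$ vanish, so the series can be truncated there:

```python
import numpy as np

def t_po_series(x):
    # direct definition: sum over k of (e_hat(k) - rho_hat(k))^2;
    # both terms vanish for k > max(x), so the series is truncated there
    lam = x.mean()
    ks = np.arange(x.max() + 1)
    e_hat = np.array([np.mean((1 - lam / (x + 1)) * (x >= k)) for k in ks])
    rho_hat = np.array([np.mean(x == k) for k in ks])
    return np.sum((e_hat - rho_hat) ** 2)

def t_po_double_sum(x):
    # finite double-sum representation from the display above
    n, lam = len(x), x.mean()
    a = 1 - lam / (x + 1)
    Xj, Xl = x[:, None], x[None, :]
    terms = (a[:, None] * (Xl - 1 - lam) * (Xj >= Xl)
             + (Xj + 1 - lam) * a[None, :] * (Xj < Xl)
             + (Xj == Xl))
    return terms.sum() / n ** 2

rng = np.random.default_rng(0)
x = rng.poisson(3.0, size=50)
print(t_po_series(x), t_po_double_sum(x))   # the two values agree
```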
\begin{remark}
The major advantage in using Theorem \ref{THM chara pmf forward difference} to construct the test is that the empirical quantities are integrable with respect to the counting measure $\sum_{k = 0}^{\infty} \delta_k$. Of course, Propositions \ref{PROP chara distribution functions}, \ref{PROP chara characteristic functions}, and \ref{PROP chara probability generating functions} provide similar heuristics for the construction of goodness-of-fit tests [see also \cite{BE:2019:2}, \cite{ABEV:2019}, and \cite{BE:2020} for the continuous setting], but the quantities are not necessarily integrable/summable and thus require the introduction of some weight function. What is more, it seems difficult to obtain explicit formulae as in the previous equation for $T_n^{Po}$, so the routines would rely on numerical integration, which is computationally costly. We must therefore leave it as a problem of further research to employ these characterizations via distribution, characteristic, or generating functions in the construction of goodness-of-fit tests.
\end{remark}
As a proof of concept, we carry out a simulation study in order to compare our new test of poissonity with established procedures. All simulations are performed in the statistical computing environment \texttt{R}, see \cite{r20}. We consider the sample size $n = 50$ and the nominal level of significance is set to $0.05$. Based on the methodology for asymptotic theory detailed by \cite{H:1996}, we expect the (limit) distribution of the test statistics considered in the following to depend on the unknown parameter $\lambda$. Consequently, we use for the implementation of the tests a similar parametric bootstrap procedure as the one suggested by \cite{GH:2000}. For a given sample $X_1, \ldots, X_n$ and a statistic $T_n$, simulate an approximate critical value $c_{n, B}^*$ for a level $\alpha$ test procedure as follows:
\begin{enumerate}
\item[1)] Calculate $\widehat{\lambda}_n(X_1, \ldots, X_n)$ and generate $B$ bootstrap samples of size $n$ with distribution $\mathrm{Po}(\widehat{\lambda}_n)$, i.e., generate i.i.d. $\mathrm{Po}(\widehat{\lambda}_n)$ random variables $X_{j, 1}^*, \ldots, X_{j, n}^*$, $j = 1, \ldots, B$.
\item[2)] Compute $T_{j, n}^* = T_n(X_{j, 1}^*, \ldots, X_{j, n}^*)$ for $j = 1, \ldots, B$.
\item[3)] Derive the order statistics $T_{1:B}^* \leq \ldots \leq T_{B:B}^*$ of $T_{1, n}^*, \ldots, T_{B, n}^*$ and calculate
\begin{equation*}
c_{n, B}^* = T_{k:B}^* + (1 - \alpha) \cdot \big( T_{(k + 1):B}^* - T_{k:B}^* \big),
\end{equation*}
where $k = \lfloor (1 - \alpha) \cdot B \rfloor$ and $\lfloor \cdot \rfloor$ denotes the floor function.
\item[4)] Reject the hypothesis $\mathbf{H_0}$ if $T_n(X_1, \ldots, X_n) > c_{n, B}^*$.
\end{enumerate}
This parametric bootstrap procedure was used to generate the critical values for all of the following tests. We consider the test of \cite{BH:1992} based on the statistic
\begin{equation*}
\mbox{BH}
= \frac{1}{n} \sum_{i, j = 1}^n \bigg( \frac{\widehat{\lambda}_n^2}{X_i + X_j + 1} + \frac{X_i \, X_j}{X_i + X_j - 1} \bigg) - \widehat{\lambda}_n \Bigg( n - \frac{1}{n} \Bigg( \sum_{j = 1}^n \mathds{1}\{ X_j = 0 \} \Bigg)^2 \Bigg).
\end{equation*}
The mean distance test by \cite{SR:2004} is based on
\begin{equation*}
\mbox{SR}
= n \sum_{j = 0}^\infty \Big( \widehat{M}_n(j) - \mathrm{P}(j; \widehat{\lambda}_n) \Big)^2 \mathrm{p}(j; \widehat{\lambda}_n),
\end{equation*}
where $\widehat{M}_n(j)$ is an estimator of the CDF based on the mean distance and $\mathrm{P}(j; \lambda)$ (resp. $\mathrm{p}(j; \lambda)$) denotes the distribution function (resp. the pmf) of $\mathrm{Po}(\lambda)$. Note that $\mbox{SR}$ is implemented in the \texttt{R}-package \texttt{energy}, see \cite{SR:2019}. The test of \cite{ROP:1991} is based on
\begin{equation*}
\mbox{RU}
= \frac{1}{n} \sum_{i, j = 1}^n \frac{1}{X_i + X_j + 1} + \frac{n \cdot (1 - e^{- 2 \widehat{\lambda}_n})}{2 \, \widehat{\lambda}_n} - 2 \sum_{i = 1}^n \Bigg( \frac{(-1)^{X_i} \cdot X_i! \cdot (1 - e^{- \widehat{\lambda}_n})}{\widehat{\lambda}_n^{X_i + 1}} + \sum_{j = 1}^{X_i} \frac{(-1)^{j + 1} \cdot X_i!}{(X_i - j + 1)! \cdot \widehat{\lambda}_n^{j}} \Bigg).
\end{equation*}
Note that in the original paper of \cite{ROP:1991} and in a slight handwritten correction thereof available on the internet, as well as in the work of \cite{GH:2000}, the explicit formula of the $\mbox{RU}$-statistic contains errors. We have corrected and numerically checked the formula given above against the integral representation used to introduce the test. The integrated distribution function based tests of \cite{K:1999} are defined via
\begin{equation*}
\mbox{K}_1
= \sqrt{n} \Bigg( \sum_{j = 0}^M \Big| \widehat{F}_n(j) - \mathrm{P}(j; \widehat{\lambda}_n) \Big| + \widehat{\lambda}_n - \sum_{j = 0}^M \Big( 1 - \mathrm{P}(j; \widehat{\lambda}_n) \Big) \Bigg)
\end{equation*}
and
\begin{equation*}
\mbox{K}_2
= \sqrt{n} \sup_{1 \, \leq \, k \, \leq \, M} \Bigg| \sum_{j = 0}^{k - 1} \Big( \widehat{F}_n(j) - \mathrm{P}(j; \widehat{\lambda}_n) \Big) \Bigg|,
\end{equation*}
where $M = \max\{ X_1, \ldots, X_n \}$ and $\widehat{F}_n$ is the empirical distribution function of $X_1, \ldots, X_n$. For the Kolmogorov-Smirnov statistic and the modified Cram\'{e}r-von Mises statistic, we follow the representations given by \cite{GH:2000}, namely
\begin{align*}
\mbox{KS}
= \sqrt{n} \sup_{0 \, \leq \, k \, \leq \, M} \Big| \widehat{F}_n(k) - \mathrm{P}(k; \widehat{\lambda}_n) \Big|
\end{align*}
and
\begin{align*}
\mbox{CM}
= n \sum_{j = 0}^M \Big( \widehat{F}_n(j) - \mathrm{P}(j; \widehat{\lambda}_n) \Big)^2 \cdot \frac{1}{n} \sum_{k = 1}^n \mathds{1}\{ X_k = j \}.
\end{align*}
The simulation study consists of the following 45 representatives of families of distributions. In order to show that all the considered testing procedures maintain the nominal level $\alpha$ of 5\%, we consider the Po$(\lambda)$ distribution with $\lambda\in\{1, 5, 10, 30\}$. As examples of alternative distributions, we consider the discrete uniform distribution $\mathcal{U}\{ 0, 1, \ldots, m \}$ with $m \in \{ 1, 2, 3, 5, 6 \}$, several different instances of the binomial distribution Bin$(m, q)$, several Poisson mixtures of the form PP$(q; \vartheta_1, \vartheta_2) = q \cdot \mbox{Po}(\vartheta_1) + (1 - q) \cdot \mbox{Po}(\vartheta_2)$, a $0.9/0.1$ mixture of Po$(3)$ and point mass in $0$ denoted by Po$(3)\delta_0$, discrete Weibull distributions W$(\vartheta_1, \vartheta_2)$, zero-modified Poisson distributions zmPo$(\lambda, q)$, the zero-truncated Poisson distributions ztPo$(\lambda)$ with $\lambda \in \{2, 3, 5\}$, and the absolute discrete normal distribution $|\mbox{N}(\mu, 1)|$ with $\mu \in \{0, 2, 3\}$. Note that most distributions were generated by the packages \texttt{extraDistr}, see \cite{W:2019}, and \texttt{actuar}, see \cite{DGP:2008}, and that a significant part of these distributions can also be found in the simulation study presented by \cite{GH:2000}. Furthermore, we note that the chosen design of simulation parameters coincides with that of the study by \cite{GH:2000}, which facilitates the comparison with other tests of poissonity not considered here.
Every entry in Table \ref{tab:simresults} is based on 100000 repetitions and 500 bootstrap samples of size $50$. All of the considered procedures maintain the significance level $\alpha = 5\%$ under the hypothesis, which supports the statement that the parametric bootstrap procedure is well calibrated. Overall, the best-performing tests are K$_2$, BH, and SR. The new test based on $T_n^{Po}$ is competitive with the stated procedures, although it never outperforms them all at once for the considered alternatives.
\begin{table*}[h!]
\small
\onehalfspacing
\centering
\begin{tabular}{lrrrrrrrrr}
\toprule
{Distr.\,\,/\,\,Test} & & \multicolumn{1}{c}{$T_n^{Po}$} & \multicolumn{1}{c}{BH} & \multicolumn{1}{c}{SR} & \multicolumn{1}{c}{RU} & \multicolumn{1}{c}{K$_1$} & \multicolumn{1}{c}{K$_2$} & \multicolumn{1}{c}{KS} & \multicolumn{1}{c}{CM} \\
\cmidrule[0.4pt](r{0.125em}){1-1}
\cmidrule[0.4pt](lr{0.125em}){3-3}
\cmidrule[0.4pt](lr{0.125em}){4-4}
\cmidrule[0.4pt](lr{0.125em}){5-5}
\cmidrule[0.4pt](l{0.25em}){6-6}
\cmidrule[0.4pt](l{0.25em}){7-7}
\cmidrule[0.4pt](l{0.25em}){8-8}
\cmidrule[0.4pt](l{0.25em}){9-9}
\cmidrule[0.4pt](l{0.25em}){10-10}
Po$(1)$ & & 5 & 5 & 5 & 5 & 5 & 5 & 5 & 5 \\
\rowcolor[gray]{0.925}
Po$(5)$ & & 5 & 5 & 5 & 5 & 5 & 5 & 5 & 5 \\
Po$(10)$ & & 5 & 5 & 5 & 5 & 5 & 5 & 5 & 5 \\
\rowcolor[gray]{0.925}
Po$(30)$ & & 5 & 5 & 5 & 5 & 5 & 5 & 5 & 5 \\[1mm]
$\mathcal{U}\{0,1\}$& & 99 & 99 & 99 & 99 & 99 & 99 & 99 & 99 \\
\rowcolor[gray]{0.925}
$\mathcal{U}\{0,1,2\}$ & & 39 & 9 & 22 & 15 & 64 & 68 & 50 & 58 \\
$\mathcal{U}\{0,1,2,3\}$ & & 46 & 33 & 20 & 27 & 61 & 16 & 45 & 51 \\
\rowcolor[gray]{0.925}
$\mathcal{U}\{0,1,2,3,4,5\}$ & & 69 & 65 & 58 & 62 & 75 & 39 & 60 & 63 \\
$\mathcal{U}\{0,1,2,3,4,5,6\}$ & & 85 & 85 & 83 & 85 & 86 & 66 & 72 & 76 \\[1mm]
\rowcolor[gray]{0.925}
Bin$(2, 0.5)$ & & 81 & 81 & 89 & 87 & 86 & 90 & 83 & 81 \\
Bin$(4, 0.25)$ & & 18 & 22 & 23 & 24 & 18 & 22 & 21 & 15 \\
\rowcolor[gray]{0.925}
Bin$(10, 0.1)$ & & 7 & 7 & 7 & 7 & 6 & 7 & 7 & 6 \\
Bin$(10, 0.5)$ & & 57 & 52 & 49 & 52 & 82 & 88 & 60 & 68 \\
\rowcolor[gray]{0.925}
Bin$(1, 0.5)$ & & 73 & 77 & 82 & 81 & 80 & 82 & 76 & 77 \\
Bin$(2, 2/3)$ & & 34 & 38 & 44 & 44 & 43 & 45 & 37 & 39 \\
\rowcolor[gray]{0.925}
Bin$(3, 0.75)$ & & 19 & 22 & 26 & 26 & 26 & 27 & 21 & 23 \\
Bin$(9, 0.9)$ & & 6 & 7 & 8 & 8 & 8 & 8 & 7 & 8 \\
\rowcolor[gray]{0.925}
Bin$(5, 0.5)$ & & 82 & 85 & 80 & 84 & 88 & 89 & 67 & 71 \\
Bin$(10, 2/3)$ & & 41 & 45 & 41 & 44 & 48 & 50 & 28 & 31 \\
\rowcolor[gray]{0.925}
Bin$(15, 0.75)$ & & 23 & 27 & 26 & 26 & 28 & 30 & 16 & 18 \\
Bin$(45, 0.9)$ & & 8 & 9 & 10 & 9 & 9 & 8 & 6 & 7 \\[1mm]
\rowcolor[gray]{0.925}
PP$(0.5; 2,5)$ & & 64 & 64 & 69 & 65 & 72 & 74 & 53 & 57 \\
PP$(0.5; 3,5)$ & & 17 & 19 & 20 & 19 & 20 & 21 & 12 & 14 \\
\rowcolor[gray]{0.925}
PP$(0.25; 1,5)$ & & 93 & 95 & 96 & 95 & 87 & 88 & 75 & 73 \\
PP$(0.05; 1,5)$ & & 23 & 33 & 32 & 32 & 13 & 12 & 8 & 7 \\
\rowcolor[gray]{0.925}
PP$(0.01; 1,5)$ & & 7 & 9 & 9 & 9 & 6 & 5 & 5 & 5 \\[1mm]
Po$(3)\delta_0$ & & 54 & 62 & 54 & 59 & 32 & 31 & 32 & 26 \\[1mm]
\rowcolor[gray]{0.925}
W$(0.5, 1)$ & & 73 & 77 & 82 & 81 & 80 & 82 & 76 & 77 \\
W$(0.25, 1)$ & & 22 & 24 & 27 & 28 & 26 & 26 & 26 & 25 \\
\rowcolor[gray]{0.925}
W$(0.5, 2)$ & & 49 & 52 & 52 & 51 & 48 & 52 & 51 & 52 \\
W$(0.25, 2)$ & & 8 & 8 & 7 & 6 & 6 & 8 & 7 & 10 \\
\rowcolor[gray]{0.925}
W$(0.75, 2)$ & & 28 & 32 & 35 & 35 & 26 & 32 & 30 & 21 \\
W$(0.1, 1)$ & & 10 & 10 & 10 & 8 & 10 & 10 & 10 & 10 \\
\rowcolor[gray]{0.925}
W$(0.9, 3)$ & & 97 & 97 & 99 & 99 & 98 & 99 & 97 & 93 \\[1mm]
zmPo$(1, 0.1)$ & & 91 & 93 & 90 & 92 & 81 & 84 & 90 & 64 \\
\rowcolor[gray]{0.925}
zmPo$(1, 0.5)$& & 17 & 18 & 19 & 19 & 19 & 18 & 18 & 19 \\
zmPo$(1, 0.8)$ & & 71 & 72 & 74 & 74 & 74 & 74 & 74 & 74 \\
\rowcolor[gray]{0.925}
zmPo$(2, 0.1)$ & & 8 & 9 & 8 & 8 & 6 & 6 & 6 & 6 \\
zmPo$(3, 0.1)$ & & 24 & 30 & 24 & 27 & 13 & 12 & 11 & 9 \\[1mm]
\rowcolor[gray]{0.925}
ztPo$(2)$ & & 93 & 99 & 83 & 95 & 38 & 39 & 56 & 19 \\
ztPo$(3)$ & & 12 & 18 & 18 & 18 & 9 & 10 & 7 & 9 \\
\rowcolor[gray]{0.925}
ztPo$(5)$ & & 4 & 1 & 1 & 1 & 5 & 5 & 5 & 5 \\[1mm]
$|\mbox{N}(0, 1)|$ & & 45 & 48 & 48 & 50 & 42 & 46 & 47 & 41 \\
\rowcolor[gray]{0.925}
$|\mbox{N}(2, 1)|$ & & 44 & 46 & 59 & 53 & 54 & 61 & 42 & 37 \\
$|\mbox{N}(3, 1)|$ & & 88 & 78 & 94 & 86 & 96 & 98 & 85 & 90 \\
\bottomrule
\end{tabular}
\caption{Empirical rejection rates of the tests of poissonity (sample size $n=50$, significance level $\alpha=0.05$).}\label{tab:simresults}
\end{table*}
\newpage
\section{Parameter estimation in the family of negative binomial distributions}
\label{SEC negative binomial goodness-of-fit}
The characterizations we employ contain information about the underlying probability law and lead to empirical discrepancy measures that are close to zero if the distribution generating the data is the one stated in the characterization. These measures can be used to estimate the parameters of the considered parametric family of distributions. To illustrate this point, we propose a minimum distance estimation procedure for the family of negative binomial distributions. Our objective is to estimate the unknown parameters $q \in (0, 1)$ and $r > 0$ of a negative binomial distribution based on an i.i.d. $\N_0$-valued sample $X_1, \dots, X_n$. Estimation in this particular family is not trivial, since \cite{AEE:1992} have proved a conjecture of Anscombe dating back to $1950$, namely that the maximum likelihood equations have a unique solution if, and only if, $\overline{X}_n < S_n^2$, with $\overline{X}_n = n^{-1} \sum_{j = 1}^n X_j$, the sample mean, and $S_n^2 = n^{-1} \sum_{j = 1}^n ( X_j - \overline{X}_n )^2$, the sample variance. However, as \cite{JKK:1993} state in their Section 8.3, so-called "[...] underdispersed samples [...] will occasionally be encountered, even when a negative binomial model is appropriate." The moment estimators defined by $\widetilde{q}_n = \overline{X}_n / S_n^2$ and
\begin{align*}
\widetilde{r}_n
= \frac{\big( \overline{X}_n \big)^2}{(1 - \widetilde{q}_n) \, S_n^2}
= \frac{\big( \overline{X}_n \big)^2}{S_n^2 - \overline{X}_n},
\end{align*}
see displays (5.49) and (5.50) of \cite{JKK:1993}, perform as poorly as the maximum likelihood estimators, since, in underdispersed samples, they lead to negative values of $\widetilde{r}_n$ or to values of $\widetilde{q}_n$ greater than one, see the following simulation study and Figure \ref{fig:bias2}.
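This failure mode is easy to reproduce: whenever $S_n^2 < \overline{X}_n$, the formulae above automatically give $\widetilde{q}_n > 1$ and $\widetilde{r}_n < 0$. A minimal sketch (ours; the helper name \texttt{moment\_estimators} is hypothetical):

```python
def moment_estimators(x):
    # moment estimators from displays (5.49)/(5.50):
    # q_tilde = mean / var, r_tilde = mean^2 / (var - mean)
    n = len(x)
    mean = sum(x) / n
    var = sum((xi - mean) ** 2 for xi in x) / n
    return mean ** 2 / (var - mean), mean / var   # (r_tilde, q_tilde)

# underdispersed sample: sample variance (0.16) smaller than sample mean (2.2)
x = [2, 2, 2, 2, 3]
r_tilde, q_tilde = moment_estimators(x)
print(r_tilde, q_tilde)  # r_tilde < 0 and q_tilde > 1: outside the parameter space
```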
The heuristic for our new method is similar to that of the previous section, again based on Theorem \ref{THM chara pmf forward difference} (see also Example \ref{EXA negative binomial}). Thus, we define
\begin{align*}
\widehat{e}_n(k;r,q) = \frac{1}{n} \sum_{j = 1}^{n} \bigg( 1 - \frac{r + X_j}{X_j + 1} \, ( 1 - q ) \bigg) \mathds{1}\{ X_j \geq k \}, \quad k \in \N_0,
\end{align*}
and let $\widehat{\rho}_n$ be as in the previous section. Similar to the test for the Poisson distribution, we consider the empirical discrepancy measure
\begin{align*}
S_n^{NB}(r,q)
&= \sum_{k = 0}^\infty \big( \widehat{e}_n(k;r,q) - \widehat{\rho}_n(k) \big)^2 \\
&= \frac{1}{n^2} \sum_{j, \, \ell = 1}^n \bigg[ \bigg( 1 - \frac{r + X_j}{X_j + 1} \, ( 1 - q ) \bigg) \Big( q \big( r + X_\ell \big) - r - 1 \Big) \, \mathds{1}\{ X_j \geq X_\ell \} \\
&\qquad\qquad\qquad + \Big( q \big( r + X_j \big) - r + 1 \Big) \bigg( 1 - \frac{r + X_\ell}{X_\ell + 1} \, \big( 1 - q \big) \bigg) \mathds{1}\{ X_j < X_\ell \} + \mathds{1}\{ X_j = X_\ell \} \bigg] ,
\end{align*}
where the proposed estimators for $(r,q)$ are defined by
\begin{equation}\label{eq:MDE}
(\widehat{r}_n, \widehat{q}_n)
= \mbox{argmin}_{(r, q)} S_n^{NB}(r, q).
\end{equation}
In this particular example we expect that it is possible to minimize the objective, which is quadratic in each of $r$ and $q$, explicitly to obtain closed formulae for the estimators. However, in the following section we cannot hope for an explicit solution to the optimization problem, and for reasons of consistency of the presentation, we use a numerical routine to find the values of the estimators in both cases. Note that similar estimators for parametric families of continuous distributions are investigated by \cite{BEK:2019}.
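As an illustration (ours, not the implementation used in the simulation study below), the double-sum representation of $S_n^{NB}$ and its minimization can be sketched in Python; for simplicity, a coarse grid search over $(r, q)$ stands in for the constrained \texttt{L-BFGS-B} routine:

```python
import numpy as np

def s_nb(x, r, q):
    # double-sum representation of S_n^{NB}(r, q) from the display above
    n = len(x)
    a = 1 - (r + x) / (x + 1) * (1 - q)
    Xj, Xl = x[:, None], x[None, :]
    terms = (a[:, None] * (q * (r + Xl) - r - 1) * (Xj >= Xl)
             + (q * (r + Xj) - r + 1) * a[None, :] * (Xj < Xl)
             + (Xj == Xl))
    return terms.sum() / n ** 2

rng = np.random.default_rng(2)
x = rng.negative_binomial(2.0, 0.3, size=200)   # NumPy's p plays the role of q here

# coarse grid search over (r, q) as a stand-in for the constrained optimizer
rs = np.linspace(0.5, 5.0, 19)
qs = np.linspace(0.05, 0.95, 19)
vals = np.array([[s_nb(x, r, q) for q in qs] for r in rs])
i, j = np.unravel_index(vals.argmin(), vals.shape)
print(rs[i], qs[j])   # minimum distance estimate of (r, q) on the grid
```

By construction, the grid-search estimate always stays inside the parameter space $(0, \infty) \times (0, 1)$, mirroring the role of the box constraints in the optimization routine.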
For a comparison of the two presented methods we conduct a simulation study in \texttt{R} and use the \texttt{optim} routine to find the minimal values in (\ref{eq:MDE}). The option \texttt{method} was fixed to \texttt{L-BFGS-B}, thus choosing an implementation of the routine suggested by \cite{BLNZ:1995}, and the maximum number of iterations was set to \texttt{maxit = 1000}. As starting values for the optimization routine we choose independent uniformly distributed random numbers $r_{s} \sim \mathcal{U}(1, 3)$ and $q_s \sim \mathcal{U}(0.1, 0.9)$. For different $(r_0, q_0) \in (0, \infty) \times (0, 1)$ we simulate $200$ i.i.d. samples of size $n = 100$ from a negative binomial distribution with parameters $(r_0, q_0)$ and calculate the minimum distance estimators $(\widehat{r}_n, \widehat{q}_n)$ as well as the moment estimators $(\widetilde{r}_n, \widetilde{q}_n)$. The bias of the estimation is then derived by subtracting the underlying 'true' parameters $(r_0, q_0)$. In Figures \ref{fig:bias1} and \ref{fig:bias2} the results of the different simulations are plotted, with estimation results of the moment estimators plotted as black crosses and the results of the minimum distance estimators as red circles. Figure \ref{fig:bias1} shows that for small values of $q_0$, both procedures perform comparably, although the values of the moment estimators seem to scatter a little more than those of the minimum distance estimators. A completely different picture emerges in Figure \ref{fig:bias2}, where values of $q_0$ in the neighborhood of $1$ and greater values of $r_0$ are considered. The moment estimators $(\widetilde{r}_n, \widetilde{q}_n)$ regularly produce values which are clearly outside of the defined parameter space $(0, \infty) \times (0, 1)$, as opposed to the minimum distance estimators $(\widehat{r}_n, \widehat{q}_n)$, which do not show this behavior due to the optimization constraints.
Nevertheless, some convergence failures of the optimization routine did occur; they are not exclusively related to underdispersed samples and happen only for somewhat extreme parameter configurations. We chose to visually assess the quality of estimation, since empirical versions of the bias and mean squared error are very sensitive to large discrepancies and hence did not provide valuable information on the quality of the estimation procedures. It would be of interest to establish theoretical results for the estimators $(\widehat{r}_n, \widehat{q}_n)$, such as consistency or a central limit theorem type asymptotic distribution.
\begin{figure}[b!]
\centering
\includegraphics[scale = 0.65]{NegBinFig1.pdf}
\caption{Biases of the minimum distance (red) and method of moments (black) estimation procedures simulated for different parameters, $(r, q)\in\{2,5,10\}\times\{0.05,0.25,0.5\}$, of the negative binomial distribution with sample size $n = 100$ and $200$ repetitions of the simulations. A red circle represents the value of $(\widehat{r}_n-r_0,\widehat{q}_n-q_0)$, while a black cross stands for $(\widetilde{r}_n-r_0,\widetilde{q}_n-q_0)$. The assumed true value $(r_0,q_0)$ in every subfigure is highlighted by the intersection of the dotted lines.} \label{fig:bias1}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale = 0.65]{NegBinFig2.pdf}
\caption{Biases of the minimum distance (red) and method of moments (black) estimation procedures simulated for different parameters, $(r, q)\in\{20,30,50\}\times\{0.8,0.9,0.95\}$, of the negative binomial distribution with sample size $n = 100$ and $200$ repetitions of the simulations. A red circle represents the value of $(\widehat{r}_n-r_0,\widehat{q}_n-q_0)$, while a black cross stands for $(\widetilde{r}_n-r_0,\widetilde{q}_n-q_0)$. The assumed true value $(r_0,q_0)$ in every subfigure is highlighted by the intersection of the dotted lines. The number of cases of underdispersed samples, the number of convergence failures of the optimization routine, and the number of cases where both occurred are stated in the plot.} \label{fig:bias2}
\end{figure}
\section{Parameter estimation in discrete exponential-polynomial models}
\label{SEC discrete exponential-polynomial parameter estimation}
In this final section we present an application to a non-normalized model, namely parameter estimation in the discrete exponential-polynomial models introduced in Example \ref{EXA exponential polynomial}. We follow \cite{BEK:2019}, who apply the continuous version of our estimation method to continuous exponential-polynomial models. In their work, they compare the method with two other methods for parameter estimation in non-normalized continuous models. More specifically, they implemented the score matching approach of \cite{H:2007} as well as noise-contrastive estimators from \cite{GH:2012}. As another contribution that focuses on the continuous exponential-polynomial distribution and the corresponding parameter estimation problem, let us mention \cite{HT:2016}. In our search through the literature we have found only a few methods for parameter estimation in the discrete version of the model. Contrastive divergence methods based on the initial proposal by \cite{H:2002} can be applied in principle, though they do not avoid dealing with the normalization constant $C(\vartheta)$, and \cite{Ly:2009} proposes a discrete version of the score matching approach but does not give details on its implementation.
More recently, \cite{TK:2017} proposed a method, based on suitable empirically localized homogeneous divergences, that avoids any calculation or approximation of the normalization constant. We use this latter method as a comparison for our approach.
Assume that $X_1, \dots, X_n$ is an i.i.d. $\N$-valued sample from the exponential-polynomial model $p_{\vartheta^{(0)}}$ in Example \ref{EXA exponential polynomial} with some unknown parameter $\vartheta^{(0)} \in \R^{d - 1} \times (- \infty, 0)$ (with $d \in \N$, $d \geq 2$, fixed and known). We seek to estimate $\vartheta^{(0)}$ based on $X_1, \dots, X_n$. In close analogy to the previous section, we consider $\widehat{\rho}_n$ as before, and put
\begin{align*}
\widehat{e}_n(k ; \vartheta)
= \frac{1}{n} \sum_{j = 1}^n \bigg( 1 - \exp\Big( \vartheta_1 + \vartheta_2 \big( (X_j + 1)^2 - X_j^2 \big) + \dotso + \vartheta_d \big( (X_j + 1)^d - X_j^d \big) \Big) \bigg) \mathds{1}\{ X_j \geq k \}, \quad k \in \N.
\end{align*}
We define the empirical discrepancy measure
\begin{align*}
S_n^{PE}(\vartheta)
= \sum_{k = 1}^\infty \big( \widehat{e}_n(k; \vartheta) - \widehat{\rho}_n(k) \big)^2 .
\end{align*}
In line with Theorem \ref{THM chara pmf forward difference}, or more precisely, Example \ref{EXA exponential polynomial}, we propose as an estimator
\begin{equation}\label{eq:exppoly1}
\widehat{\vartheta}_n
= \mbox{argmin}_{\vartheta} S_n^{PE}(\vartheta) .
\end{equation}
To see if this approach leads to sensible estimators, we conduct simulations in a two-parameter special case of the model. Following the continuous-case simulation setting of \cite{BEK:2019}, we consider $d = 3$ but fix $\vartheta_2 = 0$, thus effectively estimating the parameters of the parametric family given through
\begin{equation}\label{eq:exppoly2}
p_{(\vartheta_1, \vartheta_3)}(k)
= C(\vartheta_1, \vartheta_3)^{-1} \exp\big( \vartheta_1 k + \vartheta_3 k^3 \big), \quad k \in \N, \quad \vartheta_1 \in \R, ~ \vartheta_3 < 0 ,
\end{equation}
which, though simpler than the general case, is still a non-normalized model and thus inaccessible to explicit maximum likelihood estimation. The discrepancy measure $S_n^{PE}(\vartheta)$ can be calculated as
\begin{align*}
S_n^{PE}(\vartheta)
= \frac{1}{n^2} \sum_{j, \ell = 1}^n \Big[& \big( E_j(\vartheta_1, \vartheta_3) - 1 \big) \big( E_\ell(\vartheta_1, \vartheta_3) \cdot X_\ell - X_\ell + 2 \big) \mathds{1}\{ X_j \geq X_\ell \} + \mathds{1}\{ X_j = X_\ell \} \\
&+ \big( E_j(\vartheta_1, \vartheta_3) - 1 \big) \big( E_\ell(\vartheta_1, \vartheta_3) - 1 \big) \cdot X_j \cdot \mathds{1}\{ X_j < X_\ell \} \Big] ,
\end{align*}
where
\begin{align*}
E_i (\vartheta_1, \vartheta_3)
= \exp\big( \vartheta_1 + \vartheta_3 + 3 \vartheta_3 X_i + 3 \vartheta_3 X_i^2 \big), \quad i = 1, \dots, n .
\end{align*}
For a comparison, we consider the estimator proposed by \cite{TK:2017}. For positive constants $\alpha,\alpha',\gamma>0$, with $\alpha>\alpha'$, and $\bar{\alpha} = (\alpha + \gamma\alpha') \, / \, (1+\gamma)$, their estimator is given as
\begin{align}\label{eq:takenouchi}
\widetilde{\vartheta}_n = \mbox{argmin}_{\vartheta} \Bigg\{&\frac{1}{1+\gamma}\log\Bigg(\sum_{k\in\mathcal{Z}} \bigg(\frac{n_k}{n}\bigg)^\alpha q_\vartheta(k)^{1-\alpha}\Bigg)
+\frac{\gamma}{1+\gamma}\log\Bigg(\sum_{k\in\mathcal{Z}} \bigg(\frac{n_k}{n}\bigg)^{\alpha'} q_\vartheta(k)^{1-\alpha'}\Bigg)\nonumber\\
&-\log\Bigg(\sum_{k\in\mathcal{Z}} \bigg(\frac{n_k}{n}\bigg)^{\bar{\alpha}} q_\vartheta(k)^{1-\bar{\alpha}}\Bigg)\Bigg\},
\end{align}
where $\mathcal{Z}$ is the set of all values that appear in the sample $X_1,\ldots,X_n$, the variable $n_k$ denotes how often the value $k$ is found in the sample, and
$$
q_\vartheta(k) = \exp\big(\vartheta_1 k +\ldots+\vartheta_d k^d \big), \quad k\in\N.
$$
Since \cite{TK:2017} do not propose a specific way of choosing the constants $\alpha$, $\alpha'$ and $\gamma$, we use the values that appear most frequently in their simulation study and therefore set $\alpha=1.1$, $\alpha'=0.1$ and $\gamma= 1/9$.
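For readers who want to experiment, the objective in \eqref{eq:takenouchi} is straightforward to transcribe. The following Python sketch (the simulations in this paper use R; Python is used here only for illustration, and the function name is ours) evaluates the objective for $d = 3$ with $\vartheta_2 = 0$, using the constants $\alpha=1.1$, $\alpha'=0.1$ and $\gamma=1/9$ stated above:

```python
import numpy as np
from collections import Counter

def takenouchi_objective(theta, x, alpha=1.1, alpha_p=0.1, gamma=1/9):
    """Objective of the localized homogeneous-divergence estimator of
    Takenouchi/Kanamori for q_theta(k) = exp(theta1*k + theta3*k^3),
    i.e. the case d = 3 with theta2 = 0."""
    theta1, theta3 = theta
    counts = Counter(x)
    n = len(x)
    ks = np.array(sorted(counts))                 # the set Z of observed values
    freq = np.array([counts[k] / n for k in ks])  # empirical frequencies n_k / n
    q = np.exp(theta1 * ks + theta3 * ks**3)      # non-normalized pmf q_theta
    abar = (alpha + gamma * alpha_p) / (1 + gamma)
    term = lambda a: np.log(np.sum(freq**a * q**(1 - a)))
    return (term(alpha) / (1 + gamma)
            + gamma * term(alpha_p) / (1 + gamma)
            - term(abar))
```

The estimator $\widetilde{\vartheta}_n$ is then obtained by minimizing this function over $\vartheta$, e.g. with a quasi-Newton routine.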
\begin{figure}[h]
\centering
\includegraphics[scale = 0.65]{exppoly.pdf}
\caption{Simulated biases of the estimators $\widehat{\vartheta}_n$ (red) and $\widetilde{\vartheta}_n$ (black) in the discrete exponential-polynomial model for different parameters $\big(\vartheta_1^{(0)},\vartheta_3^{(0)}\big)$ ($n=100$; 200 repetitions).
A red circle represents the value of ${\big(\widehat{\vartheta}_{n,1} - \vartheta_1^{(0)}, \widehat{\vartheta}_{n,3}- \vartheta_3^{(0)}\big)}$, while a black cross stands for ${\big(\widetilde{\vartheta}_{n,1} - \vartheta_1^{(0)}, \widetilde{\vartheta}_{n,3}- \vartheta_3^{(0)}\big)}$. The assumed true value in every subfigure is highlighted by the intersection of the dotted lines.} \label{fig:exppoly}
\end{figure}
As in the previous section, we use the software \texttt{R} for the simulation and the \texttt{optim} routine to find the minimal values in \eqref{eq:exppoly1} and \eqref{eq:takenouchi}. Again, the option \texttt{method} is fixed to \texttt{L-BFGS-B} and the maximum number of iterations to \texttt{maxit=1000}.
As starting values for the optimization we choose independent uniformly distributed random numbers $\vartheta_{1}^{(s)} \sim \mathcal{U}(-1,1)$ and $\vartheta_{3}^{(s)} \sim \mathcal{U}(-1,0)$. For different $\big(\vartheta_{1}^{(0)}, \vartheta_{3}^{(0)}\big) \in \R \times (-\infty, 0)$, we simulate $200$ i.i.d. samples of size $n = 100$ from the discrete exponential-polynomial model in \eqref{eq:exppoly2} with parameters $(\vartheta_{1}^{(0)}, \vartheta_{3}^{(0)})$ and calculate the estimators $\big(\widehat{\vartheta}_{n,1}, \widehat{\vartheta}_{n,3}\big)$ and $\big(\widetilde{\vartheta}_{n,1}, \widetilde{\vartheta}_{n,3}\big)$ presented in \eqref{eq:exppoly1} and \eqref{eq:takenouchi}, respectively. The biases of the estimators are obtained by subtracting the underlying ``true'' parameters $\big(\vartheta_{1}^{(0)}, \vartheta_{3}^{(0)}\big)$. The simulation of a discrete exponential-polynomial model is rather simple, as $\vartheta_{3}^{(0)} < 0$ ensures that the probability $p_{(\vartheta_{1}^{(0)}, \vartheta_{3}^{(0)})}(k)$ decreases rapidly as $k$ grows. From a practical point of view, and bearing in mind the usual calculation accuracy, we only need to deal with a discrete distribution with finite support. In Figure \ref{fig:exppoly} the results of the simulations are presented, and it is visible that both procedures perform comparably and overall well. The newly proposed estimators tend to scatter more, which favors the competing estimators. However, our new estimators require no (data-dependent or quick-fix) choice of parameters \citep[like $\alpha$, $\alpha'$ and $\gamma$ for the estimators of][]{TK:2017}. Introducing additional parameters, for instance through suitable weight functions, is also conceivable for our method. It would certainly allow for a choice which improves the overall performance, but it also leads to a less intuitive implementation, as these parameters need to be chosen in practice.
Note that, as in the previous simulation, some convergence failures in the optimization routine occurred (less than ten percent per parameter configuration).
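The closed form above makes $S_n^{PE}$ cheap to evaluate. The following Python sketch (the paper's own simulations use R's \texttt{optim} with \texttt{L-BFGS-B}; here SciPy's L-BFGS-B plays the same role) is only an illustrative re-implementation: the placeholder Poisson data and the bound $\vartheta_3 \le -10^{-8}$ are our assumptions, not the authors' setup.

```python
import numpy as np
from scipy.optimize import minimize

def S_PE(theta, x):
    """Direct transcription of the displayed closed form of S_n^PE
    for the two-parameter model, theta = (theta1, theta3)."""
    t1, t3 = theta
    x = np.asarray(x, dtype=float)
    E = np.exp(t1 + t3 + 3 * t3 * x + 3 * t3 * x**2)   # E_i(theta1, theta3)
    n = len(x)
    Xj, Xl = x[:, None], x[None, :]
    Ej, El = E[:, None], E[None, :]
    terms = ((Ej - 1) * (El * Xl - Xl + 2) * (Xj >= Xl)
             + (Xj == Xl)
             + (Ej - 1) * (El - 1) * Xj * (Xj < Xl))
    return terms.sum() / n**2

# Minimize over theta1 in R, theta3 < 0, from a random uniform start.
rng = np.random.default_rng(0)
x = rng.poisson(2.0, size=100)   # placeholder data, NOT drawn from the model itself
start = [rng.uniform(-1, 1), rng.uniform(-1, 0)]
res = minimize(S_PE, start, args=(x,), method="L-BFGS-B",
               bounds=[(None, None), (None, -1e-8)])
```

The returned `res.x` is the estimate $\widehat{\vartheta}_n = (\widehat{\vartheta}_{n,1}, \widehat{\vartheta}_{n,3})$ for the supplied sample.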
\vspace{5mm}
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 6,309 |
Q: Exporting a DevExpress XtraReport to Text page by page I would like to be able to export only the first page of an XtraReport to text. I can see how to do it when exporting to HTML (or various other formats) using the exportoptions. I can't see any way to do it when exporting to text though.
Any ideas?
A: Answer from DevExpress:
Thank you for contacting us. To accomplish this task, you can use the
Page Merging technique. Refer to the How to: Merge Pages of Two
Reports article for additional information. See the code below:
[VB.NET]
Dim report As New XtraReport1()
report.CreateDocument()
Dim ps As New PrintingSystem()
ps.Pages.Add(report.Pages(0))
ps.ExportToText(file)
It worked perfectly.
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 6,929 |
Q: Copy files from on-prem to azure I'm new to Azure eco system. I'm doing some research on copying data from on-prem to azure. I found following options:
*
*AzCopy
*Azure Data Factory (Copy Data Tool)
*Data Management Gateway
Ours is a Microsoft shop; so, I'm looking for tools that gel with MS platform. Also, down the line, we want to automate the entire thing as much as we can. So, I think, Azure Storage Explorer is out of the question. Is there a preference among the above 3. Or, are there any better tools?
A: I think you are mixing stuff, Copy Data Tool is just an Azure Data Factory Wizard to make some sample data moving between resources. Azure Data Factory uses the data management gateway to get on premises resources such as files and databases.
What you want to do can be made with Azure Data Factory. I recommend using version 2 (even in its preview version) because its Authoring is easier to understand if you are new to the tool. You can graphically configure linked services, datasets and pipelines from there.
I hope this helped, if you need further help just ask away!
A: If you're already familiar with SSIS, there's also the option to use SSIS in ADF that enables on-prem data access via VNet.
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 7,348 |
Kolhapur: Young wrestler injured during a wrestling match, dies
Budding wrestler Nilesh Kandurkar, from Kolhapur, Maharashtra, lost his battle for life on Friday morning. Nilesh was in a critical condition after suffering severe injuries in a wrestling match on Monday. He was immediately rushed to the hospital, where he remained in a critical state for four days.
Injured wrestler Nilesh
Nilesh Kandurkar, the 20-year-old budding wrestler, who had suffered severe neck injuries in a wrestling match on Monday, succumbed to his injuries on Friday in Karad's Krishna hospital.
Doctors tried their best to revive Nilesh, but their efforts went in vain. Nilesh breathed his last at 5 AM on Friday. Nilesh's death has shaken the family members and the entire wrestling fraternity in Kolhapur.
While speaking to My Medical Mantra, Sanjay Khot, Nilesh's brother said, "We had nurtured a dream for years, that one day Nilesh will represent India at the international level. But, the destiny wanted it otherwise. Nilesh took his last breath around 5 AM on Friday."
On Monday, in a local tourney, Nilesh had suffered severe injuries to his neck as his opponent attempted an attacking manoeuvre, in the course of which Nilesh hit the ground hard on his neck. Nilesh suffered neck and spinal cord damage from the massive blow.
Nilesh was shifted to a local hospital in Kolhapur and was later admitted to Krishna hospital in Karad, Kolhapur. But, four days after his accident, Nilesh lost his battle for life.
| {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 4,119 |
{"url":"http:\/\/docs.astropy.org\/en\/latest\/api\/astropy.stats.circmoment.html","text":"circmoment\u00b6\n\nastropy.stats.circmoment(data, p=1.0, centered=False, axis=None, weights=None)[source]\n\nComputes the p-th trigonometric circular moment for an array of circular data.\n\nParameters\ndatanumpy.ndarray or Quantity\n\nArray of circular (directional) data, which is assumed to be in radians whenever data is numpy.ndarray.\n\npfloat, optional\n\nOrder of the circular moment.\n\ncenteredbool, optional\n\nIf True, central circular moments are computed. Default value is False.\n\naxisint, optional\n\nAxis along which circular moments are computed. The default is to compute the circular moment of the flattened array.\n\nweightsnumpy.ndarray, optional\n\nIn case of grouped data, the i-th element of weights represents a weighting factor for each group such that sum(weights, axis) equals the number of observations. See [1], remark 1.4, page 22, for detailed explanation.\n\nReturns\ncircmomentnumpy.ndarray or Quantity\n\nThe first and second elements correspond to the direction and length of the p-th circular moment, respectively.\n\nReferences\n\n1\n\nS. R. Jammalamadaka, A. SenGupta. \u201cTopics in Circular Statistics\u201d. Series on Multivariate Analysis, Vol. 5, 2001.\n\n2\n\nC. Agostinelli, U. Lund. \u201cCircular Statistics from \u2018Topics in Circular Statistics (2001)\u2019\u201d. 2015. 
<https:\/\/cran.r-project.org\/web\/packages\/CircStats\/CircStats.pdf>\n\nExamples\n\n>>> import numpy as np\n>>> from astropy.stats import circmoment\n>>> from astropy import units as u\n>>> data = np.array([51, 67, 40, 109, 31, 358])*u.deg\n>>> circmoment(data, p=2)\n(<Quantity 90.99263082432564 deg>, <Quantity 0.48004283892950717>)","date":"2020-01-25 19:23:07","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.2147277295589447, \"perplexity\": 8011.772208017228}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-05\/segments\/1579251681412.74\/warc\/CC-MAIN-20200125191854-20200125221854-00179.warc.gz\"}"} | null | null |
package org.sklsft.generator.persistence.backup.datasource.interfaces;
import org.apache.commons.dbcp.BasicDataSource;
import org.sklsft.generator.model.exception.DataSourceNotFoundException;
/**
* This class is used in your /data-model/datasource-context.xml<br/>
 * given a name, it gives a declared {@link BasicDataSource} that can be accessed to fetch data that will later be injected into your project datasource
* @author Nicolas Thibault
*
*/
public interface InputDataSourceProvider {
public BasicDataSource getDataSource(String dataSourceName) throws DataSourceNotFoundException;
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 3,154 |
"Pascal – you heard of his famous wager? He put it this way, it's better to believe in a God that doesn't exist than not believe in One Who does, and Who might take it personal. It's not a bad way to live, wouldn't you agree?" So says 82-year-old Zan Wiseman: brother, son, 'not-Jewish Jew', proxy twin, sometime Communist, four-times husband, one-time novelist – and bet-hedging atheist.
From an emergency room in Calgary, where an intern hears his poorly timed joke about suicide, Zan winds up on the psychologist's couch. But the doctor's efforts to investigate Zan's mental state are constantly stymied by his misfiring memory, his wry delivery, and his novelist's tendency to embellish. Is he misremembering, misrepresenting, crafting a better story – or all of the above? Through the streets of Strike-era Winnipeg, Toronto during the Depression, and the 1980s Calgary of Zan's new life, Dave Margoshes's compellingly unreliable narrator treats the reader to a magnificent meditation on aging, family ties, faith, and the liquid concept of the truth.
Dimensions of an Orchard, the last poetry collection by Dave Margoshes, won the Anne Szumigalski Poetry Prize at the 2010 Saskatchewan Book Awards. He's published four other books of poetry and over a dozen other books, including Wiseman's Wager from Coteau Books. His poems have won a number of awards, including the inaugural Stephen Leacock Prize for Poetry, and have appeared widely in Canadian literary magazines and anthologies, including twice in the Best Canadian Poetry volumes. His Bix's Trumpet and Other Stories was 2007 Saskatchewan Book of the Year and a ReLit Award finalist. His collection of linked short stories, A Book of Great Worth, also from Coteau, was named one of Amazon.Ca's Top Hundred Books of 2012. One of the stories from that collection was a finalist for the Journey Prize. He lives on a farm near Saskatoon. | {
"redpajama_set_name": "RedPajamaC4"
} | 7,662 |
Welcome to Friends Educational and Charitable Trust (FECT).
FECT (Friends Educational and Charitable Trust) is a non-profit organization, a voluntary group working to provide education to poor kids and groom them for a good position in our society so they can lead their lives.
Till now we have identified more than 400 rose buds with good academic records, who were about to stop their education due to lack of financial support. We are satisfied with what we have achieved in this short period of time. By God's grace, we will expand ourselves and serve the needy.
FECT urges many more people to join hands in this kind of charity work, and we, the FECT members, strongly believe that this is one of the important social responsibilities we have.
FECT is an NGO and charitable educational trust in Chennai, Tamil Nadu. We are also working along with other educational trusts in Chennai to achieve better results; some of the other trusts in Chennai are WOW, Aatru Padai, ORIGIN and a few others. We get help from other educational trusts in Tamil Nadu (other districts) too. Our motive is to trace good-hearted people and form a strong bond, working together and pushing ourselves to greater heights.
Friends to Care, which was started by a group of friends in January 2006, has about 50 caring hearts (members) now.
Provide the opportunity of education for the underprivileged. Provide opportunities and facilities for school dropouts affected by financial crisis. Break down the social and cultural barriers that stop education for the needy.
"redpajama_set_name": "RedPajamaC4"
} | 9,700 |
{"url":"https:\/\/handmade.network\/forums\/t\/6932-memory_management_metaprogramming#21103","text":"24 posts\nMemory management metaprogramming\nEdited by AlexKindel on\nI'm working on a program that reserves two big blocks of virtual memory for use as stacks for things whose size is not statically known. The program tracks pointers to tops of each stack, which I call stack_cursors, and making an allocation consists of calling a function that increments the stack_cursor by the size of the block one wants, commits the next page of memory if this causes it to cross a page boundary, and returns what the position of the cursor was before it was incremented. A function that needs to allocate something that persists beyond its own scope takes a pointer to a stack_cursor, while a function that needs to allocate things for internal use takes a stack_cursor directly, so that its incrementing of the stack_cursor isn't reflected on the caller's end, and the caller's subsequent uses of the stack can overwrite the function's. There are functions that do both internal-only allocations and persistent ones, hence the existence of two stacks.\n\nA shortcoming of this scheme is that, though it commits new pages when a stack grows large enough to need them, it never decommits them if the stack shrinks again. As far as I can tell, whereas stack_cursor \"rewinding,\" such as it is currently, is entirely automatic as long as I correctly pass the stack_cursor by value, if I wanted to support decommitting pages I would have to add a call that manually rewinds the stack_cursor at the end of the scope, to a position that I saved at the beginning of the scope. At that point, continuing to make the distinction between passing a stack_cursor by value versus by reference would only serve as insurance in case I forgot to call the rewind function. 
There are other reasons taking away that distinction and always passing by pointer is appealing.\n\nIt's the kind of repetitive pattern that suggests the need for an abstraction, but I don't really see a way to take the usual approach of \"factor it into a function.\" Is this the kind of situation where metaprogramming could help? Some kind of syntactic extension to C that automates my more specialized stacks like C automates its own stack seems plausible to me, but I've never done that kind of thing before. I don't know how hard it typically is.\n497 posts\nMemory management metaprogramming\nIt's patterns like this that make scope guards and RAII such a useful feature.\n\nHowever the temp stack would only grow to some \"high water level\" and no further. Adding logic to reduce the stack size would only mean that the stack will very likely grow again to that point in the future.\n\nOne way to avoid that is to keep a high water mark per frame (updated on each allocation) and at the end of each frame copy it into a little ring buffer of stack sizes and only shrink the stack it when multiple frames have passed that didn't need the last k pages.\n24 posts\nMemory management metaprogramming\nI was in the process of typing up an explanation of what I meant by \"there are other reasons...always passing by pointer is appealing,\" but I decided in the middle of it that they weren't good reasons either, haha.\n\nWould any language's built-in RAII mechanism give me a close equivalent of what I'm doing here? If I were to try to leverage C++ constructors and destructors I guess it would mean doing something with placement new.\nSimon Anciaux\n1124 posts\nMemory management metaprogramming\nA simple way would be to have a single place in your application loop where you would always de-commit unused pages. 
With the watermark ring buffer like ratchetfreak said.\n24 posts\nMemory management metaprogramming\nEdited by AlexKindel on\nAlexKindel\nWould any language's built-in RAII mechanism give me a close equivalent of what I'm doing here? If I were to try to leverage C++ constructors and destructors I guess it would mean doing something with placement new.\n\nTo clarify the question more: suppose that, instead of having the function that does stack allocations take a stack_cursor, I needed to have it take a large Stack struct that contains a stack_cursor. If I wanted a function's internal allocations to get wiped out without any extra work, as is happening now, I would have to pass the Stack by value, but if it was a large enough struct, I wouldn't want to. At that point I would again start considering always passing by pointer, saving the cursor position at the start of the function, and manually resetting it to the saved position before returning, even if I don't care about decommitting memory pages. Would any language's built-in RAII mechanisms help with that?\n\nI have in fact replaced stack_cursors with small structs - a cursor together with a pointer to the end of the reserved block so the allocation function can check whether there is room for the allocation, and to enable splitting off the upper half of the free portion of the reserved block into a second stack in cases where I want to start a new thread. It's still small enough that switching to manual rewinding in order to be able to always pass by pointer probably wouldn't reduce overhead, much less enough to justify the extra code, but it might get bigger. I don't know yet. 
I suppose it would always be possible to split the stack_cursors back out of the structs and pass them separately, the structs always by pointer and the cursors by pointer or by value as necessary, but I wouldn't look forward to the verbosity of that.\nSimon Anciaux\n1124 posts\nMemory management metaprogramming\nI'm not using C++ much but at some point I used \"defered statements\": defining some code that will be called at the end of the scope:\n 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 \/* Include this somewhere *\/ #include template struct Defer { Defer( F f ) : f( f ) {} ~Defer( ) { f( ); } F f; }; template Defer makeDefer( F f ) { return Defer( f ); }; #define __defer( line ) defer_ ## line #define _defer( line ) __defer( line ) struct defer_dummy { }; template Defer operator+( defer_dummy, F&& f ) { return makeDefer( std::forward( f ) ); } #define defer auto _defer( __LINE__ ) = defer_dummy( ) + [ & ]( ) \/* --- *\/ \/* Use it like this: *\/ void example( ) { stack_thing_to_do( ); defer { stack_clean_up( ); \/* Will be executed at the end of the function. *\/ \/* Any code can go here. *\/ } \/* body of the function here *\/ } \n\nIn my own code to handle temporary stack I do something like this:\n 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 typedef struct data { umm reserved; umm used; union { u8* bytes; char* text; umm offset; void* raw; }; } data; data memory_get_temp( data* memory ){ data result; result.reserved = memory->reserved - memory->used; result.used = 0; result.bytes = memory->bytes + memory->used; return result; } void example( data* memory ) { data temp = memory_get_temp( memory ); \/* Do things with temp, memory stays unchanged. 
*\/ }","date":"2022-01-21 14:47:05","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.19217537343502045, \"perplexity\": 1486.4123419433035}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": false}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-05\/segments\/1642320303385.49\/warc\/CC-MAIN-20220121131830-20220121161830-00026.warc.gz\"}"} | null | null |
Nolan Hoffman (born 23 April 1985) is a South African road and track cyclist, who currently rides for South African team Enza. He was suspended for eighteen months after he tested positive for the use of testosterone on 18 October 2009. He competed in the scratch event at the UCI Track Cycling World Championships in 2012, 2013 and 2014, winning the silver medal in 2012.
Major results
2005
1st Stage 3 (TTT) Tour d'Egypte
2007
5th Road race, African Road Championships
2008
1st Powerade Dome 2 Dome Cycling Spectacular
4th Overall Tour de Korea
1st Stage 9
2009
1st Stage 1 Tour de Korea
3rd Giro del Capo IV
2011
All-Africa Games
1st Road race
1st Team time trial (with Jay Thomson, Reinardt Janse van Rensburg and Darren Lill)
2012
2nd Scratch, UCI Track Cycling World Championships
2014
1st Cape Town Cycle Tour
1st Points classification Mzansi Tour
2015
1st Cape Town Cycle Tour
KZN Autumn Series
8th Mayday Classic
10th PMB Road Classic
2018
1st 100 Cycle Challenge
1st Cape Town Cycle Tour
3rd Road race, National Road Championships
2019
Tour of Good Hope
1st Points classification
1st Stage 1
3rd 100 Cycle Challenge
2021
1st Cape Town Cycle Tour
References
External links
1985 births
Living people
South African track cyclists
South African male cyclists
People from Stellenbosch Local Municipality
Doping cases in cycling
African Games gold medalists for South Africa
African Games medalists in cycling
Competitors at the 2011 All-Africa Games
20th-century South African people
21st-century South African people | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 1,111 |
\section{Introduction}
During the last decades, a huge number of papers deal with the approximation of multivariate functions.
On the one hand there is the classical approach that searches for optimal algorithms
that use samples for computing approximants. On the other hand one is interested in
optimal algorithms that use general linear information of the function in order to construct
an approximant.
Usually, one considers the relation of the approximation error, that is measured in a specific norm,
to the number of used information of the function.
In this context, optimality often means best possible in the sense of a worst case approximation error, i.e.,
the worst case of the above mentioned relation with respect to all functions belonging to the unit ball of a
specific source space.
In this paper, we consider periodic functions and we measure the worst case error in the $L_\infty$ norm, similar
to \cite{KuWaWo08,CoKueSi16}, where the authors are interested in best possible approximations based on linear information,
and \cite{KuWaWo09,KuWaWo09b,ByDuSiUl14,ByKaUlVo16}, where the sampling errors, i.e., the approximation errors based on sampling methods,
are considered.
The constructive approaches of the latter ones use sparse grids or single rank-1 lattices as sampling schemes for determining upper and lower bounds on the sampling errors
in terms of the number of used samples. Often, the bounds only allow for asymptotic statements, where the concrete dependencies on, e.g., the dimension
are not explicitly stated.
Exceptions are the papers \cite{KuWaWo09}, where the authors use single rank-1 lattices as sampling schemes, and \cite{KuWaWo09b}, where a completely non-constructive proof is used to show the existence of suitable sampling sets for tractability considerations. In both papers, the dependencies on all parameters are determined, i.e.,
the presented upper bounds can also be used in order to give error bounds even in pre-asymptotic settings.
Unfortunately, the errors decay with a rate that is far away from optimal ones when increasing the number of
used samples. We stress the fact that the known upper bounds based on the single rank-1 lattice approach cannot be improved significantly, cf.~\cite{ByKaUlVo16}.
Another recent paper~\cite{KaVo19} analyzes a similar approximation method as we investigate in this paper. It also presents estimates on the corresponding $L_\infty$ errors. However, that paper focuses on specific function spaces
of generalized mixed smoothness, which can be treated as special cases of the spaces we study herein. Moreover, the dependencies
of the approximation errors on the number of sampling values are considered only asymptotically, i.e., the detailed (exponential) dependencies on the dimension are missing and thus the results do not allow for suitable estimates in pre-asymptotic settings.
At this point, we stress that many papers presenting sampling errors associated with constructive designs of the used sampling schemes treat specific fixed function spaces and usually focus on dominating mixed smoothness spaces.
This paper deals with a newly developed sampling strategy that uses sampling schemes
that are unions of several rank-1 lattices.
We present an approximation algorithm and analyze the arising approximation error in full detail.
To this end, we consider suitable reproducing kernel Hilbert spaces and determine
the corresponding approximation error of our sampling strategy in terms of a so called
worst case truncation error, which is actually the best possible worst case approximation error
one can achieve using general linear information, cf.~\cite{CoKueSi16}.
The remainder of the paper is structured as follows.
Section~\ref{sec:pre} gives a short overview on the fundamentals of our considerations.
In Section~\ref{sec:general_framework}, we analyze the new approximation approach
and prove the general framework for estimating associated sampling errors.
It turns out that the sampling error is almost as good as the best possible worst case approximation error, i.e.,
in our setting the approximation computed from the sampling values is -- up to a factor that depends only logarithmically on the number of approximated Fourier coefficients -- identical to the error that occurs when using the exact Fourier partial sum for approximation.
Subsequently, we apply the result to a highly popular type of reproducing kernel Hilbert spaces, often called Korobov spaces or Sobolev spaces of dominating mixed smoothness, that
are most widely used as illustrating examples during tractability considerations, in Section~\ref{sec:tractability}.
We improve tractability results presented in~\cite{KuWaWo09} for lattice algorithms.
In addition, the result of this paper even improves the known upper bounds on the rates of convergence
for tractable $L_\infty$ approximation based on sampling values in general, cf.~\cite{KuWaWo09b}. In fact, the convergence rates proved in this paper corresponds -- up to an arbitrary small $\epsilon$ -- to
the best possible convergence rates that can be achieved by algorithms that use general linear information, cf.~\cite{KuWaWo08} for details. Remark~\ref{rem:tractability} discusses the substantial improvements presented in this paper -- in particular with respect to tractability.
The insight into the calculations that occurs for Korobov spaces leads directly to
the requirements that need to be fulfilled in order to show a strong
relation between approximation numbers and sampling numbers, which we discuss in Section~\ref{sec:app_samp_numbers}.
Naturally, approximation numbers are bounded from above by sampling numbers.
Under mild assumptions on the considered function spaces, our general result from Section~\ref{sec:general_framework}
can be applied in order to estimate sampling numbers in terms of approximation numbers.
It turns out that the $L_\infty$ sampling numbers and approximation numbers may differ at most slightly,
cf.~\cite{CoKueSi16}. In more detail, the main rate of sampling and approximation numbers with respect to
the number of used information remain the same.
We stress the fact that all suggested algorithms, i.e., the algorithm for computing the approximant, cf. Algorithm~\ref{alg:compute_S_I_Lambda}, as well as the algorithm that determines suitable sampling schemes, cf. e.g. \cite[Algorithms~1 \&~3]{Kae17}, can be extremely efficiently performed with respect to the used arithmetic operations and thus offer a reasonable sampling strategy for practical applications.
On a final note, we would like to point out that the very cheap random construction of the suggested sampling sets
may fail with a small probability, but checking whether the construction succeeded has the same complexity as the
suggested fast Fourier transform algorithm and, thus, is extremely efficient, cf.~\cite{Kae17}.
\section{Prerequisites}\label{sec:pre}
\subsection{Reproducing kernel Hilbert spaces}\label{sec:repro_kernel_hilbert_space}
In order to apply sampling strategies, we consider continuous periodic functions $f\,\colon\,\ensuremath{\mathbb{T}}^d\to \ensuremath{\mathbb{C}}$, $\ensuremath{\mathbb{T}}\sim[0,1)$, denote their Fourier coefficients by
\begin{align}
c_{\ensuremath{\boldsymbol{h}}}(f):=\int_{\ensuremath{\mathbb{T}}^d}f({\ensuremath{\boldsymbol{x}}})\textnormal{e}^{-2\pi\textnormal{i}{\ensuremath{\boldsymbol{h}}}\cdot{\ensuremath{\boldsymbol{x}}}}\mathrm{d}{\ensuremath{\boldsymbol{x}}},
\end{align}
and think of the function $f$ as a Fourier series
$$
f({\ensuremath{\boldsymbol{x}}}):=\sum_{{\ensuremath{\boldsymbol{h}}}\in\ensuremath{\mathbb{Z}}^d}c_{\ensuremath{\boldsymbol{h}}}(f)\textnormal{e}^{2\pi\textnormal{i}{\ensuremath{\boldsymbol{h}}}\cdot{\ensuremath{\boldsymbol{x}}}},
$$
where ${\ensuremath{\boldsymbol{h}}}\cdot{\ensuremath{\boldsymbol{x}}}=\sum_{j=1}^dh_jx_j$ is the usual inner product in $\ensuremath{\mathbb{R}}^d$.
Furthermore, the function spaces under consideration are reproducing kernel Hilbert spaces, where we assume that the reproducing kernel $K_d\,\colon\,\ensuremath{\mathbb{T}}^d\times\ensuremath{\mathbb{T}}^d\to\ensuremath{\mathbb{C}}$ is given by
\begin{align*}
K_d({\ensuremath{\boldsymbol{x}}},{\ensuremath{\boldsymbol{y}}}):=\sum_{{\ensuremath{\boldsymbol{h}}}\in\ensuremath{\mathbb{Z}}^d}\frac{\textnormal{e}^{2\pi\textnormal{i}{\ensuremath{\boldsymbol{h}}}\cdot({\ensuremath{\boldsymbol{x}}}-{\ensuremath{\boldsymbol{y}}})}}{r_d({\ensuremath{\boldsymbol{h}}})}.
\end{align*}
The occurring weight function
$r_d\,\colon\,\ensuremath{\mathbb{Z}}^d\to(0,\infty)$ is subject to the restriction that
$$\sum_{{\ensuremath{\boldsymbol{h}}}\in\ensuremath{\mathbb{Z}}^d}r_d({\ensuremath{\boldsymbol{h}}})^{-1}<\infty$$ holds, which guarantees the continuity of the positive definite kernel $K_d$.
Due to \cite{Aron50}, the positive definite kernel $K_d$ is indeed a reproducing kernel and it induces an inner product, i.e., $f({\ensuremath{\boldsymbol{y}}})=\langle f,K_d(\circ,{\ensuremath{\boldsymbol{y}}})\rangle_d$ for all appropriate functions $f$, which is given by
\begin{align}
\langle f,g\rangle_d:=\sum_{{\ensuremath{\boldsymbol{h}}}\in\ensuremath{\mathbb{Z}}^d}c_{\ensuremath{\boldsymbol{h}}}(f)\overline{c_{\ensuremath{\boldsymbol{h}}}(g)}r_d({\ensuremath{\boldsymbol{h}}}).
\label{eq:scalar_product}
\end{align}
The associated norm $\|f|\mathcal{H}_r(\ensuremath{\mathbb{T}}^d)\|:=\sqrt{\langle f,f\rangle_d}=\left(\sum_{{\ensuremath{\boldsymbol{h}}}\in\ensuremath{\mathbb{Z}}^d}r_d({\ensuremath{\boldsymbol{h}}})|c_{\ensuremath{\boldsymbol{h}}}(f)|^2\right)^{1/2}$ directly leads to the reproducing kernel Hilbert space
\begin{align*}
\mathcal{H}_r(\ensuremath{\mathbb{T}}^d):=\left\{f\in L_1(\ensuremath{\mathbb{T}}^d)\,\colon\,\|f|\mathcal{H}_r(\ensuremath{\mathbb{T}}^d)\|<\infty\right\}
\end{align*}
of all functions $f$ for which the norm is finite.
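A standard example, matching the Korobov spaces used as illustrative examples in Section~\ref{sec:tractability}, is the product weight (the smoothness parameter $\alpha$ serves only as an illustration here):

```latex
r_d({\ensuremath{\boldsymbol{h}}}) := \prod_{j=1}^{d}\max(1,|h_j|)^{2\alpha},
\qquad \alpha>\tfrac{1}{2},
\qquad\text{with}\qquad
\sum_{{\ensuremath{\boldsymbol{h}}}\in\ensuremath{\mathbb{Z}}^d}r_d({\ensuremath{\boldsymbol{h}}})^{-1}
=\left(1+2\zeta(2\alpha)\right)^d<\infty,
```

where $\zeta$ denotes the Riemann zeta function, so the summability condition above is fulfilled and the kernel $K_d$ is continuous.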
\subsection{Multiple Rank-1 lattices}
Recently, a spatial discretization approach for multivariate trigonometric polynomials was presented in \cite{Kae17},
which we will utilize in order to compute approximations based on sampling values. To this end, we define
a rank-1 lattice
$$
\Lambda({\ensuremath{\boldsymbol{z}}},M):=\left\{\frac{j}{M}{\ensuremath{\boldsymbol{z}}}\bmod{{\ensuremath{\boldsymbol{1}}}}\,\colon\,j=0,\ldots,M-1\right\}\subset\ensuremath{\mathbb{T}}^d,
$$
where $M\in\ensuremath{\mathbb{N}}$ is called lattice size and ${\ensuremath{\boldsymbol{z}}}\in\ensuremath{\mathbb{Z}}^d$ is the generating vector of the rank-1 lattice.
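To make the definition concrete, the nodes of a rank-1 lattice can be generated in a few lines of Python; the generating vector ${\ensuremath{\boldsymbol{z}}}=(1,3)^\top$ and lattice size $M=8$ below are arbitrary illustrative choices, not values suggested by the theory.

```python
# Sketch: generate the nodes of a rank-1 lattice Lambda(z, M) in [0, 1)^d.
# The generating vector z and the lattice size M are arbitrary example values.

def rank1_lattice(z, M):
    """Return the M nodes (j * z / M) mod 1, j = 0, ..., M - 1, as tuples."""
    return [tuple((j * zi % M) / M for zi in z) for j in range(M)]

nodes = rank1_lattice(z=(1, 3), M=8)
print(len(nodes))          # prints 8
print(nodes[1])            # prints (0.125, 0.375)
```

Since the nodes lie on a single line through the origin modulo $1$, sampling along $\Lambda({\ensuremath{\boldsymbol{z}}},M)$ reduces the evaluation of multivariate trigonometric sums to a single one-dimensional FFT of length $M$.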
One main advantage of rank-1 lattices is the group structure of the sampling set, which allows for fast Fourier transform algorithms, cf.~\cite{kaemmererdiss}. At the same time, this structure is the main disadvantage of a single rank-1 lattice, since it causes excessive oversampling factors for spatial discretizations of specific trigonometric polynomials, cf. \cite[Chapter~3]{kaemmererdiss}, and -- as a consequence -- sampling rates that are far from optimal, cf.~\cite{ByKaUlVo16}.
In order to avoid these disadvantages, we consider sampling sets that are unions of several rank-1 lattices
$$
\Lambda:=\Lambda({\ensuremath{\boldsymbol{z}}}_1,M_1,\ldots,{\ensuremath{\boldsymbol{z}}}_L,M_L):=\bigcup_{\ell=1}^L\Lambda({\ensuremath{\boldsymbol{z}}}_\ell,M_\ell),
$$
which we call \emph{multiple rank-1 lattice}. Our considerations are essentially based on the observation that
a multiple rank-1 lattice that fulfills the equality
\begin{align}
I=\bigcup_{\ell=1}^L\tilde{I}_\ell,\label{eq:reco_prop}
\end{align}
where $\tilde{I}_\ell:=\left\{{\ensuremath{\boldsymbol{k}}}\in I\,\colon\,{\ensuremath{\boldsymbol{k}}}\cdot{\ensuremath{\boldsymbol{z}}}_\ell\not\equiv{\ensuremath{\boldsymbol{h}}}\cdot{\ensuremath{\boldsymbol{z}}}_\ell\imod{M_\ell}\text{ for all }{\ensuremath{\boldsymbol{h}}}\in I\setminus\{{\ensuremath{\boldsymbol{k}}}\}\right\}$ depends on the rank-1 lattice $\Lambda({\ensuremath{\boldsymbol{z}}}_\ell,M_\ell)$,
is necessarily a spatial discretization for all trigonometric polynomials with frequency support in $I$, cf.~\cite{Kae16, Kae17} for details.
Moreover, we stress the fact that the number of sampling nodes within the multiple rank-1 lattice $\Lambda$ is bounded by
$|\Lambda|\le1+\sum_{\ell=1}^L(M_\ell-1)\le\sum_{\ell=1}^L M_\ell$. Under mild assumptions, $M_\ell$ can be chosen such that $M_\ell\lesssim|I|$ holds. In addition, the number $L$ of used rank-1 lattices can be bounded by $L\lesssim\log|I|$ in order to construct multiple rank-1 lattices that fulfill the reconstruction property in \eqref{eq:reco_prop}, cf.~\cite{Kae17}.
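The reconstruction property \eqref{eq:reco_prop} can be checked directly from the definition of $\tilde{I}_\ell$: a frequency ${\ensuremath{\boldsymbol{k}}}\in I$ belongs to $\tilde{I}_\ell$ if and only if its residue ${\ensuremath{\boldsymbol{k}}}\cdot{\ensuremath{\boldsymbol{z}}}_\ell\bmod M_\ell$ is shared with no other frequency in $I$. The following is a minimal Python sketch; the frequency set and the lattice parameters are arbitrary example values.

```python
from collections import Counter

# Sketch: check the reconstruction property (the union of the sets I~_ell
# covers I) for a multiple rank-1 lattice; all numbers are example values.

def tilde_I(I, z, M):
    """Frequencies k in I whose residue k.z mod M is shared with no other h in I."""
    res = {k: sum(ki * zi for ki, zi in zip(k, z)) % M for k in I}
    counts = Counter(res.values())
    return {k for k in I if counts[res[k]] == 1}

I = [(-1, 0), (0, 0), (1, 0), (0, 1)]      # example frequency set
lattices = [((1, 2), 5), ((1, 4), 7)]      # example (z_ell, M_ell) pairs
covered = set().union(*(tilde_I(I, z, M) for z, M in lattices))
print(covered == set(I))                   # prints True
```

In practice such a check is performed via sorting the residues, which keeps the cost comparable to the fast Fourier transform itself.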
In this work, we equivalently modify the reconstruction property \eqref{eq:reco_prop} in order to simplify the theoretical considerations in Section~\ref{sec:general_framework}. We refer to Remark~\ref{rem:averaging_approx_fc}
for a detailed discussion on different reconstruction algorithms that could be used for approximation.
\section{General framework}\label{sec:general_framework}
The definition of
\begin{align}
I_\ell:=\left\{{\ensuremath{\boldsymbol{h}}}\in I\setminus\bigcupdot_{j=1}^{\ell-1} I_j\colon {\ensuremath{\boldsymbol{h}}}\cdot{\ensuremath{\boldsymbol{z}}}_\ell\not\equiv {\ensuremath{\boldsymbol{k}}}\cdot{\ensuremath{\boldsymbol{z}}}_\ell\imod{M_\ell}\text{ for all }{\ensuremath{\boldsymbol{k}}}\in I\setminus\{{\ensuremath{\boldsymbol{h}}}\}\right\}\label{eq:def_Iell}
\end{align}
guarantees that the sets $I_\ell$, $\ell=1,\ldots,L$, are disjoint.
For fixed ${\ensuremath{\boldsymbol{h}}}\in\bigcupdot_{\ell=1}^{L}I_\ell$ there is exactly one $\ell_{\ensuremath{\boldsymbol{h}}}$ for which ${\ensuremath{\boldsymbol{h}}}\in I_{\ell_{\ensuremath{\boldsymbol{h}}}}$ holds, i.e., the number
\begin{align}
\ell_{\ensuremath{\boldsymbol{h}}}\in\left\{\ell\in\{1,\ldots,L\}\colon {\ensuremath{\boldsymbol{h}}}\in I_\ell\right\}\label{eq:def_lh}
\end{align}
is already uniquely and well defined by \eqref{eq:def_lh}. An equivalent, more explicit way of defining $\ell_{\ensuremath{\boldsymbol{h}}}$ is
\begin{equation}
\ell_{\ensuremath{\boldsymbol{h}}}:=\max\left\{\ell\in\{1,\ldots,L\}\colon {\ensuremath{\boldsymbol{h}}}\in I_\ell\right\}=\min\left\{\ell\in\{1,\ldots,L\}\colon {\ensuremath{\boldsymbol{h}}}\in I_\ell\right\}.
\label{eq:def_lh1}
\end{equation}
\begin{algorithm}[tb]
\caption{Approximation of Fourier coefficients using multiple lattice rules.}\label{alg:compute_S_I_Lambda}
\begin{tabular}{p{2.25cm}p{5cm}p{7cm}}
Input: & $I\subset\ensuremath{\mathbb{Z}}^d$ &frequency set\\
& $\Lambda:=\Lambda({\ensuremath{\boldsymbol{z}}}_1,M_1,\ldots,{\ensuremath{\boldsymbol{z}}}_L,M_L)$ & sampling nodes\\
& $\left\{f({\ensuremath{\boldsymbol{x}}})\colon{\ensuremath{\boldsymbol{x}}}\in\Lambda\right\}$ & sampling values of the function $f$ \\
\end{tabular}
\begin{algorithmic}[1]
\State Set $\tilde{I}=\emptyset$.
\For{$\ell=1 \textnormal{ to }L$}
\State Compute $\hat{g}_t^{(\ell)}:=\frac{1}{M_\ell}\sum_{j=0}^{M_\ell-1}f\left(\frac{j}{M_\ell}{\ensuremath{\boldsymbol{z}}}_\ell\right)\textnormal{e}^{-2\pi\textnormal{i}\frac{jt}{M_\ell}}$, $t=0,\ldots,M_\ell-1$, using a 1d FFT.
\State Determine $I_\ell:=\left\{{\ensuremath{\boldsymbol{k}}}\in I\setminus\tilde{I}\;\colon\; {\ensuremath{\boldsymbol{k}}}\cdot{\ensuremath{\boldsymbol{z}}}_\ell\not\equiv {\ensuremath{\boldsymbol{h}}}\cdot{\ensuremath{\boldsymbol{z}}}_\ell\imod{M_\ell}\text{ for all }{\ensuremath{\boldsymbol{h}}}\in I\setminus\{{\ensuremath{\boldsymbol{k}}}\}\right\}$
\ForEach{${\ensuremath{\boldsymbol{h}}}\in I_\ell$}
\State Set $\hat{f}_{\ensuremath{\boldsymbol{h}}}:=\hat{g}_{{\ensuremath{\boldsymbol{h}}}\cdot{\ensuremath{\boldsymbol{z}}}_\ell\bmod{M_\ell}}^{(\ell)}$.
\EndFor
\State Set $\tilde{I}:=\tilde{I}\cupdot I_\ell$.
\EndFor
\end{algorithmic}
\begin{tabular}{p{2.25cm}p{3cm}p{9cm}}
Output: & $\tilde{I}\subset I$ & frequencies of uniquely reconstructable\\ &&Fourier coefficients for $f\in\Pi_I$ and\\
& $\{\hat{f}_{\ensuremath{\boldsymbol{h}}}\}_{{\ensuremath{\boldsymbol{h}}}\in\tilde{I}}$ & corresponding approximated Fourier coefficients\\
\cmidrule{1-3}
Complexity: & \multicolumn{2}{p{12cm}}{$\OO{L(\tilde{M}\log \tilde{M}+|I|(d+\log|I|))}$, where $\tilde{M}:=\max\{M_\ell\,\colon\,\ell=1,\ldots,L\}$}
\end{tabular}
\end{algorithm}
We compute the approximation of a function $f\in \mathcal{H}_r(\ensuremath{\mathbb{T}}^d)$ using Algorithm \ref{alg:compute_S_I_Lambda} and get
the approximant
$$
S_I^\Lambda f({\ensuremath{\boldsymbol{x}}})=\sum_{\ell=1}^{L}\sum_{{\ensuremath{\boldsymbol{h}}}\in I_\ell}\hat{f}_{\ensuremath{\boldsymbol{h}}}\textnormal{e}^{2\pi\textnormal{i}{\ensuremath{\boldsymbol{h}}}\cdot{\ensuremath{\boldsymbol{x}}}},
$$
where the approximated Fourier coefficients are computed by
\begin{align}
\hat{f}_{\ensuremath{\boldsymbol{h}}}:=\frac{1}{M_{\ell_{\ensuremath{\boldsymbol{h}}}}}\sum_{j=0}^{M_{\ell_{\ensuremath{\boldsymbol{h}}}}-1}f\left(\frac{j{\ensuremath{\boldsymbol{z}}}_{\ell_{\ensuremath{\boldsymbol{h}}}}}{M_{\ell_{\ensuremath{\boldsymbol{h}}}}}\right)\textnormal{e}^{-2\pi\textnormal{i} j\frac{{\ensuremath{\boldsymbol{h}}}\cdot{\ensuremath{\boldsymbol{z}}}_{\ell_{\ensuremath{\boldsymbol{h}}}}}{M_{\ell_{\ensuremath{\boldsymbol{h}}}}}}
=\sum_{\substack{{\ensuremath{\boldsymbol{k}}}\in\ensuremath{\mathbb{Z}}^d\\{\ensuremath{\boldsymbol{k}}}\cdot{\ensuremath{\boldsymbol{z}}}_{\ell_{\ensuremath{\boldsymbol{h}}}}\equiv{\ensuremath{\boldsymbol{h}}}\cdot{\ensuremath{\boldsymbol{z}}}_{\ell_{\ensuremath{\boldsymbol{h}}}}\imod{M_{\ell_{\ensuremath{\boldsymbol{h}}}}}}}c_{\ensuremath{\boldsymbol{k}}}(f).\label{eq:alias_Lambda_l}
\end{align}
Using an efficient sort algorithm for determining the frequency sets $I_\ell$ and one-dimensional fast Fourier transforms of lengths $M_\ell$, $\ell=1,\ldots,L$, for computing all the numbers $\hat{g}_t^{(\ell)}$ leads to the arithmetic complexity $\OO{L|I|(d+\log|I|)+\sum_{\ell=1}^LM_\ell\log M_\ell}$ of Algorithm~\ref{alg:compute_S_I_Lambda}, cf.~\cite{Kae16}.
The equality in \eqref{eq:alias_Lambda_l} holds due to the well-known aliasing properties of rank-1 lattices, cf.\ \cite[Theorem~2.8]{SlJo94} and \cite[Section~3.4]{kaemmererdiss}.
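A minimal, self-contained Python sketch of Algorithm~\ref{alg:compute_S_I_Lambda} may illustrate the procedure; a plain DFT stands in for the one-dimensional FFT (so the cost per lattice is $\OO{M_\ell^2}$ instead of $\OO{M_\ell\log M_\ell}$), the normalization by $1/M_\ell$ matches \eqref{eq:alias_Lambda_l}, and the frequency set, generating vector, and test polynomial are illustrative choices only.

```python
import cmath

# Sketch of the approximation of Fourier coefficients via multiple rank-1
# lattices; a plain DFT replaces the 1d FFT.  All parameters are examples.

def approx_fourier_coeffs(f, I, lattices):
    """Approximate the Fourier coefficients of f on the frequency set I from
    samples along the rank-1 lattices given as (z, M) pairs."""
    remaining, f_hat = set(I), {}
    for z, M in lattices:
        samples = [f([(j * zi % M) / M for zi in z]) for j in range(M)]
        # g[t] = (1/M) sum_j f(j z / M) exp(-2 pi i j t / M)
        g = [sum(s * cmath.exp(-2j * cmath.pi * j * t / M)
                 for j, s in enumerate(samples)) / M for t in range(M)]
        res = {k: sum(ki * zi for ki, zi in zip(k, z)) % M for k in I}
        I_ell = {k for k in remaining
                 if all(res[k] != res[h] for h in I if h != k)}
        for k in I_ell:
            f_hat[k] = g[res[k]]
        remaining -= I_ell
    return f_hat

# A trigonometric polynomial with known coefficients; since its frequencies
# have pairwise distinct residues on the chosen lattice, recovery is exact.
coeffs = {(0, 0): 1.0, (1, 0): 0.5, (0, -1): 0.25}
f = lambda x: sum(c * cmath.exp(2j * cmath.pi * (h[0] * x[0] + h[1] * x[1]))
                  for h, c in coeffs.items())
f_hat = approx_fourier_coeffs(f, list(coeffs), [((1, 3), 7)])
print(max(abs(f_hat[h] - c) for h, c in coeffs.items()) < 1e-12)  # prints True
```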
At this point, we stress that we observe $S_I^\Lambda f\in\Pi_{\tilde{I}}\subseteq \Pi_I$, $\tilde{I}:=\bigcupdot_{\ell=1}^L I_\ell$, and
that $\tilde{I}\subsetneq I$ might hold, in general.
In the following, we assume $\tilde{I}=I$, which actually is the crucial characteristic of the used multiple rank\mbox{-}1 lattice $\Lambda({\ensuremath{\boldsymbol{z}}}_1,M_1,\ldots,{\ensuremath{\boldsymbol{z}}}_L,M_L)$. This assumption will prove to be very beneficial in the following considerations of the pointwise error of a function $f\in\mathcal{H}_r(\ensuremath{\mathbb{T}}^d)$ and its approximation $S_I^\Lambda f$. We determine
\begin{align}
(f&-\operatorname{S}_I^\Lambda f)({\ensuremath{\boldsymbol{x}}})\nonumber\\
&=\sum_{{\ensuremath{\boldsymbol{h}}}\not\in I}c_{\ensuremath{\boldsymbol{h}}}(f)\textnormal{e}^{2\pi\textnormal{i}{\ensuremath{\boldsymbol{h}}}\cdot{\ensuremath{\boldsymbol{x}}}}+\underbrace{\sum_{{\ensuremath{\boldsymbol{h}}}\in I}\left(c_{\ensuremath{\boldsymbol{h}}}(f)-\frac{1}{M_{\ell_{\ensuremath{\boldsymbol{h}}}}}\sum_{j=0}^{M_{\ell_{\ensuremath{\boldsymbol{h}}}}-1}f\left(\frac{j{\ensuremath{\boldsymbol{z}}}_{\ell_{\ensuremath{\boldsymbol{h}}}}}{M_{\ell_{\ensuremath{\boldsymbol{h}}}}}\right)\textnormal{e}^{-2\pi\textnormal{i} j{\ensuremath{\boldsymbol{h}}}\cdot{\ensuremath{\boldsymbol{z}}}_{\ell_{\ensuremath{\boldsymbol{h}}}}/M_{\ell_{\ensuremath{\boldsymbol{h}}}}}\right)\textnormal{e}^{2\pi\textnormal{i}{\ensuremath{\boldsymbol{h}}}\cdot{\ensuremath{\boldsymbol{x}}}}}_{=:\operatorname{R}_I^\Lambda f}\label{eq:aliasing_formula}.
\end{align}
The first summand in \eqref{eq:aliasing_formula} is called the \emph{truncation error} and is inevitable when approximating the function $f$ by a trigonometric polynomial with frequencies supported on the set $I$.
Thus the main focus is on estimating the second summand in \eqref{eq:aliasing_formula}, which is denoted by $\operatorname{R}_I^\Lambda f$. For that reason, we consider the Fourier coefficients of the trigonometric polynomial $\operatorname{R}_I^\Lambda f$. We follow the considerations in \cite{KuSlWo06} and observe
\begin{align}
&c_{\ensuremath{\boldsymbol{h}}}(f)-\frac{1}{M_{\ell_{\ensuremath{\boldsymbol{h}}}}}\sum_{j=0}^{M_{\ell_{\ensuremath{\boldsymbol{h}}}}-1}f\left(\frac{j{\ensuremath{\boldsymbol{z}}}_{\ell_{\ensuremath{\boldsymbol{h}}}}}{M_{\ell_{\ensuremath{\boldsymbol{h}}}}}\right)\textnormal{e}^{-2\pi\textnormal{i} j{\ensuremath{\boldsymbol{h}}}\cdot{\ensuremath{\boldsymbol{z}}}_{\ell_{\ensuremath{\boldsymbol{h}}}}/M_{\ell_{\ensuremath{\boldsymbol{h}}}}}=\langle f,\tau_{\ensuremath{\boldsymbol{h}}}\rangle_d,\nonumber
\intertext{where $\tau_{\ensuremath{\boldsymbol{h}}}$, ${\ensuremath{\boldsymbol{h}}}\in I$, is defined by}
\tau_{\ensuremath{\boldsymbol{h}}}({\ensuremath{\boldsymbol{t}}})&:=\int_{\ensuremath{\mathbb{T}}^d}K_d({\ensuremath{\boldsymbol{t}}},{\ensuremath{\boldsymbol{x}}})\textnormal{e}^{2\pi\textnormal{i}{\ensuremath{\boldsymbol{h}}}\cdot{\ensuremath{\boldsymbol{x}}}}\mathrm{d}{\ensuremath{\boldsymbol{x}}}-\frac{1}{M_{\ell_{\ensuremath{\boldsymbol{h}}}}}\sum_{j=0}^{M_{\ell_{\ensuremath{\boldsymbol{h}}}}-1}K_d\left({\ensuremath{\boldsymbol{t}}},\frac{j{\ensuremath{\boldsymbol{z}}}_{\ell_{\ensuremath{\boldsymbol{h}}}}}{M_{\ell_{\ensuremath{\boldsymbol{h}}}}}\right)\textnormal{e}^{2\pi\textnormal{i} j{\ensuremath{\boldsymbol{h}}}\cdot{\ensuremath{\boldsymbol{z}}}_{\ell_{\ensuremath{\boldsymbol{h}}}}/M_{\ell_{\ensuremath{\boldsymbol{h}}}}}\nonumber
\intertext{since $K_d$ is the reproducing kernel. Exploiting the linearity of the scalar product $\langle\circ,\circ\rangle_d$, cf. \eqref{eq:scalar_product}, yields}
\tau_{\ensuremath{\boldsymbol{h}}}({\ensuremath{\boldsymbol{t}}})&=-\sum_{\substack{{\ensuremath{\boldsymbol{k}}}\in\ensuremath{\mathbb{Z}}^d\setminus\{{\ensuremath{\boldsymbol{h}}}\}\\{\ensuremath{\boldsymbol{h}}}\cdot{\ensuremath{\boldsymbol{z}}}_{\ell_{\ensuremath{\boldsymbol{h}}}}\equiv{\ensuremath{\boldsymbol{k}}}\cdot{\ensuremath{\boldsymbol{z}}}_{\ell_{\ensuremath{\boldsymbol{h}}}}\imod{M_{\ell_{\ensuremath{\boldsymbol{h}}}}}}}\frac{\textnormal{e}^{2\pi\textnormal{i}{\ensuremath{\boldsymbol{k}}}\cdot{\ensuremath{\boldsymbol{t}}}}}{r_d({\ensuremath{\boldsymbol{k}}})}\label{eq:tau_h_in_I}
\end{align}
for all ${\ensuremath{\boldsymbol{h}}}\in I$.
For fixed ${\ensuremath{\boldsymbol{x}}}$, we apply the Cauchy--Schwarz inequality and obtain the estimate
\begin{align}
|(f-\operatorname{S}_I^\Lambda f)({\ensuremath{\boldsymbol{x}}})|=
\left|
\left\langle
f,\sum_{{\ensuremath{\boldsymbol{h}}}\in\ensuremath{\mathbb{Z}}^d}\tau_{\ensuremath{\boldsymbol{h}}} \textnormal{e}^{2\pi\textnormal{i}{\ensuremath{\boldsymbol{h}}}\cdot{\ensuremath{\boldsymbol{x}}}}
\right\rangle_d
\right|
\le\left\|f\,|\mathcal{H}_r(\ensuremath{\mathbb{T}}^d)\right\|\left\|\sum_{{\ensuremath{\boldsymbol{h}}}\in\ensuremath{\mathbb{Z}}^d}\tau_{\ensuremath{\boldsymbol{h}}}\textnormal{e}^{2\pi\textnormal{i}{\ensuremath{\boldsymbol{h}}}\cdot{\ensuremath{\boldsymbol{x}}}}\,\bigg.\bigg|\mathcal{H}_r(\ensuremath{\mathbb{T}}^d)\right\|,
\label{eq:wce_derivation}
\end{align}
where the functions $\tau_{\ensuremath{\boldsymbol{h}}}$ are defined by
\begin{align*}
\tau_{\ensuremath{\boldsymbol{h}}}({\ensuremath{\boldsymbol{t}}})&=
\begin{cases}
\text{given in \eqref{eq:tau_h_in_I}}, & {\ensuremath{\boldsymbol{h}}}\in I,\\
\int_{\ensuremath{\mathbb{T}}^d}K_d({\ensuremath{\boldsymbol{t}}},{\ensuremath{\boldsymbol{x}}})\textnormal{e}^{2\pi\textnormal{i}{\ensuremath{\boldsymbol{h}}}\cdot{\ensuremath{\boldsymbol{x}}}}\mathrm{d}
{\ensuremath{\boldsymbol{x}}}=\frac{\textnormal{e}^{2\pi\textnormal{i}{\ensuremath{\boldsymbol{h}}}\cdot{\ensuremath{\boldsymbol{t}}}}}{r_d({\ensuremath{\boldsymbol{h}}})}, & {\ensuremath{\boldsymbol{h}}}\in \ensuremath{\mathbb{Z}}^d\setminus I.
\end{cases}
\end{align*}
We stress that each $\tau_{\ensuremath{\boldsymbol{h}}}$ depends on the spatial variable ${\ensuremath{\boldsymbol{t}}}$ and that the norm in \eqref{eq:wce_derivation} is taken with respect to this variable.
Taking \eqref{eq:wce_derivation} into account, the worst case error, measured in the $L_\infty(\ensuremath{\mathbb{T}}^d)$ norm over all functions $f$ in the unit ball of $\mathcal{H}_r(\ensuremath{\mathbb{T}}^d)$, is bounded from above by
\begin{align*}
\sup_{\|f|\mathcal{H}_r(\ensuremath{\mathbb{T}}^d)\|\le 1}\|f-\operatorname{S}_I^\Lambda f|L_\infty(\ensuremath{\mathbb{T}}^d)\|
\le\sup_{{\ensuremath{\boldsymbol{x}}}\in\ensuremath{\mathbb{T}}^d}\left\|\sum_{{\ensuremath{\boldsymbol{h}}}\in\ensuremath{\mathbb{Z}}^d}\tau_{\ensuremath{\boldsymbol{h}}}\textnormal{e}^{2\pi\textnormal{i}{\ensuremath{\boldsymbol{h}}}\cdot{\ensuremath{\boldsymbol{x}}}}\bigg.\bigg|\mathcal{H}_r(\ensuremath{\mathbb{T}}^d)\right\|.
\end{align*}
For ease of notation, we write ${\ensuremath{\boldsymbol{h}}}\not\in I$ for ${\ensuremath{\boldsymbol{h}}}\in\ensuremath{\mathbb{Z}}^d\setminus I$ in the following.
We estimate
\begin{align}
\sup_{\|f|\mathcal{H}_r(\ensuremath{\mathbb{T}}^d)\|\le 1}&\|f-\operatorname{S}_I^\Lambda f|L_\infty(\ensuremath{\mathbb{T}}^d)\|^2
\le\sup_{{\ensuremath{\boldsymbol{x}}}\in\ensuremath{\mathbb{T}}^d}\left|\sum_{{\ensuremath{\boldsymbol{h}}}\in\ensuremath{\mathbb{Z}}^d}\sum_{{\ensuremath{\boldsymbol{k}}}\in\ensuremath{\mathbb{Z}}^d}\langle\tau_{\ensuremath{\boldsymbol{h}}},\tau_{\ensuremath{\boldsymbol{k}}}\rangle_d\textnormal{e}^{2\pi\textnormal{i}({\ensuremath{\boldsymbol{h}}}-{\ensuremath{\boldsymbol{k}}})\cdot{\ensuremath{\boldsymbol{x}}}}\right|\nonumber\\
&=
\sup_{{\ensuremath{\boldsymbol{x}}}\in\ensuremath{\mathbb{T}}^d}\Bigg|
\sum_{{\ensuremath{\boldsymbol{h}}}\not\in I}\sum_{{\ensuremath{\boldsymbol{k}}}\not\in I}\langle\tau_{\ensuremath{\boldsymbol{h}}},\tau_{\ensuremath{\boldsymbol{k}}}\rangle_d\textnormal{e}^{2\pi\textnormal{i}({\ensuremath{\boldsymbol{h}}}-{\ensuremath{\boldsymbol{k}}})\cdot{\ensuremath{\boldsymbol{x}}}}
+\sum_{{\ensuremath{\boldsymbol{h}}}\in I}\sum_{{\ensuremath{\boldsymbol{k}}}\in I}\langle\tau_{\ensuremath{\boldsymbol{h}}},\tau_{\ensuremath{\boldsymbol{k}}}\rangle_d\textnormal{e}^{2\pi\textnormal{i}({\ensuremath{\boldsymbol{h}}}-{\ensuremath{\boldsymbol{k}}})\cdot{\ensuremath{\boldsymbol{x}}}}\nonumber\\
&\quad+\sum_{{\ensuremath{\boldsymbol{h}}}\not\in I}\sum_{{\ensuremath{\boldsymbol{k}}}\in I}\langle\tau_{\ensuremath{\boldsymbol{h}}},\tau_{\ensuremath{\boldsymbol{k}}}\rangle_d\textnormal{e}^{2\pi\textnormal{i}({\ensuremath{\boldsymbol{h}}}-{\ensuremath{\boldsymbol{k}}})\cdot{\ensuremath{\boldsymbol{x}}}}
+\sum_{{\ensuremath{\boldsymbol{h}}}\in I}\sum_{{\ensuremath{\boldsymbol{k}}}\not\in I}\langle\tau_{\ensuremath{\boldsymbol{h}}},\tau_{\ensuremath{\boldsymbol{k}}}\rangle_d\textnormal{e}^{2\pi\textnormal{i}({\ensuremath{\boldsymbol{h}}}-{\ensuremath{\boldsymbol{k}}})\cdot{\ensuremath{\boldsymbol{x}}}}
\Bigg|\nonumber\\
&\le
\underbrace{\sum_{{\ensuremath{\boldsymbol{h}}}\not\in I}\sum_{{\ensuremath{\boldsymbol{k}}}\not\in I}|\langle\tau_{\ensuremath{\boldsymbol{h}}},\tau_{\ensuremath{\boldsymbol{k}}}\rangle_d|}_{=:\Sigma_{\bcancel{I}\bcancel{I}}}
+2\underbrace{\sum_{{\ensuremath{\boldsymbol{h}}}\not\in I}\sum_{{\ensuremath{\boldsymbol{k}}}\in I}|\langle\tau_{\ensuremath{\boldsymbol{h}}},\tau_{\ensuremath{\boldsymbol{k}}}\rangle_d|}_{=:\Sigma_{\bcancel{I}I}}
+\underbrace{\sum_{{\ensuremath{\boldsymbol{h}}}\in I}\sum_{{\ensuremath{\boldsymbol{k}}}\in I}|\langle\tau_{\ensuremath{\boldsymbol{h}}},\tau_{\ensuremath{\boldsymbol{k}}}\rangle_d|}_{=:\Sigma_{I I}}
\label{eq:split_wceinfty}.
\end{align}
We individually treat the summands $\Sigma_{\bcancel{I}\bcancel{I}}$, $\Sigma_{\bcancel{I}I}$, and $\Sigma_{I I}$
and start with the simplest case.
Formula \eqref{eq:scalar_product} implies
\begin{align*}
\langle\tau_{\ensuremath{\boldsymbol{h}}},\tau_{\ensuremath{\boldsymbol{k}}}\rangle_d=\begin{cases}
0,&{\ensuremath{\boldsymbol{h}}}\neq{\ensuremath{\boldsymbol{k}}}\\
r_d({\ensuremath{\boldsymbol{h}}})^{-1},&{\ensuremath{\boldsymbol{h}}}={\ensuremath{\boldsymbol{k}}}
\end{cases}
\end{align*}
for ${\ensuremath{\boldsymbol{h}}}\not\in I$ and ${\ensuremath{\boldsymbol{k}}}\not\in I$, which directly yields
\begin{align}
\Sigma_{\bcancel{I}\bcancel{I}}=
\sum_{{\ensuremath{\boldsymbol{h}}}\not\in I}\sum_{{\ensuremath{\boldsymbol{k}}}\not\in I}|\langle\tau_{\ensuremath{\boldsymbol{h}}},\tau_{\ensuremath{\boldsymbol{k}}}\rangle_d|
=\sum_{{\ensuremath{\boldsymbol{h}}}\in\ensuremath{\mathbb{Z}}^d\setminus I}r_d({\ensuremath{\boldsymbol{h}}})^{-1}.\label{eq:trunc_error}
\end{align}
We call the term in \eqref{eq:trunc_error} the \emph{worst case truncation error}.
The strategy of the following considerations is to
estimate the terms $\Sigma_{\bcancel{I}I}$ and $\Sigma_{I I}$
by terms in the worst case truncation error $\Sigma_{\bcancel{I}\bcancel{I}}$.
To this end, we analyze \eqref{eq:tau_h_in_I} in more detail using
specific Kronecker delta functions.
\begin{definition}
For each ${\ensuremath{\boldsymbol{k}}}\in\ensuremath{\mathbb{Z}}^d$, each multiple rank-1 lattice $\Lambda({\ensuremath{\boldsymbol{z}}}_1,M_1,\ldots,{\ensuremath{\boldsymbol{z}}}_L,M_L)$, and each $\ell\in\{1,\ldots,L\}$, we define the Kronecker delta functions $\delta_{\ensuremath{\boldsymbol{k}}}^{(\ell)}\colon \ensuremath{\mathbb{Z}}^d\to\{0,1\}$ by
$$\delta_{\ensuremath{\boldsymbol{k}}}^{(\ell)}({\ensuremath{\boldsymbol{h}}})=
\begin{cases}
1, & {\ensuremath{\boldsymbol{k}}}\in I_\ell,\;{\ensuremath{\boldsymbol{k}}}\neq{\ensuremath{\boldsymbol{h}}}\in\ensuremath{\mathbb{Z}}^d,\;\text{and }{\ensuremath{\boldsymbol{h}}}\cdot{\ensuremath{\boldsymbol{z}}}_\ell\equiv{\ensuremath{\boldsymbol{k}}}\cdot{\ensuremath{\boldsymbol{z}}}_\ell\imod{M_\ell};\\
0, & \text{otherwise}.
\end{cases}
$$
\end{definition}
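As a quick illustration of the definition, the following Python sketch evaluates $\delta_{\ensuremath{\boldsymbol{k}}}^{(\ell)}$ for a single rank-1 lattice; all concrete frequencies and lattice parameters are example values only.

```python
# Sketch: evaluate the Kronecker delta function delta_k^{(ell)} for a single
# rank-1 lattice (z, M); all concrete values below are illustrative only.

def delta(k, h, z, M, I_ell):
    """delta_k^{(ell)}(h): 1 iff k lies in I_ell, h differs from k, and h
    aliases to k on the rank-1 lattice, i.e. h.z = k.z (mod M)."""
    if k not in I_ell or h == k:
        return 0
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return 1 if dot(h, z) % M == dot(k, z) % M else 0

I_ell = {(0, 0)}
print(delta((0, 0), (5, 0), (1, 2), 5, I_ell))  # prints 1: 5*1 = 5 = 0 (mod 5)
print(delta((0, 0), (0, 0), (1, 2), 5, I_ell))  # prints 0: h equals k
```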
We collect some helpful properties of the introduced Kronecker delta functions.
\begin{lemma}\label{lem:properties_of_delta}
Let a frequency index set $I\subset\ensuremath{\mathbb{Z}}^d$, $|I|<\infty$, and a multiple rank-1 lattice $\Lambda({\ensuremath{\boldsymbol{z}}}_1,M_1,\ldots,{\ensuremath{\boldsymbol{z}}}_L,M_L)$ be given.
The frequency sets $I_\ell$, $\ell=1,\ldots,L$, are determined as specified in \eqref{eq:def_Iell}.
Then the following hold.
\begin{itemize}
\item
For each ${\ensuremath{\boldsymbol{k}}}\in I_\ell$, we characterize the frequency set of aliasing Fourier coefficients by
\begin{align}
\left\{{\ensuremath{\boldsymbol{h}}}\in\ensuremath{\mathbb{Z}}^d\setminus\{{\ensuremath{\boldsymbol{k}}}\}\colon{\ensuremath{\boldsymbol{h}}}\cdot{\ensuremath{\boldsymbol{z}}}_\ell\equiv{\ensuremath{\boldsymbol{k}}}\cdot{\ensuremath{\boldsymbol{z}}}_\ell\imod{M_\ell}\right\}
=\{{\ensuremath{\boldsymbol{h}}}\in\ensuremath{\mathbb{Z}}^d\colon
\delta_{\ensuremath{\boldsymbol{k}}}^{(\ell)}({\ensuremath{\boldsymbol{h}}})=1\},\label{eq:aliasing_set_delta}
\end{align}
using the above defined Kronecker delta functions $\delta_{\ensuremath{\boldsymbol{k}}}^{(\ell)}$.
\item
Furthermore, the equality
\begin{equation}
\{{\ensuremath{\boldsymbol{h}}}\in\ensuremath{\mathbb{Z}}^d\colon
\delta_{\ensuremath{\boldsymbol{k}}}^{(\ell)}({\ensuremath{\boldsymbol{h}}})=1\}=\emptyset\quad\text{for each}\quad
{\ensuremath{\boldsymbol{k}}}\in\ensuremath{\mathbb{Z}}^d\setminus I_\ell\label{equ:empty_aliasing}
\end{equation}
holds.
\item
For
fixed ${\ensuremath{\boldsymbol{k}}}\in\ensuremath{\mathbb{Z}}^d$ and fixed $\ell\in\{1,\ldots,L\}$ we observe
\begin{align}
\{{\ensuremath{\boldsymbol{h}}}\in\ensuremath{\mathbb{Z}}^d\colon
\delta_{\ensuremath{\boldsymbol{k}}}^{(\ell)}({\ensuremath{\boldsymbol{h}}})=1\}\cap I=\emptyset.
\label{eq:aliasing_within_I}
\end{align}
\item
Moreover, for each fixed $\ell\in\{1,\ldots,L\}$ and each fixed ${\ensuremath{\boldsymbol{h}}}\in\ensuremath{\mathbb{Z}}^d$ we have
\begin{align}
\sum_{{\ensuremath{\boldsymbol{k}}}\in I_\ell}\delta^{(\ell)}_{\ensuremath{\boldsymbol{k}}}({\ensuremath{\boldsymbol{h}}})\in\{0,1\},\label{eq:aliasing_summation}
\end{align}
which implies, for fixed ${\ensuremath{\boldsymbol{h}}}\in\ensuremath{\mathbb{Z}}^d$,
\begin{align}
0\le\sum_{{\ensuremath{\boldsymbol{k}}}\in I}\sum_{\ell=1}^L\delta^{(\ell)}_{\ensuremath{\boldsymbol{k}}}({\ensuremath{\boldsymbol{h}}})=\sum_{{\ensuremath{\boldsymbol{k}}}\in \bigcupdot_{\ell=1}^L I_\ell}\delta_{\ensuremath{\boldsymbol{k}}}^{(\ell_{\ensuremath{\boldsymbol{k}}})}({\ensuremath{\boldsymbol{h}}})=\sum_{\ell=1}^L\sum_{{\ensuremath{\boldsymbol{k}}}\in I_\ell}\delta^{(\ell)}_{\ensuremath{\boldsymbol{k}}}({\ensuremath{\boldsymbol{h}}})\le L\label{eq:aliasing_summation_L}.
\end{align}
\end{itemize}
\end{lemma}
\begin{proof}
For ${\ensuremath{\boldsymbol{k}}}\in\ensuremath{\mathbb{Z}}^d\setminus I_\ell$ the Kronecker delta function $\delta_{\ensuremath{\boldsymbol{k}}}^{(\ell)}$ maps to zero for each ${\ensuremath{\boldsymbol{h}}}\in\ensuremath{\mathbb{Z}}^d$. Accordingly, we observe \eqref{equ:empty_aliasing}.
On the other hand, for ${\ensuremath{\boldsymbol{k}}}\in I_\ell$ the function value $\delta_{\ensuremath{\boldsymbol{k}}}^{(\ell)}({\ensuremath{\boldsymbol{h}}})$ is one exactly for those ${\ensuremath{\boldsymbol{h}}}\in\ensuremath{\mathbb{Z}}^d\setminus\{{\ensuremath{\boldsymbol{k}}}\}$, that fulfill the aliasing formula for $\Lambda({\ensuremath{\boldsymbol{z}}}_\ell,M_\ell)$, namely
${\ensuremath{\boldsymbol{h}}}\cdot{\ensuremath{\boldsymbol{z}}}_\ell\equiv{\ensuremath{\boldsymbol{k}}}\cdot{\ensuremath{\boldsymbol{z}}}_\ell\imod{M_\ell}$, which characterizes the set at the left hand side in \eqref{eq:aliasing_set_delta}.
Due to the definition of the set
$$
I_\ell:=\left\{{\ensuremath{\boldsymbol{h}}}\in I\setminus\bigcupdot_{j=1}^{\ell-1} I_j\colon {\ensuremath{\boldsymbol{h}}}\cdot{\ensuremath{\boldsymbol{z}}}_\ell\not\equiv {\ensuremath{\boldsymbol{k}}}\cdot{\ensuremath{\boldsymbol{z}}}_\ell\imod{M_\ell}\text{ for all }{\ensuremath{\boldsymbol{k}}}\in I\setminus\{{\ensuremath{\boldsymbol{h}}}\}\right\}
$$
each ${\ensuremath{\boldsymbol{k}}}\in I_\ell$ has no aliasing element in $I$. Accordingly, for ${\ensuremath{\boldsymbol{k}}}\in I_\ell$ the equality in \eqref{eq:aliasing_within_I} holds. For ${\ensuremath{\boldsymbol{k}}}\in\ensuremath{\mathbb{Z}}^d\setminus I_\ell$ the equality also holds, since the set characterized by the Kronecker delta functions is already the empty set.
In order to prove \eqref{eq:aliasing_summation}, we have to show that each ${\ensuremath{\boldsymbol{h}}}\in\ensuremath{\mathbb{Z}}^d$ aliases to at most one
${\ensuremath{\boldsymbol{k}}}\in I_\ell$.
We assume the contrary, i.e., that there exist ${\ensuremath{\boldsymbol{h}}}\in\ensuremath{\mathbb{Z}}^d$ and ${\ensuremath{\boldsymbol{k}}},{\ensuremath{\boldsymbol{k}}}'\in I_\ell\subset I$, ${\ensuremath{\boldsymbol{k}}}\neq{\ensuremath{\boldsymbol{k}}}'$, with $\delta_{\ensuremath{\boldsymbol{k}}}^{(\ell)}({\ensuremath{\boldsymbol{h}}})=1=\delta_{{\ensuremath{\boldsymbol{k}}}'}^{(\ell)}({\ensuremath{\boldsymbol{h}}})$, which implies ${\ensuremath{\boldsymbol{k}}}\neq{\ensuremath{\boldsymbol{h}}}\neq{\ensuremath{\boldsymbol{k}}}'$ and
${\ensuremath{\boldsymbol{k}}}\cdot{\ensuremath{\boldsymbol{z}}}_\ell\equiv{\ensuremath{\boldsymbol{h}}}\cdot{\ensuremath{\boldsymbol{z}}}_\ell\equiv{\ensuremath{\boldsymbol{k}}}'\cdot{\ensuremath{\boldsymbol{z}}}_\ell\imod{M_\ell}$. Accordingly, there is an aliasing element ${\ensuremath{\boldsymbol{k}}}'\in I$ for ${\ensuremath{\boldsymbol{k}}}\in I_\ell$,
which prohibits ${\ensuremath{\boldsymbol{k}}}$ from belonging to $I_\ell$ due to its definition, a contradiction. Consequently, for each ${\ensuremath{\boldsymbol{h}}}\in\ensuremath{\mathbb{Z}}^d$ at most one $\delta_{\ensuremath{\boldsymbol{k}}}^{(\ell)}({\ensuremath{\boldsymbol{h}}})$, ${\ensuremath{\boldsymbol{k}}}\in I_\ell$ and $\ell$ fixed, can be nonzero. Moreover, for $|I_\ell|<M_\ell$ there exist $M_\ell-|I_\ell|$ of the disjoint sets
$$\{{\ensuremath{\boldsymbol{h}}}\in\ensuremath{\mathbb{Z}}^d\colon j\equiv{\ensuremath{\boldsymbol{h}}}\cdot{\ensuremath{\boldsymbol{z}}}_\ell\imod{M_\ell}\}\qquad j\in\{0,\ldots,M_\ell-1\}$$
that do not contain an element from $I_\ell$. For elements ${\ensuremath{\boldsymbol{h}}}\in\ensuremath{\mathbb{Z}}^d$ of these specific sets we observe
$\sum_{{\ensuremath{\boldsymbol{k}}}\in I_\ell}\delta^{(\ell)}_{\ensuremath{\boldsymbol{k}}}({\ensuremath{\boldsymbol{h}}})=0$.
According to that, the term $\sum_{{\ensuremath{\boldsymbol{k}}}\in I_\ell}\delta^{(\ell)}_{\ensuremath{\boldsymbol{k}}}({\ensuremath{\boldsymbol{h}}})$ is a non-negative integer that is at most one.
\newline
Finally, we prove \eqref{eq:aliasing_summation_L}.
For ${\ensuremath{\boldsymbol{k}}}\in I\setminus\bigcupdot_{\ell=1}^L I_\ell$
we observe $\delta^{(\ell)}_{\ensuremath{\boldsymbol{k}}}({\ensuremath{\boldsymbol{h}}})=0$ for all $\ell\in\{1,\ldots,L\}$; for ${\ensuremath{\boldsymbol{k}}}\in \bigcupdot_{\ell=1}^L I_\ell$, we have
$\delta^{(\ell)}_{\ensuremath{\boldsymbol{k}}}({\ensuremath{\boldsymbol{h}}})=0$ for all $\ell\in\{1,\ldots,L\}\setminus\{\ell_{\ensuremath{\boldsymbol{k}}}\}$. This
justifies the first equality in \eqref{eq:aliasing_summation_L}. The same observation yields the second equality
\begin{align*}
\sum_{{\ensuremath{\boldsymbol{k}}}\in I}\sum_{j=1}^L\delta_{\ensuremath{\boldsymbol{k}}}^{(j)}({\ensuremath{\boldsymbol{h}}})
=\sum_{\ell=1}^L\sum_{{\ensuremath{\boldsymbol{k}}}\in I_\ell}\sum_{j=1}^L\delta_{\ensuremath{\boldsymbol{k}}}^{(j)}({\ensuremath{\boldsymbol{h}}})
=\sum_{\ell=1}^L\sum_{{\ensuremath{\boldsymbol{k}}}\in I_\ell}\delta_{\ensuremath{\boldsymbol{k}}}^{(\ell)}({\ensuremath{\boldsymbol{h}}}).
\end{align*}
The inequalities follow from \eqref{eq:aliasing_summation}.
\end{proof}
The introduced Kronecker delta functions $\delta_{\ensuremath{\boldsymbol{k}}}^{(\ell)}$ allow for concise
characterizations of the aliasing effects of the sampling method under consideration.
We exploit the observations of Lemma~\ref{lem:properties_of_delta} in order to estimate
the terms $\Sigma_{\bcancel{I}I}$ and $\Sigma_{II}$.
\begin{lemma}\label{lem:est_sum_cI_I}
Let $\Lambda({\ensuremath{\boldsymbol{z}}}_1,M_1,\ldots,{\ensuremath{\boldsymbol{z}}}_L,M_L)$ be a multiple rank-1 lattice
such that $I=\bigcupdot_{\ell=1}^{L}I_\ell$, where $I_\ell$ is defined in \eqref{eq:def_Iell}.
Then we have
$$\Sigma_{\bcancel{I}I}\le L\Sigma_{\bcancel{I}\bcancel{I}}.$$
\end{lemma}
\begin{proof}
Let ${\ensuremath{\boldsymbol{h}}}\not\in I$, ${\ensuremath{\boldsymbol{k}}}\in I$ be given. Due to $I=\bigcupdot_{\ell=1}^{L}I_\ell$
there exists a unique $\ell_{\ensuremath{\boldsymbol{k}}}$, cf. \eqref{eq:def_lh} and \eqref{eq:def_lh1}, such that ${\ensuremath{\boldsymbol{k}}}\in I_{\ell_{\ensuremath{\boldsymbol{k}}}}$.
Accordingly, we observe
\begin{align}
\langle\tau_{\ensuremath{\boldsymbol{h}}},\tau_{\ensuremath{\boldsymbol{k}}}\rangle_d&=
\left\langle
\frac{\textnormal{e}^{2\pi\textnormal{i}{\ensuremath{\boldsymbol{h}}}\cdot\circ}}{r_d({\ensuremath{\boldsymbol{h}}})},
-\sum_{\substack{{\ensuremath{\boldsymbol{k}}}'\in\ensuremath{\mathbb{Z}}^d\setminus\{{\ensuremath{\boldsymbol{k}}}\}\\{\ensuremath{\boldsymbol{k}}}'\cdot{\ensuremath{\boldsymbol{z}}}_{\ell_{\ensuremath{\boldsymbol{k}}}}\equiv{\ensuremath{\boldsymbol{k}}}\cdot{\ensuremath{\boldsymbol{z}}}_{\ell_{\ensuremath{\boldsymbol{k}}}}\imod{M_{\ell_{\ensuremath{\boldsymbol{k}}}}}}}
\frac{\textnormal{e}^{2\pi\textnormal{i}{\ensuremath{\boldsymbol{k}}}'\cdot\circ}}{r_d({\ensuremath{\boldsymbol{k}}}')}
\right\rangle_d\nonumber\\
&=
-\sum_{\substack{{\ensuremath{\boldsymbol{k}}}'\in\ensuremath{\mathbb{Z}}^d\setminus\{{\ensuremath{\boldsymbol{k}}}\}\\{\ensuremath{\boldsymbol{k}}}'\cdot{\ensuremath{\boldsymbol{z}}}_{\ell_{\ensuremath{\boldsymbol{k}}}}\equiv{\ensuremath{\boldsymbol{k}}}\cdot{\ensuremath{\boldsymbol{z}}}_{\ell_{\ensuremath{\boldsymbol{k}}}}\imod{M_{\ell_{\ensuremath{\boldsymbol{k}}}}}}}
\left\langle
\frac{\textnormal{e}^{2\pi\textnormal{i}{\ensuremath{\boldsymbol{h}}}\cdot\circ}}{r_d({\ensuremath{\boldsymbol{h}}})},
\frac{\textnormal{e}^{2\pi\textnormal{i}{\ensuremath{\boldsymbol{k}}}'\cdot\circ}}{r_d({\ensuremath{\boldsymbol{k}}}')}
\right\rangle_d
=
-\frac{\delta_{{\ensuremath{\boldsymbol{k}}}}^{({\ell_{\ensuremath{\boldsymbol{k}}}})}({\ensuremath{\boldsymbol{h}}})}{r_d({\ensuremath{\boldsymbol{h}}})}\label{eq:sp_InI}.
\end{align}
Consequently, we have
\begin{align*}
\Sigma_{\bcancel{I}I}&:=
\sum_{{\ensuremath{\boldsymbol{h}}}\not\in I}\sum_{{\ensuremath{\boldsymbol{k}}}\in I}|\langle\tau_{\ensuremath{\boldsymbol{h}}},\tau_{\ensuremath{\boldsymbol{k}}}\rangle_d|=
\sum_{{\ensuremath{\boldsymbol{h}}}\not\in I}r_d({\ensuremath{\boldsymbol{h}}})^{-1}\sum_{\ell=1}^L\sum_{{\ensuremath{\boldsymbol{k}}}\in I_\ell}\delta_{{\ensuremath{\boldsymbol{k}}}}^{(\ell)}({\ensuremath{\boldsymbol{h}}})
\stackrel{\eqref{eq:aliasing_summation_L}}{\le}
L\Sigma_{\bcancel{I}\bcancel{I}}.
\end{align*}
\end{proof}
\begin{lemma}\label{lem:est_sum_I_I}
Let $\Lambda({\ensuremath{\boldsymbol{z}}}_1,M_1,\ldots,{\ensuremath{\boldsymbol{z}}}_L,M_L)$ be a multiple rank-1 lattice
such that $I=\bigcupdot_{\ell=1}^{L}I_\ell$, where $I_\ell$ is defined in \eqref{eq:def_Iell}.
Then we have
$$\Sigma_{I I}\le L^2\Sigma_{\bcancel{I}\bcancel{I}}.$$
\end{lemma}
\begin{proof}
Let ${\ensuremath{\boldsymbol{h}}},{\ensuremath{\boldsymbol{k}}}\in I$ be given.
Due to $I=\bigcupdot_{\ell=1}^{L}I_\ell$
there exist unique $\ell_{\ensuremath{\boldsymbol{k}}}$, $\ell_{\ensuremath{\boldsymbol{h}}}$, cf. \eqref{eq:def_lh} and \eqref{eq:def_lh1}, such that ${\ensuremath{\boldsymbol{k}}}\in I_{\ell_{\ensuremath{\boldsymbol{k}}}}$ and
${\ensuremath{\boldsymbol{h}}}\in I_{\ell_{\ensuremath{\boldsymbol{h}}}}$.
We observe for a single scalar product
\begin{align*}
\langle\tau_{\ensuremath{\boldsymbol{h}}},\tau_{\ensuremath{\boldsymbol{k}}}\rangle_d&=
\left\langle
-\sum_{\substack{{\ensuremath{\boldsymbol{h}}}'\in\ensuremath{\mathbb{Z}}^d\setminus\{{\ensuremath{\boldsymbol{h}}}\}\\{\ensuremath{\boldsymbol{h}}}'\cdot{\ensuremath{\boldsymbol{z}}}_{\ell_{\ensuremath{\boldsymbol{h}}}}\equiv{\ensuremath{\boldsymbol{h}}}\cdot{\ensuremath{\boldsymbol{z}}}_{\ell_{\ensuremath{\boldsymbol{h}}}}\imod{M_{\ell_{\ensuremath{\boldsymbol{h}}}}}}}
\frac{\textnormal{e}^{2\pi\textnormal{i}{\ensuremath{\boldsymbol{h}}}'\cdot\circ}}{r_d({\ensuremath{\boldsymbol{h}}}')}
,
-\sum_{\substack{{\ensuremath{\boldsymbol{k}}}'\in\ensuremath{\mathbb{Z}}^d\setminus\{{\ensuremath{\boldsymbol{k}}}\}\\{\ensuremath{\boldsymbol{k}}}'\cdot{\ensuremath{\boldsymbol{z}}}_{\ell_{\ensuremath{\boldsymbol{k}}}}\equiv{\ensuremath{\boldsymbol{k}}}\cdot{\ensuremath{\boldsymbol{z}}}_{\ell_{\ensuremath{\boldsymbol{k}}}}\imod{M_{\ell_{\ensuremath{\boldsymbol{k}}}}}}}
\frac{\textnormal{e}^{2\pi\textnormal{i}{\ensuremath{\boldsymbol{k}}}'\cdot\circ}}{r_d({\ensuremath{\boldsymbol{k}}}')}
\right\rangle_d
\\
&=
\sum_{\substack{{\ensuremath{\boldsymbol{h}}}'\in\ensuremath{\mathbb{Z}}^d\setminus\{{\ensuremath{\boldsymbol{h}}}\}\\{\ensuremath{\boldsymbol{h}}}'\cdot{\ensuremath{\boldsymbol{z}}}_{\ell_{\ensuremath{\boldsymbol{h}}}}\equiv{\ensuremath{\boldsymbol{h}}}\cdot{\ensuremath{\boldsymbol{z}}}_{\ell_{\ensuremath{\boldsymbol{h}}}}\imod{M_{\ell_{\ensuremath{\boldsymbol{h}}}}}}}
\left\langle
\frac{\textnormal{e}^{2\pi\textnormal{i}{\ensuremath{\boldsymbol{h}}}'\cdot\circ}}{r_d({\ensuremath{\boldsymbol{h}}}')}
,
\sum_{\substack{{\ensuremath{\boldsymbol{k}}}'\in\ensuremath{\mathbb{Z}}^d\setminus\{{\ensuremath{\boldsymbol{k}}}\}\\{\ensuremath{\boldsymbol{k}}}'\cdot{\ensuremath{\boldsymbol{z}}}_{\ell_{\ensuremath{\boldsymbol{k}}}}\equiv{\ensuremath{\boldsymbol{k}}}\cdot{\ensuremath{\boldsymbol{z}}}_{\ell_{\ensuremath{\boldsymbol{k}}}}\imod{M_{\ell_{\ensuremath{\boldsymbol{k}}}}}}}
\frac{\textnormal{e}^{2\pi\textnormal{i}{\ensuremath{\boldsymbol{k}}}'\cdot\circ}}{r_d({\ensuremath{\boldsymbol{k}}}')}
\right\rangle_d.
\intertext{Since we have ${\ensuremath{\boldsymbol{h}}}'\in\ensuremath{\mathbb{Z}}^d\setminus I$ for each ${\ensuremath{\boldsymbol{h}}}'$ in the equality above, we apply \eqref{eq:sp_InI} to each of the summands and obtain}
\langle\tau_{\ensuremath{\boldsymbol{h}}},\tau_{\ensuremath{\boldsymbol{k}}}\rangle_d&=
\sum_{\substack{{\ensuremath{\boldsymbol{h}}}'\in\ensuremath{\mathbb{Z}}^d\setminus\{{\ensuremath{\boldsymbol{h}}}\}\\{\ensuremath{\boldsymbol{h}}}'\cdot{\ensuremath{\boldsymbol{z}}}_{\ell_{\ensuremath{\boldsymbol{h}}}}\equiv{\ensuremath{\boldsymbol{h}}}\cdot{\ensuremath{\boldsymbol{z}}}_{\ell_{\ensuremath{\boldsymbol{h}}}}\imod{M_{\ell_{\ensuremath{\boldsymbol{h}}}}}}}
\frac{\delta_{{\ensuremath{\boldsymbol{k}}}}^{({\ell_{\ensuremath{\boldsymbol{k}}}})}({\ensuremath{\boldsymbol{h}}}')}{r_d({\ensuremath{\boldsymbol{h}}}')}\\
&=
\sum_{\substack{{\ensuremath{\boldsymbol{p}}}\in\ensuremath{\mathbb{Z}}^d}}
\frac{\delta_{{\ensuremath{\boldsymbol{h}}}}^{({\ell_{\ensuremath{\boldsymbol{h}}}})}({\ensuremath{\boldsymbol{p}}})\,\delta_{{\ensuremath{\boldsymbol{k}}}}^{({\ell_{\ensuremath{\boldsymbol{k}}}})}({\ensuremath{\boldsymbol{p}}})}{r_d({\ensuremath{\boldsymbol{p}}})}
=
\sum_{\substack{{\ensuremath{\boldsymbol{p}}}\not\in I}}
\frac{\delta_{{\ensuremath{\boldsymbol{h}}}}^{({\ell_{\ensuremath{\boldsymbol{h}}}})}({\ensuremath{\boldsymbol{p}}})\,\delta_{{\ensuremath{\boldsymbol{k}}}}^{({\ell_{\ensuremath{\boldsymbol{k}}}})}({\ensuremath{\boldsymbol{p}}})}{r_d({\ensuremath{\boldsymbol{p}}})},
\end{align*}
where the last equality holds due to the equality $\delta_{\ensuremath{\boldsymbol{h}}}^{(\ell_{\ensuremath{\boldsymbol{h}}})}({\ensuremath{\boldsymbol{p}}})=0$ for ${\ensuremath{\boldsymbol{p}}}\in I$. In particular, all scalar products $\langle\tau_{\ensuremath{\boldsymbol{h}}},\tau_{\ensuremath{\boldsymbol{k}}}\rangle_d$, ${\ensuremath{\boldsymbol{h}}},{\ensuremath{\boldsymbol{k}}}\in I$, are non-negative, which allows us to omit the absolute values below.
Summing up these terms yields
\begin{align*}
\Sigma_{I I}&:=\sum_{{\ensuremath{\boldsymbol{h}}}\in I}\sum_{{\ensuremath{\boldsymbol{k}}}\in I}|\langle\tau_{\ensuremath{\boldsymbol{h}}},\tau_{\ensuremath{\boldsymbol{k}}}\rangle_d|=
\sum_{{\ensuremath{\boldsymbol{h}}}\in I}\sum_{{\ensuremath{\boldsymbol{k}}}\in I}\langle\tau_{\ensuremath{\boldsymbol{h}}},\tau_{\ensuremath{\boldsymbol{k}}}\rangle_d
=\sum_{{\ensuremath{\boldsymbol{h}}}\in I}\sum_{{\ensuremath{\boldsymbol{k}}}\in I}
\sum_{\substack{{\ensuremath{\boldsymbol{p}}}\not\in I}}
\frac{\delta_{{\ensuremath{\boldsymbol{h}}}}^{({\ell_{\ensuremath{\boldsymbol{h}}}})}({\ensuremath{\boldsymbol{p}}})\,\delta_{{\ensuremath{\boldsymbol{k}}}}^{({\ell_{\ensuremath{\boldsymbol{k}}}})}({\ensuremath{\boldsymbol{p}}})}{r_d({\ensuremath{\boldsymbol{p}}})}
\\
&=\sum_{\substack{{\ensuremath{\boldsymbol{p}}}\not\in I}}
r_d({\ensuremath{\boldsymbol{p}}})^{-1}
\sum_{{\ensuremath{\boldsymbol{h}}}\in I}\delta_{{\ensuremath{\boldsymbol{h}}}}^{({\ell_{\ensuremath{\boldsymbol{h}}}})}({\ensuremath{\boldsymbol{p}}})
\sum_{{\ensuremath{\boldsymbol{k}}}\in I}\delta_{{\ensuremath{\boldsymbol{k}}}}^{({\ell_{\ensuremath{\boldsymbol{k}}}})}({\ensuremath{\boldsymbol{p}}})\\
&
=
\sum_{\substack{{\ensuremath{\boldsymbol{p}}}\not\in I}}
r_d({\ensuremath{\boldsymbol{p}}})^{-1}
\sum_{\ell_1=1}^{L}
\sum_{{\ensuremath{\boldsymbol{h}}}\in I_{\ell_1}}\delta_{{\ensuremath{\boldsymbol{h}}}}^{({\ell_1})}({\ensuremath{\boldsymbol{p}}})
\sum_{\ell_2=1}^{L}
\sum_{{\ensuremath{\boldsymbol{k}}}\in I_{\ell_2}}\delta_{{\ensuremath{\boldsymbol{k}}}}^{({\ell_2})}({\ensuremath{\boldsymbol{p}}})
\stackrel{\eqref{eq:aliasing_summation_L}}{\le}
L^2\Sigma_{\bcancel{I}\bcancel{I}}.
\end{align*}
\end{proof}
In summary, we obtain the main result of this paper.
\begin{theorem}\label{cor:gen_framework_main_result}
Let $I\subset\ensuremath{\mathbb{Z}}^d$ and $\Lambda({\ensuremath{\boldsymbol{z}}}_1,M_1,\ldots,{\ensuremath{\boldsymbol{z}}}_L,M_L)$ be a multiple rank-1 lattice such that $I=\bigcupdot_{\ell=1}^L I_\ell$, where $I_\ell$ is defined in \eqref{eq:def_Iell}.
Then we have
$$
\sup_{\|f|\mathcal{H}_r(\ensuremath{\mathbb{T}}^d)\|\le 1}\|f-\operatorname{S}_I^\Lambda f|L_\infty(\ensuremath{\mathbb{T}}^d)\|\le(L+1)\sqrt{\Sigma_{\bcancel{I}\bcancel{I}}}.$$
\end{theorem}
\begin{proof}
Lemmas \ref{lem:est_sum_cI_I} and \ref{lem:est_sum_I_I} together with \eqref{eq:split_wceinfty} yield the assertion.
\end{proof}
Roughly speaking, the worst case sampling error $\sup_{\|f|\mathcal{H}_r(\ensuremath{\mathbb{T}}^d)\|\le 1}\|f-\operatorname{S}_I^\Lambda f|L_\infty(\ensuremath{\mathbb{T}}^d)\|$ of the considered sampling method is bounded by the product of the square root of the worst case truncation error $\Sigma_{\bcancel{I}\bcancel{I}}$, cf. \eqref{eq:trunc_error}, and the factor $L+1$, where $L$ is the
number of rank-1 lattices that need to be joined in order to achieve $I=\bigcupdot I_\ell$.
\begin{remark}\label{rem:averaging_approx_fc}
In \cite{KaVo19}, a slight modification of Algorithm~\ref{alg:compute_S_I_Lambda} is presented as Algorithm~2 and the authors estimate the asymptotic sampling rates for more specific approximation settings. For practical applications, that
algorithm seems preferable since the approximated Fourier coefficients are computed as averages over several rank-1 lattices, which may prove beneficial in real-world applications since the averaging tends to reduce the aliasing error. However,
we decided to consider the simpler Algorithm~\ref{alg:compute_S_I_Lambda} in order to avoid the unnecessarily complicated calculations that the averaging process causes. Nevertheless, the proof strategy presented here also succeeds for
the slightly more complicated algorithm, at the cost of additional technical effort.
\end{remark}
\section{Applications}
\subsection{Tractability}\label{sec:tractability}
The last section established the general framework. In this section, we apply
its results to a specific approximation problem.
\begin{sloppypar}
Similar to the considerations in \cite{KuWaWo09}, we define the reproducing kernel $K_d$ for the weighted
Korobov space $\mathcal{H}_{\alpha,{\ensuremath{\boldsymbol{\gamma}}}_d}(\ensuremath{\mathbb{T}}^d)$ with smoothness parameter $\alpha>1$ as
$$
K_d({\ensuremath{\boldsymbol{x}}},{\ensuremath{\boldsymbol{y}}})=\sum_{{\ensuremath{\boldsymbol{h}}}\in\ensuremath{\mathbb{Z}}^d}\frac{\textnormal{e}^{2\pi\textnormal{i}{\ensuremath{\boldsymbol{h}}}\cdot({\ensuremath{\boldsymbol{x}}}-{\ensuremath{\boldsymbol{y}}})}}{r_d(\alpha,{\ensuremath{\boldsymbol{\gamma}}}_d,{\ensuremath{\boldsymbol{h}}})},
$$
where for each $d$ the vector ${\ensuremath{\boldsymbol{\gamma}}}_d=(\gamma_{d,1},\ldots,\gamma_{d,d})$ of positive weights satisfies
\begin{align*}
1\ge\gamma_{d,1}\ge\ldots\ge\gamma_{d,d}>0,
\end{align*}
and the weight function $r_d$ is defined as
$$
r_d(\alpha,{\ensuremath{\boldsymbol{\gamma}}}_d,{\ensuremath{\boldsymbol{h}}})=\prod_{j=1}^dr(\alpha,\gamma_{d,j},h_j)
$$
with
$
r(\alpha,\gamma_{d,j},h_j)=\max\left(1,\gamma_{d,j}^{-1}\,|h_j|^\alpha\right).
$
Reasonable frequency sets $I$ are constructed by collecting all frequencies ${\ensuremath{\boldsymbol{h}}}\in\ensuremath{\mathbb{Z}}^d$ where the
weight function $r_d(\alpha,{\ensuremath{\boldsymbol{\gamma}}}_d,{\ensuremath{\boldsymbol{h}}})$ is small, i.e., where the reciprocal $r_d(\alpha,{\ensuremath{\boldsymbol{\gamma}}}_d,{\ensuremath{\boldsymbol{h}}})^{-1}$
is large, which yields the smallest possible worst case truncation error $\Sigma_{\bcancel{I}\bcancel{I}}$ with respect to the number $|I|$ of frequencies used.
We define
\begin{equation}
A_d(N):=\left\{{\ensuremath{\boldsymbol{h}}}\in\ensuremath{\mathbb{Z}}^d\;\colon\; r_d(\alpha,{\ensuremath{\boldsymbol{\gamma}}}_d,{\ensuremath{\boldsymbol{h}}})\le N\right\},\label{eq:def_AdN}
\end{equation}
which implies
$
\Sigma_{\bcancel{A_d(N)}\bcancel{A_d(N)}}=\min_{|I|\le|A_d(N)|}\Sigma_{\bcancel{I}\bcancel{I}},
$
i.e., the worst case truncation error $\Sigma_{\bcancel{A_d(N)}\bcancel{A_d(N)}}$ is as small as possible among all approximations that
use trigonometric polynomials supported on at most $|A_d(N)|$ frequencies.
\end{sloppypar}
We collect some basic facts about the frequency sets $A_d(N)$ from \cite{KuSlWo08}.
\begin{lemma}
For $N\ge 1$, the cardinality of the set $A_d(N)$ is bounded by
\begin{align}
(\gamma_{d,1}N)^{1/\alpha}\le|A_d(N)|\le N^q\prod_{j=1}^d\left(1+2\zeta(\alpha q)\gamma_{d,j}^q\right)\qquad\forall q>\frac{1}{\alpha}.
\label{eq:card_AdN}
\end{align}
Moreover, we observe the set inclusions
\begin{align}
A_d(N)\subset\left[-\floor{(\gamma_{d,1}N)^{1/\alpha}},\floor{(\gamma_{d,1}N)^{1/\alpha}}\right]^d\subset\left[-\frac{|A_d(N)|}{2},\frac{|A_d(N)|}{2}\right]^d
\label{eq:embedding_AdN}
\end{align}
and the upper bound on the worst case truncation error
\begin{align}
\Sigma_{\bcancel{A_d(N)}\bcancel{A_d(N)}}:=\sum_{{\ensuremath{\boldsymbol{h}}}\in\ensuremath{\mathbb{Z}}^d\setminus A_d(N)}r_d(\alpha,{\ensuremath{\boldsymbol{\gamma}}}_d,{\ensuremath{\boldsymbol{h}}})^{-1}\le\frac{1}{|A_d(N)|^{1/\tau-1}}\frac{\tau}{1-\tau}\prod_{j=1}^d\left(1+2\zeta(\alpha\tau)\gamma_{d,j}^\tau\right)^{1/\tau}
\label{eq:ub_wcterr}
\end{align}
for all $\tau\in(1/\alpha,1)$.
\end{lemma}
\begin{proof}
The proof of the statements \eqref{eq:card_AdN} and \eqref{eq:ub_wcterr} can be found in \cite[Lem. 5 \& 6]{KuSlWo08}.
The set inclusions \eqref{eq:embedding_AdN} can be seen by determining
$$
r_d(\alpha,{\ensuremath{\boldsymbol{\gamma}}}_d,{\ensuremath{\boldsymbol{h}}})>N\text{ for }{\ensuremath{\boldsymbol{h}}}\in\ensuremath{\mathbb{Z}}^d\setminus
\left[-\floor{(\gamma_{d,1}N)^{1/\alpha}},\floor{(\gamma_{d,1}N)^{1/\alpha}}\right]^d
$$
and
$$
\floor{(\gamma_{d,1}N)^{1/\alpha}}\le \frac{1+2\floor{(\gamma_{d,1}N)^{1/\alpha}}}{2}=\frac{|A_{d,1}(N)|}{2}\le\frac{|A_{d}(N)|}{2},
$$
where $A_{d,1}(N)$ is the projection of $A_{d}(N)$ to its first dimension.
\end{proof}
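For concreteness, the set $A_d(N)$ can be enumerated directly from its definition by searching the box given in \eqref{eq:embedding_AdN}. The following Python sketch does so for hypothetical parameters ($d=2$, $\alpha=2$, ${\ensuremath{\boldsymbol{\gamma}}}_2=(1,\tfrac{1}{2})$, $N=16$):

```python
import itertools
import math

def r_d(alpha, gamma, h):
    """Weight r_d(alpha, gamma, h) = prod_j max(1, |h_j|^alpha / gamma_j)."""
    return math.prod(max(1.0, abs(hj) ** alpha / gj)
                     for gj, hj in zip(gamma, h))

def A_d(alpha, gamma, N):
    """Enumerate A_d(N) inside the box [-G, G]^d with
    G = floor((gamma_1 * N)**(1/alpha)), which contains A_d(N)
    by the embedding (eq:embedding_AdN)."""
    G = int((gamma[0] * N) ** (1.0 / alpha))
    box = itertools.product(range(-G, G + 1), repeat=len(gamma))
    return [h for h in box if r_d(alpha, gamma, h) <= N]

# hypothetical example: d = 2, alpha = 2, gamma = (1, 1/2), N = 16
A = A_d(2.0, (1.0, 0.5), 16.0)
assert (0, 0) in A and all(tuple(-x for x in h) in A for h in A)
assert (1.0 * 16.0) ** (1 / 2.0) <= len(A)   # lower bound in (eq:card_AdN)
```

The assertions reflect the symmetry of $A_d(N)$ under ${\ensuremath{\boldsymbol{h}}}\mapsto-{\ensuremath{\boldsymbol{h}}}$ and the lower cardinality bound in \eqref{eq:card_AdN}.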
In the following, we will apply the results from Section \ref{sec:general_framework} to the specific approximation problem.
To this end, we need to determine multiple rank-1 lattices that fulfill $I=\bigcupdot_{\ell=1}^L I_\ell$, $I_\ell$ as stated in \eqref{eq:def_Iell}. Furthermore, we need upper bounds on the number $L$ of used rank-1 lattices as well as upper bounds
on the number of used sampling values. The next lemma addresses the necessary estimates in full detail.
\begin{lemma}\label{lem:estimate_sampling_set}
Let $N\ge1$ and $A_d(N)$ as stated in \eqref{eq:def_AdN}.
Then there exists a multiple rank-1 lattice
$\Lambda({\ensuremath{\boldsymbol{z}}}_1,M_1,\ldots,{\ensuremath{\boldsymbol{z}}}_L,M_L)$ with $L\le\max(3\,\ln |A_d(N)|,1)$ and $M_1=\ldots=M_L\le3\,|A_d(N)|$
that fulfills $I=\bigcupdot_{\ell=1}^{L}I_\ell$, $I_\ell$
as stated in \eqref{eq:def_Iell}. In particular, the cardinality of $\Lambda({\ensuremath{\boldsymbol{z}}}_1,M_1,\ldots,{\ensuremath{\boldsymbol{z}}}_L,M_L)$
is bounded by
\begin{equation*}
2|A_d(N)|<M:=|\Lambda({\ensuremath{\boldsymbol{z}}}_1,M_1,\ldots,{\ensuremath{\boldsymbol{z}}}_L,M_L)|< 9\,|A_d(N)|\,\max(\ln |A_d(N)|,1).
\end{equation*}
\end{lemma}
\begin{proof}
We distinguish two cases.
First, we assume $|A_d(N)|=1$. Then, we observe $A_d(N)=\{{\ensuremath{\boldsymbol{0}}}\}$ and the rank-1 lattice $\Lambda({\ensuremath{\boldsymbol{1}}},3)$ is a multiple rank-1 lattice for which $I_1= A_d(N)$, $L\le 1=\max(3\,\ln |A_d(N)|,1)$, and $M_1=3\le 3|A_d(N)|$ hold.
We estimate
$$2=2|A_d(N)|<3<9\,|A_d(N)|\,\max(\ln |A_d(N)|,1)=9.$$
For the second case, we observe that $|A_d(N)|\ge 2$ results in $|A_d(N)|\ge 3$ since ${\ensuremath{\boldsymbol{0}}}\in A_d(N)$ and ${\ensuremath{\boldsymbol{0}}}\neq{\ensuremath{\boldsymbol{h}}}\in A_d(N)$ implies $-{\ensuremath{\boldsymbol{h}}}\in A_d(N)$, $-{\ensuremath{\boldsymbol{h}}}\neq{\ensuremath{\boldsymbol{h}}}$, due to the symmetry of the weight function $r_d$.
Accordingly, we assume $|A_d(N)|\ge 3$ and apply \cite[Theorems 3.2 \& 3.4]{Kae17} to $I:=A_d(N)$ with $c:=2$ and $1>\delta:=\sqrt{\frac{e}{|A_d(N)|}}>0$.
Thus, we determine
$L=\ceil{2(\ln |I|-\ln \delta)}\le3\,\ln |I|$ and lattice sizes
$M_\ell$ that are prime numbers larger than $2(|I|-1)$ fulfilling the
additional condition
$$M_\ell\in\{M\in\ensuremath{\mathbb{N}}\colon M\text{ prime with }|\{{\ensuremath{\boldsymbol{k}}}\bmod M\colon{\ensuremath{\boldsymbol{k}}}\in I\}|=|I|\}.$$
This condition is automatically fulfilled for $M_\ell\ge|I|+1$, $I=A_d(N)$ due to \eqref{eq:embedding_AdN}.
\newline
Due to \cite[Thm. 1.3]{Ba06} there exists at least one prime number $P$ in the interval
$[2\,|I|,3\,|I|]$. We fix this prime number as lattice sizes $M_\ell=P$, $\ell=1,\ldots,L$,
i.e., $M_\ell<3\,|I|$ for each $\ell=1,\ldots,L$.
\newline
Subsequently, we choose the generating vectors ${\ensuremath{\boldsymbol{z}}}_1,\ldots,{\ensuremath{\boldsymbol{z}}}_L\in[0,P-1]^d$ uniformly at random.
Then with probability at least $1-\delta>0$ we observe the equality
$\bigcup_{\ell=1}^L I_\ell'=I$ with
$$
I_\ell':=\left\{{\ensuremath{\boldsymbol{k}}}\in I\colon {\ensuremath{\boldsymbol{k}}}\cdot{\ensuremath{\boldsymbol{z}}}_\ell\not\equiv{\ensuremath{\boldsymbol{h}}}\cdot{\ensuremath{\boldsymbol{z}}}_\ell\imod{M_\ell}\text{ for all }{\ensuremath{\boldsymbol{h}}}\in I\setminus\{{\ensuremath{\boldsymbol{k}}}\}\right\}.
$$
The simple calculation $I_\ell=I_\ell'\setminus\bigcup_{j=1}^{\ell-1}I_j'$, $\ell=1,\ldots,L$, yields
the disjoint partition $\bigcupdot_{\ell=1}^LI_\ell=I$.
\newline
Since the probability for choosing suitable generating vectors is larger than zero, there exists
at least one multiple rank-1 lattice with $L\le3\,\ln |I|$ and $M_\ell<3|I|$, $\ell=1,\ldots,L$, that fulfills $\bigcupdot_{\ell=1}^LI_\ell=I$. Accordingly, we estimate
\begin{align*}
2|A_d(N)|<M_1\le|\Lambda({\ensuremath{\boldsymbol{z}}}_1,M_1,\ldots,{\ensuremath{\boldsymbol{z}}}_L,M_L)|<9\,|A_d(N)|\,\max(\ln|A_d(N)|,1)
\end{align*}
for $|A_d(N)|\ge 3$, i.e., for $A_d(N)\neq\{{\ensuremath{\boldsymbol{0}}}\}$.
\end{proof}
\begin{remark}
In the proof of Lemma \ref{lem:estimate_sampling_set}, the failure probability
$\delta=\sqrt{\frac{e}{|A_d(N)|}}$ decreases with increasing cardinality of the frequency set $A_d(N)$,
i.e., with increasing $N$. For instance, $|A_d(N)|>272$ implies that the failure probability $\delta$ is bounded from above by $1/10$,
i.e., the construction will be successful with a probability of at least $9/10$.
Moreover, the sets $I_\ell$, cf. \eqref{eq:def_Iell}, can be determined in a fast and
simple way, cf. \cite{Kae17} for more details, which allows one to check whether the condition
$I=\bigcupdot_{\ell=1}^LI_\ell$ holds. In cases where $I\neq\bigcupdot_{\ell=1}^LI_\ell$,
one repeats the random choice of the generating vectors and checks the property $I=\bigcupdot_{\ell=1}^LI_\ell$ several times. The corresponding probability that
none of the tested multiple rank-1 lattices ensures $I=\bigcupdot_{\ell=1}^LI_\ell$
decreases exponentially with the number of repetitions. Thus, in practice this strategy inevitably leads to a multiple rank-1 lattice that has the requested property.
Accordingly, we described a practically applicable construction of the sampling sets $\Lambda({\ensuremath{\boldsymbol{z}}}_1,M_1,\ldots,{\ensuremath{\boldsymbol{z}}}_L,M_L)$.
\end{remark}
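The construction from the proof of Lemma~\ref{lem:estimate_sampling_set}, combined with the repetition strategy of the preceding remark, can be sketched as follows. This is a minimal Python illustration under simplifying assumptions: the frequency set, the trial budget, and the naive prime search are hypothetical, and the lattice size is a single prime $P\in[2|I|,3|I|]$ as in the proof:

```python
import random

def is_prime(n):
    return n >= 2 and all(n % p for p in range(2, int(n ** 0.5) + 1))

def construct_multiple_r1l(I, trials=100, seed=1):
    """Randomized sketch: one prime lattice size P in [2|I|, 3|I|],
    random generating vectors until I is covered by the sets I_ell'."""
    I = set(I)
    rng = random.Random(seed)
    d = len(next(iter(I)))
    P = next(M for M in range(2 * len(I), 3 * len(I) + 1) if is_prime(M))
    covered, lattices = set(), []
    for _ in range(trials):
        z = tuple(rng.randrange(P) for _ in range(d))
        res = {k: sum(ki * zi for ki, zi in zip(k, z)) % P for k in I}
        vals = list(res.values())
        I_prime = {k for k in I if vals.count(res[k]) == 1}
        I_ell = I_prime - covered       # I_ell = I_ell' minus earlier I_j'
        if I_ell:
            lattices.append((z, P, I_ell))
            covered |= I_ell
        if covered == I:
            return lattices
    raise RuntimeError("no covering found within the trial budget")

# hypothetical frequency set: a small 2-d cross
I = {(0, 0), (1, 0), (-1, 0), (2, 0), (-2, 0), (0, 1), (0, -1)}
lattices = construct_multiple_r1l(I)
parts = [part for (_, _, part) in lattices]
assert set().union(*parts) == I
assert sum(len(p) for p in parts) == len(I)   # disjoint partition
```

Each returned triple $({\ensuremath{\boldsymbol{z}}},P,I_\ell)$ records a generating vector together with the frequencies it reconstructs; by construction the parts form a disjoint partition of $I$.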
Now we bring together the relationship of the worst case truncation error, the cardinalities of the frequency sets $A_d(N)$, the estimates of the number $L$ of used rank-1 lattices, and the cardinality of the used sampling set.
\begin{theorem}\label{thm:specific_err_estimate}
The worst case $L_\infty(\ensuremath{\mathbb{T}}^d)$ sampling error for functions from the Korobov space $\mathcal{H}_{\alpha,{\ensuremath{\boldsymbol{\gamma}}}_d}(\ensuremath{\mathbb{T}}^d)$, dimension $d \ge 1$, with smoothness parameter $\alpha>1$ and weights ${\ensuremath{\boldsymbol{\gamma}}}_d$ is bounded from above by
\begin{align}
\sup_{\|f|\mathcal{H}_{\alpha,{\ensuremath{\boldsymbol{\gamma}}}_d}(\ensuremath{\mathbb{T}}^d)\|\le 1}&\|f-\operatorname{S}_{A_d(N)}^\Lambda f|L_\infty(\ensuremath{\mathbb{T}}^d)\|
\nonumber\\
&< 4\,3^{1/\tau-1}(\ln M)^{\frac{1+\tau}{2\tau}}M^{\frac{\tau-1}{2\tau}}\sqrt{\frac{\tau}{1-\tau}}\prod_{j=1}^d\left(1+2\zeta(\alpha\tau)\gamma_{d,j}^\tau\right)^{\frac{1}{2\tau}}\nonumber\\
&\le 4\,3^{1/\tau-1}\delta^{-\frac{1+\tau}{2\tau}}M^{\frac{1+\delta}{2}-\frac{1-\delta}{2\tau}}
\sqrt{\frac{\tau}{1-\tau}}\prod_{j=1}^d\left(1+2\zeta(\alpha\tau)\gamma_{d,j}^\tau\right)^{\frac{1}{2\tau}},
\label{eq:err_estimate_arbitrary_delta_tau}
\end{align}
when using the multiple rank-1 lattices established in Lemma \ref{lem:estimate_sampling_set}.
The estimates in \eqref{eq:err_estimate_arbitrary_delta_tau} hold for all parameters $\tau$ and $\delta$ in their ranges $\tau\in(1/\alpha,1)$ and $\delta\in(0,1)$
and the number $M:=|\Lambda({\ensuremath{\boldsymbol{z}}}_1,M_1,\ldots,{\ensuremath{\boldsymbol{z}}}_L,M_L)|$ is the total number of used sampling values.
\end{theorem}
\begin{proof}
Due to Theorem \ref{cor:gen_framework_main_result} and \eqref{eq:ub_wcterr} we observe
\begin{align*}
\sup_{\|f|\mathcal{H}_{\alpha,{\ensuremath{\boldsymbol{\gamma}}}_d}(\ensuremath{\mathbb{T}}^d)\|\le 1}&\|f-\operatorname{S}_{A_d(N)}^\Lambda f|L_\infty(\ensuremath{\mathbb{T}}^d)\|^2\le (L+1)^2\Sigma_{\bcancel{I}\bcancel{I}}\\
&\le (L+1)^2
\frac{1}{|A_d(N)|^{1/\tau-1}}\frac{\tau}{1-\tau}\prod_{j=1}^d\left(1+2\zeta(\alpha\tau)\gamma_{d,j}^\tau\right)^{1/\tau}.
\end{align*}
For computing approximations, we use the multiple rank-1 lattices from Lemma \ref{lem:estimate_sampling_set} and conclude
\begin{align*}
3&\le M,\\
\max(\ln |A_d(N)|,1)&<\ln M,\\
\frac{1}{|A_d(N)|}&<\frac{9\max(\ln |A_d(N)|,1)}{M}<\frac{9\,\ln M}{M},\\
L+1&\le 4\max(\ln|A_d(N)|,1),
\end{align*}
which yields
\begin{align*}
\sup_{\|f|\mathcal{H}_{\alpha,{\ensuremath{\boldsymbol{\gamma}}}_d}(\ensuremath{\mathbb{T}}^d)\|\le 1}&\|f-\operatorname{S}_{A_d(N)}^\Lambda f|L_\infty(\ensuremath{\mathbb{T}}^d)\|^2\\[-1.5em]
&\le 16\max(\ln |A_d(N)|,1)^2
\frac{1}{|A_d(N)|^{1/\tau-1}}\overbrace{\frac{\tau}{1-\tau}\prod_{j=1}^d\left(1+2\zeta(\alpha\tau)\gamma_{d,j}^\tau\right)^{1/\tau}}^{=:c_{\alpha,d,\tau}}\\
&<16\,c_{\alpha,d,\tau}\,(\ln M)^2\left(\frac{9\,\ln M}{M}\right)^{1/\tau-1}
=16\,9^{1/\tau-1}c_{\alpha,d,\tau}\frac{(\ln M)^{1/\tau+1}}{M^{1/\tau-1}}.
\end{align*}
In order to avoid the logarithmic terms, we exploit $\ln x\le x^\delta/\delta$ for all $\delta\in(0,1)$ and we get
\begin{align*}
\sup_{\|f|\mathcal{H}_{\alpha,{\ensuremath{\boldsymbol{\gamma}}}_d}(\ensuremath{\mathbb{T}}^d)\|\le 1}\|f-\operatorname{S}_{A_d(N)}^\Lambda f|L_\infty(\ensuremath{\mathbb{T}}^d)\|^2
&<16\,9^{1/\tau-1}c_{\alpha,d,\tau}\delta^{-1/\tau-1}\frac{1}{M^{1/\tau-1-\delta(1/\tau+1)}}.
\end{align*}
\end{proof}
The last theorem immediately raises the question of how to choose proper parameters $\delta$ and $\tau$ such that
one can reach best rates of convergence with respect to $M$ in \eqref{eq:err_estimate_arbitrary_delta_tau}.
The answer is given by the next corollary which also states the occurring constants in detail.
\begin{corollary}\label{cor:korobov_main}
With the requirements of Theorem \ref{thm:specific_err_estimate}, for each
$t\in(0,\frac{\tilde{\alpha}-1}{2})$, $1<\tilde{\alpha}\le\alpha$, there exist $\delta:=\delta(\tilde{\alpha},t)\in(0,1)$ and $\tau:=\tau(\tilde{\alpha},t)\in(1/\tilde{\alpha},1)\subset(1/\alpha,1)$ such that
there exists a constant
\begin{align*}
c_{\alpha,\tilde{\alpha},t,d}&:=4\,3^{1/\tau-1}\delta^{-\frac{1+\tau}{2\tau}}
\sqrt{\frac{\tau}{1-\tau}}\prod_{j=1}^d\left(1+2\zeta(\alpha\tau)\gamma_{d,j}^\tau\right)^{\frac{1}{2\tau}}\\
&<4\,3^{\tilde{\alpha}-1}\delta^{-\frac{\tilde{\alpha}+1}{2}}\sqrt{\frac{2}{\tilde{\alpha}-1}} \prod_{j=1}^d\left(1+2\zeta(\alpha\tau)\gamma_{d,j}^\tau\right)^{\frac{1}{2\tau}}
\end{align*}
which allows for the estimate
$$
\sup_{\|f|\mathcal{H}_{\alpha,{\ensuremath{\boldsymbol{\gamma}}}_d}(\ensuremath{\mathbb{T}}^d)\|\le 1}\|f-\operatorname{S}_{A_d(N)}^\Lambda f|L_\infty(\ensuremath{\mathbb{T}}^d)\|
<c_{\alpha,\tilde{\alpha},t,d}M^{-t}.
$$
Here $\delta(\tilde{\alpha},t)$ and $\tau(\tilde{\alpha},t)$ can be chosen as
\begin{align}
\delta(\tilde{\alpha},t)&:=\frac{2+\tilde{\alpha}}{2}-\sqrt{\left(\frac{2+\tilde{\alpha}}{2}\right)^2-\tilde{\alpha}+1+2t}\nonumber\\
\tau(\tilde{\alpha},t)&:=(\tilde{\alpha}-\delta(\tilde{\alpha},t))^{-1}.\label{eq:definition_suitable_tau}
\end{align}
\end{corollary}
\begin{proof}
For fixed $t$ we determine $\varepsilon:=\tilde{\alpha}-1-2t$ such that $t=\frac{\tilde{\alpha}-1-\varepsilon}{2}$, i.e., $\varepsilon\in(0,\tilde{\alpha}-1)\subset(0,\alpha-1)$. Moreover, we fix
$$
\delta:=\frac{2+\tilde{\alpha}}{2}-\sqrt{\left(\frac{2+\tilde{\alpha}}{2}\right)^2-\varepsilon},
$$
which implies $\delta\in(0,1)$ since
\begin{align*}
0<\delta:=\frac{2+\tilde{\alpha}}{2}-\sqrt{\left(\frac{2+\tilde{\alpha}}{2}\right)^2-\varepsilon}
&<\frac{2+\tilde{\alpha}}{2}-\sqrt{\left(\frac{2+\tilde{\alpha}}{2}\right)^2-\tilde{\alpha}+1}\\
&=\frac{1}{2}(\tilde{\alpha}+2-\sqrt{\tilde{\alpha}^2+8})
<\frac{1}{2}(\tilde{\alpha}+2-\sqrt{\tilde{\alpha}^2})=1,
\end{align*}
and we set $\tau=\frac{1}{\tilde{\alpha}-\delta}$, which implies $\tau>\frac{1}{\tilde{\alpha}}$. Moreover, with
\begin{align*}
\delta<\frac{1}{2}(\tilde{\alpha}+2-\sqrt{\tilde{\alpha}^2+8})\le \frac{1}{2}(\tilde{\alpha}+2-3)=\frac{1}{2}(\tilde{\alpha}-1)
\end{align*}
we estimate $\tau<\frac{1}{\tilde{\alpha}-\frac{\tilde{\alpha}-1}{2}}=\frac{1}{\tilde{\alpha}}+\frac{\tilde{\alpha}-1}{\tilde{\alpha}(\tilde{\alpha}+1)}=\frac{2}{\tilde{\alpha}+1}<1$, i.e., $\tau\in(1/\tilde{\alpha},1)\subset(1/\alpha,1)$.
\newline
Due to Theorem \ref{thm:specific_err_estimate}, we have
\begin{align*}
\sup_{\|f|\mathcal{H}_{\alpha,{\ensuremath{\boldsymbol{\gamma}}}_d}(\ensuremath{\mathbb{T}}^d)\|\le 1}&\|f-\operatorname{S}_{A_d(N)}^\Lambda f|L_\infty(\ensuremath{\mathbb{T}}^d)\|\\
&<4\,3^{1/\tau-1}\delta^{-\frac{1+\tau}{2\tau}}M^{\frac{1+\delta}{2}-\frac{1-\delta}{2\tau}}
\sqrt{\frac{\tau}{1-\tau}}\prod_{j=1}^d\left(1+2\zeta(\alpha\tau)\gamma_{d,j}^\tau\right)^{\frac{1}{2\tau}}.
\intertext{Since $\tau$ and $\delta$ are fixed and in the right range for fixed $t$ and $\alpha$, we obtain}
\sup_{\|f|\mathcal{H}_{\alpha,{\ensuremath{\boldsymbol{\gamma}}}_d}(\ensuremath{\mathbb{T}}^d)\|\le 1}&\|f-\operatorname{S}_{A_d(N)}^\Lambda f|L_\infty(\ensuremath{\mathbb{T}}^d)\|
<
c_{\alpha,\tilde{\alpha},t,d}M^{\frac{1+\delta}{2}-\frac{1-\delta}{2\tau}},
\end{align*}
where $c_{\alpha,\tilde{\alpha},t,d}$ depends on $d$, ${\ensuremath{\boldsymbol{\gamma}}}_d$, $\alpha$ as well as $\tilde{\alpha}$ and $t$ since $\delta$ and $\tau$ depend only on $\tilde{\alpha}$ and $\varepsilon=\tilde{\alpha}-1-2t$.
We verify the main rate in $M$
\begin{align*}
\frac{1+\delta}{2}-\frac{1-\delta}{2\tau}&=\frac{1+\delta}{2}-\frac{(1-\delta)(\tilde{\alpha}-\delta)}{2}=\frac{1}{2}\left(1-\tilde{\alpha}+2\delta+\tilde{\alpha}\delta-\delta^2\right)=-t
\end{align*}
which is caused by the choice of $\delta$ such that
$\varepsilon=2\delta+\tilde{\alpha}\delta-\delta^2$.
Furthermore, the additional estimate on $c_{\alpha,\tilde{\alpha},t,d}$ holds:
\begin{align*}
c_{\alpha,\tilde{\alpha},t,d}&<4\,3^{\tilde{\alpha}-1}\delta^{-\frac{\tilde{\alpha}+1}{2}}\sqrt{\frac{2}{\tilde{\alpha}-1}} \prod_{j=1}^d\left(1+2\zeta(\alpha\tau)\gamma_{d,j}^\tau\right)^{\frac{1}{2\tau}},
\end{align*}
due to the inequalities $1/\tau<\tilde{\alpha}$, $(1+\tau)/(2\tau)<(\tilde{\alpha}+1)/2$, and $\tau/(1-\tau)<2/(\tilde{\alpha}-1)$.
\end{proof}
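The parameter choices \eqref{eq:definition_suitable_tau} can also be checked numerically. The following Python sketch (a sanity check, not part of the proof) verifies, for a few hypothetical values of $\tilde{\alpha}$ and $t$, that $\delta\in(0,1)$, $\tau\in(1/\tilde{\alpha},1)$, and that the exponent of $M$ collapses to $-t$:

```python
import math

def delta_tau(alpha_t, t):
    """delta(alpha_t, t) and tau(alpha_t, t) as in the proof of
    Corollary cor:korobov_main."""
    a = (2 + alpha_t) / 2
    delta = a - math.sqrt(a * a - alpha_t + 1 + 2 * t)
    return delta, 1 / (alpha_t - delta)

for alpha_t in (1.5, 2.0, 3.0):
    for frac in (0.1, 0.5, 0.9):
        t = frac * (alpha_t - 1) / 2          # t in (0, (alpha_t - 1)/2)
        delta, tau = delta_tau(alpha_t, t)
        assert 0 < delta < 1 and 1 / alpha_t < tau < 1
        # the exponent of M collapses to -t
        assert abs((1 + delta) / 2 - (1 - delta) / (2 * tau) + t) < 1e-12
```

The last assertion is exactly the identity $\frac{1+\delta}{2}-\frac{1-\delta}{2\tau}=-t$ verified at the end of the proof.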
\begin{remark}
The spaces $\mathcal{H}_{\alpha,{\ensuremath{\boldsymbol{\gamma}}}_d}(\ensuremath{\mathbb{T}}^d)$ are function spaces with dominating mixed smoothness.
Fixing the dimension $d$ and considering $\tilde{\alpha}=\alpha$ in Corollary~\ref{cor:korobov_main} yields
a general approximation result for those function spaces. We observe an asymptotic behavior of the sampling error
$\sup_{\|f|\mathcal{H}_{\alpha,{\ensuremath{\boldsymbol{\gamma}}}_d}(\ensuremath{\mathbb{T}}^d)\|\le 1}\|f-\operatorname{S}_I^\Lambda f|L_\infty(\ensuremath{\mathbb{T}}^d)\|$ that is bounded by $c_{\alpha,t,d}M^{-t}$ for each $t<\frac{\alpha-1}{2}$, i.e., the sampling rate is almost optimal.
More detailed, but for the following purposes less convenient estimates on the cardinalities $|A_d(N)|$ of the frequency sets $A_d(N)$ and the worst case truncation errors $\Sigma_{\bcancel{A_d(N)}\bcancel{A_d(N)}}$
will lead to asymptotic estimates on the sampling errors that are optimal with respect to the exponent on the number $M$ of sampling values, i.e., one achieves
$$\sup_{\|f|\mathcal{H}_{\alpha,{\ensuremath{\boldsymbol{\gamma}}}_d}(\ensuremath{\mathbb{T}}^d)\|\le 1}\|f-\operatorname{S}_I^\Lambda f|L_\infty(\ensuremath{\mathbb{T}}^d)\|\le C M^{-\frac{\alpha-1}{2}}\log^{b}{M},$$
which even improves the results in \cite{KaVo19}. However, the exponent $b$ at the logarithmic term is linear in the dimension $d$ and not optimal.
\end{remark}
Up to now, the constants $c_{\alpha,\tilde{\alpha},t,d}$ heavily depend on the dimension. The following corollaries categorize the properties of the weights ${\ensuremath{\boldsymbol{\gamma}}}_d$ that lead to bounds on the constants that are independent on the dimension $d$ or polynomials in $d$.
\begin{corollary}\label{cor:kor_strong_tractability}
When choosing $\tilde{\alpha}=\min\{\alpha,1/s_{\ensuremath{\boldsymbol{\gamma}}}\}$ in Corollary~\ref{cor:korobov_main} and assuming that
$$
s_{\ensuremath{\boldsymbol{\gamma}}}:=\inf\left\{s \ge 0\,\colon \sup_{d\ge 1}\sum_{j=1}^d\gamma_{d,j}^s<\infty\right\}<1,
$$
holds,
the $L_\infty(\ensuremath{\mathbb{T}}^d)$ worst case sampling error is bounded by terms that do not depend on the dimension~$d$, i.e.,
$$
\sup_{\|f|\mathcal{H}_{\alpha,{\ensuremath{\boldsymbol{\gamma}}}_d}(\ensuremath{\mathbb{T}}^d)\|\le 1}\|f-\operatorname{S}_{A_d(N)}^\Lambda f|L_\infty(\ensuremath{\mathbb{T}}^d)\|
<c_{\alpha,\tilde{\alpha},t}M^{-t}
$$
for each $t\in(0,\frac{\tilde{\alpha}-1}{2})$, i.e., we observe strong tractability.
\end{corollary}
\begin{proof}
Since $s_{\ensuremath{\boldsymbol{\gamma}}}<1$, we observe $\tilde{\alpha}>1$ and we apply Corollary \ref{cor:korobov_main}. For fixed $\tau\in(1/\tilde{\alpha},1)$, the products
$$\prod_{j=1}^d\left(1+2\zeta(\alpha\tau)\gamma_{d,j}^\tau\right)^{\frac{1}{2\tau}}
\le\textnormal{e}^{\frac{2\zeta(\alpha\tau)\sum_{j=1}^{d}\gamma_{d,j}^\tau}{2\tau}}
\le C^{\prod}_{\alpha,\tilde{\alpha},t}<\infty$$
are bounded without dependence on the dimension $d$.
Hence, we observe $$
c_{\alpha,\tilde{\alpha},t,d}<4\,3^{\tilde{\alpha}-1}\delta^{-\frac{\tilde{\alpha}+1}{2}}\sqrt{\frac{2}{\tilde{\alpha}-1}}C^{\prod}_{\alpha,\tilde{\alpha},t}=:c_{\alpha,\tilde{\alpha},t}.
$$
\end{proof}
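For illustration, the uniform boundedness of the products in the proof above can be observed numerically. The following Python sketch assumes hypothetical weights $\gamma_{d,j}=j^{-2}$ (so $s_{\ensuremath{\boldsymbol{\gamma}}}=1/2<1$ and $\tilde{\alpha}=2$ for $\alpha=2$) and a crude partial-sum approximation of the Riemann zeta function:

```python
import math

def zeta_approx(s, n=10000):
    """Crude approximation of zeta(s) for s > 1:
    partial sum plus an integral tail estimate."""
    return sum(k ** -s for k in range(1, n)) + n ** (1 - s) / (s - 1)

# hypothetical setting: gamma_{d,j} = j**-2, hence s_gamma = 1/2,
# alpha = 2, and tau = 0.75 in (1/alpha_tilde, 1) = (1/2, 1)
alpha, tau = 2.0, 0.75
c = 2 * zeta_approx(alpha * tau)
prods = []
for d in (10, 100, 1000):
    p = math.prod((1 + c * j ** (-2.0 * tau)) ** (1 / (2 * tau))
                  for j in range(1, d + 1))
    prods.append(p)
# the products grow, but by ratios that tend to one: bounded in d
assert prods[2] / prods[1] < prods[1] / prods[0]
```

Since $\sum_j \gamma_{d,j}^\tau=\sum_j j^{-3/2}$ converges, the products stabilize as $d$ grows, in accordance with the dimension-independent bound $C^{\prod}_{\alpha,\tilde{\alpha},t}$.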
\begin{corollary}\label{cor:kor_tractability}
When choosing $\tilde{\alpha}=\min\{\alpha,1/t_{\ensuremath{\boldsymbol{\gamma}}}\}$ in Corollary~\ref{cor:korobov_main} and assuming that
$$
t_{\ensuremath{\boldsymbol{\gamma}}}:=\inf\left\{t \ge 0\,\colon \sup_{d\ge 1}\frac{\sum_{j=1}^d\gamma_{d,j}^t}{\ln(d+1)}<\infty\right\}<1
$$
holds,
the $L_\infty(\ensuremath{\mathbb{T}}^d)$ worst case sampling error is bounded by terms that depend polynomially on the dimension~$d$, i.e.,
$$
\sup_{\|f|\mathcal{H}_{\alpha,{\ensuremath{\boldsymbol{\gamma}}}_d}(\ensuremath{\mathbb{T}}^d)\|\le 1}\|f-\operatorname{S}_{A_d(N)}^\Lambda f|L_\infty(\ensuremath{\mathbb{T}}^d)\|
<c_{\alpha,\tilde{\alpha},t}\,d^{\beta(\alpha,\tilde{\alpha},t)}M^{-t}
$$
for each $t\in(0,\frac{\tilde{\alpha}-1}{2})$, i.e., we observe tractability.
\end{corollary}
\begin{proof}
For fixed $t\in(0,\frac{\tilde{\alpha}-1}{2})$, we determine $\tau>1/\tilde{\alpha}\ge t_{\ensuremath{\boldsymbol{\gamma}}}$ as stated in \eqref{eq:definition_suitable_tau}. We observe
$$
\sup_{d\ge 1}\frac{\sum_{j=1}^d\gamma_{d,j}^\tau}{\ln(d+1)}=S_{\tilde{\alpha},t}<\infty
$$
and estimate
$$\prod_{j=1}^d\left(1+2\zeta(\alpha\tau)\gamma_{d,j}^\tau\right)^{\frac{1}{2\tau}}\le \textnormal{e}^{\,\frac{\zeta(\alpha\tau)\,S_{\tilde{\alpha},t}}{\tau}\ln(2d)}=(2d)^{\frac{\zeta(\alpha\tau)\,S_{\tilde{\alpha},t}}{\tau}}\le \tilde{c}_{\alpha,\tilde{\alpha},t}d^{\beta(\alpha,\tilde{\alpha},t)},$$
where $\beta(\alpha,\tilde{\alpha},t):=\ceil{\frac{\zeta(\alpha\tau)\,S_{\tilde{\alpha},t}}{\tau}}$
does not depend on the dimension $d$.
Hence, we observe for $c_{\alpha,\tilde{\alpha},t,d}$ from Theorem \ref{thm:specific_err_estimate}
$$
c_{\alpha,\tilde{\alpha},t,d}<4\,3^{\tilde{\alpha}-1}\delta^{-\frac{\tilde{\alpha}+1}{2}}\sqrt{\frac{2}{\tilde{\alpha}-1}}\tilde{c}_{\alpha,\tilde{\alpha},t}d^{\beta(\alpha,\tilde{\alpha},t)}=:c_{\alpha,\tilde{\alpha},t}d^{\beta(\alpha,\tilde{\alpha},t)},
$$
which is actually a polynomial in the dimension $d$. Again, we stress that $\delta$ and $\tau$ are completely determined in terms of
$\tilde{\alpha}$ and $t$, cf.~\eqref{eq:definition_suitable_tau}, and thus the constant as well as the exponent $\beta$ on the right hand side of the last inequality only depend on $\alpha$, $\tilde{\alpha}$, and $t$.
\end{proof}
\begin{remark}
We stress the fact that the restrictions $s_{\ensuremath{\boldsymbol{\gamma}}}<1$ and $t_{\ensuremath{\boldsymbol{\gamma}}}<1$ in Corollaries~\ref{cor:kor_strong_tractability} and~\ref{cor:kor_tractability} are necessary in order to achieve an admissible interval for the choice of $\tau$ in Corollary~\ref{cor:korobov_main}, i.e., these restrictions are caused by the proof technique used.
Nevertheless, these restrictions coincide with those that are stated in \cite[Theorem~11]{KuWaWo08}, where
the authors proved that (strong) tractability cannot hold for $L_\infty(\ensuremath{\mathbb{T}}^d)$ approximation for
$t_{\ensuremath{\boldsymbol{\gamma}}}\ge 1$ ($s_{\ensuremath{\boldsymbol{\gamma}}}\ge 1$) -- even in a more general setting.
Hence, these requirements on the summability of the weight sequence ${\ensuremath{\boldsymbol{\gamma}}}$ are natural ones
and do not additionally restrict the function spaces for which the statements of Corollaries~\ref{cor:kor_strong_tractability} and~\ref{cor:kor_tractability}
hold.
\end{remark}
\begin{remark}\label{rem:tractability}
Our sampling method allows for error estimates that reveal (strong) tractability and the exponent at the number of sampling values $M$ is $-t$ with the single restriction that $t<\frac{\tilde{\alpha}-1}{2}$ holds. From~\cite{KuWaWo08} one knows that tractability may hold for exponents $t$ at most $\frac{\tilde{\alpha}-1}{2}$. Thus, the presented sampling method using samples along multiple rank-1 lattices is nearly optimal for Korobov spaces of the considered type even with respect to tractability. Moreover, the presented result improves known tractability results for lattice algorithms, cf.~\cite{KuWaWo09}. Up to now, tractability of the $L_\infty(\ensuremath{\mathbb{T}}^d)$ approximation in combination with constructive methods was investigated for single rank-1 lattices. The best known lower bounds on $t$, where $-t$ is the exponent at the number $M$ of used sampling values, suffer from factors of $1/2$
and less than $2/3$ for $\alpha\in(1,2]$ and $\alpha>2$, respectively,
compared to the best possible exponent $\frac{\tilde{\alpha}-1}{2}$ for general linear information.
In addition, our sampling approach improves more general tractability results from~\cite{KuWaWo09b}. Therein, it is shown that there exist sampling sets such that the $L_\infty(\ensuremath{\mathbb{T}}^d)$ approximation is tractable. The corresponding exponents $-t$ at $M$ are bounded by $\frac{\tilde{\alpha}-1}{2}\frac{1}{1+1/\tilde{\alpha}}\le t \le \frac{\tilde{\alpha}-1}{2}$, where the lower bound is the crucial point that needs to be improved in order to verify better asymptotic tractability behavior of sampling methods. Sampling along multiple rank\mbox{-}1 lattices provides exactly this improvement, i.e., the exponent $t$ at $1/M$ can be arbitrarily close to the upper bound $\frac{\tilde{\alpha}-1}{2}$, which stems from considerations of approximating functions using general linear information, cf.~\cite{KuWaWo08}. At this point, we again stress the fact that -- in contrast to the considerations in~\cite{KuWaWo09b} -- the generation of the sampling sets for the multiple rank-1 lattice approach is completely constructive.
\end{remark}
\subsection{Application to sampling numbers}\label{sec:app_samp_numbers}
A general concept to describe the approximability of a bounded linear operator
$\operatorname{T} \colon X\to Y$, where $X$ and $Y$ are Banach spaces, is the definition of so called
approximation numbers
$$
a_n(\operatorname{T})=\inf\{\|\operatorname{T}-\operatorname{A}\|\,\colon\, \operatorname{rank}\operatorname{A}<n\},
$$
which is the optimal error of approximating the operator $\operatorname{T}$ by operators of rank less than $n$.
The setting of our particular interest are the $n$-th approximation numbers of the identity operator $\operatorname{I}_d$
which maps from specific Hilbert spaces $\mathcal{H}_r(\ensuremath{\mathbb{T}}^d)$ of (smooth and continuous) periodic functions to $L_\infty(\ensuremath{\mathbb{T}}^d)$.
Due to \cite[Theorem 3.4]{CoKueSi16} the associated approximation numbers are determined as
\vspace*{-0.5em}
\begin{align}a_n(\operatorname{I}_d\,\colon\, \mathcal{H}_r(\ensuremath{\mathbb{T}}^d)\rightarrow L_\infty(\ensuremath{\mathbb{T}}^d))=\left(\sum_{j=n}^\infty r_d({\ensuremath{\boldsymbol{k}}}_j)^{-1}\right)^{1/2},\label{eq:an_repro_kernel}
\end{align}
where $\{{\ensuremath{\boldsymbol{k}}}_j\}_{j=1}^\infty$ is a rearrangement of all elements within $\ensuremath{\mathbb{Z}}^d$ such that
the sequence $\{r_d({\ensuremath{\boldsymbol{k}}}_j)\}_{j=1}^\infty$ is a non-decreasing sequence, i.e., $r_d({\ensuremath{\boldsymbol{k}}}_j)\le r_d({\ensuremath{\boldsymbol{k}}}_{j+1})$ holds for all $j\in\ensuremath{\mathbb{N}}$.
One interpretation of the approximation numbers $a_n(\operatorname{I}_d\colon \mathcal{H}_r(\ensuremath{\mathbb{T}}^d)\rightarrow L_\infty(\ensuremath{\mathbb{T}}^d))$ is the following:
The linear operator of rank less than $n$ yielding the best possible worst case error, i.e., that achieves
the approximation number $a_n$, is a mapping of the function $f$ to its exact Fourier partial sum
$S_{I^n}f:=\sum_{{\ensuremath{\boldsymbol{k}}}\in {I^n}}c_{\ensuremath{\boldsymbol{k}}}(f)\textnormal{e}^{2\pi\textnormal{i}{\ensuremath{\boldsymbol{k}}}\cdot\circ}$, where $I^n$, $n\in\ensuremath{\mathbb{N}}$, is defined by
\vspace*{-0.5em}
\begin{align*}
|I^n|=n-1\quad \textnormal{and}\quad I^n:=\left\{{\ensuremath{\boldsymbol{k}}}\in\ensuremath{\mathbb{Z}}^d\colon r_d({\ensuremath{\boldsymbol{k}}})\le r_d({\ensuremath{\boldsymbol{h}}}) \textnormal{ for all }{\ensuremath{\boldsymbol{k}}}\in I^n \textnormal{ and all }{\ensuremath{\boldsymbol{h}}}\in\ensuremath{\mathbb{Z}}^d\setminus I^n\right\}.
\label{eq:def_I_appr_num}
\end{align*}
Note that neither $I^n$ nor the numbering of $\{{\ensuremath{\boldsymbol{k}}}_j\}_{j=1}^\infty$ is uniquely defined.
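To get a quantitative feel for \eqref{eq:an_repro_kernel}, the following sketch is a one-dimensional toy example of our own choosing (not from the paper), using the Korobov-type weight $r(k)=\max(1,|k|)^{2\alpha}$ with $\alpha=2$ and a truncation of $\ensuremath{\mathbb{Z}}$; it sorts the weights non-decreasingly and evaluates the tail sums.

```python
import math

# Toy one-dimensional illustration (our choice, not from the paper):
# Korobov-type weight r(k) = max(1, |k|)^{2 alpha} with alpha = 2.
alpha = 2.0
K = 50_000  # truncation of Z; the neglected tail of the sum is tiny for 2*alpha = 4

# non-decreasing rearrangement r(k_1) <= r(k_2) <= ... of the weights
weights = sorted(max(1, abs(k)) ** (2 * alpha) for k in range(-K, K + 1))

def a(n):
    # a_n = ( sum_{j >= n} r(k_j)^{-1} )^{1/2}
    return math.sqrt(sum(1.0 / w for w in weights[n - 1:]))

print(a(1), a(10), a(100))
```

As expected from the formula, the sequence of approximation numbers is strictly decreasing, since each step removes a positive summand from the tail.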
Moreover, for sampling operators, there exists the concept of sampling numbers, which classifies the quality
of approximations of functions based on samples with respect to the number of samples.
In our specific setting, the corresponding sampling numbers are defined by
\vspace*{-0.5em}
\begin{align*}
g_M(\mathcal{H}_r(\ensuremath{\mathbb{T}}^d)&,L_\infty(\ensuremath{\mathbb{T}}^d)):=\\
&\inf_{\mathcal{X},|\mathcal{X}|\le M}\inf_{\operatorname{A}\,\colon\,\ensuremath{\mathbb{C}}^{|\mathcal{X}|}\to L_\infty(\ensuremath{\mathbb{T}}^d)}
\sup_{\|f|\mathcal{H}_r(\ensuremath{\mathbb{T}}^d)\|\le 1}\left\|f-\operatorname{A}\left(\{f({\ensuremath{\boldsymbol{x}}})\}_{{\ensuremath{\boldsymbol{x}}}\in\mathcal{X}}\right)|L_\infty(\ensuremath{\mathbb{T}}^d)\right\|,
\end{align*}
which can be described as the best possible worst case error of approximating a function $f$ that belongs to the unit ball of the space $\mathcal{H}_r(\ensuremath{\mathbb{T}}^d)$ by the best possible sampling strategy using not more than $M$ sampling values.
Clearly, the worst case sampling error which is determined by the specific sampling method presented in this paper yields an upper bound
on $g_M(\mathcal{H}_r(\ensuremath{\mathbb{T}}^d),L_\infty(\ensuremath{\mathbb{T}}^d))$.
Moreover, the sequence $g_M(\mathcal{H}_r(\ensuremath{\mathbb{T}}^d),L_\infty(\ensuremath{\mathbb{T}}^d))$ is nonincreasing in $M$. Taking this into account,
we observe the following statement.
\begin{theorem}\label{thm:samp_app}
Let $\mathcal{H}_r(\ensuremath{\mathbb{T}}^d)$ be a reproducing kernel Hilbert space as defined in Section~\ref{sec:repro_kernel_hilbert_space}. In addition, we assume that
$I^{n}\subset[-n+1,n-1]^d$ holds for all $n\in\ensuremath{\mathbb{N}}$. Then, we estimate
\begin{align*}
a_{M}(\mathcal{H}_r(\ensuremath{\mathbb{T}}^d),L_\infty(\ensuremath{\mathbb{T}}^d))\le
g_{M}(\mathcal{H}_r(\ensuremath{\mathbb{T}}^d),L_\infty(\ensuremath{\mathbb{T}}^d))
\lesssim
\frac{a_{M/\log{M}}(\operatorname{I}_d\,\colon\, \mathcal{H}_r(\ensuremath{\mathbb{T}}^d)\rightarrow L_\infty(\ensuremath{\mathbb{T}}^d))}{(\ln{M})^{-1}}.
\end{align*}
\end{theorem}
\begin{proof}
We follow the argumentation
of the proof of Lemma \ref{lem:estimate_sampling_set}. Due to the assumptions on $I^{n}$,
there exists a multiple rank-1 lattice $\Lambda({\ensuremath{\boldsymbol{z}}}_1,M_1,\ldots, {\ensuremath{\boldsymbol{z}}}_L,M_L)$
with $L\le \max(3\ln|I^n|,1)$ and $M_\ell\le 3|I^n|$, $\ell=1,\ldots,L$, that fulfills \eqref{eq:reco_prop}.
We apply Theorem~\ref{cor:gen_framework_main_result} and obtain
\begin{eqnarray}
&&\hspace*{-6em}g_{9(n-1)\max(\ln(n-1),1)}(\mathcal{H}_r(\ensuremath{\mathbb{T}}^d),L_\infty(\ensuremath{\mathbb{T}}^d))\le
g_{|\Lambda|}(\mathcal{H}_r(\ensuremath{\mathbb{T}}^d),L_\infty(\ensuremath{\mathbb{T}}^d))\nonumber\\
&\le&
\sup_{\|f|\mathcal{H}_r(\ensuremath{\mathbb{T}}^d)\|\le 1}\left\|f-\operatorname{S}_{I^n}^\Lambda f|L_\infty(\ensuremath{\mathbb{T}}^d)\right\|
\overset{\text{Thm.~\ref{cor:gen_framework_main_result}}}{\le} (L+1)\left(\sum_{j=n}^\infty r_d({\ensuremath{\boldsymbol{k}}}_j)^{-1}\right)^{1/2}\nonumber\\
&\overset{\eqref{eq:an_repro_kernel}}{\le}&
\max(3\,\ln(n-1)+1,2)\;a_{n}(\operatorname{I}_d\,\colon\, \mathcal{H}_r(\ensuremath{\mathbb{T}}^d)\rightarrow L_\infty(\ensuremath{\mathbb{T}}^d)).\label{eq:sampl_num_appr_num}
\end{eqnarray}
\end{proof}
The statement of the last theorem can be generalized using the following constraints:
\begin{enumerate}
\item $\mathcal{H}_r(\ensuremath{\mathbb{T}}^d)$ is a reproducing kernel Hilbert space as defined in Section~\ref{sec:repro_kernel_hilbert_space}, %
\item \label{constraint:n_I^n}$\exists c>0$ such that for each $n\ge 2$ the subset relation $I^n \subset [-c\,n,c\,n]^d$ holds.
\end{enumerate}
One may interpret \eqref{eq:sampl_num_appr_num} in the following way:\newline
The worst case error measured in the $L_\infty(\ensuremath{\mathbb{T}}^d)$ norm of the approximation of functions belonging to the reproducing kernel Hilbert space $\mathcal{H}_r(\ensuremath{\mathbb{T}}^d)$ by a trigonometric polynomial that is supported by the frequency set $I^n$ containing $n-1$ frequencies is best possible, if one chooses the exact Fourier partial sum supported by frequencies with smallest possible weights $r_d( {\ensuremath{\boldsymbol{k}}} )$ as approximant. The approximation number $a_n(\operatorname{I}_d\,\colon\, \mathcal{H}_r(\ensuremath{\mathbb{T}}^d)\rightarrow L_\infty(\ensuremath{\mathbb{T}}^d))$ specifies the corresponding error.
A suitable approximation of the aforementioned Fourier partial sum can be
computed from a number of $|\Lambda|<9(n-1)\max(\ln(n-1),1)$ samples, where the corresponding worst case sampling error
$g_{|\Lambda|}(\mathcal{H}_r(\ensuremath{\mathbb{T}}^d),L_\infty(\ensuremath{\mathbb{T}}^d))$
is bounded from above
by the approximation number $a_n(\operatorname{I}_d\,\colon\, \mathcal{H}_r(\ensuremath{\mathbb{T}}^d)\rightarrow L_\infty(\ensuremath{\mathbb{T}}^d))$ that is related to the frequency set $I^n$ times a logarithmic factor in the cardinality $n-1$ of $I^n$ and a constant less than four.
However, this estimate holds even for small values of $n$. Note that the sequence $\{a_n\}_{n\in\ensuremath{\mathbb{N}}}$ is monotonically decreasing.
Assuming that $\{a_n\}_{n\in\ensuremath{\mathbb{N}}}$ decreases faster than $(3\,\ln(n-1)+1)^{-1}$, we observe a reasonably practicable
sampling method that allows for treating approximation problems even in pre-asymptotic settings, which
are more likely to be the rule than the exception -- at least in dimensions $d>3$.
\begin{remark}\label{rem:approx_num}
The very general result of this section is subject to extremely weak constraints.
E.g., constraint \ref{constraint:n_I^n} is actually a restriction on the weight function
$r_d$. In all cases where the weight function $r_d$ generates index sets $I^n$ such that $I^2=\{{\ensuremath{\boldsymbol{0}}}\}$
and each $I^n$ is downward closed, i.e., all $I^n$ are lower sets, we certainly observe $I^n\subset[-n+1,n-1]^d$.
Furthermore, suitable reconstructing rank-1 lattices $\Lambda$ can be determined
by possibly repeated, but only a few, applications of \cite[Algorithm~3]{Kae17}.
\end{remark}
\section{Conclusions}
We analyzed a recently developed sampling strategy for multivariate periodic functions
and determined upper bounds on the corresponding worst case sampling error measured in the $L_\infty(\ensuremath{\mathbb{T}}^d)$ norm.
The strategy is to sample functions from reproducing kernel Hilbert spaces
along multiple rank-1 lattices and approximate the function by approximating a suitable set
of Fourier coefficients and assembling corresponding approximating trigonometric polynomials.
It turns out that the considered worst case approximation errors are almost optimal with respect to the
space of trigonometric polynomials that is used for computing the approximant, cf. Theorem~\ref{cor:gen_framework_main_result}.
The crucial assumption on the used multiple rank-1 lattice is the specific reconstruction property $I=\bigcupdot_{\ell=1}^{L}I_\ell$, $I_\ell$ as stated in~\eqref{eq:def_Iell}, cf.~\cite{Kae17}.
Under certain mild assumptions, cf. Section~\ref{sec:app_samp_numbers},
there exist multiple rank-1 lattices that fulfill the reconstruction property
and, moreover, their number of sampling nodes is bounded from above by terms
$\lesssim |I|\log|I|$. This yields nearly optimal estimates of sampling numbers in terms of approximation numbers, cf. Theorem~\ref{thm:samp_app}.
A further application leads to significantly improved tractability results for sampling methods, cf. Section~\ref{sec:tractability}.
Again, we stress the fact that all suggested algorithms, i.e.,
\begin{itemize}
\item the construction of the sampling sets,
\item the verification of the reconstruction property, and
\item the corresponding discrete Fourier transform
\end{itemize}
are efficiently implementable. The algorithmic complexity of each
of them is bounded from above by a product of linear terms in the cardinality $|I|$,
linear terms in the dimension $d$, and a few logarithmic factors in $|I|$, cf. Algorithm~\ref{alg:compute_S_I_Lambda} as well as
\cite[Algorithm~3]{Kae17}.
\section*{Acknowledgments}
The author thanks Thomas K\"uhn and Winfried Sickel for pointing out the close connection of the main result of this paper to
their results on approximation numbers in \cite{CoKueSi16} and for the valuable discussions on that topic at the workshop ``Challenges in optimal recovery and hyperbolic cross approximation'' at the INI in Cambridge, 2019.
The author gratefully acknowledges the funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation, project number 380648269).
\small
\section{Introduction}
Let us begin with two thought experiments.
First consider the following ``paradox'' in probability:
if $Z$ is a continuous random variable, then for any possible outcome
$z$ in $\R$, the event $Z \ne z$ occurs \emph{almost surely} (i.e. with probability $1$).
How does one reconcile this with the fact that, in a truly random outcome, every event having probability 1 should occur?
Recasting this in more formal language we have that,
``for all $z \in \R$, \emph{almost surely} $Z \ne z$'',
while ``\emph{almost surely} there exists a $z \in \R$, $Z = z$.''
Next suppose that, for some index set $I$, $\Seq{Z_i : i \in I}$ is a family of independent continuous random variables.
It is a trivial matter that for each pair $i \ne j$, the inequality $Z_i \ne Z_j$ holds with probability $1$.
For large index sets $I$, however,
\[
|\{Z_i : i \in I\}| = |I|
\]
holds with probability $0$; in fact this event contains no outcomes if $I$ is larger in cardinality than $\R$.
In terms of the formal logic, we have that,
``for all $i\ne j$ in $I$, \emph{almost surely} the event $Z_i \ne Z_j$ occurs'', while
``\emph{almost surely it is false that} for all $i \ne j \in I$, the event $Z_i \ne Z_j$ occurs''.
It is natural to ask whether it is possible to revise the notion of \emph{almost surely}
so that its meaning remains unchanged for simple logical assertions such as $Z_i \ne Z_j$
but such that it commutes with quantification.
For instance one might reasonably ask that, in the second example,
$|\{Z_i : i \in I\}| = |I|$ should occur \emph{almost surely} regardless of the cardinality of the index set.
Such a formalism would describe truth in a necessarily larger model of mathematics,
one in which there are new outcomes to the random experiment which did not exist before the experiment was
performed.
The method of forcing, which was introduced by Paul Cohen to establish the independence of the Continuum
Hypothesis \cite{CON_negCH} and put into its current form by Scott \cite{ind_CH:Scott} and Solovay \cite{solovay_model},
addresses issues precisely of this kind.
From a modern perspective, forcing provides a formalism for examining what occurs \emph{almost surely}
not only in probability spaces but also in a much more general setting than what is provided by
our conventional notion of randomness.
Forcing has proved extremely useful in developing and understanding of models of set theory and in determining what
can and cannot be proved within the standard axiomatization of mathematics (which we will take to be ZFC).
In fact it is a heuristic of modern set theory that if a statement arises naturally in mathematics and is consistent, then
its consistency can be established using forcing,
possibly starting from a large cardinal hypothesis.
The focus of this article, however, is of a different nature:
the aim is to demonstrate how the method of forcing can be used to \emph{prove theorems} as opposed to
\emph{establish consistency results}.
Forcing itself concerns the study of adding \emph{generic objects} to a model of set theory, resulting in a larger model of
set theory.
One of the key aspects of forcing is that it provides a formalism for studying what happens \emph{almost surely}
as the result of introducing a generic object.
An analysis of this formalism sometimes leads to new results concerning the original model itself --- results which are in fact independent
of the model entirely.
This can be likened to how the probabilistic method is used in finite combinatorics in settings where more constructive methods fail (see, e.g., \cite{probabilistic_method}).
In what follows, we will examine several examples of how forcing can be used to prove theorems.
Admittedly there are relatively few examples of such applications thus far.
It is my hope that by reaching out to a broader audience, this article will inspire more
innovative uses of forcing in the future.
Even though the goals and examples are somewhat unconventional,
the forcings themselves and the development of the theory
are much the same as one would encounter in a more standard treatment of forcing.
The article will only assume a minimal knowledge of set theory and logic, similar to what a graduate or advanced undergraduate
student might encounter in their core curriculum.
In particular, no prior exposure to forcing will be assumed.
The topics which will be covered include the following:
\emph{genericity},
\emph{names},
the \emph{forcing relation},
\emph{absoluteness},
the \emph{countable chain condition},
\emph{countable closure}, and
\emph{homogeneity arguments}.
These concepts will be put to use though several case studies:
\begin{enumerate}
\item partition theorems of Galvin, Nash-Williams, and Prikry for finite and infinite subsets of $\omega$;
\item intersection properties of uncountable families of events in a probability space;
\item a partition theorem of Halpern and L\"auchli for products of finitely branching trees;
\item a property of marker sequences for Bernoulli shifts;
\item Todorcevic's analysis of Rosenthal compacta.
\setcounter{my_enumerate_counter}{\value{enumi}}
\end{enumerate}
Sections marked with a `*' will not be needed for later sections.
While we will generally avoid proving consistency results,
the temptation to establish the consistency
of the Continuum Hypothesis and its negation along the way is too great ---
this will be for free given what needs to be developed.
For those interested in supplementing the material in this article with a more conventional approach to forcing,
Kunen's \cite{set_theory:Kunen} is widely considered to be the standard treatment.
It also provides a complete introduction to combinatorial set theory and independence results;
the reader looking for further background on set theory is referred there.
See also \cite{multiple_forcing}, \cite{ind_CH:Scott}, \cite{forcing:Shoenfield}, \cite{solovay_model}, \cite{forcing_appl}.
The last section contains a list of additional suggestions for further reading.
This exposition grew out of a set of lecture notes prepared for workshop 13w5026
on ``Axiomatic approaches to forcing techniques in set theory''
at the Banff International Research Station in November 2013.
None of the results presented below are my own.
I'll finish by saying that this project was inspired by countless
conversations with Stevo Todorcevic over the years, starting with my time as his student
in the 1990s.
\section{Preliminaries}
\label{prelim:sec}
Before beginning, we will fix some conventions and
review some of the basic concepts from set theory which will be needed.
Further reading and background can be found in \cite{set_theory:Kunen}.
A set $x$ is \emph{transitive} if whenever $z \in y$ and $y \in x$, then $z \in x$.
Equivalently, $x$ is transitive if and only if every element of $x$ is also a subset of $x$.
An \emph{ordinal} is a set $\alpha$ which is transitive and wellordered by $\in$.
It is readily checked that every element of an ordinal is also an ordinal.
Every wellorder is isomorphic to an ordinal; moreover this ordinal and the isomorphism are unique.
If $\alpha$ and $\beta$ are two ordinals, then exactly one of the following is true:
$\alpha \in \beta$, $\beta \in \alpha$, or $\alpha = \beta$.
We will often write $\alpha < \beta$ to denote $\alpha \in \beta$ if $\alpha$ and $\beta$ are ordinals.
Notice that an ordinal is the set of ordinals smaller than it.
The least ordinal is the empty set, which is denoted $0$.
If $\alpha$ is an ordinal, then $\alpha + 1$ is the least ordinal greater than $\alpha$;
this coincides with the set $\alpha \cup \{\alpha\}$.
The finite ordinals therefore coincide with the nonnegative integers:
$n := \{0,\ldots,n-1\}$.
The least infinite ordinal is $\omega := \{0,1,\ldots\}$, which coincides with the set of
natural numbers.
We will adopt the convention that the set $\N$ of natural numbers does not include $0$.
Unless otherwise specified, $i,j,k,l,m,n$ will be used to denote finite ordinals.
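As a side illustration of our own (not part of the formal development), the finite von Neumann ordinals can be modeled directly as nested sets: each finite ordinal is the frozenset of its predecessors, and transitivity can be verified mechanically.

```python
# Our own side illustration: finite von Neumann ordinals as nested frozensets.

def ordinal(n):
    # n = {0, 1, ..., n-1}; the successor is alpha + 1 = alpha ∪ {alpha}
    if n == 0:
        return frozenset()
    prev = ordinal(n - 1)
    return prev | {prev}

def is_transitive(x):
    # every element of x is also a subset of x
    return all(y <= x for y in x)

three = ordinal(3)
print(len(three), is_transitive(three))  # 3 True
```

Note that membership and order coincide here: `ordinal(2)` is both an element and a subset of `ordinal(3)`, matching the convention $\alpha < \beta$ iff $\alpha \in \beta$.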
An ordinal $\kappa$ is a \emph{cardinal} if whenever $\alpha < \kappa$, $|\alpha| < |\kappa|$.
If $\alpha$ is an ordinal which is not a successor, then we say that $\alpha$ is a limit ordinal.
In this case, the \emph{cofinality} of $\alpha$ is the minimum cardinality of a cofinal subset of $\alpha$.
A cardinal $\kappa$ is \emph{regular} if its cofinality is $\kappa$.
The regularity of a cardinal $\kappa$ is equivalent to the assertion that if $\kappa$ is
partitioned into fewer than $\kappa$ sets, then one of these sets has cardinality $\kappa$.
If $\kappa$ is a cardinal, then $\kappa^+$ denotes the least cardinal greater than $\kappa$.
Cardinals of the form $\kappa^+$ are called \emph{successor cardinals} and are always regular.
Since every set can be wellordered, every set has the same cardinality as some (unique) cardinal;
we will adopt the convention that $|x|$ is the cardinal $\kappa$ such that $|x| = |\kappa|$.
If $\alpha$ is an ordinal, then we define $\omega_\alpha$ to be the $\alpha{}^{\textrm{th}}$ infinite cardinal.
Thus $\omega_0 := \omega$ and $\omega_\beta := \sup_{\alpha < \beta} (\omega_\alpha)^+$ if $\beta > 0$.
The Greek letters $\alpha$, $\beta$, $\gamma$, $\xi$, and $\eta$ will be used to refer to ordinals;
the letters $\kappa$, $\lambda$, $\mu$, and $\theta$ will be reserved for cardinals.
If $A$ and $B$ are sets, then $B^A$ will be used to denote the collection of all functions from $A$ into $B$.
For us, a function is simply a set of ordered pairs.
Thus if $B \subseteq C$, then $B^A \subseteq C^A$ and if $f$ and $g$ are functions, $f \subseteq g$ means
that $f$ is a restriction of $g$.
There is one exception to this notation worth noting.
We will follow the custom of writing $\aleph_\alpha$ for $\omega_\alpha$ in situations where the underlying
order structure is unimportant (formally $\aleph_\alpha$ equals $\omega_\alpha$).
Arithmetic expressions involving $\aleph_\alpha$'s will be used to refer to the cardinality of the resulting set.
For instance $2^{\omega_1}$ is a collection of functions whose cardinality is $2^{\aleph_1}$, which is a cardinal.
If $(X,d)$ is a metric space, then we define its \emph{completion}
to be the set of equivalence classes of Cauchy sequences,
where $\Seq{x_n : n < \infty}$ is equivalent to $\Seq{y_n : n < \infty}$ if $d(x_n,y_n) \to 0$.
In particular, we will regard the set of real numbers $\R$
as being the completion of the rational numbers $\Q$
with the usual metric $d(p,q) := |p-q|$.
Notice that even if $X$ is complete, the completion is not literally equal to $X$, even though it is canonically isometric
to $X$.
This will serve as a minor annoyance when we define names for complete metric spaces in Section \ref{semantics:sec}.
Finally, we will need some notation from first order logic.
The \emph{language of set theory} is the first order
language with a single binary relation $\in$.
If $\phi$ is a formula in the language of set theory, then $\phi(v_1,\ldots,v_n)$ will
be used to indicate that every free variable in $\phi$ is
$v_i$ for some $i =1,\ldots, n$.
If $x_1, \ldots, x_n$ are constants, then $\phi(x_1,\ldots,x_n)$ is the result of simultaneously substituting $x_i$ for $v_i$ for each $i$.
If $\phi$ is a formula and $v$ is variable and $x$ is a term, then $\phi[x/v]$ is the result of substituting
$x$ for every free occurrence of $v$ in $\phi$ (of which there may be none).
If $\phi$ has no free variables, then we say that $\phi$ is a \emph{sentence}.
If every quantifier in $\phi$ is
of the form $\exists x \in y$ or $\forall x \in y$ for some variables $x$ and $y$,
then we say that the quantification in $\phi$ is \emph{bounded}.
Many assertions can be expressed using only bounded quantification:
for instance the assertions ``$A = \bigcup B$'' and ``$(\Qcal,\leq)$ is a partially ordered set''
are expressible by formulas with only bounded quantification.
We now recall some foundational results in set theory which justify our emphasis
on transitive models of set theory below.
A binary relation $R$ is \emph{well founded} if there is no infinite sequence
$\Seq{x_n : n < \infty}$ such that $x_{n+1} R x_n$.
A binary relation $R$ on a set $X$ is \emph{extensional} if for all $x$ and $y$ in $X$,
$\{z \in X : z R x\} = \{z \in X : z R y\}$ implies $x = y$.
Among the axioms of ZFC are the assertions that $\in$ is well founded and extensional.
\begin{prop}
Suppose that $(X,E)$ is a binary relation and $E$ is well founded and extensional.
Then $(X,E)$ is uniquely isomorphic to a transitive set equipped with $\in$.
In particular, if $(X,E)$ is a model of ZFC and $E$ is well founded, then
$(X,E) \simeq (M,\in)$ for some transitive set $M$.
\end{prop}
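Over a finite set, well-foundedness is equivalent to the statement that repeatedly removing $R$-minimal elements eventually exhausts the set. The following sketch (our own illustration, not from the text) implements this finite check.

```python
# Finite sanity check (our own illustration): over a finite set, a relation
# is well founded iff repeatedly deleting R-minimal elements empties the set.

def is_well_founded(X, R):
    # R is a set of pairs (a, b) meaning "a R b"
    remaining = set(X)
    while remaining:
        minimal = {x for x in remaining
                   if not any(a in remaining and (a, x) in R for a in X)}
        if not minimal:
            return False  # a cycle survives, yielding an infinite descent
        remaining -= minimal
    return True

print(is_well_founded({0, 1, 2}, {(0, 1), (1, 2)}))  # True
print(is_well_founded({0, 1}, {(0, 1), (1, 0)}))     # False
```

The second call fails because the two-cycle $0\,R\,1\,R\,0$ gives an infinite descending sequence.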
\begin{prop}
If $M$ is a transitive set, $\phi(v_1,\ldots,v_n)$ is a first order formula with only bounded
quantification and $a_1,\ldots,a_n \in M$,
then $(M,\in) \models \phi(a_1,\ldots,a_n)$ if and only if $\phi(a_1,\ldots,a_n)$ is true.
\end{prop}
Thus, for example, if $M$ is a transitive set and $\Qcal$ is a partial order in $M$,
then $(M,\in)$ satisfies that $\Qcal$ is a partial order.
\section{What is forcing?}
\label{what_is:sec}
Forcing is the procedure of adjoining to a model $M$ of set theory a new \emph{generic object} $G$ in order to
create a larger model $M[G]$.
In this context,
$M$ is referred to as the \emph{ground model} and $M[G]$ is a \emph{generic extension} of $M$.
For us, the generic object will always be a new subset of some partially ordered set $\Qcal$ in $M$,
known as a \emph{forcing}.
This procedure has the following desirable properties:
\begin{enumerate} \setcounter{enumi}{\value{my_enumerate_counter}}
\item $M[G]$ is also a model of set theory and is the minimal model of set theory which has as members all the elements of $M$
and also the generic object $G$.
\item The truth of mathematical statements in $M[G]$ can be determined by a formalism within $M$, known as
the \emph{forcing relation}, which is completely specified by $\Qcal$.
The workings of this formalism are purely internal to $M$.
\setcounter{my_enumerate_counter}{\value{enumi}}
\end{enumerate}
While it will generally not concern us in this article, the key meta-mathematical feature of forcing is that it is often
the case that it is easier to determine truth in the generic extension $M[G]$ than in the \emph{ground model} $M$.
For instance Cohen specified the description of a forcing $\Qcal$ with the property that if $M[G]$ is any generic extension
created by forcing with $\Qcal$, then $M[G]$ necessarily satisfies that the Continuum Hypothesis is false
\cite{CON_negCH} (see Section \ref{ccc:sec} below).
In fact the second thought experiment presented at the beginning of the introduction is derived from a variation of this forcing.
It is also not difficult to specify different forcings which always produce generic extensions satisfying the Continuum Hypothesis
(see Section \ref{sigma-closed:sec} below).
There are two perspectives one can have of forcing: one which is primarily semantic
and one which is primarily syntactic.
Each has its own advantages and disadvantages.
The semantic approach makes certain properties of the forcing relation and the generic
extension intuitive and transparent.
On the other hand, it is fraught with metamathematical issues and philosophical
hangups.
The syntactic approach is less intuitive but more elementary and makes certain other
features of forcing constructions more transparent.
We will tend to favor the syntactic approach in what follows.
We will now fix some terminology.
\begin{defn}[forcing]
A \emph{forcing} is a set $\Qcal$ equipped with a transitive reflexive relation $\leq_\Qcal$ which
contains a greatest element $\mathbf{1}_\Qcal$.
If $\Qcal$ is clear from the context, the subscripts are usually suppressed.
\end{defn}
Our prototypical example of a forcing is $\Rand$,
the collection of all measurable subsets of $[0,1]$ having positive Lebesgue measure,
ordered by containment.
Elements of a forcing are often referred to as \emph{conditions} and are regarded as being approximations to
a desired \emph{generic object}.
In the analogy with randomness, the conditions correspond to the events of the probability space which have positive measure.
If $q \leq p$, then we sometime say that $q$ is \emph{stronger than} $p$ or that $q$ \emph{extends} $p$.
We think of $q$ as providing a better approximation to the generic object.
It will be helpful to abstract the notion of an outcome in terms of a collection of mutually compatible events.
A set $G \subseteq \Qcal$ is a \emph{filter} if $G$ is nonempty, \emph{upward closed}, and \emph{downward directed} in $\Qcal$:
\begin{enumerate}
\setcounter{enumi}{\value{my_enumerate_counter}}
\item if $q$ is in $G$, $p$ is in $\Qcal$ and $q \leq p$, then $p$ is in $G$;
\item if $p$ and $q$ are in $G$, then there is an $r$ in $G$ such that $r \leq p,q$.
\setcounter{my_enumerate_counter}{\value{enumi}}
\end{enumerate}
If $p$ and $q$ are in $\Qcal$, then we say that $p$ and $q$ are \emph{compatible}
if there is an $r$ in $\Qcal$ such that $r \leq p,q$.
Otherwise we say that $p$ and $q$ are \emph{incompatible}.
Notice that two conditions are compatible exactly when there is a filter which contains both of them.
Of course two events in $\Rand$ are compatible exactly when they intersect in a set of positive measure.
A forcing $\Qcal$ is \emph{separative} if whenever $p \not \leq q$, there is an $r \leq p$ such
that $r$ and $q$ are incompatible.
Notice that if $\Qcal$ is any forcing, we can define an equivalence relation $\equiv$ on $\Qcal$
by $q \equiv p$ if
\[
\{r \in \Qcal : r \textrm{ is compatible with } p\} = \{r \in \Qcal : r \textrm{ is compatible with } q\}.
\]
The quotient is ordered by $[q] \leq [p]$ if
\[
\{r \in \Qcal : r \textrm{ is compatible with } q\} \subseteq \{r \in \Qcal : r \textrm{ is compatible with } p\}.
\]
This quotient ordering is separative and is known as the \emph{separative quotient}.
Notice that if $\Qcal$ is separative, then $\equiv$ is just equality and the quotient ordering is just the usual ordering.
The forcing $\Rand$ is not separative; in this example $p \equiv q$ if $p$ and $q$ differ by a measure $0$ set.
It is often convenient to assume forcings are separative and we will often pass to the separative quotient without
further mention (just as one often writes equality of functions in analysis when they
really mean equality modulo a measure 0 set).
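As an illustrative aside, for a finite forcing the separative quotient can be computed directly from the compatibility relation. The following Python sketch (the two toy orders are our own examples, not forcings used elsewhere in this article) groups conditions into the equivalence classes defined above.

```python
def separative_quotient(Q, leq):
    """Group the conditions of a finite forcing (Q, leq) into the
    equivalence classes of the separative quotient: q is equivalent
    to p exactly when they are compatible with the same conditions."""
    def compatible(p, q):
        # p and q are compatible when some r extends both
        return any(leq(r, p) and leq(r, q) for r in Q)
    def compat_set(p):
        return frozenset(r for r in Q if compatible(r, p))
    classes = {}
    for p in Q:
        classes.setdefault(compat_set(p), []).append(p)
    return list(classes.values())

# A linear order: any two conditions are compatible, so the
# separative quotient collapses everything to a single class.
chain = [0, 1, 2]
print(separative_quotient(chain, lambda r, p: r <= p))

# A "tree": 'a' and 'b' are incomparable conditions below the top '1'.
# This order is already separative, so each class is a singleton.
tree = ['1', 'a', 'b']
tree_leq = lambda r, p: r == p or p == '1'
print(separative_quotient(tree, tree_leq))
```

Note that the linear order is the standard example of a non-separative forcing: every condition is compatible with every other, so its separative quotient is trivial.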
The following definition will play a central role in all that follows.
\begin{defn}[generic]
If $M$ is a collection of sets and $\Qcal$ is a forcing, then we say that a filter $G \subseteq \Qcal$ is \emph{$M$-generic}
if whenever $E \subseteq \Qcal$ is in $M$,
there is a $p \in G$ which is either in $E$ or is incompatible with every element of $E$.
\end{defn}
A family $E$ of conditions is said to be \emph{exhaustive}
if whenever $p$ is an element of $\Qcal$, there is an element $q$ of $E$ which is compatible with $p$.
Notice that if $\Ecal$ is a collection of exhaustive sets and $G \subseteq \Qcal$ is an $\Ecal$-generic
filter, then $G$ must intersect every element of $\Ecal$.
Also observe that if $\Scal := \{\{q\} : q \in \Qcal\}$, then the $\Scal$-generic filters are exactly the \emph{ultrafilters} ---
those filters which are maximal.
In order to illustrate the parallel with randomness, take $\Qcal = \Rand$.
Observe that if $E \subseteq \Rand$ is exhaustive, then its union has full measure.
Conversely, if $E \subseteq \Rand$ is countable and $\bigcup E$ has full measure,
then $E$ is exhaustive.
Thus in this setting, genericity is an assertion that certain measure 1 events occur.
There are two other order-theoretic notions closely related to being exhaustive which it will be useful to define.
A family of pairwise incompatible conditions is said to be an \emph{antichain}.
Notice that this differs from the usual
notion of an antichain in a poset, where antichain would mean pairwise \emph{incomparable}.
Observe that any maximal antichain is exhaustive but that in general exhaustive families need not be pairwise incompatible.
A family $\Dcal$ of conditions is \emph{dense} if every element of $\Qcal$ has an extension in $\Dcal$.
For example, the collection of all elements of $\Rand$ which are compact is dense in $\Rand$.
Observe that, by Zorn's Lemma, every dense set in a partial order contains a maximal antichain.
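In the finite case a greedy selection replaces the appeal to Zorn's Lemma. The Python sketch below (again using our toy tree order, which is not a forcing considered elsewhere in this article) extracts a maximal antichain from a dense set.

```python
def maximal_antichain(D, Q, leq):
    """Greedily select a pairwise-incompatible subfamily of the dense
    set D; for a finite forcing the result is a maximal antichain."""
    def compatible(p, q):
        return any(leq(r, p) and leq(r, q) for r in Q)
    antichain = []
    for d in D:
        if all(not compatible(d, a) for a in antichain):
            antichain.append(d)
    return antichain

# The tree forcing: top condition '1' with incomparable 'a', 'b' below.
Q = ['1', 'a', 'b']
leq = lambda r, p: r == p or p == '1'
D = ['a', 'b']          # dense: every condition has an extension in D
A = maximal_antichain(D, Q, leq)
print(A)                 # ['a', 'b']
# Maximality: every condition in Q is compatible with some member of A.
assert all(any(any(leq(r, q) and leq(r, a) for r in Q) for a in A)
           for q in Q)
```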
Two forcings are said to be \emph{equivalent} if they have dense suborders which are isomorphic.
The reason for this is that such forcings generate the same generic extensions.
\section[A theorem of Galvin and Nash-Williams]%
{A precursor to the forcing relation: a partition theorem of Galvin and Nash-Williams}
In this section, we will prove the following theorem of Galvin and Nash-Williams which generalizes
Ramsey's theorem.
The proof is elementary, but crucially employs the forcing relation, albeit implicitly.
We will also use this partition relation in Section \ref{GP:sec}.
The presentation in this section follows \cite[\S 5]{topics_topology:Todorcevic}.
If $A \subseteq \omega$, let $[A]^\omega$ denote all infinite subsets of $A$.
\begin{thm}[see \cite{Galvin-Prikry}]
\label{GNW}
If \(\Fcal\) is a family of nonempty finite subsets of \(\omega\), then there is an infinite subset
\(H\) of \(\omega\) such that either:
\begin{enumerate}[\indent a.]
\item no element of \(\Fcal\) is a subset of \(H\) or
\item every infinite subset of \(H\) has an initial segment which is in \(\Fcal\).
\end{enumerate}
\end{thm}
Notice that Ramsey's theorem is the special case of this theorem in which all elements of
$\Fcal$ have the same cardinality.
We will now introduce some terminology which will be useful in organizing the proof of Theorem \ref{GNW}.
Fix \(\Fcal\) as in the statement of the theorem.
If \(a \subseteq \omega\), \(A \subseteq \omega\) with \(a\) finite and \(A\) infinite,
then we say that \(A\) \emph{accepts} \(a\) if whenever \(B \subseteq A\) is infinite with \(\max (a) < \min (B)\),
then \(a \cup B\) has an initial segment in \(\Fcal\).
We say that \(A\) \emph{rejects} \(a\) if no infinite subset of \(A\) accepts \(a\) and that
\(A\) \emph{decides} \(a\) if it either accepts or rejects \(a\).
We will prove the conclusion of the theorem through a series of lemmas.
\begin{lem}
If \(A\) rejects \(a\), then \(\{k \in A : A \textrm{ accepts } a \cup \{k\}\}\) is finite.
\end{lem}
\begin{proof}
If \(B := \{k \in A : A \textrm{ accepts } a \cup \{k\}\}\) is infinite, then \(B\) is an infinite subset
of \(A\) which accepts \(a\).
\end{proof}
\begin{lem}
There is an infinite set \(H \subseteq \omega\) which decides all of its finite subsets.
\end{lem}
\begin{proof}
Recursively construct infinite sets \(\omega \supseteq H_0 \supseteq H_1 \supseteq \cdots\) such that
if \(n_k:=\min (H_k)\) then \(n_{k-1} < n_k\) and \(H_k\) decides all subsets of \(\{n_i : i < k\}\).
It follows that \(H := \{n_i : i < \infty\}\) decides all of its finite subsets.
\end{proof}
\begin{lem}
If \(H \subseteq \omega\) is infinite and decides all of its finite subsets, then either \(H\)
accepts \(\emptyset\) or else there is an infinite \(A \subseteq H\) which rejects all of its finite subsets.
\end{lem}
\begin{proof}
If \(H\) rejects the emptyset and decides all of its finite subsets, then
recursively construct \(n_0 < n_1 < \ldots\) in \(H\) so that for each \(k\),
\(H\) rejects all subsets of \(\{n_i : i < k\}\).
The choice of the next \(n_k\) is always possible since
\[
\{n : \exists a \subseteq \{n_i : i < k\} (H \textrm{ accepts } a \cup \{n\})\}
\]
is finite.
The set \(A := \{n_i : i < \infty\}\) now rejects all of its finite subsets.
\end{proof}
In order to finish the proof of Theorem \ref{GNW},
observe that if \(H\) accepts the emptyset, then every infinite subset
of \(H\) contains an initial segment in \(\Fcal\).
By the previous lemmas, it therefore suffices to show that if \(A\) is an infinite set which
rejects all of its finite subsets, then no element of \(\Fcal\) is a subset of \(A\).
If \(a \in \Fcal\) with \(a \subseteq A\), then \(B := A \setminus \{0,\ldots,\max (a)\}\) would
accept \(a\), which is impossible.
This finishes the proof of Theorem \ref{GNW}.
\section{The formalism of the forcing relation}
\label{formalism:sec}
In this section we will develop the forcing relation and the forcing language axiomatically, treating the
notion of a \emph{$\Qcal$-name} and the \emph{forcing relation $\Vdash$} as undefined concepts;
the definitions are postponed until Section \ref{semantics:sec}.
The advantage of this approach is that it emphasizes the aspects of the formalism which
are actually used in practice.
Let $\Qcal$ be a forcing, fixed for the duration of the section.
As we stated earlier, one can view $\Qcal$ as providing the collection of events of positive measure with
respect to some abstract notion of randomness.
In this analogy, a $\Qcal$-name would correspond to a set-valued random variable.
It is conventional to denote $\Qcal$-names by letters with a ``dot'' over them.
There are two examples of $\Qcal$-names which deserve special mention.
The first is the ``check name'':
for each set $x$, there is a $\Qcal$-name $\check x$.
This corresponds to a random variable which is constant --- it does not depend on the outcome.
The other is the $\Qcal$-name $\dot G$ for the generic filter;
this corresponds to the random variable representing the outcome of the random experiment.
The \emph{forcing language} associated to $\Qcal$
is the collection of all first order formulas in the language of set theory augmented
by adding a constant symbol for each $\Qcal$-name.
If $q$ is in $\Qcal$ and $\phi$ is a sentence in the forcing language, then informally
the forcing relation $q \Vdash \phi$ asserts that
if the event corresponding to $q$ occurs, then \emph{almost surely} $\phi$ will be true.
In the absence of the definitions of ``$\Qcal$-name'' and ``$\Vdash$,'' the following
properties can be regarded as axioms which govern the behavior of these primitive concepts.
They can be proved from the definitions of $\Qcal$-names and the forcing relation which will be given
in Section \ref{semantics:sec}.
\begin{property} \label{check_names}
For any $p \in \Qcal$ and any sets $x$ and $y$:
\begin{enumerate}[\indent a.]
\item
$p \Vdash \check x \in \check y$ if and only if $x \in y$;
\item
$p \Vdash \check x = \check y$ if and only if $x = y$;
\end{enumerate}
\end{property}
\begin{property} \label{filter_name}
For $p, q \in \Qcal$, $p \Vdash \check q \in \dot G$ if and only if whenever $r \in \Qcal$ is compatible with $p$,
$r$ is compatible with $q$.
\end{property}
If $\Qcal$ is separative, then this property takes a simpler form: $p \Vdash \check q \in \dot G$ if and only
if $p \leq q$.
\begin{property} \label{ordinal_names}
If $\dot \alpha$ is a $\Qcal$-name, $p \in \Qcal$, and
$p \Vdash \dot \alpha \textrm{ is an ordinal}$, then there is an ordinal $\beta$ such that
$p \Vdash \dot \alpha \in \check \beta$.
\end{property}
It is useful to define the following terminology:
if there is a $z$ such that $q \Vdash \dot y = \check z$,
then we say that $q$ \emph{decides} $\dot y$ (to be $z$).
Similarly, if $p \Vdash \phi$ or $p \Vdash \neg \phi$, then we say that $p$ \emph{decides} $\phi$.
\begin{property} \label{decide}
For any $x$, any $\Qcal$-name $\dot y$, and $p \in \Qcal$,
if $p \Vdash \dot y \in \check x$, then
there is a $q \leq p$ which decides $\dot y$.
\end{property}
\begin{property}
\label{collection_for_names}
If $\dot x$ is a $\Qcal$-name and $p \in \Qcal$, then the collection of all $\Qcal$-names $\dot y$ such that
$p \Vdash \dot y \in \dot x$ forms a set and the collection of all $\Qcal$-names $\dot y$ such that
$p \Vdash \dot y = \dot x$ forms a set.
\end{property}
\begin{remark}
Unlike the other properties, this one is dependent on
the definition of $\Qcal$-name which we will give in the next section.
\end{remark}
\begin{property} \label{negation_monotone}
If $p \in \Qcal$ and $\phi$ is a formula in the forcing language, then
$p \Vdash \neg \phi$ if and only if there is no $q \leq p$ such that $q \Vdash \phi$.
\end{property}
Observe that this property implies that
if $p \Vdash \phi$ and $q \leq p$, then $q \Vdash \phi$.
Property \ref{negation_monotone} can be seen as providing an organizational tool in the proof of Theorem \ref{GNW}:
if $\Qcal := ([\omega]^\omega,\subseteq)$ then
an $A \in [\omega]^\omega$ accepts $a$ if and only if $A$ forces that every element of the generic filter contains
an infinite set with an initial part in $\check \Fcal$.
An infinite $A$ rejects $a$ if it forces the negation of this assertion.
\begin{property} \label{completeness_of_names}
If $p \in \Qcal$, then $p \Vdash \exists v \phi$ if and only if there is a $\Qcal$-name $\dot x$ such that
$p \Vdash \phi [\dot x/v]$.
\end{property}
\begin{property} \label{ZFC_forced}
For any $q \in \Qcal$, the collection of sentences in the forcing
language which are forced by $q$ contains the ZFC axioms, the axioms of first order logic,
and is closed under \emph{modus ponens}.
Moreover, if the axioms of ZFC are consistent, then so are the sentences forced by $q$.
\end{property}
If $\mathbf{1} \Vdash_\Qcal \phi$, then we will sometimes say that ``$\Qcal$ forces $\phi$'' or, if $\Qcal$ is clear from the context,
that ``$\phi$ is forced.''
Similarly, we will write ``$\dot x$ is a $\Qcal$-name for...'' to mean
``$\dot x$ is a $\Qcal$-name and $\Qcal$ forces that $\dot x$ is...''.
In order to demonstrate how these properties can be used, we will prove
the following useful propositions.
\begin{prop} \label{check_quant}
Suppose that $x$ is a set and $\phi(v)$ is a formula in the forcing language.
If for all $y \in x$,
$p \Vdash \phi(\check y)$,
then $p \Vdash \forall y \in \check x \phi (y)$.
\end{prop}
\begin{proof}
We will prove the contrapositive.
Toward this end, suppose that $p$ does not force $\forall y \in \check x \phi(y)$.
It follows from Property \ref{negation_monotone} that
there is a $q \leq p$ such that $q \Vdash \neg \forall y \in \check x \phi(y)$.
By Property \ref{ZFC_forced}, this is equivalent to $q \Vdash \exists y \in \check x \neg \phi(y)$.
By Property \ref{completeness_of_names},
there is a $\Qcal$-name $\dot y$ such that
\[
q \Vdash (\dot y \in \check x) \land (\neg \phi( \dot y)).
\]
By Property \ref{ZFC_forced} $q \Vdash \dot y \in \check x$ and
therefore by Property \ref{decide}, there is an $r \leq q$ and a $z$ in $x$ such that
$r \Vdash \dot y = \check z$.
But now, by Property \ref{ZFC_forced}, $r \Vdash \neg \phi(\check z)$ and hence
by Property \ref{negation_monotone}
$p$ does not force $\phi(\check z)$.
\end{proof}
\begin{prop} \label{bounded_abs}
Suppose that $\phi(v_1,\ldots,v_n)$ is a formula in the language of set theory
with only bounded quantification.
If $x_1,\ldots,x_n$ are sets
and $\phi(x_1,\ldots,x_n)$ is true, then $\mathbf{1} \Vdash \phi(\check x_1,\ldots,\check x_n)$.
\end{prop}
\begin{proof}
The proof is by induction on the length of $\phi$.
If $\phi$ is atomic, then this follows from Property \ref{check_names}.
If $\phi$ is a conjunct, disjunct, or a negation,
then the proposition follows from Property \ref{ZFC_forced}
and the induction hypothesis.
Finally, suppose $\phi(v_1,\ldots,v_n)$ is of the form $\forall w \in v_n \psi(v_1,\ldots,v_n,w)$.
If $\forall w \in x_n\ \psi(x_1,\ldots,x_n,w)$ is true, then for each $w \in x_n$,
$\psi(x_1,\ldots,x_n,w)$ is true.
By our induction hypothesis, $\mathbf{1} \Vdash \psi(\check x_1,\ldots,\check x_n,\check w)$ for each
$w \in x_n$.
By Proposition \ref{check_quant}, it follows that
$\mathbf{1} \Vdash \forall w \in \check x_n \psi(\check x_1,\ldots,\check x_n,w)$.
\end{proof}
\begin{prop} \label{WF_abs}
Suppose that $T$ is a set consisting of finite length sequences, closed under taking initial segments.
If there is a forcing $\Qcal$ and some
$q \in \Qcal$ forces ``there is an infinite sequence $\sigma$, all of whose
finite initial parts are in $T$,'' then such a sequence $\sigma$ exists.
\end{prop}
\begin{proof}
If no such sequence $\sigma$ exists, then there is a function $\rho$
from $T$ into the ordinals such that if
$s$ is a proper initial segment of $t$, then $\rho(t) \in \rho(s)$.
Such a $\rho$ certifies the nonexistence of such a $\sigma$ since such a $\sigma$ would define
a strictly decreasing infinite sequence of ordinals.
Observe that the assertion that $\rho$ is a strictly decreasing map from $T$ into the ordinals
is a statement about $\rho$ and $T$ involving only bounded quantification.
By Proposition \ref{bounded_abs}, this statement is forced by every forcing $\Qcal$.
\end{proof}
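The rank function $\rho$ in this proof can be computed explicitly when $T$ is finite: since a finite tree has no infinite branch, $\rho$ is everywhere defined and strictly decreasing along end-extensions. The Python sketch below uses a small toy tree of our own.

```python
def rho(t, T):
    """The rank of a node t in a well-founded tree T of finite
    sequences: 0 at the leaves, and otherwise one more than the
    largest rank of an immediate extension in T."""
    children = [s for s in T if len(s) == len(t) + 1 and s[:len(t)] == t]
    return 0 if not children else 1 + max(rho(s, T) for s in children)

# A toy tree, closed under initial segments.
T = [(), (0,), (1,), (0, 0), (0, 1), (0, 0, 0)]
print(rho((), T))      # 3

# rho is strictly decreasing along end-extensions, so it certifies
# that T has no infinite branch.
assert all(rho(t, T) < rho(s, T)
           for s in T for t in T
           if len(s) < len(t) and t[:len(s)] == s)
```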
There is a special class of forcings for which there is a more
conceptual picture of the forcing relation.
We begin by stating a general fact about forcings.
\begin{thm} \label{cBA}
For every forcing $\Qcal$, $\Qcal$ is isomorphic to a dense suborder of the positive
elements of a complete Boolean algebra.
\end{thm}
\noindent
Here we recall that a Boolean algebra is \emph{complete} if every subset has a least upper
bound.
A typical example of a complete Boolean algebra is the algebra of measurable subsets
of $[0,1]$ modulo the ideal of measure zero sets.
The algebra of Borel subsets of $[0,1]$ modulo the ideal of first category sets is similarly a complete Boolean algebra.
Random and Cohen forcing, respectively, are isomorphic to dense suborders of the positive elements of these
complete Boolean algebras.
Suppose now that $\Qcal$ is the positive elements of some complete Boolean algebra $\Bcal$.
If $\phi$ is a formula in the forcing language, then define the \emph{truth value}
$\truth{\phi}$
of $\phi$ to be the least upper bound of all $b \in \Bcal$ such that $b \Vdash \phi$.
Observe that if $a \leq \truth{\phi}$, then $a$ cannot force
$\neg \phi$.
Hence $\truth{\phi}$ forces $\phi$.
The rules which govern the logical connectives now take a particularly nice form:
\[
\truth{\neg \phi} = \truth{\phi}^{\mathtt{c}}
\qquad
\truth{\phi \land \psi} = \truth{\phi} \land \truth{\psi}
\qquad
\truth{\phi \lor \psi} = \truth{\phi} \lor \truth{\psi}
\]
\[
\truth{\forall v \phi} = \bigwedge_{\dot x} \truth{\phi[\dot x/v]}
\qquad
\truth{\exists v \phi} = \bigvee_{\dot x} \truth{\phi[\dot x/v]}
\]
Notice that while $\dot x$ ranges over all $\Qcal$-names in the last equations
--- a proper class --- the collection of all possible values of
$\truth{\phi[\dot x/v]}$ is a set and therefore the last items are
meaningful.
In spite of the usefulness of complete Boolean algebras
in understanding forcing and also in some of
the development of the abstract theory of forcing,
forcings of interest rarely present themselves as complete Boolean algebras (the notable exceptions
being Cohen and random forcing).
While Theorem \ref{cBA} allows us to represent any forcing inside a complete
Boolean algebra, defining forcing strictly in terms of complete
Boolean algebras would prove cumbersome in practice.
\section{Names, interpretation, and semantics}
\label{semantics:sec}
In this section we will turn to the task of giving a
formal definition of what is meant by a \emph{$\Qcal$-name} and
$q \Vdash \phi$.
This will in turn be used to give a semantic perspective of forcing.
The definitions in this section are not essential for understanding
most forcing arguments and the reader may wish to skip this section on their first reading
of the material.
Others, however, may wish to have a tangible model of the axioms.
Before proceeding, we need to recall the notion of the \emph{rank} of a set.
If $x$ is a set, then the \emph{rank} of $x$ is defined recursively:
the rank of the emptyset is $0$ and the rank of $x$ is the least ordinal which is strictly greater than the ranks of
its elements.
This is always a well defined quantity and it will sometimes be necessary to give definitions by recursion on rank.
We recall that formally an ordered pair $(x,y)$ is defined to be $\{\{x\},\{x,y\}\}$;
this is only relevant in that the rank of $(x,y)$ is greater than the ranks of
either $x$ or $y$.
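For hereditarily finite sets the rank can be computed directly by this recursion. In the Python sketch below (the particular sets are our own examples), sets are modeled as frozensets, and the remark about ordered pairs is checked for the Kuratowski pair.

```python
def rank(x):
    """The rank of a hereditarily finite set: 0 for the empty set,
    otherwise the least ordinal above the ranks of its elements."""
    return 0 if not x else 1 + max(rank(y) for y in x)

empty = frozenset()
one = frozenset({empty})            # {emptyset}, rank 1
two = frozenset({empty, one})       # {emptyset, {emptyset}}, rank 2

def pair(x, y):
    """The Kuratowski ordered pair {{x}, {x, y}}."""
    return frozenset({frozenset({x}), frozenset({x, y})})

p = pair(one, two)
print(rank(p))   # 4
# The rank of the pair exceeds the ranks of both coordinates.
assert rank(p) > rank(one) and rank(p) > rank(two)
```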
Now let $\Qcal$ be a forcing, fixed for the duration of the section.
If $q \in \Qcal$, let $\Qcal_q$ denote the forcing
$(\{r \in \Qcal : r \leq q\},\leq)$.
\begin{defn}[name]
A set $\dot x$ is a \emph{$\Qcal$-name}
if every element of $\dot x$ is of the form $(\dot y,q)$ where
$\dot y$ is a $\Qcal_q$-name and $q$ is in $\Qcal$.
\end{defn}
\noindent
(The requirement that $\dot y$ be a $\Qcal_q$-name is to help ensure that
Property \ref{collection_for_names} is satisfied.)
Notice that this apparently implicit definition is actually a definition by recursion
on \emph{rank}, as discussed in Section \ref{prelim:sec}.
Furthermore, if $\Pcal \subseteq \Qcal$ is a suborder, then any $\Pcal$-name
is also a $\Qcal$-name.
The following provide two important examples of $\Qcal$-names.
\begin{defn}[check names]
If $x$ is a set, $\check x$ is defined recursively by
\[
\check x := \{(\check y,\mathbf{1}) : y \in x\}.
\]
\end{defn}
\begin{defn}[name for the generic filter]
$\dot G := \{(\check q,q) : q \in \Qcal\}$.
\end{defn}
As mentioned in the previous section, the notion of a $\Qcal$-name is intended to describe a procedure
for building a new set from a given filter $G \subseteq \Qcal$.
This procedure is formally described as follows.
\begin{defn}[interpretation]
If $G$ is any filter and $\dot x$ is any $\Qcal$-name, define $\Int{\dot x}{G}$ recursively by
\[
\Int{\dot x}{G} := \{\Int{\dot y}{G} : \exists p \in G\ ((\dot y,p) \in \dot x)\}
\]
\end{defn}
\noindent
Again, this is a definition by recursion on rank.
In the analogy with randomness, $\Int{\dot x}{G}$ corresponds to evaluating a random variable
at a given outcome.
The following gives the motivation for the definitions of $\check x$ and $\dot G$.
\begin{prop}
If $H$ is any filter and $x$ is any set,
then $\Int{\check x}{H} = x$.
\end{prop}
\begin{prop}
If $H$ is any filter, then $\Int{\dot G}{H} = H$.
\end{prop}
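These definitions can be modeled directly for a finite forcing. In the Python sketch below (a toy example of ours), names are frozensets of (name, condition) pairs, the forcing has a top condition with two incomparable conditions beneath it, and both propositions are verified for a small filter.

```python
ONE = frozenset()                      # the top condition
C1 = frozenset({ONE})                  # two incomparable conditions
C2 = frozenset({ONE, C1})              # below the top
Q = [ONE, C1, C2]

def check(x):
    """The check name of a (hereditarily finite) set x:
    {(check(y), ONE) : y in x}."""
    return frozenset((check(y), ONE) for y in x)

# The name for the generic filter: {(check(q), q) : q in Q}.
G_dot = frozenset((check(q), q) for q in Q)

def interpret(name, G):
    """Int(name, G): recursively interpret those pairs whose
    condition belongs to the filter G."""
    return frozenset(interpret(m, G) for (m, p) in name if p in G)

H = {ONE, C1}          # a filter on Q (upward closed and directed)
x = frozenset({frozenset(), frozenset({frozenset()})})   # the set {0, {0}}
assert interpret(check(x), H) == x     # Int of the check name is x
assert interpret(G_dot, H) == H        # Int of G_dot is the filter itself
print("both propositions verified")
```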
\begin{remark}
It is possible to define \emph{$\Qcal$-name} to just be a synonym for \emph{set}.
The definition of $\Int{\dot x}{G}$ would be left unchanged so that only those elements
of $\dot x$ which are ordered pairs with a second coordinate in $\Qcal$ play any role in
the interpretation.
This alternative has the advantage of brevity and
much of what is stated in the previous section remains true with this alteration.
On the other hand, it is easily seen that Property \ref{collection_for_names} fails.
For instance, the sets which do not contain any ordered pairs form a proper class and each member
of this class is forced by the trivial condition to be equal to the emptyset.
\end{remark}
We now turn to the formal definition of the forcing relation.
The main complexity of the definition of the forcing relation is tied
up in the formal definition of $p \Vdash \dot x \in \dot y$.
\begin{defn}[forcing relation: atomic formulae]
If $\Qcal$ is a forcing and $\dot x$ and $\dot y$ are $\Qcal$-names,
then we define the meaning of $p \Vdash \dot x = \dot y$ and $p \Vdash \dot x \in \dot y$ as follows
(the definition is by simultaneous recursion on rank):
\begin{enumerate}[\indent a.]
\item $p \Vdash \dot x = \dot y$ if and only if
for all $\dot z$ and $p' \leq p$,
\[
(p' \Vdash \dot z \in \dot x) \leftrightarrow (p' \Vdash \dot z \in \dot y).
\]
\item $p \Vdash \dot x \in \dot y$ if and only if for every $p' \leq p$ there is
a $p'' \leq p'$ and a $(\dot z,q)$ in $\dot y$ such that
$p'' \leq q$ and $p'' \Vdash \dot x = \dot z$.
\end{enumerate}
\end{defn}
Notice that the definition of $p \Vdash \dot x = \dot y$ is precisely to ensure
that the \emph{Axiom of Extensionality} --- which asserts that two sets are equal if they have the same
set of elements --- is forced by any condition.
The definition of the forcing relation for nonatomic formulas
is straightforward and is essentially determined by the properties of the forcing relation
mentioned already in Section \ref{formalism:sec}.
\begin{defn}[forcing relation: logical connectives]
Suppose that $p \in \Qcal$ and $\phi$ and $\psi$ are formulas in the forcing language.
The following are true:
\begin{enumerate}[\indent a.]
\item $p \Vdash \neg \phi$ if there does not exist a $q \leq p$ such that
$q \Vdash \phi$.
\item $p \Vdash \phi \land \psi$ if and only if $p \Vdash \phi$ and $p \Vdash \psi$.
\item $p \Vdash \phi \lor \psi$ if there does not exist a $q \leq p$ such that
$q \Vdash \neg \phi \land \neg \psi$.
\item $p \Vdash \forall v \phi$ if and only if for all $\dot x$,
$p \Vdash \phi[\dot x/v]$.
\item $p \Vdash \exists v \phi$ if and only if there is an $\dot x$ such that
$p \Vdash \phi[\dot x/v]$.
\end{enumerate}
\end{defn}
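The recursion for atomic formulas can also be run explicitly on a finite forcing. In the Python sketch below (our own toy example), the universal quantifier over names $\dot z$ in the definition of $p \Vdash \dot x = \dot y$ --- which officially ranges over all names --- is restricted to the names occurring in the domains of the two names being compared; this simplifying assumption is adequate for the finite sketch but is not the official definition. The sketch then checks, on a separative three-element forcing, that $p \Vdash \check q \in \dot G$ holds exactly when $p \leq q$, as predicted by Property \ref{filter_name}.

```python
Q = [0, 1, 2]                 # 0 is the top condition; 1, 2 incomparable
leq = lambda r, p: r == p or p == 0

def vn(n):
    """Encode n as a von Neumann natural (a frozenset)."""
    return frozenset(vn(k) for k in range(n))

def check(x):
    return frozenset((check(y), 0) for y in x)

G_dot = frozenset((check(vn(q)), q) for q in Q)

def below(p):
    return [r for r in Q if leq(r, p)]

def forces_eq(p, x, y):
    # p forces x = y: every extension of p puts the same names into
    # x and y.  (The quantifier over names z is restricted to the
    # domains of x and y -- enough for this finite sketch.)
    zs = [m for (m, _) in x | y]
    return all(forces_in(q, z, x) == forces_in(q, z, y)
               for q in below(p) for z in zs)

def forces_in(p, x, y):
    # p forces x in y: below every extension p1 of p, some p2 finds a
    # witnessing pair (z, q) in y with p2 <= q and p2 forcing x = z.
    return all(any(leq(p2, q) and forces_eq(p2, x, z)
                   for p2 in below(p1) for (z, q) in y)
               for p1 in below(p))

# Since this toy forcing is separative, p forces check(q) in G_dot
# exactly when p <= q.
for p in Q:
    for q in Q:
        assert forces_in(p, check(vn(q)), G_dot) == leq(p, q)
print("Property verified on the toy forcing")
```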
The interested reader may wish to stop and verify that the definitions of
$\Vdash_\Qcal$ and $\Qcal$-name given in this section satisfy the properties stated in
Section \ref{formalism:sec}.
The following theorem is one of the fundamental results about forcing.
It connects the syntactic properties of the forcing relation with truth
in generic extensions of models of set theory.
If $M$ is a countable transitive model of ZFC, $\Qcal$ is a forcing in $M$, and
$G \subseteq \Qcal$ is an $M$-generic filter,
define
\[
M[G] := \{\Int{\dot x}{G} : \dot x \in M \mand \dot x \textrm{ is a $\Qcal$-name}\}.
\]
In this context, $M[G]$ is the \emph{generic extension} of $M$ by $G$ and $M$ is referred
to as the \emph{ground model}.
Notice that
\[
M = \{\Int{\check x}{G} : x \in M\} \subseteq M[G] \qquad \textrm{and}
\qquad G = \Int{\dot G}{G} \in M[G].
\]
The following
theorem relates the semantics of forcing (i.e. truth in the generic extension) with
the syntax (i.e. the forcing relation).
\begin{thm}
Suppose that $M$ is a countable transitive model of ZFC and that $\Qcal$
is a forcing which is in $M$.
If $q$ is in $\Qcal$, $\phi(v_1,\ldots,v_n)$ is a formula in the language of set theory,
and $\dot x_1,\ldots,\dot x_n$
are in $M$, then the following are equivalent:
\begin{enumerate}[\indent a.]
\item $q \Vdash \phi(\dot x_1,\ldots,\dot x_n)$.
\item
$M[G] \models \phi(\Int{\dot x_1}{G},\ldots,\Int{\dot x_n}{G})$
whenever $G \subseteq \Qcal$ is an $M$-generic filter and $q$ is in $G$.
\end{enumerate}
\end{thm}
\begin{remark}
This theorem can be modified to cover countable transitive models of
sufficiently large finite fragments of ZFC.
In fact this is crucial if one wishes to give a rigorous treatment of the semantics.
By G\"odel's second incompleteness theorem, ZFC alone does
not prove that there are any set models of ZFC (countable or otherwise).
This is in fact our main reason for de-emphasizing the semantics: while it is formally necessary
to work with models of finite fragments of ZFC, this only introduces technicalities which are
inessential to understanding what can be achieved with forcing.
\end{remark}
While we will generally not work with the semantics of forcing, let us note here
that it is conventional to use $\dot x$ to denote a $\Qcal$-name for
an element $x$ of a generic extension $M[G]$.
While such names are not unique, the choice generally does not matter and
this informal convention affords a great deal of notational economy.
We will now finish this section with some further discussion and notational
conventions concerning names.
It is frequently the case in a forcing construction that one encounters a $\Qcal$-name
for a function $\dot f$ whose domain is forced by some condition to be a ground model set;
that is, for some set $D$, $p \Vdash \operatorname{dom}(\dot f) = \check D$.
A particularly common occurrence is when $D = \omega$ or, more generally, some ordinal.
Under these circumstances, it is common to abuse notation and regard $\dot f$ as a function
defined on $D$, whose values are themselves names: $\dot f(x)$ is a $\Qcal$-name $\dot y$ such that
it is forced that $\dot f(\check x) = \dot y$.
Notice that if, for some sets $A$ and $B$, $p \Vdash \dot f: \check A \to \check B$,
it need not be the case that $\dot f(a)$ is of the form $\check b$ for some $b$ in $B$ --- i.e.
$p$ need not \emph{decide} the value of $\dot f(a)$ for a given $a \in A$.
In most cases, names are not constructed explicitly.
Rather a procedure is described for how to build the object to which the name
is referring.
Properties \ref{completeness_of_names} and \ref{ZFC_forced} are then implicitly invoked.
For example, if $\dot x$ is a $\Qcal$-name, $\bigcup \dot x$ is the $\Qcal$-name for the unique set
which is forced to be equal to the union of $\dot x$.
Notice that there is an abuse of notation at work here: formally, $\dot x$ is a set which has a union $y$.
It need not be the case that $y$ is even a $\Qcal$-name and certainly one should not
expect $\mathbf{1} \Vdash \bigcup \dot x = \check y$.
This is one of the reasons for using ``dot notation'': it emphasizes the role of the object as a name.
A more typical example is $\omega_1$, the least uncountable ordinal.
Since ZFC proves ``there is a unique set $\omega_1$ such that $\omega_1$ is an ordinal, $\omega_1$ is uncountable,
and every element of $\omega_1$ is countable,''
it follows that if $\Qcal$ is any forcing, $\mathbf{1} \Vdash_\Qcal \exists x \phi(x)$, where $\phi(x)$ asserts $x$ is
the least uncountable ordinal.
In particular there is a $\Qcal$-name $\dot x$ such that $\mathbf{1} \Vdash_\Qcal \phi(\dot x)$.
Unless readability dictates otherwise, such names are denoted by adding a ``dot'' above the usual notation
(e.g. $\dot \omega_1$).
Another example is $\R$.
Recall that $\R$ is the completion of $\Q$ with respect to its metric --- formally the collection
of all equivalence classes of Cauchy sequences of rationals.
We use this same formal definition of $\R$ to define $\dot \R$:
if $\Qcal$ is a forcing, $\dot \R$ is the collection of all $\Qcal$-names for equivalence classes of
Cauchy sequences of rational numbers.
Notice that $\dot \R$ is not the same as $\check \R$
and, more to the point, we need not even have that
$\mathbf{1} \Vdash_{\Qcal} \dot \R = \check \R$ for a given forcing $\Qcal$.
This construction also readily generalizes to define $\dot X$ if $X$ is a complete
metric space.
The $\Qcal$-name $\dot X$ is then the collection of all $\Qcal$-names $\dot x$ such that
$\mathbf{1}$ forces that $\dot x$ is an equivalence class of Cauchy sequences
of elements of $\check X$.
That is, $\dot X$ is a $\Qcal$-name for the completion of $\check X$.
Finally, there are some definable sets which are always interpreted as
ground model sets and do not depend on the generic filter.
Two typical examples are finite and countable ordinals such as $0$, $1$, $\omega$, and $\omega^2$ as well
as sets such as $\Q$.
In such cases, checks are suppressed in writing the names for ease of readability --- we will write
$\Q$ and not $\check \Q$ or $\dot \Q$ in formulae which occur in the forcing language.
\section{The cast}
\label{cast:sec}
We will now introduce the examples which we will put to work
throughout the rest of the article.
The first class of examples provides the justification
for viewing forcings as abstract notions of randomness.
\begin{example}[random forcing]
Define $\Rand$ to be the collection of all measurable subsets of $[0,1]$ which have positive measure.
If $I$ is any index set, let $\Rand_I$ denote the collection of all measurable subsets of
$[0,1]^I$ which have positive measure.
Here $[0,1]$ is equipped with Lebesgue measure and $[0,1]^I$ is given the product measure.
Define $q \leq p$ to mean $q \subseteq p$.
This order is not separative so formally here we define $\Rand$ and $\Rand_I$ to be the corresponding
separative quotients.
This amounts to identifying those measurable sets which differ by a measure zero set.
Notice that every element of $\Rand_I$ contains a compact set in $\Rand_I$ --- the compact elements of $\Rand_I$
are dense.
Furthermore, two elements
of $\Rand_I$ are compatible if and only if their intersection has positive measure.
\end{example}
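As a toy illustration of the compatibility relation (the particular intervals here are our own choice, included only for concreteness):

```latex
% Two conditions in \Rand and a witness to their compatibility:
% take p = [0,2/3] and q = [1/3,1], each of positive measure.
\[
\lambda(p \cap q) = \lambda\!\left(\left[\tfrac13,\tfrac23\right]\right) = \tfrac13 > 0,
\]
% so p \cap q is itself a condition extending both p and q.
% By contrast, p' = [0,1/2] and q' = [1/2,1] are incompatible:
% their intersection is the single point {1/2}, which has measure
% zero and so is identified with 0 in the separative quotient.
```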
When working with a forcing $\Qcal$, one is rarely interested in the generic filter itself but rather
in some \emph{generic object} which can be derived in some natural way from the generic filter.
For instance, in $\Rand_I$ it is forced that
\[
\bigcap \{ \operatorname{cl}(q) : q \in \dot G\}
\]
contains a unique element.
We will let $\dot r$ denote a fixed $\Rand_I$-name for this element.
For each $i \in I$, let $\dot r_i$ denote a fixed $\Rand_I$-name for the $i$th coordinate of $\dot r$ and
observe that for all $i \ne j$ in $I$,
\[
D_{i,j} := \{q \in \Rand_I : \forall x\ \big((x \in \operatorname{cl}(q)) \rightarrow (x(i) \ne x(j))\big)\}
\]
is dense.
Therefore $\mathbf{1} \Vdash_{\Rand_I} \forall i \ne j \in \check I\ (\dot r_i \ne \dot r_j)$.
In particular, it is forced by $\Rand_I$ that $|\dot \R| \geq |\check I|$.
(Notice however, that we have not established that if, e.g., $I = \aleph_2$,
then $\mathbf{1} \Vdash_{\Rand_I} \dot \aleph_2 = \check \aleph_2$.
This will be established in Section \ref{ccc:sec}.)
In the context of $\Rand$, we will use $\dot r$ to denote a $\Rand$-name for the unique element
of $\bigcap \{\operatorname{cl} (q) : q \in \dot G\}$.
If $M$ is a transitive model of ZFC, then $r \in [0,1]$ is in every measure 1 Borel set coded in $M$ if and only if
$\{q \in \Rand \cap M : r \in q\}$ is an $M$-generic filter.
Such an $r$ is commonly referred to as a \emph{random real} over $M$.
The notion of a random real was first introduced by Solovay \cite{solovay_model}.
The next class of examples includes Cohen's original forcing from \cite{CON_negCH}.
Just as random forcing is rooted in measure theory,
Cohen forcing is rooted in the notion of Baire category.
\begin{example}[Cohen forcing]
Let $\Cohen$ denote the collection of all \emph{finite partial functions} from $\omega$ to $2$:
all functions $q$ such that the domain of $q$ is a finite subset of $\omega$ and the range of $q$ is
contained in $2 = \{0,1\}$.
We order $\Cohen$ by $q \leq p$ if $q$ extends $p$ as a function.
If $I$ is a set, let $\Cohen_I$ denote the collection
of all finite partial functions from $I \times \omega$ to $2$, similarly ordered by extension.
It is not difficult to show that $\Cohen_I$ is isomorphic to a dense suborder of the collection of all
nonempty open subsets of $[0,1]^I$, ordered by containment.
This makes $\Cohen_I$ analogous to $\Rand_I$ (in fact it is a suborder), although viewing
$\Cohen_I$ as a collection of finite partial functions will often be more convenient from the point
of view of notation.
\end{example}
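As a quick illustration of how density arguments are run in $\Cohen$ (this particular computation is routine and is our own, not taken from the cited sources):

```latex
% For each n < omega, the set
\[
D_n := \{q \in \Cohen : n \in \operatorname{dom}(q)\}
\]
% is dense: given p in \Cohen, either n is already in dom(p), or
% q := p \cup \{(n,0)\} is a finite partial function extending p
% with n in its domain; in either case some q <= p lies in D_n.
% Hence if G meets every D_n, then \bigcup G is a total function
% from omega to 2.
```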
It is very often the case that forcings consist of a collection of \emph{partial functions} ordered
by \emph{extension}.
By this we mean that $q \leq p$ means that $p$ is the restriction of $q$ to the domain of $p$.
A filter in the forcing is then a collection of functions which is directed under containment
and whose union is therefore also a function.
This union is the generic object derived from the generic filter.
In the case of $\Cohen_I$, observe that
for each $i \ne j$ in $I$ and $n < \omega$, both
\[
\{q \in \Cohen_I : (i,n) \in \operatorname{dom}(q)\}
\]
and
\[
\{q \in \Cohen_I : \exists m\ \big((\{(i,m),(j,m)\} \subseteq \operatorname{dom}(q)) \land (q(i,m) \ne q(j,m)) \big)\}
\]
are dense.
In particular, the generic object will be a function from $I \times \omega$ into $2$.
As in the case of $\Rand_I$,
such a generic object naturally corresponds to an indexed family $\Seq{r_i : i \in I}$
of elements of $[0,1]$ and
genericity ensures that these elements are all distinct.
If $M$ is a transitive model of ZFC, then $r \in [0,1]$ is in every dense open set coded in $M$ if and only if the set of finite restrictions of the binary expansion of $r$ is an
$M$-generic filter for the forcing $\Cohen$.
Such an $r$ is commonly referred to as a \emph{Cohen real} over $M$.
Notice that $[0,1]$ is a union of a measure 0 set and a set of first category:
for every $n$, the rationals in $[0,1]$ are contained in a relatively dense open set
of measure less than $1/n$.
Thus no element of $[0,1]$ is both
a Cohen real and a random real over a transitive model of ZFC.
In fact there are qualitative differences between Cohen and random reals as well.
For instance, in the case of random forcing it is forced that
\[
\lim_{n \to \infty} \frac{1}{n} |\{i < n : \dot r(i) = 1\}| = \frac{1}{2}
\]
where $\dot r(i)$ denotes the $i$th binary digit of the generic real,
whereas in the case of Cohen forcing it is forced that the limit does not exist.
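A sketch of the density argument behind the Cohen case (the sets $E_n$ below are an ad hoc choice of ours, included only to indicate the mechanism):

```latex
% For each n < omega, consider
\[
E_n := \{q \in \Cohen : \exists m > n\ \forall i \in [m,3m)\
        \big(i \in \operatorname{dom}(q) \land q(i) = 1\big)\}.
\]
% E_n is dense: extend any finite condition p by choosing m > n beyond
% dom(p) and assigning the value 1 on all of [m,3m). If G meets every
% E_n, then the generic function has density of 1's at least 2/3 at
% infinitely many stages 3m; the symmetric sets with 0 in place of 1
% force the density below 1/3 at infinitely many other stages. Hence
% the limit cannot exist.
```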
The next example of a forcing appears similar at first to random forcing, but in fact it is quite
different in nature.
\begin{example}[Amoeba forcing]
If $1 > \epsilon > 0$, then define $\Ameoba_\epsilon$ to be the collection of all elements of
$\Rand$ of measure greater than $\epsilon$.
This is regarded as a forcing with the order induced from $\Rand$.
\end{example}
Notice that the compatibility relation on $\Ameoba_{\epsilon}$ differs from that inherited from $\Rand$:
two conditions in $\Ameoba_{\epsilon}$ are compatible in $\Ameoba_{\epsilon}$ if and only if their intersection has measure
greater than $\epsilon$.
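For a concrete contrast (the numbers here are chosen by us purely for illustration), take $\epsilon = \tfrac12$:

```latex
% The conditions p = [0,7/10] and q = [3/10,1] each have measure
% 7/10 > 1/2 and so belong to \Ameoba_{1/2}.
\[
\lambda(p \cap q) = \lambda\!\left(\left[\tfrac{3}{10},\tfrac{7}{10}\right]\right)
  = \tfrac{2}{5} > 0,
\]
% so p and q are compatible in \Rand. But 2/5 < 1/2, so no subset of
% p \cap q has measure greater than 1/2; hence p and q have no common
% lower bound in \Ameoba_{1/2} and are incompatible there.
```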
The previous forcings all introduce a new subset of $\omega$ to the ground model.
The next example adds a new ultrafilter on $\omega$ but, as we will see in Section \ref{sigma-closed:sec}, it
does not introduce a new subset of $\omega$.
\begin{example}
Let $[\omega]^\omega$ denote the collection
of all infinite subsets of $\omega$.
The ordering of containment on $[\omega]^\omega$ is not separative;
its separative quotient is obtained by identifying those $x$ and $y$
which have finite symmetric difference.
We will abuse notation and denote this quotient by $[\omega]^{\omega}$ as well.
\end{example}
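To indicate why forcing with $[\omega]^{\omega}$ adds an ultrafilter, one density computation suffices (a standard observation, sketched here in our notation):

```latex
% For each A \subseteq omega in the ground model, the set
\[
D_A := \{x \in [\omega]^{\omega} : (x \subseteq A) \lor (x \subseteq \omega \setminus A)\}
\]
% is dense (understood mod finite in the separative quotient): any
% infinite x meets A or its complement in an infinite set, and that
% intersection extends x into D_A. A generic filter G therefore
% decides every ground model A, and
%   { A : some x in G is almost contained in A }
% is an ultrafilter on omega in the extension.
```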
The next forcing was introduced by Mathias \cite{happy_families} to study
infinite dimensional generalizations of Ramsey's theorem.
\begin{example}[Mathias forcing]
Let $\Mathias$ denote the collection of all pairs $p = (a_p,A_p)$
such that $A_p$ is in $[\omega]^{\omega}$ and $a_p$ is a finite initial part of $A_p$.
Define $q \leq p$ to mean $a_p \subseteq a_q$ and $A_q \subseteq A_p$.
Note in particular that in this situation $a_p$ is an initial segment of $a_q$ and
$a_q \setminus a_p$ is contained in $A_p$.
This forcing is known as \emph{Mathias forcing}.
\end{example}
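As with the previous examples, the generic object for $\Mathias$ is extracted by a density argument; here is a minimal instance (our own routine verification):

```latex
% For each n < omega, the set
\[
D_n := \{q \in \Mathias : |a_q| \geq n\}
\]
% is dense: given p = (a_p, A_p), adjoin to a_p the least n - |a_p|
% elements of A_p \setminus a_p to obtain a finite initial part a_q
% of A_p, and set q := (a_q, A_p). Then q <= p and |a_q| >= n.
% Hence the generic object \bigcup \{ a_q : q \in G \} is an infinite
% subset of omega.
```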
The final example is an illustration of the potential raw power of forcing.
Typically the phenomenon of collapsing cardinals to $\aleph_0$ is something one wishes
to avoid.
\begin{example}[collapsing to $\aleph_0$]
If $X$ is a set, consider the collection $X^{<\omega}$ of all finite sequences of elements
of $X$, ordered by extension.
Observe that if $x$ is in $X$, then the collection of all elements of $X^{<\omega}$ which
contain $x$ in their range is dense.
Thus $X^{<\omega}$ forces that $|\check X| = \aleph_0$.
Notice that if $X = \R$ in this example, then it is forced that
$|\check \R| = \check \aleph_0 < |\dot \R|$.
\end{example}
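To spell out how the generic filter for $X^{<\omega}$ yields a surjection from $\omega$ onto $X$ (a routine verification, included for concreteness):

```latex
% Let g := \bigcup G, where G is a generic filter on X^{<omega}.
% For each n < omega and each x in X, the sets
\[
D_n := \{ s \in X^{<\omega} : |s| \geq n \}, \qquad
E_x := \{ s \in X^{<\omega} : x \in \operatorname{ran}(s) \}
\]
% are dense, since any finite sequence may be extended by one more
% entry, in particular by the entry x. Meeting every D_n makes g a
% function with domain omega; meeting every E_x makes g surject onto
% X. Hence in the extension X is the range of a function on omega.
```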
\section{The countable chain condition}
\label{ccc:sec}
Something which is often an important consideration in the analysis of a forcing
is whether uncountability is preserved.
That is, does $\mathbf{1} \Vdash_\Qcal \check \aleph_1= \dot \aleph_1$?
More generally, one can ask whether cardinals are preserved by forcing with $\Qcal$:
if $X$ and $Y$ are sets such that $|X| < |Y|$,
then does $\mathbf{1} \Vdash_\Qcal |\check X| < |\check Y|$?
In general, this can be a very subtle matter (and can even be influenced by forcing).
One way to demonstrate that a forcing preserves cardinals
is to verify that it satisfies the \emph{countable chain condition (c.c.c.)}.
A forcing $\Qcal$ satisfies the \emph{c.c.c.} if every family of pairwise incompatible elements of $\Qcal$ is at most countable.
The following proposition dates to Cohen's proof that the Continuum Hypothesis is independent of ZFC.
\begin{prop} \label{ccc_card}
Suppose that $\Qcal$ is a c.c.c. forcing.
If $\kappa$ is a regular cardinal, then $\mathbf{1} \Vdash \check \kappa \textrm{ is a regular cardinal}$.
In particular, if $\kappa$ is a cardinal, then $\mathbf{1} \Vdash \check \kappa \textrm{ is a cardinal}$ and hence
for every ordinal $\alpha$,
$\mathbf{1} \Vdash \dot \aleph_{\alpha} = \check \aleph_{\alpha}$.
\end{prop}
\begin{proof}
The second conclusion follows from the first since every infinite
cardinal is either a successor cardinal (and successor cardinals are regular)
or the supremum of a set of successor cardinals, and a supremum of a set of
cardinals which remain cardinals in the extension is still a cardinal in the extension.
Let $\kappa$ be a regular cardinal and $\Qcal$ be a given forcing.
Suppose that $\dot f$ and $\dot \lambda$ are $\Qcal$-names and that
$p$ is an element of $\Qcal$
such that
\[
p \Vdash (\dot \lambda \in \check \kappa) \land (\dot f : \dot \lambda \to \check \kappa)
\]
By extending $p$ if necessary, we may assume without loss of generality that $\dot \lambda = \check \lambda$
for some $\lambda < \kappa$.
It is sufficient to show that $p$ forces that $\dot f$ is not a surjection.
If $\kappa$ is countable, then $\lambda$ is finite and it is possible to decide $\dot f$ by deciding its values
one at a time
(this does not require that $\Qcal$ is c.c.c.).
Thus we will assume that $\kappa$ is uncountable.
For each $\alpha < \lambda$, define
\[
F(\alpha) := \{ \beta < \kappa : \exists q \leq p (q \Vdash \dot f(\alpha) = \check \beta)\}.
\]
Notice that if $\beta \ne \beta'$ are in $F(\alpha)$ and
$q$ forces that $\dot f(\alpha) = \check \beta$ and $q'$ forces that $\dot f( \alpha) = \check \beta'$,
then $q$ and $q'$ are incompatible (otherwise any extension $\bar q$ would
force $\check \beta = \dot f( \alpha) = \check \beta'$).
Since $\Qcal$ is c.c.c., $F(\alpha)$ is countable and has an upper bound $g(\alpha) < \kappa$.
Since $\kappa$ is regular, the range of $g$ is bounded.
We are therefore finished once we show that
\[
p \Vdash \forall \alpha \in \check \lambda\ (\dot f( \alpha) \in \check F( \alpha)).
\]
By Proposition \ref{check_quant}, this is equivalent to showing that for all $\alpha$ in $\lambda$,
$p \Vdash \dot f( \alpha) \in \check F( \alpha)$.
Suppose for contradiction that this is not the case.
Then there is a $q \leq p$ and an $\alpha$ in $\lambda$ such that
$q \Vdash \dot f( \alpha) \not \in \check F( \alpha)$.
By Proposition \ref{decide}, there is a $q' \leq q$ and a $\beta < \kappa$ such that
$q' \Vdash \dot f( \alpha) = \check \beta$.
But now $\beta \in F(\alpha)$, a contradiction.
\end{proof}
\begin{prop} \label{generic_prop_K}
Suppose that $\Qcal$ is a c.c.c. forcing and that $\Seq{q_\xi : \xi < \omega_1}$ is a sequence of conditions in $\Qcal$.
Then there is a $p$ such that
\[
p \Vdash \{\xi \in \check \omega_1 : q_\xi \in \dot G\} \textrm{ is uncountable}.
\]
\end{prop}
\begin{remark}
Notice that this characterizes the c.c.c.: if $A$ is an uncountable antichain in a forcing $\Qcal$,
then any condition forces that $\check A \cap \dot G$ contains at most one element.
\end{remark}
\begin{proof}
Suppose that this is not the case.
Then
\[
\mathbf{1} \Vdash
\exists \beta \in \check \omega_1
\forall \xi \in \check \omega_1\
(q_\xi \in \dot G \rightarrow \xi < \beta).
\]
By Property \ref{completeness_of_names} of the forcing relation,
there is a $\Qcal$-name $\dot \beta$ for an element of $\omega_1$ such that
\[
\mathbf{1} \Vdash \forall \xi \in \check \omega_1\ (q_\xi \in \dot G \rightarrow \xi < \dot \beta).
\]
As in the proof of Proposition \ref{ccc_card},
the set of $\alpha < \omega_1$ such that, for some $q \in \Qcal$,
$q \Vdash \dot \beta = \check \alpha$
is countable and therefore bounded by some $\gamma$.
That is
$$
\mathbf{1} \Vdash \forall \xi \in \check \omega_1\ (q_\xi \in \dot G \rightarrow \xi < \check \gamma).
$$
But now $q_\gamma$ forces that $\check q_\gamma$ is in $\dot G$ and hence that $\check \gamma < \check \gamma$, a contradiction.
\end{proof}
We will now return to some of the examples introduced in Section \ref{cast:sec}.
\begin{prop} \label{cohen_random_ccc}
For any index set $I$, both $\Rand_I$ and $\Cohen_I$ are c.c.c. forcings.
\end{prop}
\begin{proof}
In the case of $\Rand_I$, this is just a reformulation of the assertion
that if $\Fcal$ is an uncountable
family of measurable subsets of $[0,1]^I$, each having positive measure,
then there are two elements of $\Fcal$ which intersect in a set of positive measure.
The reason for this is that if $\Fcal$ is uncountable, then for some $\epsilon > 0$ there
are more than $1/\epsilon$ elements of $\Fcal$ with measure at least $\epsilon$.
At least two of these elements must intersect in a set of positive measure, since otherwise
the measure of their union would exceed $1$.
The same argument applies to $\Cohen_I$, by observing that
we may view $\Cohen_I$ as a dense suborder of the collection of all nonempty open subsets
of $[0,1]^I$, ordered by containment.
Since any nonempty open subset of $[0,1]^I$ has positive measure, we may view
$\Cohen_I$ as a suborder of $\Rand_I$.
Moreover, conditions $p,q \in \Cohen_I$ which are compatible in $\Rand_I$ are compatible in $\Cohen_I$.
\end{proof}
\begin{prop} \label{ameoba_ccc}
The forcing $\Ameoba_\epsilon$ satisfies the c.c.c. for every $\epsilon > 0$.
\end{prop}
\begin{proof}
Let $D$ denote the collection of all elements of $\Ameoba_\epsilon$ which are finite unions of rational
intervals.
Notice that $D$ is countable.
For each $p$ in $D$, let $\Fcal_p$ denote the collection of all elements $q$ of $\Ameoba_\epsilon$ such that
$q \subseteq p$ and
\[
\lambda (p) - \lambda (q) < \frac{\lambda(p) - \epsilon}{2}.
\]
Notice that $\bigcup_{p \in D} \Fcal_p$ contains all of the compact sets in $\Ameoba_\epsilon$ which
are in turn dense in $\Ameoba_\epsilon$.
Moreover, any two elements $q,q'$ of $\Fcal_p$ intersect in a set
of measure greater than $\epsilon$, since
\[
\lambda(q \cap q') \geq \lambda(p) - \big(\lambda(p) - \lambda(q)\big) - \big(\lambda(p) - \lambda(q')\big)
> \lambda(p) - \big(\lambda(p) - \epsilon\big) = \epsilon,
\]
and hence they have a common lower bound in $\Ameoba_\epsilon$.
If $X \subseteq \Ameoba_\epsilon$ is uncountable, two distinct elements of $X$ must have extensions in the same
$\Fcal_p$ for some $p$ and thus be compatible.
Hence any antichain in $\Ameoba_\epsilon$ is countable and $\Ameoba_\epsilon$ is c.c.c.
\end{proof}
\begin{remark}
The reader may wonder why we have not bothered to generalize $\Ameoba_\epsilon$ to a larger index set,
given that we did this for $\Rand$.
The reason is that, for an uncountable index set $I$, the analog of $\Ameoba_{\epsilon}$ is not c.c.c.
and in fact collapses $|I|$ to $\aleph_0$.
\end{remark}
We finish this section by demonstrating that the Continuum Hypothesis isn't provable
within ZFC.
By Proposition \ref{cohen_random_ccc}, $\Rand_{\omega_2}$ is c.c.c., and so by Proposition \ref{ccc_card}
it forces that \(\dot \aleph_1 = \check \aleph_1\) and \(\dot \aleph_2 = \check \aleph_2\).
On the other hand, we have already observed that for all $\alpha < \beta < \omega_2$,
\[
\mathbf{1} \Vdash_{\Rand_{\omega_2}} \dot r_\alpha \ne \dot r_\beta.
\]
Hence $\Rand_{\omega_2}$ forces that $|\dot \R| \geq \check \aleph_2 = \dot \aleph_2$.
Since the set of formulas which are forced by $\mathbf{1}$ is a consistent theory
extending ZFC and containing $|\R| \geq \aleph_2$,
this establishes that ZFC cannot prove the Continuum Hypothesis.
The same argument shows that $\Cohen_{\omega_2}$ forces that CH is false;
this was the essence of Cohen's proof \cite{CON_negCH}.
\section[Intersection property]{An intersection property of families of sets of positive measure*}
\label{ameoba:sec}
The purpose of this section is to use the tools which we have developed in order to
prove the following intersection property of sets
of positive measure in $[0,1]$.
\begin{prop}
If $X \subseteq \R$ is uncountable and
$\Seq{B_x : x \in X}$ is an indexed collection of Borel subsets of $[0,1]$,
each having positive measure,
then there is a nonempty set $Y \subseteq X$ such that $Y$ has no isolated points and
such that
$\bigcap \{B_y : y \in Y\}$ has positive measure.
\end{prop}
\begin{proof}
By replacing each $B_x$ with a subset if necessary, we may assume that each $B_x$ is compact.
Similarly, by replacing $X$ with a subset if necessary, we may assume that there is an
$\epsilon > 0$ such that if $x$ is in $X$, then $B_x$ has measure greater than $\epsilon$.
Let $T$ consist of all finite length sequences $\sigma = \Seq{\sigma_i : i < n}$ such that:
\begin{enumerate}
\setcounter{enumi}{\value{my_enumerate_counter}}
\item $\sigma$ is an increasing sequence of finite subsets of $X$;
\item $\bigcap \{B_x : x \in \sigma_i\}$ has measure greater than $\epsilon$ for all
$i < n$;
\item for each $i < n$, if $x$ is in $\sigma_i$, then there is
a $y$ distinct from $x$ in $\sigma_i$ such that $|x-y| < 1/(i+1)$.
\setcounter{my_enumerate_counter}{\value{enumi}}
\end{enumerate}
Observe that if $\sigma$ is an infinite sequence all of whose initial parts are in $T$,
then $Y := \bigcup \{\sigma_i : i < \infty\}$ has no isolated points and
$\bigcap \{B_y :y \in Y\}$ has measure at least $\epsilon$.
Conversely, if there is a countable $Y \subseteq X$ with no isolated points and
$\bigcap_{x \in F} B_x$ has measure greater than $\epsilon$ whenever $F \subseteq Y$ is finite,
then $T$ has an infinite path.
Thus by Proposition \ref{WF_abs}, it is sufficient to show that the conclusion of the
proposition is forced by some condition in some forcing.
Consider the Amoeba forcing $\Ameoba_\epsilon$ and let
$\dot Z$ be an $\Ameoba_\epsilon$-name for the set
$\{x \in \check X : \check B_x \in \dot G\}$.
Observe that every condition forces that the intersection of every finite subset of
$\{\check B_x : x \in \dot Z\} \subseteq \dot G$ is in $\dot G$ and hence
in $\check \Ameoba_\epsilon$.
By Proposition \ref{generic_prop_K}, applied to a sequence $\Seq{B_{x_\xi} : \xi < \omega_1}$
where $\Seq{x_\xi : \xi < \omega_1}$ enumerates distinct elements of $X$,
there is a $q$ in $\Ameoba_\epsilon$ such that
$q$ forces that $\dot Z$ is uncountable.
By Property \ref{ZFC_forced} of the forcing relation,
$q$ forces that $\dot Z$ contains a countable subset $\dot Y$ with no isolated points.
This finishes the proof.
\end{proof}
\section{The Halpern-L\"auchli theorem*}
The Halpern-L\"auchli Theorem is a Ramsey-theoretic result concerning colorings of
products of finitely branching trees.
Before stating the theorem, we need to first define some terminology.
Recall that a subset $T$ of $\omega^{<\omega}$ is a \emph{tree} if it is closed under initial segments:
whenever $t$ is in $T$ and $s$ is an initial part of $t$, it follows that $s$ is in $T$.
A tree $T \subseteq \omega^{<\omega}$ comes equipped with a natural partial order:
$s \leq t$ if and only if $s$ is an initial part of $t$.
If $T \subseteq \omega^{<\omega}$ is a tree and $l < \omega$, the \emph{$l$th level}
of $T$ consists of all elements of $T$ of length $l$ and is denoted $(T)_l$.
All trees considered in this section will be assumed to be \emph{pruned} without further mention:
every element will have at least one immediate successor.
A tree $T \subseteq \omega^{<\omega}$ is \emph{finitely branching} if every element of $T$ has only finitely many immediate
successors in $T$.
If $S \subseteq T \subseteq \omega^{<\omega}$ are trees and $J \subseteq \omega$ is infinite,
then we say that
$S$ is a \emph{strong subtree of $T$ based on $J$} if whenever
$s$ is in $S$ with length in $J$, every immediate successor
of $s$ in $T$ is in $S$.
The Halpern-L\"auchli Theorem can now be stated as follows.
\begin{thm} \cite{Halpern-Lauchli}
If $\Seq{T_i : i < d}$ is a sequence of finitely branching subtrees of
$\omega^{<\omega}$, $k < \omega$, and
\[
f : \bigcup_{l = 0}^\infty \prod_{i < d} (T_i)_l \to k
\]
then there exists an infinite set $L \subseteq \omega$ and strong subtrees
$S_i \subseteq T_i$ based on $L$ for each $i < d$ such that
$f$ is constant when restricted to $\bigcup_{l \in L} \prod_{i < d} (S_i)_l$.
\end{thm}
Unlike essentially all other Ramsey-theoretic statements concerning the countably infinite,
the full form of the Halpern-L\"auchli Theorem --- at least at present --- cannot be derived from the
machinery of semigroup dynamics of spaces of ultrafilters
(see \cite{alg_betaN}, \cite{intro_Ramsey_spaces}).
The special case of the Halpern-L\"auchli Theorem for $n$-ary trees is a consequence
of a form of the Hales-Jewett Theorem, which can be proved using semigroup dynamics
--- see \cite{intro_Ramsey_spaces}.
The proof which is presented in this section is based on forcing and is an inessential modification
of an argument due to Leo Harrington
(see \cite{forcing_appl}).
In order to prove the Halpern-L\"auchli Theorem, we will derive it from the so-called \emph{dense set}
form of the theorem.
If $T \subseteq \omega^{<\omega}$ is a tree and $t$ is in $T$, then a set $D \subseteq T$ is
\emph{$(m,n)$-dense in $T$ above $t$}
if $D \subseteq (T)_n$ and whenever $u$ is in $(T)_m$ with $t \subseteq u$,
there is a $v$ in $D$ such that $u \subseteq v$.
If $t$ is the null string, then we just say that $D$ is $(m,n)$-dense in $T$.
\begin{thm} \label{HLD}
If $\Seq{T_i : i < d}$ is a sequence of finitely branching subtrees of $\omega^{<\omega}$, $k < \omega$, and
\[
f : \bigcup_{l = 0}^\infty \prod_{i < d} (T_i)_l \to k
\]
then there is an $l$ and a $\bar t$ in $\prod_{i < d} (T_i)_l$
such that for every $m \geq l$ there is an $n \geq m$ and sets
$\Seq{D_i : i < d}$ such that for each $i < d$, $D_i$ is $(m,n)$-dense above
$t_i$ in $T_i$ and such that $f$ is constant on $\prod_{i < d} D_i$.
\end{thm}
The original form of the Halpern-L\"auchli Theorem is an
immediate consequence of the dense set version and the following observation.
\begin{obs}
Let $T \subseteq \omega^{<\omega}$ be a tree and $t$ be an element of $T$.
If $\Seq{D_p : p < \infty }$ is a sequence of subsets of $T$
such that for some increasing sequence
$\Seq{m_p : p < \infty }$, $D_p$ is $(m_p,m_{p+1})$-dense in $T$ above $t$,
then the downward closure of $\bigcup_{p =0}^\infty D_p$ contains a strong subtree of
$T$ which is based on $\{m_p : p < \infty \}$.
\end{obs}
It is also not difficult to see that, unlike the standard formulation of the
Halpern-L\"auchli Theorem, the special case of Theorem \ref{HLD} in which each
$T_i$ is $2^{<\omega}$ is equivalent to the theorem in its full generality.
Harrington's proof of the Halpern-L\"auchli theorem uses the forcing relation to
reduce the desired Ramsey-theoretic properties of trees to Ramsey-theoretic properties
of cardinals.
In the proof we will need some standard definitions
and facts from combinatorial set theory (see, e.g., \cite[Ch.II]{set_theory:Kunen}).
If $\kappa$ is a regular cardinal, a subset $S$ of $\kappa$ is
\emph{stationary} if it intersects every closed and unbounded subset of $\kappa$.
Clearly every stationary subset of $\kappa$ has cardinality $\kappa$.
Furthermore, if $\mu$ is an infinite regular cardinal less than $\kappa$, then
the set of all ordinals in $\kappa$ of cofinality $\mu$ is stationary.
We will need the following property of stationary sets.
\begin{lem}[Pressing Down Lemma; see \cite{set_theory:Kunen}] \label{PDL}
Suppose that $\theta$ is a regular cardinal
and $S \subseteq \theta$ is a stationary set.
If $r:S \to \theta$ satisfies that
$r(\xi) < \xi$ for all $\xi \in S$,
then $r$ is constant on a stationary subset of $S$.
In particular if a stationary subset of $\theta$ is partitioned
into fewer than $\theta$ sets, then
one of the pieces of the partition is stationary.
\end{lem}
We will need the following variant of the $\Delta$-System Lemma.
\begin{lem} \label{base_case}
Suppose that $X$ is a set, $\theta$ is
the successor of a regular cardinal,
and $\{p_\xi : \xi < \theta\}$
is a family of partial functions from $X$
to $2$ such that for every $\xi < \theta$,
$2^{|\operatorname{dom}(p_\xi)|} < \theta$.
Then there exists a cofinal $H \subseteq \theta$ such that
$\bigcup_{\xi \in H} p_\xi$ is a function.
\end{lem}
\begin{proof}
Let $\kappa$ be the regular cardinal such that $\theta = \kappa^+$.
Observe that by replacing $X$ with the union of the domains of the $p_\xi$'s if necessary,
we may assume that $|X| \leq \theta$ and thus moreover that $X \subseteq \theta$.
For each $\xi < \theta$,
define $a_\xi := \operatorname{dom} (p_\xi) \cap \xi$.
Observe that $|\operatorname{dom}(p_\xi)|$ must be less than $\kappa$ for each $\xi$
and thus $a_\xi$ is a bounded subset of $\xi$ whenever $\cf(\xi) = \kappa$.
Let $E \subseteq \theta$ consist of all $\delta$ such that
if $\xi < \delta$, then $\sup (\operatorname{dom} (p_\xi)) < \delta$.
It is easily checked that $E$ is a closed and unbounded set.
By Lemma \ref{PDL}, there is a stationary $S \subseteq E$ consisting of
ordinals of cofinality $\kappa$ and a $\zeta$ such that
if $\xi$ is in $S$, $\sup a_\xi < \zeta$.
By the pigeonhole principle, there is a stationary set
$H \subseteq S$ and partial function $r$ from $\theta$ to $2$
such that if $\xi$ is in $H$, then $p_\xi \restriction \xi = r$.
Now if $\xi < \eta$ are in $H$, then
$p_\xi \cup p_\eta$ is a function.
To see this, suppose that $\alpha$ is in $\operatorname{dom}(p_\xi) \cap \operatorname{dom}(p_\eta)$.
Since $\eta$ is in $E$, it must be that $\alpha < \eta$.
Thus $\alpha$ is in $a_\xi = a_\eta = \operatorname{dom} (r)$ and hence
$p_\xi(\alpha) = r(\alpha) = p_\eta(\alpha)$.
\end{proof}
Next, we will need two closely related Ramsey-theoretic statements which are relatives
of the Erd\H{o}s-Rado Theorem but which have simpler proofs.
\begin{lem} \label{ER_polar}
Suppose that $\Seq{\theta_i : i < d}$ is a sequence of uncountable
regular cardinals satisfying $2^{\theta_i} < \theta_{i+1}$ if $i < d-1$.
If $f:\prod_{i < d} \theta_i \to \omega$,
then there exist cofinal sets $H_i \subseteq \theta_i$ for each $i < d$
such that $f$ is constant on $\prod_{i < d} H_i$.
\end{lem}
\begin{proof}
The proof is by induction on $d$.
If $d$ is given and $\xi < \theta_{d-1}$, apply the induction hypothesis to fix cofinal
$H_i^\xi \subseteq \theta_i$ for each $i < d-1$ such that
$f$ takes the constant value $g(\xi)$ on
\[
\left( \prod_{i < d-1} H_i^\xi \right) \times \{\xi\}
\]
(if $d=1$, then the product over the empty set consists of the empty function alone and the assertion holds trivially).
By applying the pigeonhole principle and our cardinal arithmetic assumption,
there is a cofinal $H_{d-1} \subseteq \theta_{d-1}$ such that $g$ is constant on $H_{d-1}$ and
$H_i^\xi$ does not depend on $\xi$ for $\xi \in H_{d-1}$.
It follows that $f$ is constant when restricted to
$\prod_{i < d} H_i$, where $H_i := H_i^\xi$ for some (equivalently any) $\xi$ in $H_{d-1}$.
\end{proof}
\begin{lem} \label{ER_Delta}
Suppose that $X$ is a set and $\Seq{\theta_i : i < d}$
is a sequence of successors of infinite regular cardinals such that
$2^{\theta_i} < \theta_{i+1}$ if $i < d-1$.
If $\{p_\sigma : \sigma \in \prod_{i < d} \theta_i\}$
is a family of finite partial functions from $X$ into a countable set,
then there are $H_i \subseteq \theta_i$ of cardinality $\theta_i$
such that
\[
\bigcup \{p_\sigma : \sigma \in \prod_{i < d} H_i\}
\]
is a function.
\end{lem}
\begin{proof}
The proof is by induction on $d$.
The case $d=1$ follows from Lemma \ref{base_case}, whose proof applies equally when the range $2$ is replaced by any countable set.
Now suppose $\Seq{\theta_i : i \leq d}$ and
$\Seq{p_\sigma : \sigma \in \prod_{i \leq d} \theta_i}$ are given.
For each $\xi$ in $\theta_{d}$, apply the induction hypothesis to find $\Seq{H_i^\xi : i < d}$ such
that $H^\xi_i \subseteq \theta_i$ has cardinality $\theta_i$ and
such that
\[
\bigcup \{p_{\sigma \cat \xi} : \sigma \in \prod_{i < d} H_i^\xi\}
\]
is a function, which we will denote by $q_\xi$.
Applying the pigeonhole principle, find
a cofinal $\Gamma \subseteq \theta_{d}$ such that,
for some $\Seq{H_i : i < d}$,
$H_i^\xi = H_i$ if $\xi$ is in $\Gamma$.
Now apply Lemma \ref{base_case} to $\Seq{q_\xi : \xi \in \Gamma}$
to find $H_d \subseteq \Gamma$ of cardinality $\theta_d$ such
that
\[
\bigcup_{\xi \in H_d} q_\xi = \bigcup \{p_\sigma : \sigma \in \prod_{i\leq d} H_i\}
\]
is a function.
\end{proof}
Finally we turn to the task of proving the dense set form of the Halpern-L\"auchli Theorem.
\begin{proof}[Proof of Theorem \ref{HLD}]
Suppose that
\[
f: \bigcup_{l = 0}^\infty \prod_{i < d} 2^l \to 2
\]
is given and define $\Seq{\theta_i : i < d}$ by $\theta_0 = \aleph_1$
and $\theta_{i+1} = (2^{\theta_i})^{++}$.
Set $\Qcal$ to be the collection of all finite partial functions from $\theta_{d-1}$
into $2^{<\omega}$.
(This is an inessential modification of the forcing $\Cohen_{\theta_{d-1}}$.)
The order on $\Qcal$ is defined by $q \leq p$ if the domain of $q$
contains the domain of $p$ and
$p(\alpha)$ is an initial part $q(\alpha)$ whenever $\alpha$ is in the domain of $p$.
Observe that $\dot r_\xi := \bigcup_{q \in \dot G} q(\check \xi)$ describes an element of $2^{\omega}$.
Applying Property \ref{ZFC_forced} of the forcing relation,
fix a $\Qcal$-name $\dot \Ucal$ for a nonprincipal ultrafilter on $\omega$.
Since $\dot \Ucal$ is forced to be an ultrafilter,
for each $\sigma \in \prod_{i<d} \theta_i$ there is a $\Qcal$-name $\dot e_\sigma$ for an element of $\{0,1\}$
such that it is forced that
\[
\dot U_\sigma =
\{
m \in \omega :
f(\dot r_{\sigma(0)} \restriction m ,\ldots,\dot r_{\sigma(d-1)} \restriction m) =
\dot e_\sigma
\}
\]
is in $\dot \Ucal$.
By Property \ref{decide} of the forcing relation,
there is a $p_\sigma$ which decides $\dot e_\sigma$ to be some $e_\sigma \in \{0,1\}$.
By extending $p_\sigma$ if necessary, we may assume that $\sigma(i)$ is in the domain of
$p_\sigma$ for each $i < d$ and that there is an $l_\sigma$ such that if $\alpha$ is in
the domain of $p_\sigma$, then $p_\sigma(\alpha)$ has length $l_\sigma$.
Define
\[
g(\sigma) := \big(e_\sigma,l_\sigma,p_\sigma(\sigma(0)),\ldots,p_\sigma(\sigma(d-1))\big).
\]
By Lemmas \ref{ER_polar} and \ref{ER_Delta}, there are cofinal sets $H_i \subseteq \theta_i$
such that:
\begin{enumerate}
\setcounter{enumi}{\value{my_enumerate_counter}}
\item
$g$ is constantly $(e,l,t_0,\ldots,t_{d-1})$ on $\prod_{i < d} H_i$ for some $(e,l)$ and
$t_0,\ldots,t_{d-1}$ in $2^l$;
\item
every finite subset of $\{p_\sigma : \sigma \in \prod_{i < d} H_i\}$
has a common lower bound in $\Qcal$.
\setcounter{my_enumerate_counter}{\value{enumi}}
\end{enumerate}
Now let $m \geq l$ be given.
For each $i < d$, let $A_i$ be a subset of $H_i$ of cardinality $2^{m-l}$
and fix a bijection between $A_i$ and the set of binary sequences of length $m-l$.
Let $q$ be a condition in $\Qcal$ which is a common lower bound for
\[
\{p_\sigma : \sigma \in \prod_{i < d} A_i\}
\]
and such that if $\alpha$ is the element of $A_i$ which corresponds
to $u \in 2^{m-l}$ under the bijection,
then $q(\alpha)$ has $t_i \cat u$ as an initial part.
That is, $q$ forces that $\dot r_\alpha \restriction m = t_i \cat u$.
Let $\bar q$ be an extension of $q$ such that for some $n > m$,
$\bar q$ forces that
\[
\check n \in \bigcap \{\dot U_\sigma : \sigma \in \prod_{i < d} A_i\}.
\]
By extending $\bar q$ if necessary, we may assume that for each $i < d$ and
$\alpha$ in $A_i$,
$\bar q(\alpha)$ has length at least $n$.
Finally, set $D_i$ to be the set of all $w \in 2^n$ such that for some $\alpha$ in $A_i$,
$\bar q(\alpha) \restriction n = w$.
We will now show that $D_i$ is $(m,n)$-dense above $t_i$ for each $i < d$ and that
$f \restriction \prod_{i < d} D_i$ is constantly $e$.
To see the former, fix $i < d$, let $u$ be in $2^{m-l}$, and let $\alpha$ be the corresponding element of
$A_i$.
By our choice of $q$, $q(\alpha) \restriction m = t_i \cat u$,
and by our choice of $\bar q$ and
the definition of $D_i$, $w := \bar q(\alpha) \restriction n$ is in $D_i$
and extends $t_i \cat u$.
To see the latter,
suppose $\Seq{w_i : i < d} \in \prod_{i < d} D_i$ and
let $\sigma \in \prod_{i < d} A_i$ be such that
$\bar q(\sigma(i)) \restriction n = w_i$ for each $i < d$.
Clearly $\bar q$ forces that
\[
\Seq{\dot r_{\sigma(i)} \restriction n : i < d} = \Seq{w_i : i < d}.
\]
Furthermore, since $\bar q$ extends $p_\sigma$ and forces that $\check n$ is in $\dot U_\sigma$,
$\bar q$ forces that
\[
f(\Seq{\dot r_{\sigma(i)} \restriction n : i < d}) =
f(\Seq{w_i : i < d}) = e.
\]
Since $f$, the $w_i$, and $e$ all belong to the ground model, it follows that $f(\Seq{w_i : i < d}) = e$.
\end{proof}
\section{Universally Baire sets and absoluteness}
\label{abs:sec}
In this section, we will introduce an abstract notion of regularity for subsets
of complete metric spaces which is useful in proving absoluteness results.
Let $(X,d)$ be a (not necessarily separable) complete metric space.
Recall that the completion of a metric space is taken to be the collection of all
equivalence classes of its Cauchy sequences.
Recall also that if $\Qcal$ is a forcing, then $\dot X$ represents a $\Qcal$-name for
the completion of $\check X$.
In this section we will be interested in interpreting names by filters which
are not fully generic.
Notice, for instance, that it is possible that $\dot y$ is forced to be equal to $\check x$, even
though some (non-generic) filters interpret $\dot y$ as something different from $x$.
For this reason it is necessary to work with names which have better properties
with respect to arbitrary interpretations.
\begin{defn}
If $\Qcal$ is a forcing, then a \emph{nice $\Qcal$-name for an element of $\dot X$}
is a $\Qcal$-name $\dot x$ such that, for some countable collection of dense subsets $\Dcal$
of $\Qcal$, $\Int{\dot x}{G}$ is a Cauchy sequence in $(X,d)$ whenever
$G$ is $\Dcal$-generic.
\end{defn}
\begin{remark}
For technical reasons we need to make nice $\Qcal$-names for elements of a complete metric space
$\dot X$ to formally be a Cauchy sequence rather than an equivalence class
of a Cauchy sequence, even though
the intent is only to refer to the limit point corresponding to the equivalence class.
Also, while the completion of a complete metric space is not literally equal to the original
space, there is a canonical isometry between the two and usually there is no need to
distinguish them.
The point in the above definition is that the only meaningful way to define
$\dot X$ is as the name for the completion of $\check X$.
Hence names for elements of $\dot X$ are names for equivalence classes of Cauchy sequences.
When they are interpreted by a sufficiently
generic filter, they will typically result in elements of the completion of $X$, not
in elements of $X$.
\end{remark}
The next lemma shows nice $\Qcal$-names can be used to represent any element of $\dot X$ whenever
$\Qcal$ is a forcing and $X$ is a complete metric space.
\begin{lem}
If $\Seq{\dot x_n : n < \infty}$ is a $\Qcal$-name for a Cauchy sequence in $\check X$, then
there exists a nice $\Qcal$-name $\Seq{\dot y_n : n < \infty}$ for an element of $\dot X$ such that
it is forced that $\Seq{\dot y_n : n < \infty} = \Seq{\dot x_n : n < \infty}$.
\end{lem}
\begin{proof}
Define
\[
\dot y_n := \{(\check y,q) : (y,q) \in X \times \Qcal \mand q \Vdash \dot x_n = \check y\}
\]
and let $D_n$ be the elements of $\Qcal$ which decide both $\dot x_n$ and the least $\dot m$
such that for all $i,j > \dot m$, $d(\dot x_i,\dot x_j) < 1/n$.
It is readily verified that $\Dcal := \{D_n : n < \infty\}$ witnesses that
$\Seq{\dot y_n : n < \infty}$
is a nice $\Qcal$-name for an element of $\dot X$.
\end{proof}
\begin{defn} (see \cite{UB})
Let $(X,d)$ be a complete metric space.
A subset $A$ of $X$ is \emph{universally Baire}
if whenever $\Qcal$ is a forcing there is a $\Qcal$-name $\dot A$ such that for every
nice $\Qcal$-name $\dot x$ for an element of $\dot X$, there is a countable collection of
dense subsets $\Dcal$ of $\Qcal$ such that:
\begin{enumerate}[\indent a.]
\item $\{q \in \Qcal : q \textrm{ decides } \dot x \in \dot A\}$ is in $\Dcal$;
\item whenever $G$ is a $\Dcal$-generic filter in $\Qcal$,
$\Int{\dot x}{G}$ is in (the completion of) $X$
and $\Int{\dot x}{G}$ is in $A$ if and only
if there is a $q$ in $G$ such that $q \Vdash \dot x \in \dot A$.
\end{enumerate}
\end{defn}
The following proposition, while easy to establish, is
important in what follows.
\begin{prop}
If $\dot A$ and $\dot B$ are $\Qcal$-names which both witness that
$A$ is universally Baire with respect to $\Qcal$,
then $\mathbf{1} \Vdash \dot A = \dot B$.
\end{prop}
\begin{proof}
If this were not the case, then there would exist a nice
$\Qcal$-name $\dot x$ for an element of $\dot X$ and
a $p$ in $\Qcal$ such that
\[
p \Vdash \dot x \in (\dot A \triangle \dot B).
\]
Suppose without loss of generality that $p \Vdash \dot x \in (\dot A \setminus \dot B)$.
If $G$ is a sufficiently generic filter containing $p$, then
$\Int{\dot x}{G}$ will be in $A$ since $p$ is in $G$ and $p$ forces that $\dot x$ is in $\dot A$.
On the other hand, $\Int{\dot x}{G}$ cannot be in $A$ since $p$ is in $G$ and
$p$ forces that $\dot x$ is not in $\dot B$,
a contradiction.
\end{proof}
The following is also easy to establish.
The proof is left to the interested reader.
\begin{prop} \label{UB_sigma-alg}
The universally Baire subsets of a complete metric space form a $\sigma$-algebra
which includes the open subsets of $X$.
In particular, every Borel set in a complete metric space is universally Baire.
\end{prop}
Putting this all together, we have the following proposition which will be
used in establishing absoluteness results.
If $\phi(v_1,\ldots,v_n)$ is a logical formula and $x_1,\ldots,x_n$ are sets,
then we say that $\phi(x_1,\ldots,x_n)$ is \emph{generically absolute}
if whenever $\Qcal$ is a forcing and $q$ is in $\Qcal$,
$q \Vdash \phi(\check x_1,\ldots,\check x_n)$ if and only if
$\phi(x_1,\ldots,x_n)$ is true.
\begin{prop} \label{gen_abs_prop}
The assertion that a given countable Boolean combination of universally
Baire subsets of a complete metric space
is nonempty is generically absolute.
\end{prop}
\section{A property of marker sequences for Bernoulli shifts*}
In this section we will give an example of how homogeneity properties of a forcing
can be put to use.
The goal of the section is to prove a special case of a theorem of Gao, Jackson,
and Seward concerning marker sequences in Bernoulli shift actions.
Let $\Gamma$ be a countable discrete group acting continuously on a Polish space $X$.
A decreasing sequence of Borel subsets
$\Seq{A_n : n < \infty}$ of
$X$ is a \emph{vanishing marker sequence} for the action if each $A_n$ intersects
every orbit of the action and $\bigcap_{n=0}^\infty A_n = \emptyset$.
Certainly a necessary requirement for such a sequence to exist is that every orbit of
$\Gamma$ is infinite.
In fact this is also a sufficient condition; this is the content of the so-called
\emph{Marker Lemma} (see, e.g., \cite{KechrisMiller}).
The following result of Gao, Jackson, and Seward
grew out of their analysis of the Borel chromatic number
of the free part of the shift graph on $2^{\Z^2}$.
\begin{thm} \cite{grp_color_bernoulli}
Suppose that $\Gamma$ is a countable group,
$k$ is a natural number, and $\Seq{A_n : n < \infty}$ is a vanishing marker sequence
for the free part of the shift action of $\Gamma$ on $k^\Gamma$.
For every increasing sequence $\Seq{F_n : n < \infty}$ of finite sets which covers
$\Gamma$, there is an $x \in k^\Gamma$ such that:
\begin{enumerate}[\indent a.]
\item the closure of the orbit of $x$ is contained in the free part of the action;
\item the closure of the orbit of $x$ is a minimal nonempty closed subset of
$k^\Gamma$ which is invariant under the action;
\item there are infinitely many $n$ such that, for some $g$ in $F_n$,
$g \cdot x$ is in $A_n$.
\end{enumerate}
\end{thm}
Our interest will be primarily in the last clause, although in this generality,
\cite{grp_color_bernoulli} represents the first proof
of the first two clauses in the theorem.
We will focus our attention on the special case $\Gamma = \Z$ and $k =2$
(the case $\Z^d$ and $k$ arbitrary is just notationally more complicated,
but the full generality of the theorem requires a different argument).
In this context, the above theorem can be rephrased as follows.
\begin{thm} \label{GJS} \cite{grp_color_bernoulli}
Suppose that $\Seq{A_n : n < \infty}$ is a vanishing marker sequence
for the free part of the action of $\Z$ on $2^\Z$ by shift.
For every $f: \N \to \N$ such that $\lim_n f(n) = \infty$,
there exists an $x$ in $2^\Z$ such that:
\begin{enumerate}[\indent a.]
\item the closure of the orbit of $x$ is contained in the free part of the action;
\item the closure of the orbit of $x$ is a minimal nonempty closed subset
of $2^\Z$ which
is invariant under the action;
\item there are infinitely many $n$ such that,
for some $i \in [-f(n),f(n)]$, $x + i$ is in $A_n$.
\end{enumerate}
\end{thm}
It will be useful to have a more combinatorial way of
formulating the first two conclusions of the theorem.
They are provided by the following lemmas.
\begin{lem} (see \cite{grp_color_bernoulli}) \label{free_char}
If $x$ is in $2^\Z$, then the closure of the orbit of $x$ is contained
in the free part of the action exactly
when the following condition holds:
for all $a \in \Z \setminus \{0\}$, there exists a finite interval
$B \subseteq \Z$ such that for all $c \in \Z$ there is a $b \in B$
with
\[
x(a + b + c) \ne x(b+c).
\]
\end{lem}
\begin{lem} (see \cite{grp_color_bernoulli}) \label{min_char}
If $x$ is in $2^\Z$, then the closure of the orbit of $x$
is a minimal closed invariant subset of $2^\Z$ if and only if
the following condition holds:
for every finite interval $A \subseteq \Z$, there is a finite interval
$B \subseteq \Z$ such that for all $c \in \Z$ there is a $b \in B$ such that
for all $a \in A$
\[
x(a + c) = x(a+b).
\]
\end{lem}
Define $\Qcal$ to consist of all finite partial functions $q : \Z \to 2$ such that
the domain of $q$ is an interval of integers, denoted $I_q$.
If $n$ is in $\Z$ and $q$ is in $\Qcal$, then the \emph{translate of $q$ by $n$}
is denoted
$q + n$ and is defined by $(q+n)(i) = q(i-n)$
(with $q+n$ having domain $\{i \in \Z : i-n \in I_q\}$).
If $q$ is in $\Qcal$, define $\bar q$ to be the bitwise complement of $q$:
$\bar q(i) := 1-q(i)$.
A disjoint union of conditions which is again a condition
will be referred to as a \emph{concatenation}.
The order on $\Qcal$ is defined by $q \leq p$ if $q = p$
or else $q$ is a concatenation of a set $S$ of conditions, each of which is a translate
of $p$ or $\bar p$ and such that $p$ and a translate of $\bar p$ are in $S$.
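This order can be explored computationally.
The following sketch (an illustration supplied here, not part of the text) iterates the basic extension step of passing from $p$ to the concatenation of $p$ with a translate of $\bar p$; starting from the single-bit condition $0$, this produces initial segments of the Thue--Morse sequence.

```python
def extend(p):
    # concatenate p with its bitwise complement; by the definition of the
    # order (with S = {p, complement of p translated}), this extends p in Q
    return p + [1 - b for b in p]

p = [0]
for _ in range(6):
    p = extend(p)

# p now lists x(0), ..., x(63) of the Thue-Morse sequence:
# x(i) = parity of the number of 1s in the binary expansion of i
```

Any total function built by iterating this step satisfies the recurrence condition of Lemma \ref{min_char}, which is one way to see the minimality claim concretely.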
If $G$ is a filter in $\Qcal$ such that $\bigcup G$ is a total function $x$
from $\Z$ to $2$, then
it is straightforward to check that $x$
satisfies the conclusion of Lemma \ref{min_char} and thus that the closure
of the orbit of $x$ is a minimal invariant closed subset of $2^{\Z}$.
Under a mild genericity assumption on $G$,
the closure of the orbit of $x$ will be contained in the free part of the action.
For each $p \ne \mathbf{1}$ in $\Qcal$ and $i < \infty$,
define $D_{p,i}$ to be the set of all $q$ in $\Qcal$ such that either $q$
is incompatible with $p$ or else $q \leq p$, $m+i$ is in the domain of $q$, and
$q(m+i) \ne q(m)$, where $m := \max (I_p)$.
\begin{lem}
Each $D_{p,i}$ is a dense subset of $\Qcal$ and if $G \subseteq \Qcal$
is a filter which intersects $D_{p,i}$ for each
$p \in \Qcal \setminus \{\mathbf{1}\}$ and $i < \infty$,
then $x := \bigcup G$ satisfies that the closure of the orbit of $x$
is contained in the free part of the action.
\end{lem}
Thus we have shown that $\mathbf{1}$ forces that $\dot x = \bigcup \dot G$ satisfies that
the closure of the orbit of $\dot x$ is minimal and contained in the free part of the Bernoulli shift.
The following proposition, when combined with Proposition \ref{gen_abs_prop},
implies Theorem \ref{GJS}.
\begin{prop}
Suppose that $\Seq{A_n : n < \infty}$ is a vanishing sequence of
markers for the free part of
the Bernoulli shift $\Z \act 2^{\Z}$ and that
$f:\N \to \N$ is a function such that $\lim_n f(n) = \infty$.
Every condition in $\Qcal$ forces that for every $m$ there is an $n \geq m$
such that $\dot x + k \in \dot A_n$ for some $k$ with $-f(n) \leq k \leq f(n)$.
\end{prop}
\begin{proof}
Suppose for contradiction that this is not the case.
Then there is a $p$ in $\Qcal$ such that $p$ forces:
there is an $m$ such that for every $n \geq m$,
if $-f(n) \leq k \leq f(n)$ then $\dot x + k \not \in \dot A_n$.
By replacing $p$ with a stronger condition if necessary,
we may assume that for some fixed $m$, $p$ forces that for every $n \geq m$,
if $-f(n) \leq k \leq f(n)$ then $\dot x + k \not \in \dot A_n$.
Let $q := p \cup (\bar p + l)$ where $l$ is the length of $I_p$.
Observe that $q \leq p$ and that if $r \leq q$ and $i \in \Z$, then there is a $j$ with
$0 \leq j < 2l$ such that $r - i+j$ is compatible with $p$;
simply choose $j$ such that $i-j$ is a multiple of $2l$.
Let $n \geq m$ be such that $f(n)$ is greater than $2l$ and find an $r \leq q$
and an $i \in \Z$ such that $r$ forces that $\dot x + i$ is in $\dot A_n$.
This is possible since it is forced that $\dot A_n$
meets every orbit
(strictly speaking, we are appealing to Proposition \ref{gen_abs_prop} here).
Now, let $j$ with $0 \leq j < 2l$ be such that $r-i+j$ is compatible with $p$.
We now have that $r-i+j$ forces that $\dot x + j$ is in $\dot A_n$.
This follows from the fact that
\[
\mathbf{1} \Vdash \dot x - i + j \textrm{ is generic over the ground model}.
\]
(This follows from the observation that if $E \subseteq \Qcal$ is exhaustive, then
so is any translate of $E$.)
Recall that $r-i+j$ is compatible with $p$ and let
$s$ be a common lower bound for $r-i+j$ and $p$.
It follows that $s$ forces that both $\dot x + j \not \in \dot A_n$ and
$\dot x + j \in \dot A_n$, a contradiction.
\end{proof}
\section{Todorcevic's absoluteness theorem for Rosenthal compacta}
\label{RC_absolute:sec}
We will now use the results of Section \ref{abs:sec}
to prove Todorcevic's absoluteness theorem for Rosenthal compacta.
Fix a Polish space $X$.
Recall that a real valued function defined on $X$ is \emph{Baire class 1}
if it is the limit of a pointwise convergent sequence of continuous functions.
Baire characterized functions $f$ which are \emph{not} Baire class 1 as those for which there exist rational
numbers $p < q$ and nonempty sets $D_0,D_1 \subseteq X$ such that the closures of $D_0$ and $D_1$ coincide, have no isolated
points, and
\[
\sup_{x \in D_0} f(x) \leq p < q \leq \inf_{x \in D_1} f(x)
\]
(see \cite{Baire_char}).
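As an illustration of Baire's criterion (an example supplied here, not taken from \cite{Baire_char}), the characteristic function of the rationals fails to be Baire class 1:

```latex
% Example: f := \chi_{\Q} on \R is not Baire class 1.
% Take D_0 := \R \setminus \Q,\ D_1 := \Q,\ p := 1/3,\ q := 2/3.
% Both sets are dense in \R, so their closures coincide and have no
% isolated points, while
\sup_{x \in D_0} f(x) = 0 \leq \tfrac{1}{3} < \tfrac{2}{3} \leq 1 = \inf_{x \in D_1} f(x).
```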
The collection of all Baire class 1 functions on a Polish space $X$
is denoted $BC_1(X)$ and is equipped with the topology of
pointwise convergence.
A compact topological space which is homeomorphic to a subspace of $BC_1(X)$ is said
to be a \emph{Rosenthal compactum}.
This class includes all compact metric spaces and is closed under taking closed subspaces and
countable products.
The following are typical nonmetrizable examples.
\begin{example}[Helly's space; the double arrow]
The collection of all
nondecreasing functions from $[0,1]$ to $[0,1]$ is
known as Helly's space.
It is convex as a subset of $\R^{[0,1]}$.
The extreme points of this set are the characteristic functions of the intervals
$(r,1]$ and $[r,1]$.
This subspace is homeomorphic to the so-called \emph{double arrow space}:
the set $[0,1] \times 2$ equipped with the order topology
from the lexicographic order.
\end{example}
\begin{example}[one point compactification]
Consider the constant $0$ function, together with the functions $\delta_r : [0,1] \to \R$ defined by
\[
\delta_r(t) :=
\begin{cases} 1 & \textrm{ if } t = r \\
0 & \textrm{ otherwise.}
\end{cases}
\]
This is homeomorphic to the one point compactification of a discrete
set of cardinality $2^{\aleph_0}$.
\end{example}
Rosenthal compacta enjoy a number of strong properties similar to those of compact metric spaces.
One which will play an important role below is \emph{countable tightness}:
a topological space $Z$ is \emph{countably tight}
if whenever $a$ is in the closure of $A \subseteq Z$,
there is a countable $A_0 \subseteq A$ such that $a$ is in the closure of $A_0$.
\begin{thm} \cite{iso_th_Banach} \label{RC_ctbly_tight}
Rosenthal compacta are countably tight.
\end{thm}
In \cite{Rosenthal_cpt}, Todorcevic derived a number of properties of Rosenthal compacta by
showing that there is a natural way to reinterpret such spaces as Rosenthal compacta in
generic extensions.
This result is in fact a fairly routine consequence of the machinery
which was developed in Section \ref{abs:sec} above.
First, we must verify that elements of $\operatorname{BC}_1(X)$ extend to elements of $\operatorname{BC}_1(\dot X)$
in the generic extension.
\begin{lem} \cite{Rosenthal_cpt} \label{BC_ext}
Suppose that $\Seq{ f_n : n < \infty}$ is a sequence of continuous
functions on a Polish space $X$.
The assertion that $\Seq{ f_n : n < \infty}$ converges pointwise is generically absolute.
Furthermore, if $\Seq{ f_n : n < \infty}$ and $\Seq{ g_n : n < \infty}$ are sequences
of continuous functions on $X$, the assertion that
$f_n-g_n \to 0$ pointwise on $X$ is generically absolute.
\end{lem}
\begin{proof}
Let $\Seq{ f_n : n < \infty}$ be a sequence of continuous functions.
Observe that
\[
\bigcup_{k=1}^\infty \bigcap_{n=0}^\infty \bigcup_{i,j \geq n} \{x \in X : |f_i(x) - f_j(x)| > 1/k\}
\]
specifies a countable Boolean combination of open subsets of $X$ which is empty
if and only if $\Seq{ f_n : n < \infty}$ converges pointwise.
Thus the assertion that $\Seq{ f_n : n < \infty}$ converges pointwise is
generically absolute by Proposition \ref{gen_abs_prop}.
The second conclusion is verified in a similar manner.
\end{proof}
Now suppose that $\Qcal$ is a forcing, $X$ is a Polish space,
and $f$ is in $\operatorname{BC}_1(X)$.
By Lemma \ref{BC_ext}, $\Qcal$ forces that there is a unique element of $\operatorname{BC}_1(\dot X)$ which
extends $\check f$; fix a $\Qcal$-name $\dot f$ for this extension.
If $K \subseteq \operatorname{BC}_1(X)$ is a Rosenthal compactum,
then $\dot K$ is a $\Qcal$-name for the closure of the set of extensions of elements of
$\check K$ to $\dot X$.
(Specifically, it is a $\Qcal$-name for the closure of
$\{(\dot f,\mathbf{1}) : f \in K\}$ in $\dot \R^{\dot X}$.)
Todorcevic's absoluteness theorem can now be stated as follows.
\begin{thm} \cite{Rosenthal_cpt} \label{RC_abs}
Suppose that $X$ is a Polish space and $\Fcal$ is a family of Baire class 1 functions.
The assertion that every accumulation point of $\Fcal$ is Baire class 1 is generically absolute.
\end{thm}
\begin{proof}
Let $X$ and $\Fcal$ be fixed and let $\Qcal$ be a forcing.
It is sufficient to show that the assertion that $\Fcal$ has a pointwise accumulation point which
is not in $\operatorname{BC}_1(X)$ is equivalent to a certain countable Boolean combination of open sets in a completely metrizable
space being nonempty.
Let $Z$ be the set of all sequences
\[
\Seq{\Seq{f_{k,i} : i < \infty} : k < \infty}
\]
such that, for each $k$, $\Seq{f_{k,i} : i < \infty}$ is a sequence of continuous functions
which converges pointwise to an element of $\Fcal$.
We will regard $Z$ as being a product of discrete spaces, noting that with this topology, $Z$ is
completely metrizable.
Observe that if $g$ is a limit point of $\Fcal$ which is not in $\operatorname{BC}_1(X)$, then by Baire's characterization,
there are rational numbers $p < q$, sets $A := \{a_k : k < \infty\}$, $B := \{b_k : k < \infty\}$,
and $\{f_k : k < \infty\} \subseteq \Fcal$ such that:
\begin{enumerate}
\setcounter{enumi}{\value{my_enumerate_counter}}
\item $A$ and $B$ are contained in $X$, have no isolated points, and have the same closures;
\item if $k < l$, then $f_l(a_k) < p < q < f_l(b_k)$.
\setcounter{my_enumerate_counter}{\value{enumi}}
\end{enumerate}
Moreover, one can select sequences $\Seq{f_{k,i} : i < \infty}$ of continuous functions such that
$f_{k,i} \to f_k$ pointwise for each $k$.
Thus we have that for every $k < l$ there is an $n$ such that if $n < j$,
then $f_{l,j} (a_k) < p < q < f_{l,j}(b_k)$.
It follows that there exists a triple
\[
(\Seq{a_k : k < \infty} , \Seq{b_k : k < \infty} , \Seq{\Seq{f_{k,i} : i < \infty} : k < \infty})
\]
in $X^\omega \times X^\omega \times Z$
with the above properties if and only if $\Fcal$ has an accumulation point outside
of $\operatorname{BC}_1(X)$.
Notice however, that these properties define a countable Boolean combination of open subsets of
$X^\omega \times X^\omega \times Z$ and therefore
the theorem follows from Proposition \ref{gen_abs_prop}.
\end{proof}
\section{$\sigma$-closed forcings}
\label{sigma-closed:sec}
There are two basic aspects of a forcing which are
of fundamental importance in understanding its properties:
how large are its families of pairwise incompatible elements and how frequently do directed families have lower bounds.
Properties of the former type are often referred to loosely as \emph{chain conditions};
we have already seen the most important of these in Section \ref{ccc:sec}.
Properties of the latter type are known as \emph{closure properties} of a forcing.
In this section, we will discuss the simplest and most important example of a closure property.
\begin{defn}[$\sigma$-closed]
A forcing $\Qcal$ is \emph{$\sigma$-closed} if whenever $\Seq{q_n : n < \infty}$ is
a $\leq$-decreasing sequence of elements of $\Qcal$, there is a $\bar q$ in $\Qcal$
such that $\bar q \leq q_n$ for all $n$.
\end{defn}
It is perhaps worth remarking that any forcing which is $\sigma$-closed and atomless
(i.e., every element has two incompatible extensions) necessarily has an antichain of
cardinality the continuum and so in particular is not c.c.c.
Like c.c.c. forcings, however, $\sigma$-closed forcings also preserve uncountability,
although for a quite different reason.
\begin{prop}
Suppose that $\Qcal$ is a $\sigma$-closed forcing.
If $\dot f$ is a $\Qcal$-name and $p \in \Qcal$ forces that
$\dot f$ is a function with domain $\omega$, then there is a $q \leq p$ and
a function $g$ such that
$q \Vdash \dot f = \check g$.
In particular $\mathbf{1} \Vdash \dot \aleph_1 = \check \aleph_1$ and
$\mathbf{1} \Vdash \dot \R = \check \R$.
\end{prop}
\begin{proof}
Let $p$ and $\dot f$ be given as in the statement of the proposition.
By repeatedly appealing to Property \ref{decide},
recursively construct a sequence of conditions $\Seq{p_n : n < \infty}$ and
values $g(n)$ of a function $g$ defined on $\omega$
such that for all $n$, $p_{n+1} \leq p_n \leq p$ and
\[
p_n \Vdash \dot f(\check n) = \check g(\check n).
\]
Since $\Qcal$ is $\sigma$-closed, there is a $q$ in $\Qcal$ such that
$q \leq p_n$ for all $n$.
Thus by Proposition \ref{check_quant}, it follows that
\[
q \Vdash \forall n\ (\dot f(n) = \check g(n)).
\]
\end{proof}
We will now consider some examples.
The first forcing provides a means for forcing the Continuum Hypothesis over a given
model of set theory, complementing the discussion at the end of Section \ref{ccc:sec}.
\begin{example}
Let $\Qcal$ denote the collection of all countable partial functions from $\omega_1$ to
$\R$, ordered by extension.
Let $\dot g$ be the $\Qcal$-name for the union of the generic filter.
It is easily verified that $\Qcal$ forces that $\dot g$ is defined on all of $\check \omega_1$
and maps $\check \omega_1$ onto $\check \R$.
Furthermore, if $\Seq{q_n : n < \infty}$ is a descending sequence of conditions, then
$\bigcup_{n=0}^\infty q_n$ is a condition: it is a function and its domain is countable,
being a countable union of countable sets.
Thus $\Qcal$ is $\sigma$-closed and hence forces that
$\dot \R = \check \R$ and $\dot \aleph_1 = \check \aleph_1$.
Hence $\Qcal$ forces that $|\dot \R| = \dot \aleph_1$
(i.e. that the Continuum Hypothesis is true).
\end{example}
\begin{example}
Consider the forcing $([\omega]^{\omega},\subset)$.
This forcing is neither separative nor $\sigma$-closed.
The separative quotient is obtained by identifying sets
$a$ and $b$ which have a finite symmetric difference.
If we define $a \subseteq^* b$ to mean that $a \setminus b$ is finite,
then $\subseteq^*$ induces the order on the separative quotient.
If $\Seq{A_n : n < \infty}$ is a $\subseteq^*$-decreasing sequence of infinite
subsets of $\omega$,
let $n_k$ be the least element of $\bigcap_{i \leq k} A_i$ which is greater than $n_i$ for
each $i < k$.
Notice that $B := \{n_i : i < \infty\}$ is an infinite set and that
$\{n_i : i \geq k\}$ is a subset of $A_k$.
Thus $B \subseteq^* A_k$ for all $k$.
This shows that the separative quotient is $\sigma$-closed.
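The diagonalization just described can be carried out concretely.
In the sketch below (an illustration with a hypothetical chain chosen here, not from the text), the sets $A_i$ are given as membership predicates; the chain $A_i := $ multiples of $2^i$ is genuinely $\subseteq$-decreasing, so the $\subseteq^*$ hypothesis holds trivially.

```python
def pseudo_intersection(chain, steps):
    # Given a decreasing sequence of infinite sets (as membership
    # predicates), build B = {n_0 < n_1 < ...} where n_k is the least
    # element of the intersection of the first k+1 sets above n_{k-1};
    # then B minus {n_0, ..., n_{k-1}} is a subset of A_k.
    ns = []
    for k in range(steps):
        m = 0 if not ns else ns[-1] + 1
        while not all(chain[i](m) for i in range(k + 1)):
            m += 1
        ns.append(m)
    return ns

# hypothetical chain (chosen for illustration): A_i = multiples of 2**i
chain = [lambda m, i=i: m % (2 ** i) == 0 for i in range(10)]
B = pseudo_intersection(chain, 8)
```

The returned set $B$ is a pseudo-intersection: all but finitely many of its elements lie in each $A_k$.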
Notice that, by Ramsey's theorem, if $f:[\omega]^{d} \to 2$, then
\[
\{q \in [\omega]^{\omega} : f \restriction [q]^{d} \textrm{ is constant}\}
\]
is dense in $[\omega]^{\omega}$ (here $[A]^{d}$ denotes the $d$-element subsets of $A$).
Since the separative quotient of $[\omega]^{\omega}$ is $\sigma$-closed,
forcing with it does not add new subsets of $\omega$.
Thus it forces that $\dot G$ is a \emph{Ramsey ultrafilter on $\omega$}:
if $f:[\omega]^{d} \to 2$ is a coloring of the $d$-element subsets of $\omega$,
there is an $H$ in the ultrafilter
such that $f$ is constant on the $d$-element subsets of $H$.
Kunen has shown, on the other hand, that whenever $\theta > 2^{\aleph_0}$, $\Rand_\theta$ forces that there does not
exist a Ramsey ultrafilter on $\omega$ \cite{points_betaN}.
(Kunen actually proved this in the special case in which the ground model satisfies the Continuum Hypothesis.
The general case follows by an absoluteness argument --- forcing with the poset $\Qcal$
of the previous example does not change the truth of ``$\Rand_{\theta}$ forces that there are no Ramsey ultrafilters on $\omega$''.)
\end{example}
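Ramsey's theorem for pairs, which underlies the density observation in the example above, can be made concrete in the case $d = 2$.
The following greedy construction (a standard proof sketch on a finite truncation, with an illustrative coloring chosen here) first builds a pre-homogeneous sequence and then extracts a monochromatic set; for small universes the requested size may not be reachable.

```python
def mono_subset(f, universe, size):
    # Greedy proof sketch of Ramsey's theorem for pairs: at each step the
    # current least element a is kept and the remainder is thinned to the
    # larger color class of the pairs {a, .}, making the kept sequence
    # pre-homogeneous; one color class of it is then monochromatic.
    S, seq, colors = list(universe), [], []
    while len(S) > 1:
        a, rest = S[0], S[1:]
        same = [b for b in rest if f(a, b) == 0]
        diff = [b for b in rest if f(a, b) == 1]
        keep = same if len(same) >= len(diff) else diff
        seq.append(a)
        colors.append(0 if keep is same else 1)
        S = keep
    for c in (0, 1):
        H = [a for a, col in zip(seq, colors) if col == c]
        if len(H) >= size:
            return H
    return None

# illustrative coloring: pairs colored by the parity of their sum
f = lambda a, b: (a + b) % 2
H = mono_subset(f, range(64), 5)
```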
We are now in a position to derive another property of Rosenthal compacta.
The proof below is a reproduction of Todorcevic's proof in \cite{Rosenthal_cpt};
the result itself was originally proved by Bourgain \cite{rem_cpt_BC1} using classical methods.
\begin{thm} \label{RC_point_1st_ctbl}
If $K$ is a Rosenthal compactum,
then $K$ contains a dense set of points with a countable neighborhood base.
\end{thm}
\begin{proof}
Observe that it is sufficient to show that every Rosenthal compactum
contains a point with a countable base.
Recall the following result of \v{C}ech and Posp\'{\i}\v{s}il \cite{Gdelta_point}:
if $K$ is a compact topological space of cardinality at most $\aleph_1$, then
$K$ contains a point with a countable neighborhood base.
Let $\Qcal$ be the forcing from the previous example.
We have seen that $\Qcal$ forces that $|\dot \R| = \dot \aleph_1$ and hence
that the collection of all real valued Borel functions on a given Polish space
has cardinality $\aleph_1$.
In particular, $\Qcal$ forces that
any Rosenthal compactum has cardinality at most $\aleph_1$.
Now, let $K$ be a Rosenthal compactum consisting of Baire class 1 functions on some Polish space $X$.
By Theorem \ref{RC_abs}, $\Qcal$ forces that the closure of $\check K$ inside of $\R^X$ still consists
only of Baire class 1 functions.
Since $\Qcal$ is $\sigma$-closed, it follows that $\mathbf{1}$ forces that $\check K$ is closed and hence a compact space of
cardinality at most $\aleph_1$.
Therefore by the \v{C}ech--Posp\'{\i}\v{s}il Theorem, there are $\Qcal$-names $\dot g$ and $\dot U_n$ for each $n$ such that
$\mathbf{1}$ forces that $\dot g$ is an element of $\check K$ and that $\{\dot U_n : n < \infty\}$
is a countable neighborhood base for $\dot g$ consisting of basic open sets.
Since $\Qcal$ is $\sigma$-closed, there is a $q$ in $\Qcal$ which decides $\dot g$ to be some $f$ and
$\dot U_n$ to be some $V_n$ for each $n$.
It follows that $\{V_n: n < \infty\}$ is a countable neighborhood base.
\end{proof}
\section{Mathias reals and a theorem of Galvin and Prikry}
\label{GP:sec}
In this section we will give a forcing proof of the Galvin-Prikry Theorem,
which is an infinite dimensional form of Ramsey's Theorem:
\begin{thm} \cite{Galvin-Prikry}
If $\Xcal \subseteq [\omega]^\omega$ is Borel, then there is an $H \in [\omega]^\omega$ such
that either $[H]^\omega \subseteq \Xcal$ or else $[H]^\omega \cap \Xcal = \emptyset$.
\end{thm}
Recall that Mathias forcing \(\Mathias\) consists of all pairs \(p = (a_p,A_p)\) such that
\(A_p\) is an infinite subset of \(\omega\) and \(a_p\) is a finite initial segment of \(A_p\).
The order on \(\Mathias\) is such that \(q\) extends \(p\) if \(a_p\) is an initial part of \(a_q\) and
\(A_q \subseteq A_p\).
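The extension relation on \(\Mathias\) can be modeled directly.
The sketch below is only an illustration (conditions and the example data are chosen here; the infinite sets are replaced by finite truncations, so the subset test is an approximation of the real definition).

```python
def extends(q, p):
    # q = (a_q, A_q), p = (a_p, A_p): stems are tuples and the infinite
    # sets are represented by finite truncations; q extends p iff a_p is
    # an initial part of a_q and A_q is a subset of A_p
    (aq, Aq), (ap, Ap) = q, p
    return aq[:len(ap)] == ap and set(Aq) <= set(Ap)

# hypothetical conditions: A_p = even numbers below 20, stem (0, 2)
p = ((0, 2), (0, 2, 4, 6, 8, 10, 12, 14, 16, 18))
q = ((0, 2, 4, 8), (0, 2, 4, 8, 12, 16))   # extends p
r = ((0, 3), (0, 3, 6, 9))                 # stem does not end-extend a_p
```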
A \emph{Mathias real} is a subset \(X\) of \(\omega\) such that
\[
G_X:=\{p \in \Mathias : a_p \subseteq X \subseteq A_p\}
\]
is a generic filter.
If \(\Dcal\) is a collection of subsets of \(\Mathias\), then we say that \(X\) is \(\Dcal\)-generic
if \(G_X\) is \(\Dcal\)-generic.
If \(D \subseteq \Mathias\), we will say that \(X\) is \(D\)-generic if it is \(\{D\}\)-generic.
We say that \(D \subseteq \Mathias\) is \emph{dense above $n$}
if whenever \(p \in \Mathias\) and \(n \leq \min (a_p)\), \(p\) has an extension in \(D\).
\begin{lem} \label{singleton-generic}
Suppose that \(D \subseteq \Mathias\) is dense above \(n\).
There is a dense set of \(H\) in \(([\omega]^\omega, \subseteq)\)
such that any infinite subset of \(H\) is
\(D\)-generic.
\end{lem}
\begin{proof}
Let \(D\) and \(n\) be given as in the statement of the lemma and let \(A \in [\omega]^\omega\) be arbitrary;
by shrinking \(A\) if necessary, we may assume \(n < \min (A)\).
Construct a sequence of infinite subsets \(H_k \subseteq A\) for each \(k\) such that,
setting \(n_k := \min (H_k)\):
\begin{enumerate}
\setcounter{enumi}{\value{my_enumerate_counter}}
\item \(H_0 := A\) and \(H_{k+1} \subseteq H_k\);
\item \(n_k < n_{k+1}\);
\item for each \(x \subseteq \{n_i : i < k\}\) either there is a \(p \in D\) such that
\(a_p = x\) and \(H_k \subseteq A_p\) or else whenever \(p \in D\) with \(a_p = x\),
\(A_p \cap H_k\) is finite.
\setcounter{my_enumerate_counter}{\value{enumi}}
\end{enumerate}
Define \(B :=\{n_k : k < \infty\}\) and set
\[
\Fcal :=\{x \in [B]^{<\omega} : \exists p \in D ((a_p = x) \mand (A_p \subseteq B ))\}.
\]
By Theorem \ref{GNW}, there is an infinite \(H \subseteq B\) such that either \(H\) has
no subset in \(\Fcal\) or else every infinite subset of \(H\) has an initial segment in \(\Fcal\).
Since \((\emptyset,H)\) is in \(\Mathias\), it has an extension \(p\) in \(D\).
Since \(a_p \subseteq A_p \subseteq H \subseteq B\), \(a_p\) is a subset of \(H\) in \(\Fcal\).
Thus every infinite subset of \(H\) has an initial segment in \(\Fcal\) and \(n \leq \min (H)\).
Now let \(X\) be an infinite subset of \(H\) and let \(x\) be an initial part of \(X\) in \(\Fcal\).
Let \(k\) be such that \(x \subseteq \{n_i : i < k\}\) and let \(p \in D\) be such that
\(x= a_p \subseteq A_p \subseteq B\).
Observe that in particular \(A_p \cap H_k\) is infinite, since \(B \setminus H_k \subseteq \{n_i : i < k\}\) is finite.
Thus by our construction, \(H_k \subseteq A_q\) for some \(q \in D\) such that
\(a_q = x\).
Now we have that \(a_q \subseteq X \subseteq A_q\) and therefore that
\(X\) is \(D\)-generic.
\end{proof}
\begin{prop} \label{countably-generic}
Suppose that \(\Dcal\) is a countable collection of dense subsets of \(\Mathias\).
For every \(x \in [\omega]^{<\omega}\) there is a dense set of \(H\) in
\([\omega]^\omega\) such that if \(X \subseteq H\) is infinite then \(x \cup X\) is
\(\Dcal\)-generic.
\end{prop}
\begin{proof}
Let \(\Dcal\) and \(x\) be given as in the statement of the proposition.
Fix an enumeration \(\{D_k : k < \infty\}\) of \(\Dcal\) and let \(A \in [\omega]^\omega\) be arbitrary with
\(\max(x) < \min (A)\).
If \(y\) is a finite set and \(k < \infty\), define
\[
D_{k,y} := \{p \in \Mathias : (\max (y) < \min (a_p)) \mand ((y \cup a_p, y \cup A_p) \in D_k) \}.
\]
Observe that \(D_{k,y}\) is dense above \(\max (y) + 1\) and if \(X \subseteq \omega\) with
\(\max (y) < \min (X)\), then \(y \cup X\) is \(D_k\)-generic if \(X\) is \(D_{k,y}\)-generic.
Using Lemma \ref{singleton-generic}, construct infinite sets \(\{H_k : k < \infty\}\) so that:
\begin{enumerate}
\setcounter{enumi}{\value{my_enumerate_counter}}
\item \(H_0 :=A\) and \(H_{k+1} \subseteq H_k\);
\item setting \(n_k := \min (H_k)\), we have \(n_k < n_{k+1}\);
\item any infinite subset of \(H_k\) is \(D_{j,y}\)-generic whenever \(j < k\) and
\(x \subseteq y \subseteq x \cup \{n_i : i < k\}\).
\setcounter{my_enumerate_counter}{\value{enumi}}
\end{enumerate}
Define \(H:=\{n_k : k < \infty\}\) and suppose that \(X\) is an infinite subset of \(H\).
Let \(k\) be given and set \(y:= x \cup (X \cap \{n_i : i \leq k\})\).
Since \(X \cap H_{k+1} = X \setminus \{n_i : i \leq k\}\) is an infinite subset of \(H_{k+1}\),
it is \(D_{k,y}\)-generic and hence
\(x \cup X = y \cup (X \cap H_{k+1})\) is \(D_k\)-generic.
Thus \(H\) satisfies the conclusion of the proposition.
\end{proof}
If \(p,q \in \Mathias\), then we say that \(q\) is a \emph{pure extension} of \(p\) if
\(q \leq p\) and \(a_p = a_q\).
The following proposition is central to the analysis of \(\Mathias\) and related posets.
\begin{prop} \label{pure_decision}
If \(\phi\) is a formula in the forcing language and \(p \in \Mathias\),
then \(p\) has a pure extension which decides \(\phi\).
\end{prop}
\begin{proof}
Let \(p\) and \(\phi\) be given as in the statement of the proposition.
Define \(D\) to be the set of all conditions in \(\Mathias\) which decide \(\phi\), noting that \(D\) is
dense.
By Proposition \ref{countably-generic}, there is an infinite \(B \subseteq A_p\) such that
if \(X \subseteq B\) is infinite, then
\(a_p \cup X\) is \(D\)-generic.
Set
\[
\Fcal := \{x \in [B]^{<\omega} : \exists p \in \Mathias ((p \Vdash \phi) \mand (a_p = x) \mand (A_p \subseteq B))\}.
\]
By Theorem \ref{GNW} there is an infinite \(H \subseteq B\) such that either \(H\) has no subset in
\(\Fcal\) or else every infinite subset of \(H\) has an initial part in \(\Fcal\).
If the first conclusion is true,
then \((a_p,H)\) forces \(\neg \phi\).
If the second conclusion is true, then \((a_p,H)\) forces \(\phi\).
\end{proof}
Since every Borel set is universally Baire by Proposition \ref{UB_sigma-alg},
the next theorem implies the Galvin-Prikry Theorem \cite{Galvin-Prikry}.
\begin{thm}
If \(\Xcal \subseteq [\omega]^\omega\) is universally Baire,
then there is an \(H \in [\omega]^\omega\) such that either
\([H]^\omega \subseteq \Xcal\) or else \([H]^\omega \cap \Xcal = \emptyset\).
\end{thm}
\begin{proof}
Since \(\Xcal\) is universally Baire, there is an \(\Mathias\)-name
\(\dot \Xcal\), countably many dense sets \(\Dcal\), and a name \(\dot X\) for
the Mathias real such that if \(G\) is a \(\Dcal\)-generic filter, then
\(\dot X(G) \in \Xcal\) if and only if there is a \(p \in G\) such that
\(p \Vdash \dot X \in \dot \Xcal\) if and only if there is no \(p \in G\) such that
\(p \Vdash \dot X \not \in \dot \Xcal\).
By Proposition \ref{pure_decision}, there is a condition of the form \((\emptyset,A)\) which decides
\(\dot X \in \dot \Xcal\).
By Proposition \ref{countably-generic},
there is an \(H \in [A]^\omega\) such that every infinite subset of \(H\) is \(\Dcal\)-generic.
It follows that \(H\) satisfies the conclusion of the theorem.
\end{proof}
\section{When compacta have dense metrizable subspaces*}
Suppose that $K$ is a compact Hausdorff space.
In this section we will reformulate the question of when $K$
contains a dense metrizable subspace in terms of the language of forcing.
Recall that every compact Hausdorff space is homeomorphic to a closed subspace of $[0,1]^I$ for
some index set $I$.
In this section, when we reinterpret $K$ in a generic extension, we will take $\dot K$ to be the name
for the closure of $\check K$ in $[0,1]^{\check I}$.
Recall that a \emph{regular pair} in $K$ is a pair $(F,G)$ such that $F$ and $G$ are disjoint
closed $G_\delta$ subsets of $K$.
If $\Xi$ is an ordered set and $\Seq{(F_\xi,G_\xi) : \xi \in \Xi}$ is a sequence of
regular pairs, then we say that $\Seq{(F_\xi,G_\xi) : \xi \in \Xi}$ is a \emph{free sequence}
if whenever $A,B \subseteq \Xi$ are finite and satisfy $\max (A) < \min (B)$, it follows that
\[
\bigcap_{\xi \in A} G_\xi \cap \bigcap_{\xi \in B} F_\xi \ne \emptyset.
\]
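In the simplest nontrivial instance, with $A = \{\xi\}$ and $B = \{\eta\}$ for $\xi < \eta$, the condition asserts that
\[
G_\xi \cap F_\eta \ne \emptyset,
\]
so the order of the indices matters: the sets $G$ are taken from the earlier part of the sequence and the sets $F$ from the later part.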
Recall also that a collection $\Bcal$ of nonempty open subsets of
$K$ is a \emph{$\pi$-base} if every nonempty open set in $K$ contains an element of $\Bcal$.
We note the following result of Todorcevic.
\begin{thm} \cite{free_sequences} \label{pi_base}
If $K$ is any compact Hausdorff space, there is a sequence $\Seq{(F_\xi,G_\xi) : \xi \in \Pi}$
of regular pairs in $K$ such that
$\{\mathrm{int} (G_\xi) : \xi \in \Pi\}$ forms a $\pi$-base for $K$ of minimum cardinality and such that
whenever $\Xi \subseteq \Pi$ satisfies that $\{G_\xi : \xi \in \Xi\}$ has the finite intersection
property, $\Seq{(F_\xi,G_\xi) : \xi \in \Xi}$ is a free sequence.
\end{thm}
The following result is implicit in \cite{Rosenthal_cpt} and is a key component in
Todorcevic's proof that every Rosenthal compactum contains a dense metrizable subspace.
Let $\Qcal_K$ denote the forcing consisting of all nonempty
open subsets of $K$ ordered so that $q < p$ means that the closure of $q$ is contained in $p$.
We will let $\dot x_G$ denote the $\Qcal_K$-name for the unique element of the intersection of $\dot G$,
when regarded as a collection of open sets.
\begin{thm} \label{sigma_disj_pi-base_char}
Suppose that $K$ is a compact Hausdorff space and $\Seq{(F_\xi,G_\xi) : \xi \in \Pi}$ is a sequence satisfying
the conclusion of Theorem \ref{pi_base}.
The following are equivalent:
\begin{enumerate}[\indent a.]
\item \label{sigma_disj}
$K$ has a $\sigma$-disjoint $\pi$-base.
\item \label{gen_1st_ctbl}
$\Qcal_K$ forces that $\dot x_G$ has a countable neighborhood base.
\item \label{coll_Pi}
$\Qcal_K$ forces that
$|\{\xi \in \check \Pi : \check G_\xi \in \dot G\} | \leq \aleph_0$.
\end{enumerate}
\end{thm}
\begin{proof}
To see that (\ref{sigma_disj}) implies (\ref{gen_1st_ctbl}), first observe that if $\Ucal$
is a $\pi$-base for the topology on $K$, then $\Ucal$ is dense
as a subset of $\Qcal_K$.
Hence $\Qcal_K$ forces
that $\check \Ucal \cap \dot G$ generates $\dot G$.
Also, if $\Ocal$ is a pairwise disjoint family of open sets, then it is forced
that $|\check \Ocal \cap \dot G| \leq 1$.
Hence if $\Ucal$ is a $\sigma$-disjoint $\pi$-base,
then it is forced that $\check \Ucal \cap \dot G$ is a countable neighborhood base of $\dot x_G$.
The equivalence between (\ref{gen_1st_ctbl}) and (\ref{coll_Pi}) follows from the fact that
\[
\{G_\xi : (\xi \in \Pi) \mand (\dot x_G \in G_\xi)\}
\]
is forced to be a neighborhood base for $\dot x_G$ and that
\[
\{(F_\xi,G_\xi) : (\xi \in \Pi) \mand (\dot x_G \in G_\xi)\}
\]
is a free sequence and hence no smaller neighborhood base can suffice.
Finally, to see that (\ref{coll_Pi}) implies (\ref{sigma_disj}),
suppose that every condition forces that
$|\{\xi \in \check \Pi : \check G_\xi \in \dot G\} | \leq \aleph_0$.
Let $\Seq{\dot \xi_n : n < \infty}$ be a sequence of $\Qcal_K$-names such that
every condition of $\Qcal_K$ forces that
\[
\{\dot \xi_n : n < \infty\} = \{\xi \in \check \Pi : \dot x_G \in \check G_\xi\}.
\]
Let $\Ocal_n$ be a maximal antichain in $\Qcal_K$ such that elements of $\Ocal_n$ decide
$\dot \xi_n$ and set $\Ucal := \bigcup_{n=0}^\infty \Ocal_n$.
Clearly $\Ucal$ is $\sigma$-disjoint; it suffices to show that it is a $\pi$-base.
To see this, suppose that $V$ is a nonempty open subset of $K$.
Let $p$ be a nonempty regular open subset of $V$ and let
$\dot n$ be such that $p$ forces that $\check G_{\dot \xi_{\dot n}} \subseteq \check V$.
Now let $U$ be an element of $\Ucal$ which decides $\dot \xi_{\dot n}$ to be $\xi$.
Notice that we must have that $U \subseteq G_\xi \subseteq V$.
\end{proof}
Recall that a topological space $X$ is \emph{countably tight} if whenever $A \subseteq X$ and
$x \in \operatorname{cl}(A)$, there is a countable $A_0 \subseteq A$ such that $x \in \operatorname{cl}(A_0)$.
It is easy to show that continuous images of countably tight spaces are countably tight.
It is well known that in the class of compact Hausdorff spaces, countable tightness
is equivalent to the nonexistence of uncountable free sequences of regular pairs.
We now have the following corollary.
\begin{cor}\label{dense_metr}
Let $\Pcal$ be a forcing.
If $K$ is compact, contains a dense first countable subspace, and
\[
\mathbf{1} \Vdash_\Pcal \dot K \textrm{ is countably tight}
\]
then $K$ contains a dense metrizable subspace.
\end{cor}
\begin{proof}
By our assumption and Theorem \ref{sigma_disj_pi-base_char},
$K$ has a $\sigma$-disjoint $\pi$-base and thus so does the
dense first countable subspace.
By a result of H.E. White \cite{metrization:White},
any first countable Hausdorff space with a $\sigma$-disjoint
$\pi$-base has a dense metrizable subspace.
\end{proof}
An immediate consequence of the results we have developed so far is the following result
of Todorcevic.
Previously it had not been known whether there were nonseparable Rosenthal compacta which had no uncountable family of pairwise disjoint open sets
or whether certain specific Rosenthal compacta had dense metrizable subspaces
(see the discussion in \cite{Rosenthal_cpt}).
\begin{thm} \cite{Rosenthal_cpt}
Rosenthal compacta contain dense metrizable subspaces.
\end{thm}
\begin{proof}
Let $K$ be a Rosenthal compactum and let $\tilde K$ be the $\Qcal_K$-name for the reinterpretation
of $K$ as a Rosenthal compactum in the generic extension by $\Qcal_K$.
By Theorem \ref{RC_point_1st_ctbl}, $K$ contains a dense first countable subspace.
By Theorem \ref{RC_abs},
$\Qcal_K$ forces that $\tilde K$ is a Rosenthal compactum and hence is countably tight by
Theorem \ref{RC_ctbly_tight}.
Since $\dot K$, as defined in the beginning of this section, is a continuous projection
of $\tilde K$, it follows that $\dot K$ is forced to be countably tight.
By Corollary \ref{dense_metr}, $K$ contains a dense metrizable subspace.
\end{proof}
\section{Further reading}
As was mentioned earlier, Kunen's book \cite{set_theory:Kunen}
is a good next step if one is interested in further reading on forcing.
It also contains a large number of exercises.
Chapters VII and VIII provide a standard treatment of forcing, presented with a more semantic orientation, and
Chapter II provides some useful background on combinatorial set theory.
Further reading on forcings which add a single real --- such as $\Cohen$, $\Rand$, $\Mathias$, $\Ameoba_\epsilon$ ---
can be found in \cite{set_theory_reals}.
Also, Laver's work on the Borel Conjecture \cite{Borel_conj} is a significant early paper on the subject which already contains
important techniques of modern set theory, such as countable support iteration.
Zapletal's \cite{forcing_idealized} gives a different perspective on forcings related to set theory of the reals.
For those who can find a copy, \cite{forcing_appl} is also good further reading on forcing and provides a different perspective than \cite{set_theory:Kunen}.
Those readers who have studied the material on Martin's Axiom in \cite[II,VIII]{set_theory:Kunen}
and/or \emph{forcing axioms} in \cite{forcing_appl} are referred to
\cite{PFA_ICM}, \cite{forcing_axioms:Todorcevic}, and \cite{FMS} where this concept is further developed
and the literature is surveyed.
Solovay's analysis of the model $L(\R)$ \cite{solovay_model}
is a landmark result in the study of forcing and large cardinals.
(Solovay actually analyzed a larger model than $L(\R)$, but $L(\R)$ has since shown itself
to be more fundamental and now bears the name \emph{Solovay model}.)
At the same time, \cite{solovay_model} should be accessible to readers who
have been through this article.
A proof of Solovay's theorem is reproduced in \cite{higher_infinite} which is also
a standard encyclopedic reference on large cardinals.
See also Mathias's infinite dimensional generalization
of Ramsey's Theorem which holds in $L(\R)$ after collapsing an appropriate large cardinal \cite{happy_families}.
An explanation of the special role Solovay's model plays in the foundations of mathematics
can be found in \cite{LCLM} and \cite{L(R)_abs:Woodin}.
\cite{stationary_tower} provides a good introduction to the methods needed to establish absoluteness results
about $L(\R)$.
\def\leavevmode\lower.6ex\hbox to 0pt{\hskip-.23ex \accent"16\hss}D{\leavevmode\lower.6ex\hbox to 0pt{\hskip-.23ex \accent"16\hss}D}
Pink stove auction | press release | charity event | BlueStar
(Reading, PA – June 18) – 'Get Your Pink On' and join BlueStar for a great cause! BlueStar™, the manufacturer of high-performance cooking equipment for the home, has kicked off 100 Days of fundraising and eBay auctions, featuring a custom-made PINK gas range, to benefit the Greater New York City Affiliate of Susan G. Komen for the Cure® via its Susan G. Komen Greater NYC Race for the Cure® charity event.
The stove has a retail price of about $4,700, but Mike Trapp, vice president of operations, hopes the eBay auction will attract more than that because of the cause. The auction ends on July 10th, and 100% of the proceeds will be contributed to the charity. The company will provide free shipping.
The company will launch a series of other eBay auctions throughout the summer, so Trapp advises checking back regularly for new items. Auctions will include BlueStar products, including a $700 set of high-end cookware, and items signed by celebrity chefs such as Marcus Samuelsson.
"This is our way to help to make a difference as we celebrate survivorship, honor those who have lost their battle, and most importantly, help raise funds and awareness for the fight against this life threatening disease," said Trapp. Team BlueStar is the company's first ever company-wide team, comprised of employees, distributors, dealers, customers, family and friends.
The pink range also will be part of a special 100-day tour which will take it to appliance retailers throughout the Northeast. And, in a historic day-trip on Sunday, September 18th, Team BlueStar will lead the range from its Pennsylvania headquarters to join teammates in New York City for the big Komen Greater NYC Race for the Cure in Central Park. For more information or to register for the Race, visit www.komennyc.org/race.
Range Features:
Extra large oven capacity: 26.25″W x 20″D x 15″ H
ULTRANOVA power burner delivers 22,000 BTUs of intense heat
Accommodates a full-size commercial 18″ x 26″ baking sheet
24″ depth for compatibility with standard kitchen cabinetry
Convection oven cooking provides incredibly precise, even heat distribution
Free shipping within the U.S. and Canada
About BlueStar
BlueStar™ – Genuine Restaurant Ranges for the Home™ – manufactures high-performance gas ranges and cooktops for the residential market. The company's unique open burner system produces 22,000 BTU of cooking power, resulting in shorter cooking times and an even simmer. Each BlueStar range is hand-crafted in Reading, Pennsylvania and features burners that can be custom configured at the time of order. Most BlueStar models are available in 190 colors, at no extra charge. For more information, please visit bluestar2016.wpengine.com.
\section*{Appendix}
\end{document}
\section{Discussion and Future Work}
We have presented a model class of stochastic \acp{RNN} that can be trained with a recently proposed estimator, \ac{SGVB}.
The resulting model lives up to the expectation of improving greatly over \acp{sRNN} that erroneously assume a factorisation of the output variables.
An important take-away message of this work is that the performance of \acp{RNN} can benefit greatly from more sophisticated methods that improve the representational capabilities of the model.
While not shown in this work, \acp{STORN} can be readily extended to feature computationally more powerful architectures such as LSTM or deep transition operators \citep{hochreiter1997long, pascanu2013construct}.
Still, an apparent weakness seems to be the stochastic objective function.
Thankfully, research in optimisation of stochastic objective functions has far from halted and we believe \ac{STORN} to benefit from any advances in that area.
\section{Acknowledgements}
Part of this work has been supported by the TACMAN project, EC Grant agreement no. 610967, within the FP7 framework programme.
\section{Experiments}
For evaluation we trained the proposed model on a set of midi music, which was used previously \citep{bengio2012advances,pascanu2013construct,bayer2013fast,boulanger2012modeling} to evaluate \acp{RNN}.
We also investigated modelling human motion in the form of motion capture data \citep{boulanger2012modeling,sutskever2008recurrent,taylor2006modeling}.
We employ \acp{FD-RNN}~\citep{bayer2013fast} for both the recognition and the generating model.
While we determine the dropout rates for the generating model via model selection on a validation set, we include them in the parameter set for the recognition model.
In a manner similar to \cite{bayer2013training}, we exploit fast dropout's natural inclusion of variance as the variance for the recognition model, i.e. $\sigma^2_{t, k}$.
We used Adadelta~\citep{zeiler2012adadelta} enhanced with Nesterov momentum~\citep{sutskever2013importance} for optimisation.
\subsection{Polyphonic Music Generation}
All experiments were done by performing a random search~\citep{bergstra2012random} over the hyper parameters, where 128 runs were performed for each data set.
Both the recognition and the generating model used 300 hidden units with the logistic sigmoid as the transfer function.
We report the estimated negative log-likelihood on the test set (obtained via the importance sampler proposed in \citep{rezende2014stochastic}) for the parameters which yielded the best bound on the validation set.
As expected, \ac{STORN} improves over the models assuming a factorised output distribution (FD-RNN, sRNN, Deep RNN) in all cases.
Still, RNN-NADE has a competitive edge.
The reasons for this remain unclear from the results alone, but the stochastic training and resulting noisy gradients are a viable hypothesis, since RNN-NADE does not suffer from those.
\begin{table}
\caption[ ]{
Results on the midi data sets.
All numbers are average negative log-likelihoods on the test set, where ``FD-RNN'' represents the work from \cite{bayer2013fast}; ``sRNN'' and ``RNN-NADE'' results are from \cite{bengio2012advances} while ``Deep RNN`` shows the best results from \cite{pascanu2013construct}.
The results of our work are shown as ``STORN`` and have been obtained by means of the importance sampler described in~\cite{rezende2014stochastic}.
}
\label{table:results-midi}
\begin{center}
\begin{small}
\begin{tabular}[t]{|l|r|r|r||r|r|}
\hline
Data set & STORN & FD-RNN & sRNN & RNN-NADE & Deep RNN \\
\hline
Piano-midi.de & 7.13 & 7.39 & 7.58 & 7.05 & -- \\
Nottingham & 2.85 & 3.09 & 3.43 & 2.31 & 2.95 \\
MuseData & 6.16 & 6.75 & 6.99 & 5.60 & 6.59 \\
JSBChorales & 6.91 & 8.01 & 8.58 & 5.19 & 7.92 \\
\hline
\end{tabular}
\end{small}
\end{center}
\end{table}
\subsection{Motion Capture Data}
The motion capture data set \citep{hsu2005style,taylor2006modeling} is a sequence of kinematic quantities obtained from a human body during walking.
It consists of 3128 time steps of 49 angular quantities each.
The experimental protocol of previous studies of this data set is to report the mean squared error on the training set, which we comply with.
\footnote{The use of the MSE on the training set is debatable for this task. First, there is the danger of overfitting the training set. Second, the metric only captures a single moment of the residual distribution. We go forward with this protocol nonetheless to make our results comparable to previous works. Additionally, we report the negative log-likelihood, which is the right metric for the task.}
For motion capture data, we chose a Gaussian likelihood with a fixed standard deviation for the generating model.
The recognition model was chosen to be a bidirectional RNN.
While the standard deviation was fixed to $1$ during training, we performed a binary search for a better value after training; the resulting estimate of the negative log-likelihood on the validation set was then used for model selection.
The estimated negative log-likelihood of the data was 15.99.
Other models trained on this data set, namely the RNN-RBM, RTRBM and cRBM do not offer a tractable way of estimating the log-likelihood of the data, which is why there is no direct mean of comparison respecting the probabilistic nature of the models.
In the case of the former two, the mean squared prediction error is reported instead, which is 20.1 and 16.2 respectively.
Our method achieved an average MSE of 4.94, which is substantially less than previously reported results.

For additional means of comparison, we performed approximate missing value imputation of motion capture data.
We picked random sequences of length 60 and replaced all of the 49 channels from time steps 30 to 40 with standard normal noise.
We then performed a \emph{maximum a posteriori} point selection of the recognition model, i.e. $\argmax_{\hat{\mathbf{z}}_{1:T}} q(\hat{\mathbf{z}}_{1:T}|\mathbf{x}_{1:T})$, from which we reconstructed the output via $\argmax_{\hat{\mathbf{x}}_{30:40}} \log p(\mathbf{x}_{1:T}|\hat{\mathbf{z}}_{1:T})$.
Note that this method is different from the one proposed in \citep{rezende2014stochastic}, where an iterative scheme is used.
We also experimented with that method, but did not find it to yield substantially better results.
The results of the imputations are shown in Figure~\ref{fig:impute}.
To demonstrate the generative capabilities of the method, we drew 50 samples from the model after initialising it with a stimulus prefix.
The stimulus had a length of 20, after which we ran the model in ``generating mode'' for another 80 time steps.
This was done by feeding the mean of the model's output at time step $t$ into the generating model at time step $t+1$.
Additionally, we drew $\mathbf{z}_{20:80}$ from the prior. The results are visualised in Figure~\ref{fig:sample}.
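The ``generating mode'' described above can be sketched as follows; the one-step interface \texttt{step} is a hypothetical stand-in for the trained generating network, and everything else here is illustrative rather than the actual implementation:

```python
import numpy as np

def generate_after_prefix(step, prefix, n_steps, latent_dim, rng):
    """Condition on a stimulus prefix, then run freely: feed the mean of the
    output at time step t back in as the input at time step t+1, drawing each
    z_t from the standard Normal prior throughout.
    `step(x_t, z_t, state) -> (mean_t, state)` is a hypothetical interface."""
    state, mean_t = None, None
    for x_t in prefix:                                   # stimulus phase
        mean_t, state = step(x_t, rng.standard_normal(latent_dim), state)
    x_t, samples = mean_t, []
    for _ in range(n_steps):                             # generating mode
        mean_t, state = step(x_t, rng.standard_normal(latent_dim), state)
        samples.append(mean_t)
        x_t = mean_t                                     # feed the mean back in
    return np.stack(samples)
```

Drawing fresh latent samples from the prior at every step is what produces the diversity visible across the 50 generated continuations.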
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{plots/imputed.pdf} \\
\end{center}
\caption{
Illustration of missing value imputation on the motion capture data set.
We show the first 48 of the 49 channels of a random sample, where time steps 30 to 40 were initialised with random noise.
Subsequently, a maximum a posteriori point estimate of the latent variables was used to reconstruct the missing parts of the signals.
}
\label{fig:impute}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{plots/stimulus-sampling.pdf} \\
\end{center}
\caption{
Samples from the model trained on motion capture data after providing a stimulus prefix sequence of 20 time steps.
The uncertainty of the learned distribution is visible by the diversity of the samples; nevertheless, the distribution is rather unimodal.
}
\label{fig:sample}
\end{figure}
\section{Introduction}
\label{sec:intro}
\acp{RNN} are flexible and powerful tools for modeling sequences.
While they led only a marginal existence in the 1990s, recent successes in real world applications~\citep{graves2013generating, graves2013speech, sutskever2014sequence, graves2008unconstrained, cho2014learning} have revived interest.
This is partially due to architectural enhancements~\citep{hochreiter1997long}, new optimisation findings~\citep{martens2011learning,sutskever2013importance,bengio2012advances} and the increased computational power available to researchers.
\acp{RNN} can be employed for a wide range of tasks as they inherit their flexibility from plain neural networks.
This includes universal approximation capabilities, since \acp{RNN} are capable of approximating any measurable sequence to sequence mapping and have been shown to be Turing complete~\citep{hammer2000approximation,siegelmann1991turing}.
One typical application is to let an \ac{RNN} model a probability distribution over sequences, i.e. $p(\mathbf{x}_{1:T})$.
This is done by writing the distribution in cascade form,
\eq{
p(\mathbf{x}_{1:T}) & = \prod_{t=0}^{T-1} p(x_{t+1}|\mathbf{x}_{1:t}),
}
where $\mathbf{x}_{1:0} = \emptyset$.
Each $p(x_{t+1}|\mathbf{x}_{1:t})$ is then represented by the output of an \ac{RNN} at a single time step, identifying each of its components with the statistics of the distribution.
A simple example is that of a Bernoulli, i.e.
\eq{
p(x_{t+1, k} = 1|\mathbf{x}_{1:t}) &= \eta_k(\mathbf{x}_{1:t}) \numberthis \label{eq:bernoulli}
}
where $x_{t+1, k}$ corresponds to the $k$'th component of the $t+1$'th time step of $\mathbf{x}$ with $k=1,\dots,\omega$ and $t=1,\dots,T$.
Each $\eta_k(\mathbf{x}_{1:t})$ is the $k$'th output of some \ac{RNN} at time step $t$, constrained to lie in the interval $(0, 1)$.
Learning such an \ac{RNN} then boils down to minimising the negative log-likelihood of the data with respect to the parameters of the network.
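To make this concrete, the Bernoulli case amounts to the following negative log-likelihood computation (a minimal NumPy sketch; the array shapes and the numerical floor are our own choices, not part of the paper):

```python
import numpy as np

def bernoulli_nll(eta, x):
    """Negative log-likelihood of a binary sequence x under the per-step
    Bernoulli parameters eta produced by an RNN (eta_k constrained to (0, 1)).
    Both arrays have shape (T, omega)."""
    eps = 1e-12  # numerical floor to keep the logarithms finite
    return -np.sum(x * np.log(eta + eps) + (1.0 - x) * np.log(1.0 - eta + eps))
```

Minimising this quantity with respect to the network parameters that produce \texttt{eta} is exactly the maximum-likelihood training described above.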
This framework gives practitioners a powerful tool to model rich probability distributions over sequences.
A common simplification is a na\"ive Bayes assumption that the individual components factorise:
\eq{
p(x_{t+1}|\mathbf{x}_{1:t}) &= \prod_k p(x_{t+1, k}|\mathbf{x}_{1:t}).
}
While sufficient for many applications, reintroduction of dependency among the components of $x_t$ leaves room for improvement.
This is especially true for sequences over spaces which are high dimensional and tightly coupled.
The approach taken by~\cite{graves2013generating} is to use a mixture distribution for $p(x_t|\mathbf{x}_{1:t-1})$.
Arguably powerful enough to model any dependency between the components of $x_t$, a drawback is that the number of parameters scales at least linearly with the number of chosen mixture components.
Models based on restricted Boltzmann machines and variations~\citep{boulanger2012modeling,boulanger2013high,sutskever2008recurrent} provide a solution to this as well, yet come with tighter restrictions on the assumptions that can be made.
E.g. RBMs are restricted to model data using posteriors from the exponential family~\citep{welling2004exponential}, make use of an intractable objective function and require costly MCMC steps for learning and sampling.
In this work, we propose to consider adding latent variables similar to \cite{tang2013learning} to the network.
Using \ac{SGVB}~\citep{rezende2014stochastic,kingma2013auto} as an estimator, we train \acp{RNN} to model high dimensional sequences.
\section{Methods}
We propose to combine \ac{SGVB} and \acp{RNN} by making use of an \ac{sRNN} for both the recognition model $q(z_t|\mathbf{x}_{1:t-1})$ and the generating model $p(x_t|\mathbf{z}_{1:t})$.
\subsection{The Generating Model}
More specifically, the generating model is an \ac{sRNN} where the latent variables form additional inputs:
\eq{
h_t &= f_h(x_t \Win^g + z_t \Win'^g + h_{t-1} \Wrec + \bhid) \numberthis \label{eq:hidden_storn} \\
}
which replaces Eq.~(\ref{eq:hidden}).
We let $y_t$ from Eq.~(\ref{eq:output}) represent the necessary statistics to fully determine $p(x_{t+1}|\mathbf{x}_{1:t})$.
Note that the model reduces to an \ac{sRNN} as soon as we remove any latent variables, e.g. by setting $\Win'^g = \mathbf{0}$.
Hence, such a model generalises \acp{sRNN}.
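Written out, the recursion of Eq.~(\ref{eq:hidden_storn}) is a plain loop in which the latent sample enters exactly like an additional input. The following NumPy sketch illustrates this; the weight names mirror the equation and the transfer function is an illustrative choice:

```python
import numpy as np

def storn_hidden(x, z, W_in, W_in_z, W_rec, b, f=np.tanh):
    """Compute the deterministic hidden states h_1, ..., h_T from inputs
    x_{1:T} and latent samples z_{1:T}; each h_t is a deterministic
    function of x_{1:t} and z_{1:t}."""
    T, n_hidden = x.shape[0], b.shape[0]
    h = np.zeros((T, n_hidden))
    h_prev = np.zeros(n_hidden)
    for t in range(T):
        h[t] = f(x[t] @ W_in + z[t] @ W_in_z + h_prev @ W_rec + b)
        h_prev = h[t]
    return h
```

Setting \texttt{W\_in\_z} to zero recovers the hidden states of a plain \ac{sRNN}, which is the sense in which the model generalises it.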
The only quantities bearing uncertainty in the calculation of $\mathbf{h}_{1:T}$ are the latent variables $\mathbf{z}_{1:T}$, as $\mathbf{x}_{1:T}$ stems from the data set and for all $t$, $h_t$ is a deterministic function of $\mathbf{x}_{1:t}$ and $\mathbf{z}_{1:t}$.
The resulting factorisation of the data likelihood of a single sequence $p(\mathbf{x}_{1:T})$ is then
\eq{
p(\mathbf{x}_{1:T})
&= \prod_{t=0}^{T-1} p(x_{t+1}|\mathbf{x}_{1:t}) \\
&= \int_{\mathbf{z}_{1:T}} p(\mathbf{z}_{1:T}) \prod_{t=0}^{T-1} p(x_{t+1}|\mathbf{x}_{1:t}, \mathbf{z}_{1:t}, \cancel{\mathbf{z}_{t+1:T}}) d\mathbf{z}_{1:T}\\
&= \int_{\mathbf{z}_{1:T}} p(\mathbf{z}_{1:T}) \prod_{t=0}^{T-1} \int_{h_{t}} p(x_{t+1}|\mathbf{x}_{1:t}, \mathbf{z}_{1:t}, h_{t}) p(h_{t}|\mathbf{x}_{1:t}, \mathbf{z}_{1:t}) d h_t d\mathbf{z}_{1:T},
}
where we have made use of the fact that $x_{t+1}$ is independent of $\mathbf{z}_{t+1:T}$.
Since $h_t$ is a deterministic function of $\mathbf{x}_{1:t}$ and $\mathbf{z}_{1:t}$, we note that $p(h_t|\mathbf{x}_{1:t}, \mathbf{z}_{1:t})$ follows a Dirac distribution with its mode given by Eq.~(\ref{eq:hidden_storn}).
Thus, the integral over the hidden states is replaced by a single point; we make the dependency of $h_t$ on both $\mathbf{z}_{1:t}$ and $\mathbf{x}_{1:t}$ explicit.
\eq{
p(\mathbf{x}_{1:T})
&= \int_{\mathbf{z}_{1:T}} p(\mathbf{z}_{1:T}) \prod_{t=0}^{T-1} p(x_{t+1}|h_t(\mathbf{x}_{1:t}, \mathbf{z}_{1:t})) d\mathbf{z}_{1:T}. \numberthis \label{eq:storn_ll}
}
The corresponding graphical model is shown in Figure~\ref{fig:graphmodel}.
Even though the determinism of $h_t$ might seem restrictive at first, we will argue that it is not.
Let $\mathbf{h}_{1:T}$ be the sequence of hidden layer activations as given by Eq.~(\ref{eq:hidden_storn}).
This sequence is deterministic given $\mathbf{x}_{1:T}$ and $\mathbf{z}_{1:T}$ and consequently, $p(\mathbf{h}_{1:T}|\mathbf{x}_{1:T}, \mathbf{z}_{1:T})$ will follow a Dirac distribution.
Marginalising out $\mathbf{z}_{1:T}$ will however lead to a universal approximator of probability distributions over sequences, analogously to the argument given in Section~\ref{sec:sgvb}.
An additional consequence is that we can restrict ourselves to prior distributions over the latent variables that factorise over time steps, i.e. $p(\mathbf{z}_{1:T}) = \prod_t p(z_t)$.
This is much easier to handle in practice, as calculating necessary quantities such as the KL-divergence can be done independently over all time steps and components of $z_t$.
Despite this, the distribution over $\mathbf{h}_{1:T}$ will be a Markov chain and can exhibit stochastic behaviour, if necessary for modelling the data distribution.
\begin{figure}
\label{fig:graphmodel}
\centering
\input{tex/graphicalmodel}
\caption{
Graphical model corresponding to the factorisation given in Eq.~(\ref{eq:storn_ll}).
The hidden states $h_t$ are shown as diamonds to stress that they are no source of stochasticity.
Despite this, marginalising out $\mathbf{z}_{1:T}$ makes $\mathbf{h}_{1:T}$ stochastic.}
\end{figure}
\subsection{Variational Inference for Latent State Sequences}
The derivation of the training criterion is done by obtaining a variational upper bound on the negative log-likelihood via Jensen's inequality, where we use a variational approximation $q(\mathbf{z}_{1:T}|\mathbf{x}_{1:T}) \approx p(\mathbf{z}_{1:T}|\mathbf{x}_{1:T})$.
\eq{
- \log p(\mathbf{x}_{1:T})
&= -\log \int_{\mathbf{z}_{1:T}} {q(\mathbf{z}_{1:T}|\mathbf{x}_{1:T}) \over q(\mathbf{z}_{1:T}|\mathbf{x}_{1:T})} p(\mathbf{z}_{1:T}) \prod_{t=0}^{T-1} p(x_{t+1}|h_t(\mathbf{x}_{1:t}, \mathbf{z}_{1:t})) d\mathbf{z}_{1:T} \\
&\le KL(q(\mathbf{z}_{1:T}|\mathbf{x}_{1:T})|p(\mathbf{z}_{1:T})) - \Expc{\mathbf{z}_{1:T} \sim q(\mathbf{z}_{1:T}|\mathbf{x}_{1:T})}{\sum_{t=0}^{T-1} \log p(x_{t+1}|h_t(\mathbf{x}_{1:t}, \mathbf{z}_{1:t}))} \numberthis \label{eq:srn-bound} \\
&:= \mathcal{L}_\text{STORN}
}
In this work, we restrict ourselves to a standard Normal prior\footnote{In a preliminary report, we proposed the use of a Wiener process for a prior. However, the presented results were invalid due to implementation errors and the paper has been withdrawn.} of the form
\eq{
p(\mathbf{z}_{1:T}) = \prod_{t, k} \mathcal{N}(z_{t, k}|0, 1),
}
where $z_{t, k}$ is the value of the $k$'th latent sequence at time step $t$.
The recognition model $q$ will in this case be parameterised by a single mean $\mu_{t, k}$ and variance $\sigma^2_{t, k}$ for each time step and latent sequence.
Both will be represented by the output of a recurrent net, which thus has $2\omega$ outputs of which the first $\omega$ (representing the mean) will be unconstrained, while the second $\omega$ (representing the variance) need to be strictly positive.
Given the output $\mathbf{y}_{1:T} = f^r(\mathbf{x}_{1:T})$ of the recognition \ac{RNN} $f^r$, we set
\eq{
\mu_{t, k} &= y_{t, k}, \\
\sigma^2_{t, k} &= y_{t, k+\omega}^2.
}
Note that the square ensures positivity.
Following the reparametrisation trick of \cite{kingma2013auto}, we sample from a standard Normal at each time step, i.e.\ $\epsilon_{t, k} \sim \mathcal{N}(0, 1)$, and use it to sample from $q$ via $z_{t, k} = \mu_{t, k} + \sigma_{t, k}\epsilon_{t, k}$.
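The parameterisation of $q$ and the reparameterised sampling step can be sketched as follows. This is an illustrative NumPy fragment, not the implementation used in the paper; the recognition-RNN output is replaced here by random numbers, and all array names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

T, omega = 10, 4                       # sequence length, number of latent sequences
y = rng.normal(size=(T, 2 * omega))    # stand-in for the recognition-RNN output f^r(x_{1:T})

mu = y[:, :omega]                      # means mu_{t,k}, unconstrained
sigma2 = y[:, omega:] ** 2             # variances sigma^2_{t,k}; squaring ensures positivity

eps = rng.standard_normal((T, omega))  # eps_{t,k} ~ N(0, 1)
z = mu + np.sqrt(sigma2) * eps         # reparameterised draw from q(z_{1:T} | x_{1:T})
```

Because $z$ is a deterministic function of $(\mu, \sigma, \epsilon)$, gradients of the bound can flow through the sampling step.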
Given the complete sample sequence $\mathbf{z}_{1:T}$, we calculate the two terms of Equation~(\ref{eq:srn-bound}).
The KL-divergence can be computed in closed form, while the reconstruction term requires passing $\mathbf{z}_{1:T}$ through the generating model $f^g$, which yields $-\log p(\mathbf{x}_{1:T}|\mathbf{z}_{1:T})$.
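Since both $q$ and the standard Normal prior factorise over time steps and latent dimensions, the KL term reduces to a sum of univariate Gaussian divergences with a well-known closed form; a sketch (the helper name is ours):

```python
import numpy as np

def kl_to_standard_normal(mu, sigma2):
    """KL( N(mu, sigma2) || N(0, 1) ), summed over all time steps and latent
    dimensions; per element the closed form is 0.5*(mu^2 + sigma^2 - log sigma^2 - 1)."""
    return 0.5 * np.sum(mu ** 2 + sigma2 - np.log(sigma2) - 1.0)
```

For $\mu = 0$ and $\sigma^2 = 1$ the divergence vanishes, as expected, and it grows as $q$ moves away from the prior.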
The computational flow is illustrated in Figure~\ref{fig:compmodel}.
\begin{figure}
\centering
\input{tex/computationalmodel}
\caption{\label{fig:compmodel}
Diagram of the \emph{computational} dependencies of \acp{STORN}.
Each node of the graph corresponds to a vectorial quantity.
The different types of nodes shown are data (\textcolor{magenta}{magenta}), the recognition model (\textcolor{cyan}{cyan}), samples (\textcolor{green}{green}) and the generating model (\textcolor{teal}{teal}).
Note that the outputs of the recognition model $y^r_t$ depict the statistics of $q(z_t|\mathbf{x}_{1:t})$, from which the sample $z_t$ (green) is drawn.
The output of the generating model, $y^g_t$ is used to represent $p(x_{t+1}|\mathbf{x}_{1:t})$.
The red arrow expresses that this prediction is used to evaluate the loss, i.e. the negative log-likelihood.
}
\end{figure}
\subsection{Comparison to \acp{RNN}}
An important question is whether the proposed model offers any theoretical improvement over \acp{RNN} without latent variables.
The approximation capabilities (with respect to probability distributions) of \acp{RNN} result from the choice of likelihood function, i.e. the way the density of the observations at time step $t$ is determined by the outputs of the network, $y_t$. See Eq.~(\ref{eq:bernoulli}).
We have argued in Section~\ref{sec:intro} that a na\"ive Bayes assumption reduces the approximation capabilities.
One way to circumvent this is to use mixture distributions
\citep{graves2013generating}.
The number of parameters of the latter scales poorly, though: linearly in the number of mixture components, the number of hidden units in the last layer, and the output dimensionality.
Both approaches also share the drawback that the stochasticity entering the computation is not represented in the hidden layers: drawing a sample is determined by a random process invisible to the network.
\ac{STORN} overcomes both of these issues.
Introducing an additional mode merely requires an additional change of curvature in the approximation of $F$ (compare Section~\ref{sec:sgvb}).
This can be obtained by additional hidden units, for which the number of parameters scales linearly in the number of hidden units in the incoming and outgoing layer.
Further, the stochasticity in the network stems from $z$, of which the hidden layer is a function.
\section{Preliminaries}
In this section we recapitulate the foundations of our method.
We first describe the model family used, recurrent neural networks, and then the estimator, stochastic gradient variational Bayes (SGVB).
\subsection{Recurrent Neural Networks}
\label{sub-sec:rnn}
Given an input sequence $\mathbf{x} = (x_1, \dots, x_T), x_t \in \mathbb{R}^\kappa$ we compute the output sequence of a \ac{sRNN} $\mathbf{y} = (y_1, \dots, y_T), y_t \in \mathbb{R}^\omega$ via an intermediary hidden state layer $\mathbf{h} = (h_1, \dots, h_T), h_t \in \mathbb{R}^\gamma$ by recursive evaluation of the following equations:
\eq{
h_t &= f_h(x_t \Win + h_{t-1} \Wrec + \bhid), \numberthis \label{eq:hidden} \\
y_t &= f_y(h_t \Wout + \bout). \numberthis \label{eq:output}
}
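The recursion above can be sketched as a plain forward pass (illustrative NumPy, using the same row-vector convention as the equations; the initial hidden state, which the text leaves unspecified, is assumed to be zero):

```python
import numpy as np

def srn_forward(x, W_in, W_rec, W_out, b_hid, b_out,
                f_h=np.tanh, f_y=lambda a: a):
    """Forward pass of a simple recurrent network; h_0 is taken to be the
    zero vector, an assumption not fixed by the text."""
    h = np.zeros(W_rec.shape[0])
    ys = []
    for x_t in x:
        h = f_h(x_t @ W_in + h @ W_rec + b_hid)   # hidden-state update, Eq. (hidden)
        ys.append(f_y(h @ W_out + b_out))         # output at time t, Eq. (output)
    return np.stack(ys)
```

The transfer functions default to $\tanh$ for the hidden layer and the identity for the output, one common choice among many.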
The set of adaptable parameters is given by $\theta = \{\Win, \Wrec, \Wout, \bhid, \bout \}$.
$f_h$ and $f_y$ are transfer functions introducing nonlinearity into the computation.
Adaptation of the network's behaviour can be done by optimising a loss function with respect to the network's parameters with gradient-based schemes.
Consider a data set of finite size, i.e. $\mathcal{D} = \{(\mathbf{x}^{(i)}_{1:T})\}_{i=1}^{I}$ on which the loss operates.
In a setting as in Equation~(\ref{eq:bernoulli}), a reasonable choice is the negative log-likelihood, given by $\Lnll(\theta) = -\sum_{i=1}^I \sum_{t=1}^T \log p(x^{(i)}_t|\mathbf{x}^{(i)}_{1:t-1})$.
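For binary data this loss can be sketched as below, assuming (the Bernoulli setting referenced above is not reproduced in this excerpt) that the network output $y_t$ is the per-step success probability:

```python
import numpy as np

def bernoulli_nll(x, y):
    """Negative log-likelihood of a binary sequence x under per-step success
    probabilities y. The Bernoulli parameterisation is our assumption here."""
    eps = 1e-12                        # numerical guard against log(0)
    y = np.clip(y, eps, 1.0 - eps)
    return -np.sum(x * np.log(y) + (1.0 - x) * np.log(1.0 - y))
```

In practice $y_t$ is itself a function of $\mathbf{x}_{1:t-1}$ through the recurrence, so this term couples all time steps.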
\subsection{Stochastic Gradient Variational Bayes}
\label{sec:sgvb}
\ac{SGVB} was introduced independently by \cite{rezende2014stochastic} and \cite{kingma2013auto}.
For this paper, we will review the method briefly in order to introduce notation.
We are interested in modelling the data distribution $p(\mathbf{x})$ with the help of unobserved latent variable $\mathbf{z}$ represented as a directed graphical model, i.e. $p(\mathbf{x}) = \int p(\mathbf{x}|\mathbf{z})p(\mathbf{z})d\mathbf{z}$.
The integral is in general intractable, which is why we will use a variational upper bound on the negative log-likelihood for learning.
\eq{
-\log p(\mathbf{x}) &= -\log \int p(\mathbf{x}|\mathbf{z})p(\mathbf{z})d\mathbf{z} \\
&= -\log \int {q(\mathbf{z}|\mathbf{x}) \over q(\mathbf{z}|\mathbf{x})} p(\mathbf{x}|\mathbf{z})p(\mathbf{z}) d\mathbf{z} \\
&\le KL(q(\mathbf{z}|\mathbf{x})||p(\mathbf{z})) - \Expc{z \sim q(\mathbf{z}|\mathbf{x})}{\log p(\mathbf{x}|\mathbf{z})} =: \mathcal{L}.
}
where $KL(q||p)$ denotes the Kullback-Leibler divergence of $q$ from $p$.
In this case, we call $q$ the \emph{recognition model} since it allows for fast approximate inference of the latent variables $\mathbf{z}$ given the observed variables $\mathbf{x}$.
Note that $q$ is a variational approximation of the posterior $p(\mathbf{z}|\mathbf{x})$, the inverse of the \emph{generating model}\footnote{We use the non-standard term ``generating model'' for $p(\mathbf{x}|\mathbf{z})$ to distinguish it more clearly from the generative model $p(\mathbf{x})$.}
$p(\mathbf{x}|\mathbf{z})$, which cannot be obtained in closed form in general.
Both the recognition and the generating model can take arbitrary computational forms; the only requirements are that they represent probability distributions as outputs and admit stochastic training.
In order to minimise the upper bound of the negative log-likelihood $\mathcal{L}$ with numerical means, it is convenient to choose parametric models.
In that case we write $p(\mathbf{x}|\mathbf{z}, \theta^g)$ and $q(\mathbf{z}|\mathbf{x}, \theta^r)$ to make the dependency on the respective parameter sets explicit.
Learning good parameters can then be done by stochastic optimisation of $\mathcal{L}$ with respect to both $\theta^r$ and $\theta^g$, where the expectation term is approximated by a single draw from $q$ in each training step.
Designing a model is then done by the following steps: (1) Choice of a prior $p(\mathbf{z})$ over the latent variables. (2) Choice of a recognition model $q(\mathbf{z}|\mathbf{x}, \theta^r)$.
The Kullback-Leibler divergence between the prior and the recognition model has to be tractable and efficient to compute. (3) Choice of a generating model $p(\mathbf{x}|\mathbf{z}, \theta^g)$, which is often given by the type of data under investigation.
An important question is that of the representation capabilities of such a model.
It turns out that if $p(x|z)$ can represent an arbitrary deterministic \emph{function} of $z$, the overall model can approximate arbitrary distributions.
An argument for the one-dimensional case is as follows.
Assume random variables $x$ and $z$ with respective distribution functions $F_x$ and $F_z$.
By the probability integral transform \citep{grimmett1992probability}, $u = F_x(x)$ is uniformly distributed over the interval $[0, 1]$, and so is $u' = F_z(z)$.
Equating gives $F_z(z) = F_x(x) \Rightarrow F_x^{-1}(F_z(z)) = x$.
Therefore setting $p(x|z) := \delta(x - F(z))$ with $F = F_x^{-1} \circ F_z$ makes $p(x) =
\int_z p(x|z)p(z)dz$ recover the desired distribution.
An extension to the multidimensional case can be done by applying the above to the individual factors of a cascade decomposition and requiring $x$ and $z$ to be of the same dimensionality.
The burden is then on the learning algorithm to find a good approximation for $F$.
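The one-dimensional argument can be checked numerically: pushing a standard Normal latent through its own CDF yields a uniform variate, and applying the inverse CDF of a target distribution (here Exponential(1), our choice purely for illustration) reproduces that target.

```python
import math

import numpy as np

rng = np.random.default_rng(0)

def F_z(z):
    """CDF of the latent z ~ N(0, 1)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def F_x_inv(u):
    """Inverse CDF of the target x ~ Exponential(1)."""
    return -np.log(1.0 - u)

z = rng.standard_normal(200_000)
u = np.vectorize(F_z)(z)   # probability integral transform: u is Uniform(0, 1)
x = F_x_inv(u)             # composing with the target's inverse CDF yields the target
```

The sample mean of `x` is close to 1 and its median close to $\ln 2$, matching Exponential(1); in the model, the network has to learn this composition of distribution functions rather than being handed it.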
Mima Jaušovec (Maribor, 20 July 1956) is a former Slovenian and Yugoslav tennis player.
Biography
She was born on 20 July 1956 in Maribor (Slovenia), where she completed primary school.
Between 1976 and 1988 she was the most successful Yugoslav female tennis player.
She stopped playing competitive tennis in 1988, after fifteen years of active play.
She is the only Slovenian tennis player to have won one of the Grand Slam tournaments.
She is currently the captain of the Slovenian women's national tennis team.
Most important results
In 1977 she won the French Open in Paris.
In 1978 and 1983 she was also a finalist at Roland Garros.
In 1978 she won the women's doubles title in Paris with Virginia Ruzici of Romania.
She also achieved notable results at Wimbledon, in Rome and Hamburg, and at the Canadian Open.
Sources
External links
Mima Jaušovec's profile on the ITF website
Mima Jaušovec's biography on the WTA website
Mima Jaušovec's Fed Cup results
Source: https://www.nature.com/articles/s41598-019-56196-2

## Introduction

During engineering activities such as the discharge of waste liquid in injection wells [1], geological sequestration of CO2 [2,3], or large-scale hydrofracturing using acidic solutions [4], the rock mass is not only loaded but also etched and corroded by the chemical solutions in the ambient environment. This chemical exposure changes the rock mass's mineral components, structure, and mechanical properties, thus threatening stability [5,6].

When the rock of a porous medium is in an aqueous environment, the absorption and transportation of internal moisture is closely linked to capillary action. Capillary water absorption is one of the most significant physical properties of rock and the best property for evaluating the penetration of water into rock materials [7]. Studies have shown that rock with increased porosity, and thus increased absorption, is more sensitive and less durable [8,9]. For example, N. Sengun found that capillary water absorption coefficient (CWA) values have a linear relationship with total porosity and an inverse relationship with ultimate compression strength. In addition, the higher the capillary water absorption and porosity, the worse the negative influence on the physical and mechanical properties of the rock [10]. The water absorbed produces effects such as lubrication, softening, and crystallization, resulting in a reduction of rock strength and elastic modulus.

Interactions between the chemical solution and rock mass include physical and chemical activities. Compared with the physical negative influences of lubrication, softening, and argillization, water-rock chemical reactions have a greater impact on the mechanical properties of the rock mass [11].
For instance, the loss of strength, Young's modulus, cohesion, and internal friction angle is more obvious in rock samples immersed in different chemical solutions than in those saturated only with pure water [12,13,14,15]. However, Miao et al. found that the Poisson's ratio of granite increased after treatment with water and acidic solutions [16]. This is mainly due to the microstructural damage caused by chemical reactions between rock and chemical solution, which results in macroscopic loosening and fragility of rocks.

Furthermore, the damage to the rock macro-mechanical properties caused by water-rock chemical reactions is closely related to the chemical properties of the solution, such as pH value, ion concentration, and ion components [17]. Some research has shown that neutral water (pH = 7) has the least effect on strength and elastic modulus reduction, and that this reduction increases with the acidity or alkalinity of the solution at the same ionic concentration [15,18]. Besides, Feucht et al. found that solutions of intermediate ionic concentration produced the least mechanical weakening, and that the weakening increased as the ionic concentration of the solution increased or decreased [19]. This phenomenon was also observed by Feng et al. [18]. In addition, chemical solutions containing different ionic components have different influences on the reduction of rock strength after soaking. For example, among chemical solutions of NaCl, CaCl2, and NaHCO3 with the same ionic concentration and pH value, the NaCl and NaHCO3 solutions caused the largest and smallest reductions in the strength of limestone specimens, respectively [20]. But for sandstone, NaHCO3 and CaCl2 solutions produced the largest and smallest losses of strength, respectively.
These differences are mainly due to the chemical reactions between the solutions and the main mineral components of the rock samples [21].

It is clear that rock mass is a geological material with many initial internal micro-cracks, surrounded by a complex hydrological environment. Changes in the internal microstructure of a rock mass can cause changes in its macro-mechanical properties [22]. During water-rock chemical corrosion, erosion of mineral particles and crystals, weakening of particle bond strength, and alteration of the mineral components cause degradation of the rock's macro-mechanical properties [23]. Therefore, it is important to study the micro-mechanism of rock mass under chemical corrosion. Over the past few decades, several methods have been employed to study the micro-mechanism of the water-rock chemical reaction, including computerized tomography (CT) [24], nuclear magnetic resonance spectroscopy (NMR) [25,26], and scanning electron microscopy (SEM) [27]. CT values and CT images obtained from CT testing provide an effective means of analyzing rock damage evolution [28]. Feng et al. analyzed the damage evolution of sandstone using CT images and CT values and defined a damage variable based on the chemical corrosive influence [21]. Through NMR, the change of pore volume with time can be measured after rock is immersed in different chemical solutions. It has been found that the numbers of both small and large pores increase significantly due to the dissolution of minerals and erosion of the microstructure [15,29]. By means of SEM, it can be seen that surfaces of sandstone subjected to water-rock reaction are rougher and show little gouge [19]. Besides, under SEM, the bigger particles of granite break into a number of smaller, fragmented particles and the microstructures become looser [16].
However, the observation field of SEM is very limited, making it difficult to analyze the whole fracture morphology of a rock sample.

Previous research has indeed made some achievements; however, existing studies of water-rock chemical reactions have mainly focused on the macro-mechanical properties and micromorphology of rock under uniaxial and triaxial compression tests. Few studies have emphasized the shear-resistant mechanical performance and macroscopic fracture morphology of rock corroded by chemical solutions. Therefore, this paper utilizes a self-developed meso-shear test device for rock to conduct shear-fracture tests on sandstone corroded by chemical solutions with different pH values. Besides, a three-dimensional scanner is adopted to scan the shear-fracture surface and to quantify the surface smoothness and fracture characteristics through a software package. Furthermore, we analyze the macroscopic impact of the corrosive effects of the chemical solutions on the shear-resistant properties of sandstone using statistical methods and use a scanning electron microscope (SEM) to relate the macroscopic features to microstructural changes.

## Test Methods

### Sample preparation

The rock samples used in the test are upper Triassic Xujiahe (T3xj) sandstone, which is classified as a terrigenous fine-grain clastic sedimentary rock with a particle diameter of 0.1–0.5 mm. The main components of the sandstone include quartz, feldspar, flint, and muscovite, and an image of its microstructure is shown in Fig. 1. During the test, large complete sandstones were selected and incised to reduce the discreteness of the test specimens. The small-size sandstones were fabricated into cube specimens of dimensions 40 mm × 40 mm × 40 mm with wet processing. The processing precision was controlled within 0.02 mm.
Figure 2 shows the fabricated specimens and Table 1 lists their basic physico-mechanical parameters.

### Preparation of water-chemical solution

In the real environment, the pH value of underground water is generally 5.5–8.5. However, during waste liquid injection, the injected waste liquid may be strongly acidic or alkaline; thus, we prepared a gradient of five pH values, namely 2, 5, 7, 9, and 12. The preparation method for the water-chemical solution is as follows: First, we prepared a NaCl solution with pH = 7 and a concentration of 0.1 mol/L. Second, we added 0.1 mol/L HCl and NaOH solutions to the NaCl solution to prepare NaCl solutions with pH values of 2, 5, 9, and 12 [19].

### Testing conditions

According to the test plan (see Table 2), three sandstone specimens were arranged under each type of testing condition, so that a total of 21 samples were tested. After specimen processing was completed, the samples were placed in ovens at a temperature of 105 °C for 24 h, cooled in a sealed environment, and then soaked for 14 days in 150 ml of pure water or of solutions with pH of 2, 5, 7, 9, and 12. During the soaking, we regularly monitored the change in the solutions' pH values (the measurement data for the solution with pH of 5 was missed) and kept the environment sealed during the whole process.

A custom meso-shear test device was adopted to conduct nonrestrictive shear tests [30]. Figure 3 shows the actual specimen installation and labels the forces exerted. After the test, a 3D laser scanner was employed to scan the three dimensions of the upper and lower shear-fracture surfaces (Fig. 4).
We then used Matlab and other software to collect statistics on and analyze the morphology characteristic parameters of the three-dimensional maps of the shear-fracture surface.

## Results and Discussion

### Change of pH value of the water-chemical solutions

We used a PHS-2C acidimeter to measure the change of pH value of the chemical solutions during the whole soaking process. The measurements began when the specimen was completely soaked in the solution, and the measurement frequency was set according to the change rate of the pH value. Figure 5 plots the change of pH values during the whole soaking process.

As Fig. 5 shows, the pH values of the solutions change rapidly at an early stage, which is mainly caused by the relatively large contact area between the chemical solution and the specimen surface. Thus, the water–rock chemical coupling reaction is relatively strong. As time goes on, the etching and corrosive effects of the chemical solution on the sandstone slowly extend from the surface to the interior, and the pH value tends to change less, becoming stable after approximately 14 days. This means that the water–rock coupling reaction is strongly time-dependent; in other words, the chemical reaction gradually weakens over the corrosion period and finally ceases.

The pH values of all solutions finally approached 7 (neutral) as a result of the self-balancing feature of the solution's pH value during the water-rock reaction. The pH value of pure water tends toward alkalinity, because the main component of the sandstone adopted in this test is aluminosilicate, which is weakly alkaline after hydrolysis.

### Effect of water-rock reaction on shear strength of sandstone

We conducted nonrestrictive shear tests on dry sandstone and on sandstone etched in pure water and in solutions with different pH values.
Figure 6 shows the correlation curve between the shear stress and shear displacement obtained from the test.

As Fig. 6 shows, the shapes of the various curves are basically consistent. The samples experience shear failure in three stages: pore and fissure compaction, elastic deformation, and unstable fracture development. The shear strength of dry sandstone is 13.67 MPa, obviously higher than that of the soaked samples. The sandstone etched in NaCl solution with a pH of 12 had the lowest shear strength of 6.30 MPa, which demonstrates that the water–rock chemical coupling action greatly degraded the shear strength of the specimens.

Among all the etched sandstone samples, the sandstone etched by pure water had the highest shear strength of 9.25 MPa. The sandstone etched in NaCl solution with the same pH as pure water has a shear strength of 9.18 MPa, which is close to the above value. In addition, the two types of sandstone have similar shear-stress–shear-displacement curves, which means that the Na+ and Cl− ions in 0.1 mol/L NaCl solution have less impact on the shear characteristics of sandstone.

Figure 7 shows a fitted curve for the relation between the shear strength of sandstone and the pH value of the water–chemical solution. As the figure shows, the shear strength of sandstone increases linearly as the acidity of the etching solution weakens (pH = 2, pH = 5, pH = 7). Moreover, as the alkalinity of the etching solution increases (pH = 7, pH = 9, pH = 12), the shear strength of sandstone decreases linearly.

To better describe the effect of the water-rock chemical coupling reaction on the shear strength of sandstone, we classified the sandstone damage into two categories based on the chemical-damage model defined by Tang [31].
Type A damage DA and Type C damage DC are categorized according to the severity of the change of the rock's strength caused by water-chemical action:

$$D_A = 1 - S/S_0$$
(1)
$$D_C = 1 - S/S_w$$
(2)

where S indicates the shear strength of sandstone etched in NaCl solutions with different pH values, S0 indicates the shear strength of dry sandstone, and Sw indicates the shear strength of sandstone etched in pure water.

Type A damage includes physical and chemical damage to the rock structure, such as softening, argillization, and lubrication. Type C damage refers to the chemical damage to the rock caused by the change of the solution's pH value. Figure 8 plots these two damage types versus the pH of the etching solution.

Figure 8 shows that as the acidity or alkalinity of the chemical solution increases, both types of damage gradually become more serious. The alkaline solution degrades the rock more than the acid solution, which means that the sandstone is more sensitive to alkaline solutions. According to the plot of DA against pH values, chemical solutions have a very obvious effect on the shear strength of sandstone. If the damage approaches 50% during engineering activities, the stability of the local rock mass may be compromised, causing geological disturbance and threatening the safety of people and property. According to the plot of DC against pH values, the change of the solution's pH value can cause significant chemical damage, which suggests that the pH value of the chemical solution is closely related to the strength of the water–rock chemical coupling action and the corresponding weakening of the rock strength.

### The features of fracture surface and analysis of the SEM graph

A three-dimensional scanning system (Fig. 4) was used to scan the stereometric and three-dimensional structure of the shear-fracture surface of the sandstone.
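The two damage measures of Eqs. (1) and (2) are simple strength ratios; as a quick check, the strengths reported in the text can be plugged in directly (a small Python sketch; the function and variable names are ours):

```python
# Shear strengths reported in the text (MPa)
S0 = 13.67                 # dry sandstone
Sw = 9.25                  # sandstone soaked in pure water
S = {7: 9.18, 12: 6.30}    # NaCl solution, keyed by pH value

def damage_A(s):
    """Eq. (1): Type A damage, relative to the dry strength S0."""
    return 1.0 - s / S0

def damage_C(s):
    """Eq. (2): Type C (chemical) damage, relative to the pure-water strength Sw."""
    return 1.0 - s / Sw

for ph, s in sorted(S.items()):
    # pH 7: D_A ~ 0.33, D_C ~ 0.01; pH 12: D_A ~ 0.54, D_C ~ 0.32
    print(f"pH {ph}: D_A = {damage_A(s):.3f}, D_C = {damage_C(s):.3f}")
```

At pH 12 the Type A damage is roughly 54%, consistent with the text's remark that damage approaching 50% may compromise the stability of the local rock mass.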
The scanning results are shown in Fig. 9.

Figure 9 clearly shows that, compared with the shear-fracture surfaces of sandstone etched in solutions, the shear-fracture surface of dry sandstone has a steeper topography (color gradient, see Fig. 9(a)) and more gullies, contains irregular uplifts in the middle area, and presents more obvious irregular microbulges on the surface. These features suggest that the shear-fracture surface fluctuates more and is rougher. The extension and perforation of the shear-fracture surface is more complex, the extension of internal fractures is more circuitous, and the shear resistance is larger during shear failure. For the shear fractures of sandstone under etching solutions, with increasing acidity (pH = 7, pH = 5, pH = 2), the shear fracture of sandstone has fewer steep portions and becomes smoother. The microconvexities on the surface also disappear gradually. The alkaline corrosion effect is similar to that of acidic corrosion. With increasing alkalinity of the etching solution, the shear-fracture surface of sandstone fluctuates less, the roughness decreases, and obvious water–chemical damage is evident.

In order to observe the microstructure of the mineral granules after the sandstone is corroded by the chemical solution, the samples under the different test conditions were imaged by electron microscope, and an SEM graph of the samples is shown in Fig. 10.

As shown in Fig. 10, the mineral particles on the surface of dry sandstone connect compactly and flatly. However, for sandstone etched with a neutral solution, the mineral grains were clearly separated in a loose cell-like pattern, and the clay mineral content between particles was seriously corroded. With increasing acidity or alkalinity of the etching solution, the mineral particle morphology tends to become scale-shaped.
This morphology indicates that some particles dissolve by corrosion or argillization in the water; thus, the strength of the particles weakens and the cohesive forces between them are reduced.

### Analysis of morphological characteristics of the shear-fracture surface

In order to describe the changes to the morphological characteristics of the sandstone shear-fracture surfaces, we quantified the corrosion effects of the chemical solution with a set of characteristic parameters. These characteristics can be divided into height and texture features of the surface morphology. The former represent height fluctuations and the slope at each point on the fracture surface, and the latter capture the position of each point on the fracture surface and their correlation [32]. The combination of both can best describe the morphology characteristics of the fracture surface. Many of the characteristic parameters are inclusive. Based on the least-squares plane, auxiliary software was applied to extract coordinate data for the shear fracture. Moreover, Matlab was used to compute the height-characteristic parameters of the surface topography (the maximum profile height Sh and the profile mean square root deviation Sq) and the texture characteristic parameters (the profile area ratio SA and the slope mean square root S∆q).

The parameters are defined as follows:

1. The maximum profile height Sh:

$$S_h = \max(|S_{pi}| + |S_{mj}|)$$
(3)

where Spi (i = 1, 2, …) represents the distance from the highest peak to the base level for the ith profile peak and Smj (j = 1, 2, …) represents the distance from the lowest point to the base level in the jth profile valley.

2. The profile mean square root deviation Sq:

$$S_q = \sqrt{\frac{1}{m \times n} \sum_{i=1}^{m} \sum_{j=1}^{n} Z_{i,j}^2}$$
(4)

where m and n represent the number of points along the length and width of the sampling area and Zi,j represents the distance from the profile point to the base level at coordinate (i, j) on the surface profile. When the profile point is above the base level, Zi,j is positive.

3. The profile area ratio SA:

$$S_A = A_t / A$$
(5)

where At represents the extended area of the fracture surface and A represents the projected area of the fracture surface.

4. The slope mean square root S∆q:

$$S_{\Delta q} = \sqrt{\frac{\sum_{i=1}^{m-1} \sum_{j=1}^{n-1} \left[ (Z_{i+1,j} - Z_{i,j})^2 + (Z_{i,j+1} - Z_{i,j})^2 + (Z_{i+1,j+1} - Z_{i+1,j})^2 + (Z_{i+1,j+1} - Z_{i,j+1})^2 \right]}{2(m-1)(n-1)\Delta^2}}$$
(6)

where ∆ represents the sampling spacing. The rest of the symbols are the same as above.

After soaking in water solutions with different pH values, the three-dimensional morphology characteristic parameters of the shear-fracture surfaces are as listed in Table 3.

Fig. 11 shows the correlation curves of the height parameters of the shear-fracture surface, Sh and Sq, against the pH value of the chemical solution. Figure 12 shows the correlation curves of the texture parameters of the shear-fracture surface, SA and S∆q, against the pH value of the solution.

Taking pH = 7 as the demarcation point in Fig. 11, we observe that as the pH value of the solution increases or decreases, the maximum profile height Sh of the shear fracture of sandstone decreases, indicating that the maximum fluctuation range of the fracture surface decreases gradually. The curve of the profile mean square root deviation Sq follows the same trend.
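Three of the parameters defined above can be computed directly from a gridded height map Z measured from the least-squares base plane. The sketch below is our own NumPy helper, not the Matlab scripts used in the study; the profile area ratio SA is omitted because it requires triangulating the extended surface, and Sh is approximated by the overall peak-to-valley height.

```python
import numpy as np

def surface_params(Z, delta=1.0):
    """Morphology parameters of an m x n height grid Z (heights relative to the
    base plane; sampling spacing delta). Sh approximates Eq. (3) as the
    peak-to-valley height; Sq follows Eq. (4); S_dq follows Eq. (6)."""
    m, n = Z.shape
    Sh = Z.max() - Z.min()                 # maximum peak-to-valley height
    Sq = np.sqrt(np.mean(Z ** 2))          # RMS deviation of the profile
    di  = np.diff(Z, axis=0)[:, :-1]       # Z[i+1, j]   - Z[i, j]
    dj  = np.diff(Z, axis=1)[:-1, :]       # Z[i, j+1]   - Z[i, j]
    di2 = np.diff(Z, axis=1)[1:, :]        # Z[i+1, j+1] - Z[i+1, j]
    dj2 = np.diff(Z, axis=0)[:, 1:]        # Z[i+1, j+1] - Z[i, j+1]
    num = np.sum(di**2 + dj**2 + di2**2 + dj2**2)
    S_dq = np.sqrt(num / (2 * (m - 1) * (n - 1) * delta ** 2))
    return Sh, Sq, S_dq
```

As a sanity check, a plane tilted with slope 0.5 along one axis yields S∆q = 0.5, and a perfectly flat surface yields zero for all three parameters.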
The profile mean square root deviation fully describes the discreteness and volatility of the profile points on the fracture surface; it decreases with increasing acidity or alkalinity of the solution, indicating that the discreteness and volatility of the profile points decrease gradually. Analysis of the two height-characteristic parameters shows that higher acidity or alkalinity reduces the degree of surface fluctuation on the sandstone shear-fracture surface, implying that the fracture surface is smoother. The plot also shows that alkaline corrosion smooths the fracture surface more than acidic corrosion, which again confirms that sandstone is more sensitive to alkaline solutions. Figure 12 shows that as the acidity or alkalinity increases, the profile area ratio of the sandstone shear-fracture surface approaches 1 and the slope mean square root decreases, indicating that the folds on the fracture surface become fewer and the surface tends to be smoother.

In summary, with increasing acidity or alkalinity of the etching solution, the degree of fluctuation and roughness of the shear-fracture surface of sandstone decreases gradually, indicating that the chemical solution causes increasing erosion damage to the sandstone. This is because solutions with stronger acidity or alkalinity generate strong water–rock interactions, and the internal particles of the sandstone are broken down by dissolution. When the shear load runs through the whole fracture surface, the fracture surface is smoother and flatter and high-order microconvexities on the surface are also reduced. This smoothing is caused, on the one hand, by weakening of the cohesive forces between particles.
On the other hand, the microconvex bodies on the upper and lower fracture surfaces experience more contact abrasion during loading due to the degradation of mechanical properties.

### Analysis of the water-rock chemical coupling mechanism

The chemical effects of chemical solutions on rock mainly include ion exchange, dissolution, corrosion, hydration, hydrolysis, and oxidation-reduction. The etching solution we tested is 0.1 mol/L NaCl solution, with an ionic composition of Na+, Cl−, H+, and OH−. Because the effects of Na+ and Cl− at this ion concentration on the degradation of sandstone are not obvious, we only need to discuss the mechanism behind the influence of H+ and OH− ions (the pH value, in other words) on the chemical action that corrodes sandstone. The main components of the sandstone selected for this experiment were quartz, feldspar, and kaolinite. Therefore, the water–rock chemical coupling mechanism is discussed in terms of the reactive modes of these three in solution.

In the case of acid soaking, the pH value of the solution rises. This is mainly caused by feldspar (K-feldspar, albite, and anorthite) and kaolinite reacting with H+ in the solution, consuming H+. The chemical reaction equations are as follows:

$$KAlSi_3O_8 \,(\text{K-feldspar}) + 4H_2O + 4H^+ \to K^+ + Al^{3+} + 3H_4SiO_4$$
(7)
$$NaAlSi_3O_8 \,(\text{albite}) + 4H_2O + 4H^+ \to Na^+ + Al^{3+} + 3H_4SiO_4$$
(8)
$$CaAl_2Si_2O_8 \,(\text{anorthite}) + 8H^+ \to Ca^{2+} + 2Al^{3+} + 2H_4SiO_4$$
(9)
$$Al_2Si_2O_5(OH)_4 \,(\text{kaolinite}) + 6H^+ \to 2Al^{3+} + 2H_4SiO_4 + H_2O$$
(10)

In the case of alkaline soaking, the pH value of the solution decreases, and quartz and kaolinite are the main components participating in the corresponding reactions.
The chemical reaction equation is as follows:\n\n$$Si{O}_{2}({\\rm{Quartz}})+2O{H}^{-}\\to Si{O}_{3}^{2-}+{H}_{2}O$$\n(11)\n$$A{l}_{2}S{i}_{2}{O}_{5}{(OH)}_{4}+5{H}_{2}O+2O{H}^{-}\\to 2Al{(OH)}_{4}^{-}+2{H}_{4}Si{O}_{4}$$\n(12)\n\nIn case of neutral soaking, the pH value of the solution inclines weakly toward alkalinity. This is because hydrolysis of quartz and feldspar tends to weak alkalinity. Even if the kaolinite reaction consumes OH , its influence on the pH value is negligible compared to the previous two reactions. The specific chemical reaction equation is as follows:\n\n$$KAlS{i}_{3}{O}_{8}+8{H}_{2}O\\to {K}^{+}+Al{(OH)}_{4}^{-}+3{H}_{4}Si{O}_{4}$$\n(13)\n$$NaAlS{i}_{3}{O}_{8}+8{H}_{2}O\\to N{a}^{+}+Al{(OH)}_{4}^{-}+3{H}_{4}Si{O}_{4}$$\n(14)\n$$CaA{l}_{2}S{i}_{2}{O}_{8}+8{H}_{2}O\\to C{a}^{2+}+2Al{(OH)}_{4}^{-}+2{H}_{4}Si{O}_{4}$$\n(15)\n$$Si{O}_{2}+2{H}_{2}O\\to {H}_{4}Si{O}_{4}$$\n(16)\n$$A{l}_{2}S{i}_{2}{O}_{5}{(OH)}_{4}+5{H}_{2}O+2O{H}^{-}\\to 2Al{(OH)}_{4}^{-}+2{H}_{4}Si{O}_{4}$$\n(17)\n\nAccording to the water-rock chemical equation, the mineral composition of sandstone is eroded by the soaking process. Moreover, the granular structure is weakened and loosened, resulting in degradation of the mechanical properties of the sandstone samples. As the pH value increases, the degree of grain corrosion increases and the cohesive forces between particles are weakened, thus resulting in intensified wear and smoother fracture surfaces.\n\nIn conclusion, the change of sandstone microstructure is closely related to macroscopic changes to the shear mechanical parameters. The mineral composition of the sandstone samples and their microstructural changes correspond with the degradation to mechanical parameters caused by chemical solution soaking.\n\n## Conclusion\n\nThe following present several major conclusions drawn from the results and discussion above:\n\n1. 
(1)\n\nWith increase of the acidity or alkalinity in water-chemical solution, the shearing strength of sandstone is in linear decrease, indicating that the stronger acidity or alkalinity in water solution, the more serious water-chemical damage inside the sandstone after soaking, resulting in decrease of mechanical properties of sandstone.\n\n2. (2)\n\nShear-fracture morphology from three-dimensional scanning and SEM imaging showed that stronger acidity or alkalinity caused greater damage to the sandstone. High or low pH smoothed the fracture surface and caused the internal structure to be scaled in shape. This evidence indicates that etching effects exacerbated the dissolution, corrosion, and argillization of particles inside the sandstone and clay, thus weakening the sandstone\u2019s internal structure of sandstone.\n\n3. (3)\n\nThe three-dimensional morphological characteristic parameters Sh, Sq, SA, and S\u2206q, which were obtained with self-compiled Matlab scripts, can describe the degree of fluctuation and roughness of the fracture surface of sandstone quantitatively. With increasing acidity or alkalinity in the soaking solution, these characteristic parameters decrease gradually, indicating that the shear-fracture fluctuation gradually slows down. 
The roughness also decreases gradually, reflecting the influence of the water–chemical damage on the internal structure of sandstone.
\section{Introduction}
Recall that a group $G$ is \emph{perfect} if it is equal to its
own commutator subgroup $[G,G]$. This means that for any $g\in G$
there exist $r\in\N$ and $h_i,\bar h_i\in G$, $i=1\ld r$, such
that
\begin{equation} g=[h_1,\bar h_1]\ldots[h_r,\bar
h_r].\end{equation} When we consider the category of topological
groups, a fundamental question arises: can $h_i,\bar h_i$ be
chosen to depend continuously on $g$? More precisely, we
introduce the following notion.
\begin{dff}
A topological group $G$ will be called \emph{locally continuously
perfect } if there exist $r\in\N$, a neighborhood $U$ of $e$ in
$G$ and
continuous mappings $S_i:U\r G$, $\bar S_i:U\r G$, $i=1\ld r$,
such that \begin{equation}g=[S_1(g),\bar
S_1(g)]\ldots[S_{r}(g),\bar S_{r}(g)]\end{equation} for every
$g\in U$. Moreover, we assume that $S_i(e)=e$ for all $i$. The
smallest $r$ as above will be denoted by $r_G$.
\end{dff}
Clearly any connected locally continuously perfect group is
perfect.
An analogous notion of a \emph{locally smoothly perfect group} in
the category of (possibly infinite dimensional) Lie groups has
been studied in the paper by Haller and Teichmann \cite{ha-te},
where, for the first time, the problem of smooth dependence of
$h_i,\bar h_i$ on $g$ in (1.1) was put forward. The main purpose
of the present paper is to show that the property of being locally
continuously perfect is even more common for homeomorphism groups
of manifolds than its smooth counterpart in \cite{ha-te} for
diffeomorphism groups of manifolds. In both cases very deep (but
completely different from each other) facts are exploited: our
main result is based on deformations in the spaces of imbeddings
of manifolds (Edwards and Kirby \cite{ed-ki}), while
Haller and Teichmann used a simplicity theorem of Herman (\cite{Her73}) on
the diffeomorphism group of a torus and the small denominator
theory (or the KAM theory) in its background.
Throughout $M$ is a topological metrizable manifold, possibly with
boundary, and $\H(M)$ denotes the path connected identity
component of the group of all compactly supported homeomorphisms
of $M$ endowed with the graph topology (\cite{KM}) (or the
majorant topology \cite{ed-ki}). By a \emph{ball} in $M$ we will
mean a relatively compact open ball imbedded together with its
closure in $M$. Similarly we define a half-ball if $M$ has
boundary. For $M$ compact, let $d_M$ be the smallest integer such that
$M=\bigcup_{i=1}^{d_M}B_i$ where $B_i$ is a ball or half-ball for
each $i$.
\begin{thm} If $M$ is compact, then $\H(M)$ is a locally continuously
perfect group (even more, it satisfies Def. 2.1 below) with
$r_{\H(M)}\leq d_M$. In particular, $\H(M)$ is perfect and simple
(if $M$ is connected).
\end{thm}
The fact that $\H(M)$ is perfect is an immediate consequence of
\cite{Mat71} and \cite{ed-ki}, Corollary 1.3. A special case was
already proved by Fisher \cite{fis}. Note that if we drop the
compactness assumption, $\H(M)$ is also perfect in view of an
argument based on Theorem 5.1 (also Theorem 5.1 in \cite{ed-ki}).
The group $\H(M)$ is simple as well, see e.g. \cite{li}.
\begin{thm} Let $M$ be an open manifold such that $M=\intt(\bar M)$, where $\bar M$
is a compact manifold with boundary. Then $\H(M)$ is a locally
continuously perfect group (and fulfils Def. 2.1). In particular,
$\H(M)$ is perfect and simple. Furthermore, $r_{\H(M)}\leq
d_{M}+2$. Here $d_M$ stands for the smallest integer such that
$M=P\cup \bigcup_{i=1}^{d_M}B_i$ where $B_i$ is an open ball for
each $i$ and $P$ is a collar neighborhood of the boundary.
\end{thm}
It is doubtful whether $\H(M)$ is locally continuously
perfect without the assumption of Theorem 1.3. Observe that
Theorems 1.2 and 1.3 are also true for isotopies (Corollary
3.3).
In the next three sections we present miscellaneous notions,
examples, facts and problems related to locally continuously
perfect groups. The proofs of Theorems 1.2 and 1.3, making use of
subtle and difficult techniques of Edwards and Kirby in
\cite{ed-ki}, are presented in section 5. The case of
$G$-equivariant homeomorphisms is investigated in section 6.
\section{Relative notions and basic lemma}
In order to describe the structure of homeomorphism groups of
manifolds it will be useful to strengthen Def. 1.1 slightly.
\begin{dff}
A topological group $G$ is \emph{locally continuously perfect (in a stronger sense)}
if there are $r\in\N$, a neighborhood $U$ of $e\in G$, elements
$h_1\ld h_r\in G$ and continuous mappings $S_i:U\r G$ with
$S_i(e)=e$, $i=1\ld r$, satisfying \begin{equation}
g=[S_1(g),h_1]\ldots[S_r(g),h_r]\end{equation} for all $g\in U$.
\end{dff}
Let us ``globalize'' the notion of local continuous perfectness.
\begin{dff}
A group $G$ is called \emph{uniformly perfect} (see, e.g., \cite{tsu2}),
if $G$ is perfect and
there is $r\in\N$ such that any $g\in G$ can be expressed by (1.1)
for some $h_i,\bar h_i\in G$, $i=1\ld r$. Next, a topological
group $G$ is said to be \emph{continuously perfect} if there
exist $r\in\N$ and continuous mappings $S_i:G\r G$, $\bar S_i:G\r
G$, $i=1\ld r$, satisfying the equality (1.2) for all $g\in G$.
\end{dff}
Of course, every continuously perfect group is uniformly perfect.
In the early 1970's Thurston proved that the identity component of
the group of compactly supported diffeomorphisms of class
$C^{\infty}$ of a manifold $M$ is perfect and simple (see
\cite{thu74}, \cite{Ban97}). The proof is based on Herman's
theorem \cite{Her73}. Next, Thurston's theorem was extended to
groups of $C^r$-diffeomorphisms where $r\neq\dim(M)+1$
(\cite{Mat74}), and to classical diffeomorphism groups
(\cite{Ban97},
\cite{ry5}).
Recently, the problem of uniform perfectness of diffeomorphism
groups has been studied in \cite{bip}, \cite{tsu2} and
\cite{ry6}. In contrast to the problem of perfectness and
simplicity, the obtained results depend essentially on the
topology of the underlying manifold. However, in most cases (with
some exceptions presented below)
difficult open problems arise whether the groups in question
satisfy Def. 1.1, 2.1, or 2.2, or whether they are locally smoothly perfect.
The following type of fragmentations is important when studying
groups of homeomorphisms.
\begin{dff} Let $\U$ be an open covering of $M$.
A subgroup $G\s \H(M)$ is \emph{locally continuously factorizable with respect to $\U$}
if for any finite subcovering $(U_i)_{i=1}^d$ of $\U$, there
exist a neighborhood $\P$ of $\id\in G$ and continuous mappings $\sigma_i:\P\r G$, $i=1\ld d$,
such that for all $f\in
\P$ one has \begin{equation*}f=\sigma_1(f)\ldots \sigma_d(f),\quad \supp(\sigma_i(f))\s U_i, \forall i.
\end{equation*}
\end{dff}
Given a subset $S\s M$, by
$\H_S(M)$ we denote the path connected identity component of the
subgroup of all elements of $\H(M)$ with compact support contained
in $S$.
Using the Alexander trick, we have that $\H(\R^n)$ coincides with
the group of all compactly supported homeomorphisms of $\R^n$. In
fact, if $\supp(g)$ is compact, we define an isotopy $g_{t} :
\mathbb{R}^{n} \rightarrow \mathbb{R}^{n}$, $t \in I$, from the
identity to $g$, by
\begin{equation*}
g_{t}(x)= \left\{
\begin{array}{lcl}
tg\left( \frac{1}{t}x \right)& for & t>0\\
x&for&t=0.
\end{array} \right.
\end{equation*}
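The endpoint conditions $g_1=g$, $g_0=\id$, and the fact that $g_t$ is the identity outside the ball of radius $t$, can be checked numerically. A minimal Python sketch; the homeomorphism `g` below is our own toy example supported in $[-1,1]$, not taken from the text:

```python
def g(x):
    # a toy compactly supported homeomorphism of R: identity outside [-1, 1]
    return x + 0.25 * (1 - x * x) * x if abs(x) < 1 else x

def alexander(t, x):
    # the Alexander isotopy: g_t(x) = t * g(x / t) for t > 0, and g_0 = id
    return t * g(x / t) if t > 0 else x

# endpoints of the isotopy: g_1 = g and g_0 = id
assert alexander(1.0, 0.5) == g(0.5)
assert alexander(0.0, 0.5) == 0.5
# g_t is the identity outside the ball of radius t
assert alexander(0.25, 0.5) == 0.5
```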
In particular, for every ball $B$ in $M$ the group $\H_B(M)$
consists of all homeomorphisms compactly supported in $B$.
The following fact, with a straightforward proof, plays a basic
role in studies on homeomorphism groups.
\begin{lem}\cite{Mat71}(Basic lemma)
Let $B\s M$ be a ball and $U\s M$ be an open subset such that
$\cl(B)\s U$. Then there are $\phi\in\H_U(M)$ and a continuous
mapping $S:\H_B(M)\r \H_U(M)$ such that $h=[S(h),\phi]$ for all
$h\in\H_B(M)$.
\end{lem}
\begin{proof} First choose a larger ball $B'$ such that $\cl(B)\s
B'\s\cl(B')\s U$. Next, fix $p\in\pp B'$ and set $B_0=B$. There
exists a sequence of balls $(B_k)_{k=1}^{\infty}$ such that
$\cl(B_k)\s B'$ for all $k$, the family $(B_k)_{k=0}^{\infty}$ is
pairwise disjoint, locally finite in $B'$, and $B_k\r p$ when
$k\r\infty$. Choose a homeomorphism $\phi\in\H_U(M)$ such that
$\phi(B_{k-1})=B_k$ for $k=1,2,\ldots$. Here we use the fact
that $\H_U(M)$ acts transitively on the family of balls in $B'$
(c.f. \cite{hir}).
Now we define a continuous homomorphism $S:\H_B(M)\r\H_U(M)$ by
the formula
$$ S(h)=\phi^kh\phi^{-k}\quad\hbox{on}\,B_k,\ k=0,1,\ldots$$
and $S(h)=\id$ outside $\bigcup_{k=0}^{\infty}B_k$. It is clear
that $h=[S(h),\phi]$, as required.
\end{proof}
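The algebra of the shift construction can be test-driven in a simplified model: take $\phi(x)=x+1$ on $\R$ (not compactly supported, so this only illustrates the commutator identity, not the lemma's hypotheses), $B_k=[k,k+1)$, and $h$ supported in $B_0$. A Python sketch with names of our own choosing:

```python
from math import sqrt

def make_S(h):
    # S(h) acts as the translated copy of h on each B_k = [k, k+1), k >= 0,
    # and as the identity elsewhere; this models S(h) = phi^k h phi^{-k} on B_k
    def S(x):
        k = int(x // 1)
        return h(x - k) + k if k >= 0 else x
    return S

h = lambda x: x * x if 0 <= x < 1 else x        # homeomorphism supported in B_0
h_inv = lambda x: sqrt(x) if 0 <= x < 1 else x
phi = lambda x: x + 1                           # the shift with phi(B_{k-1}) = B_k
phi_inv = lambda x: x - 1
S, S_inv = make_S(h), make_S(h_inv)

# the commutator [S(h), phi] = S(h) phi S(h)^{-1} phi^{-1}
c = lambda x: S(phi(S_inv(phi_inv(x))))

assert abs(c(0.3) - h(0.3)) < 1e-12   # reproduces h on B_0
assert abs(c(2.37) - 2.37) < 1e-12    # identity on B_k for k >= 1 (telescoping)
assert c(-0.5) == -0.5                # identity below B_0
```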
\begin{rem} (1) The above reasoning appeared, perhaps for the first time, in Mather's paper
\cite{Mat71}. Actually Mather proved also the acyclicity of
$\H(\R^n)$. Obviously, \cite{Mat71} and Lemma 2.4 are no longer
true for $C^1$ homeomorphisms. However, Tsuboi brilliantly
improved this reasoning and adapted it for $C^r$-diffeomorphisms
with small $r$, see \cite{tsu89}.
(2) It is likely that the basic lemma is no longer true for
$\H(M)$ instead of $\H_B(M)$ and $\H_U(M)$. In fact, consider
$\H(\R^n)$ and the weaker Def. 1.1. If one tried to repeat the proof
of Lemma 2.4 then one would have continuous maps $S, \bar
S:\H(\R^n)\supset U\r\H(\R^n)$, where $\bar S(h)$ would play a
role of the shift homeomorphism $\phi$ for $h\in\H(\R^n)$. But
then $\bar S(h)$ depends somehow on the support of $h$. On the
other hand, there are elements of $\H(\R^n)$ arbitrarily close to
the identity with arbitrarily large support. This would spoil the
continuity of $\bar S$.
\end{rem}
Given a foliated manifold $(M,\F)$, a mapping $h:M\r M$ is
\emph{leaf preserving} if $h(L)\s L$ for all $L\in\F$. Let
$\H(M,\F)$ be the path connected identity component of the group
of all leaf preserving homeomorphisms of $(M,\F)$.
\begin{cor} Let $\F_k=\{\R^k\t\{pt\}\}$, $k=1\ld n-1$, be the
product foliation of $\R^n$. If $B=I\t \R^{n-k}$ and $U=J\t
\R^{n-k}$, where $I, J\s\R^k$ are open intervals such that
$\cl(I)\s J$, then there exist $\phi\in\H_U(\R^n,\F_k)$ and a
continuous mapping $S:\H_B(\R^n,\F_k)\r \H_U(\R^n,\F_k)$ such that
$h=[S(h),\phi]$ for all $h\in\H_B(\R^n,\F_k)$.
\end{cor}
\begin{proof}
Consider $\H_I(\R^k)$ and $\H_J(\R^k)$, and repeat the
construction of $\phi$ and $S(h)$ from the proof of Lemma 2.4.
Then multiply everything by $\id_{\R^{n-k}}$.
\end{proof}
\begin{cor} Assume that either
\begin{enumerate}\item M is a manifold with boundary and $B, U\s
M$ such that $B$ is a half-ball, $U$ is open with $\cl(B)\s U$; or
\item $M=N\t\R$, where $N$ is a manifold, and $B=N\t I$, $U=N\t J$
where $I, J\s\R$ are open intervals with $\cl(I)\s
J$.\end{enumerate} Then the assertion of Lemma 2.4 holds.
\end{cor}
The proof is analogous to the above.
\section{Examples}
First examples are provided by Theorems 1.2 and 1.3, and by
Theorem 6.2 below.
{\bf 1.} For a smooth manifold $M$ let $\D^r(M)$ stand for the
subgroup of all elements of $\diff^r(M)$ that can be joined to
the identity by a compactly supported isotopy in
$\diff^r(M)$, where $r=1\ld\infty$.
Let $G$ be a possibly infinite dimensional Lie group which is simultaneously a topological group.
If $G$ is also
locally smoothly perfect (see Def. 1 in Haller and Teichmann
\cite{ha-te}) then obviously $G$ is locally continuously perfect
(even satisfies Def. 2.1).
In particular, the following groups are locally smoothly perfect
(and a fortiori locally continuously perfect).
\begin{enumerate}
\item Any finite dimensional perfect Lie group $G$; we then have
$r_G\leq\dim G$ (\cite{ha-te}). \item Any real semisimple Lie
group $G$; then $r_G=2$ (\cite{ha-te}). \item $\D^{\infty}(\mathbb
T^n)$ for the torus $\mathbb T^n$ with $r_{\D^{\infty}(\mathbb
T^n)}\leq 3$ (\cite{Her73}). \item Let $M$ be a closed smooth
manifold being the total space of $k$ locally trivial bundles with
fiber $\mathbb T^{n(i)}$, $i=1\ld k$, such that the corresponding
vertical distributions span $TM$. Then $\D^{\infty}(M)$ is locally
smoothly perfect. In particular, the assumption is satisfied for
odd dimensional spheres and for any compact Lie group $G$, and one
has $r_{\D^{\infty}(\mathbb S^3)}\leq 18$ and
$r_{\D^{\infty}(G)}\leq 3(\dim G)^2$ (see \cite{ha-te}).
\end{enumerate}
\begin{rem}
To obtain the theorem mentioned in (4) Haller and Teichmann used a
fruitful method of decomposing diffeomorphisms into fiber
preserving ones. This method enabled an application of Herman's
deep theorem stating that $\D^{\infty}(\mathbb T^n)$ is not
only perfect and simple but also locally smoothly perfect. Notice
that this method was already considered in the ``generic'' case of
$M=\R^n$ in \cite{ry2} to study the (still open) problem of the
perfectness of $\D^{n+1}(M^n)$ by means of the possible
perfectness property of the group of leaf preserving
$C^r$-diffeomorphisms with $r$ large. For a hypothetical proof of
such a result one would apply Mather's proof of the simplicity of
$\D^r(M^n)$, $r\neq n+1$, from \cite{Mat74}. See also \cite{Mat74}
III, \cite{ry1}, \cite{LR} for the problem of the perfectness of
leaf preserving diffeomorphism groups.
Notice as well that recently Tsuboi in \cite{tsu3} used a similar
method in the proofs of
perfectness theorems for groups of real-analytic
diffeomorphisms in absence of the fragmentation property.
\end{rem}
It seems likely that the groups $\D^r(\R^n)$ with small $r$
(depending on $n$) would be continuously perfect. By using his own
method Tsuboi in \cite{tsu89} generalized Mather's method from
\cite{Mat71}, reproved the simplicity theorem for
$C^r$-diffeomorphisms with $1\leq r\leq n$ (originally proved by
Mather \cite{Mat74}, II), and showed the vanishing of lower order
homologies of the groups $\D^r(\R^n)$. However, a possible
analysis of a very technical proof in \cite{tsu89} is beyond the
scope of the present paper. We may also ask whether $\D^r(\R^n)$
is (locally) $C^r$-smoothly perfect.
\medskip
{\bf 2.} It is very likely that theorems analogous to Theorems 1.2
and 1.3 can be obtained for the groups of Lipschitz homeomorphisms
on Lipschitz manifolds. See Theorem 2.2 and other results in Abe
and Fukui \cite{AF01}.
\medskip
{\bf 3.} Now we consider permanence properties of locally
continuously perfect groups. These properties provide further
examples of such groups.
Let $H\s G$ be a subgroup and $G$ be locally continuously perfect
with continuous mappings $S_i:U\r G$, $\bar S_i:U\r G$,
$i=1\ld r$, satisfying (1.2). If $S_i(U\cap H)\s H$ and $\bar
S_i(U\cap H)\s H$ for all $i$ then $H$ is also locally
continuously perfect. Corollary 2.6 illustrates this situation.
If $G$ and $H$ are locally continuously perfect groups then so is
their product $G\t H$.
For a compact manifold $M$ and a topological group $G$, let
$\C(M,G)$ stand for the group of continuous maps $M\r G$ with the
pointwise multiplication and the compact-open topology. Note that
$\C(M,G)$ can be viewed as an analogue of the current group (c.f.
\cite{KM}).
\begin{prop} If $G$ is locally continuously perfect then so is $\C(M,G)$ and $r_{\C(M,G)}=r_G$.
\end{prop}
\begin{proof} Let $S_i:U\r G$, $\bar S_i:U\r G$,
$i=1\ld r$, be as in Def. 1.1. Set $\U=\{f\in\C(M,G): f(M)\s U\}$
and define continuous maps $S_i^{\C}:\U\r \C(M,G)$, $\bar
S_i^{\C}:\U\r \C(M,G)$, $i=1\ld r$, by the formulae
$S_i^{\C}(f)(x)=S_i(f(x))$, where $f\in\U$, $x\in M$, and
similarly for $\bar S_i^{\C}$. It follows that
$$ \prod_i[S_i^{\C}(f),\bar S_i^{\C}(f)](x)=\prod_i[S_i(f(x)),\bar
S_i(f(x))]=f(x)$$ for all $x\in M$. Thus
$f=\prod_i[S_i^{\C}(f),\bar S_i^{\C}(f)]$, as required. Observe
that for all $x\in M$ and $f\in\C(M,G)$, if $f(x)=e$ then
$S^{\C}_i(f)(x)=e$ for all $i$.
\end{proof}
For a topological group $G$ we denote by $\P G=\{f:I\r G:\,
f(0)=e\}$ the path group.
\begin{cor}
Theorems 1.2 and 1.3 hold for $\P\H(M)$. In
other words, these theorems are true for isotopies.
\end{cor}
Let $G$ be a compact Lie group. Given a principal $G$-bundle
$p:M\r B_M$ the \emph{gauge group} $\gau(M)$ is the group of all
$G$-equivariant mappings of $M$ over $\id_{B_M}$. That is,
$\gau(M)$ is the space of $G$-equivariant mappings
$\C(M,(G,\conj))^G$. It follows that $\gau(M)$ identifies with
$\C(B_M\leftarrow M[G,\conj])$, the space of sections of the
associated bundle $M[G,\conj]$. Consequently, any $f\in\gau(M)$ in
a trivialization of $p$ over $B=B_i\s B_M$ identifies with a
mapping $f^{(i)}:B_i\r G$ such that $f(x)=x.f^{(i)}(p(x))$.
\begin{prop}
Let $G$ be a compact Lie group and let $p:M\r B_M$ be a principal
$G$-bundle with $B_M$ compact. Then $\gau(M)$ is locally
continuously perfect provided $G$ is so, and $r_{\gau(M)}\leq
d_{B_M} r_G$, where $d_{B_M}$ is as in Theorem 1.2, applied to the
base $B_M$.
\end{prop}
\begin{proof}
Let $(B_i)_{i=1}^d$ be a covering of $B_M$ by balls. Choose another
covering by balls $(B_i')_{i=1}^d$ with $\cl(B_i')\s B_i$ for all
$i$. We identify $\L(G)$, the Lie algebra of $G$, with $\rz^q$,
$q=\dim G$, by means of a basis $(X_1\ld X_q)$ of $\L(G)$. Let
$\Phi:\rz^q\supset V\r U\s G$ be a chart given by $\Phi(t_1\ld t
_q)=(\exp\,t_1X_1)\ldots(\exp\,t_qX_q)$. Suppose that
$h\in\gau(M)$ is so small that the image of $ h^{(1)}$ is in $U$.
We let $\tilde h^{(1)}=\Phi^{-1}\circ\ h^{(1)}$.
By using bump functions for
$(B_1',B_1)$ we modify $\tilde h^{(1)}$ and compose the resulting
map with $\Phi$. Consequently, we get $g_1\in\gau(M)$ such that
$\supp(g_1)\s p^{-1}(B_1)$ and $g^{(1)}_1=h^{(1)}$ on $B_1'$.
Moreover, $g_1$ depends continuously on $h$. Now take
$f_1=g_1^{-1}h$. Then $f_1\in\gau(M_{1})$, where $M_1=M\setminus
B_1'$ is a compact manifold with boundary endowed with the
coverings $(B_i\cap M_1)_{i=1}^d$ and $(B_i'\cap M_1)_{i=1}^d$.
Note that $f_1=\id$ on $\p B'_1$. Taking possibly smaller $h$ we
may continue the procedure. Finally, we get a neighborhood $\U$ of
$e\in\gau(M)$ such that for all $h\in\U$ we get a uniquely
determined decomposition $h=h_1\ldots h_d$ with $\supp(h_i)\s
p^{-1}(B_i)$ and $h_i$ depending continuously on $h$ for all $i$.
Thus, possibly shrinking $\U$, in view of Proposition 3.2 the
claim follows.
\end{proof}
\section{Remarks on conjugation-invariant norms}
The notion of the conjugation-invariant norm is a basic tool in
studies on the structure of groups. Let $G$ be a group. A
\wyr{conjugation-invariant norm} (or \emph{norm} for short) on $G$
is a function $\nu:G\r[0,\infty)$ which satisfies the following
conditions. For any $g,h\in G$ \begin{enumerate} \item $\nu(g)>0$
if and only if $g\neq e$; \item $\nu(g^{-1})=\nu(g)$; \item
$\nu(gh)\leq\nu(g)+\nu(h)$; \item $\nu(hgh^{-1})=\nu(g)$.
\end{enumerate}
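A concrete finite toy model of such a norm, in the spirit of the fragmentation norm below, is the support norm $\nu(g)=\#\{i:\,g(i)\neq i\}$ on the symmetric group $S_n$: here $\supp(gh)\s\supp(g)\cup\supp(h)$ gives (3), and $\supp(hgh^{-1})=h(\supp(g))$ gives (4). The four axioms can be verified by brute force (our own illustration, not from the text):

```python
from itertools import permutations

def compose(g, h):
    # (g o h)(i) = g(h(i)); a permutation is a tuple with g[i] the image of i
    return tuple(g[i] for i in h)

def inverse(g):
    inv = [0] * len(g)
    for i, gi in enumerate(g):
        inv[gi] = i
    return tuple(inv)

def nu(g):
    # support norm: the number of non-fixed points of g
    return sum(1 for i, gi in enumerate(g) if gi != i)

G = list(permutations(range(4)))
e = tuple(range(4))

for g in G:
    assert (nu(g) > 0) == (g != e)                              # axiom (1)
    assert nu(inverse(g)) == nu(g)                              # axiom (2)
    for h in G:
        assert nu(compose(g, h)) <= nu(g) + nu(h)               # axiom (3)
        assert nu(compose(compose(h, g), inverse(h))) == nu(g)  # axiom (4)
```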
Recall that a group is called \emph{ bounded} if it is bounded
with respect to any bi-invariant metric. It is easily seen that
$G$ is bounded if and only if any conjugation-invariant norm on
$G$ is bounded.
Let us introduce the following norm.
\begin{dff}
Let $G$ be a connected topological group and let $U$ be a neighborhood of $e\in G$.
\begin{enumerate}\item By $\tilde U$ we denote the ``saturation'' of $U$ w. r. t.
$\conj_g$ for $g\in G$ and
the inversion $i$, that is $\tilde U=\bigcup_{g\in G}(gUg^{-1}\cup gU^{-1}g^{-1})$. Then for $g\in
G$, $g\neq e$, by $\mu^{ U}(g)$ we denote the smallest $s\in\N$
such that $g=g_1\ldots g_s$ with $g_i\in \tilde U$ for $i=1\ld
s$. It is easily seen that $\mu^{ U}$ is a conjugation-invariant
norm. \item We say that $G$ is \emph{continuously decomposable
with respect to $U$} if there are $s\in\N$ and continuous mappings
$\varrho_i:G\r \tilde U$, $i=1\ld s$, such that
$g=\varrho_1(g)\ldots\varrho_s(g)$ for all $g\in G$. In
particular, $\mu^U(g)\leq s$ for all $g\in G$, $g\neq e$.
\end{enumerate}
\end{dff}
Clearly if $U\s V$ then $\mu^V\leq\mu^U$.
It is straightforward that if $G$ is locally continuously perfect
and continuously decomposable w.r.t. $U$ as in Def. 1.1, then $G$
is continuously perfect. Likewise, if $G\s\H(M)$ satisfies Def.
2.3 and if $G$ is continuously decomposable w.r.t. $\P$ as in Def.
2.3, then $G$ is continuously factorizable. However, we do not
have any example of a homeomorphism group being continuously
decomposable. Notice that, in view of Lemma 2.4, for any two balls
$B$ and $U$ in $M$ with $\cl(B)\s U$ the group $\H_B(M)$ is
continuously perfect ``in the group $\H_U(M)$'', but not in itself.
Recall now two classical examples of conjugation-invariant norms.
For $g\in[G,G]$ by the \emph{commutator length } of $g$,
$\cl_G(g)$, we mean the smallest $r$ as in (1.1). Observe that the
commutator length $\cl_G$ is a norm on $[G,G]$. In particular, if
$G$ is a perfect group then $\cl_G$ is a norm on $G$.
A subgroup $G\s\H(M)$ is \emph{factorizable} if for every $g\in G$
there are $g_1\ld g_d\in G$ such that $g=g_1\ldots g_d$ and $\supp(g_i)\s B_i$, where each
$B_i$ is a ball or a half-ball. Clearly any connected $G$
satisfying Def. 2.3 with respect to a family of balls is
factorizable. If $G$ is factorizable then we may introduce the
following \emph{ fragmentation norm} $\frag_G$ on $G$. For $g\in
G$, $g\neq\id$, we define $\frag_G(g)$ to be the least integer
$d>0$ such that $g=g_1\ldots g_{d}$ with $\supp(g_i)\s B_i$ for
some ball or half-ball $B_i$.
Let $\nu$ be a conjugation-invariant norm on a topological group
$G$. Then $G$ is called \emph{locally bounded with respect to
$\nu$} if there are $r\in\N$ and a symmetric neighborhood $U$
of $e\in G$ (i.e. $U=U^{-1}$) such that $\nu(g)\leq
r$ for all $g\in U$. The following obvious fact can be applied to
$\cl_G$ or to $\frag_G$.
\begin{cor} Let $G$ be a subgroup of $\H(M)$. If $G$ is locally bounded w. r. t. $\nu$
and the norm $\mu^{U}$ (Def. 4.1) is bounded where $U$ is as
above, then $G$ is bounded w. r. t. $\nu$.
\end{cor}
\section{Proofs of Theorems 1.2 and 1.3}
The proofs depend on the deformation properties for the spaces of
imbeddings obtained by Edwards and Kirby in \cite{ed-ki}. See
also Siebenmann \cite{sie}. First let us recall some notions and
the main theorem of \cite{ed-ki}. From now on $M$ is a metrizable
topological manifold and $I=[0,1]$. If $U$ is a subset of $M$, a
\emph{proper imbedding} of $U$ into $M$ is an imbedding $h: U
\rightarrow M$ such that $h^{-1}(\partial M)=U \cap \partial M$.
An \emph{isotopy} of $U$ into $M$ is a family of imbeddings
$h_{t}: U \rightarrow M$, $t \in I$, such that the map $h: U
\times I \rightarrow M$ defined by $h(x,t)=h_{t}(x)$ is
continuous. An isotopy is \emph{proper} if each imbedding in it is
proper. Now let $C$ and $U$ be subsets of $M$ with $C\subseteq
U$. By $I(U,C;M)$ we denote the space of proper imbeddings of $U$
into $M$
which equal the identity on $C$, endowed with the compact-open
topology; we abbreviate $I(U,\emptyset;M)$ to $I(U;M)$.
Suppose $X$ is a space with subsets $A$ and $B$. A
\emph{deformation of A into B}
is a continuous mapping $\varphi : A \times I\rightarrow X$ such that $\varphi|_{A\times 0}=\id_{A}$
and $\varphi(A\times 1) \subseteq B$. If $\P$ is a subset of
$I(U;M)$ and $\varphi:\P \times I\rightarrow I(U;M)$ is a deformation of $\P$,
we may equivalently view $\varphi$ as a map $\varphi:\P\times I\times U\rightarrow M$
such that for each $h\in \P$ and $t\in I$, the map $\varphi(h,t):U\rightarrow M$ is
a proper imbedding.
If $W\subseteq U$, a deformation $\varphi: \P\times I\rightarrow
I(U;M)$ is \emph{modulo W} if $\varphi(h,t)|_{W}=h|_{W}$ for all
$h\in \P$ and $t\in I$.
Suppose $\varphi: \P\times I \rightarrow
I(U;M)$ and $\psi: \Q\times I \rightarrow I(U;M)$ are deformations
of subsets of $I(U;M)$ and suppose that $\varphi(\P\times
1)\subseteq \Q$. Then the \emph{composition} of $\psi$ with
$\varphi$, denoted by $\psi \star \varphi$, is the deformation
$\psi \star \varphi :\P\times I\rightarrow I(U;M)$ defined by
\begin{equation}
\psi \star\varphi(h,t)= \left\{
\begin{array}{lcl}
\varphi(h,2t)& for & t\in [0,1/2]\\
\psi(\varphi(h,1),2t-1)& for &t\in [1/2,1].
\end{array}
\right.
\end{equation}
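The composition formula above is simply a concatenation of the two deformations run at double speed; its endpoint behaviour ($\psi\star\varphi$ starts at $\varphi(\cdot,0)$, passes through $\varphi(\cdot,1)$ at $t=1/2$, and ends at $\psi(\varphi(\cdot,1),1)$) can be illustrated with toy deformations acting on real numbers in place of imbeddings (a sketch with made-up deformations):

```python
def star(psi, phi):
    # psi * phi: run phi at double speed on [0, 1/2], then psi on [1/2, 1]
    def comp(h, t):
        return phi(h, 2 * t) if t <= 0.5 else psi(phi(h, 1), 2 * t - 1)
    return comp

# toy deformations: real numbers stand in for imbeddings
phi = lambda h, t: h * (1 - 0.5 * t)   # deforms h linearly to h / 2
psi = lambda h, t: h * (1 - t)         # deforms h linearly to 0

comp = star(psi, phi)
assert comp(4.0, 0.0) == 4.0   # starts at phi(h, 0) = h
assert comp(4.0, 0.5) == 2.0   # passes through phi(h, 1) at t = 1/2
assert comp(4.0, 1.0) == 0.0   # ends at psi(phi(h, 1), 1)
```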
The main result in \cite{ed-ki} is the following
\begin{thm}
Let $M$ be a topological manifold and let $U$ be a neighborhood in
$M$ of a compact subset $C$. For any neighborhood $\Q$ of the
inclusion $i:U\subset M$ in $I(U;M)$ there are a neighborhood $\P$
of $i\in I(U;M)$ and a deformation $\varphi:\P\t I\r \Q$ into
$I(U,C;M)$ which is modulo the complement of a compact
neighborhood of $C$ in $U$ and such that $\varphi(i,t)=i$ for all
$t$. Moreover, if $D_i\subset V_i$, $i=1\ld q$, is a finite family
of closed subsets $D_i$ with their neighborhoods $V_i$, then
$\varphi$ can be chosen so that the restriction of $\varphi$ to
$(\P\cap I(U,U\cap V_i;M))\t I$ assumes its values in $I(U,U\cap
D_i;M)$ for each $i$.
\end{thm}
Now we wish to show that $\H(M)$ is locally continuously
factorizable (Def. 2.3) provided $M$ is compact.
\begin{prop} Let $M$ be compact and let $(U_i)_{i=1}^d$ be an open
cover of $M$. Then there exist $\P$, a neighborhood of the
identity in $\H(M)$, and continuous mappings $\sigma_i:\P\r\H(M)$,
$i=1\ld d$, such that $h=\sigma_1(h)\ldots \sigma_d(h)$ and
$\supp(\sigma_i(h))\s U_i$ for all $i$ and all $h\in\P$; that is
$\H(M)$ satisfies Def. 2.3.
\end{prop}
\begin{proof} (See also \cite{ed-ki}.)
First we have to shrink the cover $(U_i)_{i=1}^d$ $d$ times, that
is we choose an open $U_{i,j}$ for every $i=1\ld d$ and $j=0\ld d$
with $U_{i,0}=U_i$ such that $\bigcup_{i=1}^dU_{i,j}=M$ for all
$j$ and such that $\cl(U_{i,j+1})\s U_{i,j}$ for all $i,j$. We
make use of Theorem 5.1 $d$ times with $q=1$. Namely, for $i=1\ld
d$ we have a neighborhood $\P_i$ of the identity in
$I(M,\bigcup_{\alpha=1}^{i-1}U_{\alpha,i-1}; M)$ and a deformation
$\phi_i:\P_i\t I\r \H(M)$ which is modulo $M\setminus U_{i,0}$ and
which takes its values in
$I(M,\bigcup_{\alpha=1}^{i}\cl(U_{\alpha,i}); M)$ and such that
$\phi_i(\id, t)=\id$ for all $t$. Here we apply Theorem 5.1 with
$C=\cl(U_{i,i})$, $U=U_{i,0}$,
$D_1=\bigcup_{\alpha=1}^{i-1}\cl(U_{\alpha,i})$ and
$V_1=\bigcup_{\alpha=1}^{i-1}U_{\alpha,i-1}$. Taking a
neighborhood $\P$ of id small enough, we have that
$\phi_d\star\cdots\star\phi_1$ restricted to $\P\t I$ is well
defined. For every $h\in\P$ we set $h_0=h$ and
$h_i=\phi_i\star\cdots\star\phi_1(h,1)$, $i=1\ld d$. It follows
that $h_d=\id$ and
$h=(h_0h_1^{-1})(h_1h_2^{-1})\ldots(h_{d-1}h_d^{-1})$. It suffices
to define $\sigma_i:\P\r\H(M)$ by $\sigma_i(h)=h_{i-1}h_i^{-1}$
for all $i$.\end{proof}
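The telescoping step at the end of the proof is pure group algebra and holds in any group, provided the factors $h_{i-1}h_i^{-1}$ are multiplied in increasing order of $i$. A brute-force check with $2\times 2$ matrices standing in for homeomorphisms (our own illustration):

```python
def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_inv(a):
    (p, q), (r, s) = a
    det = p * s - q * r
    return [[s / det, -q / det], [-r / det, p / det]]

identity = [[1.0, 0.0], [0.0, 1.0]]
h = [[2.0, 1.0], [1.0, 1.0]]
# an arbitrary interpolating chain h_0 = h, h_1, h_2, h_3 = identity
chain = [h, [[1.0, 1.0], [0.0, 1.0]], [[1.0, 0.0], [3.0, 1.0]], identity]

# sigma_i = h_{i-1} h_i^{-1}; the ordered product sigma_1 ... sigma_d telescopes to h
product = identity
for i in range(1, len(chain)):
    product = mat_mul(product, mat_mul(chain[i - 1], mat_inv(chain[i])))

assert all(abs(product[i][j] - h[i][j]) < 1e-9
           for i in range(2) for j in range(2))
```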
\medskip
\emph{Proof of Theorem 1.2}. Choose any finite cover
$(U_i)_{i=1}^d$ of $M$ by balls and half-balls. Next, fix another
cover of $M$ by balls and half-balls $(B_i)_{i=1}^d$ with
$\cl(B_i)\s U_i$ for all $i$. Then apply Proposition 5.2 to
$(B_i)_{i=1}^d$, Lemma 2.4 and Corollary 2.7(1) to each couple
$(B_i, U_i)$.\quad $\square$
\medskip
Recall the notion of the graph topology. Let $X$ and $Y$ be
Hausdorff spaces and let $\C(X,Y)$ be the space of all continuous
mappings $X\r Y$. For $f\in\C(X,Y)$ by $\graph_f:X\r X\t Y$ we
denote the graph mapping. The \emph{graph topology}
on $\C(X,Y)$ is given by the basis of
all sets of the form $\{f\in\C(X,Y):\, \graph_f(X)\s U\}$, where
$U$ runs over all open sets in $X\t Y$. The graph topology is
Hausdorff since it is finer than the compact-open topology. If $X$
is paracompact and $(Y,d)$ is a metric space then for
$f\in\C(X,Y)$ one has a basis of neighborhoods of the form
$\{g\in\C(X,Y):\, d(f(x), g(x))<\varepsilon(x),\ \forall x\in
X\}$, where $\varepsilon$ runs over all positive continuous
functions on $X$.
\medskip
\emph{Proof of Theorem 1.3}. In view of the assumption, $M$ is the
interior of a compact, connected manifold $\bar M$ with non-empty
(not necessarily connected) boundary $\p$. Then $\p$ admits a
collar neighborhood, that is an open subset $P$ of $M$, where
$P=\p\t(0,1)$. Here $\p\t[0,1]$ is imbedded in $\bar M$, and
$\p\t\{1\}$ is identified with $\p$.
Take a finite family of balls $(B_i)_{i=1}^d$ in $M$ and a collar neighborhood $P$
of $\p$ such
that $M=\bigcup_{i}B_{i}\cup P$. We wish to check that $\H(M)$
fulfils Def. 2.3 for $\{B_1\ld B_d,P\}$. We define balls $U_{i,j}$
for every $i=1\ld d$ and $j=0\ld d$ with $U_{i,0}=B_i$ such that
$\bigcup_{i=1}^dU_{i,j}\cup P=M$ for all $j$ and such that
$\cl(U_{i,j+1})\s U_{i,j}$ for all $i,j$. Now, proceeding as in
the proof of Prop.5.2 there exist a neighborhood $\P$ of
$\id\in\H(M)$ and continuous mappings $\sigma_i:\P\r\H(M)$, where
$i=0\ld d$, such that for all $h\in \P$ we have
$$h=\sigma_0(h)\sigma_1(h)\ldots\sigma_d(h),\quad \supp(\sigma_0(h))\s
P,\quad\supp(\sigma_i(h))\s B_i$$ for $i=1\ld d$. Fix a sequence
of reals from $(0,1)$
$$ \tilde a_1<\bar a_1<a_1<b_1<\bar b_1<\tilde b_1<\tilde
a_2<\cdots<\tilde a_k<\bar a_k<a_k<b_k<\bar b_k<\tilde
b_k<\cdots$$ tending to 1. For $j=1,2\ld $ let
$C_j=\p\t[a_j,b_j]$, $V_j=\p\t(\bar a_j,\bar b_j)$ and
$U_j=\p\t(\tilde a_j,\tilde b_j)$. In view of Theorem 5.1, for
every $j$ there exist a neighborhood $\P_j$ of the inclusion
$i_j:C_j\s U_j$ in $I(U_j;M)$ and a deformation $\varphi_j:\P_j\t
I\r I(U_j,C_j;M)$ which is modulo $M\setminus V_j$ and such
that $\varphi_j(i_j,t)=i_j$ for all $t\in I$. Shrinking $\P$ if
necessary, for any $h\in\P$ we may assume that
$\sigma_0(h)|_{U_j}\in\P_j$ for all $j$. Put $U=\bigcup U_j$,
$V=\bigcup V_j$ and let $D=\bigcup \p\t I_j$, where $I_j$ is an
arbitrary open interval with $\cl(I_j)\s(a_j,b_j)$ for all $j$.
Therefore, there exist a neighborhood $\P$ of $\id\in\H(M)$ in the
graph topology and a continuous mapping $\sigma_0^1:\P\r\H(M)$
given by
$$\sigma_0^1(h)|_{U_j}=\varphi_j(\sigma_0(h)|_{U_j},1),\quad j=1,2\ld$$
and $\sigma_0^1(h)|_{M\setminus U}=\sigma_0(h)|_{M\setminus U}$.
It follows that $\sigma_0^1(h)=\sigma_0(h)$ on $M\setminus V$ and
that $\supp(\sigma_0^1(h))\s M\setminus \cl(D)$. Set
$\sigma_0^2=(\sigma_0^1)^{-1}\sigma_0$. Then
$\sigma_0^2:\P\r\H(M)$ is continuous, and
$\sigma_0=\sigma_0^1\sigma_0^2$ with $\supp(\sigma_0^2)\s U$. Thus
we get a decomposition
$h=\sigma_0^1(h)\sigma_0^2(h)\sigma_1(h)\ldots\sigma_d(h)$ for all
$h\in\P$. By applying Lemma 2.4 to $\sigma_i$ and Corollary 2.7(2)
to $\sigma_0^j$, the claim follows. $\square$
\section{The case of $G$-equivariant homeomorphisms}
Let $G$ be a
compact Lie group acting on $M$. Let $\H_G(M)$ be the group of
all equivariant homeomorphisms of $M$ which are isotopic to the
identity through compactly supported equivariant isotopies.
Suppose now that $G$ acts freely on $M$. Then $M$ can be regarded
as the total space of a principal $G$-bundle $p:M\r B_M=M/G$
(cf.~\cite{br}).
Let $\C_c(\rz^m)$ (resp. $\C_B(\rz^m)$) denote the space of
continuous maps $u:\R^m\r\R$ with compact support (resp. with
support contained in $B$). Consider the semi-direct product group
$\H(\rz^m)\t_{\tau}\C_c(\rz^m)$, where
$\tau_h(u)=u\circ h^{-1}$ for $h\in
\H(\rz^m)$ and $u\in \C_c(\rz^m)$. Then we have
$$(h_1,u_1)\cdot(h_2,u_2)=(h_1\circ h_2, u_1\circ h_2^{-1}+u_2)$$
for all $h_1,h_2\in\H(\rz^m)$ and $u_1,u_2\in \C_c(\rz^m)$. For
$(h, u)\in\H(\rz^m)\t_{\tau}\C_c(\rz^m)$ we have $(h,
u)=(h,0)\cdot(\id,u)=(\id,u_1)\cdot(h,0)$, where $u_1= u\circ h$.
We may treat $h,u$ as elements of $\H(\rz^m)\t_{\tau}\C_c(\rz^m)$.
The main lemma in \cite{ry3} (Lemma 2.1), which has an elementary
but rather sophisticated proof, can be reformulated for our
purpose as follows.
\begin{lem} Let $B$ be a ball in $\R^m$. There are homeomorphisms $\phi^-, \phi^+,
\psi^-, \psi^+$ from $\H(\R^m)$, depending on $B$, and continuous
mappings $$v_1^-, v_1^+, v_2^-, v_2^+:\C_B(\R^m)\r\C_c(\R^m)$$
such that
$$u=[\phi^-,v_1^-(u)]^{-1}[\phi^+,v_1^+(u)]^{-1}[\psi^-,v_2^-(u)]
[\psi^+,v_2^+(u)]$$ for all $u\in\C_B(\R^m)$ in the
semi-direct product group
$\H(\rz^m)\t_{\tau}\C_c(\rz^m)$.
\end{lem}
In view of Lemma 6.1 we have
\begin{thm}
If $B_M$ is compact then the group $\H_G(M)$ is locally
continuously perfect. Moreover, $r_{\H_G(M)}\leq (4\dim
G+1)d_{B_M}$.
\end{thm}
\begin{proof} Let $(B_i)_{i=1}^d$ be a covering of $B_M$ by balls.
Let $P:\H_G(M)\r\H(B_M)$ be the
homomorphism given by $P(h)(p( x))=p(h(x))$, where $x\in M$. Let
$h\in\U$, where $\U$ is a neighborhood of $\id\in\H_G(M)$. Then
for $\U$ small enough $P(h)$ can be decomposed as $P(h)=g_1\cdots
g_d$ such that $g_i\in\H_{B_i}(B_M)$, $i=1\ld d$. Then each $g_i$
can be lifted to $h_i\in\H_G(M)$, i.e. $P(h_i)=g_i$. Thus, due to
Theorem 1.2 it suffices to consider $f=h h_d^{-1}\cdots
h_1^{-1}\in\ker P=\gau(M)$.
Proceeding as in the proof of Prop. 3.4, we can write $f=f_1\cdots
f_d$ where $f_i\in\gau(M)$ and $\supp(f_i)\s p^{-1}(B_i)$.
Shrinking $\U$, we may assume that
$f_i\in\H(\rz^m)\t_{\tau}\C_c(\rz^m,\rz^q)$ for all $i$, where
$q=\dim G$.
We can extend the semi-direct product structure from
$\H(\rz^m)\t_{\tau}\C_c(\rz^m)$ to
$\H(\rz^m)\t_{\tau}\C_c(\rz^m,\rz^q)$, where $\C_c(\rz^m,\rz ^q)$
is the space of compactly supported $\rz^q$-valued functions, by
the formulae $(h,(v_1\ld v_q))=(\id,(v_1\ld v_q)\circ
h)\cdot(h,0)$ and $(\id,(v_1\ld v_q))=(\id,v_1)\cdots(\id,v_q)$.
In view of Lemma 6.1, each $(\id,v_i)$ is written as a product of
four commutators from $\H(\rz^m)\t_{\tau}\C_c(\rz^m,\rz^q)$ with
factors depending continuously on $f$. This completes the proof.
\end{proof}
\begin{cor} Let $M$ be a topological $G$-manifold with one orbit
type. Then $\H_G(M)$ is a locally continuously perfect group.
\end{cor}
\begin{proof}
Indeed, if $H$ is the isotropy group of a point of $M$ then
$M^H=\{x\in M: \, H\,\,\hbox{fixes}\,\,x\}$ is a free
$N^G(H)/H$-manifold, where $N^G(H)$ is the normalizer of $H$ in
$G$. Since $\H_G(M)$ is isomorphic and homeomorphic to
$\H_{N(H)/H}(M^H)$, the corollary follows from Theorem 6.2.
To explain the relation $\H_G(M)\cong\H_{N(H)/H}(M^H)$, recall
basic facts on $G$-spaces with one orbit type (see Bredon
\cite{br}, section II.5). Let $G$ be a compact Lie group and let $X$
be a $T_{3\frac{1}{2}}$ $G$-space with one orbit type $G/H$ (that
is, all isotropy subgroups are conjugated to $H$). Set $N=N^G(H)$
and $X^H=\{x\in X: h.x=x\,,\forall h\in H\}$. Then we have the
homeomorphism $G\t_NX^H\ni[g,x]\mapsto g(x)\in X$. That is, the
total space of the bundle over $G/N$ with the standard fiber $X^H$
associated to the principal $N$-bundle $G\r G/N$ is $G$-equivalent
to $X$. In particular, the inclusion $X^H\subset X$ induces a
homeomorphism $X^H/N\cong X/G$.
Denote $K=N/H$. Given an arbitrary $G$-space $Y$, there is a
bijection $\kappa_{X,Y}$ between $G$-equivariant mappings $X\r Y$
and $K$-equivariant mappings $X^H\r Y^H$ such that
$\kappa_{X,Y}(f)=f|_{X^H}$.
Notice that $K$ acts freely on $X^H$ and the homeomorphism
$X^H/N\cong X/G$ induces the homeomorphism $X^H/K\cong X/G$. In
particular, we get the principal $K$-bundle $\pi_X:X^H\r X/G$,
where $\pi_X$ is the restriction to $X^H$ of the projection
$\pi:X\r X/G$.
\end{proof}
Eugene Joseph McCarthy (born March 29, 1916 in Watkins, Minnesota; died December 10, 2005 in Washington, D.C.) was an American politician, a member of the Democratic Party, a member of the House of Representatives, a senator, a candidate in the Democratic Party presidential primaries, and an independent candidate in presidential elections.
He came from a Catholic family, with Irish roots on the side of his father, Michael J. McCarthy, and German roots on the side of his mother, Anna née Baden. He studied at the University of Minnesota and worked as a teacher, and later also as a university lecturer in economics and education. From 1949 to 1959 he sat in the House of Representatives, and in 1959 he became a senator. He was an opponent of Joseph McCarthy, the initiator of the American anti-communist purges who bore the same surname. Over his years in the U.S. Senate he took part in the work of the foreign relations committee and in time gained a reputation as the leading anti-war politician in the Democratic Party.
In the 1960 Democratic presidential primaries he supported the liberal candidate Adlai Stevenson, who nevertheless had to give way to John Kennedy. In 1968 McCarthy tried his own hand in the contest for the Democratic nomination; in March of that year he won the New Hampshire primary under the slogan of a quick end to the Vietnam War. This result, together with the simultaneous entry of the popular Robert Kennedy, contributed to Lyndon Johnson's decision not to seek re-election. Ultimately, after Kennedy's assassination in June 1968, the Democratic nominee was Vice President Hubert Humphrey, who lost the election to Richard Nixon.
McCarthy held his Senate seat until 1971. He then took up political writing, but in 1972 he again entered the Democratic primaries. After his defeat he left the party, and four years later he ran in the presidential election as an independent, winning about 740,000 votes (less than 1%). In 1982 he unsuccessfully sought election to the Senate again. In 1988 he ran for president with the backing of a bloc of small left-wing parties, including the Minnesota Progressive Party and the Consumer Party of Pennsylvania. He subsequently returned to the Democratic Party and sought its nomination for a third time in 1992, but was excluded from the field of candidates by the party leadership. In 2000 he was active in the campaign to admit the Green Party candidate, Ralph Nader, to the televised presidential debates.
During this period he continued his work as a political writer; in the 1970s he was a professor of political science at the New School of Social Research. He published more than a dozen books, mainly on politics, but also, among other things, a collection of poems.
The Château de Rothenbourg stands in the commune of Philippsbourg, in the Moselle department.
Geography
The castle sits on the height called the Rothenberg or Rodenberg, to the north of the Falkenstein and the Helfenstein. It overlooks the valley of the Rothenbach, a stream that comes down from the Erbsenthal and feeds the Grafenweiher a few kilometres downstream. Located a little more than three kilometres as the crow flies from Neunhoffen (Dambach, Bas-Rhin), it is also the easternmost castle of the Moselle department and of the former Lorraine region.
Toponymy
Earlier recorded forms: Rothenburg (912); Rothemburg (1353); Rotenburg (1369); Retenburg (1698).
History
This castle is said to have been built at the very beginning of the century by Otbert or Albert, the thirty-seventh bishop of Strasbourg. Having backed King Charles the Simple's claims to the inheritance of Louis III, while the city of Strasbourg declared for those of Conrad of Franconia, he found himself at odds with his subjects. To break their resistance he resorted to excommunication, which provoked a general uprising before which he had to flee. Around the year 912-913 he is said to have taken refuge at the Rathburg, which may be Rothenburg, and he was assassinated there shortly afterwards.
Rothenburg belonged in part to Count Walram of Deux-Ponts-Bitche, who in 1353 granted half of it in fief to Gerhard Harnasch of Weisskirchen. The latter, most likely one of the Raubritter (robber knights) who then infested the borders of Alsace and Lorraine, fell out with the burghers of Strasbourg, who came to attack him in his lair. In 1369, after seizing the castle, they dismantled it, and it does not appear ever to have been rebuilt since.
At the end of the century the Rotenburg still formed part of the seigneury of Bitche. It is possible that the fief later passed, together with the Falkenstein, into the hands of the Hanau-Lichtenberg family.
The castle may have given its name to the Blick de Rothenburg family, which held several fiefs from the lords of Bitche and died out in 1749.
Agrimonia eupatoria, or 'Sticklewort' as it is commonly known, is traditionally used as a medicinal herb as well as a cottage garden plant.
A woody perennial with an erect growth habit, it bears tall flower spikes, hence another common name, 'Church Steeples'. The flowers are fragrant and make a welcome addition to the garden.
The plant is used to produce a yellow dye and was used in a number of traditional herbal remedies.
The essential oils are extracted by distillation and are used in a number of herbal therapeutic treatments. It has also been used as tea or infusion and is said to have a fragrance similar to apricots.
It is still widely used in Chinese medicine.
A relatively easy-to-grow herb, it prefers a position with morning full sun and afternoon shade, and a humus-rich yet well-drained soil.
It will require water through spring and summer in dry climates.
You can grow Agrimony from seeds, although seedling plants are somewhat easier as seeds can be difficult to germinate.
The plant will reach around 60 cm in height, a woody stem with deep green foliage. Yellow flowers on spikes in spring are fragrant with a citrus to apricot perfume. The stems develop a downy appearance and the plant makes an interesting addition to a herb garden or cottage garden.
The flowers fade and leave small burs. Leaves are harvested for use as needed; flowers are cut as they begin to open if used for the flower essence.
Q: Injective Homomorphism Example. Ok, so there is this problem that quite confuses me because the example seems too easy (at least that's what I think):
Find an example of a group homomorphism $\Phi: S_3 \rightarrow S_4$ that is injective.
I think the example is just $\Phi(\sigma)=\sigma$, this is the only example I can think about but I am not sure if it is correct. Can anyone tell me if this example is valid? If not what other examples could there be?
A: Your example is (almost) right. If you concretely take $S_n$ as the permutations of $\{1,2,...,n\}$ then any $\sigma \in S_3$ can be mapped to $g(\sigma) \in S_4$ by having $g(\sigma)(4)=4,$ while for $k=1,2,3$ put $g(\sigma)(k)=\sigma(k).$
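The construction can also be verified mechanically. Here is a short script (my own addition, not part of the answer) that checks that the map fixing $4$ is an injective homomorphism, with permutations stored as tuples so that `p[k-1]` is $p(k)$:

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(k) = p(q(k)); permutations are tuples with p[k-1] = p(k)
    return tuple(p[q[k - 1] - 1] for k in range(1, len(p) + 1))

def embed(sigma):
    # g(sigma): act as sigma on {1, 2, 3} and fix 4
    return sigma + (4,)

S3 = list(permutations((1, 2, 3)))

# homomorphism property: g(p o q) = g(p) o g(q) for all p, q in S3
assert all(embed(compose(p, q)) == compose(embed(p), embed(q))
           for p in S3 for q in S3)

# injectivity: the 6 elements of S3 have 6 distinct images in S4
assert len({embed(p) for p in S3}) == 6
print("g : S3 -> S4 is an injective homomorphism")
```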
\section{Introduction}
Heavy quarkonium physics provides an ideal laboratory to study QCD at
the interplay between perturbative and non-perturbative domains. Because of the large samples of ${J/\psi}$ and ${\psi(2S)}$ accumulated at the LHC, it is an opportune moment to study the quarkonium production mechanism. To this end, understanding the polarization of produced quarkonium is an attractive and important issue~\cite{Brambilla:2010cs,Lansberg:2006dh}.
At the Tevatron, the polarization observable $\lambda_{\theta}$ (or
$\alpha$) for both prompt ${J/\psi}$ and prompt ${\psi(2S)}$ in the helicity
frame, measured by the CDF Collaboration~\cite{Affolder:2000nn}, is close
to $0$, which contradicts the prediction of transverse polarization at leading order (LO) in non-relativistic QCD (NRQCD)~\cite{Bodwin:1994jh}.
In the past a few years,
three groups~\cite{Butenschoen:2012px,Chao:2012iv,Gong:2012ug}
reported their independent analyzes of ${J/\psi}$ polarizations at the
next-to-leading order (NLO) level in $\alpha_S$. Although the short-distance coefficients (SDCs) are consistent with each other, the three groups give three different versions for the polarization prediction because different treatments are used in the extractions of
non-perturbative color-octet (CO) long-distance matrix elements
(LDMEs). Specifically, both ref.~\cite{Butenschoen:2012px} and ref.~\cite{Gong:2012ug} claim that the NLO NRQCD will necessarily give a transversely polarized prediction for prompt ${J/\psi}$ production; while the work by some of the present authors~\cite{Chao:2012iv} give a possible explanation for the ${J/\psi}$ polarization issue by finding that the transversely polarized contributions from ${\bigl.^3\hspace{-1mm}S^{[8]}_1}$ and ${\bigl.^3\hspace{-1mm}P^{[8]}_J}$ channels cancel each other. A similar cancelation between ${\bigl.^3\hspace{-1mm}S^{[8]}_1}$ and ${\bigl.^3\hspace{-1mm}P^{[8]}_J}$ channels for yield was also found earlier in ref.~\cite{Ma:2010yw}. The consequence of these cancelations is that the ${\bigl.^1\hspace{-1mm}S^{[8]}_0}$ channel will dominate, which results in an unpolarized prediction. Note that, the crucial point to get these cancelations is the introduction of a relatively large $p_T$ cutoff for data in the lower $p_T$ region.
In the high $p_T$ region, however, large logarithms like $\ln(p_T^2/m_c^2)$ may ruin the convergence of perturbative expansion, thus resummation of these large logarithmic terms are needed. This can be done by using DGLAP evolution equations to resum terms in the leading power in $1/p_T$ expansion, and using double parton evolution equations derived in ref.~\cite{Kang:2014tta} to resum terms in the next-to-leading power in $1/p_T$ expansion. The first goal is achieved recently~\cite{Bodwin:2014gia}. By combining the NLO NRQCD result with the leading power resummation, authors in ref.~\cite{Bodwin:2014gia} find that contributions from ${\bigl.^3\hspace{-1mm}S^{[8]}_1}$ and ${\bigl.^3\hspace{-1mm}P^{[8]}_J}$ channels should be almost canceled with each other and the produced ${J/\psi}$ is almost unpolarized, which is similar to our conclusion in ref.~\cite{Chao:2012iv}. This is encouraging because it implies that the qualitative results in the NLO NRQCD calculation are not changed by resummation.
Based on the NLO NRQCD calculation, a data-driven method is employed in ref.~\cite{Faccioli:2014cqa} to fit CO LDMEs. By investigating the behaviour of $\chi^2/d.o.f.$ for different $p_T$ cutoff, the authors push the $p_T$ cutoff for ${\psi(2S)}$ to even larger values, say, about $12\mathrm{~GeV}$. Then they found that the ${\psi(2S)}$ production is dominated by the ${\bigl.^1\hspace{-1mm}S^{[8]}_0}$ channel and the polarization data of ${\psi(2S)}$ production can be explained, which is similar to the explanation of ${J/\psi}$ polarization in refs.~\cite{Chao:2012iv,Bodwin:2014gia}. Therefore, it seems possible that the polarizations of ${J/\psi}$ and ${\psi(2S)}$ can be explained in a unified way.
However, in ref.~\cite{Chao:2012iv}, as well as in refs.~\cite{Butenschoen:2012px,Bodwin:2014gia}, only the direct ${J/\psi}$ production contribution is considered. An estimation of the
impact of feeddown contributions to ${J/\psi}$ polarization is given in ref.~\cite{Shao:2012fs}, where it was
pointed out that the feeddown contributions should not change the polarization result too much. Yet, to be precise, it is better to include the feeddown contributions rigorously, since they may account for a substantial fraction of prompt ${J/\psi}$ production. Hence, the purpose of the present article is to perform a comprehensive analysis of prompt ${J/\psi}$ production by including
the feeddown contributions from $\chi_{cJ}$ and
${\psi(2S)}$ decays. Meanwhile, we also give predictions of
yields and polarizations for prompt ${\psi(2S)}$.
The rest of this article is organized as follows. We first fix our
strategy for estimating the LDMEs in section \ref{sec:2}, and
then give our predictions for the yields and polarizations of ${\psi(2S)}$
and ${J/\psi}$ in the next two sections. A summary is given in the last section.
\section{Strategy for estimating LDMEs\label{sec:2}}
\subsection{General setup}
Before proceeding, we first list some details used in this article.
The helicity-summed yields are calculated following the method of
refs.~\cite{Ma:2010yw,Ma:2010jj,Ma:2010vd}, while the treatment of
the polarization is described in
refs.~\cite{Chao:2012iv,Shao:2012iz,Shao:2014fca}.
The cross section for production of a
quarkonium $\mathcal{Q}$ in $pp$ collisions can be
expressed as~\cite{Bodwin:1994jh}
\begin{eqnarray}
\sigma(pp\rightarrow\mathcal{Q}+X)&=&\sum_{n}{\hat{\sigma}(pp\rightarrow
Q\bar{Q}[n]+X)}\times\langle\mathcal{O}^{\mathcal{Q}}(n)\rangle,
\end{eqnarray} where $\hat{\sigma}(pp\rightarrow Q\bar{Q}[n]+X)$ are SDCs for producing a heavy quark pair $Q\bar{Q}$ with the
quantum number $n$, and $\langle\mathcal{O}^{\mathcal{Q}}(n)\rangle$
is a LDME for $\mathcal{Q}$. SDCs can be computed in perturbative QCD as
\begin{eqnarray} \hat{\sigma}(pp\rightarrow
Q\bar{Q}[n]+X)&=&\sum_{a,b}{\int{dx_1dx_2d\rm{LIPS}
\textit{f}_{a/p}(x_1)\textit{f}_{b/p}(x_2)}}\nonumber\\&\times&|M(ab\rightarrow
Q\bar{Q}[n]+X)|^2,\end{eqnarray} where the symbols $a$ and $b$ represent all
possible partons, $x_1$ and $x_2$ are light-cone momentum fractions, $d\rm{LIPS}$ is the
lorentz-invariant phase space measure, and
$\textit{f}_{a/p}(x_1)$ and $\textit{f}_{b/p}(x_2)$ are parton
distribution functions (PDFs) for partons $a$ and $b$ in the initial
colliding protons.
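The convolution structure of the last formula can be illustrated with a toy Monte-Carlo integration. The "PDF" and partonic cross section below are made-up placeholder shapes, not fits of any real distribution, and the phase-space integral is collapsed to the $x_1,x_2$ integrations only:

```python
import random

def toy_pdf(x):
    # made-up gluon-like shape, NOT a real PDF parametrization
    return 3.0 * (1.0 - x) ** 5 / x

def toy_parton_xsec(s_hat):
    # placeholder partonic cross section, falling with the partonic energy^2
    return 1.0 / (s_hat + 1.0) ** 2

def hadronic_xsec(sqrt_S, n=100000, seed=1):
    # sigma ~ int dx1 dx2 f(x1) f(x2) sigma_hat(x1*x2*S), by plain
    # Monte Carlo with a lower cut on x to keep the toy integrand finite
    random.seed(seed)
    S, x_min = sqrt_S ** 2, 1e-3
    total = 0.0
    for _ in range(n):
        x1 = random.uniform(x_min, 1.0)
        x2 = random.uniform(x_min, 1.0)
        total += toy_pdf(x1) * toy_pdf(x2) * toy_parton_xsec(x1 * x2 * S)
    return (1.0 - x_min) ** 2 * total / n

print(hadronic_xsec(10.0))
```

With the same random sample, the toy cross section decreases as $\sqrt{S}$ grows, simply because the placeholder $\hat{\sigma}$ falls with $\hat{s}$; nothing beyond the convolution bookkeeping should be read into these shapes.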
In this article, we have included all important $c\bar{c}$ Fock
states, ${\bigl.^3\hspace{-1mm}S^{[1]}_1},{\bigl.^1\hspace{-1mm}S^{[8]}_0},{\bigl.^3\hspace{-1mm}S^{[8]}_1}$ and ${\bigl.^3\hspace{-1mm}P^{[8]}_J}$ for ${J/\psi}$ and ${\psi(2S)}$,
${\bigl.^3\hspace{-1mm}S^{[8]}_1}$ and ${\bigl.^3\hspace{-1mm}P^{[1]}_J}$ for $\chi_{cJ}$. All corresponding SDCs are calculated up to
$\mathcal{O}(\alpha_S^4)$, i.e. NLO in $\alpha_S$. We use
CTEQ6M~\cite{Pumplin:2002vw} as our default PDF. The mass of charm
quark is fixed to be $m_c=1.5 \rm{GeV}$, and an analysis of uncertainties from choosing charm quark mass can be found in ref.~\cite{Ma:2010yw}.
The renormalization and factorization scales are
$\mu_R=\mu_F=\sqrt{(2m_c)^2+p_T^2}$, while the NRQCD scale is
$\mu_{\Lambda}=m_c$. Since cross sections of charmonia are
decreasing with high powers of their $p_T$, we should consider the
$p_T$ spectrum shiftting in the decay of
$\mathcal{Q}_1\rightarrow\mathcal{Q}_0+X$ approximately by
$p_T^{\mathcal{Q}_0}=\frac{M_{\mathcal{Q}_0}}{M_{\mathcal{Q}_1}}p_T^{\mathcal{Q}_1}$~\cite{Ma:2010yw},
where $M_{\mathcal{Q}_0}$ and $M_{\mathcal{Q}_1}$ are physical masses for
quarkonia $\mathcal{Q}_0$ and $\mathcal{Q}_1$ respectively. Masses of relevant charmonia in our article are shown in
Table~\ref{tab:mass}. Table~\ref{tab:br} gives the branching
ratios for various decay processes involved in
this article.
\begin{table}
\begin{center}
\begin{tabular}{{c}*{4}{c}}\hline\hline
${J/\psi}$ & ${\psi(2S)}$ & $\chi_{c0}$ & $\chi_{c1}$ & $\chi_{c2}$\\\hline
$3.097$ & $3.686$ & $3.415$ & $3.511$ & $3.556$\\\hline\hline
\end{tabular}
\end{center}
\caption{\label{tab:mass}Physical masses (in units of GeV) of
various charmonia~\cite{Nakamura:2010zzi}.}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{{c}*{1}{c}}\hline\hline
decay channel & branching ratio ($\times10^{-2}$)\\\hline ${J/\psi}\rightarrow
\mu^+\mu^-$ & $5.93$\\
${\psi(2S)}\rightarrow\mu^+\mu^-$ & $0.75$\\
${\psi(2S)}\rightarrow{J/\psi}+X$ & $57.4$\\
${\psi(2S)}\rightarrow{J/\psi}\pi^+\pi^-$ & $34.0$\\
${\psi(2S)}\rightarrow \chi_{c0}+\gamma$ & $9.84$\\
${\psi(2S)}\rightarrow \chi_{c1}+\gamma$ & $9.3$\\
${\psi(2S)}\rightarrow \chi_{c2}+\gamma$ & $8.76$\\
$\chi_{c0}\rightarrow {J/\psi}+\gamma$ & $1.28$\\
$\chi_{c1}\rightarrow {J/\psi}+\gamma$ & $36.0$\\
$\chi_{c2}\rightarrow {J/\psi}+\gamma$ & $20.0$
\\\hline\hline
\end{tabular}
\end{center}
\caption{\label{tab:br}Branching ratios of various decay
processes involved in this article~\cite{Nakamura:2010zzi}.}
\end{table}
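The approximate $p_T$ shift and the branching-ratio weighting of the feeddown contributions can be combined in a few lines; the masses and branching fractions below are the ones from Tables 1 and 2, while the parent spectrum is only a placeholder shape:

```python
M = {"J/psi": 3.097, "psi(2S)": 3.686,             # GeV, Table 1
     "chi_c0": 3.415, "chi_c1": 3.511, "chi_c2": 3.556}

Br_to_jpsi = {"psi(2S)": 0.574, "chi_c0": 0.0128,  # Table 2
              "chi_c1": 0.360, "chi_c2": 0.200}

def pt_daughter(parent, daughter, pt_parent):
    # approximate kinematic rule used in the text:
    # p_T(Q0) = M(Q0) / M(Q1) * p_T(Q1)
    return M[daughter] / M[parent] * pt_parent

# a psi(2S) at p_T = 15 GeV yields a J/psi at about 12.6 GeV
print(round(pt_daughter("psi(2S)", "J/psi", 15.0), 2))  # 12.6

# feeddown contribution to the J/psi yield at a given J/psi p_T:
# branching ratio times the parent cross section at the shifted parent p_T
# (sigma_parent below is only a placeholder p_T^-4 shape)
sigma_parent = lambda pt: pt ** -4
pt_jpsi = 10.0
pt_parent = M["psi(2S)"] / M["J/psi"] * pt_jpsi
contrib = Br_to_jpsi["psi(2S)"] * sigma_parent(pt_parent)
print(contrib)
```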
\begin{table}[h]
\begin{center}
\begin{tabular}{|{c}*{4}{|c}|} \hline
$p_{T\rm{cut}}^{{\psi(2S)}}$(GeV) & $M_{0,r_0}^{\psits}(\times10^{-2}\rm{GeV}^3)$ & $M_{1,r_1}^{\psits}(\times10^{-2}\rm{GeV}^3)$ & $\chi^2/d.o.f$ \\
\hline $5$ &
$1.3754\pm0.118931$ & $0.159987\pm0.0117348$ & $37.2068/16=2.32542$\\
$6$ & $1.93677\pm0.17044$ & $0.128511\pm0.0135506$ & $14.0112/14=1.0008$\\
$7$ & $2.23162\pm0.23115$ & $0.109918\pm0.0155178$ & $7.21501/12=0.601251$\\
$8$ & $2.253154\pm0.301835$ & $0.100531\pm0.0175978$ & $5.46679/10=0.546679$\\
$9$ & $2.7258\pm 0.401123$ & $0.0932409\pm0.0201979$ & $4.92587/8=0.615734$\\
$10$ & $3.23067\pm0.58727$ & $0.0763209\pm0.0247166$ & $3.37617/6=0.562696$\\
$11$ & $3.81594\pm0.784395$ & $0.0585894\pm0.0293102$ & $2.10933/5=0.421866$\\
$12$ & $3.67631\pm1.00394$ & $0.0625013\pm0.0341653$ & $2.05968/4=0.514919$\\
$13$ & $3.48695\pm1.30212$ & $0.0673741\pm0.0402811$ & $2.00752/3=0.669175$\\
$14$ & $3.02071\pm1.7219$ & $0.0784274\pm0.0483324$ & $1.83628/2=0.918141$\\
$15$ & $1.04558\pm2.34914$ & $0.121791\pm0.0597233$ & $0.308538/1=0.308538$
\\\hline
\end{tabular}
\end{center}
\caption{\label{tab:fitcdfpsi2s}The values of $M_{0,r_0}^{\psits}$ and $M_{1,r_1}^{\psits}$ by fitting the CDF data~\cite{Aaltonen:2009dm} with different $p_{T\rm{cut}}$, where $r_0=3.9,r_1=-0.56$.}
\end{table}
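The extraction behind this table is a two-parameter linear $\chi^2$ fit. The sketch below reproduces its structure with schematic $p_T^{-6}$ and $p_T^{-4}$ short-distance shapes and pseudo-data; the real analysis of course uses the full NLO SDCs and the CDF measurements:

```python
# schematic short-distance shapes (arbitrary normalizations), standing in
# for the 1S0[8]-like and 3S1[8]-like p_T dependences
def s0(pt): return 1.0e6 * pt ** -6
def s1(pt): return 1.0e3 * pt ** -4

def fit_two_params(data):
    # data: (pT, sigma, error); minimize chi^2 for the linear model
    # sigma(pT) = M0*s0(pT) + M1*s1(pT) by solving the 2x2 normal equations
    a11 = sum((s0(p) / e) ** 2 for p, y, e in data)
    a22 = sum((s1(p) / e) ** 2 for p, y, e in data)
    a12 = sum(s0(p) * s1(p) / e ** 2 for p, y, e in data)
    b1 = sum(y * s0(p) / e ** 2 for p, y, e in data)
    b2 = sum(y * s1(p) / e ** 2 for p, y, e in data)
    det = a11 * a22 - a12 ** 2
    return (a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det

# pseudo-data generated from known inputs M0 = 0.02, M1 = 0.0012
# with 10% errors; the fit recovers the inputs
pts = [7.0, 9.0, 11.0, 13.0, 15.0, 17.0]
data = [(p, 0.02 * s0(p) + 0.0012 * s1(p),
         0.1 * (0.02 * s0(p) + 0.0012 * s1(p))) for p in pts]
M0, M1 = fit_two_params(data)
print(M0, M1)
```

Because the two shapes are similar over a narrow $p_T$ window, the fitted $M_0$ and $M_1$ are strongly anti-correlated, which is one reason the choice of $p_T$ cutoff matters.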
The polarization observable $\lambda_{\theta}$ for ${J/\psi}$ (${\psi(2S)}$) is
defined as ~\cite{Noman:1978eh,Beneke:1998re} \begin{eqnarray}
\lambda_{\theta}=\frac{d\sigma_{11}-d\sigma_{00}}{d\sigma_{11}+d\sigma_{00}},
\end{eqnarray} where $d\sigma_{ij}(i,j=0,\pm1)$ is the $ij$ component in the
spin density matrix formula for ${J/\psi}$(${\psi(2S)}$). The full spin
correlation of $\chi_{cJ}$'s spin density matrix element and
${J/\psi}$'s spin density matrix element including E1, M2 and E3
transitions has been explored in eq.~(C4) of ref.~\cite{Shao:2012fs}.
We use the normalized M2 amplitude $a^{J=1}_2=-6.26\times10^{-2}$
for $\chi_{c1}\rightarrow{J/\psi}+\gamma$, and the normalized M2 and E3
amplitudes $a^{J=2}_2=-9.3\times10^{-2}$ and $a^{J=2}_3=0$ for
$\chi_{c2}\rightarrow{J/\psi}+\gamma$, which were measured by the CLEO
collaboration~\cite{Artuso:2009aa}. From eq.~(C4) of
ref.~\cite{Shao:2012fs}, we notice that $\lambda_{\theta}$ depends
on the squared amplitudes. Hence, the extra spin-flip effects due
to the M2 and E3 transitions are negligible. We nevertheless keep
them, since no extra effort is needed.
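The definition translates directly into code; the limiting values below are the standard ones (fully transverse $+1$, fully longitudinal $-1$):

```python
def lambda_theta(sigma_11, sigma_00):
    # polarization observable in the helicity frame:
    # lambda_theta = (d sigma_11 - d sigma_00) / (d sigma_11 + d sigma_00)
    return (sigma_11 - sigma_00) / (sigma_11 + sigma_00)

print(lambda_theta(1.0, 0.0))  # +1.0 : fully transverse
print(lambda_theta(0.0, 1.0))  # -1.0 : fully longitudinal
print(lambda_theta(0.5, 0.5))  #  0.0 : unpolarized
```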
\subsection{LDMEs estimation} \label{sec:ldmes}
Because of heavy-quark spin symmetry, the LDMEs ${\langle\mathcal{O}^{\chi_{cJ}}(\bigl.^3\hspace{-1mm}S_1^{[8]})\rangle}$ and ${\langle\mathcal{O}^{\chi_{cJ}}(\bigl.^3\hspace{-1mm}P_J^{[1]})\rangle}$ for $\chi_{cJ}$ satisfy the relations
\begin{eqnarray} {\langle\mathcal{O}^{\chi_{cJ}}(\bigl.^3\hspace{-1mm}S_1^{[8]})\rangle}=(2J+1){\langle\mathcal{O}^{\chi_{c0}}(\bigl.^3\hspace{-1mm}S_1^{[8]})\rangle},\\
{\langle\mathcal{O}^{\chi_{cJ}}(\bigl.^3\hspace{-1mm}P_J^{[1]})\rangle}=(2J+1){\langle\mathcal{O}^{\chi_{c0}}(\bigl.^3\hspace{-1mm}P_0^{[1]})\rangle}.
\end{eqnarray}
The color-singlet LDME ${\langle\mathcal{O}^{\chi_{c0}}(\bigl.^3\hspace{-1mm}P_0^{[1]})\rangle}$ can be estimated from the derivative of the
wave function at the origin, $R'(0)$, via
\begin{eqnarray}
{\langle\mathcal{O}^{\chi_{c0}}(\bigl.^3\hspace{-1mm}P_0^{[1]})\rangle}=2N_c\frac{3}{4\pi}|R'(0)|^2,
\end{eqnarray}
where $|R'(0)|^2=0.075~\rm{GeV}^5$ is calculated
in ref.~\cite{Eichten:1995ch} using a potential model. The
remaining CO LDME ${\langle\mathcal{O}^{\chi_{c0}}(\bigl.^3\hspace{-1mm}S_1^{[8]})\rangle}$ should be determined by fitting
experimental data. In ref.~\cite{Ma:2010vd}, we used the $p_T$ spectrum of
$\sigma_{\chi_{c2}\rightarrow{J/\psi}\gamma}/\sigma_{\chi_{c1}\rightarrow{J/\psi}\gamma}$
measured by CDF~\cite{Abulencia:2007bra} in our fitting procedure, and we got
\begin{eqnarray}
{\langle\mathcal{O}^{\chi_{c0}}(\bigl.^3\hspace{-1mm}S_1^{[8]})\rangle}=(2.2^{+0.48}_{-0.32})\times10^{-3}\rm{GeV}^3,
\end{eqnarray}
which is consistent with later studies~\cite{Gong:2012ug,Shao:2014fca,Jia:2014jfa}.
Moreover, we want to emphasize that this value is insensitive to the $p_T$ cutoff in our fit, especially when the cutoff is above $7~\rm{GeV}$.
Similarly, the CS LDMEs for ${J/\psi}$ and ${\psi(2S)}$ can also be estimated from the potential model~\cite{Eichten:1995ch},
\begin{eqnarray}
{\langle\mathcal{O}^\jpsi(\bigl.^3\hspace{-1mm}S_1^{[1]})\rangle}=2N_c\frac{3}{4\pi}|R_{{J/\psi}}(0)|^2=1.16~\rm{GeV}^3,\\
{\langle\mathcal{O}^\psits(\bigl.^3\hspace{-1mm}S_1^{[1]})\rangle}=2N_c\frac{3}{4\pi}|R_{{\psi(2S)}}(0)|^2=0.76~\rm{GeV}^3,
\end{eqnarray}
although their precise values are in fact irrelevant in our analysis because their corresponding SDCs are too small in the $p_T$ regime of interest.
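All of the potential-model estimates above use the same prefactor $2N_c\,\frac{3}{4\pi}$. As a numerical cross-check, plugging in $|R_{{J/\psi}}(0)|^2\approx0.81~\mathrm{GeV}^3$ and $|R_{{\psi(2S)}}(0)|^2\approx0.53~\mathrm{GeV}^3$ (values consistent with ref.~\cite{Eichten:1995ch}, but quoted here as assumptions since the text only gives the resulting LDMEs) reproduces the numbers in the text:

```python
from math import pi

N_C = 3  # number of colors

def cs_ldme(wavefn_sq):
    # <O> = 2 * N_c * 3/(4*pi) * |R(0)|^2  (or |R'(0)|^2 for P waves)
    return 2 * N_C * 3.0 / (4.0 * pi) * wavefn_sq

print(round(cs_ldme(0.81), 2))   # 1.16  GeV^3, J/psi
print(round(cs_ldme(0.53), 2))   # 0.76  GeV^3, psi(2S)
print(round(cs_ldme(0.075), 3))  # 0.107 GeV^5, chi_c0 with |R'(0)|^2
```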
The determination of the three unknown CO LDMEs
for ${J/\psi}$ (${\psi(2S)}$) is more involved. Based on our
previous studies~\cite{Chao:2012iv,Ma:2010yw,Ma:2010jj}, we
summarize the following facts:
\begin{itemize}
\item In the regime $p_T>4m_c$, the short-distance coefficient of P-wave CO Fock state ${\bigl.^3\hspace{-1mm}P^{[8]}_J}$ can
be nicely decomposed into a linear combination of the short-distance
coefficients of ${\bigl.^1\hspace{-1mm}S^{[8]}_0}$ and ${\bigl.^3\hspace{-1mm}S^{[8]}_1}$, \begin{eqnarray}
d\hat{\sigma}({\bigl.^3\hspace{-1mm}P^{[8]}_J})=r_0\frac{d\hat{\sigma}({\bigl.^1\hspace{-1mm}S^{[8]}_0})}{m_c^2}+r_1\frac{d\hat{\sigma}({\bigl.^3\hspace{-1mm}S^{[8]}_1})}{m_c^2}.\end{eqnarray}
$r_0$ and $r_1$ change slightly with the rapidity interval but are
almost independent of the center-of-mass energy $\sqrt{S}$ (see table I in
ref.~\cite{Ma:2010jj}). This makes it difficult to extract three
independent CO LDMEs by fitting helicity-summed yields data at hadron
colliders. Instead, one can only extract two linear combinations of the three CO LDMEs with convincing precision.
They are denoted as\begin{eqnarray}
M_{0,r_0}^{\jpsi(\psits)}\equiv{\langle\mathcal{O}^{\jpsi(\psits)}(\bigl.^1\hspace{-1mm}S_0^{[8]})\rangle}+r_0\frac{{\langle\mathcal{O}^{\jpsi(\psits)}(\bigl.^3\hspace{-1mm}P_0^{[8]})\rangle}}{m_c^2},\\
M_{1,r_1}^{\jpsi(\psits)}\equiv{\langle\mathcal{O}^{\jpsi(\psits)}(\bigl.^3\hspace{-1mm}S_1^{[8]})\rangle}+r_1\frac{{\langle\mathcal{O}^{\jpsi(\psits)}(\bigl.^3\hspace{-1mm}P_0^{[8]})\rangle}}{m_c^2}. \end{eqnarray}
Because $d\hat{\sigma}({\bigl.^1\hspace{-1mm}S^{[8]}_0})$ and $d\hat{\sigma}({\bigl.^3\hspace{-1mm}S^{[8]}_1})$ have mainly $p_T^{-6}$ and $p_T^{-4}$ behaviour respectively, values of $M_{0,r_0}^{\jpsi(\psits)}$ and $M_{1,r_1}^{\jpsi(\psits)}$ can roughly indicate the relative importance of $p_T^{-6}$ and $p_T^{-4}$ components. Using the Tevatron yields
data~\cite{Acosta:2004yw,Aaltonen:2009dm} with $p_T>7\rm{GeV}$, they are extracted as in ref.~\cite{Ma:2010yw}
\begin{eqnarray}
M_{0,r_0}^{\jpsi}=(7.4\pm1.9)\times10^{-2}\rm{GeV}^3,\\
M_{1,r_1}^{\jpsi}=(0.05\pm0.02)\times10^{-2}\rm{GeV}^3,
\end{eqnarray}
with $\chi^2/d.o.f=0.33$ for ${J/\psi}$, and
\begin{eqnarray}
M_{0,r_0}^{\psits}=(2.0\pm0.6)\times10^{-2}\rm{GeV}^3, \label{eq:set1a}\\
M_{1,r_1}^{\psits}=(0.12\pm0.03)\times10^{-2}\rm{GeV}^3, \label{eq:set1b}
\end{eqnarray}
with $\chi^2/d.o.f=0.56$ for ${\psi(2S)}$, with $r_0=3.9$ and $r_1=-0.56$. Inspired by the recent work~\cite{Faccioli:2014cqa}, we also investigate what happens if we enlarge the $p_T$ cutoff in our fit. With the CDF data only~\cite{Acosta:2004yw,Aaltonen:2009dm}, we find that the values of $M_{0,r_0}^{\psits}$ and $M_{1,r_1}^{\psits}$ are altered by enlarging the $p_T$ cutoff, as shown in table~\ref{tab:fitcdfpsi2s}, while this is not the case for ${J/\psi}$.
When the cutoff is larger than $11~\rm{GeV}$, the $\chi^2$ value for ${\psi(2S)}$ is relatively stable and minimal. We thus obtain another set of CO LDMEs for ${\psi(2S)}$ by choosing the cutoff $p_T=11\mathrm{~GeV}$,
\begin{eqnarray} \label{eq:set2}
M_{0,r_0}^{\psits}=(3.82\pm0.78)\times10^{-2}\rm{GeV}^3,\\
M_{1,r_1}^{\psits}=(0.059\pm0.029)\times10^{-2}\rm{GeV}^3.
\end{eqnarray}
For brevity, we will refer to this set of CO LDMEs as ``set II'' in what follows, while no label is attached when we use the default set extracted from the $p_T>7~\rm{GeV}$ data in eqs.~\eqref{eq:set1a} and \eqref{eq:set1b}.
\item The short-distance coefficient\footnote{In this article, we only consider the helicity frame.} $d\hat{\sigma}_{11}({\bigl.^3\hspace{-1mm}P^{[8]}_J})$ admits
a similar decomposition into $d\hat{\sigma}_{11}({\bigl.^1\hspace{-1mm}S^{[8]}_0})$ and
$d\hat{\sigma}_{11}({\bigl.^3\hspace{-1mm}S^{[8]}_1})$. The non-trivial point is that the coefficient of $d\hat{\sigma}_{11}({\bigl.^3\hspace{-1mm}S^{[8]}_1})$ in
the $d\hat{\sigma}_{11}({\bigl.^3\hspace{-1mm}P^{[8]}_J})$ decomposition is quite close to $r_1$ in
the $d\hat{\sigma}({\bigl.^3\hspace{-1mm}P^{[8]}_J})$ decomposition~\cite{Chao:2012iv}. Hence, including
polarisation data still does not help much to fix the three independent CO
LDMEs~\cite{Chao:2012iv}. Moreover, the value of $M_{1,r_1}^{\jpsi(\psits)}$ almost
completely controls the weight of the transverse component, and the unpolarized data
really require a (very) small $M_{1,r_1}^{\jpsi(\psits)}$.
\item We assume that all of the CO LDMEs are
positive~\cite{Chao:2012iv}, which is in contrast with those given
in refs.~\cite{Butenschoen:2012px,Gong:2012ug} (see also refs.~\cite{Butenschoen:2010rq,Butenschoen:2011yh,Butenschoen:2012qr}).\footnote{Although the authors in ref.~\cite{Gong:2012ug} used the same $p_T$ cut and included the feed-down contribution in prompt $J/\psi$ production, they tried to extract three independent CO LDMEs by including data in the forward-rapidity region. However, due to the correlation between the decompositions in the central and forward regions (see table I in ref.~\cite{Ma:2010jj}), the uncertainties in the extracted three CO LDMEs might be underestimated and they got negative CO LDMEs.} Since $r_1$ in forward
rapidity interval is smaller than that in central rapidity interval~\cite{Ma:2010jj},
a positive ${\langle\mathcal{O}^{\jpsi(\psits)}(\bigl.^3\hspace{-1mm}P_0^{[8]})\rangle}$ would imply that $\lambda_\theta$ in forward rapidity
will be smaller than its value in the central
rapidity. We will see later that this conclusion is confirmed by LHC data. Furthermore, in a recent study of
${J/\psi}+\gamma$ production~\cite{Li:2014ava}, the authors found that positivity of the CO LDMEs is needed to guarantee a
physical cross section, while the sets of CO LDMEs in refs.~\cite{Butenschoen:2012px,Gong:2012ug} result in
unphysical negative cross sections for ${J/\psi}+\gamma$ production at
hadron colliders. This also supports our assumption.
\end{itemize}
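To make the first two points concrete, the decomposition used above can be written schematically (suppressing overall conventions, which follow refs.~\cite{Ma:2010yw,Ma:2010jj}) as
\begin{eqnarray}
d\hat{\sigma}({\bigl.^3\hspace{-1mm}P^{[8]}_J})\simeq \frac{r_0}{m_c^2}\,d\hat{\sigma}({\bigl.^1\hspace{-1mm}S^{[8]}_0})+\frac{r_1}{m_c^2}\,d\hat{\sigma}({\bigl.^3\hspace{-1mm}S^{[8]}_1}),\nonumber
\end{eqnarray}
so that the color-octet part of the yield collapses to $d\hat{\sigma}({\bigl.^1\hspace{-1mm}S^{[8]}_0})\,M_{0,r_0}^{\jpsi(\psits)}+d\hat{\sigma}({\bigl.^3\hspace{-1mm}S^{[8]}_1})\,M_{1,r_1}^{\jpsi(\psits)}$. This makes explicit why a fit to the yield data can determine only the two linear combinations rather than the three individual CO LDMEs.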
Based on these considerations, we use only Tevatron yield data as input to
predict the yields and polarisations of prompt
${J/\psi}$ and ${\psi(2S)}$ production at hadron colliders. We use the values of $M_{0,r_0}^{\jpsi(\psits)}$ and $M_{1,r_1}^{\jpsi(\psits)}$ given in this section
and vary $0\leq{\langle\mathcal{O}^{\jpsi(\psits)}(\bigl.^1\hspace{-1mm}S_0^{[8]})\rangle}\leq M_{0,r_0}^{\jpsi(\psits)}$ to estimate the three CO
LDMEs. This variation and the errors in $M_{0,r_0}^{\jpsi(\psits)}$, $M_{1,r_1}^{\jpsi(\psits)}$ and
${\langle\mathcal{O}^{\chi_{c0}}(\bigl.^3\hspace{-1mm}S_1^{[8]})\rangle}$ will be treated as theoretical uncertainties.
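Explicitly, for a chosen value of ${\langle\mathcal{O}^{\jpsi(\psits)}(\bigl.^1\hspace{-1mm}S_0^{[8]})\rangle}$, the other two CO LDMEs follow from inverting the definitions of $M_{0,r_0}^{\jpsi(\psits)}$ and $M_{1,r_1}^{\jpsi(\psits)}$:
\begin{eqnarray}
{\langle\mathcal{O}^{\jpsi(\psits)}(\bigl.^3\hspace{-1mm}P_0^{[8]})\rangle}&=&\frac{m_c^2}{r_0}\left(M_{0,r_0}^{\jpsi(\psits)}-{\langle\mathcal{O}^{\jpsi(\psits)}(\bigl.^1\hspace{-1mm}S_0^{[8]})\rangle}\right),\nonumber\\
{\langle\mathcal{O}^{\jpsi(\psits)}(\bigl.^3\hspace{-1mm}S_1^{[8]})\rangle}&=&M_{1,r_1}^{\jpsi(\psits)}-\frac{r_1}{r_0}\left(M_{0,r_0}^{\jpsi(\psits)}-{\langle\mathcal{O}^{\jpsi(\psits)}(\bigl.^1\hspace{-1mm}S_0^{[8]})\rangle}\right).\nonumber
\end{eqnarray}
Note that, since $r_1<0$ in the central rapidity region, all three LDMEs remain non-negative for any choice in the scanned interval, consistent with our positivity assumption.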
\section{Prompt ${\psi(2S)}$ yields and polarizations\label{sec:3}}
In this section, we discuss the prompt ${\psi(2S)}$ yields and
polarisation at the Tevatron and the LHC. Experimentally, ${\psi(2S)}$ can be
reconstructed via ${\psi(2S)}\rightarrow\mu^+\mu^-$ or
${\psi(2S)}\rightarrow{J/\psi}(\rightarrow\mu^+\mu^-)\pi^+\pi^-$. Unlike prompt ${J/\psi}$,
there is no significant feeddown contribution to prompt ${\psi(2S)}$
production.
\subsection{Yields}
We update our numerical predictions for ${\psi(2S)}$ yields at the
Tevatron and the LHC as several collaborations have
released their prompt ${\psi(2S)}$ yields
measurements~\cite{Aaltonen:2009dm,Chatrchyan:2011kc,Aaij:2012ag,Aad:2014fpa} in the past few years. It is
worthwhile to mention that one of the main uncertainties in the experimental
measurements comes from the unknown spin-alignment. Hence, it would be
quite useful to give a theoretical prediction on polarisation, which
will be presented in the next subsection. Our NLO NRQCD predictions for
prompt ${\psi(2S)}$ yields are shown in figure \ref{fig:psi2syields} (using the default set of CO LDMEs) and figure \ref{fig:psi2syields2} (using set II of CO LDMEs). Our
theoretical results are in good agreement with the experimental data
at the LHC and Tevatron in the regime $p_T>p_{T\rm{cut}}$, where we use $p_{T\rm{cut}}=7\rm{GeV}$ for the default set and $p_{T\rm{cut}}=11\rm{GeV}$ for set II (strictly speaking, the ATLAS large-$p_T$ yield data favour our prediction with set II). In the
$p_T<p_{T\rm{cut}}$ regime, the experimental data suggest that there
might be a significant non-perturbative smearing effect that spoils
the reliability of our fixed-order result.
The error bands in our results represent our theoretical uncertainties, which are
dominated by the uncertainties in CO LDMEs.
\begin{figure}[!h]
\begin{center}
\hspace{0cm}\includegraphics[width=0.4\textwidth]{psi2syields-CDF.eps}
\hspace{0cm}\includegraphics[width=0.4\textwidth]{psi2syields-CMS-All.eps}
\hspace{0cm}\includegraphics[width=0.4\textwidth]{psi2syields-LHCb.eps}
\hspace{0cm}\includegraphics[width=0.4\textwidth]{psi2syields-ATLAS-All-bins.eps}
\caption{\label{fig:psi2syields}Comparison of NLO NRQCD (with the default set of CO LDMEs) and
CDF~\cite{Aaltonen:2009dm}, CMS~\cite{Chatrchyan:2011kc},
LHCb~\cite{Aaij:2012ag} and ATLAS~\cite{Aad:2014fpa} data for prompt ${\psi(2S)}$ yields.}
\end{center}
\end{figure}
\begin{figure}[!h]
\begin{center}
\hspace{0cm}\includegraphics[width=0.4\textwidth]{psi2syields-CDF-II.eps}
\hspace{0cm}\includegraphics[width=0.4\textwidth]{psi2syields-CMS-All-II.eps}
\hspace{0cm}\includegraphics[width=0.4\textwidth]{psi2syields-LHCb-II.eps}
\hspace{0cm}\includegraphics[width=0.4\textwidth]{psi2syields-ATLAS-All-II-bins.eps}
\caption{\label{fig:psi2syields2}Comparison of NLO NRQCD (with set II of CO LDMEs) and
CDF~\cite{Aaltonen:2009dm}, CMS~\cite{Chatrchyan:2011kc},
LHCb~\cite{Aaij:2012ag} and ATLAS~\cite{Aad:2014fpa} data for prompt ${\psi(2S)}$ yields.}
\end{center}
\end{figure}
\subsection{Polarizations}
We are now in a position to give theoretical predictions for the
polarisation observable $\lambda_{\theta}$ of prompt ${\psi(2S)}$. We
compare our NLO NRQCD results with the experimental data of the
CDF~\cite{Abulencia:2007us} and CMS~\cite{Chatrchyan:2013cla}
collaborations in figure \ref{fig:psi2spol} (using the default set of CO LDMEs) and figure \ref{fig:psi2spol2} (using set II of CO LDMEs). As discussed in section \ref{sec:ldmes}, a larger value of $M_{1,r_1}^{\psits}$
results in a larger transverse component for prompt ${\psi(2S)}$. Hence, with our default set of CO LDMEs, the resulting $\lambda_{\theta}$ is much larger than the data (see figure \ref{fig:psi2spol}), while the values of $\lambda_{\theta}$ calculated with set II of CO LDMEs in figure \ref{fig:psi2spol2} can marginally describe the data. On the experimental side, there seems to be a slight inconsistency between the CDF~\cite{Abulencia:2007us}
data and the CMS~\cite{Chatrchyan:2013cla} data, and the error bars
are large. Therefore, a more precise measurement at the LHC is essential to
clarify the difference in the future.
\begin{figure*}[!hbtp]
\begin{center}
\includegraphics*[scale=0.47]{psi2spol-CDF.eps}
\includegraphics*[scale=0.47]{psi2spol-CMS.eps}
\includegraphics*[scale=0.47]{psi2spol-LHCb.eps}
\caption{\label{fig:psi2spol}Comparison of NLO NRQCD (with the default set of CO LDMEs) and
CDF~\cite{Abulencia:2007us}, CMS~\cite{Chatrchyan:2013cla} and LHCb~\cite{Aaij:2014qea} data
for prompt ${\psi(2S)}$ polarisation $\lambda_{\theta}$ in helicity frame.}
\end{center}
\end{figure*}
\begin{figure*}[!hbtp]
\begin{center}
\includegraphics*[scale=0.47]{psi2spol-CDF-II.eps}
\includegraphics*[scale=0.47]{psi2spol-CMS-II.eps}
\includegraphics*[scale=0.47]{psi2spol-LHCb-II.eps}
\caption{\label{fig:psi2spol2}Comparison of NLO NRQCD (with the set II of CO LDMEs) and
CDF~\cite{Abulencia:2007us}, CMS~\cite{Chatrchyan:2013cla} and LHCb~\cite{Aaij:2014qea} data
for prompt ${\psi(2S)}$ polarisation $\lambda_{\theta}$ in helicity frame.}
\end{center}
\end{figure*}
We emphasize here that if we simply set $M_{1,r_1}^{\psits}$ to zero and keep only ${\bigl.^1\hspace{-1mm}S^{[8]}_0}$, the result will of course
be unpolarized for any polarisation observable in
any frame, as also noted in refs.~\cite{Ma:2010yw,Faccioli:2014cqa}.
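This follows directly from the definition of the polar anisotropy: the spin-singlet ${\bigl.^1\hspace{-1mm}S^{[8]}_0}$ channel populates all helicity states equally, so $d\sigma_{11}=d\sigma_{00}$ and
\begin{eqnarray}
\lambda_{\theta}=\frac{d\sigma_{11}-d\sigma_{00}}{d\sigma_{11}+d\sigma_{00}}=0\nonumber
\end{eqnarray}
in every frame.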
\section{Prompt ${J/\psi}$ yields and polarizations\label{sec:4}}
The prompt ${J/\psi}$ production in hadronic collisions is more
involved. It receives a significant contribution from $\chi_{cJ}$
and ${\psi(2S)}$ decay via $\chi_{cJ}\rightarrow{J/\psi}+\gamma$ and
${\psi(2S)}\rightarrow{J/\psi}+X$ respectively, which is usually called the
feeddown contribution. ${J/\psi}$ can be reconstructed quite well from
its decay products, a muon pair. In our previous
study~\cite{Chao:2012iv}, we did not include feeddown contribution
in our ${J/\psi}$ yields and polarisation predictions. We found there
was still a parameter space for CO LDMEs to give an almost
unpolarized theoretical prediction, though we were still unable to
extract the three independent CO LDMEs unambiguously. More precisely, we need a cancellation between the transverse
components of ${\bigl.^3\hspace{-1mm}S^{[8]}_1}$ and ${\bigl.^3\hspace{-1mm}P^{[8]}_J}$ to obtain an unpolarized result, which happens to be equivalent to requiring a (very) small $M_{1,r_1}^{\jpsi}$. Later, we also
considered the impact of the feeddown contribution from $\chi_{cJ}$ decay
on the direct ${J/\psi}$ polarisation~\cite{Shao:2012fs,Shao:2014fca}. From eq.~(C4)
in ref.~\cite{Shao:2012fs}, the feeddown contribution from
$\chi_{c1}$ for ${J/\psi}$ polarisation is in the interval
$[-\frac{1}{3},1]$, while the feeddown contribution from $\chi_{c2}$
is in the interval $[-\frac{3}{5},1]$, regardless of its production
mechanism.\footnote{The ${J/\psi}$ polarisation $\lambda_{\theta}$ from
scalar particle $\chi_{c0}$ is always zero.} We showed that the
smearing from the feeddown contribution does not change our result
much relative to the direct ${J/\psi}$ polarisation. Now, we intend
to give a rigorous prediction for prompt ${J/\psi}$ yields and
polarisation after including the feeddown contributions from
$\chi_{cJ}$ and ${\psi(2S)}$ decay. As discussed in section \ref{sec:ldmes}, the combinations $M_{0,r_0}^{\jpsi}$ and $M_{1,r_1}^{\jpsi}$ are insensitive to $p_{T\rm{cut}}$ when $p_{T\rm{cut}}>7\rm{GeV}$. We will use the values of $M_{0,r_0}^{\jpsi}$ and $M_{1,r_1}^{\jpsi}$ obtained from the $p_T>7\rm{GeV}$ data only in this section.
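For the prompt-level polarisation, the direct and feeddown components can be combined via the standard weighting for the polar anisotropy of a mixture of sources (a property of adding the corresponding angular distributions),
\begin{eqnarray}
\lambda_{\theta}^{\rm prompt}=\frac{\sum_i \sigma_i\,\lambda_{\theta}^{(i)}/\bigl(3+\lambda_{\theta}^{(i)}\bigr)}{\sum_i \sigma_i/\bigl(3+\lambda_{\theta}^{(i)}\bigr)},\nonumber
\end{eqnarray}
where $i$ runs over the direct, $\chi_{cJ}$ and ${\psi(2S)}$ components, and the $\sigma_i$ and $\lambda_{\theta}^{(i)}$ are evaluated at NLO as described above. In particular, an unpolarized component ($\lambda_{\theta}^{(i)}=0$) always pulls the prompt value towards zero.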
\subsection{Yields}
In this subsection, we present the $p_T$ spectrum for prompt ${J/\psi}$
yields. We show our NLO NRQCD predictions for prompt ${J/\psi}$ yields
in figure~\ref{fig:jpsiyields}. The experimental data are taken from
CDF~\cite{Acosta:2004yw}, ATLAS~\cite{Aad:2011sp}, CMS~\cite{Chatrchyan:2011kc}
and LHCb~\cite{Aaij:2011jh}. Good agreement is found up to 70 GeV
and in various rapidity bins.
In order to understand the fraction of feeddown contribution from
$\chi_{cJ}$ to prompt ${J/\psi}$, we also show the theoretical
prediction for $\frac{\sigma(\chi_c\rightarrow{J/\psi}\gamma)}{\sigma({J/\psi})}$
in figure~\ref{fig:fdjpsiyields} in the LHCb fiducial region. The plot
implies that the $p_T$ spectrum of prompt $\chi_c$ is harder than
that of ${J/\psi}$, which can be understood from the stronger $p_T^{-4}$ behaviour of $\chi_c$. In figure \ref{fig:Rjpsiyields}, we also show the ratio $R$ of prompt ${\psi(2S)}$ yields to
prompt ${J/\psi}$ yields as defined in
refs.~\cite{Aaij:2012ag,Chatrchyan:2011kc}, \begin{eqnarray}
R\equiv\frac{\sigma({\psi(2S)}\rightarrow\mu^+\mu^-)}{\sigma({J/\psi}\rightarrow\mu^+\mu^-)},
\end{eqnarray} which indicates the $p_T$ dependence of feeddown contribution
from ${\psi(2S)}$ in prompt ${J/\psi}$ yields. With the default set of CO LDMEs for ${\psi(2S)}$, it increases as $p_T$
becomes larger because $M_{1,r_1}^{\psits}/M_{0,r_0}^{\psits} > M_{1,r_1}^{\jpsi}/M_{0,r_0}^{\jpsi}$. In contrast, with the new set II of CO LDMEs for ${\psi(2S)}$, the ratio $R$ is flat in $p_T$, which is easily understood from the smaller $M_{1,r_1}^{\psits}/M_{0,r_0}^{\psits}$. Finally, we divide the prompt ${J/\psi}$ yields into direct ${J/\psi}$
yields and the feeddown ${J/\psi}$ from $\chi_c$ and ${\psi(2S)}$ decay in
the second plot of figure~\ref{fig:fdjpsiyields}. It shows that
the $p_T$ spectrum of feeddown ${J/\psi}$ is harder than that of direct
one.
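The qualitative behaviour of $R$ can be anticipated analytically: at large $p_T$, both the numerator and the denominator are dominated by the $p_T^{-4}$ component proportional to $M_{1,r_1}$, so, schematically ignoring feeddown and kinematic mass differences,
\begin{eqnarray}
R(p_T)\;\propto\;\frac{Br({\psi(2S)}\rightarrow\mu^+\mu^-)\,M_{1,r_1}^{\psits}}{Br({J/\psi}\rightarrow\mu^+\mu^-)\,M_{1,r_1}^{\jpsi}}\nonumber
\end{eqnarray}
at asymptotic $p_T$, while at moderate $p_T$ the $M_{0,r_0}$ terms still contribute. A larger $M_{1,r_1}^{\psits}/M_{0,r_0}^{\psits}$ (default set) therefore makes $R$ rise with $p_T$, whereas the smaller set-II value flattens it.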
Before proceeding to the polarization case, we want to clarify that only the ratio $R$ is sensitive to the choice between the two sets of CO LDMEs for ${\psi(2S)}$ in this subsection, while the other differential distributions are not. This is simply because the feeddown contribution from ${\psi(2S)}$ in prompt ${J/\psi}$ production is indeed small, a fact that has also been checked numerically. The same applies to the polarization observable $\lambda_{\theta}$ of prompt ${J/\psi}$ in the next subsection. Hence, we refrain from presenting the analogous plots obtained with set II of the CO LDMEs for ${\psi(2S)}$, except for the ratio $R$.
\begin{center}
\begin{figure}
\hspace{0cm}\includegraphics[width=0.45\textwidth]{promptjpsiyields-CDF.eps}
\hspace{0cm}\includegraphics[width=0.45\textwidth]{promptjpsiyields-ATLAS.eps}
\hspace{1cm}\includegraphics[width=0.45\textwidth]{promptjpsiyields-CMS-All.eps}
\hspace{1.4cm}\includegraphics[width=0.45\textwidth]{promptjpsiyields-LHCb.eps}
\caption{\label{fig:jpsiyields}Comparison of NLO NRQCD and
CDF~\cite{Acosta:2004yw}, ATLAS~\cite{Aad:2011sp}, CMS~\cite{Chatrchyan:2011kc}
and LHCb~\cite{Aaij:2011jh} data for prompt ${J/\psi}$ yields.}
\end{figure}
\end{center}
\begin{center}
\begin{figure}
\hspace{0cm}\includegraphics[width=0.45\textwidth]{promptjpsifromchicyields-LHCb.eps}
\hspace{1cm}\includegraphics[width=0.45\textwidth]{jpsiyields-CDF.eps}
\caption{\label{fig:fdjpsiyields}Comparison of NLO NRQCD and
LHCb~\cite{LHCb:2012af} and
CDF~\cite{Acosta:2004yw} data for ${J/\psi}$ yields.}
\end{figure}
\end{center}
\begin{center}
\begin{figure}
\hspace{0cm}\includegraphics[width=0.45\textwidth]{R-LHCb-2045.eps}
\hspace{1cm}\includegraphics[width=0.45\textwidth]{R-CMS-024.eps}\\
\hspace{0cm}\includegraphics[width=0.45\textwidth]{R-LHCb-2045-II.eps}
\hspace{1cm}\includegraphics[width=0.45\textwidth]{R-CMS-024-II.eps}
\caption{\label{fig:Rjpsiyields}Comparison of NLO NRQCD and
LHCb~\cite{Aaij:2012ag} and CMS~\cite{Chatrchyan:2011kc} data for $R$. We use the default set of CO LDMEs for ${\psi(2S)}$ in the upper two panels, while the lower two panels are obtained by using the set II of CO LDMEs for ${\psi(2S)}$.}
\end{figure}
\end{center}
\subsection{Polarizations}
The prompt ${J/\psi}$ is expected to be almost unpolarized,
since a small $M_{1,r_1}^{\jpsi}$ indicates
a small transversely polarized component in prompt ${J/\psi}$. We
compare our NLO NRQCD results with
CDF~\cite{Abulencia:2007us}, CMS~\cite{Chatrchyan:2013cla},
LHCb~\cite{Aaij:2013nlm} and ALICE~\cite{Abelev:2011md} data in
figure~\ref{fig:jpsipol}. The values of $\lambda_{\theta}$ in different rapidity bins
are close to 0, which is consistent with our previous claim even
after including the feeddown
contributions~\cite{Chao:2012iv,Shao:2012fs}. Our results are in good
agreement with the measurements of
CMS~\cite{Chatrchyan:2013cla},\footnote{Although there seems to be some difference between our theoretical results and the current CMS polarization data, we note that there are still sizable statistical fluctuations in the CMS data themselves, as seen, e.g., in the last bins of $|y|<0.6$ and $0.6<|y|<1.2$ in figure \ref{fig:jpsipol}.} LHCb~\cite{Aaij:2013nlm} and
ALICE~\cite{Abelev:2011md} collaborations, while the agreement
with the CDF data~\cite{Abulencia:2007us} is not as good. However, it is worthwhile to
note that the CDF data are also inconsistent with the CMS data in
the same rapidity interval.
Our positive-LDME assumption is consistent with the experimental observation that
the LHCb data lie slightly below the CMS data. As we have pointed out in section~\ref{sec:2}, positivity of the LDMEs implies that $\lambda_{\theta}$ will be smaller in the forward
rapidity bin than in the central rapidity bin, based on the
understanding that $M_{1,r_1}^{\jpsi}$ is smaller at larger rapidity $y$.
On the other hand, ${\langle\mathcal{O}^\jpsi(\bigl.^3\hspace{-1mm}P_0^{[8]})\rangle}$ takes negative values in the fits of the other
two groups~\cite{Butenschoen:2012px,Gong:2012ug}. These fits give
larger values of $\lambda_{\theta}$ in the forward rapidity bins, which
conflicts with the LHCb data.
\begin{center}
\begin{figure}
\hspace{0cm}\includegraphics[width=0.45\textwidth]{jpsipol-CDF.eps}
\hspace{1cm}\includegraphics[width=0.45\textwidth]{jpsipol-CMS06.eps}
\hspace{1cm}\includegraphics[width=0.45\textwidth]{jpsipol-CMS612.eps}
\hspace{1.4cm}\includegraphics[width=0.45\textwidth]{jpsipol-LHCb-ALICE.eps}
\caption{\label{fig:jpsipol}Comparison of NLO NRQCD and
CDF~\cite{Abulencia:2007us}, CMS~\cite{Chatrchyan:2013cla},
LHCb~\cite{Aaij:2013nlm} and ALICE~\cite{Abelev:2011md} data for
prompt ${J/\psi}$ polarisation $\lambda_{\theta}$ in helicity frame. The
ALICE~\cite{Abelev:2011md} data are for inclusive ${J/\psi}$.}
\end{figure}
\end{center}
\section{Summary\label{sec:5}}
With large samples of heavy quarkonia accumulated at the LHC,
quarkonium physics has entered the precision era, even in the large
transverse momentum regime. In this article, we present a
comprehensive analysis for prompt ${J/\psi}$ and ${\psi(2S)}$ produced at
the Tevatron and the LHC within NRQCD. For prompt ${J/\psi}$, we have
taken feeddown contributions from $\chi_{cJ}(J=0,1,2)$ and ${\psi(2S)}$
into account. Short-distance coefficients for all important CO
Fock states are computed up to $\mathcal{O}(\alpha_S^4)$, i.e. at NLO
in $\alpha_S$. Color-singlet LDMEs of ${J/\psi}$, $\chi_{cJ}$ and
${\psi(2S)}$ are estimated by using potential model~\cite{Eichten:1995ch},
while CO LDMEs are estimated by fitting experimental
data. For $\chi_{cJ}(J=0,1,2)$, there is only one independent CO LDME
${\langle\mathcal{O}^{\chi_{c0}}(\bigl.^3\hspace{-1mm}S_1^{[8]})\rangle}$. Its value can be fixed by fitting the Tevatron data
$\frac{\sigma({\chi_{c2}}\rightarrow{J/\psi}\gamma)}{\sigma({\chi_{c1}}\rightarrow{J/\psi}\gamma)}$~\cite{Abulencia:2007bra}
as done in ref.~\cite{Ma:2010vd}. For ${J/\psi}$ or ${\psi(2S)}$, there are
three independent CO LDMEs, i.e. ${\langle\mathcal{O}^{\jpsi(\psits)}(\bigl.^1\hspace{-1mm}S_0^{[8]})\rangle}$, ${\langle\mathcal{O}^{\jpsi(\psits)}(\bigl.^3\hspace{-1mm}S_1^{[8]})\rangle}$ and ${\langle\mathcal{O}^{\jpsi(\psits)}(\bigl.^3\hspace{-1mm}P_0^{[8]})\rangle}$.
From the decomposition of short-distance coefficients for ${\bigl.^3\hspace{-1mm}P^{[8]}_J}$, we
understand that it is difficult to extract the three independent CO
LDMEs from the hadronic data even after including polarisation data.
What we can determine unambiguously are two linear combinations of
these three CO LDMEs. Their values were already extracted in
refs.~\cite{Ma:2010yw,Ma:2010jj} with $p_{T\rm{cut}}=7\rm{GeV}$. However, we still need the three CO
LDMEs instead of two linear combinations to predict the yields and
polarizations for prompt ${J/\psi}$ and ${\psi(2S)}$ in various rapidity
regions. We assume that all CO LDMEs are positive, which is in
contrast to the assumptions of other groups~\cite{Butenschoen:2012px,Gong:2012ug}. The result obtained under our
assumption is consistent with the observed relative magnitudes of
polarization in the forward rapidity interval and in the central rapidity
interval. Based on our assumption, we can provide more satisfactory predictions of both
yields and polarizations $\lambda_{\theta}$ in the helicity frame for prompt
${J/\psi}$, which is almost unpolarized at hadron
colliders. But we are unable to explain the
polarization of prompt ${\psi(2S)}$ based on the old fit in ref.~\cite{Ma:2010yw}. We thus revisited the ${\psi(2S)}$ data and performed a new fit to the Tevatron data with $p_{T\rm{cut}}=11\rm{GeV}$, which gives a better description of the ${\psi(2S)}$ polarization data.
However, on the theoretical side, it remains to be understood why such a large $p_T$ cutoff, much larger than the quarkonium mass, has to be used. It is possible that the NRQCD factorization formula is not applicable when $p_T$ is not large enough, as was also pointed out in refs.~\cite{Bodwin:2014gia,Faccioli:2014cqa}. Recently, it was found that $J/\psi$ production in the small-$p_T$ regime may be described by a CGC+NRQCD formalism \cite{Ma:2014mri}. In a moderate $p_T$ regime, say $p_T\sim 5-7\rm{GeV}$, the CGC+NRQCD results match smoothly onto our NLO NRQCD results \cite{Ma:2014mri}, and thus $J/\psi$ production may be described over the whole $p_T$ range. It will be interesting to see whether ${\psi(2S)}$ production in the small and moderate $p_T$ regimes can be described in the same way.

In recent years, several other efforts have been made to understand the quarkonium production mechanism, including relativistic corrections \cite{Fan:2009zq,Xu:2012am}, resummation in the small-$p_T$ regime \cite{Sun:2012vc}, and factorization and resummation in the large-$p_T$ regime \cite{Kang:2011mg,Fleming:2012wy, Fleming:2013qu, Kang:2014tta, Kang:2014pya, Ma:2013yla,Ma:2014eja,Bodwin:2014gia,Ma:2014svb}. These works will provide more precise predictions for the quantitative understanding of quarkonium production. Moreover, other quarkonium associated production processes (e.g. double $J/\psi$ production~\cite{Lansberg:2013qka,Sun:2014gca,Lansberg:2014swa}) and/or other observables (e.g. fragmenting jet functions~\cite{Baumgart:2014upa}) may also shed light on the quarkonium production mechanism at the LHC in the future. On the experimental side, more precise measurements of the yields and especially the polarizations of heavy quarkonia are definitely needed to further clarify the present issues in quarkonium production.
\begin{acknowledgments}
We thank D.~Price for useful discussions.
This work was supported in part by the National Natural Science
Foundation of China (Nos. 11075002 and 11021092) and the Ministry of
Science and Technology of China (2009CB825200). Y.Q.Ma was supported
by the U.S. Department of Energy, under Contract No.
DE-AC02-98CH10886.
\end{acknowledgments}
.class Lcom/android/server/BackupManagerService$PerformRestoreTask;
.super Ljava/lang/Object;
.source "BackupManagerService.java"
# interfaces
.implements Lcom/android/server/BackupManagerService$BackupRestoreTask;
# annotations
.annotation system Ldalvik/annotation/EnclosingClass;
value = Lcom/android/server/BackupManagerService;
.end annotation
.annotation system Ldalvik/annotation/InnerClass;
accessFlags = 0x0
name = "PerformRestoreTask"
.end annotation
.annotation system Ldalvik/annotation/MemberClasses;
value = {
Lcom/android/server/BackupManagerService$PerformRestoreTask$RestoreRequest;
}
.end annotation
# instance fields
.field private mAgentPackages:Ljava/util/List;
.annotation system Ldalvik/annotation/Signature;
value = {
"Ljava/util/List",
"<",
"Landroid/content/pm/PackageInfo;",
">;"
}
.end annotation
.end field
.field private mBackupData:Landroid/os/ParcelFileDescriptor;
.field private mBackupDataName:Ljava/io/File;
.field private mCount:I
.field private mCurrentPackage:Landroid/content/pm/PackageInfo;
.field private mCurrentState:Lcom/android/server/BackupManagerService$RestoreState;
.field private mFilterSet:Ljava/util/HashSet;
.annotation system Ldalvik/annotation/Signature;
value = {
"Ljava/util/HashSet",
"<",
"Ljava/lang/String;",
">;"
}
.end annotation
.end field
.field private mFinished:Z
.field private mNeedFullBackup:Z
.field private mNewState:Landroid/os/ParcelFileDescriptor;
.field private mNewStateName:Ljava/io/File;
.field private mObserver:Landroid/app/backup/IRestoreObserver;
.field private mPmAgent:Lcom/android/server/PackageManagerBackupAgent;
.field private mPmToken:I
.field private mRestorePackages:Ljava/util/ArrayList;
.annotation system Ldalvik/annotation/Signature;
value = {
"Ljava/util/ArrayList",
"<",
"Landroid/content/pm/PackageInfo;",
">;"
}
.end annotation
.end field
.field private mSavedStateName:Ljava/io/File;
.field private mStartRealtime:J
.field private mStateDir:Ljava/io/File;
.field private mStatus:I
.field private mTargetPackage:Landroid/content/pm/PackageInfo;
.field private mToken:J
.field private mTransport:Lcom/android/internal/backup/IBackupTransport;
.field final synthetic this$0:Lcom/android/server/BackupManagerService;
# direct methods
.method constructor <init>(Lcom/android/server/BackupManagerService;Lcom/android/internal/backup/IBackupTransport;Landroid/app/backup/IRestoreObserver;JLandroid/content/pm/PackageInfo;IZ[Ljava/lang/String;)V
.locals 7
.parameter
.parameter "transport"
.parameter "observer"
.parameter "restoreSetToken"
.parameter "targetPackage"
.parameter "pmToken"
.parameter "needFullBackup"
.parameter "filterSet"
.prologue
.line 4291
iput-object p1, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->this$0:Lcom/android/server/BackupManagerService;
invoke-direct/range {p0 .. p0}, Ljava/lang/Object;-><init>()V
.line 4292
sget-object v4, Lcom/android/server/BackupManagerService$RestoreState;->INITIAL:Lcom/android/server/BackupManagerService$RestoreState;
iput-object v4, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mCurrentState:Lcom/android/server/BackupManagerService$RestoreState;
.line 4293
const/4 v4, 0x0
iput-boolean v4, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mFinished:Z
.line 4294
const/4 v4, 0x0
iput-object v4, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mPmAgent:Lcom/android/server/PackageManagerBackupAgent;
.line 4296
iput-object p2, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mTransport:Lcom/android/internal/backup/IBackupTransport;
.line 4297
iput-object p3, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mObserver:Landroid/app/backup/IRestoreObserver;
.line 4298
iput-wide p4, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mToken:J
.line 4299
iput-object p6, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mTargetPackage:Landroid/content/pm/PackageInfo;
.line 4300
iput p7, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mPmToken:I
.line 4301
iput-boolean p8, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mNeedFullBackup:Z
.line 4303
if-eqz p9, :cond_0
.line 4304
new-instance v4, Ljava/util/HashSet;
invoke-direct {v4}, Ljava/util/HashSet;-><init>()V
iput-object v4, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mFilterSet:Ljava/util/HashSet;
.line 4305
move-object/from16 v0, p9
.local v0, arr$:[Ljava/lang/String;
array-length v2, v0
.local v2, len$:I
const/4 v1, 0x0
.local v1, i$:I
:goto_0
if-ge v1, v2, :cond_1
aget-object v3, v0, v1
.line 4306
.local v3, pkg:Ljava/lang/String;
iget-object v4, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mFilterSet:Ljava/util/HashSet;
invoke-virtual {v4, v3}, Ljava/util/HashSet;->add(Ljava/lang/Object;)Z
.line 4305
add-int/lit8 v1, v1, 0x1
goto :goto_0
.line 4309
.end local v0 #arr$:[Ljava/lang/String;
.end local v1 #i$:I
.end local v2 #len$:I
.end local v3 #pkg:Ljava/lang/String;
:cond_0
const/4 v4, 0x0
iput-object v4, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mFilterSet:Ljava/util/HashSet;
.line 4313
:cond_1
:try_start_0
new-instance v4, Ljava/io/File;
iget-object v5, p1, Lcom/android/server/BackupManagerService;->mBaseStateDir:Ljava/io/File;
invoke-interface {p2}, Lcom/android/internal/backup/IBackupTransport;->transportDirName()Ljava/lang/String;
move-result-object v6
invoke-direct {v4, v5, v6}, Ljava/io/File;-><init>(Ljava/io/File;Ljava/lang/String;)V
iput-object v4, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mStateDir:Ljava/io/File;
:try_end_0
.catch Landroid/os/RemoteException; {:try_start_0 .. :try_end_0} :catch_0
.line 4317
:goto_1
return-void
.line 4314
:catch_0
move-exception v4
goto :goto_1
.end method
# virtual methods
.method agentCleanup()V
.locals 3
.prologue
.line 4769
iget-object v0, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mBackupDataName:Ljava/io/File;
invoke-virtual {v0}, Ljava/io/File;->delete()Z
.line 4770
:try_start_0
iget-object v0, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mBackupData:Landroid/os/ParcelFileDescriptor;
if-eqz v0, :cond_0
iget-object v0, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mBackupData:Landroid/os/ParcelFileDescriptor;
invoke-virtual {v0}, Landroid/os/ParcelFileDescriptor;->close()V
:try_end_0
.catch Ljava/io/IOException; {:try_start_0 .. :try_end_0} :catch_2
.line 4771
:cond_0
:goto_0
:try_start_1
iget-object v0, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mNewState:Landroid/os/ParcelFileDescriptor;
if-eqz v0, :cond_1
iget-object v0, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mNewState:Landroid/os/ParcelFileDescriptor;
invoke-virtual {v0}, Landroid/os/ParcelFileDescriptor;->close()V
:try_end_1
.catch Ljava/io/IOException; {:try_start_1 .. :try_end_1} :catch_1
.line 4772
:cond_1
:goto_1
const/4 v0, 0x0
iput-object v0, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mNewState:Landroid/os/ParcelFileDescriptor;
iput-object v0, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mBackupData:Landroid/os/ParcelFileDescriptor;
.line 4787
iget-object v0, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mNewStateName:Ljava/io/File;
invoke-virtual {v0}, Ljava/io/File;->delete()Z
.line 4791
iget-object v0, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mCurrentPackage:Landroid/content/pm/PackageInfo;
iget-object v0, v0, Landroid/content/pm/PackageInfo;->applicationInfo:Landroid/content/pm/ApplicationInfo;
if-eqz v0, :cond_2
.line 4794
:try_start_2
iget-object v0, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->this$0:Lcom/android/server/BackupManagerService;
#getter for: Lcom/android/server/BackupManagerService;->mActivityManager:Landroid/app/IActivityManager;
invoke-static {v0}, Lcom/android/server/BackupManagerService;->access$1200(Lcom/android/server/BackupManagerService;)Landroid/app/IActivityManager;
move-result-object v0
iget-object v1, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mCurrentPackage:Landroid/content/pm/PackageInfo;
iget-object v1, v1, Landroid/content/pm/PackageInfo;->applicationInfo:Landroid/content/pm/ApplicationInfo;
invoke-interface {v0, v1}, Landroid/app/IActivityManager;->unbindBackupAgent(Landroid/content/pm/ApplicationInfo;)V
.line 4802
iget-object v0, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mTargetPackage:Landroid/content/pm/PackageInfo;
if-nez v0, :cond_2
iget-object v0, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mCurrentPackage:Landroid/content/pm/PackageInfo;
iget-object v0, v0, Landroid/content/pm/PackageInfo;->applicationInfo:Landroid/content/pm/ApplicationInfo;
iget v0, v0, Landroid/content/pm/ApplicationInfo;->flags:I
const/high16 v1, 0x1
and-int/2addr v0, v1
if-eqz v0, :cond_2
.line 4804
const-string v0, "BackupManagerService"
new-instance v1, Ljava/lang/StringBuilder;
invoke-direct {v1}, Ljava/lang/StringBuilder;-><init>()V
const-string v2, "Restore complete, killing host process of "
invoke-virtual {v1, v2}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v1
iget-object v2, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mCurrentPackage:Landroid/content/pm/PackageInfo;
iget-object v2, v2, Landroid/content/pm/PackageInfo;->applicationInfo:Landroid/content/pm/ApplicationInfo;
iget-object v2, v2, Landroid/content/pm/ApplicationInfo;->processName:Ljava/lang/String;
invoke-virtual {v1, v2}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v1
invoke-virtual {v1}, Ljava/lang/StringBuilder;->toString()Ljava/lang/String;
move-result-object v1
invoke-static {v0, v1}, Landroid/util/Slog;->d(Ljava/lang/String;Ljava/lang/String;)I
.line 4806
iget-object v0, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->this$0:Lcom/android/server/BackupManagerService;
#getter for: Lcom/android/server/BackupManagerService;->mActivityManager:Landroid/app/IActivityManager;
invoke-static {v0}, Lcom/android/server/BackupManagerService;->access$1200(Lcom/android/server/BackupManagerService;)Landroid/app/IActivityManager;
move-result-object v0
iget-object v1, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mCurrentPackage:Landroid/content/pm/PackageInfo;
iget-object v1, v1, Landroid/content/pm/PackageInfo;->applicationInfo:Landroid/content/pm/ApplicationInfo;
iget-object v1, v1, Landroid/content/pm/ApplicationInfo;->processName:Ljava/lang/String;
iget-object v2, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mCurrentPackage:Landroid/content/pm/PackageInfo;
iget-object v2, v2, Landroid/content/pm/PackageInfo;->applicationInfo:Landroid/content/pm/ApplicationInfo;
iget v2, v2, Landroid/content/pm/ApplicationInfo;->uid:I
invoke-interface {v0, v1, v2}, Landroid/app/IActivityManager;->killApplicationProcess(Ljava/lang/String;I)V
:try_end_2
.catch Landroid/os/RemoteException; {:try_start_2 .. :try_end_2} :catch_0
.line 4817
:cond_2
:goto_2
iget-object v0, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->this$0:Lcom/android/server/BackupManagerService;
iget-object v0, v0, Lcom/android/server/BackupManagerService;->mBackupHandler:Lcom/android/server/BackupManagerService$BackupHandler;
const/4 v1, 0x7
invoke-virtual {v0, v1, p0}, Lcom/android/server/BackupManagerService$BackupHandler;->removeMessages(ILjava/lang/Object;)V
.line 4818
iget-object v0, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->this$0:Lcom/android/server/BackupManagerService;
iget-object v1, v0, Lcom/android/server/BackupManagerService;->mCurrentOpLock:Ljava/lang/Object;
monitor-enter v1
.line 4819
:try_start_3
iget-object v0, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->this$0:Lcom/android/server/BackupManagerService;
iget-object v0, v0, Lcom/android/server/BackupManagerService;->mCurrentOperations:Landroid/util/SparseArray;
invoke-virtual {v0}, Landroid/util/SparseArray;->clear()V
.line 4820
monitor-exit v1
.line 4821
return-void
.line 4820
:catchall_0
move-exception v0
monitor-exit v1
:try_end_3
.catchall {:try_start_3 .. :try_end_3} :catchall_0
throw v0
.line 4810
:catch_0
move-exception v0
goto :goto_2
.line 4771
:catch_1
move-exception v0
goto :goto_1
.line 4770
:catch_2
move-exception v0
goto/16 :goto_0
.end method
# Error path: synchronously clears the current package's data, then runs the
# normal agent cleanup.
.method agentErrorCleanup()V
.locals 2
.prologue
.line 4764
iget-object v0, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->this$0:Lcom/android/server/BackupManagerService;
iget-object v1, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mCurrentPackage:Landroid/content/pm/PackageInfo;
iget-object v1, v1, Landroid/content/pm/PackageInfo;->packageName:Ljava/lang/String;
invoke-virtual {v0, v1}, Lcom/android/server/BackupManagerService;->clearApplicationDataSynchronous(Ljava/lang/String;)V
.line 4765
invoke-virtual {p0}, Lcom/android/server/BackupManagerService$PerformRestoreTask;->agentCleanup()V
.line 4766
return-void
.end method
# Prepares the restore pass: cancels the pending backup-schedule message, logs the
# restore-start event, builds mRestorePackages (prepending the "@pm@" metadata
# pseudo-package unless this is an EDM restore), applies the optional package
# filter set, notifies the observer via restoreStarting(), and advances the state
# machine to DOWNLOAD_DATA.
.method beginRestore()V
.locals 10
.prologue
const/4 v6, 0x1
const/4 v9, 0x0
.line 4354
iget-object v4, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->this$0:Lcom/android/server/BackupManagerService;
iget-object v4, v4, Lcom/android/server/BackupManagerService;->mBackupHandler:Lcom/android/server/BackupManagerService$BackupHandler;
const/16 v5, 0x8
invoke-virtual {v4, v5}, Lcom/android/server/BackupManagerService$BackupHandler;->removeMessages(I)V
.line 4357
iput v6, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mStatus:I
.line 4361
const/16 v4, 0xb0e
const/4 v5, 0x2
:try_start_0
new-array v5, v5, [Ljava/lang/Object;
const/4 v6, 0x0
iget-object v7, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mTransport:Lcom/android/internal/backup/IBackupTransport;
invoke-interface {v7}, Lcom/android/internal/backup/IBackupTransport;->transportDirName()Ljava/lang/String;
move-result-object v7
aput-object v7, v5, v6
const/4 v6, 0x1
iget-wide v7, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mToken:J
invoke-static {v7, v8}, Ljava/lang/Long;->valueOf(J)Ljava/lang/Long;
move-result-object v7
aput-object v7, v5, v6
invoke-static {v4, v5}, Landroid/util/EventLog;->writeEvent(I[Ljava/lang/Object;)I
.line 4365
new-instance v4, Ljava/util/ArrayList;
invoke-direct {v4}, Ljava/util/ArrayList;-><init>()V
iput-object v4, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mRestorePackages:Ljava/util/ArrayList;
.line 4369
iget-object v4, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->this$0:Lcom/android/server/BackupManagerService;
iget-boolean v4, v4, Lcom/android/server/BackupManagerService;->isEdmRestoreRequest:Z
if-nez v4, :cond_0
.line 4370
new-instance v2, Landroid/content/pm/PackageInfo;
invoke-direct {v2}, Landroid/content/pm/PackageInfo;-><init>()V
.line 4371
.local v2, omPackage:Landroid/content/pm/PackageInfo;
const-string v4, "@pm@"
iput-object v4, v2, Landroid/content/pm/PackageInfo;->packageName:Ljava/lang/String;
.line 4372
iget-object v4, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mRestorePackages:Ljava/util/ArrayList;
invoke-virtual {v4, v2}, Ljava/util/ArrayList;->add(Ljava/lang/Object;)Z
.line 4376
.end local v2 #omPackage:Landroid/content/pm/PackageInfo;
:cond_0
iget-object v4, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->this$0:Lcom/android/server/BackupManagerService;
invoke-virtual {v4}, Lcom/android/server/BackupManagerService;->allAgentPackages()Ljava/util/List;
move-result-object v4
iput-object v4, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mAgentPackages:Ljava/util/List;
.line 4377
iget-object v4, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mTargetPackage:Landroid/content/pm/PackageInfo;
if-nez v4, :cond_4
.line 4380
iget-object v4, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mFilterSet:Ljava/util/HashSet;
if-eqz v4, :cond_2
.line 4381
iget-object v4, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mAgentPackages:Ljava/util/List;
invoke-interface {v4}, Ljava/util/List;->size()I
move-result v4
add-int/lit8 v1, v4, -0x1
.local v1, i:I
:goto_0
if-ltz v1, :cond_2
.line 4382
iget-object v4, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mAgentPackages:Ljava/util/List;
invoke-interface {v4, v1}, Ljava/util/List;->get(I)Ljava/lang/Object;
move-result-object v3
check-cast v3, Landroid/content/pm/PackageInfo;
.line 4383
.local v3, pkg:Landroid/content/pm/PackageInfo;
iget-object v4, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mFilterSet:Ljava/util/HashSet;
iget-object v5, v3, Landroid/content/pm/PackageInfo;->packageName:Ljava/lang/String;
invoke-virtual {v4, v5}, Ljava/util/HashSet;->contains(Ljava/lang/Object;)Z
move-result v4
if-nez v4, :cond_1
.line 4384
iget-object v4, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mAgentPackages:Ljava/util/List;
invoke-interface {v4, v1}, Ljava/util/List;->remove(I)Ljava/lang/Object;
.line 4381
:cond_1
add-int/lit8 v1, v1, -0x1
goto :goto_0
.line 4394
.end local v1 #i:I
.end local v3 #pkg:Landroid/content/pm/PackageInfo;
:cond_2
iget-object v4, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mRestorePackages:Ljava/util/ArrayList;
iget-object v5, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mAgentPackages:Ljava/util/List;
invoke-virtual {v4, v5}, Ljava/util/ArrayList;->addAll(Ljava/util/Collection;)Z
.line 4401
:goto_1
iget-object v4, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mObserver:Landroid/app/backup/IRestoreObserver;
:try_end_0
.catch Landroid/os/RemoteException; {:try_start_0 .. :try_end_0} :catch_0
if-eqz v4, :cond_3
.line 4405
:try_start_1
iget-object v4, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mObserver:Landroid/app/backup/IRestoreObserver;
iget-object v5, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mRestorePackages:Ljava/util/ArrayList;
invoke-virtual {v5}, Ljava/util/ArrayList;->size()I
move-result v5
invoke-interface {v4, v5}, Landroid/app/backup/IRestoreObserver;->restoreStarting(I)V
:try_end_1
.catch Landroid/os/RemoteException; {:try_start_1 .. :try_end_1} :catch_1
.line 4418
:cond_3
:goto_2
iput v9, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mStatus:I
.line 4419
sget-object v4, Lcom/android/server/BackupManagerService$RestoreState;->DOWNLOAD_DATA:Lcom/android/server/BackupManagerService$RestoreState;
invoke-virtual {p0, v4}, Lcom/android/server/BackupManagerService$PerformRestoreTask;->executeNextState(Lcom/android/server/BackupManagerService$RestoreState;)V
.line 4420
:goto_3
return-void
.line 4397
:cond_4
:try_start_2
iget-object v4, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mRestorePackages:Ljava/util/ArrayList;
iget-object v5, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mTargetPackage:Landroid/content/pm/PackageInfo;
invoke-virtual {v4, v5}, Ljava/util/ArrayList;->add(Ljava/lang/Object;)Z
:try_end_2
.catch Landroid/os/RemoteException; {:try_start_2 .. :try_end_2} :catch_0
goto :goto_1
.line 4411
:catch_0
move-exception v0
.line 4413
.local v0, e:Landroid/os/RemoteException;
const-string v4, "BackupManagerService"
const-string v5, "Error communicating with transport for restore"
invoke-static {v4, v5}, Landroid/util/Slog;->e(Ljava/lang/String;Ljava/lang/String;)I
.line 4414
sget-object v4, Lcom/android/server/BackupManagerService$RestoreState;->FINAL:Lcom/android/server/BackupManagerService$RestoreState;
invoke-virtual {p0, v4}, Lcom/android/server/BackupManagerService$PerformRestoreTask;->executeNextState(Lcom/android/server/BackupManagerService$RestoreState;)V
goto :goto_3
.line 4406
.end local v0 #e:Landroid/os/RemoteException;
:catch_1
move-exception v0
.line 4407
.restart local v0 #e:Landroid/os/RemoteException;
:try_start_3
const-string v4, "BackupManagerService"
const-string v5, "Restore observer died at restoreStarting"
invoke-static {v4, v5}, Landroid/util/Slog;->d(Ljava/lang/String;Ljava/lang/String;)I
.line 4408
const/4 v4, 0x0
iput-object v4, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mObserver:Landroid/app/backup/IRestoreObserver;
:try_end_3
.catch Landroid/os/RemoteException; {:try_start_3 .. :try_end_3} :catch_0
goto :goto_2
.end method
# Calls IBackupTransport.startRestore() for the assembled package list. On failure
# (non-zero status or RemoteException) it logs the error event and jumps to FINAL;
# otherwise it proceeds to PM_METADATA, or straight to RUNNING_QUEUE for an
# EDM-initiated restore.
.method downloadRestoreData()V
.locals 8
.prologue
const/16 v7, 0xb0f
const/4 v6, 0x0
.line 4431
:try_start_0
iget-object v2, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mTransport:Lcom/android/internal/backup/IBackupTransport;
iget-wide v3, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mToken:J
iget-object v1, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mRestorePackages:Ljava/util/ArrayList;
const/4 v5, 0x0
new-array v5, v5, [Landroid/content/pm/PackageInfo;
invoke-virtual {v1, v5}, Ljava/util/ArrayList;->toArray([Ljava/lang/Object;)[Ljava/lang/Object;
move-result-object v1
check-cast v1, [Landroid/content/pm/PackageInfo;
invoke-interface {v2, v3, v4, v1}, Lcom/android/internal/backup/IBackupTransport;->startRestore(J[Landroid/content/pm/PackageInfo;)I
move-result v1
iput v1, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mStatus:I
.line 4433
iget v1, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mStatus:I
if-eqz v1, :cond_0
.line 4434
const-string v1, "BackupManagerService"
const-string v2, "Error starting restore operation"
invoke-static {v1, v2}, Landroid/util/Slog;->e(Ljava/lang/String;Ljava/lang/String;)I
.line 4435
const/16 v1, 0xb0f
const/4 v2, 0x0
new-array v2, v2, [Ljava/lang/Object;
invoke-static {v1, v2}, Landroid/util/EventLog;->writeEvent(I[Ljava/lang/Object;)I
.line 4436
sget-object v1, Lcom/android/server/BackupManagerService$RestoreState;->FINAL:Lcom/android/server/BackupManagerService$RestoreState;
invoke-virtual {p0, v1}, Lcom/android/server/BackupManagerService$PerformRestoreTask;->executeNextState(Lcom/android/server/BackupManagerService$RestoreState;)V
:try_end_0
.catch Landroid/os/RemoteException; {:try_start_0 .. :try_end_0} :catch_0
.line 4457
:goto_0
return-void
.line 4439
:catch_0
move-exception v0
.line 4440
.local v0, e:Landroid/os/RemoteException;
const-string v1, "BackupManagerService"
const-string v2, "Error communicating with transport for restore"
invoke-static {v1, v2}, Landroid/util/Slog;->e(Ljava/lang/String;Ljava/lang/String;)I
.line 4441
new-array v1, v6, [Ljava/lang/Object;
invoke-static {v7, v1}, Landroid/util/EventLog;->writeEvent(I[Ljava/lang/Object;)I
.line 4442
const/4 v1, 0x1
iput v1, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mStatus:I
.line 4443
sget-object v1, Lcom/android/server/BackupManagerService$RestoreState;->FINAL:Lcom/android/server/BackupManagerService$RestoreState;
invoke-virtual {p0, v1}, Lcom/android/server/BackupManagerService$PerformRestoreTask;->executeNextState(Lcom/android/server/BackupManagerService$RestoreState;)V
goto :goto_0
.line 4449
.end local v0 #e:Landroid/os/RemoteException;
:cond_0
iget-object v1, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->this$0:Lcom/android/server/BackupManagerService;
iget-boolean v1, v1, Lcom/android/server/BackupManagerService;->isEdmRestoreRequest:Z
if-nez v1, :cond_1
.line 4450
sget-object v1, Lcom/android/server/BackupManagerService$RestoreState;->PM_METADATA:Lcom/android/server/BackupManagerService$RestoreState;
invoke-virtual {p0, v1}, Lcom/android/server/BackupManagerService$PerformRestoreTask;->executeNextState(Lcom/android/server/BackupManagerService$RestoreState;)V
goto :goto_0
.line 4454
:cond_1
sget-object v1, Lcom/android/server/BackupManagerService$RestoreState;->RUNNING_QUEUE:Lcom/android/server/BackupManagerService$RestoreState;
invoke-virtual {p0, v1}, Lcom/android/server/BackupManagerService$PerformRestoreTask;->executeNextState(Lcom/android/server/BackupManagerService$RestoreState;)V
goto :goto_0
.end method
# State-machine dispatcher: routes the current RestoreState to its handler method
# and guards against running the finalize step twice ("Duplicate finish").
.method public execute()V
.locals 2
.prologue
.line 4323
sget-object v0, Lcom/android/server/BackupManagerService$4;->$SwitchMap$com$android$server$BackupManagerService$RestoreState:[I
iget-object v1, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mCurrentState:Lcom/android/server/BackupManagerService$RestoreState;
invoke-virtual {v1}, Lcom/android/server/BackupManagerService$RestoreState;->ordinal()I
move-result v1
aget v0, v0, v1
packed-switch v0, :pswitch_data_0
.line 4348
:goto_0
return-void
.line 4325
:pswitch_0
invoke-virtual {p0}, Lcom/android/server/BackupManagerService$PerformRestoreTask;->beginRestore()V
goto :goto_0
.line 4329
:pswitch_1
invoke-virtual {p0}, Lcom/android/server/BackupManagerService$PerformRestoreTask;->downloadRestoreData()V
goto :goto_0
.line 4333
:pswitch_2
invoke-virtual {p0}, Lcom/android/server/BackupManagerService$PerformRestoreTask;->restorePmMetadata()V
goto :goto_0
.line 4337
:pswitch_3
invoke-virtual {p0}, Lcom/android/server/BackupManagerService$PerformRestoreTask;->restoreNextAgent()V
goto :goto_0
.line 4341
:pswitch_4
iget-boolean v0, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mFinished:Z
if-nez v0, :cond_0
invoke-virtual {p0}, Lcom/android/server/BackupManagerService$PerformRestoreTask;->finalizeRestore()V
.line 4345
:goto_1
const/4 v0, 0x1
iput-boolean v0, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mFinished:Z
goto :goto_0
.line 4343
:cond_0
const-string v0, "BackupManagerService"
const-string v1, "Duplicate finish"
invoke-static {v0, v1}, Landroid/util/Slog;->e(Ljava/lang/String;Ljava/lang/String;)I
goto :goto_1
.line 4323
nop
:pswitch_data_0
.packed-switch 0x1
:pswitch_0
:pswitch_1
:pswitch_2
:pswitch_3
:pswitch_4
.end packed-switch
.end method
# Records the next state and posts message 0x14 carrying this task back to the
# backup handler, so the next state executes on the handler thread.
.method executeNextState(Lcom/android/server/BackupManagerService$RestoreState;)V
.locals 3
.parameter "nextState"
.prologue
.line 4853
iput-object p1, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mCurrentState:Lcom/android/server/BackupManagerService$RestoreState;
.line 4854
iget-object v1, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->this$0:Lcom/android/server/BackupManagerService;
iget-object v1, v1, Lcom/android/server/BackupManagerService;->mBackupHandler:Lcom/android/server/BackupManagerService$BackupHandler;
const/16 v2, 0x14
invoke-virtual {v1, v2, p0}, Lcom/android/server/BackupManagerService$BackupHandler;->obtainMessage(ILjava/lang/Object;)Landroid/os/Message;
move-result-object v0
.line 4855
.local v0, msg:Landroid/os/Message;
iget-object v1, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->this$0:Lcom/android/server/BackupManagerService;
iget-object v1, v1, Lcom/android/server/BackupManagerService;->mBackupHandler:Lcom/android/server/BackupManagerService$BackupHandler;
invoke-virtual {v1, v0}, Lcom/android/server/BackupManagerService$BackupHandler;->sendMessage(Landroid/os/Message;)Z
.line 4856
return-void
.end method
# Wrap-up step: tells the transport the restore has finished, notifies the observer
# via restoreFinished(), records the ancestral package set and token for a full
# (non-targeted) restore, completes any pending package install, reschedules the
# delayed backup message, releases the wakelock, and broadcasts
# "edm.intent.action.backup.service.available".
.method finalizeRestore()V
.locals 7
.prologue
const/16 v6, 0x8
const/4 v5, -0x3
.line 4642
:try_start_0
iget-object v2, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mTransport:Lcom/android/internal/backup/IBackupTransport;
invoke-interface {v2}, Lcom/android/internal/backup/IBackupTransport;->finishRestore()V
:try_end_0
.catch Landroid/os/RemoteException; {:try_start_0 .. :try_end_0} :catch_0
.line 4650
:goto_0
iget-object v2, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mObserver:Landroid/app/backup/IRestoreObserver;
if-eqz v2, :cond_0
.line 4652
:try_start_1
iget-object v2, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mObserver:Landroid/app/backup/IRestoreObserver;
iget v3, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mStatus:I
invoke-interface {v2, v3}, Landroid/app/backup/IRestoreObserver;->restoreFinished(I)V
:try_end_1
.catch Landroid/os/RemoteException; {:try_start_1 .. :try_end_1} :catch_1
.line 4661
:cond_0
:goto_1
iget-object v2, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mTargetPackage:Landroid/content/pm/PackageInfo;
if-nez v2, :cond_1
iget-object v2, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mPmAgent:Lcom/android/server/PackageManagerBackupAgent;
if-eqz v2, :cond_1
.line 4662
iget-object v2, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->this$0:Lcom/android/server/BackupManagerService;
iget-object v3, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mPmAgent:Lcom/android/server/PackageManagerBackupAgent;
invoke-virtual {v3}, Lcom/android/server/PackageManagerBackupAgent;->getRestoredPackages()Ljava/util/Set;
move-result-object v3
iput-object v3, v2, Lcom/android/server/BackupManagerService;->mAncestralPackages:Ljava/util/Set;
.line 4663
iget-object v2, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->this$0:Lcom/android/server/BackupManagerService;
iget-wide v3, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mToken:J
iput-wide v3, v2, Lcom/android/server/BackupManagerService;->mAncestralToken:J
.line 4664
iget-object v2, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->this$0:Lcom/android/server/BackupManagerService;
invoke-virtual {v2}, Lcom/android/server/BackupManagerService;->writeRestoreTokens()V
.line 4669
:cond_1
iget v2, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mPmToken:I
if-lez v2, :cond_2
.line 4672
:try_start_2
iget-object v2, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->this$0:Lcom/android/server/BackupManagerService;
iget-object v2, v2, Lcom/android/server/BackupManagerService;->mPackageManagerBinder:Landroid/content/pm/IPackageManager;
iget v3, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mPmToken:I
invoke-interface {v2, v3}, Landroid/content/pm/IPackageManager;->finishPackageInstall(I)V
:try_end_2
.catch Landroid/os/RemoteException; {:try_start_2 .. :try_end_2} :catch_2
.line 4677
:cond_2
:goto_2
iget-object v2, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->this$0:Lcom/android/server/BackupManagerService;
iget-object v2, v2, Lcom/android/server/BackupManagerService;->mBackupHandler:Lcom/android/server/BackupManagerService$BackupHandler;
invoke-virtual {v2, v6}, Lcom/android/server/BackupManagerService$BackupHandler;->removeMessages(I)V
.line 4678
iget-object v2, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->this$0:Lcom/android/server/BackupManagerService;
iget-object v2, v2, Lcom/android/server/BackupManagerService;->mBackupHandler:Lcom/android/server/BackupManagerService$BackupHandler;
const-wide/32 v3, 0xea60
invoke-virtual {v2, v6, v3, v4}, Lcom/android/server/BackupManagerService$BackupHandler;->sendEmptyMessageDelayed(IJ)Z
.line 4682
const-string v2, "BackupManagerService"
const-string v3, "Restore complete."
invoke-static {v2, v3}, Landroid/util/Slog;->i(Ljava/lang/String;Ljava/lang/String;)I
.line 4683
iget-object v2, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->this$0:Lcom/android/server/BackupManagerService;
iget-object v2, v2, Lcom/android/server/BackupManagerService;->mWakelock:Landroid/os/PowerManager$WakeLock;
invoke-virtual {v2}, Landroid/os/PowerManager$WakeLock;->release()V
.line 4686
iget-object v2, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->this$0:Lcom/android/server/BackupManagerService;
iget-boolean v2, v2, Lcom/android/server/BackupManagerService;->isEdmRestoreRequest:Z
if-eqz v2, :cond_3
.line 4687
iget-object v2, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->this$0:Lcom/android/server/BackupManagerService;
#calls: Lcom/android/server/BackupManagerService;->resetEdmRestoreTags(I)V
invoke-static {v2, v5}, Lcom/android/server/BackupManagerService;->access$300(Lcom/android/server/BackupManagerService;I)V
.line 4690
:cond_3
new-instance v1, Landroid/content/Intent;
const-string v2, "edm.intent.action.backup.service.available"
invoke-direct {v1, v2}, Landroid/content/Intent;-><init>(Ljava/lang/String;)V
.line 4691
.local v1, intent:Landroid/content/Intent;
iget-object v2, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->this$0:Lcom/android/server/BackupManagerService;
#getter for: Lcom/android/server/BackupManagerService;->mContext:Landroid/content/Context;
invoke-static {v2}, Lcom/android/server/BackupManagerService;->access$400(Lcom/android/server/BackupManagerService;)Landroid/content/Context;
move-result-object v2
invoke-virtual {v2, v1}, Landroid/content/Context;->sendBroadcast(Landroid/content/Intent;)V
.line 4693
return-void
.line 4643
.end local v1 #intent:Landroid/content/Intent;
:catch_0
move-exception v0
.line 4644
.local v0, e:Landroid/os/RemoteException;
const-string v2, "BackupManagerService"
const-string v3, "Error finishing restore"
invoke-static {v2, v3, v0}, Landroid/util/Slog;->e(Ljava/lang/String;Ljava/lang/String;Ljava/lang/Throwable;)I
.line 4646
iget-object v2, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->this$0:Lcom/android/server/BackupManagerService;
#calls: Lcom/android/server/BackupManagerService;->resetEdmRestoreTags(I)V
invoke-static {v2, v5}, Lcom/android/server/BackupManagerService;->access$300(Lcom/android/server/BackupManagerService;I)V
goto :goto_0
.line 4653
.end local v0 #e:Landroid/os/RemoteException;
:catch_1
move-exception v0
.line 4654
.restart local v0 #e:Landroid/os/RemoteException;
const-string v2, "BackupManagerService"
const-string v3, "Restore observer died at restoreFinished"
invoke-static {v2, v3}, Landroid/util/Slog;->d(Ljava/lang/String;Ljava/lang/String;)I
goto :goto_1
.line 4673
.end local v0 #e:Landroid/os/RemoteException;
:catch_2
move-exception v2
goto :goto_2
.end method
# Invoked when an agent fails to respond in time: logs the timeout to the event
# log, wipes the package's data via agentErrorCleanup(), and moves on to the next
# package in the queue.
.method public handleTimeout()V
.locals 4
.prologue
.line 4842
const-string v0, "BackupManagerService"
new-instance v1, Ljava/lang/StringBuilder;
invoke-direct {v1}, Ljava/lang/StringBuilder;-><init>()V
const-string v2, "Timeout restoring application "
invoke-virtual {v1, v2}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v1
iget-object v2, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mCurrentPackage:Landroid/content/pm/PackageInfo;
iget-object v2, v2, Landroid/content/pm/PackageInfo;->packageName:Ljava/lang/String;
invoke-virtual {v1, v2}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v1
invoke-virtual {v1}, Ljava/lang/StringBuilder;->toString()Ljava/lang/String;
move-result-object v1
invoke-static {v0, v1}, Landroid/util/Slog;->e(Ljava/lang/String;Ljava/lang/String;)I
.line 4843
const/16 v0, 0xb10
const/4 v1, 0x2
new-array v1, v1, [Ljava/lang/Object;
const/4 v2, 0x0
iget-object v3, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mCurrentPackage:Landroid/content/pm/PackageInfo;
iget-object v3, v3, Landroid/content/pm/PackageInfo;->packageName:Ljava/lang/String;
aput-object v3, v1, v2
const/4 v2, 0x1
const-string v3, "restore timeout"
aput-object v3, v1, v2
invoke-static {v0, v1}, Landroid/util/EventLog;->writeEvent(I[Ljava/lang/Object;)I
.line 4846
invoke-virtual {p0}, Lcom/android/server/BackupManagerService$PerformRestoreTask;->agentErrorCleanup()V
.line 4847
sget-object v0, Lcom/android/server/BackupManagerService$RestoreState;->RUNNING_QUEUE:Lcom/android/server/BackupManagerService$RestoreState;
invoke-virtual {p0, v0}, Lcom/android/server/BackupManagerService$PerformRestoreTask;->executeNextState(Lcom/android/server/BackupManagerService$RestoreState;)V
.line 4848
return-void
.end method
# Per-package restore entry point: records the current package, derives the
# backup-data and saved-state file names (EDM restores key state files by the
# admin package name), obtains an operation token, opens the staging file, and
# pulls the package's restore data from the transport.
.method initiateOneRestore(Landroid/content/pm/PackageInfo;ILandroid/app/IBackupAgent;Z)V
.locals 9
.parameter "app"
.parameter "appVersionCode"
.parameter "agent"
.parameter "needFullBackup"
.prologue
const/4 v8, 0x0
.line 4699
iput-object p1, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mCurrentPackage:Landroid/content/pm/PackageInfo;
.line 4700
iget-object v7, p1, Landroid/content/pm/PackageInfo;->packageName:Ljava/lang/String;
.line 4702
.local v7, packageName:Ljava/lang/String;
const-string v0, "BackupManagerService"
new-instance v1, Ljava/lang/StringBuilder;
invoke-direct {v1}, Ljava/lang/StringBuilder;-><init>()V
const-string v2, "initiateOneRestore packageName="
invoke-virtual {v1, v2}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v1
invoke-virtual {v1, v7}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v1
invoke-virtual {v1}, Ljava/lang/StringBuilder;->toString()Ljava/lang/String;
move-result-object v1
invoke-static {v0, v1}, Landroid/util/Slog;->d(Ljava/lang/String;Ljava/lang/String;)I
.line 4705
new-instance v0, Ljava/io/File;
iget-object v1, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->this$0:Lcom/android/server/BackupManagerService;
iget-object v1, v1, Lcom/android/server/BackupManagerService;->mDataDir:Ljava/io/File;
new-instance v2, Ljava/lang/StringBuilder;
invoke-direct {v2}, Ljava/lang/StringBuilder;-><init>()V
invoke-virtual {v2, v7}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v2
const-string v3, ".restore"
invoke-virtual {v2, v3}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v2
invoke-virtual {v2}, Ljava/lang/StringBuilder;->toString()Ljava/lang/String;
move-result-object v2
invoke-direct {v0, v1, v2}, Ljava/io/File;-><init>(Ljava/io/File;Ljava/lang/String;)V
iput-object v0, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mBackupDataName:Ljava/io/File;
.line 4707
iget-object v0, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->this$0:Lcom/android/server/BackupManagerService;
iget-boolean v0, v0, Lcom/android/server/BackupManagerService;->isEdmBackupRequest:Z
if-eqz v0, :cond_0
.line 4708
new-instance v0, Ljava/io/File;
iget-object v1, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mStateDir:Ljava/io/File;
new-instance v2, Ljava/lang/StringBuilder;
invoke-direct {v2}, Ljava/lang/StringBuilder;-><init>()V
iget-object v3, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->this$0:Lcom/android/server/BackupManagerService;
iget-object v3, v3, Lcom/android/server/BackupManagerService;->mAdminPkgName:Ljava/lang/String;
invoke-virtual {v2, v3}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v2
invoke-virtual {v2, v7}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v2
invoke-virtual {v2}, Ljava/lang/StringBuilder;->toString()Ljava/lang/String;
move-result-object v2
invoke-direct {v0, v1, v2}, Ljava/io/File;-><init>(Ljava/io/File;Ljava/lang/String;)V
iput-object v0, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mSavedStateName:Ljava/io/File;
.line 4709
new-instance v0, Ljava/io/File;
iget-object v1, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mStateDir:Ljava/io/File;
new-instance v2, Ljava/lang/StringBuilder;
invoke-direct {v2}, Ljava/lang/StringBuilder;-><init>()V
iget-object v3, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->this$0:Lcom/android/server/BackupManagerService;
iget-object v3, v3, Lcom/android/server/BackupManagerService;->mAdminPkgName:Ljava/lang/String;
invoke-virtual {v2, v3}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v2
invoke-virtual {v2, v7}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v2
const-string v3, ".new"
invoke-virtual {v2, v3}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v2
invoke-virtual {v2}, Ljava/lang/StringBuilder;->toString()Ljava/lang/String;
move-result-object v2
invoke-direct {v0, v1, v2}, Ljava/io/File;-><init>(Ljava/io/File;Ljava/lang/String;)V
iput-object v0, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mNewStateName:Ljava/io/File;
.line 4716
:goto_0
iget-object v0, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->this$0:Lcom/android/server/BackupManagerService;
invoke-virtual {v0}, Lcom/android/server/BackupManagerService;->generateToken()I
move-result v4
.line 4719
.local v4, token:I
:try_start_0
iget-object v0, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mBackupDataName:Ljava/io/File;
const/high16 v1, 0x3c00
invoke-static {v0, v1}, Landroid/os/ParcelFileDescriptor;->open(Ljava/io/File;I)Landroid/os/ParcelFileDescriptor;
move-result-object v0
iput-object v0, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mBackupData:Landroid/os/ParcelFileDescriptor;
.line 4724
iget-object v0, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mTransport:Lcom/android/internal/backup/IBackupTransport;
iget-object v1, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mBackupData:Landroid/os/ParcelFileDescriptor;
invoke-interface {v0, v1}, Lcom/android/internal/backup/IBackupTransport;->getRestoreData(Landroid/os/ParcelFileDescriptor;)I
move-result v0
if-eqz v0, :cond_1
.line 4727
const-string v0, "BackupManagerService"
new-instance v1, Ljava/lang/StringBuilder;
invoke-direct {v1}, Ljava/lang/StringBuilder;-><init>()V
const-string v2, "Error getting restore data for "
invoke-virtual {v1, v2}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v1
invoke-virtual {v1, v7}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v1
invoke-virtual {v1}, Ljava/lang/StringBuilder;->toString()Ljava/lang/String;
move-result-object v1
invoke-static {v0, v1}, Landroid/util/Slog;->e(Ljava/lang/String;Ljava/lang/String;)I
.line 4728
const/16 v0, 0xb0f
const/4 v1, 0x0
new-array v1, v1, [Ljava/lang/Object;
invoke-static {v0, v1}, Landroid/util/EventLog;->writeEvent(I[Ljava/lang/Object;)I
.line 4729
iget-object v0, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mBackupData:Landroid/os/ParcelFileDescriptor;
invoke-virtual {v0}, Landroid/os/ParcelFileDescriptor;->close()V
.line 4730
iget-object v0, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mBackupDataName:Ljava/io/File;
invoke-virtual {v0}, Ljava/io/File;->delete()Z
.line 4731
sget-object v0, Lcom/android/server/BackupManagerService$RestoreState;->FINAL:Lcom/android/server/BackupManagerService$RestoreState;
invoke-virtual {p0, v0}, Lcom/android/server/BackupManagerService$PerformRestoreTask;->executeNextState(Lcom/android/server/BackupManagerService$RestoreState;)V
:try_end_0
.catch Ljava/lang/Exception; {:try_start_0 .. :try_end_0} :catch_0
.line 4758
:goto_1
return-void
.line 4711
.end local v4 #token:I
:cond_0
new-instance v0, Ljava/io/File;
iget-object v1, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mStateDir:Ljava/io/File;
invoke-direct {v0, v1, v7}, Ljava/io/File;-><init>(Ljava/io/File;Ljava/lang/String;)V
iput-object v0, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mSavedStateName:Ljava/io/File;
.line 4712
new-instance v0, Ljava/io/File;
iget-object v1, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mStateDir:Ljava/io/File;
new-instance v2, Ljava/lang/StringBuilder;
invoke-direct {v2}, Ljava/lang/StringBuilder;-><init>()V
invoke-virtual {v2, v7}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v2
const-string v3, ".new"
invoke-virtual {v2, v3}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v2
invoke-virtual {v2}, Ljava/lang/StringBuilder;->toString()Ljava/lang/String;
move-result-object v2
invoke-direct {v0, v1, v2}, Ljava/io/File;-><init>(Ljava/io/File;Ljava/lang/String;)V
iput-object v0, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mNewStateName:Ljava/io/File;
goto :goto_0
.line 4736
.restart local v4 #token:I
:cond_1
:try_start_1
iget-object v0, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mBackupData:Landroid/os/ParcelFileDescriptor;
invoke-virtual {v0}, Landroid/os/ParcelFileDescriptor;->close()V
.line 4737
iget-object v0, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mBackupDataName:Ljava/io/File;
const/high16 v1, 0x1000
invoke-static {v0, v1}, Landroid/os/ParcelFileDescriptor;->open(Ljava/io/File;I)Landroid/os/ParcelFileDescriptor;
move-result-object v0
iput-object v0, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mBackupData:Landroid/os/ParcelFileDescriptor;
.line 4740
iget-object v0, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mNewStateName:Ljava/io/File;
const/high16 v1, 0x3c00
invoke-static {v0, v1}, Landroid/os/ParcelFileDescriptor;->open(Ljava/io/File;I)Landroid/os/ParcelFileDescriptor;
move-result-object v0
iput-object v0, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mNewState:Landroid/os/ParcelFileDescriptor;
.line 4746
iget-object v0, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->this$0:Lcom/android/server/BackupManagerService;
const-wide/32 v1, 0xea60
invoke-virtual {v0, v4, v1, v2, p0}, Lcom/android/server/BackupManagerService;->prepareOperationTimeout(IJLcom/android/server/BackupManagerService$BackupRestoreTask;)V
.line 4747
iget-object v1, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mBackupData:Landroid/os/ParcelFileDescriptor;
iget-object v3, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mNewState:Landroid/os/ParcelFileDescriptor;
iget-object v0, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->this$0:Lcom/android/server/BackupManagerService;
iget-object v5, v0, Lcom/android/server/BackupManagerService;->mBackupManagerBinder:Landroid/app/backup/IBackupManager;
move-object v0, p3
move v2, p2
invoke-interface/range {v0 .. v5}, Landroid/app/IBackupAgent;->doRestore(Landroid/os/ParcelFileDescriptor;ILandroid/os/ParcelFileDescriptor;ILandroid/app/backup/IBackupManager;)V
:try_end_1
.catch Ljava/lang/Exception; {:try_start_1 .. :try_end_1} :catch_0
goto :goto_1
.line 4748
:catch_0
move-exception v6
.line 4749
.local v6, e:Ljava/lang/Exception;
const-string v0, "BackupManagerService"
new-instance v1, Ljava/lang/StringBuilder;
invoke-direct {v1}, Ljava/lang/StringBuilder;-><init>()V
const-string v2, "Unable to call app for restore: "
invoke-virtual {v1, v2}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v1
invoke-virtual {v1, v7}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v1
invoke-virtual {v1}, Ljava/lang/StringBuilder;->toString()Ljava/lang/String;
move-result-object v1
invoke-static {v0, v1, v6}, Landroid/util/Slog;->e(Ljava/lang/String;Ljava/lang/String;Ljava/lang/Throwable;)I
.line 4750
const/16 v0, 0xb10
const/4 v1, 0x2
new-array v1, v1, [Ljava/lang/Object;
aput-object v7, v1, v8
const/4 v2, 0x1
invoke-virtual {v6}, Ljava/lang/Exception;->toString()Ljava/lang/String;
move-result-object v3
aput-object v3, v1, v2
invoke-static {v0, v1}, Landroid/util/EventLog;->writeEvent(I[Ljava/lang/Object;)I
.line 4751
invoke-virtual {p0}, Lcom/android/server/BackupManagerService$PerformRestoreTask;->agentErrorCleanup()V
.line 4756
sget-object v0, Lcom/android/server/BackupManagerService$RestoreState;->RUNNING_QUEUE:Lcom/android/server/BackupManagerService$RestoreState;
invoke-virtual {p0, v0}, Lcom/android/server/BackupManagerService$PerformRestoreTask;->executeNextState(Lcom/android/server/BackupManagerService$RestoreState;)V
goto/16 :goto_1
.end method
.method public operationComplete()V
.locals 6
.prologue
const/4 v5, 0x0
.line 4826
iget-object v1, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mBackupDataName:Ljava/io/File;
invoke-virtual {v1}, Ljava/io/File;->length()J
move-result-wide v1
long-to-int v0, v1
.line 4827
.local v0, size:I
const/16 v1, 0xb11
const/4 v2, 0x2
new-array v2, v2, [Ljava/lang/Object;
iget-object v3, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mCurrentPackage:Landroid/content/pm/PackageInfo;
iget-object v3, v3, Landroid/content/pm/PackageInfo;->packageName:Ljava/lang/String;
aput-object v3, v2, v5
const/4 v3, 0x1
invoke-static {v0}, Ljava/lang/Integer;->valueOf(I)Ljava/lang/Integer;
move-result-object v4
aput-object v4, v2, v3
invoke-static {v1, v2}, Landroid/util/EventLog;->writeEvent(I[Ljava/lang/Object;)I
.line 4829
invoke-virtual {p0}, Lcom/android/server/BackupManagerService$PerformRestoreTask;->agentCleanup()V
.line 4832
iget-object v1, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->this$0:Lcom/android/server/BackupManagerService;
iget-boolean v1, v1, Lcom/android/server/BackupManagerService;->isEdmRestoreRequest:Z
if-eqz v1, :cond_0
.line 4833
iget-object v1, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->this$0:Lcom/android/server/BackupManagerService;
#calls: Lcom/android/server/BackupManagerService;->resetEdmRestoreTags(I)V
invoke-static {v1, v5}, Lcom/android/server/BackupManagerService;->access$300(Lcom/android/server/BackupManagerService;I)V
.line 4836
:cond_0
sget-object v1, Lcom/android/server/BackupManagerService$RestoreState;->RUNNING_QUEUE:Lcom/android/server/BackupManagerService$RestoreState;
invoke-virtual {p0, v1}, Lcom/android/server/BackupManagerService$PerformRestoreTask;->executeNextState(Lcom/android/server/BackupManagerService$RestoreState;)V
.line 4837
return-void
.end method
.method restoreNextAgent()V
.locals 13
.prologue
const/4 v12, 0x1
.line 4524
:try_start_0
iget-object v8, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mTransport:Lcom/android/internal/backup/IBackupTransport;
invoke-interface {v8}, Lcom/android/internal/backup/IBackupTransport;->nextRestorePackage()Ljava/lang/String;
move-result-object v7
.line 4526
.local v7, packageName:Ljava/lang/String;
if-nez v7, :cond_0
.line 4527
const-string v8, "BackupManagerService"
const-string v9, "Error getting next restore package"
invoke-static {v8, v9}, Landroid/util/Slog;->e(Ljava/lang/String;Ljava/lang/String;)I
.line 4528
const/16 v8, 0xb0f
const/4 v9, 0x0
new-array v9, v9, [Ljava/lang/Object;
invoke-static {v8, v9}, Landroid/util/EventLog;->writeEvent(I[Ljava/lang/Object;)I
.line 4529
sget-object v8, Lcom/android/server/BackupManagerService$RestoreState;->FINAL:Lcom/android/server/BackupManagerService$RestoreState;
invoke-virtual {p0, v8}, Lcom/android/server/BackupManagerService$PerformRestoreTask;->executeNextState(Lcom/android/server/BackupManagerService$RestoreState;)V
.line 4636
.end local v7 #packageName:Ljava/lang/String;
:goto_0
return-void
.line 4531
.restart local v7 #packageName:Ljava/lang/String;
:cond_0
const-string v8, ""
invoke-virtual {v7, v8}, Ljava/lang/String;->equals(Ljava/lang/Object;)Z
move-result v8
if-eqz v8, :cond_1
.line 4532
const-string v8, "BackupManagerService"
const-string v9, "No next package, finishing restore"
invoke-static {v8, v9}, Landroid/util/Slog;->v(Ljava/lang/String;Ljava/lang/String;)I
.line 4533
invoke-static {}, Landroid/os/SystemClock;->elapsedRealtime()J
move-result-wide v8
iget-wide v10, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mStartRealtime:J
sub-long/2addr v8, v10
long-to-int v5, v8
.line 4534
.local v5, millis:I
const/16 v8, 0xb12
const/4 v9, 0x2
new-array v9, v9, [Ljava/lang/Object;
const/4 v10, 0x0
iget v11, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mCount:I
invoke-static {v11}, Ljava/lang/Integer;->valueOf(I)Ljava/lang/Integer;
move-result-object v11
aput-object v11, v9, v10
const/4 v10, 0x1
invoke-static {v5}, Ljava/lang/Integer;->valueOf(I)Ljava/lang/Integer;
move-result-object v11
aput-object v11, v9, v10
invoke-static {v8, v9}, Landroid/util/EventLog;->writeEvent(I[Ljava/lang/Object;)I
.line 4535
sget-object v8, Lcom/android/server/BackupManagerService$RestoreState;->FINAL:Lcom/android/server/BackupManagerService$RestoreState;
invoke-virtual {p0, v8}, Lcom/android/server/BackupManagerService$PerformRestoreTask;->executeNextState(Lcom/android/server/BackupManagerService$RestoreState;)V
:try_end_0
.catch Landroid/os/RemoteException; {:try_start_0 .. :try_end_0} :catch_0
goto :goto_0
.line 4631
.end local v5 #millis:I
.end local v7 #packageName:Ljava/lang/String;
:catch_0
move-exception v1
.line 4632
.local v1, e:Landroid/os/RemoteException;
const-string v8, "BackupManagerService"
const-string v9, "Unable to fetch restore data from transport"
invoke-static {v8, v9}, Landroid/util/Slog;->e(Ljava/lang/String;Ljava/lang/String;)I
.line 4633
iput v12, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mStatus:I
.line 4634
sget-object v8, Lcom/android/server/BackupManagerService$RestoreState;->FINAL:Lcom/android/server/BackupManagerService$RestoreState;
invoke-virtual {p0, v8}, Lcom/android/server/BackupManagerService$PerformRestoreTask;->executeNextState(Lcom/android/server/BackupManagerService$RestoreState;)V
goto :goto_0
.line 4539
.end local v1 #e:Landroid/os/RemoteException;
.restart local v7 #packageName:Ljava/lang/String;
:cond_1
:try_start_1
iget-object v8, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mObserver:Landroid/app/backup/IRestoreObserver;
:try_end_1
.catch Landroid/os/RemoteException; {:try_start_1 .. :try_end_1} :catch_0
if-eqz v8, :cond_2
.line 4541
:try_start_2
iget-object v8, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mObserver:Landroid/app/backup/IRestoreObserver;
iget v9, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mCount:I
invoke-interface {v8, v9, v7}, Landroid/app/backup/IRestoreObserver;->onUpdate(ILjava/lang/String;)V
:try_end_2
.catch Landroid/os/RemoteException; {:try_start_2 .. :try_end_2} :catch_1
.line 4549
:cond_2
:goto_1
const/4 v4, 0x0
.line 4550
.local v4, metaInfo:Lcom/android/server/PackageManagerBackupAgent$Metadata;
:try_start_3
iget-object v8, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->this$0:Lcom/android/server/BackupManagerService;
iget-boolean v8, v8, Lcom/android/server/BackupManagerService;->isEdmRestoreRequest:Z
if-eqz v8, :cond_3
.line 4551
new-instance v8, Lcom/android/server/PackageManagerBackupAgent;
iget-object v9, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->this$0:Lcom/android/server/BackupManagerService;
#getter for: Lcom/android/server/BackupManagerService;->mPackageManager:Landroid/content/pm/PackageManager;
invoke-static {v9}, Lcom/android/server/BackupManagerService;->access$1000(Lcom/android/server/BackupManagerService;)Landroid/content/pm/PackageManager;
move-result-object v9
const/4 v10, 0x0
invoke-direct {v8, v9, v10}, Lcom/android/server/PackageManagerBackupAgent;-><init>(Landroid/content/pm/PackageManager;Ljava/util/List;)V
iput-object v8, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mPmAgent:Lcom/android/server/PackageManagerBackupAgent;
.line 4552
iget-object v8, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mPmAgent:Lcom/android/server/PackageManagerBackupAgent;
iget-object v9, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->this$0:Lcom/android/server/BackupManagerService;
iget v9, v9, Lcom/android/server/BackupManagerService;->mEdmRestoreAppVersionCode:I
iget-object v10, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->this$0:Lcom/android/server/BackupManagerService;
iget-object v10, v10, Lcom/android/server/BackupManagerService;->mEdmRestoreAppSignatures:[Landroid/content/pm/Signature;
invoke-virtual {v8, v9, v10}, Lcom/android/server/PackageManagerBackupAgent;->getNewMetada(I[Landroid/content/pm/Signature;)Lcom/android/server/PackageManagerBackupAgent$Metadata;
move-result-object v4
.line 4557
:goto_2
if-nez v4, :cond_4
.line 4558
const-string v8, "BackupManagerService"
new-instance v9, Ljava/lang/StringBuilder;
invoke-direct {v9}, Ljava/lang/StringBuilder;-><init>()V
const-string v10, "Missing metadata for "
invoke-virtual {v9, v10}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v9
invoke-virtual {v9, v7}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v9
invoke-virtual {v9}, Ljava/lang/StringBuilder;->toString()Ljava/lang/String;
move-result-object v9
invoke-static {v8, v9}, Landroid/util/Slog;->e(Ljava/lang/String;Ljava/lang/String;)I
.line 4559
const/16 v8, 0xb10
const/4 v9, 0x2
new-array v9, v9, [Ljava/lang/Object;
const/4 v10, 0x0
aput-object v7, v9, v10
const/4 v10, 0x1
const-string v11, "Package metadata missing"
aput-object v11, v9, v10
invoke-static {v8, v9}, Landroid/util/EventLog;->writeEvent(I[Ljava/lang/Object;)I
.line 4561
sget-object v8, Lcom/android/server/BackupManagerService$RestoreState;->RUNNING_QUEUE:Lcom/android/server/BackupManagerService$RestoreState;
invoke-virtual {p0, v8}, Lcom/android/server/BackupManagerService$PerformRestoreTask;->executeNextState(Lcom/android/server/BackupManagerService$RestoreState;)V
goto/16 :goto_0
.line 4542
.end local v4 #metaInfo:Lcom/android/server/PackageManagerBackupAgent$Metadata;
:catch_1
move-exception v1
.line 4543
.restart local v1 #e:Landroid/os/RemoteException;
const-string v8, "BackupManagerService"
const-string v9, "Restore observer died in onUpdate"
invoke-static {v8, v9}, Landroid/util/Slog;->d(Ljava/lang/String;Ljava/lang/String;)I
.line 4544
const/4 v8, 0x0
iput-object v8, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mObserver:Landroid/app/backup/IRestoreObserver;
goto :goto_1
.line 4554
.end local v1 #e:Landroid/os/RemoteException;
.restart local v4 #metaInfo:Lcom/android/server/PackageManagerBackupAgent$Metadata;
:cond_3
iget-object v8, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mPmAgent:Lcom/android/server/PackageManagerBackupAgent;
invoke-virtual {v8, v7}, Lcom/android/server/PackageManagerBackupAgent;->getRestoredMetadata(Ljava/lang/String;)Lcom/android/server/PackageManagerBackupAgent$Metadata;
:try_end_3
.catch Landroid/os/RemoteException; {:try_start_3 .. :try_end_3} :catch_0
move-result-object v4
goto :goto_2
.line 4567
:cond_4
const/16 v2, 0x40
.line 4568
.local v2, flags:I
:try_start_4
iget-object v8, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->this$0:Lcom/android/server/BackupManagerService;
#getter for: Lcom/android/server/BackupManagerService;->mPackageManager:Landroid/content/pm/PackageManager;
invoke-static {v8}, Lcom/android/server/BackupManagerService;->access$1000(Lcom/android/server/BackupManagerService;)Landroid/content/pm/PackageManager;
move-result-object v8
invoke-virtual {v8, v7, v2}, Landroid/content/pm/PackageManager;->getPackageInfo(Ljava/lang/String;I)Landroid/content/pm/PackageInfo;
:try_end_4
.catch Landroid/content/pm/PackageManager$NameNotFoundException; {:try_start_4 .. :try_end_4} :catch_2
.catch Landroid/os/RemoteException; {:try_start_4 .. :try_end_4} :catch_0
move-result-object v6
.line 4577
.local v6, packageInfo:Landroid/content/pm/PackageInfo;
:try_start_5
iget v8, v4, Lcom/android/server/PackageManagerBackupAgent$Metadata;->versionCode:I
iget v9, v6, Landroid/content/pm/PackageInfo;->versionCode:I
if-le v8, v9, :cond_6
.line 4581
iget-object v8, v6, Landroid/content/pm/PackageInfo;->applicationInfo:Landroid/content/pm/ApplicationInfo;
iget v8, v8, Landroid/content/pm/ApplicationInfo;->flags:I
const/high16 v9, 0x2
and-int/2addr v8, v9
if-nez v8, :cond_5
.line 4583
new-instance v8, Ljava/lang/StringBuilder;
invoke-direct {v8}, Ljava/lang/StringBuilder;-><init>()V
const-string v9, "Version "
invoke-virtual {v8, v9}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v8
iget v9, v4, Lcom/android/server/PackageManagerBackupAgent$Metadata;->versionCode:I
invoke-virtual {v8, v9}, Ljava/lang/StringBuilder;->append(I)Ljava/lang/StringBuilder;
move-result-object v8
const-string v9, " > installed version "
invoke-virtual {v8, v9}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v8
iget v9, v6, Landroid/content/pm/PackageInfo;->versionCode:I
invoke-virtual {v8, v9}, Ljava/lang/StringBuilder;->append(I)Ljava/lang/StringBuilder;
move-result-object v8
invoke-virtual {v8}, Ljava/lang/StringBuilder;->toString()Ljava/lang/String;
move-result-object v3
.line 4585
.local v3, message:Ljava/lang/String;
const-string v8, "BackupManagerService"
new-instance v9, Ljava/lang/StringBuilder;
invoke-direct {v9}, Ljava/lang/StringBuilder;-><init>()V
const-string v10, "Package "
invoke-virtual {v9, v10}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v9
invoke-virtual {v9, v7}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v9
const-string v10, ": "
invoke-virtual {v9, v10}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v9
invoke-virtual {v9, v3}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v9
invoke-virtual {v9}, Ljava/lang/StringBuilder;->toString()Ljava/lang/String;
move-result-object v9
invoke-static {v8, v9}, Landroid/util/Slog;->w(Ljava/lang/String;Ljava/lang/String;)I
.line 4586
const/16 v8, 0xb10
const/4 v9, 0x2
new-array v9, v9, [Ljava/lang/Object;
const/4 v10, 0x0
aput-object v7, v9, v10
const/4 v10, 0x1
aput-object v3, v9, v10
invoke-static {v8, v9}, Landroid/util/EventLog;->writeEvent(I[Ljava/lang/Object;)I
.line 4588
sget-object v8, Lcom/android/server/BackupManagerService$RestoreState;->RUNNING_QUEUE:Lcom/android/server/BackupManagerService$RestoreState;
invoke-virtual {p0, v8}, Lcom/android/server/BackupManagerService$PerformRestoreTask;->executeNextState(Lcom/android/server/BackupManagerService$RestoreState;)V
goto/16 :goto_0
.line 4569
.end local v3 #message:Ljava/lang/String;
.end local v6 #packageInfo:Landroid/content/pm/PackageInfo;
:catch_2
move-exception v1
.line 4570
.local v1, e:Landroid/content/pm/PackageManager$NameNotFoundException;
const-string v8, "BackupManagerService"
const-string v9, "Invalid package restoring data"
invoke-static {v8, v9, v1}, Landroid/util/Slog;->e(Ljava/lang/String;Ljava/lang/String;Ljava/lang/Throwable;)I
.line 4571
const/16 v8, 0xb10
const/4 v9, 0x2
new-array v9, v9, [Ljava/lang/Object;
const/4 v10, 0x0
aput-object v7, v9, v10
const/4 v10, 0x1
const-string v11, "Package missing on device"
aput-object v11, v9, v10
invoke-static {v8, v9}, Landroid/util/EventLog;->writeEvent(I[Ljava/lang/Object;)I
.line 4573
sget-object v8, Lcom/android/server/BackupManagerService$RestoreState;->RUNNING_QUEUE:Lcom/android/server/BackupManagerService$RestoreState;
invoke-virtual {p0, v8}, Lcom/android/server/BackupManagerService$PerformRestoreTask;->executeNextState(Lcom/android/server/BackupManagerService$RestoreState;)V
goto/16 :goto_0
.line 4591
.end local v1 #e:Landroid/content/pm/PackageManager$NameNotFoundException;
.restart local v6 #packageInfo:Landroid/content/pm/PackageInfo;
:cond_5
const-string v8, "BackupManagerService"
new-instance v9, Ljava/lang/StringBuilder;
invoke-direct {v9}, Ljava/lang/StringBuilder;-><init>()V
const-string v10, "Version "
invoke-virtual {v9, v10}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v9
iget v10, v4, Lcom/android/server/PackageManagerBackupAgent$Metadata;->versionCode:I
invoke-virtual {v9, v10}, Ljava/lang/StringBuilder;->append(I)Ljava/lang/StringBuilder;
move-result-object v9
const-string v10, " > installed "
invoke-virtual {v9, v10}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v9
iget v10, v6, Landroid/content/pm/PackageInfo;->versionCode:I
invoke-virtual {v9, v10}, Ljava/lang/StringBuilder;->append(I)Ljava/lang/StringBuilder;
move-result-object v9
const-string v10, " but restoreAnyVersion"
invoke-virtual {v9, v10}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v9
invoke-virtual {v9}, Ljava/lang/StringBuilder;->toString()Ljava/lang/String;
move-result-object v9
invoke-static {v8, v9}, Landroid/util/Slog;->v(Ljava/lang/String;Ljava/lang/String;)I
.line 4597
:cond_6
iget-object v8, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->this$0:Lcom/android/server/BackupManagerService;
iget-object v9, v4, Lcom/android/server/PackageManagerBackupAgent$Metadata;->signatures:[Landroid/content/pm/Signature;
#calls: Lcom/android/server/BackupManagerService;->signaturesMatch([Landroid/content/pm/Signature;Landroid/content/pm/PackageInfo;)Z
invoke-static {v8, v9, v6}, Lcom/android/server/BackupManagerService;->access$2100(Lcom/android/server/BackupManagerService;[Landroid/content/pm/Signature;Landroid/content/pm/PackageInfo;)Z
move-result v8
if-nez v8, :cond_7
.line 4598
const-string v8, "BackupManagerService"
new-instance v9, Ljava/lang/StringBuilder;
invoke-direct {v9}, Ljava/lang/StringBuilder;-><init>()V
const-string v10, "Signature mismatch restoring "
invoke-virtual {v9, v10}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v9
invoke-virtual {v9, v7}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v9
invoke-virtual {v9}, Ljava/lang/StringBuilder;->toString()Ljava/lang/String;
move-result-object v9
invoke-static {v8, v9}, Landroid/util/Slog;->w(Ljava/lang/String;Ljava/lang/String;)I
.line 4599
const/16 v8, 0xb10
const/4 v9, 0x2
new-array v9, v9, [Ljava/lang/Object;
const/4 v10, 0x0
aput-object v7, v9, v10
const/4 v10, 0x1
const-string v11, "Signature mismatch"
aput-object v11, v9, v10
invoke-static {v8, v9}, Landroid/util/EventLog;->writeEvent(I[Ljava/lang/Object;)I
.line 4601
sget-object v8, Lcom/android/server/BackupManagerService$RestoreState;->RUNNING_QUEUE:Lcom/android/server/BackupManagerService$RestoreState;
invoke-virtual {p0, v8}, Lcom/android/server/BackupManagerService$PerformRestoreTask;->executeNextState(Lcom/android/server/BackupManagerService$RestoreState;)V
goto/16 :goto_0
.line 4605
:cond_7
const-string v8, "BackupManagerService"
new-instance v9, Ljava/lang/StringBuilder;
invoke-direct {v9}, Ljava/lang/StringBuilder;-><init>()V
const-string v10, "Package "
invoke-virtual {v9, v10}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v9
invoke-virtual {v9, v7}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v9
const-string v10, " restore version ["
invoke-virtual {v9, v10}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v9
iget v10, v4, Lcom/android/server/PackageManagerBackupAgent$Metadata;->versionCode:I
invoke-virtual {v9, v10}, Ljava/lang/StringBuilder;->append(I)Ljava/lang/StringBuilder;
move-result-object v9
const-string v10, "] is compatible with installed version ["
invoke-virtual {v9, v10}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v9
iget v10, v6, Landroid/content/pm/PackageInfo;->versionCode:I
invoke-virtual {v9, v10}, Ljava/lang/StringBuilder;->append(I)Ljava/lang/StringBuilder;
move-result-object v9
const-string v10, "]"
invoke-virtual {v9, v10}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v9
invoke-virtual {v9}, Ljava/lang/StringBuilder;->toString()Ljava/lang/String;
move-result-object v9
invoke-static {v8, v9}, Landroid/util/Slog;->v(Ljava/lang/String;Ljava/lang/String;)I
.line 4611
iget-object v8, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->this$0:Lcom/android/server/BackupManagerService;
iget-object v9, v6, Landroid/content/pm/PackageInfo;->applicationInfo:Landroid/content/pm/ApplicationInfo;
const/4 v10, 0x0
invoke-virtual {v8, v9, v10}, Lcom/android/server/BackupManagerService;->bindToAgentSynchronous(Landroid/content/pm/ApplicationInfo;I)Landroid/app/IBackupAgent;
move-result-object v0
.line 4614
.local v0, agent:Landroid/app/IBackupAgent;
if-nez v0, :cond_8
.line 4615
const-string v8, "BackupManagerService"
new-instance v9, Ljava/lang/StringBuilder;
invoke-direct {v9}, Ljava/lang/StringBuilder;-><init>()V
const-string v10, "Can\'t find backup agent for "
invoke-virtual {v9, v10}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v9
invoke-virtual {v9, v7}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v9
invoke-virtual {v9}, Ljava/lang/StringBuilder;->toString()Ljava/lang/String;
move-result-object v9
invoke-static {v8, v9}, Landroid/util/Slog;->w(Ljava/lang/String;Ljava/lang/String;)I
.line 4616
const/16 v8, 0xb10
const/4 v9, 0x2
new-array v9, v9, [Ljava/lang/Object;
const/4 v10, 0x0
aput-object v7, v9, v10
const/4 v10, 0x1
const-string v11, "Restore agent missing"
aput-object v11, v9, v10
invoke-static {v8, v9}, Landroid/util/EventLog;->writeEvent(I[Ljava/lang/Object;)I
.line 4618
sget-object v8, Lcom/android/server/BackupManagerService$RestoreState;->RUNNING_QUEUE:Lcom/android/server/BackupManagerService$RestoreState;
invoke-virtual {p0, v8}, Lcom/android/server/BackupManagerService$PerformRestoreTask;->executeNextState(Lcom/android/server/BackupManagerService$RestoreState;)V
:try_end_5
.catch Landroid/os/RemoteException; {:try_start_5 .. :try_end_5} :catch_0
goto/16 :goto_0
.line 4624
:cond_8
:try_start_6
iget v8, v4, Lcom/android/server/PackageManagerBackupAgent$Metadata;->versionCode:I
iget-boolean v9, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mNeedFullBackup:Z
invoke-virtual {p0, v6, v8, v0, v9}, Lcom/android/server/BackupManagerService$PerformRestoreTask;->initiateOneRestore(Landroid/content/pm/PackageInfo;ILandroid/app/IBackupAgent;Z)V
.line 4625
iget v8, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mCount:I
add-int/lit8 v8, v8, 0x1
iput v8, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mCount:I
:try_end_6
.catch Ljava/lang/Exception; {:try_start_6 .. :try_end_6} :catch_3
.catch Landroid/os/RemoteException; {:try_start_6 .. :try_end_6} :catch_0
goto/16 :goto_0
.line 4626
:catch_3
move-exception v1
.line 4627
.local v1, e:Ljava/lang/Exception;
:try_start_7
const-string v8, "BackupManagerService"
new-instance v9, Ljava/lang/StringBuilder;
invoke-direct {v9}, Ljava/lang/StringBuilder;-><init>()V
const-string v10, "Error when attempting restore: "
invoke-virtual {v9, v10}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v9
invoke-virtual {v1}, Ljava/lang/Exception;->toString()Ljava/lang/String;
move-result-object v10
invoke-virtual {v9, v10}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v9
invoke-virtual {v9}, Ljava/lang/StringBuilder;->toString()Ljava/lang/String;
move-result-object v9
invoke-static {v8, v9}, Landroid/util/Slog;->e(Ljava/lang/String;Ljava/lang/String;)I
.line 4628
invoke-virtual {p0}, Lcom/android/server/BackupManagerService$PerformRestoreTask;->agentErrorCleanup()V
.line 4629
sget-object v8, Lcom/android/server/BackupManagerService$RestoreState;->RUNNING_QUEUE:Lcom/android/server/BackupManagerService$RestoreState;
invoke-virtual {p0, v8}, Lcom/android/server/BackupManagerService$PerformRestoreTask;->executeNextState(Lcom/android/server/BackupManagerService$RestoreState;)V
:try_end_7
.catch Landroid/os/RemoteException; {:try_start_7 .. :try_end_7} :catch_0
goto/16 :goto_0
.end method
.method restorePmMetadata()V
.locals 12
.prologue
const/16 v11, 0xb0f
const/16 v10, 0x14
const/4 v9, 0x1
const/4 v8, 0x0
.line 4461
:try_start_0
iget-object v4, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mTransport:Lcom/android/internal/backup/IBackupTransport;
invoke-interface {v4}, Lcom/android/internal/backup/IBackupTransport;->nextRestorePackage()Ljava/lang/String;
move-result-object v3
.line 4462
.local v3, packageName:Ljava/lang/String;
if-nez v3, :cond_1
.line 4463
const-string v4, "BackupManagerService"
const-string v5, "Error getting first restore package"
invoke-static {v4, v5}, Landroid/util/Slog;->e(Ljava/lang/String;Ljava/lang/String;)I
.line 4464
const/16 v4, 0xb0f
const/4 v5, 0x0
new-array v5, v5, [Ljava/lang/Object;
invoke-static {v4, v5}, Landroid/util/EventLog;->writeEvent(I[Ljava/lang/Object;)I
.line 4465
const/4 v4, 0x1
iput v4, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mStatus:I
.line 4466
sget-object v4, Lcom/android/server/BackupManagerService$RestoreState;->FINAL:Lcom/android/server/BackupManagerService$RestoreState;
invoke-virtual {p0, v4}, Lcom/android/server/BackupManagerService$PerformRestoreTask;->executeNextState(Lcom/android/server/BackupManagerService$RestoreState;)V
.line 4520
.end local v3 #packageName:Ljava/lang/String;
:cond_0
:goto_0
return-void
.line 4468
.restart local v3 #packageName:Ljava/lang/String;
:cond_1
const-string v4, ""
invoke-virtual {v3, v4}, Ljava/lang/String;->equals(Ljava/lang/Object;)Z
move-result v4
if-eqz v4, :cond_2
.line 4469
const-string v4, "BackupManagerService"
const-string v5, "No restore data available"
invoke-static {v4, v5}, Landroid/util/Slog;->i(Ljava/lang/String;Ljava/lang/String;)I
.line 4470
invoke-static {}, Landroid/os/SystemClock;->elapsedRealtime()J
move-result-wide v4
iget-wide v6, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mStartRealtime:J
sub-long/2addr v4, v6
long-to-int v1, v4
.line 4471
.local v1, millis:I
const/16 v4, 0xb12
const/4 v5, 0x2
new-array v5, v5, [Ljava/lang/Object;
const/4 v6, 0x0
const/4 v7, 0x0
invoke-static {v7}, Ljava/lang/Integer;->valueOf(I)Ljava/lang/Integer;
move-result-object v7
aput-object v7, v5, v6
const/4 v6, 0x1
invoke-static {v1}, Ljava/lang/Integer;->valueOf(I)Ljava/lang/Integer;
move-result-object v7
aput-object v7, v5, v6
invoke-static {v4, v5}, Landroid/util/EventLog;->writeEvent(I[Ljava/lang/Object;)I
.line 4472
const/4 v4, 0x0
iput v4, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mStatus:I
.line 4473
sget-object v4, Lcom/android/server/BackupManagerService$RestoreState;->FINAL:Lcom/android/server/BackupManagerService$RestoreState;
invoke-virtual {p0, v4}, Lcom/android/server/BackupManagerService$PerformRestoreTask;->executeNextState(Lcom/android/server/BackupManagerService$RestoreState;)V
:try_end_0
.catch Landroid/os/RemoteException; {:try_start_0 .. :try_end_0} :catch_0
goto :goto_0
.line 4509
.end local v1 #millis:I
.end local v3 #packageName:Ljava/lang/String;
:catch_0
move-exception v0
.line 4510
.local v0, e:Landroid/os/RemoteException;
const-string v4, "BackupManagerService"
const-string v5, "Error communicating with transport for restore"
invoke-static {v4, v5}, Landroid/util/Slog;->e(Ljava/lang/String;Ljava/lang/String;)I
.line 4511
new-array v4, v8, [Ljava/lang/Object;
invoke-static {v11, v4}, Landroid/util/EventLog;->writeEvent(I[Ljava/lang/Object;)I
.line 4512
iput v9, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mStatus:I
.line 4513
iget-object v4, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->this$0:Lcom/android/server/BackupManagerService;
iget-object v4, v4, Lcom/android/server/BackupManagerService;->mBackupHandler:Lcom/android/server/BackupManagerService$BackupHandler;
invoke-virtual {v4, v10, p0}, Lcom/android/server/BackupManagerService$BackupHandler;->removeMessages(ILjava/lang/Object;)V
.line 4514
sget-object v4, Lcom/android/server/BackupManagerService$RestoreState;->FINAL:Lcom/android/server/BackupManagerService$RestoreState;
invoke-virtual {p0, v4}, Lcom/android/server/BackupManagerService$PerformRestoreTask;->executeNextState(Lcom/android/server/BackupManagerService$RestoreState;)V
goto :goto_0
.line 4475
.end local v0 #e:Landroid/os/RemoteException;
.restart local v3 #packageName:Ljava/lang/String;
:cond_2
:try_start_1
const-string v4, "@pm@"
invoke-virtual {v3, v4}, Ljava/lang/String;->equals(Ljava/lang/Object;)Z
move-result v4
if-nez v4, :cond_3
.line 4476
const-string v4, "BackupManagerService"
new-instance v5, Ljava/lang/StringBuilder;
invoke-direct {v5}, Ljava/lang/StringBuilder;-><init>()V
const-string v6, "Expected restore data for \"@pm@\", found only \""
invoke-virtual {v5, v6}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v5
invoke-virtual {v5, v3}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v5
const-string v6, "\""
invoke-virtual {v5, v6}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v5
invoke-virtual {v5}, Ljava/lang/StringBuilder;->toString()Ljava/lang/String;
move-result-object v5
invoke-static {v4, v5}, Landroid/util/Slog;->e(Ljava/lang/String;Ljava/lang/String;)I
.line 4478
const/16 v4, 0xb10
const/4 v5, 0x2
new-array v5, v5, [Ljava/lang/Object;
const/4 v6, 0x0
const-string v7, "@pm@"
aput-object v7, v5, v6
const/4 v6, 0x1
const-string v7, "Package manager data missing"
aput-object v7, v5, v6
invoke-static {v4, v5}, Landroid/util/EventLog;->writeEvent(I[Ljava/lang/Object;)I
.line 4480
sget-object v4, Lcom/android/server/BackupManagerService$RestoreState;->FINAL:Lcom/android/server/BackupManagerService$RestoreState;
invoke-virtual {p0, v4}, Lcom/android/server/BackupManagerService$PerformRestoreTask;->executeNextState(Lcom/android/server/BackupManagerService$RestoreState;)V
goto/16 :goto_0
.line 4485
:cond_3
new-instance v2, Landroid/content/pm/PackageInfo;
invoke-direct {v2}, Landroid/content/pm/PackageInfo;-><init>()V
.line 4486
.local v2, omPackage:Landroid/content/pm/PackageInfo;
const-string v4, "@pm@"
iput-object v4, v2, Landroid/content/pm/PackageInfo;->packageName:Ljava/lang/String;
.line 4487
new-instance v4, Lcom/android/server/PackageManagerBackupAgent;
iget-object v5, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->this$0:Lcom/android/server/BackupManagerService;
#getter for: Lcom/android/server/BackupManagerService;->mPackageManager:Landroid/content/pm/PackageManager;
invoke-static {v5}, Lcom/android/server/BackupManagerService;->access$1000(Lcom/android/server/BackupManagerService;)Landroid/content/pm/PackageManager;
move-result-object v5
iget-object v6, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mAgentPackages:Ljava/util/List;
invoke-direct {v4, v5, v6}, Lcom/android/server/PackageManagerBackupAgent;-><init>(Landroid/content/pm/PackageManager;Ljava/util/List;)V
iput-object v4, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mPmAgent:Lcom/android/server/PackageManagerBackupAgent;
.line 4489
const/4 v4, 0x0
iget-object v5, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mPmAgent:Lcom/android/server/PackageManagerBackupAgent;
invoke-virtual {v5}, Lcom/android/server/PackageManagerBackupAgent;->onBind()Landroid/os/IBinder;
move-result-object v5
invoke-static {v5}, Landroid/app/IBackupAgent$Stub;->asInterface(Landroid/os/IBinder;)Landroid/app/IBackupAgent;
move-result-object v5
iget-boolean v6, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mNeedFullBackup:Z
invoke-virtual {p0, v2, v4, v5, v6}, Lcom/android/server/BackupManagerService$PerformRestoreTask;->initiateOneRestore(Landroid/content/pm/PackageInfo;ILandroid/app/IBackupAgent;Z)V
.line 4500
iget-object v4, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mPmAgent:Lcom/android/server/PackageManagerBackupAgent;
invoke-virtual {v4}, Lcom/android/server/PackageManagerBackupAgent;->hasMetadata()Z
move-result v4
if-nez v4, :cond_0
.line 4501
const-string v4, "BackupManagerService"
const-string v5, "No restore metadata available, so not restoring settings"
invoke-static {v4, v5}, Landroid/util/Slog;->e(Ljava/lang/String;Ljava/lang/String;)I
.line 4502
const/16 v4, 0xb10
const/4 v5, 0x2
new-array v5, v5, [Ljava/lang/Object;
const/4 v6, 0x0
const-string v7, "@pm@"
aput-object v7, v5, v6
const/4 v6, 0x1
const-string v7, "Package manager restore metadata missing"
aput-object v7, v5, v6
invoke-static {v4, v5}, Landroid/util/EventLog;->writeEvent(I[Ljava/lang/Object;)I
.line 4504
const/4 v4, 0x1
iput v4, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->mStatus:I
.line 4505
iget-object v4, p0, Lcom/android/server/BackupManagerService$PerformRestoreTask;->this$0:Lcom/android/server/BackupManagerService;
iget-object v4, v4, Lcom/android/server/BackupManagerService;->mBackupHandler:Lcom/android/server/BackupManagerService$BackupHandler;
const/16 v5, 0x14
invoke-virtual {v4, v5, p0}, Lcom/android/server/BackupManagerService$BackupHandler;->removeMessages(ILjava/lang/Object;)V
.line 4506
sget-object v4, Lcom/android/server/BackupManagerService$RestoreState;->FINAL:Lcom/android/server/BackupManagerService$RestoreState;
invoke-virtual {p0, v4}, Lcom/android/server/BackupManagerService$PerformRestoreTask;->executeNextState(Lcom/android/server/BackupManagerService$RestoreState;)V
:try_end_1
.catch Landroid/os/RemoteException; {:try_start_1 .. :try_end_1} :catch_0
goto/16 :goto_0
.end method
# How do gravitational waves agree with Lorentz invariance?

Following is a simple but incorrect explanation for gravitational waves. My question is what is wrong with it?

I'd like to say that a gravitational wave is a periodic variation in the local gravitational field. For example, suppose the Earth is not rotating, for simplicity, and that the Moon orbits the Earth every 28 days. In this case, an observer on Earth would observe the Moon's gravitational field changing with a 28 day period, which seems to me would be an observation of a gravitational wave. The observer could also go sit on Pluto and measure the local gravitational field there changing from Earth's Moon. Again, this person would see it varying with a period of 28 days, but now delayed by about 5 hours due to the gravity transit time from Earth to Pluto. Again, this seems to me like an observation of a gravitational wave, but from a little farther away.

A problem with this explanation can be seen with the Earth measurement. From this explanation, I would expect the wave phase at Earth to be delayed by about 1 second from the Moon's position due to the fact that it takes light (and gravity) about 1 second to get from the Moon to the Earth. This seems reasonable on the surface, but it violates Lorentz invariance, which in this case states that the gravitational field direction for an object moving at constant velocity should point directly toward the object (see Wikipedia, "Speed of gravity"). The same issue applies for the Pluto measurement, too. Intuitively, it seems hard to believe that there isn't a delay between the Moon's gravity and its measurement on Pluto, but that's what Lorentz invariance says. Admittedly, the Moon is accelerating very slowly, but that was not a central part of my explanation.

So, is my explanation some sort of "near-field" effect, and distinct from actual gravitational waves? Or am I missing something else?

Thanks for any replies.

-Steve

- Equations of motion are Lorentz invariant. – Avantgarde, Nov 20 '18 at 22:37
- The Moon is not moving at constant velocity. If you want to say that it approximately is, you also need to allow that the position of the Moon one second ago is almost the same as its position now. – Javier, Nov 20 '18 at 23:15
- The effect you're describing is not a radiation effect at all. Your effect is proportional to the moon's mass, but gravitational radiation from the earth-moon system would be proportional to the square of the moon's mass. I don't think this is any different from E&M. A very slowly rotating electric dipole will produce an oscillating electric dipole field, but it's not a radiation field, and in this limit of low frequency the Poynting vector goes to zero. – user4552, Nov 21 '18 at 1:50
- @BenCrowell. Thanks for the comment, which I agree with. I'm now thinking in terms of a gravitational near field. See my comment below. If you have insights on this, I'd be very interested. – user2419194, Nov 21 '18 at 5:34

Gravity

Due to the presence of mass (and energy), gravity distorts spacetime. The strength of the gravitational effect attenuates in proportion to $1/r^2$.

Gravitational Waves

Gravitational waves are a type of spacetime distortion that sustains and propagates at the speed of light. They are created by accelerating mass in certain circumstances (such as asymmetric rotation). They attenuate more slowly, in proportion to $1/r$.

The effect of a gravitational wave is related to the size of the change in the gravitational field at the source, rather than the size of the gravitational field itself (although larger gravitational fields are more likely to produce larger gravitational changes, but not always). At relatively close range they are typically much smaller than the effect of the gravity of the mass that creates them.

The Earth–Moon system will create gravitational waves, but because of their small magnitude and our proximity to the originating gravitational mass they cannot be distinguished.

- Thanks for this answer. This reinforces my idea that the Moon's periodic effect on the Earth represents a gravitational near field. Like the EM near field, its effect dies off much more quickly than the radiative waves. Also like the EM near field, the "receiver" (the Earth) affects the "transmitter" (the Moon); in contrast, receiver and transmitter are decoupled for either EM or gravitational radiation. Can someone either support or refute this connection? – user2419194, Nov 21 '18 at 5:32
- A difference between EM radiation and gravitational radiation is that in the EM case the simplest form of radiation is electric dipole, whereas the simplest gravitational radiation is mass quadrupole. One effect of the quadrupole character is that for a binary system with period $T$ the fundamental radiated frequency is $2/T$, not $1/T$. Just for fun, the power radiated by the Earth–Moon system amounts to $\simeq 7\,\mu\mathrm{W}$. – Elio Fabri, Nov 21 '18 at 15:15
- @ElioFabri. These are good points. For those who want a link to the 7 $\mu$W number and some more good discussion, I found it here. As an aside, searching the research literature shows that "gravitational near field" is not a new term, but is not used much. – user2419194, Nov 21 '18 at 19:03
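The ≈7 µW figure quoted in the discussion above can be reproduced with the standard quadrupole luminosity formula for a circular binary, P = (32/5)(G⁴/c⁵)(m₁m₂)²(m₁+m₂)/a⁵. A rough sketch (the constants and mean Earth–Moon separation below are approximate textbook values, not taken from the thread):

```python
# Quadrupole luminosity of a circular binary:
#   P = (32/5) * (G**4 / c**5) * (m1*m2)**2 * (m1 + m2) / a**5
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s

def binary_gw_power(m1, m2, a):
    """Gravitational-wave power (watts) radiated by two point masses
    m1, m2 (kg) in a circular orbit of separation a (metres)."""
    return (32 / 5) * (G**4 / c**5) * (m1 * m2) ** 2 * (m1 + m2) / a**5

m_earth = 5.972e24   # kg
m_moon = 7.342e22    # kg
a_moon = 3.844e8     # m, mean Earth-Moon separation

# Comes out around 7e-6 W, i.e. roughly 7 microwatts.
print(binary_gw_power(m_earth, m_moon, a_moon))
```

Note how the result is proportional to (m₁m₂)², consistent with the comment that radiation scales with the square of the Moon's mass, unlike the near-field effect.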
AKT2 is one of 3 closely related serine/threonine-protein kinases (AKT1, AKT2 and AKT3) called the AKT kinases, which regulate many processes including metabolism, proliferation, cell survival, growth and angiogenesis. AKT is responsible for the regulation of glucose uptake by mediating insulin-induced translocation of the SLC2A4/GLUT4 glucose transporter to the cell surface. AKT also regulates cell survival via the phosphorylation of MAP3K5 (apoptosis signal-regulating kinase). Overexpression of this gene contributes to the malignant phenotype of a subset of human ductal pancreatic cancers.
You Don't Nomi is a 2019 American documentary film that details the history of the 1995 erotic drama film Showgirls. The documentary is directed by Jeffrey McHale and features the original cast of the film (in archive footage). It premiered on 27 April 2019 at the Tribeca Film Festival, and upon release it was met with positive feedback from critics. The film was nominated in the Ad Hoc Docs Competition category at the Cleveland International Film Festival.
Cast
Adam Nayman
April Kidwell
Barbara Shulgasser
David Schmader
Haley Mlotek
Jeff Conaway
Jeffrey Sconce
Matt Baume
Peaches Christ
Susan Wloszczyna
Release
On 27 April 2019, You Don't Nomi made its world premiere at the Tribeca Film Festival. It was later screened at the Frameline Film Festival on 27 June 2019, and at the Outfest on 27 July 2019. The film was released in the United States on digital download and DVD on 21 July 2020 by RLJ Entertainment.
Critical response
On the review aggregator website Rotten Tomatoes, the film holds an approval rating of , based on reviews, with an average rating of . The website's critical consensus reads, "It may not change many minds regarding Showgirls, but You Don't Nomi is a solidly entertaining postmortem of an infamous flop." Metacritic, which uses a weighted average, assigned the film a score of 66 out of 100, based on 17 critics, indicating "generally favorable reviews".
Peter Bradshaw writing for The Guardian said that "this documentary completely nails the movie's [Showgirls] attraction". Owen Gleiberman of Variety wrote "You Don't Nomi takes "Showgirls" seriously, obsessively, looking at it from every angle, presenting a chorus of critical voices who analyze the film in ways that are highly enlightening and provocative". David Rooney of The Hollywood Reporter penned that the film is a "Sweet redemption, whether you like it or not". Peter Travers of Rolling Stone wrote, "You Don't Nomi takes many shots at Hollywood hypocrisy, but scores its most cutting points when it shows instead of tells". Glenn Kenny of RogerEbert.com wrote, "The critical rehabilitation of Paul Verhoeven's 1995 "Showgirls" continues apace with "You Don't Nomi," a documentary that wants to appear inventive but too often comes off as affected". KC Ifeanyi of Fast Company wrote that "You Don't Nomi is thoughtfully constructed cine-essay on how Showgirls evolved into the cult classic we know it as today".
Awards and nominations
References
External links
2019 documentary films
2010s English-language films
Christian Marrero (born July 30, 1986) is an American professional baseball coach for the Pittsburgh Pirates of Major League Baseball (MLB).
Marrero played in Minor League Baseball for 12 seasons for the Pirates, Chicago White Sox, Atlanta Braves, and Philadelphia Phillies organizations. He served as the hitting coach for the Williamsport Crosscutters in 2018 and the Lakewood Blueclaws in 2019. The Pirates hired him as their major league assistant hitting coach before the 2021 season.
His brother, Chris, played in MLB.
References
External links
Living people
1986 births
Altoona Curve players
Indianapolis Indians players
Birmingham Barons players
Bravos de Margarita players
Gigantes del Cibao players
Great Falls White Sox players
Gwinnett Braves players
Kannapolis Intimidators players
Lehigh Valley IronPigs players
Mississippi Braves players
Reading Fightin Phils players
Somerset Patriots players
Tiburones de La Guaira players
Tigres de Quintana Roo players
Tigres del Licey players
Winston-Salem Dash players
American expatriate baseball players in the Dominican Republic
American expatriate baseball players in Mexico
American expatriate baseball players in Venezuela
<br/>
<form action="?" method="post" enctype="multipart/form-data" name="frmEdit" class="form-horizontal">
[#if OK#]
<div class="alert alert-success"><#LANG_DATA_SAVED#></div>
[#endif OK#]
[#if ERR#]
<div class="alert alert-danger"><#LANG_FILLOUT_REQURED#></div>
[#endif ERR#]
<div class="form-group ">
<label class="col-lg-4 control-label"><#LANG_TITLE#>:<font color="red">*</font> <#LANG_HCB#>title<#LANG_HCE#></label>
<div class="col-lg-4"><input type="text" class="form-control [#if ERR_TITLE#]alert-danger[#endif#]" name="title" value="[#TITLE#]" required="true"></div>
</div>
<div class="form-group ">
<label class="col-lg-4 control-label"><#LANG_CLASS#>:<font color="red">*</font> <#LANG_HCB#>object_class<#LANG_HCE#></label>
<div class="col-lg-4">
<select name="class_id" class="form-control [#if ERR_TITLE#]alert-danger[#endif#]" required="true">
<option value="">select
[#begin CLASS_ID_OPTIONS#]<option value="[#ID#]"[#if ID="<#CLASS_ID#>"#] selected[#endif#]>[#TITLE#]
[#end CLASS_ID_OPTIONS#]
</select>
</div>
</div>
<div class="form-group ">
<label class="col-lg-4 control-label"><#LANG_DESCRIPTION#>: <#LANG_HCB#>description<#LANG_HCE#></label>
<div class="col-lg-4"><textarea name="description" rows="3" class="form-control">[#DESCRIPTION#]</textarea></div>
</div>
<div class="form-group ">
<label class="col-lg-4 control-label"><#LANG_LOCATION#>: <#LANG_HCB#>location<#LANG_HCE#></label>
<div class="col-lg-4"><select name="location_id" class="form-control">
<option value="">-
[#begin LOCATION_ID_OPTIONS#]<option value="[#ID#]"[#if SELECTED#] selected[#endif#]>[#TITLE#]
[#end LOCATION_ID_OPTIONS#]
</select></div>
</div>
<!--#
<div class="form-group ">
<label class="col-lg-4 control-label"><#LANG_KEEP_HISTORY_DAYS#>: <#LANG_HCB#>keep_history<#LANG_HCE#></label>
<div class="col-lg-4">
<div class="form-inline">
<div class="form-group col-lg-6">
<input type="text" name="keep_history" value="[#KEEP_HISTORY#]" size="10" class="form-control">
</div>
<div class="form-group col-lg-6">
<p class="form-control-static">
(0 = <#LANG_USE_CLASS_SETTINGS#>)
</p>
</div>
</div>
</div>
</div>
#-->
<div class="form-group">
<div class="col-lg-offset-1 col-lg-5">
[#if ID!=""#]
<input class="btn btn-default btn-primary" type="submit" name="subm" value="<#LANG_SUBMIT#>">
[#else ID#]
<input class="btn btn-default btn-primary" type="submit" name="subm" value="<#LANG_ADD#>">
[#endif ID#]
<a href="?data_source=<#DATA_SOURCE#>" class="btn btn-default"><#LANG_CANCEL#></a>
[#if ID!=""#]
<a class="btn btn-default" href="?id=<#ID#>&view_mode=clone" onClick="return confirm('<#LANG_ARE_YOU_SURE#>')"><#LANG_MAKE_COPY#></a>
[#endif#]
</div>
</div>
<input type="hidden" name="id" value="<#ID#>">
<input type="hidden" name="view_mode" value="<#VIEW_MODE#>">
<input type="hidden" name="edit_mode" value="<#EDIT_MODE#>">
<input type="hidden" name="mode" value="update">
<input type="hidden" name="tab" value="<#TAB#>">
</form>
The State vs. the Dead Body of Spencer Simpson
Spencer Simpson
We the Jury of inquest. . . find that Spencer Simpson died in Laurens County on 21st Day of Nov AD 1896 - from the Effects of a gunshot wound from the hands of Jno. Miller, and so we all agree.
J. P. Sloan
at Clinton
Jno. Miller
Harriet Miller sworn says I am the wife of Jno Miller I know Spencer Simpson saw him Saturday eve about 3 pm, he stoped [sic] in at my house stayed about 15 min. About 9 oclock at night he came back to my house I heard a noise and went out there and found him. I said what you doing here... He said he had something to tell me; it was to tell Jno. not to deal with anymore Liquor or he would get into trouble, I told him to hush I see somebody... Jno (my husband) run up on us, and struck me with his fist, then shot at Mr Simpson, I turned and run looked saw Simpson running, and Jno shot at him, then came back and Beat me. I went on into the house after a few min Jno came in and said he had killed Mr Simpson, he commenced crying and said what must I do, I told him if he had listened to me you would not have done it. Jno went off, and said he was going to move that man; when he came back he said he had taken Mr Simpson over the river, on top of the hill, I asked him how he got him there he said I took him in that Big basket...
# HBSE 6th Class Maths Solutions Chapter 14 Practical Geometry Ex 14.1

Haryana State Board HBSE 6th Class Maths Solutions Chapter 14 Practical Geometry Ex 14.1 Textbook Exercise Questions and Answers.

## 14.1

Question 1.
Draw a circle of radius 3.2 cm.
Solution:
Steps of construction:
(i) Open the compasses for the required radius 3.2 cm.
(ii) Mark a point 'O' with a sharp pencil where we want the centre of the circle to be.
(iii) Place the pointer of the compasses on O.
(iv) Turn the compasses slowly to draw the circle.

Question 2.
With the same centre O, draw two circles of radii 4 cm and 2.5 cm.
Solution:
Steps of construction:
(i) Mark a point 'O' with a sharp pencil where we want the centre of the circle.
(ii) Open the compasses 4 cm.
(iii) Place the pointer of the compasses on O.
(iv) Turn the compasses slowly to draw the circle.
(v) Again open the compasses 2.5 cm and place the pointer of the compasses at O. Turn the compasses slowly to draw the second circle.

Question 3.
Draw a circle and any two of its diameters. If you join the ends of these diameters, what is the figure obtained? What figure is obtained if the diameters are perpendicular to each other? How do you check your answer?
Solution:
(i) By joining the ends of two diameters, we get a rectangle. By measuring, we find
AB = CD = 3 cm, BC = AD = 2 cm,
i.e., pairs of opposite sides are equal, and
∠A = ∠B = ∠C = ∠D = 90°,
i.e., each angle is equal to 90°. Hence, ABCD is a rectangle (see figure).
(ii) If the diameters are perpendicular to each other, then by joining the ends of the two diameters, we get a square. By measuring, we find that
AB = BC = CD = AD = 2.5 cm,
i.e., all four sides are equal, and
∠A = ∠B = ∠C = ∠D = 90°,
i.e., each angle is a right angle. Hence, ABCD is a square (see figure).

Question 4.
Draw any circle and mark points A, B and C such that:
(a) A is on the circle.
(b) B is in the interior of the circle.
(c) C is in the exterior of the circle.
Solution:
(i) Mark a point 'O' with a sharp pencil where we want the centre of the circle.
(ii) Place the pointer of the compasses at 'O', then move the compasses slowly to draw a circle.
In the figure:
(a) Point A is on the circle.
(b) Point B is in the interior of the circle.
(c) Point C is in the exterior of the circle.

Question 5.
Let A, B be the centres of two circles of equal radii; draw them so that each one of them passes through the centre of the other. Let them intersect at C and D. Examine whether $\overline{AB}$ and $\overline{CD}$ are at right angles.
Solution:
Draw two circles of equal radii taking A and B as their centres such that each one of them passes through the centre of the other. Let them intersect at C and D. Join AB and CD (see figure).
Yes, $\overline{AB}$ and $\overline{CD}$ are at right angles, because ∠BOC = 90°.
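Question 3's claim — that the ends of any two diameters form a rectangle, and of perpendicular diameters a square — can also be checked numerically rather than by measuring a drawing. A small sketch (a unit circle centred at the origin and arbitrary diameter angles are assumed):

```python
import math

def diameter_ends(theta, r=1.0):
    """Both endpoints of a diameter of a circle centred at the origin."""
    x, y = r * math.cos(theta), r * math.sin(theta)
    return (x, y), (-x, -y)

def quadrilateral(theta1, theta2, r=1.0):
    """Vertices A, B, C, D from the ends of two diameters, in cyclic order."""
    a, c = diameter_ends(theta1, r)
    b, d = diameter_ends(theta2, r)
    return [a, b, c, d]

def side_lengths(pts):
    """Lengths of the four sides AB, BC, CD, DA."""
    return [math.dist(pts[i], pts[(i + 1) % 4]) for i in range(4)]

# Any two diameters: opposite sides come out equal (a rectangle).
s = side_lengths(quadrilateral(0.3, 1.2))
assert abs(s[0] - s[2]) < 1e-12 and abs(s[1] - s[3]) < 1e-12

# Perpendicular diameters: all four sides come out equal (a square).
s = side_lengths(quadrilateral(0.3, 0.3 + math.pi / 2))
assert max(s) - min(s) < 1e-12
```

Since both diagonals of the quadrilateral are diameters, every vertex angle subtends a semicircle and is therefore 90°, which is why equal opposite sides suffice to conclude a rectangle here.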
<div class="container search">
    <!-- Page title -->
<div class="page-header">
<h3>Catalogue des ateliers</h3>
</div>
    <!-- Search -->
<div class="row search-bar" nb-model="resultant">
<div class="col-xs-12 no-padding">
<div class="input-group">
<span id="avancedSearch" class="input-group-btn">
<button ng-click="dropDown()" type="button" class="btn btn-default dropdown-toggle">
<span class="caret"></span>
<span class="sr-only">Toggle Dropdown</span>
</button>
<ul class="dropdown-menu">
<li class="add-padding">Cochez les notions que vous recherchez</li>
<li role="separator" class="divider"></li>
<li class="checkbox add-padding">
<label><input id="a" type="checkbox" ng-model="educational_aims_search.value1"
ng-true-value="'Prévisions'" ng-false-value="''">Prévisions</label>
</li>
<li class="checkbox add-padding">
<label><input type="checkbox" ng-model="educational_aims_search.value2"
ng-true-value="'Amélioration continue'" ng-false-value="''">Amélioration continue</label>
</li>
<li class="checkbox add-padding">
<label><input id="b" type="checkbox" value="Retrospective"
ng-model="educational_aims_search.value3" ng-true-value="'Rétrospective'"
ng-false-value="''">Rétrospective</label>
</li>
<li class="checkbox add-padding">
<label><input type="checkbox" value="" ng-model="educational_aims_search.value4"
ng-true-value="'Travail itératif'"
ng-false-value="''">Travail itératif</label>
</li>
<li class="checkbox add-padding">
<label><input type="checkbox" value="" ng-model="educational_aims_search.value5"
ng-true-value="'Lead time vs Throughput'" ng-false-value="''">Lead time vs Throughput</label>
</li>
<li class="checkbox add-padding">
<label><input id="c" type="checkbox" value="TaF_WiP"
ng-model="educational_aims_search.value6" ng-true-value="'TaF - WiP'"
ng-false-value="''">TaF WiP</label>
</li>
<button type="button" class="btn btn-default center-block" ng-click="close()">Fermer</button>
</ul>
</span>
<input type="text" ng-model="query" class="form-control" placeholder="Rechercher...">
<span class="input-group-btn">
<button class="btn btn-default" type="button"><span
class="glyphicon glyphicon-search"></span></button>
</span>
</div>
</div>
</div>
<div class="row">
        <!-- Results -->
<div class="col-xs-12 no-padding">
<div ng-repeat="elem in data.data |filter : searchTitle | filter: searchEduc ">
<a href="/#/catalogue/{{elem._id}}">
<div class="col-xs-12 thumbnail">
<!-- Photo -->
<div class="col-xs-4 col-sm-3 no-padding-r">
<img class="img-responsive img-padding" ng-src="{{elem.photo}}">
</div>
<div class="col-xs-8 col-sm-9">
                            <!-- Title -->
<div class="col-xs-12 no-padding">
<div class="col-xs-12 col-md-6 no-padding">
<h4 class="labelsTitle">{{elem.title}}</h4>
</div>
<div class="col-xs-12 col-md-6 padding-t labels no-padding">
<span ng-repeat="category in elem.content.educational_aims | orderBy">
<span class="label {{getLabelColor(category)}}">{{category}}</span>
</span>
</div>
</div>
<div class="col-xs-12 col-sm-6 no-padding">
                                <!-- Rating -->
<div class="col-xs-12 no-padding">
Participants :
<i ng-if="elem.grade.participants"
ng-class="{ 'fa fa-star': $index<=elem.grade.participants, 'fa fa-star-o': $index>elem.grade.participants }"
data-ng-repeat="i in [] | range:5"></i>
<span ng-if="!elem.grade.participants">Aucunes notes</span>
</div>
<div class="col-xs-12 no-padding">
Facilitateurs :
<i ng-if="elem.grade.facilitators"
ng-class="{ 'fa fa-star': $index<=elem.grade.facilitators, 'fa fa-star-o': $index>elem.grade.facilitators }"
data-ng-repeat="i in [] | range:5"></i>
<span ng-if="!elem.grade.facilitators">Aucunes notes</span>
</div>
</div>
<div class="col-xs-12 col-sm-6 no-padding">
<div>
                                    <!-- Workshop duration -->
<span class="glyphicon glyphicon-time" aria-hidden="true"></span>
{{elem.duration}}m
</div>
<div class="hidden-xs">
Times played: {{elem.nb_time_played}}
</div>
</div>
<!-- Detail -->
<p class="col-xs-12 hidden-xs no-padding padding-t">{{elem.synopsis}}</p>
</div>
</div>
</a>
</div>
</div>
</div>
</div>
package com.github.windbender.service;
import org.joda.time.DateTimeZone;
import com.github.windbender.core.LatLonPair;
public interface TimeZoneGetter {
public DateTimeZone getTimeZone(LatLonPair location);
}
Updated weekly!
The Last Bastion of America's Liberal Media
Turd of the Week
Ahnold, your entry into the political circus is so clearly driven by your agent to forestall the inevitable career death throes that face action heroes in their 50s.
See stories
for details
Growing fat off juicy Iraqi rebuildin' contracts. Did you know the bin Laden group is one of our top investors?
screw all the other stockholders, we're cashing out!
Hey, what do you know? We make money from American militarily screwed up countries in the Middle East!
We're already negotiating with the "new Iraqi democracy" for oil rights!
Selling weapons all over the globe to ensure civilian death and instability which in turn ensures a strong market for years and years...
Repeat After Me: "At Least I Don't Live in California!"
What do you do if you are a state, billions of dollars in the red due to decades of property tax ceilings, massive investment of a meager state endowment in corrupt energy stocks, and the bottom falling out of the dot-com economy? Well, of course, you spend $67,000,000 you don't have, to let a musclebound robotic killing machine from the homeland of the Third Reich, a twice-convicted car thief, a gimp porn magnate, a porn star past her expiration date, a Libertarian smokers' rights cigarette retailer, a medical marijuana activist, Father Guido Sarducci, a retired boxer, and a midget has-been TV star fight over the 15% majority needed to steal back the governor's mansion!
If California sets the trends for the rest of the nation to follow, the other 49 states had better start putting major hallucinogens in their water supplies. Democratic Governor Gray Davis, whose most heinous crime is possessing the charisma of wet cardboard, is the latest victim of Republican election purloinery. His term had barely begun, but the Roving bands of democracy destroyers must gobble everything in their path, and Davis just so happens to be on the appetizer tray before the 2004 main entree.
Politically, the weirdest thing is that you get the sense no one really hates or loves Davis. The Dems see him as a hapless chump who inherited a mess and doesn't have the communication skills to at least explain it all to the populace. The GOP, on the other hand seems much more hellbent on grabbing power than on ridding California of some evil cancer. Oh sure, they have vague indictments of how bad things are and he's got his hand on the broken steering wheel, but you don't hear the same invective as you would for a Clinton or Kennedy.
But let's not analyze this deeper than it needs to be, folks. Look for Jerry Springer to moderate the debates as Arnold kicks Larry Flynt out of his wheelchair, crushing Gary Coleman to death. That is, after all, the future of American politics.
Oh, My Bad. I Thought You Said Secretary of "Hate"
With Neo-con leaked rumors of Colin Powell's mid-dynasty retirement, the Neo-Cons are floating names for the next leader of America's interactions with the rest of the world. Fitting, then, that the most frequently named name is the name of none other than SOTC.com's public enemy number 17, Paul Wolfowitz. It seems appropriate that America, whose foreign policy is either military annihilation (Afghanistan 2002, Iraq spring 2003, the middle class) or complete indifference (Afghanistan 2003, Iraq summer 2003, Africa, Asia, the poor), would have one of the prime architects of America, the World's Pimp as its Secretary of State.
Wolfie, nearly as brilliant as he is evil, would bring a sinister cynicism to the job that would finally seal off the gaps in team Bush's veneer of moral certainty of purpose. Without Powell's occasional nagging doubts, the neo-cons will be able to move forward undeterred by the barnacles of conscience.
Typical of the neo-con ruthlessness, the rumors point to Alma Powell as the source of Colin's discontent. All in line with a "Blame it on the Bitch" boys' clubbery.
Rest assured, precious readership, there will be no diminution of diversity within the Lilly White House leadership. The mindset will become even more homogenous, the wagons circled even tighter.
A Little Politico-Corporate Infighting?
The Bush administration is recognized across the spectrum as effective in their lack of infighting and their unity of purpose. Pretty much everyone is on board the evil agenda. Dissenters like Powell are given body-snatching pods which replace their human bodies with identical Bush-bots, spouting the party line with the slightest hint of intellectual agony.
So it is extra delicious to see that one of the official sponsors of Operation Democracy-up-the-ass, Bechtel, is publicly accusing another official sponsor, Halliburton, of monopolizing the Iraqi oil market. But the accusations don't stop in the boardroom. Nope, the Army Corps of Engineers, who sets the bidding terms (should the contract be put out to bid, which hasn't been done often with Iraq rebuilding) and awards the contracts, has carefully written the terms of the job to effectively exclude non-Halliburton competitors.
Included in the preferential wording are timelines only attainable by providers who have a significant infrastructure in place. Readers will note that only Halliburton has such an infrastructure in place, having been previously awarded the initial clean-up contracts without the nuisance of an open bidding process. Further, astute readers will remember SOTC.com ran a story on May 12th called Bush does something that doesn't suck (ironic in retrospect!), in which Bush cut the legs out from under the Army Corps of Engineers. Turns out his real goal was to staff the Corps with lackeys who would dole out the bazillions of American taxless dollars to his corporate cronies.
It's not Napalm!
Napalm, used extensively in Vietnam to immolate any persons, animals, flora, etc... foolish enough to be under the shadows of the airborne dispersers, is made of gasoline and benzene. It is sprayed on victims and lit ablaze, ensuring the target is covered with sticky flaming goo which incinerates the victim, bringing almost certain death, or at least a lifetime of incapacity and pain.
Not in Iraq! No, they use Mark 77 firebombs, which use kerosene-based jet fuel, plus smaller quantities of benzene than is typical in Napalm. Don't worry, Jane's Defense Weekly readers! It too coats victims with a flaming goo which incinerates the victim, bringing almost certain death, or at least a lifetime of incapacity and pain.
...but it's not Napalm, a registered trademark of Dow Chemical, the makers of those cute little scrubbing bubble guys in your bathtub.
Afghanistan - Yesterday's News
Yeah, I know, I can be like the old guy in the bar reminiscing on the '69 Super Bowl: "Man, that Broadway Joe was something, eh?" Afghanistan is way off the American attention span these days, and by middle-American assessments it is finished: democracy reigns and all the evil bearded dudes are rotting in cement cells in Gitmo.
Just keep watching American Idle, my friend.
With the US attention so firmly fixed on parading Moustache's soon-to-be-severed head around the middle east, windows of opportunity in Afghanistan have slammed shut for democracy and even stability. The American resource drain from Talibanistan has been huge. Now, America is resorting to contracting out peacekeeping to local warlords, many of whom are former (whatever former means in this context)... get this... Taliban!
And while some 183,000 US troops are in Iraq getting shot at and blown up, guess how many American troops are in Afghanistan? Give up? 8,500. That's less than a twentieth of the Iraq deployment. The invasion and killing part sure is fun and sexy, but that messy cleanup stuff? Booooring!
You'd think with a sociopathic trigger happy cowboy at the helm, you'd at least get something done with your overwhelming carpet bombing and Napal... I mean Mark 77 firebombing. Across the middle east, it sounds more and more like our primary accomplishment is the further alienation of middle easterners, while the badasses we are supposed to be unseating are sneaking back into the picture ON OUR PAYROLL.
Why Ain't Saddam Caught Yet?
With 38 cards of the deck put into body bags, sons in pieces, daughters in Jordan, stoolies sprouting like chiggers after a summer rain, you'd think Saddam was on the verge of getting caught. Republicans are salivating at the prospect, believing that it will dash any hopes of Democratic candidacy in 2004 (huh??!?), as well as assuming that his capture will cause Glinda the Good Witch to summon all the happy Iraqi munchkins out of the desert where they will join hands with America, drink a Coke and start working on rebuilding America East.
I too am bracing myself for pictures of the disfigured corpse bits of Moustache stuck on the media pike for all to see. Sure, some will somehow see it as proof there are WMD, Niger yellowcake, al-Qaeda all wrapped into one (or seventeen) bloody photograph, but there's got to be a nagging fear among the neo-cons that perhaps the resistance to Operation Democracy-up-your-ass will persist. Some Iraqis are even insisting the resistance will increase, as Iraqis currently don't want to be seen as responding to the evil Saddam's call to arms.
I predict he will be killed within 6 weeks (Damn, where's that Pentagon gambling site?) in a gruesome last stand, thus proving to absurdists the moral justification of the thousands of Iraqis killed in his pursuit. At that point, it will be critical for the Bushies to turn our attention to other concerns, much as they have done in Afghanistan.
Legal Disclaimer: All information on this site has been carefully considered as to its inflammatory value against the backdrop of the prevailing standards of cultural depravity. Research is spotty at best. The resulting verbiage, though dead-on and wickedly insightful (not to mention inciteful) should be considered pure satire, if for no other reason than to deflect lawsuits.
# Transpose of Row Matrix is Column Matrix

## Theorem

Let $\mathbf x = \left[{x}\right]_{1 n} = \begin{bmatrix}x_1 & x_2 & \cdots & x_n\end{bmatrix}$ be a row matrix.

Then $\mathbf x^\intercal$, the transpose of $\mathbf x$, is a column matrix:

$\begin{bmatrix}x_1 & x_2 & \cdots & x_n\end{bmatrix}^\intercal = \begin{bmatrix}x_1 \\ x_2 \\ \vdots \\ x_n\end{bmatrix}$

## Proof

Self-evident.

$\blacksquare$

(Source: https://proofwiki.org/wiki/Transpose_of_Row_Matrix_is_Column_Matrix)
Tetramicra canaliculata is a species of orchid first described by Jean Baptiste Christophe Fusée Aublet; its currently accepted name was assigned by Ignatz Urban. Tetramicra canaliculata belongs to the genus Tetramicra and the orchid family (Orchidaceae). No subspecies are listed in the Catalogue of Life.
Image gallery
Sources
Orchids
canaliculata
package com.snowengine.utils.files;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
public class XMLData extends FileData
{
private static DocumentBuilderFactory m_DocumentBuilderFactory;
private Document m_Document;
protected XMLData(String filepath)
{
super(filepath);
}
@Override
protected void load()
{
if (m_DocumentBuilderFactory == null)
{
m_DocumentBuilderFactory = DocumentBuilderFactory.newInstance();
}
try
{
DocumentBuilder builder = m_DocumentBuilderFactory.newDocumentBuilder();
m_Document = builder.parse(new java.io.File(m_FilePath));
m_Document.getDocumentElement().normalize();
}
catch (Exception ex)
{
ex.printStackTrace();
}
}
public NodeList getElementsByTagName(String tagName)
{
return m_Document.getElementsByTagName(tagName);
}
public NodeList getElementsByTagNameNS(String namespace, String tagName)
{
return m_Document.getElementsByTagNameNS(namespace, tagName);
}
public Element getElementById(String elementId)
{
return m_Document.getElementById(elementId);
}
public String getInputEncoding()
{
return m_Document.getInputEncoding();
}
public String getEncoding()
{
return m_Document.getXmlEncoding();
}
public boolean isStandalone()
{
return m_Document.getXmlStandalone();
}
public String getVersion()
{
return m_Document.getXmlVersion();
}
public boolean isStrictErrorChecking()
{
return m_Document.getStrictErrorChecking();
}
}
\section{The blowup-polynomial of a metric space and its distance matrix}
This work aims to provide novel connections between metric geometry, the
geometry of (real) polynomials, and algebraic combinatorics via partially
symmetric functions. In particular, we introduce and study a polynomial
graph-invariant for each graph, which to our knowledge is novel.
\subsection{Motivations}
The original motivation for our paper came from the study of distance
matrices $D_G$ of graphs $G$ -- on both the algebraic and spectral sides:
\begin{itemize}
\item On the algebraic side, Graham and Pollak \cite{Graham-Pollak}
initiated the study of $D_G$ by proving: if $T_k$ is a tree on $k$ nodes,
then $\det D_{T_k}$ is independent of the tree structure and depends only
on $k$. By now, many variants of such results are proved, for trees as
well as several other families of graphs, including with $q$-analogues,
weightings, and combinations of both of these. (See e.g.\ \cite{CK-tree}
and its references for a list of such papers, results, and their common
unification.)
\item Following the above work~\cite{Graham-Pollak}, Graham also
worked on the spectral side, and with Lov\'asz, studied in~\cite{GL} the
distance matrix of a tree, including computing its inverse and
characteristic polynomial. This has since led to the intensive study of
the roots, i.e.\ the ``distance spectrum'', for trees and other graphs.
See e.g.\ the survey~\cite{AH} for more on distance spectra.
\end{itemize}
A well-studied problem in spectral graph theory involves understanding
which graphs are \textit{distance co-spectral} -- i.e., for which graphs
$H' \not\cong K'$, if any, do $D_{H'}, D_{K'}$ have the same spectra.
Many such examples exist; see e.g.\ the references in~\cite{DL}. In
particular, the characteristic polynomial of $D_G$ does not ``detect''
the graph $G$. It is thus natural to seek some other byproduct of $D_G$
which does -- i.e., which recovers $G$ up to isometry. In this paper, we
find such a (to our knowledge) novel graph invariant: a multivariate
polynomial, which we call the \textit{blowup-polynomial of $G$}, and
which does detect $G$. Remarkably, this polynomial turns out to have
several additional attractive properties:
\begin{itemize}
\item It is multi-affine in its arguments.
\item It is also real-stable, so that its ``support'' yields a hitherto
unexplored delta-matroid.
\item The blowup-polynomial simultaneously encodes the determinants of
\textit{all} graph-blowups of $G$ (defined presently), thereby connecting
with the algebraic side (see the next paragraph).
\item Its ``univariate specialization'' is a transformation of the
characteristic polynomial of $D_G$, thereby connecting with the spectral
side as well.
\end{itemize}
Thus, the blowup-polynomial that we introduce, connects distance spectra
for graphs -- and more generally, for finite metric spaces -- to other
well-studied objects, including real-stable/Lorentzian polynomials and
delta-matroids.
On the algebraic side, a natural question involves asking if there are
graph families $\{ G_i : i \in I \}$ (like trees on $k$ vertices) for
which the scalars $\det (D_{G_i})$ behave ``nicely'' as a function of $i
\in I$. As stated above, the family of blowups of a fixed graph $G$
(which help answer the preceding ``spectral'' question) not only answer
this question positively as well, but the nature of the answer --
multi-affine polynomiality -- is desirable in conjunction with its
real-stability. In fact, we will obtain many of these results, both
spectral and algebraic, in greater generality: for arbitrary finite
metric spaces.
The key construction required for all of these contributions is that of a
blowup, and we begin by defining it more generally, for arbitrary
discrete metric spaces.
\begin{definition}\label{Ddef}
Given a discrete metric space $(X,d)$ and a function $\ensuremath{\mathbf n} : X \to
\mathbb{Z}_{>0}$, the \emph{$\ensuremath{\mathbf n}$-blowup} of $X$ is the metric space
$X[\ensuremath{\mathbf n}]$ obtained by creating $n_x := \ensuremath{\mathbf n}(x)$ copies of each point $x$
(also referred to below as blowups of $x$).
Define the distance between copies of distinct points $x \neq y$ in $X$
to still be $d(x,y)$, and between distinct copies of the same point to be
$2 d(x,X \setminus \{x\}) = 2 \inf_{y \in X \setminus \{ x \}} d(x,y)$.
Also define the \emph{distance matrix} $D_X$ and the \emph{modified
distance matrix} $\ensuremath{\mathcal{D}}_X$ of $X$ via:
\begin{equation}
D_X := (d(x,y))_{x,y \in X}, \qquad
\ensuremath{\mathcal{D}}_X := D_X + \diag(2 d(x, X \setminus \{ x \}))_{x \in X}.
\end{equation}
\end{definition}
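To make the construction concrete, the following small computational sketch (our illustration, not part of the text; the function name and the use of NumPy are ours) assembles the distance matrix of $X[\mathbf{n}]$ from $D_X$ and the blowup sizes, following Definition~\ref{Ddef} verbatim:

```python
import numpy as np

def blowup_distance_matrix(D, n):
    r"""Distance matrix of the n-blowup X[n] of a finite metric space X.

    D : (k, k) symmetric matrix with zero diagonal, D[i, j] = d(x_i, x_j).
    n : length-k sequence of positive integers; n[i] copies of x_i are made.
    Distinct copies of x_i are placed at distance 2 * d(x_i, X \ {x_i}).
    """
    D = np.asarray(D, dtype=float)
    k = D.shape[0]
    # d(x_i, X \ {x_i}): row minimum after masking the zero diagonal
    nearest = (D + np.diag([np.inf] * k)).min(axis=1)
    owner = np.repeat(np.arange(k), n)  # owner[a] = original point of copy a
    N = owner.size
    out = np.zeros((N, N))
    for a in range(N):
        for b in range(N):
            if a == b:
                continue
            i, j = owner[a], owner[b]
            out[a, b] = D[i, j] if i != j else 2.0 * nearest[i]
    return out
```

For two points at distance $1$ and $\mathbf{n} = (2, 1)$, this produces the $3 \times 3$ matrix in which the two copies of $x_1$ sit at distance $2$.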
Notice for completeness that the above construction applied to a
non-discrete metric space does not yield a metric; and that blowups of
$X$ are ``compatible'' with isometries of $X$ (see~\eqref{Eisom}).
We also remark that this notion of blowup seems to be relatively less
studied in the literature, and differs from several other variants in the
literature -- for metric spaces e.g.~\cite{metric} or for graphs
e.g.~\cite{Liu}. However, the variant studied in this paper was
previously studied for the special case of unweighted graphs, see
e.g.~\cite{HHN,KKOT,KSS} in extremal and probabilistic graph theory.
\subsection{Defining the blowup-polynomial; Euclidean embeddings}
We now describe some of the results in this work, beginning with metric
embeddings. Recall that the complete information about a (finite) metric
space is encoded into its distance matrix $D_X$ (or equivalently, in the
off-diagonal part of $\ensuremath{\mathcal{D}}_X$). Metric spaces are useful in many
sub-disciplines of the mathematical sciences, and have been studied for
over a century. For instance, a well-studied question in metric geometry
involves understanding metric embeddings. In 1910, Fr\'echet
showed~\cite{Frechet0} that every finite metric space with $k+1$ points
isometrically embeds into $\ensuremath{\mathbb R}^k$ with the supnorm. Similarly, a
celebrated 1935 theorem of Schoenberg \cite{Schoenberg35} (following
Menger's works~\cite{Menger0,Menger}) says:
\begin{theorem}[Schoenberg, \cite{Schoenberg35}]\label{Tschoenberg}
A finite metric space $X = \{ x_0, \dots, x_k \}$ isometrically embeds
inside Euclidean space $(\ensuremath{\mathbb R}^r, \| \cdot \|_2)$ if and only if its
modified Cayley--Menger matrix
\begin{equation}
(d(x_0,x_i)^2 + d(x_0,x_j)^2 - d(x_i,x_j)^2)_{i,j=1}^k
\end{equation}
is positive semidefinite, with rank at most $r$.
\end{theorem}
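For the ``only if'' direction, note (an observation added here for illustration) that if the $x_i$ already lie in Euclidean space, then the matrix above equals twice the Gram matrix of the vectors $x_i - x_0$, by the polarization identity; it is therefore positive semidefinite, with rank the dimension of the affine span. A short NumPy sketch (function name ours):

```python
import numpy as np

def modified_cayley_menger(points):
    """Schoenberg's matrix (d(x0,xi)^2 + d(x0,xj)^2 - d(xi,xj)^2) for
    i, j = 1..k, given coordinates of the points x_0, ..., x_k."""
    P = np.asarray(points, dtype=float)
    d0sq = ((P[1:] - P[0]) ** 2).sum(axis=1)  # squared distances to x_0
    k = len(P) - 1
    M = np.empty((k, k))
    for i in range(k):
        for j in range(k):
            dij_sq = ((P[i + 1] - P[j + 1]) ** 2).sum()
            M[i, j] = d0sq[i] + d0sq[j] - dij_sq
    return M

# Polarization: M[i, j] = 2 <x_i - x_0, x_j - x_0>, so M is twice a Gram
# matrix and hence positive semidefinite for any Euclidean configuration.
rng = np.random.default_rng(0)
P = rng.normal(size=(5, 3))                # five random points in R^3
M = modified_cayley_menger(P)
V = P[1:] - P[0]
assert np.allclose(M, 2 * V @ V.T)         # twice the Gram matrix
assert np.linalg.eigvalsh(M).min() > -1e-9  # PSD, as the theorem requires
```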
As an aside, the determinant of this matrix is related to the volume of a
polytope with vertices $x_i$ (beginning with classical work of
Cayley~\cite{Cayley}), and the Cayley--Menger matrix itself connects to
the principle of trilateration/triangulation that underlies the GPS
system.
Returning to the present work, our goal is to study the distance matrix
of a finite metric space vis-a-vis its blowups. We begin with a
``negative'' result from metric geometry. Note that every blowup of a
finite metric space embeds into $\ensuremath{\mathbb R}^k$ (for some $k$) equipped with the
supnorm, by Fr\'echet's aforementioned result. In contrast, we employ
Schoenberg's theorem~\ref{Tschoenberg} to show that the same is far from
true when considering the Euclidean metric. Namely, given a finite metric
space $X$, we characterize all blowups $X[\ensuremath{\mathbf n}]$ that embed in some
Euclidean space $(\ensuremath{\mathbb R}^k, \| \cdot \|_2)$. Since $X$ embeds into $X[\ensuremath{\mathbf n}]$,
a necessary condition is that $X$ itself should be Euclidean. With this
in mind, we have:
\begin{utheorem}\label{Teuclidean}
Suppose $X = \{ x_1, \dots, x_k \}$ is a finite metric subspace of
Euclidean space $(\ensuremath{\mathbb R}^r, \| \cdot \|_2)$. Given positive integers $\{
n_{x_i} : 1 \leq i \leq k \}$, not all of which equal $1$, the following
are equivalent:
\begin{enumerate}
\item The blowup $X[\ensuremath{\mathbf n}]$ isometrically embeds into some Euclidean space
$(\ensuremath{\mathbb R}^{r'}, \| \cdot \|_2)$.
\item Either $k=1$ and $\ensuremath{\mathbf n}$ is arbitrary (then by convention $X[\ensuremath{\mathbf n}]$ is
a simplex); or $k>1$ and there exists a unique $1 \leq j \leq k$ such
that $n_{x_j} = 2$. In this case, we moreover have:
(a)~$n_{x_i} = 1\ \forall i \neq j$,
(b)~$x_j$ is not in the affine hull/span $V$ of $\{ x_i : i \neq j \}$,
and
(c)~the unique point $v \in V$ closest to $x_j$, lies in $X$.
\end{enumerate}
If these conditions hold, one can take $r' = r$ and $X[\ensuremath{\mathbf n}] = X \sqcup \{
2v - x_j \}$.
\end{utheorem}
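A worked instance of condition~(2), added here for illustration: take $X = \{x_1, x_2\} \subset \ensuremath{\mathbb R}^2$ with $x_1 = (0,0)$, $x_2 = (1,0)$, and double $x_j = x_1$. Then $V = \{x_2\}$, so $v = x_2 \in X$, and the theorem predicts the Euclidean realization $X \sqcup \{2v - x_j\} = \{(0,0), (1,0), (2,0)\}$. A quick check that the pairwise Euclidean distances match the blowup metric:

```python
import numpy as np

# Blowup side: d(x1, x2) = 1, so the two copies of x1 must sit at
# distance 2 * d(x1, X \ {x1}) = 2, and each copy at distance 1 from x2.
required = {("a", "b"): 2.0, ("a", "x2"): 1.0, ("b", "x2"): 1.0}

# Euclidean side: the candidate embedding X union {2v - x_j} from the theorem.
embed = {
    "a": np.array([0.0, 0.0]),   # x_j = x1
    "b": np.array([2.0, 0.0]),   # 2v - x_j, the reflection of x1 through v
    "x2": np.array([1.0, 0.0]),  # v = x2
}

for (p, q), d in required.items():
    assert abs(np.linalg.norm(embed[p] - embed[q]) - d) < 1e-12
```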
Given the preceding result, we turn away from metric geometry, and
instead focus on studying the family of blowups $X[\ensuremath{\mathbf n}]$ -- through their
distance matrices $D_{X[\ensuremath{\mathbf n}]}$ (which contain all of the information on
$X[\ensuremath{\mathbf n}]$). Drawing inspiration from Graham and Pollak
\cite{Graham-Pollak}, we focus on one of the simplest invariants of this
family of matrices: their determinants, and the (possibly algebraic)
nature of the dependence of $\det D_{X[\ensuremath{\mathbf n}]}$ on $\ensuremath{\mathbf n}$. In this paper, we
show that the function $: \ensuremath{\mathbf n} \mapsto \det D_{X[\ensuremath{\mathbf n}]}$ possesses several
attractive properties. First, $\det D_{X[\ensuremath{\mathbf n}]}$ is a polynomial function
in the \textit{sizes} $n_x$ of the blowup, up to an exponential factor:
\begin{utheorem}\label{Tmetricmatrix}
Given $(X,d)$ a finite metric space, and a tuple of positive integers
$\ensuremath{\mathbf n} := (n_x)_{x \in X} \in \mathbb{Z}_{>0}^X$, the function $\ensuremath{\mathbf n} \mapsto
\det D_{X[\ensuremath{\mathbf n}]}$ is a multi-affine polynomial $p_X(\ensuremath{\mathbf n})$ in the $n_x$
(i.e., its monomials are squarefree in the $n_x$), times the exponential
function
\[
\prod_{x \in X} (-2 \; d(x, X \setminus \{ x \}))^{n_x-1}.
\]
Moreover, the polynomial $p_X(\ensuremath{\mathbf n})$ has constant term
$p_X({\bf 0}) = \prod_{x \in X} (-2 \; d(x, X \setminus \{ x \}))$, and
linear term $-p_X({\bf 0}) \sum_{x \in X} n_x$.
\end{utheorem}
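A numerical sanity check, ours rather than the paper's (the helper names and the 3-point example are illustrative): dividing $\det D_{X[\mathbf{n}]}$ by the exponential factor should leave a function that is affine in each $n_x$ separately, with constant term $\prod_x (-2\,d(x, X \setminus \{x\})) = -8$ for the path metric $d(x_i, x_j) = |i - j|$ on three points:

```python
import numpy as np

D = np.array([[0, 1, 2], [1, 0, 1], [2, 1, 0]], dtype=float)  # 3-point path
k = 3
nearest = (D + np.diag([np.inf] * k)).min(axis=1)  # each equals 1 here

def p(n):
    """det D_{X[n]} divided by prod_x (-2 d_x)^(n_x - 1); by the theorem
    this equals the multi-affine blowup-polynomial p_X(n)."""
    owner = np.repeat(np.arange(k), n)
    N = owner.size
    M = np.zeros((N, N))
    for a in range(N):
        for b in range(N):
            if a != b:
                i, j = owner[a], owner[b]
                M[a, b] = D[i, j] if i != j else 2.0 * nearest[i]
    factor = np.prod([(-2.0 * nearest[i]) ** (n[i] - 1) for i in range(k)])
    return np.linalg.det(M) / factor

# Affine in each coordinate separately: second differences vanish.
for i in range(k):
    vals = []
    for t in (1, 2, 3):
        m = [1, 1, 1]
        m[i] = t
        vals.append(p(m))
    assert abs(vals[0] - 2 * vals[1] + vals[2]) < 1e-8

# Constant term: multilinear extrapolation from the grid {1, 2}^3 to n = 0,
# using f(0) = 2 f(1) - f(2) in each coordinate.
total = sum(
    np.prod([2.0 if e == 0 else -1.0 for e in eps]) * p([e + 1 for e in eps])
    for eps in np.ndindex(2, 2, 2)
)
assert abs(total - (-8.0)) < 1e-8   # matches prod_x (-2 d_x) = (-2)^3
```

Here $p_X(1,1,1) = \det D_X = 4$, and the extrapolated constant term is $-8$, as the theorem predicts.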
Theorem~\ref{Tmetricmatrix} follows from a stronger one proved below. See
Theorem~\ref{Tmonoid}, which shows in particular that not only do the
conclusions of Theorem~\ref{Tmetricmatrix} hold over an arbitrary
commutative ring, but moreover, the blowup-polynomial $p_X(\ensuremath{\mathbf n})$ is a
polynomial function in the variables $\ensuremath{\mathbf n} = \{ n_x : x \in X \}$ as well
as the entries of the ``original'' distance matrix $D_X$ -- and moreover,
it is squarefree/multi-affine in all of these arguments (where we treat
all entries of $D_X$ to be ``independent'' variables).
We also refine the final assertions of Theorem~\ref{Tmetricmatrix}, by
isolating in Proposition~\ref{Pcoeff} the coefficient of \textit{every}
monomial in $p_X(\ensuremath{\mathbf n})$. That proposition moreover provides a sufficient
condition under which the coefficients of two monomials in $p_X(\ensuremath{\mathbf n})$ are
equal.
Theorem~\ref{Tmetricmatrix} leads us to introduce the following notion,
for an arbitrary finite metric space (e.g., every finite, connected,
$\mathbb{R}_{>0}$-weighted graph).
\begin{definition}\label{Dblowup}
Define the \emph{(multivariate) blowup-polynomial} of a finite metric
space $(X,d)$ to be $p_X(\ensuremath{\mathbf n})$, where the $n_x$ are thought of as
indeterminates. We write out a closed-form expression in the proof of
Theorem~\ref{Tmetricmatrix} -- see Equation~\eqref{Emetricpoly}.
In this paper, we also study a specialization of this polynomial. Define
the \emph{univariate blowup-polynomial} of $(X,d)$ to be $u_X(n) :=
p_X(n,n,\dots,n)$, where $n$ is thought of as an indeterminate.
\end{definition}
\begin{remark}\label{Rzariski}
Definition~\ref{Dblowup} requires a small clarification.
The polynomial map (by Theorem~\ref{Tmetricmatrix}) \[ \ensuremath{\mathbf n} \mapsto \det
D_{X[\ensuremath{\mathbf n}]} \cdot \prod_{x \in X} (-2 d(x, X \setminus \{ x \}))^{1 -
n_x}, \qquad \ensuremath{\mathbf n} \in \mathbb{Z}_{>0}^k \] can be extended from the
Zariski dense subset $\mathbb{Z}_{>0}^k$ to all of $\ensuremath{\mathbb R}^k$. (Zariski
density is explained during the proof of Theorem~\ref{Tmetricmatrix}
below.) Since $\ensuremath{\mathbb R}$ is an infinite field, this polynomial map on $\ensuremath{\mathbb R}^k$
may now be identified with a polynomial, which is precisely $p_X(-)$, a
polynomial in $|X|$ variables (which we will denote by $\{ n_x : x \in X
\}$ throughout the paper, via a mild abuse of notation). Now setting all
arguments to be the same indeterminate yields the univariate
blowup-polynomial of $(X,d)$.
\end{remark}
\subsection{Real-stability}
We next discuss the blowup-polynomial $p_X(\cdot)$ and its univariate
specialization $u_X(\cdot)$ from the viewpoint of root-location
properties. As we will see, the polynomial $u_X(n) = p_X(n,n,\dots,n)$
always turns out to be real-rooted in $n$. In fact, even more is true.
Recall that in recent times, the notion of real-rootedness has been
studied in a much more powerful avatar: real-stability. Our next result
strengthens the real-rootedness of $u_X(\cdot)$ to the second attractive
property of $p_X(\cdot)$ -- namely, real-stability:
\begin{utheorem}\label{Tstable}
The blowup-polynomial $p_X(\ensuremath{\mathbf n})$ of every finite metric space $(X,d)$ is
real-stable in $\{ n_x \}$. (Hence its univariate specialization $u_X(n)
= p_X(n,n,\dots,n)$ is always real-rooted.)
\end{utheorem}
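A minimal numerical illustration of the univariate consequence (our sketch, not the paper's): for the two-point space $X$ with $d(x_1, x_2) = 1$, one can recover $u_X$ by interpolation from determinants of blowups and inspect its roots:

```python
import numpy as np

D = np.array([[0.0, 1.0], [1.0, 0.0]])  # two points at distance 1
nearest = np.array([1.0, 1.0])          # d(x, X \ {x}) for each point

def u(n):
    """u_X(n) = p_X(n, n): det D_{X[(n,n)]} over the exponential factor."""
    owner = np.repeat([0, 1], n)
    N = owner.size
    M = np.zeros((N, N))
    for a in range(N):
        for b in range(N):
            if a != b:
                i, j = owner[a], owner[b]
                M[a, b] = D[i, j] if i != j else 2.0 * nearest[i]
    return np.linalg.det(M) / (-2.0) ** (2 * (n - 1))

# u_X has degree <= |X| = 2, so three samples determine it exactly.
samples = [1, 2, 3]
coeffs = np.polyfit(samples, [u(n) for n in samples], 2)
roots = np.roots(coeffs)
assert np.allclose(roots.imag, 0.0, atol=1e-8)       # real-rooted
assert np.allclose(sorted(roots.real), [2.0 / 3.0, 2.0], atol=1e-6)
```

Here one finds $u_X(n) = 3n^2 - 8n + 4$, with real roots $2/3$ and $2$; in particular $u_X(2) = 0$, i.e.\ the distance matrix of $X[(2,2)]$ is singular.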
Recall that real-stable polynomials are simply ones with real
coefficients, which do not vanish when all arguments are constrained to
lie in the (open) upper half-plane $\Im(z) > 0$. Such polynomials have
been intensively studied in recent years, with a vast number of
applications. For instance, they were famously used in celebrated works
of Borcea--Br\"and\'en (e.g.~\cite{BB1,BB2,BB3}) and
Marcus--Spielman--Srivastava~\cite{MSS1,MSS2} to prove longstanding
conjectures (including of Kadison--Singer, Johnson, Bilu--Linial,
Lubotzky, and others), construct expander graphs, and vastly extend the
Laguerre--P\'olya--Schur program \cite{Laguerre1,Polya1913,Polya-Schur}
from the turn of the 20th century (among other applications).
Theorem~\ref{Tstable} reveals that for all finite metric spaces -- in
particular, for all finite connected graphs -- the blowup-polynomial is
indeed multi-affine and real-stable. The class of multi-affine
real-stable polynomials has been characterized in \cite[Theorem
5.6]{Branden} and \cite[Theorem 3]{WW}. (For a connection to matroids,
see \cite{Branden}, \cite{COSW}.) To the best of our knowledge,
blowup-polynomials $p_X(\ensuremath{\mathbf n})$ provide novel examples/realizations of
multi-affine real-stable polynomials.
\subsection{Graph metric spaces: symmetries, complete multipartite
graphs}
We now turn from the metric-geometric Theorem~\ref{Teuclidean}, the
algebraic Theorem~\ref{Tmetricmatrix}, and the analysis-themed
Theorem~\ref{Tstable}, to a more combinatorial theme, by restricting from
metric spaces to graphs. Here we present two ``main theorems'' and one
proposition.
\subsubsection{Graph invariants and symmetries}
Having shown that $\det D_{X[\ensuremath{\mathbf n}]}$ is a polynomial in $\ensuremath{\mathbf n}$ (times an
exponential factor), and that $p_X(\cdot)$ is always real-stable, our
next result explains a third attractive property of $p_X(\cdot)$:
\textit{The blowup-polynomial of a graph $X = G$ is indeed a (novel)
graph invariant}. To formally state this result, we begin by re-examining
the blowup-construction for graphs and their distance matrices.
A distinguished sub-class of discrete metric spaces is that of finite
simple connected unweighted graphs $G$ (so, without parallel/multiple
edges or self-loops). Here the distance between two nodes $v,w$ is
defined to be the (edge-)length of any shortest path joining $v,w$. In
this paper, we term such objects \textbf{graph metric spaces}. Note that
the blowup $G[\ensuremath{\mathbf n}]$ is \textit{a priori} only defined as a metric space;
we now adjust the definition to make it a graph.
\begin{definition}
Given a graph metric space $G = (V,E)$, and a tuple $\ensuremath{\mathbf n} = (n_v : v \in
V)$, the \emph{$\ensuremath{\mathbf n}$-blowup} of $G$ is defined to be the graph $G[\ensuremath{\mathbf n}]$
-- with $n_v$ copies of each vertex $v$ -- such that a copy of $v$ and
one of $w$ are adjacent in $G[\ensuremath{\mathbf n}]$ if and only if $v \neq w$ are
adjacent in $G$.
\end{definition}
\noindent (For example, the $\ensuremath{\mathbf n}$-blowup of a complete graph is a
complete multipartite graph.) Now note that if $G$ is a graph metric
space, then so is $G[\ensuremath{\mathbf n}]$ for all tuples $\ensuremath{\mathbf n} \in
\mathbb{Z}_{>0}^{|V|}$. The results stated above thus apply to every such
graph $G$ -- more precisely, to the distance matrices of the blowups of
$G$.
To motivate our next result, now specifically for graph metric spaces, we
first relate the symmetries of the graph with those of its
blowup-polynomial $p_G(\ensuremath{\mathbf n})$. Suppose a graph metric space $G = (V,E)$
has a structural (i.e., adjacency-preserving) symmetry $\Psi : V \to V$
-- i.e., an \textit{(auto-)isometry} as a metric space. Denoting the
corresponding relabeled graph metric space by $\Psi(G)$,
\begin{equation}\label{Eisom} D_G = D_{\Psi(G)}, \qquad \ensuremath{\mathcal{D}}_G =
\ensuremath{\mathcal{D}}_{\Psi(G)}, \qquad p_G(\ensuremath{\mathbf n}) \equiv p_{\Psi(G)}(\ensuremath{\mathbf n}). \end{equation}
It is thus natural to ask if the converse holds -- i.e., if $p_G(\cdot)$
helps recover the group of auto-isometries of $G$. A stronger result
would be if $p_G$ recovers $G$ itself (up to isometry). We show that both
of these hold:
\begin{utheorem}\label{Tisom}
Given a graph metric space $G = (V,E)$ and a bijection $\Psi : V \to V$,
the symmetries of the polynomial $p_G$ equal the isometries of $G$. In
particular, any (equivalently all) of the statements in~\eqref{Eisom}
hold, if and only if $\Psi$ is an isometry of $G$. More strongly, the
polynomial $p_G(\ensuremath{\mathbf n})$ recovers the graph metric space $G$ (up to
isometry). However, this does not hold for the polynomial $u_G$.
\end{utheorem}
As the proof reveals, one in fact needs only the homogeneous quadratic
part of $p_G$, i.e.\ its Hessian matrix $((\partial_{n_v}
\partial_{n_{v'}} p_G)({\bf 0}_V))_{v,v' \in V}$, to recover the graph
and its isometries. Moreover, this associates to every graph a
\textit{partially symmetric polynomial}, whose symmetries are precisely
the graph-isometries.
\medskip
Our next result works more generally in metric spaces $X$, hence is
stated over them. Note that the polynomial $p_X(\ensuremath{\mathbf n})$ is ``partially
symmetric'', depending on the symmetries (or isometries) of the distance
matrix (or metric space). Indeed, partial symmetry is as much as one can
hope for, because it turns out that ``full'' symmetry (in all variables
$n_x$) occurs precisely in one situation:
\begin{proposition}\label{Tsymm}
Given a finite metric space $X$, the following are equivalent:
\begin{enumerate}
\item The polynomial $p_X(\ensuremath{\mathbf n})$ is symmetric in the variables $\{ n_x, \
x \in X \}$.
\item The metric $d_X$ is a rescaled discrete metric:
$d_X(x,y) = c {\bf 1}_{x \neq y}\ \forall x,y \in X$, for some $c>0$.
\end{enumerate}
\end{proposition}
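This dichotomy is easy to observe numerically. The following plain-Python sketch (the helper names \texttt{det} and \texttt{blowup\_poly} are ours) evaluates $p_X(\ensuremath{\mathbf n}) = \det(\Delta_{\ensuremath{\mathbf a}_X} + \diag(\ensuremath{\mathbf n}) \ensuremath{\mathcal{D}}_X)$, with $a_i = -2 d(x_i, X \setminus \{x_i\})$ and $\ensuremath{\mathcal{D}}_X$ the distance matrix with diagonal replaced by $2 d(x_i, X \setminus \{x_i\})$, as in the sequel:

```python
from fractions import Fraction

def det(M):
    """Exact determinant over Q, via Gaussian elimination on Fractions."""
    A = [[Fraction(x) for x in row] for row in M]
    k, sign, d = len(A), 1, Fraction(1)
    for c in range(k):
        piv = next((r for r in range(c, k) if A[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            A[c], A[piv] = A[piv], A[c]
            sign = -sign
        d *= A[c][c]
        for r in range(c + 1, k):
            f = A[r][c] / A[c][c]
            A[r] = [A[r][j] - f * A[c][j] for j in range(k)]
    return sign * d

def blowup_poly(d, n):
    """Evaluate p_X(n) = det(Delta_a + diag(n) D_X), where d is the
    distance matrix of X (zero diagonal), a_i = -2 d(x_i, X - {x_i}),
    and D_X is d with diagonal entries 2 d(x_i, X - {x_i})."""
    k = len(d)
    nearest = [min(d[i][j] for j in range(k) if j != i) for i in range(k)]
    return det([[(-2 * nearest[i] if i == j else 0)
                 + n[i] * (d[i][j] if i != j else 2 * nearest[i])
                 for j in range(k)] for i in range(k)])

# Discrete metric on 3 points: p_X is symmetric in (n_1, n_2, n_3).
disc = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
# Path metric on 3 points (distances 1, 1, 2): only partially symmetric.
path = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
```

Permuting the arguments of \texttt{blowup\_poly(disc, $\cdot$)} leaves its value unchanged, whereas for the path metric, swapping an endpoint variable with the middle one changes the value; note also the sanity check $p_X(1,\dots,1) = \det D_X$.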
\subsubsection{Complete multipartite graphs: novel characterization via
stability}
The remainder of this section returns to graphs. We next present an
interesting byproduct of the above results: a novel characterization of
the class of complete multipartite graphs. Begin by observing from the
proof of Theorem~\ref{Tstable} that the polynomials $p_G(\cdot)$ are
stable because of a determinantal representation (followed by inversion).
However, they do not enjoy two related properties:
\begin{enumerate}
\item $p_G(\cdot)$ is not homogeneous.
\item The coefficients of the multi-affine polynomial $p_G(\cdot)$ are
not all of the same sign; in particular, they cannot form a probability
distribution on the subsets of $\{ 1, \dots, k \}$ (corresponding to the
various monomials in $p_G(\cdot)$). In fact, even the constant and linear
terms have opposite signs, by the final assertion in
Theorem~\ref{Tmetricmatrix}.
\end{enumerate}
These two (unavailable) properties of real-stable polynomials are indeed
important and well-studied in the literature. Corresponding to the
preceding numbering:
\begin{enumerate}
\item Very recently, Br\"and\'en
and Huh~\cite{BH} introduced and studied a distinguished class of
homogeneous real polynomials, which they termed \textit{Lorentzian}
polynomials (defined below). Relatedly, Gurvits \cite{Gurvits} /
Anari--Oveis Gharan--Vinzant \cite{AGV} defined strongly/completely
log-concave polynomials, also defined below. These classes of polynomials
have several interesting properties as well as applications; see
e.g.~\cite{AGV,BH,Gurvits}, and related/follow-up works.
\item Recall that strongly Rayleigh measures are probability measures on
the power set of $\{ 1, \dots, k \}$ whose generating (multi-affine)
polynomials are real-stable. These were introduced and studied by Borcea,
Br\"and\'en, and Liggett in the fundamental work~\cite{BBL}. This work
developed the theory of negative association/dependence for such
measures, and enabled the authors to prove several conjectures of
Liggett, Pemantle, and Wagner, among other achievements.
\end{enumerate}
Given that $p_G(\cdot)$ is always real-stable, a natural question is if
one can characterize those graphs for which a certain homogenization of
$p_G(\cdot)$ is Lorentzian, or a suitable normalization is strongly
Rayleigh. The standard mathematical way to address obstacle~(1) above is
to ``projectivize'' using a new variable $z_0$, while for obstacle~(2) we
evaluate at $(-z_1, \dots, -z_k)$, where we use $z_j$ instead of
$n_{x_j}$ to denote complex variables. Thus, our next result proceeds via
homogenization at $-z_0$:
\begin{utheorem}\label{Tlorentz}
Say $G = (V,E)$ with $|V|=k$. Define the
{\em homogenized blowup-polynomial}
\begin{equation}
\widetilde{p}_G(z_0, z_1, \dots, z_k) := (-z_0)^k p_G \left(
\frac{z_1}{-z_0}, \dots, \frac{z_k}{-z_0} \right) \in \mathbb{R}[z_0,
z_1, \dots, z_k].
\end{equation}
Then the following are equivalent:
\begin{enumerate}
\item The polynomial $\widetilde{p}_G(z_0, z_1, \dots, z_k)$ is
real-stable.
\item The polynomial $\widetilde{p}_G(\cdot)$ has all coefficients (of
the monomials $z_0^{k - |J|} \prod_{j \in J} z_j$) non-negative.
\item We have $(-1)^k p_G(-1,\dots,-1) > 0$, and the normalized
``reflected'' polynomial
\[
(z_1, \dots, z_k) \quad \mapsto \quad \frac{p_G(-z_1, \dots,
-z_k)}{p_G(-1,\dots,-1)}
\]
is {\em strongly Rayleigh}. In other words, this (multi-affine)
polynomial is real-stable and has non-negative coefficients (of all
monomials $\prod_{j \in J} z_j$), which sum up to $1$.
\item The modified distance matrix $\ensuremath{\mathcal{D}}_G$ (see Definition~\ref{Ddef}) is
positive semidefinite.
\item $G$ is a complete multipartite graph.
\end{enumerate}
\end{utheorem}
Theorem~\ref{Tlorentz} provides a novel characterization in the
literature of complete multipartite graphs, in terms of real stability
and the strong Rayleigh property.
Moreover, given the remarks preceding Theorem~\ref{Tlorentz}, we present
three further equivalences to these characterizations:
\begin{corollary}\label{Clorentz}
With the definitions as in Section~\ref{Slorentz}, the assertions in
Theorem~\ref{Tlorentz} are further equivalent to:
\begin{enumerate}
\setcounter{enumi}{5}
\item The polynomial $\widetilde{p}_G(z_0, \dots, z_k)$ is Lorentzian.
\item The polynomial $\widetilde{p}_G(z_0, \dots, z_k)$ is strongly
log-concave.
\item The polynomial $\widetilde{p}_G(z_0, \dots, z_k)$ is completely
log-concave.
\end{enumerate}
\end{corollary}
We quickly explain the corollary. Theorem~\ref{Tlorentz}(1) implies
$\widetilde{p}_G$ is Lorentzian (see \cite{BH,COSW}), which implies
Theorem~\ref{Tlorentz}(2). The other equivalences follow from
\cite[Theorem 2.30]{BH}, which shows that -- for any real homogeneous
polynomial -- assertions~(7), (8) here are equivalent to
$\widetilde{p}_G$ being Lorentzian.
\begin{remark}
As we see in the proof of Theorem~\ref{Tlorentz}, when $\ensuremath{\mathcal{D}}_G$ is positive
semidefinite, the homogeneous polynomial $\widetilde{p}_G(z_0, \dots,
z_k)$ has a determinantal representation, i.e.,
\[
\widetilde{p}_G(z_0, \dots, z_k) = c \cdot \det( z_0 \Id_k + \sum_{j=1}^k
z_j A_j),
\]
with all $A_j$ positive semidefinite and $c \in \ensuremath{\mathbb R}$. In
Proposition~\ref{Pmixed} below, we further compute the \textit{mixed
characteristic polynomial} of these matrices $A_j$
(see~\eqref{Emixeddefn} for the definition), and show that up to a
scalar, it equals the ``inversion'' of the univariate blowup-polynomial,
i.e.\ $z_0^k u_G(z_0^{-1})$.
\end{remark}
\begin{remark}
We also show that the univariate polynomial $u_G(x)$ is intimately
related to the characteristic polynomial of $D_G$ (i.e., the ``distance
spectrum'' of $G$), whose study was one of our original motivations. See
Proposition~\ref{Pdistancespectrum} and the subsequent discussion, for
precise statements.
\end{remark}
\subsection{Two novel delta-matroids}
We conclude with a related byproduct: two novel constructions of
delta-matroids, one for every finite metric space and the other for each
tree graph. Recall that a \textit{delta-matroid} consists of a finite
``ground set'' $E$ and a nonempty collection of \textit{feasible} subsets
$\mathcal{F} \subset 2^E$, satisfying $\bigcup_{F \in \mathcal{F}} F = E$
as well as the symmetric exchange axiom: \textit{Given $A,B \in
\mathcal{F}$ and $x \in A \Delta B$ (their symmetric difference), there
exists $y \in A \Delta B$ such that $A \Delta \{ x, y \} \in
\mathcal{F}$.} Delta-matroids were introduced by Bouchet
in~\cite{Bouchet1} as a generalization of the notion of matroids.
Each (skew-)symmetric matrix $A_{k \times k}$ over a field yields a
\textit{linear delta-matroid} $\mathcal{M}_A$ as follows: let $E := \{
1, \dots, k \}$, and let a subset $F \subset E$ belong to $\mathcal{M}_A$
if either $F$ is empty or the
principal submatrix $A_{F \times F}$ is nonsingular. In~\cite{Bouchet2},
Bouchet showed that if $A$ is (skew-)symmetric, then the set system
$\mathcal{M}_A$ is indeed a delta-matroid, which is said to be
\textit{linear}.
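To make this construction concrete, here is a small plain-Python sketch (the helper names are ours) that lists the feasible sets of $\mathcal{M}_A$ for a symmetric integer matrix, and verifies the symmetric exchange axiom by brute force:

```python
from fractions import Fraction
from itertools import combinations

def det(M):
    """Exact determinant over Q, via Gaussian elimination on Fractions."""
    A = [[Fraction(x) for x in row] for row in M]
    k, sign, d = len(A), 1, Fraction(1)
    for c in range(k):
        piv = next((r for r in range(c, k) if A[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            A[c], A[piv] = A[piv], A[c]
            sign = -sign
        d *= A[c][c]
        for r in range(c + 1, k):
            f = A[r][c] / A[c][c]
            A[r] = [A[r][j] - f * A[c][j] for j in range(k)]
    return sign * d

def feasible_sets(A):
    """Feasible sets of the linear delta-matroid M_A: the empty set,
    together with every F whose principal submatrix A_{F x F} is
    nonsingular."""
    k = len(A)
    out = {frozenset()}
    for r in range(1, k + 1):
        for F in combinations(range(k), r):
            if det([[A[i][j] for j in F] for i in F]) != 0:
                out.add(frozenset(F))
    return out

def symmetric_exchange_holds(feas):
    """Brute-force check of the symmetric exchange axiom."""
    for a_set in feas:
        for b_set in feas:
            for x in a_set ^ b_set:
                if not any(a_set ^ {x, y} in feas for y in a_set ^ b_set):
                    return False
    return True

A = [[0, 1, 1], [1, 0, 1], [1, 1, 2]]  # a symmetric matrix over Q
```

For this $A$ the feasible sets are $\emptyset$, $\{3\}$, and all three pairs (the full set is infeasible since $\det A = 0$); Bouchet's theorem guarantees the exchange axiom holds, as the brute-force check confirms.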
We now return to the blowup-polynomial. First recall a 2007 result of
Br\"and\'en \cite{Branden}: given a multi-affine real-stable polynomial,
the set of monomials with nonzero coefficients forms a delta-matroid.
Thus, from $p_X(\ensuremath{\mathbf n})$ we obtain a delta-matroid, which as we will explain
is linear:
\begin{corollary}\label{Cdeltamatroid}
Given a finite metric space $(X,d)$, the set of monomials with nonzero
coefficients in $p_X(\ensuremath{\mathbf n})$ forms the linear delta-matroid
$\mathcal{M}_{\ensuremath{\mathcal{D}}_X}$.
\end{corollary}
\begin{definition}
We term $\mathcal{M}_{\ensuremath{\mathcal{D}}_X}$ the \emph{blowup delta-matroid} of $(X,d)$.
\end{definition}
The blowup delta-matroid $\mathcal{M}_{\ensuremath{\mathcal{D}}_X}$ is -- even for $X$ a finite
connected unweighted graph -- a novel construction that arises out of
metric geometry rather than combinatorics, and one that seems to be
unexplored in the literature (and unknown to experts).
Of course, it is a simple, direct consequence of Br\"and\'en's result in
\cite{Branden}.
However, the next delta-matroid is less direct to show:
\begin{utheorem}\label{Ttree-blowup}
Suppose $T = (V,E)$ is a finite connected unweighted tree with $|V| \geq
2$. Define the set system $\mathcal{M}'(T)$ to comprise all subsets
$I \subset V$, except for the ones that contain two vertices $v_1 \neq
v_2$ in $I$ such that the Steiner tree $T(I)$ has $v_1, v_2$ as leaves
with a common neighbor. Then $\mathcal{M}'(T)$ is a delta-matroid, which
does not equal $\mathcal{M}_{D_T}$ for any path graph $T = P_k$ with $k
\geq 9$.
\end{utheorem}
We further prove that this notion of (in)feasible subsets in
$\mathcal{M}'(T)$ does not generalize to all graphs. Thus,
$\mathcal{M}'(T)$ is a \textit{combinatorial} (not matrix-theoretic)
delta-matroid that is also unstudied in the literature to our knowledge,
and which arises from every tree, but interestingly, not from all
graphs.
As a closing statement here: in addition to further exploring the
real-stable polynomials $p_G(\ensuremath{\mathbf n})$, it would be interesting to obtain
connections between these delta-matroids $\mathcal{M}_{\ensuremath{\mathcal{D}}_G}$ and
$\mathcal{M}'(T)$, and others known in the literature from combinatorics,
polynomial geometry, and algebra.
\subsection*{Organization of the paper}
The remainder of the paper is devoted to proving the above
Theorems~\ref{Teuclidean} through~\ref{Ttree-blowup}; this will require
developing several preliminaries along the way. The paper is clustered by
theme; thus, the next two sections and the final one respectively
involve, primarily:
\begin{itemize}
\item (commutative) algebraic methods -- to prove the polynomiality of
$p_X(\cdot)$ (Theorem \ref{Tmetricmatrix}), and to
characterize those $X$ for which it is a symmetric polynomial
(Proposition \ref{Tsymm});
\item methods from real-stability and analysis -- to show $p_X(\cdot)$ is
real-stable (Theorem~\ref{Tstable});
\item metric geometry -- to characterize for a given Euclidean finite
metric space $X$, all blowups that remain Euclidean
(Theorem~\ref{Teuclidean}), and to write down a related ``tropical''
version of Schoenberg's Euclidean embedding theorem
from~\cite{Schoenberg35}.
\end{itemize}
In the remaining Section~\ref{Sgraphs}, we show
Theorems~\ref{Tisom}--\ref{Ttree-blowup}. In greater detail: we focus on
the special case of $X = G$ a finite simple connected unweighted graph,
with the minimum edge-distance metric. After equating the isometries of
$G$ with the symmetries of $p_G(\ensuremath{\mathbf n})$, and recovering $G$ from
$p_G(\ensuremath{\mathbf n})$, we prove the aforementioned characterization of complete
multipartite graphs $G$ in terms of $\widetilde{p}_G$ being real-stable,
or $p_G(-\ensuremath{\mathbf n}) / p_G(-1, \dots, -1)$ being strongly Rayleigh. Next, we
discuss a family of blowup-polynomials from this viewpoint of ``partial''
symmetry. We also connect $u_G(x)$ to the characteristic polynomial of
$D_G$, hence to the distance spectrum of $G$. Finally, we introduce the
delta-matroid $\mathcal{M}'(T)$ for every tree, and explore its relation
to the blowup delta-matroid $\mathcal{M}_{\ensuremath{\mathcal{D}}_T}$ (for $T$ a path), as
well as extensions to general graphs. We end with two Appendices that
contain supplementary details and results.
We conclude this section on a philosophical note. Our approach in this
work adheres to the maxim that the multivariate polynomial is a natural,
general, and more powerful object than its univariate specialization.
This is of course famously manifested in the recent explosion of activity
in the geometry of polynomials, via the study of real-stable polynomials
by Borcea--Br\"and\'en and other researchers; but also shows up in
several other settings -- we refer the reader to the survey~\cite{Sokal2}
by Sokal for additional instances. (E.g., a specific occurrence is in the
extreme simplicity of the proof of the multivariate Brown--Colbourn
conjecture~\cite{Royle-Sokal,Sokal1}, as opposed to the involved proof in
the univariate case~\cite{Wagner}.)
\section{Algebraic results: the blowup-polynomial and its full
symmetry}\label{S2}
We begin this section by proving Theorem~\ref{Tmetricmatrix} in ``full''
algebraic (and greater mathematical) generality, over an arbitrary unital
commutative ring $R$. We require the following notation.
\begin{definition}
Fix positive integers $k, n_1, \dots, n_k > 0$, and vectors $\ensuremath{\mathbf p}_i, \ensuremath{\mathbf q}_i
\in R^{n_i}$ for all $1 \leq i \leq k$.
\begin{enumerate}
\item For these parameters, define the \emph{blowup-monoid} to be the
collection $\mathcal{M}_\ensuremath{\mathbf n}(R) := R^k \times R^{k \times k}$. We write a
typical element as a pair $(\ensuremath{\mathbf a}, D)$, where in coordinates, $\ensuremath{\mathbf a} =
(a_i)^T$ and $D = (d_{ij})$.
\item Given $(\ensuremath{\mathbf a}, D) \in \mathcal{M}_\ensuremath{\mathbf n}(R)$, define $M(\ensuremath{\mathbf a},D)$ to be
the square matrix of dimension $n_1 + \cdots + n_k$ with $k^2$ blocks,
whose $(i,j)$-block for $1 \leq i,j \leq k$ is $\delta_{i,j} a_i
\Id_{n_i} + d_{ij} \ensuremath{\mathbf p}_i \ensuremath{\mathbf q}_j^T$. Also define $\Delta_\ensuremath{\mathbf a} \in R^{k
\times k}$ to be the diagonal matrix with $(i,i)$ entry $a_i$, and
\[
N(\ensuremath{\mathbf a},D) := \Delta_\ensuremath{\mathbf a} + \diag(\ensuremath{\mathbf q}_1^T \ensuremath{\mathbf p}_1, \dots, \ensuremath{\mathbf q}_k^T \ensuremath{\mathbf p}_k) \cdot
D \ \in R^{k \times k}.
\]
\item Given $\ensuremath{\mathbf a}, \ensuremath{\mathbf a}' \in R^k$, define $\ensuremath{\mathbf a} \circ \ensuremath{\mathbf a}' := (a_1 a'_1,
\dots, a_k a'_k)^T \in R^k$.
\end{enumerate}
\end{definition}
The set $\mathcal{M}_\ensuremath{\mathbf n}(R)$ is of course a group under addition, but we
are interested in the following non-standard monoid structure on it.
\begin{lemma}\label{Lmonoid}
The set $\mathcal{M}_\ensuremath{\mathbf n}(R)$ is a monoid under the product
\begin{align*}
(\ensuremath{\mathbf a},D) \circ (\ensuremath{\mathbf a}',D') := &\ (\ensuremath{\mathbf a} \circ \ensuremath{\mathbf a}', \Delta_\ensuremath{\mathbf a} D' + D
\Delta_{\ensuremath{\mathbf a}'} + D \cdot \diag(\ensuremath{\mathbf q}_1^T \ensuremath{\mathbf p}_1, \dots, \ensuremath{\mathbf q}_k^T \ensuremath{\mathbf p}_k) \cdot
D')\\
= &\ (\ensuremath{\mathbf a} \circ \ensuremath{\mathbf a}', \Delta_\ensuremath{\mathbf a} D' + D N(\ensuremath{\mathbf a}',D')),
\end{align*}
and with identity element $((1,\dots,1)^T, 0_{k \times k})$.
\end{lemma}
With this notation in place, we now present the ``general'' formulation
of Theorem~\ref{Tmetricmatrix}:
\begin{theorem}\label{Tmonoid}
Fix integers $k, n_1, \dots, n_k$ and vectors $\ensuremath{\mathbf p}_i, \ensuremath{\mathbf q}_i$ as above.
Let $K := n_1 + \cdots + n_k$.
\begin{enumerate}
\item The following map is a morphism of monoids:
\[
\Psi : (\mathcal{M}_\ensuremath{\mathbf n}(R), \circ) \to (R^{K \times K},
\cdot), \qquad (\ensuremath{\mathbf a},D) \mapsto M(\ensuremath{\mathbf a},D).
\]
\item The determinant of $M(\ensuremath{\mathbf a},D)$ equals $\prod_i a_i^{n_i - 1}$ times
a multi-affine polynomial in $a_i, d_{ij}$, and the entries $\ensuremath{\mathbf q}_i^T
\ensuremath{\mathbf p}_i$. More precisely,
\begin{equation}\label{Emonoid}
\det M(\ensuremath{\mathbf a},D) = \det N(\ensuremath{\mathbf a},D) \prod_{i=1}^k a_i^{n_i - 1}.
\end{equation}
\item If all $a_i \in R^\times$ and $N(\ensuremath{\mathbf a},D)$ is invertible, then so is
$M(\ensuremath{\mathbf a},D)$, and
\[
M(\ensuremath{\mathbf a},D)^{-1} = M((a_1^{-1}, \dots, a_k^{-1})^T, -\Delta_\ensuremath{\mathbf a}^{-1} D
N(\ensuremath{\mathbf a},D)^{-1}).
\]
\end{enumerate}
\end{theorem}
Instead of using $N(\ensuremath{\mathbf a},D)$ which involves ``post-multiplication'' by
$D$, one can also use $N(\ensuremath{\mathbf a},D^T)^T$ in the above results, to obtain
similar formulas that we leave to the interested reader.
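Before turning to the proof, the identity~\eqref{Emonoid} can be sanity-checked numerically over $\mathbb{Q}$. The following plain-Python sketch (helper names ours) assembles the block matrix $M(\ensuremath{\mathbf a},D)$ and the $k \times k$ matrix $N(\ensuremath{\mathbf a},D)$ for one choice of data with $k = 2$ and $\ensuremath{\mathbf n} = (2,1)$:

```python
from fractions import Fraction

def det(M):
    """Exact determinant over Q, via Gaussian elimination on Fractions."""
    A = [[Fraction(x) for x in row] for row in M]
    k, sign, d = len(A), 1, Fraction(1)
    for c in range(k):
        piv = next((r for r in range(c, k) if A[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            A[c], A[piv] = A[piv], A[c]
            sign = -sign
        d *= A[c][c]
        for r in range(c + 1, k):
            f = A[r][c] / A[c][c]
            A[r] = [A[r][j] - f * A[c][j] for j in range(k)]
    return sign * d

def build_M(a, D, P, Q):
    """M(a, D): the (i, j) block equals delta_{ij} a_i Id + d_ij p_i q_j^T."""
    big = []
    for i, pi in enumerate(P):
        for s in range(len(pi)):
            row = []
            for j, qj in enumerate(Q):
                for t in range(len(qj)):
                    row.append((a[i] if (i == j and s == t) else 0)
                               + D[i][j] * pi[s] * qj[t])
            big.append(row)
    return big

def build_N(a, D, P, Q):
    """N(a, D) = Delta_a + diag(q_i^T p_i) . D."""
    dot = [sum(q * p for q, p in zip(Q[i], P[i])) for i in range(len(P))]
    return [[(a[i] if i == j else 0) + dot[i] * D[i][j]
             for j in range(len(P))] for i in range(len(P))]

a = [2, 3]
D = [[1, 4], [5, 6]]
P = [[1, 2], [3]]   # p_1 in R^2, p_2 in R^1, so n = (2, 1)
Q = [[2, 1], [4]]
```

For these data, $\det M(\ensuremath{\mathbf a},D) = \det N(\ensuremath{\mathbf a},D) \cdot a_1^{n_1 - 1} a_2^{n_2 - 1}$, as the theorem asserts.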
\begin{proof}
The first assertion is easy, and it implies the third via showing that
$M(\ensuremath{\mathbf a},D)^{-1} M(\ensuremath{\mathbf a},D) = \Id_K$. (We show these computations for
completeness in an Appendix.) Thus, it remains to prove the second
assertion.
To proceed, we employ \textit{Zariski density}, as was done in
e.g.~our previous work~\cite{CK-tree}. Namely, we begin by working over
the field of rational functions in $k + k^2 + 2K$ variables
\[
\mathbb{F} := \mathbb{Q}(A_1, \dots, A_k, D_{ij}, Q_i^{(l)}, P_i^{(l)}),
\]
where $A_i, D_{ij}$ (with a slight abuse of notation), and $Q_i^{(l)},
P_i^{(l)}$ -- with $1 \leq i,j \leq k$ and $1 \leq l \leq n_i$ -- serve
as proxies for $a_i, d_{ij}$, and the coordinates of $\ensuremath{\mathbf q}_i, \ensuremath{\mathbf p}_i$
respectively. Over this field, we work with
\[
{\bf A} = (A_1, \dots, A_k)^T, \qquad
{\bf Q}_i = (Q_i^{(1)}, \dots, Q_i^{(n_i)})^T, \qquad
{\bf P}_i = (P_i^{(1)}, \dots, P_i^{(n_i)})^T,
\]
and the matrix ${\bf D} = (D_{ij})$; note that ${\bf D}$ has full rank
$r=k$, since $\det {\bf D}$ is a nonzero polynomial over $\mathbb{Q}$,
hence is a unit in $\mathbb{F}$.
Let ${\bf D} = \sum_{j=1}^r \ensuremath{\mathbf u}_j \ensuremath{\mathbf v}_j^T$ be any rank-one decomposition.
For each $1 \leq j \leq r$, write $\ensuremath{\mathbf u}_j = (u_{j1}, \dots, u_{jk})^T$,
and similarly for $\ensuremath{\mathbf v}_j$. Then $D_{ij} = \sum_{s=1}^r u_{si} v_{sj}$ for
all $i,j$. Now a Schur complement argument (with respect to the $(2,2)$
block below) yields:
\[
\det M({\bf A}, {\bf D}) = \det \begin{pmatrix}
A_1 \Id_{n_1} & \cdots & 0 & \vline & u_{11} {\bf P}_1 & \cdots & u_{r1}
{\bf P}_1 \\
\vdots & \ddots & \vdots & \vline & \vdots & \ddots & \vdots\\
0 & \cdots & A_k \Id_{n_k} & \vline & u_{1k} {\bf P}_k & \cdots & u_{rk}
{\bf P}_k\\
\hline
-v_{11} {\bf Q}_1^T & \cdots & -v_{1k} {\bf Q}_k^T & \vline & & & \\
\vdots & \ddots & \vdots & \vline & & \Id_r & \\
-v_{r1} {\bf Q}_1^T & \cdots & -v_{rk} {\bf Q}_k^T & \vline & & &
\end{pmatrix}.
\]
We next compute the determinant on the right alternately: by using the
Schur complement with respect to the $(1,1)$ block instead. This yields:
\[
\det M({\bf A}, {\bf D}) = \det ( \Id_r + B ) \prod_{i=1}^k A_i^{n_i},
\]
where the $r \times r$ matrix $B$ has $(i,j)$ entry
$\sum_{l=1}^k v_{il} \; (A_l^{-1} {\bf Q}_l^T {\bf P}_l) \; u_{jl}$.
But $\det (\Id_r + B)$ is also the determinant of
\[
M' :=
\begin{pmatrix}
& & & \vline & (A_1^{-1} {\bf Q}_1^T {\bf P}_1) u_{11} & \cdots &
(A_1^{-1} {\bf Q}_1^T {\bf P}_1) u_{r1} \\
& \Id_k & & \vline & \vdots & \ddots & \vdots\\
& & & \vline & (A_k^{-1} {\bf Q}_k^T {\bf P}_k) u_{1k} & \cdots &
(A_k^{-1} {\bf Q}_k^T {\bf P}_k) u_{rk}\\
\hline
-v_{11} & \cdots & -v_{1k} & \vline & & & \\
\vdots & \ddots & \vdots & \vline & & \Id_r & \\
-v_{r1} & \cdots & -v_{rk} & \vline & & &
\end{pmatrix},
\]
by taking the Schur complement with respect to its $(1,1)$ block.
Finally, take the Schur complement with respect to the $(2,2)$ block of
$M'$, to obtain:
\begin{align*}
\det M({\bf A}, {\bf D}) = &\ \det M' \prod_{i=1}^k A_i^{n_i} = \det
\left(\Id_k + \Delta_{\bf A}^{-1} \diag( {\bf Q}_1^T {\bf P}_1, \dots,
{\bf Q}_k^T {\bf P}_k) {\bf D} \right) \prod_{i=1}^k A_i^{n_i}\\
= &\ \det N({\bf A}, {\bf D}) \prod_{i=1}^k A_i^{n_i - 1},
\end{align*}
\noindent and this is indeed $\prod_i A_i^{n_i - 1}$ times a multi-affine
polynomial in the claimed variables.
The above reasoning proves the assertion~\eqref{Emonoid} over the field
\[
\mathbb{F} = \mathbb{Q}(A_1, \dots, A_k, D_{ij}, Q_i^{(l)}, P_i^{(l)})
\]
defined above. We now explain how Zariski density helps
prove~\eqref{Emonoid} over every unital commutative ring -- with the key
being that both sides of~\eqref{Emonoid} are \textit{polynomials} in the
variables. Begin by observing that~\eqref{Emonoid} actually holds over
the polynomial (sub)ring
\[
R_0 := \mathbb{Q}[A_1, \dots, A_k, D_{ij}, Q_i^{(l)}, P_i^{(l)}],
\]
but the above proof used the invertibility of certain polynomials $A_1,
\dots, A_k, \det (D_{ij})_{i,j=1}^k$.
Now use that $\mathbb{Q}$ is an infinite field; thus, the following
result applies:
\begin{proposition}\label{Pzariski}
The following are equivalent for a field $\mathbb{F}$.
\begin{enumerate}
\item The polynomial ring $\mathbb{F}[x_1, \dots, x_n]$ (for some $n \geq
1$) equals the ring of polynomial functions from affine $n$-space
$\mathbb{A}_{\mathbb{F}}^n \cong \mathbb{F}^n$ to $\mathbb{F}$.
\item The preceding statement holds for every $n \geq 1$.
\item $\mathbb{F}$ is infinite.
\end{enumerate}
Moreover, the nonzero-locus $\mathcal{L}$ of any nonzero polynomial in
$\mathbb{F}[x_1, \dots, x_n]$ with $\mathbb{F}$ an infinite field, is
Zariski dense in $\mathbb{A}_{\mathbb{F}}^n$. In other words, if a
polynomial in $n$ variables equals zero on $\mathcal{L}$, then it
vanishes on all of $\mathbb{A}_{\mathbb{F}}^n \cong \mathbb{F}^n$.
\end{proposition}
\begin{proof}[Proof-sketch]
Clearly $(2) \implies (1)$; and $(1) \implies (3)$ holds by
contraposition, since over a finite field $\mathbb{F}_q$
the nonzero polynomial $x_1^q - x_1$ equals the zero \textit{function}.
The proof of the remaining implication $(3) \implies (2)$ is by
induction on $n \geq 1$, and is left to the reader (or see e.g.\
standard textbooks, or even
\cite{CK-tree}) -- as is the proof of the final assertion.
\end{proof}
By the equivalence in Proposition~\ref{Pzariski}, the above polynomial
ring $R_0$ equals the ring of polynomial functions in the same number of
variables, so~\eqref{Emonoid} now holds over the ring of polynomial
functions in the above $k + k^2 + 2K$ variables -- but only on the
nonzero-locus of the polynomial $(\det {\bf D}) \prod_i A_i$, since we
used $A_i^{-1}$ and the invertibility of ${\bf D}$ in the above proof.
Now for the final touch: as $(\det {\bf D}) \prod_i A_i$ is a nonzero
polynomial, its nonzero-locus is Zariski dense in affine space
$\mathbb{A}_{\mathbb{Q}}^{k + k^2 + 2K}$ (by Proposition~\ref{Pzariski}).
Since the difference of the polynomials in~\eqref{Emonoid} (this is where
we use that $\det(\cdot)$ is a polynomial!) vanishes on the above
nonzero-locus, it does so for all values of $A_i$ and the other
variables. Therefore~\eqref{Emonoid} holds in the ring $R'_0$ of
polynomial \textit{functions} with coefficients in $\mathbb{Q}$, hence
upon restricting to the polynomial subring of $R'_0$ with integer (not
just rational) coefficients -- since the polynomials on both sides
of~\eqref{Emonoid} have integer coefficients. Finally, the proof is
completed by specializing to an arbitrary unital commutative ring $R$.
\end{proof}
Theorem~\ref{Tmonoid}, when specialized to $p_i^{(l)} = q_i^{(l)} = 1$
for all $1 \leq i \leq k$ and $1 \leq l \leq n_i$, reveals how to convert
the sizes $n_{x_i}$ in the blowup-matrix $D_{X[\ensuremath{\mathbf n}]}$ into entries of the
related matrix $N(\ensuremath{\mathbf a},D)$. This helps prove a result in the introduction
-- that $\det D_{X[\ensuremath{\mathbf n}]}$ is a polynomial in $\ensuremath{\mathbf n}$:
\begin{proof}[Proof of Theorem~\ref{Tmetricmatrix}]
Everything but the final sentence follows from Theorem~\ref{Tmonoid},
specialized to
\begin{align*}
R = \mathbb{R}, \qquad n_i = n_{x_i}, \qquad
d_{ij} = &\ d(x_i, x_j)\ \forall i \neq j, \qquad
d_{ii} = 2 d(x_i, X \setminus \{ x_i \}) = -a_i, \\
D = \ensuremath{\mathcal{D}}_X = &\ (d_{ij})_{i,j=1}^k, \qquad
p_i^{(l)} = q_i^{(l)} = 1\ \forall 1 \leq l \leq n_i.
\end{align*}
(A word of caution: $d_{ii} \neq d(x_i, x_i)$, and hence $\ensuremath{\mathcal{D}}_X \neq D_X$:
they differ by a diagonal matrix.)
In particular, $p_X(\ensuremath{\mathbf n})$ is a multi-affine polynomial in $\ensuremath{\mathbf q}_i^T \ensuremath{\mathbf p}_i
= n_i$. We also write out the blowup-polynomial, useful here and below:
\begin{align}\label{Emetricpoly}
p_X(\ensuremath{\mathbf n}) = \det N(\ensuremath{\mathbf a}_X,\ensuremath{\mathcal{D}}_X), \quad \text{where} \quad \ensuremath{\mathbf a}_X = &\
(D_X - \ensuremath{\mathcal{D}}_X) (1,1,\dots,1)^T,\\
\text{and so} \quad N(\ensuremath{\mathbf a}_X,\ensuremath{\mathcal{D}}_X) = &\ \diag((n_{x_i} - 1) 2 d(x_i, X
\setminus \{ x_i \}))_i + (n_{x_i} d(x_i, x_j))_{i,j=1}^k. \notag
\end{align}
Now the constant term is obtained by evaluating $\det N({\bf a}_X, 0_{k
\times k})$, which is easy since $N({\bf a}_X, 0_{k \times k})$ is
diagonal. Similarly, the coefficient of $n_{x_i}$ is obtained by setting
all other $n_{x_{i'}} = 0$ in $\det N(\ensuremath{\mathbf a}_X,\ensuremath{\mathcal{D}}_X)$. Expand along the
$i$th column to compute this determinant; now adding these determinants
over all $i$ yields the claimed formula for the linear term.
\end{proof}
As a further refinement of Theorem~\ref{Tmetricmatrix}, we isolate
\textit{every} term in the multi-affine polynomial $p_X(\ensuremath{\mathbf n})$.
Two consequences follow:
(a)~a formula relating the blowup-polynomials for a metric space $X$ and
its subspace $Y$; and
(b)~a sufficient condition for two monomials in $p_X(\ensuremath{\mathbf n})$ to have equal
coefficients. In order to state and prove these latter two results, we
require the following notion.
\begin{definition}
We say that a metric subspace $Y$ of a finite metric space $(X,d)$ is
\emph{admissible} if for every $y \in Y$, there exists $y' \in Y$ such
that $d(y, X \setminus \{ y \}) = d(y,y')$.
\end{definition}
For example, in every finite simple connected unweighted graph $G$ with
the minimum edge-distance as its metric, a subset $Y$ of vertices is
admissible if and only if the induced subgraph in $G$ on $Y$ has no
isolated vertices.
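This criterion is immediate to check computationally; a minimal plain-Python sketch (the function name \texttt{is\_admissible} and the adjacency-dict encoding are our own illustrative choices) for connected unweighted graphs:

```python
def is_admissible(adj, Y):
    """For a graph metric space G (given as {vertex: set of neighbours}),
    a vertex subset Y is admissible iff every y in Y attains its nearest
    distance d(y, V - {y}) = 1 inside Y, i.e. iff the induced subgraph
    on Y has no isolated vertices."""
    return all(adj[y] & set(Y) for y in Y)

# Path graph P_4:  0 - 1 - 2 - 3
P4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
```

For instance, $\{0,1\}$ is admissible in $P_4$ (an edge), while $\{0,2\}$ and $\{0,1,3\}$ are not, since $2$ and $3$ respectively are isolated in the induced subgraph.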
\begin{proposition}\label{Pcoeff}
Notation as above.
\begin{enumerate}
\item Given any subset $I \subset \{ 1, \dots, k \}$, the coefficient in
$p_X(\ensuremath{\mathbf n})$ of $\prod_{i \in I} n_{x_i}$ is
\[
\det (\ensuremath{\mathcal{D}}_X)_{I \times I} \prod_{j \not\in I} (-2 d(x_j, X \setminus \{
x_j \})) = \det (\ensuremath{\mathcal{D}}_X)_{I \times I} \prod_{j \not\in I} (-d_{jj}),
\]
with $(\ensuremath{\mathcal{D}}_X)_{I \times I}$ the principal submatrix of $\ensuremath{\mathcal{D}}_X$ formed by
the rows and columns indexed by~$I$.
\item Suppose $I \subset \{ 1, \dots, k \}$, and $Y = \{ x_i : i \in I
\}$ is an admissible subspace of $X$. Then,
\[
p_Y(\{ n_{x_i} : i \in I \}) = p_X(\ensuremath{\mathbf n})|_{n_{x_j} = 0\; \forall j \not\in
I} \cdot \prod_{j \not\in I} (-2 d(x_j, X \setminus \{ x_j \}))^{-1}.
\]
In particular, if a monomial $\prod_{i \in I_0} n_{x_i}$ does
not occur in $p_Y(\cdot)$ for some $I_0 \subset I$, then it does not
occur in $p_X(\cdot)$ either.
\item Suppose two admissible subspaces of $X$, consisting of points
$(y_1, \dots, y_l)$ and $(z_1, \dots, z_l)$, are isometric (here, $1 \leq
l \leq k$). If moreover
\begin{equation}\label{Ehypothesis}
\prod_{i=1}^l d(y_i, X \setminus \{ y_i \}) =
\prod_{i=1}^l d(z_i, X \setminus \{ z_i \}),
\end{equation}
then the coefficients in $p_X(\ensuremath{\mathbf n})$ of $\prod_{i=1}^l
n_{y_i}$ and $\prod_{i=1}^l n_{z_i}$ are equal.
\end{enumerate}
\end{proposition}
The final assertion strengthens the (obvious) observation that if $\Psi :
X \to X$ is an isometry, then $p_X(\cdot) \equiv p_{\Psi(X)}(\cdot)$ --
in other words, the polynomial $p_X(\cdot)$ is invariant under the action
of the permutation of the variables $( n_x : x \in X )$ induced by
$\Psi$. This final assertion applies to blowup-polynomials of unweighted
graphs with ``locally homeomorphic neighborhoods'', e.g.~to interior
points and intervals in path graphs (or more generally, banded graphs).
See the opening discussion in Section~\ref{Rgraphsymm}, as well as
Proposition~\ref{Pzeroterms}.
\begin{proof}\hfill
\begin{enumerate}
\item It suffices to compute the coefficient of $\prod_{i \in I} n_{x_i}$
in $p_X(\ensuremath{\mathbf n}) = \det N(\ensuremath{\mathbf a}_X,\ensuremath{\mathcal{D}}_X)$, where $a_i = -2 d(x_i, X \setminus \{
x_i \})\ \forall 1 \leq i \leq k$, and we set all $n_{x_j},\ j \not\in I$
to zero. To evaluate this determinant, notice that for $j \not\in I$, the
$j$th row contains only one nonzero entry, along the main diagonal. Thus,
expand the determinant along the $j$th row for every $j \not\in I$; this
yields $\prod_{j \not\in I} (-d_{jj})$ times the principal minor
$N(\ensuremath{\mathbf a}_X,\ensuremath{\mathcal{D}}_X)_{I \times I}$. Moreover, the coefficient of $\prod_{i \in
I} n_{x_i}$ in the expansion of $\det N(\ensuremath{\mathbf a}_X,\ensuremath{\mathcal{D}}_X)_{I \times I}$ is the
same as that in expanding $\det N({\bf 0}, \ensuremath{\mathcal{D}}_X)_{I \times I}$, and this
is precisely $\det (\ensuremath{\mathcal{D}}_X)_{I \times I}$.
\item Let us use $\ensuremath{\mathbf a}_X, \ensuremath{\mathcal{D}}_X$ and $\ensuremath{\mathbf a}_Y, \ensuremath{\mathcal{D}}_Y$ for the appropriate data
generated from $X$ and $Y$ respectively. Then the admissibility of $Y$
implies that $(\ensuremath{\mathbf a}_X)_I = \ensuremath{\mathbf a}_Y$ and $(\ensuremath{\mathcal{D}}_X)_{I \times I} = \ensuremath{\mathcal{D}}_Y$.
Now a direct computation reveals:
\[
p_X(\ensuremath{\mathbf n})|_{n_{x_j} = 0\; \forall j \not\in I} = \det(\Delta_{\ensuremath{\mathbf a}_Y} +
\Delta_{\ensuremath{\mathbf n}_Y} \ensuremath{\mathcal{D}}_Y) \prod_{j \not\in I} (-d_{jj}).
\]
This shows the claimed equation, and the final assertion is an immediate
consequence of it.
\item Let $I', I'' \subset \{ 1, \dots, k \}$ index the points $(y_1,
\dots, y_l)$ and $(z_1, \dots, z_l)$, respectively. Similarly, let $\ensuremath{\mathcal{D}}_Y,
\ensuremath{\mathcal{D}}_Z$ denote the respective $l \times l$ matrices (e.g.\ with
off-diagonal entries $d(y_i, y_j)$ and $d(z_i, z_j)$ respectively). The
admissibility of the given subspaces implies that $(\ensuremath{\mathcal{D}}_X)_{I' \times I'}
= \ensuremath{\mathcal{D}}_Y$ and $(\ensuremath{\mathcal{D}}_X)_{I'' \times I''} = \ensuremath{\mathcal{D}}_Z$. Now use the isometry
between the $y_i$ and $z_i$ (up to relabeling) to deduce that $\det \ensuremath{\mathcal{D}}_Y
= \det \ensuremath{\mathcal{D}}_Z$. Via the first part above, it remains to prove that
\[
\prod_{j \not\in I'} (-2 d(x_j, X \setminus \{ x_j \})) =
\prod_{j \not\in I''} (-2 d(x_j, X \setminus \{ x_j \})).
\]
But this indeed holds, since multiplying the left- and right-hand sides
of this equation by the corresponding sides of~\eqref{Ehypothesis} yields
$2^{-l} \prod_{x \in X} (-2d(x, X \setminus \{ x \}))$ on both sides
(once again using admissibility). \qedhere
\end{enumerate}
\end{proof}
We provide some applications of Proposition~\ref{Pcoeff} in later
sections; for now, we apply it to prove that the blowup delta-matroid of
$X$ is linear:
\begin{proof}[Proof of Corollary~\ref{Cdeltamatroid}]
It is immediate from Proposition~\ref{Pcoeff}(1) that the blowup
delta-matroid of $X$ is precisely the linear delta-matroid
$\mathcal{M}_{\ensuremath{\mathcal{D}}_X}$ (see the paragraph preceding
Corollary~\ref{Cdeltamatroid}).
\end{proof}
We conclude this section by showing another result in the introduction,
which studies when $p_X(\ensuremath{\mathbf n})$ is symmetric in the variables $n_x$.
\begin{proof}[Proof of Proposition~\ref{Tsymm}]
First suppose $d_X$ is the discrete metric times a constant $c > 0$. Then
all $a_i = -2c = d_{ii}$. Hence,
\[
\ensuremath{\mathcal{D}}_X = c {\bf 1}_{k \times k} + c \Id_k \quad \implies \quad
N(\ensuremath{\mathbf a}_X,\ensuremath{\mathcal{D}}_X) = -2c \Id_k + \diag(n_{x_1}, \dots, n_{x_k}) \ensuremath{\mathcal{D}}_X
\]
and this is a rank-one update of the diagonal matrix $\mathbf{\Delta} := c
\diag(n_{x_1}, \dots, n_{x_k}) -2c \Id_k$. Hence
\begin{align}\label{Ecomplete}
p_X(\ensuremath{\mathbf n}) = \det N(\ensuremath{\mathbf a}_X,\ensuremath{\mathcal{D}}_X) = &\ \det (\mathbf{\Delta}) + c \cdot
(1,\dots,1) {\rm adj} (\mathbf{\Delta}) (n_{x_1}, \dots,
n_{x_k})^T\notag\\
= &\ c^k \left( \prod_{i=1}^k (n_{x_i} - 2) + \sum_{i=1}^k n_{x_i}
\prod_{i' \neq i} (n_{x_{i'}} - 2) \right),
\end{align}
and this is indeed symmetric in the $n_{x_i}$.
Conversely, suppose $p_X(\ensuremath{\mathbf n})$ is symmetric in $\ensuremath{\mathbf n}$. If $|X| = k \leq 2$
then the result is immediate. Also note that the assertion~(2) for $k
\geq 3$ follows from that for $k=3$ -- since if the distances between any
$3$ distinct points are equal, then $d(x,y) = d(x,y') = d(x',y')$ for all
distinct $x,y,x',y' \in X$ (verifying the remaining cases is easier).
Thus, we suppose henceforth that $|X| = k = 3$. For ease of exposition,
in this proof we denote $d'_{ij} := d(x_i, x_j)$ for $1 \leq i,j \leq 3$.
Also assume by relabeling the $x_i$ (if needed) that $0 < d'_{12} \leq
d'_{13} \leq d'_{23}$. Then
\[
N(\ensuremath{\mathbf a}_X,\ensuremath{\mathcal{D}}_X) = \begin{pmatrix}
(n_{x_1} - 1) 2 d'_{12} & n_{x_1} d'_{12} & n_{x_1} d'_{13} \\
n_{x_2} d'_{12} & (n_{x_2} - 1) 2 d'_{12} & n_{x_2} d'_{23} \\
n_{x_3} d'_{13} & n_{x_3} d'_{23} & (n_{x_3} - 1) 2 d'_{13}
\end{pmatrix}.
\]
Since $p_X(\ensuremath{\mathbf n}) = \det N(\ensuremath{\mathbf a}_X,\ensuremath{\mathcal{D}}_X)$ is symmetric in the $n_{x_i}$, we
equate the coefficients of $n_{x_1} n_{x_2}$ and $n_{x_2} n_{x_3}$, to
obtain:
\[
-2 \cdot d'_{13} \cdot (3 (d'_{12})^2) = -2 \cdot d'_{12} \cdot (4
d'_{12} d'_{13} - (d'_{23})^2).
\]
Simplifying this yields: $d'_{12} d'_{13} = (d'_{23})^2$, and since
$d'_{23}$ dominates $d'_{12}, d'_{13}$, the three \textit{distances}
$d'_{12}, d'_{13}, d'_{23}$ are equal. This proves the converse for $|X|
= k = 3$, hence for all $k \geq 3$.
\end{proof}
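The dichotomy in this proof is easy to check numerically. The sketch below is ours and not part of the paper (the helper name \texttt{blowup\_poly\_3pts} is an assumption of this illustration); it evaluates $p_X(\ensuremath{\mathbf n}) = \det(\Delta_{\ensuremath{\mathbf a}_X} + \Delta_\ensuremath{\mathbf n} \ensuremath{\mathcal{D}}_X)$ exactly for a three-point space, and confirms that the polynomial is symmetric for the equilateral metric $(1,1,1)$ but not for the isosceles metric with distances $(1,2,2)$.

```python
from itertools import permutations

def blowup_poly_3pts(d, n):
    """p_X(n) = det(Delta_a + Delta_n D_X) for X = {x1, x2, x3} with
    pairwise distances d = (d12, d13, d23) and arguments n = (n1, n2, n3)."""
    d12, d13, d23 = d
    pair = {(0, 1): d12, (0, 2): d13, (1, 2): d23}
    # d(x_i, X \ {x_i}) = distance to the nearest other point
    nearest = [min(v for key, v in pair.items() if i in key) for i in range(3)]
    # modified distance matrix: off-diagonal d(x_i, x_j), diagonal 2 d(x_i, X \ {x_i})
    D = [[pair[tuple(sorted((i, j)))] if i != j else 2 * nearest[i]
          for j in range(3)] for i in range(3)]
    N = [[(-2 * nearest[i] if i == j else 0) + n[i] * D[i][j]
          for j in range(3)] for i in range(3)]
    # 3x3 determinant by cofactor expansion along the first row
    return (N[0][0] * (N[1][1] * N[2][2] - N[1][2] * N[2][1])
            - N[0][1] * (N[1][0] * N[2][2] - N[1][2] * N[2][0])
            + N[0][2] * (N[1][0] * N[2][1] - N[1][1] * N[2][0]))

# equilateral metric: p_X is symmetric in (n1, n2, n3)
assert len({blowup_poly_3pts((1, 1, 1), p) for p in permutations((1, 2, 3))}) == 1
# isosceles but not equilateral: symmetry fails
assert blowup_poly_3pts((1, 2, 2), (1, 2, 3)) != blowup_poly_3pts((1, 2, 2), (3, 2, 1))
```

The second assertion reflects the coefficient comparison in the proof: for the distances $(1,2,2)$, the coefficients of $n_{x_1} n_{x_2}$ and $n_{x_2} n_{x_3}$ are $-12$ and $-8$ respectively, so swapping $n_{x_1}$ and $n_{x_3}$ changes the polynomial.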
\section{Real-stability of the blowup-polynomial}
The proofs in Section~\ref{S2} were mostly algebraic in nature: although
they applied to metric spaces, all but the final proof involved no
inequalities. We now show Theorem~\ref{Tstable}: $p_X(\cdot)$ is always
real-stable.
We begin by mentioning some properties with respect to which blowups
behave well. These include iterated blowups, the blowup-polynomial, and
the modified distance matrix $\ensuremath{\mathcal{D}}_X$ and its positivity. (As
Theorem~\ref{Teuclidean} indicates, the property of being Euclidean is
not such a property.) We first introduce another ``well-behaved'' matrix
$\mathcal{C}_X$ for a finite metric space, parallel to $\ensuremath{\mathcal{D}}_X$ and the
vector $\ensuremath{\mathbf a}_X$, which will be useful here and in later sections.
\begin{definition}
Given a finite metric space $X = \{ x_1, \dots, x_k \}$, recall the
vector ${\bf a}_X \in \ensuremath{\mathbb R}^k$ as in~\eqref{Emetricpoly} and define the
symmetric matrix $\mathcal{C}_X \in \ensuremath{\mathbb R}^{k \times k}$, via:
\begin{align}
\begin{aligned}
{\bf a}_X = &\ -2 (d(x_1, X \setminus \{ x_1 \}), \dots, d(x_k, X
\setminus \{ x_k \})) = (-2 d(x, X \setminus \{ x \}))_{x \in X}, \\
\mathcal{C}_X := &\ (-\Delta_{\ensuremath{\mathbf a}_X})^{-1/2} \ensuremath{\mathcal{D}}_X
(-\Delta_{\ensuremath{\mathbf a}_X})^{-1/2}.
\end{aligned}
\end{align}
In other words, $-{\bf a}_X$ is the diagonal vector of the modified
distance matrix $\ensuremath{\mathcal{D}}_X$, and
\begin{equation}\label{Ecmatrix}
(\mathcal{C}_X)_{ij} = \begin{cases}
1, & \text{if } i = j;\\
\displaystyle \frac{d(x_i, x_j)}{2 \sqrt{d(x_i, X \setminus \{ x_i \})
d(x_j, X \setminus \{ x_j \})}}, \qquad & \text{otherwise}.
\end{cases}
\end{equation}
\end{definition}
\begin{lemma}\label{Lwellbehaved}
Fix a finite metric space $(X,d)$ and an integer tuple $\ensuremath{\mathbf n} = (n_x : x
\in X) \in \mathbb{Z}_{>0}^X$.
\begin{enumerate}
\item Fix a positive integer $m_{xi}$ for each $x \in X$ and $1 \leq i
\leq n_x$, and let ${\bf m} := (m_{xi})_{x,i}$ denote the entire
collection. Then $(X[\ensuremath{\mathbf n}])[{\bf m}]$ is isometrically isomorphic to
$X[\ensuremath{\mathbf n}']$, where $\ensuremath{\mathbf n}' = (\sum_{i=1}^{n_x} m_{xi} : x \in X)$. Here the
$i$th copy of $x$ in $X[\ensuremath{\mathbf n}]$ is copied $m_{xi}$ times in $(X[\ensuremath{\mathbf n}])[{\bf
m}]$.
\item In particular, the blowup-polynomial of an iterated blowup is
simply the original blowup-polynomial in a larger number of variables, up
to a constant:
\begin{equation}\label{Eblowup}
p_{X[\ensuremath{\mathbf n}]}({\bf m}) \equiv p_X(\ensuremath{\mathbf n}') \prod_{x \in X} a_x^{n_x - 1},
\end{equation}
where the coordinates of $\ensuremath{\mathbf n}' = (\sum_{i=1}^{n_x} m_{xi} : x \in X)$ are
sums of variables.
\item Now write $X = \{ x_1, \dots, x_k \}$ as above. Then the matrices
$\ensuremath{\mathcal{D}}_{X[\ensuremath{\mathbf n}]}, \mathcal{C}_{X[\ensuremath{\mathbf n}]}$ are both block $k \times k$ matrices,
with $(i,j)$ block respectively equal to
\[
d_{ij} {\bf 1}_{n_{x_i} \times n_{x_j}} \quad \text{and} \quad
c_{ij} {\bf 1}_{n_{x_i} \times n_{x_j}},
\]
where $\ensuremath{\mathcal{D}}_X = (d_{ij})_{i,j=1}^k, \mathcal{C}_X = (c_{ij})_{i,j=1}^k$.
\item The following are equivalent:
\begin{enumerate}
\item The matrix $\ensuremath{\mathcal{D}}_X$ is positive semidefinite.
\item The matrix $\ensuremath{\mathcal{D}}_{X[\ensuremath{\mathbf n}]}$ is positive semidefinite for some
(equivalently, every) tuple $\ensuremath{\mathbf n}$ of positive integers.
\item The matrix $\mathcal{C}_X$ is positive semidefinite.
\item The matrix $\mathcal{C}_{X[\ensuremath{\mathbf n}]}$ is positive semidefinite for some
(equivalently, every) tuple $\ensuremath{\mathbf n}$ of positive integers.
\end{enumerate}
\end{enumerate}
\end{lemma}
\begin{proof}\hfill
\begin{enumerate}
\item For ease of exposition we write $Y := X[\ensuremath{\mathbf n}], Z := Y[{\bf m}]$.
Also write $y_{xi}$ for the $i$th copy of $x$ in $Y$, and $z_{xij}$ for
the $j$th copy of $y_{xi}$ in $Z$, with $1 \leq i \leq n_x$ and $1 \leq j
\leq m_{xi}$. We now compute $d_Z(z_{xij},z_{x'i'j'})$, considering three
cases. First if $x \neq x'$ then this equals $d_Y(y_{xi},y_{x'i'}) =
d_X(x,x')$. Next if $x = x'$ but $i \neq i'$ then it equals $d_Y(y_{xi},
y_{xi'}) = 2 d(x, X \setminus \{ x \})$. Finally, suppose $x = x'$ and $i
= i'$ but $j \neq j'$. Then
\[
d_Z(z_{xij},z_{x'i'j'}) = 2 d_Y(y_{xi}, Y \setminus \{ y_{xi} \}),
\]
and it is not hard to show, by considering all distances in $Y$, that
this equals $2 d_X(x, X \setminus \{ x \})$. These three cases reveal
that $d_Z(z_{xij}, z_{x'i'j'})$ equals the distance in $X[\ensuremath{\mathbf n}']$ between
the copies of $x,x' \in X$, and the proof is complete.
\item We show~\eqref{Eblowup} using the previous part and the next part,
and via Zariski density arguments as in the proof of
Theorem~\ref{Tmonoid}. Define $n_j := n_{x_j}$ in this proof for
convenience. Thus, we work more generally in the setting where $X = \{
x_1, \dots, x_k \}$, but the arrays
\[
\ensuremath{\mathbf a}_X = (a_{x_1}, \dots, a_{x_k})^T, \qquad \ensuremath{\mathcal{D}}_X =
(d_{rs})_{r,s=1}^k, \qquad {\bf m} = (m_{j1}, \dots, m_{jn_j})_{j=1}^k
\]
consist of \textit{indeterminates}. Let $K := \sum_{j=1}^k n_j$, and
define $\mathcal{W}_{K \times k}$ to be the block matrix
\[
\mathcal{W} := \begin{pmatrix}
{\bf 1}_{n_1 \times 1} & 0_{n_1 \times 1} & \cdots & 0_{n_1 \times 1} \\
0_{n_2 \times 1} & {\bf 1}_{n_2 \times 1} & \cdots & 0_{n_2 \times 1} \\
\vdots & \vdots & \ddots & \vdots \\
0_{n_k \times 1} & 0_{n_k \times 1} & \cdots & {\bf 1}_{n_k \times 1}
\end{pmatrix}.
\]
Now $\Delta_{\ensuremath{\mathbf a}_{X[\ensuremath{\mathbf n}]}} = \diag( a_{x_1} \Id_{n_1}, \dots, a_{x_k}
\Id_{n_k})$, and a straightforward computation (using the next part)
shows that $\ensuremath{\mathcal{D}}_{X[\ensuremath{\mathbf n}]} = \mathcal{W} \ensuremath{\mathcal{D}}_X \mathcal{W}^T$.
Notice that if one works over the field
\[
\mathbb{Q}(\{ a_{x_j}, m_{ji} : 1 \leq j \leq k, \ 1 \leq i \leq n_j \},
\{ d_{rs} : 1 \leq r,s \leq k \}),
\]
then the following polynomial is nonzero:
\begin{equation}\label{Ezariski}
(\det \ensuremath{\mathcal{D}}_X) \prod_{j=1}^k a_{x_j} \prod_{j=1}^k \prod_{i=1}^{n_j} m_{ji}.
\end{equation}
Thus, we now compute:
\[
p_{X[\ensuremath{\mathbf n}]}({\bf m}) = \det (\Delta_{a_{X[\ensuremath{\mathbf n}]}} + \Delta_{\bf m}
\ensuremath{\mathcal{D}}_{X[\ensuremath{\mathbf n}]}) = \det (\Delta_{a_{X[\ensuremath{\mathbf n}]}} + \Delta_{\bf m} \mathcal{W}
\ensuremath{\mathcal{D}}_X \mathcal{W}^T).
\]
Using~\eqref{Ezariski} and Schur complements, this equals
\[
\det (\Delta_{\bf m}) \cdot \det \begin{pmatrix} \Delta_{\bf m}^{-1}
\Delta_{a_{X[\ensuremath{\mathbf n}]}} & -\mathcal{W} \\ \mathcal{W}^T & \ensuremath{\mathcal{D}}_X^{-1}
\end{pmatrix} \det (\ensuremath{\mathcal{D}}_X).
\]
Using an alternate Schur complement, we expand this latter expression as:
\[
\det (\Delta_{\bf m}) \cdot \det (\Delta_{\bf m}^{-1}) \det
(\Delta_{a_{X[\ensuremath{\mathbf n}]}}) \det( \ensuremath{\mathcal{D}}_X^{-1} + \mathcal{W}^T \Delta_{\bf m}
\Delta_{a_{X[\ensuremath{\mathbf n}]}}^{-1} \mathcal{W}) \cdot \det(\ensuremath{\mathcal{D}}_X).
\]
Now defining $n'_j := \sum_{i=1}^{n_j} m_{ji}$ as in the assertion, we
have:
\[
\mathcal{W}^T \Delta_{\bf m} \Delta_{a_{X[\ensuremath{\mathbf n}]}}^{-1} \mathcal{W} =
\diag(a_{x_1}^{-1} n'_1, \dots, a_{x_k}^{-1} n'_k) = \Delta_{a_X}^{-1}
\Delta_{\ensuremath{\mathbf n}'}.
\]
Thus the above computation can be continued:
\begin{align*}
p_{X[\ensuremath{\mathbf n}]}({\bf m}) = &\ \det(\Delta_{a_{X[\ensuremath{\mathbf n}]}}) \det(\ensuremath{\mathcal{D}}_X^{-1} +
\Delta_{a_X}^{-1} \Delta_{\ensuremath{\mathbf n}'}) \det(\ensuremath{\mathcal{D}}_X)\\
= &\ \prod_{j=1}^k a_{x_j}^{n_j} \cdot \det(\Id_k + \Delta_{a_X}^{-1}
\Delta_{\ensuremath{\mathbf n}'} \ensuremath{\mathcal{D}}_X)\\
= &\ \prod_{j=1}^k a_{x_j}^{n_j - 1} \cdot \det(\Delta_{a_X} +
\Delta_{\ensuremath{\mathbf n}'} \ensuremath{\mathcal{D}}_X) = p_X(\ensuremath{\mathbf n}') \prod_{j=1}^k a_{x_j}^{n_j - 1}.
\end{align*}
This proves the result over the function field (over $\mathbb{Q}$) in
which the entries $a_{x_j}, m_{ji}, d_{rs}$ are variables. Now we repeat
the Zariski density arguments as in the proof of Theorem~\ref{Tmonoid},
working this time with the nonzero polynomial given in~\eqref{Ezariski}.
This shows the result over an arbitrary commutative ring -- in
particular, over $\ensuremath{\mathbb R}$.
\item The key observation is that the diagonal entries of $\ensuremath{\mathcal{D}}_{X[\ensuremath{\mathbf n}]}$
corresponding to the copies of $x \in X$, all equal $2d_X(x, X \setminus
\{ x \})$, which is precisely the corresponding diagonal entry in $\ensuremath{\mathcal{D}}_X$.
From this, the assertion for $\ensuremath{\mathcal{D}}_{X[\ensuremath{\mathbf n}]}$ is immediate, and that for
$\mathcal{C}_{X[\ensuremath{\mathbf n}]}$ is also straightforward.
\item We first prove the equivalence for the $\ensuremath{\mathcal{D}}$-matrices. The preceding
part implies that $\ensuremath{\mathcal{D}}_X$ is a principal submatrix of $\ensuremath{\mathcal{D}}_{X[\ensuremath{\mathbf n}]}$, hence
is positive semidefinite if $\ensuremath{\mathcal{D}}_{X[\ensuremath{\mathbf n}]}$ is.
Conversely, given $v \in \ensuremath{\mathbb R}^{n_{x_1} + \cdots + n_{x_k}}$, write $v^T =
(v_1^T, \dots, v_k^T)$, with all $v_i \in \ensuremath{\mathbb R}^{n_{x_i}}$. Let
$w_i := v_i^T {\bf 1}_{n_{x_i}}$, and
denote by
$w := (w_1, \dots, w_k)^T$
the ``compression'' of $v$. Now compute:
\[
v^T \ensuremath{\mathcal{D}}_{X[\ensuremath{\mathbf n}]} v = \sum_{i,j=1}^k v_i^T d_{ij} {\bf 1}_{n_{x_i} \times
n_{x_j}} v_j = \sum_{i,j=1}^k w_i d_{ij} w_j = w^T \ensuremath{\mathcal{D}}_X w,
\]
and this is non-negative (for all $v$) since $\ensuremath{\mathcal{D}}_X$ is positive
semidefinite. Hence so is $\ensuremath{\mathcal{D}}_{X[\ensuremath{\mathbf n}]}$.
This proves the equivalence for the $\ensuremath{\mathcal{D}}$-matrices. Now for any metric
space $Y$ (e.g.\ $Y = X$ or $X[\ensuremath{\mathbf n}]$), the matrix $\mathcal{C}_Y =
(-\Delta_{\ensuremath{\mathbf a}_Y})^{-1/2} \ensuremath{\mathcal{D}}_Y (-\Delta_{\ensuremath{\mathbf a}_Y})^{-1/2}$ is positive
semidefinite if and only if $\ensuremath{\mathcal{D}}_Y$ is. This concludes the proof. \qedhere
\end{enumerate}
\end{proof}
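Formula~\eqref{Eblowup} can also be spot-checked exactly. The sketch below is our own and not from the paper (the helper names \texttt{det} and \texttt{blowup\_poly} are assumptions of this illustration): it takes the two-point space $X$ with $d(x_1, x_2) = 1$, blows up $x_1$ into two copies, so that $\ensuremath{\mathbf n} = (2,1)$ and $\prod_x a_x^{n_x - 1} = a_{x_1} = -2$, and compares both sides of~\eqref{Eblowup} over a grid of integer values of ${\bf m}$.

```python
from fractions import Fraction
from itertools import product

def det(M):
    """Exact determinant via Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    n, sign, res = len(M), 1, Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            sign = -sign
        res *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n):
                M[r][j] -= f * M[c][j]
    return sign * res

def blowup_poly(dist, n):
    """p_X(n) = det(Delta_a + Delta_n D_X); dist is the plain distance
    matrix (zero diagonal), and D_X adds the diagonal 2 d(x, X \\ {x})."""
    k = len(dist)
    near = [min(dist[i][j] for j in range(k) if j != i) for i in range(k)]
    return det([[(-2 * near[i] if i == j else 0)
                 + n[i] * (dist[i][j] if i != j else 2 * near[i])
                 for j in range(k)] for i in range(k)])

dX = [[0, 1], [1, 0]]                    # X = {x1, x2} with d(x1, x2) = 1
dXn = [[0, 2, 1], [2, 0, 1], [1, 1, 0]]  # X[(2,1)]: two copies of x1 at distance 2
for m in product(range(1, 4), repeat=3):
    lhs = blowup_poly(dXn, m)
    rhs = blowup_poly(dX, (m[0] + m[1], m[2])) * (-2)  # a_{x1}^{n_{x1}-1} = -2
    assert lhs == rhs
```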
\begin{remark}
The proof of Lemma~\ref{Lwellbehaved}(2) using Zariski density indicates
a similar, alternate approach to proving the formula for $\det M({\bf A},
{\bf D})$ in Theorem~\ref{Tmonoid}. The difference, now, is that the
rank-one expansion of the matrix ${\bf D}$ is no longer needed, and can
be replaced by the use of the two block-diagonal matrices
\[
\mathcal{W}({\bf p}_1, \dots, {\bf p}_k) := \begin{pmatrix} ({\bf
p}_1)_{n_1 \times 1} & 0_{n_1 \times 1} & \cdots & 0_{n_1 \times 1} \\
0_{n_2 \times 1} & ({\bf p}_2)_{n_2 \times 1} & \cdots & 0_{n_2 \times 1}
\\
\vdots & \vdots & \ddots & \vdots \\
0_{n_k \times 1} & 0_{n_k \times 1} & \cdots & ({\bf p}_k)_{n_k \times 1}
\end{pmatrix}
\]
and a similar matrix $\mathcal{W}({\bf q}_1, \dots, {\bf q}_k)$, so that
$M({\bf A}, {\bf D}) = \diag(\{ A_i \cdot \Id_{n_i} \}) + \mathcal{W}(\{
{\bf p}_i \}) \cdot {\bf D} \cdot \mathcal{W}(\{ {\bf q}_i \})^T$.
\end{remark}
Lemma~\ref{Lwellbehaved}(2) immediately implies the following consequence
(which can also be shown directly):
\begin{corollary}\label{Cblowup}
Fix a finite metric space $(X,d)$. For all integer tuples $\ensuremath{\mathbf n} \in
\mathbb{Z}_{>0}^X$, the blowup-polynomial of $X[\ensuremath{\mathbf n}]$ has total degree at
most $|X|$.
\end{corollary}
\noindent In other words, no monomials of degree $|X|+1$ or higher occur
in $p_{X[\ensuremath{\mathbf n}]}$, for any tuple $\ensuremath{\mathbf n}$.
We now prove the real-stability of $p_X(\cdot)$:
\begin{proof}[Proof of Theorem~\ref{Tstable}]
We continue to use the notation in the proof of
Theorem~\ref{Tmetricmatrix}, with one addition: for expositional clarity,
in this proof we treat $p_X(\cdot)$ as a polynomial in the complex
variables $z_j := n_{x_j}$ for $j=1,\dots,k$. Thus,
\[
p_X(z_1, \dots, z_k) = \det N(\ensuremath{\mathbf a}_X,\ensuremath{\mathcal{D}}_X) = \det (\Delta_{\ensuremath{\mathbf a}_X} +
\Delta_{\bf z} \ensuremath{\mathcal{D}}_X),
\]
where $a_j = -2 d(x_j, X \setminus \{ x_j \}) < 0\ \forall j$ and
$\Delta_{\bf z} := \diag(z_1, \dots, z_k)$. We compute:
\begin{align*}
p_X({\bf z}) = &\ \det (\Delta_{\bf z}) \det (\ensuremath{\mathcal{D}}_X - (-\Delta_{\ensuremath{\mathbf a}_X})
\Delta_{\bf z}^{-1})\\
= &\ \prod_{j=1}^k z_j \cdot \det (-\Delta_{\ensuremath{\mathbf a}_X})^{1/2} \det \left(
(-\Delta_{\ensuremath{\mathbf a}_X})^{-1/2} \ensuremath{\mathcal{D}}_X (-\Delta_{\ensuremath{\mathbf a}_X})^{-1/2} - \Delta_{\bf
z}^{-1} \right) \det (-\Delta_{\ensuremath{\mathbf a}_X})^{1/2}\\
= &\ \det (-\Delta_{\ensuremath{\mathbf a}_X}) \prod_{j=1}^k z_j \cdot \det \left(
\mathcal{C}_X + \sum_{j=1}^k (-z_j^{-1}) E_{jj} \right),
\end{align*}
where $E_{jj}$ is the elementary $k \times k$ matrix with $(j,j)$ entry
$1$ and all other entries zero.
We now appeal to two facts. The first is a well-known result of
Borcea--Br\"and\'en: \cite[Proposition 2.4]{BB1} (see also~\cite[Lemma
4.1]{Branden}), which says that if $A_1, \dots, A_k, B$ are
equi-dimensional real symmetric matrices, with all $A_j$ positive
semidefinite, then the polynomial
\begin{equation}\label{EBH}
f(z_1, \dots, z_k) := \det \left( B + \sum_{j=1}^k z_j A_j \right)
\end{equation}
is either real-stable or identically zero. The second is the folklore
result that ``inversion preserves stability'' (since the upper half-plane
is preserved under the transformation $z \mapsto -1/z$ of
$\mathbb{C}^\times$). That is, if a polynomial $g(z_1, \dots, z_k)$ has
$z_j$-degree $d_j \geq 1$ and is real-stable, then so is the polynomial
\[
z_1^{d_1} \cdot g(-z_1^{-1}, z_2, \dots, z_k)
\]
(the same holds for inversion in any single variable $z_j$). Apply this latter fact to each
variable of the multi-affine polynomial $f(\cdot)$ in~\eqref{EBH} -- in
which $d_j=1$, $B = \mathcal{C}_X$, and $A_j = E_{jj}\ \forall j$. It
follows that the polynomial
\[
z_1 \cdots z_k \cdot f(-z_1^{-1}, \dots, -z_k^{-1}) = \prod_{j=1}^k z_j
\cdot \det \left( \mathcal{C}_X + \sum_{j=1}^k (-z_j^{-1}) E_{jj} \right)
= \det (-\Delta_{\ensuremath{\mathbf a}_X})^{-1} p_X({\bf z})
\]
is real-stable, and the proof is complete.
\end{proof}
\begin{remark}
For completeness, we briefly touch upon other notions of stability that
are standard in mathematics (analysis, control theory,
differential/difference equations): Hurwitz stability and Schur
stability. Recall that a real polynomial in one variable is said to be
Hurwitz stable (respectively, Schur stable) if all of its roots lie in
the open left half-plane (respectively, in the open unit disk) in
$\mathbb{C}$. Now the univariate specializations $u_X(n) =
p_X(n,n,\dots,n)$ are not all either Hurwitz or Schur stable. As a
concrete example: in the simplest case of the discrete metric on a space
$X$, Equation~\eqref{Ecomplete} implies that $u_X(n) = (n-2)^{k-1} (n-2 +
kn)$, and this vanishes at $n=2, \frac{2}{k+1}$.
\end{remark}
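The roots quoted in this remark can be verified exactly. The snippet below is ours, not part of the paper (the helper name \texttt{u\_discrete} is an assumption of this illustration): for the discrete metric with $c = 1$ and $k = 4$, so that $u_X(n) = (n-2)^3 (5n - 2)$ by~\eqref{Ecomplete}, it checks that $u_X$ vanishes at $n = 2$ and $n = 2/(k+1) = 2/5$.

```python
from fractions import Fraction

def u_discrete(k, n):
    """u_X(n) = det(-2 Id_k + n D_X) for the discrete metric with c = 1,
    where D_X = 1_{k x k} + Id_k; exact Gaussian elimination over Q."""
    M = [[Fraction(n * (1 + (i == j)) - 2 * (i == j)) for j in range(k)]
         for i in range(k)]
    det, sign = Fraction(1), 1
    for c in range(k):
        piv = next((r for r in range(c, k) if M[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            sign = -sign
        det *= M[c][c]
        for r in range(c + 1, k):
            f = M[r][c] / M[c][c]
            for j in range(c, k):
                M[r][j] -= f * M[c][j]
    return sign * det

k = 4
# u_X vanishes at n = 2 and n = 2/(k+1), matching (n-2)^(k-1) (n - 2 + k n):
assert u_discrete(k, 2) == 0
assert u_discrete(k, Fraction(2, k + 1)) == 0
```

Since both roots lie in the closed right half-plane and $n = 2$ lies outside the unit disk, neither the Hurwitz nor the Schur criterion is met, as claimed.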
\section{Combinatorics: Graphs and their partially symmetric
blowup-polynomials}\label{Sgraphs}
We now take a closer look at a distinguished sub-class of finite metric
spaces: unweighted graphs.
In this section, we will show Theorems \ref{Tisom}--\ref{Ttree-blowup}.
To avoid having to mention the same quantifiers repeatedly, we introduce
the following term.
\begin{definition}
A \emph{graph metric space} is a finite, simple, connected, unweighted
graph $G$, in which the distance between two vertices is the number of
edges in a shortest path connecting them.
\end{definition}
Every graph metric space $G$ is thus a finite metric space, and so the
results in the previous sections apply to it. In particular, to every
graph metric space $G = (V,E)$ are naturally associated a (to our
knowledge) novel graph invariant
\begin{equation}
p_G(\ensuremath{\mathbf n}) = p_G(\{ n_v : v \in V \}) := \det (2\Delta_\ensuremath{\mathbf n} - 2 \Id_V +
\Delta_\ensuremath{\mathbf n} D_G) = \det (-2 \Id_V + \Delta_\ensuremath{\mathbf n} \ensuremath{\mathcal{D}}_G)
\end{equation}
(which we showed is real-stable), as well as its univariate
specialization (which is thus real-rooted)
\begin{equation}
u_G(n) = p_G(n,n,\dots,n) = \det ((2n - 2) \Id_V + n D_G) = \det (-2
\Id_V + n \ensuremath{\mathcal{D}}_G)
\end{equation}
and its ``maximum root'' $\alpha_{\max}(u_G) \in \ensuremath{\mathbb R}$. Here,
$D_G$ is the distance matrix of $G$ (with zeros on the diagonal) and
$\ensuremath{\mathcal{D}}_G = D_G + 2 \Id_V$ is the modified distance matrix.
\subsection{Connections to the distance spectrum; $p_G$ recovers $G$}
We begin with an observation (for completeness), which ties into one of
our original motivations by connecting the blowup-polynomial $u_G$ to the
distance spectrum of $G$, i.e.\ to the eigenvalues of the distance matrix
$D_G$. The study of these eigenvalues began with the work of Graham and
Lov\'asz~\cite{GL}, and by now, is a well-developed program in the
literature; see e.g.\ \cite{AH}. Our observation here is the following:
\begin{proposition}\label{Pdistancespectrum}
Suppose $G = (V,E)$ is a graph metric space. A real number $n$ is a root
of the univariate blowup-polynomial $u_G$, if and only if $2n^{-1} - 2$
is an eigenvalue of the distance matrix $D_G$, with the same
multiplicity.
\end{proposition}
Alternately, $\lambda \neq -2$ is an eigenvalue of $D_G$ if and only if
$\frac{2}{2 + \lambda}$ is a root of $u_G$.
\begin{proof}
First note from the definitions that $u_G(0) = \det (-2 \Id_V) \neq 0$.
We now compute:
\[
u_G(n) = \det (-2 \Id_V + n (2 \Id_V + D_G)) = (2n)^{|V|} \det( \Id_V +
\textstyle{\frac{1}{2}} D_G - n^{-1} \Id_V).
\]
Thus, $n$ is a (nonzero) root of $u_G$ if and only if $n^{-1}$ is an
eigenvalue of $\Id_V + \frac{1}{2} D_G$. The result follows from here.
\end{proof}
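For a quick concrete instance of the proposition (our own check, not in the paper; the helper name \texttt{u\_K3} is an assumption of this illustration): the distance matrix of $K_3$ is $D_{K_3} = {\bf 1}_{3 \times 3} - \Id_3$, with eigenvalues $2$ and $-1$ (the latter with multiplicity two), so $u_{K_3}$ should vanish exactly at $n = 2/(2+2) = 1/2$ and $n = 2/(2-1) = 2$; indeed $u_{K_3}(n) = (n-2)^2 (4n-2)$ by~\eqref{Ecomplete}.

```python
from fractions import Fraction

def u_K3(n):
    """u_G(n) = det(-2 Id_3 + n (D_G + 2 Id_3)) for G = K_3,
    via the 3x3 cofactor rule; entries are n(1 + [i=j]) - 2[i=j]."""
    M = [[Fraction(n * (1 + (i == j)) - 2 * (i == j)) for j in range(3)]
         for i in range(3)]
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

# roots at n = 2/(2 + lambda) for each eigenvalue lambda of D_{K_3}:
assert u_K3(Fraction(1, 2)) == 0   # lambda = 2
assert u_K3(2) == 0                # lambda = -1 (a double root)
```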
In the distance spectrum literature, much work has gone into studying the
largest eigenvalue of $D_G$, known as the ``distance spectral radius'', as
well as the smallest eigenvalue of $D_G$. An immediate
application of Proposition~\ref{Pdistancespectrum} provides an
interpretation of another such eigenvalue:
\begin{corollary}\label{Cdistancespectrum}
The smallest eigenvalue of $D_G$ which is strictly greater than $-2$, is
precisely $\frac{2}{\alpha_{\max}(u_G)} - 2$.
\end{corollary}
\noindent We refer the reader to further discussions about
$\alpha_{\max}(u_G)$ in and around Proposition~\ref{Pmaxroot}.
Following these observations that reinforce our motivating connections
between distance spectra and the blowup-polynomial, we now move on to the
proof of Theorem~\ref{Tisom}. Recall, this result shows that (the
homogeneous quadratic part of) $p_G$ recovers/detects the graph and its
isometries -- but $u_G$ does not do so.
\begin{proof}[Proof of Theorem~\ref{Tisom}]
We prove the various assertions in serial order.
One implication for the first assertion was described just above the
theorem-statement.
Conversely, suppose $p_G(\ensuremath{\mathbf n}) \equiv p_{\Psi(G)}(\ensuremath{\mathbf n})$. Fix vertices $v
\neq w \in V$, and equate the coefficient of $n_v n_w$ on both sides
using Proposition~\ref{Pcoeff}:
\[
(-2)^{|V|-2} \det \begin{pmatrix} 2 & d(v,w) \\ d(v,w) & 2 \end{pmatrix}
= (-2)^{|V|-2} \det \begin{pmatrix} 2 & d(\Psi(v),\Psi(w)) \\
d(\Psi(v),\Psi(w)) & 2 \end{pmatrix}
\]
since $d_G(v, V \setminus \{ v \}) = 1\ \forall v \in V$. Thus
$d(\Psi(v), \Psi(w)) = d(v,w)$ for all $v, w \in V$, so $\Psi$ is an
isometry.
The second assertion is shown as follows. By Proposition~\ref{Pcoeff}, the
vertex set can be obtained from the nonzero monomials $n_v n_w$ (since
every edge yields a nonzero monomial). In particular, $|V|$ is recovered.
Again by Proposition~\ref{Pcoeff}, there is a bijection between the set
of edges $v \sim w$ in $G$ and the monomials $n_v n_w$ in $p_G(\ensuremath{\mathbf n})$ with
coefficient $3(-2)^{|V|-2}$. Thus, all quadratic monomials in $p_G(\ensuremath{\mathbf n})$
with this coefficient reveal the edge set of $G$ as well.
Finally, to show that $u_G$ does not detect the graph $G$, consider the
two graphs $H,K$ in Figure~\ref{Fig1}.
\begin{figure}[ht]
\definecolor{xdxdff}{rgb}{0.49,0.49,1}
\begin{tikzpicture}
\draw (0,1.5)-- (-1.5,0);
\draw (0,0)-- (0,1.5);
\draw (-1.5,0)-- (0,0);
\draw (0,0)-- (1.5,0);
\draw (1.5,0)-- (3,0);
\draw (3,0)-- (4.5,0);
\fill [color=xdxdff] (-1.5,0) circle (1.5pt);
\draw[color=black] (-1.5,-0.25) node {$2$};
\fill [color=xdxdff] (0,0) circle (1.5pt);
\draw[color=black] (0,-0.25) node {$3$};
\fill [color=xdxdff] (1.5,0) circle (1.5pt);
\draw[color=black] (1.5,-0.25) node {$4$};
\fill [color=xdxdff] (3,0) circle (1.5pt);
\draw[color=black] (3,-0.25) node {$5$};
\fill [color=xdxdff] (4.5,0) circle (1.5pt);
\draw[color=black] (4.5,-0.25) node {$6$};
\fill [color=xdxdff] (0,1.5) circle (1.5pt);
\draw[color=black] (0,1.75) node {$1$};
\draw[color=black] (3,1) node {$H$};
\draw (9,1.5)-- (7.5,0);
\draw (9,1.5)-- (9,0);
\draw (9,1.5)-- (10.5,0);
\draw (7.5,0)-- (9,0);
\draw (9,0)-- (10.5,0);
\draw (10.5,0)-- (12,0);
\draw (12,0)-- (13.5,0);
\fill [color=xdxdff] (7.5,0) circle (1.5pt);
\draw[color=black] (7.5,-0.25) node {$2$};
\fill [color=xdxdff] (9,0) circle (1.5pt);
\draw[color=black] (9,-0.25) node {$3$};
\fill [color=xdxdff] (10.5,0) circle (1.5pt);
\draw[color=black] (10.5,-0.25) node {$4$};
\fill [color=xdxdff] (12,0) circle (1.5pt);
\draw[color=black] (12,-0.25) node {$5$};
\fill [color=xdxdff] (13.5,0) circle (1.5pt);
\draw[color=black] (13.5,-0.25) node {$6$};
\fill [color=xdxdff] (9,1.5) circle (1.5pt);
\draw[color=black] (9,1.75) node {$1$};
\draw[color=black] (12,1) node {$K$};
\end{tikzpicture}
\caption{Two non-isometric graphs on six vertices with co-spectral
blowups}\label{Fig1}
\end{figure}
Both graphs have vertex sets $\{ 1, \dots, 6 \}$, and are not isomorphic.
Now define (see Remark~\ref{Rdrury}):
\[
H' := H[(2,1,1,2,1,1)], \qquad K' := K[(2,1,1,1,1,2)].
\]
Then $H', K'$ are not isometric, but a direct computation reveals:
\[
u_{H'}(n) = u_{K'}(n) = -320 n^6 + 3712 n^5 - 10816 n^4 + 10880 n^3 -
1664 n^2 - 2048 n + 256. \qedhere
\]
\end{proof}
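The final computation is easy to reproduce exactly; the script below is our own and not from the paper or~\cite{DL} (the helper names \texttt{distances}, \texttt{det}, and \texttt{u} are assumptions of this illustration). It builds $H'$ and $K'$ by duplicating each blown-up vertex as a non-adjacent twin, evaluates $u(n) = \det(-2 \Id_8 + n \ensuremath{\mathcal{D}})$ at nine integer points (enough to pin down these degree-at-most-$8$ determinants), and confirms $u_{H'} \equiv u_{K'}$; the displayed coefficients can then be recovered by interpolating the nine shared values.

```python
from collections import deque
from fractions import Fraction

def distances(k, edges):
    """All-pairs graph distances by breadth-first search."""
    adj = {v: [] for v in range(k)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    dist = []
    for s in range(k):
        d = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in d:
                    d[w] = d[u] + 1
                    q.append(w)
        dist.append([d[v] for v in range(k)])
    return dist

def det(M):
    """Exact determinant via Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    n, sign, res = len(M), 1, Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            sign = -sign
        res *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n):
                M[r][j] -= f * M[c][j]
    return sign * res

def u(dist, n):
    """u_G(n) = det(-2 Id + n (D_G + 2 Id))."""
    k = len(dist)
    return det([[n * (dist[i][j] + 2 * (i == j)) - 2 * (i == j)
                 for j in range(k)] for i in range(k)])

# H, K on 0-indexed vertices 0..5 (Figure 1); 6, 7 are the twin copies.
Hp = distances(8, [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5),  # H
                   (6, 1), (6, 2),    # twin of figure-vertex 1 (index 0)
                   (7, 2), (7, 4)])   # twin of figure-vertex 4 (index 3)
Kp = distances(8, [(0, 1), (0, 2), (0, 3), (1, 2), (2, 3), (3, 4), (4, 5),  # K
                   (6, 1), (6, 2), (6, 3),  # twin of figure-vertex 1 (index 0)
                   (7, 4)])                 # twin of figure-vertex 6 (index 5)
vals_H = [u(Hp, n) for n in range(9)]
vals_K = [u(Kp, n) for n in range(9)]
assert vals_H == vals_K  # nine agreements determine both degree-<=8 polynomials

def fdiff(seq):
    return [b - a for a, b in zip(seq, seq[1:])]

d7 = vals_H
for _ in range(7):
    d7 = fdiff(d7)
assert d7 == [0, 0]  # so deg u <= 6 = |X|, consistent with Corollary~Cblowup
```

Twin duplication matches the blowup metric here since every vertex has a neighbor: copies of a vertex lie at distance $2 = 2 d_G(v, V \setminus \{ v \})$, and all other distances are unchanged.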
\begin{remark}\label{Rdrury}
The graphs $H',K'$ in the preceding proof were not accidental or
providential, but stem from the recent paper~\cite{DL}, which is part of
the literature on exploring which graphs are distance co-spectral (see
the Introduction). In the discussion preceding~\cite[Figure 1]{DL}, the
authors verified that the graphs $H' \not\cong K'$ used in the preceding
proof are indeed distance co-spectral. This result, combined with
Proposition~\ref{Pdistancespectrum}, leads to the above use of $H', K'$
in proving that $u_G$ cannot detect $G$ up to isometry.
\end{remark}
\begin{remark}
As the proof of Theorem~\ref{Tisom} reveals, for any graph metric space
$G = (V,E)$, the Hessian of the blowup-polynomial carries the same
information as the matrix $\ensuremath{\mathcal{D}}_G \in \mathbb{Z}_{>0}^{V \times V}$:
\begin{equation}
\mathcal{H}(p_G) := ((\partial_{n_v} \partial_{n_{v'}} p_G)({\bf
0}))_{v,v' \in V} = (-2)^{|V|} {\bf 1}_{|V| \times |V|} - (-2)^{|V|-2}
\ensuremath{\mathcal{D}}_G^{\circ 2},
\end{equation}
where $\ensuremath{\mathcal{D}}_G^{\circ 2}$ is the entrywise square of the modified distance
matrix $\ensuremath{\mathcal{D}}_G$.
\end{remark}
\subsection{Complete multipartite graphs via
real-stability}\label{Slorentz}
The next result we show is Theorem~\ref{Tlorentz} (described in the title
of this subsection). Before doing so, we define the three classes of
polynomials alluded to in Corollary~\ref{Clorentz}, as promised there
(and for the self-sufficiency of this paper).
\begin{enumerate}
\item Br\"and\'en--Huh \cite{BH} defined a polynomial $p \in \ensuremath{\mathbb R}[x_1,
\dots, x_k]$ to be {\em Lorentzian} if $p(\cdot)$ is homogeneous of some
degree $d$, has non-negative coefficients, and given any indices $1 \leq
j_1, \dots, j_{d-2} \leq k$, if
\[
g(x_1, \dots, x_k) := \left( \partial_{x_{j_1}} \cdots
\partial_{x_{j_{d-2}} } p \right)(x_1, \dots, x_k),
\]
then the Hessian matrix $\mathcal{H}_g := (\partial_{x_i} \partial_{x_j}
g)_{i,j=1}^k \in \ensuremath{\mathbb R}^{k \times k}$ is Lorentzian.
(This last term means that $\mathcal{H}_g$ is nonsingular and has exactly
one positive eigenvalue.)
\item Suppose $p \in \ensuremath{\mathbb R}[x_1, \dots, x_k]$ has non-negative coefficients.
Gurvits~\cite{Gurvits} defined $p$ to be \textit{strongly log-concave} if
for all $\alpha \in \mathbb{Z}_{\geq 0}^k$, either the derivative
$\displaystyle
\partial^\alpha (p) := \prod_{i=1}^k \partial_{x_i}^{\alpha_i} \cdot p$
is identically zero, or $\log(\partial^\alpha (p))$ is defined and
concave on $(0,\infty)^k$.
\item Suppose $p \in \ensuremath{\mathbb R}[x_1, \dots, x_k]$ has non-negative coefficients.
Anari, Oveis Gharan, and Vinzant~\cite{AGV} defined $p$ to be
\textit{completely log-concave} if for all integers $m \geq 1$ and
matrices $A = (a_{ij}) \in [0,\infty)^{m \times k}$, either the
derivative $\displaystyle
\partial_A (p) := \prod_{i=1}^m \left( \sum_{j=1}^k a_{ij} \partial_{x_j}
\right) \cdot p$
is identically zero, or $\log(\partial_A (p))$ is defined and concave on
$(0,\infty)^k$.
\end{enumerate}
Having written these definitions, we proceed to the main proof.
\begin{proof}[Proof of Theorem~\ref{Tlorentz}]
We prove the cyclic chain of implications:
\[
(4) \quad \implies \quad
(1) \quad \implies \quad
\{ (1), (2) \} \quad \implies \quad
(3) \quad \implies \quad
(2) \quad \implies \quad
(4) \quad \Longleftrightarrow \quad
(5).
\]
We begin with a short proof of $(1) \implies (2)$ via Lorentzian
polynomials from Corollary~\ref{Clorentz}. It was shown in~\cite[pp.\
828--829]{BH} that if $(1)$ holds then $\widetilde{p}_G$ is Lorentzian
(see also \cite[Theorem~6.1]{COSW}), and in turn, this implies $(2)$ by
definition (or by \textit{loc.\ cit.}).
We next show that $(3) \implies (2)$. Observe that
\begin{equation}\label{Erayleigh}
\frac{p_G(-z_1, \dots, -z_k) \cdot (-1)^k}{p_G(-1, \dots, -1) \cdot
(-1)^k} \equiv
\frac{\widetilde{p}_G(1, z_1, \dots, z_k)}{\widetilde{p}_G(1,1,\dots,1)}.
\end{equation}
Now if~$(3)$ holds, then $\widetilde{p}_G(1,1,\dots,1) = (-1)^k p_G(-1,
\dots, -1) > 0$, so the polynomial
\[
(-1)^k p_G(-z_1, \dots, -z_k) = \widetilde{p}_G(1,z_1, \dots, z_k)
\]
has all coefficients non-negative, using~$(3)$ and~\eqref{Erayleigh}.
Since $p_G(\cdot)$ is multi-affine (or by inspecting the form of
$\widetilde{p}_G(\cdot)$), this shows $(3) \implies (2)$.
Now to show $\{ (1), (2) \} \implies (3)$, note that the sum of all
coefficients in $\widetilde{p}_G(\cdot)$ equals
\[
\widetilde{p}_G(1,1,\dots,1) = (-1)^k p_G(-1,\dots,-1),
\]
and by~$(2)$, this dominates the ``constant term'' of $p_G$, i.e.,
\[
(-1)^k p_G(-1,\dots,-1) \geq \widetilde{p}_G(1,0,\dots,0) = (-1)^k
p_G(0,\dots,0) = (-1)^k \prod_{v \in V} (-2 d(v, V \setminus \{ v \})) >
0.
\]
In particular, $(-1)^k p_G(-1,\dots,-1) > 0$, proving a part of~$(3)$.
Hence using~$(2)$ and~\eqref{Erayleigh}, all coefficients of the
``reflected'' polynomial are non-negative; and the normalization shows that
the coefficients sum to $1$. It remains to show that the ``reflected''
polynomial $p_G(-{\bf z}) / p_G(-1,\dots,-1)$ is real-stable. Once again,
using~\eqref{Erayleigh} and that $(-1)^k p_G(-1,\dots,-1) > 0$, it
suffices to show that $\widetilde{p}_G(1,z_1, \dots, z_k)$ is
real-stable. But this follows from~$(1)$ by specializing to $z_0 \mapsto
1 \in \ensuremath{\mathbb R}$. This finally shows that $(1)$ and $(2)$ together imply $(3)$.
We next show the equivalence of $(4)$ and $(5)$. If $G = K_k$, then $\ensuremath{\mathcal{D}}_G
= \Id_k + {\bf 1}_{k \times k}$ is positive semidefinite. Hence so is
$\ensuremath{\mathcal{D}}_{K_k[\ensuremath{\mathbf n}]}$ for all $\ensuremath{\mathbf n}$, by Lemma~\ref{Lwellbehaved}(4). The
converse follows from~\cite[Theorem 1.1]{LHWS}, since $\ensuremath{\mathcal{D}}_G = D_G + 2
\Id_{|V(G)|}$.
Finally, we will show $(2) \implies (4) \implies (1)$. First assume
(2)~$\widetilde{p}_G(\cdot)$ has non-negative coefficients. Fix a subset
$J \subset \{ 1, \dots, k \}$; using Proposition~\ref{Pcoeff}(1), the
coefficient of $z_0^{k - |J|} \prod_{j \in J} z_j$ equals
\[
(-1)^{k - |J|} \cdot \det (\ensuremath{\mathcal{D}}_G)_{J \times J} \prod_{j \in \{ 1, \dots, k
\} \setminus J} (-d_{jj}) = \det (\ensuremath{\mathcal{D}}_G)_{J \times J} \prod_{j \in \{ 1,
\dots, k \} \setminus J} 2 d_G(v_j, V \setminus \{ v_j \}).
\]
By the hypotheses, this expression is non-negative for every $J \subset
\{ 1, \dots, k \}$. Hence $\ensuremath{\mathcal{D}}_G$ has all principal minors non-negative
(and is symmetric), so is positive semidefinite, proving~(4).
Finally, if (4)~$\ensuremath{\mathcal{D}}_G$ is positive semidefinite, then so is
$\mathcal{C}_G$. Write $\mathcal{C}_G = (-\Delta_{\ensuremath{\mathbf a}_G})^{-1/2} \ensuremath{\mathcal{D}}_G
(-\Delta_{\ensuremath{\mathbf a}_G})^{-1/2}$ as above, and compute using that
$-\Delta_{\ensuremath{\mathbf a}_G}$ is a positive definite diagonal matrix:
\begin{align}\label{Ecomput}
\widetilde{p}_G(z_0, z_1, \dots, z_k) = &\ \det(z_0 \cdot
(-\Delta_{\ensuremath{\mathbf a}_G}) + \Delta_{\bf z} \ensuremath{\mathcal{D}}_G)\notag\\
= &\ \det (-\Delta_{\ensuremath{\mathbf a}_G})^{1/2} \det (z_0 \Id_k + \Delta_{\bf z}
\mathcal{C}_G) \det (-\Delta_{\ensuremath{\mathbf a}_G})^{1/2}\notag\\
= &\ \det (-\Delta_{\ensuremath{\mathbf a}_G}) \det (z_0 \Id_k + \Delta_{\bf z}
\mathcal{C}_G).
\end{align}
(As an aside, the second factor in the final expression was termed the
\textit{multivariate characteristic polynomial of $\mathcal{C}_G$} in
\cite[Section 4.4]{BH}.)
Now $\mathcal{C}_G$ has a positive semidefinite square root
$\sqrt{\mathcal{C}_G}$ by the hypotheses. We claim that
\[
\det (z_0 \Id_k + \Delta_{\bf z} \sqrt{\mathcal{C}_G}
\sqrt{\mathcal{C}_G}) = \det (z_0 \Id_k + \sqrt{\mathcal{C}_G}
\Delta_{\bf z} \sqrt{\mathcal{C}_G});
\]
this follows by expanding the determinant of
$\begin{pmatrix} z_0 \Id_k & -\sqrt{\mathcal{C}_G} \\ \Delta_{\bf z}
\sqrt{\mathcal{C}_G} & \Id_k \end{pmatrix}$
in two different ways, both using Schur complements. Therefore -- and as
in the proof of Theorem~\ref{Tstable} -- we have:
\begin{align}\label{Emixed}
\begin{aligned}
\widetilde{p}_G(z_0, z_1, \dots, z_k) = &\ \det (-\Delta_{\ensuremath{\mathbf a}_G}) \det
(z_0 \Id_k + \sqrt{\mathcal{C}_G} \Delta_{\bf z} \sqrt{\mathcal{C}_G})\\
= &\ \det (-\Delta_{\ensuremath{\mathbf a}_G}) \det \left( z_0 \Id_k + \sum_{j=1}^k z_j
\sqrt{\mathcal{C}_G} E_{jj} \sqrt{\mathcal{C}_G} \right).
\end{aligned}
\end{align}
Now the coefficient of each $z_j, \ j \geq 0$ inside the above
determinant is a positive semidefinite matrix. It follows by
\cite[Proposition 2.4]{BB1} (see the text around~\eqref{EBH}) that
$\widetilde{p}_G(\cdot)$ is real-stable, proving~(1).
This proves the equivalence. The final assertion is immediate from~(4)
via Lemma~\ref{Lwellbehaved}(4).
\end{proof}
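As an illustrative numerical check of the equivalence $(2) \Longleftrightarrow (4)$ (and not part of the proof), one can probe positive semidefiniteness of $\ensuremath{\mathcal{D}}_G = D_G + 2 \Id$ through its principal minors on small graphs. The sketch below is ours: it assumes the standard fact that a symmetric matrix is positive semidefinite if and only if all its principal minors are non-negative, and the two test graphs ($P_3$ and $C_5$) and all function names are illustrative choices.

```python
from collections import deque
from fractions import Fraction
from itertools import combinations

def idet(M):
    """Exact determinant of an integer matrix, via Fraction-based elimination."""
    n = len(M)
    A = [[Fraction(x) for x in row] for row in M]
    det = Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if A[r][c] != 0), None)
        if piv is None:
            return 0
        if piv != c:
            A[c], A[piv] = A[piv], A[c]
            det = -det
        det *= A[c][c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            if f:
                A[r] = [a - f * b for a, b in zip(A[r], A[c])]
    return int(det)

def distance_matrix(adj):
    """All-pairs distances of a connected unweighted graph, via BFS."""
    n = len(adj)
    D = []
    for s in range(n):
        dist = [-1] * n
        dist[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if dist[v] < 0:
                    dist[v] = dist[u] + 1
                    q.append(v)
        D.append(dist)
    return D

def principal_minors(adj):
    """All principal minors of D_G + 2*Id, keyed by the index set."""
    D = distance_matrix(adj)
    n = len(adj)
    cal = [[D[i][j] + 2 * (i == j) for j in range(n)] for i in range(n)]
    return {J: idet([[cal[i][j] for j in J] for i in J])
            for r in range(n + 1) for J in combinations(range(n), r)}

# P_3 is a blowup of K_2: every principal minor of D_G + 2*Id is >= 0 ...
minors_p3 = principal_minors([[1], [0, 2], [1]])
p3_psd = all(m >= 0 for m in minors_p3.values())

# ... whereas the 5-cycle C_5 has a negative principal minor.
minors_c5 = principal_minors([[1, 4], [0, 2], [1, 3], [2, 4], [3, 0]])
c5_psd = all(m >= 0 for m in minors_c5.values())
print(p3_psd, c5_psd)  # True False
```

For $C_5$, already the $4 \times 4$ principal minor on the vertices $\{0,1,2,3\}$ is negative.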
\begin{remark}
The assertions in Theorem~\ref{Tlorentz} and Corollary~\ref{Clorentz} are
thus further equivalent to $\mathcal{C}_G$ being a correlation matrix.
Moreover, Theorem~\ref{Tstable} (except the final equivalent assertion
(5)) and Corollary~\ref{Clorentz} hold more generally: for arbitrary
finite metric spaces. We leave the details to the interested reader.
\end{remark}
\begin{remark}
The proof of Theorem~\ref{Tlorentz} also reveals that inspecting the
coefficients in the polynomial $p_G(\cdot)$ or $\widetilde{p}_G(\cdot)$
helps identify the principal minors of $\ensuremath{\mathcal{D}}_G$ (or $\mathcal{C}_G$) that
are negative, zero, or positive.
\end{remark}
\subsection{Symmetries; monomials with zero coefficients}\label{Rgraphsymm}
The results in this paper are developed with the goal of being used in
proving the main theorems in the opening section. The only exceptions are
one of the Appendices (below) and the present subsection, in which our
goal is to provide further intuition for blowup-polynomials of graph
metric spaces $G$. To do so, we study a concrete family of graph metric
spaces $K^{(l)}_k$ -- for which we compute the blowup-polynomials and
reveal connections to symmetric functions. In addition, these
computations lead to the study of certain monomials whose coefficients in
$p_G$ vanish -- this provides intuition for proving
Theorem~\ref{Ttree-blowup}.
We begin from the proof of Theorem~\ref{Tisom}, which shows that the
blowup-polynomial of $G$ is a \textit{partially symmetric polynomial}, in
the sense of being invariant under the subgroup ${\rm Isom}(G)$ (of
isometries of $G$) of $S_{V(G)}$ (the permutations of $V(G)$). For
instance, $p_G(\ensuremath{\mathbf n})$ is symmetric in $\ensuremath{\mathbf n}$ for $G$ a complete graph;
whereas $p_G$ possesses ``cyclic'' symmetries for $G$ a cycle; and so on.
In addition to these global symmetries (i.e., isometries) of $G$, there
also exist ``local symmetries''. For example, suppose $G = P_k$ is a path
graph, with vertex $x_i$ adjacent to $x_{i \pm 1}$ for $1 < i < k$. For
any $1 < i \leq j < k$, the coefficients of the monomials
\[
n_{x_{i-1}} n_{x_i} \cdots n_{x_j} \qquad \text{and} \qquad
n_{x_i} n_{x_{i+1}} \cdots n_{x_{j+1}}
\]
are equal, by Proposition~\ref{Pcoeff}(3). A similar result holds more
generally for banded graphs with bandwidth $d \geq 1$, in which a vertex
$x_i$ is adjacent to $x_j$ if and only if $0 < |i-j| < d$ and $1 \leq i,j
\leq k$.
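For the path graph these coefficient equalities become transparent: by Proposition~\ref{Pcoeff}(1) (specialized to graph metric spaces, where every $d(v, V \setminus \{ v \}) = 1$), the coefficient of $\prod_{i \in I} n_i$ is $(-2)^{k - |I|} \det (\ensuremath{\mathcal{D}}_G)_{I \times I}$, and shifting a contiguous window along the path does not change the corresponding principal submatrix. A small machine check of this (our own sketch, for $P_6$, with illustrative names):

```python
from fractions import Fraction

def idet(M):
    """Exact determinant of an integer matrix, via Fraction-based elimination."""
    n = len(M)
    A = [[Fraction(x) for x in row] for row in M]
    det = Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if A[r][c] != 0), None)
        if piv is None:
            return 0
        if piv != c:
            A[c], A[piv] = A[piv], A[c]
            det = -det
        det *= A[c][c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            if f:
                A[r] = [a - f * b for a, b in zip(A[r], A[c])]
    return int(det)

k = 6  # the path P_6, with d(i, j) = |i - j|
Dcal = [[abs(i - j) + 2 * (i == j) for j in range(k)] for i in range(k)]

def coeff(I):
    """Coefficient of prod_{i in I} n_i: (-2)^(k-|I|) det (D_G + 2 Id)_{IxI}."""
    I = list(I)
    return (-2) ** (k - len(I)) * idet([[Dcal[i][j] for j in I] for i in I])

# coefficients of n_i n_{i+1} ... n_{i+m-1}, for every window position i
window_coeffs = {m: [coeff(range(i, i + m)) for i in range(k - m + 1)]
                 for m in range(1, k + 1)}
shift_invariant = all(len(set(vals)) == 1 for vals in window_coeffs.values())
print(shift_invariant)  # True
```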
\begin{example}
We now use this principle of symmetry to compute the blowup-polynomial
for a two-parameter family of graph metric spaces
\[
G = K_k^{(l)}, \qquad 0 \leq l \leq k-2.
\]
(This will shortly be followed by another application: a sufficient
condition for when certain monomials do not occur in $p_G(\ensuremath{\mathbf n})$.)
Here, $K_k^{(l)}$ denotes the complete graph $K_k$ from which the edges
$(1,2), \dots, (1,l+1)$ have been removed. This leads to three types of
vertices:
\[
\{ 1 \}, \qquad \{ 2, \dots, l+1 \}, \qquad \{ l+2, \dots, k \},
\]
and correspondingly, the isometry group ${\rm Isom}(K_k^{(l)}) \cong S_l
\times S_{k-l-1}$. Notice that the vertices $2, \dots, k$ form a complete
induced subgraph of $K_k^{(l)}$.
The graphs $K_k^{(l)}$ are all chordal (i.e., do not contain an induced
$m$-cycle for any $m \geq 4$), and include as special cases: complete
graphs (for $l=0$) as well as complete graphs with one pendant vertex
(for $l=k-2$). The ``almost complete graph'' $K_k^{(1)}$ (missing exactly
one edge) is another special case, important from the viewpoint of matrix
positivity: it was crucially used in \cite{GKR-critG} to compute a graph
invariant which arose out of analysis and positivity, for every chordal
graph. This was termed the \textit{critical exponent} of a graph in
\cite{GKR-critG}, and seems to be a novel graph invariant.
By the remarks above, the blowup-polynomial in the $n_{v_i}$ (which we
will replace by $n_i,\ 1 \leq i \leq k$ for convenience) will be
symmetric separately in $\{ n_2, \dots, n_{l+1} \}$ and in $\{ n_{l+2},
\dots, n_k \}$. In particular, since the polynomial is multi-affine in
the $n_i$, the only terms that appear will be of the form
\begin{equation}\label{Ecoeff}
n_1^{\varepsilon_1} e_r(n_2, \dots, n_{l+1}) e_s(n_{l+2}, \dots, n_k),
\end{equation}
where $\varepsilon_1 = 0$ or $1$, and $e_r(\cdot)$ is the elementary
symmetric (homogeneous, multi-affine, and in fact real-stable) polynomial
for every $r \geq 0$ (with $e_0(\cdot) := 1$).
\end{example}
With this preparation, we can state and prove the following result for
the graphs $K_k^{(l)}$.
\begin{proposition}\label{PKkl}
Fix non-negative integers $k,l$ such that $0 \leq l \leq k-2$. With
$K_k^{(l)}$ as defined above, and denoting $n_{v_i}$ by $n_i$ for
convenience (with $1 \leq i \leq k$), we have
\begin{align*}
p_{K_k^{(l)}}(\ensuremath{\mathbf n}) = &\ \sum_{r=0}^l \sum_{s=0}^{k-l-1} \left[
(-2)^{k-r-s} (1 + r + s) \right] e_r(n_2, \dots, n_{l+1}) e_s(n_{l+2},
\dots, n_k)\\
&\ + n_1 \sum_{r=0}^l \sum_{s=0}^{k-l-1} \left[ (-2)^{k-r-s-1} (1 -r)
(s+2) \right] e_r(n_2, \dots, n_{l+1}) e_s(n_{l+2}, \dots, n_k).
\end{align*}
\end{proposition}
Notice that setting $n_1=0$, we obtain $-2$ times the blowup-polynomial
of the complete graph $K_{k-1}$, in the variables $n_2, \dots, n_k$ (the
factor $-2$ coming from the specialized variable $n_1$, as in the proof
of Proposition~\ref{Pcoeff}(1)). Indeed, the first sum clearly equals
(via $r+s \leadsto s$):
\[
\sum_{s=0}^{k-1} (-2)^{k-s} (1+s) e_s(n_2, \dots, n_k),
\]
and up to this factor it agrees with the expression in~\eqref{Ecomplete}
(modulo relabelling of the variables) since the underlying metric spaces
are isometric. A similar computation holds upon working with $l=0$.
\begin{proof}[Proof of Proposition~\ref{PKkl}]
We begin by exploiting the symmetries in $K_k^{(l)}$, which imply that
given a subset $I \subset \{ 1, \dots, k \}$, the coefficient of
$\prod_{i \in I} n_i$ depends only on the three integers
\[
\varepsilon_1 := {\bf 1}(1 \in I), \qquad r := \# (I \cap \{ 2, \dots,
l+1 \}), \qquad s := \# (I \cap \{ l+2, \dots, k \}).
\]
Now using~\eqref{Ecoeff}, it follows that the
blowup-polynomial indeed has the desired form:
\begin{align*}
p_{K_k^{(l)}}(\ensuremath{\mathbf n}) = &\ \sum_{r=0}^l \sum_{s=0}^{k-l-1} a_{0,r,s}
e_r(n_2, \dots, n_{l+1}) e_s(n_{l+2}, \dots, n_k)\\
&\ + n_1 \sum_{r=0}^l \sum_{s=0}^{k-l-1} a_{1,r,s} e_r(n_2, \dots,
n_{l+1}) e_s(n_{l+2}, \dots, n_k),
\end{align*}
for some coefficients $a_{\varepsilon_1,r,s} \in \ensuremath{\mathbb R}$. It remains to
compute these coefficients, and we begin with the coefficients
$a_{0,r,s}$. By a computation akin to the proof of
Proposition~\ref{Pcoeff}(1), these are obtained by specializing $n_1 =
0$, which leads to a power of $(-2)$ times a principal minor of
$\ensuremath{\mathcal{D}}_{K_k^{(l)}}$ not involving its first row or column. But this is the
determinant of a principal submatrix of
\[
\Id_{k-1} + {\bf 1}_{(k-1) \times (k-1)}
\]
of size $(r+s) \times (r+s)$, and so a direct computation implies the
desired formula:
\[
a_{0,r,s} = (-2) \cdot (-2)^{k-r-s-1} \cdot (1 + r + s).
\]
It remains to compute $a_{1,r,s}$, which equals $(-2)^{k-r-s-1}$ times
the determinant of the block matrix
\[
\ensuremath{\mathcal{D}}_{r,s} := \begin{pmatrix}
2 & 2 {\bf 1}_{r \times 1}^T & {\bf 1}_{s \times 1}^T \\
2 {\bf 1}_{r \times 1} & \Id_r + {\bf 1}_{r \times r} & {\bf 1}_{r \times
s} \\
{\bf 1}_{s \times 1} & {\bf 1}_{s \times r} & \Id_s + {\bf 1}_{s \times
s} \end{pmatrix}
= \begin{pmatrix} 2 & v^T \\ v & C_{r,s} \end{pmatrix}
\in \ensuremath{\mathbb R}^{(1+r+s) \times (1+r+s)},
\]
where $v := (2 {\bf 1}_{r \times 1}^T \quad {\bf 1}_{s \times
1}^T)^T \in \ensuremath{\mathbb R}^{r+s}$, and $C_{r,s} = \Id_{r+s} + {\bf 1}_{(r+s) \times
(r+s)}$ denotes the principal submatrix of $\ensuremath{\mathcal{D}}_{r,s}$ obtained by
removing its first row and column. Now $C_{r,s}^{-1} = \Id_{r+s} -
(1+r+s)^{-1} {\bf 1}_{(r+s) \times (r+s)}$, so using Schur complements,
\begin{align*}
&\ \det \ensuremath{\mathcal{D}}_{r,s}\\
= &\ \det C_{r,s} \cdot \det \left( 2 - v^T C_{r,s}^{-1} v \right)\\
= &\ (1+r+s) \left[ 2 - \frac{1}{1+r+s} \begin{pmatrix} 2 {\bf 1}_{r
\times 1} \\ {\bf 1}_{s \times 1} \end{pmatrix}^T
\begin{pmatrix} (1+r+s) \Id_r - {\bf 1}_{r \times r} & - {\bf 1}_{r
\times s} \\ - {\bf 1}_{s \times r} & (1+r+s) \Id_s - {\bf 1}_{s \times
s} \end{pmatrix}
\begin{pmatrix} 2 {\bf 1}_{r \times 1} \\ {\bf 1}_{s \times 1}
\end{pmatrix} \right],
\end{align*}
and a straightforward (if careful) calculation reveals this quantity to
equal $(1-r)(s+2)$.
\end{proof}
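The two coefficient formulas just computed can be sanity-checked numerically. The sketch below is ours; it again assumes the principal-minor description of coefficients from Proposition~\ref{Pcoeff}(1), and verifies for $k = 5$, $l = 2$ that the relevant principal minors of $\ensuremath{\mathcal{D}}_{K_5^{(2)}}$ equal $1 + r + s$ and $(1-r)(s+2)$ in the two cases.

```python
from fractions import Fraction

def idet(M):
    """Exact determinant of an integer matrix, via Fraction-based elimination."""
    n = len(M)
    A = [[Fraction(x) for x in row] for row in M]
    det = Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if A[r][c] != 0), None)
        if piv is None:
            return 0
        if piv != c:
            A[c], A[piv] = A[piv], A[c]
            det = -det
        det *= A[c][c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            if f:
                A[r] = [a - f * b for a, b in zip(A[r], A[c])]
    return int(det)

k, l = 5, 2
# K_5^(2): the complete graph on {0,...,4} with the edges {0,1}, {0,2} removed,
# so d(0,1) = d(0,2) = 2 and all other distances equal 1.
removed = {frozenset({0, 1}), frozenset({0, 2})}
def dist(i, j):
    if i == j:
        return 0
    return 2 if frozenset({i, j}) in removed else 1
Dcal = [[dist(i, j) + 2 * (i == j) for j in range(k)] for i in range(k)]

all_match = all(
    idet([[Dcal[i][j] for j in I] for i in I])
    == ((1 - r) * (s + 2) if eps else 1 + r + s)
    for eps in (0, 1)
    for r in range(l + 1)
    for s in range(k - l)
    # a representative index set with the given (eps, r, s) signature:
    for I in [([0] if eps else []) + [1, 2][:r] + [3, 4][:s]]
)
print(all_match)  # True
```

By the symmetries exploited in the proof, checking one representative index set per signature $(\varepsilon_1, r, s)$ suffices.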
\begin{example}\label{Exbipartite}
As an additional example, we compute the blowup-polynomial for several
other graphs at once. The path graph $P_3$ is a special case of the star
graphs $K_{1,k-1}$, and in turn, these as well as the cycle graph $C_4$
are special cases of complete bipartite graphs $K_{r,s}$. As $K_{r,s} =
K_2[(r,s)]$ is a blowup, we can use Lemma~\ref{Lwellbehaved}(2) and
Equation~\eqref{Ecomplete} for $k=2$, to obtain:
\begin{equation}\label{Ebipartite}
p_{K_{r,s}}(n_1, \dots, n_r; m_1, \dots, m_s)
= (-2)^{r+s-2} \left( 3 \sum_{i=1}^r n_i \cdot \sum_{j=1}^s m_j - 4
\sum_{i=1}^r n_i - 4 \sum_{j=1}^s m_j + 4 \right).
\end{equation}
\end{example}
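Equation~\eqref{Ebipartite} can likewise be checked coefficient-by-coefficient on a small case. The sketch below (ours) compares, for $K_{2,2}$, each coefficient computed via principal minors of $\ensuremath{\mathcal{D}}_{K_{2,2}}$ (as in Proposition~\ref{Pcoeff}(1)) against the expansion of the right-hand side of~\eqref{Ebipartite}.

```python
from fractions import Fraction
from itertools import combinations

def idet(M):
    """Exact determinant of an integer matrix, via Fraction-based elimination."""
    n = len(M)
    A = [[Fraction(x) for x in row] for row in M]
    det = Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if A[r][c] != 0), None)
        if piv is None:
            return 0
        if piv != c:
            A[c], A[piv] = A[piv], A[c]
            det = -det
        det *= A[c][c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            if f:
                A[r] = [a - f * b for a, b in zip(A[r], A[c])]
    return int(det)

n, side = 4, [0, 0, 1, 1]  # K_{2,2}: cross-side distance 1, same-side distance 2
Dcal = [[(0 if i == j else (1 if side[i] != side[j] else 2)) + 2 * (i == j)
         for j in range(n)] for i in range(n)]

def formula_coeff(I):
    """Expansion of (-2)^(r+s-2)(3 Sum(n) Sum(m) - 4 Sum(n) - 4 Sum(m) + 4), r=s=2."""
    if len(I) == 0:
        return 16    # 4 * 4
    if len(I) == 1:
        return -16   # 4 * (-4)
    if len(I) == 2 and side[I[0]] != side[I[1]]:
        return 12    # 4 * 3, one variable from each side
    return 0         # same-side pairs, and every monomial of degree >= 3

ok = all(
    (-2) ** (n - len(I)) * idet([[Dcal[i][j] for j in I] for i in I])
    == formula_coeff(I)
    for r in range(n + 1) for I in combinations(range(n), r)
)
print(ok)  # True
```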
As one observes by visual inspection, the coefficient in
Proposition~\ref{PKkl} of $n_1 n_2$ times any monomial in the
(``type-$3$'') node-variables $n_{l+2}, \dots, n_k$, vanishes. Similarly,
in~\eqref{Ebipartite} there are many coefficients that vanish -- in fact,
every coefficient of total degree at least $3$. These facts can be
explained more simply, by the following result about zero terms in the
blowup-polynomial:
\begin{proposition}\label{Pzeroterms}
Suppose $G,H$ are graph metric spaces, and $\ensuremath{\mathbf n} \in
\mathbb{Z}_{>0}^{V(G)}$ a tuple of positive integers, such that $G[\ensuremath{\mathbf n}]$
isometrically embeds as a subgraph metric space inside $H$. Also suppose
$n_v \geq 2$ for some $v \in V(G)$, and $v_1, v_2 \in G[\ensuremath{\mathbf n}]$ are copies
of $v$. Then for every subset of vertices $\{ v_1, v_2 \} \subset S
\subset V(G[\ensuremath{\mathbf n}])$, the coefficient of $\prod_{s \in S} n_s$ in
$p_H(\cdot)$ is zero.
\end{proposition}
For example, for $H$ the path graph $P_4$ with vertices $a - b - c - d$,
the coefficients of $n_a n_c, n_a n_b n_c$, and $n_b n_d, n_b n_c n_d$ in
$p_H(\ensuremath{\mathbf n})$ are all zero, since the path subgraphs $a-b-c$ and $b-c-d$ are
both isomorphic to the graph blowup $K_2[(1,2)]$.
This result also extends to arbitrary finite metric spaces.
\begin{proof}
By Proposition~\ref{Pcoeff}, the coefficient of $\ensuremath{\mathbf n}^S$ (whose meaning is
clear from context) in $p_H(\{ n_w : w \in V(H) \})$ is a scalar times
$\det (\ensuremath{\mathcal{D}}_H)_{S \times S}$. Since $G[\ensuremath{\mathbf n}]$ is a metric subspace of $H$,
it suffices to show that $\det (\ensuremath{\mathcal{D}}_{G[\ensuremath{\mathbf n}]})_{S \times S} = 0$, since
this matrix agrees with $(\ensuremath{\mathcal{D}}_H)_{S \times S}$. But since $v_1, v_2 \in
V(G[\ensuremath{\mathbf n}])$ are copies of $v \in V(G)$, the matrix $\ensuremath{\mathcal{D}}_{G[\ensuremath{\mathbf n}]}$ has two
identical rows by Lemma~\ref{Lwellbehaved}(3). It follows that $\det
(\ensuremath{\mathcal{D}}_{G[\ensuremath{\mathbf n}]})_{S \times S} = 0$, and the proof is complete.
\end{proof}
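The $P_4$ example above is easy to check by machine. The following sketch (ours, again using the principal-minor description of coefficients from Proposition~\ref{Pcoeff}(1)) confirms that the principal minors of $\ensuremath{\mathcal{D}}_{P_4}$ indexed by $\{a,c\}$, $\{a,b,c\}$, $\{b,d\}$, $\{b,c,d\}$ vanish, while, say, the $\{a,d\}$ minor does not:

```python
from fractions import Fraction

def idet(M):
    """Exact determinant of an integer matrix, via Fraction-based elimination."""
    n = len(M)
    A = [[Fraction(x) for x in row] for row in M]
    det = Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if A[r][c] != 0), None)
        if piv is None:
            return 0
        if piv != c:
            A[c], A[piv] = A[piv], A[c]
            det = -det
        det *= A[c][c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            if f:
                A[r] = [a - f * b for a, b in zip(A[r], A[c])]
    return int(det)

k = 4  # P_4 with vertices a=0, b=1, c=2, d=3 and d(i, j) = |i - j|
Dcal = [[abs(i - j) + 2 * (i == j) for j in range(k)] for i in range(k)]

def minor(I):
    return idet([[Dcal[i][j] for j in I] for i in I])

# coefficients of n_a n_c, n_a n_b n_c, n_b n_d, n_b n_c n_d all vanish:
zero_ok = all(minor(I) == 0 for I in [(0, 2), (0, 1, 2), (1, 3), (1, 2, 3)])
# while e.g. the coefficient of n_a n_d survives:
ad_minor = minor((0, 3))
print(zero_ok, ad_minor)  # True -5
```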
\subsection{The tree-blowup delta-matroid}
We conclude this section by proving Theorem~\ref{Ttree-blowup} about the
delta-matroid $\mathcal{M}'(T)$ for every tree, which seems to be
unstudied in the literature. We then explore
(a)~if this delta-matroid equals the blowup delta-matroid
$\mathcal{M}_{\ensuremath{\mathcal{D}}_T}$; and
(b)~if this construction can be extended to arbitrary graphs.
To motivate the construction of $\mathcal{M}'(T)$, which we term the
\textit{tree-blowup delta-matroid}, we begin by applying
Proposition~\ref{Pzeroterms} with $G = P_2 = K_2$ and $G[\ensuremath{\mathbf n}] = P_3$. An
immediate consequence is:
\begin{corollary}\label{Cp3}
If $a,b,c \in V(H)$ are such that $b$ is adjacent to $a,c$ but $a,c$ are
not adjacent, then $n_a n_c$ and $n_a n_b n_c$ have zero coefficients in
$p_H(\ensuremath{\mathbf n})$.
\end{corollary}
In light of this, we take a closer look at $\ensuremath{\mathcal{D}}_X$ for $X = P_k$ a path
graph. Given an integer $k \geq 1$, the \emph{path graph} $P_k$ has
vertex set $\{ 1, \dots, k \}$, with vertices $i,j$ adjacent if and only
if $|i-j|=1$.
Recall from \cite{Bouchet2} that the set system $\mathcal{M}_{\ensuremath{\mathcal{D}}_X}$
(defined a few lines before Corollary~\ref{Cdeltamatroid}) is a linear
delta-matroid for every finite metric space -- in particular, for every
graph metric space, such as $P_k$. It turns out (by inspection) that
Corollary~\ref{Cp3} describes \textit{all} of the principal minors of
$\ensuremath{\mathcal{D}}_{P_k}$ which vanish -- i.e., the complement of the delta-matroid
$\mathcal{M}_{\ensuremath{\mathcal{D}}_{P_k}}$ in the power-set of $\{ 1, \dots, k \}$ -- for
\textit{small} values of $k$. While this may or may not hold for general
$k$, the same construction still defines a delta-matroid, and one that is
hitherto unexplored as well:
\begin{proposition}\label{Ppk}
Given an integer $k \geq 3$, define the set system
\[
\mathcal{B}(P_k) := 2^{\{ 1, \dots, k \}} \setminus \left\{ \ \{ i, i+1,
i+2 \}, \ \ \{ i, i+2 \} \ : \ 1 \leq i \leq k-2 \right\}.
\]
Then $\mathcal{B}(P_k)$ is a delta-matroid.
\end{proposition}
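No proof of Proposition~\ref{Ppk} is recorded here, but its conclusion can be verified exhaustively for small $k$. The brute-force checker of the symmetric exchange axiom below is our own sketch (the axiom: for all feasible $A, B$ and $x \in A \Delta B$, some $y \in A \Delta B$, possibly $y = x$, has $A \Delta \{x, y\}$ feasible); the range of $k$ tested is an illustrative choice.

```python
from itertools import combinations

def all_subsets(ground):
    return [frozenset(c) for r in range(len(ground) + 1)
            for c in combinations(ground, r)]

def is_delta_matroid(feasible):
    """Brute-force check of the symmetric exchange axiom (y = x is allowed)."""
    F = set(feasible)
    if not F:
        return False
    for A in F:
        for B in F:
            for x in A ^ B:
                if not any(A ^ {x, y} in F for y in A ^ B):
                    return False
    return True

def B_path(k):
    """The set system B(P_k) of the proposition."""
    ground = range(1, k + 1)
    banned = {frozenset({i, i + 2}) for i in range(1, k - 1)}
    banned |= {frozenset({i, i + 1, i + 2}) for i in range(1, k - 1)}
    return [S for S in all_subsets(ground) if S not in banned]

results = {k: is_delta_matroid(B_path(k)) for k in range(3, 7)}
print(results)  # {3: True, 4: True, 5: True, 6: True}
```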
We now strengthen this result to the case of arbitrary trees (for
completeness, recall that a tree is a finite connected graph without a
cycle $C_n$ for $n \geq 3$). We begin with a few basic observations on
trees, which help in the next proofs. Every pair of vertices in $T$ is
connected by a unique path in $T$. Moreover, every connected sub-graph of
$T$ is a tree, and every (nonempty) set $I$ of vertices of $T$ is
contained in a unique smallest sub-tree $T(I)$, called its
\textit{Steiner tree}. We also recall that a \textit{leaf}, or a
\textit{pendant vertex}, is any vertex with degree one, i.e., adjacent to
exactly one vertex.
\begin{definition}
Let $T = (V,E)$ be a finite connected, unweighted, tree with the (unique)
edge-distance metric.
We say that a subset of vertices $I \subset V$ is \emph{infeasible} if
there exist vertices $v_1 \neq v_2$ in $I$, such that the Steiner tree
$T(I)$ has $v_1, v_2$ as leaves, both adjacent to the same (unique)
vertex.
\end{definition}
With this terminology at hand, Theorem~\ref{Ttree-blowup} asserts that
the feasible subsets of $V$ form a delta-matroid.
Note that if $T = P_k$ is a path graph, then $\mathcal{M}'(T)$ equals the
delta-matroid $\mathcal{B}(P_k)$ above.
The proof of Theorem~\ref{Ttree-blowup} requires a preliminary result,
which characterizes when a graph $G$ is a nontrivial blowup, and which
then connects this to the distance spectrum of $G$.
\begin{proposition}\label{Pblowup}
Suppose $G = (V,E)$ is a graph metric space. Then each of the following
statements implies the next:
\begin{enumerate}
\item $G$ is a nontrivial blowup, i.e., a blowup of a graph metric space
$H$ with $|V(H)| < |V(G)|$.
\item $G$ contains two vertices $v,w$ with the same set of neighbors. (In
particular, $d_G(v,w) = 2$.)
\item $-2$ is an eigenvalue of the distance matrix $D_G$.
\item The blowup-polynomial has total degree strictly less than $|V|$.
\end{enumerate}
In fact, the first two assertions are equivalent, as are the last two --
but all four assertions are not equivalent.
\end{proposition}
\begin{proof}
We show the implications $(2) \implies (1) \implies (2) \implies (3)
\implies (4) \implies (3)$. If~(2) holds, and $H$ is the induced subgraph
of $G$ on $V(G) \setminus \{ w \}$, then it is easy to see that $G =
H[\ensuremath{\mathbf n}]$, where $n_{v'} = 1$ for $v' \neq v$, $n_v = 2$, and $w$ is the
other copy of $v$. This shows~(1); conversely, if~(1) holds then there
exist two vertices $v \neq w \in V(G)$ that are copies of one another.
But then $v,w$ share the same set of neighbors, so $0 < d_G(v,w) \leq 2$.
If $d_G(v,w) = 1$, then $v$ is a neighbor of $w$, hence of itself (as
$v,w$ have the same neighbors), which is impossible. This shows that
$(1) \implies (2)$.
Now suppose~(2) holds, and hence so does~(1). By
Lemma~\ref{Lwellbehaved}(3), $\ensuremath{\mathcal{D}}_G = D_G + 2 \Id_V$ has two identical
rows, so is singular. This shows~(3). (This implication is also mentioned
in \cite[Theorem 2.34]{AH}.) Finally,~(3) holds if and only if $\ensuremath{\mathcal{D}}_G$ is
singular as above. By Proposition~\ref{Pcoeff}(1), this is if and only if
the coefficient of the unique top-degree monomial $\prod_{v \in V(G)}
n_v$ in $p_G(\ensuremath{\mathbf n})$ vanishes, which shows that $(3) \Longleftrightarrow
(4)$.
To show~(3) does not imply~(2), consider the path graph $P_9$ (with edges
$\{ i, i+1 \}$ for $0 < i < 9$). It is easy to check that $\det \ensuremath{\mathcal{D}}_{P_9}
= 0$, so that~(3) holds; and also that~(2) does not hold.
\end{proof}
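The two facts about $P_9$ invoked at the end of the proof can be verified directly; the sketch below (ours, with illustrative names) computes $\det \ensuremath{\mathcal{D}}_{P_9}$ exactly and confirms that no two vertices of $P_9$ share a neighborhood:

```python
from fractions import Fraction

def idet(M):
    """Exact determinant of an integer matrix, via Fraction-based elimination."""
    n = len(M)
    A = [[Fraction(x) for x in row] for row in M]
    det = Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if A[r][c] != 0), None)
        if piv is None:
            return 0
        if piv != c:
            A[c], A[piv] = A[piv], A[c]
            det = -det
        det *= A[c][c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            if f:
                A[r] = [a - f * b for a, b in zip(A[r], A[c])]
    return int(det)

k = 9  # P_9, with d(i, j) = |i - j|
Dcal = [[abs(i - j) + 2 * (i == j) for j in range(k)] for i in range(k)]
det_p9 = idet(Dcal)  # assertion (3) holds: D_G + 2 Id is singular

# assertion (2) fails: no two vertices of P_9 have the same set of neighbors
nbrs = [{j for j in range(k) if abs(i - j) == 1} for i in range(k)]
has_twins = any(nbrs[i] == nbrs[j] for i in range(k) for j in range(i + 1, k))
print(det_p9, has_twins)  # 0 False
```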
Specializing Proposition~\ref{Pblowup} to trees, we obtain:
\begin{corollary}\label{Ctree-blowup}
A tree $T$ is a blowup of a graph $G$ if and only if
(a)~$G$ is a sub-tree of $T$, and
(b)~the only vertices of $G$ which are ``copied'' are a set of leaves of
$G$.
\end{corollary}
\begin{proof}
One way is easily shown, and for the ``only if'' part, the key
observation is that if a vertex adjacent to at least two others is
copied, then this creates a $4$-cycle in the blowup. (An equally short
proof is that this result is a special case of Proposition~\ref{Pblowup}
$(1) \Longleftrightarrow (2)$.)
\end{proof}
\begin{proof}[Proof of Theorem~\ref{Ttree-blowup}]
If $T$ has two nodes (so $T = K_2$), then $\mathcal{M}'(T) = 2^V$, which
is a delta-matroid. Thus, suppose henceforth that $|V| \geq 3$.
Since $\mathcal{M}'(T)$ contains all singleton subsets of $V$, the only
nontrivial step is to verify the symmetric exchange axiom. Suppose $A
\neq B \in \mathcal{M}'(T)$ and $x \in A \Delta B$. We divide the
analysis into two cases; in what follows, given a subset $I \subset V$,
we will denote by $T_I$ the subgraph of $T$ induced on the vertex set
$I$, and by $T(I)$ the Steiner tree of $I$ (as above).
\begin{enumerate}
\item $x \in B \setminus A$.
If $A \sqcup \{ x \} \in \mathcal{M}'(T)$, then we simply choose $y=x$.
Otherwise $A \sqcup \{ x \}$ is infeasible whereas $A$ is not (i.e., $A$
is feasible). In particular, $x \not\in T(A)$ since otherwise $T(A \sqcup
\{ x \}) = T(A)$. Now using Corollary~\ref{Ctree-blowup}, this yields a
unique $v \in A$ such that $v,x$ are leaves in the sub-tree $T(A \sqcup
\{ x \})$. Also denote their (unique) common adjacent vertex by $a \in
T(A \sqcup \{ x \})$.
We now consider two sub-cases. First, if $v \not\in B$, then choose $y = v$ and compute:
\[
A \Delta \{ x, y \} = (A \setminus \{ v \}) \sqcup \{ x \}.
\]
Since $v,x$ were copies, removing $v$ and adding $x$ produces an
isomorphic graph: $T_{(A \setminus \{ v \}) \sqcup \{ x \}} \cong T_A$,
which is indeed in $\mathcal{M}'(T)$, as desired.
Otherwise $v \in B$. In this case, $T(B)$ contains $v,x$ and hence $a$ as
well. If now $v,x$ are leaves in $T(B)$ then $B$ is infeasible, which
contradicts the assumption. Hence there exists $y \in B$ which is
separated from $a \in V$ by (exactly) one of $v,x \in B$. Clearly, $y
\not\in A \sqcup \{ x \}$; now note that $A \Delta \{ x, y \} = A \sqcup
\{ x, y \}$, and this belongs to $\mathcal{M}'(T)$ by
Corollary~\ref{Ctree-blowup} (and the uniqueness of the leaves $v,x$ with
a common adjacent vertex).
\item The other case is when $x \in A \setminus B$.
Once again, there are two cases, analogous to the two cases above. First
if $A \setminus \{ x \} \in \mathcal{M}'(T)$, then we simply choose
$y=x$. Otherwise $A \setminus \{ x \}$ is infeasible whereas $A$ is not.
Using Corollary~\ref{Ctree-blowup}, this yields unique $v,w \in A$ such
that:
\begin{itemize}
\item $v,w$ are leaves in $T(A \setminus \{ x \})$,
\item $v,w$ are both adjacent to a (unique) vertex $a \in T(A \setminus
\{ x \})$, and
\item $v$ separates $x$ from $w$ in $T(A)$.
\end{itemize}
There are now two sub-cases. First if $v,w \in B$, then $T(B)$ contains
$v,w$, hence $a$. As $B \in \mathcal{M}'(T)$, there again exists $y \in
B$ which is separated from $a \in V$, now by (exactly) one of $v,w$. But
then $A \Delta \{ x, y \} = (A \setminus \{ x \}) \sqcup \{ y \}$, and
this is in $\mathcal{M}'(T)$ as in the preceding case, by
Corollary~\ref{Ctree-blowup} (and the uniqueness of the leaves $v,w$ with
a common adjacent vertex).
The final sub-case is when not both $v,w$ lie in $B$. Choose an element
$y \in \{ v, w \} \setminus B \subset A \setminus B$. Now $A \Delta \{ x,
y \} = (A \setminus \{ x \}) \setminus \{ y \}$, and by the uniqueness of
the leaves $v,w$ (with a common adjacent vertex), this set lacks two
leaves at distance $2$ from each other in $T$. Therefore $A \Delta \{ x,
y \} \in \mathcal{M}'(T)$, as desired. \qedhere
\end{enumerate}
\end{proof}
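Theorem~\ref{Ttree-blowup} can also be confirmed exhaustively on small trees. The sketch below (ours) computes Steiner trees by repeatedly pruning leaves, tests the infeasibility condition of the Definition above, and brute-forces the symmetric exchange axiom for a $7$-vertex ``spider'' tree; the tree and all names are illustrative choices.

```python
from itertools import combinations

# a "spider" tree: center 0 with three legs 0-1-2, 0-3-4, 0-5-6
edges = [(0, 1), (1, 2), (0, 3), (3, 4), (0, 5), (5, 6)]
n = 7
adj = {v: set() for v in range(n)}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

def steiner(I):
    """Vertex set of the Steiner tree T(I): prune leaves outside I repeatedly."""
    S = set(range(n))
    changed = True
    while changed:
        changed = False
        for v in list(S):
            if v not in I and len(adj[v] & S) <= 1:
                S.discard(v)
                changed = True
    return S

def feasible(I):
    """I is infeasible iff two of its vertices are leaves of T(I) sharing a neighbor."""
    if len(I) < 2:
        return True
    S = steiner(I)
    leaves = [v for v in I if len(adj[v] & S) == 1]
    return not any(adj[v] & S == adj[w] & S
                   for v, w in combinations(leaves, 2))

F = {frozenset(c) for r in range(n + 1)
     for c in combinations(range(n), r) if feasible(frozenset(c))}

# symmetric exchange axiom, checked over all feasible pairs (y = x allowed)
exchange_ok = all(
    any(A ^ {x, y} in F for y in A ^ B)
    for A in F for B in F for x in A ^ B
)
print(exchange_ok)  # True
```

For instance, $\{2, 4\}$ (leg-ends at distance $4$) is feasible, while $\{1, 3\}$ (two leaves of its Steiner tree adjacent to the center) is not.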
As mentioned above, the delta-matroids $\mathcal{M}_{\ensuremath{\mathcal{D}}_{P_k}} =
\mathcal{M}'(P_k) = \mathcal{B}(P_k)$ for small values of $k$. It is
natural to ask whether this result holds for all path graphs, and
perhaps even more generally. It turns out to be false: the
situation is more involved already for path graphs $P_k$ with $k \geq 9$:
\begin{proposition}\label{Ppath}
Suppose $k \geq 3$ is an integer. The blowup delta-matroid
$\mathcal{M}_{\ensuremath{\mathcal{D}}_{P_k}}$ of the graph metric space $P_k$ coincides with
the tree-blowup delta-matroid $\mathcal{B}(P_k)$, if and only if $k \leq
8$.
\end{proposition}
\begin{proof}
An explicit (and longwinded) inspection shows that
$\mathcal{M}_{\ensuremath{\mathcal{D}}_{P_k}} = \mathcal{B}(P_k)$ for $3 \leq k \leq 8$.
Another direct computation shows that $\det \ensuremath{\mathcal{D}}_{P_9} = 0$. Hence by
Proposition~\ref{Pcoeff}(2), the coefficient of $n_1 n_2 \cdots n_9$ in
$p_{P_k}(\ensuremath{\mathbf n})$ also vanishes, for all $k \geq 9$. Using
Proposition~\ref{Pcoeff}(3), it follows that
\[
\{ i, i+1, \dots, i+8 \} \not\in \mathcal{M}_{\ensuremath{\mathcal{D}}_{P_k}}, \qquad \forall k
\geq 9, \ 1 \leq i \leq k-8.
\]
But these sets all lie in $\mathcal{B}(P_k)$, so $\mathcal{B}(P_k)
\supsetneq \mathcal{M}_{\ensuremath{\mathcal{D}}_{P_k}}$ for $k \geq 9$.
\end{proof}
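Both halves of Proposition~\ref{Ppath} are machine-checkable. In the sketch below (ours), the feasible sets of $\mathcal{M}_{\ensuremath{\mathcal{D}}_{P_k}}$ are taken, as in the text, to be the index sets with non-vanishing principal minor of $\ensuremath{\mathcal{D}}_{P_k}$:

```python
from fractions import Fraction
from itertools import combinations

def idet(M):
    """Exact determinant of an integer matrix, via Fraction-based elimination."""
    n = len(M)
    A = [[Fraction(x) for x in row] for row in M]
    det = Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if A[r][c] != 0), None)
        if piv is None:
            return 0
        if piv != c:
            A[c], A[piv] = A[piv], A[c]
            det = -det
        det *= A[c][c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            if f:
                A[r] = [a - f * b for a, b in zip(A[r], A[c])]
    return int(det)

def nonsingular_sets(k):
    """The linear delta-matroid of D_{P_k} + 2 Id: sets with nonzero minor."""
    Dcal = [[abs(i - j) + 2 * (i == j) for j in range(k)] for i in range(k)]
    return {frozenset(I) for r in range(k + 1)
            for I in combinations(range(1, k + 1), r)
            if idet([[Dcal[i - 1][j - 1] for j in I] for i in I]) != 0}

def B_path(k):
    banned = {frozenset({i, i + 2}) for i in range(1, k - 1)}
    banned |= {frozenset({i, i + 1, i + 2}) for i in range(1, k - 1)}
    return {frozenset(I) for r in range(k + 1)
            for I in combinations(range(1, k + 1), r)} - banned

equal_small = all(nonsingular_sets(k) == B_path(k) for k in range(3, 9))
# at k = 9 the inclusion is strict: {1,...,9} lies in B(P_9), but its minor vanishes
full9 = frozenset(range(1, 10))
strict_at_9 = full9 in B_path(9) and full9 not in nonsingular_sets(9)
print(equal_small, strict_at_9)  # True True
```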
As also promised above, we next explore the question of extending
Theorem~\ref{Ttree-blowup} from trees to arbitrary graphs. The key
observation is that the equivalence $(1) \Longleftrightarrow (2)$ in
Proposition~\ref{Pblowup} extends Corollary~\ref{Ctree-blowup}. This
suggests how to define a set system in $2^{V(G)}$ for every graph metric
space $G$, which specializes to the delta-matroid $\mathcal{M}'(T)$ when
$G = T$ is a tree.
\begin{definition}
Suppose $G = (V,E)$ is a graph metric space. We say that a subset $I
\subset V$ is
\begin{enumerate}
\item \textit{infeasible of the first kind} if there exist vertices $v_1
\neq v_2 \in I$ and a subset $I \subset \widetilde{I} \subset V$, such
that:
(a)~the induced subgraph $G(\widetilde{I})$ on $\widetilde{I}$ is
connected in $G$, and
(b)~$v_1, v_2$ have the same set of neighbors in $G(\widetilde{I})$.
\item \textit{infeasible of the second kind} if there exist vertices $v_1
\neq v_2 \in I$ and a subset $I \subset \widetilde{I} \subset V$, such
that:
(a)~the induced subgraph $G(\widetilde{I})$ on $\widetilde{I}$ is a
\textit{metric subspace} of $G$ (hence connected), and
(b)~$v_1, v_2$ have the same set of neighbors in $G(\widetilde{I})$.
\end{enumerate}
Also define $\mathcal{M}'_1(G)$ (respectively, $\mathcal{M}'_2(G)$) to
comprise all subsets of $V$ that are not infeasible of the first
(respectively, second) kind.
\end{definition}
For instance, if $G = T$ is a tree, then it is not hard to see that
$\mathcal{M}'_1(T) = \mathcal{M}'_2(T) = \mathcal{M}'(T)$, which was
studied in Theorem~\ref{Ttree-blowup}. Thus, given that theorem, it is
natural to ask \textit{whether $\mathcal{M}'_1(G)$ and/or
$\mathcal{M}'_2(G)$ is a delta-matroid for every graph $G$}.
This question is also natural from the viewpoint of the
blowup-polynomial, in that if $I \subset V$ is infeasible of the second
kind, then the coefficient in $p_G(\ensuremath{\mathbf n})$ of the monomial $\prod_{i \in I}
n_i$ is zero by Proposition~\ref{Pzeroterms}. Nevertheless, our next
result shows that this question has a negative answer, already for a
graph on seven vertices. Thus, the construction of $\mathcal{M}'(T)$ does
not extend to arbitrary graph metric spaces in either of the above two
ways:
\begin{proposition}
There exists a graph $G$ such that neither $\mathcal{M}'_1(G)$ nor
$\mathcal{M}'_2(G)$ is a delta-matroid.
\end{proposition}
\begin{proof}
Let $G$ denote the graph in Figure~\ref{Fig2} on the vertex set $V = \{
u, v_1, v_2, w_1, w_2, x, z \}$.
\begin{figure}[ht]
\definecolor{xdxdff}{rgb}{0.49,0.49,1}
\begin{tikzpicture}
\draw (0,1)-- (-1.5,0);
\draw (0,1)-- (1.5,0);
\draw (0,-1)-- (-1.5,0);
\draw (0,-1)-- (1.5,0);
\draw (1.5,0)-- (3,1);
\draw (1.5,0)-- (3,-1);
\draw (1.5,-2)-- (3,-1);
\draw (1.5,-2)-- (0,-1);
\draw[color=black] (-1.5,-2.3) node {$G$};
\fill [color=xdxdff] (-1.5,0) circle (1.5pt);
\draw[color=black] (-1.5,-0.25) node {$u$};
\fill [color=xdxdff] (0,-1) circle (1.5pt);
\draw[color=black] (0,-1.4) node {$w_2$};
\fill [color=xdxdff] (1.5,0) circle (1.5pt);
\draw[color=black] (1.5,-0.25) node {$z$};
\fill [color=xdxdff] (3,1) circle (1.5pt);
\draw[color=black] (3,1.4) node {$v_1$};
\fill [color=xdxdff] (3,-1) circle (1.5pt);
\draw[color=black] (3,-1.4) node {$v_2$};
\fill [color=xdxdff] (1.5,-2) circle (1.5pt);
\draw[color=black] (1.5,-2.4) node {$x$};
\fill [color=xdxdff] (0,1) circle (1.5pt);
\draw[color=black] (0,1.4) node {$w_1$};
\end{tikzpicture}
\caption{A graph on seven vertices}\label{Fig2}
\end{figure}
The definitions show that $A := V$ and $B := \{v_2 \}$ both lie in
$\mathcal{M}'_1(G) \cap \mathcal{M}'_2(G)$, and clearly, $x \in A \Delta
B = A \setminus B$. Moreover, for all $y \in V$, one can verify that $A
\Delta \{ x, y \} = A \setminus \{ x, y \}$ (or $A \setminus \{ x \}$ if
$y=x$) is indeed infeasible of the second kind, hence of the first.
(This uses that in the induced subgraph on $A \setminus \{ x, y \}$,
either $v_1, v_2$ are both present, or $w_1, w_2$ are, and then they are
copies of one another.) Hence the symmetric exchange axiom fails for both
$\mathcal{M}'_1(G)$ and for $\mathcal{M}'_2(G)$, proving the result.
\end{proof}
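This counterexample is small enough to verify by brute force. The sketch below (ours, with illustrative names) encodes the graph of Figure~\ref{Fig2}, tests infeasibility of the second kind by searching over all supersets $\widetilde{I}$ that are metric subspaces (recall that infeasibility of the second kind implies that of the first), and confirms that the exchange axiom fails at $A = V$, $B = \{ v_2 \}$, and $x$:

```python
from collections import deque
from itertools import combinations

V = ["u", "v1", "v2", "w1", "w2", "x", "z"]
E = [("w1", "u"), ("w1", "z"), ("w2", "u"), ("w2", "z"),
     ("z", "v1"), ("z", "v2"), ("x", "v2"), ("x", "w2")]
adj = {v: set() for v in V}
for a, b in E:
    adj[a].add(b)
    adj[b].add(a)

def dists_within(S):
    """BFS distances between all pairs inside the induced subgraph on S."""
    S = set(S)
    out = {}
    for s in S:
        d = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u] & S:
                if w not in d:
                    d[w] = d[u] + 1
                    q.append(w)
        out[s] = d
    return out

D_global = dists_within(V)

def metric_subspace(S):
    d = dists_within(S)
    return all(d[p].get(q) == D_global[p][q] for p in S for q in S)

def infeasible_2nd(I):
    """Some metric-subspace superset of I contains two twins from I."""
    I = set(I)
    others = [v for v in V if v not in I]
    for r in range(len(others) + 1):
        for extra in combinations(others, r):
            T = I | set(extra)
            if metric_subspace(T) and any(
                    adj[p] & T == adj[q] & T
                    for p, q in combinations(sorted(I), 2)):
                return True
    return False

A, B = set(V), {"v2"}
A_B_feasible = not infeasible_2nd(A) and not infeasible_2nd(B)
# every candidate exchange A ^ {x, y}, with y in A ^ B, is infeasible:
axiom_fails = all(infeasible_2nd(A ^ {"x", y}) for y in A ^ B)
print(A_B_feasible, axiom_fails)  # True True
```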
\section{Metric geometry: Euclidean blowups}
In this section we explore the blowup-polynomial from the viewpoint of
metric geometry, and prove the final outstanding theorem.
Specifically, we understand which blowups $X[\ensuremath{\mathbf n}]$ of a given finite
metric space $X$ isometrically embed into some Euclidean space $(\ensuremath{\mathbb R}^r, \|
\cdot \|_2)$.
\begin{proof}[Proof of Theorem~\ref{Teuclidean}]
If $k=1$ then the result is immediate, since $X[n_1]$ comprises the
vertices of a Euclidean simplex, for any $n_1 \geq 2$. Thus, we suppose
henceforth that $k \geq 2$.
We first show $(2) \implies (1)$. Let $j_0 \in [1,k] \setminus \{ j \}$
be such that $x_{j_0} = v \in V$ is the closest point in $V$ to $x_j$.
Now a straightforward computation reveals that $x'_j := 2 x_{j_0} - x_j
\in \ensuremath{\mathbb R}^r$ serves as a blowup of $x_j$ in $X[\ensuremath{\mathbf n}]$. This proves (1).
The proof of the reverse implication $(1) \implies (2)$ is in steps.
We first suppose $n_{x_1}, n_{x_2} \geq 2$ and arrive at a contradiction.
Indeed, now the metric subspace $Y[(2,2)] \subset X[\ensuremath{\mathbf n}]$ is also
Euclidean, where $Y = ( x_1, x_2 )$. Rescale the metric such that
$d(x_1,x_2) = 1$, and let $y_i$ be the blowup of $x_i$ in $Y[(2,2)]$ for
$i=1,2$. Then the modified Cayley--Menger matrix of $Y[(2,2)]$ with
respect to $(x_1, y_1, x_2, y_2)$ is
\[
A = \begin{pmatrix} 8 & 4 & 4 \\ 4 & 2 & -2 \\ 4 & -2 & 2 \end{pmatrix}.
\]
As $\det A < 0$, it follows by Theorem~\ref{Tschoenberg} that $Y[(2,2)]$,
and hence $X[\ensuremath{\mathbf n}]$, is not Euclidean.
Next, we suppose $n_{x_2} \geq 3$ and again arrive at a contradiction --
this time by considering the Euclidean subspace $Y[(1,3)] \subset
X[\ensuremath{\mathbf n}]$, where $Y = (x_1, x_2)$. Rescale the metric such that $d(x_1,
x_2) = 1$, let $y_2, z_2$ denote the blowups of $x_2$ in $Y[(1,3)]$, and
consider the modified Cayley--Menger matrix of $Y[(1,3)]$ with respect to
$(x_1, x_2, y_2, z_2)$:
\[
A = \begin{pmatrix} 2 & -2 & -2 \\ -2 & 2 & -2 \\ -2 & -2 & 2 \end{pmatrix}.
\]
As $\det A < 0$, it follows by Theorem~\ref{Tschoenberg} that $Y[(1,3)]$,
hence $X[\ensuremath{\mathbf n}]$, is not Euclidean. (Note, these two uses of
Theorem~\ref{Tschoenberg} can also be avoided by using ``visual Euclidean
geometry''. For instance, in the latter case $x_2, y_2, z_2$ are the
vertices of an equilateral triangle with edge-length $2$, and drawing
three unit spheres centered at these vertices reveals there is no common
intersection point $x_1$.)
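The two determinant computations above are immediate to check by machine. In this sketch (ours), a negative determinant of the displayed modified Cayley--Menger matrix certifies, via Theorem~\ref{Tschoenberg}, that the corresponding blowup is not Euclidean:

```python
# modified Cayley-Menger matrices from the two cases above
A_22 = [[8, 4, 4], [4, 2, -2], [4, -2, 2]]      # Y[(2,2)], w.r.t. (x1, y1, x2, y2)
A_13 = [[2, -2, -2], [-2, 2, -2], [-2, -2, 2]]  # Y[(1,3)], w.r.t. (x1, x2, y2, z2)

def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

d22, d13 = det3(A_22), det3(A_13)
print(d22, d13)  # -128 -32
```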
This shows that there exists a unique index $j \in [1,k]$ such that
$n_{x_j} = 2$ and all other $n_{x_i} = 1$; this is precisely (2)(a). If $k=2$ then the
result is easy to show, so we now suppose that $k \geq 3$. Let $x_{j_0}$
denote any point in $X \setminus \{ x_j \}$ that is closest to $x_j$.
Since $X[\ensuremath{\mathbf n}]$ is also Euclidean, considering the degenerate (Euclidean)
triangle with vertices $x_{j_0}, x_j$, and the blowup $x'_j$ of $x_j$
shows that these vertices are collinear, and in fact, $x_{j_0} = (x_j +
x'_j)/2$. In turn, this implies that a ``closest point'' $x_{j_0} \in X$
to $x_j$ (and hence the index $j_0$) is unique.
Next, denote by $l \geq 1$ the dimension of the span $V$ of $\{ x_i -
x_{j_0} : i \neq j \}$. Relabel the $x_i$ via: $y_0 := x_{j_0}, y_1 :=
x_j$, and choose any enumeration of the remaining points in $X$ as $y_2,
\dots, y_{k-1}$, such that $y_2 - y_0, \dots, y_{l+1} - y_0$ form a basis
of $V$. Since $X$ is Euclidean, a simple check shows that the modified
Cayley--Menger matrix of $X$ with respect to $(y_0, \dots, y_{k-1})$ is
precisely the Gram matrix
\[
(d(y_0, y_i)^2 + d(y_0, y_{i'})^2 - d(y_i, y_{i'})^2)_{i,i'=1}^{k-1} = (
2 \langle y_{i'} - y_0, y_i - y_0 \rangle )_{i,i'=1}^{k-1}.
\]
Now (2)(b) follows from the claim that if $y_1 - y_0$ is in the span of
$\{ y_i - y_0 : 2 \leq i \leq l+1 \}$, then this matrix uniquely
determines $y_1 \in \ensuremath{\mathbb R}^r$. To show the claim, write $y_1 - y_0 =
\sum_{i=2}^{l+1} c_i (y_i - y_0)$, and take inner products to obtain the
linear system
\[
\sum_{i=2}^{l+1} \langle y_{i'} - y_0, y_i - y_0 \rangle c_i = \langle
y_{i'} - y_0, y_1 - y_0 \rangle, \qquad 2 \leq i' \leq l+1.
\]
But the left-hand side is the product of the Gram matrix of $\{ y_{i'} -
y_0 : 2 \leq i' \leq l+1 \}$ against the vector $(c_2, \dots,
c_{l+1})^T$. As the vectors $y_{i'} - y_0$ are independent, their Gram
matrix is invertible, so this system determines all $c_i$ uniquely. In
particular, if $y_1 = x_j$ is in the affine hull of $X \setminus \{ x_j
\}$, then $X[\ensuremath{\mathbf n}]$ cannot be Euclidean since $n_{x_j} > 1$. This proves
(2)(b).
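The two computations above, the identity between modified Cayley--Menger entries and Gram entries, and the recovery of the coefficients $c_i$ from the invertible Gram matrix, can be illustrated on a concrete configuration in $\ensuremath{\mathbb R}^3$ (a Python sketch; the points and coefficients below are made up for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical Euclidean configuration: y0 is the base point, three vectors
# span V, and y1 lies in their span, mirroring the setup in the proof.
y0 = rng.standard_normal(3)
basis = [rng.standard_normal(3) for _ in range(3)]   # the y_i - y0, i = 2..4
c_true = np.array([0.5, -1.0, 2.0])
y1 = y0 + sum(c * v for c, v in zip(c_true, basis))

def d2(a, b):
    """Squared Euclidean distance."""
    return float(np.sum((a - b) ** 2))

# Modified Cayley--Menger entries equal twice the Gram entries.
pts = [y1, y0 + basis[0], y0 + basis[1], y0 + basis[2]]
for yi in pts:
    for yj in pts:
        cm = d2(y0, yi) + d2(y0, yj) - d2(yi, yj)
        gram = 2.0 * float(np.dot(yj - y0, yi - y0))
        assert np.isclose(cm, gram)

# Recover the coefficients c_i by solving the Gram system, as in the proof.
G = np.array([[np.dot(u, v) for v in basis] for u in basis])
rhs = np.array([np.dot(u, y1 - y0) for u in basis])
c_recovered = np.linalg.solve(G, rhs)
```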
Finally, since $k \geq 3$, we have $l \geq 1$. Now the definition of the
blowup $X[\ensuremath{\mathbf n}]$ implies:
\[
\| y_i - y_1 \|_2^2 = \| y_i - (2 y_0 - y_1) \|_2^2, \qquad 2 \leq i \leq
l+1
\]
or in other words,
\[
\| (y_i - y_0) - (y_1 - y_0) \|_2^2 = \| (y_i - y_0) + (y_1 - y_0)
\|_2^2.
\]
Simplifying this equality yields that $y_1 - y_0$ is orthogonal to $y_2 -
y_0, \dots, y_{l+1} - y_0$. This implies $y_1 - y_0$ is orthogonal to all
of $V$ -- which was the assertion (2)(c).
\end{proof}
\subsection{Real-closed analogues}
For completeness, we now provide analogues of some of the results proved
above, over arbitrary real-closed fields. As this subsection is not used
anywhere else in the paper, we will be brief, and also assume familiarity
with real-closed fields; we refer the reader to the opening chapter
of~\cite{BCR} for this.
In this part, we suppose $\ensuremath{\mathbb K}$ is a real-closed field, where we denote the
non-negative elements in $\ensuremath{\mathbb K}$ by: $r \geq 0$. Also let $\overline{\ensuremath{\mathbb K}} =
\ensuremath{\mathbb K}[\sqrt{-1}]$ denote an algebraic closure of $\ensuremath{\mathbb K}$, where we fix a choice
of ``imaginary'' square root $i = \sqrt{-1}$ in $\overline{\ensuremath{\mathbb K}}$. Then
several ``real'' notions can be defined over $\ensuremath{\mathbb K}, \overline{\ensuremath{\mathbb K}}$:
\begin{enumerate}
\item A symmetric matrix $A \in \ensuremath{\mathbb K}^{k \times k}$ is said to be
\emph{positive semidefinite} (respectively, \emph{positive definite}) if
$x^T A x \geq 0$ for all vectors $x \in \ensuremath{\mathbb K}^k$ (respectively,
$x^T A x > 0$ for all nonzero vectors $x \in \ensuremath{\mathbb K}^k$).
\item A matrix $A \in \ensuremath{\mathbb K}^{k \times k}$ is \emph{orthogonal} if $A A^T =
\Id_k$.
\item Given $z = x + iy \in \overline{\ensuremath{\mathbb K}}$ with $x,y \in \ensuremath{\mathbb K}$, we define
$\Re(z) := x$ and $\Im(z) := y$.
\item We say that a multivariable polynomial $p \in \overline{\ensuremath{\mathbb K}}[z_1,
\dots, z_k]$ is \emph{stable} if $p(z_1, \dots, z_k) \neq 0$ whenever
$\Im(z_j) > 0\ \forall j$.
\end{enumerate}
Now (an analogue of) the spectral theorem holds for symmetric matrices $A
= A^T \in \ensuremath{\mathbb K}^{k \times k}$. Moreover, every such matrix $A$ is positive
semidefinite if and only if it has non-negative eigenvalues in $\ensuremath{\mathbb K}$ --
and if so, then it has a positive semidefinite square root $\sqrt{A}$.
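Over $\ensuremath{\mathbb R}$, the construction of $\sqrt{A}$ via the spectral theorem can be sketched in a few lines of Python with NumPy (an illustration of the statement, not part of the real-closed-field argument):

```python
import numpy as np

def psd_sqrt(A):
    """Positive semidefinite square root via the spectral theorem:
    A = V diag(w) V^T with real w >= 0, so sqrt(A) = V diag(sqrt(w)) V^T."""
    w, V = np.linalg.eigh(A)
    w = np.clip(w, 0.0, None)       # guard against tiny negative round-off
    return V @ np.diag(np.sqrt(w)) @ V.T

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # symmetric PSD, eigenvalues 1 and 3
R = psd_sqrt(A)
```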
One also has:
\begin{proposition}\label{Prealclosed}
Fix an integer $k \geq 1$.
\begin{enumerate}
\item Suppose $A_1, \dots, A_m, B \in \ensuremath{\mathbb K}^{k \times k}$ are symmetric
matrices, with all $A_j$ positive semidefinite. Then the polynomial
\[
p(z_1, \dots, z_m) := \det \left( B + \sum_{j=1}^m z_j A_j \right)
\]
is either stable or identically zero.
\item Suppose $p \in \ensuremath{\mathbb K}[z_1, \dots, z_m]$ is homogeneous of degree $k$.
If $p$ is stable, then $p$ is Lorentzian (equivalently,
strongly/completely log-concave, whenever $\log(\cdot)$ is defined over
$\ensuremath{\mathbb K}$).
\end{enumerate}
\end{proposition}
The two parts of this proposition were proved over $\ensuremath{\mathbb K} = \ensuremath{\mathbb R}$ in
\cite{BB1,BH}, respectively. Thus, they hold over an arbitrary real
closed field $\ensuremath{\mathbb K}$ by Tarski's principle, since all of these notions can
be expressed in the first-order language of ordered fields.
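A numerically checkable shadow of part (1) over $\ensuremath{\mathbb R}$: restricting to one variable ($m = 1$) with $A$ positive definite, stability forces $z \mapsto \det(B + zA)$ to have only real roots, since after whitening by $A^{-1/2}$ the roots are minus the eigenvalues of a symmetric matrix. A Python sketch on a random example (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)

# Random symmetric B and a random positive definite A (4x4).
X = rng.standard_normal((4, 4))
B = (X + X.T) / 2
Y = rng.standard_normal((4, 4))
A = Y @ Y.T + 4 * np.eye(4)

# det(B + z*A) = det(A) * det(A^{-1/2} B A^{-1/2} + z*I), so its roots are
# minus the eigenvalues of the *symmetric* matrix A^{-1/2} B A^{-1/2},
# hence real: the univariate shadow of stability.
w, V = np.linalg.eigh(A)
A_inv_sqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
M = A_inv_sqrt @ B @ A_inv_sqrt
roots = -np.linalg.eigvalsh(M)          # all real by construction

# The polynomial should vanish (up to round-off) at each computed root.
residuals = [abs(np.linalg.det(B + r * A)) for r in roots]
```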
Also note that the equivalence in the second part indeed makes sense over
some real-closed fields, e.g.\ $\ensuremath{\mathbb K} = \ensuremath{\mathbb R}$, or $\ensuremath{\mathbb K}$ the field of convergent
(real) Puiseux series with rational powers, where it was proved in
\cite[Theorem 3.19]{BH}. The notions of Lorentzian and
strongly/completely log-concave polynomials over the latter field can be
found in \cite{BH} (see p.~861).
\begin{remark}
Schoenberg's Euclidean embeddability result (Theorem~\ref{Tschoenberg})
-- stated and used above -- turns out to have an alternate
characterization in a specific real-closed field: the convergent
generalized Puiseux series $\ensuremath{\mathbb R} \{ t \}$ with real powers. Recall, the
elements of $\ensuremath{\mathbb R} \{ t \}$ are series
\[
\sum_{n=0}^\infty c_n t^{\alpha_n}, \qquad c_0, c_1, \ldots \in \ensuremath{\mathbb R},
\]
satisfying:
(a)~the exponents $\alpha_0 < \alpha_1 < \cdots$ are real;
(b)~$\{ -\alpha_n : n \geq 0, \ c_n \neq 0 \}$ is well-ordered; and
(c)~$\sum_n c_n t^{\alpha_n}$ is convergent on a punctured open disk
around the origin. Such a series is defined to be positive if its leading
coefficient is positive. It is known (see e.g.\ \cite[Section
1.5]{Speyer}) that this field is real-closed; moreover, its algebraic
closure is the degree-$2$ extension $\mathbb{C} \{ t \}$, where
real coefficients in the definition above are replaced by complex ones.
Now we claim that the following holds.
\end{remark}
\begin{theorem}
A finite metric space $X = \{ x_0, x_1, \dots, x_k \}$ isometrically
embeds inside Hilbert space $\ell^2$ (over $\ensuremath{\mathbb R}$) if and only if the
matrix $E_X := (t^{d(x_i, x_j)^2})_{i,j=0}^k$ is positive semidefinite in
$\ensuremath{\mathbb R} \{ t \}$.
\end{theorem}
\begin{proof}
This follows from a chain of equivalences:
$X$ is Euclidean-embeddable if and only if (by~\cite{Schoenberg38b},
via Theorem~\ref{Tschoenberg}) the matrix $(-d(x_i,x_j)^2)_{i,j=0}^k$ is
conditionally positive semidefinite,
if and only if (by~\cite{Schoenberg38b}) $(\exp (-\lambda
d(x_i,x_j)^2))_{i,j=0}^k$ is positive semidefinite for all $\lambda \geq
0$,
if and only if (replacing $e^{-\lambda} \leadsto q \in (0,1)$ and via
\cite[Section~1.5]{Speyer}) $E_X$ is positive semidefinite in $\ensuremath{\mathbb R} \{ t
\}$.
\end{proof}
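The criterion of Theorem~\ref{Tschoenberg}, as used throughout this section, can be packaged as a small Python test (a sketch of the classical Schoenberg criterion: embeddability is equivalent to positive semidefiniteness of the modified Cayley--Menger matrix with respect to a base point). The two example metrics below are the unit square and the non-Euclidean four-point space from the proof above:

```python
import numpy as np

def is_euclidean(D, tol=1e-9):
    """Schoenberg criterion (sketch): D[i, j] holds *squared* distances; the
    space embeds in Euclidean space iff the modified Cayley--Menger matrix
    G[i, j] = D[0, i] + D[0, j] - D[i, j]  (i, j >= 1)
    is positive semidefinite."""
    D = np.asarray(D, dtype=float)
    k = D.shape[0]
    G = np.array([[D[0, i] + D[0, j] - D[i, j]
                   for j in range(1, k)] for i in range(1, k)])
    return bool(np.linalg.eigvalsh(G).min() >= -tol)

# Unit square in the plane: Euclidean by construction.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
D_square = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)

# One point at distance 1 from three mutually-distance-2 points: not Euclidean.
D_bad = np.array([[0, 1, 1, 1],
                  [1, 0, 4, 4],
                  [1, 4, 0, 4],
                  [1, 4, 4, 0]], dtype=float)
```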
We conclude this part by formulating the promised tropical version of
some of our results above. As the notion of a $\ensuremath{\mathbb K}_{\geq 0}$-valued metric
(can be easily defined, but) is not very common in the literature, we
formulate the next result in greater generality.
\begin{theorem}
Suppose $\ensuremath{\mathbb K}$ is a real-closed field, and $\Delta, M \in \ensuremath{\mathbb K}^{k \times k}$
are symmetric matrices, with $\Delta$ a positive definite diagonal
matrix.
\begin{enumerate}
\item The multi-affine polynomials
\[
p_\pm(z_1, \dots, z_k) := \det (\pm \Delta + \Delta_{\bf z} M), \qquad
{\bf z} \in \overline{\ensuremath{\mathbb K}}^k
\]
are stable, with coefficients in $\ensuremath{\mathbb K}$.
\item Define the homogenization $\widetilde{p}(z_0, z_1, \dots, z_k) :=
\det(z_0 \Delta + \Delta_{\bf z} M)$. The following are equivalent:
\begin{enumerate}
\item $\widetilde{p}$ is stable.
\item $\widetilde{p}$ is Lorentzian (equivalently, strongly/completely
log-concave, whenever $\log(\cdot)$ is defined over $\ensuremath{\mathbb K}$).
\item All coefficients of $\widetilde{p}$ lie in $\ensuremath{\mathbb K}_{\geq 0}$.
\item $p(1,\dots,1) > 0$, and the polynomial $(z_1, \dots, z_k) \mapsto
\displaystyle \frac{p(z_1, \dots, z_k)}{p(1,\dots,1)}$ is stable and has
non-negative coefficients that sum to $1$.
\item The matrix $M$ is positive semidefinite.
\end{enumerate}
\end{enumerate}
\end{theorem}
One can prove this result from the same result over $\ensuremath{\mathbb K} = \ensuremath{\mathbb R}$, via
Tarski's principle. Alternately, the case of $\ensuremath{\mathbb K} = \ensuremath{\mathbb R}$ was essentially
proved in Theorem~\ref{Tlorentz}; the slightly more general versions here
(for $\ensuremath{\mathbb K} = \ensuremath{\mathbb R}$, with $M$ instead of $\ensuremath{\mathcal{D}}_G$) require only minimal
modifications to the earlier proofs. These proofs (for $\ensuremath{\mathbb K} = \ensuremath{\mathbb R}$) go
through over arbitrary real-closed $\ensuremath{\mathbb K}$ with minimal further
modifications, given Proposition~\ref{Prealclosed}.
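For $k = 2$ the direction (c) $\Leftrightarrow$ (e) can be seen by hand: expanding $\det(z_0 \Delta + \Delta_{\bf z} M)$ gives the coefficients $d_1 d_2$, $d_2 m_{11}$, $d_1 m_{22}$ and $\det M$, which are all nonnegative exactly when $M$ is positive semidefinite. A Python sketch checking this hand expansion against a direct numerical evaluation:

```python
import numpy as np

def homogenized_coeffs(Delta, M):
    """Coefficients of det(z0*Delta + diag(z)*M) for k = 2, expanded by hand:
    det = d1*d2*z0^2 + d2*m11*z0*z1 + d1*m22*z0*z2 + det(M)*z1*z2."""
    d1, d2 = np.diag(Delta)
    (m11, m12), (_, m22) = M
    return {
        "z0^2": d1 * d2,
        "z0*z1": d2 * m11,
        "z0*z2": d1 * m22,
        "z1*z2": m11 * m22 - m12 ** 2,
    }

Delta = np.diag([1.0, 2.0])
M_psd = np.array([[2.0, 1.0], [1.0, 1.0]])    # PSD: det(M) = 1 > 0
M_indef = np.array([[1.0, 2.0], [2.0, 1.0]])  # indefinite: det(M) = -3

coeffs_psd = homogenized_coeffs(Delta, M_psd)
coeffs_indef = homogenized_coeffs(Delta, M_indef)

# Cross-check the expansion at a random point.
rng = np.random.default_rng(2)
z0, z1, z2 = rng.standard_normal(3)
lhs = np.linalg.det(z0 * Delta + np.diag([z1, z2]) @ M_psd)
rhs = (coeffs_psd["z0^2"] * z0 ** 2 + coeffs_psd["z0*z1"] * z0 * z1
       + coeffs_psd["z0*z2"] * z0 * z2 + coeffs_psd["z1*z2"] * z1 * z2)
```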
We end the paper with some observations. The results in this work reveal
several novel invariants associated to all finite connected unweighted
graphs -- more generally, to all finite metric spaces $X$:
\begin{enumerate}
\item The (real-stable) blowup-polynomials $p_X(\ensuremath{\mathbf n})$ and $u_X(n)$.
\item The degrees of $p_X, u_X$ -- notice by Proposition~\ref{Ppath} that
the degrees can be strictly less than the number of points in $X$, even
when $X$ is not a blowup of a smaller graph.
\item The largest root of $u_X(n)$ (which is always positive). By
Corollary~\ref{Cdistancespectrum}, for $X = G$ a graph this equals
$\frac{2}{2+\lambda}$, where $\lambda$ is the smallest eigenvalue of
$D_G$ above $-2$.
\item The blowup delta-matroid $\mathcal{M}_{\ensuremath{\mathcal{D}}_X}$; and for $X$ an
unweighted tree, the delta-matroid $\mathcal{M}'(X)$ which is
combinatorial rather than matrix-theoretic.
\end{enumerate}
\noindent It would be interesting and desirable to explore if -- say for
$X = G$ a graph metric space -- these invariants can be related to
``traditional'' combinatorial graph invariants.
\newpage
We recorded two songs, "Garden Girl" and "What to Leave Behind", at our friend Brandon's Blind Dog Studio. Brandon was joined by Mike; both play in the band Counterfeit Party, and they helped record the two singles. We also had Jenny from Bad Rooster Images taking pictures of the session. The session was great and the pictures look awesome. We want to thank Brandon, Mike, and Jenny for all of their hard work. We should have the songs up before year's end.
Gideon is an unincorporated community and census-designated place (CDP) in Cherokee County, Oklahoma, United States. The population was 49 at the 2010 census.
Geography
Gideon is located northwest of the center of Cherokee County along Oklahoma State Highway 82. Tahlequah, the Cherokee County seat, is to the southeast, and Locust Grove is to the northwest.
According to the United States Census Bureau, the Gideon CDP has a total area of , all land.
Demographics
References
Census-designated places in Cherokee County, Oklahoma
Census-designated places in Oklahoma
\section{Introduction}
A BFKL-based initial state dipole model in impact parameter space has been developed in a series of papers \cite{Avsar:2005iz,Avsar:2006jy,Avsar:2007xg,Flensburg:2008ag,Flensburg:2010kq}. In the Good--Walker approach \cite{Good:1960ba} the two incoming hadronic particles are seen as linear superpositions of eigenstates to the interaction. Miettinen and Pumplin suggested that the interaction eigenstates are parton cascades \cite{Miettinen:1978jb}, which in this model are generated with a dipole model. Only inclusive, and some semi-inclusive, observables have been studied in previous publications, but we now want to use the information in the initial state cascades to produce exclusive final states. Some of the partons in the initial state cascade will be selected as real partons, while others will be seen as virtual fluctuations that will be absorbed. The real partons will be further evolved through final state radiation with ARIADNE \cite{Lonnblad:1992tz} and hadronisation with PYTHIA \cite{Sjostrand:2006za,Andersson:1983ia}.
The way the real partons are selected is not straightforward, and there are many details at higher orders that have a large impact on observables. Some of these details may be settled by further study, but others seem to be out of reach of current perturbative calculations. It should be noted that the results presented here are not the only possible way of extending the inclusive formalism. This short summary will not go into detail on these complications, leaving them for a publication in the near future.
\section{The Cascade}
The cascades are generated from an initial valence state of dipoles in impact parameter space, and is then evolved in rapidity with a BFKL-based splitting probability. The foundation of the cascade is the leading logarithm formalism first used by Mueller and coworkers \cite{Mueller:1993rr, Mueller:1994jq, Mueller:1994gb}:
\begin{eqnarray}
\frac{d\mathcal{P}}{dY}=\frac{\bar{\alpha}}{2\pi}d^2\pmb{z}
\frac{(\pmb{x}-\pmb{y})^2}{(\pmb{x}-\pmb{z})^2 (\pmb{z}-\pmb{y})^2},
\,\,\,\,\,\,\, \mathrm{with}\,\,\, \bar{\alpha} = \frac{3\alpha_s}{\pi}.
\label{eq:dipkernel1}
\end{eqnarray}
Here $\mathcal{P}$ is the emission probability density, integrated over the transverse position $\pmb{z}$ of the emitted gluon. $\pmb{x}$ and $\pmb{y}$ are the transverse positions of the partons in the emitting dipole.
The cascade in our model introduces a number of corrections to Mueller's original formulation.
\begin{itemize}
\item \emph{The momenta} of the partons are necessary to determine for many of the following corrections. In DIPSY, each emission gets a transverse recoil from each parent of $\pmb{r}_{t}/\pmb{r}_{t}^2$ where $\pmb{r}_t$ is the transverse distance between the emission and the parent. With the transverse momentum set and the rapidity known from the emission, the $p_+$ of an emission can be calculated through
$$p_+ = p_Te^{-y}$$
where $y$ is the rapidity. Notice that the rapidity is negative for particles incoming along the negative $z$-axis. The negative lightcone momentum depends on the structure of the rest of the event, but will in the cascade be assumed to be that of an on-shell gluon:
$$p_- = p_T e^{y}.$$
To conserve momentum, recoils in $p_T$ and $p_+$ are given to the partons in the emitting dipole. The $p_-$ will be delivered from the colliding partons from the other side, and the limitations are taken into account when the interactions are decided.
\item \emph{Next to leading logarithm effects} are known to be large in BFKL. While this is not a full NLL calculation, it includes what is known to be the dominating parts of NLL \cite{Salam:1999cn}.
\begin{itemize}
\item \emph{The running coupling} can be taken in account by replacing $\alpha_s$ with $\alpha_s(\mu)$ in the emission probability. The scale $\mu$ is set by the largest $p_T$ involved in the emission.
\item The \emph{non-singular terms} in the splitting function suppress large $z$-values, where the emitted parton takes the larger part of the momentum. Most of this effect is included by energy conservation, and requiring that the emitted partons are ordered in lightcone momentum $p_+$.
\item The \emph{energy scale terms} are essentially equivalent to projectile-target symmetry and are taken into account by not only requiring ordering in $p_+$, but also in $p_-$.
\end{itemize}
\item \emph{Confinement} is added by giving the gluon a mass. That modifies the splitting probability to have an exponential suppression for large transverse distances. This slows down the increase in cross section with energy.
\item \emph{Saturation} is present in the interaction in Muellers original formulation through multiple interactions. This only allows for loops that are cut by the interaction frame, not including loops contained in one of the cascades. To fully include saturation, and to restore frame independence, a 2 to 2 dipole ``swing'' is introduced. The swing can be interpreted either as a quadrupole correction, or as the exchange of virtual gluons. It tends to swing larger dipoles into smaller dipoles, which suppresses the growth of the cross section, and can give rise to 2 to 1 mergings in the final state, if one of the outgoing dipoles is reabsorbed.
\item \emph{Coherence} is important when a long distance emission is made from a parton with other partons close by. The long transverse wavelength of the emitted gluon cannot resolve the group of closely spaced partons, and the emission will treat the group of partons as one in terms of recoil and ordering. Without coherence, the restrictions of $p_-$ ordering would be overestimated, and the full phase space of allowed emissions would not be taken into account.
\end{itemize}
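The lightcone kinematics in the \emph{momenta} item above can be made concrete; note that in this convention $p_+ p_- = p_T^2$ for an on-shell massless gluon (a Python sketch):

```python
import math

def lightcone(pT, y):
    """Massless on-shell lightcone momenta in the cascade's convention:
    p+ = pT * exp(-y) and p- = pT * exp(y), so p+ * p- = pT**2."""
    return pT * math.exp(-y), pT * math.exp(y)

p_plus, p_minus = lightcone(pT=2.0, y=1.5)
mass_sq = p_plus * p_minus - 2.0 ** 2   # = 0 for an on-shell massless gluon
```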
\section{The Interaction}
The elastic scattering amplitude between two colliding dipoles at a given impact parameter $b$ is in Mueller's formulation given by
\begin{equation}
f_{ij} = f(\pmb{x}_i,\pmb{y}_i|\pmb{x}_j,\pmb{y}_j) =
\frac{\bar{\alpha}_s^2}{8}\biggl[\log\biggl(\frac{(\pmb{x}_i-\pmb{y}_j)^2
(\pmb{y}_i-\pmb{x}_j)^2}
{(\pmb{x}_i-\pmb{x}_j)^2(\pmb{y}_i-\pmb{y}_j)^2}\biggr)\biggr]^2.
\label{eq:dipamp}
\end{equation}
This formula is in our model replaced by a similar calculation in $p$-space rather than $x$-space, and modified in the same way as the emission in the cascade for recoils and lightcone ordering, NLL effects, confinement and coherence. Saturation is already present in Mueller's form through multiple interactions. The details of these effects are mainly guided by frame invariance, requiring that the corrections are the same in the interaction as in the evolution.
Summing over all pairs of dipoles, one from the state incoming from the negative $z$-axis, and one from the positive $z$-axis, gives the full first order elastic amplitude $F(\pmb{b})=\sum f_{ij}$. Allowing multiple interactions gives the unitarised elastic amplitude
\begin{equation}
T(\pmb{b})=1-e^{-F(\pmb{b})}.
\label{tf-relationmueller}
\end{equation}
Averaging over the incoming cascades, the interaction eigenstates, from both states now gives the elastic cross section
\begin{eqnarray}
d\sigma_{\text{el}}/d^2b &=& \langle \Psi_{\text{in}} | T | \Psi_{\text{in}} \rangle^2 = \langle \sum c_n \phi_n \Big| \, T \, \Big| \sum c_n \phi_n \rangle^2 \nonumber\\
&=& \left( \sum c_n^2 T_n \right)^2 = \langle T\rangle^2
\end{eqnarray}
where $\Psi_{\text{in}}$ is the incoming mass eigenstates and $\phi_n$ are the interaction eigenstates, that is all pairs of dipole cascades, with weights $c_n$. $T_n = 1-e^{-\sum_{ij \in n} f_{ij}}$ is the scattering eigenvalue for the pair of cascades $n$. By replacing $\Psi_{\text{in}}$ with $\Psi_{X}$ the diffractive excitation cross section can be calculated, and the optical theorem gives the total cross section. From that, the non-diffractive cross section can be found:
\begin{eqnarray}
d\sigma_{\text{diff + el}}/d^2 b
&=&\sum_X \langle \Psi_{\text{in}} | T | \Psi_{X} \rangle \langle \Psi_{X} | T |
\Psi_{\text{in}} \rangle =\langle T^2 \rangle. \nonumber\\
d\sigma_{\text{tot}} / d^2b&=& 2\langle \Psi_{\text{in}} | T | \Psi_{\text{in}} \rangle = 2 \langle T \rangle \\
d\sigma_{\text{non-diff}}/d^2b& = & d\sigma_{\text{tot}} / d^2b-d\sigma_{\text{diff+el}} / d^2b = \langle 1-e^{-2F}\rangle, \nonumber
\label{eq:eikonalcross}
\end{eqnarray}
where eq.~(\ref{tf-relationmueller}) was used in the last equality. Identifying $e^{-2F} = e^{-2\sum f_{ij}}$ as the non-interaction probability, the non-interaction probability of the individual dipole pairs factorises, and each pair of dipoles $i,j$ has a non-diffractive interaction probability of $1-e^{-2f_{ij}}$. From this it is possible to select which dipoles interact in each collision.
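The Good--Walker algebra above can be illustrated with a toy Monte Carlo (a Python sketch; the distribution of $F$ is invented for the illustration, whereas in DIPSY it comes from summing $f_{ij}$ over dipole pairs). Note the exact per-event identity $2T = T^2 + (1 - e^{-2F})$, i.e.\ $\sigma_{\text{tot}} = \sigma_{\text{diff+el}} + \sigma_{\text{non-diff}}$, and that diffractive excitation $\langle T^2\rangle - \langle T\rangle^2$ is the variance of $T$, driven by cascade fluctuations:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy ensemble of cascade pairs at fixed impact parameter: each "event"
# carries a Born amplitude F >= 0, here drawn from a made-up distribution.
F = rng.gamma(shape=2.0, scale=0.5, size=200_000)
T = 1.0 - np.exp(-F)

sigma_tot = 2 * T.mean()                        # optical theorem
sigma_el = T.mean() ** 2                        # elastic
sigma_diff_plus_el = (T ** 2).mean()            # elastic + diffractive
sigma_diff = sigma_diff_plus_el - sigma_el      # diffractive excitation
sigma_nondiff = (1.0 - np.exp(-2 * F)).mean()   # non-diffractive
```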
\begin{figure}
\includegraphics[width=0.8\linewidth]{figs/CDF_1990_S2089246_d03-x01-y01.eps}
\includegraphics[width=0.8\linewidth]{figs/CDF_2001_S4751469_d03-x01-y01.eps}
\includegraphics[width=0.8\linewidth]{figs/CDF_2001_S4751469_d02-x01-y01.eps}
\caption{ \label{fig:CDF} Comparison with a selection of data from CDF \cite{Abe:1989td,Affolder:2001xt}. The plots have been generated with Rivet \cite{Buckley:2010ar}.}
\end{figure}
\section{Finding the final state}
By tracing the interacting partons back towards the valence partons, the interacting gluon chains can be found. Since the timelike part of the shower will be regarded as final state radiation taken care of by ARIADNE, only the emissions directly connected to the interacting chains are kept. All other emissions are reabsorbed, to avoid double counting with the final state radiation.
\subsection{High $p_T$ suppression}
As can be seen from eq.~(\ref{eq:dipkernel1}), dipoles in the cascade are emitted with a weight of $d^2r/r^2$, corresponding to $d^2p_T/p_T^2$. However, the maximum $p_T$ in an interaction, whether coming from the cascade or from the interaction, should come with a weight $d^2p_T/p_T^4$. Mueller's original model was designed for inclusive cross sections, which are not affected by small dipoles, so this discrepancy was not a problem. To produce a correct $p_T$-spectrum in the final state, though, it is necessary to reweight the maxima. Reweighting according to \cite{Andersson:1995ju}, maxima in $p_T$ are reabsorbed with a probability
$$P_{\text{abs}} = 1 - \frac{p_{T\text{min}}^2}{p_{T\text{max}}^2},$$
where the maximum $p_T$ is compared to the closest minimum along the parton chain.
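The reabsorption step can be sketched as a simple accept/reject (a Python sketch; the function name and interface are invented here): a local $p_T$ maximum survives with probability $(p_{T\text{min}}/p_{T\text{max}})^2$, turning the raw $d^2p_T/p_T^2$ weight of maxima into the desired $d^2p_T/p_T^4$:

```python
import random

def keep_local_pt_maximum(pt_max, pt_min, rng=random):
    """Reabsorb a local pT maximum with probability 1 - (pt_min/pt_max)^2,
    where pt_min is the closest minimum along the parton chain.
    Returns True if the parton survives."""
    p_absorb = 1.0 - (pt_min / pt_max) ** 2
    return rng.random() >= p_absorb

# Frequency check: survival fraction should approach (pt_min/pt_max)^2.
rng = random.Random(4)
trials = 100_000
kept = sum(keep_local_pt_maximum(4.0, 1.0, rng) for _ in range(trials))
survival = kept / trials        # expect roughly (1/4)^2 = 0.0625
```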
The extra phase space opened up by these reabsorptions (mainly the $p_-$ ordering is significantly changed) is approximately covered by the coherence in the cascade: a maximum in $p_T$ means that the reabsorbed dipole is smaller than the preceding emission, and thus that emission will not have resolved the small dipole, and the full phase space of the merged dipole has been taken into account.
\subsection{FSR and hadronisation}
The gluon chains connected by the exchanges in the interaction, together with the high-$p_T$ reabsorptions above, determine the ISR. The partons and their colour structure are then passed on to ARIADNE \cite{Lonnblad:1992tz} to perform FSR. The phase space is restricted to spacelike emissions, matching the restriction in DIPSY.
After the FSR, the colour dipoles are allowed to fragment into hadrons as colour strings using PYTHIA \cite{Sjostrand:2006za,Andersson:1983ia}. Here the low-$p_T$ cutoff in ARIADNE is matched to PYTHIA's starting scale to cover all of phase space without double counting.
\begin{figure}[t]
\includegraphics[width=0.8\linewidth]{figs/CDF_2002_S4796047_d02-x01-y01.eps}
\caption{ \label{fig:CDF2} Comparison with a selection of data from CDF \cite{Acosta:2001rm}. The plots have been generated with Rivet \cite{Buckley:2010ar}.}
\end{figure}
\section{Results}
As can be seen in previous publications, the following inclusive cross sections are well described at high energy:
\begin{itemize}
\item Total and elastic $pp$ as function of $\sqrt{s}$ and $t$,
\item Single and double diffractive excitation in $pp$ as function of $M_X$,
\item Total $\gamma^*p$ as function of $W$ and $Q^2$ (also low $Q^2$ with VMD),
\item Deeply virtual compton scattering as function of $W$, $Q^2$ and $t$ (also low $Q^2$ with VMD),
\item $\gamma^*p \rightarrow \rho p$ as function of $W$, $Q^2$ and $t$ (also low $Q^2$ with VMD).
\end{itemize}
The exclusive final state extension introduces many ambiguities from higher orders or non-perturbative QCD, which cannot easily be solved from first principles. All the plots are from a preliminary version of DIPSY from spring 2010. We will here discuss how it compares to data, and how sensitive the observables are to details in the algorithm.
\begin{figure}
\includegraphics[width=0.9\linewidth]{figs/ALICENchMult.eps}
\caption{ \label{fig:ALICE} Comparison with a selection of data from ALICE \cite{Aamodt:2010ft,Aamodt:2010pp}.}
\end{figure}
A selection of data is shown in figs \ref{fig:CDF}, \ref{fig:CDF2} and \ref{fig:ALICE}.
\begin{itemize}
\item The pseudorapidity distribution fits well with data in this preliminary version of DIPSY, but it is sensitive to details, mainly regarding how the $p_\pm$ ordering is implemented.
\item The leading $p_T$ distribution in region close to the trigger particle fits well with data and is relatively stable between different versions. A successful prediction.
\item The distribution in $\Delta \phi$ around a minijet does not have enough activity in the transverse and away regions. This is related to the rapidity distance between a minijet and its recoil. Possibly it is caused by an overestimate of the absorption of partons due to local $p_T$-maxima, giving too large rapidity distances between the remaining partons.
\item The multiplicity distribution from CDF follows data approximately over several orders of magnitude. This holds true for most versions of DIPSY. The bias towards low multiplicities in this version is possibly related to the same overestimate of reabsorption. The same effect is seen in the ALICE multiplicity distributions, but at the higher energies DIPSY undershoots data somewhat in the tail.
\end{itemize}
The suppression algorithm for high $p_T$ in this preliminary version of DIPSY has a known flaw, allowing some small dipoles to survive unsuppressed. This gives a too strong tail in the $p_T$ spectrum, but it is corrected in later versions.
The measured $\text{d}N_{ch}/\text{d}\eta$ at midrapidity at ALICE overshoots our simulations at high energies in this version, but the result is sensitive to several details in the model. More analysis is needed to understand the energy dependence in DIPSY.
\section{Conclusions and Outlook}
We have over the last years developed a BFKL-based dipole model in transverse impact parameter space, including the major parts of NLL, confinement and saturation. In previous publications this model has been shown to reproduce a wide range of total, elastic and diffractive cross sections with few parameters. Now we are producing exclusive final states with the same model. There are however many effects at higher orders which it is not clear how to include, but which still have a significant effect on observables. Some effects can be accounted for with intuitive solutions, while others require tuning from data. The model can reproduce many exclusive observables at the Tevatron and the LHC. It should be noted that the results here are from a preliminary version of DIPSY, with several known flaws, and we expect the results in an upcoming publication to be improved.
While all data here are from $pp$ colliders, the model can be applied to any high-energy collision, for example DIS, $AA$, $pA$ and $eA$. $AA$ will be the first to be studied after $pp$.
We would in the future also like to extend the model to include diffractive final states, but since the interaction amplitude does not factorise as neatly as in the non-diffractive case, it is more complicated. We do have some ideas though, and hope to return to this topic in future publications.
Q: WMI performance monitors not available On a Windows Server 2008 R2 Datacenter (Service Pack) I am trying to query the following counters, but getting the error 'Invalid Query' in wbemtest.exe
Win32_PerfRawData_PerfOS_Memory
Win32_PerfRawData_PerfOS_Processor
They are not even turning up in the list of objects! I am new to this, so my apologies if you feel there is information missing. I will make it available as the questions arrive.
The result of:
Get-WmiObject -Query "Select * from Win32_PerfRawData_PerfOS_Memory"
is as follows:
Get-WmiObject : Invalid query "Select * from Win32_PerfRawData_PerfOS_Memory"
At line:1 char:1
+ Get-WmiObject
+ ~~~~~~~~~~~~~
+ CategoryInfo : InvalidArgument: (:) [Get-WmiObject], ManagementException
+ FullyQualifiedErrorId : GetWMIManagementException,Microsoft.PowerShell.Commands.GetWmiObjectCommand
A: Back up the %windir%\system32\wbem folder, then switch to %windir%\system32\wbem and run mofcomp Wmi.mof, since this is the MOF file for the Win32_PerfRawData classes.
If still fails, consider to rebuild WMI Repository:
https://blogs.technet.microsoft.com/askperf/2009/04/13/wmi-rebuilding-the-wmi-repository/
package models
import (
"github.com/UserStack/ustackweb/backend"
)
// StatsCollection provides access to the statistics reported by the backend.
type StatsCollection struct {
}

// All fetches all statistics from the backend as a name -> value map.
// A nil map is returned if the connection cannot be established or the
// backend call fails.
func (this *StatsCollection) All() (stats map[string]int64, err *backend.Error) {
	connection, err := backend.Connection()
	if err != nil {
		return
	}
	backendStats, backendError := connection.Stats()
	// Report the error to the backend so it can verify the connection state.
	backend.VerifyConnection(backendError)
	if backendError == nil {
		stats = backendStats
	}
	return
}

// Stats returns a new StatsCollection.
func Stats() *StatsCollection {
	return &StatsCollection{}
}
Dual-sensitized modification engineering with enhanced photocatalytic degradation for organic dye
Yingying Zhao, Jiejing Zhang, Wuyou Fu
https://doi.org/10.21203/rs.3.rs-246705/v1
In order to use solar irradiation efficiently, we need to fabricate a photocatalyst that is highly reactive under visible light. Titanium dioxide (TiO2) is a stable catalyst whose photoexcited electron–hole pairs can be applied to degrade organic pollutants, but its optical absorption is limited to wavelengths below 380 nanometers. In this work, we designed and fabricated a unique TiO2-based heterojunction photocatalyst, dual-sensitized by cadmium sulfide (CdS) and lead sulfide (PbS), with a wide-spectrum (300-800 nm) response. Moreover, the degradation efficiency of the nanocomposites reached 99.9% under visible light, 5 times that of pure TiO2. This ternary Z-scheme structure makes the materials promising photocatalysts.
Electronic Materials and Devices
photocatalytic degradation
PbS/CdS/TiO2NSs
Optical materials and properties
Heterojunction photocatalyst
With the development of science and technology, fossil energy reserves are nearly exhausted; at the same time, numerous organic toxins are produced and released into water in the course of industrialization(1-3). Therefore, strategies for obtaining clean energy and for degrading organic pollutants have recently been widely investigated(4-6). Nowadays, semiconductor materials receive considerable attention due to their attractive applications, including energy conversion, storage, solar fuels, photocatalysis and medicine(7, 8). They are regarded as potential catalysts for wastewater management and water splitting, as they can directly decompose pollutants adsorbed on their surface through redox processes(9, 10).
TiO2 is considered a suitable photocatalyst. Although it offers stability, cost effectiveness and chemical inertness, it still suffers from a large band gap (~3.2 eV), a high recombination rate of photogenerated electron-hole pairs and a narrow light response range(11-15). Numerous researchers have shown that combining the photoprocess with physical or chemical methods is an effective way to improve the efficiency of photodecomposition over TiO2(16-19). Among these approaches, one of the most effective is to build multiple heterojunctions between semiconductors(7, 20-22).
As an important visible-light-sensitive semiconductor, cadmium sulfide nanocrystals (CdS NCs) are the most investigated of the metal chalcogenides owing to their direct band gap of 2.42 eV(23-25). CdS is therefore easily photoexcited and uses light to a larger extent(26). Moreover, lead sulfide nanocrystals (PbS NCs) can be excited by even lower-energy light, since their band gap is as narrow as 0.4 eV(27-30), which places their absorption edge in the near-infrared region(31). As far as we know, only a few studies have examined CdS/PbS co-sensitized, vertically aligned TiO2 NSs array films for photocatalytic degradation. Hence, the study of CdS/PbS/TiO2 NSs nanocomposites is of remarkable interest and remains challenging.
In this work, we report a unique recyclable heterogeneous PbS/CdS/TiO2 NSs nanocomposite photocatalyst. CdS and PbS play three roles in the photocatalyst: (1) PbS/CdS tunes the energy band structure of the nanocomposite; (2) the growing number of quantum dots enlarges the specific surface area of the material; (3) multiple heterojunctions are built between the semiconductors. This approach may offer a new alternative for future photocatalyst design and improvement in practical applications.
2. Experimental Section
2.1 Materials
The materials for TiO2 NSs, Ti(OC4H9)4 and (NH4)2TiF6, were obtained from the Tianjin Institute of Guangfu Fine Chemicals and Aladdin Reagents Co., Ltd. The materials for CdS NCs, Cd(NO3)2·4H2O, KOH, NH4NO3 and CH4N2S, were purchased from Aladdin Reagents Co., Ltd. Pb(CH3COO)2 and Na2S, employed for the synthesis of PbS NCs, were obtained from the Tianjin Institute of Guangfu Fine Chemicals and Xilong Chemical Co., Ltd.
2.2 Synthesis
2.2.1 TiO2NSs
Ti(OC4H9)4 (1 ml) and (NH4)2TiF6 (0.5 g) were added to a solution of deionized water (30 ml) and hydrochloric acid (30 ml). The precursor solution was then transferred into a hydrothermal reactor (100 ml) containing FTO substrates and reacted for 12 h at 170 ℃. Finally, the obtained product was washed with deionized water several times and dried in air.
2.2.2 CdS NCs
The CdS NCs were synthesized by chemical bath deposition (CBD). Firstly, Cd(NO3)2·4H2O (0.12 g), KOH (0.5611 g), NH4NO3 (2.412 g) and CH4N2S (0.30448 g) were dissolved in deionized water (80 ml). The TiO2 NSs were then placed into beakers containing the mixed solutions. Finally, the beakers were transferred into a thermostatic water bath and reacted at 80 ℃ for 10-40 min, respectively. The obtained products are abbreviated as CdS(10-40)/T.
2.2.3 PbS NCs
The PbS NCs were assembled onto CdS(10-40)/T by the successive ionic layer adsorption and reaction (SILAR) method, as follows. Firstly, the CdS(10-40)/T samples were immersed in 0.02 M Pb(CH3COO)2 ethanol solution (100 ml) for 5 min, then cleaned with methanol and dried for several minutes. The samples were next immersed in 0.02 M anhydrous Na2S methanol solution (100 ml) for 5 min, then cleaned with methanol and dried for several minutes. This process constitutes one cycle, and the final products are abbreviated as PbS(nC)/CdS(10-40)/T.
2.3 Characterization
All samples were measured in air. X-ray diffraction measurements were performed on a Rigaku D/max-2500 diffractometer (λ = 0.154056 nm) at a scanning rate of 0.3° s-1 over a 2θ range of 20° to 80°. UV-vis curves were obtained with a UV-3150 double-beam spectrophotometer. Morphology, size and crystallographic orientation were imaged by SEM (FEI MAGELLAN 400 scanning electron microscope) and transmission electron microscopy (TEM, JEM–2100F, 200 kV). Elemental composition and distribution were measured by EDS. X-ray photoelectron spectra (XPS) and valence-band X-ray photoelectron spectra (VB-XPS) were obtained with an ESCALAB-250 photoelectron spectrometer. Steady-state photoluminescence (PL) spectra were recorded on a Ramascope system (Renishaw, London, UK) with a 473 nm excitation laser. Transient photocurrent curves were obtained with a three-electrode test system. Electrochemical impedance spectroscopy (EIS) was performed with an SI 1296 electrochemical interface and SI 1260 impedance/gain-phase analyzer.
2.4 Photocatalytic test
Photocatalytic activity was tested in a self-built photocatalytic reaction system; a medium-pressure Hg lamp (300 W) and a xenon lamp (XLS-150A) served as the light sources. The catalyst (size: 3 cm × 1.5 cm) was added to the dye solution (700 ml, M) and stirred at 25 ℃ under UV-vis light to evaluate the degradation efficiency of organic pollutants. Before irradiation, the suspension was stirred in the dark to reach adsorption-desorption equilibrium on the photocatalyst surface. During photocatalytic degradation, samples (5-8 ml) were taken at fixed time intervals. The absorption peaks of the RhB or MB solutions at different concentrations were determined with a UV-vis spectrophotometer. The photocatalytic degradation efficiency was calculated by formula 1 in the supplementary files.
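Formula 1 itself is given only in the supplementary files; assuming it is the standard expression η = (C0 − Ct)/C0 × 100% and that absorbance at the dye's peak is proportional to concentration (Beer-Lambert law), the calculation can be sketched as:

```python
def degradation_efficiency(a0, at):
    """Degradation efficiency (%) from absorbance at the dye's peak.

    Assumes the Beer-Lambert law, so absorbance is proportional to
    concentration: eta = (C0 - Ct)/C0 * 100 = (A0 - At)/A0 * 100.
    """
    return (a0 - at) / a0 * 100.0

# Hypothetical absorbance readings at RhB's absorption peak
print(degradation_efficiency(1.25, 0.0025))  # 99.8 %
```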
Figure 1 shows the fabrication process of TiO2 NSs, CdS/TiO2 NSs and PbS/CdS/TiO2 NSs. X-ray diffraction (XRD) patterns of TiO2 NSs, CdS/TiO2 NSs and PbS/CdS/TiO2 NSs are shown in Figure 2a; the diffraction peaks at 25.34°, 37.83°, 48.10° and 55.06° correspond to the (101), (004), (200) and (211) crystal planes of the anatase TiO2 phase (JCPDS#21-1272)(32, 33). The blue curve presents new diffraction peaks at 43.97° and 52.08°, assigned to the (220) and (311) planes of cubic-phase CdS NCs. Beyond that, new diffraction peaks appear at 30.13° and 68.88° in the green curve, assigned to the (111) and (200) crystal planes of PbS NCs (JCPDS#65-2935). TEM, HRTEM and SAED patterns (Figure S1) further confirm that the samples were synthesized successfully. The X-ray photoelectron spectroscopy (XPS) spectrum of PbS/CdS/TiO2 NSs is shown in Figure 2b, and Figure S2 presents the characteristic peaks of Ti2p, Cd3d, Pb4f and S2p. The Ti 2p3/2 and Ti 2p1/2 peaks are located at 458.3 and 464.0 eV(34); the Cd 3d5/2 and Cd 3d3/2 peaks at 405.0 and 411.7 eV, respectively; and the S2p peak at 161.2 eV(35). The Pb4f peak, observed in Figure S2d, lies at about 138.3 eV, consistent with the previous literature(36). These results confirm that the ternary PbS/CdS/TiO2 NSs catalyst was successfully prepared.
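As an illustrative cross-check of the indexing above (not part of the authors' analysis), Bragg's law λ = 2d·sin(θ) converts the reported 2θ peak positions into interplanar d-spacings, using the Cu Kα wavelength given in Section 2.3:

```python
import math

WAVELENGTH_NM = 0.154056  # Cu K-alpha, as used for the XRD scans

def d_spacing(two_theta_deg, wavelength=WAVELENGTH_NM):
    """Interplanar spacing from Bragg's law: lambda = 2 * d * sin(theta)."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta))

# 2-theta peaks assigned to anatase TiO2 in Figure 2a
for peak, plane in [(25.34, "(101)"), (37.83, "(004)"),
                    (48.10, "(200)"), (55.06, "(211)")]:
    print(f"{plane}: d = {d_spacing(peak):.4f} nm")
```

The (101) peak at 25.34°, for example, gives d ≈ 0.351 nm, in line with tabulated anatase spacings.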
The morphologies of the samples were characterized by FESEM (Figure 3). Figures 3a-3b show bare TiO2 NSs; the inset is a cross-section. Dense and uniform TiO2 NSs grow vertically on the FTO substrate, with lengths of about 1.5-2.1 μm and thicknesses of about 200-240 nm. Figures 3c-3f show the CdS/TiO2 NSs composites with more active sites: the amount of CdS NCs increases with longer chemical bath deposition time (10-40 min), and the surface roughness of the composites increases accordingly. Figures 3h-3j present PbS/CdS/TiO2 NSs after different numbers of cycles. The amount of PbS NCs, and hence the specific surface area of the composites, increases with the number of cycles; a larger specific surface area is beneficial for photocatalysis. The EDS and EDS mapping spectra of PbS/CdS/TiO2 NSs are shown in Figures S4-S5: Pb, Cd and S are homogeneously distributed on the TiO2 NSs surface, and the atomic ratio of each element is consistent with expectations.
Figure 4a shows the UV–vis absorption spectra of TiO2 NSs and CdS(10-40 min)/T, and Figure 4c those of PbS(3-5C)/CdS(10-40 min)/T. Pure TiO2 NSs exhibit an absorption band edge at 380 nm, whereas in Figure 4c the absorption edge of PbS(7C)/CdS(40 min)/TiO2 NSs is widened to 800 nm(32, 36). As shown in Figures 4b and 4d, the absorption spectra of PbS(3-5C)/CdS(10-40 min)/T red-shift as the band gap (Eg) narrows. Eg was computed from the Kubelka–Munk function, where c is a proportionality constant and α and ν are the absorption coefficient and the frequency, respectively (see formula 2 in the supplementary files).
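The exact Kubelka–Munk expression used is in the supplementary files; assuming the commonly used direct-gap Tauc form (αhν)² = c(hν − Eg), the band gap can be read off as the x-axis intercept of a straight line fitted to the linear region of the Tauc plot. A minimal sketch on synthetic data:

```python
import numpy as np

def tauc_band_gap(energy_ev, alpha, fit_window):
    """Estimate a direct band gap by the Tauc method.

    Assumes (alpha * h*nu)**2 = c * (h*nu - Eg); Eg is where the
    straight line fitted over fit_window crosses (alpha*h*nu)**2 = 0.
    """
    y = (alpha * energy_ev) ** 2
    lo, hi = fit_window
    mask = (energy_ev >= lo) & (energy_ev <= hi)
    slope, intercept = np.polyfit(energy_ev[mask], y[mask], 1)
    return -intercept / slope  # x-axis intercept = Eg

# Synthetic absorption edge with Eg = 2.42 eV (CdS-like), for illustration
e = np.linspace(2.0, 3.2, 200)
a = np.sqrt(np.clip(e - 2.42, 0.0, None)) / e  # so (a*e)**2 = e - 2.42
print(round(tauc_band_gap(e, a, (2.6, 3.2)), 2))  # 2.42
```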
EIS measurements and steady-state photoluminescence (PL) were conducted to explore the charge-transfer dynamics. Figure 5a shows the EIS curves of TiO2 NSs, CdS(30)/T and PbS(5C)/CdS(30)/T. The TiO2 NSs exhibit a higher carrier resistance than PbS(5C)/CdS(30)/T, demonstrating that PbS(5C)/CdS(30)/T has a lower electron transport resistance and is therefore beneficial to electron and hole transport. The separation of electron-hole pairs plays a key role in the photocatalytic activity of a catalyst(37, 38). To further explore the charge recombination dynamics, steady-state PL was measured: the PL intensity of PbS(5C)/CdS(30)/T decreases markedly compared with TiO2 NSs (Figure 5b). The PL is quenched to some extent after the introduction of PbS and CdS, indicating that electrons and holes separate effectively, consistent with the EIS results (Figure 5a). The transient photocurrent response is shown in Figure 5c; the photoelectrochemical performance of the samples was tested to assess the number of photogenerated electron-hole pairs. PbS(7C)/CdS(30)/T achieved a higher current density than the other samples, implying enhanced electron-hole pair generation, which we attribute to the proper energy band matching between PbS, CdS and TiO2. The transfer process of electrons and holes is shown in Figure 5d.
The photocatalytic performances of the samples toward RhB and MB were studied under UV-vis light; experimental details are given in Supplementary Figure S6. Figures S6a and S6b show the UV-vis absorption of RhB and MB dye solutions degraded by 30-CdS/TiO2 NSs. As shown in Figure S6c, the concentration of the organic solution did not change significantly during the dark treatment (-30-0 min), indicating that the catalyst has no catalytic effect without the driving force of light. Remarkably, the degradation rate of 30-CdS/TiO2 NSs for RhB reached 99.8% within 40 min, whereas pure TiO2 degraded only 23.3% of the RhB under UV light, owing to the smaller number of light-excited oxidative holes and the higher electron-hole recombination rate of bare TiO2 NSs. Figure S6d shows an analogous degradation pattern for MB; all composites sensitized with CdS quantum dots show better photocatalytic activity.
We performed a comparison experiment under visible light using a xenon lamp solar simulator. As the analysis in Figure 7a shows, an appropriate loading of PbS QDs enhances the catalytic performance: after 160 min, 97.1% of the RhB could be removed by PbS(5C)/CdS(30)/TiO2. Other samples, such as PbS(7C)/CdS(30)/TiO2, show lower photocatalytic activity, possibly owing to an increasing number of defects, more recombination centers and a smaller surface area. Acting as recombination centers for photoelectrons and holes, defects have a negative effect on catalysis. In conclusion, CdS and PbS effectively help TiO2 NSs broaden the light utilization range, and establishing heterojunctions accelerates the photocatalytic process. As confirmed in Figure 5a, the electrochemical impedance spectroscopy (EIS) analysis agrees with this.
The kinetics of photocatalytic RhB degradation over the samples were well fitted with the pseudo-first-order kinetic model ln(C0/Ct) = Kt, where C0 is the initial concentration, Ct the concentration at reaction time t, and K the rate constant. As shown in Figure 7b, PbS(5C)/CdS(30)/TiO2 has the maximum K value, much higher than the other samples, demonstrating the high-efficiency photocatalytic activity of the multiple-heterostructure photocatalyst.
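The rate constant K in the pseudo-first-order model is obtained by a linear fit of ln(C0/Ct) against irradiation time. A sketch with hypothetical C/C0 readings (not the paper's data):

```python
import numpy as np

def rate_constant(t_min, c_over_c0):
    """Pseudo-first-order rate constant K (min^-1).

    Fits ln(C0/Ct) = K * t by least squares through the origin,
    since C/C0 = 1 at t = 0 by definition.
    """
    y = -np.log(np.asarray(c_over_c0))     # ln(C0/Ct)
    t = np.asarray(t_min, dtype=float)
    return float(t @ y / (t @ t))          # zero-intercept slope

# Hypothetical C/C0 readings every 40 min under visible light
t = [0, 40, 80, 120, 160]
c = [1.0, 0.45, 0.20, 0.09, 0.04]
print(f"K = {rate_constant(t, c):.4f} min^-1")
```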
In practice, excellent recyclability and stability of photocatalysts can effectively cut wastewater treatment costs and avert secondary pollution. The stability of PbS(5C)/CdS(30)/TiO2 NSs was assessed over 5 recycling experiments (Figure 7a). The degradation ability dropped only slightly: after five cycles, the PbS(5C)/CdS(30)/TiO2 NSs composites could still degrade 86.64% of the RhB. As the outermost layer of the sample, lead sulfide shields cadmium sulfide from direct light exposure, reducing its photocorrosion and improving the stability of the sample. To investigate changes in the sample before and after the reaction, X-ray diffraction patterns were collected after the photocatalytic reaction. According to Figure 7b, the diffraction peaks of the photocatalyst are almost unchanged and the sample maintains its original composition. As expected, this result indicates that the sample exhibits excellent long-term stability, which is promising for large-scale use without any additional pollution in the future.
The mechanism of dye photodegradation over the catalyst is shown in Figure 8. PbS is excited to produce photoelectrons under visible-light irradiation. Electrons in the PbS conduction band (CB) jump to the CB of CdS and transfer onward to the CB of TiO2, following energy conservation. Meanwhile, holes (h+) in the TiO2 valence band (VB) transfer to the VB of CdS; they do not remain there but move on to the VB of PbS. The separated electrons and holes react with water and oxygen to produce large quantities of highly oxidizing ·O2- and ·OH for degrading the organic pollutants.
PbS + hν → PbS(e-) + PbS(h+) (1)
CdS + PbS(e-) → CdS(e-) + PbS (2)
CdS(e-) + TiO2 → TiO2(e-) + CdS (3)
TiO2(e-) + O2 → TiO2 + ·O2- (4)
·O2- + TiO2(e-) + 2H+ → H2O2 + TiO2 (5)
H2O2 + ·O2- → ·OH + OH- + O2 (6)
·OH + RhB/MB → degraded or mineralized products (7)
CdS(h+) + RhB/MB → degraded or mineralized products (8)
PbS(h+) + RhB/MB → degraded or mineralized products (9)
As discussed, the degradation reactions can be summarized by equations (1-9) above.
In conclusion, a novel ternary PbS/CdS/TiO2 NSs heterojunction photocatalyst was successfully prepared by a one-step hydrothermal method, chemical bath deposition (CBD) and successive ionic layer adsorption and reaction (SILAR). A degradation mechanism featuring the heterojunction as a bridge between the photogenerated electrons and holes was proposed; the catalyst could degrade 99.9% of the RhB and MB in solution. The transient photocurrent response indicates that the ternary PbS/CdS/TiO2 NSs composites separate photogenerated electrons and holes efficiently, improving the photocatalytic activity nearly 5-fold. The absorption spectra show that the absorption edge of the samples extends from 375 nm to 800 nm, significantly increasing the light utilization rate. We believe this novel ternary PbS/CdS/TiO2 NSs catalyst has great potential in the fields of energy and environmental protection, and this work provides a dual-sensitized modification strategy for constructing ternary catalysts with improved degradation efficiency.
There are no conflicts to declare.
We are particularly grateful to the National Natural Science Foundation of China (Grant No. 51272086).
References
1. S. Mozia (2010) Photocatalytic membrane reactors (PMRs) in water and wastewater treatment. A review. Separation and Purification Technology 73, 71-91.
2. M. Vakili et al. (2014) Application of chitosan and its derivatives as adsorbents for dye removal from water and wastewater: a review. Carbohydrate Polymers 113, 115-130.
3. P. Shandilya et al. (2018) Fabrication of fluorine doped graphene and SmVO4 based dispersed and adsorptive photocatalyst for abatement of phenolic compounds from water and bacterial disinfection. Journal of Cleaner Production 203, 386-399.
4. R. Asahi, T. Morikawa, T. Ohwaki, K. Aoki, Y. Taga (2001) Visible-light photocatalysis in nitrogen-doped titanium oxides. Science 293, 269-271.
5. X. Liu et al. (2017) Noble metal-metal oxide nanohybrids with tailored nanostructures for efficient solar energy conversion, photocatalysis and environmental remediation. Energy & Environmental Science 10, 402-434.
6. H. Tong et al. (2012) Nano-photocatalytic Materials: Possibilities and Challenges. Advanced Materials 24, 229-251.
7. H. Wang et al. (2014) Semiconductor heterojunction photocatalysts: design, construction, and photocatalytic performances. Chemical Society Reviews 43, 5234-5244.
8. L. Jing et al. (2006) Review of photoluminescence performance of nano-sized semiconductor materials and its relationships with photocatalytic activity. Solar Energy Materials and Solar Cells 90, 1773-1787.
9. C. Chen, W. Ma, J. Zhao (2010) Semiconductor-mediated photodegradation of pollutants under visible-light irradiation. Chemical Society Reviews 39, 4206-4219.
10. J. Low, J. Yu, M. Jaroniec, S. Wageh, A. A. Al-Ghamdi (2017) Heterojunction Photocatalysts. Advanced Materials 29.
11. M. Pelaez et al. (2012) A review on the visible light active titanium dioxide photocatalysts for environmental applications. Applied Catalysis B: Environmental 125, 331-349.
12. X. Chen, L. Liu, P. Y. Yu, S. S. Mao (2011) Increasing Solar Absorption for Photocatalysis with Black Hydrogenated Titanium Dioxide Nanocrystals. Science 331, 746-750.
13. A. Fujishima, X. Zhang, D. A. Tryk (2008) TiO2 photocatalysis and related surface phenomena. Surface Science Reports 63, 515-582.
14. G. Williams, B. Seger, P. V. Kamat (2008) TiO2-graphene nanocomposites. UV-assisted photocatalytic reduction of graphene oxide. ACS Nano 2, 1487-1491.
15. X. Li et al. (2008) Preparation of polyaniline-modified TiO2 nanoparticles and their photocatalytic activity under visible light illumination. Applied Catalysis B: Environmental 81, 267-273.
16. M. Sathish, B. Viswanathan, R. P. Viswanath, C. S. Gopinath (2005) Synthesis, characterization, electronic structure, and photocatalytic activity of nitrogen-doped TiO2 nanocatalyst. Chemistry of Materials 17, 6349-6353.
17. U. G. Akpan, B. H. Hameed (2010) The advancements in sol-gel method of doped-TiO2 photocatalysts. Applied Catalysis A: General 375, 1-11.
18. M. Zubair, H. Kim, A. Razzaq, C. A. Grimes, S.-I. In (2018) Solar spectrum photocatalytic conversion of CO2 to CH4 utilizing TiO2 nanotube arrays embedded with graphene quantum dots. Journal of CO2 Utilization 26, 70-79.
19. J. M. Herrmann (1999) Heterogeneous photocatalysis: fundamentals and applications to the removal of various types of aqueous pollutants. Catalysis Today 53, 115-129.
20. W. Zhou et al. (2010) Ag2O/TiO2 Nanobelts Heterostructure with Enhanced Ultraviolet and Visible Photocatalytic Activity. ACS Applied Materials & Interfaces 2, 2385-2392.
21. S. J. A. Moniz, S. A. Shevlin, D. J. Martin, Z.-X. Guo, J. Tang (2015) Visible-light driven heterojunction photocatalysts for water splitting - a critical review. Energy & Environmental Science 8, 731-759.
22. S. J. Hong, S. Lee, J. S. Jang, J. S. Lee (2011) Heterojunction BiVO4/WO3 electrodes for enhanced photoactivity of water oxidation. Energy & Environmental Science 4, 1781-1787.
23. D. Jing, L. Guo (2006) A novel method for the preparation of a highly stable and active CdS photocatalyst with a special surface nanostructure. Journal of Physical Chemistry B 110, 11139-11145.
24. L. Wu, J. C. Yu, X. Z. Fu (2006) Characterization and photocatalytic mechanism of nanosized CdS coupled TiO2 nanocrystals under visible light irradiation. Journal of Molecular Catalysis A: Chemical 244, 25-32.
25. Q. Li, X. Li, S. Wageh, A. A. Al-Ghamdi, J. Yu (2015) CdS/Graphene Nanocomposite Photocatalysts. Advanced Energy Materials 5.
26. L. Ge et al. (2012) Synthesis and Efficient Visible Light Photocatalytic Hydrogen Evolution of Polymeric g-C3N4 Coupled with CdS Quantum Dots. Journal of Physical Chemistry C 116, 13708-13714.
27. J. Hou et al. (2020) Flexible CdS and PbS nanoparticles sensitized TiO2 nanotube arrays lead to significantly enhanced photocatalytic performance. Ceramics International 46, 28785-28791.
28. K. Hedayati, M. Joulaei, D. Ghanbari (2020) Auto Combustion Synthesis using Grapefruit Extract: Photocatalyst and Magnetic MgFe2O4-PbS Nanocomposites. Journal of Nanostructures 10, 83-91.
29. X. Liu et al. (2016) Fabrication of PbS quantum dots decorated ZnO nanorod arrays on Zn substrate and their visible light photocatalytic application. Materials Letters 179, 134-137.
30. L. Liu et al. (2014) Photocatalytic activity of PbS quantum dots sensitized flower-like Bi2WO6 for degradation of Rhodamine B under visible light irradiation. Journal of Molecular Catalysis A: Chemical 394, 309-315.
31. D. Ayodhya, G. Veerabhadram (2018) A review on recent advances in photodegradation of dyes using doped and heterojunction based semiconductor metal sulfide nanostructures for environmental protection. Materials Today Energy 9, 83-113.
32. T. Liu et al. (2018) Enhanced photoelectric performance of CdS/CdSe co-sensitized TiO2 nanosheets array films. Sustainable Energy & Fuels 2, 1262-1268.
33. H. Yao et al. (2016) Hierarchical photoanode of rutile TiO2 nanorods coupled with anatase TiO2 nanosheets array for photoelectrochemical application. Journal of Alloys and Compounds 680, 206-211.
34. J. C. Yu, J. G. Yu, W. K. Ho, Z. T. Jiang, L. Z. Zhang (2002) Effects of F- doping on the photocatalytic activity and microstructures of nanocrystalline TiO2 powders. Chemistry of Materials 14, 3808-3816.
35. J. Low, B. Dai, T. Tong, C. Jiang, J. Yu (2019) In Situ Irradiated X-Ray Photoelectron Spectroscopy Investigation on a Direct Z-Scheme TiO2/CdS Composite Film Photocatalyst. Advanced Materials 31.
36. J. Wu, C. Tang, H. Xu, W. Yan (2015) Enhanced photoelectrochemical performance of PbS sensitized Sb-SnO2/TiO2 nanotube arrays electrode under visible light illumination. Journal of Alloys and Compounds 633, 83-91.
37. Q. Zhou et al. (2017) Photoelectrochemical Performance of Quantum dot-Sensitized TiO2 Nanotube Arrays: a Study of Surface Modification by Atomic Layer Deposition Coating. Nanoscale Research Letters 12.
38. P. V. Kamat (2012) Boosting the Efficiency of Quantum Dot Sensitized Solar Cells through Modulation of Interfacial Charge Transfer. Accounts of Chemical Research 45, 1906-1915.
GraphicalAbstract.doc
Highlights.doc
Supportinginformation.doc
formula.docx
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 47 |
{"url":"https:\/\/hal.archives-ouvertes.fr\/hal-01068413","text":"# Scalable Learnability Measure for Hierarchical Learning in Large Scale Multi-Class Classification\n\n1 MLIA - Machine Learning and Information Access\nLIP6 - Laboratoire d'Informatique de Paris 6\nAbstract : The increase in computational and storage capacities leads to an increasing complexity of the data to be treated: data can be represented in much more detail (many features) and in very large amounts : in the context of text categorization or image classification, the number of labels can scale from $10^2$ to $10^5$, and features range from $10^4$ to $10^6$. The main trade-off is generally between the accuracy of the predictions and the inference time. A usual methodology consists in organizing multiple classifiers in a hierarchical structure in order to reduce the computation cost of the inference. A popular category of algorithms is to iteratively build the structure. Inspired by clustering, the iteration scheme is a splitting (top-down lgorithms) or aggregating (bottom-up algorithms) process. This step uses measures to determine the split\/aggregation rule (like entropy, similarity between classes, separability ...). These kinds of measures are often computationaly heavy and can not be used in a large scale context. In this paper, we propose to use a reduced projected space of the input space to build measures of interest. Preliminary experiments on real dataset show the interest of such methods. 
We propose preliminary experiments which integrate a ''learnability'' measure in hierarchical approaches.\nKeywords :\nDocument type :\nConference papers\n\nhttps:\/\/hal.archives-ouvertes.fr\/hal-01068413\nContributor : Raphael Puget <>\nSubmitted on : Friday, September 26, 2014 - 2:25:40 PM\nLast modification on : Thursday, March 21, 2019 - 12:58:49 PM\nDocument(s) archiv\u00e9(s) le : Saturday, December 27, 2014 - 10:15:17 AM\n\n### File\n\nwsdm_ws.pdf\nFiles produced by the author(s)\n\n### Identifiers\n\n\u2022 HAL Id : hal-01068413, version 1\n\n### Citation\n\nRaphael Puget, Nicolas Baskiotis, Patrick Gallinari. Scalable Learnability Measure for Hierarchical Learning in Large Scale Multi-Class Classification. WSDM Workshop Web-Scale Classification: Classifying Big Data from the Web, 2014, New York, United States. \u27e8hal-01068413\u27e9\n\nRecord views","date":"2020-07-10 13:59:19","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.2828395664691925, \"perplexity\": 2532.288964346931}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": 
\"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-29\/segments\/1593655908294.32\/warc\/CC-MAIN-20200710113143-20200710143143-00098.warc.gz\"}"} | null | null |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.