\section{Introduction}
Package \pkg{lme4} \citep{lme4} is widely used to estimate a variety of
generalized linear mixed models. Despite its popularity, the package does
not provide certain results related to derivatives of the likelihood,
which makes it difficult to obtain robust standard errors and other
statistical tests. This absence is partially related to the
fact that \pkg{lme4} does not directly estimate models via likelihood
maximization, but rather employs a penalized least squares approach
that leads to ML (or REML) estimates \citep{lme4}. While this approach
eases model estimation, it also makes it more difficult to obtain
derivatives (first and second) of the likelihood from a
fitted model (which are required for, e.g., the Huber-White sandwich
estimator). While it is possible to instead utilize
the robust estimation methods from package \pkg{robustlmm} \citep{roblmm},
we are interested in directly using derivative-based methods that rely on
estimation of the traditional model. Thus, the goal of this paper is to
describe
\proglang{R} package \pkg{merDeriv}, which contains functions that
compute these derivatives for objects of class \code{lmerMod}.
We also briefly discuss derivatives associated with models of
class \code{glmerMod}, though we do not currently have code for these
models (the computations are more difficult due to
the need for numerical integration).
The paper proceeds as follows.
We first describe general notation for the linear mixed model. Next, we derive
expressions for the linear mixed models' casewise (observation level) and
clusterwise (cluster/group level) first
derivatives, along with the Hessian and Fisher information
matrix (including both fixed effect parameters and variances/covariances
of random effects).
Next, we illustrate the derivatives' application via
the sleep study data \citep{belenky03} included with \pkg{lme4}, comparing
our results to a benchmark from \pkg{lavaan} \citep{rosv12}.
This illustration includes computation of the Huber-White sandwich
estimator \citep{eicker67, white80, huber67} for
linear mixed models with independent clusters/groups.
Finally, we discuss further use and extension of our package's functionality.
\section{Linear mixed model}
Following \cite{lme4}, the linear mixed model can be written as
\begin{eqnarray}
\label{eq:lmmcond}
\bm y |\bm b &\sim& N(\bm X \bm \beta+\bm Z\bm b, \bm R)\\
\label{eq:lmmran}
\bm b &\sim& N(\bm 0, \bm G)\\
\label{eq:lmmres}
\bm R &=& \sigma_{r}^2\bm I_{n},
\end{eqnarray}
where $\bm y$ is the observed data vector of length $n$; $\bm X$ is an
$n \times p$ matrix of fixed covariates; $\bm \beta$ is the fixed effect
vector of length $p$; $\bm Z$ is an $n \times q$ design matrix of random
effects; and $\bm b$ is the random effect vector of
length $q$.
The vector $\bm b$ is assumed to follow a normal distribution with
mean $\bm 0$ and covariance matrix $\bm G$, where $\bm G$ is a block diagonal
matrix composed of the variances/covariances of the random effect parameters.
The residual covariance matrix, $\bm R$, is the product of the residual
variance $\sigma_{r}^2$ and an identity matrix of dimension $n$.
We further define $\bm \sigma^2$ to be a vector of length $K$, containing all
variance/covariance parameters (including those of the random
effects and the residual). Thus, the matrix $\bm G$ has $(K-1)$
unique elements. For example, in a model with two random effects
that are allowed to covary, $\bm \sigma^{2}$ is a vector of
length 4 (i.e., $K = 4$). The first three elements correspond to the unique
entries of $\bm G$, which are commonly expressed as
$\sigma_0^2$, $\sigma_{01}$, and $\sigma_1^2$.
The last component is then the residual variance $\sigma_r^2$.
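For concreteness, the per-cluster block of $\bm G$ and the full parameter
vector in this example are
$$
\left[\begin{array}{cc}
\sigma_0^2 & \sigma_{01}\\
\sigma_{01} & \sigma_1^2
\end{array}\right]
\qquad \text{and} \qquad
\bm \sigma^2 = (\sigma_0^2, \sigma_{01}, \sigma_1^2, \sigma_r^2)^{\top},
$$
respectively, where $\sigma_{01}$ denotes the intercept-slope covariance.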
Based on Equations~\ref{eq:lmmcond}, \ref{eq:lmmran},
and \ref{eq:lmmres}, the marginal distribution of the LMM is
\begin{equation}
\label{eq:marginml}
\bm y \sim N(\bm X \bm \beta, \bm V),
\end{equation}
where
\begin{equation}
\label{eq:marginv}
\bm V = \bm Z \bm G \bm Z^{\top} + \sigma_{r}^2\bm I_{n}.
\end{equation}
Therefore, the marginal likelihood can be expressed as
\begin{equation}
\label{eq:obj}
\ell(\bm \sigma^2, \bm \beta; \bm y) = -\frac{n}{2}\log(2\pi) -
\frac{1}{2}\log(|\bm V|) - \frac{1}{2}
(\bm y - \bm X \bm \beta)^{\top}\bm V^{-1} (\bm y- \bm X \bm \beta).
\end{equation}
\section{Derivative computations for the linear mixed model}
In this section, we first discuss analytic results involving the linear
mixed model's first and second derivatives. We then illustrate how
these derivatives can be obtained from an object of class \code{lmerMod}.
\subsection{Scores}
Based on the objective function from Equation~\ref{eq:obj},
we derive the score function $s_i()$ for each observation w.r.t.\ the
parameter vector $\bm \xi = (\bm \sigma^2, \bm \beta)^\top$. We focus
separately on $\bm \sigma^2$ and on $\bm \beta$ below.
\subsubsection[Random scores]{Scores for $\bm \sigma^2$}
The gradient with respect to the $k^{\text{th}}$ entry
of $\bm \sigma^2$ ($k=1, 2, \ldots, K$) is \citep[pp.\ 136--137]{stroup12}:
\begin{equation}
\label{eq:grasigma}
\frac{\partial \ell(\bm \sigma^2, \bm \beta; \bm y)}
{\partial \sigma_k^2} = -\frac{1}{2}
\text{tr} \left [\bm V^{-1} \frac{\partial \bm V}
{\partial \sigma_k^2}\right ] +
\frac{1}{2}(\bm y-\bm X \bm \beta)^{\top} \bm V^{-1}
\left (\frac{\partial \bm V}{\partial \sigma_k^2}\right )
\bm V^{-1}(\bm y- \bm X \bm \beta),
\end{equation}
where $\bm V$ is defined in Equation~\ref{eq:marginv}. This gradient sums over
$i$, whereas the scores are defined for each observation $i$.
Thus, to obtain the scores, we can remove the sums from the above equation.
This is accomplished by replacing a trace operator with a diag operator,
as well as replacing a matrix product with a Hadamard product (also known as
elementwise/entrywise multiplication):
\begin{equation}
\label{eq:scoresigma}
s(\sigma_k^2; \bm y)= -\frac{1}{2}
\text{diag} \left [\bm V^{-1} \frac{\partial \bm V}
{\partial \sigma_k^2}\right ] +
\left \{\frac{1}{2}(\bm y-\bm X \bm \beta)^{\top} \bm V^{-1} \left
(\frac{\partial \bm V}{\partial \sigma_k^2}\right ) \bm V^{-1}\right\}^{T}
\circ (\bm y- \bm X \bm \beta).
\end{equation}
In this way, the gradient
of parameter $\sigma_k^2$ (a scalar) becomes an $n \times 1$ score vector.
\subsubsection[Fixed scores]{Scores for $\bm \beta$}
For the fixed effect parameter $\bm \beta$, the gradient is:
\begin{equation}
\label{eq:grabeta}
\frac{\partial \ell(\bm \sigma^2, \bm \beta; \bm y)}{\partial \bm \beta}
= \bm X^{\top} \bm V^{-1}(\bm y-\bm X \bm \beta).
\end{equation}
The score vector $s(\bm \beta; \bm y)$ can again be obtained by
replacing the matrix multiplication by the Hadamard product:
\begin{equation}
\label{eq:scoresbeta}
s(\bm \beta; \bm y)
= \left\{\bm X^{\top} \bm V^{-1} \right\}^{T} \circ (\bm y-\bm X
\bm \beta).
\end{equation}
The full set of scores can then be
expressed as a matrix whose columns consist of the results from
Equations~\ref{eq:scoresigma} and \ref{eq:scoresbeta}.
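As a numerical sanity check on the score expressions, the summed casewise
scores should match finite differences of the marginal log-likelihood in
Equation~\ref{eq:obj}. The sketch below (plain Python, independent of the
\proglang{R} implementation) uses a hypothetical $n = 2$ model with one fixed
effect $\beta$ and one variance parameter $\theta$ entering $\bm V$
additively, so that $\partial \bm V/\partial \theta = \bm I$; all numbers are
arbitrary:

```python
import math

# Toy marginal model: n = 2, one fixed effect (beta), one variance
# parameter (theta) with dV/dtheta = I_2.  All numbers are arbitrary.
X = [1.0, 2.0]                       # single design column
y = [1.3, 0.7]
beta, theta = 0.4, 0.5

def inv2(M):
    """Inverse and determinant of a 2 x 2 matrix."""
    d = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return [[ M[1][1]/d, -M[0][1]/d],
            [-M[1][0]/d,  M[0][0]/d]], d

def loglik(beta, theta):
    """Marginal log-likelihood of the toy model."""
    V = [[theta + 2.0, 0.5], [0.5, theta + 1.0]]
    Vinv, det = inv2(V)
    r = [y[i] - X[i]*beta for i in range(2)]
    quad = sum(r[i]*Vinv[i][j]*r[j] for i in range(2) for j in range(2))
    return -math.log(2*math.pi) - 0.5*math.log(det) - 0.5*quad

Vinv, _ = inv2([[theta + 2.0, 0.5], [0.5, theta + 1.0]])
r = [y[i] - X[i]*beta for i in range(2)]

# Casewise scores for beta: {X^T V^-1}^T Hadamard residuals
VinvX = [sum(Vinv[i][k]*X[k] for k in range(2)) for i in range(2)]
s_beta = [VinvX[i]*r[i] for i in range(2)]

# Casewise scores for theta; with dV/dtheta = I, the middle
# matrix V^-1 (dV/dtheta) V^-1 is just V^-1 V^-1
M = [[sum(Vinv[i][k]*Vinv[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]
Mr = [sum(M[i][j]*r[j] for j in range(2)) for i in range(2)]
s_theta = [-0.5*Vinv[i][i] + 0.5*Mr[i]*r[i] for i in range(2)]

# Summed scores agree with central finite differences of the loglik
h = 1e-6
g_beta = (loglik(beta + h, theta) - loglik(beta - h, theta)) / (2*h)
g_theta = (loglik(beta, theta + h) - loglik(beta, theta - h)) / (2*h)
assert abs(sum(s_beta) - g_beta) < 1e-6
assert abs(sum(s_theta) - g_theta) < 1e-6
```

Each casewise score contributes one summand of the corresponding gradient
entry, which is exactly what permits the clusterwise sums discussed next.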
These equations provide scores for each observation $i$, and we can
construct the clusterwise scores by summing scores within each cluster.
In situations with one grouping (clustering) variable, the clusterwise
scores can be obtained from our \code{estfun.lmerMod()} function via the
default argument \code{level = 2}.
The casewise scores, on the other hand, can be
retrieved for all models via the argument \code{level = 1}.
\subsection{Hessian/observed information matrix}
The Hessian is the second derivative of the log-likelihood,
denoted $\bm A^{\star}$ in this paper. The negative of the
Hessian is often called the observed information matrix or observed
Fisher information; it is a sample-based version of the Fisher information.
Because package \pkg{lme4} does not
provide a Hessian that includes both the fixed effect parameters and the
variance/covariance parameters of the random effects (including the
residual variance), the derivation of this matrix requires special
attention.
To obtain the Hessian, we can divide the matrix $\bm A^{\star}$ into the
following four blocks:
$$
\bm A^{\star} = \left[\begin{array}{ccc|ccc}
&&\\
&\frac{\partial^2 \ell(\bm \sigma^2, \bm \beta;
\bm y)}
{\partial \bm \beta \partial \bm \beta^{T}} &&&
\frac{\partial^2 \ell(\bm \sigma^2, \bm \beta;
\bm y)}
{\partial \bm \beta \partial \bm \sigma^2}&\\
&&\\
\hline
&&\\
&\frac{\partial^2 \ell(\bm \sigma^2, \bm \beta;
\bm y)}
{\partial \bm \sigma^2 \partial \bm \beta} &&&
\frac{\partial^2 \ell(\bm \sigma^2, \bm \beta;
\bm y)}
{\partial \bm \sigma^2 \partial \bm \sigma^2} &\\
&&\\
\end{array}\right],
$$
where $\bm \beta$ contains all fixed parameters and $\bm \sigma^2$
contains all variance-covariance parameters
(in variance-covariance scale) in the linear mixed model. To facilitate
the analytic derivations, we index the above four blocks as:
$$
\bm A^{\star} = \left[\begin{array}{ccc|ccc}
&&\\
&\text{Block 1}^{\star} &&& \text{Block 3}^{\star} &\\
&&\\
\hline
&&\\
&\text{Block 2}^{\star} &&& \text{Block 4}^{\star}&\\
&&\\
\end{array}\right].
$$
$\text{Block 1}^{\star}$ is straightforward: it can be obtained by taking
the derivative of
Equation~\ref{eq:grabeta} w.r.t.\ $\bm \beta$, yielding:
\begin{equation}
\label{eq:hessianbeta}
\frac{\partial^2 \ell(\bm \sigma^2, \bm \beta; \bm y)}
{\partial \bm \beta \partial \bm \beta^{T}} = -
\bm X^{\top} \bm V^{-1} \bm X.
\end{equation}
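This identity is easy to verify numerically: since the log-likelihood is
quadratic in $\bm \beta$, a second central difference recovers
$-\bm X^{\top}\bm V^{-1}\bm X$ essentially exactly. A minimal sketch in
plain Python, using a hypothetical $n = 2$, $p = 1$ model with arbitrary
numbers:

```python
import math

# Numerical check: d^2(loglik)/d(beta)^2 equals -X^T V^-1 X.
# Toy n = 2, p = 1 marginal model; all numbers are arbitrary.
X = [1.0, 2.0]
y = [1.3, 0.7]
V = [[2.5, 0.5], [0.5, 1.5]]

d = V[0][0]*V[1][1] - V[0][1]*V[1][0]
Vinv = [[ V[1][1]/d, -V[0][1]/d],
        [-V[1][0]/d,  V[0][0]/d]]

def loglik(beta):
    r = [y[i] - X[i]*beta for i in range(2)]
    quad = sum(r[i]*Vinv[i][j]*r[j] for i in range(2) for j in range(2))
    return -math.log(2*math.pi) - 0.5*math.log(d) - 0.5*quad

# Analytic Block 1*: -X^T V^-1 X (a scalar here, since p = 1)
analytic = -sum(X[i]*Vinv[i][j]*X[j] for i in range(2) for j in range(2))

# Central second difference of the log-likelihood at an arbitrary beta
b, h = 0.4, 1e-4
numeric = (loglik(b + h) - 2*loglik(b) + loglik(b - h)) / h**2
assert abs(numeric - analytic) < 1e-4
```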
Derivation of $\text{Block 4}^{\star}$ is described in \cite{stroup12} and
can be written as
\begin{multline}
\label{eq: hessianrandom}
\frac{\partial^2 \ell(\bm \sigma^2, \bm \beta; \bm y)}
{\partial \sigma_{k_1}^2 \partial \sigma_{k_2}^2} =
\left (\frac{1}{2}\right) \text{tr}
\left [
\bm V^{-1} \left (\frac{\partial \bm V}
{\partial \sigma_{k_1}^2}\right) \bm V^{-1} \left (\frac{\partial \bm V}
{\partial \sigma_{k_2}^2}\right) \right ]\\
- (\bm y- \bm X \bm \beta)^{\top}\left\{\bm V^{-1} \left (\frac{\partial \bm V}
{\partial \sigma_{k_1}^2}\right) \bm V^{-1} \left (\frac{\partial \bm V}
{\partial \sigma_{k_2}^2}\right) \bm V^{-1}\right\}\left(\bm y- \bm X
\bm \beta \right),
\end{multline}
where $k_1 \in 1, \ldots, K$ and $k_2 \in 1, \ldots, K$.
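The same finite-difference strategy verifies this expression. The sketch
below (plain Python, hypothetical toy model with one variance parameter
$\theta$ and $\partial \bm V/\partial \theta = \bm I$, arbitrary numbers)
compares the analytic second derivative with $k_1 = k_2$ against a second
central difference of the log-likelihood:

```python
import math

# Finite-difference check of the Block 4* expression for a toy
# marginal model: one variance parameter theta, dV/dtheta = I_2.
X = [1.0, 2.0]
y = [1.3, 0.7]
beta, theta = 0.4, 0.5

def inv2(M):
    d = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return [[ M[1][1]/d, -M[0][1]/d],
            [-M[1][0]/d,  M[0][0]/d]], d

def loglik(theta):
    V = [[theta + 2.0, 0.5], [0.5, theta + 1.0]]
    Vinv, det = inv2(V)
    r = [y[i] - X[i]*beta for i in range(2)]
    quad = sum(r[i]*Vinv[i][j]*r[j] for i in range(2) for j in range(2))
    return -math.log(2*math.pi) - 0.5*math.log(det) - 0.5*quad

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

Vinv, _ = inv2([[theta + 2.0, 0.5], [0.5, theta + 1.0]])
r = [y[i] - X[i]*beta for i in range(2)]

V2 = matmul(Vinv, Vinv)      # V^-1 (dV/dtheta) V^-1, since dV = I
V3 = matmul(V2, Vinv)        # V^-1 dV V^-1 dV V^-1
trace = V2[0][0] + V2[1][1]
quad3 = sum(r[i]*V3[i][j]*r[j] for i in range(2) for j in range(2))
analytic = 0.5*trace - quad3  # Block 4* with k1 = k2 = theta

h = 1e-4
numeric = (loglik(theta + h) - 2*loglik(theta) + loglik(theta - h)) / h**2
assert abs(numeric - analytic) < 1e-5
```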
Finally, $\text{Block 3}^{\star}$ (which is the transpose of
$\text{Block 2}^{\star}$) can
be seen as the derivative of Equation~\ref{eq:grasigma}
w.r.t.\ $\bm \beta$.
Using the identity from \citet[p.\ 11, Eq.\ (86)]{peter08},
we can derive $\text{Block 3}^{\star}$ as
\begin{equation}
\label{eq:covbeta}
\frac{\partial^2 \ell(\bm \sigma^2, \bm \beta; \bm y)}
{\partial \bm \sigma^2 \partial \bm \beta} = -\bm
X^{\top} \bm V^{-1} \left ( \frac{\partial \bm V}
{\partial \bm \sigma^2} \right )
\bm V^{-1}(\bm y- \bm X \bm \beta).
\end{equation}
The results obtained for $\bm A^{\star}$ parallel the derivation
of the Fisher information matrix, described below.
\subsection{Fisher/expected information matrix}
The Fisher information matrix (or expected information matrix)
is the expectation of the negative second derivative of the
log-likelihood, denoted $\bm A$ throughout the paper. It
can often be obtained in \proglang{R} with the help of the
\code{vcov()} function, but package \pkg{lme4} only
provides results for fixed effect parameters. Thus, we obtain the
Fisher information
w.r.t. all model
parameters by taking the expectation of the negative of the Hessian
matrix $\bm A^{\star}$.
Specifically, we can express the matrix $\bm A$ in the
following four blocks as before. The only difference is the negative
expectation operator.
$$
\bm A = \left[\begin{array}{ccc|ccc}
&&\\
&-E \left (\frac{\partial^2 \ell(\bm \sigma^2, \bm \beta;
\bm y)}
{\partial \bm \beta \partial \bm \beta^{T}} \right ) &&&
-E \left (\frac{\partial^2 \ell(\bm \sigma^2, \bm \beta;
\bm y)}
{\partial \bm \beta \partial \bm \sigma^2} \right ) &\\
&&\\
\hline
&&\\
&-E \left (\frac{\partial^2 \ell(\bm \sigma^2, \bm \beta;
\bm y)}
{\partial \bm \sigma^2 \partial \bm \beta} \right ) &&&
-E \left (\frac{\partial^2 \ell(\bm \sigma^2, \bm \beta;
\bm y)}
{\partial \bm \sigma^2 \partial \bm \sigma^2} \right )&\\
&&\\
\end{array}\right].
$$
Following the same strategy, we index the above four blocks as:
$$
\bm A = \left[\begin{array}{ccc|ccc}
&&\\
&\text{Block 1} &&& \text{Block 3}&\\
&&\\
\hline
&&\\
&\text{Block 2} &&& \text{Block 4}&\\
&&\\
\end{array}\right].
$$
Because $\bm X$ and $\bm X^{T}$ are considered
constants, Block 1 is simply the negative of $\text{Block 1}^{\star}$, as shown
below:
\begin{equation}
\label{eq:infobeta}
-E \left (\frac{\partial^2 \ell(\bm \sigma^2, \bm \beta; \bm y)}
{\partial \bm \beta \partial \bm \beta^{T}} \right ) = -E \left (-
\bm X^{\top} \bm V^{-1} \bm X \right ) = \bm X^{\top} \bm V^{-1} \bm X.
\end{equation}
This analytic result is mathematically equivalent to the result
provided by \code{solve(vcov())} in \pkg{lme4} (which only contains fixed
effect parameters).
Derivation of Block 4 is also based on the result
from $\text{Block 4}^{\star}$. In
particular, following the expectation identity for Gaussian distributions from
\citet[p.\ 43, Eq.\ (380)]{peter08},
the expectation of the quadratic form in the second term of
Equation~\ref{eq: hessianrandom}
equals $\text{tr}\left[ \bm V^{-1} \left (\frac{\partial \bm V}
{\partial \sigma_{k_1}^2}\right)
\bm V^{-1} \left (\frac{\partial \bm V}
{\partial \sigma_{k_2}^2}\right) \right]$. Thus, Block 4 reduces to
the form below, which is also described in \cite{stroup12}:
\begin{equation}
\label{eq: inforandom}
-E \left (\frac{\partial^2 \ell(\bm \sigma^2, \bm \beta; \bm y)}
{\partial \sigma_{k_1}^2 \partial \sigma_{k_2}^2} \right ) =
\left (\frac{1}{2}\right) \text{tr}
\left [ \bm V^{-1} \left (\frac{\partial \bm V}
{\partial \sigma_{k_1}^2}\right)
\bm V^{-1} \left (\frac{\partial \bm V}
{\partial \sigma_{k_2}^2}\right)\right ],
\end{equation}
where $k_1 \in 1, \ldots, K$ and $k_2 \in 1, \ldots, K$.
Finally, Block 3 is the negative of the expectation of $\text{Block 3}^{\star}$.
Using the expectation identity from \citet[p.\ 35, Eq.\ (312)]{peter08},
we can derive Block 3 as
\begin{eqnarray}
\label{eq:infocovbeta}
-E \left (\frac{\partial^2 \ell(\bm \sigma^2, \bm \beta; \bm y)}
{\partial \bm \sigma^2 \partial \bm \beta}\right ) &=& -E \left (-\bm
X^{\top} \bm V^{-1} \left ( \frac{\partial \bm V}
{\partial \bm \sigma^2} \right )
\bm V^{-1}(\bm y- \bm X \bm \beta) \right) \\
& = & \bm
X^{\top} E \left (\left\{ \bm V^{-1} \left ( \frac{\partial \bm V}
{\partial \bm \sigma^2} \right )
\bm V^{-1} \right \} (\bm y- \bm X \bm \beta) \right).
\end{eqnarray}
Since $E(\bm y- \bm X \bm \beta) = \bm 0$, it follows that
$-E \left (\frac{\partial^2 \ell(\bm \sigma^2, \bm \beta; \bm y)}
{\partial \bm \sigma^2 \partial \bm \beta}\right ) = \bm 0$, which
reflects the
asymptotic independence of $\bm \beta$ and $\bm \sigma^2$.
Thus, we have expressed the necessary derivatives as functions of model
matrices and derivatives of the marginal variance $\bm V$. We can summarize
the Fisher information matrix result for the LMM as:
$$
\bm A = \left[\begin{array}{ccc|ccc}
&&\\
& \bm X^{\top} \bm V^{-1} \bm X &&& \bm 0 &\\
&&\\
\hline
&&\\
& \bm 0 &&& \left (\frac{1}{2}\right) \text{tr}
\left [ \bm V^{-1} \left (\frac{\partial \bm V}
{\partial \sigma_{k_1}^2}\right)
\bm V^{-1} \left (\frac{\partial \bm V}
{\partial \sigma_{k_2}^2}\right)\right ]&\\
&&\\
\end{array}\right].
$$
These results are
equivalent to Equations 6.69 to 6.74 of \cite{mcc01}.
We can then invert the information matrix to obtain the
variance-covariance matrix. In the \code{vcov.lmerMod()} function
from \pkg{merDeriv}, we use the default
argument \code{full = TRUE} to get the variance-covariance matrix w.r.t.\
all parameters in the model. If \code{full = FALSE}, the
variance-covariance matrix w.r.t.\ only fixed parameters is returned.
To switch between the observed
and expected information matrix, we can supply the argument
\code{information = "observed"} or \code{information = "expected"}.
The default option is ``expected'' due to its wider usage.
\section[Computational relation]{Relation to \code{lmerMod} objects}
In this section, we describe how the quantities needed to
compute the scores,
Hessian, and Fisher information matrix can be obtained from
an \code{lmerMod} object.
The data and model matrices $\bm y$, $\bm X$, $\bm \beta$, and $\bm Z$ can
be obtained directly from \pkg{lme4} via \code{getME()}.
The only remaining components, then, are $\bm V$ and
$\partial \bm V/\partial \bm \sigma^2$. In the following,
we focus on how to indirectly obtain these components.
In the \pkg{lme4} framework, the random effects covariance matrix
$\bm G$ is decomposed via \citep{lme4}:
\begin{equation}
\label{eq:glme4}
\bm G = \bm \Lambda_{\bm \theta} \bm \Lambda_{\bm \theta}^{\top} \sigma_r^2,
\end{equation}
where $\bm \Lambda_{\bm \theta}$ is a $q \times q$ lower triangular matrix,
called the \emph{relative covariance factor}. It can be seen as the
Cholesky factor of $\bm G/\sigma_r^2$. The dimension of
$\bm \Lambda_{\bm \theta}$ is the same as that of $\bm G$. Additionally,
the position of $\sigma_k^2$ in $\bm G$ is the same as the position
of $\theta_k$ in $\bm \Lambda_{\bm \theta}$.
Inserting Equation~\ref{eq:glme4} into Equation~\ref{eq:marginv},
we can express $\bm V$ as
\begin{equation}
\label{eq:computev}
\bm V = (\bm Z \bm \Lambda_{\bm \theta} \bm \Lambda_{\bm \theta}^{\top}
\bm Z^{\top} + \bm I_n) \sigma_r^2.
\end{equation}
Equation~\ref{eq:computev} is mathematically equivalent to
Equation~\ref{eq:marginv}, but it has computational advantages when, e.g.,
a random effect variance is close to 0.
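This equivalence is easy to confirm numerically. The sketch below (plain
Python, independent of the \proglang{R} implementation, with hypothetical
$q = 2$, $n = 2$ quantities and arbitrary numbers) computes $\bm V$ both
ways:

```python
# Check that V computed from the lme4 decomposition
# G = Lambda Lambda^T sigma_r^2 matches Z G Z^T + sigma_r^2 I.
sr2 = 0.8                                  # residual variance
Lam = [[1.2, 0.0], [0.4, 0.9]]             # lower-triangular factor
Z = [[1.0, 0.0], [1.0, 1.0]]               # random effect design

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def T(A):
    return [list(row) for row in zip(*A)]

# G = Lambda Lambda^T sigma_r^2
G = [[sr2*g for g in row] for row in matmul(Lam, T(Lam))]

# V = Z G Z^T + sigma_r^2 I
V1 = matmul(matmul(Z, G), T(Z))
for i in range(2):
    V1[i][i] += sr2

# V = (Z Lambda Lambda^T Z^T + I) sigma_r^2
ZL = matmul(Z, Lam)
V2 = matmul(ZL, T(ZL))
for i in range(2):
    V2[i][i] += 1.0
V2 = [[sr2*v for v in row] for row in V2]

assert all(abs(V1[i][j] - V2[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```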
Using Equation~\ref{eq:marginv}, the term
$\partial \bm V/\partial \sigma_k^2$ can usually be expressed as
\begin{equation}
\label{eq:compusigma}
\bm Z \frac{\partial \bm G}{\partial \sigma_k^2} \bm Z^{\top},
\end{equation}
so long as $\sigma_k^2$ is not the residual variance.
The partial derivative $\frac{\partial \bm G}{\partial \sigma_k^2}$ is
then a matrix of the same dimension as $\bm G$, with an entry of
$1$ corresponding to the location of $\sigma_k^2$ and $0$ elsewhere.
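For instance, in the earlier example with two correlated random effects, the
per-cluster blocks of these derivative matrices are
$$
\frac{\partial \bm G}{\partial \sigma_0^2} =
\left[\begin{array}{cc} 1 & 0\\ 0 & 0 \end{array}\right], \qquad
\frac{\partial \bm G}{\partial \sigma_{01}} =
\left[\begin{array}{cc} 0 & 1\\ 1 & 0 \end{array}\right], \qquad
\frac{\partial \bm G}{\partial \sigma_1^2} =
\left[\begin{array}{cc} 0 & 0\\ 0 & 1 \end{array}\right],
$$
where the covariance parameter $\sigma_{01}$ receives a $1$ in two cells
because $\bm G$ is symmetric.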
Because the location of $\sigma_k^2$ within $\bm G$ matches
its location within $\bm \Lambda_{\bm \theta}$, we can use
$\bm \Lambda_{\bm \theta}$ to facilitate computation of
$\partial \bm V/\partial \sigma_k^2$. The only subtlety is that
$\bm G$ is symmetric, whereas $\bm \Lambda_{\bm \theta}$ is lower triangular.
The code below illustrates implementation of this strategy, where
\code{object} is a fitted model of class \code{lmerMod}.
We use \code{forceSymmetric()} to convert the lower
diagonal information from $\bm \Lambda_{\bm \theta}$ into the symmetric
$\bm G$.
\begin{Schunk}
\begin{Sinput}
R> ## "object" is a fitted model of class lmerMod.
R> parts <- getME(object, "ALL")
R> uluti <- length(parts$theta)
R> devLambda <- vector("list", uluti)
R> devV <- vector("list", (uluti + 1))
R>
R> ## get the position of parameters in Lambda matrix
R> LambdaInd <- parts$Lambda
R> LambdaInd@x[] <- as.double(parts$Lind)
R>
R> for (i in 1:uluti) {
+ devLambda[[i]] <- forceSymmetric(LambdaInd==i, uplo = "L")
+ devV[[i]] <- tcrossprod(tcrossprod(parts$Z, t(devLambda[[i]])), parts$Z)
+ }
\end{Sinput}
\end{Schunk}
Finally, for the derivative with respect to the residual variance, it
is obvious that
$\partial \bm V/\partial \sigma_r^2=\bm I_n$ so long
as $\bm R = \sigma_r^2 \bm I$ \cite[also see][p.\ 137]{stroup12}.
The above results are sufficient for obtaining the derivatives necessary for
computing the Huber-White sandwich estimator and for carrying out additional
statistical tests (see the Discussion section). In the following sections,
we will
describe the Huber-White sandwich estimator for
linear mixed models with independent clusters, then provide an application.
\section{Huber-White sandwich estimator}
Let $\bm y_{c_j}$ contain the observations within cluster $c_j$. If
observations in different clusters are independent (as is the case in many
linear mixed models), then we can write
\begin{equation}
\label{eq:likelihood}
\ell(\bm \sigma^2, \bm \beta; \bm{y}) = \sum_{j=1}^J \ell(\bm \sigma^2,
\bm \beta; \bm y_{c_j}),
\end{equation}
where $J$ is the total number of clusters and $\ell()$ is defined in
Equation~\ref{eq:obj}. The first and second partial derivatives of $\ell$ w.r.t.
$\bm \xi = (\bm \sigma^2, \bm \beta)^\top$ can then be written as
\begin{eqnarray}
\label{eq:firstder}
\ell^{'}(\bm \xi; \bm y) &=& \sum_{j=1}^{J}
\frac{\partial \ell(\bm \xi; \bm y_{c_j})}
{\partial \bm \xi} = \sum_{j=1}^{J}
\sum_{i \in c_j} s_i(\bm \xi; y_i)\\
\label{eq:secondder}
\ell^{''}(\bm \xi; \bm y) &=& \sum_{j=1}^{J}\frac{\partial^2
\ell(\bm \xi; \bm y_{c_j})}
{\partial \bm \xi^2},
\end{eqnarray}
where $\frac{\partial \ell(\bm \xi; \bm y_{c_j})}{\partial \bm \xi}$
represents the first derivative within cluster $c_j$, which can be
expressed as the sum of the casewise score $s_i()$ belonging to
$c_j$. The function $s_i()$ has also been studied in other
contexts \citep[e.g.,][]{WanMerZei14, zeihor07}.
Inference about $\bm \xi$ relies on a central limit theorem:
\begin{equation}
\label{eq:clt}
\sqrt{J}(\hat{\bm \xi} - \bm \xi)\xrightarrow{d}
N(\bm 0, \bm V(\bm \xi)),
\end{equation}
where $\xrightarrow{d}$ denotes convergence in distribution.
The traditional estimate of $\bm V(\bm \xi)$ relies on
Equation~\ref{eq:secondder}, whereas
the Huber-White sandwich estimator of $\bm V(\bm \xi)$ is defined as
\citep[e.g.,][]{freed12, white80, sand2}:
\begin{equation}
\label{eq:aba}
\bm V(\hat{\bm \xi}) = (\bm A)^{-1}\bm B(\bm A)^{-1},
\end{equation}
where $\bm A=-E(\ell^{''}(\hat{\bm \xi}; \bm{y}))$ and
$\bm B=\text{Cov}(\ell^{'}(\hat{\bm \xi}; \bm{y}))$.
The square roots of the diagonal
elements of $\bm V$ are the ``robust standard errors.''
When the model is correctly specified, the Huber-White sandwich estimator
reduces to the inverse of the Fisher information matrix. However, the estimator
is often used in non-\emph{i.i.d.} samples to ``correct'' the
information matrix for misspecification \citep[e.g.,][]{freed12}.
While mixed models explicitly handle lack of independence via
random effects, the Huber-White estimators can still be applied to
these models to address remaining model misspecifications such as outliers in
random effects or deviations from normality \citep{roblmm, kol13}.
To construct the Huber-White sandwich estimator,
$\bm A$ can be obtained from Equation~\ref{eq:secondder}, whose analytic
expression for the linear mixed model is given in Section 3.2. The matrix
$\bm B$ can then be constructed via \citep[e.g.,][]{freed12}:
\begin{equation}
\label{eq:clustermeat}
\bm B=\sum_{j=1}^{J}\left[\sum_{i \in c_j} s_i(\bm \xi; y_i) \right ]^{\top}
\left [\sum_{i \in c_j} s_i(\bm \xi; y_i)\right ].
\end{equation}
Thus, we require the derivations presented in the previous section: the
``score'' terms $s_i(\bm{\xi}; y_i)$ $(i=1,\ldots,n)$ and the
information matrix using the marginal
likelihood from Equation~\ref{eq:obj}.
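The construction of $\bm B$ can be sketched in a few lines. Below is a
plain-Python illustration with hypothetical scores for $n = 4$ cases in
$J = 2$ clusters and $d = 2$ parameters (the numbers are made up, not taken
from any fitted model):

```python
# Build B as a sum of clusterwise outer products of summed casewise
# scores.  Toy data: 4 cases, 2 clusters, 2 parameters; values are
# arbitrary illustrative numbers.
scores = [[ 0.3, -0.1],   # case 1, cluster 1
          [-0.2,  0.4],   # case 2, cluster 1
          [ 0.1,  0.1],   # case 3, cluster 2
          [-0.3, -0.2]]   # case 4, cluster 2
cluster = [0, 0, 1, 1]
J, d = 2, 2

# Clusterwise score: sum of casewise scores within each cluster
g = [[sum(scores[i][k] for i in range(4) if cluster[i] == j)
      for k in range(d)] for j in range(J)]

# B = sum_j g_j^T g_j, a d x d matrix
B = [[sum(g[j][a]*g[j][b] for j in range(J)) for b in range(d)]
     for a in range(d)]

assert B[0][1] == B[1][0]                    # B is symmetric
assert all(B[a][a] >= 0 for a in range(d))   # diagonal is nonnegative
assert abs(B[0][0] - 0.05) < 1e-9            # (0.1)^2 + (-0.2)^2
```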
\section{Application}
In this section, we illustrate how the package can be used to obtain
clusterwise robust standard errors for the \code{sleepstudy}
data \citep{belenky03} included in \pkg{lme4}. This dataset includes
$18$ subjects participating in a sleep deprivation study, where each
subject's reaction time was monitored for $10$ consecutive days.
The reaction times are continuous and nested within subjects, which
motivates the linear mixed model.
We first load package \pkg{lme4}, along with the \pkg{merDeriv} package
that is the focus of this paper.
\begin{Schunk}
\begin{Sinput}
R> library("lme4")
R> library("merDeriv")
\end{Sinput}
\end{Schunk}
Next, we fit a model with \code{Days} as the covariate, including random
intercept and slope effects that are allowed to covary. There are six
free model parameters: the fixed intercept and slope $\beta_0$ and $\beta_1$,
the random variance and covariances $\sigma_0^2$, $\sigma_1^2$,
and $\sigma_{01}$, and the residual variance $\sigma_r^2$.
\begin{Schunk}
\begin{Sinput}
R> lme4fit <- lmer(Reaction ~ Days + (Days|Subject), sleepstudy,
+ REML = FALSE)
\end{Sinput}
\end{Schunk}
This particular model can also be estimated as a structural equation
model via package \pkg{lavaan}, facilitating the comparison of our results
with a benchmark. We first convert the data to wide format and then
specify/estimate the model:
\begin{Schunk}
\begin{Sinput}
R> testwide <- reshape2::dcast(sleepstudy, Subject ~ Days,
+ value.var = "Reaction")
R> names(testwide)[2:11] <- paste("d", 1:10, sep = "")
R> ## describe latent model
R> latent <- 'i =~ 1*d1 + 1*d2 + 1*d3 + 1*d4 + 1*d5
+ + 1*d6 + 1*d7 + 1*d8 + 1*d9 + 1*d10
+
+ s =~ 0*d1 + 1*d2 + 2*d3 + 3*d4 + 4*d5
+ + 5*d6 + 6*d7 + 7*d8 + 8*d9 + 9*d10
+
+ d1 ~~ evar*d1
+ d2 ~~ evar*d2
+ d3 ~~ evar*d3
+ d4 ~~ evar*d4
+ d5 ~~ evar*d5
+ d6 ~~ evar*d6
+ d7 ~~ evar*d7
+ d8 ~~ evar*d8
+ d9 ~~ evar*d9
+ d10 ~~ evar*d10
+
+ ## reparameterize as sd
+ sdevar := sqrt(evar)
+ i ~~ ivar*i
+ isd := sqrt(ivar)'
R> ## fit model in lavaan
R> lavaanfit <- growth(latent, data = testwide, estimator = "ML")
\end{Sinput}
\end{Schunk}
The parameter estimates from the two packages (not shown) all agree to at
least three decimal places. Below, we examine the agreement of derivative computations.
\subsubsection{Scores}
The analytic casewise and clusterwise scores are obtained
via \code{estfun.lmerMod()}, using the arguments \code{level = 1} and
\code{level = 2}, respectively. The sum of scores (either casewise or
clusterwise) equals the gradient, which is close to zero at the ML estimates.
\begin{Schunk}
\begin{Sinput}
R> score1 <- estfun.lmerMod(lme4fit, level = 1)
R> gradients1 <- colSums(score1)
R> gradients1
\end{Sinput}
\begin{Soutput}
(Intercept) Days
2.39e-14 2.38e-13
cov_Subject.(Intercept) cov_Subject.Days.(Intercept)
2.94e-09 4.19e-08
cov_Subject.Days residual
8.29e-08 -7.38e-09
\end{Soutput}
\end{Schunk}
\begin{Schunk}
\begin{Sinput}
R> score2 <- estfun.lmerMod(lme4fit, level = 2)
R> gradients2 <- colSums(score2)
R> gradients2
\end{Sinput}
\begin{Soutput}
(Intercept) Days
2.39e-14 2.38e-13
cov_Subject.(Intercept) cov_Subject.Days.(Intercept)
2.94e-09 4.19e-08
cov_Subject.Days residual
8.29e-08 -7.38e-09
\end{Soutput}
\end{Schunk}
The clusterwise scores are also provided by \code{estfun.lavaan()} in
\pkg{lavaan}. Figure~\ref{fig:scorelavaan} presents a comparison
between the clusterwise scores obtained
from \code{estfun.lmerMod()} and \code{estfun.lavaan()},
showing they are nearly identical. The absolute difference between the scores
obtained from these two packages is within $1.5 \times 10^{-7}$. The
sum of squared differences is within $2.2 \times 10^{-14}$.
\begin{figure}
\caption{Comparison of scores obtained via \code{estfun.lavaan}
and \code{estfun.lmerMod}. In the left panel,
the y-axis represents analytic, clusterwise scores
obtained from \code{estfun.lmerMod}, and the x-axis represents
clusterwise scores obtained from \code{estfun.lavaan}. The dashed
line serves as a reference line at $y = x$. In the right panel,
the y-axis represents the difference between the scores obtained via
\code{estfun.lavaan} and \code{estfun.lmerMod}, and the x-axis represents
the clusterwise scores obtained from \code{estfun.lavaan}. The dashed
line serves as a reference line at $y = 0$.}
\label{fig:scorelavaan}
\begin{Schunk}
{\centering \includegraphics[width=6.5in,height=4.5in]{robse-empirical-1}
}
\end{Schunk}
\end{figure}
\subsubsection{Variance-covariance matrices}
We also compare the variance-covariance matrix calculated from
\pkg{merDeriv}'s second derivatives to the \code{vcov()} output of
\pkg{lavaan}.
The results are displayed in Table~\ref{tab:bread}. The maximum
absolute difference across all entries of the variance-covariance matrix is
0.07. This minor
difference arises because
\pkg{lavaan} applies the delta method to compute the Fisher
information matrix for
defined parameters \citep{rosv12,obe14}. In contrast,
\pkg{merDeriv} utilizes analytic expressions. This
difference is negligible given the small relative
difference (within $10^{-6}$).
\begin{table}[ht]
\centering
\scalebox{0.75}{
\begin{tabular}{llllll}
\hline
Column name & Row name & merDeriv & lavaan & Abs Diff & Relative Diff \\
\hline
(Intercept) & (Intercept) & 43.99 & 43.99 & 0.00 & 0.00 \\
Days & (Intercept) & -1.37 & -1.37 & 0.00 & 0.00 \\
cov\_Subject.(Intercept) & (Intercept) & 0.00 & 0.00 & 0.00 & -- \\
cov\_Subject.Days.(Intercept) & (Intercept) & 0.00 & 0.00 & 0.00 & -- \\
cov\_Subject.Days & (Intercept) & 0.00 & 0.00 & 0.00 & -- \\
residual & (Intercept) & 0.00 & 0.00 & 0.00 & -- \\
(Intercept) & Days & -1.37 & -1.37 & 0.00 & 0.00 \\
Days & Days & 2.26 & 2.26 & 0.00 & 0.00 \\
cov\_Subject.(Intercept) & Days & 0.00 & 0.00 & 0.00 & -- \\
cov\_Subject.Days.(Intercept) & Days & 0.00 & 0.00 & 0.00 & -- \\
cov\_Subject.Days & Days & 0.00 & 0.00 & 0.00 & -- \\
residual & Days & 0.00 & 0.00 & 0.00 & -- \\
(Intercept) & cov\_Subject.(Intercept) & 0.00 & 0.00 & 0.00 & -- \\
Days & cov\_Subject.(Intercept) & 0.00 & 0.00 & 0.00 & -- \\
cov\_Subject.(Intercept) & cov\_Subject.(Intercept) & 70366.08 & 70366.15 & 0.07 & 0.00 \\
cov\_Subject.Days.(Intercept) & cov\_Subject.(Intercept) & -2282.47 & -2282.46 & 0.01 & 0.00 \\
cov\_Subject.Days & cov\_Subject.(Intercept) & 92.56 & 92.56 & 0.00 & 0.00 \\
residual & cov\_Subject.(Intercept) & -2058.08 & -2058.08 & 0.00 & 0.00 \\
(Intercept) & cov\_Subject.Days.(Intercept) & 0.00 & 0.00 & 0.00 & -- \\
Days & cov\_Subject.Days.(Intercept) & 0.00 & 0.00 & 0.00 & -- \\
cov\_Subject.(Intercept) & cov\_Subject.Days.(Intercept) & -2282.47 & -2282.46 & 0.01 & 0.00 \\
cov\_Subject.Days.(Intercept) & cov\_Subject.Days.(Intercept) & 1838.33 & 1838.33 & 0.00 & 0.00 \\
cov\_Subject.Days & cov\_Subject.Days.(Intercept) & -115.28 & -115.28 & 0.00 & 0.00 \\
residual & cov\_Subject.Days.(Intercept) & 324.96 & 324.96 & 0.00 & 0.00 \\
(Intercept) & cov\_Subject.Days & 0.00 & 0.00 & 0.00 & -- \\
Days & cov\_Subject.Days & 0.00 & 0.00 & 0.00 & -- \\
cov\_Subject.(Intercept) & cov\_Subject.Days & 92.56 & 92.56 & 0.00 & 0.00 \\
cov\_Subject.Days.(Intercept) & cov\_Subject.Days & -115.28 & -115.28 & 0.00 & 0.00 \\
cov\_Subject.Days & cov\_Subject.Days & 184.21 & 184.21 & 0.00 & 0.00 \\
residual & cov\_Subject.Days & -72.21 & -72.21 & 0.00 & 0.00 \\
(Intercept) & residual & 0.00 & 0.00 & 0.00 & -- \\
Days & residual & 0.00 & 0.00 & 0.00 & -- \\
cov\_Subject.(Intercept) & residual & -2058.08 & -2058.08 & 0.00 & 0.00 \\
cov\_Subject.Days.(Intercept) & residual & 324.96 & 324.96 & 0.00 & 0.00 \\
cov\_Subject.Days & residual & -72.21 & -72.21 & 0.00 & 0.00 \\
residual & residual & 5957.61 & 5957.61 & 0.00 & 0.00 \\
\hline
\end{tabular}
}
\caption{Comparison between \pkg{merDeriv}
\code{vcov.lmerMod()} output and \pkg{lavaan} \code{vcov()}
output for the \code{sleepstudy} data. The first two columns
describe the specific matrix entry being compared, the third
and fourth columns show the estimates, and the fifth and sixth
columns show the absolute and relative differences.}
\label{tab:bread}
\end{table}
Finally, the clusterwise Huber-White sandwich estimator is shown in
Table~\ref{tab:sandwich}; it is comparable to the estimator
provided by \pkg{lavaan}.
The maximum absolute difference across all
components of the variance covariance matrix is
0.05. This minor difference again stems from \pkg{lavaan}'s use of the
delta method, as described above.
\begin{table}[ht]
\centering
\scalebox{0.75}{
\begin{tabular}{llllll}
\hline
Column name & Row name & merDeriv & lavaan & Abs Diff & Relative Diff \\
\hline
(Intercept) & (Intercept) & 43.99 & 43.99 & 0.00 & 0.00 \\
Days & (Intercept) & -1.37 & -1.37 & 0.00 & 0.00 \\
cov\_Subject.(Intercept) & (Intercept) & -523.40 & -523.41 & 0.01 & 0.00 \\
cov\_Subject.Days.(Intercept) & (Intercept) & -20.77 & -20.77 & 0.00 & 0.00 \\
cov\_Subject.Days & (Intercept) & -5.92 & -5.92 & 0.00 & 0.00 \\
residual & (Intercept) & 149.15 & 149.15 & 0.00 & 0.00 \\
(Intercept) & Days & -1.37 & -1.37 & 0.00 & 0.00 \\
Days & Days & 2.26 & 2.26 & 0.00 & 0.00 \\
cov\_Subject.(Intercept) & Days & -56.09 & -56.09 & 0.00 & 0.00 \\
cov\_Subject.Days.(Intercept) & Days & 0.18 & 0.18 & 0.00 & 0.00 \\
cov\_Subject.Days & Days & -1.98 & -1.98 & 0.00 & 0.00 \\
residual & Days & 78.71 & 78.71 & 0.00 & 0.00 \\
(Intercept) & cov\_Subject.(Intercept) & -523.40 & -523.41 & 0.01 & 0.00 \\
Days & cov\_Subject.(Intercept) & -56.09 & -56.09 & 0.00 & 0.00 \\
cov\_Subject.(Intercept) & cov\_Subject.(Intercept) & 45232.13 & 45232.18 & 0.05 & 0.00 \\
cov\_Subject.Days.(Intercept) & cov\_Subject.(Intercept) & 1055.38 & 1055.38 & 0.00 & 0.00 \\
cov\_Subject.Days & cov\_Subject.(Intercept) & 427.39 & 427.39 & 0.00 & 0.00 \\
residual & cov\_Subject.(Intercept) & -27398.62 & -27398.62 & 0.00 & 0.00 \\
(Intercept) & cov\_Subject.Days.(Intercept) & -20.77 & -20.77 & 0.00 & 0.00 \\
Days & cov\_Subject.Days.(Intercept) & 0.18 & 0.18 & 0.00 & 0.00 \\
cov\_Subject.(Intercept) & cov\_Subject.Days.(Intercept) & 1055.38 & 1055.38 & 0.00 & 0.00 \\
cov\_Subject.Days.(Intercept) & cov\_Subject.Days.(Intercept) & 1862.99 & 1862.99 & 0.00 & 0.00 \\
cov\_Subject.Days & cov\_Subject.Days.(Intercept) & -89.28 & -89.28 & 0.00 & 0.00 \\
residual & cov\_Subject.Days.(Intercept) & 1214.37 & 1214.37 & 0.00 & 0.00 \\
(Intercept) & cov\_Subject.Days & -5.92 & -5.92 & 0.00 & 0.00 \\
Days & cov\_Subject.Days & -1.98 & -1.98 & 0.00 & 0.00 \\
cov\_Subject.(Intercept) & cov\_Subject.Days & 427.39 & 427.39 & 0.00 & 0.00 \\
cov\_Subject.Days.(Intercept) & cov\_Subject.Days & -89.28 & -89.28 & 0.00 & 0.00 \\
cov\_Subject.Days & cov\_Subject.Days & 137.89 & 137.89 & 0.00 & 0.00 \\
residual & cov\_Subject.Days & -492.56 & -492.56 & 0.00 & 0.00 \\
(Intercept) & residual & 149.15 & 149.15 & 0.00 & 0.00 \\
Days & residual & 78.71 & 78.71 & 0.00 & 0.00 \\
cov\_Subject.(Intercept) & residual & -27398.62 & -27398.62 & 0.00 & 0.00 \\
cov\_Subject.Days.(Intercept) & residual & 1214.37 & 1214.37 & 0.00 & 0.00 \\
cov\_Subject.Days & residual & -492.56 & -492.56 & 0.00 & 0.00 \\
residual & residual & 43229.03 & 43229.03 & 0.00 & 0.00 \\
\hline
\end{tabular}
}
\caption{Comparison of the \code{sleepstudy} sandwich
estimator obtained from our \pkg{merDeriv} code with the
analogous estimator obtained from \pkg{lavaan}. The first two
columns describe the specific matrix entry being compared, the
third and fourth columns show the estimates, and the fifth and sixth
columns show the absolute and relative differences.}
\label{tab:sandwich}
\end{table}
\section{Discussion}
In this paper, we illustrated how to obtain the Huber-White sandwich
estimator of estimated parameters arising from objects of
class \code{lmerMod} with independent clusters. This required us to derive
observational (and clusterwise) scores for fixed and random
parameters (leading to the ``meat'') as well as a Fisher information matrix
that included random effect variances and
covariances (leading to the ``bread''). In the discussion below,
we address extensions to related
statistical metrics and models.
\subsection{Restricted maximum likelihood (REML)}
While we focused on linear mixed models estimated via maximum
likelihood (ML), extension to restricted maximum likelihood (REML) is
straightforward.
The central idea of REML is to maximize the likelihood function of the
variance parameters after accounting for the fixed effects. This approach
eliminates the downward bias of ML variance estimates (analogous to dividing
by $n-1$ instead of $n$ in simple variance calculations), so REML is often
used in LMM applications \citep{stroup12}.
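To make the $n$ versus $n-1$ analogy concrete, consider the simplest case of $n$ independent observations $y_i \sim N(\mu, \sigma^2)$ (a trivial model with no random effects beyond the residual). The ML estimate of $\sigma^2$ profiles out $\hat{\mu} = \bar{y}$ and is biased downward, while REML maximizes the likelihood of the $n-1$ residual contrasts:
\begin{align*}
\hat{\sigma}^2_{\mathrm{ML}} &= \frac{1}{n} \sum_{i=1}^{n} (y_i - \bar{y})^2, &
\mathrm{E}\left(\hat{\sigma}^2_{\mathrm{ML}}\right) &= \frac{n-1}{n}\, \sigma^2, \\
\hat{\sigma}^2_{\mathrm{REML}} &= \frac{1}{n-1} \sum_{i=1}^{n} (y_i - \bar{y})^2, &
\mathrm{E}\left(\hat{\sigma}^2_{\mathrm{REML}}\right) &= \sigma^2.
\end{align*}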
Returning to the \code{sleepstudy} example, package \pkg{merDeriv} can
provide scores (\code{estfun})
and the variance covariance matrix (\code{vcov}) based on the REML likelihood
function and the corresponding estimates.
The fixed effect estimates are identical under ML and REML, whereas the
corresponding \code{vcov} components are larger under REML. For example,
the ML variance for the estimated fixed intercept is
43.99,
whereas the REML variance for the estimated fixed intercept is
46.57.
\subsection{Statistical tests}
The scores derived in this paper can potentially be used to carry out
a variety of score-based statistical tests. For example, the
``fluctuation test'' framework discussed by \cite{zeihor07}, \cite{MerZei13},
and others generalizes the traditional
score (Lagrange multiplier) test and is used to detect parameter instability
across orderings of observations. The tests have been critical for the
development of model-based recursive partitioning procedures available
via packages such as \pkg{partykit} \citep{party}.
The code that we present here facilitates application of score-based
tests to linear mixed models, because the tests described in the
previous paragraph are available via object-oriented \proglang{R} packages.
That is, the aforementioned packages can be applied to
linear mixed models estimated via \pkg{lme4}, because we have supplied the generic function
\code{estfun} for models of class \code{lmerMod}.
A challenge is that much of the above theory
requires observations to be independent. For linear mixed models with
independent clusters, the tests can often be applied immediately.
However, while we can
test parameter instability across independent clusters, it is
more difficult to test for instability across correlated observations within a
cluster. A related issue, further described below, arises when we attempt to
apply sandwich estimators to models with crossed random effects.
\subsection{Models with multiple random effects terms}
The ``independence'' challenges described in the previous section
translate to the setting of models with multiple random effects terms, such as
(partially) crossed
random effects or models with multilevel nested
designs \citep[e.g., three-level models;][Ch.~2]{bates10}.
These correspond to situations where there are at
least two unique variables defining clusters (for example, clusters
defined by primary school attended and by secondary school attended).
In this case, we cannot simply sum scores within a cluster
to obtain independent, clusterwise scores. This is because observations in
different clusters on the first grouping variable may be in
the same cluster on the second grouping variable. Thus, it is unclear
how the statistical machinery developed for independent
observations (e.g., robust standard errors, instability tests) can transfer to
models with partially crossed random effects. While
our \code{estfun.lmerMod()} code can return casewise
scores and \code{vcov.lmerMod()} can
return the full variance covariance matrix of all \code{lmerMod} objects,
it is unclear how to further use these results.
The main difficulty involves construction of
the ``meat'', which is the variance of the first derivatives based on the
grouping variable. One possible solution is to create separate ``meats''
based on
different grouping variables, accounting for covariances between the meats.
A similar approach is used by \cite{ras94}
and \cite{cam11} to decompose
parameter variances when there are multiple grouping variables.
It may be possible to apply the same idea to our problem, and we plan to study this in the future.
\subsection{GLMM}
Finally, the procedures described here for scores, Hessians, Fisher information
and sandwich estimators can be extended to generalized linear mixed
models estimated via \code{glmer()}.
The main technical difficulty in this extension involves the observational
scores. In the linear mixed model, we can derive analytic scores for each
observation because we know that the marginal distribution is normal.
In the GLMM, the marginal distribution is typically unknown, and we require
integral approximation methods (e.g., quadrature or the Laplace approximation)
to obtain the scores and second derivatives.
Combination of these integral approximation methods with the
\pkg{lme4} penalized least squares approach presents a challenge
that we have not yet overcome. We plan to do so in the future.
\section{Acknowledgments}
This research was partially supported by NSF grant 1460719.
\section{Introduction}
\label{sec:IntroSec}
The effects of galaxy cluster mergers on star formation (SF) have begun to be better understood in recent years, adding depth to the relationships found in relaxed clusters between SF and clustercentric distance and local density (e.g., \citealt{Dressler1980}; \citealt{Cohen2014}, hereafter C14; \citealt{Cohen2015}; and many others). While some studies find no relationship between cluster merger activity and SF in specific clusters \citep[e.g.,][]{Metevier2000, Ferrari2005, Braglia2009, Hwang2009, Kleiner2014}, many others report such a relationship \citep[e.g.,][]{Knebe2000, Cortese2004, Ferrari2005, Johnston2008, Bravo2009, Braglia2009, Hwang2009, Ma2010, Wegner2011, Wegner2015, Sobral2015, Girardi2015, Stroe2015}. Indeed, \citetalias{Cohen2014} and \citet{Cohen2015} found that SF is statistically correlated to cluster substructure in studies of large numbers of clusters: in general, clusters with more substructure exhibit greater levels of SF.
Recent studies have investigated the relationship between cluster substructure and supercluster environment \citep[][hereafter E12b]{Einasto2015, Krause2013, Einasto2012b}. In particular, \citetalias{Einasto2012b} found that clusters in superclusters are more likely to have substructure than those that are isolated, though the correlation discussed in the paper is weak. Studies have also begun to probe the connection between supercluster environment and SF \citep[e.g.][]{Costa2013, Luparello2013, Lietzen2012}. In voids, there is a general consensus that this lower-density large-scale environment only weakly affects galaxy properties, which depend more strongly on local environment \citep{Grogin2000, Rojas2005, Patiri2006, Wegner2008, Hoyle2012, Kreckel2011, Kreckel2012}.
In superclusters, \citet{Einasto2014} recently showed that supercluster morphology is important in shaping the properties of galaxies: higher levels of SF are found in galaxies in spider-type superclusters than filament-type superclusters. Simulations by \citet{Aragon2014} suggest that the quenching of SF in clusters depends on the geometry of the large-scale surrounding structure. This supports observational work by \citet{Einasto2014}: spider-type superclusters have richer inner structure and larger numbers of filaments connecting galaxy clusters than do filament-type superclusters.
However, a definitive relationship between supercluster environment and SF has yet to be shown. We seek to develop a coherent picture connecting these four variables: cluster star-forming fraction ($f_{SF}$), amount of cluster substructure, supercluster environment density, and supercluster morphology. This paper considers the correlations between these parameters, focusing in particular on the pairwise comparison between supercluster environment and SF. Furthermore, we seek to confirm the pairwise comparisons involving cluster substructure and cluster SF, and cluster substructure and supercluster density. Finally, we investigate a potential multi-dimensional correlation among the three non-morphological variables. In \S\ref{sec:DataSec}, we introduce our cluster sample and discuss methods for determining substructure, SF, and supercluster properties; \S\ref{sec:ResultsSec} enumerates our results; and we discuss the implications of our findings in \S\ref{sec:DiscussionSec}. Throughout our analysis we assume a standard cosmology of $H_{0} = 100\:h\:\textnormal{km}\:\textnormal{s}^{-1}\:\textnormal{Mpc}^{-1}$, $\Omega_{\textnormal{m}} = 0.27$, and $\Omega_{\Lambda} = 0.73$.
\section{Data and Methods}
\label{sec:DataSec}
In this section, we describe our cluster and supercluster samples. We also explain our methods for calculating SF and substructure properties of clusters. Finally, we introduce our use of principal component analysis (PCA) in determining relationships between SF, substructure, and large-scale environment.
\subsection{Cluster sample}
\label{sec:SampleDataSec}
We use the sample of rich clusters from the group catalogue of \citet{Tempel2012}, which is based on the SDSS DR8 spectroscopic data \citep{Aihara2011}. Using SDSS data, \citet{Tempel2012} identified 77,858 groups and clusters using the friends-of-friends (FoF) method \citep{Zeldovich1982, Huchra1982}. \citet{Einasto2012a} (hereafter E12a) used the subsample of rich clusters with at least 50 members in the redshift interval $0.04 \leq z \leq 0.1$ to determine the substructure properties of the clusters. They found that 90 of these clusters contain substructure and 17 do not. In the present paper, we use this cluster sample, previously analyzed by \citetalias{Cohen2014}. We obtain galaxy stellar masses from the Max Planck Institute (MPA)/Johns Hopkins University (JHU) VAGC \citep{Tremonti2004}, and both observed and estimated total \emph{r}-band luminosities ($L_{obs}$ and $L_{tot}$, respectively) from the catalogue of \citet{Tempel2012}.
As the SDSS data are flux-limited, the FoF method potentially suffers from the bias of fainter galaxies vanishing as distance increases. This leads to differences in the luminosities of member galaxies between nearby and more distant groups. \citet{Tempel2012} partly corrected for this effect by determining a relationship between distance and the linking length used in their FoF algorithm, and then applying this relation when selecting groups at different distances. They note that, by applying this correction, their final group catalogue is quite homogeneous in richness, size, and velocity dispersion, regardless of distance. However, we note that this does not correct for the fact that groups of a given richness at lower redshift are less luminous than those of the same richness at higher redshift. This is because, spectroscopically, fainter galaxies are more easily detected in the SDSS -- and thus included as group members -- at lower redshift than higher redshift. Despite this bias, in \S\ref{sec:PairwiseSec}, we discuss how our results are not affected by differences in cluster luminosity.
To further alleviate the biases inherent in flux-limited surveys, following the prescription in \citetalias{Cohen2014}, we study only those galaxies with $M^{0.1}_{r} < -20.5$. The determination of this absolute magnitude cut follows the methods of \citet{Hwang2009}. A galaxy's \emph{r}-band absolute magnitude is calculated from its apparent magnitude $m_{r}$ via
\begin{equation}
\label{equ:MrEqu}
M^{0.1}_{r} = m_{r} - DM - K(z) - E(z),
\end{equation}
where $m_{r}$ is corrected for extinction; $DM \equiv 5\:\textnormal{log}(D_{L}/10\:\textnormal{pc})$ and $D_{L}$ is a luminosity distance; $K(z)$ is a \emph{K}-correction \citep{Blanton2007} to a redshift of 0.1, denoted by the superscript; and $E(z)$ is an evolution correction defined by $E(z) = 1.6(z - 0.1)$ \citep{Tegmark2004}. Extinction-corrected magnitudes and \emph{K}-corrections are collected from the NYU Value-Added Galaxy Catalogue \citep[VAGC;][]{Blanton2005, Padmanabhan2008}.
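As an illustrative sketch (not code from the original analysis), Equation~\ref{equ:MrEqu} can be evaluated directly once the luminosity distance, \emph{K}-correction, and extinction-corrected apparent magnitude are in hand. The numerical inputs below are invented; in practice $D_L$ follows from the assumed cosmology and $K(z)$ comes from the NYU VAGC.

```python
import math

def absolute_magnitude_r(m_r, D_L_pc, K, z):
    """Evolution- and K-corrected r-band absolute magnitude M^0.1_r.

    m_r must already be extinction-corrected; D_L_pc is the luminosity
    distance in parsecs; K is the K-correction to z = 0.1; the evolution
    correction E(z) = 1.6 (z - 0.1) follows Tegmark et al. (2004).
    """
    DM = 5.0 * math.log10(D_L_pc / 10.0)  # distance modulus
    E = 1.6 * (z - 0.1)                   # evolution correction
    return m_r - DM - K - E

# Invented example values: m_r = 17.0 at z = 0.08 with D_L ~ 237 Mpc
# and a small assumed K-correction.
M_r = absolute_magnitude_r(m_r=17.0, D_L_pc=237e6, K=0.05, z=0.08)
```

A galaxy would then be kept in the sample if the resulting $M^{0.1}_{r} < -20.5$.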
\subsection{Star formation and substructure determinations}
\label{sec:SFStrucDataSec}
\citetalias{Cohen2014} determined which galaxies are star-forming using the detection of H$\alpha$ emission, defined as the measurement of an equivalent width of at least 3 \AA\:\citep[a compromise between, e.g.,][]{Ma2008, Balogh2004, Rines2005}. Relevant equivalent width and flux measurements were retrieved from the MPA/JHU VAGC \citep{Tremonti2004}. When possible, they also used the BPT diagram \citep{Baldwin1981} that uses the emission line ratios $\log([\textnormal{OIII}]\lambda5007/\textnormal{H}\beta)$ vs. $\log([\textnormal{NII}]\lambda6583/\textnormal{H}\alpha)$ to separate star-forming galaxies from AGN and LINERs \citep{Kauffmann2003, Kewley2001}; we remove the latter two types from our analysis. A cluster's $f_{SF}$ is defined as the number of star-forming galaxies divided by the total number of galaxies in the cluster.
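A sketch of the BPT classification step, using the widely quoted \citet{Kauffmann2003} and \citet{Kewley2001} demarcation curves (this is an illustration of the standard curves, not the exact \citetalias{Cohen2014} pipeline):

```python
def bpt_class(log_nii_ha, log_oiii_hb):
    """Classify a galaxy on the BPT diagram.

    log_nii_ha  = log10([NII] 6583 / H-alpha)
    log_oiii_hb = log10([OIII] 5007 / H-beta)
    """
    # Kewley et al. (2001) theoretical maximum-starburst line
    def kewley(x):
        return 0.61 / (x - 0.47) + 1.19

    # Kauffmann et al. (2003) empirical pure-star-formation line
    def kauffmann(x):
        return 0.61 / (x - 0.05) + 1.3

    x, y = log_nii_ha, log_oiii_hb
    if x < 0.05 and y < kauffmann(x):
        return "star-forming"
    if x < 0.47 and y < kewley(x):
        return "composite"
    return "AGN/LINER"
```

Only the "star-forming" galaxies enter the numerator of $f_{SF}$; AGN and LINERs are dropped from the analysis entirely.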
Cluster substructure properties were determined by \citetalias{Einasto2012a} using multidimensional normal mixture modelling via the \emph{Mclust} package for classification and clustering \citep{Fraley2006}. \emph{Mclust} assigns each member galaxy to a component, thus determining the number of components in each cluster. \citetalias{Einasto2012a} also analyzed the substructure properties of our clusters using the Dressler-Shectman (DS or $\Delta$) test \citep{Dressler1988}. In short, for each cluster, this test measures how each galaxy's local kinematics differ from the kinematics of the cluster as a whole. The results of the test are then calibrated using Monte Carlo simulations to determine a \emph{p}-value, the probability that any observed substructure is due to chance. Thus, smaller \emph{p}-values indicate higher probabilities of substructure. Please see \S3.2 in \citetalias{Einasto2012a} for more details on the $\Delta$ test and its calibration.
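The $\Delta$ test just described can be sketched as follows. This is a simplified illustration (not the \citetalias{Einasto2012a} implementation), using the common choice of $N_{nn} \approx \sqrt{N}$ nearest neighbours for each galaxy's local group:

```python
import numpy as np

def ds_statistic(x, y, v, n_nn=None):
    """Dressler-Shectman Delta statistic (simplified sketch).

    x, y: projected positions; v: line-of-sight velocities.
    Each galaxy's local mean velocity and dispersion (from itself plus
    its n_nn nearest neighbours) are compared to the cluster values.
    """
    n = len(v)
    if n_nn is None:
        n_nn = max(int(round(np.sqrt(n))), 2)
    v_cl, sig_cl = v.mean(), v.std(ddof=1)
    pos = np.column_stack([x, y])
    deltas = np.empty(n)
    for i in range(n):
        d2 = ((pos - pos[i]) ** 2).sum(axis=1)
        idx = np.argsort(d2)[:n_nn + 1]   # galaxy i plus its n_nn neighbours
        v_loc, sig_loc = v[idx].mean(), v[idx].std(ddof=1)
        deltas[i] = np.sqrt((n_nn + 1) / sig_cl**2
                            * ((v_loc - v_cl)**2 + (sig_loc - sig_cl)**2))
    return deltas.sum()

def ds_pvalue(x, y, v, n_shuffle=1000, seed=0):
    """Monte Carlo calibration: shuffle velocities over the positions
    and count how often the shuffled Delta exceeds the observed one."""
    rng = np.random.default_rng(seed)
    obs = ds_statistic(x, y, v)
    count = sum(ds_statistic(x, y, rng.permutation(v)) >= obs
                for _ in range(n_shuffle))
    return count / n_shuffle
```

Small \emph{p}-values from `ds_pvalue` would then indicate that the observed kinematic substructure is unlikely to be due to chance.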
\subsection{Large-scale environment of clusters}
\label{sec:SuperclustDataSec}
Most clusters belong to a supercluster, and these superclusters are characterized by their total luminosity, richness, and morphology \citepalias{Einasto2012b}. To demarcate superclusters, we use the methods of \citetalias{Einasto2012b}, who calculated the galaxy luminosity density field and determined the luminosity distribution of galaxies. Supercluster membership was determined at the smoothing length of 8 $h^{-1}$ Mpc (hereafter D8), and the density $\textnormal{D}8 = 5$ (in units of mean density, $\ell_{\mathrm{mean}} = 1.65 \cdot 10^{-2} \frac{10^{10} h^{-2} L_{\odot}}{(h^{-1} \textnormal{Mpc})^3}$) is used to separate supercluster environments from the field \citep{Liivamagi2012}. Furthermore, as determined in \citet{Einasto2007}, $\textnormal{D}8 \approx 8$ separates the high-density cores of superclusters from their outskirts. We direct the reader to Appendix B in \citetalias{Einasto2012b} and references therein for more details on these density calculations. We note that a correlation exists between D8 and redshift for our sample clusters. However, any evolution in redshift should be minimal within the redshift range we study, and this bias should not affect our conclusions.
\begin{figure}
\begin{center}
\subfigure{\includegraphics[scale=0.63]{f1a.pdf}}\\
\subfigure{\includegraphics[scale=0.63]{f1b.pdf}}
\caption{Examples of a filament supercluster (top) and a spider supercluster (bottom). These are the richest superclusters of the Sloan Great Wall, SCl~027 and SCl~019, respectively (see \citealt{Einasto2014} for details). Black circles denote galaxies in clusters of at least 50 members, and gray circles represent other galaxies in the supercluster.}
\label{fig:SClsFig}
\end{center}
\end{figure}
Supercluster morphology is determined by the four Minkowski functionals \citepalias{Einasto2012b}, which are proportional to an enclosed volume, the area of the surface surrounding it, the integrated mean curvature of this surface, and its integrated Gaussian curvature. The first three functionals describe the overall structure of a supercluster via two shapefinders (planarity and filamentarity) and their ratio (shape parameter). The fourth functional describes a supercluster's inner structure. This methodology divides superclusters into four morphologies, based on the Minkowski functionals and visual appearance: spiders, multispiders, filaments, and multi-branching filaments \citep{Einasto2011}. For simplicity, in this work, we combine these classifications into two main types: spiders, which exhibit one or more high-density clumps of clusters connected by many galaxy chains; and filaments, in which high-density clumps or cores are connected by a small number of galaxy chains. Please see Appendix C in \citetalias{Einasto2012b} and references therein for details on these morphology calculations. Figure~\ref{fig:SClsFig} shows an example of a filament supercluster (top) and a spider supercluster (bottom). Galaxies in clusters of at least 50 members are shown in black, and other galaxies in the supercluster are shown in gray.
\subsection{Principal component analysis methods}
\label{sec:PCADataSec}
PCA has been widely used in astronomy for a number of purposes (see \citealt{Einasto2011} for references). PCA transforms variables of interest to a new coordinate system whose new variables are known as the principal components (PCs) of the data. These PCs are linear combinations of the original parameters, and they illustrate the variable(s) along which the original data has the most variance. The original data varies most when projected along the first PC; the direction of the second PC indicates the direction of the second greatest variance; etc. We normalize and centralize our parameters by dividing each by its standard deviation and centering each on its mean. We use PCA to investigate how several variables are potentially correlated with cluster SF. In particular, we focus not only on the two variables discussed in this work -- cluster substructure and supercluster environment -- but also include total cluster halo mass via two proxies, cluster \emph{r}-band luminosity (both $L_{obs}$ and $L_{tot}$) and total cluster stellar mass \citep[$M_{*}$; e.g.,][]{Yang2007, Andreon2010, Gonzalez2013}.
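The normalize-centralize-decompose procedure just described can be sketched as follows (an illustration on synthetic data, not the analysis code used here): standardize each variable, then eigendecompose the resulting correlation matrix so that the eigenvectors are the PCs and the eigenvalues are the variances along them.

```python
import numpy as np

def pca(data):
    """PCA on standardized variables.

    Center each column on its mean, divide by its standard deviation,
    then eigendecompose the covariance of the standardized data
    (i.e., the correlation matrix). Returns eigenvalues (variance along
    each PC, descending), eigenvectors (PC directions, as columns),
    and the data projected onto the PCs.
    """
    z = (data - data.mean(axis=0)) / data.std(axis=0, ddof=1)
    cov = np.cov(z, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    order = np.argsort(eigval)[::-1]   # sort PCs by variance explained
    eigval, eigvec = eigval[order], eigvec[:, order]
    return eigval, eigvec, z @ eigvec
```

In our setting, the columns of `data` would be cluster-level quantities such as $f_{SF}$, a substructure measure, D8, and a halo-mass proxy.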
\section{Results}
\label{sec:ResultsSec}
\subsection{Pairwise comparisons}
\label{sec:PairwiseSec}
\begin{deluxetable}{cccccc}
\tablecolumns{6}
\tablewidth{0pc}
\tablehead{
\colhead{} & \colhead{Supercluster} & \colhead{Environmental} & \colhead{} & \colhead{} & \colhead{} \\
\colhead{} & \colhead{Morphology} & \colhead{Density (D8)} & \colhead{$f_{SF}$} & \colhead{$N_{clust}$} & \colhead{$N_{gal}$} \\
\colhead{} & \colhead{(1)} & \colhead{(2)} & \colhead{(3)} & \colhead{(4)} & \colhead{(5)}}
\startdata
(1) & All & $< 8$ & $0.244 \pm 0.011$ & 68 & 1948 \\
(2) & All & $\ge 8$ & $0.202 \pm 0.014$ & 38 & 2192 \\
\hline
(3) & Filament & $5 \le \textnormal{D}8 < 8$ & $0.258 \pm 0.033$ & 13 & 467 \\
(4) & Filament & $\ge 8$ & $0.166 \pm 0.019$ & 16 & 924 \\
(5) & Spider & $5 \le \textnormal{D}8 < 8$ & $0.231 \pm 0.016$ & 24 & 922 \\
(6) & Spider & $\ge 8$ & $0.229 \pm 0.016$ & 22 & 1268
\enddata
\label{tab:SFfracsTab}
\end{deluxetable}
In this section, we investigate how cluster $f_{SF}$ is related to both the density of the clusters' surrounding environment, and the morphology of the superclusters in which the clusters reside. For convenience, all $f_{SF}$ values discussed can be found in Table~\ref{tab:SFfracsTab}, with the following columns: (1) morphology of supercluster; (2) environmental density (D8); (3) $f_{SF}$; (4) number of clusters; and (5) number of galaxies.
In Figure~\ref{fig:SFvsD8Fig}, we plot $f_{SF}$ as a function of D8. Each blue point represents a cluster, and the best fit line is calculated via a linear regression of the cluster values. The gray region represents a $1\sigma$ error on the best fit, which is calculated by performing a bootstrap resampling of all clusters, recalculating the best fit line each time, and taking the standard deviation of the resulting slopes. The error bar represents the median of the standard deviations of each individual cluster's $f_{SF}$. Each cluster's $f_{SF}$ standard deviation is calculated by resampling the galaxies in the cluster, determining a new $f_{SF}$ each time, and taking the standard deviation of these $f_{SF}$ values.
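The bootstrap procedure for the $1\sigma$ slope error can be sketched as follows (illustrative Python with hypothetical variable names, not the original analysis code): resample the clusters with replacement, refit the line each time, and take the standard deviation of the resulting slopes.

```python
import numpy as np

def bootstrap_slope_error(d8, f_sf, n_boot=2000, seed=0):
    """Best-fit slope of f_SF vs. D8 and its bootstrap 1-sigma error.

    Clusters are resampled with replacement; the line is refit on each
    resample; the error is the standard deviation of the slopes.
    """
    rng = np.random.default_rng(seed)
    n = len(d8)
    slopes = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)   # resample clusters
        slopes[b] = np.polyfit(d8[idx], f_sf[idx], 1)[0]
    slope = np.polyfit(d8, f_sf, 1)[0]
    return slope, slopes.std(ddof=1)
```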
The slope of this relation, $-0.008 \pm 0.002$, is negative at the 99.9\% confidence level with a significance of approximately $3.5\sigma$, indicating that a weak but significant inverse correlation exists between $f_{SF}$ and the density of the supercluster environment. We also calculate the average cluster $f_{SF}$ at lower large-scale densities ($\textnormal{D}8 < 8$; row 1 of Table~\ref{tab:SFfracsTab}) and in high-density supercluster cores ($\textnormal{D}8 \geq 8$; row 2). In low-density areas, we find the $f_{SF}$, $0.244 \pm 0.011$, is higher than that in high-density cores, $0.202 \pm 0.014$, a difference that is significant to 99\% confidence (as determined through bootstrap resampling). These results suggest that, in general, there exist higher values of $f_{SF}$ in clusters in low-density large-scale environments than in high-density cores of superclusters. We note as an aside that, if we remove the highest-density cluster with $\textnormal{D}8 > 20$ from our analysis, the slope of our relation remains negative with a significance at the 99.7\% confidence level.
We test whether differences in cluster mass could be the cause of the observed correlation between SF and D8. Many studies find decreasing SF with increasing cluster mass \citep[e.g.,][]{Finn2005, Homeier2005, Weinmann2005, Poggianti2006, Koyama2010}, while others find no such correlation \citep[e.g.,][]{Goto2005, Popesso2007, Balogh2010, Chung2011}. As a proxy for total halo mass, \citetalias{Cohen2014} used the observed stellar mass of cluster galaxies, $M_{*}$, obtained from the MPA/JHU VAGC \citep{Tremonti2004}. We use a similar metric, but multiply $M_{*}$ by the ratio of $L_{tot}$ to $L_{obs}$ to obtain an estimated total cluster stellar mass, $M_{*}^{tot} = M_{*} \times (L_{tot}/L_{obs})$.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.63]{f2.pdf}
\caption{$f_{SF}$ versus D8. Blue points represent individual clusters, and the gray region represents a $1\sigma$ error on the best fit solid line. The error bar is the median standard deviation of each individual cluster's $f_{SF}$. In general, clusters in lower-density environments exhibit higher values of $f_{SF}$.}
\label{fig:SFvsD8Fig}
\end{center}
\end{figure}
Our method to test the effect of cluster mass is as follows. In short, in each bin of D8 we weight the clusters to have the same $M_{*}^{tot}$ distribution as the sample as a whole, and use these weights to calculate measurement errors for our linear regression. This effectively removes any effect of cluster mass on our $f_{SF}$ measurements. First, we calculate each cluster's $M_{*}^{tot}$ and determine the normalized distribution of these cluster masses. Next, for each bin of D8, we weight the bin's $M_{*}^{tot}$ values so their normalized distribution matches that of our entire sample. Each cluster is assigned the weight of its $M_{*}^{tot}$ bin. Finally, we apply these weights to the clusters in our linear regression analysis. We find that the slope of our relation actually becomes slightly more negative, decreasing to $-0.011 \pm 0.003$, when controlling for cluster mass. Furthermore, the significance of the correlation increases slightly to $3.7\sigma$. This suggests that a relation between large-scale density and cluster mass is not the cause of the observed correlation between SF and D8. We also perform the same weighting procedure using \emph{r}-band luminosity (both $L_{obs}$ and $L_{tot}$) as a proxy for halo mass, and the results remain the same. We further note that, when plotting $f_{SF}$ as a function of cluster mass, we observe no correlation. Finally, we perform the same weighting procedure using number of cluster galaxies instead of $M_{*}^{tot}$. In this case, the significance of the correlation drops slightly, but still remains above $3\sigma$.
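The weighting scheme can be sketched as follows (an illustrative reconstruction, not the authors' code; the quantile binning and the bin counts are our assumptions): within each D8 bin, each cluster is weighted by the ratio of the full-sample mass-bin fraction to the local mass-bin fraction.

```python
import numpy as np

def mass_matched_weights(d8, mass, n_d8_bins=4, n_mass_bins=5):
    """Weights that match each D8 bin's (binned) mass distribution
    to the full-sample mass distribution."""
    # bin the masses by quantile over the full sample
    mass_edges = np.quantile(mass, np.linspace(0, 1, n_mass_bins + 1))
    mass_bin = np.clip(np.digitize(mass, mass_edges[1:-1]), 0, n_mass_bins - 1)
    target = np.bincount(mass_bin, minlength=n_mass_bins) / len(mass)

    # bin the clusters by D8
    d8_edges = np.quantile(d8, np.linspace(0, 1, n_d8_bins + 1))
    d8_bin = np.clip(np.digitize(d8, d8_edges[1:-1]), 0, n_d8_bins - 1)

    w = np.ones(len(d8))
    for b in range(n_d8_bins):
        in_bin = d8_bin == b
        local = np.bincount(mass_bin[in_bin], minlength=n_mass_bins) / in_bin.sum()
        # upweight under-represented mass bins, downweight over-represented ones
        ratio = np.divide(target, local, out=np.zeros_like(target),
                          where=local > 0)
        w[in_bin] = ratio[mass_bin[in_bin]]
    return w
```

These weights would then be passed to the weighted linear regression of $f_{SF}$ on D8.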
We now test this relationship in superclusters of spider and filament morphology separately. Note that we only include clusters within superclusters, i.e., with $\textnormal{D}8 > 5$. Figure~\ref{fig:SFvsD8MorphFig} shows $f_{SF}$ as a function of D8 for clusters in spider (blue, right-hatched) and filament (red, left-hatched) superclusters. The hatched regions represent $1\sigma$ errors on the best fit lines, determined, as in Figure~\ref{fig:SFvsD8Fig}, by bootstrapping over clusters of each type. Interestingly, we observe the same inverse correlation between $f_{SF}$ and D8 only for filament superclusters: the slope of this relation is negative at the 99.8\% confidence level, and $f_{SF}$ at lower densities, $0.258 \pm 0.033$, is higher than that at higher densities, $0.166 \pm 0.019$, with greater than 99\% confidence (rows 3 and 4 of Table~\ref{tab:SFfracsTab}, respectively). In spider superclusters, there is no significant correlation between $f_{SF}$ and D8.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.63]{f3.pdf}
\caption{$f_{SF}$ versus D8 in spider (blue, right-hatched) and filament (red, left-hatched) superclusters. Points represent individual clusters, and the hatched regions represent $1\sigma$ errors on the best fit lines. The inverse correlation between $f_{SF}$ and D8 is predominantly due to filament superclusters. Also, we observe higher $f_{SF}$ values in spider superclusters than filament superclusters at high environmental densities.}
\label{fig:SFvsD8MorphFig}
\end{center}
\end{figure}
We also examine $f_{SF}$ in clusters in spider and filament superclusters at high densities (rows 4 and 6 of Table~\ref{tab:SFfracsTab}, respectively). The value of $f_{SF}$ in spider superclusters with $\textnormal{D}8 > 8$, $0.229 \pm 0.016$, is higher than that in filament superclusters with $\textnormal{D}8 > 8$, $0.166 \pm 0.019$, with greater than 99\% confidence. This difference is apparent in Figure~\ref{fig:SFvsD8MorphFig}. In low density outskirts (rows 3 and 5 of Table~\ref{tab:SFfracsTab}), there is no difference in $f_{SF}$ between clusters in spider and filament superclusters.
The specifics of the FoF algorithm used by \citet{Tempel2012} introduce a complication into our analysis. As the FoF algorithm builds a given cluster, the higher density of galaxies within superclusters makes it easier for the algorithm to include galaxies at larger cluster radii. Since galaxies on the outskirts of clusters typically exhibit more SF, this could artificially enhance cluster $f_{SF}$ values at higher supercluster densities. We test for this possibility in two ways:
\begin{enumerate}[topsep=0pt,itemsep=-1ex,partopsep=1ex,parsep=1ex]
\item We first measure $f_{SF}$ against the ratio of virial radius ($r_{vir}$, derived as the projected harmonic mean radius by \citealt{Tempel2012}) to $L_{obs}$, serving as a proxy for a measurement of cluster radius based solely on cluster mass \citep[e.g.,][]{Yang2007}. Clusters with high $r_{vir}$ for their mass-derived radii (via $L_{obs}$) could have enhanced values of $f_{SF}$ due to galaxies in outskirts included by the FoF algorithm. We find no correlation between these quantities.
\item Second, we measure $f_{SF}$ against the surface density of cluster galaxies ($L_{obs}/\pi r_{vir}^2$). Clusters with lower surface densities may be artificially expanded by the FoF algorithm, including galaxies in outskirts with higher SF and thus exhibiting enhanced values of $f_{SF}$. Again, we find no correlation between these quantities.
\end{enumerate}
We perform these tests not only with $L_{obs}$ as a proxy for cluster radius and mass, but also with $L_{tot}$ and $M_{*}^{tot}$. The results of the tests suggest that any extended tails of galaxies included in clusters due to the FoF algorithm are not artificially enhancing cluster $f_{SF}$.
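A rank-correlation check of this kind -- testing whether $f_{SF}$ correlates with a proxy such as the surface density $L_{obs}/\pi r_{vir}^2$ -- can be sketched as follows, here on synthetic, deliberately uncorrelated data (all names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)

def spearman_rho(x, y):
    """Spearman rank correlation, computed from ranks with numpy only."""
    rx = np.argsort(np.argsort(x))  # 0-based ranks of x
    ry = np.argsort(np.argsort(y))  # 0-based ranks of y
    return np.corrcoef(rx, ry)[0, 1]

# Synthetic surface densities and SF fractions with no built-in relation.
surface_density = rng.lognormal(mean=0.0, sigma=0.5, size=300)
f_sf = rng.uniform(0.1, 0.4, size=300)

rho = spearman_rho(surface_density, f_sf)
# |rho| close to 0 -> no evidence of an FoF-induced enhancement of f_SF.
```

A rank statistic is a natural choice here because the surface-density proxy spans several orders of magnitude, so a Pearson coefficient would be dominated by the tail.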
All of these results suggest that 1) there is a significant inverse correlation between $f_{SF}$ and D8, dominated by clusters in filament superclusters; and 2) in high-density cores of superclusters, spider superclusters exhibit higher values of $f_{SF}$ than filament superclusters.
\begin{deluxetable}{lcccc}
\tablecolumns{4}
\tablewidth{0pc}
\tablehead{
\colhead{Variable} & \colhead{PC1} & \colhead{PC2} & \colhead{PC3} & \colhead{PC4}}
\startdata
$f_{SF}$ & $\hphantom{-}0.173$ & $\hphantom{-}0.730$ & $-0.662$ & $-0.008$ \\
$-\log(p_{\Delta})$ & $-0.302$ & $\hphantom{-}0.664$ & $\hphantom{-}0.656$ & $-0.195$ \\
D8 & $-0.650$ & $-0.159$ & $-0.336$ & $-0.662$ \\
$L_{obs} [10^{10} h^{-2} L_{\odot}]$ & $-0.675$ & $\hphantom{-}0.042$ & $-0.139$ & $\hphantom{-}0.723$ \\
\hline
Std. dev. & $\hphantom{-}1.403$ & $\hphantom{-}1.161$ & $\hphantom{-}0.714$ & $\hphantom{-}0.415$ \\
Prop. of var. & $\hphantom{-}0.492$ & $\hphantom{-}0.337$ & $\hphantom{-}0.127$ & $\hphantom{-}0.043$ \\
Cum. prop. & $\hphantom{-}0.492$ & $\hphantom{-}0.830$ & $\hphantom{-}0.957$ & $\hphantom{-}1.000$
\enddata
\label{tab:PCATab}
\end{deluxetable}
\subsection{Principal component analysis}
\label{sec:PCASec}
As discussed in \S\ref{sec:PCADataSec}, we use PCA to determine any correlations between cluster $f_{SF}$, amount of cluster substructure, density of supercluster environment, and total cluster mass. We use two measurements of amount of substructure from \citetalias{Cohen2014}: number of components; and the results from the $\Delta$ test, which in this case is the negative of $\log(p_{\Delta})$. We also use two proxies for total cluster mass: \emph{r}-band luminosity (both $L_{obs}$ and $L_{tot}$) and $M_{*}^{tot}$.
Our PCA results are consistent whether we use $L_{obs}$, $L_{tot}$, or $M_{*}^{tot}$ as a proxy for total cluster mass. Furthermore, our results remain the same whether we use $-\log(p_{\Delta})$ or number of components as a measure of amount of substructure. Thus, we present and discuss only the results when using $L_{obs}$ and $-\log(p_{\Delta})$. Table~\ref{tab:PCATab} displays the results of our analysis. It shows the values of the four PCs for our four variables; and the standard deviation, proportion of variance, and cumulative proportion for these PCs.
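In practice the PCA amounts to an eigen-decomposition of the correlation matrix of the standardized variables. A minimal sketch, using random stand-in data rather than the cluster catalogue (all names are ours):

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for the four standardized cluster variables
# (f_SF, -log p_Delta, D8, L_obs); 500 mock clusters.
X = rng.normal(size=(500, 4))
X[:, 3] += 0.6 * X[:, 2]          # build in a D8--L_obs correlation

Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each variable
C = np.corrcoef(Z, rowvar=False)           # 4x4 correlation matrix

eigvals, eigvecs = np.linalg.eigh(C)       # ascending eigenvalues
order = np.argsort(eigvals)[::-1]          # sort PCs by variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Columns of eigvecs are the PC loadings (the table rows);
# the summary rows of the table follow directly:
std_dev = np.sqrt(eigvals)                 # "Std. dev."
prop_var = eigvals / eigvals.sum()         # "Prop. of var."
cum_prop = np.cumsum(prop_var)             # "Cum. prop."
```

Note that the eigenvalues of a correlation matrix sum to the number of variables, so the cumulative proportion always ends at exactly 1, as in the last column of Table~\ref{tab:PCATab}.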
The cumulative proportion shows that the first two PCs account for 83\% of the variance in these cluster properties, with each contributing substantially (49\% and 34\%, respectively). Thus, we will focus primarily on the first two PCs. The PC1 values of D8 and $L_{obs}$ are close in magnitude and of the same sign, confirming the correlation between these two variables. Since the value of $f_{SF}$ is of opposite sign, this suggests that $f_{SF}$ is weakly anti-correlated with D8 and $L_{obs}$. Furthermore, the PC2 value of D8 -- also of opposite sign to $f_{SF}$ -- is approximately four times larger than that of $L_{obs}$. This suggests that D8 rather than $L_{obs}$ is more strongly related to $f_{SF}$. This agrees with our analysis in \S\ref{sec:PairwiseSec}, where we confirm that differences in $M_{*}^{tot}$ (i.e., cluster \emph{r}-band luminosity) are not the cause of the observed correlation between $f_{SF}$ and D8.
The PC2 values of $f_{SF}$ and amount of substructure are of similar magnitude and of the same sign, confirming the direct correlation between these variables found in \citetalias{Cohen2014}. While the PC1 values of these variables are of opposite sign, the correlation suggested by the PC2 values is more robust: the values of these variables along PC2 are closer in magnitude to each other than those along PC1; and the D8 and $L_{obs}$ values along PC2 are much lower than those along PC1. These values, compared to the others discussed, suggest that $f_{SF}$ is more strongly related to amount of substructure than to the other variables discussed.
Finally, the PC1 values of D8 and amount of substructure suggest a direct correlation between these variables, which agrees with the findings in \citetalias{Einasto2012a} (though they note that this correlation is weak). Furthermore, PC1 shows that amount of substructure is also correlated with $L_{obs}$. Intuitively, this is expected: a richer cluster will have a higher luminosity and more opportunity for substructure to be detectable. This effect acts counter to the main result of this paper -- the inverse correlation between D8 and $f_{SF}$ -- and the result of \citetalias{Cohen2014} -- the direct correlation between substructure and $f_{SF}$ (see Figures 5 and 6 in that paper). In other words, as D8 and luminosity increase, the results from this paper and \citetalias{Cohen2014} suggest that $f_{SF}$ (and thus amount of substructure) should decrease, not increase. This, however, bolsters the correlation we find between D8 and $f_{SF}$ -- it must be significant enough to counter the weak correlation between D8 and amount of substructure.
\section{Discussion}
\label{sec:DiscussionSec}
We find a significant inverse correlation between the density of supercluster environment and the amount of SF within galaxy clusters. While this could in principle be an indirect result of a correlation between supercluster density and cluster substructure, we find this not to be the case. Rather, both cluster substructure and supercluster environment are independently related to a cluster's SF, and, while these effects oppose each other, the influence of cluster substructure appears stronger than that of supercluster environment. These results are not simply due to the correlation between D8 and cluster mass, luminosity, or richness.
We also find that supercluster morphology is important in affecting cluster SF: the relation between supercluster density and SF is observed only in filament rather than spider superclusters. Furthermore, SF in spider superclusters is higher at high densities compared to filament superclusters. When we consider these differences between filament and spider superclusters, a coherent picture emerges from the complexity of effects described above. Spider superclusters have richer inner structure, and are dynamically younger, than filament superclusters \citepalias[e.g.,][]{Einasto2012b}. We expect to find more SF in dynamically younger systems, and we indeed see this in the high-density cores of spider superclusters.
In galaxy clusters, more structure indicates a less relaxed, younger system (e.g., \citealt{Bird1993}; \citealt{Knebe2000}; \citetalias{Cohen2014}; \citealt{Cohen2015}). Such clusters are more likely to live in superclusters with richer inner structure where group mergers occur more easily than in superclusters with simple inner structure. Thus, combining our results from this work with those of \citetalias{Cohen2014}, we can explain in more detail the effects of cluster substructure and supercluster environment on SF. As clusters form hierarchically from smaller groups, the dynamically younger systems exhibit more SF, since the SF in these systems has had less time to be quenched by various gravitational and hydrodynamical processes (see \citealt{Boselli2006} for a review of such mechanisms). Thus, high SF is more likely to be found in clusters residing in the high-density environments of spider superclusters than in those of filament superclusters. Additionally, this shows that high-density cores of superclusters are a special environment for clusters. For instance, they may be collapsing \citep{Einasto2015, Gramann2015, Einasto2016}, possibly affecting the properties of galaxy clusters and their galaxy populations. This interesting result of our study emphasizes the role of supercluster morphology in shaping the properties of galaxies and groups/clusters in them.
The main result of this work agrees with \citet{Lietzen2012}, who found that elliptical galaxies are more prevalent in the higher-density environments of superclusters than at lower densities. Furthermore, \citet{Luparello2013} used the galaxy spectra parameter $D_{n}4000$ to show that galaxies in groups in superclusters are systematically \emph{older} than those in lower-density environments. They found that this result holds even though the groups themselves have higher velocity dispersions and are therefore dynamically \emph{younger} than groups elsewhere. These results agree well with the interpretation from this work explained above.
We note that \citet{Costa2013} found no correlation between the mean stellar ages of superclusters and the shape parameter of superclusters. This is not in conflict with our results, since we used information about the inner structure of superclusters to divide them into two morphological classes, while \citet{Costa2013} only used the shape parameter to characterize the outer shape of superclusters. \citet{Einasto2014} showed, in agreement with \citet{Costa2013}, that the galaxy content of superclusters depends only weakly on the overall shape of superclusters.
Our results, while significant, still exhibit substantial scatter. This scatter stems from the complicated dynamics affecting cluster $f_{SF}$, many aspects of which are discussed here. One variable we have not taken into account is the stage of formation of a cluster or supercluster. Studies have shown that clusters with similar degrees of apparent substructure can exhibit different $f_{SF}$ measurements due to different stages of cluster merger activity \citep[e.g.,][]{Hwang2009}. Future studies including the effects of cluster or supercluster age could succeed in reducing the scatter in the results discussed in this work.
\acknowledgements
We thank the SDSS team, as well as the MPA/JHU and NYU researchers, for the publicly available data releases and VAGCs. Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is http://www.sdss.org/. The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions. The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington. ME and JV were supported by the ETAG project
IUT26-2 and by the Centre of Excellence ``Dark side of the Universe'' (TK133) financed by the European Union through the European Regional Development Fund.
\section{Introduction}
A central topic in modern probability and statistical physics is the study of {\em random walks in random media}. Among the most important classes of such models is the so-called random walk in i.i.d.~random environment on $\Z^d$. While these models have been studied for decades (see e.g.~\cite{Zeit04}), there are a number of fundamental problems that remain open in dimensions $d\ge 2$. These problems include providing verifiable conditions under which the random walk is ballistic (and/or has Brownian motion as its scaling limit). Existing results have largely been restricted to situations where the random environment is {\em elliptic}, i.e.~where steps to all nearest neighbours are possible (see e.g.~\cite{BRS16} and the references therein). In other contexts (such as random walk on percolation clusters) where ellipticity may or may not be assumed, a crucial ingredient for establishing asymptotic behaviour of the walk is a property called {\em reversibility} (see e.g.~\cite{BBT2016} and the references therein). Except in trivial cases, random walks in i.i.d.~random environments are {\em not} reversible.
We will study random walks in i.i.d.~random environments (RWRE) that are non-elliptic, such as in the following example.
\medskip
\begin{EXA}[$2$-dimensional orthant model]
\label{exa:NE_SW}
At each site $x\in \Z^2$ independently toss a (possibly biased) coin. If the toss results in heads (probability $p$), insert one directed edge pointing up $\uparrow$ (to $x+(0,1)$) and one pointing right $\rightarrow$ (to $x+(1,0)$). Otherwise (probability $1-p$) insert directed edges pointing down $\downarrow$ and left $\leftarrow$ (see Figure \ref{fig:orthant_env}). Now start a random walk at the origin $o$ that evolves by choosing uniformly from available arrows at its current location.
\end{EXA}
\medskip
\begin{figure}
\begin{center}
\includegraphics[scale=.5]{orthantpoint6.eps}
\end{center}
\vspace{-1cm}
\caption{A finite region of the random environment in Example~\ref{exa:NE_SW} for $p=.6$.
}
\label{fig:orthant_env}
\end{figure}
Standard techniques used to establish ballistic behaviour in elliptic environments (see e.g.~\cite{Zeit04}) do not apply to this model (as it is not elliptic!).
It is proved in \cite{RWDRE} that the random walk in Example \ref{exa:NE_SW} has an asymptotic velocity $v[p]$ that is monotone in $p$, and that $v[\frac12]=0$ by symmetry. It is also established that the walk is transient in direction $\nearrow$ when $p>p_c^{\smallOTSP}$ (where $p_c^{\smallOTSP}$ is the critical probability for oriented site percolation on the triangular lattice on $\Z^2$), using the fact that for such $p$, almost surely the origin is connected to only finitely many sites in direction $\swarrow$. These results do not establish ballisticity (i.e.~that $v[p]$ is non-zero) for any non-trivial value of $p$.
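The model is straightforward to simulate, which is how estimates such as those in Figure \ref{fig:vorthant} below are produced. A self-contained sketch (our own illustrative code, sampling the i.i.d.~environment lazily as the walker explores):

```python
import random

def orthant_walk(p, n_steps, seed=0):
    """Simulate the 2d orthant model: each site independently gets
    arrows {up, right} with probability p, else {down, left}; the
    walker picks uniformly among the arrows at its current site."""
    rng = random.Random(seed)
    env = {}                      # lazily sampled i.i.d. environment
    x, y = 0, 0
    for _ in range(n_steps):
        if (x, y) not in env:
            env[(x, y)] = rng.random() < p       # True = NE site
        if env[(x, y)]:
            dx, dy = rng.choice([(1, 0), (0, 1)])    # right or up
        else:
            dx, dy = rng.choice([(-1, 0), (0, -1)])  # left or down
        x, y = x + dx, y + dy
    return x, y

# Rough empirical estimate of the speed in direction (1,1), i.e. v[p]·(1,1):
n = 2000
x, y = orthant_walk(p=0.8, n_steps=n)
speed_ll = (x + y) / n
```

Since every step changes $X_n\cdot(1,1)$ by exactly $\pm 1$, the estimate always lies in $[-1,1]$, and at $p=1$ (respectively $p=0$) it equals $+1$ (respectively $-1$).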
When applied to Example~\ref{exa:NE_SW}, our main results (Theorem \ref{thm:main}, and Propositions \ref{prp:E+} and \ref{prp:transverse}) imply that $v[p]\cdot(1,1)>0$ for $p>p_c^{\smallOTSP}$, where $p_c^{\smallOTSP}\approx 0.5956$ \cite{DeBE,JG}\footnote{The best rigorous bounds that we are aware of are that $.5730<p_c^{\smallOTSP}<0.7491$ \cite{HS_DRE2,BBS}},
and by symmetry that $v[p]\cdot(1,1)<0$ for $p<1-p_c^{\smallOTSP}$. Moreover in this regime the random walk obeys an invariance principle with deterministic variance, for almost every environment.
\subsection{The model and main results}
\label{sec:model}
For fixed $d\ge 2$, let $\mc{E}=\{\pm e_i: i=1,\dots,d\}$ be the set of unit vectors in $\Z^d$, and let $\mc{E}_+=\{+e_i:i=1,\dots, d\}$ denote the standard basis vectors. We use a graphical shorthand for subsets of $\mc{E}$, so that (for example) $\NE=\mc{E}_+$. Let $\mc{P}=M_1(\mc{E})$ denote the set of probability measures on $\mc{E}$, and let $\mu$ be a probability measure on $\mc{P}$. If $\gamma\in \mc{P}$ we will abuse notation and write $\mu(\gamma)$ for $\mu(\{\gamma\})$. Let $\Omega=\mc{P}^{\Z^d}$ be equipped with the product measure $\nu=\mu^{\otimes \Z^d}$ (and the corresponding product $\sigma$-algebra). An environment $\omega=(\omega_x)_{x\in \Z^d}$ is an element of $\Omega$. We write $\omega_x(e)$ for $\omega_x(\{e\})$. Note that $(\omega_x)_{x\in \Z^d}$ are i.i.d.~with law $\mu$ under $\nu$. An environment is {\em $2$-valued} if $\mu$ is supported on exactly 2 points (such as in Example \ref{exa:NE_SW}). In this case we take the convention that $\mu$ is supported on $\{\gamma^{\sss(1)},\gamma^{\sss(2)}\}$ with $p=\mu(\gamma^{\sss(1)})$.
The random walk in environment $\omega$ is a time-homogeneous (but not necessarily irreducible) Markov chain with transition probabilities from $x$ to $x+e$ defined by
\begin{equation*}
p_{\omega}(x,x+e)=
\omega_x(e).
\end{equation*}
Given an environment $\omega$, we let $\mathbb{P}_{\omega}$ denote the (quenched) law of this random walk $X_n$, starting at the origin. Let $P$ denote the law of the annealed/averaged random walk, i.e.~$P(\cdot, \star):=\int_{\star}\mathbb{P}_{\omega}(\cdot)d\nu$.
For $\gamma\in \mc{P}$, let $\mc{S}(\gamma)=\{e\in \mc{E}:\gamma(e)>0\}\subset \mc{E}$ denote the support of $\gamma$. For $\mc{A}\subset \mc{E}$ we will write $\mu(\mc{A})$ to mean $\mu(\{\gamma\in \mc{P}:\mc{S}(\gamma)=\mc{A}\})$, i.e.~the $\mu$-measure of the set of probabilities on $\mc{E}$ whose support is $\mc{A}$. For each environment $\omega$ we let $\mc{G}_x(\omega)=\mc{S}(\omega_x)$ and associate a directed graph $\mc{G}(\omega)$ with vertex set $\Z^d$ and edge set $e(\mc{G})$ given by
\begin{equation*}
(x,x+u)\in e(\mc{G}) \iff u\in\mc{G}_x.
\end{equation*}
Note that under $\nu$, the $(\mc{G}_x)_{x\in \Z^d}$ are i.i.d.~subsets of $\mc{E}$. The directed graph $\mc{G}(\omega)$ is the entire graph $\Z^d$ (with directed edges), precisely when the environment is {\em elliptic}, i.e.~$\omega_x(u)>0$ for each $u \in \mc{E}, x\in \Z^d$ (i.e.~$\mu(\mc{E})=1$, using our other notation). Much of the current literature on random walk in random media assumes either (uniform) ellipticity or reversibility, neither of which hold for Example \ref{exa:NE_SW}.
On the other hand, given a directed graph $\mc{G}=(\mc{G}_x)_{x\in \Z^d}$ (with vertex set $\Z^d$, and such that $\mc{G}_x\ne \varnothing$ for each $x$), we can define a {\em uniform} random environment $\omega=(\omega_x(\mc{G}_x))_{x\in \Z^d}$ on $\mc{G}$. Let $|A|$ denote the cardinality of $A$, and set
\[\omega_x(e)=\begin{cases}
|\mc{G}_x|^{-1}, & \text{ if }e \in \mc{G}_x\\
0, & \text{otherwise}.
\end{cases}\]
The corresponding RWRE then moves by choosing uniformly from available steps at its current location.
This natural class of RWRE will henceforth be referred to as {\em uniform RWRE}.
In particular, the 2-dimensional orthant model (Example \ref{exa:NE_SW}) is the uniform RWRE on the random directed graph which has $\mathcal{G}_x=\NE$ with probability $p$, and $\mathcal{G}_x=\SW$ with probability $1-p$.
For any $n$, we call a sequence $(y_0, \dots, y_n)$ with each $y_i \in\Z^d$ a \emph{ $\mc{G}$-path} if $(y_i,y_{i+1})\in e(\mc{G})$ for each $i=0,1,\dots, n-1$. For any site $x\in \Z^d$, we let its \emph{forward cluster} $\mathcal{C}_x$ be the set of sites $y\in \Z^d$
such that there exists an $n$ and a $\mc{G}$-path $(y_0, \dots, y_n)$ such that $y_0=x$ and $y_n=y$.
We say that $V$ is an orthogonal set if $u\cdot v=0$ for every distinct pair $u,v\in V$. Instead of ellipticity, we will assume the following properties:
\medskip
\begin{COND}
\label{cond:orthogonal}
There exists an orthogonal set $V\subset \mc{E}$ such that $\mu(\mc{S}\cap V\ne \varnothing)=1$.
\end{COND}
\medskip
\begin{COND}
\label{cond:ddim}
There exists an orthogonal set $V'\subset \mc{E}$ with $|V'|=d$ such that $\mu(e\in\mc{S})>0$ for every $e\in V'$.
\end{COND}
\medskip
Condition \ref{cond:orthogonal} requires that there is a set of orthogonal directions such that from any site the walker is able to follow at least one of these directions. This assumption is precisely what is required to ensure that the random walker never gets stuck in a finite set (see \cite[Theorem 1.2]{RWDRE}). In the presence of Condition \ref{cond:orthogonal}, Condition \ref{cond:ddim} is equivalent to saying that the walk is truly $d$-dimensional.
Note that Example \ref{exa:NE_SW} satisfies Condition \ref{cond:orthogonal} with $V=\{-e_1,e_2\}=\NW$ or equivalently with $V=\{e_1,-e_2\}=\SE$. It clearly also satisfies Condition \ref{cond:ddim} (with $d=2$).
The RWRE literature contains a number of abstract conditions that imply ballisticity, which we discuss in Section \ref{sec:elliptic}. These can be difficult to verify directly in concrete examples. We turn first to our version of such an abstract condition. Following that we will turn to local conditions, that may be directly verified, and which imply the abstract one.
Fix $d\ge 2$ and $\ell\in{\mathbb R}^d\setminus\{o\}$. For $\kappa>0$, we consider the cone
$$
\mc{K}_{\kappa,\ell}=\{u\in{\mathbb R}^d: u\cdot\ell\ge \kappa\| u\|\}.
$$
Let $R_n=|\{x: X_m=x \text{ for some }m\le n\}|$ denote the range of the walker up to time $n$. Our first main result states that, if the forward cluster $\mc{C}_o$ is contained in a cone (whose apex is far from $o$ with only low probability), and the range of the walker is not too small, then the walk is ballistic and satisfies an annealed/averaged invariance principle.
\medskip
\begin{THM}
\label{thm:main}
Let $d\ge 2$ and assume Conditions \ref{cond:orthogonal} and \ref{cond:ddim}. Let $\alpha, \beta, \kappa>0$ and take $\ell\in{\mathbb R}^d\setminus\{o\}$. Assume the following conditions:
\begin{itemize}
\item[(a)] There exist $C_1,\gamma_1>0$ such that \\
$\nu(\mc{C}_o\subset -n\ell + \mc{K}_{\kappa,\ell})\ge 1-C_1e^{-\gamma_1 n^\beta}$ for all $n \in {\mathbb N}$;
\item[(b)] For every $C>0$, there exist $C_2,\gamma_2>0$ such that \\
$P(R_n\le Cn^{\alpha})\le C_2e^{-\gamma_2 n^\beta}$, for all $n\in {\mathbb N}$.
\end{itemize}
Then there exist $v\in {\mathbb R}^d\setminus \{o\}$ and a non-negative definite
matrix $\Sigma\in {\mathbb R}^{d\times d}$ such that $P(n^{-1}X_n \rightarrow v)=1$ and under the annealed/averaged measure $P$,
\begin{align}
\Big(\frac{X_{\lfloor nt\rfloor}-vnt}{\sqrt{n}}\Big)_{t\ge 0}\Rightarrow(B_t)_{t\ge 0},\qquad \textrm{ as }n \rightarrow \infty,\nonumber
\end{align}
where $B_t$ is a $d$-dimensional Brownian motion with covariance matrix $\Sigma$, and $\Rightarrow$ denotes weak convergence. Moreover $v\cdot\ell >0$.
\end{THM}
\medskip
For $\ell'\in {\mathbb R}^d\setminus\{o\}$ set $X'_n=X_n\cdot \ell'$ and call this the \emph{transverse walk}. In many settings we will be able to conclude that the range of the walker satisfies condition (b) of the theorem by proving that it holds for the range of such a transverse walk.
For Example \ref{exa:NE_SW}, if we take $\ell=(1,1)$ and $\ell'=(1,-1)$ then the transverse walk is a simple symmetric random walk on $\Z$, so (b) holds (for $\alpha<1/2$). We will show that for $p>p_c^{\smallOTSP}\approx 0.5956$ (a) also holds. In fact for this model we get a {\em quenched} FCLT by applying \cite[Theorem 1.1]{RS09}. Our results leave unresolved the question of whether Example \ref{exa:NE_SW} is ballistic when $\frac12<p\le p_c^{\smallOTSP}$, and even whether the speed in direction $(1,1)$ is strictly monotone for $p>p_c^{\smallOTSP}$. We conjecture that it is strictly monotone in $p\in[0,1]$ (see Figure \ref{fig:vorthant}). When $p=\frac12$, we conjecture that infinitely many sites are visited infinitely often by the walk.
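The claim about the transverse walk can be seen directly: at an $\NE$ site the step is $e_1$ or $e_2$ with probability $\frac12$ each, so $X_n\cdot(1,-1)$ changes by $+1$ or $-1$; at a $\SW$ site the step is $-e_1$ or $-e_2$, again giving $\pm 1$ with probability $\frac12$ each. A quick empirical check (our own sketch, not part of the proofs):

```python
import random

def transverse_increments(p, n_steps, seed=1):
    """Increments of X_n·(1,-1) for the 2d orthant model; at every
    site they are +/-1 with probability 1/2 each, whatever p is."""
    rng = random.Random(seed)
    env, pos, incs = {}, (0, 0), []
    for _ in range(n_steps):
        if pos not in env:
            env[pos] = rng.random() < p          # True = NE site
        step = (rng.choice([(1, 0), (0, 1)]) if env[pos]
                else rng.choice([(-1, 0), (0, -1)]))
        incs.append(step[0] - step[1])           # increment of X·(1,-1)
        pos = (pos[0] + step[0], pos[1] + step[1])
    return incs

incs = transverse_increments(p=0.7, n_steps=10000)
mean_inc = sum(incs) / len(incs)   # should be close to 0 for any p
```

Every increment is $\pm 1$ with equal conditional probability regardless of the site type, which is exactly why condition (b) can be verified through the transverse walk here.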
\begin{figure}
\includegraphics[scale=.5]{vorthant.eps}
\caption{Estimates of $v[p]\cdot (1,1)$ as a function of $p$ for the 2-dimensional orthant model (Example \ref{exa:NE_SW}) based on 1000 simulations of 1000 step walks for each $p$.}
\label{fig:vorthant}
\end{figure}
Note that Theorem \ref{thm:main}(a) is not sufficient to conclude ballisticity in general, as per the following example.
\medskip
\begin{EXA}
\label{exa:speed0}
Choose $\mu(\gamma(e_1)=1)=p$ and for each $i\in {\mathbb N}$,
\[\mu\big(\gamma(-e_1)=1-2^{-i},\ \gamma(e_2)=\gamma(-e_2)=2^{-(i+1)}\big)=\frac{c}{i^2},\]
where $c$ is chosen so that $\sum_{i\in {\mathbb N}}c/i^2=1-p$. Then the expected time that the walk spends oscillating between $(0,0)$ and $(1,0)$ before moving to another site is infinite for all $p<1$. For all $p$ sufficiently large Theorem \ref{thm:main}(a) holds, and the walker is transient in direction $e_1$, but the speed is zero for all $p<1$.
\end{EXA}
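The divergence of the expected holding time in Example \ref{exa:speed0} can be made explicit. At a site of the $i$-th type whose left neighbour is a $\gamma(e_1)=1$ site, each visit sends the walker back left with probability $1-2^{-i}$, after which it is pushed right again; the number of such oscillations before escaping via $\pm e_2$ is geometric with mean of order $2^{i}$. Averaging over the environment law gives (up to constants)

```latex
\mathbb{E}[\text{escape time}] \;\gtrsim\; \sum_{i\in\mathbb{N}} \frac{c}{i^{2}}\, 2^{i} \;=\; \infty,
```

since $2^{i}/i^{2}\to\infty$, which is why the speed vanishes for every $p<1$ despite the directional transience.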
\medskip
\begin{PRP}
\label{prp:E+}
Assume Conditions \ref{cond:orthogonal} and \ref{cond:ddim}.
\noindent For each $d\ge 2$ there is a $p_d\in (1/2,1)$ such that if $\mu(\mathcal{G}_o\subset\mc{E}_+)> p_d$ then the condition of Theorem \ref{thm:main}(a) is satisfied.
If $d=2$ then this holds with $p_d= p_c^{\smallOTSP}$.
\end{PRP}
\medskip
The condition in Proposition \ref{prp:E+} is a local condition saying that with high probability the only transitions allowed by the local environment lie in a cone pointing in direction $\ell_+$. This condition is similar in spirit to the ``forbidden direction'' condition of \cite{RS06} and \cite{RS07}. They assume a direction that is forbidden with probability 1 (together with a so-called non-nestling assumption that makes ballisticity immediate) and then prove an invariance principle. One might describe our assumptions on the environment as having a direction that is rarely allowed rather than forbidden.
The hypothesis of Proposition \ref{prp:E+} is not equivalent to Theorem \ref{thm:main}(a). For example, the uniform $(\rightarrow \, \updownarrow)$ example (i.e.~$\mu(\gamma(e_1)=1)=p$ and $\mu(\gamma(e_2)=1/2=\gamma(-e_2))=1-p$) satisfies Theorem \ref{thm:main}(a) for every $p>0$. Note that in this example $-e_1$ is a {\em forbidden direction}. Similarly, if $\mu(\gamma(e_1)=1)=p_1$ and $\mu(\gamma(-e_1)=\frac{1}{2}=\gamma(e_2))=p_2$ and $\mu(\gamma(e_2)=1/2=\gamma(-e_2))=1-p_1-p_2$ then Theorem \ref{thm:main}(a) will hold as long as $p_2$ is very small relative to $p_1$, even if $p_1$ itself is small.
Let $\mc{F}'_k=\sigma(X'_0,\dots, X'_k)$. The following give verifiable conditions under which the condition of Theorem \ref{thm:main}(b) holds.
\medskip
\begin{PRP}
\label{prp:transverse}
Assume Conditions \ref{cond:orthogonal} and \ref{cond:ddim}.
\begin{itemize}
\item[(a)] If for some $\ell'\ne o$ the transverse walk $X'_k$ is a submartingale (with bounded step size) under $P$ such that for some $\eta, \eta'>0$,
\[P(|X'_{k+1}- X'_{k}|>\eta|\mc{F}'_k)>\eta',\]
then the condition of Theorem \ref{thm:main}(b) is satisfied.
\item[(b)] If $d\ge 2$ and $\mu$ is 2-valued then the condition of Theorem \ref{thm:main}(b) is satisfied.
\end{itemize}
\end{PRP}
\medskip
Proposition \ref{prp:transverse}(a) requires a projection of the walk to be a submartingale that can always move with probability bounded away from zero. In the terminology of \cite{Zern98} this implies that the walk is either {\it non-nestling} or {\it marginal-nestling} in the transverse direction.
The condition of Proposition \ref{prp:transverse}(a) does not hold for the following 2-valued example.
\medskip
\begin{EXA}
\label{exa:E_NSW}
For the uniform RWRE $(\rightarrow \NSW)$, the only projection that gives a submartingale is the projection in the direction $\pm e_2$. However this martingale does not move at $\rightarrow$ sites, so it does not satisfy Proposition \ref{prp:transverse}(a).
\end{EXA}
\medskip
Nevertheless, according to Proposition \ref{prp:transverse}(b), Theorem \ref{thm:main}(b) holds for Example \ref{exa:E_NSW}. Therefore the walk of Example \ref{exa:E_NSW} will be ballistic in direction $\rightarrow$ as soon as $p>p_c^{\smallOTSP}$. We believe that in this example, our arguments prove ballisticity for a wider range of $p$, namely $p>p_c^{\smallFSOSP}$, where the latter is defined in \cite{HS_DRE1} (see the discussion preceding Theorem 3.13 of that paper). But we have not verified all the details.
As in Example \ref{exa:E_NSW}, the following is an immediate corollary of the above propositions and Theorem \ref{thm:main}.
\medskip
\begin{COR}
\label{cor:2valued} Let $d\ge 2$, and assume Conditions \ref{cond:orthogonal} and \ref{cond:ddim}.
\noindent If $\mu$ is 2-valued and $\mu(\mc{G}_o\subset \mc{E}_+)>p_d$ then the model is ballistic in the direction $\ell_+$.
\end{COR}
\medskip
We suspect that one can replace $\mu$ being 2-valued with $\mu$ having {\em finite} support in Corollary \ref{cor:2valued}. Note that Example \ref{exa:speed0} does not have finite support.
\medskip
The remainder of this paper is organised as follows. In Section \ref{sec:ball} we recall some facts about directional transience, regeneration and ballisticity. In Section \ref{sec:proof_main} we prove Theorem \ref{thm:main}. In Section \ref{sec:propositionproof} we prove Proposition \ref{prp:transverse}. In Section \ref{sec:percolation} we prove Proposition \ref{prp:E+}
by examining percolation-type properties (the structure of forward clusters for certain degenerate random environments).
Finally in Section \ref{sec:elliptic} we discuss other ballisticity conditions, and compare our results with those of the elliptic theory. In particular, we will find that in Example \ref{exa:NE_SW}, having strong barriers $\SW$ is an insurmountable obstacle to obtaining a positive speed in direction $\ell=(1,1)$ using one of the standard ballisticity conditions. One way of interpreting our results in the context of Example \ref{exa:NE_SW} is that we can overcome the presence of strong barriers $\SW$ by strengthening the forward push and including sufficiently many sites $\NE$ that don't permit backwards motion.
\section{Regeneration and Ballisticity}
\label{sec:ball}
In non-elliptic environments (such as that of Example \ref{exa:NE_SW}) some sites may be unreachable by the walk. Moreover, if Condition \ref{cond:orthogonal} does not hold then the range $\mc{R}$ of the random walk is finite.
Fix $\ell\in {\mathbb R}^d\setminus \{o\}$. Let $A_+^\ell$ and $A_-^\ell$ denote the events that $X_n\cdot \ell\to\infty$ and $X_n\cdot \ell\to-\infty$ respectively. The following is proved in \cite[Theorems 1.2--1.5]{RWDRE}, in most cases by adapting the methods of Kalikow \cite{K81}, Sznitman and Zerner \cite{SZ99}, Zerner \cite{Zern02,Zern07} and Zerner and Merkl \cite{MZ01} to the non-elliptic setting.
\medskip
\begin{THM}[{\cite[ Theorems 1.2--1.5]{RWDRE}}]
\label{thm:old}
For i.i.d.~RWRE the following hold (for every $\ell\in {\mathbb R}^d\setminus \{o\}$):
\begin{itemize}
\item[(a)] $P(|\mc{R}|=\infty)\in \{0,1\}$, with $P(|\mc{R}|=\infty)=1$ if and only if Condition \ref{cond:orthogonal} holds.
\item[(b)] $P(A_+^\ell\cup A_-^\ell)\in \{0,1\}$.
\item[(c)] There exist deterministic $v_{+}(\ell), v_{-}(\ell)$ such that
\[\lim_{n\rightarrow \infty}\frac{X_n\cdot \ell}{n}=v_{+}(\ell)\mathbbm{1}_{A_+^\ell}+v_{-}(\ell)\mathbbm{1}_{A_{-}^\ell}, \quad P-\text{a.s.}\]
\item[(d)] When $d=2$, $P(A_+^{\ell})\in \{0,1\}$ (hence a deterministic velocity vector $v$ always exists in 2-dimensions).
\item[(e)] Assume that $\mu$ is 2-valued, and supported on $\{\gamma^{\sss(1)},\gamma^{\sss(2)}\}$, with $p=\mu(\gamma^{\sss(1)})$. Assume that the velocity $v=v[p]$ exists for each $p$. Then each coordinate of $v[p]$ is monotone in $p$.
\end{itemize}
\end{THM}
\medskip
Note that since $P(A)=E_{\nu}[\mathbb{P}_{\omega}(A)]$ and $0\le \mathbb{P}_{\omega}(A)\le 1$, $P(A)=1$ if and only if $\mathbb{P}_{\omega}(A)=1$ for $\nu$-almost every $\omega$. Similarly $P(A)=0$ if and only if $\mathbb{P}_{\omega}(A)=0$ for $\nu$-almost every $\omega$.
Theorem \ref{thm:old}(c) relies on a regeneration structure that is present on the event of directional transience. This is well known in the uniformly elliptic setting, but perhaps less so in the non-elliptic setting. For the purposes of this paper, fix $\ell\in {\mathbb R}^d\setminus \{o\}$ and assume almost sure transience in direction $\ell$, i.e.~that
\begin{equation}
\label{eqn:transience}
P(A^\ell_+)=1.
\end{equation}
For the remainder of this section, we assume Conditions \ref{cond:orthogonal} and \ref{cond:ddim}. The regeneration structure is then as follows (see the proof of Theorem 1.4 of \cite{RWDRE}, and note that the following makes a slight correction to how the structure was stated there).
Let $T_0=M_0=0$ and $D_0=\inf\{n>0:X_n\cdot \ell<0\}$. Let $T_1=\inf\{n:X_n\cdot \ell\ge 1\}$. For $k\ge 1$ and $T_k<\infty$ let $D_k=\inf\{n>T_k:X_n\cdot \ell<X_{T_k}\cdot \ell\}$. If $D_k<\infty$ then we let $M_k=\sup\{X_n\cdot \ell:n\le D_k\}$ and $T_{k+1}=\inf\{n>D_k:X_n\cdot \ell\ge M_k+1\}$. Set $\Delta_{k+1}=M_{k+1}-M_k$.
Let $K=\inf\{k\ge 1:D_k<\infty\}$. Then \eqref{eqn:transience} implies that $K<\infty$ a.s., and indeed, $P(D_k=\infty|T_k<\infty)$ is some fixed value $q>0$, so $K\ge 1$ is geometrically distributed;
\begin{equation*}
P(K> k)=(1-q)^k, \quad \text{ for $k\ge 1$.}
\end{equation*}
Thus we may define $\mathcal{T}_1=T_K$. This $\mathcal{T}_1$ acts as a regeneration time, as the process $\hat X_n=X_{\mathcal{T}_1+n}-X_{\mathcal{T}_1}$ and the environment $\hat \omega_x=\omega_{x+X_{\mathcal{T}_1}}$ (for $x\cdot \ell\ge 0$) are independent of the environment and walk observed up to time $\mathcal{T}_1$. This allows one to construct additional regeneration times $\mathcal{T}_{1}<\mathcal{T}_2<\dots$ such that the $X_{(\mathcal{T}_k+n)\land \mathcal{T}_{k+1}}-X_{\mathcal{T}_k}$ are i.i.d.~(over $k$) segments of path.
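To make the construction concrete, the following short simulation (an illustration only, not part of the formal argument; the function and the biased-coin example are ours) computes $\mc{T}_1=T_K$ for a sampled trajectory of a biased nearest-neighbour walk on $\mathbb{Z}$, with $\ell=1$, and with events beyond the sampled horizon treated as infinite.

```python
import random

def regeneration_time(steps):
    """Given a finite list of +-1 steps, return the first regeneration time
    T_K observed within the sample (following the definitions of T_k, D_k,
    M_k above), or None if none occurs.  Caveat: D_k = infinity can only be
    certified up to the sampled horizon."""
    X = [0]
    for s in steps:
        X.append(X[-1] + s)
    n_max = len(X) - 1
    # T_1 = first time the walk reaches level >= 1
    T = next((n for n in range(n_max + 1) if X[n] >= 1), None)
    while T is not None:
        level = X[T]
        # D_k = first time after T_k that the walk drops below X_{T_k}
        D = next((n for n in range(T + 1, n_max + 1) if X[n] < level), None)
        if D is None:
            return T  # no backtrack within the sample: T_k acts as T_K
        M = max(X[:D + 1])  # M_k = running maximum up to D_k
        # T_{k+1} = first time after D_k that the walk exceeds M_k
        T = next((n for n in range(D + 1, n_max + 1) if X[n] >= M + 1), None)
    return None

random.seed(0)
steps = [1 if random.random() < 0.7 else -1 for _ in range(2000)]
print(regeneration_time(steps))
```

In this biased example the walk drifts to the right, so each attempt has a fixed positive probability of never backtracking, matching the geometric distribution of $K$ noted above.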
Then the above discussion says that $\{(X_{\mc{T}_{k+1}}-X_{\mc{T}_{k}},\mc{T}_{k+1}-\mc{T}_k)\}_{k \in {\mathbb N}}$ are i.i.d.~copies of $(X_{\mc{T}_2}-X_{\mc{T}_1},\mc{T}_2-\mc{T}_1)$. As a consequence (see e.g.~the proof of Theorem 1.4 of \cite{RWDRE}),
\begin{equation}
\label{speedformula}
v\cdot \ell=\frac{E[(X_{\mc{T}_2}-X_{\mc{T}_1})\cdot \ell]}{E[\mc{T}_2-\mc{T}_1]}=\frac{E[X_{\mc{T}_1}\cdot \ell\mid D_0=\infty]}{E[\mc{T}_1\mid D_0=\infty]}.
\end{equation}
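For the reader's convenience, we recall the law of large numbers step behind the first equality: when $E[\mc{T}_2-\mc{T}_1]<\infty$, the strong law applied to the i.i.d.~increments gives, $P$-a.s.,
\begin{align*}
\frac{X_{\mc{T}_k}\cdot \ell}{\mc{T}_k}=\frac{k^{-1}X_{\mc{T}_1}\cdot \ell+k^{-1}\sum_{j=1}^{k-1}(X_{\mc{T}_{j+1}}-X_{\mc{T}_j})\cdot \ell}{k^{-1}\mc{T}_1+k^{-1}\sum_{j=1}^{k-1}(\mc{T}_{j+1}-\mc{T}_j)}\longrightarrow \frac{E[(X_{\mc{T}_2}-X_{\mc{T}_1})\cdot \ell]}{E[\mc{T}_2-\mc{T}_1]},
\end{align*}
and since $|X_n\cdot \ell-X_{\mc{T}_k}\cdot \ell|\le \mc{T}_{k+1}-\mc{T}_k$ whenever $\mc{T}_k\le n<\mc{T}_{k+1}$, the same limit holds for $X_n\cdot \ell/n$.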
Since for a unit vector $\ell$ we have that $\mc{T}_1\ge 1$ and $0<X_{\mc{T}_1}\cdot \ell<\mc{T}_1$ (by definition of $\mc{T}_i$) we immediately have the following well-known ballisticity criterion.
\medskip
\begin{LEM}
\label{lem:ballisticitycriterion}
Assume \eqref{eqn:transience} as well as Conditions \ref{cond:orthogonal} and \ref{cond:ddim}. If $E[\mc{T}_1\mid D_0=\infty]<\infty$ then $v\cdot\ell >0$.
\end{LEM}
\medskip
The corresponding criterion for an invariance principle is the following, which follows immediately from the methods of \cite{Sz00}.
\medskip
\begin{LEM}
\label{lem:CLTcriterion}
Assume \eqref{eqn:transience} as well as Conditions \ref{cond:orthogonal} and \ref{cond:ddim}. Assume also that $E[\mc{T}^{2}_1\mid D_0=\infty]<\infty$. Then there exists a non-negative definite
matrix $\Sigma$ (and $v\in {\mathbb R}^d\setminus \{o\}$) such that under the annealed/averaged measure $P$,
$$
\Big(\frac{X_{\lfloor nt\rfloor}-vnt}{\sqrt{n}}\Big)_{t\ge 0}\Rightarrow(Z_t)_{t\ge 0},\qquad \textrm{ as }n \rightarrow \infty,
$$
where $Z_t$ is a $d$-dimensional Brownian motion with covariance matrix $\Sigma$,
and $\Rightarrow$ denotes weak convergence.
\end{LEM}
\medskip
Note that ellipticity enters the arguments of \cite{Sz00} in two ways: to obtain the regeneration structure, and to prove non-degeneracy of the covariance matrix. The former was extended to the non-elliptic case in \cite{RWDRE}, so those arguments carry over. The latter can genuinely fail in our setting (which is why we only claim non-negative definiteness of the covariance). For example, a RWRE in which $\mathcal{G}_x$ is either $\uparrow$ or $\rightarrow$ satisfies our hypotheses, yet its covariance is degenerate: every step increases $X_n\cdot(e_1+e_2)$ by exactly 1, so the fluctuations of $X_n-\frac{n}2(e_1+e_2)$ are confined to a 1-dimensional subspace.
\section{Proof of Theorem \ref{thm:main}}
\label{sec:proof_main}
Fix $d,\alpha, \beta, \kappa, \ell,\ell'$ as in the theorem. Without loss of generality we may assume that $\|\ell\|=\|\ell'\|=1$. By hypothesis (a), we may apply \cite[Theorem 2.7]{RWDRE} to conclude that $P(A_+^{\ell})=1$. Therefore the regeneration structure exists as described above.
Since $q=P(D_0=\infty)>0$ we can define $P_0(\cdot)$ to be the conditional probability measure $P(\cdot\mid D_0=\infty)$.
We set $\mc{T}=\mc{T}_1=T_K$. Note that $X_{\mc{T}}\cdot \ell\in (M_{K-1},M_{K-1}+2]$.
To prove Theorem \ref{thm:main}, note that by Lemmas \ref{lem:ballisticitycriterion} and \ref{lem:CLTcriterion} it suffices to find $C,\gamma,\delta>0$ such that
\begin{equation}
\label{eqn:estimategoal}
P_0(\mc{T}>n)\le Ce^{-\gamma n^\delta}, \qquad \textrm{ for every }n.
\end{equation}
Choose $\alpha_1,\alpha_2,\alpha_3$
such that $0<\alpha_3<\alpha_2<\alpha_1+\alpha_2<\alpha$, and
let $F_n$ be the event that $\mc{C}_o\subset -n^{\alpha_3}\ell+\mc{K}_{\kappa,\ell}$. Then
\begin{equation}
P_0(\mc{T}>n)\le P_0(F_n^c) +P_0(F_n, \mathcal{T}>n, M_{K-1}\le n^{\alpha})+P_0(F_n, M_{K-1}>n^{\alpha}).\label{firstbreak}
\end{equation}
Note that $P_0(A)\le q^{-1}P(A)$ for any $A$, so hypothesis (a) of the Theorem shows that there exist $C_1,\gamma_1>0$ such that
\begin{equation}
P_0(F_n^c)\le q^{-1}P(F_n^c)\le C_1e^{-\gamma_1 n^{\beta\alpha_3}}.\label{b1}
\end{equation}
To bound the second term on the RHS of \eqref{firstbreak}, note that the diameter of $\{x\in -n^{\alpha_3}\ell +\mc{K}_{\kappa,\ell}: x\cdot\ell\le n^\alpha\}$ is at most $C_3 n^\alpha$ for some $C_3$. Therefore if $F_n$ occurs, $\mathcal{T}>n$, and $M_{K-1}\le n^{\alpha}$ then $\max_{k\le n}X_k\cdot \ell\le n^{\alpha}$ and $R_n\le C_3n^\alpha$. Hypothesis (b) of the theorem then implies that there exist $C_2,\gamma_2>0$ such that
\begin{align}
\nonumber P_0(F_n, \mathcal{T}>n, M_{K-1}\le n^{\alpha})&\le q^{-1}P(F_n, \mathcal{T}>n, M_{K-1}\le n^{\alpha})\\
&\le C_2e^{-\gamma_2 n^\beta}.\label{b2}
\end{align}
For the third term on the RHS of \eqref{firstbreak}, observe that if $K\le n^{\alpha_1}$ and $\Delta_k\le n^{\alpha_2}$ for each $1\le k< K$ then $M_{K-1}=\sum_{k=1}^{K-1}\Delta_k\le n^{\alpha_1}\cdot n^{\alpha_2}<n^\alpha$ (since $\alpha_1+\alpha_2<\alpha$). Therefore this term is bounded above by
$$
P_0(K>n^{\alpha_1})+P_0(F_n, \, \exists k< K\le n^{\alpha_1} \textrm{ with }\Delta_k>n^{\alpha_2}).
$$
Since $K$ is geometrically distributed under $P$, the first term satisfies
\begin{align}
P_0(K>n^{\alpha_1})\le q^{-1}(1-q)^{n^{\alpha_1}}.\label{b3}
\end{align}
It therefore remains to bound the quantity
$$
q^{-1}P(F_n, \, \exists k<K\le n^{\alpha_1} \textrm{ with }\Delta_k>n^{\alpha_2}).
$$
Observe that if $\Delta_k>n^{\alpha_2}$ then there is a $j<D_{k}$ such that
$$
X_{D_{k}}\cdot\ell+n^{\alpha_2}< X_j\cdot\ell\le X_{D_{k}}\cdot\ell +n^{\alpha_2}+2.
$$
On the event
$
\left\{\exists k<K\le n^{\alpha_1} \textrm{ with }\Delta_k>n^{\alpha_2}\right\}
$,
if $k_1< n^{\alpha_1}$ is the first $k$ such that $\Delta_k>n^{\alpha_2}$, and $j_1$ is the corresponding $j$ then $x\equiv X_{j_1}\in \mc{C}_o$ satisfies $\mc{C}_{x}\not\subset x-n^{\alpha_2}\ell + \mc{K}_{\kappa,\ell}$ and
\begin{align*}
0\le x\cdot \ell\le &(k_1-1)n^{\alpha_2}+n^{\alpha_2}+2\le (k_1+2)n^{\alpha_2}\le 2n^{\alpha_1}n^{\alpha_2}
\le 2n^{\alpha}.
\end{align*}
On the event $F_n$, every such $x$ lies in the (non-random) set $J=\{x\in -n^{\alpha_3}\ell+\mc{K}_{\kappa,\ell}:0\le x\cdot\ell \le 2n^{\alpha}\}$, which contains at most $C_4(n^{\alpha})^d$ points.
Therefore by hypothesis (a) and translation invariance,
\begin{align}
\nonumber P(F_n, \, \exists k< K\le n^{\alpha_1} \textrm{ with }\Delta_k>n^{\alpha_2})&\le P\Big(\bigcup_{x\in J}\{\mc{C}_{x}\not\subset x-n^{\alpha_2}\ell + \mc{K}_{\kappa,\ell}\}\Big)\\
&\le \sum_{x\in J}P\left(\mc{C}_{x}\not\subset x-n^{\alpha_2}\ell + \mc{K}_{\kappa,\ell}\right)\nonumber\\
&\le C_1C_4n^{d\alpha}e^{-\gamma_1 n^{\beta\alpha_2}}.\label{b4}
\end{align}
Each of the bounds \eqref{b1}, \eqref{b2}, \eqref{b3} and \eqref{b4} can be written in the form $Ce^{-\gamma n^\delta}$ for a single choice of $C,\gamma,\delta>0$, so combining them establishes the desired bound \eqref{eqn:estimategoal}.\qed \medskip
\section{Proof of Proposition \ref{prp:transverse} }
\label{sec:propositionproof}
The following lemma, when applied to the transverse walk $X'$, proves Proposition \ref{prp:transverse}(a).
\medskip
\begin{LEM}
\label{lem:martrange} Let $M_k$ be a submartingale with respect to a filtration $\mc{F}_k$. Assume that $M_0=0$, $\delta\le E[(M_{k+1}-M_k)^{2}\mid\mc{F}_k]$ for some $\delta>0$, and $|M_{k+1}-M_k|\le m$ for some $m<\infty$. Then there exist $C,m_0,\gamma>0$, depending only on $\delta$ and $m$, such that
$$
P\big(\max_{k\le n}|M_k|\le y\big)\le Ce^{-\gamma n/y^{2}}
$$
for every $n\ge 1$ and $y\ge m_0$.
\label{lem:marts}
\end{LEM}
\medskip
To see that this implies Proposition \ref{prp:transverse}(a), note that as the walk $X$ is a nearest neighbour walk, we may apply Lemma \ref{lem:marts} to the submartingale $X_k'$ to see that there exist $C',\gamma'>0$ such that for every $n \in {\mathbb N}$,
\begin{align}
P\big(\max_{k\le n}|X'_k|\le y\big)\le C'e^{-\gamma' n/y^{2}}.\label{marty}
\end{align}
Taking $0<\alpha<\frac12$ and $y=Cn^{\alpha}$, \eqref{marty} implies that for every $n \in {\mathbb N}$,
\begin{align}
P\big(\max_{k\le n}|X'_k|\le Cn^{\alpha}\big)\le C'e^{-\frac{\gamma'}{C^{2}}n^{1-2\alpha}},\label{rangebound}
\end{align}
which establishes Proposition \ref{prp:transverse}(a) with $\beta\in (0,1-2\alpha]$, $C_2=C'$ and $\gamma_2=\frac{\gamma'}{C^{2}}$.
\bigskip
\noindent {\em Proof of Lemma \ref{lem:martrange}}.
Our proof is motivated by the quasi-stationary distribution for Brownian motion on an interval. Consider $g(u)=\cos(\frac{\pi}{4}+u)$. Then $g''+g=0$, $g\le 1$, and $|g'''|\le 1$. Fix $\frac12<a<\frac{\pi}{4}$. We can choose $\epsilon>0$ so that for $u\in[-a,a]$ we have $g'(u)\le 0$ and $0<\epsilon\le g(u)$. Let $\Delta=M_{k+1}-M_k$ and $\gamma=\frac{\delta}{16}$. By Taylor's theorem, provided that $|\frac{M_k}{2y}|\le a$ we have
\begin{align}
\nonumber
E\left[g\Big(\frac{M_{k+1}}{2y}\Big)\Big|\mc{F}_k\right]
\le &E\left[g\Big(\frac{M_{k}}{2y}\Big)+\frac{\Delta}{2y}g'\Big(\frac{M_{k}}{2y}\Big)
+\frac{\Delta^2}{8y^2}g''\Big(\frac{M_{k}}{2y}\Big)+\frac{m^3}{48y^3}\Big|\mc{F}_k\right]\\
=&g\Big(\frac{M_{k}}{2y}\Big)-g\Big(\frac{M_{k}}{2y}\Big)E\left[\frac{\Delta^2}{8y^2} | \mc{F}_k\right]+\frac{m^3}{48y^3}\nonumber\\
&+g'\Big(\frac{M_{k}}{2y}\Big)E\left[\frac{\Delta}{2y}|\mc{F}_k\right].\label{jolly1}
\end{align}
Since $M$ is a submartingale we have $E[\Delta\mid\mc{F}_k]\ge 0$. Now using the facts that $y>0$, $|\frac{M_k}{2y}|\le a$ and $g'\le 0$ on $[-a,a]$, we can bound the final term in \eqref{jolly1} above by 0 to get
\begin{align}
E\left[g\Big(\frac{M_{k+1}}{2y}\Big)\Big|\mc{F}_k\right]&\le g\Big(\frac{M_{k}}{2y}\Big)\left(1-\frac{E\left[\Delta^2 | \mc{F}_k\right]}{8y^2}\right)+\frac{m^3}{48\epsilon y^3}\epsilon \nonumber \\
&\le\Big(1-\frac{\delta}{8y^2}+\frac{m^3}{48\epsilon y^3}\Big)g\Big(\frac{M_{k}}{2y}\Big),\label{martlast}
\end{align}
where we have used the fact that $g(M_k/(2y))\ge \epsilon$ when $|\frac{M_k}{2y}|\le a$.
Choose $y_0>0$ such that $\frac{m^3}{48\epsilon y_0}<\frac{\delta}{16}$ and $\frac12+\frac{m}{2y_0}\le a$, and let $y\ge y_0$; the constant $m_0$ in the statement of the lemma may be taken to be $y_0$. Then \eqref{martlast} is bounded above by
$(1-\frac{\delta}{16y^2})g(\frac{M_k}{2y})\le e^{-\gamma/y^2}g(\frac{M_k}{2y})$, so $e^{\gamma k/y^2}g(\frac{M_k}{2y})$ is a supermartingale (while $|\frac{M_k}{2y}|\le a$). Let $T=\inf\{n: |M_n|> y\}$. Since $|M_{(T-1)\wedge n}|\le y$ it follows that $|M_{T\wedge n}|\le y+m$, so $|\frac{M_{T\wedge n}}{2y}|\le \frac12+\frac{m}{2y}\le a$ by choice of $y$.
Now observe that
\begin{align*}
P\big(\max_{k\le n}|M_k|\le y\big)\le P(T>n)&\le e^{-\gamma n/y^2}E\big[e^{\gamma (T\land n)/y^2}\big]\\
&\le \epsilon^{-1} e^{-\gamma n/y^2}E\left[e^{\gamma(T\land n)/y^2}g\Big(\frac{M_{T\land n}}{2y}\Big)\right]\\
&\le \epsilon^{-1} e^{-\gamma n/y^2}g(0)=\frac{1}{\sqrt{2}\,\epsilon} e^{-\gamma n/y^2},
\end{align*}
where we have used optional sampling to obtain the last inequality.
\qed
\bigskip
\noindent
{\bf Remark:} For a related estimate see \cite[Proposition 4.1]{MPRV}.
\medskip
\bigskip
For 2-valued models in which the local environments are $\gamma^{\sss(1)}$ or $\gamma^{\sss(2)}$, it is useful to consider the local biases $\vec{u}^{\sss(i)}=\sum_{j=1}^d (\gamma^{\sss(i)}(e_j)-\gamma^{\sss(i)}(-e_j))e_j$.
Consider, for example, a 2-valued model in 2 dimensions with $\mc{S}(\gamma^{\sss(1)})=\{e_1\}$ and $\mc{S}(\gamma^{\sss(2)})=\{-e_1,e_2,-e_2\}$ (i.e.~the induced random graph is the same as in Example \ref{exa:E_NSW}). If $\vec{u}_2^{\sss(2)}\ne 0$ then we can find a direction $\ell$ in which both environments induce a drift. If (as in Example \ref{exa:E_NSW}) $\vec{u}_2^{\sss(2)}= 0$ then $X_n\cdot e_2$ is a martingale that does not move when $X_n$ is at a $\gamma^{\sss(1)}$ environment. This martingale therefore does not satisfy the conditions of Lemma \ref{lem:martrange}. Nevertheless, Corollary \ref{cor:2valued} shows that the walk is still ballistic.
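For instance, if $\gamma^{\sss(2)}$ is taken to be uniform on $\{-e_1,e_2,-e_2\}$ (one concrete kernel with this support; any choice with $\gamma^{\sss(2)}(e_2)=\gamma^{\sss(2)}(-e_2)$ behaves the same way), then
\[
\vec{u}^{\sss(2)}=\big(0-\tfrac13\big)e_1+\big(\tfrac13-\tfrac13\big)e_2=-\tfrac13 e_1,
\]
so $\vec{u}_2^{\sss(2)}=0$ and the martingale degeneracy just described occurs.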
The following result gives various cases in which Lemma \ref{lem:martrange} applies directly to 2-valued models.
\medskip
\begin{LEM}
\label{lem:2valuedrange}
Let $\mu(\gamma^{\sss(1)})=p=1-\mu(\gamma^{\sss(2)})\in (0,1)$ be a 2-valued model satisfying Conditions \ref{cond:orthogonal} (for a set $V$) and \ref{cond:ddim} with $d\ge 2$.
\begin{itemize}
\item[(I)] If $\vec{u}^{{\sss(1)}}\ne o$ and $\vec{u}^{\sss(2)}\ne o$ and $\vec{u}^{\sss(2)}\ne -c\vec{u}^{\sss(1)}$ for any $c>0$ then there exists $\ell'\ne o$ such that $\vec{u}^{\sss(i)}\cdot \ell'>0$ for $i=1,2$. For this $\ell'$, Lemma \ref{lem:martrange} applies to the transverse walk $X'_k$.
\item[(II)] If $\vec{u}^{\sss(2)}=-c\vec{u}^{\sss(1)}$ for some $c>0$, and if $\vec{u}^{\sss(1)}\perp \ell'$ for some $\ell'= \sum_{e\in V}x_e e$ with all $x_e\neq 0$, then Lemma \ref{lem:martrange} applies to $X'_k$.
\item[(III)] If $\vec{u}^{\sss(2)}=o$ then Lemma \ref{lem:martrange} applies to $X'_k$, for one of $\ell'=\pm\sum_{e\in V}e$.
\end{itemize}
\end{LEM}
\proof (I) If $\vec{u}^{{\sss(1)}}\ne o$ and $\vec{u}^{\sss(2)}\ne o$ then the $\{\ell:\vec{u}^{\sss(i)}\cdot \ell>0\}$ are half spaces, which must intersect unless one is the negative of the other. The latter possibility is ruled out, since it would imply that
$\vec{u}^{\sss(2)}= -c\vec{u}^{\sss(1)}$ for some $c>0$. Therefore we can find an $\ell'$ in the intersection. It follows that $X'_k$ is a submartingale. Because $\vec{u}^{\sss(i)}\cdot \ell'\neq 0$ for each $i$, there is a positive probability of movement in either environment.
(II) $\vec{u}^{\sss(2)}\cdot \ell'=0=\vec{u}^{\sss(1)}\cdot \ell'$ so $X'_k$ is a martingale. By Condition \ref{cond:orthogonal}, it is possible to take a step of size at least $\min_{e\in V}|x_e|$ in either environment.
(III) Either $\ell'=\sum_{e\in V}e$ or $\ell'=-\sum_{e\in V}e$ will have $\vec{u}^{\sss(1)}\cdot \ell'\ge 0$, so that $X'_k$ is a submartingale. Again, there is a positive probability of movement in either environment by Condition \ref{cond:orthogonal}.
\qed
\blank{
\begin{LEM}
\label{lem:livesincone}
Let $\mu(\gamma^{\sss(1)})=p=1-\mu(\gamma^{\sss(2)})\in (0,1)$ be a 2-valued model. If hypothesis (a) of Theorem \ref{thm:main} holds, then at least one of $\gamma^{\sss(1)}$ or $\gamma^{\sss(2)}$ is supported on an orthogonal set.
\end{LEM}
\proof If neither $\gamma^{\sss(1)}$ nor $\gamma^{\sss(2)}$ is supported on an orthogonal set, there are $i,j$ such that $\pm e_i\in\mc{S}(\gamma^{\sss(1)})$ and $\pm e_j\in\mc{S}(\gamma^{\sss(2)})$. If $i=j$ then $\mathbb{Z}e_i\subset\mc{C}_0$ which contradicts hypothesis (a).
So assume $i\neq j$. By Theorem 4.9 of \cite{HS_DRE1}, the intersection of $\mc{C}_o$ and the hyperplane $H$ spanned by $e_i$ and $e_j$ is infinite in every direction of $H$. This contradicts hypothesis (a).
\qed
\medskip
}
\medskip
For any 2-valued model $\mu(\gamma^{\sss(1)})=p=1-\mu(\gamma^{\sss(2)})\in (0,1)$, let $N^{\sss(i)}_n=\#\{0\le m<n:\omega_{X_m}=\gamma^{\sss(i)}\}$ and note that $N^{\sss(1)}_n+N^{\sss(2)}_n=n$. Let us write $N_n$ for $N_n^{\sss(1)}$.
\medskip
\noindent {\em Proof of Proposition \ref{prp:transverse}(b)}. If either $\vec{u}^{{\sss(1)}}$ or $\vec{u}^{{\sss(2)}}$ equals $o$, or if $\vec{u}^{\sss(2)}\neq -c\vec{u}^{\sss(1)}$ for any $c>0$, then the claim holds by Lemmas \ref{lem:martrange} and \ref{lem:2valuedrange}.
So assume that $\vec{u}^{\sss(2)}= -c\vec{u}^{\sss(1)}\neq o$, for some $c>0$. Write $\vec{u}=(u_1,\dots, u_d)$ for $\vec{u}^{\sss(1)}$. Without loss of generality there exists $k\le d$ such that $e_i \cdot \vec{u}> 0$ for $i=1,\dots, k$, and $e_i \cdot \vec{u}=0$ for each $i=k+1,\dots, d$.
If $k>1$ then the vector $\ell'=(-(u_2+\dots+u_d),u_1, u_1, \dots, u_1)\perp \vec{u}$, and by Lemma \ref{lem:2valuedrange} the transverse walk $X'$ for this $\ell'$ is a martingale to which Lemma \ref{lem:martrange} applies.
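Indeed, for this choice of $\ell'$,
\[
\ell'\cdot \vec{u}=-u_1(u_2+\dots+u_d)+u_1u_2+\dots+u_1u_d=0,
\]
and $\ell'\ne o$ since $u_1>0$.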
Therefore we will assume for the rest of the proof that $k=1$, so $\vec{u}^{\sss(1)}=u_1e_1$ and $\vec{u}^{\sss(2)}=-cu_1e_1$. For each $i\ge 2$, Condition \ref{cond:ddim} implies that either $\gamma^{\sss(1)}(e_i)=\gamma^{\sss(1)}(-e_i)>0$ or $\gamma^{\sss(2)}(e_i)=\gamma^{\sss(2)}(-e_i)>0$ (or both).
If $\gamma^{\sss(1)}(e_{i_1})=\gamma^{\sss(1)}(-e_{i_1})>0$ and $\gamma^{\sss(2)}(e_{i_2})=\gamma^{\sss(2)}(-e_{i_2})>0$ for some $i_1, i_2\ge 2$ (possibly equal), then let $\ell'=e_{i_1}+e_{i_2}$. We see that the transverse walk $X'$ for this $\ell'$ is a martingale, and Lemma \ref{lem:martrange} applies since $X'$ has a positive probability of moving in either environment.
It remains to handle the case that one of the $\gamma^{\sss(i)}$ (which we will take to be $\gamma^{\sss(1)}$) is supported on $\pm e_1$. In this case by Condition \ref{cond:ddim}, $\gamma^{\sss(2)}(e_i)=\gamma^{\sss(2)}(-e_i)>0$ for each $i\ge 2$ (i.e.~we have basically reduced the problem to something like Example \ref{exa:E_NSW}).
Let $\delta>0$, and let $B_n=\{n^{-1}N_n\le 1-\delta\}$. Then on $B_n$ we have at least $\delta n$ departures from $\gamma^{\sss(2)}$ sites by time $n$. Let $\ell'=(0,1,1,\dots,1)$ and let $X'_k=X_k\cdot \ell'$ be the transverse walk. Let $\tilde X_k$ be $X'$ time-changed by $N^{\sss(2)}$, i.e.~so that time only advances at $\gamma^{\sss(2)}$ sites. It is a martingale, and Lemma \ref{lem:martrange} applies to $\tilde X_k$, so
there is a $C'$ with
$$
P\big(\max_{k\le n}|X'_k|\le y, B_n\big)\le P\big(\max_{k\le \delta n}|\tilde X_k|\le y\big)\le C'e^{-\gamma \delta n/y^{2}}.
$$
As in \eqref{rangebound} this implies that
\begin{equation}
P(R_n< Cn^\alpha, B_n)\le C'e^{-\frac{\gamma\delta}{C^2}n^{1-2\alpha}}.
\label{firsthalf}
\end{equation}
We therefore take $\alpha<\frac12$ and $\beta=1-2\alpha$.
Now let $\Xi_k=\pm 1$ according to whether the $k$th departure from a $\gamma^{\sss(1)}$ site is $\pm e_1$. In other words, the $\Xi_k$ are independent, with $P(\Xi_k=1)=\gamma^{\sss(1)}(e_1)$ and $P(\Xi_k=-1)=\gamma^{\sss(1)}(-e_1)$, so $E[\Xi_k]=u_1$. Let $Y_n=\sum_{k=1}^n \Xi_k$. Choose $\delta<\frac{u_1}{4}$. On $B_n^c$ there are then at most $\frac{u_1n}{4}$ departures from $\gamma^{\sss(2)}$ sites by time $n$. So if $Y_{N_n}>\frac{u_1n}{2}$, it follows that $X_n\cdot e_1>\frac{u_1n}{4}$ and hence $R_n>\frac{u_1n}{4}$. Since $\alpha<\frac12$, for each
$C$ we have
$Cn^\alpha<\frac{u_1n}{4}$ for all $n\ge n_C$. Therefore, for such $n$,
$$
P(R_n< Cn^\alpha, B_n^c)
\le P(Y_{N_n} <\frac{u_1 n}{2}, B_n^c)\le \sum_{k=(1-\delta)n}^n P(Y_k< \frac{u_1 n}{2}).
$$
By Cram\'er's theorem (see e.g.~\cite{DeZ}), there exist $c$, $c'$ such that this is at most $nc'e^{-cn}$. Since $\beta<1$ we may combine this estimate with \eqref{firsthalf} to obtain the bound of Theorem \ref{thm:main}(b) for large $n$. Increasing the constants, we obtain the bound for all $n$.
\qed
\section{Proof of Proposition \ref{prp:E+}}
\label{sec:percolation}
Consider a model with 2-valued support of the form $\mu(\mc{E}_+)=p$ (i.e.~$\mu(\gamma:\mc{S}(\gamma)=\mc{E}_+)=p$) and $\mu(\mc{E})=1-p$. Define $\ell_+=\sum_{e\in \mc{E}_+}e=(1,\dots,1)$ and
\begin{align}
p_c(d)=\inf\Big\{p>0:&\exists \kappa>0 \text{ such that }\nonumber\\
&\nu\big(\cup_{n=1}^{\infty}\{\mc{C}_o(p)\subset-n\ell_++\mc{K}_{\kappa, \ell_+}\}\big)=1\Big\}.\nonumber
\end{align}
In other words, for $p>p_c(d)$ the forward cluster of such a model is contained in a cone. It is an immediate consequence of \cite[Theorem 1.6]{HS_DRE2} that $p_c(d)>0.5730$ for all $d\ge 2$. Since $\mc{E}_+\subset \mc{E}$, under the natural coupling of environments for all $p$ (i.e.~$\mc{G}_x=\mc{E}_+$ if and only if $U_x\le p$, where $U_x\sim U[0,1]$ are independent) $\mc{C}_o(p)$ is monotone non-increasing in $p$. We conclude the following.
\medskip
\begin{LEM}
\label{lem:pcd}
Suppose that $\mu(\mc{E}_+)=p$, $\mu(\mc{E})=1-p$. Then for all $p>p_c(d)$ there exists $\kappa=\kappa(p,d)>0$ such that
\begin{align}
\nu\big(\cup_{n=1}^{\infty}\{\mc{C}_o(p)\subset -n\ell_++\mc{K}_{\kappa,\ell_+}\}\big)=1,\label{yoyo1}
\end{align}
and moreover $\kappa(p,d)$ is non-decreasing in $p$ for each $d$. For $p<p_c(d)$, \eqref{yoyo1} fails for every $\kappa>0$.
\end{LEM}
\medskip
Although we believe that Theorem \ref{thm:main}(a) does hold in this setting as soon as $p>p_c(d)$, Lemma \ref{lem:pcd} is not sufficient to establish that result as it does not give tail probabilities. Let $\sigma_d$ be the connective constant for self-avoiding walks on the cubic lattice $\mathbb{Z}^d$, defined as $\lim_{N\to\infty}c_N^{1/N}$, where $c_N$ is the number of self-avoiding walks of length $N$. Let $p_d=1-\sigma_d^{-2}$. The following result (based on \cite[Theorem 4.2]{HS_DRE1} and proved below) verifies that $p_c(d)\le p_d<1$ for each $d$, and gives bounds for the relevant tail probabilities when $p>p_d$.
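Since the constant $p_d$ enters through the counts $c_N$, it may help to note that for small $N$ these can be enumerated directly. The following brute-force sketch (ours, for illustration only; feasible only for small $N$) counts self-avoiding walks on $\mathbb{Z}^2$ by depth-first search.

```python
def count_saws(N, d=2):
    """Count N-step self-avoiding walks on Z^d starting at the origin,
    by brute-force depth-first search over all non-self-intersecting paths."""
    moves = []
    for i in range(d):
        e = [0] * d
        e[i] = 1
        moves.append(tuple(e))
        moves.append(tuple(-c for c in e))

    def dfs(x, visited, steps_left):
        if steps_left == 0:
            return 1
        total = 0
        for m in moves:
            y = tuple(a + b for a, b in zip(x, m))
            if y not in visited:  # extend only to unvisited sites
                visited.add(y)
                total += dfs(y, visited, steps_left - 1)
                visited.remove(y)
        return total

    o = (0,) * d
    return dfs(o, {o}, N)

print([count_saws(N) for N in range(1, 6)])  # [4, 12, 36, 100, 284]
```

By Fekete's subadditivity lemma the ratios $c_N^{1/N}$ approach $\sigma_2\approx 2.638$ from above as $N$ grows.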
\medskip
\begin{LEM}
\label{lem:SAW}
Let $d\ge 2$. Consider an i.i.d. RWRE in which $\mu(\gamma:\mc{S}(\gamma)\subset\mc{E}_+)>p_d$.
Then $\exists$ constants $C,\kappa,\gamma>0$ such that $\nu(\mc{C}_o\subset -n\ell_++\mc{K}_{\kappa,\ell_+})\ge 1-Ce^{-\gamma n}$ for every $n$.
\end{LEM}
\medskip
With additional conditions on $\mu$, the constant $p_d$ in the above result may be improved slightly (see \cite{HS_DRE1}, which also contains a table of values for $\sigma_d$). When $d=2$, duality with an oriented percolation model (whose critical percolation probability is $p_c^{\smallOTSP}\approx .5956$) allows us to prove the following.
\medskip
\begin{LEM}
\label{lem:basiclemma}
Fix $d=2$. Then $p_c(2)=p_c^{\smallOTSP}$. Moreover, if $\mu(\gamma:\mc{S}(\gamma)\subset\mc{E}_+)>p_c(2)$ there exist constants $C,\kappa,\gamma>0$ such that
\begin{align*}
\nu(\mathcal{C}_o\subset -n\ell_++\mc{K}_{\kappa,\ell_+})\ge 1-Ce^{-\gamma n}.
\end{align*}
\end{LEM}
\medskip
Clearly Lemmas \ref{lem:SAW} and \ref{lem:basiclemma} together imply Proposition \ref{prp:E+}, so it suffices to prove the two lemmas.
The proof of Lemma \ref{lem:SAW} is a relatively straightforward adaptation of the proof of \cite[(4.1)]{HS_DRE1}.
\medskip
\noindent {\em Proof of Lemma \ref{lem:SAW}}.
Set $p=\mu(\mathcal{G}_o\subset\mc{E}_+)$. Assume that $p>1-\sigma_d^{-2}$, in other words, that $\sqrt{1-p}<\frac{1}{\sigma_d}$. We may therefore find a $\theta<\frac12$ and a $\mu>\sigma_d$ (in a slight abuse of notation, $\mu$ here denotes a constant) such that $(1-p)^\theta <\frac{1}{\mu}$. Now choose $0<\kappa<1-2\theta$. If $c_N$ denotes the total number of $N$-step self-avoiding walks from $o$ then we may, by definition of $\sigma_d$, find a constant $C$ such that $c_N\le C\mu^N$ for every $N$.
Set $\Gamma=(-n\ell_+ + \mc{K}_{\kappa,\ell_+})^c$. Fix, for the moment, a lattice point $y\in\Gamma$ and a self-avoiding path $(w_0,\dots,w_N)$ from $o=w_0$ to $y=w_N$. Clearly $N\ge n$ (since $\kappa<1$).
Suppose that at most a fraction $\theta$ of the steps of the path are from $\mc{E}_-$. Then $y\cdot \ell_+\ge N(1-\theta) -N\theta=N(1-2\theta)>\kappa N$. We also have $\kappa<1<\sqrt{d}$, so
$$
(y+n\ell_+)\cdot\ell_+
\ge \kappa N +nd
\ge \kappa\|y\|+\kappa n\sqrt{d}
=\kappa\|y\|+\kappa\|n\ell_+\|
\ge\kappa\|y+n\ell_+\|.
$$
In other words, $y\in -n\ell_+ +\mc{K}_{\kappa,\ell_+}=\Gamma^c$, which is impossible. Therefore, at least $N\theta$ of the steps belong to $\mc{E}_-$, so the probability that this particular path is actually a $\mc{G}$-path is at most $(1-p)^{N\theta}$.
If $\mc{C}_o$ intersects $\Gamma$ then there is a self-avoiding $\mc{G}$-path from $o$ to some point in $\Gamma$. By the above estimate,
$$
\nu(\text{$\mc{C}_o$ intersects $\Gamma$})
\le \sum_{N=n}^\infty c_N(1-p)^{N\theta}
\le \sum_{N=n}^\infty C\Big(\mu(1-p)^\theta\Big)^N
=C'e^{-\gamma n}
$$
where $e^{-\gamma}=\mu(1-p)^\theta$.
\qed \medskip
For the comparable result (in dimension $d=2$) our arguments rely on estimates for oriented percolation as in \cite{Dur84} (see Lemmas \ref{lem:Durrett} and \ref{lem:abovealine} below). Recall that $p_c^{\smallOTSP}$ denotes the critical percolation parameter for oriented site percolation on the triangular lattice.
It is shown in \cite[Prop.~3.1]{HS_DRE2} that $\mathcal{C}_o$ has a lower boundary if and only if $p>p_c^{\smallOTSP}$. Moreover by \cite[Theorem 1.5(III)]{HS_DRE2},
if $p>p_c^{\smallOTSP}$ then this boundary almost surely has an asymptotic slope of $\rho_p<-1$ in the northwest direction and $1/\rho_p>-1$ in the southeast direction, so $\#\{x\in \mc{C}_o:x\cdot \ell_+<0\}$ is almost surely finite. Label any vertex $y$ as {\it open} if $\mathcal{G}_y\subset\NE$. An {\it open path} is a sequence of open vertices $y_i$ such that each $y_{i+1}-y_i\in \OTSP=\{-e_1,e_2,e_2-e_1\}$. The idea is that
an oriented open path in the triangular lattice (generated by
$(\leftrightarrow, \updownarrow, \mathrlap{\nwarrow}{\searrow})$ lines) that is infinite in both directions and passes below $o$ must also pass below $\mc{C}_o$.
\medskip
\noindent{\em Proof of Lemma \ref{lem:basiclemma}.}
Let $p=\mu(\mathcal{G}_o\subset\NE)>p_c^{\smallOTSP}$ and choose $\theta$ such that $\rho_p<\theta<-1$. Let $L$ and $L'$ be the lines with slope $\theta$ and $1/\theta$ through $(-1,-1)$. Let $\Gamma$ denote the set of $x$ lying above both $L$ and $L'$.
Choose $\epsilon$ so that $0<\epsilon<\frac{1}{\theta}-\theta$. Let $A$ and $A'$ be the line segments $\{(0,z): z\in[\frac{1}{\theta}-1-\epsilon,\frac{1}{\theta}-1]\}$ and $\{(z,0): z\in[\frac{1}{\theta}-1-\epsilon,\frac{1}{\theta}-1]\}$. Then $A$ lies below $L'$ and above $L$, while $A'$ lies below $L$ and above $L'$.
By Lemma \ref{lem:abovealine} below we may discard an event of probability at most $Ce^{-\gamma n}$ and obtain an infinite open path from some site in $nA$ that lies above $nL$. By symmetry, we may discard a further event of probability at most $Ce^{-\gamma n}$ and obtain an infinite open path terminating at some site in $nA'$ that lies above $nL'$. By construction, these paths must cross somewhere in $(-\infty,0]\times(-\infty,0]$, so following first one and then the other gives us an open path that is infinite in both directions. It lies above both $nL$ and $nL'$, and separates these lines from $o$. As remarked above, this implies that $\mathcal{C}_o\subset n\Gamma=-n\ell_++\mc{K}_{\kappa,\ell_+}$ for a suitable choice of $\kappa$.
\qed \medskip
In the remainder of this section, we specialize to $d=2$, and will consider various estimates for oriented site percolation on the triangular lattice $\mathbb{Z}^{\sss(2)}$. We realize the latter using the vertices of $\mathbb{Z}^{2}$ connected by horizontal and vertical bonds, as well as by bonds of slope $-1$. Given $p$, let sites in $\mathbb{Z}^{\sss(2)}$ be open with probability $p$, independently of each other.
As above, we call a sequence $(\dots, y_{-1},y_0, y_1, \dots)$ -- finite or infinite -- an \emph{open path} if each $y_i$ is an open site, and each $y_{i+1}-y_i\in \OTSP=\{-e_1,e_2,e_2-e_1\}$.
For any site $x\in \mathbb{Z}^{\sss(2)}$, let its forward cluster $\mathbf{C}_x$ be the set of sites $y\in \mathbb{Z}^{\sss(2)}$ for which there is an open path starting at $x$ and ending at $y$. Let $\mathbf{C}_x^\infty$ be the set of $y\in\mathbf{C}_x$ such that $|\mathbf{C}_y|=\infty$. For $A\subset \mathbb{Z}^{\sss(2)}$, set $\mathbf{C}_A=\cup_{x\in A}\mathbf{C}_x$ and $\mathbf{C}_A^\infty=\cup_{x\in A}\mathbf{C}_x^\infty$. In other words, each site in $\mathbf{C}_A^\infty$ can be reached from $A$ by an open path, and is then left via an infinite open path.
For $Y=\{(0,z)\in\mathbb{Z}^{\sss(2)}: z\le 0\}$ set $\bar{u}_n = \max\{y: (-n,y)\in\mathbf{C}_Y\}$.
Also let $\tau_o=\sup\{y-x: (x,y)\in \mathbf{C}_o\}$, which measures the furthest diagonal line reached by the forward cluster of the origin.
More generally, if $A\subset \mathbb{Z}^{\sss(2)}$, let $\tau_A=\sup\{y-x: (x,y)\in \mathbf{C}_A\}$.
Note that for $A$ finite, $|\mathbf{C}_A|=\infty\Leftrightarrow \tau_A=\infty$. If we wish to measure diagonal displacement relative to a point $z=(x_0,y_0)$ other than $o$, we will use $\tau_A^z=\tau_A-(y_0-x_0)$.
Let $p_c^{\smallOTSP}$ denote the critical probability for oriented site percolation on the triangular lattice. The following bounds are known:
$$
0.5699\le p_c^{\smallOTSP}\le 0.7491;
$$
the former is Corollary 6.3 of \cite{HS_DRE2}, while the latter follows from the square lattice bound $p_c^{\smallNE}\le 0.7491$ of Balister et al.~\cite{BBS}, since $p_c^{\smallOTSP}\le p_c^{\smallNE}$.
Fix $p>p_c^{\smallOTSP}$. Proposition 4.1 of \cite{HS_DRE2} (reformulating results in \cite{Dur84}) states that there exists a $\rho_p<-1$ such that on the event $\{|\mathbf{C}_o|=\infty\}$, the set $\mathbf{C}_o$ has an upper boundary with asymptotic slope $\rho_p$ and a lower boundary with asymptotic slope $1/\rho_p$.
\medskip
\begin{LEM}
\label{lem:Durrett}
Let $p>p_c^{\smallOTSP}$ and choose $\theta_1$ with $\rho_p<\theta_1<-1$. Then
\begin{itemize}
\item[(a)] $\exists$ a constant $\gamma_1>0$ such that $P(\bar u_n\le -n\theta_1)\le e^{-\gamma_1 n}$, $\forall n$;
\item[(b)] $\exists$ constants $C_2$, $\gamma_2>0$ such that $P(n\le \tau_o<\infty)\le C_2e^{-\gamma_2 n}$, $\forall n$;
\item[(c)] $\exists$ a constant $\gamma_3>0$ such that $P(\tau_A<\infty)\le e^{-\gamma_3 |A|}$, $\forall A\subset Y$.
\end{itemize}
\end{LEM}
\begin{proof}
These are all taken from \cite{Dur84}. The lattice used there is different from ours, but it can be verified that the arguments all apply equally well in our setting.
See also Section 4 of \cite{HS_DRE2} where a similar translation is carried out. In particular, (a) is formula (11.1) of \cite{Dur84}, (b) is formula (12.1), and (c) is formula (10.5).
\end{proof}
We will follow the convention that constants $C$ and $\gamma$ may change from line to line. If specific values are to be tracked, we will index them (as in the above result).
For $\theta_1$ as above, choose
$\epsilon>0$, and $\theta$ with
$\theta_1<\theta<-1$. Set $A_{n,\epsilon}=\{(n,y)\in\mathbb{Z}^{\sss(2)}: 0\le y\le\epsilon n\}$. Let $L$ be the line through $o$ with slope $\theta$, and let $L_n$ be the line through $(n,0)$ with slope $\theta_1$. We are interested in the event
$$
\mathcal{A}_{n,\epsilon,\theta}=\{\text{$\exists$ infinite open path, starting in $A_{n,\epsilon}$ and lying above $L$}\}.
$$
\medskip
\begin{LEM}
\label{lem:abovealine}
Let $p>p_c^{\smallOTSP}$. Choose $\theta_1$ and $\theta$ with $\rho_p<\theta_1<\theta<-1$, and choose $\epsilon>0$. There are constants $C>0$ and $\gamma>0$ such that $P(\mathcal{A}_{n,\epsilon,\theta})\ge 1-Ce^{-\gamma n}$ for every $n$.
\end{LEM}
\begin{proof}
We will temporarily fix $k\ge 0$, and will estimate the probability that there exists a point $(-k,z)\in \mathbf{C}_{A_{n,\epsilon}}^\infty$ that lies above $L$. Constants below are as in Lemma \ref{lem:Durrett}.
Discarding an event of probability at most $e^{-\gamma_3\epsilon n}$, there is an infinite open path $\sigma_1$, starting from some point $x_1$ of $A_{n,\epsilon}$. Let $y_1=(-k,z_1)$ be the first site on $\sigma_1$ whose first coordinate equals $-k$. Discarding a further event of probability at most $e^{-\gamma_1(k+n)}$, there is also an open path $\sigma_2$ from some site $x_2=(n,z_2)$ with $z_2\le 0$, to a point $y_2=(-k, z_2')$ lying above $L_n$. Let $k_3$ be the first integer exceeding $-\theta k$, and let $x_3=(z_3,k_3)$ be the first site on $\sigma_2$ whose second coordinate exceeds $-\theta k$. If $\mathbf{C}_{x_3}$ is infinite, we claim that there will be a point $y=(-k,z)\in \mathbf{C}_{A_{n,\epsilon}}^\infty$.
To see this, we know there is an infinite open path $\sigma_3$ starting at $x_3$. Let $y_3$ be the first site on $\sigma_3$ whose first coordinate equals $-k$. By construction, $y_3$ lies above $L$. If $y_1$ lies above $L$ then take $y=y_1\in \mathbf{C}_{A_{n,\epsilon}}^\infty$. If $y_1$ lies below $L$ then $\sigma_1$ crosses $\sigma_2$ before the latter reaches $x_3$. By following $\sigma_1$ from $x_1$ till it crosses $\sigma_2$, then $\sigma_2$ to $x_3$, and then $\sigma_3$, we see that we can take $y=y_3\in \mathbf{C}_{A_{n,\epsilon}}^\infty$. Either way, we have found our $y$.
Note that the lines of slope 1 through $x_3$ and $y_2$ are well separated. The closest they can be is when $x_3=(-k, k_3)$ and $y_2=(-k, -\theta_1(n+k))$, so we always have $\tau_{x_3}^{x_3}\ge -\theta n + k(\theta-\theta_1)-1$. In particular, if $\mathbf{C}_{x_3}$ is finite, then $-\theta n + k(\theta-\theta_1)-1\le \tau_{x_3}^{x_3}<\infty$. But there are at most $n+k$ possible values for $x_3$. Taking a union over these values shows that
\begin{multline*}
1-P(\text{$\exists$ a point $y=(-k,z)\in \mathbf{C}_{A_{n,\epsilon}}^\infty$ that lies above $L$})\\
\le e^{-\gamma_3\epsilon n} + e^{-\gamma_1(k+n)}+C_2(n+k)e^{-\gamma_2(-\theta n + k(\theta-\theta_1)-1)}.
\end{multline*}
In fact, the first excluded event is common to all $k$, so summing over $k$ we get that
\begin{align*}
&1-P(\cap_{k\ge 0}\{\text{$\exists$ a point $(-k,z)\in \mathbf{C}_{A_{n,\epsilon}}^\infty$ that lies above $L$}\})\\
&\qquad\qquad \le e^{-\gamma_3\epsilon n}+\sum_{k=0}^\infty[e^{-\gamma_1(k+n)}+C_2(n+k)e^{-\gamma_2(-\theta n + k(\theta-\theta_1)-1)}]\\
&\qquad\qquad \le e^{-\gamma_3\epsilon n}+ C[e^{-\gamma_1 n}+ne^{\gamma_2\theta n}] \le Ce^{-\gamma n}
\end{align*}
for some $C$, provided we choose $\gamma<\min(\gamma_1, -\gamma_2\theta, \gamma_3\epsilon)$.
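In more detail, since $\gamma_1>0$ and $\theta-\theta_1>0$, the two sums above are controlled by
\[
\sum_{k=0}^\infty e^{-\gamma_1(k+n)}=\frac{e^{-\gamma_1 n}}{1-e^{-\gamma_1}}
\qquad\text{and}\qquad
\sum_{k=0}^\infty (n+k)e^{-\gamma_2 k(\theta-\theta_1)}\le Cn,
\]
the latter giving a contribution of at most $Cne^{\gamma_2\theta n}$ after extracting the factor $e^{-\gamma_2(-\theta n-1)}=e^{\gamma_2(\theta n+1)}$ from each term of the second sum.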
Under the above event, there are open paths from a single $x_1\in A_{n,\epsilon}$ to each such $(-k,z)$, so we can take the maximum over all these paths and obtain a single infinite path from $x_1$ that lies completely above $L$. In other words, $1-P(\mathcal{A}_{n,\epsilon,\theta})\le Ce^{-\gamma n}$, as required.
\end{proof}
\section{Ballisticity in the elliptic case}
\label{sec:elliptic}
For i.i.d. uniformly elliptic walks, there are a number of abstract conditions that imply ballisticity, starting with Kalikow's condition \cite{K81}. The proof of ballisticity in that context is due to Sznitman and Zerner \cite{SZ99}. In \cite{Sz01}, Sznitman introduces a condition weaker than Kalikow's, but which also implies ballisticity. He called this condition (T), and it is defined using exponential moments of the walk up to the regeneration time $\mc{T}$. Our condition \eqref{eqn:estimategoal} is therefore very similar in character.
In \cite{Sz02} he formulated weaker conditions ($\text{T}'$) and $(\text{T})_\gamma$ that don't require knowledge of $\mc{T}$, but instead are based on the distribution of the walk prior to its exit from arbitrarily large slabs. \cite{Sz02} shows that ballisticity holds under ($\text{T}'$), and also that ($\text{T}'$) is equivalent to what is there called an ``effective condition'', that is, a condition that can be verified by finding a large but finite box on which it holds.
Berger, Drewitz and Ram\'irez show in \cite{BDR14} that ($\text{T}'$) is equivalent to $(\text{T})_\gamma$ for each $0<\gamma<1$, and moreover that these conditions are in turn equivalent to a polynomial decay condition $(\text{P})_{\text{M}}$ that is also effective. Uniform ellipticity is relaxed in \cite{CR13} and then further in \cite{BRS16}, where ballisticity is shown under $(\text{P})_{\text{M}}$, for elliptic (but not uniformly elliptic) walks. In those results, all directions have nonzero probability of being chosen, but certain directions are allowed to have probabilities that decay to zero in a controlled way.
None of the above ballisticity conditions is strictly local, in the sense that it is formulated solely in terms of the law $\mu$ of the environment at a single site. In contrast, our Propositions \ref{prp:E+} and \ref{prp:transverse} together do provide such local conditions. In the uniformly elliptic case, the best known local condition is the following from \cite{K81}:
\begin{equation}
\hat \varepsilon_\ell >0\text{ where } \hat\varepsilon_\ell=\inf_{f\in\mathcal{F}}\frac{E_\mu\Big[\frac{\sum_{e\in\mathcal{E}}\gamma(e)\ell\cdot e}{\sum_{e\in\mathcal{E}}\gamma(e)f(e)}\Big]}{E_\mu\Big[\frac{1}{\sum_{e\in\mathcal{E}}\gamma(e)f(e)}\Big]}.
\label{localKalikow}
\end{equation}
Here $\mathcal{F}$ denotes the set of nonzero functions on $\mathcal{E}$ with values in $[0,1]$. In the presence of uniform ellipticity this implies Kalikow's condition and hence ballisticity (note that \cite{Sz02} differentiates between \eqref{localKalikow} and Kalikow's condition by calling the former {\it Kalikow's criterion}).
We wish to compare \eqref{localKalikow} with our conditions, and understand what it tells us about Example \ref{exa:NE_SW}. Note that $\hat\varepsilon_\ell$ is a lower bound for a quantity $\varepsilon_\ell$ that arises in Kalikow's condition, which in turn is a lower bound for $v\cdot\ell$, so \eqref{localKalikow} implies $v\cdot\ell>0$.
Since the above results depend on uniform ellipticity, which fails for Example \ref{exa:NE_SW}, we work with the following uniformly elliptic version instead:
\medskip
\begin{EXA}[modified 2-d orthant model]
$\mu_{\epsilon,\delta}$ is 2-valued, with $\mu_{\epsilon,\delta}(\gamma^{\sss(1)})=p$ and $\mu_{\epsilon,\delta}(\gamma^{\sss(2)})=1-p$; $\gamma^{\sss(1)}(e_1)=\gamma^{\sss(1)}(e_2)=\frac{1-\epsilon}{2}$ and
$\gamma^{\sss(1)}(-e_1)=\gamma^{\sss(1)}(-e_2)=\frac{\epsilon}{2}$;
$\gamma^{\sss(2)}(e_1)=\gamma^{\sss(2)}(e_2)=\frac{\delta}{2}$ and
$\gamma^{\sss(2)}(-e_1)=\gamma^{\sss(2)}(-e_2)=\frac{1-\delta}{2}$.
\end{EXA}
\medskip
In other words, we add back the missing directions to $S(\gamma^{\sss(1)})$ and $S(\gamma^{\sss(2)})$, with probabilities $\epsilon$ and $\delta$ respectively. Let $\ell=e_1+e_2$. We will examine the range of $p$ for which \eqref{localKalikow} holds while letting $\epsilon\downarrow 0$ or $\delta\downarrow 0$.
By symmetry (i.e.~$\gamma^{\sss(i)}(e_1)=\gamma^{\sss(i)}(e_2)$, $\gamma^{\sss(i)}(-e_1)=\gamma^{\sss(i)}(-e_2)$), we may assume that $f(e_1)=f(e_2)=a$ and $f(-e_1)=f(-e_2)=b$. So $(a,b)\in F=[0,1]^2\setminus\{(0,0)\}$.
We get that
$$
\hat\varepsilon_\ell =\inf_{(a,b)\in F}\frac{p(1-2\epsilon)[\delta a+(1-\delta)b]-(1-p)(1-2\delta)[(1-\epsilon)a + \epsilon b]}{p[\delta a+(1-\delta)b]+(1-p)[(1-\epsilon)a + \epsilon b]}.
$$
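For the reader's convenience, here is the computation behind this expression. With $\ell=e_1+e_2$ we have
\[ \sum_{e\in\mathcal{E}}\gamma^{\sss(1)}(e)\,\ell\cdot e = 1-2\epsilon ,
\qquad
\sum_{e\in\mathcal{E}}\gamma^{\sss(2)}(e)\,\ell\cdot e = -(1-2\delta) , \]
while
$\sum_{e\in\mathcal{E}}\gamma^{\sss(1)}(e)f(e) = (1-\epsilon)a+\epsilon b$
and
$\sum_{e\in\mathcal{E}}\gamma^{\sss(2)}(e)f(e) = \delta a+(1-\delta)b$.
Substituting into \eqref{localKalikow} and multiplying the numerator and denominator by $[(1-\epsilon)a+\epsilon b][\delta a+(1-\delta)b]$ yields the displayed expression.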
This fraction has the form $\frac{Aa+Bb}{Ca+Db}$, and an elementary calculation shows that $AD-BC=-2p(1-p)(1-\epsilon-\delta)^2\le 0$. From this it follows that the fraction is nonincreasing in $a$ and nondecreasing in $b$, so the infimum occurs at $(a,b)=(1,0)$, giving
$$
\hat\varepsilon_\ell =
\frac{p(1-2\epsilon)\delta - (1-p)(1-2\delta)(1-\epsilon)}{p\delta+(1-p)(1-\epsilon)}.
$$
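The elementary calculation of $AD-BC$ referred to above runs as follows: the $p^2$ and $(1-p)^2$ terms cancel, and grouping the remaining cross terms gives
\begin{align*}
AD-BC &= p(1-p)\big[(1-2\epsilon)+(1-2\delta)\big]\big[\epsilon\delta-(1-\epsilon)(1-\delta)\big]\\
&= 2p(1-p)(1-\epsilon-\delta)(\epsilon+\delta-1)
= -2p(1-p)(1-\epsilon-\delta)^2 .
\end{align*}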
Restricting attention to the case $\epsilon<\frac12$ and $\delta<\frac12$, the denominator above is positive, so \eqref{localKalikow} reduces to positivity of the numerator, i.e.
$$
\frac{p}{1-p}>\frac{(1-2\delta)(1-\epsilon)}{(1-2\epsilon)\delta}.
$$
In other words, sending $\epsilon\downarrow 0$ is inconsequential for \eqref{localKalikow}; it only expands the range of $p$ for which \eqref{localKalikow} implies ballisticity in direction $\ell=(1,1)$. But when $\delta\downarrow 0$, the condition becomes increasingly restrictive; for there to be any $\epsilon\in(0,\frac12)$ for which \eqref{localKalikow} holds, we require $p>1-\frac{\delta}{1-\delta}$. The right hand side approaches 1 as $\delta \downarrow 0$.
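In more detail, the right hand side of the last display is increasing in $\epsilon$ on $(0,\frac12)$, so the least restrictive case is the limit $\epsilon\downarrow 0$, where the condition reads
\[ \frac{p}{1-p} > \frac{1-2\delta}{\delta}
\iff p(1-\delta) > 1-2\delta
\iff p > \frac{1-2\delta}{1-\delta} = 1-\frac{\delta}{1-\delta} . \]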
We can interpret this observation as saying that the absence of arrows $\NE$ in environment $\gamma^{\sss(2)}$ creates an insurmountable obstacle for obtaining ballisticity in direction $\ell=(1,1)$ via condition \eqref{localKalikow}. The barriers $\SW$ are too strong for \eqref{localKalikow} to handle. One way of interpreting our main result is that we can overcome the presence of strong barriers $\SW$ by strengthening the forward push, i.e.~including sufficiently many sites $\NE$ that don't permit backwards motion.
\section*{Acknowledgements}
Holmes's research was supported in part by the Marsden Fund, administered by RSNZ. Salisbury's research is supported in part by NSERC.
\bibliographystyle{plain}
\section{Introduction and main results}
Let $ K \subset \mathds{R}^d $ be a full dimensional convex body, i.e. a compact convex set with non-empty interior.
Fix $n \in \mathds{N}$ and denote by $ P = P_{K,n} \supset K$ a circumscribed polytope, with at most $n$ facets, minimizing the Hausdorff distance $ d_H (K,P) $.
It is well known that $d_H(K,P)$ is of order $n^{- 2/(d-1)}$ (see, e.g., \cite{Gruber1993book}).
Thus, we set $ \Cl[dist]{boundC} ( K , n ) := d_H (K,P) n^{2/(d-1)}$ and $ \Cl[dist]{boundCbis} ( K , n ) := \sup_{ m \geq n } \Cr{boundC} ( K , m ) $,
and we have
\[ d_H (K,P)
= \frac{ \Cr{boundC} ( K , n ) }{n^{2/(d-1)}}
\leq \frac{ \Cr{boundCbis} ( K , n ) }{n^{2/(d-1)}}.
\]
Estimating $\Cr{boundC} ( K , n )$ and $\Cr{boundCbis} ( K , n )$ is a classical problem.
We refer the reader to the well known surveys of P. M. Gruber \cite{Gruber1993book,Gruber1994} and E. M. Bronstein \cite{Bronstein2008} for an excellent overview of the huge amount of results and literature about polytopal approximation.
The specificity of our main result is that we take into account how much $K$ is `\textit{elongated}'.
We denote by $V_i$ the $i$-th intrinsic volume (see Section \ref{sec:Setting} for the definition).
For any $ 1 \leq i < j \leq d $, we call $ V_j(K)^{1/j} V_i(K)^{-1/i} $ the \textit{$(i,j)$-isoperimetric ratio} of $K$.
It is scale and translation invariant.
The isoperimetric inequality (see Section \ref{sec:Setting}, inequality \eqref{eq:IsoperimetricIneq}, for a statement) says that it is maximized by the balls.
On the other hand, $ V_j(K)^{1/j} V_i(K)^{-1/i} \simeq 0 $ precisely when the normalized body $ V_i(K)^{-1/i} K $ is close to a $(j-1)$-dimensional convex body.
If an isoperimetric ratio of $K$ is close to zero, we say that $K$ is \textit{elongated}.
More precisely, if $ V_j(K)^{1/j} V_i(K)^{-1/i} < \epsilon $, we say that
\textit{$K$ is $ ( \epsilon \colon i , j ) $-elongated}.
The following theorem gives a bound for the Hausdorff distance between a convex body $K$ and its best approximating polytope.
This bound can be arbitrarily small if $K$ is sufficiently elongated.
\begin{theorem}
\label{thm:main}
Assume $ 1 \leq i < j \leq \lceil (d-1) /2 \rceil $.
Set $\alpha=2 \lceil (d-1)/2 \rceil (d-1) d^{-1}$
and $ \beta = \lceil (d-1)/2 \rceil (d-1)^{-1} d^{-1} $.
There exist constants $\delta_{i,j}$ and $n_{i,j}$, both depending on $d$, such that the following holds.
For any $ \epsilon >0 $ and any convex body $ K $
\[ \text{ if } K \text{ is } (\epsilon\colon i,j) \text{-elongated then } \Cr{boundCbis} \left( K , n_{i,j} \epsilon^{-\alpha} \right)
< \delta_{i,j} \epsilon^\beta V_1(K) . \]
I.e. for any $ \epsilon>0 $ and any convex body $K$
\[ \text{ if } \frac { V_j(K)^{1/j} } { V_i(K) ^{1/i} } < \epsilon
\text{ then } d_H ( K , P ) < \delta_{i,j} \epsilon^\beta \frac{V_1(K)}{n^{2/(d-1)}} \text{ for any } n \geq n_{i,j} \epsilon^{-\alpha} ,\]
where $ P = P_{K,n} \supset K$ is a circumscribed polytope, with at most $n$ facets, minimizing the Hausdorff distance $ d_H (K,P) $.
\end{theorem}
Note that the case $i=1$ and $ j = \lceil (d-1) /2 \rceil $ implies all the others.
This is a consequence of the isoperimetric inequality.
We conjecture that Theorem \ref{thm:main} remains true for any $ 1 \leq i < j \leq d-1 $
and that $\beta$ could be replaced by $1$.
Equation \eqref{eq:BnAsymptotic} below gives support to this conjecture.
\medskip
Let us recall a few important results in order to motivate Theorem \ref{thm:main}.
If $K$ has a twice continuously differentiable boundary, we have a precise asymptotic approximation of $\Cr{boundCbis} ( K , n )$. After planar results due to Fejes T\'oth \cite{Toth1948} and McClure and Vitale \cite{McClure1975}, Schneider \cite{Schneider1981,Schneider1987} and Gruber \cite{Gruber1993article} succeeded in proving that
\begin{equation*}
\lim_{n \to \infty }\Cr{boundCbis} (K,n)
= \frac12 \left( \frac{\vartheta_{d-1}}{\kappa_{d-1}} \int_{\partial K} \kappa_C (\vect{x})^{1/2} \sigma ( \mathrm{d} \vect{x}) \right) ^{2/(d-1)},
\end{equation*}
where $\vartheta_k$ is the minimum covering density of $\mathds{R}^k$ with balls of fixed radius,
$\kappa_k$ the volume of the $k$-dimensional ball,
$\kappa_C(\vect{x})>0$ the Gaussian curvature of $K$ at the point $\vect{x}$,
and $\sigma(\cdot)$ the surface area measure.
More recently, B\"or\"oczky \cite{Boroczky2000} removed the condition $\kappa_C(\vect{x})>0$.
In many practical situations it is out of reach to compute the integral explicitly.
But if $K$ is elongated, we can have a good upper bound.
H\"older's inequality implies
\begin{align*}
\int_{\partial K} \kappa_C (\vect{x})^{1/2} \mathrm{d} \sigma (\vect{x})
& \leq (2 d \kappa_d)^{1/2} V_{d-1} (K) ^{1/2}
\\
&= (2 d \kappa_d)^{1/2} \left[ \frac{ V_{d-1} (K) ^{1/(d-1)} }{ V_1 (K) } \right]^{(d-1)/2} V_1(K) ^{(d-1)/2}.
\end{align*}
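The first inequality above is H\"older's inequality (with exponents $2$ and $2$), combined with the classical fact that the Gaussian curvature of a smooth convex body integrates to the surface area $\omega_d=d\kappa_d$ of the unit sphere:
\[
\int_{\partial K} \kappa_C(\vect{x})^{1/2}\,\mathrm{d}\sigma(\vect{x})
\leq \left( \int_{\partial K} \kappa_C\,\mathrm{d}\sigma \right)^{1/2}
\left( \int_{\partial K} 1\,\mathrm{d}\sigma \right)^{1/2}
= ( d\kappa_d )^{1/2}\left( 2V_{d-1}(K) \right)^{1/2}.
\]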
Hence, with the isoperimetric inequality, for any $ 1 \leq i < j \leq d-1 $, we have that
\begin{equation}
\label{eq:BnAsymptotic}
\text{ if } K \text{ is } (\epsilon\colon i,j) \text{-elongated then }
\lim_{n \to \infty }\Cr{boundCbis} (K,n)
\leq \delta'_{i,j} \epsilon V_1(K)
\end{equation}
with
\[ \delta'_{i,j} :=
\frac12 \left( \frac{\vartheta_{d-1}}{\kappa_{d-1}} \right)^{2/{d-1}}
( 2 d \kappa_d )^{1/(d-1)}
\frac{ V_{d-1} (B^d) ^{1/(d-1)} }{ V_1 (B^d) }
\frac{ V_i (B^d) ^{1/i} }{ V_j (B^d) ^{1/j} }.
\]
Therefore, we have a good asymptotic bound for elongated smooth convex bodies.
\medskip
The main goal of this paper is to extend these results to the non-asymptotic and non-smooth case.
The order $\epsilon$ in \eqref{eq:BnAsymptotic} should be compared to the order $\epsilon^\beta$ in Theorem \ref{thm:main} for a fixed $n$.
It is especially of interest, for example, if we approximate a polytope with many facets by one with fewer facets.
This was considered by Reisner, Sch{\"u}tt and Werner in \cite{ReisnerSchuttWerner01}.
This paper was the starting point of our investigations.
A reader who has studied it will notice that the principal ideas of their work are still present in our proofs.
Theorem \ref{thm:main} should be compared to the following result.
It was obtained independently in \cite{Bronshteyn1975} and \cite{Dudley1974}.
The constants were improved in \cite{ReisnerSchuttWerner01}.
There exist constants
$ \Cl[global]{boundFixedR} ( d ) $
and $ \Cl[global]{boundN}(d) $
such that
$ \Cr{boundCbis} ( K , \Cr{boundN}(d) ) < \Cr{boundFixedR} ( d ) R(K) $,
i.e.
\begin{equation*}
d_H (K,P) \leq \Cr{boundFixedR} (d) \frac{ R(K) }{n^{2/(d-1)}} \text{ for } n > \Cr{boundN}(d) ,
\end{equation*}
where $ R(K) $ is the radius of the smallest ball containing $K$.
Note that $ R(K) $ is of the same order as $V_1(K)$.
Although this bound is sharp in general, it is worse if we assume that $K$ is elongated.
The following is an example of such a situation.
Fix a small $ \epsilon > 0 $.
Let $ K \subset \mathds{R}^4 $ be a convex body.
It is well known that there exists an ellipsoid $ E $ such that
$ E \subset K \subset d E $ (see, e.g., \cite{John1948}).
Let $ r_1 > \cdots > r_4 $ be the lengths of the principal axes of $E$.
Assume that $ r_1 = 1 $ and that $ r_2 $ is sufficiently small (in terms of $\epsilon$).
Then $K$ is $(\epsilon \colon 1,2)$-elongated.
For $ n > n_{1,2} \epsilon^{-3} $, Theorem~\ref{thm:main} says that
$ \Cr{boundC} ( K , n )
< \delta_{1,2} \epsilon^{1/6} V_1(K)
\ll \Cr{boundFixedR} (d) R(K) $.
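\medskip
For $d=4$ the exponents in Theorem \ref{thm:main} are easily computed: $\lceil (d-1)/2 \rceil = 2$, so
\[ \alpha = 2\cdot 2\cdot 3\cdot 4^{-1} = 3
\qquad\text{and}\qquad
\beta = 2\cdot 3^{-1}\cdot 4^{-1} = \tfrac16 , \]
which explains the thresholds $n_{1,2}\epsilon^{-3}$ and $\epsilon^{1/6}$ appearing above.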
\medskip
Finally, we would like to highlight the following theorem.
Not only is it an important step in the proof of Theorem \ref{thm:main} but also an interesting result on its own.
\begin{theorem}
\label{thm:intermediate}
There exist absolute constants $\Cr{abs:1}$ and $\Cr{abs:2}$, independent of $d$, such that the following holds.
Let $K$ be a convex body.
Then,
$$ \Cr{boundCbis} ( K , \Cr{abs:1}^d d^{d/2}
V_{d-1}(K+B^d ) )
< \Cr{abs:2} d
V_{d-1}(K+B^d )^{2/(d-1)} .$$
I.e. for any integer $ n > \Cr{abs:1}^d d^{d/2} V_{d-1}(K+B^d ) $,
there exists a polytope
$P\supset K$ with $n$ facets such that
$$\mathrm{d}_H(K,P) < \Cr{abs:2} d V_{d-1}(K+B^d )^{2/(d-1)} n^{-2/(d-1)}.$$
\end{theorem}
\medskip
The paper is structured as follows.
In the next section setting, notation and background material from convex geometry are provided.
In Section \ref{sec:deltaNet} we give a general framework to build a $\delta$-net on an abstract measured metric space satisfying mild properties, and we apply it to prove Theorem \ref{thm:intermediate}.
The proof of Theorem \ref{thm:main} is given in Section \ref{sec:MainProof}.
It uses a shape factor introduced and described in Section \ref{sec:ShapeFactor}.
\section{Setting, notation and background}
\label{sec:Setting}
We work in the euclidean space $\mathds{R}^d$ with origin $\boldsymbol{o}$, scalar product $ \langle \cdot , \cdot \rangle $ and associated norm $|\cdot|$.
We denote by $ B ( \vect{x} , r ) $ and $ S ( \vect{x} , r ) $, respectively, the ball and the sphere of center $ \vect{x} $ and radius $r$.
The unit ball $ B^d = B ( \boldsymbol{o} , 1 )$ has volume $\kappa_d$
and the unit sphere $ \mathds{S}^{d-1} = S ( \boldsymbol{o} , 1 ) $ has surface area $\omega_d = d \kappa_d $.
We denote by $\mathcal{K}$ the set of \textit{convex bodies} (compact and convex sets of $\mathds{R}^d$) with at least $2$ points.
This set is equipped with the Hausdorff distance
\[
d_H (K,L) = \min \left\{ r\geq0 \mid K \subset L + r B^d \text{ and } L \subset K + r B^d \right\}
\] and its associated topology and Borel structure.
The same topology and Borel structure are used on all the subsets of $\mathcal{K}$ that we will encounter in this paper.
The set $\mathcal{K}$ is also equipped with Minkowski sum and scale action.
For any $ t \in \mathds{R} $ and $ A , B \in \mathcal{K} $, we have
\[
t A := \{ t \vect{a} \mid \vect{a} \in A \},
\quad
A + B := \{ \vect{a} + \vect{b} \mid \vect{a} \in A , \vect{b} \in B \}.
\]
We denote by $ \partial K $ the boundary of a given convex body $ K $.
Let $f:\mathcal{K}\to\mathds{R}$ be a map.
If there exists $k\in\mathds{R}$ such that
$ f( t K ) = t^k f(K) $ for any $ K \in\mathcal{K}$ and $t>0$,
we say that $f$ is \textit{homogeneous of degree $k$}.
We say that $f$ is \textit{scale invariant} if $f$ is homogeneous of degree $0$.
If $f(K+\vect{x})=f(K)$ for any $K\in\mathcal{K}$ and $\vect{x}\in\mathds{R}^d$, we say that $f$ is \textit{translation invariant}.
We say that $f$ is a \textit{shape factor} if $f$ is scale and translation invariant.
For the following facts of convex geometry we refer the reader to \cite{Gruber07}.
\textbf{The Steiner Formula and Intrinsic Volumes.}
We denote by $V_d(\cdot)$ the volume, i.e., the $d$-dimensional Lebesgue measure.
The Steiner Formula tells us that there exist functionals
$V_i : \mathcal{K} \to [0,\infty) $, for $ 0 \leq i \leq d $, such that
for any $K\in \mathcal{K}$ and $ \epsilon \geq 0 $
\[
V_d ( K + \epsilon B^d ) = \sum_{j=0}^d \epsilon^{d-j} \kappa_{d-j} V_j(K).
\]
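\medskip
For instance, in the plane ($d=2$, where $\kappa_0=1$, $\kappa_1=2$ and $\kappa_2=\pi$) the formula reads
\[ V_2 ( K + \epsilon B^2 ) = V_2(K) + 2\epsilon V_1(K) + \pi \epsilon^2 , \]
i.e. the area of the $\epsilon$-neighbourhood of a planar convex body is its area, plus $\epsilon$ times its perimeter $2V_1(K)$, plus the area of a disc of radius $\epsilon$.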
$V_i(K)$ is called the \textit{$i$-th intrinsic volume} of $K$.
Some of the intrinsic volumes have a clear geometric meaning.
$V_d$ is the volume.
If $K$ has non-empty interior, then
\[ V_{d-1} (K) = \frac12 \mathcal{H}^{d-1} ( \partial K ), \]
where $ \mathcal{H}^{d-1} ( \partial K ) $ is the $ (d-1) $-dimensional Hausdorff measure of the boundary $ \partial K $ of $ K $.
Thus, $ 2 V_{d-1} $ is the surface area.
$V_1$ is proportional to the \textit{mean width $b$}.
More precisely,
\[
\frac{d \kappa_d}{2} b (K)
= \kappa_{d-1} V_1 (K)
= \int_{\mathds{S}^{d-1}} h ( K , \vect{u} ) \sigma( \mathrm{d} \vect{u} ),
\]
where $\sigma$ is the surface area measure on $\mathds{S}^{d-1}$
and $h(K, \vect{u} ):=\max\{\langle \vect{x}, \vect{u} \rangle\mid \vect{x}\in K\}$ is the value of the \textit{support function} of $K$ at $ \vect{u} $.
$V_0(K)=1$ is the Euler characteristic.
For $ 1 \leq i < j \leq d $ and $K\in\mathcal{K}$, we call the shape factor $ V_j(K)^{1/j} V_i(K)^{-1/i} $ the $(i,j)$-\textit{isoperimetric ratio} of $K$.
\textbf{The Isoperimetric Inequality.}
Let $B \subset \mathds{R}^d$ be a $d$-dimensional ball.
For any $ K \in \mathcal{K} $ and for any $ 1 \leq i < j \leq d $,
\begin{equation}
\label{eq:IsoperimetricIneq}
V_j(K) ^{1/j} \leq \frac{ V_j ( B ) ^{1/j} }{ V_i ( B ) ^{1/i} } V_i(K) ^{1/i},
\end{equation}
with equality if and only if $K$ is a ball.
\textbf{A Steiner-type Formula.}
For any $ K \in \mathcal{K} $
\begin{equation}
\label{eq:SteinerType}
V_{d-1} (K+B^d)
= \sum_{k=0}^{d-1} \frac{(d-k) \kappa_{d-k} }{2} V_k (K).
\end{equation}
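\medskip
As a consistency check, take $K$ to be a single point, so that $V_0(K)=1$ and $V_k(K)=0$ for $1 \leq k \leq d-1$. Then the left hand side of \eqref{eq:SteinerType} is $V_{d-1}(B^d)=\frac{d\kappa_d}{2}$, half the surface area of the unit ball, while the right hand side reduces to its $k=0$ term $\frac{d\kappa_d}{2} V_0(K)$, as it should.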
The isoperimetric inequality and the Steiner-type formula imply the next fact.
\textbf{Fact:}
Let $ d \geq 3 $,
let $I$ be an interval (convex hull of two distinct points),
let $B$ be a ball,
and let $ K \in \mathcal{K} $ be neither an interval nor a ball.
Assume that $ V_1(I) = V_1(K) = V_1(B) $.
Note that $ V_1(I) $ is just the length of the segment $I$.
Then, we have
\begin{equation}
\label{eq:IneqSurfaceConvexPlusBall}
V_{d-1} ( I + B^d )
< V_{d-1} ( K + B^d )
< V_{d-1} ( B + B^d ) .
\end{equation}
\textbf{Convention about the constants.}
The constants are denoted by $c_i$, where $i$ is an index.
They depend on $d$ but are independent of any other quantity.
We estimate the dependence on $d$ using the Landau notation.
By $c_i = \boldsymbol{\Theta} ( f(d) ) $ we mean that there exist absolute constants $ k_0 , k_1 > 0 $ such that
$ k_0 f(d) < c_i < k_1 f(d) $
for any $d$.
\section{\texorpdfstring{$\delta$}{delta}-net and polytopal approximation}
\label{sec:deltaNet}
First, let us set some notation.
Assume $ M $ is a metric space with distance $ d_M $.
I.e. a set $ M $ and a function $ d_M \colon M \times M \to [0,\infty)$ such that, for any $x,y,z\in M$, $d_M(x,y)=0$ if and only if $x=y$, $d_M(x,y)=d_M(y,x)$, and $d_M(x,z)\leq d_M(x,y) + d_M(y,z)$.
We write
$ B_M ( \vect{x} , r ) := \{ \vect{y} \in M \mid d_M ( \vect{x} , \vect{y} ) \leq r \} $.
\begin{definition}
Let $M$ be a metric space and $S$ a discrete subset of $M$.
We say that
\begin{itemize}
\item $ S $ is a \textit{$\delta$-covering of $M$} if $ \cup_{ \vect{x} \in S } B_M ( \vect{x} , \delta ) = M $,
\item $ S $ is a \textit{$\delta$-packing of $M$} if
$ B_M ( \vect{x} , \delta ) \cap B_M ( \vect{y} , \delta ) = \emptyset $
for any $ \vect{x} \neq \vect{y} \in S $,
\item $ S $ is a \textit{$\delta$-net of $M$} if it is both a \textit{$\delta$-covering of $M$} and a \textit{$(\delta/2)$-packing of $M$}.
\end{itemize}
\end{definition}
Note that, in the poset of $(\delta/2)$-packings ordered under inclusion, a maximal element is a $\delta$-net.
Zorn's lemma shows that, for any metric space $M$, there exists a $\delta$-net.
In the following lemma,
under some assumptions on a measure $\psi$, we give bounds for the cardinality of a $\delta$-net.
The construction of these bounds is adapted from the proof of the following well known result, see e.g. \cite[Proposition 31.1]{Gruber07}.
If $C\subset\mathds{R}^d$ is a convex body with non empty interior such that $C=-C$, then there exists a packing of translated copies of $C$ in $\mathds{R}^d$ of density at least $2^{-d}$, where, roughly speaking, density means the proportion of $\mathds{R}^d$ covered by the translated copies of $C$.
\begin{lemma}
\label{lem:Saturated}
Let $M$ be a space equipped with a measure $ \psi $ and a measurable metric $ d_M $.
Assume that $\psi(M)<\infty$.
Let $ \delta_0 > 0 $ and $ S $ be a $\delta$-net of $M$ with $\delta \in ( 0 , \delta_0 ) $.
Let $ k > 0 $.
\begin{enumerate}
\item Assume there exists a constant $ c > 0 $ such that,
for any $ \vect{x} \in M $ and $ r \in ( 0 , \delta_0 ) $,
it holds that
$ c r^k
> \psi ( B_M ( \vect{x} , r ) ) $.
Then
$ \card{S}
> c ^{-1} \psi(M) \delta^{-k} $.
\item Assume there exists a constant $ c' > 0 $ such that,
for any $ \vect{x} \in M $ and $ r \in ( 0 , \delta_0 ) $,
it holds that
$ c' r^k
< \psi ( B_M ( \vect{x} , r ) ) $.
Then
$ \card{S}
< 2^k c'^{-1} \psi(M) \delta^{-k} $.
\end{enumerate}
\end{lemma}
\begin{proof}
To prove $(1)$, we only have to observe that since $ S $ is a $\delta$-covering, we have that
\[ \psi ( M )
\leq \sum_{ \vect{x} \in S } \psi ( B_M ( \vect{x} , \delta ) )
< \card{S} c \delta^k \]
because $ M = \cup_{ \vect{x} \in S } B_M ( \vect{x} , \delta ) $.
The proof of $(2)$ is similar.
Since $ S $ is a $(\delta/2)$-packing, we have that
\[ \psi ( M )
\geq \sum_{ \vect{x} \in S } \psi ( B_M ( \vect{x} , \delta / 2 ) )
> \card{S} c' \delta^k 2^{-k} \]
because, for any distinct $ \vect{x} , \vect{y} \in S $, we have
$ B_M ( \vect{x} , \delta / 2 ) \cap B_M ( \vect{y} , \delta / 2 ) = \emptyset $.
\end{proof}
In Lemma \ref{lem:deltaNet}, we will apply the previous lemma to the space $M=\partial(K+B^d)$, where $K$ is an arbitrary convex body and $M$ is equipped with the surface area measure and the restriction of the euclidean distance.
In this space the balls are caps on the boundary of the convex body $D=K+B^d$, where a cap is defined as follows.
For a convex body $D$,
a point $ \vect{d} \in D $ (usually $ \vect{d} \in \partial D $),
and a positive radius $\delta>0$,
we define the \textit{cap of $D$ of center $\vect{d}$ and radius $\delta$} to be the set
\[ \Cap{D}{\vect{d}}{\delta}
= \{ \vect{y} \in \partial D \mid | \vect{d} - \vect{y} | <\delta\}. \]
Note that our definition differs slightly from the more usual one, where a cap is the intersection of the boundary $\partial D$ with a half-space.
In the next lemma we give bounds for the surface area of caps of radius $\delta\in(0,\delta_0)$ of bodies of the form $K+B^d$, with $\delta_0=1$ independent of $K$.
Precise bounds for spherical caps are known, see e.g. Lemma 2.1 in \cite{BriedenAndAl01}, Lemmas 2.2 and 2.3 in \cite{Ball97} or Remark 3.1.8 in \cite{ArtsteinAvidanAndAll15}.
Lemma 6.2 in \cite{RichardsonAndAl08} gives bounds for more general bodies than the sphere, namely those with $\mathcal{C}^2$ boundary of positive curvature, but with a $\delta_0$ depending on $K$.
It does not seem to the author that Lemma \ref{lem:MeasCap} can easily be deduced from these results.
\begin{lemma}
\label{lem:MeasCap}
Let $ K \in \mathcal{K} $ and $ D = K + B^d $.
Let $\vect{d} \in \partial D $ and $ \delta \in ( 0 , 1 ) $.
Then
\[ \delta^{d-1} \kappa_{d-1} 2^{-(d-1)}
< \mathcal{H}^{d-1} ( \Cap{D}{\vect{d}}{\delta} )
< \delta^{d-1} \kappa_{d-1} d .
\]
\end{lemma}
\begin{proof}
For the lower bound, we approximate the cap by a $(d-1)$-dimensional disc of radius
$ \delta \sqrt{ 1 - \delta^2 / 4 } $
(see Figure \ref{fig:LowerBoundCap}).
Let $H$ be the tangent hyperplane to $D$ at $\vect{d}$.
We have
\begin{figure}
\includegraphics*{./LowerBound.pdf}
\caption{ $\mathcal{H}^{d-1}( \Cap{D}{ \vect{d} }{\delta} )
\geq \delta^{d-1} \kappa_{d-1} \left( 1 - \frac{ \delta^2 } { 4 } \right)^{(d-1)/2} $}
\label{fig:LowerBoundCap}
\end{figure}
\begin{align*}
\mathcal{H}^{d-1}( \Cap{D}{ \vect{d} }{\delta} )
& \geq
\mathcal{H}^{d-1}( H \cap B ( \vect{d} , d ( \vect{d} , \vect{e} ) ) )
\\
& = \delta^{d-1} \kappa_{d-1} \left( 1 - \frac{ \delta^2 } { 4 } \right)^{(d-1)/2}
\\
& > \delta^{d-1} \kappa_{d-1} \left( \frac{ 3 } { 4 } \right)^{(d-1)/2}
> \delta^{d-1} \kappa_{d-1} 2^{-(d-1)} .
\end{align*}
For the upper bound, we cover the cap by the union of a $(d-1)$-dimensional disc of radius $\delta$ and the lateral boundary of a cylinder of radius $\delta$ and height $ \delta^2 $
(see Figure \ref{fig:UpperBoundCap}).
\begin{figure}
\includegraphics*{./UpperBound.pdf}
\caption{ $\mathcal{H}^{d-1}( \Cap{D}{ \vect{d} }{\delta} )
< \delta^{d-1} \kappa_{d-1}
+ \delta^{d-2} \omega_{d-1} \delta^2 $}
\label{fig:UpperBoundCap}
\end{figure}
Thus
\begin{align*}
\mathcal{H}^{d-1}( \Cap{D}{ \vect{d} }{\delta} )
& <
\mathcal{H}^{d-1}( H \cap B ( \vect{d} , \delta ) )
+ \mathcal{H}^{d-2}( H \cap S ( \vect{d} , \delta ) ) \delta^2
\\
& =
\delta^{d-1} \kappa_{d-1}
+ \delta^{d-2} \omega_{d-1} \delta^2
\\
& =
\delta^{d-1} \kappa_{d-1} \left( 1 + \delta ( d - 1 ) \right)
<
\delta^{d-1} \kappa_{d-1} d .
\end{align*}
\end{proof}
Set
$ \Cl[global]{12min}
:= 2 d^{-1} \kappa_{d-1}^{-1}
= \boldsymbol{\Theta} ( d^{1/2} )^d $
and
$ \Cl[global]{12}
:= 4^{d} \kappa_{d-1}^{-1}
= \boldsymbol{\Theta} ( d^{1/2} )^d $.
As a direct consequence of the two previous lemmas and the fact that
$\mathcal{H}^{d-1}( \partial D ) = 2 V_{d-1} ( D )$,
we have the following lemma.
We omit the proof.
\begin{lemma}
\label{lem:deltaNet}
Let $ K \in \mathcal{K} $ and $ D = K + B^d $,
$ \delta \in ( 0 , 1 ) $
and $S$ a $\delta$-net of the boundary $\partial D$.
We have that
\[ \Cr{12min} V_{d-1}(D) \delta^{-(d-1)}
< | S |
< \Cr{12} V_{d-1}(D) \delta^{-(d-1)} . \]
\end{lemma}
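Although the proof is omitted, it may help the reader to see how the constants arise. Apply Lemma \ref{lem:Saturated} with $M=\partial D$, $\psi=\mathcal{H}^{d-1}$, $k=d-1$, $\delta_0=1$, $\psi(M)=2V_{d-1}(D)$, and the bounds $c=\kappa_{d-1}d$ and $c'=\kappa_{d-1}2^{-(d-1)}$ from Lemma \ref{lem:MeasCap}. This gives
\[ |S| > \frac{2V_{d-1}(D)}{d \kappa_{d-1}} \, \delta^{-(d-1)}
= \Cr{12min} V_{d-1}(D) \, \delta^{-(d-1)} \]
and
\[ |S| < 2^{d-1} \cdot \frac{2^{d-1}}{\kappa_{d-1}} \cdot 2 V_{d-1}(D) \, \delta^{-(d-1)}
= \frac{2^{2d-1}}{\kappa_{d-1}} \, V_{d-1}(D) \, \delta^{-(d-1)}
\leq \Cr{12} V_{d-1}(D) \, \delta^{-(d-1)} . \]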
For a convex body $K$ with boundary $\partial K$ of differential class $\mathscr{C}^1$ and $ \vect{x} \in \partial K $,
we denote by $ \vect{v} ( \vect{x} ) $ the outer unit normal vector of $ K $ at $ \vect{x} $.
Using Lemma \ref{lem:deltaNet}, we can prove the two following lemmas in a similar way as Propositions 2.4 and 2.7 of \cite{ReisnerSchuttWerner01}.
We will only sketch the proofs.
\begin{lemma}
\label{lem:deltaNetBis}
Let $ K \in \mathcal{K} $ with $ \partial K $ of class $ \mathscr{C}^1 $ and
$ \delta \in ( 0 , 1 ) $.
There exists a $\delta$-net of $\partial K$, with respect to the distance
$ d_m ( \vect{x} , \vect{y} )
= \max ( | \vect{x} - \vect{y} | , | \vect{v} ( \vect{x} ) - \vect{v} ( \vect{y} ) | ) $,
of cardinality at most
$ \Cr{12} V_{d-1}(K+B^d ) \,\delta^{-(d-1)} $.
\end{lemma}
\begin{proof}[Sketch of the proof]
Set $D=K+B^d$.
Construct a $\delta$-net on the boundary $\partial D$ and then project it onto $\partial K$.
The bound on the cardinality comes from Lemma \ref{lem:deltaNet}.
\end{proof}
Set
$ \Cl[global]{12bis}
:= 3^{(d-1)/4} \Cr{12}
= \boldsymbol{\Theta} ( d^{1/2} )^d $.
\begin{lemma}
\label{prop:bestApproxFixNumberFacets}
Let $ K \in \mathcal{K} $ and $0<\epsilon<1$.
Then, there exists a polytope $P_\epsilon\supset K$ with $$\mathrm{d}_H(K,P_\epsilon)<\epsilon$$
and with number of facets at most
$$\Cr{12bis} V_{d-1}(K+B^d ) \,\epsilon^{-(d-1)/2}.$$
\end{lemma}
\begin{proof}[Sketch of the proof]
Reduce the proof to the case where $K$ has a smooth boundary.
Set an appropriate value $\delta = \delta(\epsilon)$.
Consider the $\delta$-net $S$ built in Lemma \ref{lem:deltaNetBis}.
Construct the circumscribed polytope $ P \supset K $ with one facet tangent to $K$ at each point of $S$.
Finally bound the Hausdorff distance $d_H(K,P)$.
The bound on the number of facets comes from the bound on the cardinality of the $\delta$-net in Lemma \ref{lem:deltaNetBis}.
\end{proof}
Set
$ \Cl[global]{13}
:= \Cr{12bis}^{2/(d-1)}
= \boldsymbol{\Theta} ( d ) $.
With the last lemma, we can now prove Theorem \ref{thm:intermediate}.
\begin{proof}[Proof of Theorem \ref{thm:intermediate}]
Let $ n > \Cr{12bis} V_{d-1} ( K + B^d ) $.\\
Set $ \epsilon = \Cr{13} V_{d-1}(K+B^d )^{2/(d-1)} n^{-2/(d-1)}$.
By the assumption made on $n$, we have $\epsilon<1$.
Hence, we can apply Lemma~\ref{prop:bestApproxFixNumberFacets}.
There exists a polytope $P_\epsilon \supset K$ with $d_H(K,P_\epsilon)<\epsilon$ and such that its number of facets is at most
$$ \Cr{12bis} V_{d-1}(K+B^d ) \,\epsilon^{-(d-1)/2}
= n . $$
The estimates of the constants in Landau notation tell us that there exist absolute constants $\Cl[abs]{abs:1}$ and $\Cl[abs]{abs:2}$ such that
$\Cr{12bis} < \Cr{abs:1} ^d d^{d/2}$
and $ \Cr{13} < \Cr{abs:2} d $
for any $d$.
This yields the proof.
\end{proof}
\section{Shape factor}
\label{sec:ShapeFactor}
In this section we define $\ShapeFactor$, a \textit{shape factor},
i.e. a scale and translation invariant function on $\mathcal{K}$.
Lemma \ref{lem:propertiesOfgl} tells us how $\ShapeFactor(K)$ describes the elongation of a given convex body $K$.
Set
$ \Cl[global]{12bisbis}
:= \Cr{12bis} V_{d-1}(B^d )$.
\begin{definition}
\label{def:fnandgn}
For any fixed parameter $ l > \Cr{12bisbis} $ we define the functions
$ \FunctionBound , \ShapeFactorBis ,\ShapeFactor: \mathcal{K} \to (0,\infty) $
by
\[ \FunctionBound(K)
= \sup \{ t \in ( 0 , \infty ) \mid l > \Cr{12bis} V_{d-1}( t K + B^d ) \} , \]
\[ \ShapeFactorBis(K)
= \inf_{ t \in (0,\FunctionBound (K)) } \frac{ V_{d-1} ( t K + B^d )^{ 2/(d-1) } }{t} , \]
and
\[ \ShapeFactor(K)
= \frac{ \ShapeFactorBis(K)}{V_{1}(K)} . \]
\end{definition}
It is clear that the three functions are translation invariant.
One can check that $ \FunctionBound $ is homogeneous of degree $-1$, $ \ShapeFactorBis $ is homogeneous of degree $1$ and $ \ShapeFactor $ is homogeneous of degree $0$.
Therefore, for any fixed $l$, $ \ShapeFactor$ is a shape factor.
The next lemma gives a geometric interpretation of $ \ShapeFactor $.
\begin{lemma}
\label{lem:propertiesOfgl} \
\begin{enumerate}
\item For any $K \in \mathcal{K} $, the function $ l \mapsto \ShapeFactor (K) $ is decreasing.
\item If $d=2$ and $ l > \Cr{12bisbis} $ is fixed, then $ \ShapeFactor $ is constant on $\mathcal{K}$.
\item If $d \geq 3$ and $ K \in \mathcal{K} $ is neither an interval nor a ball, then
\[ \ShapeFactor (I) < \ShapeFactor (K) < \ShapeFactor (B)
\text{ for any } l > \Cr{12bisbis},\]
where $I$ denotes an interval and $B$ a ball.
\item Assume that $ 1 \leq i < j \leq \lceil (d-1)/2 \rceil$.
There exist constants $ \delta_{i,j} $ and $ n_{i,j} $, both depending on $d$, such that the following holds.
For any convex body $K\in\mathcal{K}$ and $\epsilon>0$, we have
\begin{equation}
\label{eq:almostFlatBodies}
\text{if } \frac{ V_j (K)^{1/j} }{ V_i(K)^{1/i} } < \epsilon \text{ then } \ShapeFactor[N_{i,j}(\epsilon)] (K) \leq \delta_{i,j} \epsilon^{\beta},
\end{equation}
where $ N_{i,j} (\epsilon) := n_{i,j} \epsilon^{-\alpha}$ with $\alpha=2 \lceil (d-1)/2 \rceil (d-1) d^{-1} $,
and $ \beta = 2 \lceil (d-1)/2 \rceil (d-1)^{-1} d^{-1}$.
\end{enumerate}
\end{lemma}
\begin{proof}
(1) is a direct consequence of the definition of $\ShapeFactor$.
(2) comes from the fact that in this case $V_{d-1}=V_1$ is additive.
(3) is implied by $\eqref{eq:IneqSurfaceConvexPlusBall}$.
It only remains to prove~(4).
For the rest of the proof we write $ v_i := V_i (B^d)^{1/i} $ for $ i = 1, \ldots , d $.
Thanks to point $3$ of the present lemma, we have that
$\ShapeFactor[N_{i,j}(\epsilon)] (K)
\leq \ShapeFactor[N_{i,j}(\epsilon)] (B) $.
This implies that, without loss of generality, we can assume that $\epsilon < c $, for $c>0$ as small as one needs.
We also reduce the proof to the case $ i=1 $ and $ j=j_0 = \lceil (d-1)/2 \rceil$.
Because of the isoperimetric inequality $\eqref{eq:IsoperimetricIneq}$, we have
\begin{equation}
\label{eq:IV3}
\frac{ V_{j_0} (K)^{1/j_0} }{ V_1(K) }
\leq c_{i,j}
\frac{ V_j (K)^{1/j} }{ V_i(K)^{1/i} }
\text{ where }
c_{i,j} := \frac{v_{j_0} v_i}{ v_j v_1 } .
\end{equation}
Assume that there exist constants $\delta_{1,j_0}$ and $n_{1,j_0}$ such that $\eqref{eq:almostFlatBodies}$ holds for $ i=1 $ and $ j=j_0$.
Let $ 1 \leq i < j \leq j_0 $ and $(i,j)\neq(1,j_0)$.
We set $ \delta_{i,j} := \delta_{1,j_0} c_{i,j}^\beta $ and $n_{i,j} := n_{1,j_0} c_{i,j}^{-\alpha}$.
In particular,
$ N_{i,j}(\epsilon)
= n_{i,j} \epsilon^{-\alpha}
= n_{1,j_0} (c_{i,j} \epsilon)^{-\alpha}
= N_{1,j_0}(c_{i,j}\epsilon) $.
Assume that $K$ is such that $V_j(K)^{1/j} V_i(K)^{-1/i} < \epsilon$.
By $\eqref{eq:IV3}$ we have $V_{j_0}(K)^{1/j_0} V_1(K)^{-1} < c_{i,j} \epsilon$.
This implies that
$ \ShapeFactor[N_{i,j}(\epsilon)](K)
= \ShapeFactor[N_{1,j_0}(c_{i,j}\epsilon)](K)
\leq \delta_{1,j_0} (c_{i,j}\epsilon)^\beta
= \delta_{i,j} \epsilon^\beta $.
This shows that we only have to consider the case $ i=1 $ and $ j = j_0$.
Since both parts of $\eqref{eq:almostFlatBodies}$ are scale invariant, we also assume without loss of generality that $V_1(K)=1$.
Let $ \epsilon \in (0,1) $ and $ l > \Cr{12bisbis} $.
From now on, we assume that
\begin{equation}
\label{eq:assumtionVj}
V_{j_0} (K) ^{1/j_0} < \epsilon .
\end{equation}
Set
\[ p_C (t)
:= V_{d-1}( t K + B^d )
\overset{\eqref{eq:SteinerType}}{=} \sum_{k=0}^{d-1} \frac{(d-k)\kappa_{d-k}}{2d} V_k(K) t^k .\]
Observe that $p_C$ is strictly increasing and continuous, and that
\begin{align}
\label{eq:IV1}
\ShapeFactor (K)
& = \ShapeFactorBis (K)
= \left( \inf_{ t \in (0,\FunctionBound(K))} t^{-(d-1)/2} p_C (t) \right)^{2/(d-1)} \\
\notag
\text{and } \FunctionBound(K)
& = p_C^{-1} ( \Cr{12bis}^{-1} l ) .
\end{align}
Observe that $ j_0 - 1 - (d-1)/2 \leq -1/2 $.
Hence, for $ t>1 $,
\[ t^{-(d-1)/2} p_C (t) \leq S_1 (K) t^{-1/2} + S_2 (K) t^{(d-1)/2} , \] where
\[ S_1(K) := \sum_{k=0}^{j_0-1} \frac{(d-k)\kappa_{d-k}}{2d} V_k(K)
\text{ and }
S_2(K) := \sum_{k=j_0}^{d-1} \frac{(d-k)\kappa_{d-k}}{2d} V_k(K) .\]
The isoperimetric inequalities $\eqref{eq:IsoperimetricIneq}$ give that
\[ S_1(K)
\leq \frac{ \kappa_d }{ 2 } + \sum_{k=1}^{j_0-1} \frac{(d-k)\kappa_{d-k}}{2d} \left( \frac{v_k}{v_1} \right)^k
=: \Cl[global]{IV1} . \]
They also imply that, for $ k = j_0,\ldots,d-1 $, we have
$ V_k (K) \leq (v_k / v_{j_0})^k V_{j_0}(K)^{k/j_0} $.
Since $ V_{j_0}(K)^{k/j_0} < \epsilon^k \leq \epsilon^{j_0} $, it follows that
\[ S_2(K)
\leq \sum_{k=j_0}^{d-1} \frac{(d-k)\kappa_{d-k}}{2d} \left( \frac{v_k}{v_{j_0}} \right)^{k} \epsilon^{j_0}
=: \Cl[global]{IV2} \epsilon^{j_0} .\]
Therefore, for $t>1$,
\begin{equation}
\label{eq:IV2}
t^{-(d-1)/2} p_C (t)
\leq \Cr{IV1} t^{-1/2} + \Cr{IV2} \epsilon^{j_0} t^{(d-1)/2}
=: q_\epsilon (t) .
\end{equation}
Since we want $t^{-(d-1)/2} p_C (t)$ to be small, we choose $ t_\epsilon > 0 $ minimizing $q_\epsilon$.
The derivative of $q_\epsilon$ is
\[ q_\epsilon' (t)
= \frac{ - \Cr{IV1} }2 t^{-3/2} + \frac{ \Cr{IV2} \epsilon^{j_0} (d-1) }2 t^{(d-3)/2}. \]
Thus,
\[ t_\epsilon = \left( \frac{\Cr{IV2} \epsilon^{j_0} (d-1)}{ \Cr{IV1} } \right) ^{-2/d}
= \Cr{IV3} \epsilon^{- 2 j_0 / d } \]
with $ \Cl[global]{IV3}
:= \left( \Cr{IV2} (d-1) / \Cr{IV1} \right) ^{-2/d}$.
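Indeed, the equation $ q_\epsilon' ( t ) = 0 $ is equivalent to
\[ \frac{ \Cr{IV1} }2 t^{-3/2}
= \frac{ \Cr{IV2} \epsilon^{j_0} (d-1) }2 t^{(d-3)/2} ,
\quad \text{that is,} \quad
t^{d/2} = \frac{ \Cr{IV1} }{ \Cr{IV2} \epsilon^{j_0} (d-1) } , \]
and $ q_\epsilon' < 0 $ on $ (0,t_\epsilon) $ while $ q_\epsilon' > 0 $ on $ (t_\epsilon,\infty) $, so $ t_\epsilon $ is indeed the unique minimizer of $ q_\epsilon $.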
Now, we observe that
\begin{equation*}
t_\epsilon^{-(d-1)/2} p_C ( t_\epsilon )
\overset{\eqref{eq:IV2}}{\leq} q_\epsilon (t_\epsilon)
= \Cr{IV1} (\Cr{IV3} \epsilon^{- 2 j_0 / d })^{-1/2} + \Cr{IV2} \epsilon^{j_0} (\Cr{IV3} \epsilon^{- 2 j_0 / d })^{(d-1)/2}
= \Cr{IV4} \epsilon^{j_0/d}
\end{equation*}
with
$\Cl[global]{IV4} := \Cr{IV1} \Cr{IV3}^{-1/2} + \Cr{IV2} \Cr{IV3}^{(d-1)/2}$.
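The power of $\epsilon$ can be checked term by term: the first summand contributes
$ ( \epsilon^{- 2 j_0 / d} )^{-1/2} = \epsilon^{j_0/d} $,
while the second contributes
\[ \epsilon^{j_0} \left( \epsilon^{- 2 j_0 / d} \right)^{(d-1)/2}
= \epsilon^{ j_0 - j_0 (d-1) / d }
= \epsilon^{ j_0 / d } , \]
so both terms are of order $ \epsilon^{j_0/d} $.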
This implies that if
$ \FunctionBound[N_{1,j_0} (\epsilon)] (K) > t_\epsilon $
then
\[ \ShapeFactor[N_{1,j_0} (\epsilon)] (K)
\overset{\eqref{eq:IV1}}{\leq} \left( t_\epsilon^{-(d-1)/2} p_C ( t_\epsilon ) \right)^{2/(d-1)}
\leq \left( \Cr{IV4} \epsilon^{j_0/d} \right)^{ 2 / (d-1) }
\leq \delta_{1,j_0} \epsilon^\beta
\]
with $\delta_{1,j_0} := \Cr{IV4}^{2/(d-1)}$ and
$\beta
= 2 j_0 (d-1)^{-1} d^{-1} $.
It remains only to set $ N_{1,j_0} (\epsilon) $ such that $ \FunctionBound[ N_{1,j_0} (\epsilon) ] (K) > t_\epsilon $.
Set
\[ \Cl[global]{IV5} := \frac{ \kappa_d }{ 2 } + \sum_{k=1}^{d-1} \frac{(d-k)\kappa_{d-k}}{2d} \left( \frac{v_k}{v_1} \right)^{k}
\text{ and }
\tilde{p} (t) := \Cr{IV5} t^{d-1}.\]
Again because of the isoperimetric inequality, we have that $ p_C ( t ) < \tilde{p} (t) $, for any $t>1$.
Hence if $ u > \tilde{p} (1) = \Cr{IV5} $ then $p_C^{-1} (u) > \tilde{p}^{-1} (u) $.
Set
$$ N_{1,j_0} (\epsilon)
:= \Cr{12bis} \Cr{IV5} t_{\epsilon}^{d-1}
= n_{1,j_0} \epsilon^{- \alpha } $$
with
$ n_{1,j_0} := \Cr{12bis} \Cr{IV5} \Cr{IV3}^{d-1} $
and
$ \alpha := 2 j_0 (d-1) d^{-1} $.
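These definitions are consistent, since
\[ \Cr{12bis} \Cr{IV5} t_\epsilon^{d-1}
= \Cr{12bis} \Cr{IV5} \left( \Cr{IV3} \epsilon^{ - 2 j_0 / d } \right)^{d-1}
= \Cr{12bis} \Cr{IV5} \Cr{IV3}^{d-1} \epsilon^{ - 2 j_0 (d-1) / d }
= n_{1,j_0} \epsilon^{-\alpha} , \]
and $ \alpha $ agrees with the exponent announced in point (4) because $ j_0 = \lceil (d-1)/2 \rceil $.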
Thus we have
\[ \FunctionBound[N_{1,j_0} (\epsilon)] (K)
= p_C^{-1} ( \Cr{12bis}^{-1} N_{1,j_0} (\epsilon) )
= p_C^{-1} ( \Cr{IV5} t_{\epsilon}^{d-1} )
> \tilde{p}^{-1} ( \Cr{IV5} t_{\epsilon}^{d-1} )
= t_{\epsilon}\]
whenever $ t_\epsilon > 1 $.
But $ t_\epsilon > 1 $ as soon as $ \epsilon < \Cr{IV3}^{ d / (2 j_0) } $, which we may assume thanks to the reduction to small $\epsilon$ made at the beginning of the proof.
This completes the proof.
\end{proof}
\section{Proof of Theorem \ref{thm:main}}
\label{sec:MainProof}
Theorem \ref{thm:main} is a direct consequence of the following lemma and point $4$ of Lemma~\ref{lem:propertiesOfgl}.
Let
$ \Cl[global]{13bis}
> \Cr{13} $.
\begin{lemma}
Let $ K \in \mathcal{K} $.
For any $ n > \Cr{12bisbis} $, we have
$$ \Cr{boundC} ( K , n ) < \Cr{13bis} \ShapeFactor[n] (K) V_1(K).$$
That is, for any integer $ n > \Cr{12bisbis} $, there exists a polytope
$ P \supset K $ with $ n $ facets such that
$$ \mathrm{d}_H(K,P)
< \Cr{13bis} \ShapeFactor[n] (K) \frac{ V_1(K) }{ n^{2/(d-1)} } . $$
\end{lemma}
\begin{proof}
The condition
$ n > \Cr{12bisbis} $
implies that
$ \FunctionBound[n] (K) $ and $ \ShapeFactor[n] (K) $ are well defined.
Let $ t \in (0,\FunctionBound[n] (K) ) $.
We have defined $\FunctionBound[n] (K) $ precisely so that the convex body $ t K $ and the number $n$ satisfy the hypotheses of Theorem \ref{thm:intermediate}.
So there exists a polytope $ P_t $ with $ n $ facets such that
$$ \mathrm{d}_H ( t K , P_t)
< \Cr{13} V_{d-1} ( tK + B^d ) ^{2/(d-1)} n ^{-2/(d-1)} .$$
Therefore, since $\mathrm{d}_H$ is homogeneous of degree $1$, for any $ t \in ( 0 , \FunctionBound[n](K) ) $ we see that
$$ \mathrm{d}_H \left( K , \frac1t P_t \right)
< \Cr{13} \frac{ V_{d-1} ( t K + B^d ) ^{2/(d-1)} }{ t } n^{-2/(d-1)} . $$
Since $ \Cr{13bis} > \Cr{13} $, there exists $ t_0 \in ( 0 , \FunctionBound[n] (K) ) $ such that
$$ \mathrm{d}_H \left( K , \frac1{t_0} P_{t_0} \right)
< \Cr{13bis} \left( \inf_{ t \in ( 0 , \FunctionBound[n] (K) ) } \frac{ V_{d-1} ( t K + B^d ) ^{ 2 / (d-1) } }{ t } \right) n^{-2/(d-1)} . $$
But it holds that
$$ \inf_{ t \in ( 0 , \FunctionBound[n] (K) ) } \frac{V_{d-1}(t K +B^d )^{2/(d-1)}}{t}
= \ShapeFactorBis[n] (K)
= \ShapeFactor[n] (K) V_1(K) , $$
which yields the proof.
\end{proof}
\bibliographystyle{plain}
% arXiv:1612.04706, submitted 2016-12-15, https://arxiv.org/abs/1612.04706