\section{Introduction}\label{sec:introduction} The efficient market hypothesis (EMH) is technically interpreted to mean that the market is impossible to beat and that there is no autocorrelation (such as systematically repeated price/return patterns) that can be used for profit \cite{Mandelbrot1966}. The idea is a pillar and a paradigm of classical market finance, widely discussed in the influential 1970 review by E.~Fama \cite{Fama1970}. A Markov market, i.e. the assumption that future fluctuations depend only on the last observed price, satisfies the condition of a market that is strictly impossible to beat and thus perfectly efficient. In more recent decades, with advancements in trading technologies, it has become clear that the EMH can be broken; a comprehensive survey and discussion can be found in Ref.~\cite{Markiel2003}. However, real markets remain very hard to beat, and models that generate no autocorrelation in the price increments are very good first-step approximations. In such models, in the limit of small time intervals, the correlations of the increments $x(t_1)-x(t_1-T_1)$ and $x(t_2+T_2) - x(t_2)$ for a generic price series $x(t)$ vanish, \begin{equation} \left< (x(t_1)-x(t_1-T_1))(x(t_2+T_2) - x(t_2)) \right> = 0\,, \label{eq:emh} \end{equation} for each set of finite instants $t_1,T_1,t_2,T_2$ if there is no overlap between the intervals, i.e. if $(t_1-T_1,t_1)\cap (t_2,t_2+T_2) = \emptyset$. This condition is much weaker, and more meaningful, than the assumption of stochastically independent increments. Eq.~(\ref{eq:emh}) means that nothing that happened during a time interval can be used to systematically forecast the returns in a future time interval, at least at the level of simple averages and pair correlations. That is, the market is ``effectively efficient'' in the sense that it is impossible to forecast its direction, paving the way for the application of semi-martingale processes.
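As a minimal numerical illustration of Eq.~(\ref{eq:emh}) (our own sketch, not part of the original analysis), one can verify that for a simulated random walk the sample correlation between increments over non-overlapping intervals is compatible with zero:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "efficient" price path: a random walk, so increments over
# disjoint intervals are uncorrelated, as in Eq. (1).
x = np.cumsum(rng.normal(size=100_000))

T1, T2, gap = 5, 5, 1  # interval lengths, plus a gap ensuring no overlap
past = x[T1:-T2 - gap] - x[:-T1 - T2 - gap]    # x(t1) - x(t1 - T1)
future = x[T1 + gap + T2:] - x[T1 + gap:-T2]   # x(t2 + T2) - x(t2)

corr = np.corrcoef(past, future)[0, 1]
print(abs(corr))  # small: compatible with zero up to sampling noise
```

With roughly $10^5$ increment pairs, the sample correlation stays within a few multiples of $2/\sqrt{n}$ of zero.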
Contrary to some other well-known stochastic processes such as fractional Brownian motion, there is no memory in pair correlations to be exploited. In general, this does not rule out higher-order correlations or cross-correlations between assets that might be used for technical trading. A Markovian market is ``efficient'' in the strictest sense: it is impossible to beat. A martingale market, instead, leaves the opportunity of exploiting higher-order dependencies. It is an acknowledged and well-tested fact that individual asset time series do not present features challenging efficiency, apart from mean-reverting effects at very short time scales (a few seconds), mainly due to the bid-ask bounce. The main question we address in this work is whether interdependences between distinct stock price increments can be employed to produce a single time series with a non-vanishing autocorrelation. This is not, at bottom, a new idea: the ideas behind pairs trading, i.e. co-integration \cite{Johansen1988}, and most statistical arbitrage strategies are simpler versions of it. The originality of the work we present here lies, firstly, in the focus on short time scales (of the order of tens of seconds) and, secondly and more importantly, in the design of an optimized collective dynamics. Precisely, starting with an original set of $N$ stock time series $x_i(t)$ ($i=1,\ldots,N$), we ask whether it is possible to find a suitable set of weights $w_i$ for which the basket \begin{equation} B(t) = \sum_{i=1}^N w_i x_i(t) \label{eq:basket} \end{equation} presents systematic autocorrelations or, more generally, a non-trivial persistent behaviour. Unfortunately, assessing the persistent or anti-persistent behaviour of a time series raises various difficulties, and the vast confusion in the literature does not help to overcome them.
Often, an estimation of the Hurst exponent is taken as a good indicator, but this is misleading. Strictly speaking, the Hurst exponent is just a measure of the scaling properties of a time series and, alone, does not give any information on possible persistent behaviour when the analyzed data are non-stationary, as is typically the case in the financial world. Moreover, the estimation of the Hurst exponent is by itself a difficult task, and there is a host of available estimators which seldom allow for cross-validation (see, for example, Ref.~\cite{Mielniczuk2007}). To avoid these problems, or at least to minimize them, we will mainly focus our discussion directly on the autocorrelation or, more precisely, on the anti-autocorrelation. This choice will help us overcome the tedious and not always solvable issues raised by trends, leaving us only with the problem of non-stationarity in higher-order fluctuations. This could still have an effect on the value of the sample autocorrelation, but we will not focus on that value itself; rather, we simply seek to assess whether it is significant or not, using tests based on the stationarity of the increments. A consistent way to estimate statistical significance for non-stationary time series is, to the authors' knowledge, lacking. The paper is organized as follows. In Section 2 we present the data set we use for the study, together with the sampling rules. In Section 3 the main statistical objects (autocorrelation and Hurst exponent) are presented and discussed. Section 4 is devoted to the tools and procedures we follow in the search for the desired non-efficient basket. Finally, Sections 5 and 6 are dedicated, respectively, to the analysis of the results and to the conclusions. \section{Data set} \label{sec:data} \begin{figure} \centering \includegraphics[height=5.5cm, width=.95\columnwidth]{midprice} \caption{Scheme explaining the nature of the series we study and the procedure used to obtain them.
For each time window, we observe the last trade (trades are dashed vertical lines, randomly spaced) and consider the bid and ask prices immediately before the transaction (blue and red large points). The blue line is the bid time series and the red one is the ask time series. The considered midprice is simply the average of the considered bid and ask prices.} \label{fig:midprice} \end{figure} The original data set consists of all trades registered in the primary markets of the analyzed stocks. The data are stored in the Thomson Reuters RDTH database, made available to the Chair of Quantitative Finance by BNP Paribas. For the purpose of our study, we extract from the RDTH database records consisting of the time of a transaction, the bid and ask prices prior to each transaction, and the traded price. These data, appropriately filtered in order to remove misprints in prices and times of execution, correspond to the trades registered at NYSE or at NASDAQ during 2007 for the 30 shares of the Dow Jones Industrial Average Index, namely, at that time: AA.N AIG.N AXP.N BA.N C.N CAT.N CCE.N DD.N DIS.N GE.N GM.N HD.N HON.N HPQ.N IBM.N INTC.O JNJ.N JPM.N MCD.N MMM.N MO.N MRK.N MSFT.O PFE.N PG.N T.N UTX.N VZ.N WMT.N XOM.N. 28 of these companies were traded primarily at NYSE; INTC.O and MSFT.O are the only two traded primarily at NASDAQ. The full meaning of the symbols is available from {\tt www.reuters.com}. The choice of one year of data is a trade-off between the necessity of having enough data for significant statistical analyses and the goal of minimizing the effect of strong macro-economic fluctuations. The consistency of the discussed results during periods of extreme conditions is, however, beyond the purpose of the present paper and is left for future studies. Precisely, for each day the period we consider starts at 10.00 am and ends at 3.45 pm, leading to $4140$ increments when considering fixed time steps of $5\,s$, which will be our basic time scale unit.
The choice of restricting the considered period to the central part of the trading day, discarding the opening and closing periods, is justified by the anomalies the data often exhibit during those parts of the day: errors tend to occur more often during the first and last parts of the continuous trading session, and it often happens that some shares open to trading several minutes after the others, due to issues during the opening auction. Moreover, as is well known, the activity near the opening and closing periods is much higher than during the central part of the trading day, adding a non-trivial problem to our study, which is strictly based on physical time steps. One could try to work in event time, following a multidimensional approach to trading time that has recently been proposed and successfully used to establish some joint distributional properties of baskets of stocks \cite{HuthAbergel2010}, but we thought that the added complexity of this change of time would conceal rather than emphasize the idea we are putting forward in this work. Having fixed a time window, we consider the mid-prices at the times of the last registered trades. Fig.~\ref{fig:midprice} gives a comprehensive explanation of the procedure. We would like to draw the reader's attention to the fact that we are not working with log-returns, but only with increments. \section{Autocorrelation}\label{sec:autocorrelation} Several definitions of the sample autocorrelation coefficients have been proposed in the literature.
We consider the most standard one (see Ref.~\cite{Anderson1941} for a classical discussion and Ref.~\cite{Kan2010} for a modern one): given $n$ observations of a discrete time series $y_1,\ldots,y_n$, the sample autocorrelation coefficient at lag $k$ is given by \begin{equation} \hat{\rho}(k) = \frac{\sum_{i=1}^{n-k} (y_i - \bar{y})(y_{i+k} - \bar{y})}{\sum_{i=1}^n (y_i-\bar{y})^2}\,, \label{eq:autocorrest} \end{equation} where $\bar{y} = (\sum_{i=1}^n y_i)/n$ is the sample mean, and $1\leq k \leq n-1$. Under the hypothesis that $y_1,\ldots,y_n$ are IID $N(0,\sigma)$, the lag-one ($k=1$) autocorrelation can be rewritten for large $n$ as \begin{equation} \hat{\rho}(1) = \frac{\sum_{i=1}^{n-1} y_i y_{i+1}}{\sum_{i=1}^n y_i^2}\,. \end{equation} The numerator has a null expectation value and, since only the terms with coinciding indices survive in the expectation, a variance \begin{eqnarray} \mathrm{Var}\left[\sum_{i=1}^{n-1} y_i y_{i+1}\right] &=& \mathop{\mathbf E}\left[\left(\sum_{i=1}^{n-1} y_i y_{i+1}\right)^2\right] \nonumber\\ &=& \mathop{\mathbf E}\left[\sum_{i=1}^{n-1} \sum_{h=1}^{n-1} y_i y_{i+1} y_h y_{h+1}\right] \nonumber\\ &=& \mathop{\mathbf E}\left[\sum_{i=1}^{n-1} y_i^2 y_{i+1}^2\right] = (n-1){\sigma}^4\,. \end{eqnarray} For large $n$, the classical Central Limit Theorem shows that $\sum_{i=1}^{n-1} y_i y_{i+1}$ is asymptotically normal, distributed as $N(0,(n-1)\sigma^4)$. The denominator, when divided by $n$, is an estimator of the variance $\sigma^2$. Therefore \begin{equation} \hat{\rho}(1) = \frac{\sum_{i=1}^{n-1} y_i y_{i+1}}{\sum_{i=1}^n y_i^2} \sim N\left(0,\frac{1}{n}\right)\,, \end{equation} for $n \gg 1$. For this reason the $95\%$ confidence interval of the autocorrelation coefficient can be approximated by $\pm 2 \sqrt{1/n}$. The null hypothesis of non-significance of a value is then based on the normality and stationarity of the increments. The first hypothesis can easily be weakened by simply requiring a finite second moment, but the second is much harder to do without, and our data are affected by non-stationarity.
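The estimator of Eq.~(\ref{eq:autocorrest}) and the $\pm 2\sqrt{1/n}$ band are straightforward to implement; the following Python sketch (our own illustration, with function names of our choosing) applies them to IID Gaussian data of the same length as one trading day at the $5\,s$ scale:

```python
import numpy as np

def sample_autocorr(y, k):
    """Sample autocorrelation at lag k, as in Eq. (3)."""
    y = np.asarray(y, dtype=float)
    yc = y - y.mean()
    return np.dot(yc[:-k], yc[k:]) / np.dot(yc, yc)

rng = np.random.default_rng(1)
y = rng.normal(size=4140)        # one day of 5 s increments, IID null case
rho1 = sample_autocorr(y, 1)
band = 2.0 / np.sqrt(len(y))     # approximate 95% confidence band
print(rho1, band)                # rho1 falls inside +/- band ~95% of the time
```

For a four-point series $1,2,3,4$ the lag-one coefficient is exactly $0.25$, which is a convenient hand check of the formula.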
We will however rely on this confidence level to provide a better picture of the obtained results, though the reader should regard those confidence intervals with some caution. \subsection{Hurst exponent} The Hurst exponent is defined in the framework of fractional Brownian motion, a well-known stochastic process where the second-order moments of the increments scale as \begin{equation} \mathbf{E}\left[(x(t_2)-x(t_1))^2\right] \propto \left| t_2 - t_1\right|^{2H}\,, \end{equation} with $H \in [0, 1]$. Brownian motion is the particular case $H = 1/2$. If $H < 1/2$ the behaviour of the process is anti-persistent, that is, deviations of one sign are generally followed by deviations of the opposite sign, an effect that in finance is usually called mean reversion. The limiting case $H = 0$ corresponds to $1/f$-noise (pink noise), whose power spectral density is inversely proportional to the frequency. If $H > 1/2$, the behaviour of the process is persistent, i.e. consecutive increments tend to have the same sign and we observe a trend-following process. The limiting case $H = 1$ reflects $x(t) \propto t$ (locally), i.e. a smooth function. As pointed out in the introduction, the correspondence between the values of $H$ and the behaviours described above holds only in the particular framework of fractional Brownian motion. Otherwise, an estimation of $H$ is only an indicator of the scaling properties of the time series \cite{Embrechts2002}. As a limiting example, one can consider the case of stable L\'evy processes: a stability index $\alpha$ leads to a Hurst exponent equal to $1/\alpha$, yet there is no persistent behaviour. Therefore, we will not rely on the mere estimation of the Hurst exponent to assess whether our final time series is efficient or not, but will rather use it as an indicator of a possible special behaviour.
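The scaling definition above suggests a simple variance-scaling estimator of $H$: regress the log of the second moment of the increments on the log of the lag. The sketch below (our own illustration, distinct from the periodogram method used later in the paper) checks it on ordinary Brownian motion, for which $H=1/2$:

```python
import numpy as np

def hurst_from_scaling(x, lags=range(1, 21)):
    """Estimate H from Var[x(t+lag) - x(t)] ~ lag**(2H) via log-log regression."""
    lags = np.asarray(list(lags))
    v = np.array([np.var(x[lag:] - x[:-lag]) for lag in lags])
    slope, _ = np.polyfit(np.log(lags), np.log(v), 1)
    return slope / 2.0

rng = np.random.default_rng(2)
bm = np.cumsum(rng.normal(size=50_000))  # ordinary Brownian motion: H = 1/2
print(hurst_from_scaling(bm))            # close to 0.5
```

As the text stresses, such an estimate only characterizes scaling; it says nothing by itself about exploitable persistence outside the fBm framework.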
\section{The procedure} We now describe our method to find suitable weights $w_i$ leading to the desired ``non-efficient'' basket. Since we have decided to focus our attention on negative autocorrelation, we start by looking for the basket that minimizes this statistic over a fixed period of time. More precisely, we consider a number of consecutive days $D$ and look for the set of weights producing $B_D$, the basket with the minimum value of the autocorrelation obtainable from the recorded data. Note that this is an \emph{a posteriori} evaluation: we first observe the data, and then choose the weights. At this stage, the efficiency of the market is not under attack. The efficiency is beaten only if we are able to provide the weights of a negatively autocorrelated basket without using the information of the data we are testing. To do so, immediately after the $D$ consecutive days, which we call the \emph{minimization period}, we pick $S$ consecutive days in which we test $B_D$. If the value of the anti-autocorrelation remains significantly low during those $S$ days, we have evidence of non-efficiency. We can choose among a plethora of methods to optimize the property we are interested in \cite{Ciarlet1989,Schneider2006}. For the sake of simplicity, we will use the autocorrelation matrix. This approach presents some difficulties: for example, we are working at scales where the Epps effect is strong \cite{Epps1979,Toth2009}, and, aside from this, we have to estimate $N\times N$ parameters, each one affected by a measurement error. Aside from these purely technical aspects, we could also define measures of mean reversion other than the autocorrelation.
For example, the sign correlation, or the Hurst exponent itself (bearing in mind the interpretation problems explained in Secs.~\ref{sec:introduction} and \ref{sec:autocorrelation}), would be reasonable choices; in those cases, however, the matrix approach would not work, or at least not in its usual formulation. The classical correlation-matrix approach to collective behaviour involves equal-time correlations. Instead, the construction of a delay correlation matrix (or time-lagged correlation matrix) involves computing correlations between different assets at a fixed time delay \cite{Biely2008,Mayya2007}. Let us define a matrix $\mathbf{M}$ of order $N\times T$ where each row is filled with the records of a discretized time series; $N$ is the number of time series, each of length $T$. Suppose $i$ and $j$ are two distinct assets among the given time series. The delay correlation matrix between $i$ and $j$ at time lag $k$ is given by \begin{equation} \mathbf{\hat{C}}_{i,j} = \frac{1}{T} \sum_{t=1}^{T-k} \mathbf{M}_{i,t} \mathbf{M}_{j,t+k}\,, \end{equation} where $\mathbf{A}_{h,k}$ stands for the element in the $h^{\mathrm{th}}$ row and $k^{\mathrm{th}}$ column of the matrix $\mathbf{A}$. The matrix $\mathbf{\hat{C}}$ thus constructed is asymmetric, but the associated quadratic minimization problem $\mathbf{e}^\mathsf{T} \mathbf{\hat{C}} \mathbf{e}$ is equivalent to that of the symmetrized matrix \begin{equation} \mathbf{C}_{i,j} = \frac{\mathbf{\hat{C}}_{i,j} + \mathbf{\hat{C}}_{j,i}}{2}\,, \end{equation} as can be simply justified by observing that $(\mathbf{e}^\mathsf{T} \mathbf{\hat{C}} \mathbf{e})^\mathsf{T} = \mathbf{e}^\mathsf{T} \mathbf{\hat{C}}^\mathsf{T} \mathbf{e}$. The eigenvector associated with the minimum eigenvalue of this matrix represents the mode for which the autocorrelation at time lag $k$ is minimized.
This means that, fixing $k=1$, filling the matrix $\mathbf{M}$ with the increments of the analyzed stocks during the $D$ minimization days (increments normalized by the sample volatility over this period) and finding the eigenspace corresponding to the minimum eigenvalue, we obtain the basket with the minimal possible autocorrelation among all baskets with unit Euclidean norm; $w_i$ is set equal to the $i^{\mathrm{th}}$ component of the eigenvector. So, calling these components $e_1,\ldots,e_N$, we obtain \begin{equation} B_{Dt} = \sum_{i=1}^N e_i x_{it}\,, \end{equation} where $x_{it}$ is the element of the $i^{\mathrm{th}}$ discretized time series at discrete time $t$. Please note that the $x_{it}$ are the \emph{raw} increments; they are not normalized by the variance as in the matrix $\mathbf{M}$. The minimum eigenvalue by itself does not give us the value of the autocorrelation; we must compute it directly from $B_D$ using the estimator in Eq.~(\ref{eq:autocorrest}), and we repeat this operation in both the minimization period and the subsequent test period. So, schematically, we proceed as follows: \begin{enumerate} \item We consider $D$ consecutive days and the immediately following $S$ days. \item The time series are discretized (Sec.~\ref{sec:data}), normalized by the variance during the $D$ days, and used to build the matrix $\mathbf{C}$ with $k=1$. \item From the components of the eigenvector corresponding to the smallest eigenvalue we build the basket $B_D$. \item We evaluate the autocorrelation of $B_D$ during the whole minimization period and, independently, during the test period. \item Keeping the number of days $D$ fixed, we shift the minimization period ahead by one day and go back to step 2. \end{enumerate} The considered time scales (the discretization time steps) are $5,10,\ldots,55,60$ seconds. The number of test days $S$ is arbitrary.
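Steps 2--3 of the scheme above can be sketched as follows; this is our own minimal Python illustration on synthetic data (all names and parameter values are ours), not the original code:

```python
import numpy as np

rng = np.random.default_rng(3)
N, T = 30, 5000

# Synthetic stand-in for the DJIA increment series; rows are stocks.
M = rng.normal(size=(N, T))
M /= M.std(axis=1, keepdims=True)      # normalize by sample volatility

k = 1
C_hat = M[:, :T - k] @ M[:, k:].T / T  # delay (lag-k) correlation matrix
C = (C_hat + C_hat.T) / 2.0            # symmetrize: same quadratic form

eigvals, eigvecs = np.linalg.eigh(C)   # eigh returns eigenvalues in ascending order
w = eigvecs[:, 0]                      # weights: eigenvector of the minimum eigenvalue

B = w @ M                              # basket increment series
print(eigvals[0], np.linalg.norm(w))   # minimal lag-1 mode, unit-norm weights
```

The realized autocorrelation of $B$ must still be computed with the estimator of Eq.~(\ref{eq:autocorrest}); the minimum eigenvalue alone does not supply it.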
Since $B_D$ is fixed outside these data, we only need to ensure sufficient statistics; yet we cannot extend the test period much, for fear of structural changes in the interdependences between the stocks that could cancel the desired effect. We therefore test $B_D$ for $S=1$ and $S=5$. The choice of $D$ is more delicate. By choosing a small number of days, we reduce the size of the data set and are more likely to catch unstable interdependences, where by unstable we mean dependences holding only for a short period of time. On the other hand, considering a large number of days $D$, we take the risk of averaging out the time changes, ending up with an unsatisfactory value of the autocorrelation. As a trade-off we fix $D=10$. Tab.~\ref{tab:popolazione} contains the lengths of the time series when these numbers are applied. In addition, we compute the Hurst exponent of $B_D$ using the periodogram method. As stated above, the Hurst exponent alone does not give any information regarding the persistence of the autocorrelation and, moreover, its estimators are well known to be inaccurate (biased) in most situations. For these reasons we do not use more complex estimators, nor do we wish to speculate on the values we find; we present them only as an indication of ``possible'' persistence effects. The estimation of the Hurst exponent is always carried out at a time scale of $5\,s$. This is because the Hurst exponent measures a global property of the process; stretching the time scale would only reduce the statistics and cut off the effects at short time scales.
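The periodogram method admits many variants; the following is our own simplified sketch (not the paper's implementation), regressing the log-periodogram of the increment series on log-frequency at low frequencies, using the fractional-Gaussian-noise scaling $S(f)\sim f^{1-2H}$. White noise (increments of Brownian motion, $H=1/2$) serves as a sanity check:

```python
import numpy as np

def hurst_periodogram(increments, frac=0.1):
    """Periodogram estimate of H for an increment series:
    S(f) ~ f**(1 - 2H) at low frequencies, so H = (1 - slope) / 2."""
    y = np.asarray(increments, dtype=float)
    y = y - y.mean()
    n = len(y)
    freqs = np.fft.rfftfreq(n)[1:]             # drop the zero frequency
    power = np.abs(np.fft.rfft(y))[1:] ** 2 / n
    m = max(10, int(frac * len(freqs)))        # low-frequency part only
    slope, _ = np.polyfit(np.log(freqs[:m]), np.log(power[:m]), 1)
    return (1.0 - slope) / 2.0

rng = np.random.default_rng(4)
white = rng.normal(size=20_000)  # flat spectrum: the estimate is near 0.5
print(hurst_periodogram(white))
```

The choice of the low-frequency cutoff (`frac` here) is one of the details in which published periodogram estimators differ.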
\begin{figure} \centering \includegraphics[scale=0.5]{white} \caption{Empirical distribution of the optimal anti-autocorrelation for a basket built from 30 synthetic stock time series with IID normally distributed increments.} \label{fig:white} \end{figure}
\begin{table}
\begin{center}
\begin{tabular}{cccc}
 & Minimization $D=10$ & Test $S=1$ & Test $S=5$\\
\hline
5 s & 41400 & 4140 & 20700 \\
10 s & 20700 & 2070 & 10350 \\
15 s & 13800 & 1380 & 6900 \\
20 s & 10350 & 1035 & 5175 \\
25 s & 8280 & 828 & 4140 \\
30 s & 6900 & 690 & 3450 \\
35 s & 5910* & 591 & 2955 \\
40 s & 5170* & 517 & 2585 \\
45 s & 4600 & 460 & 2300 \\
50 s & 4140 & 414 & 2070 \\
55 s & 3760* & 376 & 1880 \\
60 s & 3450 & 345 & 1725 \\
\end{tabular}
\end{center}
\caption{Length of the time series as a function of the considered time-window size. For the starred values the considered period could not be extended precisely until 3.45 pm: in these cases the period cannot be evenly divided into the desired time steps, so the few seconds preceding 3.45 pm have been discarded.} \label{tab:popolazione}
\end{table}
The autocorrelation values for the out-of-sample periods can be tested using the confidence interval discussed in Sec.~\ref{sec:autocorrelation}. On the other hand, for the values obtained in the minimization periods we cannot follow this simple rule. They are the smallest values obtainable from the given data, so the null hypothesis cannot be the absence of autocorrelation; it must rather concern the likelihood of obtaining such values by minimizing independent time series. A heuristic approach is to run the minimization procedure described above in this section on a set of $N=30$ synthetic stocks with IID, normally distributed increments and zero correlations. Doing so, we are able to plot the distribution of the minimal values obtainable by simulation.
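This Monte Carlo null can be sketched as follows; it is a scaled-down illustration with parameter choices of our own ($200$ draws, $T=2000$), not the original $20000$-basket simulation:

```python
import numpy as np

def minimal_autocorr(rng, N=30, T=2000):
    """One draw of the null: the optimal lag-1 anti-autocorrelation
    reachable from N independent synthetic stocks of length T."""
    M = rng.normal(size=(N, T))
    C_hat = M[:, :-1] @ M[:, 1:].T / T
    C = (C_hat + C_hat.T) / 2.0
    w = np.linalg.eigh(C)[1][:, 0]     # min-eigenvalue weights
    B = w @ M
    Bc = B - B.mean()
    return np.dot(Bc[:-1], Bc[1:]) / np.dot(Bc, Bc)

rng = np.random.default_rng(5)
draws = np.array([minimal_autocorr(rng) for _ in range(200)])
print(draws.mean(), draws.min())       # all draws are modestly negative
```

Even for independent series the optimization always finds some spurious negative value; the point of the figure is that its magnitude shrinks with $T$ and stays far from the values observed on real data.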
In Fig.~\ref{fig:white} this distribution is reported for $T=5000,10000,20000$ and for a population of $20000$ synthetic baskets. Even for the shortest sample (small $T$), where spurious effects are more likely to occur, the value is hardly larger than $0.12$, and the typical values drop noticeably for the larger samples. \begin{figure*} \centering \includegraphics[height=5.5cm, width=.49\columnwidth]{5sec10days_png} \includegraphics[height=5.5cm, width=.49\columnwidth]{10sec10days_png}\\ \includegraphics[height=5.5cm, width=.49\columnwidth]{15sec10days_png} \includegraphics[height=5.5cm, width=.49\columnwidth]{20sec10days_png}\\ \includegraphics[height=5.5cm, width=.49\columnwidth]{25sec10days_png} \includegraphics[height=5.5cm, width=.49\columnwidth]{30sec10days_png}\\ \caption{Histograms of the resulting autocorrelation values. In red we show the distributions of the minimal values obtained during $D=10$ consecutive days; in blue and black we show the distributions for the tests performed considering, respectively, $S=1$ and $S=5$ subsequent days. The dashed lines are the critical values for the significance test at the 5\% confidence level.} \label{fig:res1} \end{figure*} \begin{figure*} \centering \includegraphics[height=5.5cm, width=.49\columnwidth]{35sec10days_png} \includegraphics[height=5.5cm, width=.49\columnwidth]{40sec10days_png}\\ \includegraphics[height=5.5cm, width=.49\columnwidth]{45sec10days_png} \includegraphics[height=5.5cm, width=.49\columnwidth]{50sec10days_png}\\ \includegraphics[height=5.5cm, width=.49\columnwidth]{55sec10days_png} \includegraphics[height=5.5cm, width=.49\columnwidth]{60sec10days_png}\\ \caption{Histograms of the resulting autocorrelation values. In red we show the distributions of the minimal values obtained during $D=10$ consecutive days; in blue and black we show, respectively, the distributions for the tests performed considering $S=1$ or $S=5$ subsequent days.
The dashed lines are the critical values for the significance test at the 5\% confidence level.} \label{fig:res2} \end{figure*} \begin{figure} \centering \includegraphics[height=6.5cm, width=.95\columnwidth]{meanvalues} \caption{Mean values of the samples as a function of the considered time step. The red line stands for the minimized value in the initial sample, the blue line for the out-of-sample test using one day of data and, finally, the black line for the out-of-sample test using 5 days of data. At the top, the thin dashed lines (blue and black) report the $5\%$ confidence-level values for the null hypothesis of non-significant anticorrelation.} \label{fig:meanvalues} \end{figure} \begin{figure} \centering \includegraphics[height=6.5cm, width=.95\columnwidth]{significant} \caption{Fraction of the out-of-sample test values rejected by the significance test at the $5\%$ confidence level. Blue stands for the one-day test and black for the five-day test.} \label{eq:significative} \end{figure} \begin{figure} \centering \includegraphics[height=6.5cm, width=.95\columnwidth]{hurst} \caption{Mean value of the Hurst exponent on the three samples. The minimized portfolio (red) presents the strongest mean Hurst exponent in the range 20--35\,s. The out-of-sample test using five days (black) still retains a significant value and presents the strongest effect around the 20--30\,s scale.} \label{fig:hurst_results} \end{figure} \begin{figure} \centering \includegraphics[scale=0.5]{hurst_random} \caption{Distribution of the Hurst exponent estimated with the periodogram method by randomly extracting ten consecutive days of the DJIA data and building a basket. The population is made of $20000$ extractions and the mean value is $0.494$.} \label{fig:hurst_rand} \end{figure} \begin{figure} \centering \includegraphics[scale=0.5]{./weights} \caption{Distribution of weights considering the unit Euclidean norm.
The mass around zero is substantial, yet the distribution suggests that a large portion of the considered stocks actively participates in the dynamics. The depicted case corresponds to a time scale of $15\,s$; the remaining ones look qualitatively the same.} \label{fig:weights} \end{figure} \begin{figure} \centering \includegraphics[scale=0.5]{./crossauto} \caption{Empirical distribution of the minimal values obtained during $D=10$ consecutive days (red) and distribution of the single entries of the corresponding matrix $\mathbf{\hat{C}}$ (black). The depicted case corresponds to a time scale of $15\,s$; the remaining ones look qualitatively the same.} \label{fig:crossauto} \end{figure} \section{Results} The main results are reported in Figs.~\ref{fig:res1} and \ref{fig:res2}. Each picture contains the data for a distinct time scale. The obtained autocorrelation values are reported as histograms: red stands for the values during the minimization periods, blue and black for the tests with, respectively, $S=1$ and $S=5$. The vertical lines indicate the corresponding null hypothesis. Fig.~\ref{fig:meanvalues} plots the mean values of these distributions as functions of the time scale. The minimal values are far below those obtained from the synthetic independent time series (see Fig.~\ref{fig:white}). There is not much deviation among the different time scales, except in the variance of the distributions; the mean value does not experience large deviations. The only time scale where we observe a smaller effect is $5\,s$, while the desired property appears most strikingly at time scales of 10 and $15\,s$. The autocorrelations computed during the test periods deviate from the minimized values, with differences that become stronger at larger time scales. In this case too, the time scales of 10 and $15\,s$ exhibit the strongest signal. Fig.~\ref{eq:significative} shows the fraction of values that violate the null hypothesis, i.e.
the values that are not compatible with independence as defined in Sec.~\ref{sec:autocorrelation}. For the longer test and for the time scale equal to $15s$, this fraction is about 0.9. In the same fashion as Fig.~\ref{fig:meanvalues}, Fig.~\ref{fig:hurst_results} reports the mean values of the estimated Hurst exponent. To provide a comparison, Fig.~\ref{fig:hurst_rand} presents the distribution of the Hurst exponents estimated by randomly extracting 10 consecutive days of DJIA data. The mean value of this distribution, $0.494$, is slightly smaller than 0.5. In general, the values of $H$ we obtain in Fig.~\ref{fig:hurst_results} are significantly different from 0.5, with the strongest effects at time scales of about 20--35\,s. A study of the weight dynamics is beyond the purpose of this paper. However, Fig.~\ref{fig:weights} reports the empirical weight distribution with a Euclidean normalization. The peak around zero shows that the optimal basket is often not formed by all the considered stocks, but the remaining part suggests we are effectively observing collective dynamics rather than just handling a small subset of the assets. Finally, Fig.~\ref{fig:crossauto} reports the distribution of the entries of $\mathbf{\hat{C}}$, i.e. the autocorrelations and the lead-lag cross-correlations, together with the found minima; the ``distance'' between those two samples is large, suggesting once more the collective nature of the measured effect. In both Fig.~\ref{fig:weights} and Fig.~\ref{fig:crossauto} the time scale is $15s$, and the qualitative result does not change in the other cases.

\section{Conclusion}

A clear answer to the main research question of this work has been given: at short time scales it is possible to build a basket that does not fit into the efficient market paradigm.
Since we have analyzed mid-price time series, it is not possible, at this stage, to ascertain whether this effect is strong enough to be exploited for trading purposes. Preliminary results in this direction suggest that this practice is not trivial yet possible: the fluctuations we observe often fall within the bid-ask spread, but not always. Another approach, perhaps using non-parametric optimization methods such as a stochastic optimization algorithm, is likely to provide stronger results. Moreover, we have only shown the values for the optimized baskets, but penalizing the optimal values with some other suitable criterion would likely lead to a more favorable situation for trading applications; for example, one could reproduce this study considering only baskets that minimize the autocorrelation among those with a given minimal volatility. Furthermore, the value of $D$ for which we have shown the results can be tuned, and so can the time scales. Approaching the problem without imposing a time step would probably lead to a much more complete view of the effect. Finally, we must point out that we considered all 30 DJIA stocks, without developing any smart way of picking them. Since we already obtain clear results using the full index, we are inclined to think that a smarter choice of the companies would lead to stronger effects.

\section*{Acknowledgement}

F.A. would like to acknowledge that the idea of optimizing a basket of stocks with respect to statistical properties arose from a collaboration with Laurent Jaillet while they were colleagues at CAI Cheuvreux.

\bibliographystyle{plain}
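As a companion to the Hurst-exponent analysis of Figs.~\ref{fig:hurst_results} and~\ref{fig:hurst_rand}, the periodogram estimator can be sketched as follows. This is a minimal illustration on synthetic white noise (for which $H\approx 0.5$, as in the benchmark of Fig.~\ref{fig:hurst_rand}); the function name and the choice of low-frequency fraction are ours, not the paper's implementation.

```python
import numpy as np

def hurst_periodogram(x, frac=0.5):
    """Estimate the Hurst exponent of an increment series x.

    For fractional Gaussian noise the spectral density behaves like
    f^(1 - 2H) near zero, so a log-log regression of the periodogram on
    the lowest frequencies has slope 1 - 2H, i.e. H = (1 - slope) / 2.
    """
    n = len(x)
    freq = np.fft.rfftfreq(n)[1:]                      # drop the zero frequency
    power = np.abs(np.fft.rfft(x - x.mean())[1:]) ** 2 / n
    m = max(int(frac * len(freq)), 10)                 # low-frequency band only
    slope, _ = np.polyfit(np.log(freq[:m]), np.log(power[:m]), 1)
    return (1.0 - slope) / 2.0

# White noise has H = 0.5, matching the mean of the benchmark distribution.
rng = np.random.default_rng(0)
print(round(hurst_periodogram(rng.standard_normal(20_000)), 2))
```

The estimate fluctuates around $0.5$ on white noise; applying the same function to the optimized-basket returns would reproduce the comparison underlying the figures.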
\section{Introduction And Preliminaries}

Isotropic Brownian flows (IBFs) are a fairly natural class of stochastic flows and have been studied by various authors in different directions, e.g.~\cite{bh}, \cite{lj}, \cite{gd} and~\cite{vB}, to name just a few references. \cite{css} and \cite{ls} study the evolution of the diameter of a bounded and non-trivial set under such a flow, giving upper and lower bounds for the linear growth rate. Nevertheless, these bounds turn out to be far apart in some examples, and there is little hope of matching them with the methods of~\cite{css} or~\cite{ls}. We will follow a different approach, which first appeared in~\cite{dkk}, where a class of periodic stochastic flows on $\ensuremath{\mathbb{R}}^2$ (or stochastic flows on the torus) is considered. \cite{dkk} develops a similar limit theorem (even with a stronger assertion) using the fact that their model essentially lives on a compact manifold. Although we will sometimes follow the lines of thought of~\cite{dkk} in the first part, we will see that, to obtain the assertion on $\ensuremath{\mathbb{R}}^2$, we have to replace the methods relying on the assumption of periodicity (which means perfect dependence of particles far from each other) by different ones. This is done using the invariance properties of IBFs with respect to time reversal. These properties are not shared by the model of~\cite{dkk} and hence are a novelty in the present subject. The paper is divided into several sections. First we briefly review the important definitions and cite some prerequisites from the literature, including a subsection on the smoothness of the density of the two-point motion. Afterwards we give the proper definition of the asymptotic linear expansion speed and state the main result, from which the fact that the asymptotic expansion speed is constant follows as a corollary. We give the proofs of the main results in the last two sections.
The first of these is dedicated to the proof of the lower bound, i.e.\ that the expansion is sufficiently fast. Here we also identify the set $\mathcal{B}$ in terms of a stable norm (a concept from~\cite{dkk}). We finally finish the proof in the last section by showing that the expansion is sufficiently slow, for which it will turn out to be sufficient to show that the expansion speed is independent of the initial set. We will work in general dimension $d$ where possible. But since several important features of the proof obviously fail in higher dimensions, the reader might assume that $d$ is always equal to two.

\subsection{Stochastic Flows And Stochastic Differential Equations}

Let us first state the definition of a Brownian flow. \begin{definition} Let $\left(\Phi_{s,t}(x,\omega): s,t\in[0,\infty),x\in\mathbb{R}^{d},\omega\in\Omega\right)$ be a continuous $\mathbb{R}^{d}$-valued random vector field defined on a probability space $(\Omega,\mathcal{F},\mathbb{P})$. $\left(\Phi_{s,t}(x,\omega)\right)$ is called a Brownian flow of diffeomorphisms if there is a $\mathbb{P}$-null set $N \subset\Omega$ such that for $\omega\in N^{C}$: \begin{enumerate} \item $\Phi_{s,u}(\omega)=\Phi_{t,u}(\omega)\circ\Phi_{s,t}(\omega)$ and $\Phi_{s,s}(\omega)=\left.\textnormal{id}\right|_{\mathbb{R}^{d}}$ for any $0\leq s,u,t<\infty$, \item $\Phi_{s,t}(\omega): \mathbb{R}^{d}\rightarrow\mathbb{R}^{d}$ is an onto map for arbitrary $0\leq s,t<\infty$, \item $\Phi_{s,t}(x,\omega)$ is $k$ times continuously differentiable w.r.t.\ $x$ for any $k$, \item $\Phi$ is Brownian, i.e.\ for $n\in\mathbb{N}$ and $0\leq s_{1}<s_{2}<\ldots<s_{n}<\infty$ the family of random variables $\left(\Phi_{s_{j-1},s_{j}}:j\in\{1,\ldots,n\}\right)$ is independent. \end{enumerate} \end{definition} We will write $\Phi_{t}(x,\omega),\ldots$ for $\Phi_{0,t}(x,\omega),\ldots$ and for $x,y,z,\ldots\in\mathbb{R}^{d}$ we abbreviate $x_{t}:=\Phi_{t}(x),\ldots$.
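The flow (cocycle) property in item 1 of the definition can be made concrete on the simplest explicitly solvable example. The sketch below (our own illustration, not part of the paper) uses the one-dimensional linear SDE $dX = X\,dW$, whose solution flow is $\Phi_{s,t}(x)=x\exp(W_t-W_s-(t-s)/2)$, and checks $\Phi_{s,u}=\Phi_{t,u}\circ\Phi_{s,t}$ along one simulated Brownian path.

```python
import numpy as np

# Simulate one Brownian path on a grid of step dt.
rng = np.random.default_rng(0)
n, dt = 1000, 1e-3
W = np.concatenate([[0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(n))])

def phi(i, j, x):
    """Phi_{s,t}(x) for the flow of dX = X dW, with s = i*dt and t = j*dt."""
    return x * np.exp(W[j] - W[i] - (j - i) * dt / 2.0)

x = 1.7
lhs = phi(0, 1000, x)                   # Phi_{s,u}(x)
rhs = phi(500, 1000, phi(0, 500, x))    # Phi_{t,u}(Phi_{s,t}(x))
assert np.isclose(lhs, rhs)             # cocycle property, exact up to rounding
print("flow property holds on this path")
```

For this flow the identity is exact pathwise, since the exponents add telescopically; for a general Brownian flow it holds outside the null set $N$ of the definition.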
Due to \cite[Theorem 4.4.1]{k}, stochastic flows are generated by Kunita-type stochastic differential equations of the form \begin{equation}dX(t)=M(dt,X(t))\label{eq:flussgleichung}\end{equation} wherein $M$ is a suitable semimartingale field. We will briefly describe the construction of the fields leading to isotropic Brownian flows in the sequel; see \cite{ls} or \cite{bh} for further details. We choose a modification of $\Phi$ that satisfies the above with $N=\emptyset$.

\subsection{Covariance Tensors And Correlation Functions}

\begin{definition} A function $b:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d\times d}$ is an isotropic covariance tensor if $x\mapsto b(x)$ is $C^\infty$ and all derivatives of any order are bounded, $b(0)=E_{d}$ (the $d$-dimensional unit matrix), $x\mapsto b(x)$ is not constant and $b(x)=O^{*}b(Ox)O$ holds for any $x\in \mathbb{R}^{d}$ and any orthogonal matrix $O$. \end{definition} \textbf{Remark}: The assumptions on the differentiability of the flow as well as of the generating tensor are a bit restrictive, but we do not want to mess with smoothness problems coming especially from Malliavin calculus (we strongly conjecture that for the $2$-dimensional case a $C^6_b$-assumption should be sufficient; see~\cite{vBdiss}). \begin{lemma}\label{le:isoko}\label{le:korrfunk} Let $b$ be an isotropic covariance tensor. The functions $B_L$ and $B_N$ defined by $B_{L}(r):=b^{ii}(r e_{i}), r\geq0$ and $B_{N}(r):=b^{ii}(r e_{j}), r\geq0, i\neq j$ are the longitudinal and normal correlation functions of $b$, respectively. Their definition does not depend on the specific choice of $1\leq i,j\leq d$ and we have for arbitrary $i,j\in\{1,\ldots,d\}$ and $x\in\mathbb{R}^{d}$: \[ b^{ij}(x)=\left\{\begin{array}{ccc} (B_{L}(|x|)-B_{N}(|x|))x^{i}x^{j}/|x|^{2}+B_{N}(|x|)\delta^{ij}&:&x\neq0\\ \delta^{ij}=\delta^{ij}B_{L}(0)=\delta^{ij}B_{N}(0)&:&x=0 \end{array} \right.
\] The right-hand second derivatives at zero satisfy $\beta_{L/N}:=-B_{L/N}''(0)>0$ and we have the Taylor expansions $B_{L/N}(r)=1-\frac{1}{2}\beta_{L/N}r^{2}+O(r^{4})$ as $r\rightarrow0$, as well as the global estimate $\|B_{L/N}\|_{\infty}= 1$. We will use the above Taylor expansion in the following weaker form: for any $\ensuremath{\epsilon}>0$ there is $r^{(\ensuremath{\epsilon})}>0$ such that for $0<r<r^{(\ensuremath{\epsilon})}$ we have \begin{equation}\label{eq:B-approximationsgleichung} \frac{\left|1-B_{L}(r)-\frac{1}{2}\beta_{L}r^{2}\right|\vee \left|1-B_{N}(r)-\frac{1}{2}\beta_{N}r^{2}\right|}{r^{3}}<\epsilon. \end{equation} \end{lemma} Proof: \cite[(2.5), (2.6), (2.13), (2.8), (2.9) and (2.18)]{bh}.\hfill$\Box$\\

\subsection{Brownian Fields And Generated Flows}

Now we can define the semimartingale field $M$, which in fact is a martingale field. \begin{definition}\label{def:isoko} Let $b$ be an isotropic covariance tensor. An $\mathbb{R}^{d}$-valued random vector field $\left(M(t,x): t\geq0, x\in\mathbb{R}^{d}\right)$, defined on a probability space $(\Omega,\mathcal{F},\mathbb{P})$, is an isotropic Brownian field if $(t,x)\mapsto M(t,x)$ is a centered Gaussian process with $\mathit{cov}(M(s,x),M(t,y))=(s\wedge t)b(x-y)$ and $(t,x)\mapsto M(t,x)$ is continuous for almost all $\omega$. A stochastic flow generated via (\ref{eq:flussgleichung}) by an isotropic Brownian field is called an isotropic Brownian flow (IBF). \end{definition} \begin{theorem} The $n$-point motion $(x^{(1)}_{t},\ldots,x^{(n)}_{t}):=(\Phi_{t}\left(x^{(1)}\right),\ldots,\Phi_{t}\left(x^{(n)}\right))$ is an $\mathbb{R}^{nd}$-valued diffusion with the following properties: \begin{enumerate} \item For $g\in C^{2}_{b}$ its generator $L^{(n)}g(x^{(1)},\ldots,x^{(n)})$ is given by \[ \frac{1}{2}\sum_{l,m=1}^{n}\sum_{i,j=1}^{d}b^{ij}\left(x^{(l)}-x^{(m)}\right)\frac{\partial^2g}{\partial x^{(l)}_i\partial x^{(m)}_j}\left(x^{(1)},\ldots,x^{(n)}\right).
\] \item There is a standard Brownian motion $W$ such that the two-point distance process $\rho^{xy}_{t}:=\left|x_{t}-y_{t}\right|$ solves the SDE \begin{equation}\label{eq:zweipunktgleichung} d\rho^{xy}_{t}=(d-1)\left(\frac{1-B_{N}(\rho^{xy}_{t})}{\rho^{xy}_{t}}\right)dt+\sqrt{2(1-B_{L}(\rho^{xy}_{t}))}dW_{t}. \end{equation} \end{enumerate} \end{theorem} Proof: \cite[p. 617]{lj}, \cite[p. 124]{k}, \cite[p. 4]{ls} and \cite[(3.11)]{bh}.\hfill$\Box$\newline \textbf{Remark:} \cite{lj} uses a slightly different definition. Assume $\alpha=1$ there to get things into line with the definitions above. The previous theorem shows that $(x_{t},y_{t})$ coincides in law with the solution of the following SDE: \begin{equation}\label{eq:zweipunktSDE} \left(\begin{array}{c}x_{t}-x\\y_{t}-y\end{array}\right)= \int_{0}^{t}\left(\begin{array}{cc}E_{d}&b\left(x_{s}-y_{s}\right)\\b\left(x_{s}-y_{s}\right)&E_{d}\end{array}\right)^{1/2}d{W}_{s}=:\int_{0}^{t}\bar{b}\left(x_{s}-y_{s}\right)dW_{s} \end{equation} Therein $W$ is a $2d$-dimensional standard Brownian motion. The following lemma states some information about the eigenvalues of $b$ and $\bar b$, respectively. \begin{lemma}\label{le:eigenwerte} For $z\in\mathbb{R}^{d}$ we have: \begin{enumerate} \item $z$ is an eigenvector of $b(z)$ to the eigenvalue $B_{L}\left(\left|z\right|\right)$. \item Any vector $0\neq z^{\bot}$ perpendicular to $z$ is an eigenvector of $b(z)$ to the eigenvalue $B_{N}\left(\left|z\right|\right)$. \item $\bar{b}(z)$ has the eigenvalues $\left\{1\pm B_{L}(|z|),1\pm B_{N}(|z|) \right\}$ with multiplicities $1$ and $d-1$, respectively. \end{enumerate} \end{lemma} Proof: straightforward computations using Lemma~\ref{le:isoko}.\hfill$\Box$\\ Observe that the previous lemma ensures that $\bar{b}$ is elliptic apart from the diagonal $\{x=y\}$.

\subsection{Density Of The $n$-Point Motion}

As we have already seen, the one-point motion of an IBF is a standard Brownian motion and thus possesses a $C^\infty$-density.
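The eigenvalue statements of Lemma~\ref{le:eigenwerte} and the representation of $b$ through $B_L$ and $B_N$ are easy to verify numerically. The sketch below uses hypothetical correlation functions of our own choosing; the checks are purely algebraic and hold for any $B_L$, $B_N$ with $B_L(0)=B_N(0)=1$.

```python
import numpy as np

# Hypothetical correlation functions, chosen only for illustration.
B_L = lambda r: np.exp(-r**2)
B_N = lambda r: np.exp(-r**2) * (1.0 - r**2 / 4.0)

def b(x):
    """The tensor b^{ij}(x) assembled from B_L and B_N as in the lemma."""
    x = np.asarray(x, dtype=float)
    r = np.linalg.norm(x)
    d = len(x)
    if r == 0.0:
        return np.eye(d)
    return (B_L(r) - B_N(r)) * np.outer(x, x) / r**2 + B_N(r) * np.eye(d)

z = np.array([0.6, -0.8])                       # |z| = 1
# z is an eigenvector of b(z) with eigenvalue B_L(|z|) ...
assert np.allclose(b(z) @ z, B_L(1.0) * z)
# ... and any perpendicular vector is one with eigenvalue B_N(|z|).
z_perp = np.array([0.8, 0.6])
assert np.allclose(b(z) @ z_perp, B_N(1.0) * z_perp)
# Isotropy: b(x) = O^T b(Ox) O for an arbitrary orthogonal matrix O.
O, _ = np.linalg.qr(np.random.default_rng(1).standard_normal((2, 2)))
x = np.array([0.3, -1.2])
assert np.allclose(b(x), O.T @ b(O @ x) @ O)
print("eigenvalue and isotropy checks passed")
```

The isotropy identity holds because $b(Ox)=(B_L-B_N)\,Oxx^{*}O^{*}/|x|^2+B_N E_d$, so conjugating by $O$ recovers $b(x)$.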
This section is devoted to the question of whether this is true for the $2$-point motion $(x_t,y_t)$ of a $d$-dimensional IBF ($x\neq y$). The homeomorphic properties of the flow do not allow $x_t=y_t$ to hold at any time except on a null set (remember that we decided to modify the flow in a way such that $x_t=y_t$ is impossible). One might expect the process $(x_t,y_t)$ to possess a density on $\ensuremath{\mathbb{R}}^{2d}_\times:=\ensuremath{\mathbb{R}}^{2d}\setminus\{ z\in\ensuremath{\mathbb{R}}^{2d}:z_i=z_{d+i}\,\forall i=1,\ldots,d\}$. This is in fact true, as we shall see in the following. \begin{theorem}\label{th:posdens} The two-point motion $(x_t,y_t)$, interpreted as a diffusion on $\ensuremath{\mathbb{R}}^{2d}_\times$, possesses a strictly positive $C^\infty$-density on $\ensuremath{\mathbb{R}}^{2d}_\times$. \end{theorem} Proof: We restrict ourselves to $t=1$ by scaling. First observe that our smoothness assumptions on $b$ allow for the use of H\"ormander's Theorem \cite{ho}; see~\cite{nu} for details and stochastic interpretations. Since we already observed that the process satisfies the SDE~\eqref{eq:zweipunktSDE} and since Lemma~\ref{le:eigenwerte} ensures that H\"ormander's condition is satisfied, we can conclude that a $C^\infty$-density exists on $\ensuremath{\mathbb{R}}^{2d}_\times$. We now have to show that it is strictly positive there. We want to apply the results of~\cite{l1}, so we have to consider the following control problem: \begin{equation} dz_t(h)=\bar{b}(z_t(h))h_t dt \end{equation} Therein $h$ is a square-integrable, $\ensuremath{\mathbb{R}}^{2d}$-valued control function (in fact chosen to be continuously differentiable) and $z_t$ is a $2d$-dimensional process to be thought of as a deterministic version of the two-point motion. Fix $(x,y)\in \ensuremath{\mathbb{R}}^{2d}_\times$.
In order to show that $(x_1,y_1)$ has a positive transition density for any $(x^{(1)},y^{(1)})\in \ensuremath{\mathbb{R}}^{2d}_\times$ it is enough to establish the following Bismut condition (see~\cite{bi}). \begin{condition} For any $(x,y)=z\in \ensuremath{\mathbb{R}}^{2d}_\times$ and $(x^{(1)},y^{(1)})\in \ensuremath{\mathbb{R}}^{2d}_\times$ there is an $h\in L^2$ such that \begin{equation} z_1(h)=(x^{(1)},y^{(1)}) \label{eq:ankommen}\end{equation} and such that $h\mapsto z_1(h)$ is a submersion in $h$ (we identify $\ensuremath{\mathbb{R}}^{2d}$ and $\ensuremath{\mathbb{R}}^{d}\times\ensuremath{\mathbb{R}}^d$ in the obvious way). \end{condition} Proof: Step 1: Let us first assume that $\overline{x,x^{(1)}}$ and $\overline{y,y^{(1)}}$ are disjoint and that each of them consists of at least two points ($\overline{x,y}$ denoting the convex hull of $x$ and $y$). We construct a control satisfying~\eqref{eq:ankommen} such that the stream lines of $z_t$ are exactly $\overline{x,x^{(1)}}\cup\overline{y,y^{(1)}}$. This ensures that $\bar{b}(z_t(h))$ is regular and its determinant is bounded away from zero for all $t$. The simplest way to obtain the desired streamlines is to ensure $\bar{b}(z_t(h))h_t\equiv \left(z^{(1)}-\begin{pmatrix}x\\y\end{pmatrix}\right)$. We may hope to achieve this by setting $h_0:=\bar{b}\left(\begin{pmatrix}x\\y\end{pmatrix}\right)^{-1}\left(z^{(1)}-\begin{pmatrix}x\\y\end{pmatrix}\right)$ as well as $0=\frac{d}{dt}\left[\bar{b}(z_t(h)) h_t\right]$, which is the same as \begin{equation} \label{eq:controlode} \bar{b}(z_t(h))\frac{dh_t}{dt}=-\left[\left( \ip{\frac{dz_t(h)}{dt}}{\nabla} \bar{b}\right)(z_t(h))\right]h_t. \end{equation} So we can choose $h_t$ to be the projection onto the first $2d$ coordinates of the solution to the following $4d$-dimensional initial value problem.
\begin{equation} \left\{\begin{array}{rl} \frac{d}{dt}\begin{pmatrix}h_t\\z_t(h) \end{pmatrix}=&\begin{pmatrix}-\bar{b}^{-1}(z_t(h))\left[\left(\ip{\frac{dz_t(h)}{dt}}{\nabla} \bar{b}\right)(z_t(h))\right]h_t\\ \bar{b}(z_t(h))h_t \end{pmatrix},\\ h_0=&\bar{b}\left(\begin{pmatrix}x\\y\end{pmatrix}\right)^{-1}\left(z^{(1)}-\begin{pmatrix}x\\y\end{pmatrix}\right),\hspace{1cm}z_0(h)=\begin{pmatrix}x\\y \end{pmatrix} \end{array}\right. \end{equation} Existence and uniqueness of a solution to this initial value problem can be obtained from the standard theorems, because we ensured that the determinant of $\bar{b}$ is bounded away from zero and hence that the right-hand side of \eqref{eq:controlode} is continuously differentiable. \\Step 2: For general positions of $x$, $x^{(1)}$, $y$ and $y^{(1)}$ observe that we can divide the action into two parts, i.e.\ time steps of length $0.5$, and choose the streamlines of $x$ and $y$ to be piecewise linear and disjoint.\\ Step 3: Finally we note that by Theorems 1.1 (smoothness) and 1.10 (surjectivity) of~\cite{bi} we have a submersion in $h$. \hfill$\Box$\\

\subsection{Lyapunov Exponents}

As proved in \cite[(7.2) and (7.3)]{bh}, IBFs have Lyapunov exponents given by $\mu_{i}=\frac{1}{2}\left[(d-i)\beta_{N}-i\beta_{L}\right]$. The top Lyapunov exponent $\mu_1$, or more precisely its sign, crucially affects the asymptotic behaviour of the flow, as shown in~\cite{css}. We make the standing assumption $\mu_{1}=\frac{1}{2}[(d-1)\beta_{N}-\beta_{L}]>0$. If this is not fulfilled, then~\cite{ss} shows that our main result cannot be expected to be true, because the flow contracts a closed ball of positive diameter to a point with positive probability.

\subsection{Support Theorem For Isotropic Brownian Flows}

As with every Gaussian measure, one can associate a Hilbert space with an isotropic Brownian field, the so-called reproducing kernel Hilbert space $\mathcal{H}$ of $M$.
For details on this we refer the reader to~\cite{b} in the general context of Gaussian measures and to~\cite{gd} for the special case considered here. We will only need the fact from~\cite{gd} that for $x\in\ensuremath{\mathbb{R}^{d}}$ and an arbitrary signed measure $\mu$ on the Borel sets of $\mathbb{R}^{d}$ the vector field $\int b^{i,j}(x-y)d\mu(y,j)$ belongs to $\mathcal{H}$. \begin{theorem}\label{sa:ibfsupp} Let $M$ be an isotropic Brownian field. Due to~\cite[Theorem 3.5.1]{b} it can be written as $M(t,x)=\sum_{i=1}^{\infty}V_{i}(x)W_{t}^{i}$, wherein $(V_i)_{i\in\ensuremath{\mathbb{N}}}$ is a complete orthonormal system in $\mathcal{H}$. Assume that $V_{1}$ is four times continuously differentiable and that all derivatives up to order four are bounded. Then for $\mathbb{K}\subset\subset\mathbb{R}^{d}$, $T>0$ and $\delta>0$ there are positive numbers $\epsilon$ and $C_1$ such that: \[ \prob{\sup_{0\leq t\leq T}\sup_{x\in\mathbb{K}}\left\|x_{\frac{t}{C_1}}-\psi_{t}(x) \right\|<\delta}>\epsilon \] Therein $\psi=\psi_{t}(x)$ is the solution of the following deterministic control problem: \[ \left\{ \begin{array}{ccc} \partial_{t}\psi_{t}(x)&=&V_{1}(\psi_{t}(x))\\ \psi_{0}(x)&=&x \end{array} \right. \] \end{theorem} Proof: This is Theorem 6.2.3 of~\cite{gd}. \hfill$\Box$\newline For the convenience of the reader we include the following definition. \begin{definition} An $\mathcal{H}$-simple control $V$ is a mapping from $[0,T]$ to $\mathcal{H}$ which is piecewise constant. \end{definition}

\subsection{Time Reversal And Markov Properties}

\begin{lemma}\label{le:reverse} For arbitrary $T>0$ we have \begin{equation} \mathcal{L}\left[\left(\Phi_{s,t}(.):0\leq s\leq t\leq T \right)\right]=\mathcal{L}\left[\left(\Phi_{T-s,T-t}(.):0\leq s\leq t\leq T \right)\right]. \end{equation} \end{lemma} Proof: Due to \cite[Theorem 4.2.10]{k} the backward flow is driven by the same infinitesimal generator as the forward flow (see Lemma~\ref{le:korrfunk} for details).
Therefore the law of the forward flow and the law of the backward flow coincide.\hfill$\Box$\\ \begin{lemma} Let $\mathcal{F}_{s,t}$ be the $\sigma$-field generated by $\left\{\Phi_{r,u}:s\leq r\leq u \leq t \right\}$. \begin{enumerate} \item For an $\left(\mathcal{F}_{s,t}:t\in[s,\infty) \right)$-stopping time $\tau$ we have: \begin{equation} \mathcal{L}\left[\left(\Phi_{\tau,r}\left(\Phi_{s,\tau}(.) \right):r\geq\tau\right) \right|\left.\mathcal{F}_{s,\tau}\right]=\mathcal{L}_{s,\tau}\left[\Phi_{s,s+r-\tau}(.):r\geq\tau\right] \end{equation} \item For any $\left\{\mathcal{F}_{s,t}:s\in(-\infty,t]\right\}$-stopping time $\tau$ we have: \begin{equation} \mathcal{L}\left[\Phi_{\tau,t}\left(\Phi_{r,\tau}(.) \right):r\leq\tau \right|\left.\mathcal{F}_{\tau,t}\right]=\mathcal{L}_{\tau,t}\left[\Phi_{t+r-\tau,t}(.):r\leq\tau\right] \end{equation} \end{enumerate} \end{lemma} Proof: 2. is a consequence of 1. and Lemma~\ref{le:reverse}; 1. is~\cite[Theorem~4.2.1]{k}.\hfill $\Box$\\

\subsection{Chasing Ball Property, LDP For Discrete Supermartingales}

The first of the following lemmas states that a non-trivial set, under the action of the flow, tends to approach another moving particle (with arbitrary non-anticipating movement), provided that the other particle does not move too fast. Here we use the following notion. \begin{definition}A subset of $\mathbb{R}^{d}$ is called non-trivial if it is bounded, connected and consists of at least two different points.\end{definition} Note that for IBFs the estimates of the local characteristics and the ellipticity bounds of~\cite{ss} hold. Therefore we may use the following lemma. For $t\geq0$ denote by $\mathcal{F}_{t}:=\mathcal{F}_{0,t}$ the $\sigma$-field generated by the flow up to time $t$. \begin{lemma}\label{le:lokdrift} Let $\Phi$ be an IBF with generator $M$.
Then there are functions $G':[0,\infty)\times[0,\infty)\times[0,\infty)\rightarrow[0,\infty)$ and $G'':[0,\infty)\times[0,\infty)\rightarrow[0,\infty)$ and an $r_0>0$ depending only on $b$ such that the following holds. \begin{enumerate} \item For all $s\in [0,\infty)$ the function $G'(\cdot,s,\cdot)$ is continuous and non-increasing with\\ $\lim_{K\rightarrow\infty}\lim_{r\rightarrow\infty}G'(K,s,r)=0$. \item For all $s\in [0,\infty)$ the function $G''(s,\cdot)$ is continuous and $r\in(0,r_0)\Rightarrow G''(s,r)>0$. \item Let $s>0$ and $r<r_0$. Let $\tau$ be a finite stopping time for the flow and let $x,y,z$ be $\mathcal{F}_{\tau}$-measurable random points in $\mathbb{R}^{d}$ with $\left\|x-y\right\|=r$. Define $r_{1}:=\left\|x-z\right\|\wedge\left\|y-z\right\|$ and $r_{2}:=\left\|\Phi_{\tau,\tau+s}(x)-z\right\|\wedge\left\|\Phi_{\tau,\tau+s}(y)-z\right\|$. Then we have \[\ce{r_{2}\vee (r_1-K)}{\mathcal{F}_{\tau}}\leq r_{1}+G'(K,s,r_{1})-G''(s,r).\] \end{enumerate} \end{lemma} Proof:~\cite[Lemma 2.5]{ss}. Observe that $K$ does not appear in the original result in~\cite{ss}, but it can be obtained by adding it in the proof of (15) on pages 2055 and 2056 of~\cite{ss}, so as to obtain instead of (15) the estimate $\expec{||x_{\tau+s}||\vee (x^1+K)}-x^1\leq g(K,x^1)$ with $\lim_{x\to\infty}g(K,x)=g_K$ and $\lim_{K\to\infty}g_K=0$, and by proceeding as in~\cite{ss} afterwards.\hfill$\Box$\\ The next lemma is an elementary large deviation principle; we recall it for the convenience of the reader. \begin{lemma}\label{le:mmb} Let $\{\xi_{j}:j\in\mathbb{N}\}$ be a sequence of real-valued random variables with \begin{enumerate} \item $\ce{\xi_{j+1}}{\xi_{1},\ldots,\xi_{j}}\leq0$, \item $\forall m\in\mathbb{N}:\exists K_{m}\in\mathbb{R}:\forall j\in\mathbb{N}:\mathbb{E}\left[|\xi_{j}|^{m}\right]\leq K_{m}$.
\end{enumerate} Then for every $\ensuremath{\epsilon}>0$ and $m\in\ensuremath{\mathbb{N}}$ there exists a constant $\kappa^{(1)}_m$, depending on $\epsilon$ and $(K_{n})_{n\in\ensuremath{\mathbb{N}}}$, such that for all $n\in\ensuremath{\mathbb{N}}$ we have $\mathbb{P}\left[\sum_{j=1}^{n}\xi_{j}\geq\epsilon n\right]\leq\kappa^{(1)}_m n^{-m}$. \end{lemma} Proof: \cite[Lemma 2]{dkk}.\hfill$\Box$\\

\subsection{Sub-Gaussian Tails And Sublinear Growth}

\begin{lemma}\label{le:subgauss} There is a positive constant $C_2$ such that $\mathbb{P}$-a.s. for any bounded subset $\gamma$ of $\mathbb{R}^{d}$ we have $\limsup_{T\rightarrow\infty}(\sup_{t\in[0,T]}\sup_{x\in\gamma}\frac{1}{T}\left\|\Phi_{t}(x)\right\|)\leq C_2.$ \end{lemma} Proof: \cite[Theorem 2.1]{ls2}.\hfill$\Box$\\

\section{Statement Of The Main Results}

\begin{theorem}\label{th:main} For any bounded, connected $\gamma\subset\ensuremath{\mathbb{R}}^2$ consisting of at least two different points we let $\gamma_t:=\Phi_t(\gamma)$ and $\mathcal{W}_{t}(\gamma):=\bigcup_{0\leq s\leq t}\gamma_{s}$. Then there exists a deterministic set $\mathcal{B}$ such that for any $\ensuremath{\epsilon}>0$: \begin{enumerate} \item There is $\mathbb{P}$-a.s. a time $0<T(\gamma,\epsilon)<\infty$ such that for any $t>T(\gamma,\epsilon)$ the following holds: \begin{equation}(1-\epsilon)t\mathcal{B}\subset\mathcal{W}_{t}(\gamma).\end{equation} \item There is $\mathbb{P}$-a.s. a sequence $\left(t_{k}:k\in\mathbb{N}\right)\subset\ensuremath{\mathbb{R}}_+$ with $t_{k}\nearrow\infty$ that fulfills \begin{equation} \mathcal{W}_{t_{k}}(\gamma)\subset(1+\epsilon)t_{k}\mathcal{B}.\end{equation} \end{enumerate} We also have \begin{equation} \lim_{T\to\infty}\prob{(1-\epsilon)T\mathcal{B}\subset\mathcal{W}_T(\gamma)\subset(1+\epsilon)T\mathcal{B}}=1. \end{equation} \end{theorem} Proof: The proof will be given in Sections~\ref{sec:lb} and~\ref{sec:ub}.
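As a numerical aside, the large deviation bound of Lemma~\ref{le:mmb} is easy to observe in simulation. The sketch below (an illustration under our own choice of distribution) uses centered i.i.d. uniform variables, which satisfy both hypotheses of the lemma.

```python
import numpy as np

# Centered i.i.d. Uniform(-1, 1) variables: the conditional expectation is
# zero and every absolute moment is bounded by 1, so the lemma applies and
# P[sum >= eps * n] must decay faster than any power of n.
rng = np.random.default_rng(3)
eps, n, trials = 0.1, 1000, 5000
sums = rng.uniform(-1.0, 1.0, size=(trials, n)).sum(axis=1)
# Here eps * n = 100 is more than five standard deviations away
# (the standard deviation of the sum is sqrt(n / 3), about 18.3).
p_hat = np.mean(sums >= eps * n)
print(p_hat)
```

In this Monte Carlo experiment the empirical exceedance frequency is essentially zero, consistent with the polynomial bound $\kappa^{(1)}_m n^{-m}$ for every $m$.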
\begin{corollary} If, for $\gamma$ as above, we define the asymptotic linear expansion speed to be $$\liminf_{T\to\infty}\frac{1}{T}\textnormal{diam}(\mathcal{W}_T(\gamma)),$$ then it is independent of $\gamma$ and a.s. constant. \end{corollary} Proof: This follows directly from Theorem~\ref{th:main}.\hfill$\Box$\\

\section{The Lower Bound} \label{sec:lb}

\subsection{Hitting Time Of Far Away Balls}

Assume that the original set $\gamma\subset\mathbb{R}^{d}$ is connected, compact and that it consists of at least two different points (the assumption of compactness is made for simplicity and could be omitted). Denote by $\gamma_{t}:=\Phi_{t}(\gamma)$ the set $\gamma$ at time $t$ and by $d_{t}:=\textnormal{diam}(\gamma_{t})$ its diameter. Further denote for any $R>0$ by $\tau^{R}(\gamma,P):=\inf\left\{t>0:\textnormal{dist}(\gamma_{t},P)\leq R, d_{t}\geq 1\right\}$ the time it takes for $\gamma$ to reach an $R$-neighbourhood of $P\in\ensuremath{\mathbb{R}^{d}}$. In fact it will turn out that $\liminf_{t\rightarrow\infty}d_{t}\geq 1$ almost surely. We call a subset of $\mathbb{R}^{d}$ large if it is bounded and has diameter at least $1$. Due to the results of \cite{ss} and \cite{css} we may assume that $\gamma$ is large (the following will show that $\gamma$ becomes large a.s. anyway). \begin{theorem}\label{th:cor6} Let $P\in\ensuremath{\mathbb{R}^{d}}$, assume that $\gamma\subset\ensuremath{\mathbb{R}^{d}}$ is large and define $\bar{r}:=1\vee\mathit{dist}(P,\gamma)$. There is a constant $R>0$ such that for any $m\in\mathbb{N}$ there is $\kappa^{(2)}_{m}>0$ (not depending on $\gamma$, $P$ or $\bar{r}$) such that for $\beta>1$ we have $\mathbb{P}\left[\tau^{R}(\gamma,P)>\kappa^{(2)}_m\beta \bar{r}\right]\leq \kappa^{(2)}_{m}\beta^{-m}\bar{r}^{-m}$.
\end{theorem} The proof consists of several steps: \begin{enumerate} \item Construction of a strictly increasing $C^{2}$-function $f:(0,\infty)\rightarrow\mathbb{R}$ with $\lim_{r\rightarrow\infty}f(r)=\infty$ such that $f(\rho^{xy}_{t})$ is a submartingale for any $x, y\in\ensuremath{\mathbb{R}^{d}}$. The drift of this submartingale has to be bounded away from zero for small $\rho_{t}^{xy}$. \item Getting estimates on the average growth of $d_{t}$. \item Estimating the probability of $\gamma_{t}$ not being large after a long time. \item Establishing a negative upper bound for the ``drift'' of $r_{t}:=\textnormal{dist}(\gamma_{t},P)$ outside $K_{R}(P):=\left\{x\in\ensuremath{\mathbb{R}^{d}}:|x-P|\leq R \right\}$. \item Gluing all of the above together to prove Theorem~\ref{th:cor6}. \end{enumerate}

\subsubsection{Construction Of $f$}

The first ingredient needed to construct $f$ is the following lemma. \begin{lemma}\label{le:ableitungskorrektur} For any $0<c_8<c_9<\infty$, $\delta>0$ and $-\infty<c_{10}<0<c_{11}<\infty$ there is a decreasing $C^{2}$-function $h:[c_8,c_9]\rightarrow [h(c_9),0]$ with \begin{enumerate} \item $h'(c_8)=h'(c_9)=0$, \item $h''(c_8)=c_{10}$, $h''(c_9)=c_{11}$ and $h''$ is increasing, \item $h(c_8)=0$, \item $\sup_{c_8\leq r\leq c_9}\{|h'(r)|\}\leq\delta$. \end{enumerate} \end{lemma} Proof: For $0<\epsilon<0.5\,(c_{11}\wedge(-c_{10}))$ define $h''_{\epsilon}:[c_8,c_9]\rightarrow[c_{10},c_{11}]$ via\\ $h''_{\epsilon}(r):=\ind{\left[c_8,c_{8,\epsilon}\right]}(r)(r-c_{8,\epsilon})\frac{c_{10}^{2}}{2\epsilon(c_9-c_8)}+\ind{\left[c_{9,\epsilon},c_9\right]}(r)(r-c_{9,\epsilon})\frac{c_{11}^{2}}{2\epsilon(c_9-c_8)}$ with\\ $c_{8,\epsilon}:=c_8-\frac{2\epsilon(c_9-c_8)}{c_{10}}$ and $c_{9,\epsilon}:=c_9-\frac{2\epsilon(c_9-c_8)}{c_{11}}$.
Letting also $ h'_{\epsilon}(r):=\int_{c_8}^{r}h''_{\epsilon}(s)ds=\ind{[c_{9,\epsilon},c_9]}(r)(r-c_{9,\epsilon})^{2}\frac{c_{11}^2}{4\epsilon(c_9-c_8)}-\ind{]c_{8,\epsilon},c_9]}(r)\epsilon(c_9-c_8)$\\ $\hspace{5mm}+\ind{[c_8,c_{8,\epsilon}]}(r)\left[(r-c_{8,\epsilon})^{2}-(c_8-c_{8,\epsilon})^{2}\right]\frac{c_{10}^2}{4\epsilon(c_9-c_8)}$ ensures 1. We then also have $h'_{\epsilon}\leq0$, and 4. follows from choosing $\epsilon\leq\frac{\delta}{c_9-c_8}$. Setting $h(r):=\int_{c_8}^{r}h'_{\epsilon}(s)ds$ for such an $\epsilon$ makes $h$ decreasing, ensures 3. and finishes the proof of Lemma~\ref{le:ableitungskorrektur}. \hfill$\Box$\newline \begin{lemma}\label{le:exf} There is a strictly increasing $C^2$-function $f$ with the following properties. \begin{enumerate} \item $\lim_{r\rightarrow\infty}f(r)=\infty$ and $f(\rho^{xy}_{t})$ is a submartingale for any $x, y\in\ensuremath{\mathbb{R}^{d}}$. \item $f(1)=0$. \item Writing, by means of It\^{o}'s formula, (\ref{eq:zweipunktgleichung}) and Fubini's theorem, \begin{eqnarray} &&\mathbb{E}\left[f(\rho^{xy}_{t+s})-f(\rho^{xy}_{s})\right]\nonumber\\ &=&\int_{s}^{t+s}\mathbb{E}\left[f'(\rho^{xy}_{r})\frac{1-B_{N}(\rho^{xy}_{r})}{\rho^{xy}_{r}}(d-1)+f''(\rho^{xy}_{r})\left(1-B_{L}(\rho^{xy}_{r})\right)\right]dr\nonumber\\ &+&\mathbb{E}\left[\int_{s}^{t+s}f'(\rho^{xy}_{r})\sqrt{2(1-B_{L}(\rho^{xy}_{r}))}dW_{r}\right]\nonumber\\ &=:&\int_{s}^{t+s}\mathbb{E}\left[g(\rho^{xy}_{r})\right]dr +\mathbb{E}\left[\int_{s}^{t+s}\tilde{g}(\rho^{xy}_{r})dW_{r}\right] \end{eqnarray} we get that $\tilde{g}$ is bounded and that $g-\frac{1}{8}(\beta_{N}(d-1)-\beta_{L})\geq0$. \item There are $C_7>0$ and $C_9>0$ such that \begin{equation}\ce{(f(d_{t+1})-f(d_{t}))\wedge C_9}{\mathcal{F}_{t}}\geq C_7.\label{eq:driftschranke}\end{equation} ($\mathcal{F}_{t}$ denotes the $\sigma$-field generated by the flow up to time $t$.)
\end{enumerate} \end{lemma} Proof: We choose the following ansatz for $f$ which uses a local linearization of (\ref{eq:zweipunktgleichung}) near the origin. \begin{equation} f(r)-c_{1}:=\left\{\begin{array}{ll} \log r+c_{2}&:0<r<c_8\\ c_{3}\sqrt{r}+h(r)&:c_8\leq r\leq c_9\\ c_{4}r+c_{5}&:c_9<r \end{array}\right. \end{equation}Put \begin{align} \epsilon:=&1\wedge\frac{1}{8}\frac{\beta_{N}(d-1)-\beta_{L}}{d-1}\wedge\frac{1}{24}(\beta_{N}(d-1)-\beta_{L})\left(\frac{\beta_{L}}{\beta_{N}(d-1)}\right)^{\frac{1}{3}}\nonumber\\ &\wedge\frac{1}{24}(\beta_{N}(d-1)-\beta_{L}) \left(\frac{\beta_{N}(d-1)}{\beta_{L}}\right)^{-\frac{4}{3}} \end{align} and choose $r_{\epsilon}$ according to (\ref{eq:B-approximationsgleichung}). Further set \begin{align} c_9:=&r_{\epsilon}\wedge1,\hspace{1cm}c_8:=c_9\left(\frac{\beta_{N}(d-1)}{\beta_{L}}\right)^{-\frac{2}{3}},\nonumber\\ \delta:=&\frac{1}{24}(\beta_{N}(d-1)-\beta_{L})\left\|\frac{1-B_{N}(.)}{c_8}\right\|_{\infty}^{-1}(d-1)^{-1},\nonumber\\ c_{11}:=&\frac{1}{2c_9\sqrt{c_8 c_9}},\hspace{0.5cm}\textnormal{ and }\hspace{0.5cm}c_{10}:=-\frac{1}{2c_8^{2}}.\nonumber \end{align} Choosing $h$ according to Lemma~\ref{le:ableitungskorrektur} and \begin{align} c_{3}:=&\frac{2}{\sqrt{c_8}},\hspace{1cm}c_{2}:=2-\log(c_8),\hspace{1cm}c_{4}:=\frac{c_{3}}{2\sqrt{c_9}}+h'(c_9)=\frac{1}{\sqrt{c_8 c_9}},\nonumber\\ c_{5}:=&\sqrt{\frac{c_9}{c_8}}+h(c_9),\hspace{1cm}c_{1}:=-c_{4}-c_{5}\nonumber \end{align} ensures the $C^2$-property of $f$. Its submartingale property will follow if we can show that $g$ is strictly positive and that $\tilde{g}$ is bounded. To check this let us give $f$ and its derivatives in terms of $c_8$, $c_9$ and $\delta$. 
\begin{align} f(r)+\sqrt{\frac{c_9}{c_8}}+h(c_9)+\frac{1}{\sqrt{c_8c_9}}&=&\left\{\begin{array}{ll} \log r+2-\log(c_8)&:0<r<c_8\\ \frac{2}{\sqrt{c_8}}\sqrt{r}+h(r)&:c_8\leq r\leq c_9\\ \frac{r}{\sqrt{c_8c_9}}+\sqrt{\frac{c_9}{c_8}}+h(c_9)&:c_9<r \end{array}\right.\nonumber ,\end{align} \[ \left(f'(r),f''(r)\right)=\left\{\begin{array}{ll} \left(\frac{1}{r},-\frac{1}{r^{2}}\right)&:0<r<c_8\\ \left(\frac{1}{\sqrt{c_8r}}+h'(r),-\frac{1}{2r\sqrt{c_8r}}+h''(r)\right)&:c_8\leq r\leq c_9\\ \left(\frac{1}{\sqrt{c_8c_9}},0\right)&:c_9<r \end{array}\right.\] The computation for the boundedness of $\tilde{g}$ is rather simple and yields $|\tilde{g}(r)|\leq \sqrt{\beta_{L}}+\sqrt{2}+\left|f'(c_8)\right|\left\|\sqrt{2(1-B_{L}(.))}\right\|_{\infty}<\infty$. We leave the details to the reader. Now we can turn to the estimation of $g(r)$: For $r\geq c_9$ we obviously have $g(r)>0$ since $f'(r)>0$, $f''(r)=0$ and $B_{N}(r)<1$. The case $c_8\leq r\leq c_9$ needs a little more attention. \begin{align} g(r)=&\frac{\beta_{N}r}{2\sqrt{c_8r}}(d-1)-\frac{\beta_{L}r^{2}}{2}\left(\frac{1}{2r\sqrt{c_8r}}-h''(r)\right)+\frac{d-1}{\sqrt{c_8r}}\left(\frac{1-B_{N}(r)-\frac{1}{2}\beta_{N}r^{2}}{r}\right)\nonumber\\ &+\left(h''(r)-\frac{1}{2r\sqrt{c_8r}}\right)\left(1-B_{L}(r)-\frac{1}{2}\beta_{L}r^{2}\right)+h'(r)\frac{1-B_{N}(r)}{r}(d-1)\nonumber\\ \geq&\frac{\beta_{N}r}{2\sqrt{c_8r}}(d-1)-\frac{\beta_{L}r^{2}}{2}\left(\frac{1}{2r\sqrt{c_8r}}+\frac{1}{2c_8^2}\right)-\frac{d-1}{\sqrt{c_8r}}\left|\frac{1-B_{N}(r)-\frac{1}{2}\beta_{N}r^{2}}{r}\right|\nonumber\\ &-\left|h''(r)-\frac{1}{2r\sqrt{c_8r}}\right|\left|1-B_{L}(r)-\frac{1}{2}\beta_{L}r^{2}\right|-\left|h'(r)\frac{1-B_{N}(r)}{r}(d-1)\right|\nonumber\\ =:&\mathit{I}-\mathit{II}-\mathit{III}-\mathit{IV}. 
\end{align} (Here $\mathit{I}$ collects the first two terms of the preceding estimate.) Since we chose $c_8:=c_9\left(\frac{\beta_{N}(d-1)}{\beta_{L}}\right)^{-\frac{2}{3}}$ and $c_9:=r_{\epsilon}\wedge1$ we get with $\delta$ as defined above \begin{align} I=&\frac{\sqrt{r}}{2\sqrt{c_8}}\left[\frac{1}{2}(\beta_{N}(d-1)-\beta_{L})+\frac{1}{2}\left(\beta_{N}(d-1)-r^{\frac{3}{2}}c_8^{-\frac{3}{2}}\beta_{L}\right)\right]\nonumber\\ \geq&\frac{1}{4}(\beta_{N}(d-1)-\beta_{L})+\frac{1}{4}\sqrt{\frac{r}{c_8}}\left[\beta_{N}(d-1)-c_9^{\frac{3}{2}}c_8^{-\frac{3}{2}}\beta_{L}\right]\nonumber\\ =&\frac{1}{4}(\beta_{N}(d-1)-\beta_{L}),\\ \mathit{II}\leq&\frac{r\sqrt{rc_9}}{\sqrt{c_8r}}\left|\frac{1-B_{N}(r)-\frac{1}{2}\beta_{N}r^{2}}{r^{3}}(d-1)\right| \leq\frac{1}{24}(\beta_{N}(d-1)-\beta_{L}), \end{align}\begin{align} \mathit{III}=&\left|h''(r)-\frac{1}{2r\sqrt{c_8r}}\right|\left|1-B_{L}(r)-\frac{1}{2}\beta_{L}r^{2}\right|\nonumber\\ \leq&\left|\frac{1}{2c_8^2}+\frac{1}{2r\sqrt{c_8r}}\right|r^{3}\left|\frac{1-B_{L}(r)-\frac{1}{2}\beta_{L}r^{2}}{r^{3}}\right|\nonumber\\ \leq&\frac{c_9^2}{c_8^2}r \left|\frac{1-B_{L}(r)-\frac{1}{2}\beta_{L}r^{2}}{r^{3}}\right| \leq\frac{1}{24}(\beta_{N}(d-1)-\beta_{L}),\\ \mathit{IV}=&\left|h'(r)\frac{1-B_{N}(r)}{r}(d-1)\right|\leq\frac{1}{24}(\beta_{N}(d-1)-\beta_{L}) \end{align} and so finally $g(r)\geq \frac{1}{8}(\beta_{N}(d-1)-\beta_{L})>0$. The remaining case $r\leq c_8$ is similar to the above but simpler: there $g(r)=\frac{d-1}{r^{2}}\left(1-B_{N}(r)\right)-\frac{1}{r^{2}}\left(1-B_{L}(r)\right)$, and splitting off $\frac{1}{2}(\beta_{N}(d-1)-\beta_{L})$ as above, the remaining error terms are controlled via (\ref{eq:B-approximationsgleichung}) and the choice of $\epsilon$. It remains to show \eqref{eq:driftschranke}. This is done in the following subsubsection. \subsubsection{Growth Of $f(d_{t})$ On Average}\label{suse:mitwachs} There are two cases. If $d_{t}< \frac{r^{(\epsilon)}}{2}$ it is sufficient to consider the two-point motion. 
Due to the Markov property of the submartingale $f(\rho^{xy}_{t})$, choosing $x_{t}$ and $y_{t}$ with $|x_{t}-y_{t}|=d_{t}$ and some constant $C_9>0$ (to be specified later), we get \begin{align} &\ce{\left((f(d_{t+1})-f(d_{t}))\wedge C_9\right)\indb{d_{t}<\frac{r^{(\epsilon)}}{2}}}{\rule{0mm}{4mm}\mathcal{F}_{t}}\nonumber\\ \geq&\ce{(f(\rho^{xy}_{t+1})-f(\rho^{xy}_{t}))\wedge C_9}{\mathcal{F}_{t}}\indb{f(d_{t})<f\left(\frac{r^{(\epsilon)}}{2}\right)}\nonumber\\ =&\mathbb{E}_{f(\rho^{xy}_{t})}\left[(f(\rho^{xy}_{1})-f(\rho^{xy}_{0}))\wedge C_9 \right]\indb{f(d_{t})<f\left(\frac{r^{(\epsilon)}}{2}\right)}\nonumber\\ \geq&\left[\left(f\left(r^{(\epsilon)}\right)-f\left(\frac{r^{(\epsilon)}}{2}\right)\right)\wedge C_9\right]\mathbb{P}_{f(\rho^{xy}_{t})}\left[\sup_{0\leq s\leq1}f(\rho^{xy}_{s})\geq f(r^{(\epsilon)})\right]\indb{f(d_{t})<f\left(\frac{r^{(\epsilon)}}{2}\right)}\nonumber\\ &+\left(\frac{1}{8}(\beta_{N}(d-1)-\beta_{L})\wedge C_9\right)\mathbb{P}_{f(\rho^{xy}_{t})}\left[\sup_{0\leq s\leq1}f(\rho^{xy}_{s})< f(r^{(\epsilon)})\right]\indb{f(d_{t})<f\left(\frac{r^{(\epsilon)}}{2}\right)}\nonumber\\ \geq&\left(\left[f\left(r^{(\epsilon)}\right)-f\left(\frac{r^{(\epsilon)}}{2}\right)\right]\wedge\frac{1}{8}(\beta_{N}(d-1)-\beta_{L})\wedge C_9\right)\indb{d_{t}<\frac{r^{(\epsilon)}}{2}}.\nonumber \end{align} If $d_{t}\geq \frac{r^{(\epsilon)}}{2}$ first consider the growth of $d_{t}$. We may assume $G''(1,\frac{r^{(\epsilon)}}{10})=:\frac{c_6}{c_4}>0$ (otherwise we decrease $r^{(\epsilon)}$, see Lemma~\ref{le:lokdrift}). There are $\hat{r}$ and $C_9>0$ such that for any $r\geq\hat{r}$ we have $G'(\frac{C_9}{2c_4},1,r)<\frac{c_6}{2c_4}$. 
Choose $x^{(1)}_{t},x^{(2)}_{t},y^{(1)}_{t},y^{(2)}_{t}\in\gamma_{t}$ with $|x^{(1)}_{t}-x^{(2)}_{t}|=d_{t}, |x^{(i)}_{t}-y^{(i)}_{t}|=\frac{r^{(\epsilon)}}{10}: i=1,2$ and define \begin{align} z^{(1)}:=&x_{t}^{(1)}+\frac{x^{(1)}_{t}-x^{(2)}_{t}}{|x^{(1)}_{t}-x^{(2)}_{t}|}\hat{r}, z^{(2)}:=x_{t}^{(2)}+\frac{x^{(2)}_{t}-x^{(1)}_{t}}{|x^{(2)}_{t}-x^{(1)}_{t}|}\hat{r},\nonumber\\ r^{(i)}_{1}:=&|x_{t}^{(i)}-z^{(i)}|\wedge|y_{t}^{(i)}-z^{(i)}|=\hat{r}: i=1,2\textnormal{ and}\nonumber\\ r^{(i)}_{2}:=&|x_{t+1}^{(i)}-z^{(i)}|\wedge|y_{t+1}^{(i)}-z^{(i)}|:i=1,2\nonumber \end{align} (see Fig.~\ref{abb:mitwachs} for the geometry at time $t$). \begin{figure}\centering\resizebox{10cm}{!}{\includegraphics{mitwachs.pdf}}\caption{ growth of $d_t$ on average\label{abb:mitwachs}}\end{figure} Lemma~\ref{le:lokdrift} provides for $i=1,2$ that we have $\cel{(c_4(r^{(i)}_{1}-r^{(i)}_{2}))\wedge \frac{C_9}{2}}{\mathcal{F}_{t}}\geq G''(1,\frac{r^{(\epsilon)}}{10})-G'\left(\frac{C_9}{2c_4},1,\hat{r}\right)\geq \frac{c_{6}}{2}>0$ and therefore, since $|z^{(1)}-z^{(2)}|=r^{(1)}_{1}+d_{t}+r^{(2)}_{1}$ and $|z^{(1)}-z^{(2)}|\leq r^{(1)}_{2}+d_{t+1}+r^{(2)}_{2}$, we get $(c_4(d_{t+1}-d_{t}))\wedge C_9\geq (c_4(r^{(1)}_{1}-r^{(1)}_{2}))\wedge \frac{C_9}{2}+(c_4(r^{(2)}_{1}-r^{(2)}_{2}))\wedge \frac{C_9}{2}$, implying \begin{align} &\ce{(c_4(d_{t+1}-d_{t}))\wedge C_9}{\mathcal{F}_{t}}\nonumber\\ \geq&\cel{(c_4(r^{(1)}_{1}-r^{(1)}_{2}))\wedge \frac{C_9}{2}}{\mathcal{F}_{t}}+\cel{(c_4(r^{(2)}_{1}-r^{(2)}_{2}))\wedge \frac{C_9}{2}}{\mathcal{F}_{t}}\geq c_{6}\nonumber. \end{align} Now we turn this into an estimate for $f(d_{t})$. 
Abbreviate $\rho_{t}:=\rho^{x^{(1)}x^{(2)}}_{t}$ and consider for $K>0$ \begin{align}\label{eq:c7def} &\cel{\left((f(d_{t+1})-f(d_{t}))\wedge C_9\right)\indb{f\left(\frac{r^{(\epsilon)}}{2}\right)\leq f(d_{t})\leq K}}{\rule{0mm}{4mm}\mathcal{F}_{t}}\nonumber\\ \geq& \cel{\left((f\left(\rho_{t+1}\right)-f\left(\rho_{t}\right))\wedge C_9\right)\indb{f\left(\frac{r^{(\epsilon)}}{2}\right)\leq f(d_{t})\leq K}} {\rule{0mm}{4mm}\mathcal{F}_{t}}\\ \geq&\inf_{f\left(\frac{r^{(\epsilon)}}{2}\right)\leq f\left(\rho_{t}\right)\leq K}\expec[f(\rho_{t})]{(f(\rho_1)-f(\rho_0))\wedge C_9}\indb{f\left(\frac{r^{(\epsilon)}}{2}\right)\leq f(d_{t})\leq K}\nonumber\\ =:&\indb{f\left(\frac{r^{(\epsilon)}}{2}\right)\leq f(d_{t})\leq K}c_{7}>0.\nonumber \end{align} The last inequality follows from the continuity and positivity ($g(r)>0$ for $r>0$) of the mapping $r\mapsto \expec[f(\rho_0)=r]{(f(\rho_1)-f(\rho_0))\wedge C_9}$. \eqref{eq:driftschranke} is now an easy consequence of the following proposition. \begin{proposition}\label{be:driftschranke2}\mbox{}\\ There is $K\in\mathbb{N}$ such that $\ce{\left((f(d_{t+1})-f(d_{t}))\wedge C_9\right)\indb{ f(d_{t})> K}}{\mathcal{F}_{t}}\geq\frac{c_6}{2}\indb{ f(d_{t})> K}$. 
\end{proposition} Proof of Proposition~\ref{be:driftschranke2}: Consider for $F\in\mathcal{F}_{t}$ and $K\in\mathbb{N}$ that \begin{align}\label{eq:grosswachs} &\expec{\left((f(d_{t+1})-f(d_{t}))\wedge C_9\right)\indb{ f(d_{t})> K}\ind{F}}\nonumber\\ =&\expec{\left((f(d_{t+1})-f(d_{t}))\wedge C_9\right)\ind{\left\{f(d_{t})> K\right\}\cap F\cap\left\{f(d_{t+1})\geq 0 \right\}}}\nonumber\\ &+\expec{\left((f(d_{t+1})-f(d_{t}))\wedge C_9\right)\ind{\left\{f(d_{t})> K\right\}\cap F\cap\left\{f(d_{t+1})<0 \right\}}}\nonumber\\ \geq&\expec{\left((c_{4}(d_{t+1}-d_{t}))\wedge C_9\right)\ind{\left\{f(d_{t})> K\right\}\cap F\cap\left\{f(d_{t+1})\geq 0 \right\}}}\nonumber\\ &+\expec{\left(f\left(\rho_{t+1}\right)-f\left(\rho_{t}\right)\right)\indb{f\left(\rho_{t+1}\right)-f\left(\rho_{t}\right)<-K}\ind{\left\{f\left(d_{t}\right)> K\right\}\cap F\cap\left\{f\left(d_{t+1}\right)<0\right\}}}\nonumber\\ \geq&\expec{\left((c_{4}(d_{t+1}-d_{t}))\wedge C_9\right)\ind{\left\{f(d_{t})> K\right\}\cap F}}\nonumber\\ &+\expec{\expec[f\left(\rho_{t}\right)]{\left(f\left(\rho_{1}\right)-f\left(\rho_{0}\right)\right)\indb{f\left(\rho_{1}\right)-f\left(\rho_{0}\right)<-K}}\ind{\left\{f\left(d_{t}\right)> K\right\}\cap F}}\nonumber\\ \geq&\expec{ \ce{\left(c_4(d_{t+1}-d_{t})\right)\wedge C_9}{\mathcal{F}_{t}} \ind{\left\{f(d_{t})> K\right\}\cap F}}\nonumber\\ &-\expec{\ind{\{f(d_{t})>K\}\cap F}\sum_{n=K}^{\infty}n\prob[f\left(\rho_{t}\right)] {f\left(\rho_{1}\right)-f\left(\rho_{0}\right)<1-n}}\\ =:&\expec{ \ce{\left(c_4(d_{t+1}-d_{t})\right)\wedge C_9}{\mathcal{F}_{t}} \ind{\left\{f(d_{t})> K\right\}\cap F}}-I\nonumber\\ \geq& c_{6}\prob{f(d_{t})>K;F;f(d_{t+1})\geq 0 }-I\nonumber. \end{align} For the estimation of $I$ (defined in the above computation) the next lemma is useful. 
\begin{lemma}\label{le:schrumpftail} For $x,y\in\mathbb{R}^{d}$ and $f\left(\rho^{xy}_{t}\right)\in\mathbb{R}$ we have that\\ \mbox{}\hspace{2cm}$\prob[f(\rho^{xy}_{t})]{f(\rho^{xy}_{0})-f(\rho^{xy}_{1})>n}\leq\frac{\left\|\tilde{g}\right\|_{\infty}}{n\sqrt{2\pi}}\exp\left\{-\frac{n^{2}}{2\left\|\tilde{g}\right\|_{\infty}^{2}}\right\}$. \end{lemma} Proof of Lemma~\ref{le:schrumpftail}: Due to $df(\rho^{xy}_{s})=\tilde{g}(\rho^{xy}_{s})dW_{s}+g(\rho^{xy}_{s})ds$ with $\tilde{g}$ bounded and $g\geq0$ some standard results (\cite[Proposition 5.2.18]{ks}, \cite[Theorem 4.6]{ks} and \cite[Chapter V, Theorem 1.7]{ry} e.g.) imply \begin{align} \prob[f(\rho^{xy}_{t})]{f(\rho^{xy}_{0})-f(\rho^{xy}_{1})>n}\leq&\prob[f(\rho^{xy}_{t})]{\int_{0}^{1}\tilde{g}(\rho^{xy}_{s})dW_{s}<-n}\nonumber\\ \leq\prob{W_{\int_{0}^{1}\left\|\tilde{g}\right\|_{\infty}^{2}ds}<-n}\leq\prob{\left\|\tilde{g}\right\|_{\infty} W_{1}<-n}\leq&\frac{\left\|\tilde{g}\right\|_{\infty}}{n\sqrt{2\pi}}\exp\left\{-\frac{n^{2}}{2\left\|\tilde{g}\right\|_{\infty}^{2}}\right\},\nonumber\end{align} completing the proof of Lemma~\ref{le:schrumpftail}. \hfill$\Box$\\ The convergence $I\leq\expec{\ind{\{f(d_{t})>K\}\cap F}\sum_{n=K}^{\infty}\frac{n\left\|\tilde{g}\right\|_{\infty}}{(n-1)\sqrt{2\pi}} \exp\left\{-\frac{(n-1)^{2}}{2\left\|\tilde{g}\right\|_{\infty}^{2}}\right\}} \stackrel{K\rightarrow\infty}{\rightarrow}0$ implies for sufficiently large $K$ (uniformly in $F$) that\\ $I\leq\frac{c_{4}c_{6}}{2}\prob{\left\{f(d_{t})>K\right\}\cap F\cap\left\{f(d_{t+1})\geq 0 \right\}}$, because we also have that\\ $\cp{f(d_{t+1})\geq0}{f(d_{t})>K}\rightarrow1$ for $K\rightarrow\infty$, which together with (\ref{eq:grosswachs}) completes the proof of Proposition~\ref{be:driftschranke2}.\hfill$\Box$\\ The proof of \eqref{eq:driftschranke} is now straightforward. 
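As an aside, the final step in the chain of inequalities in the proof of Lemma~\ref{le:schrumpftail} above is the elementary Gaussian tail estimate; for the reader's convenience, a sketch:

```latex
\prob{\left\|\tilde{g}\right\|_{\infty}W_{1}<-n}
=\frac{1}{\sqrt{2\pi}}\int_{n/\left\|\tilde{g}\right\|_{\infty}}^{\infty}e^{-\frac{u^{2}}{2}}\,du
\leq\frac{1}{\sqrt{2\pi}}\int_{n/\left\|\tilde{g}\right\|_{\infty}}^{\infty}
\frac{u\left\|\tilde{g}\right\|_{\infty}}{n}\,e^{-\frac{u^{2}}{2}}\,du
=\frac{\left\|\tilde{g}\right\|_{\infty}}{n\sqrt{2\pi}}
\exp\left\{-\frac{n^{2}}{2\left\|\tilde{g}\right\|_{\infty}^{2}}\right\},
```

where the inequality uses that $u\left\|\tilde{g}\right\|_{\infty}/n\geq1$ on the domain of integration.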
Choose $K>1$ for which Proposition~\ref{be:driftschranke2} holds, $c_{7}=c_{7}(K)$ according to \eqref{eq:c7def} and consider \begin{align} &\ce{(f(d_{t+1})-f(d_{t}))\wedge C_9}{\mathcal{F}_{t}}\nonumber\\ =&\ce{\left((f(d_{t+1})-f(d_{t}))\wedge C_9\right)\indb{f(d_{t})<f\left(\frac{r^{(\epsilon)}}{2}\right)}}{\mathcal{F\rule{0mm}{4mm}}_{t}}\nonumber\\ &+\ce{ \left((f(d_{t+1})-f(d_{t}))\wedge C_9\right) \indb{ f\left(\frac{r^{(\epsilon)}}{2}\right) \leq f(d_{t})\leq K} }{\mathcal{F\rule{0mm}{4mm}}_{t}}\nonumber\\ &+\ce{\left((f(d_{t+1})-f(d_{t}))\wedge C_9\right)\indb{f(d_{t})> K}}{\mathcal{F}_{t}}\nonumber\\ \geq&\left[f\left(r^{(\epsilon)}\right)-f\left(\frac{r^{(\epsilon)}}{2}\right)\right]\wedge\frac{1}{8}(\beta_{N}(d-1)-\beta_{L})\wedge C_9\wedge c_{7}\wedge\frac{c_6}{2}=:C_8>0 \nonumber. \end{align} The proof of Lemma~\ref{le:exf} is complete. \hfill$\Box$\\ This estimate on average will now be turned into a bound on the probability that our original set fails to be large after a long time. \subsubsection{Pathwise Growth Of $f(d_{t})$}\label{suse:pww} We have $\ce{f(d_{t+1})\wedge (f(d_{t})+C_9)}{\mathcal{F}_{t}}-f(d_{t})\geq C_8>0$. So we can verify the assumptions of Lemma~\ref{le:mmb} for $\xi_{i}:=f(d_{i-1})-[f(d_{i})\wedge (f(d_{i-1})+C_9)]+C_8$. We only have to prove $\mathbb{E}[|\xi_{i}|^{m}]\leq K_{m}$ for certain real $K_{m}$. \begin{align} \expec{|\xi_{i}|^{m}}\leq&2^{m}C_8^m+ 2^mC_9^m\prob{d_{i}>d_{i-1}}+2^{m}\expec{\left( f(d_{i-1})-f(d_{i})\right)^{m} \indb{d_{i-1}>d_{i}}}\nonumber\\ \leq&2^{m}C_8^m+ 2^mC_9^m+2^m\expec{\left( f(d_{i-1})-f(d_{i})\right)^{m} \indb{d_{i-1}>d_{i}}}\nonumber\\ =:&2^m(C_8^m+C_9^m)+2^m\mathit{I}\label{eq:ximoment1}. \end{align} To estimate $I$ we choose $x$ and $y$ in $\gamma$ such that $\left\|x_{i-1}-y_{i-1}\right\|=d_{i-1}$. 
Since shrinking of $d_{t}$ implies that $\left\|x_{t}-y_{t}\right\|$ decreases, we can further conclude \begin{align} I\leq&\expec{\left( f(\rho^{xy}_{i-1})-f(\rho^{xy}_{i})\right)^{m} \indb{\rho^{xy}_{i-1}>\rho^{xy}_{i}}}\nonumber\\ =&\expec{\ce{ \left( f(\rho^{xy}_{i-1})-f(\rho^{xy}_{i})\right)^{m} \indb{f\left(\rho^{xy}_{i-1}\right)>f\left(\rho^{xy}_{i}\right)}}{\rule{0mm}{4mm}\mathcal{F}_{i-1}} }\nonumber\\ \leq&1+\sum_{n=1}^{\infty}(n+1)^{m}\sup_{f(r)\in\mathbb{R}^{+}}\prob[f(r)]{f(\rho^{xy}_{0})-f(\rho^{xy}_{1})>n}\label{eq:ximoment2}. \end{align} Combining (\ref{eq:ximoment1}), (\ref{eq:ximoment2}) and Lemma~\ref{le:schrumpftail} yields \[ \expec{|\xi_{i}|^{m}}\leq2^{m}(C^m_8+C_9^m+1)+2^{m}\sum_{n=1}^{\infty}\frac{\left\|\tilde{g}\right\|_{\infty}(n+1)^{m}}{n\sqrt{2\pi}}\exp\left\{-\frac{n^{2}}{2\left\|\tilde{g}\right\|_{\infty}^{2}}\right\}=:K_{m}. \] Concluding with Lemma~\ref{le:mmb} we have for $m\in\mathbb{N}$ the existence of $\kappa^{(1)}_{m}\in\mathbb{R}$, such that for $n\geq\frac{2}{C_8}\left|f(d_{0})\right|$ the following holds. \begin{align} \prob{d_{n}<1}=&\prob{f(d_{n})<0}=\prob{\sum_{i=0}^{n-1}\left(f(d_{i})-f(d_{i+1})+C_8\rule{0pt}{12pt}\right)> f(d_{0})+C_8 n}\nonumber\\ \leq&\prob{\sum_{i=0}^{n-1}\left(f(d_{i})-[f(d_{i+1})\wedge (f(d_{i})+C_9)]+C_8 \rule{0pt}{12pt} \right)\geq f(d_{0})+C_8 n}\nonumber\\ =&\prob{\sum_{i=1}^{n}\xi_{i}\geq f(d_{0})+C_8 n}\leq\prob{\sum_{i=1}^{n}\xi_{i}\geq \frac{C_8 n}{2}}\leq\kappa_{m}^{(1)} n^{-m}. \end{align} Increasing $\kappa_{m}^{(1)}$ if necessary, we may and do assume that this holds for all $n$. 
\\ \textbf{Remark:} The assumption of largeness of $\gamma$ makes this correction uniform in $\gamma$.\\ So we can estimate the probability of $F_{n}:=\{\exists i\in[\left\lfloor \sqrt{n}\right\rfloor,\infty[\cap\mathbb{N}:d_{i}<1\}$ via \begin{equation}\label{eqn:kappaquer2komma5} \prob{F_{n}}\leq\sum_{i=\left\lfloor \sqrt{n}\right\rfloor}^{\infty}\kappa^{(1)}_{2+2m}i^{-2-2m} \leq\left(\kappa^{(1)}_{2+2m}\sum_{i=1}^{\infty}i^{-2}\right)n^{-m}=:\kappa^{(6)}_{m}n^{-m}. \end{equation} A simple Borel-Cantelli argument shows that the flow cannot contract a non-trivial set to a point, i.e.\ $d_t$ a.s.\ does not converge to zero as $t\rightarrow\infty$. \subsubsection{Getting Estimates On The Tails Of $\tau^{R}(\gamma,P)$} First let $r_t:=\textnormal{dist}{(\gamma_t,P)}$ and observe for $n\in\mathbb{N}$ \begin{equation} \prob{\tau^{R}(\gamma,P)>n}\leq\prob{F_{n}}+\mathbb{P}[F_{n}^{C},\bigcap_{i=\left\lfloor \sqrt{n} \right\rfloor}^{n}\left\{r_{i}>R\right\}]=:I+\mathit{II}.\label{eqn:kappaquer1}\end{equation} $I$ is already treated, so only $\mathit{II}$ is left. 
For arbitrary $\delta>0$ and $n\geq4\vee4(r_{0}-R)\delta^{-1}$ we can estimate {\fontsize{10pt}{2ex}\[ \hspace*{-2cm}\mathit{II}\leq\prob{ \bigcap_{i=\left\lfloor \sqrt{n} \right\rfloor}^{n}\left\{r_{i}>R\right\},\bigcap_{i=\left\lfloor\sqrt{n}\right\rfloor}^{\infty}\left\{d_{i}\geq1\right\} } \hspace{5.17cm}\]\[ =\prob{(r_{\left\lfloor \sqrt{n}\right\rfloor}-r_{0})+\sum_{i=\left\lfloor \sqrt{n}+1\right\rfloor}^{n}\left(r_{i}-r_{i-1}\right)>R-r_{0},\bigcap_{i=\left\lfloor \sqrt{n} \right\rfloor}^{n-1}\left\{r_{i}>R\right\},\bigcap_{i=\left\lfloor\sqrt{n}\right\rfloor}^{\infty}\left\{d_{i}\geq1\right\}} \hspace{0.99cm}\]\[ \leq\prob{\eta^{(n)}\geq\frac{\delta n}{8}}\hspace{11.2cm}\]\[ +\prob{\sum_{i=\left\lfloor \sqrt{n}+1\right\rfloor}^{n}\left[(r_{i}-r_{i-1})\indb{d_{i-1}\geq1,r_{i-1}>R}-\delta\ind{\left\{d_{i-1}<1\right\}\cup\left\{r_{i-1}\leq R\right\}}+\delta\right]\geq\frac{\delta}{4}(n-\left\lfloor\sqrt{n}\right\rfloor)}\]\begin{equation} =:\mathit{III}+\mathit{IV}. \hspace{11.4cm}\label{eqn:kappaquer2} \end{equation}} Therein $\eta^{(n)}:=r_{\left\lfloor \sqrt{n}\right\rfloor}-r_{0}$ is used. The term $\mathit{III}$ can be estimated by the growth of a Brownian motion. Choose $z\in\gamma$ with $\left\|z-P\right\|=r_{0}$. Then we have \begin{equation} \mathit{III}\leq\prob{\left\| z_{\left\lfloor \sqrt{n}\right\rfloor}-z_{0} \right\|\geq\frac{\delta n}{8}}\leq\kappa^{(3)}_{m}n^{-m}\label{eqn:kappaquer3} \end{equation} for suitable $\kappa^{(3)}_{m}\in \mathbb{R}$. The estimation of $\mathit{IV}$ applies Lemma~\ref{le:mmb} again. For $\delta>0$ and $n\geq4\vee4(r_{0}-R)\delta^{-1}$ observe \begin{equation} \label{eq:badsum} \mathit{IV}\leq\prob{\sum_{i=\left\lfloor \sqrt{n}+1\right\rfloor}^{n}\xi_{i}^{(C_{10},\delta)} \geq(n-\left\lfloor \sqrt{n}\right\rfloor)\frac{\delta}{4}}. 
\end{equation} Therein for $C_{10}>0$ and $i\in\mathbb{N}$ we set\\ $\xi_{i}^{(C_{10},\delta)}:=\left[\left(r_{i}\vee (r_{i-1}-C_{10})-r_{i-1}\right)\indb{d_{i-1}\geq1,r_{i-1}>R}-\delta\ind{\left\{d_{i-1}<1\right\}\cup\left\{r_{i-1}\leq R\right\}}+\delta\right].$ The sequel aims at showing that $(\xi_{i}^{(C_{10},\delta)}:i\in\mathbb{N})$ for suitable $C_{10}$ and $\delta$ satisfies the assumptions of Lemma~\ref{le:mmb}. Afterwards this lemma, together with a treatment of the fact that some terms $\xi_{i}^{(C_{10},\delta)}$ are missing in the last sum (which makes Lemma~\ref{le:mmb} not directly applicable to (\ref{eq:badsum})), will complete the proof. Therefore we have to show: $\expec{\left|\xi_{i}^{(C_{10},\delta)}\right|^{m}}\leq K_{m}<\infty$ for any $m$ and uniformly in $i$. \begin{equation} \expec{\left|\xi_{i}^{(C_{10},\delta)}\right|^{m}}\leq2^m\delta^m+2^mC_{10}^m+2^m\expec{\left(r_{i}-r_{i-1}\right)^{m}\indb{d_{i-1}\geq1,r_{i}>r_{i-1}}}\nonumber. \end{equation} Since $\gamma$ cannot move away from $P$ without its nearest point (w.r.t.\ $P$) doing so, we can estimate the above as follows: Let $z\in\gamma$ such that $\left\|z_{i-1}-P\right\|=r_{i-1}$ and consider \begin{equation} \expec{\left(r_{i}-r_{i-1}\right)^{m}\indb{d_{i-1}\geq1,r_{i}>r_{i-1}}}\leq\expec{|z_i-z_{i-1}|^m} =\expec{\left|\mathcal{N}(0,1)^{\otimes d}\right|^{m}}. \end{equation} Therefore we can choose $K_{m}=K_{m}(C_{10},\delta):=2^{m}\delta^{m}+2^{m}C_{10}^{m}+2^{m}\expec{\left|\mathcal{N}(0,1)^{\otimes d}\right|^{m}}$ and it only remains to show that there are $C_{10}>0$ and $\delta>0$ such that $\ce{\xi_{i}^{(C_{10},\delta)}}{\xi_{i-1}^{(C_{10},\delta)}\ldots\xi_{1}^{(C_{10},\delta)}}$ is nonpositive. ($\expec{\left|\mathcal{N}(0,1)^{\otimes d}\right|^{m}}$ here simply denotes the $m$th moment of the norm of a $d$-dimensional standard normal vector.) Therefore it is sufficient to show $\ce{\xi_{i}^{(C_{10},\delta)}}{\rule[2mm]{0mm}{2mm}\mathcal{F}_{i-1}}\leq0$ for suitable $C_{10}$ and $\delta$. 
On $\left\{d_{i-1}<1\right\}$ and on $\left\{r_{i-1}\leq R\right\}$ this is evident. On $\left\{d_{i-1}\geq1, r_{i-1}>R\right\}$ we use Lemma~\ref{le:lokdrift}. Because of 2. there is $0.5\geq\rho>0$ with $G''(1,\rho)=:2\delta>0$. 1. yields the existence of $C_{10}>0$ and $\hat{r}>0$ such that we have for $r>\hat{r}$: $G'(C_{10},1,r)<\delta$. Now choose $x,y\in\gamma$ with $\left\|x_{i-1}-P\right\|=r_{i-1}$ and $\left\|y_{i-1}-x_{i-1}\right\|=\rho$. With 3. we conclude ($\tau\equiv i-1$ and $z\equiv P$) that for $l_{1}:=\left\|x_{i-1}-P\right\|\wedge\left\|y_{i-1}-P\right\|$ and $l_{2}:=\left\|x_{i}-P\right\|\wedge\left\|y_{i}-P\right\|$ we have \begin{align} &\ce{\left(r_i\vee(r_{i-1}-C_{10})-r_{i-1}\right)\indb{r_{i-1}>R,d_{i-1}\geq1}}{\mathcal{F}_{i-1}}\nonumber\\ \leq&\ce{\left(l_2\vee(l_1-C_{10})-l_1\right)}{\mathcal{F}_{i-1}}\indb{r_{i-1}>R,d_{i-1}\geq1}\nonumber\\ \leq& \left(G'\left(C_{10},1,r_{i-1}\right)-G''(1,\rho)\right)\indb{r_{i-1}>R,d_{i-1}\geq1} \leq-\delta\indb{r_{i-1}>R,d_{i-1}\geq1},\nonumber \end{align} provided we choose $R:=\hat{r}$ (which we do). So we can apply Lemma~\ref{le:mmb} to $(\xi_{i}^{(C_{10},\delta)}:i\in\mathbb{N})$ for these $C_{10}$ and $\delta$. We will abbreviate $\xi_{i}^{(C_{10},\delta)}$ as $\xi_{i}$. 
Fix $C_{10}$ and $\delta$ satisfying the assumptions of Lemma~\ref{le:mmb} and conclude for $n\geq4\vee4(r_{0}-R)\delta^{-1}\vee\left(16 C_{10}+1\right)^{2}\delta^{-2}$: For $m\in\mathbb{N}$ there is $\kappa^{(4)}_{m}\in\mathbb{R}$ such that \begin{align} \mathit{IV}\leq&\prob{\sum_{i=\left\lfloor \sqrt{n}+1\right\rfloor}^{n}\xi_{i} \geq(n-\left\lfloor\sqrt{n}\right\rfloor)\frac{\delta}{4},\sum_{i=1}^{n}\xi_{i} \geq\frac{\delta n}{16}}\nonumber\\ &+\prob{\sum_{i=\left\lfloor \sqrt{n}+1\right\rfloor}^{n}\xi_{i} \geq(n-\left\lfloor\sqrt{n}\right\rfloor)\frac{\delta}{4},\sum_{i=1}^{\left\lfloor \sqrt{n}\right\rfloor}\xi_{i} \leq-\frac{\delta n}{16}}\nonumber\\ \leq&\prob{\sum_{i=1}^{n}\xi_{i} \geq\frac{\delta n}{16}}+\prob{\sum_{i=1}^{\left\lfloor \sqrt{n}\right\rfloor}\xi_{i} \leq-\frac{\delta n}{16}}\leq\kappa^{(4)}_{m}n^{-m}.\label{eqn:kappaquer4} \end{align} Observe that for $n>\left(16C_{10}\right)^{2}\delta^{-2}$ we have $\sum_{i=1}^{\left\lfloor \sqrt{n}\right\rfloor}\frac{\xi_{i}}{\left\lfloor\sqrt{n}\right\rfloor}\geq -C_{10}>-\frac{\delta \sqrt{n}}{16}$, so the last term actually vanishes. 
Combining the equations (\ref{eqn:kappaquer1}), (\ref{eqn:kappaquer2komma5}), (\ref{eqn:kappaquer2}), (\ref{eqn:kappaquer3}) and (\ref{eqn:kappaquer4}) yields for $n\geq4\vee4(r_{0}-R)\delta^{-1}\vee\left(16C_{10}+1\right)^{2}\delta^{-2}$: \begin{equation} \prob{\tau^{R}(\gamma,P)>n}\leq\kappa^{(6)}_{m}n^{-m}+\kappa^{(3)}_{m}n^{-m}+\kappa^{(4)}_{m}n^{-m}=:\kappa_m^{(5)}n^{-m}, \end{equation} which proves that for $m\in\mathbb{N}$ the choice\vspace{-1mm} \[\kappa^{(2)}_{m}:=\left[(\kappa^{(5)}_{m}\vee1)\sup_{r>1}\left(\frac{r}{\left\lfloor r\right\rfloor}\right)^{m}\right]\left[ 4\vee\left(\frac{16C_{10}+1}{\delta}\right)^{2}\vee\frac{4}{\delta}\right]<\infty \]\vspace{-1mm} is appropriate, completing the proof of Theorem~\ref{th:cor6}.\hfill$\Box$\newline \subsection{Linear Expansion And Stable Norm} The next two subsections follow closely the line of thought of~\cite{dkk}, although we cannot use their results directly. \subsubsection{Implications Of Theorem~\ref{th:cor6}} For collecting the following corollaries of Theorem~\ref{th:cor6} we let $$ \mathcal{W}_{t}(\gamma):=\bigcup_{0\leq s\leq t}\gamma_{s}\textnormal{ and } \mathcal{W}_{t}^{R}(\gamma):=\left\{ x\in\mathbb{R}^d: \textnormal{dist}\left(x,\mathcal{W}_{t}(\gamma)\right)\leq R\right\}. $$ \begin{corollary}\label{ko:wrt} There are positive constants $C_{11}$ and $R$, such that $\mathbb{P}\textnormal{-}a.s.$ we have for large $t$ (i.e.\ for all $t$ that are bigger than some a.s.\ finite random variable) that \begin{equation} K_{C_{11}t}(0)\subset\mathcal{W}_{t}^{R}(\gamma). \end{equation} $K_{r}(x)$ denotes the closed $r$-ball centered at $x$ as before. \end{corollary} Proof: Cover $K_{C_{11}t}(0)$ with balls of radius $R$. Due to Theorem~\ref{th:cor6} the probability that a fixed one of these balls has not been hit by $\gamma$ up to time $t$ decays faster than any power of $t$ if we choose $C_{11}$ small enough and $R$ large enough. 
Since the number of balls needed to cover $K_{C_{11}t}(0)$ only grows like $t^{d}$, the probability that any of these balls has not been hit up to time $t$ decays faster than any power of $t$ provided $R$ is sufficiently large and $C_{11}$ sufficiently small. So Corollary~\ref{ko:wrt} follows from the first Borel-Cantelli lemma.\hfill$\Box$\\ For the sequel fix $R>0$ large enough for Theorem~\ref{th:cor6} and Corollary~\ref{ko:wrt} to hold with this $R$. Assuming that $\gamma$ is large makes all the estimates of Theorem~\ref{th:cor6} uniform in $\gamma\in\mathcal{C}_{R}$ with $\mathcal{C}_{R}:=\left\{\gamma: \textnormal{diam}(\gamma)\geq1 ,\gamma\subset K_{2R}(0)\right\}$. (W.l.o.g.\ we assume $R>1$.) The following is immediate from Theorem~\ref{th:cor6}. \begin{corollary}\label{ko:ggi} The family of random variables $\left(\left(\frac{\tau^{R}(\gamma, tv)}{t}\right)^{k}\right)_{t\geq1,\left\|v\right\|=1,\gamma\in\mathcal{C}_{R}}$ is uniformly integrable for any $k\in\mathbb{N}$. \end{corollary} \subsubsection{The Stable Norm} Set $\left|v\right|^{R}:=\sup_{\gamma\in\mathcal{C}_{R}}\expec{\tau^{R}(\gamma,v)}$, which due to the isotropic properties of the flow does not depend on the direction of $v$. We obviously have \begin{equation}\label{eq:suba1} \expec{\tau^{2R}\left(\gamma,(t_{1}+t_{2})v\right)}\leq \expec{\tau^{R}\left(\gamma,t_{1}v\right)}+\sup_{\check{\gamma}\in\mathcal{C}_{R}}\expec{\tau^{R}\left(\check{\gamma},t_{2}v\right)}. \end{equation} With Theorem~\ref{th:cor6} we get in addition \begin{equation}\label{eq:suba2} \expec{\tau^{R}\left(\gamma,(t_{1}+t_{2})v\right)}\leq\expec{\tau^{2R}\left(\gamma,(t_{1}+t_{2})v\right)}+C_{12} \end{equation} for some constant $C_{12}>0$. Combining (\ref{eq:suba1}) and (\ref{eq:suba2}) yields the subadditivity of $t\mapsto \left|tv\right|^{R}+C_{12}$. 
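Explicitly, writing $a_{t}:=\left|tv\right|^{R}+C_{12}$ and taking the supremum over $\gamma\in\mathcal{C}_{R}$, (\ref{eq:suba2}) and (\ref{eq:suba1}) combine to the claimed subadditivity (a one-line check under the definitions above):

```latex
a_{t_{1}+t_{2}}
=\sup_{\gamma\in\mathcal{C}_{R}}\expec{\tau^{R}\left(\gamma,(t_{1}+t_{2})v\right)}+C_{12}
\leq\sup_{\gamma\in\mathcal{C}_{R}}\expec{\tau^{2R}\left(\gamma,(t_{1}+t_{2})v\right)}+2C_{12}
\leq\left(\left|t_{1}v\right|^{R}+C_{12}\right)+\left(\left|t_{2}v\right|^{R}+C_{12}\right)
=a_{t_{1}}+a_{t_{2}}.
```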
Using Fekete's lemma we conclude that $\left\|v\right\|^{R}:=\lim_{t\rightarrow\infty}(\left|tv\right|^{R}+C_{12})t^{-1}=\lim_{t\rightarrow\infty}\left|tv\right|^{R}t^{-1}$ is well-defined, i.e.\ the limit exists and equals $\inf_{t>0} (\left|tv\right|^{R}+C_{12})t^{-1}$. Since $|v|^{R}$ only depends on $\left\|v\right\|$ and since it is increasing with respect to this argument, we get (again from the isotropy of the flow) that $\left\|s v_{1}+(1-s)v_{2}\right\|^{R}\leq s\left\|v_{1}\right\|^{R}+(1-s)\left\|v_{2}\right\|^{R}$. Set $\mathcal{B}:=\{v\in\mathbb{R}^{d}:\left\|v\right\|^{R}\leq1\}$ and observe that $\mathcal{B}$ is a compact convex set (see Lemma~\ref{le:subgauss}). Corollary~\ref{ko:wrt} shows $\left\|v\right\|^{R}\neq0$ provided $v\neq0$. Of course the isotropic properties of the flow imply that $\mathcal{B}$ is a ball centered at the origin. We will show later that its radius does not depend on $R$. First we can prove the following lemma. \begin{lemma}\label{le:usvor} For any $\gamma\in\mathcal{C}_{R}$ and $\epsilon>0$ there is $\mathbb{P}$-a.s.\ $T(\gamma,\epsilon)>0$, such that for $t>T(\gamma,\epsilon)$ we have: $(1-\epsilon)t\mathcal{B}\subset\mathcal{W}^{R}_{t}(\gamma)$. \end{lemma} Proof: We need to show that for $v$ with $\left\|v\right\|^{R}\leq1$ and $m\in\mathbb{N}$ there is $\kappa_m^{(7)}=\kappa_m^{(7)}(\epsilon)>0$, such that \begin{equation}\label{eq:hittail}\prob{\tau^{R}(\gamma,tv)\geq(1+\epsilon)t}\leq \kappa_m^{(7)}t^{-m}\end{equation} holds uniformly in $\gamma\in\mathcal{C}_{R}$ and $\left\|v\right\|^{R}\leq1$. All the estimates we made so far are uniform in $\left\|v\right\|^{R}=1$, because they do not depend on the direction of $v$. By definition of $\left\|.\right\|^{R}$ there is $\tilde{t}>0$ with $\expec{\tau^{R}(\gamma,tv)}\leq(1+\frac{\epsilon}{2})t$ for any $t\geq\tilde{t}$ and $\gamma\in\mathcal{C}_{R}$. 
Define the stopping time $\tau_{1}^{R}$ via \[\tau^{R}_{1}:=\inf\left\{t>0,\gamma_{t}\cap K_{R}(\tilde{t}v)\neq\emptyset,\textnormal{diam}(\gamma_{t})\geq1\right\}.\] Denote by $\gamma^{(1)}$ a large connected subset of $\gamma_{\tau_{1}^{R}}$ which is contained in $K_{2R}(\tilde{t}v)$ and which has non-empty intersection with $K_{R}(\tilde{t}v)$. We can choose it to be $\mathcal{F}_{\tau^{R}_{1}}$-measurable, which we do. Now define an increasing sequence of stopping times $\left(\tau_{i}^{R}:i\in\mathbb{N}\right)$ recursively via {\fontsize{10pt}{2ex} \begin{align} \label{eq:tautimes} \tau_{i}^{R}:=&\inf\left\{t>\tau_{i-1}^{R},\textnormal{diam}\left(\Phi_{\tau_{i-1}^{R},t}\left(\gamma^{(i-1)}\right)\right) \geq1, \Phi_{\tau_{i-1}^{R},t}\left(\gamma^{(i-1)}\right)\cap K_{R}(i\tilde{t}v)\neq\emptyset\right\}, \nonumber\\ \gamma^{(i)}:=&\Phi_{\tau_{i-1}^{R},\tau_{i}^{R}}\left(\gamma^{(i-1)}\right)\cap K_{2R}(i\tilde{t}v).\nonumber \end{align} } \hspace{-1.5mm}(If necessary we choose a subset of $\gamma^{(i)}$ as $\gamma^{(i)}$ to ensure that it is connected.) We have (putting $\tau_{0}^{R}\equiv0$) that $\tau^{R}(\gamma,n\tilde{t}v)\leq\sum_{j=1}^{n}(\tau_{j}^{R}-\tau_{j-1}^{R})$ (see Fig.~\ref{abb:uni}). \begin{figure}\centering \resizebox{6cm}{!}{\includegraphics{uniform.pdf}}\caption{line: direction of $v$, fat: $\gamma^{(i)}$, regular: rest of $\Phi_{\tau_{i}^{R}}\gamma$\label{abb:uni}}\end{figure} The strong Markov property, the isotropy of $\Phi$ and the definition of $\tilde{t}$ yield $\ce{\tau_{j}^{R}-\tau_{j-1}^{R}}{\mathcal{F}_{\tau_{j-1}^{R}}}\leq\left(1+\frac{\epsilon}{2}\right)\tilde{t}$. Due to Theorem~\ref{th:cor6} we can define $\xi_{j}:=\tau_{j}^{R}-\tau_{j-1}^{R}-\left(1+\frac{\epsilon}{2}\right)\tilde{t}$ and obtain that the sequence $\left(\xi_{i}:i\in\mathbb{N}\right)$ satisfies the assumptions of Lemma~\ref{le:mmb}. 
So we conclude \begin{align} \prob{\tau^{R}(\gamma,n\tilde{t}v)\geq(1+\epsilon)n\tilde{t}}\leq&\prob{\sum_{j=1}^{n}\tau_{j}^{R}-\tau_{j-1}^{R}\geq(1+\epsilon)n\tilde{t}}\nonumber\\ =\prob{\sum_{j=1}^{n}\xi_{j}\geq\frac{\epsilon}{2}n\tilde{t}} \leq\kappa_m^{(1)}n^{-m}=&\kappa_m^{(1)}\tilde{t}^{m}(n\tilde{t})^{-m} =:\kappa_m^{(7)}\left(n\tilde{t}\right)^{-m} \end{align} which implies that a.s.\ for any $\epsilon>0$ the inclusion $(1-\epsilon)n\tilde{t}\mathcal{B}\subset\mathcal{W}^{R}_{n\tilde{t}}(\gamma)$ fails to hold only a finite number of times. The definition $t^{\downarrow}:=\left\lfloor\frac{t}{\tilde{t}}\right\rfloor\tilde{t}$ implies $\lim_{t\rightarrow\infty}\frac{t^{\downarrow}}{t}=1$, so for all $\epsilon>0$ there is $\check{t}>0$ such that for $t\geq\check{t}$ we have $t^{\downarrow}\geq\frac{1-\epsilon}{1-\frac{1}{2}\epsilon}t$. For $t\geq\check{t}\vee \max\left\{n\in\mathbb{N}:(1-\frac{\epsilon}{2})n\tilde{t}\mathcal{B} \nsubseteq \mathcal{W}^{R}_{n\tilde{t}}(\gamma)\right\}\tilde{t}$ this finally proves that $(1-\epsilon)t\mathcal{B}\subset\left(1-\frac{\epsilon}{2}\right)t^{\downarrow}\mathcal{B}\subset\mathcal{W}^R_{t^{\downarrow}}\subset\mathcal{W}^R_t\textnormal{ a.s.}$, completing the proof of Lemma~\ref{le:usvor}.\hfill$\Box$\\ \subsection{Sweeping Lemma And Lower Bound - The $2$-Dimensional Case} In this subsection assume $d=2$. We will also assume that $\gamma$ is a curve (which we could have assumed before). In this case we have \begin{theorem}\label{sa:us} For any $\gamma\in\mathcal{C}_{R}$ and $\epsilon>0$ there is $\mathbb{P}$-a.s.\ $T(\gamma,\epsilon)>0$, such that for any $t>T(\gamma,\epsilon)$ the following holds $$(1-\epsilon)t\mathcal{B}\subset\mathcal{W}_{t}(\gamma).$$ This is part 1 of Theorem~\ref{th:main}. 
\end{theorem} (Note that we do not distinguish between the $T(\gamma,\epsilon)$ here and the $T(\gamma,\epsilon)$ of Lemma~\ref{le:usvor} because the two times are very close to each other, as we will see in the sequel.) The proof of Theorem~\ref{sa:us} depends, apart from Lemma~\ref{le:usvor}, on the following Sweeping Lemma, which will be proved after Theorem~\ref{sa:us}. \begin{lemma}\label{le:sweep} Let $\gamma$ be a large curve with $\textnormal{dist}(\gamma,P)\leq R$ (for an $R$ as defined before). Define $\tilde{\tau}=\tilde{\tau}^{R}(\gamma,P):=\tilde{\tau}(P):=\inf_{t>0}\{K_{R}(P)\subset\bigcup_{0\leq s\leq t}\gamma_{s}\}$. Then for $m\in\mathbb{N}$ there is $\kappa^{(8)}_m\in\mathbb{R}$ such that $\prob{\tilde{\tau}>t}\leq \kappa_m^{(8)}t^{-m}$ holds uniformly in $\gamma$.\end{lemma} Proof of Theorem~\ref{sa:us}: There is a positive integer $k$ such that for any $n\in\mathbb{N}$ $(1-\epsilon)n\mathcal{B}$ can be covered with $n^{2}k$ balls $\left\{K_{R}\left(P_{i}^{n}\right):i=1,\ldots,n^{2}k\right\}$ of radius $R$. By~\eqref{eq:hittail} the probability that one of these balls has not been hit by the (at the hitting time large) curve $\gamma$ up to time $(1-0.5\epsilon)n$ decays faster than any power of $n$. Due to Lemma~\ref{le:sweep} $\prob{\tilde{\tau}\left(P_i^n\right)-\tau^{R}\left(\gamma,P_i^n\right)\geq 0.5\epsilon n}$ decays faster than any power of $n$, too. So the probability that there is one among the balls $K_{R}\left(P_{i}^{n}\right)$ for $i\in\left\{1,\ldots,n^{2}k\right\}$ that is not completely included in $\mathcal{W}_{n}$ at time $n$ decays faster than any power of $n$, which proves Theorem~\ref{sa:us} because we have for large $t$ that $(1-2\epsilon)t\mathcal{B}\subset(1-\epsilon)\left\lfloor t\right\rfloor\mathcal{B}\subset\mathcal{W}_{\left\lfloor t\right\rfloor}\subset\mathcal{W}_{t}$.\hfill$\Box$\\ Proof of Lemma~\ref{le:sweep}: The proof consists of six steps. These are carried out similarly to a proof in \cite{dkk}. 
\subsubsection{Localizing Of Lemma~\ref{le:sweep}} Assume we can prove the following: For any $Q\in K_{R}(P)$ there is an open superset $U_{Q}$ of $Q$, such that with $\tilde{\tau}_{Q}:=\inf_{t>0}\left\{U_{Q}\subset\cup_{0\leq s\leq t}\gamma_{s}\right\}$ the following holds. For $m\in\mathbb{N}$ there is $\kappa_m^{(9)}\in\mathbb{R}$, such that \begin{equation}\label{eq:lokal}\prob{\tilde{\tau}_{Q}>t}\leq \kappa_m^{(9)} t^{-m}\end{equation} holds uniformly for large curves $\gamma$ which have a non-empty intersection with $K_{R}(P)$. Since the covering of $\overline{K_{R}(P)}$ requires only finitely many of the $U_{Q}$, Lemma~\ref{le:sweep} then holds because of $\left\{\tilde{\tau}>t\right\}\subset \left\{\tilde{\tau}_{Q}>t \textnormal{ for one of these } Q \right\}$. \subsubsection{Definition Of A Small Square} Set $(q_{1},q_{2}):=Q\in K_{R}(P)$ and consider the following elements of the RKHS $\mathcal{H}$ of $\Phi$: \begin{eqnarray} V_{1}^{i}(.)&:=&\int b^{ij}(.-y)d\delta_{Q}\otimes\delta_{1}(y,j)=b^{i,1}(.-Q):i=1,2;\nonumber\\ V_{2}^{i}(.)&:=&\int b^{ij}(.-y)d\delta_{Q}\otimes\delta_{2}(y,j)=b^{i,2}(.-Q):i=1,2. \end{eqnarray} We have $w_{1}:=V_{1}(Q)=b^{.1}(0)=\left(\begin{array}{c}1\\0\end{array}\right)$ and $w_{2}:=V_{2}(Q)=b^{.2}(0)=\left(\begin{array}{c}0\\1\end{array}\right)$. Lemma~\ref{le:isoko} implies the Taylor expansion \\ $\left(V_{1}(x),V_{2}(x)\right)=E_{2}+O\left(\left\|x-Q\right\|^{2}\right):(x-Q\rightarrow0)$. So there are $C_{13}>0$ and $\delta>0$ such that we have $\left\|V_{1/2}(x)-w_{1/2}\right\|\leq C_{13} \left\|x-Q\right\|^{2}$ for $\left\|x-Q\right\|<\delta$.
This implies that for $n\in\mathbb{N}$ there is $\epsilon>0$ such that \[U_{Q}^{n,\epsilon}:=\left]q_{1}-n\epsilon,q_{1}+n\epsilon\right[\times\left]q_{2}-n\epsilon,q_{2}+n\epsilon\right[\subset\left\{y\in\mathbb{R}^{2}:\left\|V_{1/2}(y)-w_{1/2}\right\|\leq\epsilon\right\},\] because for $\epsilon\leq2^{-0.5}n^{-1}\delta\wedge\left(2C_{13}n^{2}\right)^{-1}$ and $x:=(x_{1},x_{2})\in U_{Q}^{n,\epsilon}$ we get \[ \left\|V_{1}(x)-w_{1}\right\|\vee\left\|V_{2}(x)-w_{2}\right\|\leq C_{13}\left[(x_{1}-q_{1})^{2}+(x_{2}-q_{2})^{2}\right]\leq 2C_{13}n^{2}\epsilon^{2}\leq\epsilon. \] Note that this still holds if we decrease $\epsilon$ (for a fixed $n$). Define \begin{align} \tilde{U}_{Q}^{n}:=&\left]q_{1}-\frac{n\epsilon}{2},q_{1}+\frac{n\epsilon}{2}\right[\times\left]q_{2}-\frac{n\epsilon}{2},q_{2}+\frac{n\epsilon}{2}\right[,\nonumber\\ t_{u}^{n}:=&\frac{n\epsilon}{2}\left(\sup_{z\in U_{Q}^{n,\epsilon}}\left(\left\|V_{1}(z)\right\|\vee\left\|V_{2}(z)\right\|\right)\right)^{-1}>0.\nonumber \end{align} We may assume $t_{u}^{n}\geq3^{-1}n\epsilon$ as well as $\epsilon\leq 102^{-1}$ (otherwise choose a smaller $\epsilon$). Denote by $\psi^{(i)}_{st}(x)$ for $i=1,2$ the deterministic flow defined as the solution of the control problem $\psi^{(i)}_{st}(x)=x+\int_{s}^{t}V_{i}\left(\psi^{(i)}_{sr}(x)\right)dr$. \begin{proposition}\label{be:nospeed} For $t\leq t_{u}^{n}$, $z\in\tilde{U}^{n}_{Q}$ we have $\left\|\psi^{(1/2)}_{0,t}(z)-z-tw_{1/2}\right\|\leq \epsilon t$.
\end{proposition} Proof of Proposition~\ref{be:nospeed}: For $z\in U^{n,\epsilon}_Q$ we have $\left\|V_{1/2}(z)\right\|\leq 0.5\,n\epsilon \left(t_{u}^{n}\right)^{-1}$, which implies for $z\in\tilde{U}_{Q}^{n}$ that \begin{equation} \inf_{t>0}\left\{\left\|\psi_{0t}^{(i)}(z)-z\right\|>\frac{n\epsilon}{2}\right\}\geq t_{u}^{n}\quad\Rightarrow\quad \inf_{t>0}\left\{\psi_{0t}^{(i)}(z)\notin U_{Q}^{n,\epsilon} \right\}\geq t_{u}^{n}, \end{equation} which proves Proposition~\ref{be:nospeed} because we have for $z\in \tilde{U}^n_Q$ that \begin{eqnarray} \left\|\psi_{0t}^{(i)}(z)-z-tw_{i}\right\|\leq\int_{0}^{t}\left\|V_{i}\left(\psi_{0s}^{(i)}(z)\right)-w_{i}\right\|ds\leq\epsilon t. \end{eqnarray} \hfill$\Box$\\ Considering the coordinates $Z:=(Z_{1},Z_{2}):\tilde{U}_{Q}^{102}\rightarrow]-51,51[^{2}$ generated by the vector fields $\left(\epsilon w_{i}:i=1,2\right)$, we choose $U_{Q}:=Z^{-1}\left(]-1,1[^{2}\right)$. \subsubsection{From Large To Positive Probability} As we will see, it suffices to show that there is $0<\theta<1$ such that for a large curve with a non-empty intersection with $K_{R}(P)$ we have uniformly in $Q\in K_{R}(P)$ that \begin{equation}\label{eq:cp} \cp{\tilde{\tau}_{Q}<t_{j}}{\tilde{\tau}_{Q}>t_{j-1}}\geq \theta.
\end{equation} Therein for a $T>0$ (to be specified later) let $t_{0}:=0$ and for $j\in\mathbb{N}$ define \[t_{j}:=\inf\left\{t\in\mathbb{R}:t\geq t_{j-1}+1+T,\ \gamma_{t}\cap K_{R}(P)\neq\emptyset,\textnormal{diam}(\gamma_{t})\geq1 \right\}.\] By Theorem~\ref{th:cor6} there are $C_{14}>0$ and, for $m\in\mathbb{N}$, $\kappa_m^{(10)}\in\mathbb{R}$ such that for $j\in\mathbb{N}$ we have $\prob{t_{j}>C_{14}j}\leq \kappa_m^{(10)}j^{-m}$, which implies \begin{align} &\prob{\tilde{\tau}_{Q}>t} \leq\prob{t_{\left\lfloor\frac{t}{C_{14}}\right\rfloor}>t}+\prob{\tilde{\tau}_{Q}> t_{\left\lfloor\frac{t}{C_{14}}\right\rfloor}}\nonumber\\ \leq&\prob{t_{\left\lfloor\frac{t}{C_{14}}\right\rfloor}>C_{14} \left\lfloor\frac{t}{C_{14}}\right\rfloor}+\cp{\tilde{\tau}_{Q}> t_{\left\lfloor\frac{t}{C_{14}}\right\rfloor}}{\tilde{\tau}_{Q}>t_{\left\lfloor\frac{t}{C_{14}}\right\rfloor-1}}\prob{\tilde{\tau}_{Q}>t_{\left\lfloor\frac{t}{C_{14}}\right\rfloor-1}}\nonumber\\ \leq&\prob{t_{\left\lfloor\frac{t}{C_{14}}\right\rfloor}>C_{14} \left\lfloor\frac{t}{C_{14}}\right\rfloor}+\ldots \leq\kappa_m^{(10)}\left\lfloor\frac{t}{C_{14}}\right\rfloor^{-m}+(1-\theta)^{\left\lfloor\frac{t}{C_{14}}\right\rfloor} \leq \kappa_m^{(9)}t^{-m}\nonumber \end{align} for suitable $\kappa_m^{(9)}\in\mathbb{R}$. So it remains only to prove (\ref{eq:cp}). For this it is enough to show that for $\gamma$ (as before) there are $T>0$ and $\theta>0$ (not depending on the chosen $\gamma$) such that we have uniformly in $Q\in K_{R}(P)$ that \begin{equation}\label{eq:prob} \prob{U_{Q}\subset\bigcup_{0\leq s\leq T}\gamma_{s}}\geq\theta. \end{equation} \subsubsection{Approaching The Small Square} Let $\hat{U}_{Q}:=Z^{-1}\left(]-7,7[^{2}\right)$. Obviously $U_{Q}\subset\hat{U}_{Q}$. Choose $x$ and $y$ in $\gamma$ with $\left\|x-P\right\|\leq R$ and $\left\|x-y\right\|\geq 0.5$.
Due to the Lemmas~\ref{le:eigenwerte} and \ref{le:korrfunk} the eigenvalues of $\bar{b}^{2}(z):=\bar{b}^{*}(z)\bar{b}(z)$ are bounded below by a positive constant $C_{4}$ on $\left\{\left\|z\right\|\geq\delta\right\}$ for arbitrary $\delta>0$. The boundedness of the correlation functions gives an upper bound $C_{5}$. Therefore the $\mathbb{R}^{4}$-valued semimartingale \[\left\{\left(\begin{array}{l}x_{t}-\left(x+2t(Q-x)\right)\\y_{t}-\left(y+2t(Q-x)\right) \end{array}\right):t\in\left[0,\frac{1}{2}\right]\right\}\] satisfies the assumptions of~\cite[Lemma 2.4]{ss}. So this lemma yields for $t=0.5$ and $\delta=0.5\epsilon$ ($C_{4}$, $C_{5}$ and $C_6$ can be chosen to be independent of $x$ and $y$): \begin{equation}\prob{x_{\frac{1}{2}}\in U_{Q},y_{\frac{1}{2}}\notin \hat{U}_{Q}}\geq p, \end{equation} where $p>0$ does not depend on the special choice of $\gamma$, because $\epsilon\leq(56\sqrt{2})^{-1}$ implies $\textnormal{diam}\hat{U}_{Q}\leq0.25$. Denote by $\hat{\hat{\gamma}}$ the subcurve of $\gamma$ between $x_{0.5}$ and $y_{0.5}$, and by $\hat{\gamma}$ a minimal subcurve of $\hat{\hat{\gamma}}$ which is contained in $\hat{U}_{Q}$ and which links $\partial\hat{U}_{Q}$ to $\partial U_{Q}$ (minimal means that no proper subcurve has these properties). Due to the minimality of $\hat{\gamma}$, the set $\hat{\gamma}\cap \partial \hat{U}_{Q}$ consists of a single point which we will denote by $z$. $\partial\hat{U}_{Q}$ consists of four pieces. Without loss of generality assume $z\in Z^{-1}\left(\{-7\}\times [-7,7]\right)$ (the other cases are similar). Let $\tilde{\gamma}$ be the minimal subcurve of $\hat{\gamma}$ linking $z$ with $Z^{-1}(\{-1\}\times[-7,7])$ and $\tilde{y}:=\tilde{\gamma}_{0.5}\cap Z^{-1}(\{-1\}\times[-7,7])$ the intersection point (see Fig.~\ref{abb:vorsweep}).
\begin{figure}\centering \resizebox{3cm}{!}{\includegraphics{sweep2.pdf}}\caption{$\tilde{\gamma}$ (fat), rest of $\hat{\gamma}$ (regular); the endpoints of $\tilde{\gamma}$ are $z$ (left) and $\tilde{y}$ (right). \label{abb:vorsweep}}\end{figure} We have to show that there exist $T>0$ and $\theta>0$ such that for any curve $\tilde{\gamma}\subset Z^{-1}\left([-7,7]\times[-7,7]\right)$ linking $Z^{-1}(\{-7\}\times[-7,7])$ to $Z^{-1}(\{-1\}\times[-7,7])$ the following holds: \begin{equation}\label{eq:tilga}\prob{U_{Q}\subset\cup_{0\leq s\leq T}\tilde{\gamma}_{s}}\geq\theta.\end{equation} \subsubsection{Reduction To A Control Problem} For an $\mathcal{H}$-simple control $V$ denote by $\psi_{s}^{(V)}(x)$ the solution to the control problem $$\left\{ \begin{array}{ccc} \partial_{t}\psi_{t}^{(V)}(x)&=&V\left(\psi^{(V)}_{t}(x)\right)\\ \psi_{0}^{(V)}(x)&=&x \end{array}\right..$$ Assume we can construct an $\mathcal{H}$-simple control $V$ with the following property: If $\Psi(.,.)$ is a continuous mapping from $\left[0,T\right]\times\mathbb{R}^{2}$ to $\mathbb{R}^{2}$ which satisfies \begin{equation}\label{eq:sweepapp} \left|Z\left(\Psi(s,x)\right)-Z\left(\psi^{(V)}_{s}(x)\right)\right|<\frac{1}{2}\end{equation} for $x\in\tilde{\gamma}$ and $s\in[0,T]$, then we also have $U_{Q}\subset\bigcup_{x\in\tilde{\gamma}}\bigcup_{0\leq s\leq T}\Psi(s,x)$. Then Theorem~\ref{sa:ibfsupp} applied to the intervals of constancy of $V$ and the independence properties of Brownian flows prove (\ref{eq:tilga}). Let $\tilde{\gamma}=\left\{\tilde{\gamma}(u):u\in[0,1]\right\}$ be a parametrization of $\tilde{\gamma}$ and $\tilde{\psi}(s,u):=\psi^{(V)}_{s}\left(\tilde{\gamma}(u)\right)$ as well as $\tilde{\Psi}(s,u):=\Psi(s,\tilde{\gamma}(u))$.\\ ($\bar{\psi}$ and $\bar{\Psi}$ are defined similarly with $\tilde{\gamma}$ replaced by $\bar{\gamma}$, see its definition below.)
We want to construct an $\mathcal{H}$-simple control $V$ that implies for any $\Psi$ fulfilling (\ref{eq:sweepapp}) that $U_{Q}\subset \cup_{\frac{7}{17}T\leq s\leq T}\cup_{0\leq u\leq1}\tilde{\Psi}(s,u)$.\\ Set $\bar{\gamma}:=\partial\left(\left[\frac{7}{17}T,T\right]\times\left[0,1\right]\right)$. $V$ is supposed to yield for $\tilde{Q}\in U_{Q}$ that \begin{equation}\label{eq:index}\textnormal{ind}\left(\bar{\Psi},\tilde{Q}\right)=1.\end{equation} Therein we denote by $\textnormal{ind}\left(\bar{\Psi},\tilde{Q}\right)$ the winding number of $\bar{\Psi}$ around $\tilde{Q}$. To show (\ref{eq:index}) for all $\tilde{\Psi}$ with \begin{equation}\label{eq:indexvor}\left\|Z(\tilde{\Psi}(.,.))-Z(\tilde{\psi}(.,.))\right\|_{\infty}\leq 0.5\end{equation} we construct $V$ in a way that provides for $\tilde{Q}\in U_{Q}$ the following: \begin{equation}\label{eq:index2} \textnormal{ind}\left(\bar{\psi},\tilde{Q}\right)=1 \textnormal{ and } \textnormal{dist}\left(Z\left(\bar{\psi}\right),Z(U_{Q})\right)\geq 1. \end{equation} Note that (\ref{eq:index2}) implies that for any $\tilde{\Psi}$ satisfying (\ref{eq:indexvor}) we indeed get (\ref{eq:index}). ($\tilde{\psi}$ sweeps the entire set $Z^{-1}\left([-2,2]^{2}\right)$.) \subsubsection{Construction Of A Sweeping Control} Consider the following $\mathcal{H}$-simple control $V:[0,T]:=[0,34\epsilon]\rightarrow\mathcal{H}$: \begin{equation} V(.,t):=\left\{ \begin{array}{rl} -V_{2}(.)&:t\in[0,10\epsilon[\\ V_{1}(.)&:t\in[10\epsilon,14\epsilon[\\ V_{2}(.)&:t\in[14\epsilon,34\epsilon] \end{array}\right..
\end{equation} This control satisfies all our wishes (see Fig.~\ref{abb:sweep}).\begin{figure}\centering \resizebox{4cm}{!}{\includegraphics{sweep.pdf}}\caption{$\gamma$ (curve), $\partial U_{Q}$ and $Z^{-1}\left(\{-1\}\times[-7,7]\right)$ (dashed), $Z^{-1}([-2,2]^{2})$ and $\partial\hat{U}_{Q}$ (black), way of $\gamma$ along the control (gray) \label{abb:sweep}}\end{figure} Note that $34\epsilon=\frac{102}{3}\epsilon\leq t^{102}_{u}$ ensures that Proposition~\ref{be:nospeed} applies for $t\leq34\epsilon$. So we get for $t\leq34\epsilon$, $z\in\hat{U}_{Q}$ and $i=1,2$ that $\left\|\psi^{(i)}_{0t}(z)-z-tw_{i}\right\|\leq 34\epsilon^{2}\leq\frac{\epsilon}{3}$, which gives the following estimates for $x\in\tilde{\gamma}$. \begin{align} -7&\leq& Z_{1}(x)&\leq& -1;&&&-7&\leq& Z_{2}(x)&\leq& 7&:&t=0\nonumber,\\ -7-\frac{1}{3}&\leq& Z_{1}(x_{t})&\leq& -1+\frac{1}{3};&&&-17-\frac{1}{3}&\leq& Z_{2}(x_{t})&\leq& -3+\frac{1}{3}&:&t=10\epsilon\nonumber,\\ -3-\frac{2}{3}&\leq& Z_{1}(x_{t})&\leq& 3+\frac{2}{3};&&&-17-\frac{2}{3}&\leq& Z_{2}(x_{t})&\leq& -3+\frac{2}{3}&:&t=14\epsilon\nonumber,\\ -3-1&\leq& Z_{1}(x_{t})&\leq& 3+1;&&&3-1&\leq& Z_{2}(x_{t})&\leq& 17+1&:&t=34\epsilon\nonumber. \end{align} Here we used that, as time elapses through each interval of constancy of $V$, every coordinate changes as if $Z$ were generated by $\epsilon V$, with an error of at most $3^{-1}$ (see Fig.~\ref{abb:sweep}). A similar argument applies to $z$ and $\tilde{y}$. \begin{align} &&Z_{1}(z)&=&-7;&&&&Z_{1}(\tilde{y})&=&-1&:&t=0\nonumber,\\ -7-\frac{1}{3}&\leq& Z_{1}(z_{t})&\leq& -7+\frac{1}{3};&&-1-\frac{1}{3}&\leq& Z_{1}(\tilde{y}_{t})&\leq& -1+\frac{1}{3}&:&t=10\epsilon\nonumber,\\ -3-\frac{2}{3}&\leq& Z_{1}(z_{t})&\leq& -3+\frac{2}{3};&&3-\frac{2}{3}&\leq& Z_{1}(\tilde{y}_{t})&\leq& 3+\frac{2}{3}&:&t=14\epsilon\nonumber,\\ -3-1&\leq& Z_{1}(z_{t})&\leq& -3+1;&&3-1&\leq& Z_{1}(\tilde{y}_{t})&\leq& 3+1&:&t=34\epsilon\nonumber.
\end{align} This means that for $t\in\left[14\epsilon,34\epsilon\right]$ $z_{t}$ is on the left and $\tilde{y}_{t}$ is on the right of $\check{U}$ which implies (\ref{eq:index2}) and completes the proof of Lemma~\ref{le:sweep}. \hfill$\Box$\\ \subsection{Dependence Of $\left\|v\right\|^{R}$ On $R$} From now on we leave the ideas of~\cite{dkk} and show the following directly: \begin{lemma}\label{le:const} If we define for $R>0$, $\tilde{R}\geq1$ and $v\in\mathbb{R}^d$ the quantity $\left\|v\right\|_{\tilde{R}}^{R}$ via $\left\|v\right\|^{R}_{\tilde{R}}:=\lim_{t\rightarrow\infty}\frac{ \sup_{\gamma\in\mathcal{C}_{\tilde{R}}}\expec{\tau^{R}(\gamma,tv)} }{t}$, then this limit exists and we have for arbitrary $R_{1}>0$, $R_{2}>0$ and $\tilde{R}_{1}\geq1$, $\tilde{R}_{2}\geq1$ that $\left\|.\right\|^{R_{1}}_{\tilde{R}_{1}}\equiv\left\|.\right\|^{R_{2}}_{\tilde{R}_{2}}$, i.e., $\left\|v\right\|^{R}$ does not depend on $R$. \end{lemma} Proof: Define for $t\geq 0$, $R>0$ and $\tilde{R}\geq1$ the function $\bar{g}=\bar{g}(R,\tilde{R},t)$ via $\bar{g}(R,\tilde{R},t):=\sup_{\gamma\in\mathcal{C}_{\tilde{R}}}\left\{\expec{\tau^{R}(\gamma, tv)} \right\}$. Here fix $v\in\mathbb{R}^d$ with $\left\|v\right\|=1$. As already seen there is $R>0$ such that \begin{equation}\label{eq:rrkonv} \left\|v\right\|_{R}^{R}:=\lim_{t\rightarrow\infty}\frac{\bar{g}(R,R,t)}{t} \end{equation} exists (the limit was named $\left\|v\right\|^{R}$). We will first prove that if we fix $R\geq1$ in a way that we have convergence in (\ref{eq:rrkonv}), then we have for arbitrary $\tilde{R}\geq1$ that \begin{equation}\label{eq:stattprop1} \left\|v\right\|_{\tilde{R}}^{R}=\lim_{t\rightarrow\infty}\frac{\bar{g}(R,\tilde{R},t)}{t}=\left\|v\right\|_{R}^{R}. \end{equation} \newpage Observe: \begin{enumerate} \item If $\tilde{R}\geq R$ then $\bar{g}(R,\tilde{R},t)\geq \bar{g}(R,R,t)$ is obvious.\\ Isotropy yields: $\bar{g}(R,\tilde{R},t)\leq\bar{g}(R,R,t+\tilde{R})$.
Since we have\\ $\expec{\tau^{R}\left(\gamma,(t+\tilde{R})v\right)}\leq\expec{\tau^{R}(\gamma,tv)}+C_{15}$ for a $C_{15}>0$ (uniformly chosen in $\gamma$), we get $\bar{g}(R,R,t)\leq \bar{g}(R,\tilde{R},t)\leq \bar{g}(R,R,t)+C_{15}.$ \item If $\tilde{R}<R$ we obtain similarly that $\bar{g}(R,\tilde{R},t)\leq \bar{g}(R,R,t)\leq\bar{g}(R,\tilde{R},t)+C_{15}$. \end{enumerate} Sending $t\rightarrow\infty$ proves~\eqref{eq:stattprop1} from the latter.\\ Now we will prove that $\left\|v\right\|^{R}_{1}$ exists for any $R>0$ and that we have: \begin{equation}\label{eq:stattprop2} \left\|.\right\|_{1}^{R}\equiv\left\|.\right\|_{1}^{\tilde{R}}. \end{equation} Without loss of generality assume that $\tilde{R}>R$. Then $\bar{g}(\tilde{R},1,t)\leq \bar{g}(R,1,t)$ is obvious. Additionally (in $\tau^{R}(\gamma,tv)$ one takes a subcurve if necessary) we have $\expec{\tau^{R}(\gamma,tv)}\leq\expec{\tau^{\tilde{R}}(\gamma,tv)}+\sup_{\check{\gamma}\in\mathcal{C}_{1}}\expec{\tau^{R}(\check{\gamma},\tilde{R}v)}$, so~\eqref{eq:stattprop2} follows via $t\rightarrow\infty$ from $\bar{g}(R,1,t)\leq\bar{g}(\tilde{R},1,t)+C_{16}$ for some $C_{16}>0$. The proof of Lemma~\ref{le:const} is complete.\hfill$\Box$\\ \section{The Upper Bound} \label{sec:ub} \subsection{The Speed Of A Slow Curve Asymptotically Has A Dirac Distribution On The Proper Time Scale} Let $R$ be as before and $v\in\ensuremath{\mathbb{R}}^d$ with $\left\|v\right\|^R=1$. Choose for any $t\in\left]0,\infty\right[$ a curve $\gamma^{(t)}\in\mathcal{C}_{R/2}$ such that $|\expec{\tau^R(\gamma^{(t)},tv)}-|tv|^R|t^{-1}\to0$. By the definition of $\left\|v\right\|^R$ we already know \begin{equation}\label{eq:expecconv} \left|\frac{ \expec{\tau^R(\gamma^{(t)},tv)} }{t}-\left\|v\right\|^R \right|\leq \left|\frac{\expec{\tau^R(\gamma^{(t)},tv)}-|tv|^R}{t}\right|+\left|\frac{|tv|^R}{t}-\left\|v\right\|^R \right| \rightarrow0. \end{equation} We will investigate the asymptotic law of $\tau^R(\gamma^{(t)},tv)$ in the following lemma.
\begin{lemma}\label{le:diracdist} Let $\left(X_t:t>0 \right)$ be a family of integrable random variables such that we have $\lim_{t\rightarrow\infty}\expec{X_t-\expec{X_t};X_t-\expec{X_t}>\delta}=0$ for any $\delta>0$. Then we also have for $\delta>0$ that $\lim_{t\rightarrow\infty}\prob{X_t-\expec{X_t}<-\delta}=0$. \end{lemma} Proof: Denote $X_t-\expec{X_t}$ by $Y_t$ and suppose that we can find $\delta>0$, $\epsilon>0$ and a sequence $(t_n)_{n\in\mathbb{N}}$ with $t_n\nearrow\infty$ such that for any $n$ we have $\prob{Y_{t_n}<-\delta}>\epsilon$. This immediately yields\\ $0=\expec{Y_{t_n}}=\expec{Y_{t_n};Y_{t_n}\leq-\delta} +\expec{Y_{t_n};Y_{t_n}\in(-\delta,\frac{\epsilon\delta}{2})}+\expec{Y_{t_n};Y_{t_n}\geq\frac{\epsilon\delta}{2}}, $ i.e., a contradiction for large $n$ because the right-hand side of the latter is strictly negative for large $n$.\hfill$\Box$\\ \begin{lemma} Let $R$ be as before and fix $v$ with $\left\|v\right\|^R=1$. Then for any $\delta>0, n,m\in\mathbb{N}$ there are $K_m^{(n)}(\delta)\in\ensuremath{\mathbb{R}}$ and $\tilde{t}_m^{(n)}(\delta)$ such that we have for all $t\geq0$ that \begin{equation} \expec{\left(\tau^R(\gamma^{(t)},tv)t^{-1}\right)^n;\tau^R(\gamma^{(t)},tv)t^{-1}>(1+\delta)}\leq K_m^{(n)}(\delta)t^{-m}\nonumber \end{equation} and for $t\geq\tilde{t}_m^{(n)}(\delta)$ that \begin{equation} \expec{\left(\tau^R(\gamma^{(t)},tv)t^{-1}\right)^n;\tau^R(\gamma^{(t)},tv)t^{-1}>\expec{\tau^R(\gamma^{(t)},tv)t^{-1}}+\delta}\leq K_m^{(n)}(\delta)t^{-m}.\nonumber \end{equation} \end{lemma} Proof: The first estimate follows from straightforward estimates using~\eqref{eq:hittail} and the second one is implied by the first one and \eqref{eq:expecconv}.\hfill$\Box$\\ The previous lemma implies that we can apply Lemma~\ref{le:diracdist} to\\ $X_t:=\tau^R(\gamma^{(t)},tv)t^{-1}$ to conclude that $X_t$ converges to $1$ in probability.
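For completeness, the estimates behind the contradiction in the proof of Lemma~\ref{le:diracdist} can be spelled out as follows (a sketch in the notation of that proof): \begin{align*} \expec{Y_{t_n};Y_{t_n}\leq-\delta}&\leq-\delta\,\prob{Y_{t_n}\leq-\delta}\leq-\delta\epsilon,\\ \expec{Y_{t_n};Y_{t_n}\in(-\delta,\tfrac{\epsilon\delta}{2})}&\leq\frac{\epsilon\delta}{2},\\ \expec{Y_{t_n};Y_{t_n}\geq\tfrac{\epsilon\delta}{2}}&\leq\expec{Y_{t_n};Y_{t_n}>\tfrac{\epsilon\delta}{4}}\longrightarrow0\quad(n\rightarrow\infty), \end{align*} where the last line uses the hypothesis with $\frac{\epsilon\delta}{4}$ in place of $\delta$. Summing the three terms gives $0=\expec{Y_{t_n}}\leq-\epsilon\delta+\frac{\epsilon\delta}{2}+o(1)<0$ for large $n$, the desired contradiction.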
\subsection{Time Reverse - Comparison Of Fast And Slow Curves} We will have to assume $d=2$ from now on (unless otherwise stated), since the following arguments strongly depend on the topology of the plane. \begin{theorem}\label{th:allesdirac} Let $\Gamma:=\partial K_R(0)$. There are $C_{20}>0$, $C_{17}>0$ and, for $m\in\mathbb{N}$, $\kappa_m^{(11)}\in\ensuremath{\mathbb{R}}$ such that we have for $T\geq\sqrt{t}$ and any $\gamma\in\mathcal{C}_{R/2}$ that \begin{equation} \kappa_m^{(11)}T^{-m}+\prob{\tau^{R}(\gamma,tv)\leq T+C_{17}}\geq C_{20}\prob{\tau^{R}(\Gamma,tv)\leq T}. \end{equation} \end{theorem} The proof of Theorem~\ref{th:allesdirac} uses the following lemmas. Denote by $\mathcal{C}_R^*$ the set of all large curves $\gamma$ with $\gamma\cap\partial K_R(0)\neq\emptyset$. \begin{lemma}\label{le:q1} There is a constant $C_{18}>0$ with\\ $ \inf_{\gamma\in\mathcal{C}_{R}^*}\inf_{t\geq C_{18}}\prob{\gamma_t\cap\partial K_{R}(0)\neq\emptyset;\textnormal{diam}(\gamma_t)\geq1}=:p_1>0. $ \end{lemma} Proof: Since for any constant $C_{18}$ the distance process $\textnormal{dist}(\gamma_{t-0.5C_{18}},0)$ of a long curve from the origin can be majorized by a stationary process, there is $\epsilon>0$ with $\inf_{\gamma\in\mathcal{C}_{R}^*}\inf_{t\geq C_{18}}\prob{\gamma_t\cap K_{R}(0)\neq\emptyset}\geq\epsilon$. Now choose $C_{18}$ large enough for $\inf_{\gamma\in\mathcal{C}_{R}^*}\inf_{t\geq C_{18}}\prob{\textnormal{diam}\gamma_t>3R}\geq1-\frac{\epsilon}{2}$ to hold. This directly implies \[ \inf_{\gamma\in\mathcal{C}_{R}^*}\inf_{t\geq C_{18}}\prob{\gamma_t\cap K_R(0)\neq\emptyset;\gamma_t\cap K_R(0)^C\neq\emptyset;\textnormal{diam}(\gamma_t)\geq1}\geq\frac{\epsilon}{2} \] and hence we get $p_1>0$, completing the proof of Lemma~\ref{le:q1}. \hfill$\Box$\\ \begin{lemma}\label{le:q2} There is $C_{19}>0$ with $\inf_{\bar{\gamma}\in\mathcal{C}_R^*}\inf_{\gamma\in\mathcal{C}_{R/2}}\prob{\bar{\gamma}_{C_{19}}\cap\gamma\neq\emptyset}=:p_2>0$.
\end{lemma} Proof: First write $t=t_{1}+t_{2}$ for some non-negative $t_{1}$, $t_{2}$ and observe \begin{align} &\prob{\gamma_{t}\cap\bar{\gamma}=\emptyset}&=&\prob{\Phi_{t_{1},t}\left(\Phi_{0,t_{1}}(\gamma)\right)\cap\bar{\gamma}=\emptyset}\nonumber\\ =&\prob{\Phi_{0,t_{1}}\left(\gamma\right)\cap\Phi_{t,t_{1}}\left(\bar{\gamma}\right)=\emptyset}&=&\left(\mathbb{P}\otimes\mathbb{P}\right)\circ\pi_{1}^{-1}\left[\Phi_{0,t_{1}}\left(\gamma\right)\cap\Phi_{t,t_{1}}\left(\bar{\gamma}\right)=\emptyset\right]\nonumber\\ =&\mathbb{P}\otimes\mathbb{P}\left[\Phi_{0,t_{1}}\left(\gamma\right)\cap\tilde{\Phi}_{t,t_{1}}\left(\bar{\gamma}\right)=\emptyset\right]&=&\mathbb{P}\otimes\mathbb{P}\left[\Phi_{0,t_{1}}\left(\gamma\right)\cap\tilde{\Phi}_{t_{1},t}\left(\bar{\gamma}\right)=\emptyset\right]\nonumber\\ =&\mathbb{P}\otimes\mathbb{P}\left[\Phi_{0,t_{1}}\left(\gamma\right)\cap\tilde{\Phi}_{0,t_{2}}\left(\bar{\gamma}\right)=\emptyset\right]. \end{align} Herein we denote by $\tilde{\Phi}$ an independent copy of $\Phi$ (for example defined on $\left(\Omega\times\Omega,\mathcal{F}\otimes\mathcal{F},\mathbb{P}\otimes\mathbb{P} \right)$). So we can (instead of having $\bar{\gamma}$ running $t$) split $t$ and let $\gamma$ run one part of the time and $\bar{\gamma}$ the rest of it. We already know that for sufficiently large $t_1$ the curve $\gamma_{t_1}$ is $\sqrt{t_1}$-dense in $K_{t_{1}^{0.9}}(0)$ with probability at least, say, $0.5$, uniformly in $\gamma$ (this probability converges to one as $t_1\rightarrow\infty$). Rename $\gamma_{t_1}$ to be a connected subcurve of $\gamma_{t_1}\cap K_{t_{1}^{0.9}}(0)$ of diameter $t_{1}^{0.8}$ that has distance not more than $\sqrt{t_1}$ from the origin. We may assume that the endpoints of the new $\gamma_{t_1}$ have distance $t_{1}^{0.8}$ from each other (which we do).
Note that $\gamma_{t_1}$ is contained in the intersection of the $t^{0.8}_1$-balls around its endpoints (see Fig.~\ref{abb:kurve}). \begin{figure}\centering \resizebox{6cm}{!}{\includegraphics{Kurve.pdf}}\resizebox{4cm}{!}{\includegraphics{Kurve2.pdf}} \caption{Definition of $\gamma_{t_1}$ via intersection of circles around its endpoints \label{abb:kurve}} \caption{ Adding two half lines to $\gamma_{t_1}$ divides $\ensuremath{\mathbb{R}}^2$ into black and white parts \label{abb:teilung}}\end{figure} By adding two half lines to $\gamma_{t_1}$ we cut the plane into two parts (see Fig.~\ref{abb:teilung}), say the black part and the white part. Now we fix a ball $\mathcal{K}$ of radius $t_1^{0.6}$ centered on the perpendicular bisector of the endpoints of $\gamma_{t_1}$ exactly one half of which is black (measured with Lebesgue measure). The existence of such a ball follows from a continuity argument and the fact that there are completely black and completely white balls centered there. Fix $t_1$ large enough for $t^{0.9}_1\gg t^{0.8}_1\gg t^{0.7}_1\gg t^{0.6}_1\gg t^{0.5}_1\gg1$ to hold. Observe now that any curve that links the black part to the white part of $\mathcal{K}$ without intersecting $\gamma_{t_1}$ must have diameter at least $t^{0.7}_1$. Of course all the choices above can be made $\mathcal{F}_{t_1}$-measurable. \\ Now it is $\bar{\gamma}$'s turn to do the rest within time $t_2=t_3+1$. Choose a point in $\gamma_{t_1}$ that has distance at least $t_1^{0.5}$ to the complement of $\mathcal{K}$ such that at least one third of its $2R$-neighbourhood is black and white respectively. Fix $t_3$ large enough that the probability of the event that $\tilde{\Phi}(\bar{\gamma}_{t_3})$ has distance to this point less than or equal to $R$ (and $\tilde{\Phi}(\bar{\gamma}_{t_3})$ is long) is at least $\epsilon$ uniformly in $\bar{\gamma}$.
So with probability at least $0.5\epsilon$ a point, say $x$, in $\tilde{\Phi}(\bar{\gamma}_{t_3})$ has an environment of diameter $3R$ at least one percent of which is white and black respectively. Choose another point, say $y$, in $\tilde{\Phi}(\bar{\gamma}_{t_3})$ at distance $1/2$ from $x$ such that the subcurve (denoted $\check{\gamma}$) of $\tilde{\Phi}(\bar{\gamma}_{t_3})$ linking $x$ and $y$ has diameter $1/2$, and observe now that the lemma follows from Theorem~\ref{th:posdens} because we can choose $t_1$ large enough for $\check{\gamma}$ not to reach $\mathcal{K}^C$ within the remaining time $1$ with sufficiently large probability. With $C_{19}:=t_1+t_3+1$ the proof is complete.\hfill$\Box$\\ \newpage \begin{lemma}\label{le:q3} There is $p_3>0 $ such that we have\\ $\prob{\tau^R(\Gamma,tv)\leq T}\leq \frac{1}{p_3}\prob{\tau^R(\Gamma,tv)\leq T;\textnormal{diam}(\Gamma_{T+C_{18}})\geq 1}$. \end{lemma} Proof: This is a direct consequence of the fact that the diameter of long curves uniformly has a chance to grow to infinity without being smaller than $1$ after time $C_{18}$.
If necessary we increase $C_{18}$ (without changing notation).\hfill$\Box$\\ Proof of Theorem~\ref{th:allesdirac}: First we use Lemma~\ref{le:q1} to estimate \begin{eqnarray} &&\prob{\Gamma_{T+C_{18}}\cap\partial K_R(tv)\neq\emptyset}\geq\prob{\tau^R(\Gamma,tv)\leq T;\textnormal{diam}(\Gamma_{T+C_{18}})\geq1}\nonumber\\ &\cdot&\cp {\textnormal{diam}(\Gamma_{T+C_{18}})\geq 1;\Gamma_{T+C_{18}}\cap\partial K_R(tv)\neq\emptyset}{\tau^R(\Gamma,tv)\leq T}\nonumber \end{eqnarray} which implies with Lemma~\ref{le:q3} \begin{align} &\prob{\tau^{R}(\Gamma,tv)\leq T}\leq \frac{1}{p_3}\prob{\tau^R(\Gamma,tv)\leq T;\textnormal{diam}(\Gamma_{T+C_{18}})\geq 1}\nonumber\\ \leq&\frac{1}{p_1p_3} \prob{\Gamma_{T+C_{18}}\cap\partial K_R(tv)\neq\emptyset}\nonumber\\ \leq&\frac{1}{p_1p_3}\left( \prob{\Gamma_{T+C_{18}}\cap\partial K_R(tv)\neq\emptyset;\textnormal{diam}(\Gamma_{T+C_{18}})\geq1}+\prob{\textnormal{diam}\Gamma_{T+C_{18}}<1 }\right).\nonumber \end{align} Using Lemma~\ref{le:reverse} and symmetry we obtain that $\prob{\tau^{R}(\Gamma,tv)\leq T}p_1p_3 $ is at most \begin{align} &\prob{\Gamma\cap\partial K_R(tv)_{T+C_{18}}\neq\emptyset;\textnormal{diam}(\partial K_R(tv)_{T+C_{18}})\geq1}\nonumber\\ \leq&\frac{\prob{\gamma\cap\partial K_R(tv)_{T+C_{18}+C_{19}}\neq\emptyset;\Gamma\cap\partial K_R(tv)_{T+C_{18}}\neq\emptyset}}{\cp{\gamma\cap\partial K_R(tv)_{T+C_{18}+C_{19}}\neq\emptyset}{\Gamma\cap\partial K_R(tv)_{T+C_{18}}\neq\emptyset;\textnormal{diam}(\partial K_R(tv)_{T+C_{18}})\geq1}}\nonumber\\ \leq&\frac{\prob{\gamma\cap\partial K_R(tv)_{T+C_{18}+C_{19}}\neq\emptyset}}{\cp{\gamma\cap\partial K_R(tv)_{T+C_{18}+C_{19}}\neq\emptyset}{\Gamma\cap\partial K_R(tv)_{T+C_{18}}\neq\emptyset;\textnormal{diam}(\partial K_R(tv)_{T+C_{18}})\geq1}}\nonumber.
\end{align} Applying Lemma~\ref{le:q2} conditioned on $\mathcal{F}_{T+C_{18}}$ we obtain that $\prob{\tau^{R}(\Gamma,tv)\leq T}$ can be bounded from above by \begin{align} &\frac{1}{p_1p_2p_3}\prob{\gamma\cap\partial K_R(tv)_{T+C_{18}+C_{19}}\neq\emptyset}= \frac{1}{p_1p_2p_3}\prob{\gamma_{T+C_{18}+C_{19}}\cap\partial K_R(tv)\neq\emptyset}\nonumber\\ \leq&\frac{1}{p_1p_2p_3}\left( \prob{\tau^R(\gamma,tv)\leq T+C_{18}+C_{19}}+\prob{\textnormal{diam}(\gamma_{T+C_{18}+C_{19}})<1} \right) \end{align} where again we used Lemma~\ref{le:reverse}. The fact that we are only considering $T\geq\sqrt{t}$ now shows that for $m\in\mathbb{N}$ there is $\kappa_m^{(11)}\in\ensuremath{\mathbb{R}}$ such that \[ \prob{\textnormal{diam}(\gamma_{T+C_{18}+C_{19}})<1}\vee\prob{\textnormal{diam}(\Gamma_{T+C_{18}})<1}\leq \kappa_m^{(11)} T^{-m} \] which completes the proof (choosing $C_{20}:=p_1p_2p_3$ and $C_{17}:=C_{18}+C_{19}$). \hfill$\Box$\\ Considering now for $t\geq2\frac{C_{17}}{\delta}$ \begin{align} \prob{\tau^R(\Gamma,tv)\leq(1-\delta)t}\leq&\frac{1}{C_{20}}\prob{\tau^R(\gamma^{(t)},tv)\leq(1-\delta)t+C_{17}}+\frac{\kappa_m^{(11)} t^{-m}}{C_{20}}\nonumber\\ \leq&\frac{1}{C_{20}}\prob{\tau^R(\gamma^{(t)},tv)\leq(1-\frac{\delta}{2})t}+\frac{\kappa_m^{(11)} t^{-m}}{C_{20}}\rightarrow0\nonumber \end{align} together with $\prob{\tau^R(\Gamma,tv)\leq(1+\delta)t}\geq\prob{\tau^R(\gamma^{(t)},tv)\leq(1+\delta)t}\rightarrow1$, we get that $\tau^{R}(\Gamma,tv)t^{-1}$ converges to $1$ in probability. The diffeomorphic property of the flow of course implies that this convergence holds uniformly in $\gamma\in\mathcal{C}_{R/2}$ if we replace $\Gamma$ by $\gamma$. Corollary~\ref{ko:ggi} shows that it also holds in $L^p$ for any $p\geq1$. We have thus proved the following corollary.
\begin{corollary}\label{ko:konvinwkt} We have $\lim_{t\rightarrow\infty} \sup_{\gamma\in\mathcal{C}_R}\prob{\left|\frac{\tau^{R}(\gamma,tv)}{t}-\left\|v\right\|^R\right|>\epsilon}=0$ for $\epsilon>0$ as well as for any $p>0$ that $\lim_{t\rightarrow\infty}\sup_{\gamma\in\mathcal{C}_R}\expec{\left| \frac{\tau^{R}(\gamma,tv)}{t}-\left\|v\right\|^R \right|^p}=0$. \end{corollary} Proof: There is nothing left to show since we ensured that the assertions above do not depend on $R$. \hfill$\Box$\\ We now turn to the last assertion of Theorem~\ref{th:main}. \begin{lemma}\label{le:osvor} We have for any $\epsilon>0$ that $\lim_{t\rightarrow\infty}\prob{\mathcal{W}^{R}_{t}(\gamma)\subset (1+\epsilon)t\mathcal{B}}=1$. \end{lemma} Proof: For $\ensuremath{\epsilon}>0$ this is equivalent to \[\lim_{t\rightarrow\infty}\prob{\exists x\in\mathcal{W}^{R}_{t}(\gamma):x\notin(1+\epsilon)t\mathcal{B}}=0. \] So it is enough to show that for $\ensuremath{\epsilon}>0$ we have \[\lim_{t\rightarrow\infty}\prob{\exists x\in\mathbb{R}^{2}:\tau^{R}(\gamma,tx)\leq \frac{t}{1+\epsilon}; \left\|x\right\|^{R}=1}=0.\] Choose $\delta\ll\epsilon$ and a $\delta$-net on $\partial\mathcal{B}$ denoted by $\left\{v_{j}:j=1,\ldots,N_{\delta} \right\}$. Then we can apply Corollary~\ref{ko:konvinwkt} to obtain \begin{equation} \lim_{t\rightarrow\infty}\prob{\exists j\in\{1,\ldots,N_{\delta}\}:\frac{\tau^{R}(\gamma,tv_{j})}{t}<\frac{1}{1+\frac{\epsilon}{2}}}=:\lim_{t\rightarrow\infty}\prob{F_{1}(t)}=0.
\end{equation} Due to Theorem~\ref{th:cor6} and Lemma~\ref{le:sweep} (similarly to the proof of Theorem~\ref{sa:us}) we have for large $t$ (because for $v\in\mathbb{R}^{2}$ with $\left\|v\right\|^{R}=1$ there is $j$ with $\left\|v-v_{j}\right\|\leq\delta$) \begin{align} &\cp{F_{1}(t)}{F_{2}(t)}\nonumber\\ :=&\cp{\exists j\in\{1,\ldots,N_{\delta}\}:\frac{\tau^{R}(\gamma,tv_{j})}{t}<\frac{1}{1+\frac{\epsilon}{2}}} {\exists v\in\partial\mathcal{B}:\frac{\tau^{R}(\gamma,tv)}{t}<\frac{1}{1+\epsilon} }\nonumber\\ \geq&\inf_{v\in\partial\mathcal{B}, \gamma\in\mathcal{C}_{\frac{R}{2}}+tv}\prob{\tilde{\tau}^{\delta t}(\gamma,tv)<\left(\frac{1}{1+\frac{\epsilon}{2}}-\frac{1}{1+\epsilon}\right)t}\nonumber\\ =&\inf_{\gamma\in\mathcal{C}_{\frac{R}{2}}}\prob{\tilde{\tau}^{\delta t}(\gamma,0)<\left(\frac{1}{1+\frac{\epsilon}{2}}-\frac{1}{1+\epsilon}\right)t}\geq1-\kappa_m^{(12)}t^{-m}\label{eq:f1f2} \end{align} where $\kappa_m^{(12)}\in\mathbb{R}$ exists for $m\in\mathbb{N}$, provided $\delta=\delta(\epsilon)$ is chosen sufficiently small. This implies $ \limsup_{t\rightarrow\infty}\prob{F_{2}(t)}=0$ and hence Lemma~\ref{le:osvor}.\hfill$\Box$\\ The remaining part of Theorem~\ref{th:main} is now a consequence of Lemma~\ref{le:osvor} and the fact that convergence in probability implies a.s. convergence along a subsequence.\hfill $\Box$\\
\section{Introduction} This paper will describe an application of my work on the foundations of quantum field theory (much of it joint with Owen Gwilliam) to topology. I will show how to construct the Witten genus of a complex manifold $X$ from a rigorous analysis of a quantum field theory of maps from an elliptic curve to $X$. Usually the Witten genus is defined by its $q$-expansion. From this point of view, modularity of the Witten genus is not obvious. In the construction presented here, however, we find \emph{directly} a function on the moduli space of (suitably decorated) elliptic curves. It is only after careful calculation that we can compute the $q$-expansion of this function and identify it with the Witten class. Hopefully, this construction will give some hints about the mysterious geometric origins of elliptic cohomology. I am very grateful to Dennis Gaitsgory, Owen Gwilliam, Mike Hopkins, David Kazhdan, Jacob Lurie, Josh Shadlen, Yuan Shen, Stefan Stolz and Peter Teichner for many helpful conversations about the material in this paper. \section{Hochschild homology and the Todd class} Before turning to elliptic cohomology and the Witten class, I will describe the analog of my construction for the Todd class. The most familiar way in which the Todd class occurs is, of course, in the Grothendieck-Riemann-Roch theorem. Let me recall the statement. Let $X$ be a smooth projective variety, and let $E$ be an algebraic vector bundle on $X$. Then, the Grothendieck-Riemann-Roch theorem states that $$ \sum (-1)^i \operatorname{dim} H^i(X,E) = \int_X \operatorname{Td}(T X) \operatorname{ch}(E). $$ Another (and closely related) way in which the Todd class appears is in the study of deformation quantization. There is a rich literature on algebraic and non-commutative analogs of the index theorem: see \cite{Fed96,BreNesTsy99}. Much of this literature concerns index-type statements on quantizations of general symplectic manifolds. 
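As a quick sanity check of the Grothendieck-Riemann-Roch formula recalled above, one can verify it by hand for line bundles on $\mathbb P^1$ (a standard example, not drawn from the literature cited here):

```latex
% X = P^1, E = O(n) with n >= 0; h denotes the hyperplane class, so h^2 = 0.
% Left-hand side:
\[
\sum (-1)^i \dim H^i(\mathbb P^1, \mathcal O(n)) = (n+1) - 0 = n+1 .
\]
% Right-hand side: ch(O(n)) = e^{nh} = 1 + nh, and Td(T P^1) = 1 + h
% (since c_1(T P^1) = 2h), so
\[
\int_{\mathbb P^1} \operatorname{Td}(T\mathbb P^1)\operatorname{ch}(\mathcal O(n))
 = \int_{\mathbb P^1} (1+h)(1+nh) = \int_{\mathbb P^1} (n+1)\,h = n+1 .
\]
```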
For the purposes of this paper, we are only interested in the relatively simple case when we are quantizing the cotangent bundle of a complex manifold $X$. Let $\Diff_X$ denote the algebra of differential operators on $X$. Let $\Diff^\hbar_X$ denote the sheaf of algebras on $X$ over the ring $\mathbb C[\hbar]$ obtained by forming the Rees algebra of the filtered algebra $\Diff_X$. Explicitly, $$ \Diff^\hbar_X \subset \Diff_X \otimes \mathbb C[\hbar] $$ is the subalgebra consisting of those finite sums $$ \sum \hbar^i D_i $$ where $D_i$ is a differential operator of order at most $i$. Thus, $\Diff^\hbar_X$ is a $\mathbb C[\hbar]$-algebra whose specialization to $\hbar = 0$ is the commutative algebra $\mscr O_{T^\ast X}$ of functions on the cotangent bundle of $X$. When specialized to a non-zero value of $\hbar$, $\Diff^\hbar_X$ is just $\Diff_X$. \subsection{} The theorem we are interested in states that the Todd class of $X$ appears when one computes the Hochschild homology of the algebra $\Diff^\hbar_X$. The index theorem concerns, ultimately, traces of differential operators. Since $HH( \Diff^\hbar_X)$ is the universal recipient of a trace on the algebra $\Diff^\hbar_X$, it is perhaps not so surprising that the Todd class should appear in this context. \subsection{} Recall that the Hochschild-Kostant-Rosenberg theorem gives a quasi-isomorphism $$ \mathcal{I}_{HKR} : HH ( \mscr O_X) \cong \Omega^{-\ast}(X). $$ Here $HH ( \mscr O_X)$ refers to the sheaf of Hochschild chains of $\mscr O_X$, and $\Omega^{-\ast}(X)$ refers to the algebra of differential forms on $X$, with reversed grading. Applied to the cotangent bundle of $X$, the Hochschild-Kostant-Rosenberg theorem gives an isomorphism $$ \mathcal{I}_{HKR} : HH( \mscr O_{T^\ast X} ) \cong \Omega^{-\ast} ( T ^\ast X). $$ The algebra $\Diff^\hbar_X$ is a deformation quantization of $\mscr O_{T^\ast X}$. 
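In the simplest case $X = \mathbb C$ with coordinate $x$ (an illustrative example, not taken from the text above), the Rees construction can be written out completely:

```latex
% Diff_C is generated by x (order 0) and d/dx (order 1). Its Rees algebra is
% generated over C[hbar] by x and the rescaled vector field
\[
\xi := \hbar\,\frac{\partial}{\partial x}, \qquad [\xi, x] = \hbar .
\]
% At hbar = 0 the relation becomes [xi, x] = 0, and we recover the commutative
% algebra C[x, xi] = O_{T^* C}; at any nonzero value of hbar we recover Diff_C,
% with xi acting as a rescaling of d/dx.
```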
We will see that the Todd genus appears when we study how $HH( \mscr O_{T^\ast X})$ changes when we replace $\mscr O_{T^\ast X}$ by $\Diff^\hbar_X$. \subsection{} Before we state the theorem, we need some notation. Let $\pi \in \Gamma ( T^\ast X, \wedge^2 T ( T^\ast X) )$ denote the canonical Poisson tensor on $T^\ast X$. Let $$ L_\pi : \Omega^i ( T^\ast X) \to \Omega^{i-1} (T^\ast X) $$ denote the operator of Lie derivative with respect to $\pi$. Thus, if $i_\pi$ is contraction by $\pi$, $$ L_\pi = [ i_\pi, \d_{dR} ]. $$ Note that $L_\pi^2 = 0$, so that $\Omega^{-\ast}(T^\ast X)$ becomes a cochain complex when endowed with differential $L_\pi$. The cohomology of this complex is called Poisson homology. Let $$ \operatorname{Td}(X) \in H^0( X, \Omega^{-\ast}(X)) = \oplus H^i(X, \Omega^i ) $$ be the Todd class of $X$. Note that the reversal of grading in the de Rham complex means that $\operatorname{Td}(X)$ is an element of cohomological degree $0$. The first statement of the theorem is as follows. \begin{theorem}[Fedosov \cite{Fed96}, Bressler-Nest-Tsygan \cite{BreNesTsy99}] There is a natural quasi-isomorphism of cochain complexes $$ HH ( \Diff^{\hbar}_X ) \simeq \left( \Omega^{-\ast} ( T ^\ast X ) [\hbar], \hbar L_{\pi} \right ) $$ sending $1 \in HH ( \Diff^{\hbar}_X)$ to $$\operatorname{Td}(X) \in \mbb R \Gamma(X, \Omega^{-\ast} (X) ).$$ \end{theorem} \subsection{} This is a rather weak formulation of the theorem, because both sides in the quasi-isomorphism are simply cochain complexes. There is a refined version which identifies a certain algebraic structure present on both sides. It will take a certain amount of preparation to state this refined version. The operator $L_\pi$ is an order two differential operator with respect to the natural product on $\Omega^{-\ast}(T^\ast X)$. 
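To see concretely how $L_\pi$ behaves in low form degrees, here is a routine check (up to sign and normalization conventions), using only $L_\pi = [i_\pi, \d_{dR}]$ and the fact that $i_\pi$ lowers form degree by two:

```latex
% On a function f (a 0-form), i_pi df = 0 because df is a 1-form, so
\[
L_\pi f = i_\pi\, \d_{dR} f - \d_{dR}\, i_\pi f = 0 .
\]
% On a 1-form f dg, only the first term survives and produces the Poisson
% bracket of functions on T^* X:
\[
L_\pi ( f \,\d g ) = i_\pi ( \d f \wedge \d g ) = \pi( \d f, \d g ) = \{ f, g \} .
\]
% In particular L_pi is not a derivation: L_pi(f dg) = {f,g}, while
% (L_pi f) dg + f (L_pi dg) = 0. This failure of the Leibniz rule is exactly
% what the degree 1 bracket defined in the next paragraph measures.
```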
We will let $\{-,-\}_{\pi}$ denote the Poisson bracket on $\Omega^{-\ast}( T^\ast X)$ of cohomological degree $1$ defined by the standard formula $$ \{a,b\}_{\pi} = L_\pi (a b ) - (L_\pi a ) b - (-1)^{\abs{a}} a L_\pi b. $$ The bracket $\{-,-\}_{\pi}$ is of cohomological degree $1$, and satisfies the standard Leibniz rule. Further, $L_\pi$ is a derivation for the bracket $\{-,-\}_{\pi}$. \begin{theorem} There is a quasi-isomorphism of cochain complexes $$ HH( \Diff^\hbar_X ) \simeq \left( \Omega^{-\ast} ( T ^\ast X ) [\hbar], \hbar L_{\pi} + \hbar \{\log \operatorname{Td}(X) , - \}_\pi \right ). $$ \end{theorem} The isomorphism in this theorem is related to that of the previous formulation by conjugating by $\operatorname{Td}(X)$. \subsection{} The isomorphism appearing in this second formulation is the one that is compatible with an additional algebraic structure. The structure is that of an algebra over a certain operad, introduced by Beilinson and Drinfeld \cite{BeiDri04}. \begin{definition} A \emph{Beilinson-Drinfeld algebra} (or BD algebra) is a flat graded $\mathbb C[\hbar]$ module $A$ endowed with the following structures. \begin{enumerate} \item A commutative unital product. \item A Poisson bracket $\{-,-\}$ of cohomological degree $1$. \item A differential $D: A \to A$ of cohomological degree $1$, satisfying $D^2 = 0$ and $D 1 = 0$, such that $$ D ( a b ) = (D a ) b + (-1)^{\abs{a}} a (D b) + \hbar \{a,b\}. $$ \end{enumerate} A \emph{filtered} Beilinson-Drinfeld algebra is a BD algebra $A$ with a $\mathbb C^\times$ action, such that the operator of multiplication by $\hbar$ is of $\mathbb C^\times$ weight $1$, the differential and the product on $A$ are of weight $0$, and the Poisson bracket $\{-,-\}$ is of weight $-1$. 
\end{definition} \begin{remark} The term \emph{filtered} is used because of the well-known isomorphism between filtered vector spaces and $\mathbb C^\times$ equivariant vector bundles on $\mathbb C$; or equivalently, $\mathbb C^\times$-equivariant flat modules over the ring $\mathbb C[\hbar]$, where $\hbar$ is given weight $1$. \end{remark} The complex $HH ( \Diff^\hbar_X )$ is endowed with the structure of filtered BD algebra in a natural way. The complex $\Omega^{-\ast}( T^\ast X ) [\hbar]$ also has the structure of filtered BD algebra. The product on $\Omega^{-\ast}( T^\ast X ) [\hbar]$ is the ordinary wedge product of forms. The $\mathbb C^\times$ action on $\Omega^{-\ast}(T^\ast X)$ arises from rescaling the fibres of the cotangent bundle $T^\ast X$. The bracket on $\Omega^{-\ast}(T^\ast X)$ is $\{-,-\}_\pi$, and the differential is $\hbar L_\pi + \hbar \{\log \operatorname{Td}(X) , - \} $. \begin{proposition} The quasi-isomorphism $$ HH( \Diff^\hbar_X ) \simeq \left( \Omega^{-\ast} ( T ^\ast X ) [\hbar], \hbar L_{\pi} + \hbar \{\log \operatorname{Td}(X) , - \} \right ) $$ is a quasi-isomorphism of (sheaves of) filtered BD algebras. \end{proposition} This refined statement is enough to fix the Todd class uniquely, as the following lemma shows. \begin{lemma} Let $$\alpha, \alpha' \in H^0( X, \oplus_{i \ge 1} \Omega^{i} X[i]) = \oplus_{i \ge 1} H^i(X, \Omega^i X) $$ be forms on $X$, with no zero-form component, and suppose that we have a quasi-isomorphism of sheaves of filtered BD algebras $$ \left( \Omega^{-\ast}(T^\ast X) [\hbar], \hbar L_\pi + \hbar \{\alpha, - \} \right) \simeq \left( \Omega^{-\ast}(T^\ast X) [\hbar], \hbar L_\pi + \hbar \{\alpha', - \} \right). $$ Then $$ [\alpha] = [\alpha'] \in \oplus_{i > 0} H^i(X, \Omega^i(X)) . $$ \end{lemma} \subsection{} In this paper I will state a generalization of this characterization of the Todd class, in which the Witten class appears in place of the Todd class. 
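It may be worth recording the simplest consequence of the BD relation (an immediate check from the definition above): at $\hbar = 0$ the differential becomes a derivation, so the specialization of a BD algebra at $\hbar = 0$ is a commutative dg algebra with a compatible degree $1$ Poisson bracket.

```latex
% Setting hbar = 0 in the BD relation
\[
D ( a b ) = (D a ) b + (-1)^{\abs{a}} a (D b) + \hbar \{a,b\}
\]
% yields the ordinary Leibniz rule D(ab) = (Da) b + (-1)^{|a|} a (Db).
% Thus A / hbar A is a commutative differential graded algebra, still equipped
% with the bracket {-,-}: a BD algebra interpolates between this "classical"
% Poisson structure at hbar = 0 and a "quantum" one at hbar nonzero.
```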
\section{Factorization algebras} Hochschild homology, $K$-theory and the Todd genus are all intimately concerned with the concept of associative algebra. In order to understand the Witten genus, one needs to consider a richer algebraic structure called a factorization algebra (or more precisely, a translation-invariant factorization algebra on the complex plane $\mathbb C$). Factorization algebras can be defined on any smooth manifold: they can be viewed as a ``multiplicative'' analog of a cosheaf. In the algebro-geometric context, factorization algebras were first considered by Beilinson and Drinfeld \cite{BeiDri04}. In this section, I will give the formal definition of a factorization algebra, and state a theorem (from \cite{CosGwi10}) which allows one to construct factorization algebras using the machinery of perturbative renormalization developed in \cite{Cos10}. The approach to constructing factorization algebras developed in \cite{CosGwi10} is a quantum field theoretic analog of the deformation quantization approach to quantum mechanics. Thus, a classical field theory yields a \emph{commutative} factorization algebra (I will define what this means shortly). Quantizing a classical field theory amounts to replacing this commutative factorization algebra by a plain factorization algebra. Just like the Todd genus of a complex manifold $X$ appears when one considers the deformation quantization of the cotangent bundle $T^\ast X$, we will see that the Witten genus arises when we consider the quantization of a commutative factorization algebra associated to a classical field theory whose fields are maps from a Riemann surface to $T^\ast X$. \subsection{} The definition of a factorization algebra is rather straightforward to give. First I will define the notion of prefactorization algebra. \begin{definition} Let $M$ be a manifold. A prefactorization algebra $\mathcal{F}$ on $M$ consists of the following data. 
\begin{enumerate} \item For every open set $U \subset M$, a cochain complex of topological vector spaces, $\mathcal{F}(U)$. \item If $U_1, \ldots, U_k$ are disjoint open sets in $M$, all contained in a larger open set $V$, a continuous linear map $$ \mathcal{F}(U_1) \otimes \cdots \otimes \mathcal{F}(U_k) \to \mathcal{F}(V) $$ (where we use the completed projective tensor product). These maps are required to be $S_k$-covariant. \item These maps must satisfy an evident compatibility condition, which says that different ways of composing these maps yield the same answer. More precisely, suppose that $V_1,\ldots, V_l$ are disjoint open subsets of $V$ such that each $U_i$ is in some $V_j$. Then the diagram $$ \xymatrix{ \otimes_{j = 1}^l \left( \otimes_{U_i \subset V_j} \mathcal{F}(U_i) \right) \ar[r] \ar[d] & \mathcal{F}(V) \\ \otimes_{j = 1}^l \mathcal{F}(V_j) \ar[ur] & } $$ is required to commute. \item $\mathcal{F}(\emptyset) = \mathbb C$. \end{enumerate} \end{definition} \subsection{} A factorization algebra is a prefactorization algebra that satisfies the \emph{locality} axiom. This axiom is the analog of the gluing axiom for sheaves; it expresses how the values on big open sets are determined by the values on small open sets. For sheaves, the gluing axiom says that for any open set $U$ and any cover of that open set, we can determine the value of the sheaf on $U$ from the values on the open cover. For factorization algebras, we require our covers to be fine enough that they capture all the ``multiplicative structure.'' \begin{definition} Let $U$ be an open set and $\mathfrak{U} := \{ U_i \mid i \in I\}$ a cover of $U$ by open sets. The cover $\mathfrak{U}$ is \emph{factorizing} if for any finite collection of points $\{x_1,\ldots,x_k\}$ in $U$, there is a finite collection of pairwise disjoint opens $\{U_{i_1}, \ldots, U_{i_n}\}$ from the cover such that $\{x_1,\ldots,x_k\} \subset U_{i_1} \cup \cdots \cup U_{i_n}$. 
\end{definition} \begin{remark} There is a simple way to find a factorizing cover of a Riemannian manifold $M$. Namely, fix a Riemannian metric on $M$, and consider \[ \{ B_r(x) \,:\, x \in M,\ 0 < r < \operatorname{InjRad}(x)\}, \] the collection of open balls, running over each point $x \in M$, whose radii are less than the injectivity radius at $x$. Another construction is simply to take the collection of open sets in $M$ diffeomorphic to the open $n$-ball. \end{remark} \subsection{} In order to motivate our definition of factorization algebra, let us briefly recall the cosheaf axiom. Since, in this paper, we are exclusively interested in factorization algebras and cosheaves with values in cochain complexes, we will discuss the homotopical version of the cosheaf axiom. This homotopy cosheaf axiom is defined using the \v{C}ech complex associated to an open cover. Let $\Phi$ be a pre-cosheaf on $M$, and let $\mathfrak{U} = \{U_i \mid i \in I\}$ be a cover of some open subset $U$ of $M$. The \v{C}ech complex of $\mathfrak{U}$ with coefficients in $\Phi$ is defined in the usual way, as $$ \oplus_k \oplus_{j_1,\ldots, j_k \in I} \Phi (U_{j_1} \cap \cdots \cap U_{j_k} ) [ k - 1] $$ where the differential is defined in the usual way. We say that $\Phi$ is a homotopy cosheaf if the natural map from the \v{C}ech complex to $\Phi(U)$ is a quasi-isomorphism, for every open $U \subset M$ and every open cover of $U$. We will define the notion of factorization algebra in a similar way, except that instead of considering elements $U_i$ of the cover, one considers finite collections of disjoint elements of the cover. In order to make this precise, we need to introduce some notation. Let $P I$ denote the set of finite subsets $\alpha \subset I$, with the property that if $j,j' \in \alpha$, $U_j \cap U_{j'} = \emptyset$. If $\alpha \in PI$, let us define $\mathcal{F}(\alpha)$ by $$ \mathcal{F}(\alpha) = \otimes_{j \in \alpha} \mathcal{F}( U_j ). 
$$ Similarly, if $\alpha_1, \ldots, \alpha_k \in P I$, we will let $$ \mathcal{F}(\alpha_1, \ldots, \alpha_k ) = \otimes_{j_1 \in \alpha_1,\ldots, j_k \in \alpha_k} \mathcal{F} (U_{j_1} \cap \cdots \cap U_{j_k}). $$ Note that there are natural maps $$ p_i : \mathcal{F}( \alpha_1,\ldots, \alpha_k ) \to \mathcal{F}( \alpha_1, \ldots, \widehat{\alpha_i}, \ldots, \alpha_k ) $$ for each $1 \le i \le k$. This allows us to define the \v{C}ech complex of the cover $\mathfrak{U}$ with coefficients in $\mathcal{F}$ by $$ \check{C} ( \mathfrak{U}, \mathcal{F}) = \oplus_{k \ge 0} \oplus_{\alpha_1,\ldots, \alpha_k \in P I } \mathcal{F} ( \alpha_1, \ldots, \alpha_k) [k-1]. $$ The differential is defined in the usual way. \begin{definition} A prefactorization algebra is a factorization algebra if, for every open subset $U \subset M$ and every factorizing cover $\mathfrak{U}$ of $U$, the natural map $$ \check{C}(\mathfrak{U}, \mathcal{F}) \to \mathcal{F}(U) $$ is a quasi-isomorphism. \end{definition} We call this axiom the \emph{locality} axiom\footnote{In the version of this paper which appears in the ICM proceedings, a slightly weaker locality axiom was posited. Owen Gwilliam and I have since found that the stronger axiom presented here satisfies better formal properties.}. This axiom is, of course, only the correct one for factorization algebras taking values in the category of cochain complexes. For factorization algebras taking values in vector spaces, one should require that the map from the \v{C}ech complex to $\mathcal{F}(U)$ is an isomorphism on $H_0$. This amounts to saying that the sequence $$ \oplus_{\alpha,\beta} \mathcal{F}(\alpha,\beta) \to \oplus_{\gamma}\mathcal{F}(\gamma) \to \mathcal{F}(U) \to 0 $$ is exact on the right. A special case of this locality axiom asserts that if $U_1, U_2$ are disjoint open subsets, then $$ \mathcal{F}(U_1 \amalg U_2) = \mathcal{F}(U_1) \otimes \mathcal{F}(U_2). 
$$ Similarly, if $\{U_i \mid i \in I\}$ is any collection of disjoint open subsets of $M$, then $$ \mathcal{F}(\amalg U_i) = \otimes_{i \in I} \mathcal{F}(U_i) . $$ The cochain complexes $\mathcal{F}(U_i)$ are pointed; the map $\emptyset \to U$ induces a map $\mathcal{F}(\emptyset) = \mathbb C \to \mathcal{F}(U)$. The infinite tensor product of pointed vector spaces is defined as the colimit over all finite tensor products, in the usual way. \subsection{} The definition of a factorization algebra is reminiscent of that of an $E_n$ algebra. In fact, Jacob Lurie has shown the following \cite{Lur09a}. \begin{proposition} There is an equivalence of $(\infty,1)$-categories between the category of $E_n$ algebras, and the category of factorization algebras $\mathcal F$ on $\mbb R^n$ with the additional property that if $B \subset B'$ are balls, the map $$ \mathcal F(B) \to \mathcal F(B') $$ is a quasi-isomorphism. \end{proposition} In another direction, what we call a factorization algebra is the $C^{\infty}$ analog of a definition introduced by Beilinson and Drinfeld \cite{BeiDri04}. Beilinson and Drinfeld introduced an algebro-geometric version of the notion of factorization algebra, in order to give a geometric formulation of the axioms of a vertex algebra. In particular, every vertex algebra yields a factorization algebra. \subsection{} As our first example of a factorization algebra, let us see how a differential graded associative algebra $A$ gives rise to a translation-invariant factorization algebra $\mathcal F_A$ on $\mbb R$. We will define the value of $\mathcal F_A$ on the open intervals of $\mbb R$; the value of $\mathcal F_A$ on more complicated open subsets is formally determined by this data. Let $-\infty \le a < b \le \infty$, and let $(a,b)$ be the corresponding (possibly infinite) open interval in $\mbb R$. We set $$ \mathcal F_A((a,b)) = A. 
$$ If $(a,b) \subset (c,d)$, then the map $$\mathcal F_A( (a,b) ) \to \mathcal F_A ((c,d))$$ is the identity map on $A$. If $-\infty \le a_1 < b_1 < a_2 < b_2 < \dots < a_n < b_n \le \infty$, then the intervals $(a_i,b_i)$ are disjoint. Part of the data of a factorization algebra is thus a map $$ \mathcal F_A ((a_1, b_1 ) ) \otimes \cdots \otimes \mathcal F_A ((a_n, b_n ) ) \to \mathcal F_A ( (a_1, b_n ) ). $$ Once we identify each $\mathcal F_A ((a_i, b_i ) )$ with $A$, this map is the $n$-fold product map \begin{align*} A ^{\otimes n} & \to A\\ \alpha_1 \otimes \cdots \otimes \alpha_n & \mapsto \alpha_1 \cdot \alpha_2 \cdot \cdots \cdot \alpha_n. \end{align*} The value of $\mathcal{F}_A$ on any other open subset of $\mbb R$ is determined from this data by the axioms of a factorization algebra. \section{Descent and factorization homology} In this paper, we are only interested in translation-invariant factorization algebras on $\mathbb C$. In this section, we will see that associated to such a factorization algebra $\mathcal{F}$, and to an elliptic curve $E$, equipped with a never-vanishing volume element $\omega$, one can define the \emph{factorization homology} $$ FH( E, \mathcal{F} ). $$ Factorization homology is the analog, in the world of factorization algebras, of Hochschild homology. As motivation, I will first explain how the Hochschild homology groups of an associative algebra $A$ can be viewed as the factorization homology of the translation-invariant factorization algebra $\mathcal{F}_A$ on $\mbb R$ associated to $A$. \subsection{} Factorization algebras satisfy a gluing axiom. Suppose that our manifold $M$ is written as a union $M = U \cup V$ of two open subsets. 
If $\mathcal{F}$ is a factorization algebra on an open subset $U \subset M$, and if $\G$ is a factorization algebra on $V$, and if $$ \phi : \mathcal{F} \mid_{U \cap V} \to \G \mid_{U \cap V} $$ is an isomorphism of factorization algebras on $U \cap V$, then we can construct a factorization algebra $\mathcal{H}$ on $M$, whose restriction to $U$ is $\mathcal{F}$ and whose restriction to $V$ is $\G$. Similarly, factorization algebras satisfy descent. Suppose that a discrete group $G$ acts properly discontinuously on a manifold $M$, and suppose that $\widetilde{\mathcal{F}}$ is a $G$-equivariant factorization algebra on $M$. Then, $\widetilde{\mathcal{F}}$ descends to a factorization algebra $\mathcal{F}$ on the quotient $M / G$. \subsection{} Since we will be using the descent property extensively, it is worth explaining how one constructs the descended factorization algebra $\mathcal{F}$. Let us start by mentioning a general construction of factorization algebras from partial data. Let $\mathfrak{U}$ be a basis of open sets of $X$ which is also a factorizing cover (that is, $\mathfrak{U}$ is a \emph{factorizing basis}). \begin{definition} A $\mathfrak{U}$-factorization algebra $\mathcal{F}$ is like a factorization algebra, except that $\mathcal{F}(U)$ is only defined for $U \in \mathfrak{U}$. \end{definition} Let $\mathcal{F}$ be a $\mathfrak{U}$-factorization algebra. Let us define a prefactorization algebra $\operatorname{Fact} (\mathcal{F})$ on $X$ by $$ \operatorname{Fact}(\mathcal{F})(V) = \check{C}(\mathfrak{U}_V, \mathcal{F}). $$ Here $\mathfrak{U}_V$ is the cover of $V$ consisting of those open subsets in the cover $\mathfrak{U}$ which are contained in $V$. \begin{lemma} With this definition, $\operatorname{Fact}(\mathcal{F})$ is a factorization algebra whose restriction to open sets in the cover $\mathfrak{U}$ is quasi-isomorphic to $\mathcal{F}$. 
\end{lemma} \begin{comment} \begin{proof} We need to check that, if $\mathfrak{U}'$ is a factorizing cover of $V \subset X$, then $$ \operatorname{Fact}(\mathcal{F})(V) \simeq \check{C}(\mathfrak{U}', \operatorname{Fact}(\mathcal{F})). $$ We can replace $\mathfrak{U}$ by the subcover $\widetilde{\mathfrak{U}}$ of open sets subordinate to $\mathfrak{U}'$. Then, $$ \check{C}(\mathfrak{U}', \operatorname{Fact}(\mathcal{F})) \simeq \check{C}(\widetilde{\mathfrak{U}}, \mathcal{F}). $$ Indeed, there is clearly a natural map from the left hand side to the right hand side. The complex on the left breaks up as a direct sum of pieces corresponding to tuples $\alpha_1,\ldots,\alpha_n \in P \widetilde{\mathfrak{U}}$. If $\beta \in P \mathfrak{U}'$ and $\alpha \in P \widetilde{\mathfrak{U}}$, say $\alpha \subset \beta$ if $U_{\alpha} \subset U'_\beta$. Then, let us define a complex $$ \check{C}(\mathfrak{U}', \operatorname{Fact}(\mathcal{F})) = \oplus_{\alpha_1,\ldots,\alpha_n} \mathcal{F}(\alpha_1,\ldots,\alpha_n)[k] \otimes \left( \oplus_{\substack{\beta_1,\ldots,\beta_m \\ \alpha_i \subset \beta_j \text{ all } i,j}} \mathbb C \cdot (\beta_1,\ldots,\beta_k) \right) $$ Here $(\beta_1,\ldots,\beta_k)$ denotes a vector in degree $-k$. We need to show that the complex inside the tensor product is contractible. We have as usual $$ \d (\beta_1,\ldots,\beta_k) = \sum (-1)^i (\beta_1,\ldots,\widehat{\beta_i}, \ldots,\beta_k). $$ But this is just the simplicial chains on the complete simplicial set based on vertices $\beta_i$, i.e. on the infinite simplex. Next, the fact that $\mathcal{F}$ is a $\mathfrak U$-factorization algebra implies that the map $$ \check{C}(\widetilde{\mathfrak{U}}, \mathcal{F}) \to \check{C}(\widetilde{\mathfrak{U}}_V, \mathcal{F}) $$ is a quasi-isomorphism. \end{proof} \end{comment} \subsection{} Now, let us see how this lemma allows us to construct a descended factorization algebra on $M / G$ from a $G$-equivariant factorization algebra $\widetilde{\mathcal{F}}$ on $M$. 
Let $\mathfrak{U}$ be the open cover of $M / G$ consisting of connected open sets which admit a section of the map $M \to M / G$. We will define a $\mathfrak{U}$-factorization algebra $\mathcal{F}_0$ by saying that, if $U \in \mathfrak{U}$, $$ \mathcal{F}_0(U) = \widetilde{\mathcal{F}} ( \widetilde{U} ) $$ where $\widetilde{U} \subset M$ is a lift of $U$. Since $\widetilde{\mathcal{F}}$ is $G$-invariant, this is independent of the choice of lift $\widetilde{U}$. Now we can apply the extension procedure provided by the lemma. We will let $$ \mathcal{F} = \operatorname{Fact}(\mathcal{F}_0). $$ $\mathcal{F}$ is the desired descended factorization algebra. \subsection{} This descent property implies that any translation-invariant factorization algebra $\mathcal{F}$ on $\mbb R$ descends to a factorization algebra $\mathcal{F}^{S^1}$ on $S^1 = \mbb R / \mathbb Z$. We will let $$ FH ( S^1, \mathcal{F} ) = \mathcal{F}^{S^1} (S^1) $$ denote the complex of global sections of the factorization algebra $\mathcal{F}^{S^1}$ on $S^1$. We will refer to the complex $FH ( S^1, \mathcal{F})$ as the factorization homology complex of $S^1$ with coefficients in $\mathcal{F}$. \begin{lemma} Let $\mathcal{F}_A$ denote the factorization algebra on $\mbb R$ associated to a differential graded associative algebra $A$. Then, there is a natural quasi-isomorphism $$ FH ( S^1, \mathcal{F}_A ) \simeq HH(A) $$ between the factorization homology complex of $S^1$ with coefficients in $\mathcal{F}_A$, and the Hochschild complex of $A$. \end{lemma} \begin{proof} If one analyzes the descent prescription described above, one sees that $$ FH ( S^1, \mathcal{F}_A ) = \op{hocolim}_{I_1, \ldots ,I_n} A^{\otimes n} $$ where the homotopy colimit is over disjoint unordered intervals in $S^1$. The maps in this homotopy colimit just arise from multiplication in $A$. One sees that a complex which looks like the ordinary cyclic bar complex emerges from this procedure. 
In \cite{Lur09a} it is proven that the result of this homotopy colimit is indeed homotopy equivalent to the cyclic bar complex. \end{proof} \subsection{} If $\lambda \in \mbb R_{> 0}$, let $S^1_\lambda$ be the quotient of $\mbb R$ by the lattice $\lambda \mathbb Z$. If $\mathcal{F}$ is a translation-invariant factorization algebra on $\mbb R$, then we can descend $\mathcal{F}$ to a factorization algebra on $S^1_\lambda$, and thus define factorization homology $FH (S^1_\lambda, \mathcal{F})$. When $\lambda = 1$, this coincides with the definition given above. In principle, there is no reason that $FH(S^1_\lambda, \mathcal{F})$ should be independent of $\lambda$. If we use the factorization algebra $\mathcal{F}_A$ arising from an associative algebra $A$, then all the factorization homology complexes $FH (S^1_\lambda ,\mathcal{F}_A)$ are canonically isomorphic. This is because the factorization algebra $\mathcal{F}_A$ on $\mbb R$ is not only translation invariant but also dilation invariant. \subsection{} As I mentioned earlier, the factorization algebras relevant to the Witten genus are translation-invariant factorization algebras on $\mathbb C$. Let $\mathcal{F}$ be such a factorization algebra. Let $E$ be an elliptic curve equipped with a volume element $\omega$. We will write $E$ as a quotient $\mathbb C / \Lambda$ of $\mathbb C$ by a lattice $\Lambda$, in such a way that the form $\omega$ on $E$ pulls back to the volume form $\d z$ on $\mathbb C$. Since $\mathcal{F}$ is translation-invariant, it is in particular invariant under $\Lambda$. Thus, $\mathcal{F}$ descends to a factorization algebra $\mathcal{F}^E$ on $E$. We define the factorization homology complex of $E$ with coefficients in $\mathcal{F}$ by $$ FH ( E , \mathcal{F}) = \mathcal{F}^E ( E). $$ Thus, $FH ( E, \mathcal{F})$ is the complex of global sections of $\mathcal{F}^E$ on $E$. In this way, there is an analog of the Hochschild homology groups for every elliptic curve $E$ with volume element $\omega$. 
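For a concrete instance of the $S^1$ story earlier in this section, take $A = \mathbb C[x]$ in degree $0$ (a standard Hochschild computation, included here only as an illustration):

```latex
% By Hochschild-Kostant-Rosenberg for the smooth affine line,
\[
FH( S^1, \mathcal F_{\mathbb C[x]} ) \simeq HH( \mathbb C[x] )
 \simeq \mathbb C[x] \,\oplus\, \mathbb C[x]\,\d x\,[1] ,
\]
% that is, HH_0 = C[x], HH_1 = C[x] dx, and HH_i = 0 for i >= 2, matching the
% identification HH(O_X) = Omega^{-*}(X) recalled in section 2.
```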
\section{Main theorem} The main theorem states that the Witten class of a complex manifold $X$ arises when one considers the factorization homology of a certain sheaf (on $X$) of translation-invariant factorization algebras on $\mathbb C$. Before I state this theorem, I need to recall the definition of the Witten class. \subsection{} Let $E$ be an elliptic curve, and let $\omega$ be a translation-invariant volume element on $E$. The Witten class $$ \operatorname{Wit}(X,E,\omega) \in H^0 ( X, \Omega^{-\ast}(X)) = \oplus H^i( X, \Omega^i(X)) $$ is a cohomology class, defined as follows. Let $$ E_{2k}(E,\omega) = \sum_{\lambda \in \Lambda \setminus \{0\}} \lambda^{-2k} $$ be the Eisenstein series of the marked elliptic curve $(E,\omega)$. Here, as before, we are writing $E$ as the quotient of $\mathbb C$ by a lattice $\Lambda$, in such a way that $\omega$ corresponds to $\d z$. The Witten class of $X$ is defined by $$ \operatorname{Wit}(X, E,\omega) = \exp \left\{\sum_{k \ge 2} \frac{(2k-1)!}{ (2 \pi i )^{2k} } E_{2k}(E,\omega) ch_{2k} ( T X) \right\}. $$ If $\tau$ is in the upper half-plane, let $(E_\tau, \omega_\tau)$ denote the elliptic curve associated to the lattice generated by $1$ and $\tau$, with volume form $\omega_\tau$ corresponding to $\d z$. Then, the Witten class has the property that $$ \lim_{\tau \to i \infty} \operatorname{Wit}(X,E_\tau, \omega_\tau) = e^{-c_1(T_X) / 2} \operatorname{Td}(T X) . $$ This follows from the identities \begin{align*} \lim_{\tau \to i \infty} E_{2k} ( E_\tau, \omega_\tau ) &= 2 \zeta(2k ) \\ \sum_{k \ge 1} 2 \zeta(2k) \frac{x^{2k}}{ 2k (2 \pi i)^{2k} } &= \log \left( \frac{x}{1 - e^{-x} } \right) - \frac{x}{2} \end{align*} where $\zeta$ is the Riemann zeta function. \subsection{} Now we can state the theorem. \begin{theorem} Let $X$ be a complex manifold, equipped with a trivialization of the second Chern character $ch_2(T X)$. 
Then, there is a sheaf $D^{\hbar}_{X,ch}$ of translation-invariant factorization algebras on $\mathbb C$, over the algebra $\mathbb C[\hbar]$, such that, for every elliptic curve $E$ with volume element $\omega$, there is a natural isomorphism of BD algebras $$ FH( E, D^{\hbar}_{X,ch} ) \simeq \left( \Omega^{-\ast} (T^\ast X)[\hbar], \hbar L_\pi + \hbar \{ \log \operatorname{Wit}(X,E,\omega) , - \} \right). $$ Alternatively, there is an isomorphism of cochain complexes $$ FH( E, D^{\hbar}_{X,ch} ) \simeq \left( \Omega^{-\ast} (T^\ast X)[\hbar], \hbar L_\pi\right) $$ sending $$ 1 \mapsto \operatorname{Wit}(X,E,\omega). $$ \end{theorem} \subsection{} The factorization algebra I construct is an analytic avatar of the chiral differential operators constructed by Gorbounov, Malikov and Schechtman \cite{GorMalSch00}. Note that in their work, the $q$-expansion of the Witten genus appears as the character of the algebra of chiral differential operators. The way the Witten genus appears in this paper is somewhat different, and has the advantage that we see the Witten genus directly as a function on the moduli space of elliptic curves, and not just as a $q$-expansion. A further analysis of the relationship between the $q$-expansion of the Witten genus and the chiral differential operators has been undertaken by Cheung \cite{Che08}. \section{Factorization algebras from quantum field theory} A factorization algebra is the algebraic structure satisfied by the observables of a quantum field theory. In \cite{CosGwi10} we prove a theorem allowing one to construct factorization algebras using the techniques of perturbative renormalization. The factorization algebra $D^\hbar_{X,ch}$ encoding the Witten genus will be constructed by quantizing a certain two-dimensional quantum field theory, called holomorphic Chern-Simons theory. 
Before I discuss this particular quantum field theory, let me explain, heuristically, why one would expect the observables of a quantum field theory to form a factorization algebra. Suppose we have a quantum field theory (whatever that is) on a manifold $M$. Then, for every open subset $U \subset M$, we would expect the set of observables on $U$ -- that is, the set of measurements that can be made by an observer in the open subset $U$ -- to form a vector space, which we call $\mathcal F(U)$. If $U \subset V$, then an observable on $U$ will, in particular, be an observable on $V$, so that we get a map $\mathcal F (U) \to \mathcal F(V)$. If $U_1$ and $U_2$ are disjoint, we would expect that all observables on $U_1 \amalg U_2$ are obtained by taking the product of an observable on $U_1$ with one on $U_2$. Thus, we would expect that $$\mathcal F (U_1 \amalg U_2) = \mathcal F (U_1) \otimes \mathcal F(U_2).$$ Together, these maps give $\mathcal{F}$ the structure of a factorization algebra. \subsection{} The idea that the observables of a quantum field theory form a factorization algebra is compatible with two familiar examples. Quantum mechanics is a quantum field theory on the real line $\mbb R$. The observables for quantum mechanics form an associative algebra. Associative algebras are a particular class of factorization algebras on $\mbb R$. In \cite{CosGwi10}, we show that the factorization algebra associated to the free field theory on $\mbb R$ is, in fact, an $E_1$ algebra; specifically, it is the familiar Weyl algebra of observables of quantum mechanics. A second well-understood example is conformal field theory. The observables of conformal field theory on $\mathbb C$ form a vertex algebra; and, as we have seen, vertex algebras are a special class of factorization algebras on $\mathbb C$. \subsection{} Let me now briefly state the results of \cite{CosGwi10} and \cite{Cos10}, allowing one to construct factorization algebras.
In \cite{Cos10}, I gave a definition of a quantum field theory on a manifold $M$, using a synthesis between Wilson's concept of a low-energy effective field theory and the Batalin-Vilkovisky formalism for quantizing gauge theories. Further, I developed techniques (based on the machinery of perturbative renormalization) allowing one to construct such quantum field theories from Lagrangians. Many quantum field theories of physical and mathematical interest, such as Chern-Simons theory and Yang-Mills theory, can be put in this framework. The most succinct way to state the main construction of \cite{CosGwi10} is as follows. \begin{theorem} Any quantum field theory in the sense of \cite{Cos10}, on a manifold $M$, yields a factorization algebra on $M$. \end{theorem} We have seen that factorization algebras satisfy a descent property: if a discrete group $G$ acts properly discontinuously on a manifold $M$, then a $G$-equivariant factorization algebra $\widetilde{\mathcal{F}}$ on $M$ descends to the quotient $M / G$. Quantum field theories in the sense of \cite{Cos10} satisfy a similar descent property, and the construction of a factorization algebra from a quantum field theory is compatible with descent. \section{Deformation quantization in quantum field theory} In this section, I will explain a little about how one associates a factorization algebra to a classical or quantum field theory. We will see that the procedure of quantizing a classical field theory can be interpreted in algebraic terms as a kind of deformation quantization in the world of factorization algebras. The observables of a classical mechanical system form a commutative algebra, whereas the observables of a quantum mechanical system are only an associative algebra. We should view this commutativity as being an extra structure present on the observables of a classical system. 
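A standard example, recalled here only as an illustration, makes this extra structure concrete. For a particle moving on a line, the classical observables form the commutative algebra $\mathbb C[p,q]$ of functions of position $q$ and momentum $p$, equipped with the Poisson bracket determined by $\{q,p\} = 1$; the quantum observables form the Weyl algebra
$$
W_\hbar = \mathbb C[\hbar]\langle q,p \rangle / \left( qp - pq - \hbar \right),
$$
which is associative but no longer commutative. Setting $\hbar = 0$ recovers the commutative algebra, and the Poisson bracket records the first-order failure of commutativity: $qp - pq = \hbar \{q,p\}$ modulo $\hbar^2$. This is the pattern that deformation quantization axiomatizes, and whose factorization-algebra analog we now describe.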
There is a similar story in field theory: the observables of a classical field theory on a manifold $M$ have an extra structure, that of a \emph{commutative} factorization algebra. Because we will use this concept several times, it is worth introducing the most general notion. \begin{definition} A \emph{Hopf operad} is an operad in the category of differential graded cocommutative coalgebras. \end{definition} If $P$ is a Hopf operad, then for each $n$, $P(n)$ is a cocommutative co-dga, and the operad maps are compatible with the coproduct. If $V,W$ are $P$-algebras, then so is $V \otimes W$: the coproduct on the spaces $P(n)$ allows one to define the action of $P$ on $V \otimes W$. For example, the commutative operad $\operatorname{Com}$, defined by $\operatorname{Com}(n) = \mathbb C$ for each $n > 0$, is a Hopf operad. The tensor product on commutative algebras defined by this Hopf operad structure is the usual tensor product. \begin{definition} Let $P$ be a Hopf operad. Then a $P$ factorization algebra is a factorization algebra $\mathcal{F}$ such that each $\mathcal{F}(U)$ is equipped with the structure of a $P$-algebra, and the structure maps $$ \mathcal{F}(U_1) \otimes \cdots \otimes \mathcal{F}(U_n) \to \mathcal{F}(V) $$ are maps of $P$-algebras. \end{definition} \subsection{} The main object of interest in a classical field theory is the space of solutions to the Euler-Lagrange equations. If $U \subset M$ is an open set, let $\mathcal{EL}(U)$ be this space. Sending $U \mapsto \mathcal{EL}(U)$ defines a sheaf of formal spaces on $M$. This sheaf of solutions to the Euler-Lagrange equations can be encoded in the structure of a commutative factorization algebra. If $U \subset M$ is an open subset, we will let $\mscr O(\mathcal{EL}(U))$ denote the space of functions on $\mathcal{EL}(U)$.
Sending $U \mapsto \mscr O(\mathcal{EL}(U))$ defines a commutative factorization algebra: if $U_1,\ldots, U_n \subset U_{n+1}$ are disjoint open subsets, there is a restriction map $$ \mathcal{EL}(U_{n+1}) \to \mathcal{EL} (U_1) \times \cdots \times \mathcal{EL} ( U_n) . $$ Replacing the map of spaces by the corresponding map of algebras of functions yields the desired structure of commutative factorization algebra. \subsection{} In the familiar deformation quantization story, the algebra of observables of a classical mechanical system is a commutative algebra endowed with an extra structure, namely a Poisson bracket. This extra structure is what tells us that the commutative algebra ``wants'' to deform into an associative algebra. There is a similar picture in the world of factorization algebras: the commutative factorization algebra associated to a classical field theory is endowed with an extra structure, which makes it ``want'' to deform into a plain factorization algebra. Ordinary Poisson algebras interpolate between commutative algebras and associative (or $E_1$) algebras. For us, the object describing the observables of a quantum field theory is not an $E_1$ algebra in a symmetric monoidal category; instead, it is an $E_0$ algebra. An $E_0$ algebra in vector spaces is simply a vector space with an element. An $E_0$ algebra in any symmetric monoidal category is an object of this category with a map from the unit object. An $E_0$ algebra in the symmetric monoidal category of factorization algebras is simply a factorization algebra, as every factorization algebra is equipped with a unit. Thus, the analog of the Poisson operad we are searching for is an operad that interpolates between the commutative operad and the $E_0$ operad. Such an operad was constructed by Beilinson and Drinfeld \cite{BeiDri04}; we will call it the BD operad\footnote{Beilinson and Drinfeld called this operad the Batalin-Vilkovisky operad. 
However, in the literature, the Batalin-Vilkovisky operad has, unfortunately, come to refer to a different object.}. In the introduction I described what an algebra over the BD operad is. Here is a description of the operad itself. \begin{definition} Let $P_0$ be the graded operad over $\mathbb C$ generated by a commutative and associative product, $\ast$, and a Poisson bracket $\{-,-\}$ of cohomological degree $+1$. Let $BD$ denote the differential graded operad over the ring $\mathbb C[\hbar]$ which, as a graded operad, is simply $P_0 \otimes \mathbb C[\hbar]$, but which is equipped with differential $$ \d \ast = \hbar \{-,-\}. $$ \end{definition} If we specialize to $\hbar =0$, we find the $BD$ operad becomes the operad $P_0$. If we specialize to $\hbar = 1$, however, the $BD$ operad becomes the $E_0$ operad. Thus, we find that the operad $P_0$ bears the same relationship to the operad $E_0$ as the usual Poisson operad bears to the associative operad $E_1$. The operads $P_0$ and $BD$ are both Hopf operads. (Recall that this means that the tensor product of two $P_0$ (or $BD$) algebras has a natural structure of $P_0$ (or $BD$) algebra.) Thus, we can talk about $P_0$ factorization algebras and $BD$ factorization algebras. A $P_0$ factorization algebra $\mathcal{F}_{cl}$ is a factorization algebra such that, for each $U$ in $M$, $\mathcal{F}_{cl}(U)$ is a $P_0$ algebra, and all the structure maps are maps of $P_0$ algebras; and similarly for $BD$ factorization algebras. \begin{definition} A \emph{quantization} of a $P_0$ factorization algebra $\mathcal{F}_{cl}$ is a $BD$ factorization algebra $\mathcal{F}_q$, together with an isomorphism $$ \mathcal{F}_q \otimes_{\mathbb C[\hbar]} \mathbb C \cong \mathcal{F}_{cl} $$ of $P_0$ factorization algebras. \end{definition} Thus, $\mathcal{F}_q(U)$ is a quantization of the $P_0$ algebra $\mathcal{F}_{cl}(U)$, for each $U \subset M$. \subsection{} Now we can restate the main results of \cite{CosGwi10}.
\begin{theorem} Every classical field theory on $M$ gives rise to a Poisson factorization algebra on $M$. A quantization of this classical field theory (in the sense of \cite{Cos10}) gives rise to a quantization of this Poisson factorization algebra. \end{theorem} What I mean by a classical field theory on $M$ is detailed in \cite{Cos10}, but it is something rather familiar. There is a space of fields, which is taken to be the space $\mathscr{E}$ of sections of some vector bundle $E$ on $M$, or more generally some space of maps $M \to N$ to some other manifold $N$. In addition, there is an action functional $S : \mathscr{E} \to \mbb R$ (or to $\mathbb C$), which is taken to be the integral of some Lagrangian density. When dealing with theories with gauge symmetry, this basic picture needs to be modified by the introduction of fields which possess a cohomological degree. This more sophisticated picture is known as the Batalin-Vilkovisky formalism. In \cite{Cos10, CosGwi10} we always work in the Batalin-Vilkovisky formalism. Thus, our space of classical fields is equipped with a symplectic form of cohomological degree $-1$. The fact that the factorization algebra of observables of a classical field theory is equipped with a Poisson bracket of degree $+1$ is simply a version of the familiar statement that the algebra of functions on a symplectic manifold has a natural Poisson bracket. \section{Holomorphic Chern-Simons theory} As we have seen, when we work in the BV formalism, the space of classical fields is a (typically infinite dimensional) differential graded manifold equipped with a symplectic form of cohomological degree $-1$. The action functional is a secondary object in this approach. The differential on the space of fields preserves the symplectic form, and thus, at least locally, is given by Poisson bracket with some Hamiltonian function $S$, of cohomological degree zero. This function is the classical action. 
In the paper \cite{AleKonSch95}, Alexandrov, Kontsevich, Schwarz and Zaboronsky introduced a beautiful and general method for constructing classical field theories in the BV formalism. Many quantum field theories studied in mathematics arise from the AKSZ construction. For example, Chern-Simons theory, Rozansky-Witten theory and the Poisson $\sigma$-model all fit very naturally into this framework. For us, the relevance of the AKSZ construction is that the classical field theory related to the Witten genus arises most naturally from the AKSZ construction. Before I introduce the AKSZ construction, we need some notation. \begin{definition} A \emph{differential graded manifold} is a smooth manifold $X$ equipped with a sheaf $\mscr O_X$ of differential graded commutative algebras over $\mathbb C$, with the property that $\mscr O_X$ is locally isomorphic as a graded algebra to $C^{\infty}_X [[x_1,\ldots, x_n]]$, where $x_i$ are formal variables of cohomological degree $d_i \in \mathbb Z$. \end{definition} In this definition, $C^{\infty}_X$ refers to the sheaf of complex-valued smooth functions on $X$. One can talk about geometric structures -- such as Poisson or symplectic structures -- on a differential graded manifold. If $X$ is a smooth manifold, we will let $X_{dR}$ denote the dg manifold whose underlying smooth manifold is $X$, and whose sheaf of functions is the complexified de Rham complex $\Omega^\ast_X$ of $X$. If $X$ is a complex manifold, we will let $X_{\br{\partial}}$ denote the dg manifold whose underlying smooth manifold is $X$, and whose sheaf of functions is the Dolbeault complex $\Omega^{0,\ast}_X$. \subsection{} Now we can explain the AKSZ construction. Suppose we have a compact differential graded manifold $M$, equipped with a volume element of cohomological degree $k$. Let $X$ be a differential graded manifold with a symplectic form of cohomological degree $l$.
Then, the infinite-dimensional differential graded manifold $\operatorname{Maps}(M,X)$ acquires a symplectic form of cohomological degree $l - k$. If $f : M \to X$ is a map, then the tangent space to $\operatorname{Maps}(M,X)$ at $f$ is $$ T_f \operatorname{Maps}(M,X) = \Gamma(M, f^\ast T X). $$ We define a pairing on $T_f \operatorname{Maps}(M,X)$ by the formula $$ \ip{\alpha,\beta} = \int_M \ip{\alpha,\beta}_X. $$ Since the integration map $\int_M : C^{\infty}(M) \to \mbb R$ is of cohomological degree $-k$, and the symplectic pairing on $T X$ is of cohomological degree $l$, the pairing on $T_f \operatorname{Maps}(M,X)$ is of cohomological degree $l - k$. The case of interest in the Batalin-Vilkovisky formalism is when $l - k = -1$. There is a variation of this construction which applies when the source manifold $M$ is non-compact. In this situation, the space $\operatorname{Maps}(M,X)$ has a natural integrable distribution given by the subspace $$ T_f^c \operatorname{Maps}(M,X) = \Gamma_c(M, f^\ast T X) \subset T_f \operatorname{Maps}(M,X) $$ consisting of compactly supported sections. In this situation, instead of having a symplectic pairing on $T_f \operatorname{Maps}(M,X)$, we only have one on the distribution $T_f^c\operatorname{Maps}(M,X)$. The differential of the action functional, instead of being a closed one-form on $\operatorname{Maps}(M,X)$, is a closed one-form along the leaves of the foliation. \subsection{} There are two broad classes of AKSZ theories which are commonly considered. These are the theories of Chern-Simons type, and the theories of holomorphic Chern-Simons type. The two classes of theories are distinguished by the nature of the source dg manifold. In theories of Chern-Simons type, the source differential graded ringed space is $M_{dR}$, where $M$ is an oriented manifold. The orientation on $M$ gives rise to a volume element on $M_{dR}$ of cohomological degree $\operatorname{dim}(M)$.
The target manifolds for Chern-Simons theories of dimension $k$ are dg symplectic manifolds with a symplectic form of cohomological degree $k-1$. For example, perturbative Chern-Simons theory arises when we take the target to be the dg manifold whose underlying manifold is a point, and whose algebra of functions is the algebra $C^\ast(\mathfrak g)$ of cochains on a semi-simple Lie algebra $\mathfrak g$. The Killing form endows this dg manifold with a symplectic form of cohomological degree $2$. This theory is perturbative, because maps from $M_{dR}$ to this dg manifold are the same as connections on the trivial principal $G$-bundle which are infinitesimally close to the trivial connection. Non-perturbative Chern-Simons theory arises from a generalized form of the AKSZ construction which takes the stack $BG$ as the target manifold. Vector bundles on $B G$ are the same as $G$-modules; the tangent bundle of $B G$ is the adjoint module $\mathfrak{g}[1]$. The Killing form on $\mathfrak{g}$ is $G$-equivariant, and so gives rise to a symplectic form on $B G$ of cohomological degree $2$. Rozansky-Witten theory also arises from this framework. Let $X$ be a holomorphic symplectic manifold, with holomorphic symplectic form $\omega$. Let us work over the base ring $\mathbb C[q,q^{-1}]$, where $q$ is a parameter of degree $-2$. Then the symplectic form $q^{-1} \omega$ on $X_{\br{\partial}}$ is of cohomological degree $2$, and so we can define a $3$-dimensional Chern-Simons type theory. The fields of this theory are maps $M_{dR} \to X_{\br{\partial}}$, where $M$ is a $3$-manifold, and everything takes place over the base ring $\mathbb C[q,q^{-1}]$. Another example is the Poisson $\sigma$-model of \cite{Kon97, CatFel01}. Here, the source is $\Sigma_{dR}$ where $\Sigma$ is a smooth surface. The target is the differential graded manifold $T^\ast[1] X$, whose underlying smooth manifold is $X$, and whose algebra of functions is $\Gamma(X, \wedge^\ast T X)$.
The Schouten-Nijenhuis bracket $\{-,-\}$ endows this dg manifold with a symplectic form of cohomological degree $1$. The differential on $\Gamma(X,\wedge^\ast TX)$ is given by bracketing with the Poisson tensor $\pi$. \subsection{} Let us now discuss holomorphic Chern-Simons theory, which is the only quantum field theory we will be concerned with in this paper. In holomorphic Chern-Simons theory, the source dg manifold is $M_{\br{\partial}}$, where $M$ is a complex manifold equipped with a never-vanishing holomorphic volume element $\omega$ (thus, $M$ is a Calabi-Yau manifold). This volume form can be thought of as a volume element on $M_{\br{\partial}}$ of cohomological degree $\operatorname{dim}_{\mathbb C}(M)$. Integration against this volume element is simply the map \begin{align*} \Omega^{0,\operatorname{dim}_{\mathbb C}(M)} (M) &\to \mathbb C \\ \alpha &\mapsto \int_M \omega \wedge \alpha. \end{align*} Theories of holomorphic Chern-Simons type on Calabi-Yau manifolds of complex dimension $k$ can thus be constructed from dg symplectic manifolds with a symplectic form of cohomological degree $k-1$. In this paper, we are only interested in one-dimensional holomorphic Chern-Simons theories. In these theories, the source dg manifold is $\Sigma_{\br{\partial}}$, where $\Sigma$ is a Riemann surface equipped with a never-vanishing holomorphic volume form. The target is $X_{\br{\partial}}$, where $X$ is a holomorphic symplectic manifold. (The holomorphic symplectic form can be thought of as a dg symplectic form on $X_{\br{\partial}}$). \subsection{} We can now give a more precise statement of the theorem relating elliptic cohomology and the Witten genus. \begin{theorem} Let $X$ be a complex manifold. 
Then, \begin{enumerate} \item The obstruction to quantizing the holomorphic Chern-Simons theory whose fields are maps $\mathbb C_{\br{\partial}} \to (T^\ast X)_{\br{\partial}}$ is $$\operatorname{ch}_2 ( T X ) \in H^2(X, \Omega^2_{cl}(X))$$ where $\Omega^2_{cl}(X)$ is the sheaf of closed holomorphic $2$-forms on $X$. \item If this obstruction vanishes (or, more precisely, is trivialized), then we can quantize holomorphic Chern-Simons theory to yield a factorization algebra on $\mathbb C$ with values in quasi-coherent sheaves on $X_{dR}\times \operatorname{Spec} \mathbb C[\hbar]$. We will call this factorization algebra $D^{\hbar}_{X,ch}$. \item If $E$ is an elliptic curve, then there is a quasi-isomorphism of $BD$ algebras in quasi-coherent sheaves on $X_{dR} \times \operatorname{Spec} \mathbb C[\hbar]$ $$ FH ( E, D^{\hbar}_{X,ch } ) \simeq \left( \Omega^{-\ast} ( T^\ast X ) [\hbar] , \hbar L_\pi + \hbar \{ \log \operatorname{Wit}(X,E), - \} \right) . $$ \end{enumerate} \end{theorem} \subsection{} Recall that the factorization homology complex $FH ( E, D^{\hbar}_{X,ch} )$ is defined by first constructing a factorization algebra $D^{E,\hbar}_{X,ch}$ on $E$, using the descent property of factorization algebras, and then taking global sections. Quantum field theories in the sense of \cite{Cos10} have a descent property similar to that satisfied by factorization algebras, and the construction of a factorization algebra from a quantum field theory is compatible with descent. The quantum field theory on an elliptic curve $E$ which arises by descent from holomorphic Chern-Simons theory on $\mathbb C$ is simply holomorphic Chern-Simons theory on $E$. Thus, one can interpret the factorization homology group $FH ( E, D^{\hbar}_{X,ch} )$ in terms of holomorphic Chern-Simons theory on the elliptic curve $E$. From this point of view, $FH( E, D^{\hbar} _{X,ch} ) $ is the cochain complex of global observables for the holomorphic Chern-Simons theory of maps $E \to T^\ast X$.
This theorem is proved using the Wilsonian approach to quantum field theory developed in \cite{Cos10}. The result is then translated into the language of factorization algebras. The proof appears in \cite{Cos10b}. \subsection{} As I mentioned earlier, this factorization algebra is an analytic avatar of the chiral algebra of chiral differential operators constructed by Gorbounov, Malikov and Schechtman \cite{GorMalSch00}. A detailed description of the factorization algebra constructed from holomorphic Chern-Simons theory, and its relationship to chiral differential operators, will appear elsewhere. In the physics literature, Kapustin \cite{Kap05} and Witten \cite{Wit05} have argued that chiral differential operators arise from what is called, in section 3.3 of \cite{Wit05}, a non-linear $\beta$-$\gamma$ system. This non-linear $\beta$-$\gamma$ system is the same as what I have called holomorphic Chern-Simons theory.

\begin{document} \title{A geometric construction of the Witten genus, I} \author{Kevin Costello} \begin{abstract} I describe how the Witten genus of a complex manifold $X$ can be seen from a rigorous analysis of a certain two-dimensional quantum field theory of maps from a surface to $X$. \end{abstract} \maketitle \input{b_overview.tex} \input{c_factorization.tex} \input{d_main_theorem.tex} \input{e_qft.tex} \input{f_hcs.tex}
% Source metadata: arXiv:1006.5422, https://arxiv.org/abs/1006.5422, timestamp 2010-08-30.