\section{Introduction}
The extraction of physical laws from experimental data, often in the form of differential and partial differential equations, may be critical to science and engineering applications where governing equations are unknown. Time-series data collected from experiments, or as the macroscopic aggregate of small scale behavior, often obeys unknown governing equations that are parametrized by time-evolving parameters $\mu(t)$. In some cases, it may be possible to derive physical laws from first principles using data as well as some knowledge of the system, but there are many cases where this is elusive, such as the large scale networked dynamical systems of the power grid and the brain, or chemical kinetics of, for example, the Belousov-Zhabotinsky reaction. Recently, there has been a substantial research effort towards automating the process of data-driven model discovery in order to identify interpretable expressions for the dynamics in the form of ordinary and partial differential equations (ODEs and PDEs):
\begin{equation}
u_t= N(u, u_x, u_{xx}, \dots ,\mu(t)),\quad t\in [0,T],
\label{eq:overview}
\end{equation}
where $N(\cdot)$ characterizes the evolution of the system and its parametric dependencies through the parameter $\mu(t):[0,T]\rightarrow \R$. Although a number of automated discovery techniques have been developed for discovering the right hand side of (\ref{eq:overview}) with constant parameters $\mu(t)=\mu_0$, none have demonstrated the capability to infer the governing equations when the parametrization $\mu(t)$ has explicit time dependence. By imposing group sparsity techniques, we develop a mathematical architecture that allows us to explicitly disambiguate the dynamical evolution (\ref{eq:overview}) from its parametric dependencies $\mu(t)$. This is a critical innovation in model discovery as most realistic systems do indeed have time-dependent parametric dependencies that must be concurrently extracted during the model discovery process.
Figure~\ref{fig:param} demonstrates two prototypical parametric dependencies: (a) a PDE model whose constant parameters change at fixed points in time and (b) a PDE model that depends continuously on the parameter $\mu(t)$. Our proposed model discovery method provides a principled approach to efficiently handle these two parametric cases, thus advancing the field of model discovery by disambiguating the dynamics from parametric time dependence.
Even if the governing equations are known, parametric dependencies in PDEs complicate numerical simulation schemes and challenge one's ability to produce accurate future state predictions.
Characterizing parametric dependence is also a critical task for model reduction in both time-independent~\cite{quarteroni2015reduced,hesthaven2016certified} and time-dependent PDEs~\cite{benner2015survey,benner2017model}.
Thus the ability to explicitly extract the parametric dependence of a spatio-temporal system is necessary for accurate, quantitative characterization of PDEs.
\begin{figure}[t]
\centering
\begin{overpic}[width=0.56\textwidth]{param_fig}
\put(-6,17){${\mu}(t)$}
\put(3,30){(a)}
\put(57,30){(b)}
\put(18,-3){\small Time $t$}
\put(73,-3){\small Time $t$}
\put(65,8){$u_t \!=\! N({u},{\mu}(t))$}
\put(14,8){$u_t \!=\! N({u},{\mu}(t))$}
\end{overpic}
\caption{Two prototypical scenarios for parametric model discovery: (a) Parameters $\mu(t)$ are piecewise constant and change at fixed points in time, (b) The underlying PDE model $u_t \!=\! N({u},{\mu}(t))$ depends on a continuously varying parameter ${\mu}(t)$.}
\label{fig:param}
\vspace{-.25in}
\end{figure}
More broadly, system identification using machine learning methods has emerged as a viable alternative to expert knowledge and first principles derivations. It is important to separate the field of system identification into two distinct categories: (i) methods that accurately reflect observed dynamics using black box functions (e.g. neural networks), and (ii) methods that recover closed form and interpretable expressions for the dynamics in the form of ordinary and partial differential equations (ODEs and PDEs). This duality reflects the two-cultures narrative of machine learning and classical statistics made popular by Leo Breiman~\cite{breiman2001statistical}. On one hand, the research may assume a specific model for the data with a known mechanism, while on the other, the research is interested in algorithmic models that, while not necessarily reflecting the true mechanism, are accurate in prediction. While several recent works have made progress from both viewpoints, we focus on the former. The terms in a differential equation often have physical interpretations and motivation, e.g. diffusion and advection are ubiquitous in many physical systems and are characterized by prototypical expressions in a PDE. For systems where first-principles derivations prove intractable, we may gain insights into the underlying physics of the system based on the terms in the identified PDE. Further, we view the process of extracting closed form equations as being more generalizable than fitting black box models to a specific dataset. Specifically, altering initial conditions will not change governing equations, but may break a machine learned black box solver.
Research towards the automated inference of dynamical systems from data is not new \cite{Crutchfield1987cs}. Methods for extracting linear systems from time series data include
the eigensystem realization algorithm (ERA) \cite{juang1985eigensystem} and Dynamic Mode Decomposition (DMD) \cite{Rowley2009jfm,schmid2010dynamic,tu2013dynamic,Brunton2015jcd,kutz2016dynamic,askham2018variable}. Identification of nonlinear systems has, until very recently, relied on black box methods. These include NARMAX \cite{chen1989representations}, neural networks \cite{gonzalez1998identification}, equation free methods \cite{Kevrekidis2003cms,kevrekidis2004equation,kevrekidis2009equation}, and Laplacian spectral analysis \cite{Giannakis2012pnas}. There has also been considerable recent work towards data-driven approximation of the Koopman operator~\cite{Mezic2005nd,Budivsic2012chaos,Mezic2013arfm} via extensions of DMD \cite{williams2015data}, diffusion maps \cite{giannakis2017data}, delay-coordinates \cite{brunton2017chaos, Arbabi2016arxiv, das2017delay} and neural networks \cite{Yeung2017arxiv,Takeishi2017nips,Wehmeyer2017arxiv,Mardt2017arxiv,Otto2017arxiv,lusch2017deep}.
The use of genetic algorithms for nonlinear system identification \cite{Bongard2007pnas, Schmidt2009science} allowed for the derivation of physical laws in the form of ordinary differential equations. Genetic algorithms are highly effective in learning complex functional forms but are slower computationally than simple regression. Sparsity promoting methods have been used previously in dynamical systems~\cite{Schaeffer2013pnas, mackey2014compressive, caflisch2013pdes}, and sparse regression has been leveraged to identify parsimonious ordinary differential equations from a large library of allowed functional forms~\cite{Brunton2016,chen2017network}. Much work building on the sparse regression framework has followed and includes inferring rational functions \cite{Mangan2016}, the use of information criteria for model validation~\cite{mangan2017model}, constrained regression for conservation laws \cite{loiseau2018constrained}, model discovery with highly corrupt data \cite{tran2017exact}, the learning of bifurcation parameters \cite{schaeffer2017learning_group}, stochastic dynamics \cite{boninsegna2017sparse}, weak forms of the observed dynamics \cite{schaeffer2017sparse}, and regression with small amounts of data \cite{schaeffer2017extracting, kaiser2017sparse}. In contrast to sparse regression, a neural network based approach was proposed to identify ordinary differential equations, sacrificing some interpretability for a richer class of allowed functional forms \cite{raissi2018multistep}.
Sparse regression based methods for PDEs were first used in \cite{rudy2017data,schaeffer2017learning}. These methods were demonstrated on a large class of PDEs and have the benefit of being highly interpretable, but struggle with numerical differentiation of noisy data. In Rudy {\em et al.}~\cite{rudy2017data} the noise was addressed by testing with only a small degree of noise (large SNR), while in Schaeffer {\em et al.}~\cite{schaeffer2017learning} noise was added to the time derivative after it was computed from clean data. Alternatively, Gaussian processes were used to determine linear PDEs \cite{raissi2017machine} and nonlinear PDEs known up to a set of coefficients \cite{raissi2017hidden}. Using Gaussian process regression requires less data than sparse regression and naturally manages noise, but the method is only applicable to PDEs with a known structure. Reference \cite{long2017pde} makes a substantial contribution by using neural networks to accurately learn partial differential equations with non-constant coefficients. A neural network is constructed that mimics a forward Euler timestepping scheme and the accuracy of potential models is evaluated based on their future state prediction accuracy. While seemingly more robust than sparse regression, the method in \cite{long2017pde} does not penalize extraneous terms in the learned PDE and thus falls short of producing optimally parsimonious models. Furthermore, \cite{long2017pde} only tests the method on a nonlinear problem using a relatively strong Ansatz. Neural networks were also used in \cite{raissi2017physics1} and \cite{raissi2017physics2} to solve and to estimate parameters in partial differential equations with known terms to a high degree of accuracy. However, similar to \cite{raissi2017hidden}, it is assumed that the PDE is known up to a set of coefficients. A more sophisticated neural network approach was used in \cite{raissi2018deep} to learn dynamics of systems with unknown terms.
However, the approach in \cite{raissi2018deep} does not give closed form representations of the dynamics and the resulting neural network model therefore does not give insights into the underlying physics.
In this work, we present a sparse regression framework for identifying PDEs with non-constant coefficients, something that none of the previous PDE discovery methods are equipped to do. Specifically, we allow for variation in the value of a coefficient across time or space, but maintain that the active terms in the PDE must be consistent. This is an important innovation in practice, as the parameters of physical systems often vary during the measurement process, so the parametric dependencies must be disambiguated from the PDE itself. Our method extends the sparse regression frameworks first proposed for PDE discovery~\cite{rudy2017data,schaeffer2017learning} by using group sparsity and results in a more parsimonious and interpretable model than neural networks. We are still limited by the accuracy of numerical differentiation and by the library terms in the sparse regression. Numerical differentiation using neural networks as shown in \cite{raissi2018deep} appears promising as a method for obtaining more accurate time derivatives from noisy data. The limitation based on terms included in the library seems more permanent. Any closed-form model expression must be representable with a finite set of building blocks. Here we use only monomials in our data and its derivatives, since these are the common terms seen in physics, but there is no restriction on the terms that may be included in the library.
\section{Methods}
The parametric discovery method relies on several foundational mathematical tools. In the following subsections, we will discuss the identification of constant coefficient equations as well as regression methods for group sparsity. Finally, we combine these ideas to show how one may identify parametric PDEs and suggest a methodology for model selection that balances accuracy and the number of active terms in the PDE.
\subsection{Identification of constant coefficient partial differential equations}
Several recent methods have been proposed for the identification of constant coefficient partial differential equations from data. In this work we expand on the sparse regression framework, PDE-FIND, used in \cite{rudy2017data}. We will briefly elaborate on the method and refer the reader to the original paper for details. The PDE-FIND algorithm provides a principled technique for discovering the underlying PDE from spatial time series measurements alone using a library of candidate functions for the PDE and sparse regression. For the identification of constant coefficient PDEs, we have a dataset ${\bf U}$, which is a discretization of a function $u(x,t)$ that we assume satisfies the PDE of the form given in (\ref{eq:overview}):
\begin{equation}
u_t= N(u, u_x, u_{xx}, \dots) = \sum_{j=1}^d N_j(u, u_x, u_{xx}, \dots) \xi_j \, .
\label{eq:const_coeff_PDE}
\end{equation}
We assume that the nonlinear expression $N(\cdot )$ may be expanded as a sum of simple monomial basis functions $N_j$ of $u$ and its derivatives. Note that this sum is not unique and that we can include extra basis functions by simply setting the corresponding $\xi_j$ to be zero. In PDE-FIND, which constructs an overcomplete library of many possible monomial basis functions and regresses to find $\xi$, sparsity is used to ensure that basis functions that do not appear in the PDE are set to zero in the sum.
To be more precise, given a dataset ${\bf U} \in \R^{n \times m}$ representing $m$ timesteps of a PDE discretized with $n$ gridpoints, we numerically differentiate in both $x$ and $t$ to form the linear regression problem given by (\ref{eq:PDE_FIND})
\begin{equation}
\underbrace{
\begin{pmatrix}
u_t(x_1,t_1) \\
u_t(x_2,t_1) \\
\vdots \\
u_t(x_n,t_m) \\
\end{pmatrix}}_{{\bf U}_t}
=
\underbrace{
\begin{pmatrix}
1 & u(x_1,t_1) & u_x(x_1,t_1) & \hdots & u^3u_{xxx}(x_1,t_1) \\
1 & u(x_2,t_1) & u_x(x_2,t_1) & \hdots & u^3u_{xxx}(x_2,t_1) \\
\vdots & \vdots & \vdots & & \vdots \\
1 & u(x_n,t_m) & u_x(x_n,t_m) & \hdots & u^3u_{xxx}(x_n,t_m) \\
\end{pmatrix}}_{\mathbf{\Theta} ({\bf U})}
\xi\label{eq:PDE_FIND}
\end{equation}
which is a large, overdetermined linear system of equations ${\bf A}{\bf x}={\bf b}$. Note that here we have shown a problem where derivatives up to third order are multiplied by powers of $u$ up to cubic order, but one could include arbitrarily many library functions. Solving for $\xi$ and ensuring sparsity gives the PDE. PDE-FIND has been shown to accurately identify several partial differential equations from data alone. The sparsity constraint is a regularizer for the linear regression~\cite{rudy2017data}.
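As a concrete illustration, the regression (\ref{eq:PDE_FIND}) can be sketched in a few lines of NumPy. The example below is not the full PDE-FIND implementation: it uses a synthetic heat-equation dataset, a deliberately small library, and a single hard-thresholding pass (the published algorithm iterates a thresholded ridge regression), so all variable names and the threshold value are our own choices.

```python
import numpy as np

# Synthetic data: u(x,t) = e^{-t} sin(x) + 0.5 e^{-4t} sin(2x) solves u_t = u_xx.
x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
t = np.linspace(0, 1, 50)
X, T = np.meshgrid(x, t, indexing="ij")
U = np.exp(-T) * np.sin(X) + 0.5 * np.exp(-4 * T) * np.sin(2 * X)

# Derivatives: spectral in x (periodic domain), finite differences in t.
k = 1j * 2 * np.pi * np.fft.fftfreq(len(x), d=x[1] - x[0])
Ux = np.real(np.fft.ifft(k[:, None] * np.fft.fft(U, axis=0), axis=0))
Uxx = np.real(np.fft.ifft(k[:, None] ** 2 * np.fft.fft(U, axis=0), axis=0))
Ut = np.gradient(U, t, axis=1)

# Candidate library Theta(U): each column is a monomial in u and its derivatives.
names = ["1", "u", "u_x", "u_xx", "u*u_x"]
Theta = np.stack([np.ones_like(U).ravel(), U.ravel(), Ux.ravel(),
                  Uxx.ravel(), (U * Ux).ravel()], axis=1)

# Least squares followed by one hard-thresholding pass and a refit.
xi = np.linalg.lstsq(Theta, Ut.ravel(), rcond=None)[0]
keep = np.abs(xi) > 0.1
xi[~keep] = 0.0
xi[keep] = np.linalg.lstsq(Theta[:, keep], Ut.ravel(), rcond=None)[0]

recovered = {n: c for n, c in zip(names, xi) if c != 0.0}
```

On this clean dataset, only the $u_{xx}$ column survives the threshold, with a coefficient close to 1.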
\subsection{Group Sparsity}
In a typical sparse regression, we seek a sparse solution to the linear system of equations ${\bf A}{\bf x}={\bf b}$. Accuracy of the predictor, $\|{\bf A}{\bf x}-{\bf b}\|$, is balanced against the number of nonzero coefficients in ${\bf x}$. Thus the sparse regularization enforces a solution ${\bf x}$ (the variable $\boldsymbol{\xi}$ in (\ref{eq:PDE_FIND})) with many zeros. In this paper, we use the notion of group sparsity to find time series representing each parameter in the PDE, rather than single values. We group collections of terms in ${\bf x}$ together and seek solutions to ${\bf A}{\bf x}={\bf b}$ that minimize the number of groups with nonzero coefficients.
One well studied method for solving regression problems with group sparsity is the group LASSO (GLASSO)~\cite{friedman2010note}
\begin{equation}
\label{eq:group_lasso}
\hat{\bf x} = \underset{\bf w}{\mbox{arg\,min}} \frac{1}{2n} \left\|{\bf b} - \sum_{g \in \mathcal{G}} \mathbf{A}^{(g)} {\bf w}^{(g)}\right\|_2^2 + \lambda \sum_{g \in \mathcal{G}} \|{\bf w}^{(g)}\|_2 .
\end{equation}
Here $\mathcal{G}$ is a collection of groups, each of which contains a subset of the indices enumerating the columns of ${\bf A}$ and coefficients in ${\bf x}$. Note that the second term in the GLASSO corresponds to a convex relaxation of the number of groups containing a nonzero value.
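For intuition, the GLASSO objective (\ref{eq:group_lasso}) can be minimized by proximal gradient descent, where the proximal operator of the group penalty is block soft-thresholding. The sketch below is a minimal NumPy illustration on a toy problem of our own construction, not a production group-lasso solver (those typically use block coordinate descent).

```python
import numpy as np

def block_soft_threshold(v, tau):
    # Proximal operator of tau * ||v||_2: shrink the whole group toward zero.
    nrm = np.linalg.norm(v)
    return np.zeros_like(v) if nrm <= tau else (1.0 - tau / nrm) * v

def group_lasso(A, b, groups, lam, maxit=5000, tol=1e-10):
    """Minimize (1/2n)||b - A x||^2 + lam * sum_g ||x_g||_2 by proximal gradient."""
    n = len(b)
    step = n / np.linalg.norm(A, 2) ** 2        # 1/L for the smooth term
    x = np.zeros(A.shape[1])
    for _ in range(maxit):
        z = x - step * (A.T @ (A @ x - b)) / n  # gradient step on the smooth term
        x_new = np.empty_like(x)
        for g in groups:
            x_new[g] = block_soft_threshold(z[g], step * lam)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

# Toy problem: only the first of two groups is active.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 4))
x_true = np.array([1.0, 2.0, 0.0, 0.0])
b = A @ x_true
x_hat = group_lasso(A, b, [[0, 1], [2, 3]], lam=0.01)
```

With a small penalty the inactive group is zeroed out as a whole, while the active group is recovered up to the usual shrinkage bias.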
The concept of group sparsity has been used in several previous methods for identifying dynamics given by ordinary differential equations \cite{chen2017network, schaeffer2017learning_group}. As will be shown below, we find that the GLASSO performs poorly in the case of identifying PDEs. We instead use a sequential thresholding method based on ridge regression, similar to the method used in \cite{rudy2017data} but adapted for group sparsity. A sequential thresholding method was also used in \cite{schaeffer2017learning_group} for group sparsity, but for ordinary rather than partial differential equations. Our method, which we call Sequential Grouped Threshold Ridge Regression (SGTR), is summarized in Algorithm \ref{SGTR}.
\begin{center}
\begin{minipage}{0.8\textwidth}
\begin{algorithm}[H]
\caption{SGTR($\mathbf{A}, {\bf b}, \mathcal{G}, \lambda, \epsilon, \text{maxit}, f({\bf x}) = \|{\bf x} \|_2$)
}
\label{SGTR}
\vspace{1 mm}
\# Solves ${\bf A}{\bf x} \approx {\bf b}$ with sparsity imposed on groups in $\mathcal{G}$\\
\vspace{1 mm}
\# Initialize coefficients with ridge regression\\
${\bf x} = \mbox{arg\,min}_{\bf w} \|{\bf b} - \mathbf{A}{\bf w}\|_2^2 + \lambda \|{\bf w} \|_2^2$
\vspace{3 mm}
\# Threshold groups with small $f$ and repeat
for $iter = 1,\hdots,\text{maxit}$:\\
\hspace{1 cm} \# Remove groups with sufficiently small $f({\bf x}^{(g)})$
\hspace{1 cm} $\mathcal{G} = \{g \in \mathcal{G} : f({\bf x}^{(g)}) > \epsilon\}$\\
\hspace{1 cm} \# Refit these groups (note this sets ${\bf x}^{(g)}=0$ for $g \not\in \mathcal{G}$)
\hspace{1 cm} ${\bf x} = \mbox{arg\,min}_{\bf w} \|{\bf b} - \sum_{g \in \mathcal{G}}\mathbf{A}^{(g)} {\bf w}^{(g)}\|_2^2 + \lambda \|{\bf w}\|_2^2$
\vspace{3 mm}
\# Get unbiased estimates of coefficients after finding sparsity\\
${\bf x}^{(\mathcal{G})} = \mbox{arg\,min}_{\bf w} \|{\bf b} - \sum_{g \in \mathcal{G}}\mathbf{A}^{(g)} {\bf w}^{(g)}\|_2^2$\\
return ${\bf x}$
\end{algorithm}
\end{minipage}
\end{center}
\vspace{3 mm}
Throughout the training, $\mathcal{G}$ tracks the groups that have nonzero coefficients, and it is pared down as we threshold groups with sufficiently small relevance, as measured by $f$. We use the 2-norm of the coefficients in each group for $f$, but one could also consider arbitrary functions. In particular, for problems where the coefficients within each group have a natural ordering, as they do in our case as time series or spatial functions, one could consider smoothness or other properties of the functions. In practice, we normalize each column of ${\bf A}$ and ${\bf b}$ so that differences in scale between the groups do not affect the result of the algorithm. For the GLASSO, we always perform an unregularized least squares regression on the nonzero coefficients after the sparsity pattern has been discovered in order to debias the coefficients. We found SGTR to outperform the GLASSO at correctly identifying the active terms in parametric PDEs.
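A direct NumPy transcription of Algorithm \ref{SGTR} might look as follows. This is a sketch rather than production code, and the toy problem at the end (matrix shapes, group structure, and threshold) is chosen purely for illustration.

```python
import numpy as np

def sgtr(A, b, groups, lam=1e-5, eps=1e-2, maxit=10, f=np.linalg.norm):
    """Sequential Grouped Threshold Ridge Regression (Algorithm 1), sketched."""
    def ridge(cols):
        # Ridge regression restricted to the columns in `cols`.
        Ac = A[:, cols]
        return np.linalg.solve(Ac.T @ Ac + lam * np.eye(len(cols)), Ac.T @ b)

    active = list(groups)
    x = np.zeros(A.shape[1])
    cols = [i for g in active for i in g]
    x[cols] = ridge(cols)                              # initialize with ridge
    for _ in range(maxit):
        active = [g for g in active if f(x[g]) > eps]  # drop small groups
        x = np.zeros(A.shape[1])
        cols = [i for g in active for i in g]
        if not cols:
            return x                                   # everything thresholded away
        x[cols] = ridge(cols)                          # refit surviving groups
    # Debias: unregularized least squares on the surviving groups.
    x[cols] = np.linalg.lstsq(A[:, cols], b, rcond=None)[0]
    return x

# Toy problem: three groups of two columns; only the first group is active.
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 6))
x_true = np.array([2.0, -1.0, 0.0, 0.0, 0.0, 0.0])
b = A @ x_true
x_hat = sgtr(A, b, [[0, 1], [2, 3], [4, 5]], eps=0.5)
```

On this noise-free example the two inactive groups are thresholded away on the first pass, and the final unregularized refit recovers the active coefficients.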
\subsection{Data-driven identification of parametric partial differential equations}
In the identification of parametric PDEs, we consider equations of the form
\begin{equation}
\label{eq:parametric_PDE}
u_t = N(u, u_x, \hdots, \mu(t)) = \sum_{j=1}^d N_j(u, u_x, \hdots) \xi_j (t).
\end{equation}
Note that this equation is similar to (\ref{eq:const_coeff_PDE}) but has time-varying parametric dependence. To capture spatial variation in the coefficients, we simply replace $\xi(t)$ with $\xi(x)$. The PDE is assumed to contain a small number of active terms, each with a time varying coefficient $\xi (t)$. We seek to solve two problems: (i) determine which coefficients are nonzero and (ii) find the values of the coefficients for each $\xi_j$ at each timestep or spatial location for which we have data.
For time dependent problems, we construct a separate regression for each timestep, allowing for variation in the PDE between timesteps. Similar to the PDE-FIND method, we construct a library of candidate functions for the PDE using monomials in our data and its derivatives so that
\begin{equation}
\label{eq:single_timestep_library}
\mathbf{\Theta} \left(u^{(j)}\right) = \begin{pmatrix}
\vline & \vline & \vline & \vline \\
1 & u^{(j)} & \hdots & u^3u^{(j)}_{xxx} \\
\vline & \vline & \vline & \vline
\end{pmatrix}
\end{equation}
where the set of $m$ equations is given by
\begin{equation}
\label{eq:parametric_PDE_FIND_setup_2}
u_t^{(j)} = \mathbf{\Theta} \left(u^{(j)}\right)\xi^{(j)} ,\,\, j = 1,\hdots, m \, .
\end{equation}
Our goal is to solve the set of equations given by (\ref{eq:parametric_PDE_FIND_setup_2}) with the constraint that each $\xi^{(j)}$ is sparse and that they all share the same sparsity pattern. That is, we want a fixed set of active terms in the PDE. To do this, we consider the set of equations as a single linear system and use group sparsity. Expressing the system of equations for the parametric equation as a single linear system we get the block diagonal structure given by
\begin{equation}
\label{eq:parametric_PDE_FIND_setup}
\begin{pmatrix}
u_t^{(1)}\\
u_t^{(2)}\\
\vdots \\
u_t^{(m)}
\end{pmatrix} =
\underbrace{\begin{pmatrix}
\mathbf{\Theta} \left(u^{(1)}\right) \\
& \mathbf{\Theta} \left(u^{(2)}\right) \\
&& \ddots \\
&&& \mathbf{\Theta} \left(u^{(m)}\right)
\end{pmatrix}}_{\tilde{\mathbf{\Theta}}}
\begin{pmatrix}
\xi^{(1)}\\
\xi^{(2)}\\
\vdots \\
\xi^{(m)} \,
\end{pmatrix}.
\end{equation}
We solve (\ref{eq:parametric_PDE_FIND_setup}) using SGTR with columns of the block diagonal library matrix grouped by their corresponding term in the PDE. Thus for $m$ timesteps and $d$ candidate functions in the library, the groups are defined as $\mathcal{G} = \big\{ \{j + d\cdot(i-1) : i = 1,\dots, m\} : j = 1,\hdots, d\big\}$. This ensures a sparse solution to the PDE while also allowing an arbitrary time series for each coefficient. To obtain the correct level of sparsity, we separate 20\% of the data from each timestep to use as a validation set and search over the parameter $\lambda$ in the SGTR algorithm using cross validation. For problems with spatial, rather than temporal, variation in the coefficients, we simply group by the spatial rather than the time coordinate. A similar block diagonal structure is obtained, but with $n$ blocks of size $m \times d$ rather than $m$ blocks of size $n \times d$, and the groups are defined by $\mathcal{G} = \big\{ \{j + d\cdot(i-1) : i = 1,\dots, n\} : j = 1,\hdots, d\big\}$.
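In code, assembling $\tilde{\mathbf{\Theta}}$ and the group index sets is straightforward. The sketch below uses zero-based indices (so group $j$ collects columns $j + d\,i$) and random placeholder libraries purely to fix the shapes.

```python
import numpy as np

# Toy shapes: m timesteps, each contributing an n x d library Theta(u^{(j)}).
n, m, d = 30, 4, 5
rng = np.random.default_rng(0)
Thetas = [rng.standard_normal((n, d)) for _ in range(m)]

# Stack the per-timestep libraries into the block-diagonal matrix Theta_tilde.
Theta_tilde = np.zeros((n * m, d * m))
for i, Th in enumerate(Thetas):
    Theta_tilde[i * n:(i + 1) * n, i * d:(i + 1) * d] = Th

# Group j collects the column of candidate term j at every timestep.
groups = [[j + d * i for i in range(m)] for j in range(d)]
```

The $d$ groups partition all $d\,m$ columns, so imposing group sparsity selects whole PDE terms across every timestep at once.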
Since we are evaluating the relevance of groups based on their norm, it is important to consider differences in the scale of the candidate functions. For example, if $u \sim \mathcal{O}(10^{-2})$, then a cubic term will be $\mathcal{O}(10^{-6})$; a relatively large coefficient multiplying this data may have little effect on the dynamics, yet will not be removed by the hard threshold precisely because of its size. To remedy this, we normalize each candidate function represented in $\mathbf{\Theta}$ as well as each $u_t^{(j)}$ to have unit length prior to the group thresholding algorithm, and then correct for the normalization after we have discovered the correct sparsity pattern.
\subsection{Model Selection}
For each model we test both the GLASSO and SGTR using an exhaustive range of parameter values. Let $\tilde{\mathbf{\Theta}}$ denote the block diagonal matrix shown in equation \eqref{eq:parametric_PDE_FIND_setup} with all columns normalized to unit length, and let $\tilde{u}_t$ be the vector of all time derivatives, each timestep normalized to unit length so that $\|\tilde{u}_t\| = \sqrt{m}$. For the GLASSO, we find the minimal value of $\lambda$ that will set all coefficients to zero, which is given by
\begin{equation}
\label{eq:lam_max}
\lambda_{\mbox{max}} = \underset{g \in \mathcal{G}}{\mbox{max}} \,\frac{1}{n}\|\tilde{\mathbf{\Theta}}^{(g)^T}\tilde{u}_t\|_2 .
\end{equation}
We check 50 values of $\lambda$, logarithmically spaced between $10^{-5} \lambda_{\mbox{max}}$ and $\lambda_{\mbox{max}}$.
For SGTR, we search over the range of tolerances between $\epsilon_{min}$ and $\epsilon_{max}$ defined as
\begin{equation}
\label{eq:max_min_tol}
\epsilon_{max/min} = \underset{g \in \mathcal{G}}{\max/\min}\, \|\xi_{\text{ridge}}^{(g)}\|_2,
\end{equation}
where $\xi_{\text{ridge}} = (\tilde{\mathbf{\Theta}}^T\tilde{\mathbf{\Theta}} + \lambda I)^{-1}\tilde{\mathbf{\Theta}}^T\tilde{u}_t$. Note that by definition, $\epsilon_{min}$ is the minimum tolerance that has any effect on the sparsity of the predictor and $\epsilon_{max}$ is the minimum tolerance that guarantees all coefficients are set to zero. A set of 50 intermediate tolerances, equally spaced on a logarithmic scale, is tested between $\epsilon_{min}$ and $\epsilon_{max}$.
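This tolerance sweep can be sketched as follows, assuming the columns of $\tilde{\mathbf{\Theta}}$ and $\tilde{u}_t$ have already been normalized; the function name and the toy data are ours.

```python
import numpy as np

def tolerance_grid(Theta, ut, groups, lam=1e-5, num=50):
    """Log-spaced thresholds between eps_min and eps_max, the smallest and
    largest group norms of the initial ridge estimate xi_ridge."""
    d = Theta.shape[1]
    xi = np.linalg.solve(Theta.T @ Theta + lam * np.eye(d), Theta.T @ ut)
    norms = [np.linalg.norm(xi[g]) for g in groups]
    return np.logspace(np.log10(min(norms)), np.log10(max(norms)), num)

# Toy usage: three groups with clearly different scales.
rng = np.random.default_rng(2)
Theta = rng.standard_normal((100, 6))
ut = Theta @ np.array([1.0, 1.0, 0.1, 0.1, 0.01, 0.01])
tols = tolerance_grid(Theta, ut, [[0, 1], [2, 3], [4, 5]])
```

Every threshold in the returned grid changes the sparsity pattern of at least one candidate model, which is what makes the sweep exhaustive.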
To select the optimal model generated via each method, we evaluate the models using the AIC-inspired loss function
\begin{equation}
\label{eq:AIC}
\mathcal{L}(\xi) = N \ln\left(\dfrac{\|\tilde{\mathbf{\Theta}}\xi-\tilde{u}_t\|_2^2}{N} + \epsilon \right) + 2k
\end{equation}
where $k = \|\xi\|_0/m$ is the number of active terms in the identified PDE and $N$ is the number of rows in $\tilde{\mathbf{\Theta}}$, which is equal to the size of our original dataset ${\bf U}$.
Equation \eqref{eq:AIC} is closely related to the Akaike Information Criterion (AIC) \cite{akaike1974new}. Typically, the mean square error of a linear model is used to evaluate goodness of fit, but in our case there is error in computing the time derivative $u_t$, so we assume that any linear model which perfectly fits the data is overfit. We have added $\epsilon = 10^{-5}$ to the mean square error of each model as a floor in order to avoid overfitting. Without this addition, our algorithm selects insufficiently parsimonious representations of the dynamics.
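The loss \eqref{eq:AIC} reduces to a few lines of code; here $k$ counts active PDE terms by dividing the number of nonzero coefficients by the number of timesteps $m$.

```python
import numpy as np

def aic_loss(Theta, xi, ut, m, eps=1e-5):
    """AIC-inspired loss N*ln(MSE + eps) + 2k with k = ||xi||_0 / m."""
    N = Theta.shape[0]
    k = np.count_nonzero(xi) / m
    mse = np.sum((Theta @ xi - ut) ** 2) / N
    return N * np.log(mse + eps) + 2 * k

# Sanity check: a perfect one-term fit has loss N*ln(eps) + 2.
loss = aic_loss(np.eye(4), np.array([1.0, 0.0, 0.0, 0.0]),
                np.array([1.0, 0.0, 0.0, 0.0]), m=1)
```

The floor $\epsilon$ keeps the logarithm finite for a perfect fit, so a zero-residual model cannot drive the loss to $-\infty$ and crowd out sparser alternatives.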
\begin{figure}[t]
\centering
\includegraphics[width=1.0\textwidth]{aic_example}
\caption{Example of loss function evaluated for a number of candidate models for the parametric Burgers' equation. Library included derivatives up to 4th order multiplying powers of $u$ up to cubic. Left: 50 models obtained via SGTR algorithm using values of $\epsilon$ between $\epsilon_{min}$ and $\epsilon_{max}$. Right: 50 models obtained via GLASSO for $\lambda$ between $10^{-5}\lambda_{max}$ and $\lambda_{max}$.}
\label{fig:aic_fig}
\end{figure}
Figure~\ref{fig:aic_fig} illustrates the loss function from equation \eqref{eq:AIC} evaluated on models derived from 50 values of $\epsilon$ and $\lambda$ using SGTR and GLASSO respectively. Initially, a low penalty in each algorithm yields a model that is overfit to the data given our sparsity criteria. For an intermediate value of $\epsilon$ or $\lambda$, a more parsimonious but still predictive model is obtained. For sufficiently high values, the model is too sparse and is no longer predictive.
\section{Computational Results of Parametric PDE Discovery}
We test our method for the discovery of parametric partial differential equations on four canonical models: Burgers' equation with a time varying nonlinear term, the Navier-Stokes equation for vorticity with a jump in Reynolds number, a spatially dependent advection equation, and a spatially dependent Kuramoto-Sivashinsky equation. In each case the method is also tested after introducing white noise with mean magnitude equal to 1\% of the $L^2$-norm of the dataset. The method is able to accurately identify the dynamics in each case except the Kuramoto-Sivashinsky equation, where the inclusion of a fourth order derivative makes numerical evaluation with noise highly challenging. A comparison with GLASSO regression is also given for a number of the examples.
\subsection{Burgers' Equation with Diffusive Regularization}
\begin{figure}[t]
\centering
\includegraphics[width=1.0\textwidth]{burgers_solution}
\caption{Left: dataset for identification of the parametric diffusive Burgers' equation. Here the PDE was evolved on the interval $[-8,8]$ with periodic boundary conditions and $t \in [0,10]$. Right: Coefficients for the terms in the parametric Burgers' equation. The diffusion was held constant at 0.1 while the nonlinear advection coefficient is given by $a(t)=-(1+\sin(t)/4)$.}
\label{fig:parametric_burgers_solution}
\end{figure}
To test the parametric discovery of PDEs, we consider a solution of Burgers' equation with a sinusoidally oscillating coefficient $a(t)$ for the nonlinear advection term
\begin{equation}
\label{eq:burgers}
\begin{aligned} u_t &= a(t) uu_x + 0.1 u_{xx}\\[.1in]
a(t) &= -\left(1 + \frac{\sin(t)}{4}\right)
\end{aligned}
\end{equation}
where a small amount of diffusion is added to regularize the evolution dynamics.
The time dependent Burgers' equation was solved numerically using a spectral method on the interval $[-8,8]$ with periodic boundary conditions and $t \in [0,10]$ with $n = 256$ grid points and $m = 256$ time steps. We search for parsimonious representations of the dynamics by including powers of $u$ up to cubic order, which can be multiplied by derivatives of $u$ up to fourth order. For the noise-free dataset we use the discrete Fourier transform for computing derivatives. For the noisy dataset, we use polynomial interpolation to smooth the derivatives~\cite{rudy2017data}.
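A minimal way to regenerate such a dataset is a Fourier pseudospectral discretization in $x$ combined with an adaptive ODE integrator in $t$. The sketch below follows that recipe for (\ref{eq:burgers}); the Gaussian initial condition is our assumption, since the initial data of the original simulation is not specified here.

```python
import numpy as np
from scipy.integrate import solve_ivp

# u_t = a(t) u u_x + 0.1 u_xx on [-8, 8] with periodic boundary conditions.
n, L = 256, 8.0
x = np.linspace(-L, L, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=2 * L / n)

def rhs(t, u):
    a = -(1.0 + np.sin(t) / 4.0)
    u_hat = np.fft.fft(u)
    ux = np.real(np.fft.ifft(1j * k * u_hat))      # spectral first derivative
    uxx = np.real(np.fft.ifft(-k ** 2 * u_hat))    # spectral second derivative
    return a * u * ux + 0.1 * uxx

u0 = np.exp(-((x + 2.0) ** 2))                      # assumed initial condition
t_eval = np.linspace(0.0, 10.0, 256)
sol = solve_ivp(rhs, (0.0, 10.0), u0, t_eval=t_eval, rtol=1e-6, atol=1e-8)
U = sol.y                                           # n x m snapshot matrix
```

The resulting $256 \times 256$ snapshot matrix is exactly the shape of dataset fed to the library construction above; on a periodic domain, (\ref{eq:burgers}) conserves the spatial integral of $u$, which provides a quick sanity check on the integration.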
\begin{figure}[t]
\centering
\includegraphics[width=1.0\textwidth]{parametric_burgers_coeffs}
\caption{Time series discovered for the coefficients of the parametric Burgers' equation. Top row: SGTR method, which correctly identifies the two terms. Bottom row: GLASSO method which adds several additional (incorrect) terms to the model. The left panels are noise-free, while the right panels contain 1\% noise. This parametric dependency is illustrated in Fig.~\ref{fig:param}b.}
\label{fig:parametric_burgers}
\end{figure}
The resulting time series for the identified nonzero coefficients are shown in Fig.~\ref{fig:parametric_burgers}. SGTR correctly identified the active terms in the PDE for both the noise-free and noisy datasets, whereas GLASSO fails in both cases to produce the correct PDE and its parametric dependencies.
\subsection{Navier-Stokes: Flow around a cylinder}
We consider the fluid flow around a circular cylinder by simulating the Navier-Stokes vorticity equation
\begin{equation}
\label{eq:navier_stokes}
\omega_t + \mathbf{u}\cdot \nabla \omega = \dfrac{1}{\nu (t)} \Delta \omega.
\end{equation}
Data is generated using the Immersed Boundary Projection Method (IBPM) \cite{taira:07ibfs,taira:fastIBPM} with $n_x = 449$ and $n_y=199$ spatial points in $x$ and $y$ respectively, and 1000 timesteps with $dt = 0.02$. The Reynolds number is adjusted halfway through the simulation from $\nu = 100$ initially to $\nu = 75$. This is representative of the fluid velocity exhibiting a sudden decrease midway through the data collection. Our library of candidate functions is constructed using up to second order derivatives of the vorticity multiplied by up to quadratic functions of the data. To keep the size of the machine learning problem tractable, we subsample 1000 random spatial locations from the wake of the cylinder to construct our library at every tenth timestep~\cite{rudy2017data}. For the noise-free dataset, far fewer points are needed to accurately identify the dynamics. We suspect that with a more careful treatment of the numerical differentiation in the case of noisy data, such as that used in \cite{raissi2018deep}, the same would be true for the dataset with artificial noise; however, such an analysis is not the focus of this paper.
The identified time series for the Navier-Stokes equation are shown in Fig.~\ref{fig:parametric_ns}. SGTR and GLASSO both correctly identify the active terms in the PDE.
\begin{figure}[t]
\centering
\includegraphics[width=1.0\textwidth]{navier_stokes_solution}
\caption{Left: dataset for identification of the parametric Navier-Stokes equation (\ref{eq:navier_stokes}). Right: coefficients for Navier-Stokes equations exhibiting jump in Reynolds number from 100 to 75 at $t=10$. This parametric dependency is illustrated in Fig.~\ref{fig:param}a.}
\label{fig:parametric_ns_solution}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=1.0\textwidth]{parametric_ns_coeffs}
\caption{Identified time series of the coefficients for the Navier-Stokes equation. Distinct axes are used to highlight the jump in Reynolds number. Left: no noise. Right: 1\% noise.}
\label{fig:parametric_ns}
\end{figure}
\subsection{Spatially Dependent Advection-Diffusion Equation}
\begin{figure}[t]
\centering
\includegraphics[width=1.0\textwidth]{advection_solution}
\caption{Left: dataset for identification of the spatially dependent advection diffusion equation. Right: Spatial dependence of PDE. In this case, the loadings $\xi_j(t)$ in (\ref{eq:parametric_PDE}) are replaced by $\xi_j(x)$.}
\label{fig:advection_solution}
\end{figure}
The advection-diffusion equation is a simple model for the transport of a physical quantity in a velocity field with diffusion. Here, we adapt the equation to have a spatially dependent velocity
\begin{equation}
\label{eq:advection}
u_t = (c(x) u)_x + \epsilon u_{xx} = c(x) u_x + c'(x)u+ \epsilon u_{xx}
\end{equation}
which models transport through the spatially varying velocity field $c=c(x)$.
The PDE is solved on a periodic domain $[-L,L]$ with $L=5$, $\epsilon = 0.1$, and $c(x) = -1.5 + \cos(2\pi x /L)$ using a spectral method with $n = 256$ spatial grid points and $m = 256$ timesteps. The library consists of powers of $u$ up to cubic, multiplied by derivatives of $u$ up to fourth order.
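A minimal pseudo-spectral solver for this setup might look as follows. This is an illustrative sketch: the initial condition, time step, and step count are chosen here for RK4 stability and are not the exact values used to generate the dataset.

```python
import numpy as np

# Solve u_t = (c(x) u)_x + eps * u_xx pseudo-spectrally on a periodic domain.
L, n, eps = 5.0, 256, 0.1
x = np.linspace(-L, L, n, endpoint=False)
c = -1.5 + np.cos(2 * np.pi * x / L)
k = 2 * np.pi * np.fft.fftfreq(n, d=2 * L / n)   # angular wavenumbers

def rhs(u):
    cu_x = np.real(np.fft.ifft(1j * k * np.fft.fft(c * u)))   # (c u)_x
    u_xx = np.real(np.fft.ifft(-(k**2) * np.fft.fft(u)))      # u_xx
    return cu_x + eps * u_xx

u = np.exp(-x**2)        # illustrative Gaussian initial condition
mass0 = u.sum()          # both rhs terms are exact x-derivatives
dt = 2e-3                # small enough for RK4 stability at eps * k_max^2
for _ in range(500):     # classical fourth-order Runge-Kutta
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    u = u + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
```

Since both right-hand-side terms are exact spatial derivatives, the total "mass" $\sum_i u_i$ is conserved on the periodic domain, which gives a convenient correctness check for such a solver.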
Results for the advection-diffusion equation are shown in Fig.~\ref{fig:spatial_advection}. For both the noise-free and noisy datasets, SGTR and GLASSO correctly identify the active terms in the PDE.
\begin{figure}[h]
\centering
\includegraphics[width=1.0\textwidth]{spatial_advection_coeffs}
\caption{Spatial dependence of advection diffusion equation. Left: no noise. Right: 1\% noise. Both SGTR and GLASSO correctly identified the active terms.}
\label{fig:spatial_advection}
\end{figure}
\subsection{Spatially Dependent Kuramoto-Sivashinsky Equation}
\begin{figure}[t]
\centering
\includegraphics[width=1.0\textwidth]{ks_solution}
\vspace{-.2in}
\caption{Left: dataset for identification of the spatially dependent Kuramoto-Sivashinsky equation. Right: parametric dependency of the governing equations.}
\label{fig:ks_solution}
\end{figure}
We now test the method on a Kuramoto-Sivashinsky equation with spatially varying coefficients
\begin{equation}
\label{eq:KS}
u_t = a(x) uu_x + b(x) u_{xx} + c(x) u_{xxxx} .
\end{equation}
We use a periodic domain $[-L,L]$ with $L=20$ and coefficients $a(x) = 1 + \sin(2\pi x /L) /4$, $b(x) = -1 + e^{-(x-2)^2/5}/4$ and $c(x) = -1 - e^{-(x+2)^2/5}/4$.
The equation is solved numerically to $t = 200$ using $n=512$ grid points and $m=1024$ timesteps. Only the second half of the dataset is used, so as to consider only the region where the dynamics exhibit spatio-temporal chaos, resulting in a dataset containing 512 snapshots of 512 grid points. Since the Kuramoto-Sivashinsky equation involves a fourth-order derivative, it is very difficult to identify correctly from noisy data: the fourth derivative of a noisy signal is exceptionally hard to compute accurately. Indeed, our method fails to identify the active terms when 1\% noise is added. With 0.01\% noise the correct terms were identified, but with substantial error in the coefficient values. We suspect that this shortcoming could at least be partially remedied by a more careful treatment of the numerical differentiation such as in \cite{bruno2012numerical}. The results of our parametric identification are shown in Fig.~\ref{fig:spatial_ks}.
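To see why the fourth derivative is the bottleneck, compare a finite-difference estimate with a local polynomial (Savitzky-Golay-style) estimate on noisy samples of $\sin(x)$, whose fourth derivative is again $\sin(x)$. This is an illustrative sketch; the window width and polynomial degree are arbitrary choices, not tuned values from the cited work.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 512
x = np.linspace(0.0, 2.0 * np.pi, n)
dx = x[1] - x[0]
u = np.sin(x) + 1e-4 * rng.standard_normal(n)  # small additive noise

# Central finite-difference u_xxxx: noise is amplified by ~1/dx^4.
u4_fd = (u[:-4] - 4 * u[1:-3] + 6 * u[2:-2] - 4 * u[3:-1] + u[4:]) / dx**4

# Local polynomial estimate: fit a degree-6 polynomial on a sliding
# window and read off 4! times its fourth coefficient.
half = 25
u4_poly = np.zeros(n)
for i in range(half, n - half):
    sl = slice(i - half, i + half + 1)
    coef = np.polynomial.polynomial.polyfit(x[sl] - x[i], u[sl], 6)
    u4_poly[i] = 24.0 * coef[4]

# Compare both estimates against the exact fourth derivative sin(x)
# on the interior (u4_fd[j] estimates the derivative at x[j + 2]).
err_fd = np.abs(u4_fd[half - 2 : n - half - 2] - np.sin(x[half : n - half]))
err_poly = np.abs(u4_poly[half : n - half] - np.sin(x[half : n - half]))
```

Even at this tiny noise level, the finite-difference estimate is swamped by amplified noise, while the polynomial fit remains orders of magnitude more accurate.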
\begin{figure}[t]
\centering
\includegraphics[width=1.0\textwidth]{spatial_ks_coeffs}
\vspace{-.2in}
\caption{Spatial dependence of the Kuramoto-Sivashinsky equation. Top row: SGTR. Bottom row: GLASSO. Left: no noise. Right: 0.01\% noise. SGTR detects the correct sparsity with significant parameter error. GLASSO does not identify the parsimonious model, nor does it accurately estimate the parametric values.}
\label{fig:spatial_ks}
\end{figure}
\section{Discussion}
We have presented a method for identifying governing laws for physical systems which exhibit either spatially or temporally dependent behavior. The method builds on a growing body of work in the applied mathematics and machine learning community that seeks to automate the process of discovering physical laws. To the best of our knowledge, our method is the first approach for deriving parsimonious PDE expressions of spatio-temporal systems in the case of non-constant coefficients. Specifically, we can disambiguate between the governing PDE evolution and its parametric dependencies. In all examples, the SGTR algorithm outperformed GLASSO in correctly identifying active terms in the PDE. Errors from the latter were generally in the form of extra terms added with small coefficient values throughout the time series. It may seem reasonable to threshold these time series after the discovery algorithm, but doing so assumes that a term's importance in the PDE is directly related to its magnitude, an assumption which we do not make given the normalization prior to sparse regression.
In this work we have split the data into distinct timesteps or spatial locations in order to find PDE models for each subset of the data, resulting in coefficients that can vary in space or time. However, with a sufficiently fine grid, it seems feasible that one could bin the data by areas localized in space and time to determine a coefficient varying in both space and time with some loss of resolution. This same result may be achievable in a more stable manner by introducing a sparsity term to the work in \cite{long2017pde}.
As is the case with other sparse regression methods for identifying dynamical systems, this method is constrained by the ability of the user to accurately differentiate data. For ordinary differential equations, this may be circumvented by looking at the weak form of the dynamics \cite{schaeffer2017extracting}, but doing so for PDEs seems difficult since there are derivatives that need to be evaluated with respect to multiple variables. We find the automatic differentiation approach used in \cite{raissi2018deep} promising and suspect that the inclusion of neural network based differentiation could radically improve the ability of our method to identify dynamics from noisy data. With sufficient knowledge of data it may also be possible to obtain better estimates through tuning the polynomial based differentiation \cite{bruno2012numerical}.
Automating the identification of closed-form physical laws from data will hopefully boost scientific progress in areas where deriving the same laws from first principles proves intractable. There are several limitations to many methods proposed in the field thus far. In particular, current methods have generally studied equations of the form $u_t = N(u,x,t)$, but many equations in physics are not in this class. Indeed, if measuring a system with parametric dependencies, then past methods are unable to disambiguate between the evolution dynamics and the parametric dependencies $\mu(t)$, thus greatly limiting model discovery.
There is also a trade-off between methods that are able to derive parsimonious representations, but are limited to a finite set of library elements, and those that use black-box models to represent larger classes of possible functions. The researcher may also find difficulties in attempting to infer dynamics from the wrong set of measurements. For example, one could not derive the Schr\"{o}dinger equation by looking only at measurements of intensity. While not addressing these issues, this work makes a step towards generalizing the class of equations which may be accurately identified via machine learning methods.\\
\noindent\footnotesize{Code: \href{https://github.com/snagcliffs/parametric-discovery}{https://github.com/snagcliffs/parametric-discovery}
}
\begin{appendix}
\end{appendix}
\bibliographystyle{siamplain}
\subsection{Fully Connected Neural Networks} \label{subsec:conserved-fully-connected}
We first formally define a fully connected feed-forward neural network with $N$ ($N\ge2$) layers.
Let $\mathbf{W}^{(h)} \in \mathbb{R}^{n_h \times n_{h-1}}$ be the weight matrix in the $h$-th layer, and define $\vect{w} = ( \mathbf{W}^{(h)} )_{h=1}^N $ as a shorthand of the collection of all the weights.
Then the function $f_{\vect{w}}: \mathbb{R}^d \to \mathbb{R}^p$ ($d = n_0, p = n_N$) computed by this network can be defined recursively: $f_{\vect{w}}^{(1)}(\vect{x}) = \mathbf{W}^{(1)}\vect{x}$, $f_{\vect{w}}^{(h)}(\vect{x}) = \mathbf{W}^{(h)} \phi_{h-1}(f_{\vect{w}}^{(h-1)}(\vect{x}))$ ($h = 2, \ldots, N$), and $f_{\vect{w}}(\vect{x}) = f_{\vect{w}}^{(N)}(\vect{x})$, where each $\phi_h$ is an activation function that acts coordinate-wise on vectors.\footnote{We omit the trainable bias weights in the network for simplicity, but our results can be directly generalized to allow bias weights.}
We assume that each $\phi_h$ ($h\in[N-1]$) is \emph{homogeneous}, namely, $\phi_h(x) = \phi_h'(x)\cdot x$ for all $x$, where $\phi_h'(x)$ may be any element of the sub-differential when $\phi_h$ is non-differentiable at $x$.
This property is satisfied by functions like ReLU $\phi(x) = \max\{x, 0\}$, Leaky ReLU $\phi(x) = \max\{x, \alpha x\}$ ($0<\alpha<1$), and linear function $\phi(x) = x$.
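The recursive definition above amounts to a few lines of code; the following is a minimal sketch with ReLU activations, where all names are illustrative.

```python
import numpy as np

def forward(weights, x, phis):
    """f^(1)(x) = W^(1) x;  f^(h)(x) = W^(h) phi_{h-1}(f^(h-1)(x))."""
    z = weights[0] @ x
    for W, phi in zip(weights[1:], phis):
        z = W @ phi(z)
    return z

relu = lambda z: np.maximum(z, 0.0)
rng = np.random.default_rng(0)
Ws = [rng.standard_normal((4, 3)), rng.standard_normal((2, 4))]  # N = 2 layers
y = forward(Ws, rng.standard_normal(3), [relu])
```

With the identity activation the network collapses to the matrix product $\mathbf W^{(2)}\mathbf W^{(1)}\vect x$, which gives a quick sanity check of the recursion.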
Let $\ell: \mathbb{R}^p \times \mathbb{R}^p \to \mathbb{R}_{\ge0}$ be a differentiable loss function. Given a training dataset $\left\{ (\vect{x}_i, \vect{y}_i) \right\}_{i=1}^m \subset \mathbb{R}^d\times\mathbb{R}^p$, the training loss as a function of the network parameters $\vect{w}$ is defined as \begin{align}
L(\vect{w}) = \frac1m \sum_{i=1}^m \ell\left( f_{\vect{w}}(\vect{x}_i), \vect{y}_i \right). \label{eqn:nn_loss}
\end{align}
We consider gradient descent with infinitesimal step size (also known as gradient flow) applied on $L(\vect{w})$, which is captured by the differential inclusion:
\begin{equation} \label{eqn:gf-nn}
\frac{\d\mathbf{W}^{(h)}}{\mathrm{d}t} \in - \frac{\partial L(\vect{w})}{\partial \mathbf{W}^{(h)}}, \qquad h = 1, \ldots, N,
\end{equation}
where $t$ is a continuous time index, and $\frac{\partial L(\vect{w})}{\partial \mathbf{W}^{(h)}}$ is the Clarke sub-differential \citep{clarke2008nonsmooth}. If curves ${\mathbf{W}}^{(h)} = {\mathbf{W}}^{(h)} (t)$ ($h\in[N]$) evolve with time according to \eqref{eqn:gf-nn}, they are said to be a solution of the gradient flow differential inclusion.
Our main result in this section is the following invariance imposed by gradient flow.
\begin{thm}[Balanced incoming and outgoing weights at every neuron] \label{thm:conserved-neuron}
For any $h\in[N-1]$ and $i\in[n_h]$, we have
\begin{equation} \label{eqn:conserved-neuron}
\frac{\d}{\mathrm{d}t} \left( \|\mathbf{W}^{(h)}[i, :]\|^2 - \|\mathbf{W}^{(h+1)}[:,i]\|^2 \right) = 0.
\end{equation}
\end{thm}
Note that $\mathbf{W}^{(h)}[i, :]$ is a vector consisting of network weights coming into the $i$-th neuron in the $h$-th hidden layer, and $\mathbf{W}^{(h+1)}[:,i]$ is the vector of weights going out from the same neuron.
Therefore, Theorem~\ref{thm:conserved-neuron} shows that gradient flow exactly preserves the difference between the squared $\ell_2$-norms of incoming weights and outgoing weights at any neuron.
Taking sum of \eqref{eqn:conserved-neuron} over $i\in[n_h]$, we obtain the following corollary which says gradient flow preserves the difference between the squares of Frobenius norms of weight matrices.
\begin{cor}[Balanced weights across layers] \label{cor:conserved-F-norm}
For any $h\in[N-1]$, we have
\begin{equation*}
\frac{\d}{\mathrm{d}t} \left( \|\mathbf{W}^{(h)}\|_F^2 - \|\mathbf{W}^{(h+1)}\|_F^2 \right) = 0.
\end{equation*}
\end{cor}
Corollary~\ref{cor:conserved-F-norm} explains why, in practice, trained multi-layer models usually have weights of similar magnitudes across all layers:
if we use a small initialization, $\|\mathbf{W}^{(h)}\|_F^2 - \|\mathbf{W}^{(h+1)}\|_F^2$ is very small at the beginning, and Corollary~\ref{cor:conserved-F-norm} implies this difference remains small at all times.
This finding also partially explains why gradient descent converges.
Although an objective function like \eqref{eqn:nn_loss} may not be smooth over the entire parameter space, it may be smooth over the region in which $\|\mathbf{W}^{(h)}\|_F^2 - \|\mathbf{W}^{(h+1)}\|_F^2$ is small for all $h$; over such a region, standard theory shows that gradient descent converges.
We believe this finding serves as a key building block for understanding first-order methods for training deep neural networks.
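Corollary~\ref{cor:conserved-F-norm} is also easy to check numerically: running gradient descent with a small step size (as a proxy for gradient flow) on a deliberately unbalanced two-layer ReLU network, the difference $\|\mathbf{W}^{(1)}\|_F^2 - \|\mathbf{W}^{(2)}\|_F^2$ drifts only at order $\eta^2$ per step. The sketch below is illustrative; dimensions, step size, and iteration count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n1, p, m = 5, 8, 3, 20
X = rng.standard_normal((m, d))
Y = rng.standard_normal((m, p))
W1 = rng.standard_normal((n1, d))          # large first layer
W2 = 0.1 * rng.standard_normal((p, n1))    # small second layer (unbalanced)

def grads(W1, W2):
    """Gradients of L = 1/(2m) ||relu(X W1^T) W2^T - Y||_F^2."""
    H = X @ W1.T                  # pre-activations, shape (m, n1)
    A = np.maximum(H, 0.0)        # ReLU outputs
    E = A @ W2.T - Y              # residuals, shape (m, p)
    G2 = E.T @ A / m
    G1 = ((E @ W2) * (H > 0)).T @ X / m
    return G1, G2

diff0 = np.sum(W1**2) - np.sum(W2**2)
eta = 1e-3
for _ in range(2000):
    G1, G2 = grads(W1, W2)
    W1 = W1 - eta * G1
    W2 = W2 - eta * G2
diff = np.sum(W1**2) - np.sum(W2**2)
```

By Theorem~\ref{thm:conserved-neuron}, the first-order contributions $\langle \mathbf W^{(1)}, G_1\rangle$ and $\langle \mathbf W^{(2)}, G_2\rangle$ cancel exactly, so the observed drift in `diff` comes entirely from the discretization.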
For linear activation, we have the following stronger invariance than Theorem~\ref{thm:conserved-neuron}:
\begin{thm}[Stronger balancedness property for linear activation] \label{thm:conserved-linear}
If for some $h\in[N-1]$ we have $\phi_h(x) = x$, then
\begin{equation*}
\frac{\d}{\mathrm{d}t} \left( \mathbf{W}^{(h)} (\mathbf{W}^{(h)})^\top - (\mathbf{W}^{(h+1)})^\top \mathbf{W}^{(h+1)} \right) = \mathbf{0}.
\end{equation*}
\end{thm}
This result was known for linear networks \citep{arora2018optimization}, but the proof there relies on the entire network being linear while Theorem~\ref{thm:conserved-linear} only needs two consecutive layers to have no nonlinear activations in between.
While Theorem~\ref{thm:conserved-neuron} shows the invariance in a node-wise manner, Theorem~\ref{thm:conserved-linear} shows that for linear activation we can derive a layer-wise invariance.
Inspired by this strong invariance, in Section~\ref{sec:mf} we prove gradient descent with positive step sizes preserves this invariance approximately for matrix factorization.
\subsection{Convolutional Neural Networks}
\label{subsec:cnn}
Now we show that the conservation property in Corollary~\ref{cor:conserved-F-norm} can be generalized to convolutional neural networks.
In fact, we can allow \emph{arbitrary sparsity pattern and weight sharing structure} within a layer; convolutional layers are a special case.
\paragraph{Neural networks with sparse connections and shared weights.}
We use the same notation as in Section~\ref{subsec:conserved-fully-connected}, with the difference that some weights in a layer can be \emph{missing} or \emph{shared}.
Formally, the weight matrix $\mathbf W^{(h)} \in \mathbb{R}^{n_h\times n_{h-1}}$ in layer $h$ ($h\in [N]$) can be described by a vector $\vect{v}^{(h)} \in \mathbb{R}^{d_h}$ and a function $g_h: [n_h]\times[n_{h-1}] \to [d_h]\cup\{0\}$.
Here $\vect{v}^{(h)}$ consists of the actual \emph{free parameters} in this layer and $d_h$ is the number of free parameters (e.g. if there are $k$ convolutional filters in layer $h$ each with size $r$, we have $d_h = r\cdot k$).
The map $g_h$ represents the sparsity and weight sharing pattern:
\begin{align*}
\mathbf W^{(h)}[i, j] = \begin{cases}
0, & g_h(i, j) = 0, \\
\vect{v}^{(h)}[k], & g_h(i, j) =k > 0.
\end{cases}
\end{align*}
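As a concrete special case of the $(\vect{v}^{(h)}, g_h)$ parameterization, the following sketch builds the weight matrix of a one-dimensional circular convolution with a filter of three taps; the layer sizes and tap values are illustrative choices.

```python
import numpy as np

n = 6                              # layer widths n_h = n_{h-1} = n
v = np.array([0.5, -1.0, 2.0])     # free parameters: the 3 filter taps

def g(i, j):
    """Sparsity/sharing map: entry (i, j) uses tap ((j - i) mod n) + 1
    when the offset is 0, 1, or 2, and is absent (returns 0) otherwise."""
    off = (j - i) % n
    return off + 1 if off in (0, 1, 2) else 0

# Materialize W^(h) from (v, g) as in the displayed definition.
W = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        k = g(i, j)
        if k > 0:
            W[i, j] = v[k - 1]
```

Every row of the resulting matrix contains the same three free parameters, so $\|\vect{v}^{(h)}\|^2$ counts each shared weight once, regardless of how many entries of $\mathbf W^{(h)}$ it populates.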
Denote by $\vect{v} = \left( \vect{v}^{(h)} \right)_{h=1}^N$ the collection of all the parameters in this network, and we consider gradient flow to learn the parameters:
\begin{equation*}
\frac{\d\vect{v}^{(h)}}{\mathrm{d}t} \in - \frac{\partial L(\vect{v})}{\partial \vect{v}^{(h)}}, \qquad h = 1, \ldots, N.
\end{equation*}
The following theorem generalizes Corollary~\ref{cor:conserved-F-norm} to neural networks with sparse connections and shared weights:
\begin{thm}\label{thm:cnn}
For any $h\in[N-1]$, we have
\begin{equation*}
\frac{\d}{\mathrm{d}t} \left( \|\vect{v}^{(h)}\|^2 - \|\vect{v}^{(h+1)}\|^2 \right) = 0.
\end{equation*}
\end{thm}
Therefore, for a neural network with arbitrary sparsity pattern and weight sharing structure, gradient flow still balances the magnitudes of all layers.
\subsection{Proof of Theorem~\ref{thm:conserved-neuron}}
\label{subsec:proof_main}
The proofs of all theorems in this section are similar. They are based on the use of the chain rule (i.e. back-propagation) and the property of homogeneous activations.
Below we provide the proof of Theorem~\ref{thm:conserved-neuron} and defer the proofs of other theorems to Appendix~\ref{sec:proof-conserved}.
\begin{proof}[Proof of Theorem~\ref{thm:conserved-neuron}]
First we note that we can without loss of generality assume $L$ is the loss associated with one data sample $(\vect{x}, \vect{y}) \in \mathbb{R}^d\times\mathbb{R}^p$, i.e., $L(\vect{w}) = \ell(f_{\vect{w}}(\vect{x}), \vect{y})$.
In fact, for $L(\vect{w}) = \frac1m \sum_{k=1}^m L_k(\vect{w})$ where $L_k(\vect{w}) = \ell\left( f_{\vect{w}}(\vect{x}_k), \vect{y}_k \right)$, for any single weight $\mathbf W^{(h)}[i, j]$ in the network we can compute $\frac{\d}{\mathrm{d}t} (\mathbf{W}^{(h)}[i,j])^2 = 2 \mathbf{W}^{(h)}[i,j] \cdot \frac{\d \mathbf{W}^{(h)}[i,j]}{\mathrm{d}t} = -2 \mathbf{W}^{(h)}[i,j] \cdot \frac{\partial L(\vect{w})}{\partial \mathbf{W}^{(h)}[i,j]} = -2 \mathbf{W}^{(h)}[i,j] \cdot \frac1m \sum_{k=1}^m \frac{\partial L_k(\vect{w})}{\partial \mathbf{W}^{(h)}[i,j]}$, using the sharp chain rule of differential inclusions for tame functions \citep{drusvyatskiy2015curves,davis2018stochastic}.
Thus, if we can prove the theorem for every individual loss $L_k$, we can prove the theorem for $L$ by taking average over $k\in[m]$.
Therefore in the rest of proof we assume $L(\vect{w}) = \ell(f_{\vect{w}}(\vect{x}), \vect{y})$.
For convenience, we denote $\vect{x}^{(h)} = f_{\vect{w}}^{(h)}(\vect{x})$ ($h\in[N]$), which is the input to the $h$-th hidden layer of neurons for $h\in[N-1]$ and is the output of the network for $h=N$.
We also denote $\vect{x}^{(0)} = \vect{x}$ and $\phi_0(x)=x$ ($\forall x$).
Now we prove \eqref{eqn:conserved-neuron}.
Since $\mathbf{W}^{(h+1)}[k,i]$ ($k\in[n_{h+1}]$) can only affect $L(\vect{w})$ through $\vect{x}^{(h+1)}[k]$ , we have for $k\in[n_{h+1}]$,
\begin{equation*}
\frac{\partial L(\vect{w})}{ \partial \mathbf{W}^{(h+1)}[k,i] }
= \frac{\partial L(\vect{w})}{ \partial \vect{x}^{(h+1)}[k] } \cdot \frac{\partial \vect{x}^{(h+1)}[k]}{\partial \mathbf{W}^{(h+1)}[k,i]}
= \frac{\partial L(\vect{w})}{ \partial \vect{x}^{(h+1)}[k] } \cdot \phi_{h}(\vect{x}^{(h)}[i]),
\end{equation*}
which can be rewritten as
\begin{equation*}
\frac{\partial L(\vect{w})}{ \partial \mathbf{W}^{(h+1)}[:,i] } = \phi_{h}(\vect{x}^{(h)}[i]) \cdot \frac{\partial L(\vect{w})}{ \partial \vect{x}^{(h+1)} }.
\end{equation*}
It follows that
\begin{equation} \label{eqn:proof-conserved-neuron-1}
\begin{aligned}
\frac{\d}{\mathrm{d}t} \|\mathbf{W}^{(h+1)}[:,i]\|^2
&= 2 \left\langle \mathbf{W}^{(h+1)}[:,i], \frac{\d}{\mathrm{d}t} \mathbf{W}^{(h+1)}[:,i] \right\rangle = -2 \left\langle \mathbf{W}^{(h+1)}[:,i], \frac{\partial L(\vect{w})}{ \partial \mathbf{W}^{(h+1)}[:,i] } \right\rangle \\
= -2 \phi_{h}(\vect{x}^{(h)}[i]) \cdot \left\langle \mathbf{W}^{(h+1)}[:,i], \frac{\partial L(\vect{w})}{ \partial \vect{x}^{(h+1)} } \right\rangle.
\end{aligned}
\end{equation}
On the other hand, $\mathbf{W}^{(h)}[i, :]$ only affects $L(\vect{w})$ through $\vect x^{(h)}[i]$. Using the chain rule, we get
\begin{align*}
\frac{ \partial L(\vect{w}) }{ \partial \mathbf{W}^{(h)}[i, :] }
&= \frac{ \partial L(\vect{w}) }{ \partial \vect x^{(h)}[i] } \cdot \phi_{h-1} (\vect x^{(h-1)})= \left\langle \frac{ \partial L(\vect{w}) }{ \partial \vect x^{(h+1)} } , \mathbf{W}^{(h+1)}[:, i] \right\rangle \cdot \phi_h'(\vect x^{(h)}[i]) \cdot \phi_{h-1} (\vect x^{(h-1)}),
\end{align*}
where $\phi'$ is interpreted as a set-valued mapping whenever it is applied at a non-differentiable point.\footnote{More precisely, the equalities should be an inclusion whenever there is a sub-differential, but as we see in the next display the ambiguity in the choice of sub-differential does not affect later calculations.}
It follows that\footnote{This holds for any choice of element of the sub-differential, since $\phi'(x) x = \phi(x)$ holds at $x=0$ for any choice of sub-differential.}
\begin{align*}
& \frac{\d}{\mathrm{d}t} \|\mathbf{W}^{(h)}[i, :]\|^2
= 2 \left\langle \mathbf{W}^{(h)}[i, :], \frac{\d}{\mathrm{d}t} \mathbf{W}^{(h)}[i, :] \right\rangle
= -2 \left\langle \mathbf{W}^{(h)}[i, :], \frac{\partial L(\vect{w})}{ \partial \mathbf{W}^{(h)}[i, :] } \right\rangle \\
=\,& -2 \left\langle \frac{ \partial L(\vect{w}) }{ \partial \vect x^{(h+1)} } , \mathbf{W}^{(h+1)}[:, i] \right\rangle \cdot \phi_h'(\vect x^{(h)}[i]) \cdot \left\langle \mathbf{W}^{(h)}[i, :], \phi_{h-1} (\vect x^{(h-1)}) \right\rangle \\
=\,& -2 \left\langle \frac{ \partial L(\vect{w}) }{ \partial \vect x^{(h+1)} } , \mathbf{W}^{(h+1)}[:, i] \right\rangle \cdot \phi_h'(\vect x^{(h)}[i]) \cdot \vect x^{(h)}[i] = -2 \left\langle \frac{ \partial L(\vect{w}) }{ \partial \vect x^{(h+1)} } , \mathbf{W}^{(h+1)}[:, i] \right\rangle \cdot \phi_{h}(\vect{x}^{(h)}[i]).
\end{align*}
Comparing the above expression to \eqref{eqn:proof-conserved-neuron-1}, we finish the proof.
\end{proof}
\section{Introduction}
\label{sec:intro}
\input{intro.tex}
\section{The Auto-Balancing Properties in Deep Neural Networks}
\label{sec:conserved}
\input{conserved.tex}
\section{Gradient Descent Converges to Global Minimum for Asymmetric Matrix Factorization}
\label{sec:mf}
\input{mf.tex}
\section{Empirical Verification}
\label{sec:exp}
\input{exp.tex}
\section{Conclusion and Future Work}
\label{sec:con}
\input{conclusion.tex}
\section*{Acknowledgements}
\label{sec:ack}
\input{ack.tex}
\subsection{Related Work}
\label{sec:rel} \input{rel.tex}
\subsection{Paper Organization}
The rest of the paper is organized as follows.
In Section~\ref{sec:conserved}, we present our main theoretical result on the implicit regularization property of gradient flow for optimizing neural networks.
In Section~\ref{sec:mf}, we analyze the dynamics of randomly initialized gradient descent for asymmetric matrix factorization problem with unregularized objective function~\eqref{eqn:intro_mf_obj}.
In Section~\ref{sec:exp}, we empirically verify the theoretical result in Section~\ref{sec:conserved}.
We conclude and list future directions in Section~\ref{sec:con}.
Some technical proofs are deferred to the appendix.
\subsection{Notation}
We use bold-faced letters for vectors and matrices.
For a vector $\vect x$, denote by $\vect x[i]$ its $i$-th coordinate.
For a matrix $\vect A$, we use $\mathbf A[i, j]$ to denote its $(i, j)$-th entry, and use $\mathbf A[i, :]$ and $\mathbf A[:, j]$ to denote its $i$-th row and $j$-th column, respectively (both as column vectors).
We use $\norm{\cdot}_2$ or $\norm{\cdot}$ to denote the Euclidean norm of a vector, and use $\norm{\cdot}_F$ to denote the Frobenius norm of a matrix.
We use $\langle \cdot, \cdot \rangle$ to denote the standard Euclidean inner product between two vectors or two matrices.
Let $[n] = \{1, 2, \ldots, n\}$.
\subsection{The General Rank-$r$ Case}
First we consider the general case of $r\ge1$.
Our main theorem below says that if we use a random small initialization $(\mathbf U_0, \mathbf V_0)$, and set step sizes $\eta_t$ to be appropriately small, then gradient descent \eqref{eqn:mf-gd-dynamics} will converge to a solution close to the global minimum of \eqref{eqn:mf}.
To our knowledge, this is the first result showing that gradient descent with random initialization directly solves the un-regularized asymmetric matrix factorization problem~\eqref{eqn:mf}.
\begin{thm}\label{thm:mf-main}
Let $0<\epsilon < \norm{\mathbf M^*}_F$.
Suppose we initialize the entries in $\mathbf U_0$ and $\mathbf V_0$ i.i.d. from $\mathcal{N}(0, \frac{\epsilon}{\mathrm{poly}(d)})$ ($d = \max\{d_1, d_2\}$), and run \eqref{eqn:mf-gd-dynamics} with step sizes $\eta_t = \frac{\sqrt{\epsilon/r}}{100(t+1) \norm{\mathbf M^*}_F^{3/2}}$ ($t=0,1,\ldots$).\footnote{The dependency of $\eta_t$ on $t$ can be $\eta_t = \Theta\left( t^{-(1/2+\delta)} \right)$ for any constant $\delta \in (0, 1/2]$.}
Then with high probability over the initialization, $\lim_{t\to\infty}(\mathbf U_t, \mathbf V_t) = (\bar{\mathbf U}, \bar{\mathbf V})$ exists and satisfies $\norm{\bar{\mathbf U} \bar{\mathbf V}^\top - \mathbf M^*}_F \le \epsilon$.
\end{thm}
\paragraph{Proof sketch of Theorem~\ref{thm:mf-main}.}
First let's imagine that we are using infinitesimal step size in GD. Then according to Theorem~\ref{thm:conserved-linear} (viewing problem~\eqref{eqn:mf} as learning a two-layer linear network where the inputs are all the standard unit vectors in $\mathbb{R}^{d_2}$), we know that $\mathbf U^\top \mathbf U - \mathbf V^\top \mathbf V$ will stay invariant throughout the algorithm.
Hence when $\mathbf U$ and $\mathbf V$ are initialized to be small, $\mathbf U^\top \mathbf U - \mathbf V^\top \mathbf V$ will stay small forever.
Combined with the fact that the objective $f(\mathbf U, \mathbf V)$ is decreasing over time (which means $\mathbf U \mathbf V^\top$ cannot be too far from $\mathbf M^*$), we can show that $\mathbf U$ and $\mathbf V$ will always stay bounded.
Now we are using positive step sizes $\eta_t$, so we no longer have the invariance of $\mathbf U^\top \mathbf U - \mathbf V^\top \mathbf V$.
Nevertheless, by a careful analysis of the updates, we can still prove that $\mathbf U_t^\top \mathbf U_t - \mathbf V_t^\top \mathbf V_t$ is small, the objective $f(\mathbf U_t, \mathbf V_t)$ decreases, and $\mathbf U_t$ and $\mathbf V_t$ stay bounded.
Formally, we have the following lemma:
\begin{lem} \label{lem:mf-balance}
With high probability over the initialization $(\mathbf U_0, \mathbf V_0)$, for all $t$ we have:
\begin{enumerate}[(i)]
\item Balancedness: $\norm{\mathbf U_t^\top \mathbf U_t - \mathbf V_t^\top \mathbf V_t}_F \le \epsilon$;
\item Decreasing objective: $f(\mathbf U_{t}, \mathbf V_{t}) \le f(\mathbf U_{t-1}, \mathbf V_{t-1}) \le \cdots \le f(\mathbf U_{0}, \mathbf V_{0}) \le 2\norm{\mathbf M^*}_F^2$;
\item Boundedness: $\norm{\mathbf U_{t}}_F^2 \le 5\sqrt{r} \norm{\mathbf M^*}_F, \norm{\mathbf V_t}_F^2 \le 5\sqrt{r} \norm{\mathbf M^*}_F$.
\end{enumerate}
\end{lem}
Now that we know the GD algorithm automatically constrains $(\mathbf U_t, \mathbf V_t)$ in a bounded region, we can use the smoothness of $f$ in this region and a standard analysis of GD to show that $(\mathbf U_t, \mathbf V_t)$ converges to a stationary point $(\bar{\mathbf U}, \bar{\mathbf V})$ of $f$ (Lemma~\ref{lem:mf-convergence}).
Furthermore, using the results of \citep{lee2016gradient, panageas2016gradient} we know that $(\bar{\mathbf U}, \bar{\mathbf V})$ is almost surely not a strict saddle point.
Then the following lemma implies that $(\bar{\mathbf U}, \bar{\mathbf V})$ has to be close to a global optimum since we know $\norm{\bar{\mathbf U}^\top \bar{\mathbf U} - \bar{\mathbf V}^\top \bar{\mathbf V}}_F \le \epsilon$ from Lemma~\ref{lem:mf-balance} (i). This would complete the proof of Theorem~\ref{thm:mf-main}.
\begin{lem} \label{lem:mf-strict-saddle}
Suppose $(\mathbf{U}, \mathbf{V})$ is a stationary point of $f$ such that $\norm{\mathbf U^\top \mathbf U - \mathbf V^\top \mathbf V}_F \le \epsilon$.
Then either $\norm{\mathbf U \mathbf V^\top - \mathbf M^*}_F \le \epsilon$, or $(\mathbf{U}, \mathbf{V})$ is a strict saddle point of $f$.
\end{lem}
The full proof of Theorem~\ref{thm:mf-main} and the proofs of Lemmas~\ref{lem:mf-balance} and \ref{lem:mf-strict-saddle} are given in Appendix~\ref{sec:proof-mf}.
\subsection{The Rank-$1$ Case}
We have shown in Theorem~\ref{thm:mf-main} that GD with small and diminishing step sizes converges to a global minimum for matrix factorization.
Empirically, it is observed that a constant step size $\eta_t \equiv \eta$ is enough for GD to converge quickly to global minimum.
Therefore,
some natural questions are how to prove convergence of GD with a constant step size, how fast it converges, and how the discretization affects the invariance we derived in Section~\ref{sec:conserved}.
While these questions remain challenging for the general rank-$r$ matrix factorization, we resolve them for the case of $r=1$.
Our main finding is that with constant step size, the norms of two layers are always within a constant factor of each other (although we may no longer have the stronger balancedness property as in Lemma~\ref{lem:mf-balance}), and
we utilize this property to prove the \emph{linear convergence} of GD to a global minimum.
When $r=1$, the asymmetric matrix factorization problem and its GD dynamics become
$$
\min_{\vect{u} \in \mathbb{R}^{d_1}, \vect{v} \in \mathbb{R}^{d_2}} \frac12 \norm{\vect{u}\vect{v}^\top - \mathbf{M}^*}_F^2
$$
and
\begin{align*}
\vect{u}_{t+1} = \vect{u}_t - \eta (\vect{u}_t\vect{v}_t^\top - \mathbf{M}^*)\vect{v}_t, \qquad
\vect{v}_{t+1} = \vect{v}_t - \eta\left(\vect{v}_t\vect{u}_t^\top - \mathbf{M}^{*\top} \right)\vect{u}_t.
\end{align*}
Here we assume $\mathbf{M}^*$ has rank $1$, i.e., it can be factorized as $\mathbf{M}^* = \sigma_1 \vect{u}^*\vect{v}^{*\top}$ where $\vect{u}^*$ and $\vect{v}^*$ are unit vectors and $\sigma_1>0$.
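The rank-$1$ dynamics above are easy to simulate: on a synthetic rank-$1$ target, gradient descent from a small random initialization converges to the global minimum, with the signal strengths $|\vect{u}_t^\top \vect{u}^*|$ and $|\vect{v}_t^\top \vect{v}^*|$ staying within a constant factor of each other. The sketch below uses arbitrary illustrative constants, not the constants required by the theory.

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d2, sigma1 = 10, 8, 2.0
ustar = rng.standard_normal(d1); ustar /= np.linalg.norm(ustar)
vstar = rng.standard_normal(d2); vstar /= np.linalg.norm(vstar)
M = sigma1 * np.outer(ustar, vstar)           # rank-1 target

delta = 0.05 * np.sqrt(sigma1 / max(d1, d2))  # small random initialization
u = delta * rng.standard_normal(d1)
v = delta * rng.standard_normal(d2)
eta = 0.05 / sigma1                           # constant step size

for _ in range(5000):
    R = np.outer(u, v) - M
    u, v = u - eta * R @ v, v - eta * R.T @ u  # simultaneous GD update

err = np.linalg.norm(np.outer(u, v) - M)
ratio = abs(u @ ustar) / abs(v @ vstar)        # balance of signal strengths
```

The tuple assignment evaluates both updates at the current iterate, matching the simultaneous update written above.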
Our main theoretical result is the following.
\begin{thm}[Approximate balancedness and linear convergence of GD for rank-$1$ matrix factorization]
\label{thm:rank_1}
Suppose $\vect{u}_0 \sim \mathcal{N}(\vect 0,\delta\mathbf{I})$, $\vect{v}_0 \sim \mathcal{N}(\vect 0,\delta \mathbf I)$ with $\delta = c_{init} \sqrt{\frac{\sigma_1}{d}} $ ($d = \max\{d_1, d_2\}$)
for some sufficiently small constant $c_{init} >0$,
and $\eta = \frac{c_{step}}{\sigma_1}$ for some sufficiently small constant $ c_{step} >0$.
Then with constant probability over the initialization, for all $t$ we have $c_0\le \frac{\abs{\vect{u}_t^\top \vect u^*}}{\abs{\vect{v}_t^\top \vect{v}^*}}\le C_0$ for some universal constants $ c_0,C_0>0$.
Furthermore, for any $0<\epsilon< 1$, after $t = O\left( \log\frac{d}{\epsilon} \right)$ iterations, we have $\norm{\vect{u}_t\vect{v}_t^\top - \mathbf{M}^*}_F \le \epsilon \sigma_1$.
\end{thm}
Theorem~\ref{thm:rank_1} shows for $\vect{u}_t$ and $\vect{v}_t$, their strengths in the signal space, $\abs{\vect{u}_t^\top \vect{u}^*}$ and $\abs{\vect{v}_t^\top \vect{v}^*}$, are of the same order.
This approximate balancedness helps us prove the linear convergence of GD.
We refer readers to Appendix~\ref{sec:proof-rank-1} for the proof of Theorem~\ref{thm:rank_1}.
\subsection{Proof of Lemma~\ref{lem:mf-balance}}
Recall the following three properties we want to prove in Lemma~\ref{lem:mf-balance}, which we call $\mathcal{A}(t)$, $\mathcal{B}(t)$ and $\mathcal{C}(t)$, respectively:
\begin{align*}
\mathcal{A}(t):\qquad & \norm{\mathbf U_t^\top \mathbf U_t - \mathbf V_t^\top \mathbf V_t}_F \le \epsilon, \\
\mathcal{B}(t):\qquad & f(\mathbf U_{t}, \mathbf V_{t}) \le f(\mathbf U_{t-1}, \mathbf V_{t-1}) \le \cdots \le f(\mathbf U_{0}, \mathbf V_{0}) \le 2 \norm{\mathbf M^*}_F^2, \\
\mathcal{C}(t):\qquad & \norm{\mathbf U_{t}}_F^2 \le 5\sqrt{r} \norm{\mathbf M^*}_F, \norm{\mathbf V_t}_F^2 \le 5\sqrt{r} \norm{\mathbf M^*}_F.
\end{align*}
We use induction to prove these statements.
For $t=0$, we can make the Gaussian variance in the initialization sufficiently small such that with high probability we have $$\norm{\mathbf U_{0}}_F^2 \le \epsilon,\qquad \norm{\mathbf V_0}_F^2 \le \epsilon, \qquad \norm{\mathbf U_0^\top \mathbf U_0 - \mathbf V_0^\top \mathbf V_0}_F \le \frac\epsilon2.$$
From now on we assume they are all satisfied.
Then $\mathcal{A}(0)$ is already satisfied, $\mathcal{C}(0)$ is satisfied because $\epsilon < \norm{\mathbf M^*}_F$,
and $\mathcal{B}(0)$ can be verified by $f(\mathbf U_{0}, \mathbf V_{0}) = \frac12 \norm{\mathbf U_0 \mathbf V_0^\top - \mathbf M^*}_F^2 \le \norm{\mathbf U_0 \mathbf V_0^\top}_F^2 + \norm{\mathbf M^*}_F^2 \le \norm{\mathbf U_0}_F^2 \norm{\mathbf V_0^\top}_F^2 + \norm{\mathbf M^*}_F^2 \le \epsilon^2+\norm{\mathbf M^*}_F^2\le 2\norm{\mathbf M^*}_F^2$.
To prove $\mathcal{A}(t)$, $\mathcal{B}(t)$ and $\mathcal{C}(t)$ for all $t$, we prove the following three claims. Since we have $\mathcal{A}(0)$, $\mathcal{B}(0)$ and $\mathcal{C}(0)$, if the following claims are all true, the proof will be completed by induction.
\begin{enumerate}[(i)]
\item $\mathcal{B}(0), \ldots, \mathcal{B}(t), \mathcal{C}(0), \ldots, \mathcal{C}(t) \implies \mathcal{A}(t+1)$;
\item $\mathcal{B}(0), \ldots, \mathcal{B}(t), \mathcal{C}(t) \implies \mathcal{B}(t+1)$;
\item $\mathcal{A}(t), \mathcal{B}(t) \implies \mathcal{C}(t)$.
\end{enumerate}
\begin{claim}
$\mathcal{B}(0), \ldots, \mathcal{B}(t), \mathcal{C}(0), \ldots, \mathcal{C}(t) \implies \mathcal{A}(t+1)$.
\end{claim}
\begin{proof}
Using the update rule \eqref{eqn:mf-gd-dynamics} we can calculate
\begin{align*}
& \mathbf U_{t+1}^\top \mathbf U_{t+1} - \mathbf V_{t+1}^\top \mathbf V_{t+1} \\
=\, & \left(\mathbf U_t - \eta_t (\mathbf U_t \mathbf V_t^\top - \mathbf M^*) \mathbf V_t\right)^\top \left(\mathbf U_t - \eta_t (\mathbf U_t \mathbf V_t^\top - \mathbf M^*) \mathbf V_t\right) \\ & - \left(\mathbf V_t - \eta_t (\mathbf U_t \mathbf V_t^\top - \mathbf M^*)^\top \mathbf U_t\right)^\top \left(\mathbf V_t - \eta_t (\mathbf U_t \mathbf V_t^\top - \mathbf M^*)^\top \mathbf U_t\right) \\
=\, & \mathbf U_t^\top \mathbf U_t - \mathbf V_t^\top \mathbf V_t + \eta_t^2 \left( \mathbf V_t^\top \mathbf R_t^\top \mathbf R_t \mathbf V_t - \mathbf U_t^\top \mathbf R_t^\top \mathbf R_t \mathbf U_t \right),
\end{align*}
where $\mathbf R_t = \mathbf U_t \mathbf V_t^\top - \mathbf M^*$.
Then we have
\begin{equation} \label{eqn:mf-inproof-1}
\begin{aligned}
&\norm{\mathbf U_{t+1}^\top \mathbf U_{t+1} - \mathbf V_{t+1}^\top \mathbf V_{t+1}}_F\\
\le\,& \norm{\mathbf U_t^\top \mathbf U_t - \mathbf V_t^\top \mathbf V_t}_F + \eta_t^2 \left( \norm{\mathbf V_t^\top \mathbf R_t^\top \mathbf R_t \mathbf V_t }_F + \norm{ \mathbf U_t^\top \mathbf R_t^\top \mathbf R_t \mathbf U_t}_F \right) \\
\le\,& \norm{\mathbf U_t^\top \mathbf U_t - \mathbf V_t^\top \mathbf V_t}_F + \eta_t^2 \left( \norm{\mathbf V_t}_F^2 \norm{\mathbf R_t}_F^2 + \norm{\mathbf U_t}_F^2 \norm{\mathbf R_t}_F^2 \right) \\
=\,& \norm{\mathbf U_t^\top \mathbf U_t - \mathbf V_t^\top \mathbf V_t}_F + 2\eta_t^2 \left( \norm{\mathbf V_t}_F^2 + \norm{\mathbf U_t}_F^2 \right) f(\mathbf U_t, \mathbf V_t) \\
\le\,& \norm{\mathbf U_t^\top \mathbf U_t - \mathbf V_t^\top \mathbf V_t}_F + 2\eta_t^2 \cdot 10\sqrt{r} \norm{\mathbf M^*}_F \cdot 2 \norm{\mathbf M^*}_F^2,
\end{aligned}
\end{equation}
where the last line is due to $\mathcal{B}(t)$ and $\mathcal{C}(t)$.
Since we have $\mathcal{B}(t')$ and $\mathcal{C}(t')$ for all $t' \le t$, \eqref{eqn:mf-inproof-1} is still true when substituting $t$ with any $t'\le t$. Summing all of them and noting $\norm{\mathbf U_0^\top \mathbf U_0 - \mathbf V_0^\top \mathbf V_0}_F \le \frac{\epsilon}{2}$, we get
\begin{align*}
&\norm{\mathbf U_{t+1}^\top \mathbf U_{t+1} - \mathbf V_{t+1}^\top \mathbf V_{t+1}}_F \\
\le\,& \norm{\mathbf U_0^\top \mathbf U_0 - \mathbf V_0^\top \mathbf V_0}_F + 40\sqrt{r} \norm{\mathbf M^*}_F^3 \sum_{i=0}^t \eta_i^2 \\
\le\,& \frac\epsilon2 + 40\sqrt{r} \norm{\mathbf M^*}_F^3 \sum_{i=0}^t \frac{1}{(i+1)^2} \cdot \frac{\epsilon/r}{100^2\norm{\mathbf M^*}_F^3}\\
\le\,&\epsilon.
\end{align*}
Therefore we have proved $\mathcal{A}(t+1)$.
\end{proof}
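The last step above is pure arithmetic: after the step-size choice cancels the factor $\norm{\mathbf M^*}_F^3$, what remains to verify is $40 \sum_{i \ge 0} \frac{1}{(i+1)^2} \cdot \frac{1}{100^2 \sqrt{r}} \le \frac12$. A one-line check (ours) in the worst case $r = 1$:

```python
import math

# Worst case r = 1, using sum_{i >= 1} 1/i^2 = pi^2 / 6:
slack = 40 * (math.pi**2 / 6) / 100**2
print(slack)   # ~0.0066, far below the epsilon/2 budget
```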
\begin{claim}
$\mathcal{B}(0), \ldots, \mathcal{B}(t), \mathcal{C}(t) \implies \mathcal{B}(t+1)$.
\end{claim}
\begin{proof}
Note that we only need to show $f(\mathbf U_{t+1}, \mathbf V_{t+1}) \le f(\mathbf U_{t}, \mathbf V_{t})$.
We prove this using the standard analysis of gradient descent, for which we need the smoothness of the objective function $f$ (Lemma~\ref{lem:mf-local-smooth}).
We first need to bound $\norm{\mathbf U_t}_F$, $\norm{\mathbf V_t}_F$, $\norm{\mathbf U_{t+1}}_F$ and $\norm{\mathbf V_{t+1}}_F$. We know from $\mathcal{C}(t)$ that $\norm{\mathbf U_{t}}_F^2 \le 5\sqrt{r} \norm{\mathbf M^*}_F$ and $\norm{\mathbf V_t}_F^2 \le 5\sqrt{r} \norm{\mathbf M^*}_F$.
We can also bound $\norm{\mathbf U_{t+1}}_F^2$ and $\norm{\mathbf V_{t+1}}_F^2$ easily from the GD update rule:
\begin{align*}
&\norm{\mathbf U_{t+1}}_F^2 \\
=\,& \norm{\mathbf U_t - \eta_t (\mathbf U_t \mathbf V_t^\top - \mathbf M^*) \mathbf V_t}_F^2 \\
\le\,& 2 \norm{\mathbf U_t}_F^2 + 2\eta_t^2 \norm{\mathbf U_t \mathbf V_t^\top - \mathbf M^*}_F^2 \norm{\mathbf V_t}_F^2 \\
\le\,& 2\cdot 5\sqrt{r} \norm{\mathbf M^*}_F + 2 \eta_t^2 \cdot 2f(\mathbf U_t, \mathbf V_t) \cdot 5\sqrt{r} \norm{\mathbf M^*}_F\\
\le\,& 10\sqrt{r} \norm{\mathbf M^*}_F + 2 \cdot \frac{\epsilon/r}{100^2 (t+1)^2 \norm{\mathbf M^*}_F^3} \cdot 4\norm{\mathbf M^*}_F^2 \cdot 5\sqrt{r} \norm{\mathbf M^*}_F & \text{(using $\mathcal{B}(t)$)} \\
\le \,& 10\sqrt{r} \norm{\mathbf M^*}_F + \frac{\epsilon}{100} \\
\le\, & 11\sqrt{r} \norm{\mathbf M^*}_F. & \text{(using $\epsilon < \norm{\mathbf M^*}_F$)}
\end{align*}
Let $\beta = (66\sqrt{r}+2) \norm{\mathbf M^*}_F $.
From Lemma~\ref{lem:mf-local-smooth}, $f$ is $\beta$-smooth over $\mathcal{S} = \{(\mathbf U, \mathbf V): \norm{\mathbf U}_F^2 \le 11\sqrt{r} \norm{\mathbf M^*}_F, \norm{\mathbf V}_F^2 \le 11\sqrt{r} \norm{\mathbf M^*}_F \}$.
Also note that $\eta_t < \frac{1}{\beta}$ by our choice. Then using smoothness we have
\begin{equation} \label{eqn:mf-obj-decrease}
\begin{aligned}
&f(\mathbf U_{t+1}, \mathbf V_{t+1}) \\
\le\,& f(\mathbf U_{t}, \mathbf V_{t}) + \left\langle \nabla f(\mathbf U_t, \mathbf V_t), \begin{pmatrix}
\mathbf U_{t+1}\\ \mathbf V_{t+1}
\end{pmatrix} - \begin{pmatrix}
\mathbf U_{t}\\ \mathbf V_{t}
\end{pmatrix} \right\rangle + \frac{\beta}{2} \norm{ \begin{pmatrix}
\mathbf U_{t+1}\\ \mathbf V_{t+1}
\end{pmatrix} - \begin{pmatrix}
\mathbf U_{t}\\ \mathbf V_{t}
\end{pmatrix} }_F^2 \\
=\,& f(\mathbf U_{t}, \mathbf V_{t}) - \eta_t \norm{ \nabla f(\mathbf U_t, \mathbf V_t) }_F^2 + \frac{\beta}{2}\eta_t^2 \norm{ \nabla f(\mathbf U_t, \mathbf V_t) }_F^2 \\
\le\,& f(\mathbf U_{t}, \mathbf V_{t}) - \frac{\eta_t}{2} \norm{ \nabla f(\mathbf U_t, \mathbf V_t) }_F^2.
\end{aligned}
\end{equation}
Therefore we have shown $\mathcal{B}(t+1)$.
\end{proof}
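As a sanity check of this descent property (our own sketch; the instance size and step-size constant are arbitrary), one can verify that the objective is monotonically non-increasing along the GD trajectory with a decaying step size $\eta_t \propto 1/(t+1)$:

```python
import numpy as np

rng = np.random.default_rng(2)
d1, d2, r = 8, 6, 2
M = rng.standard_normal((d1, r)) @ rng.standard_normal((r, d2))  # rank-r target
U = 0.01 * rng.standard_normal((d1, r))
V = 0.01 * rng.standard_normal((d2, r))
normM = np.linalg.norm(M, 'fro')

def f(U, V):
    return 0.5 * np.linalg.norm(U @ V.T - M, 'fro')**2

vals = [f(U, V)]
for t in range(500):
    eta = 0.01 / (normM * (t + 1))      # decaying step size, eta_t ~ 1/t
    Rt = U @ V.T - M
    U, V = U - eta * Rt @ V, V - eta * Rt.T @ U
    vals.append(f(U, V))

monotone = all(vals[i + 1] <= vals[i] + 1e-12 for i in range(len(vals) - 1))
print(monotone)   # f(U_t, V_t) is non-increasing
```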
\begin{claim}
$\mathcal{A}(t), \mathcal{B}(t) \implies \mathcal{C}(t)$.
\end{claim}
\begin{proof}
From $\mathcal{B}(t)$ we know $\frac12 \norm{\mathbf U_t \mathbf V_t^\top - \mathbf M^*}_F^2 \le 2 \norm{\mathbf M^*}_F^2$ which implies $\norm{\mathbf U_t \mathbf V_t^\top}_F \le 3\norm{\mathbf M^*}_F$.
Therefore it suffices to prove
\begin{equation} \label{eqn:mf-inproof-toshow-1}
\norm{\mathbf U \mathbf V^\top}_F \le 3\norm{\mathbf M^*}_F,
\norm{\mathbf U^\top \mathbf U - \mathbf V^\top \mathbf V}_F \le \epsilon
\implies \norm{\mathbf U}_F^2 \le 5\sqrt{r} \norm{\mathbf M^*}_F, \norm{\mathbf V}_F^2 \le 5\sqrt{r} \norm{\mathbf M^*}_F.
\end{equation}
Now we prove \eqref{eqn:mf-inproof-toshow-1}.
Consider the SVD $\mathbf U = \mathbf\Phi \mathbf\Sigma \mathbf{\Psi}^\top$, where $\mathbf \Phi \in \mathbb{R}^{d_1\times d_1}$ and $\mathbf{\Psi} \in \mathbb{R}^{r\times r}$ are orthogonal matrices, and $\mathbf\Sigma \in \mathbb{R}^{d_1\times r}$ is a diagonal matrix.
Let $\sigma_i = \mathbf \Sigma[i, i]$ ($i\in[r]$) which are all the singular values of $\mathbf U$.
Define $\widetilde{\mathbf V} = \mathbf V \mathbf\Psi$. Then we have
\begin{align*}
3\norm{\mathbf M^*}_F
\ge \norm{\mathbf U \mathbf V^\top}_F
= \norm{\mathbf\Phi \mathbf\Sigma \mathbf{\Psi}^\top \mathbf{\Psi} \widetilde{\mathbf V}^\top }_F
= \norm{\mathbf \Sigma \widetilde{\mathbf V}^\top }_F
= \sqrt{\sum_{i=1}^r \sigma_i^2 \norm{\widetilde{\mathbf V}[:,i]}^2}
\end{align*}
and
\begin{align*}
\epsilon &\ge \norm{\mathbf U^\top \mathbf U - \mathbf V^\top \mathbf V}_F
= \norm{ \mathbf\Psi \mathbf\Sigma^\top \mathbf{\Phi}^\top \mathbf\Phi \mathbf\Sigma \mathbf{\Psi}^\top - \mathbf{\Psi} \widetilde{\mathbf V}^\top \widetilde{\mathbf V} \mathbf{\Psi}^\top }_F
= \norm{ \mathbf\Sigma^\top \mathbf\Sigma - \widetilde{\mathbf V}^\top \widetilde{\mathbf V} }_F \\
&\ge \sqrt{\sum_{i=1}^r \left( \sigma_i^2 - \norm{\widetilde{\mathbf V}[:,i]}^2 \right)^2}.
\end{align*}
Using the above two inequalities we get
\begin{align*}
\sum_{i=1}^r \sigma_i^4
&\le \sum_{i=1}^r \left( \sigma_i^4 + \norm{\widetilde{\mathbf V}[:,i]}^4 \right)
= \sum_{i=1}^r \left( \sigma_i^2 - \norm{\widetilde{\mathbf V}[:,i]}^2 \right)^2 + 2 \sum_{i=1}^r \sigma_i^2 \norm{\widetilde{\mathbf V}[:,i]}^2 \\
&\le \epsilon^2 + 2 \left(3\norm{\mathbf M^*}_F\right)^2
\le 19 \norm{\mathbf M^*}_F^2.
\end{align*}
Then by the Cauchy-Schwarz inequality we have
\begin{align*}
\norm{\mathbf U}_F^2 = \sum_{i=1}^r \sigma_i^2
\le \sqrt{r \sum_{i=1}^r \sigma_i^4}
\le \sqrt{r\cdot 19 \norm{\mathbf M^*}_F^2}
\le 5\sqrt{r} \norm{\mathbf M^*}_F.
\end{align*}
Similarly, we also have $\norm{\mathbf V}_F^2 \le 5\sqrt{r} \norm{\mathbf M^*}_F$. Therefore we have proved \eqref{eqn:mf-inproof-toshow-1}.
\end{proof}
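The chain of inequalities above combines two facts that hold for arbitrary $\mathbf U$ and $\mathbf V$: the bound $\sum_i \sigma_i^4 \le \norm{\mathbf U^\top \mathbf U - \mathbf V^\top \mathbf V}_F^2 + 2\norm{\mathbf U \mathbf V^\top}_F^2$ and the Cauchy-Schwarz step. Both can be spot-checked numerically (our own sanity check, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(3)
d1, d2, r = 7, 9, 3
U = rng.standard_normal((d1, r))
V = rng.standard_normal((d2, r))

s = np.linalg.svd(U, compute_uv=False)           # singular values of U
sum_s4 = np.sum(s**4)
bound = (np.linalg.norm(U.T @ U - V.T @ V, 'fro')**2
         + 2 * np.linalg.norm(U @ V.T, 'fro')**2)
ok1 = sum_s4 <= bound + 1e-8                     # holds for any U, V

# Cauchy-Schwarz step: ||U||_F^2 = sum_i s_i^2 <= sqrt(r * sum_i s_i^4).
ok2 = np.sum(s**2) <= np.sqrt(r * sum_s4) + 1e-8
print(ok1, ok2)
```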
\subsection{Convergence to a Stationary Point}
With the balancedness and boundedness properties in Lemma~\ref{lem:mf-balance}, it is then standard to show that $(\mathbf U_t, \mathbf V_t)$ converges to a stationary point of $f$.
\begin{lem} \label{lem:mf-convergence}
Under the setting of Theorem~\ref{thm:mf-main}, with high probability $\lim_{t\to\infty}(\mathbf U_t, \mathbf V_t) = (\bar{\mathbf U}, \bar{\mathbf V})$ exists, and $(\bar{\mathbf U}, \bar{\mathbf V})$ is a stationary point of $f$.
Furthermore, $(\bar{\mathbf U}, \bar{\mathbf V})$ satisfies $\norm{\bar{\mathbf U}^\top \bar{\mathbf U} - \bar{\mathbf V}^\top \bar{\mathbf V}}\le \epsilon$.
\end{lem}
\begin{proof}
We assume the three properties in Lemma~\ref{lem:mf-balance} hold, which happens with high probability.
Then from \eqref{eqn:mf-obj-decrease} we have
\begin{equation}\label{eqn:mf-obj-decrease-2}
\begin{aligned}
f(\mathbf U_{t+1}, \mathbf V_{t+1})
&\le f(\mathbf U_{t}, \mathbf V_{t}) - \frac{\eta_t}{2} \norm{ \nabla f(\mathbf U_t, \mathbf V_t) }_F^2 \\
&= f(\mathbf U_{t}, \mathbf V_{t}) - \frac{1}{2} \norm{ \nabla f(\mathbf U_t, \mathbf V_t) }_F \norm{\begin{pmatrix}
\mathbf U_{t+1} \\ \mathbf V_{t+1}
\end{pmatrix} - \begin{pmatrix}
\mathbf U_{t} \\ \mathbf V_{t}
\end{pmatrix}}_F .
\end{aligned}
\end{equation}
Under the above descent condition, the result of \cite{absil2005convergence} says that the iterates either diverge to infinity or converge to a fixed point.
According to Lemma~\ref{lem:mf-balance}, $\{(\mathbf U_t, \mathbf V_t)\}_{t=1}^\infty$ are all bounded, so they have to converge to a fixed point $(\bar{\mathbf U}, \bar{\mathbf V})$ as $t\to\infty$.
Next, from \eqref{eqn:mf-obj-decrease-2} we know that $\sum_{t=1}^\infty \frac{\eta_t}{2} \norm{ \nabla f(\mathbf U_t, \mathbf V_t) }_F^2 \le f(\mathbf U_0, \mathbf V_0)$ is bounded.
Notice that $\eta_t$ scales like $1/t$, so $\sum_{t} \eta_t = \infty$, and hence we must have $\liminf_{t\to\infty} \norm{ \nabla f(\mathbf U_t, \mathbf V_t) }_F = 0$.
Then according to the smoothness of $f$ in a bounded region (Lemma~\ref{lem:mf-local-smooth}) we conclude $ \nabla f(\bar{\mathbf U}, \bar{\mathbf V}) = \mat0$, i.e., $(\bar{\mathbf U}, \bar{\mathbf V})$ is a stationary point.
The second part of the lemma is evident according to Lemma~\ref{lem:mf-balance} (i).
\end{proof}
\subsection{Proof of Lemma~\ref{lem:mf-strict-saddle}}
The main idea in the proof is similar to \cite{ge2017no}. We want to find a direction $\mathbf \Delta$ such that either $[\nabla^2 f(\mathbf U, \mathbf V)] (\mathbf \Delta, \mathbf \Delta)$ is negative or $(\mathbf U, \mathbf V)$ is close to a global minimum. We show that this is possible when $\norm{\mathbf U^\top \mathbf U - \mathbf V^\top \mathbf V}_F \le \epsilon$.
First we define some notation.
Take the SVD $\mathbf M^* = \mathbf \Phi^* \mathbf \Sigma^* \mathbf \Psi^{*\top}$, where $\mathbf \Phi^* \in \mathbb{R}^{d_1\times r}$ and $\mathbf \Psi^* \in \mathbb{R}^{d_2\times r}$ have orthonormal columns and $\mathbf \Sigma^* \in \mathbb{R}^{r\times r}$ is diagonal.
Denote $\mathbf U^* = \mathbf \Phi^* (\mathbf \Sigma^*)^{1/2}$ and $\mathbf V^* = \mathbf \Psi^* (\mathbf \Sigma^*)^{1/2}$.
Then we have $\mathbf U^* \mathbf V^{*\top} = \mathbf M^*$ (i.e., $(\mathbf U^*, \mathbf V^*)$ is a global minimum) and $\mathbf U^{*\top} \mathbf U^* = \mathbf V^{*\top} \mathbf V^*$.
Let $\mathbf M = \mathbf U \mathbf V^\top$, $\mathbf W = \begin{pmatrix}
\mathbf U\\ \mathbf V
\end{pmatrix}$ and $\mathbf W^* = \begin{pmatrix}
\mathbf U^*\\ \mathbf V^*
\end{pmatrix}$.
Define $$\mathbf{R} = \mathrm{argmin}_{\mathbf R' \in \mathbb{R}^{r\times r} \text{, orthogonal}} \norm{\mathbf W - \mathbf W^* \mathbf R'}_F$$
and
\[
\mathbf\Delta = \mathbf W - \mathbf W^* \mathbf R.
\]
We will show that $\mathbf\Delta$ is the desired direction.
Recall \eqref{eqn:mf-hessian}:
\begin{equation} \label{eqn:mf-hessian-to-bound}
\begin{aligned}
[\nabla^2 f(\mathbf U, \mathbf V)] (\mathbf \Delta, \mathbf \Delta)
= 2 \left\langle \mathbf M - \mathbf M^*, \mathbf\Delta_{\mathbf U} \mathbf{\Delta}_{\mathbf V}^\top \right\rangle + \norm{\mathbf U \mathbf\Delta_{\mathbf V}^\top + \mathbf\Delta_{\mathbf U} \mathbf V^\top }_F^2,
\end{aligned}
\end{equation}
where $ \mathbf \Delta = \begin{pmatrix}
\mathbf \Delta_{\mathbf U} \\ \mathbf{\Delta}_{\mathbf V}
\end{pmatrix},
\mathbf \Delta_{\mathbf U} \in \mathbb{R}^{d_1\times r}, \mathbf \Delta_{\mathbf V} \in \mathbb{R}^{d_2\times r}$.
We consider the two terms in \eqref{eqn:mf-hessian-to-bound} separately.
For the first term in \eqref{eqn:mf-hessian-to-bound}, we have:
\begin{claim} \label{claim:mf-hessian-to-bound-1}
$\left\langle \mathbf M - \mathbf M^*, \mathbf\Delta_{\mathbf U} \mathbf{\Delta}_{\mathbf V}^\top \right\rangle = -\norm{\mathbf M - \mathbf M^*}_F^2$.
\end{claim}
\begin{proof}
Since $(\mathbf U, \mathbf V)$ is a stationary point of $f$, we have the first-order optimality condition:
\begin{equation} \label{eqn:mf-inproof-first-order-opt-cond}
\frac{\partial f(\mathbf U, \mathbf V)}{\partial \mathbf U} = (\mathbf M - \mathbf M^*) \mathbf V = \mat0, \qquad
\frac{\partial f(\mathbf U, \mathbf V)}{\partial \mathbf V} = (\mathbf M - \mathbf M^*)^\top \mathbf U = \mat0.
\end{equation}
Note that $\mathbf \Delta_{\mathbf U} = \mathbf U - \mathbf U^* \mathbf R$ and $\mathbf \Delta_{\mathbf V} = \mathbf V - \mathbf V^* \mathbf R$.
We have
\begin{align*}
&\left\langle \mathbf M - \mathbf M^*, \mathbf\Delta_{\mathbf U} \mathbf{\Delta}_{\mathbf V}^\top \right\rangle \\
=\,& \left\langle \mathbf M - \mathbf M^*, (\mathbf U - \mathbf U^* \mathbf R) (\mathbf V - \mathbf V^* \mathbf R)^\top \right\rangle \\
=\,& \left\langle \mathbf M - \mathbf M^*, \mathbf M - \mathbf U^* \mathbf R \mathbf V^\top - \mathbf U \mathbf R^\top \mathbf V^{*\top} + \mathbf M^* \right\rangle \\
=\,& \left\langle \mathbf M - \mathbf M^*, \mathbf M^*\right\rangle \\
=\,& \left\langle \mathbf M - \mathbf M^*, \mathbf M^* - \mathbf M \right\rangle \\
=\,& -\norm{\mathbf M - \mathbf M^*}_F^2,
\end{align*}
where we have used the following consequences of \eqref{eqn:mf-inproof-first-order-opt-cond}:
\begin{align*}
&\left\langle \mathbf M - \mathbf M^*, \mathbf M \right\rangle = \left\langle \mathbf M - \mathbf M^*, \mathbf U \mathbf V^\top \right\rangle = 0, \\
&\left\langle \mathbf M - \mathbf M^*, \mathbf U^* \mathbf R \mathbf V^\top \right\rangle = 0, \\
&\left\langle \mathbf M - \mathbf M^*, \mathbf U \mathbf R^\top \mathbf V^{*\top} \right\rangle = 0.
\end{align*}
\end{proof}
The second term in \eqref{eqn:mf-hessian-to-bound} has the following upper bound:
\begin{claim} \label{claim:mf-hessian-to-bound-2}
$\norm{\mathbf U \mathbf\Delta_{\mathbf V}^\top + \mathbf\Delta_{\mathbf U} \mathbf V^\top}_F^2 \le \norm{\mathbf M - \mathbf M^*}_F^2 + \frac12 \epsilon^2 $.
\end{claim}
\begin{proof}
We make use of the following identities, all of which can be directly verified by plugging in definitions:
\begin{equation} \label{eqn:mf-inproof-identity-1}
\mathbf U \mathbf\Delta_{\mathbf V}^\top + \mathbf\Delta_{\mathbf U} \mathbf V^\top = \mathbf\Delta_{\mathbf U} \mathbf\Delta_{\mathbf V}^\top + \mathbf M - \mathbf M^*,
\end{equation}
\begin{equation} \label{eqn:mf-inproof-identity-2}
\norm{\mathbf{\Delta} \mathbf{\Delta}^\top}_F^2 = 4 \norm{\mathbf\Delta_{\mathbf U} \mathbf\Delta_{\mathbf V}^\top}_F^2 + \norm{\mathbf\Delta_{\mathbf U}^\top\mathbf\Delta_{\mathbf U} - \mathbf\Delta_{\mathbf V}^\top \mathbf\Delta_{\mathbf V}}_F^2,
\end{equation}
\begin{equation} \label{eqn:mf-inproof-identity-3}
\begin{aligned}
\norm{\mathbf W \mathbf W^\top - \mathbf W^* \mathbf W^{*\top}}_F^2 =\ & 4\norm{\mathbf M - \mathbf M^*}_F^2 - 2 \norm{\mathbf U^\top \mathbf U^* - \mathbf V^\top \mathbf V^*}_F^2 \\&+ \norm{\mathbf U^\top \mathbf U - \mathbf V^\top \mathbf V}_F^2 + \norm{\mathbf U^{*\top} \mathbf U^* - \mathbf V^{*\top} \mathbf V^*}_F^2.
\end{aligned}
\end{equation}
We also need the following inequality, which is \citep[Lemma 6]{ge2017no}:
\begin{equation} \label{eqn:mf-inproof-rong-ineq}
\norm{\mathbf{\Delta} \mathbf{\Delta}^\top}_F^2 \le 2 \norm{\mathbf W \mathbf W^\top - \mathbf W^* \mathbf W^{*\top}}_F^2.
\end{equation}
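Identities \eqref{eqn:mf-inproof-identity-1}--\eqref{eqn:mf-inproof-identity-3} and the inequality \eqref{eqn:mf-inproof-rong-ineq} can be spot-checked numerically (our own sanity check, not part of the proof; note that \eqref{eqn:mf-inproof-identity-1} requires $\mathbf R$ to be orthogonal, and \eqref{eqn:mf-inproof-rong-ineq} requires the Procrustes-optimal $\mathbf R$):

```python
import numpy as np

rng = np.random.default_rng(4)
d1, d2, r = 6, 5, 2
U = rng.standard_normal((d1, r)); V = rng.standard_normal((d2, r))
Us = rng.standard_normal((d1, r)); Vs = rng.standard_normal((d2, r))
W = np.vstack([U, V]); Ws = np.vstack([Us, Vs])
M, Ms = U @ V.T, Us @ Vs.T

# Optimal orthogonal alignment R = argmin ||W - Ws R'||_F (orthogonal Procrustes).
A, _, Bt = np.linalg.svd(Ws.T @ W)
R = A @ Bt
D = W - Ws @ R
Du, Dv = D[:d1], D[d1:]

fro2 = lambda X: np.linalg.norm(X, 'fro')**2
# Identity (1): U Dv^T + Du V^T = Du Dv^T + M - M*.
i1 = np.allclose(U @ Dv.T + Du @ V.T, Du @ Dv.T + M - Ms)
# Identity (2).
i2 = np.isclose(fro2(D @ D.T), 4*fro2(Du @ Dv.T) + fro2(Du.T@Du - Dv.T@Dv))
# Identity (3).
i3 = np.isclose(fro2(W@W.T - Ws@Ws.T),
                4*fro2(M - Ms) - 2*fro2(U.T@Us - V.T@Vs)
                + fro2(U.T@U - V.T@V) + fro2(Us.T@Us - Vs.T@Vs))
# Inequality from [ge2017no, Lemma 6].
i4 = fro2(D @ D.T) <= 2*fro2(W@W.T - Ws@Ws.T) + 1e-8
print(i1, i2, i3, i4)
```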
Now we can prove the desired bound as follows:
\begin{align*}
& \norm{\mathbf U \mathbf\Delta_{\mathbf V}^\top + \mathbf\Delta_{\mathbf U} \mathbf V^\top}_F^2 \\
=\, & \norm{\mathbf\Delta_{\mathbf U} \mathbf\Delta_{\mathbf V}^\top + \mathbf M - \mathbf M^*}_F^2 & (\eqref{eqn:mf-inproof-identity-1}) \\
=\,& \norm{\mathbf\Delta_{\mathbf U} \mathbf\Delta_{\mathbf V}^\top }_F^2 + 2 \left\langle \mathbf M - \mathbf M^*, \mathbf\Delta_{\mathbf U} \mathbf{\Delta}_{\mathbf V}^\top \right\rangle + \norm{\mathbf M - \mathbf M^*}_F^2 \\
=\,& \norm{\mathbf\Delta_{\mathbf U} \mathbf\Delta_{\mathbf V}^\top }_F^2 - \norm{\mathbf M - \mathbf M^*}_F^2 & (\text{Claim~\ref{claim:mf-hessian-to-bound-1}})\\
\le\, & \frac14 \norm{\mathbf{\Delta} \mathbf{\Delta}^\top}_F^2 - \norm{\mathbf M - \mathbf M^*}_F^2 & (\eqref{eqn:mf-inproof-identity-2}) \\
\le\, & \frac12 \norm{\mathbf W \mathbf W^\top - \mathbf W^* \mathbf W^{*\top}}_F^2 - \norm{\mathbf M - \mathbf M^*}_F^2 & (\eqref{eqn:mf-inproof-rong-ineq}) \\
=\, & 2\norm{\mathbf M - \mathbf M^*}_F^2 - \norm{\mathbf U^\top \mathbf U^* - \mathbf V^\top \mathbf V^*}_F^2 + \frac12\norm{\mathbf U^\top \mathbf U - \mathbf V^\top \mathbf V}_F^2 \\& + \frac12\norm{\mathbf U^{*\top} \mathbf U^* - \mathbf V^{*\top} \mathbf V^*}_F^2 - \norm{\mathbf M - \mathbf M^*}_F^2 & (\eqref{eqn:mf-inproof-identity-3}) \\
\le\,& \norm{\mathbf M - \mathbf M^*}_F^2 + \frac12 \epsilon^2,
\end{align*}
where in the last line we have used $\mathbf U^{*\top} \mathbf U^* = \mathbf V^{*\top} \mathbf V^*$ and $\norm{\mathbf U^{\top} \mathbf U - \mathbf V^{\top} \mathbf V} \le \epsilon$.
\end{proof}
Using Claims~\ref{claim:mf-hessian-to-bound-1} and \ref{claim:mf-hessian-to-bound-2}, we obtain an upper bound on \eqref{eqn:mf-hessian-to-bound}:
\begin{align*}
[\nabla^2 f(\mathbf U, \mathbf V)] (\mathbf \Delta, \mathbf \Delta) \le -\norm{\mathbf M - \mathbf M^*}_F^2 + \frac12 \epsilon^2.
\end{align*}
Therefore, we have either $\norm{\mathbf U \mathbf V^\top - \mathbf M^*}_F = \norm{\mathbf M - \mathbf M^*}_F \le \epsilon$ or $[\nabla^2 f(\mathbf U, \mathbf V)] (\mathbf \Delta, \mathbf \Delta) \le -\frac12 \epsilon^2 <0$.
In the latter case, $(\mathbf U, \mathbf V)$ is a strict saddle point of $f$.
This completes the proof of Lemma~\ref{lem:mf-strict-saddle}.
\subsection{Finishing the Proof of Theorem~\ref{thm:mf-main}}
Theorem~\ref{thm:mf-main} is a direct corollary of Lemma~\ref{lem:mf-convergence}, Lemma~\ref{lem:mf-strict-saddle}, and the fact that gradient descent does not converge to a strict saddle point almost surely \citep{lee2016gradient, panageas2016gradient}.
\subsection{Proof Sketch of Theorem~\ref{thm:rank_1}}
To analyze the convergence, we define four key quantities, where $\vect{x}_t$ and $\vect{y}_t$ denote the two factors at iteration $t$, and $\vect{u}$ and $\vect{v}$ denote the left and right singular vectors of $\mathbf{M}^*$.
\begin{align*}
\alpha_t = \vect{u}^\top\vect{x}_t,\quad
\alpha_{t,\perp} = \norm{\vect{u}_\perp^\top\vect{x}_t}_2, \quad
\beta_{t} = \vect{v}^\top \vect{y}_t, \quad
\beta_{t,\perp} = \norm{\vect{v}_\perp^\top \vect{y}_t}_2
\end{align*}
where $\vect{u}_\perp^\top$ and $\vect{v}_{\perp}^\top$ denote the projections onto the orthogonal complements of $\vect{u}$ and $\vect{v}$, respectively.
Notice that $\norm{\vect{x}_t}_2^2 = \alpha_t^2 + \alpha_{t,\perp}^2$ and $\norm{\vect{y}_t}_2^2 = \beta_t^2 + \beta_{t,\perp}^2$.
Note that at a global minimum we have $\beta_{t,\perp} = 0$, $\alpha_{t,\perp}=0$ and $\alpha_t \beta_t = \sigma_1$.
It turns out we can write out explicit formulas for the dynamics of these quantities.
\begin{align*}
\alpha_{t+1} =
\left(1-\eta\left(\beta_t^2 +\beta_{t,\perp}^2\right)\right)\alpha_t + \eta\sigma_1\beta_t, \quad
&\beta_{t+1} = \left(1-\eta\left(\alpha_t^2+\alpha_{t,\perp}^2\right)\right)\beta_t + \eta\sigma_1 \alpha_t\\
\alpha_{t+1,\perp} = \left(1-\eta\left(\beta_t^2+\beta_{t,\perp}^2\right)\right)\alpha_{t,\perp}, \quad
&\beta_{t+1,\perp} = \left(1-\eta\left(\alpha_t^2+\alpha_{t,\perp}^2\right)\right)\beta_{t,\perp}
\end{align*}
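These scalar recursions are cheap to simulate (our own sketch; the constants below are ad hoc). Starting from a small positive signal and an $O(\sqrt{\sigma_1})$-scale complement component, one observes the two-stage behavior analyzed below: $\alpha_t\beta_t \to \sigma_1$ while the complement magnitudes decay geometrically.

```python
# Scalar dynamics of the rank-1 GD iterates (signal/complement decomposition).
sigma1, eta = 1.0, 0.01
alpha = beta = 0.01          # small positive signal components
ap = bp = 0.1                # magnitudes in the complement space

for t in range(20000):
    # Simultaneous update, matching the recursions in the text.
    alpha, beta, ap, bp = (
        (1 - eta * (beta**2 + bp**2)) * alpha + eta * sigma1 * beta,
        (1 - eta * (alpha**2 + ap**2)) * beta + eta * sigma1 * alpha,
        (1 - eta * (beta**2 + bp**2)) * ap,
        (1 - eta * (alpha**2 + ap**2)) * bp,
    )

print(alpha * beta, ap, bp)  # alpha*beta near sigma1, complement parts near 0
```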
Since we use Gaussian initialization, standard random matrix theory implies that with overwhelming probability the following hold: \[
\alpha_0 \asymp \frac{c_{init}\sigma_1}{\sqrt{d}}, \quad \alpha_{0,\perp} \asymp c_{init} \sqrt{\sigma_1}, \quad \beta_0 \asymp\frac{c_{init}\sigma_1}{\sqrt{d}}, \quad\beta_{0,\perp} \asymp c_{init} \sqrt{\sigma_1}.
\]
We also assume the signal is positive at initialization, i.e., $\alpha_0 > 0$ and $\beta_0 > 0$, which holds with constant probability under Gaussian initialization.
To prove $c_0\le \frac{\abs{\vect{u}^\top\vect{x}_t}}{\abs{\vect{v}^\top \vect{y}_t}}\le C_0$ for all iterations, we cannot analyze the quantity $\frac{\abs{\vect{u}^\top\vect{x}_t}}{\abs{\vect{v}^\top \vect{y}_t}}$ alone; the convergence rate to the global minimum also plays an important role in controlling $\frac{\abs{\vect{u}^\top\vect{x}_t}}{\abs{\vect{v}^\top \vect{y}_t}}$.
We divide the dynamics into two stages.
\begin{thm}[Stage 1: Escaping from the $(\vect{0},\vect{0})$ Saddle Point]\label{thm:rank_1_stage_1}
Let $T := \min \left\{t : \alpha_t^2 + \beta_t^2 > c_1\sigma_1\right\}$ for $c_1 = 1/2$.
If $\eta \le \frac{c_{step}}{\sigma_1}$, then for $t=1,\ldots,T-1$ the following hold:\begin{itemize}
\item Magnitude in the complement space remains small: $\xi_t \le \xi_0$;
\item Growth of magnitude in signal space: $\abs{\alpha_{t+1}+\beta_{t+1}} \ge \left(1+\frac{\eta\sigma_1}{3}\right) \abs{\alpha_t+\beta_t}$;
\item Bounded ratio between two layers: $\abs{\alpha_t - \beta_t} \le c_{diff}\abs{\alpha_t+\beta_t}$ where $c_{diff} \triangleq \frac{\abs{\alpha_0-\beta_0}}{\abs{\alpha_0+\beta_0}}$.
\end{itemize}
Furthermore, after at most $T_1 = O\left(\frac{1}{\eta\sigma_1}\log\left(\frac{\sigma_1}{\alpha_0+\beta_0}\right)\right)$ iterations, $T_1 \le T$, we have $\alpha_{T_1} \beta_{T_1} \ge c \sigma_1$ for some small constant $0 < c < 1$.
\end{thm}
In this stage, the strength in the noise space remains small ($\xi_t \le \xi_0$) and the strength in the signal space is growing ($\abs{\alpha_{t+1}+\beta_{t+1}} \ge \left(1+\frac{\eta\sigma_1}{3}\right) \abs{\alpha_t+\beta_t}$).
Further, because $\abs{\alpha_t - \beta_t} \le c_{diff}\abs{\alpha_t+\beta_t}$, we know $\frac{1-c_{diff}}{1+c_{diff}}\le \frac{\alpha_t}{\beta_t} \le \frac{1+c_{diff}}{1-c_{diff}}$, which is our desired result.
Here we consider $\abs{\alpha_t - \beta_t} / \abs{\alpha_t + \beta_t}$ instead of directly bounding $\frac{\alpha_t}{\beta_t}$ because the dynamics of $\abs{\alpha_t - \beta_t} / \abs{\alpha_t + \beta_t}$ are much simpler to analyze.
This will be apparent in the proof.
Now we enter stage 2, which is essentially a local convergence phase.
The following theorem characterizes the behaviors of the strength in the signal and in the noise space in this stage.
\begin{thm}[Stage 2: Local Convergence]\label{thm:rank_1_stage_2}
Suppose at the $T_1$-th iteration the following hold \begin{align*}
\alpha_{T_1}\beta_{T_1} \ge c\sigma_1, \quad
\alpha_{T_1,\perp}^2+\beta_{T_1,\perp}^2 \le 10c_{init} \sigma_1,\quad
\abs{\alpha_{T_1}-\beta_{T_1}} \le c_{diff}\abs{\alpha_{T_1}+\beta_{T_1}}
\end{align*} where $c > 1/5$, $c_{init} < 1/100$, $c_{diff}< 1$ and $\eta \le \frac{c_{step}}{\sigma_1}$ for $c_{step} \le 1/100$.
Let $\tau_t = \min\{\alpha_t,\beta_t\}$ and $\tau_{min} \triangleq \frac{\tau_{T_1}}{2}$.
Then for all $t = T_1+1, T_1+2, \ldots$, the following hold
\begin{align*}
\tau_t \ge \prod_{i=T_1+1}^{t}\left(1-\eta\xi_0\left(1-\eta\tau_{min}^2\right)^{i-T_1}\right)\tau_{T_1}, \quad
\xi_t \le \left(1-\eta\tau_{min}^2\right)^{t-T_1} \xi_0, \quad \alpha_t\beta_t \le \sigma_1.
\end{align*}
\end{thm}
In Theorem~\ref{thm:rank_1_stage_2}, $\tau_t$ characterizes the smaller signal strength of $\vect{x}_t$ and $\vect{y}_t$, and $\xi_t$ characterizes the strength in the noise space.
Note that even though $\tau_t$ may decrease, some algebra applied to Theorem~\ref{thm:rank_1_stage_2} shows that $\tau_t$ is uniformly bounded below by $\Omega\left(\sqrt{\sigma_1}\right)$ for all $t \ge T_1$.
Combining with the fact that $\alpha_t\beta_t \le \sigma_1$, we know $c_0\le \frac{\alpha_t}{\beta_t} \le C_0$ for all $t \ge T_1$.
Similar to Theorem~\ref{thm:rank_1_stage_1}, here we consider $\tau_t$ and $\alpha_t\beta_t$ instead of directly analyzing $\frac{\alpha_t}{\beta_t}$ because the dynamics of $\tau_t$ and $\alpha_t\beta_t$ are easier to analyze.
A direct corollary is the following local convergence rate result, which is proved by examining the dynamics of $\alpha_t\beta_t$.
\begin{cor}\label{cor:rank_1local_convergence_rate}
Under the same assumptions as in Theorem~\ref{thm:rank_1_stage_2}, after $T_2 = O\left(\frac{1}{\eta \sigma_1}\log\left(\frac{1}{\epsilon}\right)\right)$ additional iterations, we have $
\norm{\vect{x}_{T_1+T_2}\vect{y}_{T_1+T_2}^\top - \mathbf{M}^*}_F^2 \le \epsilon^2 \sigma_1^2.$
\end{cor}
Theorem~\ref{thm:rank_1_stage_1}, Theorem~\ref{thm:rank_1_stage_2} and Corollary~\ref{cor:rank_1local_convergence_rate} together imply Theorem~\ref{thm:rank_1}.
\section{Introduction}
\label{sec:Introduction}
Mean field games were introduced by \cite{LasryLions.06a,LasryLions.06b,LasryLions.07} and \cite{HuangMalhameCaines.07, HuangMalhameCaines.06} to overcome the notorious intractability of $n$-player games. Two key simplifications are made. First, agents interact symmetrically through the empirical distribution of their states. Second, by formally letting $n\to\infty$, one passes to a representative agent whose actions do not affect this distribution because each individual agent becomes negligible. Thus, the mean field game is seen as an approximation of the $n$-player game for large $n$. We refer to the lecture notes~\cite{Cardaliaguet.13} and the monographs~\cite{BensoussanFrehseYam.13,CarmonaDelaRue.17a,CarmonaDelaRue.17b} and their extensive references for further background.
In this paper, we conduct a case study of an $n$-player game of optimal stopping where multiple equilibria may occur naturally. We formulate an associated mean field game and highlight that certain mean field equilibria are limits of $n$-player equilibria while others are not, and study how to distinguish them. Equilibria that are not limit points are questionable from the point of view of applications, at least if they are motivated as ``$n$-player games with large $n$.''
Several ways of connecting $n$-player and mean field games have been studied in the literature. In many cases it is easier to establish the reverse direction, namely that a given mean field equilibrium induces an \emph{approximate} Nash equilibrium in the $n$-player game for large $n$. This goes back to \cite{HuangMalhameCaines.06} and is by now established in some generality, see in particular \cite{Lacker.14} for diffusion control, \cite{CarmonaDelarueLacker.17} for games of timing or~\cite{CecchinFischer.18} for finite state games (but see also \cite{CampiFischer.18} for a counterexample in a degenerate case with absorption). It then follows, conversely, that mean field equilibria are limits of approximate $n$-player equilibria. However, we emphasize that approximate and actual Nash equilibria may look quite different, and in particular one cannot expect in general that there is a true Nash equilibrium in the proximity of an approximate one.
The convergence of $n$-player Nash equilibria to the mean field limit is often more delicate.
The deep result of \cite{CardaliaguetDelarueLasryLions.15} shows convergence for a class of (closed-loop) games where agents choose drifts of diffusions. In their setting, the mean field game has a unique equilibrium as a consequence of the so-called monotonicity condition~\cite{LasryLions.06a} which postulates that it is disadvantageous for agents' states to be close to one another. In a related but different (open-loop) framework, and without imposing uniqueness, \cite{Fischer.14} obtains convergence under the assumption that the limiting measure flow is deterministic. More comprehensively, \cite{Lacker.14} shows that $n$-player equilibria converge to a weak notion of mean field equilibria which can include mixtures of deterministic equilibria, for a general class of diffusion-control games. A corresponding result for games of timing is established in \cite{CarmonaDelarueLacker.17}. Most recently, \cite{Lacker.18b} provides results along the lines of~\cite{Lacker.14} for the closed-loop case.
Convergence has also been shown in a number of more specific problems, for instance stationary mean field games~\cite{LasryLions.06a}, linear-quadratic problems~\cite{Bardi.12} or a game of Poissonian control~\cite{NutzZhang.17}, among others. However, to the best of our knowledge, the question which mean field equilibria are limit points of (true) $n$-player equilibria has not been emphasized as such in the literature. We can mention the parallel work~\cite{CecchinDaiPraFischerPelino.18} on a two-state game: the game has unique $n$-player equilibria and these converge to a mean field equilibrium as expected; however, a second, less plausible mean field solution can appear for certain parameter values and this solution is not a limit. Another interesting parallel work \cite{DelarueFoguenTchuendom.18} studies several approaches of selecting an equilibrium in a linear-quadratic mean field game with multiple equilibria, including the convergence of $n$-player equilibria. Different approaches are shown to select different equilibria.
From the perspective of mean field games, being a limit point of $n$-player equilibria can be seen as a stability property of equilibria with respect to the number of players. We are not aware of a systematic study in this direction (but see~\cite{BrianiCardaliaguet.18} for a recent investigation of a different stability property that is potentially related). Since mean field equilibria are often motivated as ``large~$n$'' equilibria, it seems desirable to understand the phenomenon in some generality and at least establish sufficient conditions. A general formulation and investigation of this stability seems wide open at this time, whence our focus on a case study in the present paper.
\subsection{Synopsis}
We start by introducing an $n$-player game of optimal stopping inspired by~\cite{Bertucci.17,BouveretDumitrescuTankov.18,CarmonaDelarueLacker.17,Nutz.16} and the literature on bank-runs following~\cite{DiamondDybvig.83}. In addition to their i.i.d.\ signals, players observe how many other players have already stopped. A crucial feature is that whenever an agent leaves the game, staying in the game becomes less attractive for the remaining agents. For instance, this may reflect that the bank is more likely to default if other clients withdraw their savings. In particular, the game satisfies the opposite of Lasry and Lions' monotonicity condition, or \emph{strategic complementarity} in Economics terminology~\cite{BulowGeanakoplosKlemperer.85}. Indeed, the model exhibits a ``flocking'' or ``herding'' behavior where groups of agents can collectively decide to stop or not. We will see that these choices can naturally give rise to multiple equilibria; more precisely, they parametrize the full range of $n$-player equilibria.
Next, we review the mean field version of the game which was introduced in~\cite{Nutz.16} without discussing the $n$-player game. Enhancing slightly a result of~\cite{Nutz.16}, mean field equilibria are described by a simple equation: for any equilibrium, the proportion $\rho(t)$ of agents that have stopped by time~$t$ is a zero of a deterministic function $g_{t}$ on $[0,1]$ as in Figure~\ref{fig:intro}. More generally, any equilibrium $t\mapsto \rho(t)$ is characterized as an increasing, right-continuous selection of such zeros. In Figure~\ref{fig:intro}, we can distinguish several types of zeros: increasing-transversal~($i$), tangential~($t$) and decreasing-transversal~($d$).
\begin{figure}[b]
\begin{center}
\begin{tikzpicture}[scale=1.1]
\draw[line width=1pt,-latex] (-5,0) to (5,0) node[right=2pt] {$u$};
\fill (-4.25,0) circle (1.5pt) node[above=1pt] at (-4.25,0) {0};
\fill (4.25,0) circle (1.5pt) node[above=1pt] at (4.25,0) {1};
\draw[line width=1.5pt] (-4.25,-.5)
to[out=0,in=225] (-3.25,0)
to[out=45,in=135] (-1.75,0)
to[out=-55,in=180] (-.575,0)
to[out=0,in=235] (.5,0)
to[out=45,in=135] (2,0)
to[out=-45,in=225] (3.5,0)
to[out=45,in=190] (4.25,.5)
node[above=1pt] {$g_{t}(u)$};
\fill (-3.25,0) circle (2pt) node[above=2pt]{$i$};
\fill (-1.75,0) circle (2pt) node[above=2pt]{$d$};
\fill (-.575,0) circle (2pt) node[above=2pt]{$t$};
\fill (.5,0) circle (2pt) node[above=2pt]{$i$};
\fill (2,0) circle (2pt) node[above=2pt]{$d$};
\fill (3.5,0) circle (2pt) node[above=2pt]{$i$};
\end{tikzpicture}
\end{center}
\caption{Types of mean field equilibria at a fixed time $t$}
\label{fig:intro}
\end{figure}
These types are related to how concentrated the distribution of the agents' signals is in a neighborhood of the zero, relative to the strength of interaction. Intuitively, tangential solutions are delicate in that they may disappear if Figure~\ref{fig:intro} is perturbed, whereas the transversal solutions are stable in this sense.
We then turn to our main question and study which mean field equilibria are limits of $n$-player equilibria. Roughly, the main result is that
\begin{enumerate}
\item Increasing-transversal solutions are limits of $n$-player equilibria,
\item decreasing-transversal solutions \emph{fail} to be limits,
\item tangential solutions can but need not be limits.
\end{enumerate}
Specifically, we first consider the minimal and maximal equilibria, corresponding to the left- and right-most solutions in Figure~\ref{fig:intro}. The $n$-player game also has such extremal equilibria and these yield natural candidates for sequences converging to their mean field counterparts. After introducing appropriate notions for dynamic equilibria, we show that this convergence indeed holds, under the condition that the solutions are increasing-transversal (on a sufficiently large set of times $t$). However, we also find that if the minimal (say) solution is tangential, the minimal $n$-player equilibria can converge to a mixture of mean field equilibria and then the minimal mean field equilibrium may fail to be a proper limit. (The minimal and maximal solutions can be increasing-transversal or tangential, but never decreasing-transversal.) This also yields a novel example of how randomization can emerge in mean field games.
Second, we study the convergence to a general mean field equilibrium, possibly somewhere in the middle of Figure~\ref{fig:intro}. In that case, there are no obvious candidates for the $n$-player approximations and more abstract arguments need to be used. We show by a fixed point construction that all increasing-transversal solutions are limits of $n$-player equilibria. Quite surprisingly, however, (``strongly'') decreasing-transversal solutions fail to be limits despite appearing stable in Figure~\ref{fig:intro}. In fact, these solutions merely occur as parts of mixtures that are limits, and the weight within these mixtures can be bounded by a monotone function of the slope in Figure~\ref{fig:intro}. It turns out that some fairly detailed asymptotic statistics, such as the expected number of $n$-player equilibria, can be analyzed in our model---which is unusual for mean field games.
The remainder of this paper is organized as follows. In Section~\ref{se:basicSetup}, we introduce the game of optimal stopping. Section~\ref{se:nPlayer} describes the Nash equilibria of the $n$-player version and Section~\ref{se:MFG} covers the analogue for the mean field game. The results on the convergence to the minimal and maximal equilibria are relatively direct and established in Section~\ref{se:convergenceExtremal}, whereas the more abstract results on the convergence to general equilibria are reported in Section~\ref{se:convergenceGeneral}.
\section{Description of the Game}\label{se:basicSetup}
Let $(I,\mathcal{I},\lambda)$ be a probability space representing the agents; we shall be interested in the $n$-player case with a finite $I$ and the mean field case with an atomless space. Let $(\Omega,\mathcal{G},P)$ be another probability space, equipped with a right-continuous filtration $\mathbb{G}=(\mathcal{G}_{t})_{t\in\mathbb{R}_{+}}$ and an exponentially distributed random variable $\mathcal{E}$ which is independent of $\mathbb{G}$.
Given an agent $i\in I$, let $\alpha^{i}\geq0$ be a $\mathbb{G}$-progressively measurable process which is locally integrable
and consider the random time
$$
\theta^{i} = \inf\bigg\{t:\, \int_{0}^{t} \alpha^{i}_{s}\,ds = \mathcal{E}\bigg\}.
$$
As in~\cite{Nutz.16}, one may think of $\theta^{i}$ as the time when agent $i$ expects the default of her bank.
We fix a parameter $r\in\mathbb{R}$, interpreted as the interest rate paid by the bank (and assumed to be constant for simplicity). Following~\cite{Nutz.16}, we suppose that $\alpha^{i}$ is increasing\footnote{Increase is to be understood in the non-strict sense throughout the paper.} and that
\begin{equation}\label{eq:intCond}
\text{ $\inf\{t: \, \alpha^{i}_{t}-r\geq 0\}<\infty$\quad $P$-a.s.} \;
\end{equation}
Denoting by $\mathcal{T}$ the set of all $\mathbb{G}$-stopping times, we then consider the
optimal stopping problem
\begin{equation}\label{eq:optStopProblem}
\sup_{\tau\in\mathcal{T}} E\big[e^{r\tau} \mathbf{1}_{\{\theta^{i}>\tau\}\cup \{\theta^{i}=\infty\}}\big]
\end{equation}
which we assume to have a finite value. Thus, if the default time satisfies $\theta^{i}>\tau$, we may think of the agent as accruing the interest on an initial unit investment until~$\tau$, but losing everything if the bank defaults at or before~$\tau$.
If the stopping time
\begin{equation}\label{eq:optStopTime}
\tau^{i} := \inf\{t: \, \alpha^{i}_{t}\geq r\} \in\mathcal{T}
\end{equation}
is a.s.\ finite, then $\tau^{i}$ is optimal and in fact the minimal solution of~\eqref{eq:optStopProblem}; cf. \cite[Lemma~2.1]{Nutz.16}.
The solution is unique for instance if $\alpha^{i}$ is strictly increasing, but not in general. We assume that agents choose~\eqref{eq:optStopTime} in the case of non-uniqueness, which can be motivated e.g.\ as a preference for early stopping when other things are equal. This convention is not essential, but simplifies our exposition and allows us to focus on multiplicity of equilibria due to inherent game-theoretic aspects as it avoids ambiguity at the individual agents' level.
The processes $\alpha^{i}$ will depend on the proportion $\rho(t)$ of players who have already stopped, thus inducing an interaction among the agents. Since given~$\rho$, the optimal stopping times are completely determined by~\eqref{eq:optStopTime}, we shall simply say that an equilibrium is a process $\rho$ which is $\mathbb{G}$-adapted and such that
\begin{equation*}\label{eq:equilibDef}
\rho(t) = \lambda\{i:\, \tau^{i}\leq t\},
\end{equation*}
where it is tacitly assumed that the above set is $\lambda$-measurable.
\section{The $n$-Player Game}
\label{se:nPlayer}
In this section, we formulate the $n$-player version of the ``toy model'' mean field game in \cite[Section~4]{Nutz.16}. Indeed, fix $n\in\mathbb{N}$ and take $I=\{1,\dots,n\}$ to be a set with $n$ elements, equipped with the normalized counting measure. Each player $i$ observes an idiosyncratic signal $Y^i_t\geq 0$ which is right-continuous, progressively measurable, increasing and such that $\{Y^{i}\}_{i\in I}$ are pairwise i.i.d.\ with the common c.d.f.\
$$
y\mapsto F_t(y):=P\{Y_t^i\leq y\}.
$$
Moreover, for a fixed interaction constant\footnote{We could more generally consider processes $\alpha^{i}$ which are nonlinear functions of $Y^{i}$ and $\rho^{-i}$ and possibly a common noise, as in~\cite{Nutz.16}. However, the increased generality does not seem to lead to additional insights regarding the main questions of this paper, so we have chosen to use the simplified ``toy model'' in our exposition. The constant $c$ could in fact be normalized to $1$ by changing $Y^{i}$ and $r$, but we find it useful to represent the strength of interaction explicitly.} $c>0$,
$$
\alpha^i(t)=Y_t^i+c\rho_{n}^{-i}(t), \quad \mbox{where}\quad \rho_n^{-i}(t)=\frac{\#\{j\neq i:\, \tau^j\leq t\}}{n}
$$
is the fraction of other players\footnote{Once again, we have decided to exclude player~$i$ in order to focus on the game-theoretic aspect of multiplicity. If player~$i$ considers her own action, i.e., uses $\rho$ instead of $\rho^{-i}$, non-uniqueness can occur without other agents' involvement simply because of the direct feedback on the state process.} (from the perspective of $i$) that have already stopped, according to $(\tau^{j})_{j\neq i}$.
Specializing from the previous section, an $n$-player equilibrium boils down to the process $\rho_{n}(t)=\#\{j:\,\tau^j\leq t\}/n$ where $\tau^{j}$ are as in~\eqref{eq:optStopTime}. In particular, if $\rho_{n}$ is an equilibrium and $(t,\omega)$ is such that $\rho_n(t)(\omega)=k/n$, then as the stopping times satisfy $\tau^i=\inf\{t:\alpha^i(t)\geq r\}$, we must have\footnote{We will often abbreviate $\#\{i\in I:\, \dots\}$ to $\#\{\dots\}$ in what follows.}
\begin{equation}\label{eq:NplayerEquilibCond}
\#\{Y_t^i(\omega)+c\, \frac{k-1}{n}\geq r\}=k \quad\mbox{and}\quad
\#\{Y_t^i(\omega)+c\, \frac{k}{n}< r\}=n-k.
\end{equation}
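As a concrete (hypothetical) illustration of how~\eqref{eq:NplayerEquilibCond} produces multiple equilibria, take $n=2$ and suppose that at time $t$ both signals lie in the window $r-\frac{c}{2}\leq Y^{1}_{t},Y^{2}_{t}<r$. Then $k=0$ satisfies~\eqref{eq:NplayerEquilibCond}, as
$$
\#\{Y_t^i-c\,\tfrac{1}{2}\geq r\}=0 \quad\mbox{and}\quad \#\{Y_t^i< r\}=2,
$$
but so does $k=2$, as
$$
\#\{Y_t^i+c\,\tfrac{1}{2}\geq r\}=2 \quad\mbox{and}\quad \#\{Y_t^i+c< r\}=0.
$$
That is, neither player wants to stop alone, yet each finds it optimal to stop once the other does, so that both players staying and both players stopping are consistent with~\eqref{eq:NplayerEquilibCond} at time $t$.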
This condition is also sufficient, in the sense made precise in Remark~\ref{rk:NplayerEquilibCondSuff}.
Next, we sketch the structure of all equilibria $\rho_n(t)=\#\{i: \tau^i\leq t\}/n$ of this game by a recursive construction, starting with $K=\emptyset$.
\begin{enumerate}
\item[1.] Suppose that at a given stopping time $t_{0}$, a group $K\subsetneq I$ of agents has already stopped. Then every remaining agent $i\notin K$ examines her criterion
$$
\theta_{K}^{i} = \inf\{t:\, Y^{i}_{t} + c\, \frac{\#K}{n}\geq r\}.
$$
If $\theta_{K}^{i}\leq t_{0}$, then player $i$ must stop immediately. We add $i$ to the set $K$ and repeat Step 1 until no further players are forced to stop. (By the monotonicity in $\#K$, it does not matter in which order the agents are processed.)
\item[2.] Beyond individual players forced to stop, a group $J\subseteq K^{c}$ of agents may be able to ``coordinate'' and stop together.\footnote{While we are using suggestive language here, it should be noted that these are simply different configurations which may be equilibria. We are not trying to model a mechanism how players ``find'' an equilibrium.} Indeed, suppose that
$$
\theta_{K,J}^{i} = \inf\{t:\, Y^{i}_{t} + c\, \frac{\#K+\#J-1}{n}\geq r\}
$$
satisfies $\theta_{K,J}^{i}\leq t_{0}$ for all $i\in J$. Then it is optimal for all these agents to stop as a group, and they may or may not ``choose'' to do so. If they stop, we add $J$ to $K$ and repeat the procedure starting with Step 1.
\item[3.] After all remaining groups of agents have decided whether to stop at time $t_{0}$, we increment time until there exists a group or individual agent wanting to stop, and start again at Step 1.
\end{enumerate}
The multiplicity of equilibria of this game arises because of the choices taken by the groups $J$ in Step 2, as well as the order in which the groups are processed. Next, we describe two of these equilibria in detail.
The first one is the minimal equilibrium and corresponds to groups~$J$ in Step 2 always choosing not to stop. This is equivalent to all players remaining in the game until their own optimality criterion forces them to quit.
\begin{proposition}
\label{pr:CharacterizationOfNMinEquilibrium}
There exists an $n$-player equilibrium $\rho^m_n$ such that
\begin{equation}\label{eq:CharNMinEquilib}
\rho^m_n(t)=\frac{k}{n}\Longleftrightarrow\left\{
\begin{aligned}
&\#\{Y_t^i+c\, \frac{k}{n}\geq r\}=k \\
&\#\{Y_t^i+c\, \frac{k-l}{n}\geq r\}\geq k-l+1,\quad 1\leq l\leq k.
\end{aligned}
\right.
\end{equation}
This equilibrium is minimal; i.e., $\rho^m_n(t)\leq\rho_n(t)$ for any $n$-player equilibrium~$\rho_n$.
\end{proposition}
\begin{proof}
The construction is iterative. Given a set $K\subsetneq I$ corresponding to players who have already stopped, we can consider for all $i\notin K$ the stopping times
$$
\theta_{K}^{i} = \inf\{t:\, Y^{i}_{t} + c\, \frac{\#K}{n}\geq r\}
$$
with the corresponding order statistics $\theta_{K}^{(1)}\leq\theta_{K}^{(2)} \leq \dots$. We define $\theta_{K}=\theta_{K}^{(1)}$ and let $i_{K}\notin K$ denote an agent attaining this minimum.
We note that agent $i$ must stop at $\theta_{K}^{i}$, even if no further agents $j\notin K$ choose to stop, and that $i_{K}$ is the first of the agents $i\notin K$ subject to this event.
To define the equilibrium, start with $K_{0}=\emptyset$ and set $\tau^{i}=\theta_{K_{0}}\equiv \theta_{K_{0}}^{(1)}$ on $\{i=i_{K_{0}}\}$. Next, set $K_{1}=\{i_{K_{0}}\}$ and $\tau^{i}=\max\{\theta_{K_{1}},\theta_{K_{0}}\}$ on $\{i=i_{K_{1}}\}$, and continue inductively setting $K_{k}=K_{k-1}\cup \{i_{K_{k-1}}\}$ and $\tau^{i}=\max\{\theta_{K_{k}},\tau^{i_{K_{k-1}}}\}$ on $\{i=i_{K_{k}}\}$ for $k=2,\dots,n-1$. (The maximum needs to be taken since all the $\alpha^{j}$ are increased after player $i_{K_{k-1}}$ stops.)
Setting $\rho^m_n(t)=\#\{i: \tau^i\leq t\}/n$, we have by construction that $\rho^m_n$ is an equilibrium with corresponding optimal stopping times $(\tau^{i})$ and that~\eqref{eq:CharNMinEquilib} holds.
To see the minimality, let $\rho_{n}$ be any $n$-player equilibrium and consider $(t,\omega)$ such that $\rho_n(t)(\omega)=k/n$.
Let $k'$ be such that $\rho^m_n(t)(\omega)=k'/n$. If we had $k' > k$, then~\eqref{eq:CharNMinEquilib} would imply $\#\{Y_t^i(\omega)+c\, \frac{k}{n}\geq r\}\geq k+1$ and hence $\#\{Y_t^i(\omega)+c\, \frac{k}{n}<r\}\leq n-k-1$, a contradiction to~\eqref{eq:NplayerEquilibCond}. Thus, $k'\leq k$ and we have shown that $\rho^m_n\leq \rho_n$.
\end{proof}
\begin{remark}\label{rk:minimalFromGivenEquilibrium}
Let $\rho$ be an $n$-player equilibrium and $t_{0}$ a stopping time. There exists an equilibrium which is minimal among all $n$-player equilibria~$\varrho$ such that $\varrho=\rho$ on $[0,t_{0}]$. Indeed, it is obtained by agents stopping as in $\rho$ until $t_{0}$, whereas from $t_{0}$ onwards we apply the construction in the proof of Proposition~\ref{pr:CharacterizationOfNMinEquilibrium} starting with $K=\{i:\,\tau^{i}\leq t_{0}\}$. We call this $\varrho$ the \emph{minimal extension} of $\rho$ after~$t_{0}$.
\end{remark}
The second extremal equilibrium is maximal and corresponds to players coordinating their actions such as to stop as early as possible. As seen in the construction below, this is equivalent to all players constantly seeking (maximally large) groups of collaborators so that immediate simultaneous stopping is optimal for all agents in the group.
\begin{proposition}
\label{pr:CharacterizationOfNMaxEquilibrium}
There exists an $n$-player equilibrium $\rho^M_n$ such that
\begin{equation}\label{eq:CharNMaxEquilib}
\rho^M_n(t)=\frac{k}{n}\Longleftrightarrow\left\{
\begin{aligned}
&\#\{Y_t^i+c\, \frac{k-1}{n}\geq r\}=k \\
&\#\{Y_t^i+c\, \frac{k+l-1}{n}\geq r\}\leq k+l-1,\quad 1\leq l\leq n-k.
\end{aligned}
\right.
\end{equation}
This equilibrium is maximal; i.e., $\rho^M_n(t)\geq\rho_n(t)$ for any $n$-player equilibrium~$\rho_n$.
\end{proposition}
\begin{proof}
Given a set $K\subsetneq I$ of size $k=\#K$ corresponding to players who have already stopped, we can consider for $1\leq l \leq n-k$ the stopping times
$$
\theta_{K}^{l} = \inf\{t:\, \#\{i\notin K:\, Y_t^i+c\, \frac{k+l-1}{n}\geq r\}\geq l\};
$$
intuitively, this is the first time an additional group $J$ of $\#J=l$ agents can collectively stop.
If $\theta_{K}^{(1)}\leq\dots\leq \theta_{K}^{(n-k)}$ are the corresponding order statistics (ties are split by assigning the lower rank to the larger index $l$), let $l$ be the index attaining $\theta_{K}^{(1)}$ and let $J=J(K)$ be the set of the $l$ agents $i\notin K$ whose signals $\{Y^{i}_{\theta_{K}^{l}}\}_{i\in J}$ are the $l$ largest elements in $\{Y^{i}_{\theta_{K}^{l}}\}_{i\in K^{c}}$; we think of $J$ as the $l$ most pessimistic agents remaining at time $\theta_{K}^{l}$ and denote $\theta_{K}:=\theta_{K}^{l}$.
To define the equilibrium, start with $K_{0}=\emptyset$ and set $\tau^{i}=\theta_{\emptyset}$ for $i\in J(\emptyset)$. Next, set $K_{1}=J(\emptyset)$ and $\tau^{i}=\theta_{K_{1}}$ for $i\in J(K_{1})$, and continue inductively with $K_{2}=J(K_{1})\cup K_{1}$.
Setting $\rho^M_n(t)=\#\{i: \tau^i\leq t\}/n$, we have by construction that $\rho^M_n$ is an equilibrium with corresponding optimal stopping times $(\tau^{i})$ and that~\eqref{eq:CharNMaxEquilib} holds.
To see the maximality, let $\rho_{n}$ be any $n$-player equilibrium and consider $(t,\omega)$ such that $\rho_n(t)(\omega)=k/n$. Again, $\rho_{n}$ must satisfy~\eqref{eq:NplayerEquilibCond}.
Let $k'$ be such that $\rho^M_n(t)(\omega)=k'/n$. If we had $k' < k$, then~\eqref{eq:CharNMaxEquilib} would imply that $\#\{Y_t^i(\omega)+c\, \frac{k-1}{n}\geq r\}\leq k-1$, contradicting~\eqref{eq:NplayerEquilibCond}.
\end{proof}
The following observations will be used in Section~\ref{se:convergenceGeneral} when we construct $n$-player equilibria converging to a given mean field equilibrium.
\begin{remark}\label{rk:cutAndPaste}
(i) Consider $n$-player equilibria $\rho$ and $\rho'$, a stopping time $t_{0}$ and assume that $\rho(t_{0})\leq \rho'(t_{0})$. Then there exists an $n$-player equilibrium $\varrho$ such that
$$
\varrho\mathbf{1}_{[0,t_{0})}=\rho \mathbf{1}_{[0,t_{0})} \quad \mbox{and}\quad \varrho\mathbf{1}_{[t_{0},\infty)}=\rho'\mathbf{1}_{[t_{0},\infty)}.
$$
Indeed, let $I_{0}$ be the set of agents that have stopped by time $t_{0}$ in equilibrium~$\rho$ and let $I_{1}$ be the analogue for $\rho'$. By~\eqref{eq:optStopTime} we necessarily have $I_{0}\subseteq I_{1}$. The equilibrium $\varrho$ is obtained by following the stopping times of $\rho$ on $[0,t_{0})$. At $t_{0}$, all agents in the group $J=I_{1}\setminus I_{0}$ stop (and this must be optimal as $\rho'$ is an equilibrium). After that the remaining agents act as in~$\rho'$.
(ii) Extending the above, consider $n$-player equilibria $\rho$ and $\rho'$, stopping times $t_{0} \leq t_{1}$ and assume that $\rho(t_{0})\leq \rho'(t_{1})$. Then there exists an $n$-player equilibrium $\varrho$ such that
\begin{equation}\label{eq:cutPastProperty}
\varrho\mathbf{1}_{[0,t_{0})}=\rho \mathbf{1}_{[0,t_{0})} \quad \mbox{and}\quad \varrho\mathbf{1}_{[t_{1},\infty)}=\rho'\mathbf{1}_{[t_{1},\infty)}.
\end{equation}
Indeed, let $\rho_{1}$ be the minimal extension of $\rho$ after $t_{0}$ (cf.\ Remark~\ref{rk:minimalFromGivenEquilibrium}). Let $I_{0}$ be the set of agents that have stopped by time $t_{0}$ in equilibrium $\rho$ and let $I_{1}$ be the set of agents that have stopped by time $t_{1}$ in equilibrium $\rho'$. Again, we observe that $I_{0}\subseteq I_{1}$, due to~\eqref{eq:optStopTime} and the increase of $Y^{i}$. Moreover, $I_{1}$ must include all agents that stop in the construction of the minimal extension on $[t_{0},t_{1}]$. As a result, $\rho_{1}(t_{1})\leq \rho'(t_{1})$, and now the claim follows by applying~(i).
(iii) A last generalization is that when $\rho(t_{0})\leq \rho'(t_{1})$ merely holds on some set $A\in\mathcal{G}_{t_{1}}$, then we can still construct an $n$-player equilibrium $\varrho$ satisfying~\eqref{eq:cutPastProperty} on~$A$. Indeed, $\varrho$ is found as in (ii) except that on $A^{c}$, agents continue to stop according to $\rho_{1}$ after $t_{1}$.
\end{remark}
\begin{remark}\label{rk:NplayerEquilibCondSuff}
(i) The necessary condition~\eqref{eq:NplayerEquilibCond} is sufficient in the following sense. Fix $n$ and a stopping time $t_{0}$, and suppose there exists an $\mathcal{G}_{t_{0}}$-measurable random variable $k$ satisfying~\eqref{eq:NplayerEquilibCond} at $t_{0}$; i.e.,
\begin{equation*}
\#\{Y_{t_{0}}^i+c\, \frac{k-1}{n}\geq r\}=k \quad\mbox{and}\quad
\#\{Y_{t_{0}}^i+c\, \frac{k}{n}< r\}=n-k.
\end{equation*}
Then there exists an $n$-player equilibrium $\varrho$ such that $\varrho(t_{0})=k/n$.
To construct $\varrho$, let agents stop as in the minimal equilibrium $\rho^{m}_{n}$ up to time $t_{0}$. By the argument at the end of the proof of Proposition~\ref{pr:CharacterizationOfNMinEquilibrium}, we must have $\rho^{m}_{n}(t_{0})\leq k/n$. At $t_{0}$, all remaining agents $i$ with $Y^{i}_{t_{0}} + c\, \frac{k-1}{n}\geq r$ stop, so that $\varrho(t_{0})=k/n$. After that, the remaining agents follow the construction in the proof of Proposition~\ref{pr:CharacterizationOfNMinEquilibrium} starting with $K=\{i:\,\tau^{i}\leq t_{0}\}$.
(ii) A variant of this holds when~\eqref{eq:NplayerEquilibCond} is satisfied on some set $A\in \mathcal{G}_{t_{0}}$, with the conclusion that $\varrho(t_{0})=k/n$ holds only on $A$. Indeed, we construct $\varrho$ as above on $A$, whereas on $A^{c}$ we use $\rho^{m}_{n}$.
(iii) For later use, we observe that if this construction is applied for two times $t_{0}\leq t_{1}$ and corresponding random variables $k_{0}\leq k_{1}$, the resulting equilibria satisfy $\varrho_{0}\leq\varrho_{1}$.
\end{remark}
\section{The Mean Field Game}
\label{se:MFG}
The game considered in this section is the ``toy model'' mean field game of \cite[Section~4]{Nutz.16}. Indeed, $(I, \mathcal{I}, \lambda)$ is an atomless probability space and we
work on a so-called Fubini extension $(I\times\Omega,\Sigma,\mu)$ of the product $(I\times\Omega,\mathcal{I}\times\mathcal{G},\lambda\times P)$; see \cite[Section~3]{Nutz.16}. For each $i\in I$, let $Y^i_t\geq 0$ be a right-continuous, increasing, $\mathbb{G}$-progressively measurable process such that for each $t\geq 0$, $(i,\omega)\mapsto Y_t^i(\omega)$ is $\Sigma$-measurable and $Y_t^i,\ i\in I$ are $\lambda$-essentially pairwise i.i.d.; see also \cite[Definition 3.1]{Nutz.16}. Working on a Fubini extension ensures that such processes exist, as well as the validity of an Exact Law of Large Numbers. In all that follows, we assume that the c.d.f.\ $y\mapsto F_t(y)=P\{Y_t^i\leq y\}$ is continuous.
Since $\lambda$ is atomless, each individual agent has zero mass and hence does not influence the state process $\rho(t)=\lambda\{i: \tau^i\leq t\}$. In particular, we do not distinguish $\rho$ and $\rho^{-i}$ and simply set $\alpha^i(t)=Y_t^i+c\rho(t)$. We recall that $\rho$ is an equilibrium if $\rho(t) = \lambda\{i:\, \tau^{i}\leq t\}$ where $\tau^{i}$ is as in~\eqref{eq:optStopTime} for $\lambda$-a.e.\ $i\in I$. Such a process may be random (see also~\cite{Nutz.16}). However, as common in the mean field game literature, we pay special attention to equilibria which are deterministic due to the infinite number of players.\footnote{Note that the key message of this paper, namely that some mean field equilibria are not limits of $n$-player equilibria, is only amplified if more mean field equilibria are considered.} The following is an improved version of \cite[Proposition 4.1]{Nutz.16} with necessary and sufficient conditions.
\begin{proposition}\label{pr:MFGequilib}
A real function $\rho: \mathbb{R}_{+}\to[0,1]$ is a mean field game equilibrium if and only if it is increasing, right-continuous and
\begin{equation}\label{eq:master}
\rho(t)+F_t(r-c\rho(t))=1, \quad t\geq0.
\end{equation}
\end{proposition}
\begin{proof}
Suppose that $\rho$ is a mean field game equilibrium; then $\rho$ is clearly increasing. Since $Y_t^i,\ i\in I$ are $\lambda$-essentially pairwise i.i.d., the Exact Law of Large Numbers (e.g., \cite[Section~3]{Nutz.16}) states that
$\lambda\{i:Y^i_t\leq u\}=F_{t}(u)$ for all $u$. Using also~\eqref{eq:optStopTime} and that $y\mapsto F_{t}(y)$ is continuous, we have
\begin{equation}\label{eq:proofRhoRC}
\rho(t)=\lambda\{i:\tau^i\leq t\}=\lambda\{i:Y^i_t+c\rho(t+)\geq r\}=1-F_t(r-c\rho(t+)).
\end{equation}
Recall that $Y^{i}$ has right-continuous paths. Using again the continuity of $F_{t}$, this implies that
\begin{equation}\label{eq:jointRC}
\mbox{$(t,u)\mapsto F_t(r-cu)$ is jointly right-continuous.}
\end{equation}
It follows that $t\mapsto 1-F_t(r-c\rho(t+))$ is right-continuous, and thus the left-hand side of~\eqref{eq:proofRhoRC} must also be right-continuous. That is, $\rho(t)=\rho(t+)$, and then~\eqref{eq:proofRhoRC} becomes~\eqref{eq:master}.
Conversely, suppose that $\rho$ is a function with the stated properties. Defining the corresponding optimal stopping times $\tau^{i}$ as in~\eqref{eq:optStopTime}, the Exact Law of Large Numbers shows that
$$
\lambda\{i:\, \tau^{i}\leq t\}=\lambda\{i:\, Y^{i}_{t} + c\rho(t)\geq r\}
=1-F_{t}(r-c\rho(t))=\rho(t);
$$
that is, $\rho$ is an equilibrium.
\end{proof}
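For illustration, consider a simple specification which is not part of the model above but satisfies its assumptions: $Y^{i}_{t}=Y^{i}_{0}+t$ with $Y^{i}_{0}$ uniform on $[0,1]$, so that $F_{t}(y)=\min\{\max\{y-t,0\},1\}$ is continuous. Whenever $c<1$ and $r-c\rho(t)-t\in[0,1]$, equation~\eqref{eq:master} becomes
$$
\rho(t)+\big(r-c\rho(t)-t\big)=1; \quad\mbox{that is,}\quad \rho(t)=\frac{1-r+t}{1-c}.
$$
Here $u\mapsto u+F_{t}(r-cu)$ has slope $1-c>0$ on this region, which makes the solution increasing-transversal in the terminology introduced below; for $c>1$ the slope is negative there, and this is the mechanism behind decreasing-transversal and multiple solutions.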
The following notions will be crucial in determining the convergence to the mean field limit.
\begin{definition}\label{de:transversalDef}
Fix $t\geq0$. A solution $u\in[0,1]$ of $u+F_t(r-cu)=1$ is called \emph{left-increasing-transversal} (or left-transversal for short) if
\begin{equation}\label{eq:leftTrans}
\mbox{for all $\varepsilon>0$ there is $u' \in(u-\varepsilon,u)$ such that $u'+F_t(r-cu')<1$}
\end{equation}
and \emph{right-increasing-transversal} (or right-transversal) if
\begin{equation}\label{eq:rightTrans}
\mbox{for all $\varepsilon>0$ there is $u' \in(u,u+\varepsilon)$ such that $u'+F_t(r-cu')>1$.}
\end{equation}
It is called \emph{increasing-transversal} if both~\eqref{eq:leftTrans} and~\eqref{eq:rightTrans} hold, and \emph{decreasing-transversal} if these hold with the inequality signs reversed.
\end{definition}
For instance, in Figure~\ref{fig:Comparison of special solutions}, $u^{m}$ is left-increasing-transversal and $u^{mrt}, u^{M}$ are right-increasing-transversal, but only $u^{Mlt}$ is increasing-transversal. A decreasing-transversal solution is also depicted.
Next, we introduce a quartet of solutions that will be important in Section~\ref{se:convergenceExtremal}.
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}
\draw[line width=1pt,-latex] (-5,0) to (5,0) node[right=2pt] {$u$};
\fill (-4.2,0) circle (2pt) node[above=2pt] at (-4.2,0) {0};
\fill (4.2,0) circle (2pt) node[above=2pt] at (4.2,0) {1};
\draw[line width=1.5pt] (-5,-1) to[out=50,in=180] (-2.8,0) to[out=0,in=180] (-1.44,0) to[out=0,in=180] (-0.7,.4)
to[out=0,in=180] (0.7,-.5) to[out=0,in=180] (2.1,.4) to[out=0,in=180] (2.8,0) to[out=0,in=210] (5,1.2) node[above=2pt] {$u+F_t(r-cu)$};
\fill (-2.8,0) circle (2pt) node[above=2pt]{$u^{m}$};
\fill (-1.44,0) circle (2pt) node[below=2pt]{$u^{mrt}$};
\fill (1.44,0) circle (2pt) node[above=8pt,left]{$u^{Mlt}$};
\fill (2.89,0) circle (2pt) node[below=2pt]{$u^{M}$};
\end{tikzpicture}
\end{center}
\caption{Solutions $u^m$, $u^{mrt}$, $u^{Mlt}$ and $u^M$}
\label{fig:Comparison of special solutions}
\end{figure}
\begin{lemma}\label{le:solutionsEx}
Fix $t\geq0$. The equation $u+F_t(r-cu)=1$ has a minimal solution $u^{m}\in[0,1]$, a maximal solution $u^{M}\in[0,1]$, a minimal right-transversal solution $u^{mrt}\in[0,1]$, and a maximal left-transversal solution $u^{Mlt}\in[0,1]$.
\end{lemma}
\begin{proof}
Since $G(u):=u+F_t(r-cu)<1$ for $u<0$ and $G(u)>1$ for $u>1$, the existence of $u^{m}$ and $u^{M}$ is immediate from the continuity of $G$. The fact that $G(u)<1$ for all $u<u^{m}$ entails that $u^{m}$ is left-transversal, and since it follows directly from the definition that the set of left-transversal solutions is stable under increasing limits, it follows that $u^{Mlt}$ exists. The argument for $u^{mrt}$ is similar.
\end{proof}
As illustrated in Figure~\ref{fig:Comparison of special solutions}, these four solutions may be distinct, and while $u^{m}$ is automatically left-transversal, it can happen that $u^{mrt}$ is not. Similarly for $u^{M}$ and $u^{Mlt}$. We can also note that $u^{mrt}\leq u^{Mlt}$ may fail, say if the graph is replaced by a flat stretch on $[u^{m},u^{M}]$. But in more generic cases, and in particular whenever $u^{m}$ and $u^{M}$ are not local extrema, the quartet describes at most two distinct solutions $u^m=u^{mrt} \leq u^{Mlt}=u^M$ and these are then increasing-transversal.
In view of Lemma~\ref{le:solutionsEx} we may define, given $t\geq0$,
\begin{equation}\label{eq:partMFGequlibDefs}
\rho^{m}(t)=u^{m},\quad\rho^{M}(t)=u^{M},\quad\rho^{mrt}(t)=u^{mrt},\quad\rho^{Mlt}(t)=u^{Mlt}.
\end{equation}
Using the increase of $Y_{t}$ and~\eqref{eq:jointRC}, one can check that $\rho^{m},\rho^{M},\rho^{mrt},\rho^{Mlt}$ are increasing, $\rho^{M}$ and $\rho^{mrt}$ are right-continuous, and $\rho^{m}$ and $\rho^{Mlt}$ are left-continuous (but not continuous in general).
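For instance, the right-continuity of $\rho^{M}$ can be sketched as follows: if $t_{n}\downarrow t$, then $u_{n}:=\rho^{M}(t_{n})$ decreases to some limit $u^{*}\geq\rho^{M}(t)$, and by~\eqref{eq:jointRC},
$$
u^{*}+F_{t}(r-cu^{*})=\lim_{n\to\infty}\big(u_{n}+F_{t_{n}}(r-cu_{n})\big)=1.
$$
Thus $u^{*}$ is a solution of $u+F_{t}(r-cu)=1$, so that $u^{*}\leq u^{M}=\rho^{M}(t)$ by maximality and hence $u^{*}=\rho^{M}(t)$.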
\begin{corollary}\label{co:minMaxEquilib}
(i) If $\rho: \mathbb{R}_{+}\to[0,1]$ is any increasing function such that~\eqref{eq:master} holds, then $\rho(t+)$ is an equilibrium.
(ii) The functions $t\mapsto \rho^m(t+)$ and $t\mapsto\rho^M(t)$ are the minimal and maximal equilibria of the mean field game; i.e., they are equilibria and any other equilibrium $\rho$ satisfies $\rho^m(t+)\leq \rho(t)\leq \rho^M(t)$ for all $t\geq0$.
\end{corollary}
\begin{proof}
(i) If $\rho$ is any increasing function such that~\eqref{eq:master} holds, then the joint right-continuity in~\eqref{eq:jointRC} implies that $\rho(t+)+F_t(r-c\rho(t+))=1$ for all $t\geq0$. It now follows from Proposition~\ref{pr:MFGequilib} that $\rho(t+)$ is an equilibrium.
(ii) Both $\rho^m(t+)$ and $\rho^M(t)$ are equilibria by~(i). If $\rho$ is any equilibrium, then it is necessarily right-continuous by Proposition~\ref{pr:MFGequilib} and thus $\rho^m\leq \rho\leq \rho^M$ implies $\rho^m(t+)\leq \rho(t)\leq \rho^M(t)$ for all $t\geq0$.
\end{proof}
\section{Convergence to Extremal Equilibria}
\label{se:convergenceExtremal}
The main goal of the last two sections is to understand which mean field equilibria are limits of $n$-player equilibria. In brief, we will see that mean field equilibria described by increasing-transversal solutions of~\eqref{eq:master} (on a sufficiently large set of times $t$) are such limits, whereas other equilibria need not be proper limits of $n$-player equilibria; they merely occur as parts of mixtures which are limits.
In this section, we focus on the convergence to the minimal and maximal mean field equilibria; the less straightforward interior case is treated in the next section. As a first step, we relate limits of arbitrary $n$-player equilibria to mean field equilibria at a fixed time. We will see in Example~\ref{ex:twoType} that such limits need not be deterministic mean field equilibria as defined in the preceding section, hence the following result relates limits to mixtures of equilibria. This is in line with the results of~\cite{CarmonaDelarueLacker.17,Lacker.14} stating that $n$-player equilibria converge to ``weak'' equilibria of the mean field game, while also illustrating that randomization can indeed occur in a quite natural example.
Given a closed set $A\subseteq\mathbb{R}$, we say that a sequence $(\xi_n)$ of random variables is \emph{asymptotically concentrated} on $A$ if $\lim_{n\to \infty}P(\xi_n\in A_{\varepsilon})=1$ for all $\varepsilon>0$, where $A_{\varepsilon}=\{x\in\mathbb{R}:\, d(x,A)<\varepsilon\}$ is the open $\varepsilon$-neighborhood of $A$. When $(\xi_n)$ is uniformly bounded, as will be the case below, this is equivalent to any weak cluster point of $(\xi_n)$ being concentrated on $A$. Moreover, for $t\geq0$, we denote the solutions of~\eqref{eq:master} by
$$
\mathcal{U}(t)=\{u\in[0,1]: u+F_{t}(r-cu)=1\}.
$$
\begin{proposition}
\label{pr:convergenceGeneralEquilib}
Fix $t\geq 0$ and let $(\rho_{n})_{n\geq1}$ be a sequence of $n$-player equilibria. Then $\rho_n(t)$ is asymptotically concentrated on $\mathcal{U}(t)$.
\end{proposition}
\begin{proof}
We first show that for any interval $[u_{0},u_{1}]\subseteq [0,1]$ such that $u\mapsto u+F_t(r-cu)$ is strictly smaller than $1$ on $[u_{0},u_{1}]$,
\begin{equation}\label{eq:emptyIntSupport}
P(u_{0} + \varepsilon' \leq \rho_n(t)\leq u_{1}-\varepsilon')\to 0 \quad\mbox{for all}\quad \varepsilon'>0.
\end{equation}
Indeed, let $u_{0}< u_{1}$ be as above. By increasing the value of $u_{1}$ if necessary, we may assume without loss of generality that $u\mapsto u+F_t(r-cu)$ attains its maximum over $[u_{0},u_{1}]$ at $u_{1}$. Given $0<\varepsilon<u_{1}-u_{0}$, we can then choose by continuity some $u \in (u_{1}-\varepsilon,u_{1})$ such that
\begin{equation}\label{eq:leftGlobalMax}
u'+F_t(r-cu')\leq u+F_t(r-cu)<1 \quad\mbox{for all}\quad u_{0}\leq u'\leq u.
\end{equation}
Furthermore, setting
$$
\varepsilon_n(x)=\frac{\#\{Y_t^i+cx\geq r\}}{n}-(1-F_t(r-cx)),\quad x\in\mathbb{R}
$$
and $\varepsilon_n=\sup_{x\in \mathbb{R}}\{|\varepsilon_n(x)|\}$, we have $\varepsilon_n\to 0$ a.s.\ by the uniform convergence in the Glivenko--Cantelli theorem.
Let $X_i=\mathbf{1}_{\{Y_t^i+cu\geq r\}}$, then
\begin{equation}\label{eq:epsNdef}
\frac{X_1+\cdots+X_n}{n}=1-F_t(r-cu)+\varepsilon_n(u).
\end{equation}
Denote by $[x]$ the largest integer $k\leq x$. For any $[u_{0} n]+1 \leq l \leq [u n]$, let $Z_i^l=\mathbf{1}_{\{Y_t^i+c\, \frac{l}{n}\geq r\}}$, then similarly
$$
\frac{Z_1^l+\cdots+Z_n^l}{n}=1-F_t(r-c\, \tfrac{l}{n})+\varepsilon_n(\tfrac{l}{n}).
$$
On the event $\{Z_1^l+\cdots+Z_n^l =l\}$, we then have
$$1+\varepsilon_n(\tfrac{l}{n})=\frac{l}{n}+F_t(r-c\, \tfrac{l}{n})\leq u+F_t(r-cu)$$
by~\eqref{eq:leftGlobalMax} and thus
$$
\frac{X_1+\cdots+X_n}{n}=1-F_t(r-cu)+\varepsilon_n(u)\leq u-\varepsilon_n(\tfrac{l}{n})+\varepsilon_n(u)\leq u+2\varepsilon_n.
$$
Combining this observation with~\eqref{eq:NplayerEquilibCond}, we have for all $[u_{0} n]+1 \leq l \leq [u n]$ that
\begin{align*}
\Big\{\rho_n(t)=\frac{l}{n}\Big\}&\subseteq \Big\{\#\big\{Y_t^i+c\, \tfrac{l}{n}\geq r\big\}=l\Big\}\\
&=\Big\{\frac{Z_1^l+\cdots+Z_n^l}{n}=\frac{l}{n}\Big\}\\
&\subseteq \Big\{\frac{X_1+\cdots+X_n}{n}\leq u+2\varepsilon_n\Big\}.
\end{align*}
Hence,
$$\Big\{ \frac{[u_{0} n]+1}{n} \leq \rho_n(t)\leq \frac{[u n]}{n}\Big\}\subseteq \Big\{\frac{X_1+\cdots+X_n}{n}\leq u+2\varepsilon_n\Big\}$$
and thus
\begin{align*}
P\Big(\frac{[u_{0} n]+1}{n} \leq \rho_n(t)\leq \frac{[u n]}{n}\Big)
&\leq P\Big(\frac{X_1+\cdots+X_n}{n}\leq u+2\varepsilon_n\Big)\\
&= P\Big(u+F_t(r-cu) \geq 1-2\varepsilon_n+\varepsilon_n(u)\Big)\to0
\end{align*}
by~\eqref{eq:epsNdef} and~\eqref{eq:leftGlobalMax}.
Since $\varepsilon>0$ was arbitrary, this shows~\eqref{eq:emptyIntSupport}.
In a symmetric way, one can show the analogue of~\eqref{eq:emptyIntSupport} for intervals where $u\mapsto u+F_t(r-cu)$ is strictly larger than $1$. Since for any $\varepsilon>0$ the complement of $\mathcal{U}(t)_{\varepsilon}$ consists of finitely many intervals of one of these two types, the claim follows.
\end{proof}
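The quantity $\varepsilon_n$ in the preceding proof is the Kolmogorov--Smirnov distance between the empirical distribution of the signals and $F_t$. As a quick numerical sanity check (our own illustration; the uniform law is assumed only so that the true c.d.f.\ is explicit), the uniform error indeed decays with the sample size:

```python
import random

random.seed(0)

def ks_distance_uniform(n):
    """sup_x |F_n(x) - x| for an i.i.d. Uniform(0,1) sample of size n.
    For the sorted sample y_(1) <= ... <= y_(n), the supremum equals
    max_i max(i/n - y_(i), y_(i) - (i-1)/n)."""
    ys = sorted(random.random() for _ in range(n))
    return max(max((i + 1) / n - y, y - i / n) for i, y in enumerate(ys))

eps_small = ks_distance_uniform(200)     # typically around 0.06
eps_large = ks_distance_uniform(20000)   # typically around 0.006
```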
Next, we narrow down the asymptotic support for the minimal and maximal $n$-player equilibria $\rho^m_n$ and $\rho^M_n$. We will see in Section~\ref{se:negativeResultsExtremal} that the following result is optimal and the limiting support is not a singleton in general. We recall the notation introduced in~\eqref{eq:partMFGequlibDefs}.
\begin{lemma}
\label{le:convergenceMinMaxEquilib}
Fix $t\geq 0$.
\begin{enumerate}
\item
The minimal $n$-player equilibrium $\rho^m_n(t)$ is asymptotically concentrated on $[\rho^m(t),\rho^{mrt}(t)]\cap \mathcal{U}(t)$.
\item
The maximal $n$-player equilibrium $\rho^M_n(t)$ is asymptotically concentrated on $[\rho^{Mlt}(t),\rho^M(t)]\cap \mathcal{U}(t)$.
\end{enumerate}
\end{lemma}
\begin{proof}
(i) In view of Proposition~\ref{pr:convergenceGeneralEquilib} and the definition of $\rho^{m}(t)$, it suffices to show that
\begin{equation}\label{eq:proofMinMaxConv}
P(\rho^m_n(t)\geq \rho^{mrt}(t)+\varepsilon')\to 0 \quad\mbox{for all}\quad \varepsilon'>0.
\end{equation}
Let $\varepsilon>0$. As $\rho^{mrt}(t)$ is right-transversal we can find $u \in (\rho^{mrt}(t),\rho^{mrt}(t)+\varepsilon)$ such that $1-F_t(r-cu)<u$. For $n$ large enough, we then have $\rho^{mrt}(t)< [u n]/n\leq u$. Let $X_i=\mathbf{1}_{\{Y_t^i+cu\geq r\}}$, then
$$
\frac{X_1+\cdots+X_n}{n}\to EX_i=1-F_t(r-cu)\quad\mbox{a.s.}
$$
by the Law of Large Numbers. Hence,
$$
\frac{X_1+\cdots+X_n}{n}-\frac{[u n]}{n}\to 1-F_t(r-cu)-u<0\quad\mbox{a.s.}
$$
Using also~\eqref{eq:CharNMinEquilib}, we conclude that
\begin{align*}
P(\rho^m_n(t)\geq u)&\leq P\Big(\rho^m_n(t)\geq \frac{[u n]}{n}\Big)\\
&\leq P\Big(\#\big\{Y_t^i+c\, \frac{[u n]}{n}\geq r\big\}\geq [u n]\Big)\\
&\leq P\Big(\frac{\#\{Y_t^i+cu\geq r\}}{n}\geq \frac{[u n]}{n}\Big)\\
&=P\Big(\frac{X_1+\cdots+X_n}{n}-\frac{[u n]}{n}\geq 0\Big)\to 0.
\end{align*}
As $\varepsilon>0$ was arbitrary, the above implies~\eqref{eq:proofMinMaxConv}.
(ii) The arguments are similar to~(i) and therefore omitted.
\end{proof}
Next, we introduce an appropriate notion of convergence for dynamic equilibria as required for our main results. Note that given an increasing function, its right- and left-continuous versions (and all functions between these) differ only in the allocation of the function value at the (countably many) jumps.
The right-continuity of mean field equilibria, cf.\ Proposition~\ref{pr:MFGequilib}, reflects that agents stopping at time $t$ are counted as having left the game at time $t$, whereas left-continuity would correspond to counting them as leaving immediately after~$t$. Since this difference is not fundamental, it seems reasonable to consider limits ``up to taking right-continuous versions.'' Similar considerations have led to notions of so-called Fatou convergence in other areas of stochastic analysis, e.g.~\cite{Kramkov.96, Zitkovic.02}.
For increasing functions $\varphi_{n},\varphi$ on $\mathbb{R}_{+}$, we have that $(\liminf_{n}\varphi_{n})(t+)=(\limsup_{n}\varphi_{n})(t+) = \varphi(t+)$ holds for all $t\in\mathbb{R}_{+}$ if and only if $\lim \varphi_{n}(t)=\varphi(t)$ for all $t$ in a dense subset $D\subseteq\mathbb{R}_{+}$. This motivates the following.
\begin{definition}\label{de:Fatou}
A sequence $(\rho_{n})_{n\geq1}$ of $n$-player equilibria \emph{Fatou converges in probability} to a mean field equilibrium $\rho$ if there exists a dense set $D\subseteq\mathbb{R}_{+}$ such that $\rho_{n}(t)\to\rho(t)$ in probability for all $t\in D$.
\end{definition}
We note that by a diagonalization procedure, Fatou convergence in probability implies Fatou convergence a.s.\ along a subsequence $(n_{k})$, where the a.s.\ convergence is defined by direct analogy to the above. In particular, it then follows that the right-continuous versions of $\liminf_{k}\rho_{n_{k}}$ and $\limsup_{k}\rho_{n_{k}}$ coincide with $\rho$ a.s.
With these notions in place, we can establish the convergence of extremal equilibria in the increasing-transversal case. (Note that the extremal equilibria cannot be decreasing-transversal; they are either increasing-transversal or tangential.)
\begin{theorem}\label{th:convUnderH}
Suppose that for all $t$ in a dense subset $D\subseteq \mathbb{R}_{+}$, the minimal solution $u\in[0,1]$ of $u+F_t(r-cu)=1$ is increasing-transversal.
Then the minimal $n$-player equilibria $\rho^{m}_{n}$ Fatou converge in probability to the minimal mean field equilibrium as $n\to\infty$.
The analogous assertion holds for the maximal equilibria $\rho^{M}_{n}$.
\end{theorem}
\begin{proof}
By the hypothesis, $\rho^m(t)=\rho^{mrt}(t)$ for $t\in D$. Thus, Lemma~\ref{le:convergenceMinMaxEquilib} implies that $\lim \rho^{m}_{n}(t)=\rho^m(t)=\rho^{mrt}(t)$ in probability for $t\in D$. The analogue holds for $\rho^{M}_{n}$.
\end{proof}
Next, we discuss the transversality condition in more detail. In fact, if uniqueness holds for the mean field game, the condition is automatically satisfied and we conclude the following.
\begin{corollary}\label{co:uniqueness}
The following are equivalent:
\begin{enumerate}
\item the mean field game has a unique equilibrium $\rho$,
\item the equation $u+F_t(r-cu)=1$, $u\in [0,1]$ has a unique solution for a dense set of $t\in\mathbb{R}_{+}$.
\end{enumerate}
In that case, any sequence $(\rho_{n})_{n\geq1}$ of $n$-player equilibria Fatou converges in probability to $\rho$.
\end{corollary}
\begin{proof}
If (i) holds, then $\rho^{m}(t+)=\rho^{M}(t)$ for all $t\geq0$ by Corollary~\ref{co:minMaxEquilib}, and~(ii) follows since $\rho^{m}(t+)=\rho^{m}(t)$ except at the (countably many) jumps of $\rho^{m}$. The converse holds because equilibria are right-continuous; cf.\ Proposition~\ref{pr:MFGequilib}. Finally, if $u+F_t(r-cu)=1$ has a unique solution, this solution is necessarily increasing-transversal since $u+F_t(r-cu)<1$ for $u<0$ and $u+F_t(r-cu)>1$ for $u>1$.
\end{proof}
While we will see below that the transversality condition in Theorem~\ref{th:convUnderH} cannot be dropped, we can argue that this condition holds for a generic choice of signals $Y^{i}$. More generally, we discuss the following hypothesis (again, note that the extremal solutions can never be decreasing-transversal).
\begin{definition}\label{de:H}
We say that Hypothesis~(H) holds if for all $t$ in a dense subset of $\mathbb{R}_{+}$, any solution $u\in[0,1]$ of $u+F_t(r-cu)=1$ is increasing-transversal or decreasing-transversal.
\end{definition}
While this hypothesis does not hold for all choices of $Y^{i}$, the exceptional set is small in the sense that a ``typical'' $F_{t}$ will not have a local extremum of $u\mapsto u+F_t(r-cu)$ at a solution of $u+F_t(r-cu)=1$, so that the latter must be transversal. As $t$ varies over $\mathbb{R}_{+}$, the non-transversal case is somewhat more likely to occur, but typically at only finitely many $t$ so that the hypothesis still holds. There seems to be no obvious way to quantify this. However, we state the following result which confirms the general intuition and shows that Hypothesis~(H) is always valid after a small perturbation of~$Y^{i}$.
\begin{proposition}
\label{prop:generic}
For every $\delta>0$ there exists $0\leq \varepsilon\leq\delta$ such that after replacing $Y^{i}_{t}$ with $Y^{i}_t+\varepsilon$, Hypothesis~(H) is satisfied.
\end{proposition}
\begin{proof}
Let us first observe that for any real function $f(x)$, the set of local minimum values $S=\{f(x):\, x$ is a local minimum of $f\}$ is countable. Indeed, for every $s\in S$ there is an open interval $I_s$ with rational endpoints such that $s=\min\{f(x):x\in I_s\}$. If $s,t\in S$ and $I_s=I_t$, then $s=t$, showing that $I: S\to \mathbb{Q}\times\mathbb{Q}$ is injective.
For fixed $t\geq 0$, denote by $S(t)$ the set of all local minimum and maximum values of $u\mapsto u+F_t(r-cu)-1$; then $\cup_{t\in \mathbb{Q}_{+}}S(t)$ is again countable. Thus, we can find a sequence $a_{k}\downarrow 0$ with $a_k \notin \cup_{t\in \mathbb{Q}_{+}}S(t)$. Set $\varepsilon_k=ca_k$. Then, passing from $Y_{t}$ to $Y_t^{\varepsilon_k}=Y_t+\varepsilon_k$, the function under consideration is
$$
u\mapsto u+F_t^{\varepsilon_k}(r-cu)=u+F_t(r-cu-\varepsilon_k)=(u+a_k)+F_t(r-c(u+a_k))-a_k.
$$
By the construction of $a_k$, we know that 1 is not a local extremum value of this function. However, if a solution of $u+F_t^{\varepsilon_k}(r-cu)=1$ failed to be transversal, then 1 would be the value at a local extremum.
\end{proof}
\subsection{Counterexamples}
\label{se:negativeResultsExtremal}
In this section, we illustrate that the assertion of Theorem~\ref{th:convUnderH} may fail without the transversality condition, and more generally that the intervals in Lemma~\ref{le:convergenceMinMaxEquilib} cannot be improved. The examples presented here are essentially static, meaning that $Y^{i}_{t}$ does not depend on $t$. For purely technical reasons, namely to ensure the finiteness of the optimal stopping times~\eqref{eq:optStopTime} as assumed throughout, we introduce a time horizon $T\in(0,\infty)$ at which $Y^{i}_{t}$ jumps to a value larger than $r$, thus ensuring that all players stop.
In the first example, we allow for atoms in the distribution of $Y^{i}_{t}$ to obtain an analytically tractable example. We argue below that the atoms are not essential to the observed phenomenon.
\begin{example}
\label{ex:twoType}
Let $r=c=1$ and let $Y^i_{t}=Y^i_{0}$, $0\leq t< T$ be constant i.i.d.\ processes such that $\Law(Y^i_t)=\frac{1}{2}\delta_{\frac{1}{2}}+\frac{1}{2}\delta_2$ for all $0\leq t<T$, and set $Y^{i}_{t}=2$ for $t\geq T$. Then the law of the minimal $n$-player equilibrium $\rho^m_n(t)$ converges to $\frac{1}{2}\delta_{\frac{1}{2}}+\frac{1}{2}\delta_{1}$ for all $0\leq t <T$.
\end{example}
\begin{proof}
Proposition~\ref{pr:CharacterizationOfNMinEquilibrium} yields two cases for every $\omega$. If strictly less than $n/2$ of the realizations $\{Y_0^i(\omega),\, i=1,\dots, n\}$ equal 2, all players~$i$ with $Y_0^i(\omega)=2$ stop at $t=0$ and those with $Y_0^i(\omega)=1/2$ do not stop before~$T$. If instead $n/2$ or more of the realizations equal 2, then all agents stop at $t=0$. It follows that the law of $\rho^m_n(t)\equiv\rho^m_n(0)$ converges to $\frac{1}{2}\delta_{\frac{1}{2}}+\frac{1}{2}\delta_{1}$ as $n\to\infty$.
\end{proof}
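The case distinction in the proof is easy to check by simulation. The following sketch (our own illustration; the sample sizes are arbitrary) draws the minimal equilibrium $\rho^m_n(0)$ repeatedly and recovers the limit law $\frac{1}{2}\delta_{\frac{1}{2}}+\frac{1}{2}\delta_{1}$:

```python
import random

random.seed(0)

def minimal_equilibrium(n):
    """rho^m_n(0) in Example ex:twoType: with m = #{Y_0^i = 2}, only the
    'high' players stop if m < n/2, whereas everyone stops if m >= n/2."""
    m = sum(random.random() < 0.5 for _ in range(n))
    return 1.0 if 2 * m >= n else m / n

n, samples = 1001, 1000   # odd n, so that P(2m >= n) = 1/2 exactly
vals = [minimal_equilibrium(n) for _ in range(samples)]
frac_one = sum(v == 1.0 for v in vals) / samples
frac_half = sum(abs(v - 0.5) < 0.05 for v in vals) / samples
```

About half of the samples give $\rho^m_n(0)=1$ and the rest cluster tightly around $1/2$, matching the limit law.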
The limit law $\frac{1}{2}\delta_{\frac{1}{2}}+\frac{1}{2}\delta_{1}$ can be seen as a mixture of the deterministic mean field equilibria $\rho^{m}(t)\equiv \frac{1}{2}$ and $\rho^{mrt}(t)\equiv 1$. In fact, with an appropriate definition allowing for randomized equilibria, this mixture is itself an equilibrium. However, a remarkable conclusion is that there are no $n$-player equilibria converging to the minimal equilibrium $\rho^{m}$.
\begin{corollary}\label{co:twoTypeMinNotLimit}
In the context of Example~\ref{ex:twoType}, $\rho^{m}(t)$ is not a weak accumulation point of $n$-player equilibria, for any $0\leq t<T$.
\end{corollary}
\begin{proof}
Suppose that there exists a subsequence $\rho_{k}=\rho_{n_{k}}$ of $n_{k}$-player equilibria such that $\rho_{k}(t)\to\rho(t)=1/2$ weakly. Then $\rho_{k}(t)\geq\rho_{k}^{m}(t)$ and $\Law(\rho_{k}^{m}(t))\to \frac{1}{2}\delta_{\frac{1}{2}}+\frac{1}{2}\delta_{1}$ yield a contradiction.
\end{proof}
It may be useful to contrast this with the fact that $\rho^{m}$ is a limit of \emph{approximate} Nash equilibria. To wit, if all players~$i$ with $Y_0^i(\omega)=2$ stop at $t=0$ whereas those with $Y_0^i(\omega)=1/2$ do not stop until $T$, we obtain an approximate Nash equilibrium converging to $\rho^{m}$ as $n\to\infty$.
The following example is a smooth version of Example~\ref{ex:twoType} where $Y^{i}_{t}$ admits a density; see also Figure~\ref{fig:cdf of different density functions}(b). It is not analytically tractable but the qualitative behavior is the same.
\begin{figure}
\centering
\subfigure[$\frac{1}{2}\delta_{\frac{1}{2}}+\frac{1}{2}\delta_2$]{\label{fig:fft:a}
\begin{minipage}[c]{0.3\textwidth}
\centering
\begin{tikzpicture}
\draw[-latex] (0,0) -- (3,0);
\draw[-latex] (0,0) -- (0,3);
\node[below,left] at (0,0) {0};
\draw[dashed] (0,2.5) -- (2.5,0);
\draw[very thick] (0,1.25) -- (1.25,1.25) node[minimum width=2.5pt,inner sep=0,draw, fill,circle]{};
\draw[dotted] (1.25,1.25) -- (1.25,0);
\draw[very thick] (1.25,0) node[minimum width=3pt,inner sep=0,draw, fill=white,circle]{} -- (2.5,0) node[minimum width=2.5pt,inner sep=0,draw, fill,circle]{};
\end{tikzpicture}
\end{minipage}
}
\subfigure[$4\, \mathbf{1}_{[\frac{3}{8},\frac{1}{2}]}(y)+\mathbf{1}_{[\frac{3}{2},2]}(y)$]{\label{fig:fft:b}
\begin{minipage}[c]{0.3\textwidth}
\centering
\begin{tikzpicture}
\draw[-latex] (0,0) -- (3,0);
\draw[-latex] (0,0) -- (0,3);
\node[below,left] at (0,0) {0};
\draw[dashed] (0,2.5) -- (2.5,0);
\draw[dotted] (1.25,1.25) -- (1.25,0);
\draw[very thick] (0,1.25) -- (1.25,1.25) node[minimum width=2.5pt,inner sep=0,draw, fill,circle]{};
\draw[very thick] (1.25,1.25) -- (1.5625,0) -- (2.5,0) node[minimum width=2.5pt,inner sep=0,draw, fill,circle]{};
\end{tikzpicture}
\end{minipage}
}
\subfigure[$\mathbf{1}_{[0,\frac{1}{2}]}(y)+\mathbf{1}_{[\frac{3}{2},2]}(y)$]{\label{fig:fft:c}
\begin{minipage}[c]{0.3\textwidth}
\centering
\begin{tikzpicture}
\draw[-latex] (0,0) -- (3,0);
\draw[-latex] (0,0) -- (0,3);
\node[below,left] at (0,0) {0};
\draw[dashed] (0,2.5) -- (2.5,0);
\draw[dotted] (1.25,1.25) -- (1.25,0);
\draw[very thick] (0,1.25) -- (1.25,1.25) node[minimum width=2.5pt,inner sep=0,draw, fill,circle]{};
\draw[very thick] (1.25,1.25) -- (2.5,0) node[minimum width=2.5pt,inner sep=0,draw, fill,circle]{};
\end{tikzpicture}
\end{minipage}
}
\caption{Graphs of $F_t(1-u)$ (solid) and $1-u$ (dashed)}
\label{fig:cdf of different density functions}
\end{figure}
\begin{example}
\label{ex:twoTypeDensity}
Let $r=c=1$ and let $Y^i_{t}=Y^{i}_{0}$, $0\leq t< T$ be i.i.d.\ processes such that the law of $Y^i_t$ has the density $f_{t}(y)=4\, \mathbf{1}_{[\frac{3}{8},\frac{1}{2}]}(y)+\mathbf{1}_{[\frac{3}{2},2]}(y)$ for all $0\leq t<T$, and let $Y^i_{t}=2+X^{i},$ $t\geq T$, where $X^{i}$ are i.i.d.\ with a continuous distribution on $[0,1]$. Then the simulation of $\rho^{m}_{n}(t)$, cf.\ Figure~\ref{fig:fft:I}, shows that the law of $\rho^m_n(t)$ again converges to $\frac{1}{2}\delta_{\frac{1}{2}}+\frac{1}{2}\delta_{1}$ for $0\leq t<T$, which is a mixture of the deterministic mean field equilibria $\rho^{m}(t)\equiv \frac{1}{2}$ and $\rho^{mrt}(t)\equiv 1$.
\end{example}
In the third example, the mean field game admits a continuum of solutions; see also Figure~\ref{fig:cdf of different density functions}(c).
\begin{example}
\label{ex:twoTypeDensityInterpol}
Consider the setting of Example~\ref{ex:twoTypeDensity} with density $f_{t}(y)=\mathbf{1}_{[0,\frac{1}{2}]}(y)+\mathbf{1}_{[\frac{3}{2},2]}(y)$. In this case, we again have $\rho^{m}(t)\equiv \frac{1}{2}$ and $\rho^{mrt}(t)\equiv 1$, but now all values in between also correspond to mean field equilibria. The simulation of $\rho^{m}_{n}(t)$, cf.~Figure \ref{fig:fft:II}, illustrates that the law of $\rho^m_n(t)$ converges to a mixture of all these equilibria.
\end{example}
\begin{figure}[thb]
\centering
\subfigure[density $4\, \mathbf{1}_{[\frac{3}{8},\frac{1}{2}]}(y)+\mathbf{1}_{[\frac{3}{2},2]}(y)$]{\label{fig:fft:I}
\begin{minipage}[c]{0.4\textwidth}
\centering
\includegraphics[width=2.2in,height=1.7in]{simulation1.jpg}
\end{minipage}
}
\subfigure[density $\mathbf{1}_{[0,\frac{1}{2}]}(y)+\mathbf{1}_{[\frac{3}{2},2]}(y)$]{\label{fig:fft:II}
\begin{minipage}[c]{0.5\textwidth}
\centering
\includegraphics[width=2.2in,height=1.7in]{simulation2.jpg}
\end{minipage}
}
\caption{Simulations for $n$-player minimal equilibria ($n=10'000$). Locations $k/n$ of equilibria with $k$ stopped players on the $x$-axis, number of samples with that equilibrium on the $y$-axis.}
\label{fig:simulations for $n$-player game minimal equilibrium}
\end{figure}
When the minimal mean field equilibrium is not increasing-transversal, the preceding examples illustrate that it need not be the limit of the minimal $n$-player equilibria. The final example shows that both cases are possible: it may be the limit even if it is not increasing-transversal.
\begin{example}
\label{ex:twoTypeDensityGood}
Consider the setting of Example~\ref{ex:twoTypeDensity} with density $f_{t}(y)=2\,\mathbf{1}_{[1/2,1]}(y)$. In this case, we easily compute that $\rho^{m}(t)\equiv 0$ and $\rho^{mrt}(t)\equiv 1$. Nevertheless, $\rho^{m}_{n}(t)\equiv0$ due to $Y^{i}_{t}<r$ a.s., and thus $\rho^{m}_{n}(t)\to\rho^{m}(t)$.
\end{example}
\section{Convergence to General Equilibria}\label{se:convergenceGeneral}
Theorem~\ref{th:convUnderH} shows that if the minimal and maximal mean field equilibria are increasing-transversal (on a dense set), then they are the limits of the minimal and maximal $n$-player equilibria. Indeed, the latter are obvious candidates for sequences converging to these mean field equilibria. For mean field equilibria that are not extremal, there are no obvious candidates for the approximating $n$-player equilibria. The following result shows that increasing-transversal equilibria are still limits; however, the approximating $n$-player equilibria have no simple description. We will see in Section~\ref{se:decrasingTransversal} that the analogue for decreasing-transversal solutions fails.
\subsection{Increasing-Transversal Equilibria}
\begin{theorem}\label{th:convToIncreasing}
Let $\rho$ be a mean field equilibrium. Suppose that for all $t$ in a dense subset $D\subseteq \mathbb{R}_{+}$, the solution $u:=\rho(t)$ of $u+F_t(r-cu)=1$ is increasing-transversal. Then there exist $n$-player equilibria $(\rho_{n})_{n\geq1}$ which Fatou converge in probability to $\rho$ as $n\to\infty$.
\end{theorem}
The first step of the proof is to solve a static version of the problem. This will be accomplished by a fixed point argument for monotone functions.
\begin{lemma}\label{le:convToIncreasingStatic}
Let $t\geq0$, let $u\in[0,1]$ be an increasing-transversal solution of $u+F_t(r-cu)=1$ and let $\varepsilon,\delta>0$. There are $n_{0}\in\mathbb{N}$ and $A\in\mathcal{G}_{t}$ with $P(A)>1-\varepsilon$ such that for all $n\geq n_{0}$ and $\omega\in A$, there exists $k(\omega)\in\mathbb{N}$ such that $|u-k(\omega)/n|\leq \delta$ and~\eqref{eq:NplayerEquilibCond} holds; i.e.,
\begin{equation*}
\#\{Y_t^i(\omega)+c\, \frac{k(\omega)-1}{n}\geq r\}=k(\omega) \quad\!\mbox{and}\!\quad
\#\{Y_t^i(\omega)+c\, \frac{k(\omega)}{n}< r\}=n-k(\omega).
\end{equation*}
Moreover, $k(\omega)$ can be chosen as a measurable function of $Y^{1}_{t}(\omega),\dots, Y^{n}_{t}(\omega)$.
\end{lemma}
\begin{proof}
Since $u$ is increasing-transversal, there are points $u_{0},u_{1}\in\mathbb{R}$ such that $u-\delta/2 \leq u_{0} < u < u_{1} \leq u+ \delta/2$ and
$$
u_{0}< 1 - F_{t}(r-cu_{0}) \leq 1 - F_{t}(r-cu_{1}) < u_{1},
$$
where the inequality in the middle is due to the monotonicity of $F_{t}$.
The Glivenko--Cantelli theorem then implies that the event $A_{n}$ consisting of all~$\omega$ such that
$$
[nu_{0}]\leq \#\{Y_t^i(\omega)+c\, \frac{[nu_{0}]-1}{n}\geq r\} \leq \#\{Y_t^i(\omega)+c\, \frac{[nu_{1}]}{n}\geq r\} \leq [nu_{1}]
$$
satisfies $P(A_{n})\to 1$. For fixed $n$ and $\omega\in A_{n}$, consider the integer-valued function
$$
k\mapsto G(k):=\#\{Y_t^i(\omega)+c\, \tfrac{k}{n}\geq r\}.
$$
By the above, $G$ maps $\{[nu_{0}]-1,[nu_{0}],\dots,[nu_{1}]\}$ into $\{[nu_{0}],\dots,[nu_{1}]\}$. Moreover, $G$ is monotone increasing. Lemma~\ref{le:fixedpoint} below then yields the existence of $[nu_{0}]\leq k \leq [nu_{1}]$ such that $G(k-1)=G(k)=k$ which is exactly~\eqref{eq:NplayerEquilibCond}. By the choice of $u_{0},u_{1}$ we also have $|u-k/n|\leq \delta$ for $n$ large. Moreover, it is clear from the proof of Lemma~\ref{le:fixedpoint} that $k$ is a measurable function of $Y^{1}_{t},\dots, Y^{n}_{t}$.
\end{proof}
\begin{lemma}\label{le:fixedpoint}
Let $x_{0}<x_{1}<\dots < x_{N}$ be real numbers for some $N\geq 1$. Let $J=\{x_{1},\dots,x_{N}\}$ and $J_{0}=\{x_{0}\}\cup J$. If $f: J_{0}\to J$ is monotone increasing, there exists $k\in\{1,\dots,N\}$ such that $f(x_{k-1})=f(x_{k})=x_{k}$.
\end{lemma}
\begin{proof}
Since $f$ is monotone and maps $J$ into $J$, it must have a fixed point in~$J$: the minimal element $y$ of the set $\{x\in J:\, f(x)\leq x\}$, which is nonempty as $f(x_{N})\leq x_{N}$, satisfies $f(y)=y$, since $f(y)<y$ would imply $f(f(y))\leq f(y)$ by monotonicity, contradicting the minimality of~$y$. We claim that the minimal $k\in\{1,\dots,N\}$ such that $f(x_{k})=x_{k}$ has the desired property. Indeed, if $k=1$, monotonicity implies that $f(x_{0})=f(x_{1})$ and the proof is complete. If $k>1$, we observe that $f(x_{l-1})\geq x_{l}$ for all $1\leq l\leq k$. Indeed, $f(x_{1})\geq x_{2}$ since $x_{1}$ is not a fixed point, but then $f(x_{2})\geq x_{3}$ since $x_{2}$ is not a fixed point and $f$ is monotone, and so on. In particular, $f(x_{k-1})\geq x_{k}$ and thus $f(x_{k-1})=f(x_{k})=x_{k}$.
\end{proof}
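The proof is constructive: scanning for the minimal fixed point immediately yields the desired $k$. A minimal sketch (our own, with a toy monotone map; the name `double_fixed_point` is not from the text):

```python
def double_fixed_point(xs, f):
    """Given x_0 < ... < x_N and a monotone increasing f mapping
    {x_0, ..., x_N} into {x_1, ..., x_N}, return the minimal k >= 1
    with f(xs[k-1]) == f(xs[k]) == xs[k], as in Lemma le:fixedpoint."""
    for k in range(1, len(xs)):
        if f(xs[k]) == xs[k]:
            # by the monotonicity argument in the proof, the minimal
            # fixed point automatically satisfies the second condition
            assert f(xs[k - 1]) == xs[k]
            return k
    raise ValueError("a monotone self-map of a finite chain has a fixed point")

# toy example: f is monotone and maps {0,...,5} into {1,...,5};
# the minimal fixed point is x_2 = 2, and indeed f(1) = f(2) = 2
f = {0: 2, 1: 2, 2: 2, 3: 4, 4: 4, 5: 5}
k = double_fixed_point(list(range(6)), f.__getitem__)   # k == 2
```

In the game, the role of $f$ is played by the counting function $G(k)=\#\{Y_t^i+c\,\frac{k}{n}\geq r\}$ from the proof of Lemma~\ref{le:convToIncreasingStatic}, and the returned $k$ satisfies the equilibrium conditions~\eqref{eq:NplayerEquilibCond}.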
\begin{proof}[Proof of Theorem~\ref{th:convToIncreasing}]
Fix $N\in\mathbb{N}$ and let $t_{1}<\dots<t_{N}$ be in $D$. For $n$ large enough, Lemma~\ref{le:convToIncreasingStatic} allows us to find sets $A_{l}\in\mathcal{G}_{t_{l}}$ with $P(A_{l})>1-N^{-2}$ and random variables $k_{l}$ satisfying $|\rho(t_{l})-k_{l}/n|\leq \delta:=1/N$ and~\eqref{eq:NplayerEquilibCond} on~$A_{l}$, for $1\leq l \leq N$.
Following Remark~\ref{rk:NplayerEquilibCondSuff}, we can construct $n$-player equilibria $\rho^{l}_{n}$ such that $\rho^{l}_{n}(t_{l})=k_{l}/n$ on $A_{l}$. Next, we argue that these $\rho^{l}_{n}$ can be chosen such that
\begin{equation}\label{eq:FPordered}
\rho^{1}_{n}(t_{1})\leq \cdots \leq \rho^{m}_{n}(t_{m}) \mbox{ on } A_{1}\cap\cdots\cap A_{m},\quad 1\leq m\leq N.
\end{equation}
Indeed, we have $\rho(t_{l})\leq\rho(t_{l+1})$ since $\rho$ is increasing. If $\rho(t_{l})<\rho(t_{l+1})$, then we can ensure $\rho^{l}_{n}(t_{l})\leq \rho^{l+1}_{n}(t_{l+1})$ on $A_{l}\cap A_{l+1}$ simply by choosing $\delta<|\rho(t_{l})-\rho(t_{l+1})|/2$ in Lemma~\ref{le:convToIncreasingStatic}. If $\rho(t_{l})=\rho(t_{l+1})$, we execute the construction in the proof of Lemma~\ref{le:convToIncreasingStatic} for both $t_{l}$ and $t_{l+1}$ with the same parameters $u_{0},u_{1}$; the corresponding counting functions $G_{l}$ and $G_{l+1}$ then satisfy $G_{l}\leq G_{l+1}$ since $Y^{i}$ is increasing. This implies that the corresponding minimal fixed points produced by the proof of Lemma~\ref{le:fixedpoint} satisfy $\rho^{l}_{n}(t_{l})\leq \rho^{l+1}_{n}(t_{l+1})$.
In view of~\eqref{eq:FPordered}, we can use Remark~\ref{rk:cutAndPaste}(iii) to construct from the equilibria $(\rho^{l}_{n})_{1\leq l\leq N}$ another $n$-player equilibrium $\varrho_{n}$ with the property that $\varrho_{n}(t_{l})= \rho^{l}_{n}(t_{l})$ for all $1\leq l \leq N$ on $A^{N}:=\cap_{l=1}^{N}A_{l}$.
To summarize, $\varrho_{n}$ satisfies $|\rho(t_{l})-\varrho_{n}(t_{l})|\leq 1/N$ for all $1\leq l \leq N$ on the set $A^{N}$ which has probability $P(A^{N})\geq 1-N^{-1}$. By letting $t_{1},\dots,t_{N}$ exhaust a countable dense subset $D'\subseteq D\subseteq \mathbb{R}_{+}$ as $N\to\infty$, this shows that there exist $n$-player equilibria $(\varrho_{n})_{n\geq1}$ such that $\varrho_{n}(t)\to\rho(t)$ in probability for all $t\in D'$ and the proof is complete.
\end{proof}
\begin{remark}\label{rk:convergenceToMixtures}
The construction leading to Theorem~\ref{th:convToIncreasing} is pathwise and thus extends beyond deterministic mean field equilibria. For instance, let $\rho^{1},\rho^{2}$ be such equilibria satisfying the assumption of Theorem~\ref{th:convToIncreasing}, let $\lambda\in[0,1]$ and suppose that the $n$-player game admits a set $A\in\mathcal{G}_{0}$ with $P(A)=\lambda$. Then we can apply the construction separately on $A$ and $A^{c}$ to find $n$-player equilibria $\rho_{n}$ converging to the mixture $\lambda \delta_{\rho^{1}} + (1-\lambda)\delta_{\rho^{2}}$ on a dense set. In the same vein, convergence to more general mixtures could be analyzed.
\end{remark}
\subsection{Decreasing-Transversal Equilibria}
\label{se:decrasingTransversal}
Let us begin with a simulation and then establish that the observations correspond to a general result.
\begin{figure}[bh]
\centering
\begin{tikzpicture}[scale=0.95]
\draw[-latex] (0,0) -- (4,0);
\draw[-latex] (0,0) -- (0,4);
\node[below] at (0,0) {\small 0};
\node[below] at (3,0) {\small 1};
\node[left] at (0,3) {\small 1};
\node[below] at (1.5,0) {\small $0.5$};
\draw[dashed] (0,3) -- (3,0);
\draw[very thick,domain=0:1.5] plot(\x,{3*(1-2*((\x)/3)^2)});
\draw[very thick,domain=1.5:3] plot(\x,{6*(1-(\x)/3)^2});
\draw[dotted] (1.5,0) -- (1.5,1.5);
\fill (0,3) circle (2pt);
\fill (3,0) circle (2pt);
\fill (1.5,1.5) circle (2pt);
\end{tikzpicture}
\hspace{.5em}
\includegraphics[scale=.55]{simulation3.jpg}
\caption{C.d.f.\ and simulation of Example~\ref{ex:tent}. The decreasing-transversal equilibrium at $0.5$ can only be approximated on 12.5\% of the samples.}
\label{fig:tent}
\end{figure}
\begin{example}
\label{ex:tent}
Let $r=c=1$ and let $Y^i_{t}=Y^i_{0}$, $0\leq t< T$ be constant i.i.d.\ processes such that $\Law(Y^i_t)$ has the tent-shaped probability density $f(x)=2-4|x-1/2|$, $x\in[0,1]$. As illustrated in Figure~\ref{fig:tent} (left panel), the corresponding equation~\eqref{eq:master} has a decreasing-transversal solution at $u=1/2$ and increasing-transversal solutions at $u=0$ and $u=1$. For the game with $n=10'000$ players, the histogram in Figure~\ref{fig:tent} shows the values of $k/n$ such that $k$ satisfies the equilibrium conditions~\eqref{eq:NplayerEquilibCond}. The simulation illustrates the convergence to the equilibria at $u=0,1$ as proved in Theorem~\ref{th:convToIncreasing} but also suggests that $u=1/2$ is not a limit of $n$-player equilibria; indeed, only about 12.5\% of the samples
allow for an $n$-player equilibrium with $k/n$ close to~$1/2$. In Proposition~\ref{pr:expectedNumber}, we will establish an asymptotic upper bound which yields $e^{-2}\approx 13.5\%$ in this example.
\end{example}
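The histogram in Figure~\ref{fig:tent} can be reproduced in a few lines. The sketch below is our own implementation with smaller sample sizes than in the figure; it draws $n$ signals from the tent density via its inverse c.d.f., enumerates the $k$ satisfying~\eqref{eq:NplayerEquilibCond}, and estimates how often an $n$-player equilibrium exists near $u=1/2$:

```python
import random
from bisect import bisect_left

random.seed(2)

def tent_sample():
    # inverse c.d.f. of f(x) = 2 - 4|x - 1/2| on [0,1]:
    # F(x) = 2x^2 for x <= 1/2 and F(x) = 1 - 2(1-x)^2 for x >= 1/2
    u = random.random()
    return (u / 2) ** 0.5 if u <= 0.5 else 1 - ((1 - u) / 2) ** 0.5

def has_equilibrium_near_half(n, window=0.1):
    """True if some k with |k/n - 1/2| < window satisfies
    G(k) = k = G(k-1), where G(k) = #{Y_i + k/n >= 1} (here r = c = 1)."""
    ys = sorted(tent_sample() for _ in range(n))
    G = lambda k: n - bisect_left(ys, 1 - k / n)   # count of Y_i >= 1 - k/n
    lo = int(n * (0.5 - window)) + 1
    hi = int(n * (0.5 + window))
    return any(G(k) == k == G(k - 1) for k in range(lo, hi + 1))

n, trials = 2000, 200
frac = sum(has_equilibrium_near_half(n) for _ in range(trials)) / trials
# the text reports that roughly 12.5% of samples admit such a k at n = 10'000
```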
In the remainder of this section we assume that $F_{t}$ admits a continuous density~$f_{t}$. Let $x\in[0,1]$ be a solution of $u+F_{t}(r-cu)=1$. We say that $x$ is \emph{strongly decreasing-transversal} if $\partial_{u}|_{u=x}[u+F_{t}(r-cu)]<0$ or equivalently
$$
f_{t}(r-cx)>c^{-1}.
$$
We note that $x$ is then necessarily in $(0,1)$ and decreasing-transversal in the sense of Definition~\ref{de:transversalDef}; the only difference (given the continuity assumption) is that we exclude the case where $u+F_{t}(r-cu)$ has a vanishing derivative at $x$ (see also Remark~\ref{rk:tangentialDecreasingCase}). Intuitively, when $f_{t}(r-cx)$ is large, there are many similar agents (in terms of values of $Y^{i}$ and relative to the interaction constant $c$) close to such a state. As a result, these agents may tend to coordinate and either all stop or all not stop: it may be impossible to break up the group\footnote{Clearly, this intuition does not explain the phase-transition character of the phenomenon. To gather the intuition for a large density, it may be useful to consider the limiting case of an atom in~$F_{t}$: all agents corresponding to the atom make the same stopping decision.} and create an $n$-player equilibrium close to $x$.
\begin{theorem}\label{th:convToDecreasing}
Let $\rho$ be a mean field equilibrium and suppose that the set
$$
\{t\geq0:\, \rho(t) \mbox{ is strongly decreasing-transversal}\}
$$
has nonempty interior.\footnote{Note that the condition is nonempty interior rather than the set being nonempty. This corresponds to the fact that convergence in probability on a dense set of times~$t$ is sufficient for Fatou convergence; cf.\ Definition~\ref{de:Fatou}.} Then there does not exist a sequence of $n$-player equilibria $\rho_{n}$ Fatou converging to $\rho$ in probability.
\end{theorem}
This theorem follows from Corollary~\ref{co:decreasingLessOne} below which shows non-existence with positive probability at any fixed time $t$ where $\rho(t)$ is strongly decreasing-transversal. For brevity, we set
$$
G_{n,t}(k)=\#\{Y_t^i+c\, \tfrac{k}{n}\geq r\}
$$
so that the $n$-player equilibrium conditions~\eqref{eq:NplayerEquilibCond} can be expressed concisely as
$G_{n,t}(k)=k=G_{n,t}(k-1)$.
Moreover, we introduce
\begin{align*}
\mathcal{K}_{n,t}
=\{0\leq k\leq n: G_{n,t}(k)=k=G_{n,t}(k-1)\}.
\end{align*}
Roughly speaking, we think of $\mathcal{K}_{n,t}(\omega)$ as the set of all $k$ such that $k/n=\rho_{n}(t)(\omega)$ for some $n$-player equilibrium $\rho_{n}(t)$. (This is not quite meaningful since equilibria can always be altered on nullsets.) More precisely, we have that if $\rho_{n}$ is a given equilibrium, then $n\rho_{n}(t)\in \mathcal{K}_{n,t}$ a.s.\ by~\eqref{eq:NplayerEquilibCond}. In particular, we will use below that $\{|x-\rho_{n}(t)|<\varepsilon\}\subseteq \{\exists\, k\in \mathcal{K}_{n,t}:\, |x-\tfrac{k}{n}|<\varepsilon\}$ a.s.\ for all $x\in[0,1]$ and $\varepsilon>0$.
Finally, we also introduce the superset
\begin{align*}
\mathcal{K}^{*}_{n,t}
&=\{0\leq k\leq n: G_{n,t}(k)=k\} \supseteq\mathcal{K}_{n,t}
\end{align*}
which has no direct interpretation in terms of our game but is conveniently related to crossings of empirical distribution functions (see the proof below).
\begin{proposition}\label{pr:NairSheppKlass}
Fix $t\geq0$ and let $x\in(0,1)$ satisfy $x+F_{t}(r-cx)=1$. Let $\alpha:=cf_{t}(r-cx)$ and assume that $\alpha>1$. Then
$$
\lim_{\varepsilon\to0}\lim_{n\to\infty} P(\exists\, k\in \mathcal{K}^{*}_{n,t}:\, |x-\tfrac{k}{n}|<\varepsilon) = \frac{1-\theta}{\alpha-1}<1
$$
where $\theta\in(0,1)$ is defined through $\theta e^{-\theta}=\alpha e^{-\alpha}$.
\end{proposition}
\begin{proof}
We first observe the local nature of the claim. Indeed, introducing the uniform random variables $U^{i}=F_{t}(Y_t^{i})$ we see that the event
\begin{align*}
A_{n,\varepsilon}
&=\{\exists\, k\in \mathcal{K}^{*}_{n,t}:\, |x-\tfrac{k}{n}|<\varepsilon\}\\
&= \{ \exists\, 0\leq k\leq n:\, \#\{Y_t^i+c\, \tfrac{k}{n}\geq r\}=k,\;|x-\tfrac{k}{n}|<\varepsilon\}\\
&= \{ \exists\, 0\leq k\leq n:\, \#\{U^i\geq F_{t}(r-c\, \tfrac{k}{n})\}=k,\;|x-\tfrac{k}{n}|<\varepsilon\}
\end{align*}
depends only on the values of $F_{t}$ in an $\varepsilon$-neighborhood of $x$. In particular, for $\varepsilon$ small enough, we may change $F_{t}$ outside that neighborhood to guarantee that the set of solutions of $u+F_{t}(r-cu)=1$ is $\{0,x,1\}$.
Considering the c.d.f.\ $G(u)=1-F_{t}(r-cu)$, the proposition can be rephrased in terms of crossings of the empirical distribution function of a sample from $G$ with the (theoretical) uniform distribution function near $x$:
\begin{align*}
A_{n,\varepsilon}
=\{ \exists\, u\in[0,1]:\, \tfrac{1}{n}\#\{G^{-1}(U^i)\leq u\}=u,\;|x-u|<\varepsilon\}.
\end{align*}
(To see this identity, note that $\tfrac{1}{n}\#\{G^{-1}(U^i)\leq u\}=u$ implies $u=k/n$ for some $0\leq k\leq n$.) Following~\cite{NairSheppKlass.86}, this problem can be related to boundary-crossing probabilities of Poisson processes, which turn out to be computable. In particular, after changing $F_{t}$ as outlined above, the conditions of~\cite[Theorem~1]{NairSheppKlass.86} are satisfied for $G$, and since $\alpha=G'(x)$, that theorem yields the result.
\end{proof}
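Numerically, $\theta$ can be recovered by bisection since $u\mapsto ue^{-u}$ is strictly increasing on $(0,1)$; equivalently, $\theta=-W_{0}(-\alpha e^{-\alpha})$ in terms of the principal Lambert $W$ branch. A minimal sketch (the function name is ours):

```python
import math

def nsk_probability(alpha, tol=1e-12):
    """Limit of P(exists k in K*_{n,t} with k/n near x), i.e. (1-theta)/(alpha-1),
    where theta in (0,1) solves theta*exp(-theta) = alpha*exp(-alpha), alpha > 1."""
    assert alpha > 1
    target = alpha * math.exp(-alpha)
    lo, hi = 0.0, 1.0  # u * exp(-u) is strictly increasing on [0, 1]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid * math.exp(-mid) < target:
            lo = mid
        else:
            hi = mid
    theta = 0.5 * (lo + hi)
    return theta, (1 - theta) / (alpha - 1)

theta, p = nsk_probability(2.0)
```

For instance, $\alpha=2$ gives a limiting probability of roughly $0.59$, and the value approaches $1$ as $\alpha\downarrow1$, matching the phase-transition picture.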
In view of $\mathcal{K}_{n,t}\subseteq \mathcal{K}^{*}_{n,t}$, we have the following consequence (see also Figure~\ref{fig:bounds}).
\begin{corollary}\label{co:decreasingLessOne}
Fix $t\geq0$ and let $x\in[0,1]$ satisfy $x+F_{t}(r-cx)=1$. If~$x$ is strongly decreasing-transversal, then
$$
\lim_{\varepsilon\to0}\lim_{n\to\infty} P(\exists\, k\in \mathcal{K}^{}_{n,t}:\, |x-\tfrac{k}{n}|<\varepsilon)<1.
$$
\end{corollary}
\begin{figure}[bth]
\centering
\includegraphics[scale=.60]{bounds.jpg}
\caption{Bounds for the probability of finding an $n$-player equilibrium near~$x$ as in Corollary~\ref{co:decreasingLessOne}. The dashed and dashed-dotted lines are the upper bounds derived from Proposition~\ref{pr:NairSheppKlass} and Proposition~\ref{pr:expectedNumber}, respectively. The solid line is the lower bound from Proposition~\ref{pr:limitProbLowerBound}.}
\label{fig:bounds}
\end{figure}
\begin{remark}\label{rk:weakerEquilibCond}
One can ask if the non-existence result is related to the convention made in Section~\ref{se:nPlayer} that players do not consider their own impact on the state process. To address this question, we can drop the first equation in the equilibrium conditions~\eqref{eq:NplayerEquilibCond} and keep only the second (which seems uncontroversial); i.e., $\#\{Y_t^i+c\, \frac{k}{n}< r\}=n-k$.
This corresponds to the definition of $\mathcal{K}^{*}_{n,t}$ and Proposition~\ref{pr:NairSheppKlass} shows that non-existence holds even under this condition alone.
\end{remark}
\begin{remark}\label{rk:tangentialDecreasingCase}
Heuristics suggest that in the tangential case of a decreasing-transversal $x$ with $\alpha=1$, the limiting probability is $1$; i.e., the equilibrium is in fact a limit of $n$-player equilibria. The tangential case is less important because it generically does not occur, in the same sense as discussed below Definition~\ref{de:H}. We do not provide a rigorous result.
\end{remark}
In our last result, we determine the asymptotic expected number of equilibria close to $x$ (for both increasing- and decreasing-transversal cases). Importantly, it implies that this number is positive with positive probability. When $\alpha>1$ is not close to $1$, it also yields a fairly accurate upper bound for the probability of finding an $n$-player equilibrium close to~$x$ (cf.~Example~\ref{ex:tent}) since the probability of finding more than one solution is small. On the other hand, we see that as $\alpha\to1$, the expected number of solutions tends to infinity, and in particular the probability of finding many solutions becomes large\footnote{In fact, one can show that $\lim_{\alpha\to1}\limsup_{n\to\infty} P(\#\{k\in\mathcal{K}_{n,t}:\, |x-\tfrac{k}{n}|<\varepsilon\}=j)=0$ for all finite $j\geq0$ when $\varepsilon>0$ is small enough.}.
\begin{proposition}\label{pr:expectedNumber}
Fix $t\geq0$ and let $x\in(0,1)$ satisfy $x+F_{t}(r-cx)=1$. Let $\alpha:=cf_{t}(r-cx)$ and assume that $\alpha\neq1$. Then
$$
\lim_{\varepsilon\to0}\lim_{n\to\infty} E[\#\{k\in \mathcal{K}_{n,t}:\, |x-\tfrac{k}{n}|<\varepsilon\}]= \frac{e^{-\alpha}}{|1-\alpha|}.
$$
In particular,
$$
\limsup_{\varepsilon\to0}\limsup_{n\to\infty} P(\exists\, k\in \mathcal{K}_{n,t}:\, |x-\tfrac{k}{n}|<\varepsilon)\leq \frac{e^{-\alpha}}{|1-\alpha|}.
$$
\end{proposition}
One consequence of Proposition~\ref{pr:expectedNumber} is that non-uniqueness is indeed the typical case for the $n$-player game, as claimed in the Introduction: under the stated smoothness assumption on $F_{t}$, we typically have at least one mean field equilibrium corresponding to $0\neq\alpha<1$ and then the proposition and Lemma~\ref{le:convToIncreasingStatic} imply that there is more than one $n$-player equilibrium, for large~$n$.
\begin{proof}[Proof of Proposition~\ref{pr:expectedNumber}.]
We may assume that $c=1$, and we drop the index $t$ everywhere. We denote
$$
\alpha(z)=f(r-z)
$$
and recall that $x\in(0,1)$ and $\alpha=\alpha(x)\neq1$. Fix $\varepsilon>0$ and denote
$$
x_{-}=x-\varepsilon,\quad x_{+}=x+\varepsilon,
$$
$$
F_{-}=F(r-x-\varepsilon),\quad F_{+}=F(r-x+\varepsilon),
$$
$$
\alpha_{-}=\inf_{|z-x|<\varepsilon} \alpha(z),\quad \alpha_{+}=\sup_{|z-x|<\varepsilon} \alpha(z),
$$
$$
m(x)=\inf_{|z-x|\leq\varepsilon} z(1-z),\quad M(x)=\sup_{|z-x|\leq\varepsilon} z(1-z).
$$
We assume that $\varepsilon$ is small enough such that $x_{\pm}\in(0,1)$ and $1\notin[\alpha_{-},\alpha_{+}]$.
\vspace{.5em}
\emph{Step 1: Bounds for $P(k\in\mathcal{K}_{n})$.} Fix $n$ and let $U^{i}=F(Y^{i})$, $1\leq i\leq n$ so that $(U^{i})$ are i.i.d.\ $\Unif[0,1]$, and let $U^{(1)}\geq \dots\geq U^{(n)}$ be the associated reverse order statistics. Noting that $U^{(k)}=U_{(n-k+1)}$ for the usual (increasing) order statistics $U_{(\cdot)}$, we have that $U^{(k)}\sim \Beta(n-k+1,k)$ and $U^{(k+1)}=U^{(k)}W_{k}^{\frac{1}{n-k}}$ where $W_{k}\sim \Unif[0,1]$ is independent; cf.\ \cite[Section~4]{ArnoldEtAl.13}.
Moreover, we note that $k\in\mathcal{K}_{n}$ is equivalent to
\begin{equation}\label{eq:orderStat1}
U^{(k)}\geq F(r-\tfrac{k-1}{n})=:F_{k-1} \quad\mbox{and}\quad U^{(k+1)}\leq F(r-\tfrac{k}{n})=:F_{k}.
\end{equation}
As a result, for any deterministic integer $1\leq k\leq n$,
\begin{align}
P(k\in\mathcal{K}_{n})
&= P\big(U^{(k+1)}\leq F_{k},\, U^{(k)}\geq F_{k-1}\big)\nonumber\\
&= \int_{F_{k-1}}^{1} P\big(U^{(k+1)}\leq F_{k}\big|U^{(k)}=z\big)\, dP(U^{(k)}=z)\nonumber\\
&= \int_{F_{k-1}}^{1} P\big(W\leq (F_{k}/z)^{n-k}\big|U^{(k)}=z\big)\, dP(U^{(k)}=z)\nonumber\\
&= \frac{n!}{(n-k)!(k-1)!} \int_{F_{k-1}}^{1} F_{k}^{n-k}(1-z)^{k-1}\,dz\nonumber\\
&= \binom{n}{k}F_{k}^{n-k}(1-F_{k-1})^{k} \label{eq:orderStat2}
\end{align}
where $dP(U^{(k)}=z)$ indicates integration with respect to the law of $U^{(k)}$. We may observe that this quantity is reminiscent of a binomial distribution except that the success probability changes with $k$.
Next, we use Taylor's theorem to find that
\begin{equation}\label{eq:alphaTaylor}
F_{k-1}=F(r-\tfrac{k}{n}+\tfrac{1}{n})=F(r-\tfrac{k}{n})+\alpha_{k}/n=F_{k}+\alpha_{k}/n
\end{equation}
where $\alpha_k=\alpha(\eta_k)$ with $\eta_k \in [\frac{k-1}{n},\frac{k}{n}]$ and in particular $\alpha_{k}\in[\alpha_{-},\alpha_{+}]$. Now suppose that $|x-\tfrac{k}{n}|<\varepsilon$. Then $k\geq nx_{-}$, and using also $F_{k}\geq F_{-}$,
\begin{align*}
P(k\in\mathcal{K}_{n})
&= \binom{n}{k}F_{k}^{n-k}(1-F_{k}-\alpha_{k}/n)^{k}\\
&= \binom{n}{k}F_{k}^{n-k}(1-F_{k})^{k}\left(1-\frac{\alpha_{k}}{(1-F_{k})n}\right)^{k}\\
&\leq \binom{n}{k}F_{k}^{n-k}(1-F_{k})^{k}\left(1-\frac{\alpha_{-}}{(1-F_{-})n}\right)^{nx_{-}}.
\end{align*}
The elementary estimate
$$
(1-y) \leq e^{-y} \leq (1-y)(1+O(y^{2}))
$$
as $y\to0$, applied with $y=w/n$, yields
$$
(1-\tfrac{w}{n})^{n} \leq e^{-w} \leq (1-\tfrac{w}{n})^{n}\,(1+O(1/n))
$$
as $n\to\infty$, uniformly over $w$ in a compact interval. This leads us to the upper bound
\begin{align}\label{eq:proofStep1Conclusion}
P(k\in\mathcal{K}_{n})
&\leq \binom{n}{k}F_{k}^{n-k}(1-F_{k})^{k} e^{-{\frac{\alpha_{-}x_{-}}{1-F_{-}}}}.
\end{align}
Similarly, we have the lower bound
\begin{align*}
P(k\in\mathcal{K}_{n})
&\geq \binom{n}{k}F_{k}^{n-k}(1-F_{k})^{k}\left(1-\frac{\alpha_{+}}{(1-F_{+})n}\right)^{nx_{+}}\\
&\geq \binom{n}{k}F_{k}^{n-k}(1-F_{k})^{k}e^{-{\frac{\alpha_{+}x_{+}}{1-F_{+}}}}\,(1+O(1/n)).
\end{align*}
\vspace{.5em}
\emph{Step~2: Decay away from $x$.}
Let us recall Robbins' version~\cite{Robbins.55} of the Stirling approximation,
\begin{equation}\label{eq:robbins}
\sqrt{2\pi n} (\tfrac{n}{e})^{n} e^{\tfrac{1}{12n+1}} \leq n! \leq \sqrt{2\pi n} (\tfrac{n}{e})^{n} e^{\tfrac{1}{12n}},
\end{equation}
showing in particular that $n! = \sqrt{2\pi n}\, (\tfrac{n}{e})^{n}\,(1+O(1/n))$.
Since $n-k$ and $k$ are comparable to $n$ when $|x-\tfrac{k}{n}|<\varepsilon$, we have
$$
\binom{n}{k}= (1+O(1/n)) \frac
{\sqrt{2\pi n}\, (\tfrac{n}{e})^{n}}
{\sqrt{2\pi (n-k)}\, (\tfrac{n-k}{e})^{(n-k)}\, \sqrt{2\pi k}\, (\tfrac{k}{e})^{k}}
$$
uniformly over all $k$ such that $|x-\tfrac{k}{n}|<\varepsilon$.
This shows that
\begin{align}\label{eq:proofZsum}
Z_{n,\varepsilon}
&:=\sum_{k:\, |x-\tfrac{k}{n}|<\varepsilon} \binom{n}{k}F^{n-k}_{k}(1-F_{k})^{k}\nonumber\\
& =(1+O(1/n))\sum_{k:\, |x-\tfrac{k}{n}|<\varepsilon} \frac{1}{\sqrt{2\pi n(1-\tfrac{k}{n})\tfrac{k}{n}}}\,
\frac{F^{n-k}_{k}(1-F_{k})^{k}}{(1-\tfrac{k}{n})^{n-k}(\tfrac{k}{n})^{k}}\nonumber\\
&\leq \frac{1+O(1/n)}{\sqrt{m(x)}}\sum_{k:\, |x-\tfrac{k}{n}|<\varepsilon} \frac{1}{\sqrt{2\pi n}}\,
\frac{F^{n-k}_{k}(1-F_{k})^{k}}{(1-\tfrac{k}{n})^{n-k}(\tfrac{k}{n})^{k}}.
\end{align}
Our next goal is to estimate the summand above. We introduce the function
$$
\varphi(z)=(1-z)^{n-k}z^{k}
$$
so that
\begin{equation}\label{eq:proofFraction}
\frac{F^{n-k}_{k}(1-F_{k})^{k}}{(1-\tfrac{k}{n})^{n-k}(\tfrac{k}{n})^{k}}=\frac{\varphi(1-F_{k})}{\varphi(\tfrac{k}{n})}
\end{equation}
is the term in question.
We can use Taylor's theorem similarly as above to find
$$
F_{k}=F(r-\tfrac{k}{n})=F(r-x+x-\tfrac{k}{n})=F(r-x)+\tilde{\alpha}_{k}(x-\tfrac{k}{n})
$$
where $\tilde{\alpha}_{k}\in[\alpha_{-},\alpha_{+}]$. As $F(r-x)=1-x$, this equality can be rewritten as
$$
F_{k}=1-\tfrac{k}{n}+(\tilde{\alpha}_{k}-1)(x-\tfrac{k}{n}).
$$
Introducing also
$$
\psi(z)=\log\varphi(z)=(n-k)\log(1-z)+ k \log z,
$$
we have
$$
\psi'(z) = -\frac{n-k}{1-z}+\frac{k}{z},\quad \psi''(z) = -n\left[\frac{1-\tfrac{k}{n}}{(1-z)^{2}}+\frac{\tfrac{k}{n}}{z^{2}}\right]<0
$$
and then $\psi'(k/n)=0$ shows that $\psi$ and $\varphi$ have a global maximum at~$k/n$.
Taylor's theorem at the second order yields
$$
\psi(1-F_{k})-\psi(\tfrac{k}{n})=\psi(\tfrac{k}{n}-(\tilde{\alpha}_{k}-1)(x-\tfrac{k}{n}))-\psi(\tfrac{k}{n})=\frac{\psi''(\xi_{k})}{2} (\tilde{\alpha}_{k}-1)^{2}(x-\tfrac{k}{n})^{2}
$$
for a suitable number $\xi_{k}$ between $\tfrac{k}{n}$ and $\tfrac{k}{n}-(\tilde{\alpha}_{k}-1)(x-\tfrac{k}{n})$. Therefore, we have
$|\xi_{k}-x|<A\varepsilon$, with $A=\max\{\alpha_+,1\}$. Using the above formula for $\psi''(z)$ and setting
$$
\Gamma_{\varepsilon}=\inf_{\substack{|p-x|<\varepsilon\\|z-x|<A\varepsilon}} \left[\frac{1-p}{(1-z)^{2}}+\frac{p}{z^{2}}\right],
$$
we arrive at
$$
\psi(1-F_{k})-\psi(\tfrac{k}{n})\leq -\frac{n}{2} \Gamma_{\varepsilon} (\tilde{\alpha}_{k}-1)^{2}(x-\tfrac{k}{n})^{2}.
$$
Setting also $\alpha_{*}=\alpha_{-}$ if $\alpha>1$ and $\alpha_{*}=\alpha_{+}$ if $\alpha<1$, exponentiating leads us to the desired estimate
$$
\frac{\varphi(1-F_{k})}{\varphi(\tfrac{k}{n})} \leq \exp \left(-\frac{n}{2} \Gamma_{\varepsilon} (\alpha_{*}-1)^{2}(x-\tfrac{k}{n})^{2}\right)
$$
and plugging this into~\eqref{eq:proofZsum} we have that
\begin{align*}
Z_{n,\varepsilon}
&\le \frac{1+O(1/n)}{\sqrt{m(x)}}\sum_{k:\, |x-\tfrac{k}{n}|<\varepsilon} \frac{1}{\sqrt{2\pi n}}\, \exp \left(-\frac{n}{2} \Gamma_{\varepsilon} (\alpha_{*}-1)^{2}(x-\tfrac{k}{n})^{2}\right).
\end{align*}
Set
$$
w_{k}= \sqrt{n\Gamma_{\varepsilon}}|\alpha_{*}-1|(\tfrac{k}{n}-x)
$$
and note that
$$
\Delta w: = w_{k}-w_{k-1}=\frac{1}{\sqrt{n}}\sqrt{\Gamma_{\varepsilon}}|\alpha_{*}-1|.
$$
The above sum can then be written as
\begin{align*}
Z_{n,\varepsilon}
& \le \frac{1+O(1/n)}{\sqrt{m(x)}}\sum_{k:\, |w_{k}|<\sqrt{n\Gamma_{\varepsilon}}|\alpha_{*}-1|\varepsilon}
\frac{1}{\sqrt{2\pi n}}\, e^{-w_{k}^{2}/2}\\
&= \frac{1+O(1/n)}{\sqrt{m(x)\Gamma_{\varepsilon}}}\frac{1}{|\alpha_{*}-1|}
\sum_{k:\, |w_{k}|<\sqrt{n\Gamma_{\varepsilon}}|\alpha_{*}-1|\varepsilon} \frac{1}{\sqrt{2\pi}}\, e^{-w_{k}^{2}/2} \Delta w
\end{align*}
which suggests comparison with a Gaussian integral $\int_{\mathbb{R}}\frac{1}{\sqrt{2\pi}}\, e^{-w^{2}/2}\,dw=1$. Indeed, after subtracting the two largest summands neighboring the origin, the sum can be seen as a Riemann sum which is entirely below the integral. These two summands are $O(1/\sqrt{n})$ so that
$$
\sum_{k:\, |w_{k}|<\sqrt{n\Gamma_{\varepsilon}}|\alpha_{*}-1|\varepsilon} \frac{1}{\sqrt{2\pi}}\, e^{-w_{k}^{2}/2} \Delta w \leq 1+O(1/\sqrt{n})
$$
and finally
\begin{align*}
Z_{n,\varepsilon}
& \leq \frac{1}{\sqrt{m(x)\Gamma_{\varepsilon}}}\frac{1}{|\alpha_{*}-1|} \, (1+O(1/\sqrt{n})).
\end{align*}
\vspace{.5em}
\emph{Step 3: Conclusion.} Recalling~\eqref{eq:proofStep1Conclusion} we have
\begin{align*}
E[\#\{k\in \mathcal{K}_{n}:\, |x-\tfrac{k}{n}|<\varepsilon\}]
& = \sum_{k:\, |x-\tfrac{k}{n}|<\varepsilon} P(k\in\mathcal{K}_{n}) \\
& \leq e^{-{\frac{\alpha_{-}x_{-}}{1-F_{-}}}} Z_{n,\varepsilon}\\
& \leq e^{-{\frac{\alpha_{-}x_{-}}{1-F_{-}}}} \frac{1}{\sqrt{m(x)\Gamma_{\varepsilon}}}\frac{1}{|\alpha_{*}-1|} \, (1+O(1/\sqrt{n}))
\end{align*}
and hence
\begin{align*}
\limsup_{n\to\infty} E[\#\{k\in \mathcal{K}_{n}:\, |x-\tfrac{k}{n}|<\varepsilon\}]
& \leq e^{-{\frac{\alpha_{-}x_{-}}{1-F_{-}}}} \frac{1}{\sqrt{m(x)\Gamma_{\varepsilon}}}\frac{1}{|\alpha_{*}-1|}.
\end{align*}
As $\varepsilon\to0$ we have $x_{-}\to x$, $\alpha_{-}\to\alpha$, $\alpha_{*}\to\alpha$, $F_{-}\to F(r-x)=1-x$ and
$$
m(x)\to x(1-x),\quad \Gamma_{\varepsilon}\to \frac{1}{1-x}+\frac{1}{x}=\frac{1}{x(1-x)}.
$$
Thus,
\begin{align*}
\limsup_{\varepsilon\to0}\limsup_{n\to\infty} E[\#\{k\in \mathcal{K}_{n}:\, |x-\tfrac{k}{n}|<\varepsilon\}]
& \leq \frac{e^{-\alpha}}{|\alpha-1|}.
\end{align*}
The matching lower bound follows similarly after replacing $\alpha_{-}$ by $\alpha_{+}$, $F_{-}$ by $F_{+}$, and so on.
\end{proof}
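As an illustration of Proposition~\ref{pr:expectedNumber} (our own numerical experiment, not part of the argument), one can simulate the characterization $\#\{U^i\geq F_k\}=k=\#\{U^i\geq F_{k-1}\}$ directly, taking $c=1$ and a distribution that is linear near $x$ with $cf(r-cx)=\alpha$; the empirical average of $\#\{k\in\mathcal{K}_{n}:\,|x-\tfrac{k}{n}|<\varepsilon\}$ should then be comparable to $e^{-\alpha}/|1-\alpha|$ for large $n$ and small $\varepsilon$:

```python
import math
import random
from bisect import bisect_left

def expected_solutions(alpha, x=0.5, n=1000, eps=0.05, trials=300, seed=0):
    """Monte Carlo estimate of E[#{k in K_n : |x - k/n| < eps}] when
    F(r - u) = 1 - x - alpha*(u - x) near x (so that c*f(r - cx) = alpha, c = 1)."""
    rng = random.Random(seed)
    Fk = lambda k: min(1.0, max(0.0, 1 - x - alpha * (k / n - x)))
    ks = [k for k in range(n + 1) if abs(x - k / n) < eps]
    total = 0
    for _ in range(trials):
        U = sorted(rng.random() for _ in range(n))
        count = lambda p: n - bisect_left(U, p)  # #{U^i >= p}
        total += sum(1 for k in ks
                     if count(Fk(k)) == k and count(Fk(k - 1)) == k)
    return total / trials

est = expected_solutions(alpha=3.0)
theory = math.exp(-3.0) / abs(1 - 3.0)  # = e^{-3}/2, about 0.0249
```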
\begin{remark}\label{rk:rootNconvergence}
The above proof offers insight into the speed of convergence of $n$-player equilibria. Specifically, the estimates entail that if $\varepsilon_{n}\downarrow0$ is such that $\varepsilon_{n}\sqrt{n}\to \beta\in[0,\infty]$, then
\begin{align*}
E[\#\{k\in \mathcal{K}_{n,t}:\, |x-\tfrac{k}{n}|<\varepsilon_{n}\}] \to \frac{e^{-\alpha}}{|1-\alpha|} \mu\bigg(-\frac{|\alpha-1|}{\sqrt{x(1-x)}}\beta,\frac{|\alpha-1|}{\sqrt{x(1-x)}}\beta\bigg)
\end{align*}
where $\mu$ is the standard Gaussian distribution.
Thus, a ball of radius $r_{n}/\sqrt{n}$ around $x$, where $r_{n}\to\infty$ arbitrarily slowly, will asymptotically contain all $n$-player equilibria converging to $x$, and this is optimal in the sense that if $\limsup r_{n}<\infty$ the ball will miss some solutions.
\end{remark}
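The Gaussian window in Remark~\ref{rk:rootNconvergence} is straightforward to evaluate; the following sketch (our own function name) computes the limiting expected count within a window of radius $\beta/\sqrt{n}$ around $x$:

```python
import math

def limiting_count(alpha, x, beta):
    """e^{-alpha}/|1-alpha| times mu(-s*beta, s*beta), s = |alpha-1|/sqrt(x(1-x)),
    where mu is the standard Gaussian measure."""
    s = abs(alpha - 1) / math.sqrt(x * (1 - x))
    gauss_mass = math.erf(s * beta / math.sqrt(2))  # = Phi(s*beta) - Phi(-s*beta)
    return math.exp(-alpha) / abs(1 - alpha) * gauss_mass

# As beta grows, the window captures the full limiting expected count.
small, large = limiting_count(3.0, 0.5, 0.25), limiting_count(3.0, 0.5, 50.0)
```

For large $\beta$ this recovers the full limit $e^{-\alpha}/|1-\alpha|$, consistent with the claim that a ball of radius $r_{n}/\sqrt{n}$ with $r_{n}\to\infty$ asymptotically captures all nearby equilibria.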
In our final result we complement the upper bound in Proposition~\ref{pr:expectedNumber} by a lower bound. The gap between the bounds vanishes for large $\alpha$; see also Figure~\ref{fig:bounds}.
\begin{proposition}\label{pr:limitProbLowerBound}
Fix $t\geq0$, let $x\in(0,1)$ satisfy $x+F_{t}(r-cx)=1$ and suppose that $\alpha:=cf_{t}(r-cx)>1$. Then
$$
\liminf_{\varepsilon\to0}\liminf_{n\to\infty} P(\exists\, k\in \mathcal{K}_{n,t}:\, |x-\tfrac{k}{n}|<\varepsilon)\geq L(\alpha)>0
$$
where
$$
L(\alpha) = \frac{e^{-\alpha}}{\big(\alpha-1\big)\left(1+2\sqrt{\frac{2}{|a_0|}}\left\{1-\Phi\left(\sqrt{2|a_0|}\right)\right\}\right)}
$$
with $a_0:=1-\alpha+\log(\alpha)<0$ and $\Phi$ the standard normal c.d.f.
\end{proposition}
Since the lower bound is strictly positive, we can interpret the result as stating that~$x$ is necessarily part of a mixture which is itself a limit of $n$-player equilibria. In summary, when $x$ is strongly decreasing-transversal, we cannot find $n$-player equilibria converging to $x$ at time~$t$, but at least we can find $n$-player equilibria converging to a randomized mean field equilibrium which charges~$x$.
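Numerically, the three curves of Figure~\ref{fig:bounds} can be reproduced from the stated formulas; the following sketch (helper names are ours) evaluates $L(\alpha)$ together with the two upper bounds and checks their ordering at a sample value of $\alpha$:

```python
import math

def phi_cdf(z):
    """Standard normal c.d.f. via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def lower_bound_L(alpha):
    """L(alpha) from the proposition, for alpha > 1."""
    a0 = 1 - alpha + math.log(alpha)  # negative for alpha > 1
    tail = 2 * math.sqrt(2 / abs(a0)) * (1 - phi_cdf(math.sqrt(2 * abs(a0))))
    return math.exp(-alpha) / ((alpha - 1) * (1 + tail))

def upper_bound_expected(alpha):
    """e^{-alpha}/|1-alpha|, the expected-number bound."""
    return math.exp(-alpha) / abs(1 - alpha)

def upper_bound_nsk(alpha, tol=1e-12):
    """(1-theta)/(alpha-1), theta in (0,1) solving theta e^{-theta} = alpha e^{-alpha}."""
    target = alpha * math.exp(-alpha)
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mid * math.exp(-mid) < target else (lo, mid)
    return (1 - 0.5 * (lo + hi)) / (alpha - 1)

alpha = 3.0
L = lower_bound_L(alpha)
# The lower bound must sit below both upper bounds.
assert L <= min(upper_bound_expected(alpha), upper_bound_nsk(alpha))
```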
\begin{proof}[Proof of Proposition~\ref{pr:limitProbLowerBound}]
We use the notation from the proof of Proposition~\ref{pr:expectedNumber} and suppress~$t$.
Let $\mathcal{K}= \mathcal{K}_{n,t}$ and $X=X_{n,\varepsilon}=\#\{k\in \mathcal{K}_{n,t}:\, |x-\frac{k}{n}|\le\varepsilon\}$. Set $\mu=E[X]$ and let
$$
A=A_{n,\varepsilon}=\{|X-\kappa\mu|\ge \kappa\mu\}
$$
for a constant $\kappa>0$ to be chosen later. Clearly $P(X=0)\le P(A)$. Using Markov's inequality,
$$
P(|X-\kappa\mu|\ge \kappa\mu)\le \frac{E\left((X-\kappa\mu)^2\right)}{\kappa^2 \mu^2}=
\frac{E\left(X^2\right)}{\kappa^2 \mu^2}-\frac{2}{\kappa}+1=
\frac{2}{\kappa}\left(\frac{E\left(X^2\right)}{2\kappa \mu^2}-1\right)+1
$$
and choosing $\kappa=\theta\frac{E[X^2]}{2\mu^2}$ for some $\theta>1$, we obtain that
$$
P(X=0)\le1-\frac{4\mu^2(\theta-1)}{\theta^2E[X^2]}.
$$
Optimizing over the right-hand side, we note that $\theta=2$ yields the best bound, so we choose $\kappa=\frac{E[X^2]}{\mu^2}$ and conclude that
\begin{equation}\label{eq:prooflimitProbLower1}
P(X=0)\le 1-\frac{\mu^2}{E[X^2]}= 1-\frac{E[X]^{2}}{E[X^2]}.
\end{equation}
Since we have already determined the limit of~$E[X]$ in Proposition~\ref{pr:expectedNumber}, our goal is to find an upper bound for~$E[X^2]$.
To that end, we first compute
$$
P(k\in \mathcal{K}, j\in \mathcal{K})=P(U^{(k+1)}\le F_k, U^{(k)}\ge F_{k-1},U^{(j+1)}\le F_j, U^{(j)}\ge F_{j-1})
$$
for $k<j$; recall the notation of~\eqref{eq:orderStat1}. In fact, this probability is zero for $j=k+1$, so we focus on $k+2\leq j$. Conditionally on $U^{(k+1)}=h< U^{(k)}=u$, the pair
$(U^{(j)},U^{(j+1)})$ has the same distribution as
$(hV^{(j-(k+1))},hV^{(j-k)})$ where $V^{(\ell)}$ are the reverse order statistics of an i.i.d.\ sample $V_1,\cdots,V_{n-(k+1)}$ of size $n-(k+1)$ and distribution~$\Unif[0,1]$.
Thus, we have
\begin{align}\label{eq:V1}
&P\left(U^{(j+1)}\le F_j, U^{(j)}\ge F_{j-1} \big| U^{(k+1)}=h, U^{(k)}=u\right)\nonumber\\
&=P\left(V^{(j-(k+1))}\le \frac{F_j}{h}, V^{(j-k)}\ge \frac{F_{j-1}}{h}\right).
\end{align}
Clearly $P(V^{(j-k)}\ge \frac{F_{j-1}}{h})=0$ if $F_{j-1} \ge h$, so we only need to consider the case $h\in [F_{j-1},F_k]$. Using the formula developed in~\eqref{eq:orderStat2}, we obtain
\begin{align}\label{eq:V2}
&P\left(U^{(j+1)}\le F_j, U^{(j)}\ge F_{j-1} \big| U^{(k+1)}=h, U^{(k)}=u\right)\nonumber\\
&=\binom{n-(k+1)}{j-(k+1)} \left(\frac{F_j}h\right)^{n-j}\, \left(1-\frac{F_{j-1}}h\right)^{j-(k+1)}.
\end{align}
As above~\eqref{eq:orderStat1}, the joint density of $U^{(k)}$ and $U^{(k+1)}$ can be computed using the fact that $U^{(k)}\sim\Beta(n-k+1,k)$ and $U^{(k+1)}=W_{k}^{\frac{1}{n-k}}U^{(k)}$ where $W_{k}\sim\Unif[0,1]$ is independent of $U^{(k)}$:
$$
dP\left(U^{(k)}=u, U^{(k+1)}=h\right)=k(n-k)\binom{n}{k}(1-u)^{k-1} h^{n-(k+1)} \, \mathbf{1}_{0\le h\le u\le 1}\, du \, dh.
$$
Integrating with respect to this density and using the appropriate restrictions, we deduce that
\begin{align*}
P(k\in \mathcal{K}, j\in \mathcal{K})
&=k(n-k)\binom{n}{k} \binom{n-(k+1)}{j-(k+1)} F_j^{n-j} \\
&\quad\,\times\int_{F_{k-1}}^1 (1-u)^{k-1} \, du \int_{F_{j-1}}^{F_k} (h-F_{j-1})^{j-(k+1)}\, dh\\
&=\binom{n}{k} (1-F_{k-1})^k \,
\frac{n-k}{j-k}\binom{n-(k+1)}{j-(k+1)} F_j^{n-j} (F_k-F_{j-1})^{j-k}\\
&=\binom{n}{k}(1-F_{k-1})^k F_k^{n-k} \, \binom{n-k}{j-k}
\left(\frac{F_j}{F_k}\right)^{n-j} \left(1-\frac{F_{j-1}}{F_k}\right)^{j-k}\\
&\le \binom{n}{k}(1-F_{k-1})^k F_k^{n-k} \, \binom{n-k}{j-k}
\left(\frac{F_j}{F_k}\right)^{n-j} \left(1-\frac{F_{j}}{F_k}\right)^{j-k}.
\end{align*}
By a repeated application of~\eqref{eq:alphaTaylor} we have that $\frac{F_{j}}{F_k}=1-\frac{\alpha_j (j-k)}{n F_k}$
for some $\alpha_j \in [\alpha_-,\alpha_+]$ and hence the last two terms above satisfy
\begin{align*}
\left(\frac{F_j}{F_k}\right)^{n-j} \left(1-\frac{F_{j}}{F_k}\right)^{j-k}
&\leq \left[1-\frac{\alpha_j(j-k)}{n F_k}\right]^{n-j} \left[\frac{\alpha_j(j-k)}{n F_k}\right]^{j-k}\\
&\leq \exp\left(-\alpha_-(j-k)\frac{n-j}{nF_k}\right) (\alpha_+)^{j-k} \left(\frac{j-k}{nF_k}\right)^{j-k}\\
&\leq \exp\left(-\alpha_-(j-k)\frac{1-x_+}{F_+}\right) (\alpha_+)^{j-k} \left(\frac{j-k}{nF_k}\right)^{j-k}.
\end{align*}
On the other hand, Stirling's approximation as in~\eqref{eq:robbins} yields
\begin{align*}
&\binom{n-k}{j-k} \left(\frac{j-k}{nF_k}\right)^{j-k} \!\!\!\!\!\!
= \frac{(n-k)!}{(n-j)! (j-k)! }\left(\frac{j-k}{nF_k}\right)^{j-k}\!\!\!\!\!
\le \left(\frac{n-k}{nF_k}\right)^{j-k}\! \frac{(j-k)^{j-k}}{(j-k)! }\\
&\le \left(\frac{1-x_-}{F_-}\right)^{j-k}\!\!\!\!\!(j-k)^{j-k}\!\left[\left(\frac{(j-k)}{e}\right)^{j-k}\!\!\!\!\!\sqrt{2\pi(j-k)} \exp\left(\frac1{12(j-k)+1}\right)\right]^{-1}\\
&\le \left(\frac{1-x_-}{F_-}\right)^{j-k}\frac{e^{j-k}}{\sqrt{2\pi(j-k)}}\,.
\end{align*}
As a result, we obtain the upper bound
\begin{equation}\label{eq:combEstimate}
P(k\in \mathcal{K}, j\in \mathcal{K})\le \binom{n}{k}(1-F_{k-1})^k F_k^{n-k} \frac{1}{\sqrt{2\pi(j-k)}}
\exp(a(j-k))
\end{equation}
where
$$
a=a(\alpha,\varepsilon):=1-\alpha_-\frac{1-x_+}{F_+}+\log(\alpha_+)+\log\left(\frac{1-x_-}{F_-}\right).
$$
With all sums below running over indices $k$ (resp.\ $j$) satisfying $|x-k/n|\le \varepsilon$, we can express the second moment of $X$ as
\begin{align*}
E[X^{2}]
& = E\left[\left(\sum\nolimits_{k}\mathbf{1}_{k\in\mathcal{K}}\right)\left(\sum\nolimits_{j}\mathbf{1}_{j\in\mathcal{K}}\right)\right]
= E\left[\sum\nolimits_{k,j}\mathbf{1}_{k\in\mathcal{K}}\mathbf{1}_{j\in\mathcal{K}}\right] \\
& = E\left[\sum\nolimits_{k}\mathbf{1}_{k\in\mathcal{K}} + 2\sum\nolimits_{k<j}\mathbf{1}_{k\in\mathcal{K}}\mathbf{1}_{j\in\mathcal{K}}\right]\\
&= \sum\nolimits_{k} P(k\in\mathcal{K}) + 2\sum\nolimits_{k<j} P(k\in\mathcal{K},\,j\in\mathcal{K}).
\end{align*}
Thus, \eqref{eq:combEstimate} leads to
\begin{align*}
E[X^2]
&=\sum_{k:\, |x-k/n|\le \varepsilon} P(k\in \mathcal{K})+
2\sum_{\substack{k,j:\, j\ge k+2, \\ |x-k/n|\le \varepsilon, \\ |x-j/n|\le \varepsilon}} P(k\in \mathcal{K}, j\in \mathcal{K})\nonumber\\
&\le E[X]+ \frac{2}{\sqrt{2\pi}}\, E[X] \sum\limits_{\ell=2}^{n(x_+-x_-)} \frac{1}{\sqrt{\ell}} \, e^{a\ell}.
\end{align*}
Note that $a_{0}:=\lim_{\varepsilon\downarrow 0}a(\alpha,\varepsilon)=1-\alpha+\log(\alpha)$ is strictly negative since $\alpha>1$. Thus, $a=a(\alpha,\varepsilon)<0$ for $\varepsilon$ small enough, so that $\frac{1}{\sqrt{\ell}} \, e^{a\ell}$ is summable.
More precisely,
$$
\frac{1}{\sqrt{2\pi}} \sum_{\ell=2}^\infty \frac{1}{\sqrt{\ell}} \, e^{a\ell}\le
\frac{1}{\sqrt{2\pi}} \int_1^\infty \frac{1}{\sqrt{\ell}} e^{a\ell} \, d\ell=\sqrt{\frac{2}{|a|}}
\frac{1}{\sqrt{2\pi}} \int_{\sqrt{2|a|}}^\infty e^{\frac{-z^2}{2}} \, dz.
$$
Recalling also that $\lim_{\varepsilon \to 0}\lim_{n\to \infty} E[X]=\frac{e^{-\alpha}}{|\alpha-1|}=:H(\alpha)$ by Proposition~\ref{pr:expectedNumber}, we deduce that
\begin{align*}
\limsup_{\varepsilon \to 0}\limsup_{n\to \infty} E[X^2]
\le H(\alpha)\left(1+2\sqrt{\frac{2}{|a_0|}}\left(1-\Phi\left(\sqrt{2|a_0|}\right)\right)\right)
\end{align*}
and combining this with~\eqref{eq:prooflimitProbLower1} yields the claim.
\end{proof}
{
"timestamp": "2019-05-30T02:05:21",
"yymm": "1806",
"arxiv_id": "1806.00817",
"language": "en",
"url": "https://arxiv.org/abs/1806.00817"
}