\section{Introduction} \begin{figure}[t] \begin{subfigure}{.32\textwidth} \centering \includegraphics[width=\linewidth]{figs/NonSmth3D.png} \end{subfigure} \begin{subfigure}{.32\textwidth} \centering \includegraphics[width=\linewidth]{figs/Lasso3D.png} \end{subfigure} \begin{subfigure}{.32\textwidth} \centering \includegraphics[width=\linewidth]{figs/NoMin3D.png} \end{subfigure} \begin{subfigure}{.32\textwidth} \centering \includegraphics[width=\linewidth]{figs/NonSmth2D.png} \end{subfigure} \begin{subfigure}{.32\textwidth} \centering \includegraphics[width=\linewidth]{figs/Lasso2D.png} \end{subfigure} \begin{subfigure}{.32\textwidth} \centering \includegraphics[width=\linewidth]{figs/NoMin2D.png} \end{subfigure} \caption{Graphical depiction of simple objective functions that have a smooth value function but either do not attain a minimum or are not differentiable at the minimizer. (Top row) The objective $ f $ is plotted against $ x $ and $ u $ for some toy examples: (left) $ \mathbb{R} \times \mathbb{R} \ni (x, u) \mapsto f(x, u) = \exp(x) + \delta_{[u, +\infty)} (x) $, (middle) $ \mathbb{R} \times [0, 4] \ni (x, u) \mapsto f(x, u) = (a x - b)^{2}/2 + \delta_{[-u, u]} (x) $ where $ a, b \neq 0 $, and (right) $ \mathbb{R} \times \mathbb{R} \ni (x, u) \mapsto f(x, u) = \exp(x) + u^{2} / 2 $. The curve $ u \mapsto (u, x^{*}(u), f(x^{*}(u), u)) $ is sketched in red in the first two examples (left and middle). No such curve is shown for the third example because the minimum w.r.t. $ x $ is not attained. The black dashed line (left and middle) shows the boundary of the feasible set in $ \mathbb{R} \times \mathbb{R} $. (Bottom row) Value function $ p $ plotted against $ u $ for the corresponding examples.} \label{fig:IntroEx} \end{figure} Given a function $ f : \mathbb{R}^{N}\times\mathbb{R}^{P} \to \mathbb{R} $ with values $ f(\bm{x}, \u) $, we consider the following parametric optimization problem: \begin{equation} \tag{$ \P $} \label{eq:PrimalProb} p(\u) \coloneqq \inf_{\bm{x} \in \mathbb{R}^{N}} f(\bm{x}, \u) \,. \end{equation} The optimal value of $ f(\cdot, \u) $, which we denote by $ p (\u) $, depends on the parameter $ \u $ and is commonly referred to as the \textit{value function} of \eqref{eq:PrimalProb} or the \textit{infimal projection} of $ f $. When the minimum is attained at some $ \bm{x}^{*}(\u) \in \mathbb{R}^{N} $ for a given $ \u \in \mathbb{R}^{P} $, the value function is given by $ p(\u) = f(\bm{x}^{*}(\u), \u) $. For many applications, quantifying the change of $ p $ with respect to $ \u $ is key, which is achieved by computing gradient or subgradient information of $ p $. This is particularly true for Machine Learning applications, in which a parametric dependency occurs naturally, for example, when solving a min-min or minimax optimization problem, in Structured Support Vector Machines \cite{TGK04, TJHA05}, Sparse Dictionary Learning \cite{MBPS10}, Generative Adversarial Networks \cite{GPM+14} and Matrix Factorization. Another important area where such derivative information is crucial is the Sensitivity Analysis of an optimization problem, which finds applications in the shadow price problem \cite[Section 4.3]{Sti18} and also in bridge crane design or breakwater modeling \cite{CMC08}. Decision-making is then based on a measure of how sensitive the model is to changes in the parameters $ \u $.
If $\bm{x}^{*}(\u)$ is available and differentiable, the gradient of $ p $ can be computed by differentiating $ p(\u) = f(\bm{x}^{*}(\u), \u) $ with respect to $ \u $, i.e., \begin{equation*} \grad{}{p}(\u) = [D_{\bm{u}} \bm{x}^{*} (\u)]^{T} \grad{\bm{x}}{f} (\bm{x}^{*}(\u), \u) + \grad{\u}{f} (\bm{x}^{*}(\u), \u) = \grad{\u}{f} (\bm{x}^{*}(\u), \u) \,, \end{equation*} where the last equality uses the optimality condition $ \grad{\bm{x}}{f} (\bm{x}^{*}(\u), \u) = 0 $. Clearly, however, this approach demands strong smoothness conditions on the parametric function $ f $ and the solution mapping $ \bm{x}^{*}(\u) $, which are not satisfied in common Machine Learning applications. Consider for example the following sparsity constrained linear regression problem: \begin{equation} \label{eq:Lasso} \min_{\bm{x}} \norm{A \bm{x}- b}_2\,,\ \mathrm{s.t.}\ \norm{\bm{x}}_1 \leq u\,, \end{equation} where $A\in \mathbb{R}^{M\times N}$, $b\in \mathbb{R}^M$, and $u\geq 0$. As a constrained optimization problem, the objective (including the constraint in terms of an indicator function) is not differentiable. The value function, however, is continuously differentiable on $ (0, \infty) $ and subdifferentiable at $ u=0 $ \cite{BF08}. As noted in \cite{BF08} and more generally in \cite{BF11, ABF13}, its gradient can be used to solve the following minimal norm problem: \begin{equation} \label{eq:LassoEqv} \min_{\bm{x}} \norm{\bm{x}}_1 \,,\ \mathrm{s.t.}\ \norm{A \bm{x} - b}_2 \leq w\,. \end{equation} The problem in \eqref{eq:Lasso} is one of many instances where the parametric function $ f $ is jointly convex in its arguments, yet algorithmic differentiation strategies based on differentiating approximations of the solution mapping cannot be applied. This is due to the fact that the boundary of the feasible set changes with $ \u $, and when the solution $ \bm{x}^{*} (\u) $ lies on the boundary for some $ \u $, the subdifferential of $ f $ with respect to $ \u $ at $ (\bm{x}^{*}(\u), \u) $ is a shifted non-trivial cone, hence, in particular, not single-valued. We explain this phenomenon more concretely in \sref{prob:DirDiff} (see also \fref{fig:IntroEx}). As a remedy, we invoke standard results from convex duality for the function $f$ to derive the above-mentioned differentiability property of the value function for a large class of optimization problems including \eqref{eq:Lasso}. In fact, beyond differentiability, we explore the formula \begin{equation*} \subdiff{}{p} (\u) = \arg\max_{\bm{y} \in \mathbb{R}^{P}} \innerprod{\u}{\bm{y}} - f^{*}(0, \bm{y}) \,, \end{equation*} which expresses the convex (Fr\'echet) subdifferential of the value function $p$ at $\u$ as the set of solutions to a certain optimization problem that depends on the convex conjugate $f^{*}$ of $f$. For a function $f$ that is jointly convex in $(\bm{x},\u)$, the validity of the formula is asserted under the weak assumptions that $p(\u)$ is finite (i.e., the infimum of $f(\cdot,\u)$ in \eqref{eq:PrimalProb} is finite) and that $ \u \in \ri (\dom{p}) $ lies in the relative interior of the domain of $p$. Therefore, in these situations, the problem of differentiating the value function $p$ is equivalent to solving a convex optimization problem, which allows us to exploit the large literature on convex optimization algorithms. Since single-valuedness of the subdifferential of $p$ implies differentiability, strict convexity of $\bm{y}\mapsto f^{*}(0, \bm{y})$, for example, implies differentiability of $p$ without the need for $f(\bm{x},\u)$ to be differentiable.
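As a quick sanity check of this formula, consider the first toy example of \fref{fig:IntroEx}, that is, $ f(x, u) = \exp(x) + \delta_{[u, +\infty)} (x) $ with value function $ p(u) = \exp(u) $; the following is a sketch of the direct computation. For $ y < 0 $ the supremum defining $ f^{*}(0, y) $ is $ +\infty $, while for $ y \geq 0 $ it is attained at $ u = x $ and \begin{equation*} f^{*}(0, y) = \sup_{x \in \mathbb{R}} \sup_{u \leq x} \big( y u - \exp(x) \big) = \sup_{x \in \mathbb{R}} \big( y x - \exp(x) \big) = y \log y - y \end{equation*} with the convention $ 0 \log 0 = 0 $. The concave maximization of $ u y - f^{*}(0, y) $ over $ y $ is then solved by $ y = \exp(u) $, i.e., \begin{equation*} \arg\max_{y \in \mathbb{R}} \, u y - f^{*}(0, y) = \{ \exp(u) \} = \{ p^{\prime}(u) \} \,, \end{equation*} recovering the derivative of the value function although $ f $ is not differentiable at $ (x^{*}(u), u) $ for any $ u $.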
Therefore, our approach allows us to compute the variation (gradient) of the value function $p$ in situations where commonly used direct differentiation strategies, for example those based on automatic differentiation, cannot be applied. Nevertheless, even if the parametric function $f$ is sufficiently smooth, the flexibility to apply various (optimal) convex optimization algorithms for computing this derivative information compares favorably with those direct differentiation strategies. For the large class of optimization problems that we consider, we summarize algorithms with their convergence guarantees based on the properties of the objective function. \paragraph{Remark.} Differentiation of the value function $p$ in \eqref{eq:PrimalProb} is not to be confused with differentiating the optimal solution mapping $\bm{x}^{*}(\u)$ with respect to $\u$. Besides its use in computing the automatic and implicit gradient estimators, the argmin derivative is used in optimization layers, that is, neural networks whose output is given by solving an optimization problem \cite{AK17, AAB+19}. It is also required in bilevel optimization \cite{DKPK15}, the best-known application of which is gradient-based hyperparameter optimization or parameter learning \cite{Dom12, KP13, DVFP14}. Another problem similar to ours is the differentiation of a function $ g(\bm{x}, \u) $ with respect to the parameter $ \u $ evaluated at a solution $ \bm{x}^{*}(\u) $ of a system of parametric nonlinear equations $ h(\bm{x}, \u) = 0 $, where $ g : \mathbb{R}^{N} \times \mathbb{R}^{P} \to \mathbb{R}^{M} $ and $ h : \mathbb{R}^{N} \times \mathbb{R}^{P} \to \mathbb{R}^{N} $ satisfy some regularity conditions. We can also replace the function $ g $ with a functional and the non-linear system with a parametric ordinary differential equation. The two problems are related (but not equivalent) because when $ f $ in \eqref{eq:PrimalProb} is continuously differentiable and has a minimum $ \bm{x}^{*}(\u) $ in $ \bm{x} $ for a given $ \u $, then $ h = \grad{\bm{x}}{f} $ and $ g = f $ with $ M = 1 $, and our goal is to differentiate $ p(\u) = f(\bm{x}^{*}(\u), \u) $. To differentiate $ g(\bm{x}^{*}(\u), \u) $ with respect to $ \u $, we can make use of Piggyback differentiation \cite{GF03} or the Adjoint-state method \cite{Pon61, PLA18}. These techniques find their use in solving constrained optimization problems (where constraints are often given as ODEs or PDEs) with various applications in Geophysics \cite{Ple06}, Medicine \cite{KFTC12} and Neural Networks \cite{CRBD18}. \section{Problem Setting} \label{sec:PS} We consider parametric optimization problems of type \eqref{eq:PrimalProb} and seek to compute $\nabla p(\u)$, i.e., the variation of the value function $p$ with respect to $\u$. One of our major goals is to characterize the properties of $f$ under which various numerical differentiation strategies with theoretical convergence guarantees can be used. We emphasize differentiation strategies based on iterative algorithms and provide convergence rates. First, in \sref{prob:AAIG}, we recall the most widely used approaches for smooth parametric functions $f$, and demonstrate their limitations on several examples in \sref{prob:DirDiff}. Then, in \sref{sec:DualGrad}, as a remedy for such situations, we leverage a well-known result from convex duality theory for the numerical estimation of the variation of the value function, which allows us to identify problem classes with guaranteed convergence rates.
\subsection{Analytical, Automatic and Implicit Gradient Estimator} \label{prob:AAIG} Ablin et al.~\cite{APM20} analyze three different methods for the iterative approximation of derivatives of smooth parametric functions $f$, provide convergence rates, and highlight a super-efficiency phenomenon for the automatic differentiation strategy. We recall their results. Let $f$ be twice continuously differentiable on $ \mathbb{R}^{N}\times\mathbb{R}^{P} $ and $ \bm{x}^{*}(\u) $ be the unique minimizer for every $ \u \in \mathbb{R}^{P} $ such that $ \hess{\bm{x}}{f} (\bm{x}^{*}(\u), \u)$ is positive definite. From the Implicit Function Theorem, we derive that $ \bm{x}^{*} : \mathbb{R}^{P} \to \mathbb{R}^{N} $ is continuously differentiable with derivative $ D_{\bm{u}}\bm{x}^{*}(\u) = \varphi (\bm{x}^{*} (\u), \u) $, where we define the mapping $ \varphi : \mathbb{R}^{N}\times\mathbb{R}^{P} \to \mathbb{R}^{N \times P} $ as: \begin{equation} \label{eq:Dmin} \varphi (\bm{x}, \u) = -[\hess{\bm{x}}{f} (\bm{x}, \u)]^{-1} \grad{\bm{x}\u}{f} (\bm{x}, \u)\,. \end{equation} The value function and its gradient are then given by: \begin{equation*} p(\u) = f (\bm{x}^{*}(\u), \u)\ \mathrm{and}\ \grad{}{p}(\u) = \grad{\u}{f} (\bm{x}^{*}(\u), \u) \,. \end{equation*} The expression for $ \grad{}{p} $ follows from the chain rule and the optimality condition $ \grad{\bm{x}}{f} (\bm{x}^{*}(\u), \u) = 0 $. The minimizer $ \bm{x}^{*} (\u) $ is estimated by an iterative optimization method which yields a sequence $ \seq{\bm{x}}{k} $ with limit $ \bm{x}^{*}(\u) $. In a realistic setting, such a process is terminated after $ K $ iterations to yield a so-called sub-optimal solution $ \bm{x}^{(K)}(\u) $ for each $ \u $. To compute $ \grad{}{p} $, we either substitute $ \bm{x}^{(K)}(\u) $ in place of $ \bm{x}^{*}(\u) $ in the expression for $ \grad{}{p} $ to obtain the \textbf{analytic gradient estimator}: \begin{equation} \tag{AnG} \label{eq:AnG} \bm{g}^{(K)}_{1} (\u) \coloneqq \grad{\u}{f} (\bm{x}^{(K)}, \u) \end{equation} or in the expression for $ p $ and then differentiate it with respect to $ \u $, assuming that the sequence $ \seq{\bm{x}}{k} $ is differentiable, meaning that the mapping between successive iterates and the dependence on the parameter $\u$ are differentiable, giving us the \textbf{automatic gradient estimator}: \begin{equation} \tag{AuG} \label{eq:AuG} \bm{g}^{(K)}_{2} (\u) \coloneqq [D_{\bm{u}}\bm{x}^{(K)}]^{T} \grad{\bm{x}}{f} (\bm{x}^{(K)}, \u)\ + \grad{\u}{f} (\bm{x}^{(K)}, \u) \,. \end{equation} The term $ D_{\bm{u}}\bm{x}^{(K)} $ is an estimator of $ D_{\bm{u}}\bm{x}^{*} $ and is obtained by applying automatic differentiation to $ \bm{x}^{(K)} $, hence the name. Using the expression in \eqref{eq:Dmin} to estimate $ D_{\bm{u}}\bm{x}^{*} $ yields the \textbf{implicit gradient estimator}, i.e., \begin{equation} \tag{IG} \label{eq:IG} \bm{g}^{(K)}_{3} (\u) \coloneqq [\varphi (\bm{x}^{(K)}, \u)]^{T} \grad{\bm{x}}{f} (\bm{x}^{(K)}, \u)\ + \grad{\u}{f} (\bm{x}^{(K)}, \u) \,. \end{equation} Ablin et al.~\cite{APM20} provide the following error bounds for \eqref{eq:AnG}, \eqref{eq:AuG} and \eqref{eq:IG}. \begin{theorem} \label{thm:AAIG} Let $ D \coloneqq D_{x} \times D_{u} \subset \mathbb{R}^{N}\times\mathbb{R}^{P} $ be compact and $ f $ be $ m $-strongly convex with respect to $ \bm{x} $ and twice differentiable over $ D $ with second derivatives $ \grad{\bm{x}\u}{f} $ and $ \hess{\bm{x}}{f} $ respectively $ L_{xu} $- and $ L_{xx} $-Lipschitz continuous.
Then the first derivatives $ \grad{\u}{f} $ and $ \grad{\bm{x}}{f} $ are respectively $ L_{u} $- and $ L_{x} $-Lipschitz continuous and for $ \bm{x}^{(k)} $ produced by $ \bm{x}^{(k+1)} \coloneqq \bm{x}^{(k)} - \tau \grad{\bm{x}}{f} (\bm{x}^{(k)}, \u) $ with $ \tau \leq 1/L_{x} $ and $ \omega \coloneqq 1 - m\tau $, the following statements hold: \begin{enumerate}[label=\textnormal{(\alph*)}] \item The analytic estimator converges and we have: \begin{equation*} \norm{\bm{g}^{(K)}_{1} - \grad{}{p} (\u)}_{2} \leq L_{x} \norm{\bm{x}^{(0)} - \bm{x}^{*} (\u)}_{2} \omega^{K} \end{equation*} \item The automatic estimator converges and for $ C_{k} \coloneqq \tau (L_{x} k + \omega/2) (L_{xu} + L_{1}L_{xx}) $ with $ \norm{D_{\bm{u}}\bm{x}^{(k)}}_{2} \leq L_{1} $ we have: \begin{equation*} \norm{\bm{g}^{(K)}_{2} - \grad{}{p} (\u)}_{2} \leq C_{K} \norm{\bm{x}^{(0)} - \bm{x}^{*} (\u)}_{2} \omega^{2K - 1} \end{equation*} \item The implicit estimator converges and for $ C \coloneqq (L_{xu} + L_{1}L_{xx}) / 2 + L_{2}L_{x} $ with $ \norm{\varphi (\bm{x}^{(K)}, \u)}_{2} \leq L_{2} $ we have: \begin{equation*} \norm{\bm{g}^{(K)}_{3} - \grad{}{p} (\u)}_{2} \leq C \norm{\bm{x}^{(0)} - \bm{x}^{*} (\u)}_{2} \omega^{2K} \end{equation*} \end{enumerate} \end{theorem} \tref{thm:AAIG} shows faster convergence of the automatic and implicit estimators compared to the analytic estimator. The automatic estimator is more stable than the implicit estimator, as demonstrated experimentally in \cite{APM20}. This makes the automatic method a strong contender for estimating $ \grad{}{p} $. It is also not computationally expensive thanks to reverse-mode automatic differentiation. The memory overhead is overcome by discarding the iterates $ \bm{x}^{(k)} $ for $ k = 0, \dots, K-1 $ and using only $ \bm{x}^{(K)} $ in all calculations of the backward pass \cite{Chr94, MO20}. Ablin et al.~\cite{APM20} also study these methods under weaker conditions, for instance, when $ f(\cdot, \u) $ is $ \mu $-{\L}ojasiewicz \cite{AB09}, which generalizes strong convexity. However, their results depend on strong smoothness assumptions on $ f $. \subsection{Problems with Direct Differentiation} \label{prob:DirDiff} Obviously, the settings for which Ablin et al.~\cite{APM20} provide convergence rate guarantees are quite limited. We would like to emphasize the fact that differentiability of the parametric function $f$ is not required for that of the value function $ p $. In this section, we show with simple examples that requirements such as differentiability of the objective and existence of the minimizer are the key limitations of the above methods. \begin{example} \label{exmp:NonSmthToy} Let $ f : \mathbb{R} \times \mathbb{R} \to \mathbb{R} $ be defined as: \begin{equation*} f (x, u) = \exp(x) + \delta_{[u, +\infty)} (x) \,, \end{equation*} then \eqref{eq:AnG} and \eqref{eq:IG} fail to converge for all $ u \in \mathbb{R} $ while \eqref{eq:AuG} converges only when $ x^{(k)} > u $ for all $ k $ (eventually) and $ D_{u} x^{(k)} $ converges to $ D_{u} x^{*}(u) $. \end{example} \paragraph{Detail.} $ f $ is jointly convex in $ x $ and $ u $ and, for all $ u \in \mathbb{R} $, $ f(\cdot, u) $ is $ \exp(u) $-strongly convex, and $ x^{*} : \mathbb{R} \to \mathbb{R} $ and $ p : \mathbb{R} \to \mathbb{R} $, given by $ x^{*}(u) = u $ and $ p(u) = \exp(u) $ respectively, are continuously differentiable on $ \mathbb{R} $. On the other hand, $ f $ is differentiable neither with respect to $ x $ nor with respect to $ u $ at $ (x^{*}(u), u) $ for any $ u \in \mathbb{R} $.
To see why $ f $ is not differentiable with respect to $ u $, note that $ f $ can alternatively be written as $ f(x, u) = \exp(x) + \delta_{(-\infty, x]} (u) $. The subdifferential of $ f $ with respect to $ u $ is $ \mathbb{R}_{+} $ when $ x = u $ and $ \{ 0 \} $ when $ x > u $. Thus, when $ x^{(k)} = u $ for some $ k \in \mathbb{N} $, none of the above methods is useful. If $ x^{(k)} > u $ for all $ k $, we get $ g^{(k)}_{1} (u) = 0 $ and $ g^{(k)}_{3} (u) = 0 $ since $ \partial f / \partial u (x^{(k)}, u) = 0 $ and $ \partial^{2} f / \partial u^{2} (x^{(k)}, u) = 0 $. The automatic estimator is given by $ g^{(k)}_{2} (u) = D_{u}x^{(k)} (u) \exp (x^{(k)}) $ because $ \partial f / \partial x (x^{(k)}, u) = \exp (x^{(k)}) $. Since $ x^{(k)} \to x^{*}(u) = u $, it converges to $ p^{\prime} (u) = \exp (u) $ only if $ D_{u}x^{(k)} (u) $ converges to $ D_{u}x^{*} (u) = 1 $. The convergence of $ D_{u} x^{(k)} (u) $ to $ D_{u} x^{*} (u) $ in \exref{exmp:NonSmthToy} is possible only under limited conditions which we do not establish here. The next example considers the non-smooth parametric objective of \eqref{eq:Lasso} in an analogous $ 1 $D setting. \begin{example} \label{exmp:NonSmthLasso} Let $ f : \mathbb{R} \times \mathbb{R} \to \mathbb{R} $ be defined as: \begin{equation*} f (x, u) = \frac{1}{2} (a x - b)^{2} + \delta_{[-u, u]} (x) \,, \end{equation*} where $ a, b \in \mathbb{R} \backslash \{0\} $, then for all $ u \in (0, \abs{b/a}) $, \eqref{eq:AnG} and \eqref{eq:IG} fail to converge while \eqref{eq:AuG} converges only when $ x^{(k)} \in (-u, u) $ for all $ k $ (eventually) and $ D_{u} x^{(k)} $ converges to $ D_{u} x^{*}(u) $. \end{example} \paragraph{Detail.} $ f $ is jointly convex in $ x $ and $ u $ and, for all $ u \in (0, \abs{b/a}) $, $ f(\cdot, u) $ is $ a^{2} $-strongly convex and we have $ x^{*}(u) = \sgn{b/a} u $ and $ p(u) = (a x^{*} (u) - b)^{2}/2 $. Since $ f(x, u) = (a x - b)^{2}/2 + \delta_{[\abs{x}, +\infty)} (u) $ and $ \subdiff{u}{f} (x, u) = N_{[\abs{x}, +\infty)} (u) $, $ f $ is not differentiable with respect to $ u $ at $ (x^{*}(u), u) $ for any $ u \in (0, \abs{b/a}) $. Given a sequence $ x^{(k)} \in (-u, u) $ with limit $ x^{*}(u) $, we have $ g^{(k)}_{1} = 0 $ and $ g^{(k)}_{3} = 0 $ because $ \partial f / \partial u (x^{(k)}, u) = 0 $ and $ \partial^{2} f / \partial u^{2} (x^{(k)}, u) = 0 $. The automatic estimator $ g^{(k)}_{2} (u) = D_{u}x^{(k)} (u)\, a (a x^{(k)} - b) $ converges to $ p^{\prime} (u) = D_{u}x^{*} (u)\, a (a x^{*}(u) - b) $ when $ D_{u}x^{(k)} (u) $ converges to $ D_{u}x^{*} (u) $. \begin{example} \label{exmp:NoMin} Let $ f : \mathbb{R} \times \mathbb{R} \to \mathbb{R} $ be defined as: \begin{equation*} f(x, u) = \exp(x) + \frac{1}{2} u^{2} \,, \end{equation*} then \eqref{eq:AnG}, \eqref{eq:AuG} and \eqref{eq:IG} fail to converge for all $ u \in \mathbb{R} $. \end{example} \paragraph{Detail.} This case is obvious because $ \inf_{x} \exp(x) = 0 $ with the infimum not attained, giving us $ p(u) = u^{2} / 2 $ and $ \argmin_{x} f(x, u) = \emptyset $. While one may still use the methods of \cite{APM20} or cvxpylayers \cite{AK17, AAB+19} to efficiently estimate $ \grad{}{p} $ in situations like those presented above, it should be noted that differentiating the solution mapping is not a strictly more general approach than differentiating the value function. This is due to the failure of the chain rule when evaluating $ \nabla_{\u} [f(\bm{x}^{*}(\u), \u)] $ for non-smooth functions in general \cite{BP20}.
(The concept of a subgradient is not defined for a vector-valued non-smooth function and must be replaced by graphical derivatives and coderivatives; see \cite[Section 9.D]{RW98} and the chain rule in \cite[Theorem 10.49]{RW98}.) This calls for a theoretically justified approach for estimating $ \grad{}{p} $ beyond those which are currently available \cite{APM20, AK17, AAB+19}. \section{Dual Gradient Estimator} \label{sec:DualGrad} The discussion in the previous section suggests that a different method is needed which does not rely on directly differentiating the parametric objective function $ f $. Trading the differentiability assumption for a joint convexity assumption on $f$ in $(\bm{x},\u)$, we invoke the powerful machinery of convex duality to compute derivative information of the value function $p$ in cases beyond differentiability of $f$. Moreover, the same statement provides an expression for the convex subdifferential of $p$. Denoting the convex conjugate of a function $p$ by \[ p^*(\bm{y}) := \sup_{\u} \innerprod{\bm{y}}{\u} - p(\u) \,, \] and its biconjugate by $p^{**}:=(p^*)^*$, the following result can be derived when strong duality, i.e., $p^{**}=p$, holds. The dual of the problem defined in \eqref{eq:PrimalProb} is given by: \begin{equation} \tag{$ \mathcal{D} $} \label{eq:DualProb} p^{**}(\u) = \sup_{\bm{y} \in \mathbb{R}^{P}} \innerprod{\u}{\bm{y}} - f^{*}(0, \bm{y}) \,, \end{equation} and, for $ \u \in \ri{(\dom{p})} $, we have \begin{equation*} \subdiff{}{p} (\u) = \arg\max_{\bm{y} \in \mathbb{R}^{P}} \innerprod{\u}{\bm{y}} - f^{*}(0, \bm{y}) \,. \end{equation*} When $ \subdiff{}{p} (\u) $ is single-valued, $ p $ is differentiable at $ \u $, and therefore solving \eqref{eq:DualProb} yields the gradient of the value function, which does not require differentiability of $ f $. These results rely on the following standard convex duality result, which we state following \cite[Theorem~4.1]{Dru20} and \cite[Section~11.H]{RW98}. \begin{theorem} \label{thm:Main} For $ \mathcal{X} \subset \mathbb{R}^{N} $ and $ \mathcal{U} \subset \mathbb{R}^{P} $, let $ f : \mathcal{X} \times \mathcal{U} \to \overline{\mathbb{R}} $ be a proper, lower semi-continuous and convex function. Then the following hold for all $ \u \in \mathcal{U} $: \begin{enumerate}[label=\textnormal{(\alph*)}] \item \textbf{Weak Duality:} $ p^{**}(\u) \leq p(\u) $. \item \textbf{Subdifferential:} If $ p(\u) $ is finite, then \begin{equation} \label{eq:thm:Subd} \partial p(\u) \subset \argmax_{\bm{y} \in \mathbb{R}^{P}} \innerprod{\u}{\bm{y}} - f^{*}(0, \bm{y}) \,. \end{equation} If, in addition, the inclusion $ \u \in \ri (\dom{p}) $ holds, then \eqref{eq:thm:Subd} holds with equality. \item \textbf{Strong Duality:} If the subdifferential $ \partial p(\u) $ is nonempty, then the equality $ p^{**}(\u) = p(\u) $ holds and the supremum in $ p^{**}(\u) $ is attained. \end{enumerate} \end{theorem} Therefore, \eqref{eq:DualProb} is key for computing the variation of $p$ with respect to $\u$. Our goal is reduced to the problem of solving \eqref{eq:DualProb}, for which the machinery of convex optimization can be invoked to state algorithms and convergence rates. Moreover, in contrast to the automatic differentiation strategy (backpropagation), there is no need to store the iterates, which dramatically reduces the memory requirements.
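To illustrate \tref{thm:Main}, we revisit \exref{exmp:NonSmthLasso}; the following is a sketch of the direct computation, where we assume $ a, b > 0 $ for concreteness. For $ y > 0 $ the supremum defining $ f^{*}(0, y) $ is $ +\infty $, while for $ -ab \leq y \leq 0 $ it is attained at $ u = \abs{x} $ and \begin{equation*} f^{*}(0, y) = \sup_{x \in \mathbb{R}} \Big( y \abs{x} - \frac{1}{2} (a x - b)^{2} \Big) = \frac{b}{a} y + \frac{y^{2}}{2 a^{2}} \end{equation*} (for $ y < -ab $ one finds the constant value $ -b^{2}/2 $, which produces no maximizer for $ u > 0 $). Hence, for $ u \in (0, b/a) $, the concave maximization in \eqref{eq:thm:Subd} is uniquely solved by the stationary point $ y = a (a u - b) \in (-ab, 0) $, that is, \begin{equation*} \subdiff{}{p} (u) = \{ a (a u - b) \} = \{ p^{\prime}(u) \} \,, \end{equation*} in accordance with $ x^{*}(u) = u $ and $ p(u) = (a u - b)^{2}/2 $ on this interval.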
Let $ \seq{\bm{y}}{K} $ be a sequence generated by an algorithm for solving \eqref{eq:DualProb}. We call the resulting estimate the \textbf{dual gradient estimator}: \begin{equation} \tag{DG} \label{eq:DG} \bm{g}^{(K)}_{4} (\u) = \bm{y}^{(K)} \,. \end{equation} The dual estimator is computationally efficient: apart from solving the optimization problem itself, no additional gradient or Hessian terms have to be computed. The computational expenses depend only on the method used to solve the problem and its rate of convergence. Such an estimator also does not have a memory overhead such as storing the iterates $ \seq{\bm{y}}{k} $. \subsection{A Large Class of Parametric Optimization Problems} As an application of our approach, we consider the following class of parametric optimization problems \begin{equation} \tag{$\mathcal{R}_{f}$} \label{eq:Roc:obj} f(\bm{x}, \u) = \innerprod{\c}{\bm{x}} + h(\b - A\bm{x} + \u) + k(\bm{x}) \,, \end{equation} where $ h : \mathbb{R}^{P} \to \overline{\mathbb{R}} $ and $ k : \mathbb{R}^{N} \to \overline{\mathbb{R}} $ are proper, lower semi-continuous and convex, $ A : \mathbb{R}^{N} \to \mathbb{R}^{P} $ is a linear map and $ (\c, \b) \in \mathbb{R}^{N}\times\mathbb{R}^{P} $. The convex conjugate of $ f $ is given by: \begin{equation*} f^{*}(\v, \bm{y}) = -\innerprod{\b}{\bm{y}} + k^{*}(A^{*}\bm{y} - \c + \v) + h^{*}(\bm{y}) \,, \end{equation*} which yields the conjugate of the value function as: \begin{equation} \tag{$\mathcal{R}_{p^{*}}$} \label{eq:Roc:vfc} p^{*}(\bm{y}) = -\innerprod{\b}{\bm{y}} + k^{*}(A^{*}\bm{y} - \c) + h^{*}(\bm{y}) \,. \end{equation} Therefore, in order to compute the variation of $p$ with respect to $\u$ by using \eqref{eq:thm:Subd}, we must solve a problem of the form: \begin{equation} \label{eq:dual-grad-estimator-problem} \min_{\bm{y}\in\mathbb{R}^{P}} k^{*}(A^{*}\bm{y} - \c) + h^{*}(\bm{y}) - \innerprod{\b + \u}{\bm{y}} \,. \end{equation} In the following section, depending on the properties of $k$, $h$, and $A$, we provide algorithms and convergence rates for solving \eqref{eq:dual-grad-estimator-problem} and, hence, for approximating the variation of the value function $p$. As a generic algorithm for solving \eqref{eq:dual-grad-estimator-problem}, we mention the Primal--Dual Hybrid Gradient algorithm of Chambolle and Pock~\cite{CP11}. A sufficient condition for uniqueness of the solution of \eqref{eq:dual-grad-estimator-problem} is strong convexity of $ h^{*} $, which follows from Lipschitz continuity of $ \grad{}{h} $. For a weaker condition, we state the following result: \begin{proposition} Let $ h, k, A $ and $ \c $ in \eqref{eq:Roc:obj} be such that $ h $ is differentiable on $ \intr{(\dom{h})} $ and there exists $ (\bm{x}, \u) \in \dom{k} \times \intr{(\dom{h})} $ with $ A^{*} \grad{}{h} (\u) - \c \in \subdiff{}{k} (\bm{x}) $; then $ \subdiff{}{p} (\u) = \{ \grad{}{p} (\u) \} $ is single-valued for all $ \u \in \ri (A\dom{k} + \dom{h} - \b) $. \end{proposition} \begin{proof} The condition $ A^{*} \grad{}{h} (\u) - \c \in \subdiff{}{k} (\bm{x}) $ guarantees the existence of some $ \bm{y} \in \dom{h^{*}} $ with $ A^{*} \bm{y} - \c \in \dom{k^{*}} $. In this case, the expressions $ h^{*}(\bm{y}) $ and $ k^{*}(A^{*} \bm{y} - \c) $ are finite-valued and $ \dom{p^{*}} $ is non-empty.
Since $ p $ is proper, lower semi-continuous and convex \cite[Theorem~3.101]{Hoh19}, for every $ \u \in \ri (\dom p) $ with $ \dom{p} = A\dom{k} + \dom{h} - \b $ \cite[Example~11.41]{RW98}, $ \subdiff{}{p} (\u) $ is non-empty \cite[Theorem~23.4]{Roc70}. The single-valuedness of $ \subdiff{}{p} (\u) $ then follows from the strict convexity of $ h^{*} $ (see \lref{lem:basic}\ref{basic:strict:diff}). \end{proof} To see how this works, we consider the example where $ h = \norm{\cdot}_{2}^{2}/2 $ and $ k = \lambda\norm{\cdot}_{2}^{2}/2 + \gamma \norm{\cdot}_{1} $ for $ \lambda > 0 $ and $ \gamma \geq 0 $. By choosing $ \u = 0 $ and $ \bm{x} $ as: \begin{equation*} \bm{x}_{i} = \begin{cases} (-\c_{i} - \gamma)/\lambda & \text{if } \c_{i} < -\gamma \\[5pt] 0 & \text{if } -\gamma \leq \c_{i} \leq \gamma \\[5pt] (-\c_{i} + \gamma)/\lambda & \text{if } \gamma < \c_{i} \,, \end{cases} \end{equation*} we observe that $ A^{*}\grad{}{h}(\u) - \c = -\c \in \subdiff{}{k} (\bm{x}) $. Parametric optimization problems of the form \eqref{eq:Roc:obj} are ubiquitous in Machine Learning, Computer Vision, and Signal Processing. In Signal and Image Processing, the parameter $ \u $ represents the observed variable, while the mapping $ A $ represents the operation performed on the optimal hidden variable $ \bm{x} $ (which is to be determined) to obtain the observed variable. In Machine Learning, $ \u $ is the target or label vector, $ A $ represents the feature matrix obtained from the independent variable, and $ \bm{x} $ denotes the weights of the mapping to be learned which fits the training set $ (A, \u) $. In this model, $ h $ measures the dissimilarity between $ A\bm{x} $ and $ \u $. The second term puts a penalty on $ \bm{x} $ and therefore encodes prior information on the optimal $ \bm{x} $, which is necessary when $ P < N $. In many applications like supervised learning, image denoising and segmentation, $ N \leq P $, while in those like compressed sensing and deconvolution, $ N > P $. Beyond the above applications, \eqref{eq:Roc:obj} also generalizes the classical infimal convolution \cite{Roc70}. For example, such expressions occur in Image Processing applications in the context of regularization via Total Generalized Variation \cite{BKP10}. Moreover, the Moreau envelope \cite{RW98} of a non-smooth function is of the presented form; it is employed for solving non-smooth optimization problems and is key to the interpretation of many convex optimization algorithms such as proximal splitting methods \cite{LM79, CP11a}. Also, the penalty approaches for approximating the minimization of $ f(\bm{x})+g(\bm{x}) $ via $ \min_{\bm{x}} \big( f(\bm{x}) + \min_{\bm{z}} \big( g(\bm{z}) + \tfrac{1}{2}\norm{\bm{x}-\bm{z}}^{2} \big) \big) $ have the same form, which shows relations to alternating minimization approaches and has been employed in real-world machine learning problems \cite{LWC19}. \subsection{Rate of Convergence} \label{DG:RoC} By invoking convex duality, as described in the previous section, the computation of the value function's variation is reduced to solving problems of type \eqref{eq:dual-grad-estimator-problem}, for which a large literature of optimization algorithms is available for several special cases.
We consider the following situations: \begin{enumerate}[label=\textnormal{(\alph*)}] \item Let \eqref{eq:dual-grad-estimator-problem} be a quadratic problem with matrix $ Q \in \mathbb{R}^{P \times P} $ and let $ L = \lambda_{\max} (Q) $ and $ m = \lambda_{\min} (Q) $; then \eqref{eq:DG} computed by the conjugate gradient method converges like $ \mathcal{O} (\omega^{K}) $ with $ \omega \coloneqq (\sqrt{L} - \sqrt{m})/(\sqrt{L} + \sqrt{m}) $ \cite[Section 1.6]{Ber99}. \textbf{Discussion.} Depending on whether $ P $ is smaller (resp. larger) than $ N $, this rate is better (resp. worse) than those provided in \tref{thm:AAIG} (see the first column of \fref{fig:Exp} for a comparison). \item Let $ h^{*} $ be possibly non-smooth with an efficiently computable proximal mapping and let $ k^{*} \circ A^{*} $ have an $ L $-Lipschitz continuous gradient; then the following are true when solving \eqref{eq:dual-grad-estimator-problem} by proximal gradient descent (ISTA) and its accelerated variant (FISTA): \begin{itemize} \item \eqref{eq:DG} converges with ISTA \cite[Theorem~4.9]{CP16} and FISTA \cite[Theorem~3]{CD15} to $ \grad{}{p} (\u) $. \item If $ h^{*} $ and $ k^{*} \circ A^{*} $ are strongly convex with parameters $ \delta \geq 0 $ and $ \gamma \geq 0 $ and $ \mu = \delta + \gamma > 0 $, \eqref{eq:DG} converges to $ \grad{}{p} (\u) $ like $ \mathcal{O} (\omega_{1}^{K}) $ with ISTA \cite[Theorem~4.9]{CP16} and like $ \mathcal{O} (\omega_{2}^{K}) $ with FISTA \cite[Theorem~4.10]{CP16}, where $ \omega_{1} = (1 - \tau \gamma)/(1 + \tau \delta) $ and $ \omega_{2} = 1 - \sqrt{\tau \mu / (1 + \tau \delta)} $. \end{itemize} \label{enum:(F)ISTA} \textbf{Discussion.} This general setting is beyond the theory that is provided by \tref{thm:AAIG}. \item Let $ h^{*} $ and $ k $ be possibly non-smooth with efficiently computable proximal mappings; then the following are true when solving \eqref{eq:dual-grad-estimator-problem} by the Primal--Dual Hybrid Gradient algorithm with $ L = \norm{A^{*}} $: \begin{itemize} \item \eqref{eq:DG} converges to $ \grad{}{p} (\u) $ \cite[Theorem~5.1]{CP16}. \item If either $ h^{*} $ or $ k $ is strongly convex, then \eqref{eq:DG} converges to $ \grad{}{p} (\u) $ like $ \mathcal{O} (1/K^{2}) $ \cite[Theorem~2]{CP11}. \item If $ h^{*} $ and $ k $ are both strongly convex with parameters $ \delta $ and $ \gamma $ respectively, then \eqref{eq:DG} converges to $ \grad{}{p} (\u) $ like $ \mathcal{O} (\omega^{K/2}) $ with $ \omega = (1 + \theta)/(2 + \mu) $ and $ \mu = 2\sqrt{\delta\gamma} / L $ \cite[Theorem~3]{CP11}. \end{itemize} \label{enum:PDHG} \textbf{Discussion.} Similarly, this setting is more general than that of \tref{thm:AAIG}. \end{enumerate} \paragraph{Remark.} For non-strongly convex settings in \eqref{eq:dual-grad-estimator-problem}, the sequence of values $ p^{*}(\bm{y}^{(K)}) - \innerprod{\u}{\bm{y}^{(K)}} $ converges like $ \mathcal{O} (1/K) $ with ISTA \cite[Theorem~4.9]{CP16} and PDHG \cite[Theorem~1]{CP11} and like $ \mathcal{O} (1/K^{2}) $ with FISTA \cite[Theorem~4.4]{BT09}, i.e., we have a potentially accelerated rate of convergence of the objective values. However, this rate does not directly translate into a convergence rate of the iterates and, hence, into a rate for the convergence of the dual gradient estimator. Such a conclusion requires additional properties of the optimization problem, such as local strong convexity, error bounds, or growth conditions \cite{AB09, FGP15, DL18}.
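To make the procedure concrete, the following is a minimal Python sketch (our own illustration, assuming only NumPy; it is not code from the cited works) of the dual gradient estimator \eqref{eq:DG} for a smooth instance of \eqref{eq:Roc:obj} with $ h = \norm{\cdot}_{2}^{2}/2 $, $ k = \lambda\norm{\cdot}_{2}^{2}/2 $, $ \b = 0 $ and $ \c = 0 $, i.e., ridge regression. In this case, \eqref{eq:dual-grad-estimator-problem} is the quadratic problem of situation (a) above, which we solve here by plain gradient descent for simplicity: \begin{verbatim}
import numpy as np

# Minimal sketch of the dual gradient estimator (DG) for
# f(x, u) = 0.5*||u - A x||^2 + 0.5*lam*||x||^2, i.e.
# h = 0.5*||.||^2, k = 0.5*lam*||.||^2, b = 0, c = 0 in (R_f).
rng = np.random.default_rng(0)
N, P, lam = 50, 30, 2.0
A = rng.standard_normal((P, N))
u = rng.standard_normal(P)

# Dual objective (R_{p*}): p*(y) = ||A^T y||^2/(2 lam) + 0.5*||y||^2,
# so grad p(u) = argmin_y p*(y) - <u, y>, a quadratic problem with
# Hessian Q, solved by gradient descent with the optimal step size.
Q = A @ A.T / lam + np.eye(P)
evals = np.linalg.eigvalsh(Q)
tau = 2.0 / (evals[-1] + evals[0])
y = np.zeros(P)
for _ in range(250):
    y = y - tau * (Q @ y - u)      # iterate y^(K) of the DG estimator

# Sanity checks: the closed-form solution Q^{-1} u, and the analytic
# expression grad p(u) = u - A x*(u), x*(u) = (A^T A + lam I)^{-1} A^T u.
g_exact = np.linalg.solve(Q, u)
x_star = np.linalg.solve(A.T @ A + lam * np.eye(N), A.T @ u)
print(np.linalg.norm(y - g_exact), np.linalg.norm(y - (u - A @ x_star)))
\end{verbatim} For non-smooth $ h^{*} $ or $ k^{*} \circ A^{*} $, the gradient step would be replaced by the corresponding proximal step (ISTA/FISTA) or by PDHG iterations.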
In order to assess the potential of the dual gradient approach directly from \eqref{eq:Roc:obj}, we trace the conditions for the various convergence rates listed above back to properties of the primal functions. These results are based on the following lemma. Their proofs can be found in most standard texts on Convex Analysis, e.g., \cite{HL12} or \cite{Roc70}. \begin{lemma} \label{lem:basic} Let $ g, h : \mathbb{R}^{P} \to \overline{\mathbb{R}} $ be proper, lower semi-continuous and convex functions, $ \d \in \mathbb{R}^{P} $ and $ B : \mathbb{R}^{N} \to \mathbb{R}^{P} $ be a linear mapping. Let $ l : \mathbb{R}^{N} \to \overline{\mathbb{R}} $ be defined by $ l(\bm{x}) = g(B\bm{x} + \d) $. Then the following results hold: \begin{enumerate}[label=\textnormal{(\alph*)}] \item If $ g $ is $ m_{g} $-strongly convex on $ \mathbb{R}^{P} $ for $ m_{g} \geq 0 $, then $ l $ is $ \lambda_{\min} (B^{*} B) m_{g} $-strongly convex on $ \mathbb{R}^{N} $. \item If $ g $ and $ h $ are strongly convex on $ \mathbb{R}^{P} $ with parameters $ m_{g} $ and $ m_{h} $ respectively, then $ g + h $ is $ (m_{g} + m_{h}) $-strongly convex on $ \mathbb{R}^{P} $. \item If $ g $ has an $ L_{g} $-Lipschitz continuous gradient on $ \mathbb{R}^{P} $ for $ L_{g} \in (0, +\infty) $, then $ l $ has a $ \lambda_{\max} (B^{*} B) L_{g} $-Lipschitz continuous gradient on $ \mathbb{R}^{N} $. \item If $ g $ and $ h $ have Lipschitz continuous gradients on $ \mathbb{R}^{P} $ with parameters $ L_{g} $ and $ L_{h} $ respectively, then $ g + h $ has an $ (L_{g} + L_{h}) $-Lipschitz continuous gradient on $ \mathbb{R}^{P} $. \item If $ g $ is differentiable on $ \Omega \coloneqq \intr{\dom{g}} $, then $ g^{*} $ is strictly convex on each convex subset $ C \subset \grad{}{g} (\Omega) $. \label{basic:strict:diff} \item $ g $ is $ m_{g} $-strongly convex on $ \mathbb{R}^{P} $ if and only if $ g^{*} $ has a $ 1/m_{g} $-Lipschitz continuous gradient on $ \mathbb{R}^{P} $. \label{basic:strong:lip} \item $ g $ has an $ L_{g} $-Lipschitz continuous gradient on $ \mathbb{R}^{P} $ if and only if $ g^{*} $ is $ 1/L_{g} $-strongly convex on $ \mathbb{R}^{P} $. \label{basic:lip:strong} \end{enumerate} \end{lemma} Let us look at \eqref{eq:Roc:obj} when the regularity conditions given in \tref{thm:AAIG} are satisfied for $ f $. Let $ h $ and $ k $ be strongly convex with parameters $ m_{h} > 0 $ and $ m_{k} > 0 $ respectively and twice differentiable with Lipschitz continuous first and second derivatives. Let $ L_{h} $ and $ L_{k} $ be the Lipschitz constants of $ \grad{}{h} $ and $ \grad{}{k} $, and let $ L_{A} = \lambda_{\max} (A^{*} A) $, $ m_{p} = \lambda_{\min} (A^{*} A) $ and $ m_{d} = \lambda_{\min} (A A^{*}) $; then $ f(\cdot, \u) $ is $ (m_{h} m_{p} + m_{k}) $-strongly convex and has an $ (L_{h}L_{A} + L_{k}) $-Lipschitz continuous gradient. Using these parameters, the optimal convergence rate for gradient descent is given by $ (L - m)/(L + m) $ \cite{Pol87}, which along with \tref{thm:AAIG} gives us the rates for the analytic, automatic and implicit estimators as $ \mathcal{O} (\omega_{p}^{K}), \mathcal{O} (K\omega_{p}^{2K}) $ and $ \mathcal{O} (\omega_{p}^{2K}) $ respectively, where we have: \begin{equation*} \omega_{p} = \frac{(L_{h} L_{A} - m_{h} m_{p}) + (L_{k} - m_{k})}{(L_{h} L_{A} + m_{h} m_{p}) + (L_{k} + m_{k})} \,. \end{equation*} The strong convexity parameters of $ k^{*} \circ A^{*} $ and $ h^{*} $ are $ m_{d}/L_{k} $ and $ 1/L_{h} $.
The Lipschitz constants of the gradients of these functions are $ L_{A}/m_{k} $ and $ 1/m_{h} $. These parameters similarly give us the convergence rate for the dual estimator as $ \mathcal{O} (\omega_{d}^{K}) $ for: \begin{equation*} \omega_{d} = \frac{L_{h} m_{h} (L_{k}L_{A} - m_{k} m_{d}) + L_{k} m_{k} (L_{h} - m_{h})}{L_{h} m_{h} (L_{k}L_{A} + m_{k} m_{d}) + L_{k} m_{k} (L_{h} + m_{h})} \,. \end{equation*} Assuming $ A $ is of full rank, the convergence rates depend on whether $ P $ is larger or smaller than $ N $. The condition of strong convexity of $ h $ or $ k $ can be relaxed to non-strong convexity. The expression for the convergence rate of the primal problem stays the same with $ m_{h} $ or $ m_{k} $ set to $ 0 $. For the dual problem, we make use of the results listed in \ref{enum:(F)ISTA} and \ref{enum:PDHG} to compute the rate. We note that theoretical guarantees for the primal gradient estimators are difficult to establish beyond strong convexity and twice continuous differentiability of $ f $ in \eqref{eq:Roc:obj}. On the other hand, the dual gradient estimator is quite powerful, as it converges in a very broad setting and its convergence rates are theoretically justified. \section{Experiments} \label{sec:Exp} We compare the performance of the four different gradient estimators, i.e., \eqref{eq:AnG}, \eqref{eq:AuG}, \eqref{eq:IG} and \eqref{eq:DG}, for the estimation of $ \grad{}{p} (\u) $ in different settings. To this end, we fix $ N $ and run these methods for different values of $ P $ and for different choices of $ h $ and $ k $ in \eqref{eq:Roc:obj}. Changing $ P $ affects $ L_{A}, \lambda_{\min} (A^{*} A) $ and $ \lambda_{\min} (A A^{*}) $, while changing $ h $ and $ k $ modifies $ L_{h}, L_{k}, m_{h} $ and $ m_{k} $. This also includes cases of non-differentiability of $ k $ and non-strong convexity of $ h $. For each problem and for each $ P $, we generate error plots of the sequences $ \bm{g}^{(n)}_{i} $ for a given $ \u $. Since the convergence rates for the methods \eqref{eq:AnG}, \eqref{eq:AuG} and \eqref{eq:IG} depend on that of the original sequence, we also show the plots for $ \bm{x}^{(n)} $ for each of the examples. We consider the following four examples to experimentally verify our observations: \begin{equation} \label{eq:Exmps} \begin{aligned} f_{1}(\bm{x}, \u) &= \frac{1}{2} \norm{\u - A\bm{x}}_{2}^{2} + \frac{\lambda}{2} \norm{\bm{x}}_{2}^{2} \\ f_{2}(\bm{x}, \u) &= h_{\delta}(\u - A\bm{x}) + \frac{\lambda}{2} \norm{\bm{x}}_{2}^{2} \\ f_{3}(\bm{x}, \u) &= \frac{1}{2} \norm{\u - A\bm{x}}_{2}^{2} + \frac{\lambda}{2} \norm{\bm{x}}_{2}^{2} + \gamma \norm{\bm{x}}_{1} \\ f_{4}(\bm{x}, \u) &= h_{\delta}(\u - A\bm{x}) + \frac{\lambda}{2} \norm{\bm{x}}_{2}^{2} + \gamma \norm{\bm{x}}_{1} \,, \end{aligned} \end{equation} where $ h_{\delta} : \mathbb{R}^{P} \to \mathbb{R} $ in the second and fourth equations of \eqref{eq:Exmps} is the Huber function defined by: \begin{equation*} h_{\delta}(\u) \coloneqq \begin{cases} \frac{1}{2}\norm{\u}_{2}^{2} & \text{if } \norm{\u}_{2} \leq \delta \\[10pt] \delta \Big( \norm{\u}_{2} - \frac{\delta}{2} \Big) & \text{if } \norm{\u}_{2} > \delta \,.
\end{cases} \end{equation*} The conjugates of the corresponding value functions are given by: \begin{align} p^{*}_{1}(\bm{y}) &= \frac{1}{2\lambda} \norm{A^{T}\bm{y}}_{2}^{2} + \frac{1}{2} \norm{\bm{y}}_{2}^{2} \nonumber \\ p^{*}_{2}(\bm{y}) &= \frac{1}{2\lambda} \norm{A^{T}\bm{y}}_{2}^{2} + h^{*}_{\delta}(\bm{y}) \nonumber \\ p^{*}_{3}(\bm{y}) &= k^{*}(A^{T}\bm{y}) + \frac{1}{2} \norm{\bm{y}}_{2}^{2} \nonumber \\ p^{*}_{4}(\bm{y}) &= k^{*}(A^{T}\bm{y}) + h^{*}_{\delta}(\bm{y}) \nonumber \,, \end{align} with the conjugate of the elastic-net term $ k \coloneqq \frac{\lambda}{2}\norm{\cdot}_{2}^{2} + \gamma\norm{\cdot}_{1} $ given by: \begin{equation*} k^{*}(\v) = \sum_{i=1}^{N} \max (0, \abs{v_{i}} - \gamma)^{2} / (2 \lambda) \,. \end{equation*} \begin{figure*} \begin{subfigure}{0.98\textwidth} \centering \includegraphics[width=\linewidth]{figs/legends.png} \end{subfigure} \begin{subfigure}{0.245\textwidth} \centering \includegraphics[width=\linewidth]{figs/LinRid/90X50.png} \end{subfigure} \begin{subfigure}{0.245\textwidth} \centering \includegraphics[width=\linewidth]{figs/HubRid/90X50.png} \end{subfigure} \begin{subfigure}{0.245\textwidth} \centering \includegraphics[width=\linewidth]{figs/LinEls/90X50.png} \end{subfigure} \begin{subfigure}{0.245\textwidth} \centering \includegraphics[width=\linewidth]{figs/HubEls/90X50.png} \end{subfigure} \begin{subfigure}{0.245\textwidth} \centering \includegraphics[width=\linewidth]{figs/LinRid/70X50.png} \end{subfigure} \begin{subfigure}{0.245\textwidth} \centering \includegraphics[width=\linewidth]{figs/HubRid/70X50.png} \end{subfigure} \begin{subfigure}{0.245\textwidth} \centering \includegraphics[width=\linewidth]{figs/LinEls/70X50.png} \end{subfigure} \begin{subfigure}{0.245\textwidth} \centering \includegraphics[width=\linewidth]{figs/HubEls/70X50.png} \end{subfigure} \begin{subfigure}{0.245\textwidth} \centering \includegraphics[width=\linewidth]{figs/LinRid/50X50.png} \end{subfigure} \begin{subfigure}{0.245\textwidth} \centering \includegraphics[width=\linewidth]{figs/HubRid/50X50.png} \end{subfigure} \begin{subfigure}{0.245\textwidth} \centering \includegraphics[width=\linewidth]{figs/LinEls/50X50.png} \end{subfigure} \begin{subfigure}{0.245\textwidth} \centering \includegraphics[width=\linewidth]{figs/HubEls/50X50.png} \end{subfigure} \begin{subfigure}{0.245\textwidth} \centering \includegraphics[width=\linewidth]{figs/LinRid/30X50.png} \end{subfigure} \begin{subfigure}{0.245\textwidth} \centering \includegraphics[width=\linewidth]{figs/HubRid/30X50.png} \end{subfigure} \begin{subfigure}{0.245\textwidth} \centering \includegraphics[width=\linewidth]{figs/LinEls/30X50.png} \end{subfigure} \begin{subfigure}{0.245\textwidth} \centering \includegraphics[width=\linewidth]{figs/HubEls/30X50.png} \end{subfigure} \begin{subfigure}{0.245\textwidth} \centering \includegraphics[width=\linewidth]{figs/LinRid/10X50.png} \end{subfigure} \begin{subfigure}{0.245\textwidth} \centering \includegraphics[width=\linewidth]{figs/HubRid/10X50.png} \end{subfigure} \begin{subfigure}{0.245\textwidth} \centering \includegraphics[width=\linewidth]{figs/LinEls/10X50.png} \end{subfigure} \begin{subfigure}{0.245\textwidth} \centering \includegraphics[width=\linewidth]{figs/HubEls/10X50.png} \end{subfigure} \caption{Error plots shown for the gradient sequences computed by the four methods, i.e., \eqref{eq:AnG}, \eqref{eq:AuG}, \eqref{eq:IG} and \eqref{eq:DG}, as well as the original (primal variable) sequence computed using (proximal) gradient descent (solid lines) and its inertial variant
(dashed lines). The sequences are evaluated on four problems $ f_{1}, f_{2}, f_{3} $ and $ f_{4} $ ($ f_{j} $ changes from left to right in the given order) for five different values of $ P $, i.e., $ 90, 70, 50, 30 $ and $ 10 $ ($ P $ changes from top to bottom in the given order). Each cell shows the error plots for the ten sequences for a fixed $ P $ and $ f_{j} $; five for gradient descent or ISTA and five for the Heavy-ball method \cite{Pol64} or iPiasco \cite{OBP15}.} \label{fig:Exp} \end{figure*} For the evaluation we set $ N $ to $ 50 $ and choose $ P $ from $ \{ 10, 30, 50, 70, 90 \} $. This gives us five different plots for each problem, and every value of $ P $ corresponds to a row in \fref{fig:Exp}. We set $ \lambda $ to $ 2 $, $ \gamma $ to $ 0.1 $ and $ \delta $ to $ 0.1 $. We keep $ \delta $ small because for sufficiently large values of $ \delta $, $ f_{2} $ behaves like $ f_{1} $ and $ f_{4} $ behaves like $ f_{3} $. Each element of $ A $ and $ \u $ is drawn from a normal distribution with mean $ 0 $ and standard deviation $ 1 $. Thus $ A $ is of full rank almost surely. We also scale each column of $ A $ differently to introduce ill-conditioning. For all our problems and methods, we use gradient descent when the problem is entirely smooth and proximal gradient descent when the problem has a non-smooth component, with the optimal step size $ 2 / (L + m) $. To study the effect of inertia, we additionally employ the Heavy-ball method \cite{Pol64} and iPiasco \cite{OBP15} with optimal step size $ 4 / (\sqrt{L} + \sqrt{m})^{2} $ and momentum parameter $ (\sqrt{L} - \sqrt{m})^{2} / (\sqrt{L} + \sqrt{m})^{2} $ on these problems. The gradients and the Hessians are computed by using the autograd package \cite{MDA15}. For $ f_{1} $, an analytical expression exists for both $ \bm{x}^{*} (\u) $ and $ \grad{}{p} (\u) $. In order to compute a good estimate of these quantities for the remaining problems, we solve the primal and dual problems, respectively, for a large number of iterations. We verify the correctness of the obtained estimate of $ \grad{}{p}(\u) $ by comparing it with the numerical gradient computed using central differences. Then we run each algorithm for $ 250 $ iterations for each $ P $ and $ f_{j} $ and generate the respective plots. Each cell in \fref{fig:Exp} displays plots of $ \norm{\bm{x}^{(n)} (\u) - \bm{x}^{*} (\u)}_{2} $ and $ \norm{\bm{g}^{(n)}_{i} (\u) - \grad{}{p} (\u)}_{2} $ against the iteration counter $ n $. For $ f_{1} $ (first column), we note that all methods converge, in accordance with the theoretical results. We see that for $ P > N $ (first column; first three rows), the dual method is the slowest to converge, and for $ P < N $ (first column; last two rows), it outperforms the analytic and automatic methods. Since the problem is quadratic, the implicit method yields $ \grad{}{p} $ in one step. For $ f_{2} $, the dual method shows faster convergence than all other methods for every choice of $ P $. The remaining two problems (third and fourth columns) are not continuously differentiable, and therefore \eqref{eq:AnG}, \eqref{eq:AuG} and \eqref{eq:IG} behave erratically. The implicit method (red) performs very poorly in most cases. The analytic (orange) and automatic (green) gradient estimators manage to converge but do so in an irregular manner. As for $ f_{1} $, the dual method converges slowly for $ f_{3} $ when $ P \geq N $ and quickly when $ P < N $.
For $ f_{4} $, just like for $ f_{2} $, the dual method performs better than all other methods for every $ P $. The difference between the error plots generated by gradient descent or ISTA (solid lines) and the Heavy-ball method or iPiasco (dashed lines) is also visible. We observe that all the methods benefit from inertia. The fast convergence of the automatic method is due to the fact that the acceleration in the convergence of $ \bm{x}^{(K)} $ is also reflected in that of $ D_{\bm{u}}\bm{x}^{(K)} $ \cite{APM20, MO20}. In conclusion, we note that for the given non-smooth problems, especially $ f_{4} $, the dual gradient estimator is not only stable but also performs better than its primal counterparts. \section{Conclusion} The variation of the value function of a parametric optimization problem is sought in a wide range of Machine Learning and Image Processing applications. The methods for computing this gradient usually rely on directly differentiating the objective and are thus limited to settings in which the objective satisfies strong smoothness conditions. We emphasize that the gradient of the value function can also be computed by using a well-known result from convex duality. This method provides enormous flexibility for the numerical approximation of the value function's derivative, allows us to leverage convergence rate results from convex optimization algorithms, and does not rely on differentiability: it can compute a subgradient of the value function. \section*{Acknowledgments} Sheheryar Mehmood and Peter Ochs are supported by the German Research Foundation (DFG Grant OC 150/4-1). \bibliographystyle{ieee}
{ "timestamp": "2020-12-29T02:21:46", "yymm": "2012", "arxiv_id": "2012.14017", "language": "en", "url": "https://arxiv.org/abs/2012.14017" }
\section{Introduction} The exterior Dirichlet problem (EDP) for the minimal surface equation consists in the study of existence/nonexistence and uniqueness of solutions of the PDE boundary problem% \begin{equation} \left\{ \begin{array} [c]{l}% \mathcal{M}\left( u\right) :=\operatorname{div}\left( \frac{\nabla u}% {\sqrt{1+\left\Vert \nabla u\right\Vert ^{2}}}\right) =0\text{, }u\in C^{2}\left( \Omega\right) \cap C^{0}\left( \overline{\Omega}\right) \\ u|_{\partial\Omega}=\varphi \end{array} \right. \label{exDP}% \end{equation} where $\Omega\subset\mathbb{R}^{n}$, $n\geq2$, is an exterior domain, that is, $\Lambda:=\mathbb{R}^{n}\backslash\overline{\Omega}$ is a relatively compact domain, and $\varphi\in C^{0}\left( \partial\Omega\right) $ is a given function. In addition to the existence or nonexistence of solutions of (\ref{exDP}), one is also interested in global properties of their graphs in $\mathbb{R}^{n+1}$. In $\mathbb{R}^{2}$ the EDP has a history which goes back to J. C. C. Nitsche, who proved (Section 4 of \cite{N}) that any solution of (\ref{exDP}) has a $C^{1}$ expansion, for $\left\Vert x\right\Vert $ large enough, of the form% \begin{equation} u\left( x_{1},x_{2}\right) =c_{1}x_{1}+c_{2}x_{2}+c\log\left\Vert x\right\Vert +O\left( \left\Vert x\right\Vert ^{-1}\right) . \label{expan}% \end{equation} Regarding the existence/nonexistence problem, R. Osserman \cite{O} proved that there is boundary data on the disk for which the EDP (\ref{exDP}) on the complement of the disk has no bounded solution. R. Krust \cite{Kr} proved that Osserman's boundary data admits no solution with horizontal end, that is, with $c_{1}=c_{2}=0$ in (\ref{expan}) or, equivalently, with vertical Gauss map at infinity. This left open the question of the existence of boundary data for which the EDP has no solution at all, that is, with no restriction on the type of end. This was solved by N. Kutev and F. Tomi \cite{KT}, who proved the existence of boundary data, with arbitrarily small oscillation and with bounded $C^{0,1}$ norm, for which (\ref{exDP}) has no solution, irrespective of the asymptotic behavior. As to the existence problem, it is proved in \cite{KT} and \cite{RT} that (\ref{exDP}) has a solution with horizontal end under conditions involving the curvature of the boundary of the domain, the Lipschitz constant and the oscillation of the boundary data. Regarding the behavior in $\mathbb{R}^{n+1},$ $n\geq2,$ of the graphs of the solutions of (\ref{exDP}), we remark that the fundamental solutions (see the next section) on the exterior of any given open ball $B$ of $\mathbb{R}^{n}$ provide examples of foliations with horizontal ends of the open subset of $\mathbb{R}^{n+1}$% \[ \left\{ \left( x,z\right) \in\left( \mathbb{R}^{n}\backslash\overline{B}\right) \times\mathbb{R}\text{ such that }-v\left( x\right) <z<v\left( x\right) \right\} , \] where the graph of $v$ is the top half of a generalized catenoid with neck size determined by $B$. This foliation is parametrized by the angle that the Gauss map of the graph of the fundamental solution at the boundary of the domain makes with the positive vertical axis (note that if $\gamma$ is this angle for a fundamental solution $u\in C^{2}\left( \mathbb{R}% ^{n}\backslash B\right) $, then $\tan\gamma=\sup_{\partial B}\left\Vert \nabla u\right\Vert $). A natural question is whether a similar phenomenon occurs for an arbitrary exterior domain.
This question was partially answered by the third author in $\mathbb{R}^{2}$ (Theorem 1 of \cite{R}). A complete answer in the two-dimensional case was obtained in \cite{RT}, where the authors prove that the limit of the leaves in Theorem 1 of \cite{R} can be included in the foliation. We recall that R. Krust proved in \cite{Kr} that if there are two different solutions in $\mathbb{R}^{3}$ with the same Gauss map at infinity, then there is a continuum of solutions foliating the space in between. The case $\mathbb{R}^{n}$ for $n\geq3,$ to the authors' knowledge, was investigated only in the work of E. Kuwert \cite{K}, where it is proved that the Krust foliation theorem \cite{Kr} is true in any dimension, leaving open, however, the problem of the existence of such foliations. In the present paper we investigate the existence of foliations for the EDP in $\mathbb{R}^{n}$, $n\geq3$, in arbitrary exterior domains of $\mathbb{R}^{n}$, but in the special case that the boundary data $\varphi$ in (\ref{exDP}) is zero. We use in part the technique of \cite{R} to prove that an exterior domain $\Omega$ of $C^{2,\alpha}$ class in $\mathbb{R}^{n},$ $n\geq3,$ determines a non-trivial foliation by minimal hypersurfaces of $\Omega\times\mathbb{R}\subset\mathbb{R}^{n+1}$ containing the trivial solution as a leaf. As in the $2$-dimensional case, this foliation has horizontal ends and is parametrized by the maximal angle that the Gauss map of the leaves in $\mathbb{R}^{n+1}$ makes with the positive vertical axis at $\partial \Omega.$ Moreover, any leaf has a limit height at infinity which can be estimated by the geometry of the domain (see Theorem \ref{mt} for a precise statement). A natural problem is to extend our result to more general boundary data. To succeed, applying the technique used here (or that of \cite{RT}), one needs to guarantee the existence of at least one solution with the given boundary data. However, although we do not have a counterexample, we believe, as happens in the $2$-dimensional case, that without hypotheses on the boundary data such a solution may not exist. And even if one solution exists, it may possibly be the only one. This happens in the $2$-dimensional case on the exterior of a disk for certain boundary data, as proved in Theorem 2.9 of \cite{RT}. Even so, it seems to us that the more difficult part of the nonzero boundary data case is to estimate the values at infinity of the solutions: as done here, one needs the fundamental solutions as barriers, and the way they are used applies, in principle, only to zero boundary data. \section{Fundamental solutions} Given $\lambda>0$ and $p\in\mathbb{R}^{n}$, $n\geq2$, let $B_{\lambda}\left( p\right) $ be the ball centered at $p$ with radius $\lambda$. The radial function \begin{equation} v_{\lambda}\left( x\right) =\lambda\int_{1}^{\frac{r}{\lambda}}\frac {dt}{\sqrt{t^{2\left( n-1\right) }-1}}\text{, }r=\left\Vert x-p\right\Vert \text{, }x\in\mathbb{R}^{n}\backslash B_{\lambda}\left( p\right) , \label{ncat}% \end{equation} is a solution of (\ref{exDP}) with $\varphi=0$ in $\mathbb{R}^{n}\backslash B_{\lambda}\left( p\right) $, vanishing at $\partial B_{\lambda}\left( p\right) .$ We call $v_{\lambda}$, or any vertical translation of $v_{\lambda},$ a fundamental solution. The graph of $v_{\lambda}$ is half of an $n$-dimensional catenoid.
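For the reader's convenience, we sketch the standard computation behind this claim. For a radial function $u\left( x\right) =w\left( r\right) $, $r=\left\Vert x-p\right\Vert $, the minimal surface equation reduces to the ODE \[ \frac{d}{dr}\left( \frac{r^{n-1}w^{\prime}\left( r\right) }{\sqrt{1+w^{\prime}\left( r\right) ^{2}}}\right) =0, \] that is, $r^{n-1}w^{\prime}/\sqrt{1+\left( w^{\prime}\right) ^{2}}$ is constant. From (\ref{ncat}) we have $v_{\lambda}^{\prime}\left( r\right) =\left( \left( r/\lambda\right) ^{2\left( n-1\right) }-1\right) ^{-1/2}$, hence \[ \frac{r^{n-1}v_{\lambda}^{\prime}\left( r\right) }{\sqrt{1+v_{\lambda}^{\prime}\left( r\right) ^{2}}}=r^{n-1}\left( \frac{\lambda}{r}\right) ^{n-1}=\lambda^{n-1}, \] so that $\mathcal{M}\left( v_{\lambda}\right) =0$. Note also that $v_{\lambda}^{\prime}\left( r\right) \rightarrow\infty$ as $r\rightarrow\lambda$, so the graph of $v_{\lambda}$ has a vertical tangent plane along $\partial B_{\lambda}\left( p\right) $, while $v_{\lambda}^{\prime}\left( r\right) \rightarrow0$ as $r\rightarrow\infty$, corresponding to the horizontal end.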
By using isometries and homotheties one obtains a family of radial solutions, which we also call fundamental solutions, defined in the exterior of any fixed ball, whose gradient at the boundary of the ball varies from $0$ to $\infty.$ In this paper we are interested only in the case $n\geq3.$ We then have% \begin{equation} 0<\sigma_{n}:=\int_{1}^{\infty}\frac{dt}{\sqrt{t^{2\left( n-1\right) }-1}% }<\infty\label{sig}% \end{equation} so that, from (\ref{ncat}), $v_{\lambda}\left( x\right) $ has a limit as $\left\Vert x\right\Vert \rightarrow\infty$ not depending on $p,$ which we denote by $v_{\lambda}\left( \infty\right) $ and which is given by \begin{equation} v_{\lambda}\left( \infty\right) =\sigma_{n}\lambda. \label{vlinf}% \end{equation} \begin{figure}[h] \centering \includegraphics[width=1\linewidth]{fig1} \caption*{Fundamental solutions with $\lambda=1$} \label{fig:fig1}% \end{figure} \section{The result and its proof} A fundamental tool in PDE, used several times in the proof of Theorem \ref{mt}, is the comparison principle. In our case it states that if $\Omega$ is a bounded domain in $\mathbb{R}^{n}$ and $u,v\in C^{2}\left( \Omega\right) $ satisfy $\mathcal{M}\left( u\right) =\mathcal{M}\left( v\right) =0$ and $u\leq v$ on $\partial\Omega$, that is,% \[ \limsup_{k}\left( u(x_{k})-v(x_{k})\right) \leq0 \] for any sequence $x_{k}$ in $\Omega$ which leaves any compact subset of $\Omega,$ then $u\leq v$ in $\Omega$ (Proposition 3.1 of \cite{RT2}). An easy consequence of the comparison principle is the maximum principle, which asserts that if $u,v\in C^{2}\left( \Omega\right) \cap C^{0}\left( \overline{\Omega}\right) $ satisfy $\mathcal{M}\left( u\right) =\mathcal{M}\left( v\right) =0$ in $\Omega$, then \[ \max_{\overline{\Omega}}\left\vert u-v\right\vert =\max_{\partial\Omega }\left\vert u-v\right\vert \] (Proposition 3.2 of \cite{RT2}). The maximum principle has a useful application in Differential Geometry, known as the tangency principle. In our case it says that if $M_{1}$ and $M_{2}$ are minimal hypersurfaces of $\mathbb{R}^{n+1}$ (with or without boundary and not necessarily graphs) that are tangent at some interior or boundary point $p\in M_{1}\cap M_{2},$ and if $M_{1}$ lies on one side of $M_{2}$ in a neighborhood of $p,$ then $M_{1}$ coincides with $M_{2}$ in a neighborhood of $p$ \cite{FS}. We also remark that once we have a priori $C^{1}$ estimates for the solutions of the minimal surface equation (or of more general quasilinear elliptic PDEs), we also have $C^{1,\alpha}$ a priori estimates from H\"{o}lder theory (Chapter 13 of \cite{GT}). Then well-known arguments (see, for example, Section 2.1 of \cite{RT2}) allow one to reduce the $C^{2,\alpha}$ a priori estimates and the regularity of solutions of quasilinear elliptic PDEs to the a priori estimates and regularity theory of linear elliptic PDEs (Chapter 6 of \cite{GT}). In the statement of Theorem \ref{mt} we set, for convenience, $s:=\tan\gamma$, $\left\vert \gamma\right\vert \leq\pi/2$, and we use $u_{s}$ and $u_{s}\left( \infty\right) $ instead of $u_{\gamma}$ and $c_{\gamma}$ from the Abstract.
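Since the height bounds in the theorem below are expressed through $\sigma_{n}$, it may be useful to record its value; the following is a sketch of a standard computation. Substituting $\theta=t^{-2\left( n-1\right) }$ in (\ref{sig}) turns the integral into a Beta integral, giving \[ \sigma_{n}=\frac{1}{2\left( n-1\right) }B\left( \frac{1}{2}-\frac{1}{2\left( n-1\right) },\frac{1}{2}\right) ,\qquad\text{e.g.,}\quad\sigma_{3}=\frac{1}{4}\,\frac{\Gamma\left( 1/4\right) \Gamma\left( 1/2\right) }{\Gamma\left( 3/4\right) }\approx1.311, \] so that $\sigma_{n}\sim\pi/\left( 2\left( n-1\right) \right) $ as $n\rightarrow\infty$. Note that the first argument of the Beta function vanishes for $n=2$, in accordance with the divergence of the integral in that case.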
\begin{theorem}
\label{mt}Assume that $\Omega$ is an exterior domain of $C^{2,\alpha}$ class such that $\Lambda:=\mathbb{R}^{n}\backslash\overline{\Omega}$, $n\geq3$, satisfies the interior sphere condition with maximal radius $\rho$, namely: given $p\in\partial\Lambda$, there is an $\left( n-1\right)$-dimensional sphere $S_{p}$ of radius $\rho$ such that $p\in S_{p}$, $S_{p}\subset\overline{\Lambda}$, and $\rho$ is maximal under these conditions. Let $\varrho$ be the radius of the smallest open ball $B_{\varrho}$ of $\mathbb{R}^{n}$ such that $\partial\Omega\subset\overline{B}_{\varrho}$. Given $s\in\left[ -\infty,\infty\right]$ there is a bounded function $u_{s}\in C^{\infty}\left( \Omega\right)$ satisfying $\mathcal{M}\left( u_{s}\right) =0$ in $\Omega$, $u_{-s}=-u_{s}$, and such that: if $-\infty<s<\infty$ then $u_{s}\in C^{\infty}\left( \Omega\right) \cap C^{2,\alpha}\left( \overline{\Omega}\right)$,
\begin{equation}
u_{s}|_{\partial\Omega}=0 \label{bou}
\end{equation}
and
\begin{equation}
\max_{\partial\Omega}\left\Vert \nabla u_{s}\right\Vert =\max_{\Omega}\left\Vert \nabla u_{s}\right\Vert =\left\vert s\right\vert . \label{gr}
\end{equation}
The graph of $u_{\infty}$ is contained in a $C^{1,1}$-manifold $M\subset\overline{\Omega}\times\mathbb{R}$ with boundary $\partial M=\partial\Omega$. For any $s\in\left[ -\infty,\infty\right]$ there exists the limit
\begin{equation}
u_{s}\left( \infty\right) :=\lim_{\left\Vert x\right\Vert \rightarrow\infty}u_{s}\left( x\right) , \label{cinf}
\end{equation}
and
\begin{equation}
\lim_{\left\Vert x\right\Vert \rightarrow\infty}\left\Vert \nabla u_{s}\left( x\right) \right\Vert =0. \label{gauss}
\end{equation}
Moreover, the maps $s\mapsto u_{s}\left( x\right)$, for fixed $x\in\Omega$, and $s\mapsto u_{s}\left( \infty\right)$ are strictly increasing and bounded, and we have the inclusions
\begin{equation}
\left[ -\sigma_{n}\rho,\sigma_{n}\rho\right] \subset\left[ -u_{\infty}\left( \infty\right) ,u_{\infty}\left( \infty\right) \right] \subset\left[ -\sigma_{n}\varrho,\sigma_{n}\varrho\right] \label{inc}
\end{equation}
where $\sigma_{n}$ is given by (\ref{sig}). If one of the inclusions is an equality then $\rho=\varrho$, $\Omega$ is the exterior of a ball of radius $\rho$, and the $u_{s}$ are the fundamental solutions. Finally, the graphs of the solutions $u_{s}$, $s\in\left( -\infty,\infty\right)$, foliate the open subset of $\mathbb{R}^{n+1}$
\begin{equation}
O:=\left\{ \left( x,z\right) \in\Omega\times\mathbb{R}\text{ such that }u_{-\infty}\left( x\right) <z<u_{\infty}\left( x\right) \right\} . \label{O}
\end{equation}
\end{theorem}

\begin{proof}
We first consider the case $-\infty<s<\infty$. Since the case $s=0$ is trivial and since, if $u_{s}$ is a solution satisfying (\ref{bou}), (\ref{gr}), (\ref{cinf}) and (\ref{gauss}), then $u_{-s}=-u_{s}$ is a solution also satisfying these conditions, we may assume $s>0$. Let $a>0$ be such that $B_{a}=B_{a}\left( 0\right)$, the open ball in $\mathbb{R}^{n}$ of radius $a$ centered at the origin, contains $\overline{\Lambda}$. Let $v_{a}\in C^{0}\left( \mathbb{R}^{n}\backslash B_{a}\right)$ be given by (\ref{ncat}) with $p=0$. Since $\left\Vert \nabla v_{a}\left( x\right) \right\Vert \rightarrow0$ as $\left\Vert x\right\Vert \rightarrow\infty$, we may choose $k\in\mathbb{N}$, $k>a+1$, large enough such that
\begin{equation}
\left\Vert \nabla v_{a}\right\Vert _{\partial B_{k}}\leq\frac{s}{2}.
\label{vas}
\end{equation}
Set $\Omega_{k}=B_{k}\cap\Omega$ and
\begin{equation}
T_{k}=\left\{ t\geq 0\ ;\begin{aligned} \ &\ \exists~w_{t}\in C^{2,\alpha}\left( \overline{\Omega}_{k}\right) \text{ s.t. }\mathcal{M}\left( w_{t}\right) =0,\\ &~\sup\nolimits_{\overline{\Omega}_{k}}\left\Vert \nabla w_{t}\right\Vert \leq s, \ w_{t}|_{\partial\Omega}=0,~w_{t}|_{\partial B_{k}}=t \end{aligned}\right\} . \label{Tk}
\end{equation}
The set $T_{k}$ is not empty, since $0\in T_{k}$. Moreover, $\sup T_{k}<\infty$: the gradient bound
\[
\sup_{\overline{\Omega}_{k}}\left\Vert \nabla w_{t}\right\Vert \leq s,
\]
together with $w_{t}|_{\partial\Omega}=0$, gives $t\leq 2ks$ for all $t\in T_{k}$. We will prove that
\[
t_{k}:=\sup T_{k}\in T_{k}
\]
and that
\begin{equation}
\sup_{\Omega_k}\left\Vert \nabla w_{t_{k}}\right\Vert =\sup_{\partial\Omega_k}\left\Vert \nabla w_{t_{k}}\right\Vert =s\text{.} \label{wtk}
\end{equation}
Taking a sequence $\left( t_{m}^{k}\right)$ in $T_{k}$ converging to $t_{k}$ as $m\rightarrow\infty$, the corresponding functions $w_{t_{m}^{k}}$ have uniformly bounded $C^{1}$ norms. By elliptic PDE theory (\cite{GT}, \cite{RT2}) there is a subsequence of $w_{t_{m}^{k}}$ converging in the $C^{2}$ norm on $\overline{\Omega}_{k}$ to a function $w_{k}\in C^{2,\alpha}\left( \overline{\Omega}_{k}\right)$ which satisfies $\mathcal{M}\left( w_{k}\right) =0$ in $\Omega_{k}$. Clearly $w_{k}|_{\partial\Omega}=0$, $w_{k}|_{\partial B_{k}}=t_{k}$ and $\sup_{\Omega_{k}}\left\Vert \nabla w_{k}\right\Vert \leq s$. It follows that $t_{k}\in T_{k}$ and that $w_{k}=w_{t_{k}}$.

From the maximality of $t_{k}$ we claim that we cannot have $\sup_{\Omega_{k}}\left\Vert \nabla w_{k}\right\Vert <s$. Indeed: consider a function $\phi\in C^{2,\alpha}\left( \mathbb{R}^{n}\right)$ such that $\phi|_{B_{k-1}}=0$ and $\phi|_{\mathbb{R}^{n}\backslash B_{k}}=1$, set
\[
C_{0}^{2,\alpha}(\overline{\Omega}_{k})=\left\{ \left. \omega\in C^{2,\alpha}(\overline{\Omega}_{k})\text{ }\right\vert \text{ }\omega|_{\partial\Omega_{k}}=0\right\} ,
\]
and define $T\colon\,[-1,1]\times C_{0}^{2,\alpha}(\overline{\Omega}_{k})\rightarrow C^{\alpha}(\overline{\Omega}_{k})$ by
\[
T\left( t,\omega\right) =\mathcal{M}\left( \omega+w_{k}+t\phi\right) .
\]
Then $T\left( 0,0\right) =0$. One may see that the Fr\'{e}chet derivative $\partial_{2}T\left( 0,0\right) =d\mathcal{M}_{w_{k}}$ is invertible (Theorem 3.3 of \cite{GT}) so that, from the implicit function theorem on Banach spaces (Theorem 17.6 of \cite{GT}), there exist $\varepsilon>0$ and a continuous function $t\mapsto\omega\left( t\right) \in C_{0}^{2,\alpha}(\overline{\Omega}_{k})$, $t\in\left( -\varepsilon,\varepsilon\right)$ (continuous in the $C^{2,\alpha}$ topology), with $\omega(0)=0$, such that $T\left( t,\omega(t)\right) =0$. Therefore, since $\left\Vert \nabla w_{k}\right\Vert _{\Omega_k}<s$, there exists $t\in\left( 0,\varepsilon\right)$ such that
\[
\sup_{\Omega_k}\left\Vert \nabla\left( \omega\left( t\right) +w_{k}+t\phi\right) \right\Vert <s.
\]
Since
\[
\mathcal{M}\left( \omega\left( t\right) +w_{k}+t\phi\right) =T(t,\omega(t))=0,
\]
$\omega\left( t\right) +w_{k}+t\phi=0$ at $\partial\Omega$ and $\omega(t)+w_{k}+t\phi=t_{k}+t$ at $\partial B_{k}$, it follows that $t_{k}+t\in T_{k}$, a contradiction since $t_{k}=\sup T_{k}$. We then have $\sup_{\Omega_{k}}\left\Vert \nabla w_{k}\right\Vert =s$. We claim that
\begin{equation}
\sup_{\partial B_{k}}\left\Vert \nabla w_{k}\right\Vert \leq s/2.
\label{was}
\end{equation}
Indeed: since the graph of $v_{a}$ is vertical at $\partial B_{a}$, it follows from the comparison principle (see \cite{GT}, Ch 10, or Proposition 3.1 of \cite{RT2}) that
\begin{equation}
v_{a}+t_{k}-v_{a}(x_{0})\leq w_{k}\leq t_{k} \label{in}
\end{equation}
where $x_{0}$ is an arbitrary but fixed point of $\partial B_{k}$. From (\ref{vas}) and (\ref{in}) we get (\ref{was}). By the gradient maximum principle (\cite{GT}, Ch 15) we obtain
\[
\sup_{\Omega_k}\left\Vert \nabla w_{k}\right\Vert =\sup_{\partial\Omega_k}\left\Vert \nabla w_{k}\right\Vert =s.
\]
Letting $k\rightarrow\infty$ and using the diagonal method, we obtain a subsequence of $w_{k}$ converging in $C^{2}$ on compact subsets of $\overline{\Omega}$ to a function $u_{s}\in C^{2,\alpha}\left( \overline{\Omega}\right)$ satisfying $\mathcal{M}\left( u_{s}\right) =0$ in $\Omega$, (\ref{bou}) and (\ref{gr}). From elliptic PDE regularity \cite{GT}, $u_{s}\in C^{\infty}\left( \Omega\right)$.

Now, for any $s\in\left[ 0,\infty\right)$, the graph $G_{s}$ of $u_{s}$ has, by construction, (uniformly) bounded slope (see \cite{S}). It follows from Proposition 3 of \cite{S} that $G_{s}$ is \emph{regular at infinity}, that is, $u_{s}$ has a twice differentiable expansion
\begin{equation}
u_{s}\left( x\right) =c_{s}+a_{s}\left\Vert x\right\Vert ^{2-n}+\sum_{j=1}^{n}c_{s,j}x_{j}\left\Vert x\right\Vert ^{-n}+O\left( \left\Vert x\right\Vert ^{-n}\right) \label{exp}
\end{equation}
from which it follows that
\begin{equation}
u_{s}\left( \infty\right) :=\lim_{\left\Vert x\right\Vert \rightarrow\infty}u_{s}\left( x\right) =c_{s}. \label{cs}
\end{equation}
It also follows from (\ref{exp}) that
\[
\lim_{\left\Vert x\right\Vert \rightarrow\infty}\left\Vert \nabla u_{s}\right\Vert \left( x\right) =0,
\]
which implies that $G_{s}$ is horizontal at infinity, that is, (\ref{gauss}) is satisfied. This proves that (\ref{cinf}) and (\ref{gauss}) are satisfied for $s\in\left[ 0,\infty\right)$.

Let $v_{\varrho}$ be the fundamental solution on $\mathbb{R}^{n}\backslash B_{\varrho}$ whose gradient is infinite at $\partial B_{\varrho}$. Given $s\in\left[ 0,\infty\right)$ we claim that $u_{s}\left( \infty\right) <v_{\varrho}\left( \infty\right)$. Indeed, coming from $-\infty$ with the graph $G_{\varrho}$ of $v_{\varrho}$ using vertical translations, since the gradient of $v_{\varrho}$ at the boundary of $B_{\varrho}$ is infinite, it follows from the tangency principle that the first contact between $G_{\varrho}$ and the graph of $u_{s}$ has to be at infinity and with the boundary of $G_{\varrho}$ strictly below the level $x_{n+1}=0$. Hence, at the level $x_{n+1}=0$ one necessarily has $u_{s}\left( \infty\right) <v_{\varrho}\left( \infty\right)$. It follows from the claim and from (\ref{vlinf}) that $u_{s}$ is bounded by $\sigma_{n}\varrho$ for all $s\in\left[ 0,\infty\right)$. Clearly we have $u_{s}\leq u_{t}$ and also $u_{s}\left( \infty\right) \leq u_{t}\left( \infty\right)$ if $s\leq t$. Hence, for any increasing sequence $s_{m}\rightarrow\infty$, the sequence $u_{s_{m}}$ converges uniformly on compact subsets of $\Omega$ to a $C^{\infty}$ function $u_{\infty}$ in $\Omega$ satisfying $\mathcal{M}\left( u_{\infty}\right) =0$.
To prove that the graph $G_{\infty}$ of $u_{\infty}$ is contained in a $C^{1,1}$ manifold with boundary $\partial\Omega$, consider a fixed ball $B_{a}$ with $a>\varrho$. By \cite{Mi}, given $s\in\left[ 0,\infty\right]$ there is a minimizer $v_{s}$, in the space $\operatorname*{BV}\left( \Omega_{a}\right)$ of functions of bounded variation on $\Omega_{a}$ (see \cite{G}), of the functional
\[
\mathcal{F}_{s}\left( w\right) =\int_{\Omega_{a}}\sqrt{1+\left\Vert \nabla w\right\Vert ^{2}}+\int_{\partial\Omega_{a}}\left\vert w-\phi_{s}\right\vert ,\text{ }w\in\operatorname*{BV}\left( \Omega_{a}\right) ,
\]
where $\phi_{s}\in C^{\infty}\left( \partial\Omega_{a}\right)$ satisfies $\phi_{s}|_{\partial\Omega}=0$, $\phi_{s}|_{\partial B_{a}}=u_{s}|_{\partial B_{a}}$. Since $u_{s}$ is also a minimizer of $\mathcal{F}_{s}$ for $0\leq s<\infty$, we have $u_{s}|_{\Omega_{a}}=v_{s}$ by uniqueness \cite{Mi} (the equality is in $\operatorname*{BV}\left( \Omega_{a}\right)$). Noting that
\begin{align*}
\lim_{s\rightarrow\infty}\mathcal{F}_{s}\left( w\right) & =\mathcal{F}_{\infty}\left( w\right) ,\text{ }w\in\operatorname*{BV}\left( \Omega_{a}\right) ,\\
\lim_{s\rightarrow\infty}\mathcal{F}_{s}\left( u_{s}\right) & =\lim_{s\rightarrow\infty}\mathcal{F}_{\infty}\left( u_{s}\right) ,
\end{align*}
we have (writing only $u_{s}$ instead of $u_{s}|_{\Omega_{a}}$)
\begin{align*}
\mathcal{F}_{\infty}\left( v_{\infty}\right) & =\lim_{s\rightarrow\infty}\mathcal{F}_{s}\left( v_{\infty}\right) \geq\lim_{s\rightarrow\infty}\mathcal{F}_{s}\left( v_{s}\right) =\lim_{s\rightarrow\infty}\mathcal{F}_{s}\left( u_{s}\right) \\
& =\lim_{s\rightarrow\infty}\mathcal{F}_{\infty}\left( u_{s}\right) \geq\mathcal{F}_{\infty}\left( u_{\infty}\right) ,
\end{align*}
where, in the last inequality, we used that $\mathcal{F}_{\infty}$ is lower semicontinuous. It follows that $\mathcal{F}_{\infty}\left( v_{\infty}\right) =\mathcal{F}_{\infty}\left( u_{\infty}\right)$ and hence, by uniqueness, $v_{\infty}=u_{\infty}$ in $\Omega_{a}$. From Theorem 4.2 of \cite{Bo} applied to the functional $\mathcal{F}_{\infty}$, by choosing $\Phi=\partial\Omega$, $\phi_{i}\equiv0$, and using also Theorem 4.7, we conclude that the graph of $u_{\infty}$ is contained in a $C^{1,1}$ manifold $M$ with boundary, whose boundary is $\partial\Omega$.

We have seen that $s\mapsto u_{s}\left( \infty\right)$ is increasing and bounded by $\sigma_{n}\varrho$. If $c:=\lim_{s\rightarrow\infty}u_{s}\left( \infty\right)$ then we have $u_{s}\leq u_{\infty}\leq c$, $s\in\left[ 0,\infty\right)$, by the comparison principle, and hence there is the limit $u_{\infty}\left( \infty\right)$ of $u_{\infty}\left( x\right)$ as $\left\Vert x\right\Vert \rightarrow\infty$ and $u_{\infty}\left( \infty\right) =c$, proving the second inclusion of (\ref{inc}).

We shall now prove (\ref{gauss}) for $s=\infty$. From the way $u_{\infty}$ is obtained we cannot conclude directly that the graph of $u_{\infty}$ has (uniformly) bounded slope, and hence we do not know a priori whether $u_{\infty}$ is regular at infinity and admits an expansion as in (\ref{exp}). But this is actually the case. Indeed, since $u_{\infty}\left( \infty\right) =c$, the tangent cone to the graph of $u_{\infty}$ at infinity is the hyperplane $\mathbb{R}^{n}=\left\{ x_{n+1}=0\right\}$ of $\mathbb{R}^{n+1}$ (see \cite{Si}) and hence, from Theorem 1 of \cite{Si}, it follows that $\nabla u_{\infty}$ has a limit at infinity and $\left\Vert \nabla u_{\infty}\right\Vert$ is bounded outside some compact set.
Since $u_{\infty}$ is bounded, this limit has to be zero, and this proves (\ref{gauss}) for $s=\infty$.

Let $c\in\lbrack0,\sigma_{n}\rho]$ be given. We prove that there is a nonnegative solution $w_{c}\in C^{0}\left( \overline{\Omega}\right) \cap C^{\infty}\left( \Omega\right)$ of (\ref{exDP}) such that $w_{c}|_{\partial\Omega}=0$ and
\[
\underset{\left\Vert x\right\Vert \rightarrow\infty}{\lim}w_{c}\left( x\right) =c.
\]
Define
\begin{equation}
\digamma=\left\{ f\in C^{0}\left( \overline{\Omega}\right) ;\begin{aligned} \ &~f\text{ is a subsolution of }\mathcal{M}\text{ in }\Omega,\\ &~f=0\text{ in }\partial\Omega\text{ and }\limsup\nolimits_{\left\Vert x\right\Vert \rightarrow\infty}f\left( x\right) \leq c \end{aligned}\right\} . \label{per}
\end{equation}
Clearly $\digamma\neq\varnothing$, and it follows from the comparison principle that $f\leq c$ for all $f\in\digamma$. We may then apply Perron's method (\cite{GT}, Section 2.8) to conclude that
\[
w_{c}\left( x\right) =\sup\left\{ f\left( x\right) ;\text{ }f\in\digamma\right\} \text{, }x\in\overline{\Omega},
\]
is $C^{\infty}$ and satisfies $\mathcal{M}\left( w_{c}\right) =0$ in $\Omega$. To prove that
\begin{equation}
\lim_{\left\Vert x\right\Vert \rightarrow\infty}w_{c}\left( x\right) =c, \label{wc}
\end{equation}
take $a>0$ large enough such that $\overline{\Lambda}\subset B_{a}$ and $v_{a}\left( \infty\right) >c$. Then $f\in C^{0}\left( \overline{\Omega}\right)$ given by
\[
f\left( x\right) =\left\{
\begin{array}
[c]{l}%
0\text{, if }x\in\overline{\Omega}\cap B_{a}\\
\max\{0,v_{a}\left( x\right) -\left( v_{a}\left( \infty\right) -c\right) \}\text{, if }x\in\mathbb{R}^{n}\backslash B_{a}%
\end{array}
\right.
\]
is a subsolution relative to (\ref{exDP}) satisfying $f|_{\partial\Omega}=0$ and
\begin{equation}
\underset{\left\Vert x\right\Vert \rightarrow\infty}{\lim}f\left( x\right) =c. \label{win}
\end{equation}
It follows that $f\in\digamma$ and then $f\leq w_{c}\leq c$, which proves (\ref{wc}).

It remains to prove that $w_{c}$ extends continuously to $\overline{\Omega}$ and that $w_{c}|_{\partial\Omega}=0$. Given $p\in\partial\Omega$, by hypothesis there is an open ball $B_{\rho}$ contained in $\Lambda$ such that $\partial B_{\rho}$ is tangent to $\partial\Omega$ ($=\partial\Lambda$) at $p$. Since
\[
c\leq\sigma_{n}\rho=v_{\rho}\left( \infty\right)
\]
and $v_{\rho}=0$ at $\partial B_{\rho}$, it follows from the comparison principle that $0\leq w_{c}\leq v_{\rho}$. Since $p$ is arbitrary, this proves the claim, that is, $w_{c}$ extends continuously to $\overline{\Omega}$ and $w_{c}|_{\partial\Omega}=0$.

Now, assume that $0\leq c<\sigma_{n}\rho$. Then we may find a fundamental solution $\widetilde{v}$, defined on the exterior of a ball of radius $\rho$ contained in $\Lambda$ and tangent to $\partial\Omega$, with bounded gradient at the boundary of the ball and such that
\[
\widetilde{v}\left( \infty\right) =\frac{c+\sigma_{n}\rho}{2}.
\]
By the comparison principle it follows that $0\leq w_{c}\leq\widetilde{v}$. This proves that $w_{c}$ extends $C^{1}$ to $\overline{\Omega}$ and, by PDE regularity \cite{GT}, $w_{c}\in C^{2,\alpha}\left( \overline{\Omega}\right) \cap C^{\infty}\left( \Omega\right)$. Setting
\[
s_{c}=\max_{\partial\Omega}\left\Vert \nabla w_{c}\right\Vert ,
\]
we prove that $u_{s_{c}}=w_{c}$. By contradiction, assume the opposite.
Then, setting
\begin{equation}
d:=\lim_{\left\Vert x\right\Vert \rightarrow\infty}u_{s_{c}}\left( x\right) , \label{hav}
\end{equation}
we cannot have $d>c$ or $d<c$. Indeed: assume, by contradiction, that $d>c$. Let $p\in\partial\Omega$ be such that $\left\Vert \nabla w_{c}\right\Vert \left( p\right) =s_{c}$. If $\left\Vert \nabla u_{s_{c}}\right\Vert \left( p\right) =s_{c}$ we cannot have $w_{c}\left( x\right) \leq u_{s_{c}}\left( x\right)$ for all $x\in\overline{\Omega}$, because of the boundary tangency principle. But if $w_{c}>u_{s_{c}}$ somewhere, this inequality can hold only on a bounded open subset of $\Omega$, since $c<d$. One can then make a vertical translation of the graph of one of the solutions to get a tangency between their graphs, with one of them on one side of the other, contradicting the tangency principle. The remaining possibility,
\[
\left\Vert \nabla u_{s_{c}}\right\Vert \left( p\right) <s_{c}=\left\Vert \nabla w_{c}\right\Vert \left( p\right) ,
\]
also implies that $w_{c}>u_{s_{c}}$ must hold on a bounded open subset of $\Omega$, leading, as before, to a contradiction with the tangency principle. The case $d<c$ cannot happen by the same arguments. This proves that $c=d$ and, arguing with the tangency principle again, that $w_{c}=u_{s_{c}}$.

Finally, take an increasing sequence $c_{m}\in\left[ 0,\sigma_{n}\rho\right)$ converging to $\sigma_{n}\rho$ as $m\rightarrow\infty$. The sequence $s_{c_{m}}$ is increasing and then has a limit $s\in\left[ 0,\infty\right]$. The sequence $\left( u_{s_{c_{m}}}\right)$ converges in $C^{2}$ on compact subsets of $\Omega$ to a solution $u_{s}\in C^{0}\left( \overline{\Omega}\right) \cap C^{\infty}\left( \Omega\right)$, $u_{s}|_{\partial\Omega}=0$ and $\sup_{\partial\Omega}\left\Vert \nabla u_{s}\right\Vert =s$. As before we obtain $u_{s}=w_{\sigma_{n}\rho}$, proving that
\[
\left[ 0,\sigma_{n}\rho\right] \subset\left[ 0,u_{\infty}\left( \infty\right) \right] .
\]
This concludes the proof of (\ref{inc}).

If one of the inclusions in (\ref{inc}) is an equality and the corresponding graphs of the solutions with infinite gradient at $\partial\Omega$ are not the same, then either one is below the other or they intersect in interior points. The first case cannot occur because of the boundary tangency principle. Neither can the second, because otherwise one could make a vertical translation of one of them to get a tangency between the graphs, with one on one side of the other, contradicting the tangency principle. Hence, in case of equality in one of the inclusions (\ref{inc}), $\Omega$ is the exterior of a ball of radius $\rho=\varrho$. It is a particular consequence of the proof of the foliation property, given below, that the solutions $u_{s}$ are necessarily the fundamental solutions.

To prove that the graphs of the solutions $u_{s}$, $s\in\left( -\infty,\infty\right)$, foliate the open subset $O$ of $\mathbb{R}^{n+1}$ (defined in (\ref{O})) we apply Theorem 2 of \cite{K}.
It is enough to prove that any solution $u\in C^{0}\left( \overline{\Omega}\right)$ of the minimal surface equation in $\Omega$ with horizontal end and such that $u|_{\partial\Omega}=0$ coincides with $u_{s}$ for some $s\in\left[ -\infty,\infty\right]$. By using Theorems 4.2 and 4.7 of \cite{Bo}, as above, we may conclude that the graph of $u$ is a $C^{1,1}$ manifold $M$ with boundary and, since $u\in C^{0}\left( \overline{\Omega}\right)$ is a solution of the minimal surface equation in $\Omega$, $M$ is a minimal hypersurface of $\mathbb{R}^{n+1}$ with boundary $\partial\Omega$. Representing $M$ locally as a graph near any given point of $\partial\Omega$ ($=\partial M$), we may use PDE regularity theory to conclude that, indeed, $M$ is a $C^{2,\alpha}$ manifold. Moreover, the assumption that $u$ has a horizontal end implies, as already argued before, that $u$ is bounded and that there exists the limit
\[
d:=\lim_{\left\Vert x\right\Vert \rightarrow\infty}u\left( x\right) .
\]
If $M$ has no vertical tangent space at any point of $\partial\Omega$, then it follows by PDE regularity that $u\in C^{2,\alpha}\left( \overline{\Omega}\right) \cap C^{\infty}\left( \Omega\right)$. Setting $s=\max_{\partial\Omega}\left\Vert \nabla u\right\Vert$, we can argue as before to prove that $u=u_{s}$.

Assume that $M$ has a vertical tangent space at some point of $\partial\Omega$. We claim then that $u=u_{\infty}$ or $u=u_{-\infty}$. We first prove that $d=u_{\infty}\left( \infty\right)$ or $d=u_{-\infty}\left( \infty\right)$. By contradiction, first assume that $0<u_{\infty}\left( \infty\right) <d$. Arguing with the tangency principle, it is easy to see then that $u_{\infty}\leq u$. But then $u_{\infty}\in C^{0}\left( \overline{\Omega}\right)$ and the graph $G$ of $u_{\infty}$ is a minimal hypersurface of $C^{2,\alpha}$ class with boundary $\partial\Omega$ which has a vertical tangent space at some point $p\in\partial\Omega$. The hypersurfaces $G$ and $M$ then must have a tangency at $p$. By the boundary tangency principle it follows that $G=M$, a contradiction. If $0\leq d<u_{\infty}\left( \infty\right)$, since $u_{s}$ converges uniformly on compact subsets of $\Omega$ to $u_{\infty}$ as $s\rightarrow\infty$, there is $s$ large enough such that $u_{s}\left( \infty\right) >d$. By using the tangency principle one may see that this leads to a contradiction. For similar reasons one excludes the cases $d<u_{-\infty}\left( \infty\right)$ and $u_{-\infty}\left( \infty\right) <d\leq0$. It then follows that $d=u_{\infty}\left( \infty\right)$ or $d=u_{-\infty}\left( \infty\right)$, from which one easily obtains, using the tangency principle once more, that $u=u_{\infty}$ or $u=u_{-\infty}$. This concludes the proof of the theorem.
\end{proof}

\noindent\textbf{Remarks.}

\noindent(a) It is true that the graph of the limit solution $u_{\infty}$ of the EDP in $\mathbb{R}^{2}$ is a $C^{1,\alpha}$ surface with boundary. Moreover, $u_{\infty}\in C^{0}\left( \overline{\Omega}\right)$ in this case \cite{RT}. In higher dimensions, as proved in Theorem \ref{mt}, the graph of the solution $u_{\infty}$ is part of a $C^{1,1}$ manifold with boundary $\partial\Omega$. However, we do not know if $u_{\infty}\in C^{0}\left( \overline{\Omega}\right)$. The $2$-dimensional case is studied in \cite{RT} using classical Plateau problem techniques, which are typically $2$-dimensional.
\begin{figure}[h]
\centering
\includegraphics[width=1\linewidth]{fig2}
\caption*{Possible solutions for arbitrary domains}
\label{fig:fig2}
\end{figure}

\noindent(b) The EDP for the minimal surface equation is studied in the Riemannian setting in \cite{ARS} and \cite{ER}.

\bigskip
{ "timestamp": "2021-09-13T02:02:49", "yymm": "2012", "arxiv_id": "2012.14003", "language": "en", "url": "https://arxiv.org/abs/2012.14003" }
\section*{Acknowledgments}
SM, CHS-T and HW are financed in part through the NExT Institute. SM is also funded by the STFC consolidated Grant No. ST/L000296/1. HW acknowledges financial support from the Magnus Ehrnrooth Foundation, the Finnish Academy of Sciences and Letters and the STFC Rutherford International Fellowship (funded through the MSCA-COFUND-FP Grant No. 665593). The work of AC is funded by the Department of Science and Technology, Government of India, under Grant No. IFA18-PH 224 (INSPIRE Faculty Award). Some of us also acknowledge the use of the IRIDIS High Performance Computing Facility and associated support services at the University of Southampton. We thank Nishita Desai, Terhi J\"arvinen, Emmanuel Olaiya, Ian Tomalin and Jose Zurita for useful discussions.

\section{Collider analysis}
We shall look at the process $pp\rightarrow H,A \rightarrow \tilde{N}\tilde{N}$, where both sneutrinos decay visibly, giving a charged lepton ($\ell = e, \mu$) and a charged higgsino. In this model the sneutrinos do not have a well-defined lepton number and the lepton number violating mass splitting is larger than the decay width, so each sneutrino decays to a lepton of either charge with a $50\%$ probability. As the backgrounds for same-sign dileptons are smaller, we choose events with two same-sign leptons and veto a third hard lepton with $p_{T}>15$~GeV.

If the mass splitting between the sneutrinos and the higgsinos is relatively small, say close to $5-10$~GeV, then the sneutrino decay width into visible final states is small. Numerically the minimal width is close to $10^{-14}$~GeV, leading to a mean decay length of a couple of millimeters. In contrast, in the region of phase space where the production of heavy Higgses is possible with a reasonable cross section and the decays are kinematically allowed, the decay widths are typically a few times $10^{-13}$~GeV. This implies that the mean decay lengths are around hundreds of micrometers, which leads to final states with DVs. Thus for a large fraction of the signal events we should have two DVs with charged leptons and soft charged tracks.

In this work, we require both of the leptons ($\ell = e, \mu$) to be displaced. Such a requirement means that the backgrounds come from processes involving either $b$-quarks or $c$-quarks. The same-sign requirement further reduces these backgrounds since, with only two heavy-flavour quarks in the event, same-sign leptons are possible only through flavour oscillations. Our signal events will have missing transverse momentum in the form of the LSPs and, if the spectrum is not compressed, it is possible to require large $\slashed{p}_{T}$ with a rather good signal acceptance. Heavy flavour events rarely satisfy this requirement: in order to have neutrinos with significant transverse momentum, the quarks in the hard process must have large $p_T$, and the cross section falls quickly with increasing $p_{T}$. Also, $t\overline{t}$ events with both tops decaying hadronically can kinematically mimic the signal events.

\subsection{Event generation procedure}
For the detailed analysis we choose the three Benchmark Points (BPs) given in Table \ref{tab:benchmark}. While they differ slightly in their mass spectrum, their main difference is in the sneutrino lifetimes. BP-I represents a ``typical'' benchmark for an EW scale seesaw, with a decay width corresponding to a mean decay length around $1$~{\rm mm}.
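The correspondence between widths and decay lengths quoted here is just $c\tau=\hbar c/\Gamma$. As an illustration, a minimal sketch of this conversion (in Python; it is not part of our simulation chain, and the widths are those of Table \ref{tab:benchmark}):
\begin{verbatim}
# Mean decay length from a total width: c*tau = hbar*c / Gamma,
# with hbar*c = 1.973e-16 GeV*m. Minimal sketch, not analysis code.
HBARC_GEV_M = 1.973269804e-16  # GeV * m

def ctau_mm(width_gev):
    return HBARC_GEV_M / width_gev * 1e3  # mean decay length in mm

for gamma in (1.6e-13, 8.5e-13, 9e-14):  # BP-I, BP-II, BP-III widths
    print(f"Gamma = {gamma:.1e} GeV -> c*tau = {ctau_mm(gamma):.2f} mm")
# -> roughly 1.2 mm, 0.23 mm and 2.2 mm, matching the quoted lengths
\end{verbatim}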
The second benchmark has a shorter lifetime, with a mean decay length of less than $0.3$~{\rm mm}, while the third one has a mean decay length around $2$~{\rm mm}, which is about as long as one can get without going to very compressed spectra. For such compressed cases any leptons would be soft and triggering the event would be more difficult; we refer the interested reader to \cite{Fukuda:2019kbp,Bhattacherjee:2020nno} for recent proposals addressing these issues. Regarding the constraints on the spectrum, the higgsinos need to be heavier than about $160$~GeV for a mass splitting of $10$~GeV \cite{Sirunyan:2018iwl}. Since $m_{H}>2m_{\tilde{N}}>2m_{\tilde{H}}$, the heavy Higgses need to be beyond $400$~GeV. However, the production cross section falls off rather quickly beyond $500$~GeV. In this range $\tan \beta$ needs to be low to avoid the constraints from $H\rightarrow \tau^{+}\tau^{-}$ searches \cite{Sirunyan:2018zut,Aad:2020zxo}. For our BPs we have $2<\tan\beta < 3$, as this both evades the experimental constraints and gives a large BR($H\rightarrow \tilde{N}\tilde{N}$).

We simulate 100,000 signal events and $\mathcal O(10^{7})$ events each for the $t\bar{t}$ and $b\bar{b}$ backgrounds using {\tt MadGraph5 v2.6.6} \cite{Alwall:2011uj} at LO. Parton showering and hadronisation are modelled through {\tt Pythia v8.2} \cite{Sjostrand:2014zea} and fast detector simulation is obtained by {\tt Delphes v3.3.3} \cite{deFavereau:2013fsa} with the ATLAS card. We use a modified version of the default ATLAS card to implement the impact parameter smearing effects. The event rates are then corrected to NLO accuracy with a $k$-factor of 2 for the signal \cite{Spira:1993bb,Spira:1995rr,Muhlleitner:2006wx} and to next-to-next-to-leading order (NNLO) accuracy with a $k$-factor of 1.8 for the two dominant backgrounds \cite{Czakon:2011xx, Aliev:2010zk, Catani:2020kkl}\footnote{Note that, as pointed out in Ref.~\cite{Catani:2020kkl}, at high transverse momenta of the bottom quarks, large logarithmic terms of the form $\ln(\frac{p^b_T}{m_b})$ become important and need to be resummed properly while estimating the NNLO cross section for the $b\bar{b}$ process. For our study, bottom quarks are pair produced with a minimum $p^b_T$ = 200 GeV, so we make a conservative choice of $k$-factor = 1.8.}. Further, in order to generate events efficiently, we demand that the top quarks decay hadronically and that the $b$-hadrons, obtained after parton showering and hadronisation of the $b$-quarks, decay through leptonic final states. We generate two sets of $b\bar b$ samples by varying the $p_T$ of the bottom quarks at the generation level. We find that the one with the generation level cut $p^{b}_{T, {\rm min}}$ = 200 GeV has better sensitivity. In Table \ref{tab:events}, we show the details of the event generation of the individual signal and background samples.
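The event weight factors quoted in Table \ref{tab:events} are simply $\sigma\mathcal{L}/N_{\rm gen}$, i.e., the number of physical events represented by each simulated one. A minimal sketch (Python; illustrative only):
\begin{verbatim}
# Event weight = sigma * L / N_generated at L = 137/fb.
# Minimal sketch; cross sections are taken from Table (tab:events).
LUMI_FB = 137.0

def event_weight(sigma_pb, n_generated):
    return sigma_pb * 1e3 * LUMI_FB / n_generated  # 1 pb = 1000 fb

print(event_weight(0.0666, 100_000))    # BP-I signal    -> ~0.09
print(event_weight(369.0, 10_000_000))  # hadronic ttbar -> ~5.1
\end{verbatim}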
\begin{table}[!htb]
\begin{center}
\begin{tabular}{ |c|c|c|c| }
\hline
Observable & BP-I & BP-II & BP-III \\ \hline \hline
Lightest Higgs mass & 125.0 & 125.2 & 125.6 \\ \hline
2nd Higgs mass & 338.2 & 322.1 & 370.4 \\ \hline
3rd Higgs mass & 462.2 & 484.0 & 483.6 \\ \hline
Lightest pseudoscalar Higgs mass & 259.0 & 256.8 & 261.7 \\ \hline
2nd pseudoscalar Higgs mass & 446.9 & 470.2 & 468.1 \\ \hline \hline
Lightest sneutrino mass & 219.4 & 219.9 & 228.4 \\ \hline
Lightest CP-odd sneutrino mass & 220.0 & 219.8 & 229.2 \\ \hline \hline
Lightest chargino mass & 185.8 & 177.6 & 205.1 \\ \hline
Lightest neutralino mass & 177.3 & 168.2 & 196.0 \\ \hline
Next-to-lightest neutralino mass & 200.3 & 193.2 & 218.6 \\ \hline \hline
BR($h_3 \to \tilde{\nu}_1 \tilde{\nu}_1$) (in \%) & 5.3 & 4.9 & 4.9 \\ \hline
BR($A_3 \to \tilde{\nu}_1 \tilde{\nu}_1^\prime$) (in \%) & 1.2 & 1.3 & 1.6 \\ \hline
BR($\tilde{\nu}_1 \to \ell \tilde{\chi}^{\pm}_1$) (in \%) & 48.2 & 48.6 & 45.3 \\ \hline
BR($\tilde{\nu}_1 \to \nu \tilde{\chi}^{0}_1$) (in \%) & 51.8 & 51.4 & 54.7 \\ \hline
$\Gamma(\tilde{\nu}_1)$ (GeV) & $1.6 \times 10^{-13}$ & $8.5 \times 10^{-13}$ & $9 \times 10^{-14}$ \\ \hline
\end{tabular}
\caption{Details of the BPs (all masses in GeV). The leptonic BRs include electrons, muons and taus.}
\label{tab:benchmark}
\end{center}
\end{table}

\begin{table}[!htb]
\begin{center}
\begin{tabular}{ |c|c|c|c| }
\hline
Process & Cross section (pb) & Events generated & Event weight factor \\ \hline
Signal (BP-I) & 0.0666 & 100,000 & 0.09 \\ \hline
Signal (BP-II) & 0.0558 & 100,000 & 0.08 \\ \hline
Signal (BP-III) & 0.0508 & 100,000 & 0.07 \\ \hline
$t\bar{t}$ (hadronic) & 369.0 & 10,000,000 & 5.1 \\ \hline
$b\bar b$ ($p^{b}_{T, {\rm min}}$ = 30 GeV) & 1183654.3 & 5,000,000 & 32432.1 \\ \hline
$b\bar b$ ($p^{b}_{T, {\rm min}}$ = 200 GeV) & 378.0 & 10,000,000 & 5.2 \\ \hline
\end{tabular}
\caption{Event simulation details at $\sqrt s = 13 ~{\rm TeV}$ and $\mathcal L = 137~{\rm fb}^{-1}$.}
\label{tab:events}
\end{center}
\end{table}

\subsection{Definition of displaced objects}
We now provide the details of the observables used to probe the displaced signal events.
\begin{itemize}
\item \underline{Displaced leptons}: The isolated leptons ($\ell = e, \mu$) with $p_{T}(\ell)>10$~GeV and $|\eta(\ell)| < 2.5$ must satisfy $|{d_\perp}|> 0.2~{\rm mm}$ \cite{CMS:2014hka, Aad:2019tcc}, where $d_{\perp}$ is the transverse impact parameter relative to the primary vertex. Lepton isolation is achieved by demanding that the angular separation between the lepton and any jet, $\Delta R (\ell, \mathrm{jet})$, be greater than 0.4. Additionally, we demand that the leptons carry at least 80\% (90\%) of the transverse momentum within a cone of radius $R = 0.5$ in the case of a muon (electron). Note that we have used a modified version of the default ATLAS card available within {\tt Delphes} to implement the impact parameter smearing effects and obtain the displaced leptons.
\item \underline{DVs using displaced tracks}: The tracks used in the DV reconstruction must satisfy the following requirements: $p_{T}>1$~GeV and $|{d_\perp}| > 2~{\rm mm}$. Further, the significance of $d_\perp$ with respect to the beam axis ({\it i.e.}, $|d_\perp|$ divided by its uncertainty $\sigma_{d_\perp}$) should be at least 4 \cite{Sirunyan:2018pwn,Aad:2019kiz,Aad:2019tcc}. This final requirement on the track $\frac{|d_\perp|}{\sigma_{d_\perp}}$ improves the identification of displaced tracks associated with the DVs. We collect the displaced tracks and construct the DV.
To combine the tracks, we use the truth information of the vertices ({\it i.e.}, the vertex position) obtained from the detector emulator and merge tracks if $|\Delta X (t_i, t_j)| < 0.001$, $|\Delta Y (t_i, t_j)| < 0.001$ and $|\Delta Z (t_i, t_j)| < 0.001$, where $t_i$ denotes the $i$-th displaced track. The invariant mass of a DV is calculated using the summed 4-momenta of the associated tracks, {\it i.e.}, $m^2_{\rm DV} = {(\sum E_i)}^2 - {(\sum {\vec p}_i)}^2$, where the sum runs over the tracks associated with the DV \cite{Aad:2019kiz}.
\item \underline{Jets and displaced jets}: The jets are constructed from calorimeter tower elements using {\tt Fastjet v3.3.2} \cite{Cacciari:2011ma} and the anti-$k_T$ jet clustering algorithm \cite{Cacciari:2008gp} with jet radius $R = 0.5$. We demand that the jets satisfy $p_{T}>20$~GeV and $|\eta| < 3.0$. For signal events, the hadronic decay of the charginos leads to displaced hadronic final states. Note that long-lived hadrons ({\it e.g.}, $b$-hadrons, $c$-hadrons) as well as soft particles coming from the prompt decay of the displaced hard processes are also present in the events. So, we calculate the angular separation $\Delta R$ between the jet and the displaced tracks and then demand that a displaced jet have at least two displaced tracks satisfying $\Delta R (j,t) < 0.4$. The displaced jets are constructed following Refs. \cite{Nemevsek:2018bbt, LLPtalk}. We check that, for signal events, the final state stable objects are not energetic enough to pass the jet $p_T$ threshold and to provide significant separation from the background events; therefore, we do not consider displaced jets in the further analysis.
\end{itemize}

\subsection{Distribution of different observables}
Here we show the distributions of several kinematic variables relevant for the collider analysis. All the histograms are drawn for events which satisfy the basic selections on the leptons and jets discussed in the previous section. Distributions are scaled to $\mathcal L = 137 ~{\rm fb}^{-1}$ of integrated luminosity at the $\sqrt s = 13$ TeV run of the LHC.

\begin{figure}[htb!]
\centering
\includegraphics[scale=0.3]{nlep.png}
\includegraphics[scale=0.3]{nlep_dv.png}
\caption{Lepton multiplicity: all (left) and displaced (right).}
\label{fig:nlep}
\end{figure}

\begin{figure}[htb!]
\centering
\includegraphics[scale=0.3]{dxy_l1.png}
\includegraphics[scale=0.3]{dxy_l2.png}
\caption{Transverse impact parameter ($d_{\perp}$) of the leading (left) and sub-leading (right) lepton.}
\label{fig:dxy}
\end{figure}

\begin{figure}[htb!]
\centering
\includegraphics[scale=0.3]{m_dv1.png}
\caption{Leading DV mass.}
\label{fig:mdv}
\end{figure}

In the left panel of Figure \ref{fig:nlep}, we show the multiplicity of isolated leptons present in the signal and background events. The right panel displays the same distribution but for the isolated displaced leptons with minimum transverse impact parameter $d_{\perp} > 0.2 ~{\rm mm}$. The $d_{\perp}$ distributions of the leading (in $p_T$) two leptons are shown in Figure \ref{fig:dxy}. Significant overlap is observed, especially at small values of $d_{\perp}$. As mentioned in the previous section, we construct the DVs using the displaced tracks and calculate the mass of each DV. In Figure \ref{fig:mdv}, we plot the mass of the leading (in mass) DV. It is evident that the presence of high $p_T$ displaced tracks, associated with the displaced leptons originating from the sneutrino decay, results in DVs with relatively larger masses.
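The DV mass defined above is just the invariant mass of the summed track four-momenta. A minimal sketch of this computation (Python; illustrative only, with the tracks treated as massless):
\begin{verbatim}
# Invariant mass of a displaced vertex from its associated tracks:
# m_DV^2 = (sum E_i)^2 - |sum p_i|^2. Tracks are treated as
# massless, so E = |p|. Inputs are (px, py, pz) in GeV.
import math

def dv_mass(tracks):
    E  = sum(math.sqrt(px*px + py*py + pz*pz) for px, py, pz in tracks)
    px = sum(t[0] for t in tracks)
    py = sum(t[1] for t in tracks)
    pz = sum(t[2] for t in tracks)
    m2 = E*E - (px*px + py*py + pz*pz)
    return math.sqrt(max(m2, 0.0))  # guard against round-off

# Example: a hard displaced lepton plus two soft hadronic tracks
print(dv_mass([(25.0, 3.0, 10.0), (2.0, -1.0, 0.5), (1.5, 0.8, -0.3)]))
\end{verbatim}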
Therefore, we can control the background by selecting events with a DV mass greater than about 5 GeV. Also, the distribution has a kinematical endpoint at $m_{\tilde{N}}-m_{\tilde{\chi}^{\pm}}$, so it can be used to estimate the sneutrino mass once the chargino mass is known.

\subsection{Event selection and signal significance}
After studying the distributions of several kinematic observables, we choose the following selection cuts, optimised for our process of interest.
\begin{itemize}
\item C1: Two same-sign same-flavour leptons (electrons or muons) satisfying the basic lepton selection criteria.
\item C2: The leading (in $p_T$) lepton must satisfy $p_{T}(\ell_{1})>25$~GeV.
\item C3: The subleading (in $p_T$) lepton must satisfy $p_{T}(\ell_{2})>15$~GeV.
\item C4: Veto on a third lepton with $p_{T}(\ell_{3})>15$~GeV.
\item C5: For opposite-sign same-flavour lepton pairs, veto di-lepton invariant masses around the $Z$ mass, {\it i.e.}, require $m_{\ell^{\pm}\ell^{\mp}} \notin [80,100]$~GeV.
\item C6: Select events with $\slashed{p}_{T} > 30$~GeV.
\item C7: Both leptons must have a transverse impact parameter $d_{\perp} > 0.2~{\rm mm}$.
\end{itemize}
Although our main backgrounds come from heavy-flavour jets, we did not impose a $b$-veto in this study, so as to remain free of uncertainties related to $b$-tagging when demonstrating the viability of our approach. As the displacement of a secondary vertex is a key input for $b$-tagging algorithms, the background displacement distributions would be affected by imposing a $b$-veto, and the signal acceptance of such a veto is also uncertain. If the experimental collaborations were to perform this type of analysis, they might be able to further improve the background rejection with the use of a $b$-tagger.

The complete cutflow is presented in Table \ref{tab:cutflow}. The signal significance ($S/\sqrt{S+B}$), calculated at $\sqrt s = 13 ~{\rm TeV}$ and $\mathcal L = 137~{\rm fb}^{-1}$, is shown in Table \ref{tab:signi}. We may see that the displacement requirement rejects almost all of the signal of BP-II. For such a case the prompt signature can still be visible; this is actually BP3 of \cite{Moretti:2019yln}, for which a cut-based analysis gave a $\sim 3\sigma$ excess. From Table \ref{tab:cutflow} it is interesting to note that, even though the cuts C4 and C5 do not reduce the dominant backgrounds, they are important in reducing sub-dominant backgrounds such as $WZ$, $ZZ$, $tW$, $tZ$ and $t\bar tZ$ production, where both prompt and displaced leptons can be present.
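For concreteness, a minimal sketch of the selection C1--C7 as a boolean filter (Python; the flat event record and its field names are ours, not the actual analysis code):
\begin{verbatim}
# Sketch of the C1-C7 selection on a flat event record. Field names
# (leptons, met, pt, eta, phi, charge, flavour, d0) are illustrative.
import math

def invariant_mass(a, b):
    # massless-lepton approximation from (pt, eta, phi)
    return math.sqrt(2.0 * a["pt"] * b["pt"] *
                     (math.cosh(a["eta"] - b["eta"]) -
                      math.cos(a["phi"] - b["phi"])))

def passes_selection(event):
    leps = sorted(event["leptons"], key=lambda l: l["pt"], reverse=True)
    if len(leps) < 2:
        return False
    l1, l2 = leps[0], leps[1]
    # C1: two same-sign, same-flavour leptons
    if l1["charge"] != l2["charge"] or l1["flavour"] != l2["flavour"]:
        return False
    # C2, C3: pT thresholds on the leading and subleading lepton
    if l1["pt"] < 25.0 or l2["pt"] < 15.0:
        return False
    # C4: veto a third lepton with pT > 15 GeV
    if len(leps) > 2 and leps[2]["pt"] > 15.0:
        return False
    # C5: Z-mass veto on opposite-sign same-flavour pairs
    for i in range(len(leps)):
        for j in range(i + 1, len(leps)):
            a, b = leps[i], leps[j]
            if a["charge"] != b["charge"] and a["flavour"] == b["flavour"]:
                if 80.0 < invariant_mass(a, b) < 100.0:
                    return False
    # C6: missing transverse momentum > 30 GeV
    if event["met"] < 30.0:
        return False
    # C7: both selected leptons displaced by more than 0.2 mm
    return abs(l1["d0"]) > 0.2 and abs(l2["d0"]) > 0.2
\end{verbatim}
The weighted event counts surviving C7 then directly give the significance $S/\sqrt{S+B}$ quoted in Table \ref{tab:signi}.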
\begin{table}[!htb]
\begin{center}
\begin{tabular}{ |c|c|c|c|c|c|c| }
\hline
& BP-I & BP-II & BP-III & $t\bar{t}$ & $b\bar b$ (30 GeV) & $b\bar b$ (200 GeV) \\ \hline
C1 & 420.4 & 354.9 & 160.6 & 2637.8 & 2.7 $\times 10^7$ & 440.2 \\ \hline
C2 & 354.7 & 335.8 & 71.9 & 857.0 & 2.4 $\times 10^6$ & 362.5 \\ \hline
C3 & 315.3 & 309.1 & 57.6 & 384.9 & 1.2 $\times 10^6$ & 176.1 \\ \hline
C4 & 314.7 & 307.8 & 56.9 & 384.9 & 1.2 $\times 10^6$ & 176.1 \\ \hline
C5 & 314.7 & 307.3 & 56.9 & 384.9 & 1.2 $\times 10^6$ & 176.1 \\ \hline
C6 & 265.8 & 270.4 & 49.6 & 123.2 & 32432.1 & 150.2 \\ \hline
C7 & 35.9 & 1.7 & 13.5 & 5.1 & 0 & 5.2 \\ \hline
\end{tabular}
\caption{Cutflow table ($\sqrt s = 13 ~{\rm TeV}$ and $\mathcal L = 137~{\rm fb}^{-1}$).}
\label{tab:cutflow}
\end{center}
\end{table}

\begin{table}[!htb]
\begin{center}
\begin{tabular}{ |c|c|c|c|c|c| }
\hline
& Signal (S) & B ($t\bar{t}$) & B ($b\bar b$) & B (total) & Significance = $S/\sqrt{S+B}$ \\ \hline
BP-I & 35.9 & 5.1 & 5.2 & 10.3 & 5.3 \\ \hline
BP-II & 1.7 & 5.1 & 5.2 & 10.3 & 0.5 \\ \hline
BP-III & 13.5 & 5.1 & 5.2 & 10.3 & 2.8 \\ \hline
\end{tabular}
\caption{Signal significances estimated at $\sqrt s = 13 ~{\rm TeV}$ and $\mathcal L = 137~{\rm fb}^{-1}$.}
\label{tab:signi}
\end{center}
\end{table}

Before we proceed to extract the neutrino Yukawa couplings, we discuss a few kinematic observables which can be used, in addition to cuts C1-C6, to further reduce the SM backgrounds. For example, one can tighten the $d_\perp$ requirement on the two leading leptons, especially the sub-leading one, which for signal events has larger $p_T$ and longer decay length (see the plots in the upper panel of Figure \ref{fig:aftercuts}). Another important quantity is the ratio of the missing transverse momentum to the square root of the scalar sum $H_T$, defined as $\alpha = \frac{\slashed{p}_T}{\sqrt{H_T}}$. Signal events have large missing transverse energy and, therefore, relatively larger values of $\alpha$. These additional observables provide us with an extra handle to minimise the SM backgrounds and improve the sensitivity to sneutrino events.

\begin{figure}[htb!]
\centering
\includegraphics[scale=0.3]{dxy_l1_cut.png}
\includegraphics[scale=0.3]{dxy_l2_cut.png}
\includegraphics[scale=0.3]{xalpha2_cut.png}
\caption{Distribution of the transverse impact parameter ($d_{\perp}$) of the leading (top-left) and sub-leading (top-right) lepton. Bottom: the distribution of $\alpha = \frac{\slashed{p}_T}{\sqrt{H_T}}$. All the figures are drawn for events satisfying cuts C1-C6.}
\label{fig:aftercuts}
\end{figure}

\section{Introduction}
\label{intro}
The Standard Model (SM) has survived nearly all experimental tests. Besides the strong evidence for Dark Matter (DM), the only unexplained experimental phenomenon is neutrino oscillations \cite{Athanassopoulos:1997pv,Fukuda:1998mi,Aguilar:2001ty,Ahn:2002up,Abe:2011sj,An:2012eh}, which in turn imply that neutrinos have a tiny, but non-zero, mass. The standard explanation is that neutrino masses are generated through a seesaw mechanism \cite{Minkowski:1977sc,Konetschny:1977bn,Mohapatra:1979ia,Magg:1980ut,Schechter:1980gr,Foot:1988aq}, where the effective dimension five operator responsible for neutrino masses is suppressed by a heavy mass scale. The range of possible seesaw scales varies from eV-scale sterile neutrinos to those with masses of the order $10^{14}$~GeV.
Taking the type-I seesaw as an example, the neutrino Yukawa couplings are $\mathcal{O}(1)$ at the upper end of this range, while they are tiny at the lower end. One interesting option is that the seesaw scale is around the Electro-Weak (EW) scale, which requires the neutrino Yukawa couplings to be somewhat smaller than the electron Yukawa coupling. As the Right-Handed (RH) neutrinos are singlets under the SM gauge group, the only interactions they have are the Yukawa couplings, which, being small, can lead to Displaced Vertices (DVs) \cite{Basso:2008iv,Helo:2013esa,Izaguirre:2015pga,Accomando:2016rpc,Liu:2019ayx}.

Supersymmetry (SUSY) is a well-motivated framework for Beyond the SM (BSM) physics. SUSY is the only space-time symmetry that can be added to the Poincar\'e algebra \cite{Haag:1974qh} and it relates particles with different spins, specifically, bosons to fermions. This relation leads to the cancellation of the quadratic divergences emerging in the calculation of the Higgs boson mass in the SM (the so-called hierarchy problem). Furthermore, SUSY may induce the convergence of the Electro-Magnetic (EM), weak and strong couplings at some high energy scale, unlike the SM: a precondition for a theory embedding the unification of forces. Finally, if one removes the baryon and lepton number violating couplings by requiring $R$-parity, one gets as a by-product of SUSY a DM candidate in the form of the Lightest Supersymmetric Particle (LSP). Needless to say then, in order to pursue BSM physics that addresses all the aforementioned SM flaws, SUSY is one of the possible paths to follow, so long as it embeds a mechanism for neutrino mass generation. In doing so, it is necessary to go beyond its minimal realisation and consider non-minimal ones \cite{Book}, wherein the gauge and/or Higgs structures are enlarged with respect to the case of the Minimal Supersymmetric Standard Model (MSSM).

An attractive framework in this respect is the Next-to-MSSM (NMSSM), wherein a singlet Superfield, containing an extra singlet Higgs state and its SUSY counterpart, is added to the MSSM particle content. This way, the so-called $\mu$-problem \cite{Ellwanger:2009dp} of the MSSM is overcome. If such a construct is supplemented with RH neutrinos and their SUSY counterparts, a viable model for neutrino mass generation based on a type-I seesaw is established. By adopting this theoretical framework, we will show that the heavy Higgs states belonging to it (both CP-even and -odd) can have significant couplings to RH sneutrinos, even in the alignment limit, as required by measurements of the SM-like Higgs boson discovered at the Large Hadron Collider (LHC) in 2012. Furthermore, given that in this model RH (s)neutrinos and higgsinos get their masses through the same mechanism, one can expect these SUSY states to be rather degenerate; yet, suitable soft SUSY-breaking mass terms can render the RH sneutrinos somewhat heavier than the higgsinos. In such a case a RH sneutrino can decay through its Yukawa interactions either visibly, to a charged lepton ($\ell = e, \mu$) and a chargino, or invisibly, to a neutrino and a neutralino. Therefore, a typical signal that may emerge at the LHC in this theoretical scenario is heavy Higgs mediated production of a sneutrino pair, eventually yielding a di-lepton signature together with soft jets and missing transverse energy.
As the seesaw mechanism has a source of lepton number violation, we get both opposite-sign and same-sign dileptons, the latter giving better discovery potential due to smaller backgrounds. Remarkably, the aforementioned mass degeneracy may make the sneutrinos long-lived, so that the visible tracks of this signature may be displaced, which in turn implies a smaller background with respect to the one affecting similar prompt signatures \cite{Moretti:2019yln,Moretti:2020zbn}. Here we shall prove that this signature with two DVs can be extracted at the LHC and, moreover, we shall also show how the kinematics of the displaced (visible) tracks could allow for a measurement of the (s)neutrino Yukawa couplings, thereby enabling one to probe the underpinning neutrino mass generation dynamics.

Our paper is organised as follows. In the next section we introduce our theoretical framework. In the following one we illustrate the properties of the track displacements and how these can be related to the discussed Yukawa couplings. Then we perform our MC analysis aimed at extracting both the relevant signature and its underlying (s)neutrino mass parameters. We then conclude.

\section{NMSSM with RH neutrinos}
We shall study the NMSSM with RH neutrinos. It is based on the following Superpotential \cite{Kitano:1999qb,Cerdeno:2008ep}
\begin{equation}\label{eq:superpotential}
\begin{split}
W&=y^{u}_{ij}(Q_{i}\cdot H_{u})U^{c}_{j}-y^{d}_{ij}(Q_{i}\cdot H_{d})D^{c}_{j}-y^{\ell}_{ij}(L_{i}\cdot H_{d})E^{c}_{j}+y^{\nu}_{ij}(L_{i}\cdot H_{u})N^{c}_{j}\\
&\quad+\lambda S(H_{u}\cdot H_{d})+\frac{\lambda_{Ni}}{2}SN_{i}^{c}N_{i}^{c}+\frac{\kappa}{3} S^{3}.
\end{split}
\end{equation}
As mentioned, this model cures some problems of the MSSM: the $\mu$-term is generated through the Vacuum Expectation Value (VEV) of the scalar component of the singlet Superfield $S$, and we also have a mechanism for neutrino mass generation. As the $\mu$-term should not be too far above the EW scale, the RH neutrino masses are at the EW scale too, hence the neutrino Yukawa couplings need to be very small, of the order $10^{-7}$. We shall look at a scenario with light higgsinos and RH (s)neutrinos roughly degenerate with them. The soft SUSY-breaking masses should then make the RH sneutrinos heavier than the higgsinos, so that the decays $\tilde{N}\rightarrow \tilde{\chi}^{0}\nu,\tilde{\chi}^{\pm}\ell^{\mp}$ are kinematically open. The decay width is set by the neutrino Yukawa couplings which, as mentioned, are tiny and thus may lead to DVs.

This model has two important features: EW Symmetry Breaking (EWSB) generates both a lepton-number violating mass term for the RH sneutrinos and a coupling between the Higgs states and the sneutrinos. The coupling in the alignment limit is (neglecting doublet-singlet mixing)
\begin{eqnarray}
C_{h\tilde{N}\tilde{N}} & = & \pm\frac{1}{2}\lambda\lambda_{N}v\sin 2\beta,\\
C_{H\tilde{N}\tilde{N}} & = & \pm\frac{1}{2}\lambda\lambda_{N}v\cos 2\beta,
\end{eqnarray}
where the upper (lower) sign is for CP-even (CP-odd) sneutrinos. If $\tan \beta > 1.5$, the heavy Higgs state has a stronger coupling to sneutrinos. If $\lambda$ and $\lambda_{N}$ are large, RH sneutrinos can be pair produced through the heavy Higgs portal and they can be detected through lepton-number violating signatures \cite{Moretti:2019yln,Moretti:2020zbn}. The singlet field is essential in achieving this, as in the MSSM with RH neutrinos the Higgs--RH sneutrino couplings would not exist.
As intimated, our aim is to use DVs both to improve the background rejection and to allow for a quantitative estimate of the neutrino Yukawa couplings.

\section{From displacements to Yukawa couplings}
\label{sec:yukawa1}
We shall assume that the only kinematically available decay channels for the sneutrino are $\tilde{N}\rightarrow \tilde{\chi}^{0}\nu,\tilde{\chi}^{\pm}\ell^{\mp}$ and that the neutralino and chargino are higgsino-like. The sneutrino-lepton-chargino vertex factor is
\begin{equation}
\lambda_{\tilde{N}\ell^{+}\tilde{\chi}^{-}}=\frac{i}{\sqrt{2}}y^{\nu}_{ab}V_{12}\frac{1+\gamma_{5}}{2},
\end{equation}
where $a,b$ refer to the flavours of the charged lepton and the sneutrino, respectively, and $V_{12}$ gives the higgsino component of the lightest chargino. For CP-odd sneutrinos we only need the replacement $\frac{i}{\sqrt{2}}\rightarrow \frac{1}{\sqrt{2}}$. This leads to the partial width (neglecting the lepton mass)
\begin{equation}\label{eq:charginowidth}
\Gamma(\tilde{N}_{i}\rightarrow \ell^{\pm}_{j}\tilde{\chi}^{\mp})=\frac{(m_{\tilde{N}}^{2}-m_{\tilde{\chi}^{\pm}}^{2})^{2}}{16\pi m_{\tilde{N}}^{3}}|y^{\nu}_{ji}|^{2}|V_{12}|^{2}.
\end{equation}
We shall assume $|V_{12}|=1$ in the following. If we neglect the mixing between Left-Handed (LH) and RH neutrinos\footnote{This mixing introduces a vertex factor $\lambda_{N}N_{j5}$ times the RH neutrino component of the light eigenstates. The elements of the neutrino left-right mixing matrix are of the same order as the elements of $y^{\nu}$. As the singlino component $N_{j5}$ is not negligible, this correction to the vertex factor is numerically only an order of magnitude smaller than $|y^{\nu}N_{j3}|$, so it does introduce an $\mathcal{O}(10\%)$ correction to the partial width.\label{mixing}}, the sneutrino-neutrino-neutralino vertex factor is
\begin{equation}
\lambda_{\tilde{N}\nu\tilde{\chi}^{0}}=-\sum_{a}\frac{i}{\sqrt{2}}\left( N_{j3}^{*}P^{*}_{ab}y^{\nu}_{bc}\frac{1-\gamma_{5}}{2}+ N_{j3}P_{ab}y^{\nu *}_{bc}\frac{1+\gamma_{5}}{2}\right),
\end{equation}
where $P$ is the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix, $b$ and $c$ are the flavours of the neutrino and sneutrino, respectively, while $N_{j3}$ gives the $\tilde{H}_{u}$ component of the neutralino $j$. For CP-odd sneutrinos we again replace $\frac{i}{\sqrt{2}}\rightarrow \frac{1}{\sqrt{2}}$ in the prefactor. This gives the partial width
\begin{equation}\label{eq:neutralinowidth}
\Gamma(\tilde{N}_{i}\rightarrow \nu\tilde{\chi}^{0}_{j})=\sum_{k}\frac{(m_{\tilde{N}}^{2}-m_{\tilde{\chi}^{0}}^{2})^{2}}{16\pi m_{\tilde{N}}^{3}}|y^{\nu}_{ki}|^{2}|N_{j3}|^{2}.
\end{equation}
Here $j$ can take the values $1,2$ and $|N_{j3}|^{2}\simeq 1/2$. From Equations (\ref{eq:charginowidth}) and (\ref{eq:neutralinowidth}) we see that the total decay width is proportional to $\sum_{i} |y^{\nu}_{ij}|^{2}$. Hence a measurement of the sneutrino lifetime will give an estimate of this sum of squared Yukawa couplings, while the ratios $|y_{ik}/y_{jk}|^{2}$ are proportional to BR$(\tilde{N}_{k}\rightarrow \ell_{i}^{\pm}\tilde{\chi}^{\mp})/{\rm BR}(\tilde{N}_{k}\rightarrow \ell_{j}^{\pm}\tilde{\chi}^{\mp})$. Measuring the lifetime and the ratios of the Branching Ratios (BRs) would then give us the absolute values of the individual neutrino Yukawa couplings. We shall assume that the chargino and neutralinos would have been observed already and that their masses would be known reasonably well.
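As a numerical illustration of Equation (\ref{eq:charginowidth}), the following minimal sketch (Python; the value $y=3\times10^{-7}$ is an illustrative assumption, not a fit to the BPs) shows how a seesaw-sized Yukawa coupling translates into a macroscopic decay length:
\begin{verbatim}
# Sketch: partial width Gamma(sneutrino -> lepton + chargino) from
# Eq. (charginowidth): (m_sn^2 - m_ch^2)^2 / (16 pi m_sn^3)
# * |y|^2 * |V12|^2, and the corresponding decay-length scale.
import math

HBARC_GEV_M = 1.973269804e-16  # GeV * m

def width_lep_chargino(m_sn, m_ch, y, v12=1.0):
    return (m_sn**2 - m_ch**2)**2 / (16 * math.pi * m_sn**3) \
           * abs(y)**2 * abs(v12)**2

g = width_lep_chargino(m_sn=219.4, m_ch=185.8, y=3e-7)  # BP-I-like masses
print(f"Gamma (single flavour) = {g:.2e} GeV")          # ~3e-14 GeV
print(f"c*tau (this channel alone) = {HBARC_GEV_M/g*1e3:.1f} mm")
# The total width also sums over flavours and neutralino channels,
# which shortens the actual mean decay length.
\end{verbatim}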
From the decay mode $\tilde{N}\rightarrow \ell^{\pm}\tilde{\chi}^{\mp}$ and the subsequent chargino decay $\tilde{\chi}^{\pm}\rightarrow \tilde{\chi}^{0}+$ hadrons, the invariant mass of the visible decay products should have an endpoint at $m_{\tilde{N}}-m_{\tilde{\chi}^{0}}$, from which the sneutrino mass can be estimated. We also expect that the mass of the heavy Higgs bosons would be known from one of their fermionic decay modes. In order to extract the lifetime of the sneutrino in its rest frame, we need to measure the position of the secondary vertex and the relativistic $\gamma$-factor of the sneutrino, which can be computed, \textit{e.g.}, through $\gamma_{\tilde{N}}=E_{\tilde{N}}/m_{\tilde{N}}$. This can be done on an event-by-event basis as follows. In the Center-of-Mass (CM) frame, energy-momentum conservation gives us
\begin{eqnarray}
E_{\tilde{N}}^{*} & = & \frac{m_{H}}{2},\\
p_{\tilde{N}}^{*} & = & \sqrt{\frac{m_{H}^{2}}{4}-m_{\tilde{N}}^{2}}.\label{eq:labmomentum}
\end{eqnarray}
To determine the sneutrino momenta in the laboratory frame, we may use the following facts.
\begin{itemize}
\item The initial transverse momentum of the heavy Higgs boson is nearly zero as long as there are no hard prompt jets in the event.
\item The three-momentum of each sneutrino is directed from the primary vertex to the corresponding secondary one, since the sneutrino is a neutral particle and its trajectory is not curved.
\end{itemize}
\begin{figure}[htb]
\begin{center}
\includegraphics[width=\textwidth]{kinematics.png}
\end{center}
\caption{The kinematics can be solved as shown. As the sneutrinos are neutral, their momentum vectors are aligned with the displacements of the secondary vertices from the primary vertex. We then construct the momenta in the lab frame by using these direction vectors and the fact that the transverse components are equal. Then we may boost to the center-of-mass frame, where we know the total momentum $p^{*}$ from Eq. (\ref{eq:labmomentum}).\label{fig:kinematics}}
\end{figure}
We draw vectors from the primary vertex to the secondary vertices as shown in Figure \ref{fig:kinematics}. Next we scale them so that $|p_{T,\tilde{N}_{1}}|=|p_{T,\tilde{N}_{2}}|$, based on the small initial transverse momentum. Due to the difference of the initial state gluon momenta, the final state will have a net momentum in the $z$-direction, but we may boost to the CM frame simply by adding equal vectors to both laboratory frame momenta. Once we are in the CM frame, we know $p_{\tilde{N}}^{*}$ from Equation (\ref{eq:labmomentum}) and hence may solve for $p_{T,\tilde{N}}$. This then allows us to compute
\begin{equation}
p_{\tilde{N}}=p_{T,\tilde{N}}\sqrt{1+\frac{d_{z}^{2}}{d_{\perp}^{2}}},
\end{equation}
where $d_{z}$ and $d_{\perp}$ are the longitudinal and transverse displacements, respectively. In Figure \ref{fig:dv}, we show a representative image of the trajectory of a long-lived particle in a 2-dimensional plane (from \cite{Allanach:2016pam}).
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.5\textwidth]{DV_image.png}
\end{center}
\caption{Schematic view in the transverse plane of a long-lived particle decay.}
\label{fig:dv}
\end{figure}
We can then deduce $E_{\tilde{N}}=\sqrt{p_{\tilde{N}}^{2}+m_{\tilde{N}}^{2}}$ and from this $\gamma_{\tilde{N}}$.
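A minimal sketch of the last step of this event-by-event reconstruction (Python; geometry only, with the transverse momentum assumed already solved from the CM-frame constraint, and all names ours):
\begin{verbatim}
# Given the reconstructed transverse momentum pT of a sneutrino and
# the measured displacement components (d_perp, d_z), recover the
# full momentum, energy, gamma factor and rest-frame lifetime.
import math

C_M_PER_S = 2.99792458e8

def rest_frame_lifetime(pt, d_perp_mm, d_z_mm, m_sn):
    p = pt * math.sqrt(1.0 + (d_z_mm / d_perp_mm)**2)  # |p| from geometry
    e = math.sqrt(p * p + m_sn * m_sn)
    gamma = e / m_sn
    beta = p / e
    d = math.sqrt(d_perp_mm**2 + d_z_mm**2) * 1e-3     # displacement in m
    return d / (gamma * beta * C_M_PER_S)              # tau = d/(gamma v)

# Illustrative numbers: pT = 150 GeV, 1.2 mm transverse, 0.8 mm
# longitudinal displacement, BP-I-like sneutrino mass.
print(rest_frame_lifetime(150.0, 1.2, 0.8, m_sn=219.4))  # tau in s
\end{verbatim}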
As the total displacement is \begin{equation} d=\gamma_{\tilde{N}} v \tau, \end{equation} where $v$ is the velocity and $\tau$ the lifetime of the sneutrino, we may solve for the lifetime \begin{equation} \tau =\frac{d}{\gamma_{\tilde{N}}\sqrt{1-\frac{1}{\gamma_{\tilde{N}}^{2}}}}. \end{equation} The lifetimes follow an exponential probability distribution \begin{equation} P(\tau)\propto e^{-\tau/\tau_{0}}, \end{equation} where $\tau_{0}$ is the average lifetime. Plotting the lifetime distribution with a suitable binning on a logarithmic scale should then produce a straight line, whose slope gives the inverse of the average lifetime\footnote{We refer to \cite{Banerjee:2019ktv,Liu:2020vur} for elaborated discussions on determining the lifetime of long-lived particles at the LHC.}. When Next-to-Leading Order (NLO) and even higher order corrections are taken into account, the Higgs bosons will have a finite initial transverse momentum $p_{T}^{H}$. This will introduce a relative error in the measurement of the momenta, which is of the order of $p_{T}^{H}/p_{\tilde{N}}$. As we are interested in events that do not have a large longitudinal boost, \textit{i.e.}, all decay products are in the barrel region of the detector, we typically expect the $\gamma$-factor of the sneutrino to be rather small. In such a case the error on $\gamma_{\tilde{N}}$ will be of the order of $(p_{T}^{H}/m_{\tilde{N}})^{2}$. If we veto hard jets, a typical $p_{T}^{H}$ is about $10$~GeV \cite{Harlander:2014uea} and the mass of the sneutrino is $200$~GeV or more, so the error from the finite initial transverse momentum on the lifetime measurement is very small. \section{Conclusions and outlook} We have studied sneutrino pair production via the heavy Higgs portal in the NMSSM with RH neutrinos. In the model both higgsinos and RH neutrinos get their masses through the singlet VEV, so we expect them to be at the EW scale. If RH sneutrinos are heavier than higgsinos, they can decay to both a charged lepton and a chargino or a neutrino and a neutralino, the former giving a visible signature at colliders. Since the RH sneutrinos are gauge singlets, their decay modes are dictated by the neutrino Yukawa couplings, which are tiny in an EW scale seesaw model. The smallness of the decay width together with the lepton number violation arising from the RH (s)neutrino mass term leads to a signature with same-sign dileptons emerging from displaced vertices together with missing transverse momentum. The SM backgrounds for such a signal topology are therefore low. We have indeed performed an analysis proving that the study of the emerging signatures with DVs can lead to the extraction of the neutrino Yukawa couplings, whose salient features are as follows. \begin{itemize} \item We searched for displaced leptons and jets, with the leptons originating from sneutrino decays. Several BPs of the aforementioned BSM scenarios were introduced, with varied displacement lengths, and studied by MC analysis assuming a simple cut-based selection providing a good signal-to-background ratio already at the end of the 13 TeV run of the LHC. \item The kinematic distribution of the displaced vertex mass gives an endpoint which can be used to estimate the sneutrino mass once the chargino mass is known. \item Additional variables can make the regions of phase-space effectively background-free, thereby implying a better handle on the extraction of the intervening Yukawa couplings.
\item The decay width of the RH sneutrinos is dictated by the sum of the squares of the Yukawa couplings, so a measurement of the sneutrino lifetime would allow one to measure this sum. The individual Yukawa couplings can then be extracted from the sneutrino BRs to different lepton flavours, once corrected with identification efficiencies. \item We ultimately showed that, for lifetimes corresponding to mm scale displacements, lifetimes can be measured with reasonable accuracy, which can then give the absolute values of the Yukawa couplings with a $10\%$ accuracy, so long as sufficiently large data samples can be accrued for the signal, like, {\it e.g.}, at the HL-LHC. \end{itemize} In short, such an approach can lead to the extraction of the underlying neutrino dynamics parameters from the study of sneutrino signatures at hadronic colliders, albeit under a specific SUSY paradigm. The results obtained here from events with DVs therefore complement those obtained in Refs.~\cite{Moretti:2019yln,Moretti:2020zbn} for the case of prompt signatures. \section{Extraction of Yukawa couplings} In order to extract the Yukawa couplings, we need to know the spectrum. The heavy Higgs masses can be measured reasonably accurately from their fermionic decay channels. The higgsinos would also be discovered from other searches and those would give an estimate for the masses of the neutralinos and charginos. The sneutrino mass can be estimated by measuring the invariant mass of the DV related to the sneutrino decay to a lepton and a chargino. The invariant mass distribution has an endpoint at $m_{\tilde{N}}-m_{\tilde{\chi}^{\pm}}$ (see Figure \ref{fig:mdv}), which will give us the sneutrino mass once we know the chargino mass. We assume that we are able to get an essentially background-free sample (say, over $95\%$ purity) using the cuts given in the previous section (both those given in Table \ref{tab:cutflow} and those discussed at the end of the section) and the use of $b$-tagging. We then apply the event-by-event correction to the displacements described in Section \ref{sec:yukawa1} to get the actual lifetimes. As both the CP-even and CP-odd Higgs bosons contribute to the signature, we need to pick which mass we use in the boost correction. As discussed in \cite{Moretti:2019yln}, the CP-even Higgs state usually has the largest BR to sneutrinos, although the amount of available phase space also has an impact. If the CP properties of the two heavy Higgses have been measured, the CP-even mass would give the better estimate; otherwise it would be a reasonable choice to pick the heavier one due to the larger phase space. This ambiguity leads to a systematic error in the measurement of the Yukawa couplings, which is of the order of $10\%$, as estimated by studying the impact of choosing the other Higgs mass on the lifetime distribution. The background originating from heavy flavour hadrons that survives the cuts is heavily boosted (with the $p_{T}$ of a typical $b$-jet being $200$~GeV or more), leading to an average displacement greater than $5$~mm. As the boost correction assumes the particle to be heavy, the lifetime of heavy flavour hadrons will be overestimated and will on average correspond to mean decay lengths around $5$~mm, although the exponential distribution of course gives events at all displacements. We shall use a binning of $0.1$~mm.
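As an illustration of this step, the following sketch fits a binned lifetime distribution with the $0.1$~mm binning adopted here; the generated decay lengths stand in for the boost-corrected measurements, and the assumed true $c\tau_{0}$, sample size and fit window are our own choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the boost-corrected proper decay lengths (mm);
# c*tau0 = 1 mm is an assumed true value at the mm scale.
ctau_true = 1.0
ctau = rng.exponential(ctau_true, size=500)

# Histogram with the 0.1 mm binning used in the analysis.
edges = np.arange(0.0, 10.0 + 0.1, 0.1)
counts, _ = np.histogram(ctau, bins=edges)
centers = 0.5 * (edges[:-1] + edges[1:])

# Straight-line fit to log(counts) in an assumed 0.5-5 mm window;
# the slope is -1/(c*tau0). A Poisson likelihood fit would be the
# more careful choice, but this shows the principle.
mask = (centers > 0.5) & (centers < 5.0) & (counts > 0)
slope, _ = np.polyfit(centers[mask], np.log(counts[mask]), 1)
print(f"fitted c*tau0 = {-1.0 / slope:.2f} mm")
\end{verbatim}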
As the total number of background events at the HL-LHC is expected to be around $200$ (scaled from Table \ref{table:backgrounds} to $3000$~fb$^{-1}$) and these are distributed into some $100$ bins or more, there will not be many background events in a single bin. The background distribution may be estimated, \textit{e.g.}, by looking at events with small vertex mass and $\slashed{p}_{T}/\sqrt{H_{T}}$, and then subtracted from the distribution. The signal region includes a cut on the transverse distance of the leptons requiring $d_{\perp}> 0.2$~mm. The distribution of lifetimes close to the cut will be modified and hence we put a lower bound on the lifetime when fitting. We fit the exponential to the distribution in the interval $0.5$~mm$ <c\tau<5$~mm. This leads to rather robust results if the true lifetime is around $c\tau \simeq 1$~mm but, if the lifetime is shorter, the number of events in the fit region becomes so low that the statistical error increases. In such a case it might be reasonable to use this method just to find an upper limit for the lifetime and derive a lower limit for the neutrino Yukawa couplings. An upper limit for the Yukawa couplings can be obtained from the constraint on the sum of neutrino masses, which can be expressed as $\sum m_{\nu}=\mathrm{Tr}(m^{\nu})=\sum_{i,j}|y^{\nu}_{ij}|^{2}v^{2}\sin^{2}\beta/(2m_{N_{j}})$. Once we had studied the fitting method with one BP, we analysed the other BPs blindly: one of the authors generated the events and another one analysed them, and the result was then compared to the (initially unknown) input. As long as the decay lengths were at the mm scale, a reasonable agreement was achieved. \begin{figure} \begin{center} \includegraphics[width=0.75\textwidth]{lifetimefit_BP1.png} \end{center} \caption{The fit of an exponential to the lifetime distribution of the sneutrino for BP-I. The amount of data corresponds to $3000$~fb$^{-1}$ at $\sqrt{s}=14$~TeV.\label{fig:sneutrinofit}} \end{figure} We show in Figure \ref{fig:sneutrinofit} a fit of the sneutrino lifetime for BP-I. We fit a simple exponential to the lifetime distribution. The fit with a number of events corresponding to the HL-LHC energy ($\sqrt{s}=14$~TeV) and luminosity ($3000$~fb$^{-1}$) gives us $\sum_{i}|y^{\nu}_{i1}|^{2}=(2.62 \pm 0.13 \pm 0.26) \times 10^{-13}$, where the first error is statistical and the second one the theory error (assumed to be $10\%$) from the ambiguity in choosing the CP-even or CP-odd Higgs mass as discussed above. On top of these there will be experimental systematic errors, which are mostly related to measuring the primary and secondary vertices\footnote{As this is a simple lifetime measurement, many typical sources of systematic error, like parton distribution functions, do not matter.}. The true value is $2.28\times 10^{-13}$, which is slightly more than one standard deviation off. For BP-II the event sample is so small that the Yukawa couplings cannot be reliably estimated. The best fit value would have been $3.57\times 10^{-13}$ but, as the actual average decay length is shorter than the lower end of our fitting region, this estimate is based on only a few events and individual outliers can significantly change the result. The estimate was off by a factor of two. With such a short decay length it makes sense only to give a lower bound on the Yukawa couplings. We give the results for our three BPs in Table \ref{tb:yukawas}.
When doing the estimate, we added $10\%$ to the invisible decay width of Equation (\ref{eq:neutralinowidth}) due to the mixing effect of LH and RH neutrinos (see footnote on page \pageref{mixing}). \begin{table} \begin{center} \begin{tabular}{|l|c|c|c|} \hline & BP-I & BP-II & BP-III\\ \hline Measured $\sum_{i} |y^{\nu}_{i1}|^{2}$ & $2.62\pm 0.13\pm 0.26$ & $>2.94$ & $3.73\pm 0.67 \pm 0.37$\\ Actual $\sum_{i} |y^{\nu}_{i1}|^{2}$ & $2.28$ & $7.81$ & $2.40$\\ \hline \end{tabular} \end{center} \caption{The sums of squared neutrino Yukawa couplings from displacements and the actual input values in units of $10^{-13}$. The first uncertainty is statistical and the second one is a $10\%$ theoretical uncertainty related to the ambiguity between the CP-even/odd Higgs states as discussed in the text. For BP-II the decay length was so short that we only quote a lower limit on the Yukawa couplings.\label{tb:yukawas}} \end{table} From this value we can then determine the absolute values of the individual Yukawa couplings via the number of events with a given lepton flavour, which will be proportional to $\epsilon_{i} |y^{\nu}_{i1}|^{2}$, where $\epsilon_{i}$ is the efficiency of identifying a lepton of flavour $i$. Since the error on $|y^{\nu}|^{2}$ will often be below $20\%$ with high enough statistics, the error on the Yukawa couplings themselves will be around $10\%$. Hence even this rather simple fitting method can give a reasonably good estimate of the Yukawa couplings.
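To close the chain, a minimal sketch of this last step is given below; the event counts and identification efficiencies are hypothetical placeholders, chosen only to show how the efficiency-corrected flavour fractions convert the fitted sum into individual couplings.
\begin{verbatim}
import numpy as np

# Hypothetical inputs: the fitted sum of squared Yukawa couplings and
# assumed per-flavour signal counts with identification efficiencies.
y2_sum = 2.62e-13
counts = np.array([60.0, 45.0, 20.0])  # N_e, N_mu, N_tau (assumed)
eff = np.array([0.70, 0.80, 0.40])     # epsilon_i (assumed)

# N_i is proportional to epsilon_i * |y_i1|^2, so dividing out the
# efficiencies gives the flavour fractions of the squared couplings.
weights = counts / eff
fractions = weights / weights.sum()

for name, y2 in zip(["e", "mu", "tau"], fractions * y2_sum):
    print(f"|y_{name}1| ~ {np.sqrt(y2):.2e}")
\end{verbatim}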
{ "timestamp": "2021-05-24T02:15:18", "yymm": "2012", "arxiv_id": "2012.14034", "language": "en", "url": "https://arxiv.org/abs/2012.14034" }
"\\section{Introduction}\nThe Supermembrane theory was originally expected to describe the microsc(...TRUNCATED)
{"timestamp":"2021-11-09T02:38:23","yymm":"2012","arxiv_id":"2012.14069","language":"en","url":"http(...TRUNCATED)
"\\section{Introduction}\n\\label{sec:intro}\n\nThe dependence of physical observables on the topolo(...TRUNCATED)
{"timestamp":"2021-02-08T02:19:44","yymm":"2012","arxiv_id":"2012.14000","language":"en","url":"http(...TRUNCATED)
"\\section{Introduction}\n\\label{sec:2}\nThe statistican observes a dataset of $n$ i.i.d. realizat(...TRUNCATED)
{"timestamp":"2021-02-08T02:18:15","yymm":"2012","arxiv_id":"2012.14118","language":"en","url":"http(...TRUNCATED)
"\\section{Introduction}\nIn traditional ad-hoc retrieval, queries and documents are represented by (...TRUNCATED)
{"timestamp":"2020-12-29T02:21:23","yymm":"2012","arxiv_id":"2012.14005","language":"en","url":"http(...TRUNCATED)
{"timestamp":"2020-12-29T02:23:39","yymm":"2012","arxiv_id":"2012.14063","language":"en","url":"http(...TRUNCATED)
"\\section{Introduction}\\label{sec: Intro}\n\nLarge-scale natural flows such as atmospheric ones, a(...TRUNCATED)
{"timestamp":"2021-07-12T02:21:50","yymm":"2012","arxiv_id":"2012.14027","language":"en","url":"http(...TRUNCATED)
"\\section{Introduction}\r\n\\label{thintro}\r\n\r\n Shot noise of nonequilibrium quantized charge\r(...TRUNCATED)
{"timestamp":"2020-12-29T02:25:05","yymm":"2012","arxiv_id":"2012.14102","language":"en","url":"http(...TRUNCATED)
End of preview. Expand in Data Studio

No dataset card yet

Downloads last month
5