\section{Introduction} Given their central role in many number theoretic applications, it is no surprise that Weyl sums and their properties have been subject to thorough investigation over the years. For a collection ${\boldsymbol \varphi}$ of linearly independent polynomials ${\varphi}_1, \ldots, {\varphi}_r \in \mathbb Z[X]$ with respective degrees $k_1, \ldots, k_r$ we consider the {\it Weyl sums} $$ f_{{\boldsymbol \varphi}}({\boldsymbol \alpha})= \sum_{1 \leqslant x \leqslant P} \mathbf{e}({\alpha}_1 {\varphi}_1(x) + \ldots + {\alpha}_r {\varphi}_r(x)), $$ where $\mathbf{e}(z) = \exp(2 \pi i z)$ and ${\boldsymbol \alpha}=({\alpha}_1, \ldots, {\alpha}_r)$. We also write $\mathbb T = \mathbb R / \mathbb Z$ for the unit torus, and refer to the end of this section for other notational conventions we use. Whilst it is well known that $f_{{\boldsymbol \varphi}}({\boldsymbol \alpha})$ can be of order $P$ when the entries of ${\boldsymbol \alpha}$ lie in the neighbourhood of fractions with a small denominator, the general expectation has always been that for a ``typical'' ${\boldsymbol \alpha}$ one should have the upper and lower bounds \begin{equation} \label{sqrt-bd} P^{1/2} \ll |f_{{\boldsymbol \varphi}}({\boldsymbol \alpha})| \ll P^{1/2 + o(1)}.
\end{equation} This question has recently been investigated in work by Chen and Shparlinski~\cite{CS1}, which in particular implies that the bounds~\eqref{sqrt-bd} hold for a subset of ${\boldsymbol \alpha} \in \mathbb T^r$ of full Lebesgue measure whenever the polynomials ${\boldsymbol \varphi}$ have a non-vanishing Wronskian~\cite[Corollary~2.2]{CS1}. A particularly strong version of this result, applicable to the situation when ${\varphi}_j(X)=X^j$ for $1 \leqslant j \leqslant r$, is available in subsequent work~\cite{CKMS}, where the interested reader will also find a more comprehensive bibliography on the subject. In practical applications it is often necessary to control the size of $f_{{\boldsymbol \varphi}}({\boldsymbol \alpha})$ on linear slices of $\mathbb T^r$, where some of the ${\alpha}_i$ are fixed to lie in some set of full measure, whereas the remaining ones range over the entire unit interval. Such situations typically arise in ``minor arcs'' settings where some, but not all, entries of ${\boldsymbol \alpha}$ may have a good rational approximation and thus lie in an anticipated exceptional set. This problem has recently been studied in a very general setup by Chen and Shparlinski~\cite{CS1} (see also~\cite{CS3}), refining an approach developed by Wooley~\cite{TDW16}.
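Although not part of the argument, the square-root heuristic in~\eqref{sqrt-bd} is easy to probe numerically. The following Python sketch (our illustration; the polynomials, the sample point and the range $P$ are arbitrary choices) evaluates a two-dimensional Weyl sum directly from the definition:

```python
import cmath, math, random

def weyl_sum(alphas, phis, P):
    """f_phi(alpha) = sum_{1 <= x <= P} e(alpha_1 phi_1(x) + ... + alpha_r phi_r(x))."""
    total = 0j
    for x in range(1, P + 1):
        phase = sum(a * phi(x) for a, phi in zip(alphas, phis))
        total += cmath.exp(2j * math.pi * phase)
    return total

random.seed(1)
P = 10**4
phis = (lambda x: x**2, lambda x: x**3)      # phi_1(X) = X^2, phi_2(X) = X^3
alphas = (random.random(), random.random())  # a "typical" point of T^2
f = abs(weyl_sum(alphas, phis, P))
print(f, math.sqrt(P))
```

At a randomly chosen point of $\mathbb T^2$ the printed modulus is typically within a small factor of $P^{1/2}$, in line with~\eqref{sqrt-bd}.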
Their main result~\cite[Theorem~2.1]{CS1} asserts that whenever the polynomials ${\boldsymbol \varphi}$ have a non-vanishing Wronskian, then for almost all $({\alpha}_1, \ldots, {\alpha}_d) \in \mathbb T^d$ one has bounds of the shape $$ \sup_{{\alpha}_{d+1}, \ldots, {\alpha}_r \in \mathbb T} |f_{{\boldsymbol \varphi}}({\alpha}_1, \ldots, {\alpha}_r)| \ll P^{1/2 + \Gamma(d, {\boldsymbol \varphi}) + o(1)}, $$ where $\Gamma(d, {\boldsymbol \varphi})$ is a non-negative function depending on the degrees of the polynomials ${\boldsymbol \varphi}$; for its precise definition we refer to~\cite{CS1}. Unfortunately, even though the bound of~\cite[Theorem~2.1]{CS1} gives strong results in a number of configurations and notably implies that one can take $\Gamma(d, {\boldsymbol \varphi})=0$ for all admissible $r$-tuples of polynomials when $d=r$, in many other cases the bounds it furnishes do not beat even the trivial bound. In such situations, one has to resort to the more classical methods employing bounds of Weyl or Hua type and their subsequent generalisations (see~\cite[Lemma~2.4 and Theorem~5.2]{V:HL} for the former, and \cite[Lemma~2.5]{V:HL} as well as the results of \cite[Section~14]{TDW19} for the latter).
Bounds of this nature also provide the crucial input in the work by Erdo\u{g}an and Shakan~\cite{ES}, as well as in recent work by Chen and Shparlinski~\cite{CS2} in which, motivated by links to certain questions on classical partial differential equations, they establish upper bounds along linear slices for the exponential sum associated with pairs of polynomials ${\varphi}_1, {\varphi}_2$ differing by a linear term. Several related results have recently been obtained by Barron~\cite{Barr}. However, as these bounds use Vinogradov's mean value theorem (see~\cite[Theorem~1.1]{BDG} or~\cite[Theorem~1.1]{TDW19}) as their main input, which is inefficient for Weyl sums whose degree exceeds their dimension, they are inherently unable to provide bounds stronger than $O(P^{1-c_k})$ for some positive parameter $c_k$ of size $c_k \asymp k^{-2}$. Whilst exponents of this magnitude are not believed to be sharp in general, Brandes et al.~\cite{BPPSV} have recently shown that one cannot hope to have $\Gamma(d, {\boldsymbol \varphi})=0$ for all choices of polynomials with non-vanishing Wronskian when $d < r$.
In particular, for the choice ${\varphi}_1(X) = X^k + X$ and ${\varphi}_2(X)=X^k$ with $k=2$ or $k=3$, they show in~\cite[Theorem~1.3]{BPPSV} that for all ${\alpha}_2 \in \mathbb R \setminus \mathbb Q$ and any $\tau> 0$ there exist arbitrarily large values of $P$ for which we have the lower bound \begin{equation} \label{BPPSV-bd} \sup_{{\alpha}_1 \in \mathbb T} |f_{{\boldsymbol \varphi}}({\alpha}_1, {\alpha}_2)| \gg P^{3/4 -\tau}, \end{equation} and that for almost all ${\alpha}_2 \in \mathbb T$ this bound can be matched by a corresponding upper bound $$ \sup_{{\alpha}_1 \in \mathbb T} |f_{{\boldsymbol \varphi}}({\alpha}_1, {\alpha}_2)|\ll P^{3/4 + o(1)}. $$ To our knowledge, this is the first indication in the literature that the expectation that~\eqref{sqrt-bd} should hold for all ${\boldsymbol \alpha}$ on a linear slice of $\mathbb T^r$ may be too naive. In~\cite{BPPSV} the authors speculate that the same behaviour as in~\eqref{BPPSV-bd} might continue to hold for the polynomials ${\varphi}_1(X) = X^k + X$ and ${\varphi}_2(X)=X^k$ with $k\ge 4$. The goal of this paper is therefore to extend the bound in~\eqref{BPPSV-bd} to more general polynomials, allowing also for higher degrees.
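Before stating our result, we record a quick numerical sanity check of the exponent $3/4$ (an illustration of ours, not needed for the proofs). For ${\varphi}(X)=X^2$, take an ${\alpha}_2$ that by construction admits a good prime-denominator approximation $a_2/p$, and place ${\alpha}_1$ near a fraction $a_1/p$ chosen to make the associated complete sum large; all parameters in the Python sketch below are ad hoc:

```python
import cmath, math

def f_sum(alpha1, alpha2, P, k=2):
    """f(alpha_1, alpha_2) = sum_{1 <= x <= P} e(alpha_1 (x^k + x) + alpha_2 x^k)."""
    return sum(cmath.exp(2j * math.pi * (alpha1 * (x**k + x) + alpha2 * x**k))
               for x in range(1, P + 1))

def S(p, a, c, k=2):
    """Complete rational sum S(p; a, c) = sum_{x=1}^p e((a x + c x^k)/p)."""
    return sum(cmath.exp(2j * math.pi * ((a * x + c * x**k) % p) / p)
               for x in range(1, p + 1))

p, a2 = 101, 37                 # ad hoc prime denominator and numerator
alpha2 = a2 / p + 0.5 / p**2    # alpha_2 within p^{-2} of a_2/p by construction
P = p * p                       # mimics the choice P^{1+tau} = p^2 for small tau

# choose a_1 making the complete sum S(p; a_1, a_1 + a_2) large
a1 = max((a for a in range(1, p) if (a + a2) % p != 0),
         key=lambda a: abs(S(p, a, a + a2)))
alpha1 = a1 / p - (alpha2 - a2 / p)   # beta_1 = -beta_2

big = abs(f_sum(alpha1, alpha2, P))
print(big, P**0.75, P**0.5)
```

With these parameters $|f({\alpha}_1,{\alpha}_2)|$ comes out of the same rough order as $P^{3/4}$, far above the square-root scale $P^{1/2}$.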
\begin{theorem} \label{thm:main} Let ${\varphi} \in \mathbb Z[X]$ be a polynomial of degree $k \ge 2$, and set \begin{equation} \label{expsum} f({\alpha}_1, {\alpha}_2) = \sum_{1 \leqslant x \leqslant P} \mathbf{e}({\alpha}_1 ({\varphi}(x)+x) + {\alpha}_2 {\varphi}(x)). \end{equation} There exists a set ${\mathscr C} \subseteq \mathbb T$ of full Lebesgue measure such that for any $\tau>0$ and all ${\alpha}_2 \in {\mathscr C}$ there exist arbitrarily large values of $P$ for which one has the bound $$ \sup_{{\alpha}_1 \in \mathbb T} |f({\alpha}_1, {\alpha}_2)| \gg P^{3/4-\tau}. $$ \end{theorem} Thus, whenever ${\boldsymbol \varphi} = ({\varphi}_1, {\varphi}_2)$ is a pair of polynomials differing only by a linear term, the associated exponential sum is substantially larger than originally anticipated on almost all linear slices of $\mathbb T^2$. The fact that in our result the polynomials under consideration differ only by a linear term seems to play a role, since linear exponential sums do not exhibit square root cancellation in the same manner as their cousins of higher degree do. It is therefore an interesting question to investigate whether the behaviour observed in Theorem~\ref{thm:main} persists, perhaps in a weaker form, even when the polynomials occurring in the exponential sum differ by more than a linear term. Unlike in~\cite{BPPSV}, our result in Theorem~\ref{thm:main} is not complemented by a corresponding upper bound.
The methods presented in~\cite{BPPSV} could conceivably be adapted to provide such upper bounds even in the more general case considered in the manuscript at hand for all ${\alpha}_2$ lying in a subset of full measure of a suitably defined set of ``major arcs''. This would be sufficient when $k \leqslant 3$, as then the entire unit interval $\mathbb T$ can be covered by such major arcs. For higher degrees, these methods fail and we have no improvements over the existing results of~\cite{CS2}. Nonetheless, we believe that these difficulties are of a technical rather than fundamental nature, and consequently it seems likely that the exponent $3/4$ should be sharp in those cases also. Our argument is a streamlined version of that presented in~\cite[Section~8]{BPPSV}, which deals with the case of ${\varphi}(X)=X^k$ for $k=2, 3$. However, we augment this approach by two classical results. Firstly, we appeal to a bound of Bombieri~\cite[Theorem~6]{Bom} on exponential sums along a curve over a finite field, and secondly we make use of a result of Duffin and Schaeffer~\cite[Theorem~I]{DuSch} which allows us to restrict to the case where the Diophantine approximations we consider have a prime denominator. \textbf{Notation.} Throughout the paper, we make use of the following conventions. When $x \in \mathbb R$ we denote by $\|x \|$ the distance from $x$ to the nearest integer. Moreover, $P$ always denotes a large positive number, and the letter $p$ is reserved for primes. We use the Vinogradov notations `$\ll$', `$\gg$' and the equivalent Bachmann--Landau notation `$O(\cdot)$' liberally, and here the implied constants are allowed to depend on ${\boldsymbol \varphi}$ and $\tau$, but never on $P$ or ${\boldsymbol \alpha}$.
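As a concrete rendering of one of these conventions (a minimal Python sketch of ours), the distance to the nearest integer can be computed as follows:

```python
def dist_to_nearest_int(x):
    """||x||: the distance from the real number x to the nearest integer."""
    return abs(x - round(x))

print(dist_to_nearest_int(2.3), dist_to_nearest_int(-0.6), dist_to_nearest_int(0.5))
```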
\section{Assembling the toolbox} \subsection{Approximations by rational exponential sums} In our examination of the exponential sum~\eqref{expsum} we rely heavily on our understanding of the closely related sum $$ g({\alpha}, {\gamma}) = \sum_{1 \leqslant x \leqslant P} \mathbf{e}({\alpha} x + {\gamma} {\varphi}(x)) $$ and its associated approximations. Indeed, it is apparent from the respective definitions of these exponential sums that \begin{equation} \label{f=g} f({\alpha}_1, {\alpha}_2) = g({\alpha}_1, {\alpha}_1+{\alpha}_2). \end{equation} When ${\varphi}(X)=X^k$, the latter of these has been studied in~\cite{BR} and~\cite{BPPSV}, but it turns out that in the situation we are mainly interested in the pure power may be replaced by a more general polynomial. For $q \in \mathbb N$, $a, c \in \mathbb Z$ and ${\beta} \in \mathbb R$ set $$ S(q; a,c)= \sum_{x=1}^q \mathbf{e} \left( \frac{a x + c {\varphi}(x)}{q} \right) \qquad \text{ and } \qquad I({\beta}) = \int_0^P \mathbf{e}({\beta} x) \,{\rm d} x, $$ and recall that for non-vanishing ${\beta}$ we can compute \begin{equation} \label{I-bd} |I({\beta})| =P \left| \frac{\sin(\pi \beta P)}{\pi \beta P} \right| \ll \min \{P, \| {\beta} \|^{-1} \}, \end{equation} while a classical Weil bound (see, for example,~\cite[Corollary~II.2F]{Schmidt}) shows that when $p$ is prime and $p \nmid c$ one has \begin{equation} \label{S-bd} |S(p; a,c)| \leqslant (k-1)p^{1/2}.
\end{equation} We then have the following straightforward modification of~\cite[Theorem~3]{BR} or~\cite[Theorem~4.1]{V:HL}. \begin{lemma}\label{L1} Let ${\varphi} \in \mathbb Z[X]$ be a polynomial of degree $k \ge 2$. Suppose that ${\gamma} \in \mathbb Q$ with ${\gamma} = c/p$ in lowest terms, where $p$ is a prime number, and fix $a \in \mathbb Z$ such that $|{\alpha} - a/p| \leqslant (2p)^{-1}$. Set then ${\beta} = {\alpha} - a/p$. In this notation we have $$ g({\alpha},{\gamma}) = p^{-1} S(p; a,c) I({\beta}) + O(p^{1/2} \log p). $$ \end{lemma} \begin{proof} Just like in the proof of~\cite[Theorem~4.1]{V:HL}, we sort the variables into residue classes, which we then encode in terms of exponential sums. Thus $$ g({\alpha}, {\gamma})=\frac{1}{p} \sum_{b=1}^p S(p; a+b, c) g({\beta}-b/p,0). $$ By~\cite[Lemma~4.2]{V:HL} we have $g({\beta}-b/p,0) = I({\beta}-b/p)+O(1)$, so that together with~\cite[Lemma~2.2]{BPPSV} we find that $$ g({\alpha}, {\gamma})=\frac{1}{p} \sum_{b=1}^p S(p; a+b, c) I({\beta}-b/p) + O(p^{1/2}). $$ Since $p \nmid c$, it follows upon deploying~\eqref{I-bd} and~\eqref{S-bd} that $$ g({\alpha}, {\gamma}) - p^{-1} S(p; a,c) I({\beta}) \ll p^{-1/2} \sum_{b=1}^{p-1} \|{\beta} - b/p \|^{-1} \ll p^{1/2} \log p, $$ where in the last step we use that $$ \| {\beta}-b/p \| \ge (2p)^{-1} $$ for all $b \not \equiv 0 \pmod p$. This completes the proof.
\end{proof} \subsection{A lower bound on rational exponential sums} Our second main tool shows that the complete exponential sum $S(p; a,c)$ cannot be smaller than $p^{1/2}$ too often. It is useful to denote the leading coefficient of ${\varphi}$ by $\mathrm{lc}({\varphi})$. \begin{lemma}\label{L2} Let $p$ be a prime satisfying $p>(2k)^4$ with $p \nmid \mathrm{lc}({\varphi})$, and let $c \in \mathbb Z$ with $p \nmid c$. Then there exists $a \in \mathbb Z$ with $p \nmid (a+c)$ such that $$ |S(p; a, a+c)| \ge \tfrac13 p^{1/2}. $$ \end{lemma} \begin{proof} When $k=2$, the desired result follows from classical bounds on Gauss sums, so it is sufficient to consider the case when $k \ge 3$. By averaging and shifting the variable of summation, the result follows if we can show that \begin{equation} \label{eq:2nd Mom} \sum_{a=1}^{p-1} |S(p; a-c,a)|^2 \ge \tfrac{1}{3} p^2 \end{equation} for all primes $p > (2k)^4$ not dividing $\mathrm{lc}({\varphi})$. We begin by noting that \begin{align*} \sum_{a=1}^{p-1} |S(p; a-c,a)|^2 & = p \sum_{\substack{m,n = 1 \\ {\varphi}(m)+m \equiv {\varphi}(n)+n \pmod p}}^p \mathbf{e} \left(\frac{c(m-n)}{p}\right) - \left| \sum_{m=1}^p \mathbf{e} \left(\frac{cm}{p}\right) \right|^2. \end{align*} The second sum vanishes, and in the first one we make the change of variables $n=m-h$ and isolate the term corresponding to $h=0$. Hence \begin{equation} \label{pre-Bom} \sum_{a=1}^{p-1} |S(p; a-c,a)|^2 = p^2 + p \sum_{m=1}^p \sum_{\substack{h=1 \\ \Delta(m,h) \equiv 0 \pmod p}}^{p-1} \mathbf{e}(ch/p), \end{equation} where we put $$ \Delta(m,h)=({\varphi}(m+h)-{\varphi}(m) + h)/h.
$$ Upon re-inserting the term corresponding to $h=0$ and noting that all exponential sums in question take real values, we discern that \begin{align*} \sum_{m=1}^p \sum_{\substack{h=1 \\ \Delta(m,h) \equiv 0 \pmod p}}^{p-1} \mathbf{e}(ch/p) &= \sum_{\substack{m,h=1 \\ \Delta(m,h) \equiv 0 \pmod p}}^{p} \mathbf{e}(ch/p) - \sum_{\substack{m=1 \\ \Delta(m,0) \equiv 0 \pmod p}}^p 1 \\ &\leqslant \sum_{\substack{m,h=1 \\ \Delta(m,h) \equiv 0 \pmod p}}^{p} \mathbf{e}(ch/p). \end{align*} If $k \ge 2$, then $\Delta(X,Y)$ is a nontrivial polynomial in two variables of degree exactly $k-1$, so the congruence $$ \Delta(m,h) \equiv 0 \pmod p $$ defines a curve over the finite field $\mathbb F_p$. Furthermore, if $k> 1$, then $\Delta(X,Y)$ is a nontrivial polynomial of degree exactly $k-1$ with respect to $X$ with leading monomial $k \, \mathrm{lc}({\varphi})X^{k-1}$. Thus for $p> k$ and $p \nmid \mathrm{lc}({\varphi})$ the variable $h$ is not constant along this curve. We may therefore apply~\cite[Theorem~6]{Bom} and find that \begin{equation}\label{Bom-bd} \sum_{\substack{m,h=1 \\ \Delta(m,h) \equiv 0 \pmod p}}^{p} \mathbf{e}(ch/p) \leqslant \left((k-1)^2 +2(k-1) -3\right) \sqrt p + (k-1)^2 . \end{equation} Under our assumption $p>(2k)^4$, the right-hand side of~\eqref{Bom-bd} satisfies $$ \left((k-1)^2 +2(k-1) -3\right) \sqrt p + (k-1)^2 < \tfrac{2}{3}p. $$ In view of~\eqref{pre-Bom}, we derive~\eqref{eq:2nd Mom}, which is sufficient to establish the result. \end{proof} \section{Proof of the main result} The following result, going back to Duffin and Schaeffer~\cite{DuSch}, is a key ingredient in our arguments as it allows us to focus on those ${\alpha} \in \mathbb T$ whose rational approximations have prime denominators.
\begin{lemma}\label{lem: approx a/[p} There is a set ${\mathscr C} \subseteq \mathbb T$ of full Lebesgue measure such that for any $\alpha\in {\mathscr C}$ there are infinitely many approximations $$ \left| \alpha - \frac{a}{p}\right| < \frac{1}{p^2} $$ with $a \in \mathbb Z$ and $p$ a prime number. \end{lemma} \begin{proof} This is a direct application of~\cite[Theorem~I]{DuSch}; see also the remark at the top of p.~245 of that paper. \end{proof} We also remark that Lemma~\ref{lem: approx a/[p} is a special case of the Duffin--Schaeffer conjecture, recently established as a theorem by Koukoulopoulos and Maynard~\cite{KM}. We now have the wherewithal to embark on the proof of Theorem~\ref{thm:main}. Fix $\tau>0$, and let ${\alpha}_2 \in {\mathscr C}$, where ${\mathscr C}$ is as in Lemma~\ref{lem: approx a/[p}. Then we can find an arbitrarily large prime number $p$, and $a_2 \in \mathbb Z$ not divisible by $p$, that satisfy $|{\alpha}_2 - a_2/p| \leqslant p^{-2}$. For any fixed such $p$ satisfying $p>(2k)^4$ and not dividing $\mathrm{lc}({\varphi})$, define $P$ via the relation \begin{equation} \label{def-P} P^{1+\tau} = p^2. \end{equation} Lemma~\ref{L2} now guarantees the existence of an integer $a_1$ with $a_1+a_2 \not\equiv 0 \pmod p$ and having the property that \begin{equation} \label{large-S} |S(p; a_1, a_1+a_2)| \gg p^{1/2}. \end{equation} Take now ${\beta}_2 = {\alpha}_2-a_2/p$ and ${\beta}_1 = -{\beta}_2$, and put ${\alpha}_1 = a_1/p + {\beta}_1$.
Then upon recalling that ${\gamma}={\alpha}_1+{\alpha}_2$ in~\eqref{f=g}, we see that ${\gamma}=c/p$ with $c = a_1 + a_2 \not\equiv 0 \pmod p$, whereupon Lemma~\ref{L1} yields the relation $$ g({\alpha}_1, {\gamma}) = p^{-1} S(p; a_1, a_1+a_2) I({\beta}_1) + O(p^{1/2} \log p). $$ Recall now our definition of $P$ from~\eqref{def-P}. Since $|{\beta}_1| = |{\beta}_2| \leqslant p^{-2} = P^{-1-\tau}$, it follows further from~\eqref{I-bd} that $|I({\beta}_1)| = P \, (1+O(P^{-2\tau}))$, so upon inserting~\eqref{large-S} we discern that $$ |g({\alpha}_1, {\gamma})| \gg P p^{-1/2} \gg P^{3/4-\tau}. $$ In the light of~\eqref{f=g} and Lemma~\ref{lem: approx a/[p}, this establishes the desired result. \section*{Acknowledgements} The authors would like to thank James Maynard for drawing our attention to a result of Duffin and Schaeffer~\cite[Theorem~I]{DuSch} on Diophantine approximations with prime denominators. During the preparation of this manuscript, JB was supported by Starting Grant no.~2017-05110 of the Swedish Research Council (Ve\-tenskapsr{\aa}det) and IS was supported by the Australian Research Council Grant DP170100786.
\section{INTRODUCTION} \label{sec:intro} Many observations and theoretical studies over the years, and more so in the last decade, attribute major roles to jets in shaping planetary nebulae (PNe; e.g., \citealt{Morris1987, Soker1990AJ, SahaiTrauger1998, Boffinetal2012, Miszalskietal2013, Tocknelletal2014, Huangetal2016, Sahaietal2016, RechyGarciaetal2017, GarciaSeguraetal2016, Dopitaetal2018, Fangetal2018, KameswaraRaoetal2018, Lagadec2018, AliDopita2019, Derlopaeta2019, Jonesetal2019jets, Miszalskietal2019, Oroszetal2019, Scibellietal2019, Guerreroetal2020, MonrealIberoetal2020, RechyGarciaetal2020, Soker2020Galax, Tafoyaetal2020, Zouetal2020, Guerreroetal2021}, a small selection of many more papers). Observations show a link between the presence of a binary central star and shaping by jets (e.g., \citealt{Boffinetal2012, Miszalskietal2013, Miszalskietal2018a}). This link also includes post-asymptotic giant branch (AGB) stars that might not form a PN (e.g., \citealt{Thomasetal2013, Bollenetal2017, VanWinckel2017, Bollenetal2020, Bollenetal2021}). We clarify that we refer to any bipolar outflow, i.e., two opposite polar outflows with a mirror symmetry about the equatorial plane, as jets. The jets might be narrow, or the half opening angle of each jet might be large, even close to $90^\circ$. Likewise, the outflow in the jets might be continuous, periodic, or stochastic. We still refer to the polar outflow as a jet. The main aim of hydrodynamical simulations of jets in PNe is to show that jets can account for the different morphological features (e.g., \citealt{LeeSahai2004, Dennisetal2009, Leeetal2009, HuarteEspinosaetal2012, Balicketal2013, Akashietal2015, Balicketal2017, Akashietal2018, Balicketal2018, EstrellaTrujilloetal2019, RechyGarciaetal2019, Balicketal2020}). These and many other simulations have shown that jets can account for a very rich variety of morphologies.
One of the key advantages of jets is that they allow one to make use of the energy source that results from mass accretion onto the companion. They introduce axially-symmetric flows that can affect the descendant nebula in many ways, depending, among other factors, on the intensity of the jets, their duration, and when their activity phase takes place. In this study we consider the jets to be weak, of short duration, and to take place before the main nebula ejection. Our present goal is to show that jets can form `ears' in elliptical PNe. By ears we refer to two opposite protrusions from the main PN shell. Ears differ from bipolar lobes in two main properties. (1) Ears are smaller than the main inner shell from which they protrude. Most bipolar lobes are larger than the inner main shell. (2) An ear cross section (perpendicular to the symmetry axis) monotonically decreases outward, i.e., as we move from its base at the main inner shell to its tip. Most bipolar lobes, on the other hand, widen first, and then get narrower toward their tip. A third criterion distinguishes ears from plain elliptical PNe. (3) The boundary between the ears and the main nebula has a dimple (two inflection points) on each side of each ear. Like bipolar lobes, in most cases ears are along the symmetry axis of the nebula and have different emission properties or brightness, like being fainter, than the main PN shell. By their definition, ears exist only in elliptical PNe. In that regard, and related to our study, we assume that all elliptical PNe are shaped by binary interaction, mainly by a low mass main sequence companion that enters a common envelope evolution (CEE) with an AGB star (e.g., \citealt{Soker1997Rev}). We list the 10 best examples we could find of PNe with ears. We give one or two sources for the image of each PN. The images of the first two PNe with ears are from HASH (the Hong Kong/AAO/Strasbourg H$\alpha$ planetary nebula database; \citealt{PArkeretal2016}).
In K~3-24 we identify the ears protruding to the north and to the south, while in IC~289 (also \citealt{Hajianetal1997}) the ears are to the north-west and to the south-east, and they are not exactly aligned with the central star. The PN K~3-4 (\citealt{Manchadoetal1996}) is an interesting case. Firstly, the two ears are not aligned with the center of the PN, as in IC~289. Secondly, the ears are large, and just on the border between being lobes and being ears because their width (cross section) stays constant for some distance above their base. Since their length as projected on the plane of the sky is shorter than the main shell, we term them ears (or border-ears). In M~2-53 \citep{Manchadoetal1996} we identify large ears, one in the west and one in the east. The PN NGC~6905 (\citealt{Balick1987, PhillipsRamosLarios2010}) has elongated ears. We term them ears because their width (cross section) decreases monotonically to their tips. The PN NGC~3242 has two pairs of ears along the same axis (\citealt{Schwarzetal1992}). The PN NGC~6563 (\citealt{Schwarzetal1992}) has point-symmetric ears in an `S' shape. Other PNe with ears are NGC~6852 \citep{Manchadoetal1996}, Na~1 \citep{Manchadoetal1996}, and M~2-40 \citep{Manchadoetal1996}. The formation of ears in PNe might have relations to ears in some remnants of type Ia supernovae (SNe Ia). Most likely, some of these SNe Ia exploded inside a PN, i.e., a SN inside a PN (SNIP). We take the view that in remnants of SNe Ia, like in PNe, the ears are features along the polar (symmetry) axis (e.g., \citealt{TsebrenkoSoker2013}), rather than features of a dense equatorial gas (e.g., \citealt{Chiotellisetal2020}). In that respect we note that \cite{Blondinetal1996} form ears in type II supernovae by assuming a circumstellar gas with a high equatorial density into which the star explodes. They obtain polar ears, but not by the action of jets.
In section \ref{sec:numerical} we describe the three-dimensional (3D) simulations and in section \ref{sec:results} we describe our results of 17 different simulations. We do not try to fit any PN in particular, but only to derive the general structure of ears, because the parameter space (jets' properties, shell properties) is very large. In section \ref{sec:Evolution} we show the evolution with time. We summarise our results in section \ref{sec:summary}. \section{NUMERICAL SET-UP} \label{sec:numerical} \subsection{The numerical scheme and the jets} \label{subsec:Jets} We use version 4.2.2 of the hydrodynamical FLASH code \citep{Fryxell2000} with the unsplit PPM (piecewise-parabolic method) solver to perform our 3D hydrodynamical simulations. FLASH is an adaptive-mesh refinement (AMR) modular code used for solving hydrodynamics and magnetohydrodynamics problems. We do not include radiative cooling in the simulations because the interaction takes place in a dense region close to the binary system, such that some zones are optically thin while others are not. The inclusion of radiative transfer in this complicated 3D flow is too demanding. We instead vary the values of the adiabatic index $\gamma$. We employ a full 3D AMR (7 levels; $2^{9}$ cells in each direction) using a Cartesian grid $(x,y,z)$ with outflow boundary conditions at all boundary surfaces. We take the $z=0$ plane to be in the equatorial plane of the binary system, which is also the equatorial plane of the nebula. We simulate the whole space (the two sides of the equatorial plane). In most simulations the size of the grid is $(4\times 10^{16}~\rm{cm})^{3}$. In two simulations we take a grid twice as large to follow the evolution to later times. At time $t=0$ we fill the grid with a spherical wind with velocity of $v_{\rm AGB}= 20 ~\rm{km} ~\rm{s}^{-1}$ and a mass loss rate $\dot M_{\rm AGB}=10^{-6} M_\odot ~{\rm yr}^{-1}$. We term this wind a regular AGB wind.
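Although the text does not spell it out, filling the grid with a steady spherical wind of constant speed presumably corresponds to the mass-conservation density profile $\rho(r)=\dot M_{\rm AGB}/(4\pi r^2 v_{\rm AGB})$. A short Python sketch (ours; cgs constants rounded) evaluates this profile at the jet-launching radius and at the grid edge:

```python
import math

MSUN = 1.989e33   # g
YR = 3.156e7      # s
KM = 1.0e5        # cm

# regular AGB wind parameters from the set-up
mdot = 1.0e-6 * MSUN / YR    # mass-loss rate [g/s]
v_agb = 20.0 * KM            # wind speed [cm/s]

def wind_density(r_cm):
    """Steady-wind mass conservation: rho(r) = Mdot / (4 pi r^2 v)."""
    return mdot / (4.0 * math.pi * r_cm**2 * v_agb)

r_in = 4.0e14    # inner jet-launching zone [cm]
r_out = 2.0e16   # half-width of the (4e16 cm)^3 grid [cm]
print(wind_density(r_in), wind_density(r_out))   # g cm^-3 at the two radii
```

The density falls by the expected factor $(r_{\rm out}/r_{\rm in})^2$ between the launching zone and the grid edge.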
We launch the two opposite jets from the inner $4\times 10^{14} ~\rm{cm}$ zone along the $z$-axis (at $x=y=0$) and within a half opening angle of $\alpha_{\rm j}$. We choose two values of $\alpha_{\rm j}$: one representing narrow jets, as observed in many young stellar objects, and one representing wide jets, as observed in some post-AGB binary systems (e.g., \citealt{Bollenetal2021}). The injection temperature of the jets is $10^4 ~\rm{K}$, a typical temperature of warm gas. The jets are active during the time period from $t=0$ to $t_{\rm j}$ and the ejection of the dense spherical shell starts one year after $t_{\rm j}$. These time scales are comparable to the dynamical time of the CEE, which we assume is the timescale during which the companion enters the envelope. The jets' initial velocity is $v_{\rm j} = 100$ or $v_{\rm j} = 200 ~\rm{km} ~\rm{s}^{-1}$, which is about the escape velocity from a low-mass main sequence star or from a brown dwarf (the companion star). The mass-loss rate into the two jets together is $\dot M_{\rm 2j} \simeq 10^{-5} - 10^{-4} M_\odot ~\rm{yr}^{-1}$. These mass loss rates are about $0.01-0.1$ times the rates that \cite{Shiberetal2019} take. The reasons for the lower values here are that we take a lower mass companion and that the giant in our case is a more extended AGB star with a lower envelope density compared with the red giant branch model of \cite{Shiberetal2019}. For numerical reasons (to avoid very low densities) we inject a very weak slow wind in the directions where we do not launch the jets, i.e., in the sector $\alpha_{\rm j}<\theta<90^\circ$ in each hemisphere (for more numerical details see \citealt{AkashiSoker2013}).
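For orientation (our estimate, not stated in the text), the kinetic power carried by the two jets follows from $L_{\rm 2j}=\tfrac12 \dot M_{\rm 2j} v_{\rm j}^2$; the parameter combinations below bracket the quoted ranges and are our own pairing of the values:

```python
MSUN = 1.989e33   # g
YR = 3.156e7      # s
KM = 1.0e5        # cm

def jet_power(mdot_2j_msun_yr, v_j_kms):
    """Kinetic luminosity of the two jets: L = (1/2) * Mdot_2j * v_j^2 [erg/s]."""
    mdot = mdot_2j_msun_yr * MSUN / YR
    return 0.5 * mdot * (v_j_kms * KM)**2

# bracketing cases (assumed combinations of the quoted ranges)
for mdot_2j, v_j, t_j in [(1.0e-5, 100.0, 1.0), (1.0e-4, 200.0, 1.0)]:
    L = jet_power(mdot_2j, v_j)   # erg/s
    m_jets = mdot_2j * t_j        # Msun launched during the t_j-year episode
    print(f"L = {L:.2e} erg/s, jet mass = {m_jets:.1e} Msun")
```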
\subsection{The spherical dense shell} \label{subsec:shell} In most of our previous studies (e.g., \citealt{AkashiSoker2013, Akashietal2018, AkashiSoker2018}) we injected the jets into a dense spherical shell (formed by an intensive wind), which itself was embedded in a much less dense wind (formed by the regular AGB wind). Namely, the jets' active phase follows the high mass loss rate that formed the dense shell (the jets are younger than the dense PN shell). Such interactions can form large bipolar lobes with different properties. In this study we have simulated about twenty different cases where we launched jets into a dense shell. We failed to obtain ears. Namely, we could not form polar lobes that are smaller than the dense shell and that have a cross section that decreases with distance from the center (for the definition of ears see section \ref{sec:intro}). These failures led us to conduct simulations where we launch the dense shell after we launch the jets. Such a case might be, for example, when the companion accretes mass from the AGB progenitor of the PN and launches jets. Later it enters a common envelope evolution, a process that ejects the dense shell. The jets, therefore, interact with the less-dense (regular AGB) wind that preceded the ejection of the dense shell. We assume that the main sequence companion that launches the relatively weak jets is of low mass, $M_2 \simeq 0.1-0.3 M_\odot$ in most cases (it might even be a brown dwarf), and therefore after it enters the CEE it ejects an elliptical nebula rather than a bipolar nebula or a dense equatorial torus \citep{Soker1997Rev}. This assumption is compatible with the observation that ears are present mainly in elliptical PNe. In assuming that a low mass companion ejects the elliptical shell we have in mind the PN A30, which has an almost spherical morphology (not including the central knots) and has a central binary system with an orbital period of 1.06 days \citep{Jacobyetal2020}.
However, we note that in the case of K3-24 there is a dense torus. In this case we expect the companion mass to be $M_2 \ga 0.3 M_\odot$. We eject the dense (intensive) spherical wind that forms the dense shell starting one year after the end of the jet-launching episode, i.e., at $t=t_{\rm j}+1 ~\rm{yr}$, and continue with this mass loss until $t_{\rm w}=60 ~\rm{yr}$. We inject the dense wind at radius $r_{\rm w,in} = 4\times10^{14} ~\rm{cm}$. The mass loss rate and velocity of the spherical dense wind are $\dot M_{\rm w} = 10^{-3} M_\odot ~\rm{yr}^{-1}$ and $v_{\rm w} = 20 ~\rm{km} ~\rm{s}^{-1}$, respectively. In one case we inject the regular AGB wind rather than a dense wind. The simulation of a dense shell that is younger than the jets, i.e., a post-jets shell, is the main new ingredient of our study with respect to our group's previous studies. From observations we know that jets can be younger or older than the dense shell that was presumably ejected in a common envelope evolution (e.g., \citealt{Tocknelletal2014}). In most cases the age difference between the jets and the dense shell is very small and we can refer to them as coeval \citep{Guerreroetal2020}. We summarise the simulations we perform in Table \ref{Table:cases}.
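The density contrast between the dense wind and the regular AGB wind follows from mass conservation in a steady spherical wind, $\rho = \dot M / (4 \pi r^2 v)$. The sketch below is an illustrative estimate at the injection radius, not an output of the simulations:

```python
import math

MSUN = 1.989e33   # solar mass, g
YR = 3.156e7      # year, s

def wind_density(mdot_msun_yr, v_kms, r_cm):
    """Steady spherical wind density rho = Mdot / (4 pi r^2 v), in g/cm^3."""
    mdot = mdot_msun_yr * MSUN / YR                      # g/s
    return mdot / (4 * math.pi * r_cm**2 * v_kms * 1e5)

# Dense post-jets wind and regular AGB wind at r_w,in = 4e14 cm, both at 20 km/s
rho_dense = wind_density(1e-3, 20, 4e14)
rho_agb = wind_density(1e-6, 20, 4e14)
print(f"rho_dense ~ {rho_dense:.1e} g/cm^3")    # ~1.6e-14 g/cm^3
print(f"contrast = {rho_dense / rho_agb:.0f}")  # 1000, tracking the mass-loss-rate ratio
```

Since both winds share the same velocity, the factor of $10^3$ in mass loss rate translates directly into a factor of $10^3$ in density at any given radius.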
\begin{table*}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
Simulation & $\dot M_{\rm 2j}$ & $v_{\rm j}$ & $t_{\rm j}$ & $\alpha_{\rm j}$ & $\gamma$ & Figures & $i_{\rm ears}$ \\
 & $10^{-6} M_\odot ~\rm{yr}^{-1}$ & $~\rm{km} ~\rm{s}^{-1}$ & $~\rm{yrs}$ & & & & \\
\hline
S1 & $38$ & $100$ & $1$ & $15^\circ$ & $1.1$ & \ref{fig:sixteen_ears}, \ref{fig:3Gammas} & $40^\circ$ \\
\hline
S2 & $38$ & $100$ & $1$ & $15^\circ$ & $1.33$ & \ref{fig:sixteen_ears}, \ref{fig:3Gammas} & $35^\circ$ \\
\hline
S3 & $38$ & $100$ & $1$ & $15^\circ$ & $1.67$ & \ref{fig:sixteen_ears}, \ref{fig:3Gammas} & $25^\circ$ \\
\hline
S4 & $9.5$ & $200$ & $1$ & $15^\circ$ & $1.1$ & \ref{fig:sixteen_ears}, \ref{fig:evolS4} & $35^\circ$ \\
\hline
S5 & $9.5$ & $200$ & $1$ & $15^\circ$ & $1.33$ & \ref{fig:sixteen_ears} & $30^\circ$ \\
\hline
S6 & $9.5$ & $200$ & $1$ & $15^\circ$ & $1.67$ & \ref{fig:sixteen_ears}, \ref{fig:evolS6} & $30^\circ$ \\
\hline
S7 & $9.5$ & $200$ & $2$ & $15^\circ$ & $1.1$ & \ref{fig:sixteen_ears} & $30^\circ$ \\
\hline
S8 & $152$ & $50$ & $1$ & $15^\circ$ & $1.67$ & \ref{fig:sixteen_ears} & $40^\circ$ \\
\hline
S9 & $9.5$ & $200$ & $1$ & $50^\circ$ & $1.1$ & \ref{fig:sixteen_ears} & $40^\circ$ \\
\hline
S10 & $38$ & $100$ & $1$ & $50^\circ$ & $1.1$ & \ref{fig:sixteen_ears} & $35^\circ$ \\
\hline
S11 & $9.5$ & $200$ & $2$ & $15^\circ$ & $1.67$ & \ref{fig:sixteen_ears} & ($35^\circ$) \\
\hline
S12 & $9.5$ & $200$ & $3$ & $15^\circ$ & $1.67$ & \ref{fig:sixteen_ears} & ($10^\circ$) \\
\hline
S13 & $38$ & $100$ & $1$ & $50^\circ$ & $1.67$ & \ref{fig:sixteen_ears} & ($20^\circ$) \\
\hline
S14 & $38$ & $100$ & $1$ & $50^\circ$ & $1.33$ & \ref{fig:sixteen_ears} & ($20^\circ$) \\
\hline
S15 & $9.5$ & $200$ & $1$ & $50^\circ$ & $1.33$ & \ref{fig:sixteen_ears} & ($40^\circ$) \\
\hline
S16 & $9.5$ & $200$ & $1$ & $50^\circ$ & $1.67$ & \ref{fig:sixteen_ears} & ($30^\circ$) \\
\hline
S1L & $38$ & $100$ & $1$ & $15^\circ$ & $1.1$ & \ref{fig:S1L} & $40^\circ$ \\
\hline
\end{tabular}
\caption{Summary of the 17
simulations we present in the paper. The columns list, from left to right and for each simulation, its number, the mass loss rate of the two jets combined $\dot M_{\rm 2j}$, the velocity of the jets $v_{\rm j}$, the time period of the jets' activity $t_{\rm j}$, the half opening angle of the jets $\alpha_{\rm j}$, and the adiabatic index $\gamma$. In the next column we list the figures presenting each simulation. In all figures apart from Fig. \ref{fig:incliden}, which we present later on, the symmetry axis is in the plane of the sky, i.e., $i=90^\circ$. In the last column we list the critical inclination angle $i_{\rm ears}$ (defined as the angle between the PN symmetry axis and the line of sight) for each case, below which the ears disappear because they are projected on the main shell. In all cases we start at $t=0$ with a regular AGB wind that fills the grid, and we start to inject the dense shell one year after the end of the jets' activity, i.e., at $t=t_{\rm j} + 1 ~\rm{yr}$. In simulation S1L we inject a regular AGB wind instead of a dense wind during the post-jets phase. } \label{Table:cases} \end{table*} \section{Results} \label{sec:results} \subsection{A gallery of images} \label{subsec:gallery} We start by comparing the 16 simulations, each at the time when its bipolar structure reaches about the same size as in the other cases. In Fig. \ref{fig:sixteen_ears} we present the artificial intensity maps of these 16 cases. The artificial intensity map is a map of the integration of density squared along the line of sight, here along the $y$ axis. In all simulations we start to blow the dense shell one year after we turn off the jets. Namely, the jets are older than the main nebular shell. For other properties see Table \ref{Table:cases}. \begin{figure*}[ht!] \includegraphics[trim=0.6cm 10.2cm 0.0cm 2.4cm ,clip, scale=0.95]{sixteen_ears.pdf} \\ \caption{Artificial intensity maps for 16 models.
Each artificial intensity map is a map of the integration of density squared along the $y$ axis (the line of sight). In all cases the symmetry axis of the two opposite jets is at $(x,y)=(0,0)$, namely, through the center and along the $z$ axis. All panels are square with sizes of $4 \times 10^{16} ~\rm{cm}$. The colors depict the artificial intensity values according to the color bars in the range of $10^{-23} ~\rm{g}^2 ~\rm{cm}^{-5} - 10^{-16} ~\rm{g}^2 ~\rm{cm}^{-5}$. We consider simulations S1 to S7 to yield ears, simulations S9, S15 and S16 to be marginal, and simulations S10-S14 to yield no ears. } \label{fig:sixteen_ears} \end{figure*} We recall our definition of ears as two opposite protrusions from the main shell that (1) are smaller than the main inner shell from which they protrude, (2) have a cross section (perpendicular to the symmetry axis) that monotonically decreases outward, and (3) are separated from the main nebula by a boundary that has a dimple (two inflection points) on each side of each ear. We clearly identify ears in simulations S1 to S7, but in simulation S7 the ears are almost too large and very faint. Cases S9, S15, and S16 are marginal as the ears do not have a clear shape as in simulations S1-S7, and are fainter. In cases S10-S14 we do not identify the faint protrusions as ears. Our first conclusion is that the flow sequence of weak jets that interact with a regular AGB wind followed by the ejection of a dense shell (an intensive wind) can lead to ear formation, but not necessarily so. \subsection{The role of the adiabatic index} \label{subsec:Adiabatic} In Fig. \ref{fig:3Gammas} we compare the density, pressure, and temperature maps in the meridional plane $y=0$ of simulations S1, S2, and S3 (from left to right) that differ only by the value of the adiabatic index $\gamma$. \begin{figure*}[ht!]
\includegraphics[trim=0.8cm 8.1cm 0.0cm 2.0cm ,clip, scale=0.90]{zzzGamma1.pdf} \\ \caption{Comparing the density (upper row), pressure (middle row), and temperature (lower row) maps in the meridional plane of three simulations that differ only in the value of the adiabatic index. Left column: simulation S1 with $\gamma=1.1$; Middle column: simulation S2 with $\gamma=1.333$; Right column: simulation S3 with $\gamma=1.67$. All panels are square with sizes of $4 \times 10^{16} ~\rm{cm}$ and at $t=44 ~\rm{yr}$. The numbers on the axis are in units of $10^{15}~\rm{cm}$. Densities are according to the color bars in the range $10^{-19} ~\rm{g} ~\rm{cm}^{-3}- 10^{-15} ~\rm{g} ~\rm{cm}^{-3}$, while pressures are in the range $10^{-8} ~\rm{erg} ~\rm{cm}^{-3}- 10^{-5} ~\rm{erg} ~\rm{cm}^{-3}$. Note that the densities in the zone $r \ge 10 \times 10^{15} ~\rm{cm}$ are $\le 2.5 \times 10^{-20} ~\rm{g} ~\rm{cm}^{-3}$ (decreasing as $r^{-2}$) and so appear all blue. The temperature ranges are from $1000 ~\rm{K}$ (blue) to $3.5\times 10^4 ~\rm{K}$ in the lower-left panel, to $6.3\times 10^4 ~\rm{K}$ in the lower-middle panel, and to $7.5\times 10^4 ~\rm{K}$ in the lower-right panel. } \label{fig:3Gammas} \end{figure*} The adiabatic index plays a role in both increasing and decreasing the temperature. A higher value of $\gamma$ implies a steeper change in pressure as density changes. In these three simulations the jets start highly supersonic, with a Mach number of $\mathcal{M}_{\rm j} = 6.7$. In the post-shock region the Mach number and temperature increase as $\gamma$ increases. Indeed, in the lower three panels of Fig. \ref{fig:3Gammas} we see that the higher the value of $\gamma$, the higher the temperature of the post-shock jets' gas (note that the red color stands for a higher temperature as $\gamma$ increases in the three panels). On the other hand, as the gas expands a higher value of $\gamma$ implies more rapid loss of pressure; this reduces the expansion velocity.
For example, in a gas that is set to expand freely into an empty tube the maximum velocity at the front of the expanding gas is $2 C_{0} / (\gamma-1)$, where $C_0$ is the initial sound speed of the gas. Namely, the maximum additional velocity of the expanding gas is proportional to $(\gamma-1)^{-1}$. In simulations where the jets are active for a long time, the effect of higher post-shock pressures for higher values of $\gamma$ dominates, and flows with higher values of $\gamma$ inflate larger bubbles. The present flow structure has short-lived and weak jets, a slow pre-jets wind, and a slow post-jets intensive wind (the dense shell), i.e., a Mach number of only $\mathcal{M}_{\rm s} = 1.3$ for both winds. The result is that the effect of a faster cooling for higher values of $\gamma$ dominates in many parts. Indeed, we see that the high-pressure region (red color in middle row of Fig. \ref{fig:3Gammas}) gets smaller as $\gamma$ increases, and that the temperature in the center is the highest for the lowest value of $\gamma$. As well, the hot thin shell is larger for the lower value of $\gamma=1.1$ and smaller for $\gamma=1.67$, in particular in the equatorial plane. Another comparison is of simulations S7 and S11. In these two simulations the jets have the same power as in simulations S1-S6, but the jets are active for $t_{\rm j}=2 ~\rm{yr}$ instead of for only $t_{\rm j}=1 ~\rm{yr}$. Namely, the jets deposit twice as much energy into the lobes/ears they inflate with respect to simulations S1-S6 (we discuss this further in section \ref{subsec:EnergyMomentum}). In simulation S11, which has the larger value of $\gamma=1.67$, the jets inflate narrower lobes that form a bipolar PN rather than ears. These lobes are not ears because the cross section does not decrease monotonically as we move out. In simulation S7, for which $\gamma=1.1$, the lobes are wide and almost as large as the dense shell. These are nonetheless ears.
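The scaling of the free-expansion velocity with $\gamma$ can be made concrete. For gas at the jets' injection temperature of $10^4 ~\rm{K}$ (and an assumed mean molecular weight of $\mu=0.6$, which is our choice for illustration), $v_{\rm max} = 2 C_0/(\gamma-1)$ gives:

```python
import math

KB, MH = 1.381e-16, 1.673e-24   # Boltzmann constant (erg/K), hydrogen mass (g)

def max_expansion_velocity(T, gamma, mu=0.6):
    """Front velocity of gas expanding freely into vacuum: 2 c_0 / (gamma - 1), in cm/s."""
    c0 = math.sqrt(gamma * KB * T / (mu * MH))   # initial adiabatic sound speed
    return 2.0 * c0 / (gamma - 1.0)

for gamma in (1.1, 1.33, 1.67):
    v_kms = max_expansion_velocity(1e4, gamma) / 1e5
    print(f"gamma = {gamma}: v_max ~ {v_kms:.0f} km/s")
```

The three values of $\gamma$ give roughly $250$, $80$, and $45 ~\rm{km} ~\rm{s}^{-1}$ under these assumptions, illustrating why the lowest adiabatic index yields the fastest expansion.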
\subsection{The role of energy and momentum of the jets} \label{subsec:EnergyMomentum} There are five pairs and one triplet of simulations with the same adiabatic index $\gamma$ and the same power and duration of jets, but different momentum. The pairs are (S1,S4), (S2,S5), (S10,S9), (S13,S16), and (S14,S15), where the first simulation in each pair is the one with twice the momentum flux of the second simulation in the pair. Overall, in the simulations with higher jets' momentum, all other parameters being similar, the lobes/ears are more elongated. As well, in the marginal cases (S10,S9) and (S13,S16) the higher momentum forms a wider lobe/ear in the far zone (far from the center), and therefore the cross section of the lobe/ear does not decrease monotonically. This prevents the lobes from being defined as ears. In the triplet (S8,S3,S6) the jets in simulation S8 have twice the momentum of those in simulation S3, which in turn have twice the momentum of the jets in simulation S6. While in simulations S3 and S6 we do obtain ears, in simulation S8 the jets' velocity of $v_{\rm j} =50 ~\rm{km} ~\rm{s}^{-1}$ is too low for the jets to inflate ears or lobes and we obtain an elliptical nebula. In simulations S7 and S11 the jets are active for twice as long, while in simulation S12 they are active for three times as long as in the other simulations. In these simulations, in particular S11 and S12, the jets inflate lobes that are too large to be defined as ears. As expected, energetic jets form bipolar nebulae. \subsection{The role of jets' opening angle} \label{subsec:Angle} There are simulations where we inject wide jets with a half opening angle of $\alpha_{\rm j}=50^\circ$ instead of $\alpha_{\rm j}=15^\circ$. Pairs of otherwise identical simulations with narrow and wide jets, in this order, are (S1, S10), (S4,S9), (S3,S13), (S2,S14), (S5,S15), and (S6,S16).
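The pairing can be checked directly from Table \ref{Table:cases}: paired simulations share the jets' kinetic power $\frac{1}{2} \dot M_{\rm 2j} v_{\rm j}^2$ but differ by a factor of two in momentum flux $\dot M_{\rm 2j} v_{\rm j}$. A minimal sketch (units are arbitrary but consistent):

```python
def power(mdot, v):
    """Kinetic power carried by the jets, 0.5 * Mdot * v^2 (arbitrary consistent units)."""
    return 0.5 * mdot * v**2

def momentum_flux(mdot, v):
    """Momentum carried by the jets per unit time, Mdot * v."""
    return mdot * v

# (Mdot_2j [Msun/yr], v_j [km/s]) for simulations S1 and S4
s1 = (38e-6, 100)
s4 = (9.5e-6, 200)

assert abs(power(*s1) - power(*s4)) < 1e-9 * power(*s1)  # same kinetic power
print(momentum_flux(*s1) / momentum_flux(*s4))            # factor of 2 in momentum flux
```

The same check holds for the other pairs listed above, since a factor of 4 in $\dot M_{\rm 2j}$ together with a factor of 1/2 in $v_{\rm j}$ preserves the power while doubling the momentum flux.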
We learn from these comparisons that jets that are too wide form complicated faint structures in the polar direction that are not what we refer to as ears. Basically, the wide jets inflate large lobes, which because of instabilities form a bumpy outer boundary of the ear, as well as a cross section that does not always decrease monotonically outward. These effects prevent the inflated lobe from obeying our definition of ears. \subsection{The appearance of ears} \label{subsec:appearance} The interaction of the jets with the regular pre-jets AGB wind forms the ears. The post-jets dense wind forms the dense nebular shell, but otherwise plays no hydrodynamical role in forming the ears. The dense shell serves to form nebulae similar to most of the observed PNe with ears, where the ears are fainter than the main nebula. If there is no dense wind but rather the regular AGB wind continues in the post-jets phase, the ears might merge with the nebula to form an elliptical PN without ears. We demonstrate this with simulation S1L. In simulation S1L the jets' properties are as in simulation S1, but instead of injecting a post-jets dense wind, i.e., with a mass loss rate and velocity of $\dot M_{\rm w} = 10^{-3} M_\odot ~\rm{yr}^{-1}$ and $v_{\rm w} = 20 ~\rm{km} ~\rm{s}^{-1}$, respectively, we inject the regular AGB wind with a mass loss rate and velocity of $\dot M_{\rm AGB} = 10^{-6} M_\odot ~\rm{yr}^{-1}$ and $v_{\rm AGB} = 20 ~\rm{km} ~\rm{s}^{-1}$, respectively. We present the artificial intensity maps of simulation S1L in Fig. \ref{fig:S1L}. We do indeed form ears. However, these ears are only marginally fainter than the main nebular outskirts, whereas in simulation S1 (upper left panel of Fig. \ref{fig:sixteen_ears}) the ears are much fainter than the nebula. We suspect that in the case of simulation S1L the ears will merge with the main nebula at a later phase after ionisation starts, and will form an elliptical PN without ears.
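The artificial intensity maps used throughout this section are simple line-of-sight integrals of the density squared. A minimal sketch of the operation, with a toy density grid rather than actual simulation output, is:

```python
import numpy as np

def artificial_intensity(density, dy):
    """Artificial intensity map: integral of density^2 along the y axis.

    density: 3D array rho(x, y, z) in g/cm^3; dy: cell size in cm.
    Returns a 2D (x, z) map in g^2 cm^-5, the units used in the figures.
    """
    return np.sum(density**2, axis=1) * dy

# Toy grid: an ambient wind with a denser central region (illustrative values only)
rho = np.full((64, 64, 64), 1e-20)
rho[20:44, 20:44, 20:44] = 1e-17
imap = artificial_intensity(rho, dy=4e16 / 64)
# Squaring the density makes the dense region overwhelmingly brighter than the
# ambient gas, which is why the maps use logarithmic color bars over several decades.
print(f"contrast ~ {imap.max() / imap[0, 0]:.0e}")
```

Because the intensity scales as $\rho^2$, a density contrast of $10^3$ between the dense shell and the ambient wind becomes a brightness contrast of order $10^6$ per cell, and faint ears are easily lost next to the bright shell.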
In summary, to form ears that are fainter than the nebula (or, equivalently, a nebula brighter than the ears) we find that we should increase the wind mass loss rate in the post-jets phase. \begin{figure*}[ht!] \includegraphics[trim=0.8cm 0.1cm 0.0cm 0.0cm ,clip, scale=0.20]{ears_22_proj.pdf} \\ \caption{ The artificial intensity maps of simulation S1L, which has the same parameters as simulation S1 but with a regular AGB wind in the post-jets phase (Table \ref{Table:cases}). The time is the same as in the upper left panel of Fig. \ref{fig:sixteen_ears} for simulation S1. } \label{fig:S1L} \end{figure*} \subsection{The critical angle for ears} \label{subsec:critical} Finally, we consider the inclination angle. In all figures apart from Fig. \ref{fig:incliden} we present the images with an inclination angle of $i=90^\circ$, i.e., the symmetry axis of the PN (through the two ears) is in the plane of the sky. In Fig. \ref{fig:incliden} we present the artificial intensity maps of two cases at two inclination angles, as we indicate in the four panels. These demonstrate how the ears become less prominent as the inclination angle decreases. \begin{figure*}[ht!] \includegraphics[trim=0.9cm 11.3cm 0.0cm 1.0cm ,clip, scale=0.8]{S1S7_50_70.pdf} \\ \caption{ Artificial intensity maps for simulations S1 (left column) and S7 (right column) and for two inclination angles (the angle between the symmetry axis of the PN and the line of sight). Each artificial intensity map is a map of the integration of density squared along the line of sight for an inclination angle $i$ as indicated in the inset. The ears disappear as the inclination angle decreases. } \label{fig:incliden} \end{figure*} For small inclination angles of $i < i_{\rm ears}$ the ears are projected on the main nebula and we cannot identify them by the morphology. For each simulation we examine the critical inclination angle $i_{\rm ears}$ at which the ears disappear at the end of the simulation.
Namely, we can observe ears only for $i>i_{\rm ears}$. We list these values (to an accuracy of $5^\circ$) in the last column of Table \ref{Table:cases}. We also list the critical angle for cases where we see no ears; for these cases the angle is in parentheses, and it refers to the disappearance of the polar protrusions even if they are not ears. Because in all our simulations $i_{\rm ears} \la 35^\circ - 40^\circ$, a random orientation of the PN symmetry axis implies that we miss ears because of projection on the main PN shell only in $\simeq 20 \%$ of the cases. \section{Evolution} \label{sec:Evolution} We present the evolution of two simulations. In Fig. \ref{fig:evolS4} we present, from top to bottom, the density, the temperature, and the velocity map in the meridional plane $y=0$ of simulation S4 at three times, from left to right. In the bottom row we present the artificial intensity map (integration of density squared along the line of sight, here along $y$). As we observe at $t=152 ~\rm{yr}$, when the ears reach the edge of the grid, the ears maintain their identity. As the entire nebula is supersonic, with Mach numbers $\mathcal{M} > 3$ in most parts, and most of the motion is radial, the nebula will keep its structure at later times as well (unless massive circumstellar material farther out changes that structure). This simulation shows that for some physical parameters the ears can exist for hundreds of years and more. \begin{figure*}[ht!] \includegraphics[trim=0.9cm 2.3cm 0.0cm 1.0cm ,clip, scale=0.8]{s4_LG.pdf} \\ \caption{Evolution of simulation S4 at three times, from left to right, $t=54 ~\rm{yr}$, $t=104 ~\rm{yr}$ and $t=152~\rm{yr}$. We present the density (upper row), temperature (second row), and velocity magnitude according to the colors with arrows indicating the flow direction (third row), all in the meridional plane $y=0$ and with the color-bars in cgs units.
In the lower row we present the evolution of the artificial intensity map (in units of $~\rm{g}^2 ~\rm{cm}^{-5}$ according to the color-bar), where the first panel is as in Fig. \ref{fig:sixteen_ears}. } \label{fig:evolS4} \end{figure*} In Fig. \ref{fig:evolS6} we present the evolution of simulation S6. The same discussion as above for simulation S4 holds for this case as well. Basically, although our simulations in both S4 and S6 are for less than 200 years, at the end of the simulation the flow is radial and supersonic, and we expect the ears to persist as a morphological feature for thousands of years. \begin{figure*}[ht!] \includegraphics[trim=1.0cm 3.cm 0.0cm 0.0cm ,clip, scale=0.8]{evol_LG.pdf} \\ \caption{Similar to Fig. \ref{fig:evolS4} but for simulation S6 and at the three times of $88 ~\rm{yr}$, $136 ~\rm{yr}$ and $171~\rm{yr}$. } \label{fig:evolS6} \end{figure*} Having said all this, we did not follow the nebula to the phases when the central star blows a fast ($\gg 100 ~\rm{km} ~\rm{s}^{-1}$) wind and starts to ionise the nebula. The fast wind interaction with the dense shell influences the evolution at later times (e.g., \citealt{Perinottoetal2004}), e.g., the interaction suffers from instabilities that destroy the smooth structure of the dense shell (e.g., \citealt{ToalaArthur2016}). We expect that the dense shell will nonetheless contain the fast wind such that the fast wind will not affect the ears. The ionisation of the nebula will increase the sound speed and will somewhat change the flow (e.g., \citealt{Perinottoetal2004, Schonberneretal2010}). This might erase small and faint ears, or smear the differences between the ears and the main nebula such as in simulation S1L (Fig. \ref{fig:S1L}), but in most cases we expect ears to survive these late evolutionary phases. Future simulations should include the role of the fast wind and the ionising radiation to examine whether the ears survive as we expect.
\section{Summary} \label{sec:summary} The morphologies of a small fraction of elliptical PNe contain two opposite protrusions from the main PN shell that are smaller than the main PN shell, have a cross section that decreases monotonically outward, and whose boundary with the main nebula has a dimple (two inflection points) on each side of each ear. These two opposite protrusions are termed `ears' (examples are in section \ref{sec:intro}). Our goal was to determine the outflow structure by which jets can inflate ears. In many trials that we do not present here, we could not obtain ears when we launched the jets after we blew the main dense shell. Namely, the jets that interact with the dense shell either do not inflate any protrusions, or if they do inflate protrusions these are large lobes that form bipolar PNe. We therefore simulated here a flow structure where low-energy jets (short-lived and not too powerful) interact with a regular AGB wind, and the dense PN shell is younger than the jets (for details see section \ref{sec:numerical}). In these simulations, which we summarise in Table \ref{Table:cases}, we started to blow the intensive wind that forms the dense inner shell one year after the jets ceased. We assumed that the main sequence companion is of low mass, $M_2 \simeq 0.1-0.3 M_\odot$ (it might even be a brown dwarf), and therefore it launches weak jets that form the ears, and after it enters the CEE it ejects an elliptical nebula rather than a bipolar nebula or a dense equatorial torus. This assumption is compatible with the presence of ears only in elliptical PNe. In many cases we expect that the low mass companion will not survive the CEE (it will spiral in all the way to the core and be tidally destroyed). Even if it survives, its low mass implies that it is hard to detect such companions in the centres of PNe. We referred to the PN A30 as an example of an elliptical PN with a post-CEE central binary system \citep{Jacobyetal2020}.
The full parameter space is huge as we can vary the jets' opening angle, the mass loss rate into the jets and their velocity, the properties of the regular AGB wind into which the jets expand, and the adiabatic index. For the influence of the adiabatic index see Fig. \ref{fig:3Gammas}. Indeed, from the 16 simulations we conducted we identify clear ears in seven, S1-S7 in Fig. \ref{fig:sixteen_ears}. We found that ears do not form under all conditions. We found that the jets cannot be too energetic, cannot be too wide, and cannot be too slow. At the end of our simulations the outflow is radial and supersonic, and so the ears maintain their morphology for hundreds of years (section \ref{sec:Evolution}; Figs. \ref{fig:evolS4}, \ref{fig:evolS6}), and probably much longer. Our main finding is that weak and short-lived jets that a companion launches before it enters the CEE might form ears in elliptical PNe. We can present this from another perspective, referring to the jets' large parameter space: jets that are weak, short-lived, and launched before the main nebular ejection lead to the formation of ears in elliptical PNe. Because the parameter space is too large to follow in one study, there are many more studies to conduct before we can clearly reproduce specific PNe with ears. For example, we should conduct 3D hydrodynamical simulations of a binary system that launches jets as it enters a CEE, similar to the simulation by \cite{Shiberetal2019}. As well, we should continue the simulations for thousands of years and include the central fast wind and the ionisation phase of the PN. However, we think that we can confidently state that to form PNe with ears, in the binary-jet paradigm, the progenitor binary system should launch the jets shortly before it blows the dense PN shell. Such a flow structure can come from a system that enters a common envelope evolution.
The companion accretes mass through an accretion disk just before it enters the envelope of the AGB star (or even a red giant branch star), and launches jets for a short time. It then enters the envelope and ejects it to form the dense shell of the descendant PN. In other words, our results are consistent with a scenario in the frame of the binary-jet paradigm where in PNe with ears, the progenitor binary system launched the jets shortly before the system entered the common envelope evolution. We do not claim that the binary-jet scenario is the only one to form ears. For example, the two lobes of a bipolar PN with a small inclination angle, defined as the angle between the PN symmetry axis and the line of sight (i.e., an almost pole-on PN), might appear as two ears protruding from the main nebula. The binary-jet scenario does, however, make some predictions that we find fulfilled in some PNe, and these might support it. The binary-jet scenario includes the possibility that in some cases the accretion disk will precess, and so will the jets that it launches. As well, in some cases, mainly due to a more massive companion, there will be a dense equatorial outflow during the CEE phase. The PN K3-24, which we list in section \ref{sec:intro}, has two pairs of ears that are not aligned perpendicular to the dense equatorial gas. This clearly suggests a binary interaction. The ``S'' shape of the ears both in K3-24 and in NGC~6563 suggests precession, which in turn suggests binary interaction. \section*{Acknowledgments} We thank an anonymous referee for very useful and detailed comments. This research was supported by a grant from the Israel Science Foundation (769/20) and a grant from the Prof. Amnon Pazy Research Foundation.
\section*{Acknowledgment} The research is based upon work supported by the Department of Defense (DOD), Naval Information Warfare Systems Command (NAVWAR), via the Department of Energy (DOE) under contract DE-AC05-00OR22725. The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies or endorsements, either expressed or implied, of the DOD, NAVWAR, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. \bibliographystyle{plain} \section{Introduction} Security operations centers (SOCs)\textemdash teams of security analysts who continually guard networks against cyber attacks\textemdash now employ widespread data collection capabilities \cite{bridges2018information} and follow a ``defense in depth'' strategy~\cite{colarik2015establishing,tirenin1999concept} that includes a tapestry of tools for blocking, alerting, logging, and providing situational awareness. To effectively defend networks and allow analysts to gain actionable insights from this wealth of SOC data, a robust research community and a burgeoning cyber tech industry are integrating machine learning (ML) into novel solutions. Common categories of tools integrating ML to effectively leverage SOC data include the following: modern endpoint protection/anti-virus (AV), endpoint detection and response (EDR), network situational awareness/anomaly detection (AD), user and entity behavioral analytics (UEBA), security information and event management (SIEM) systems, and security orchestration and automated response (SOAR). Gartner anticipates that by 2024, 80\% of SOCs will use ML-based tools to enhance their operations. In light of such widespread adoption, it is vital for the research community to both enumerate and address usability concerns.
While prior work has sought to understand the issues that plague SOC operations~\cite{kokulu2019matched,bridges2018information,goodall2004work,botta2007towards} and create more effective ML tools for SOCs~\cite{arendt2015ocelot,goodall2018situ,best2010real,sopan2018building}, no prior work examines analysts' usage of ML-based tools in situ. This gap in the research is understandable because it is non-trivial to gain access to a high-fidelity testing environment and recruit actual SOC analysts to participate in such a study. In this work, we share the results of an in situ study made possible by our sponsor, the US Navy, who purchased time at a testing center known for conducting high-fidelity cyber events\textemdash the National Cyber Range (NCR) in Orlando, Florida. The Navy also provided six analysts from their SOCs to participate in the study. With these resources at our disposal, we designed a test to identify potential usability issues in two ML-based tools\textemdash one AV tool that carved files out of network traffic and a real-time network-level AD tool. We configured the NCR to simulate a network with $\sim$1000 IPs that included emulated users with access to email, social media, and general websites, as well as management infrastructure and an out-of-band network allowing analysts to access the technologies under evaluation. We then conducted red team campaigns against the network, one for each tool, and observed analysts as they interacted with the tools. After testing, we asked analysts to complete a follow-up survey and discussed their experiences in a focus group. Our analysis identified several serious usability issues, including multiple violations of established usability heuristics for user interface design. We also discovered that analysts lacked a clear mental model of how these tools generate scores, resulting in mistrust and/or misuse of the tools themselves.
Surprisingly, we found no correlation between analysts' level of education or years of experience and their performance with either tool, suggesting that other factors such as prior background knowledge or personality play a significant role in ML-based tool usage. Our findings demonstrate that ML-based security tool vendors must put a renewed focus on working with analysts, both experienced and inexperienced, to ensure that their systems are usable and useful in real-world security operations settings. \section{Background}{\label{sec:background}} In this section, we describe the testbed where we conducted the evaluation and the two tools tested, and provide an overview of related work. \subsection{National Cyber Range} The National Cyber Range (NCR)~\cite{ferguson2014national} provided the high-fidelity environment for our study. The NCR is a large, air-gapped cyber testbed equipped with state-of-the-art network and user emulation capabilities that enables the rapid emulation of complex, operationally representative networks that can scale to over 50,000 virtual nodes. The range included ``user machines'', emulating real users, a management network with services such as DNS and Active Directory, a server network with on-premise servers such as Apache and IIS, and an ``external network'' for email, social media, and general websites. The technologies under test were all connected to a core router and/or to a passive tap so each had access to all network traffic and could communicate with any host-based clients forwarding data. User terminals connected to the two technologies under test via an out-of-band network and allowed evaluation team members and/or security analysts (users) access to the user interface (UI). \subsection{Tools Tested} \label{sec:tools} This study included two tools, a commercially available network-based malware detection tool and a government off-the-shelf anomaly detection tool.
Because of a non-disclosure agreement, we cannot disclose the name of the vendor who supplied the first tool. It is a network-based, static-analysis malware detection tool (NSDT) that is capable of identifying both existing and new/polymorphic attacks in near real time using an on-premises (on-prem) appliance to passively monitor network traffic. The technology centers on a binary (benign/malicious) classification of files and code snippets extracted from network traffic. The second tool, Situ, is a government off-the-shelf (GOTS) tool for near-real-time network-level anomaly detection and situational awareness/exploration through visualization~\cite{goodall2018situ}. Overall, the tool identifies anomalous\textemdash not necessarily malicious\textemdash network behavior and provides an interface for situational awareness, hunting, and forensic investigation. The system ingests network flows (the metadata of IP-to-IP communication) and/or firewall logs. \subsection{Related Work} \label{sec:related-works} Related works fall into four categories---visual analytics to aid security analysts, methods to evaluate the effectiveness of security tools in the context of a SOC, studies on SOC operations, and ML for cybersecurity. While prior work relied heavily on interviews or surveys for data collection, our work represents the first assessment of ML-based tool usability performed in situ via participant observation. Previous work on ML and visualization tool development includes tools such as Ocelot~\cite{arendt2015ocelot}, which was designed to help analysts make better decisions about poorly defined network intrusion events, Situ~\cite{goodall2018situ}, used to identify anomalous behavior in network traffic, and the work of Best et al.~\cite{best2010real}, which seeks to give analysts situational understanding of the network utilizing complementary visualization techniques.
Bridges et al.~\cite{bridges2018forming} introduced the Interactive Data Exploration \& Analysis System (IDEAS), a research prototype allowing analysts to query data in their SOC log store and select ML models to be run ``under the hood'', then receive outputs in an interactive visualization. Sopan et al.~\cite{sopan2018building} generated a machine learning model to aid SOC analysts in isolating meaningful alerts by conducting two-hour interviews with the five most experienced analysts in the SOC to better understand their workflow. They then created a prediction explanation visualization to aid analysts and stakeholders in understanding how the model was making decisions. Work in the second category considers methods for evaluating the effectiveness of security tools. Akinrolabu et al.~\cite{akinrolabu2018challenge} interviewed expert SOC analysts to better understand obstacles to detecting sophisticated attacks, and Cashman~\cite{cashman2019user} conducted a user study of a novel approach to developing machine learning models that involved users in the selection process. Both studies suggest that involving the user in the creation of the machine learning model can provide significant benefits. Jaferian et al.~\cite{jaferian2014heuristics} proposed a new set of usability heuristics based on activity theory that would complement rather than replace traditional methods such as Nielsen's heuristics. Work in the third category focuses on understanding SOC operators. Gutzwiller et al.~\cite{gutzwiller2016task} performed a cognitive task analysis to understand the goals and abstracted elements of awareness cyber analysts use in their jobs. They found that data fusion in visualizations is most useful when it is combined with a strong knowledge of the network itself on the part of the analyst. These results match findings by Ben-Asher et al.~\cite{ben2015effects} that suggest situated knowledge about a network is necessary to make accurate decisions.
Botta et al.~\cite{botta2007towards} interviewed a dozen SOC analysts in five companies and found that inferential analysis, pattern recognition, and what they call ``bricolage'', or construction with whatever is at hand, are key skills for IT security professionals. Sundaramurthy et al.~\cite{sundaramurthy2016turning} conducted a 3.5-year-long anthropological study of four academic and corporate SOCs and concluded that the only way to get new tools incorporated into existing workflows is to meet the spoken and unspoken requirements of analysts and their managers. In a previous study~\cite{sundaramurthy2015human}, they also developed a model for understanding SOC analyst burnout. Goodall et al.~\cite{goodall2004work}, Bridges et al.~\cite{bridges2018information}, and Kokulu et al.~\cite{kokulu2019matched} conducted interviews with security analysts to better understand SOC workflows and the problems plaguing SOC operations. Common problems include disagreements between managers and analysts and low visibility into network infrastructure and endpoints. Work in the fourth category is on ML for cybersecurity. As discussed in the position paper of Sommer and Paxson~\cite{sommer2010outside}, many pitfalls exist when applying machine learning to cybersecurity\textemdash most notably, the ``semantic gap'', referring to the difficulty analysts commonly have in understanding the output of ML algorithms. The challenge is presenting results in a context that is understandable to, and actionable by, the analysts. More generally, the role of humans interacting with machine learning (ML) systems and the related usability challenges are areas of open research~\cite{gillies2016human}. There is also a plethora of work on the interpretation of ML algorithms, but we do not have space to include it; for a summary, see the work of Gilpin et al.~\cite{gilpin2018explaining}. \section{Methodology} In this section, we discuss our study design, data analysis, and demographics.
\subsection{Study Design} This study was not comparative, but rather exploratory in nature. Our goal in this work was to identify usability concerns in ML-based tools; not to compare the efficacy of the two tools being tested. In order to achieve this goal, we observed participants during tool usage, administered a follow-up survey, and held a focus group to better understand users' experience. We used the \emph{think-aloud} methodology~\cite{van1994think} during observation, in which participants verbalized their intentions, so that researchers would be able to understand the reasons behind participant actions. By conducting the focus group after direct observation of each analyst, we utilized it as a way to supplement and refine our observations rather than as a sole source of data~\cite{nielsen1997use,nielsen1994usability}. The participant observation consisted of two campaigns, one for each tool, in which we performed a sequence of malicious actions against the network and analysts utilized the user interface provided by the tool to attempt to gain insight into the attack. Each campaign lasted one hour and fifteen minutes. Prior to the campaign, analysts were given an introduction to each tool and time to familiarize themselves with the interface. During this familiarization period, analysts could ask any questions they had regarding usage of the tool. Answers were directed to the entire group. During each campaign, the same researcher was assigned to each analyst to record information about and observe the analyst's use of the tool. An additional researcher was responsible for monitoring network status and providing notices every fifteen minutes. Think-aloud was practiced during the familiarization period to ensure analysts understood it. Analysts also recorded insights from each tool they thought were significant as they used the tool and rated the significance of each insight. 
Following each test, analysts were surveyed to better understand their experience with the tool and the observers were able to ask for any necessary clarification. The survey included the System Usability Scale (SUS) along with additional questions designed by the researchers. The day after testing, we held a focus group to supplement and refine our observations. \subsection{Attack Campaigns} \label{sub:red} We created an attack campaign template that contained actions that one or both of the tools under test should catch. During each testing period, we ran through the actions specified in the attack campaign template while slightly permuting the IPs and payloads used so that the analysts' experience from one tool test would not impact their results in the next. Generally, the attack campaigns consisted of the following actions. First, the adversary gains initial access by dropping a customized version of Cobalt Strike's Beacon\footnote{\url{https://www.cobaltstrike.com//help-beacon/}}, a program mimicking APTs by allowing external access, on the initial target. This was meant to simulate a successful phishing attack, wherein an unsuspecting user of the target system is tricked into downloading and running a malicious email attachment. From the infected target, the adversary port scanned other hosts on the network of the first compromised system. The adversary then instructed the infected system to download additional malware over HTTP and then transfer the malware to another host on the network over Samba. The adversary then obtained administrator credentials by using Beacon's Hashdump functionality. With the newly found administrator privileges, the adversary used \texttt{PSEXEC} to move laterally from the infected foothold to another target on its internal network. The adversary then exfiltrated data from the file system of the newly infected host back to the command and control (C2) server and disconnected from the infected target.
\subsection{Data Analysis} \label{sub:analysis} Our data analysis was broken down into quantitative and qualitative components. The System Usability Scale (SUS) and attacks detected by each analyst were quantitative metrics, while the post-test survey and focus group were qualitative. For the qualitative analysis, we used a modified version of the open coding approach~\cite{strauss1998basics} called pair coding~\cite{sarker2000building,salinger2008coding}, in which researchers create and assign codes collectively. For the follow-up survey, we also conducted a sentiment analysis. Each coder counted $p$, the number of positive, and $n$, the number of negative comments for each question. We report and define the sentiment ratio $S_r := (p-n)/(p+n)$. Note that $S_r \in [-1,1]$ with $S_r = \pm 1$ if all comments were positive/negative, respectively, and $S_r = 0$ if there were equally many positive and negative comments. We added the $p$ and $n$ values of both researchers together and then calculated a composite sentiment ratio. \subsection{Recruitment \& Ethics} This IRB-approved study was conducted as part of a tool evaluation exercise organized by our Navy sponsor. In order to participate, analysts were required to be actively employed in one of the sponsor's SOCs. The sponsor provided six analysts for the event, with both experienced and novice analysts included in the sample. Prior to testing, we went over an information sheet detailing the nature of the research and the participants' rights. \subsection{Demographics} Half of the analysts' highest level of education was high school, while two had completed a Bachelor's and one an Associate's degree.
For context, most IT security professionals have either a Bachelor's or an Associate's degree.\footnote{\url{https://itcareercentral.com/security-roles-salary-expectations-explained/}} Half of the analysts had one year or less of experience on the job, while the others had three, eight, and five years of experience. Ages ranged from twenty-six to thirty-seven. Table~\ref{tab:analysttools} shows the tools each analyst reported using on their job regularly. \section{Analysis \& Results} \label{sec:results} In this section, we discuss our key findings and make recommendations for UI designers based upon the usability issues we identified. While our study is preliminary in nature, our findings demonstrate that ML-based security tool vendors must put a renewed focus on working with analysts, both experienced and inexperienced, to ensure that their systems are usable and useful in real-world security operations settings. \subsection{Tool Usability} To evaluate the overall usability of each tool, we used the System Usability Scale (SUS). For the SUS, ten statements are ranked from 1 to 5, where 1 is strongly disagree and 5 is strongly agree. Half of the statements express a positive experience with the tool and half a negative experience with the tool. The responses are then converted to a composite score on a scale from 0-100, where a score of 68 is considered average, an 81 would be an `A' and a 50 would be an `F'. The SUS results for the statements expressing a negative experience are shown in Figure~\ref{fig:negsent} and the results for the statements expressing a positive experience in Figure~\ref{fig:possent}. The mean score for Situ was 65.42, which is near average, while NSDT was closer to the failure line with a 56.67. Given that NSDT is a commercially available tool, this result is disappointing. Analysts indicated that NSDT is cumbersome and that it contained inconsistencies, issues we will see again in the next section.
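The conversion from Likert responses to a 0-100 composite follows the standard SUS scoring procedure; as a minimal sketch (our illustration of the standard procedure, not code tied to either tool's data, and assuming the conventional ordering in which odd-numbered items are positively worded and even-numbered items negatively worded):

```python
def sus_score(responses):
    """Standard SUS scoring for ten 1-5 Likert responses.

    Odd-numbered (positively worded) items contribute (response - 1);
    even-numbered (negatively worded) items contribute (5 - response).
    The sum of contributions (0-40) is scaled by 2.5 to a 0-100 score.
    """
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5
```

Under this scheme a respondent who answers every item neutrally (all 3s) scores exactly 50; the per-tool means reported here are averages of six such per-analyst scores.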
For Situ, the main issue identified by the SUS was that analysts felt they needed to learn a lot before they could use the system effectively. We suspect analysts responded this way to Situ for two reasons. First, Situ required analysts to synthesize multiple views of the same data built on different statistics (anomaly score, PCR, geographic information). Second, Situ identified anomalous, rather than malicious, activity, requiring analysts to decide when anomalous behavior was worth investigating. The fact that most analysts lacked a clear mental model for how to use the anomaly scores presented by Situ, which we will discuss in Section~\ref{sub:mental}, supports this explanation. To verify that these results were approaching saturation (i.e. they would not change substantially even if we added more analysts), we also computed the hold-one-out average scores with only five of the six analysts for all six combinations. This yielded six average scores: 62.0, 62.5, 63.5, 65.5, 67.5, and 71.5 for Situ and 50.0, 55.0, 54.5, 58.0, 59.0, and 63.5 for NSDT. The similarity in these average scores verifies that our SUS results are near saturation. 
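The hold-one-out saturation check described above reduces to dropping each analyst in turn and averaging the rest. The sketch below illustrates it on a \emph{hypothetical} set of per-analyst Situ scores, chosen only because they are consistent with the averages reported above; the actual per-analyst scores are not disclosed here.

```python
from statistics import mean

def hold_one_out_means(scores, ndigits=1):
    """Drop each score in turn and average the remaining ones."""
    return [round(mean(scores[:i] + scores[i + 1:]), ndigits)
            for i in range(len(scores))]

# Hypothetical per-analyst Situ SUS scores (illustrative reconstruction only).
situ = [82.5, 80.0, 75.0, 65.0, 55.0, 35.0]
loo = hold_one_out_means(situ)  # -> [62.0, 62.5, 63.5, 65.5, 67.5, 71.5]
```

The narrow spread among the six hold-one-out averages is what we refer to as the results being near saturation.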
\begin{table}[t] \setuptable \begin{tabular}{c|ccccccc} \multicolumn{1}{l}{Analyst} \headline[4.5cm] & \headrow{Network Analysis Framework} & \headrow{Automated Malware Analysis} & \headrow{Network Packet Analyzer} & \headrow{Putty, Bash or Powershell} & \headrow{Full Stack Analytics} & \headrow{SIEM} & \headrow{IDS} \\ \hline 1 &\Circle &\Circle &\CIRCLE &\Circle &\Circle &\CIRCLE &\Circle \\ 2 &\CIRCLE &\Circle &\CIRCLE &\CIRCLE &\CIRCLE &\Circle &\CIRCLE \\ 3 &\Circle &\CIRCLE &\CIRCLE &\CIRCLE &\Circle &\CIRCLE &\Circle \\ 4 &\CIRCLE &\CIRCLE &\CIRCLE &\Circle &\Circle &\Circle &\CIRCLE \\ 5 &\CIRCLE &\CIRCLE &\CIRCLE &\CIRCLE &\Circle &\CIRCLE &\CIRCLE \\ 6 &\CIRCLE &\Circle &\Circle &\Circle &\Circle &\CIRCLE &\CIRCLE \\ \hline \end{tabular} \caption{Tools Analysts Reported Using Regularly} \label{tab:analysttools} \end{table} \begin{figure*}[!ht] \centering \begin{subfigure}{.25\textwidth} \centering \includegraphics[width=45mm]{images/n_questions.png} \label{fig:sub0} \end{subfigure} \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=53mm]{images/nsdt_negative.png} \caption{NSDT} \label{fig:sub3} \end{subfigure} \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=53mm]{images/situ_negative.png} \caption{Situ} \label{fig:sub4} \end{subfigure} \caption{SUS Statements Expressing a Negative Experience} \label{fig:negsent} \end{figure*} \begin{figure*}[!ht] \centering \begin{subfigure}{.25\textwidth} \centering \includegraphics[width=45mm]{images/p_questions.png} \label{fig:sub} \end{subfigure} \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=53mm]{images/nsdt_positive.png} \caption{NSDT} \label{fig:sub1} \end{subfigure} \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=53mm]{images/situ_positive.png} \caption{Situ} \label{fig:sub2} \end{subfigure} \caption{SUS Statements Expressing a Positive Experience} \label{fig:possent} \end{figure*} \subsection{User Interface Issues} 
Table~\ref{tab:heuristics} summarizes which of Nielsen's heuristics\footnote{\url{https://www.nngroup.com/articles/ten-usability-heuristics/}} for user interface design each system violated. With NSDT, analysts felt particularly frustrated by a lack of consistency in the user interface. Multiple pages contained overlapping content and looked similar, which caused analysts to continually feel lost because they were trying to remember which page contained which content. Some content was also only available for certain file types, exacerbating this feeling of confusion. A2 said they ``fought the GUI the entire hour'' and A1 said they ``had to click around a lot---inconsistency''. Analysts' main frustration with Situ was that the filters applied to the search bar were only visible in the URL and were not easily modifiable, forcing analysts to start a new search from scratch if they wanted to alter search parameters. A4 said he/she ``hated filters not listed except in the url''. One issue both tools had in common was that they failed to provide the analysts with as much information as they wanted about the scores produced by the tool. Discussing the score provided by NSDT, A4 noted, ``It seems accurate but I would want more info on why it thinks it's malicious provided in more of a clean way''. While Situ did provide explanations in the website documentation, some analysts found them difficult to understand. ML-based tools need to provide clear and easily accessible explanations for how the ML algorithm scores events. Pop-ups explaining each score should be provided with links to additional reading for those analysts who want to go more in depth.
\begin{table}[t] \setuptable \begin{tabular}{|l|cc|} \hline Heuristic & NSDT & Situ \\ \hline Visibility of System Status & {\textcolor[RGB]{112,173,71}{\ding{51}}} & {\textcolor[RGB]{192,0,0}{\ding{55}}} \\ System Matches Real World & {\textcolor[RGB]{192,0,0}{\ding{55}}} & {\textcolor[RGB]{192,0,0}{\ding{55}}} \\ User Control and Freedom & {\textcolor[RGB]{112,173,71}{\ding{51}}} & {\textcolor[RGB]{192,0,0}{\ding{55}}} \\ Consistency and Standards & {\textcolor[RGB]{192,0,0}{\ding{55}}} & {\textcolor[RGB]{112,173,71}{\ding{51}}} \\ Error Prevention & {\textcolor[RGB]{112,173,71}{\ding{51}}} & {\textcolor[RGB]{112,173,71}{\ding{51}}} \\ Recognition Not Recall & {\textcolor[RGB]{192,0,0}{\ding{55}}} & {\textcolor[RGB]{112,173,71}{\ding{51}}} \\ Flexibility and Efficiency & {\textcolor[RGB]{112,173,71}{\ding{51}}} & {\textcolor[RGB]{112,173,71}{\ding{51}}} \\ Aesthetic and Minimalist & {\textcolor[RGB]{112,173,71}{\ding{51}}} & {\textcolor[RGB]{112,173,71}{\ding{51}}} \\ Help Users with Errors & {\textcolor[RGB]{112,173,71}{\ding{51}}} & {\textcolor[RGB]{112,173,71}{\ding{51}}} \\ Help and Documentation & {\textcolor[RGB]{192,0,0}{\ding{55}}} & {\textcolor[RGB]{192,0,0}{\ding{55}}} \\ \hline \end{tabular} \begin{tabular}{ll} {\textcolor[RGB]{112,173,71}{\ding{51}}} & No observed violations of this heuristic\\ {\textcolor[RGB]{192,0,0}{\ding{55}}} & Observed violations of this heuristic \end{tabular} \caption{Summary of whether or not each system observed Nielsen's heuristics for user interface design.} \label{tab:heuristics} \end{table} \subsection{How Mental Models Impact Distrust and Misuse of Tools}\label{sub:mental} With both NSDT and Situ, some analysts distrusted and/or misused the tool because they had an incorrect mental model of how scores were generated. NSDT scored files on a scale of 1 to 10, where 1 meant that the file was benign and 10 that it was malicious.
While analysts had little trouble identifying malicious files using this score even if they did not understand how it was generated, the machine learning engine also provided a confidence level along with the score. This confidence level was always 100\%, a fact that A4 found suspicious, saying, ``Why trust this score?''. An unclear mental model of how NSDT generated the confidence level resulted in A4 mistrusting the tool because the confidence level was always the same. This result supports prior work~\cite{dovsilovic2018explainable}, which found that analysts who did not understand the ML algorithms distrusted the scores they provided. Unlike NSDT, Situ produced an anomaly score based on the flow of network traffic. A more anomalous flow received a higher score. Analysts had varying mental models for how Situ worked and therefore approached anomaly scores very differently. For example, A4 focused on any anomaly scores above a particular value they deemed significant, but discounted events as insignificant if the number of bytes transmitted was small. A5 would investigate which model contributed most heavily to the score, but mainly focused on IP associations. Finally, A6 understood that they should use the anomaly scores to identify a sequence of malicious actions composing a campaign, but they did not understand how to decide which anomalous activity warranted further investigation. In summary, analysts misused Situ for several reasons: (1) they did not understand the difference between anomalous and malicious, (2) they did not understand how to map anomaly scores to attacker actions, and (3) they did not know how to prioritize anomalous events. Even though we explained how anomaly scores were calculated during the familiarization period prior to testing and allowed analysts to ask for clarification, only A2 claimed to understand how anomaly scores were calculated during the focus group.
These results suggest that AD tools such as Situ may require a more accurate mental model of how scores are produced in order for analysts to use them properly because they require analysts to make complex inferences from the score and to differentiate between anomalous and malicious. In contrast, NSDT flagged files as malicious or non-malicious on a scale of 1 to 10 and would not necessarily require any understanding of the ML model to use effectively, though a lack of understanding can lead to distrust. \subsection{Experience, Tool Performance and Tool-Analyst Match} To assess performance, we let $fc$ and $tc$ denote the number of false and true conclusions made by an analyst, respectively, and define the false conclusion rate $fcr := fc/(fc + tc)$. A false conclusion occurred when an analyst thought they found malicious activity with a tool, and the activity was actually benign. Table~\ref{tab:expact} shows the number of attack actions identified by each analyst and their false conclusion rate. We found that the mean false conclusion rate for analysts was .57 (std=.13) with Situ and .28 (std=.25) with NSDT. We did not find that an analyst's experience level directly correlated with their ability to use the tools. With NSDT, an analyst with only 1 year of experience (A3) performed as well as an analyst with 8 years of experience (A5). For Situ, an analyst with only 2 months of experience (A1) performed as well as another analyst with 5 years of experience (A6) and better than an analyst with 8 years of experience (A5). We used a scatter matrix to check for correlations between performance and other demographic data collected, such as education, but found none. This result is surprising. We expected analysts with more experience and education to outperform junior analysts. We also found that most analysts performed better with one tool or the other. A1 and A2 performed well with Situ, but poorly with NSDT. A3 and A5 performed well with NSDT, but poorly with Situ.
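As a minimal sketch of the summary statistics above, the per-analyst $fcr$ values transcribed from Table~\ref{tab:expact} reproduce the reported means, with the population standard deviation matching the reported spreads (our transcription and choice of estimator; the paper's computation details beyond the definition of $fcr$ are an assumption):

```python
from statistics import mean, pstdev

# Per-analyst false conclusion rates, transcribed from Table 3 (A1..A6).
situ_fcr = [0.5, 0.43, 0.5, 0.8, 0.71, 0.5]
nsdt_fcr = [0.0, 0.67, 0.2, 0.5, 0.0, 0.33]

# Population std (pstdev) reproduces the reported .57 (.13) and .28 (.25).
print(round(mean(situ_fcr), 2), round(pstdev(situ_fcr), 2))
print(round(mean(nsdt_fcr), 2), round(pstdev(nsdt_fcr), 2))
```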
This result may suggest a tool-analyst match, where individual analysts are predisposed to certain tool types. \begin{table}[h!] \begin{center} \begin{tabular}{ll|cc|cc} & & \multicolumn{2}{|c|}{Situ} & \multicolumn{2}{c}{NSDT} \\ \hline Analyst & Experience & $tc$ & $fcr$ & $tc$ & $fcr$ \\ \hline A1 & 2 months & 3 & .5 & 1 & 0\\ A2 & 3 years & 4 & .43 & 1 & .67\\ A3 & 1 year & 2 & .5 & 4 & .2\\ A4 & 1 year & 1 & .8 & 2 & .5\\ A5 & 8 years & 2 & .71 & 4 & 0\\ A6 & 5 years & 3 & .5 & 5 & .33\\ \end{tabular} \end{center} \caption{Analyst Experience and Performance metrics depicted. A false conclusion occurred when an analyst thought they found malicious activity with a tool, but the activity was actually benign. Because NSDT flagged malicious files, an $fcr$ of 0 was possible for analysts who focused solely on flagged files and did not attempt to draw further conclusions about the nature of the attack.} \label{tab:expact} \end{table} \begin{table*}[hbt!] \centering \begin{tabular}{@{}rcc@{}} \toprule \textbf{} & \textbf{NSDT} & \textbf{Situ}\\ \midrule What was your overall impression of the tool? & .39 & .68 \\ Was this tool easy and intuitive to use? & -.08 & .09\\ How do you see this tool fitting into your workflow? & .44 & .53\\ If this tool was in your current work environment, would you use it? & .83 & 1\\ What was your impression of the alerts raised by the tool? & .56 & .53\\ \midrule Average Sentiment Ratio & \textbf{0.43} & \textbf{0.53} \\%\textbf{2.14} & \textbf{2.83}\\ \bottomrule \end{tabular} \caption{Sentiment ratio, $S_r: = (p-n) / (p+n)$ with $p, n$ the number of positive/negative statements, on post-test questionnaire reported. Note that $S_r \in [-1,1]$ with $S_r = \pm 1$ iff all comments were positive/negative, respectively, and $S_r = 0$ iff $p=n$. 
In spite of the concerns regarding intuitiveness of the alerts raised by the tools, analysts expressed overwhelmingly positive sentiment that they would use both tools if they were integrated into their work environment.} \label{tab:sentiment} \end{table*} \subsection{User Attitudes} Overall, analysts were optimistic about the capabilities these tools could provide. The analysts liked Situ because it allowed them to discover a wide range of attacker actions \textit{during} an attack, whereas they felt most tools only allow them to respond \textit{after} the attack has already taken place. After using Situ, A2 shared that it was ``better than waiting for a light to turn red to do your job''. While analysts viewed NSDT as a more retroactive tool, because it flagged malicious files rather than identifying anomalies, they also felt it could help them automate their workflow and conduct additional analysis. Table~\ref{tab:sentiment} summarizes the results of our sentiment analysis of the follow-up survey for each tool, described in Section~\ref{sub:analysis}. Analysts expressed a more positive overall impression of Situ than NSDT. One possible explanation for this fact is that several analysts were very frustrated with NSDT's user interface for reasons noted in the previous section. As a group, analysts did not find either tool particularly intuitive, expressing neutral sentiment for this question. Analysts also showed some reservations about the alerts raised by the tools and how each tool would fit into their workflow. In spite of these concerns, analysts expressed overwhelmingly positive sentiment that they would use both tools if they were integrated into their work environment. These results suggest that analysts are excited about the possibilities that ML tools provide and willing to use them in practice.
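The sentiment ratio behind Table~\ref{tab:sentiment}, defined in Section~\ref{sub:analysis}, is straightforward to sketch; the composite ratio simply pools both coders' counts before taking one ratio (the counts in the usage line below are made-up illustrative values, not data from the study):

```python
def sentiment_ratio(p, n):
    """S_r = (p - n) / (p + n): +1 if all positive, -1 if all negative, 0 if balanced."""
    if p + n == 0:
        raise ValueError("no comments to score")
    return (p - n) / (p + n)

def composite_ratio(counts):
    """Pool (p, n) counts from several coders, then take a single ratio."""
    p = sum(c[0] for c in counts)
    n = sum(c[1] for c in counts)
    return sentiment_ratio(p, n)

# Two coders with hypothetical counts (3 pos/1 neg) and (2 pos/2 neg):
composite_ratio([(3, 1), (2, 2)])  # -> 0.25
```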
However, ML-based security tool vendors still have plenty of work to do to enhance the usability of their products, including addressing UI issues, helping analysts interpret alerts, and establishing a more intuitive workflow. \section{Discussion \& Future Work}\label{discussion} This work identified several serious usability issues in the two ML-based tools studied, including failure to follow established usability heuristics for user interface design and a lack of transparency into how scores are produced that caused distrust and/or misuse among analysts. In light of these problems, we make the following recommendations: \begin{enumerate} \item Vendors should conduct usability tests with actual SOC analysts, both experienced and inexperienced, throughout the software development life cycle. While heuristic evaluations are valuable, they require expertise to apply properly~\cite{thovtrup1991assessing} and are not as effective at identifying major issues pertinent to real users~\cite{paz2015heuristic}. This suggestion is also supported by the work of Bano et al.~\cite{bano2013user}, which concluded that software systems benefit from the inclusion of users in early stages of product development. \item ML-based tools should provide analysts with more guidance on how to understand and utilize their output. The benefit of ML is lost if analysts cannot understand the meaning of the scores produced. Prior research recommends including analysts when developing machine learning models to ensure interpretability~\cite{akinrolabu2018challenge,cashman2019user}. At a minimum, the vendor should conduct usability tests to validate that analysts are able to comprehend and use the scores produced by ML tools as intended by the vendor. \end{enumerate} The lack of sufficient explanation of ML concepts in either of the user interfaces we examined resonates with prior work. 
Sopan et al.~\cite{sopan2018building} found that their initial user interface, which assumed a base level of knowledge about machine learning, had to be modified for the analysts who were not as familiar with relevant terminology. Usable ML tools must bridge the ``semantic gap''~\cite{sommer2010outside} to help analysts who are not machine learning experts identify actionable insights. In addition, our results not only showed that incorrect mental models can cause distrust and misuse of tools, but also suggested that certain categories of ML tools require analysts to have more accurate mental models. Specifically, we found that Situ, an AD tool, required a more accurate mental model to use because analysts had to make inferences based upon anomaly scores, whereas NSDT, an AV tool, flagged files as malicious or non-malicious and was therefore simple to interpret without any understanding of the underlying models. While prior research has explored how mental models impact the usability of encryption~\cite{wu2018tree}, the Tor browser~\cite{winter2018tor}, and password managers~\cite{pearman2019people}, no research has focused specifically on how mental models impact SOC analysts' usage of ML-based tools. Our research also uncovered the possibility of a tool-analyst match. All analysts performed better with one tool or the other, yet we found no correlation between the demographic information we collected and performance. These results suggest that other factors such as prior background knowledge or personality play a significant role in ML-based tool usage. While exploring personal attributes that impact tool usage was not the focus of our study, we believe this is an area that would be fruitful for researchers to explore further. We plan to continue this work in several ways. First, we want to analyze a broader set of ML-based tools in order to identify usability paradigms and common issues within each paradigm.
Second, we want to categorize analysts' mental models of different tool types and understand how those mental models impact their ability to use the tools. The analysts in this study were excited about integrating ML tools into their SOCs and our research aims to help ensure that those tools are both usable and useful in real-world contexts. \section*{Acknowledgments} The authors thank all security analysts that agreed to participate in this study and the team at the National Cyber Range, without whom this study would not have been possible. [Rest of acknowledgements redacted for double-blind review.]
\section{Introduction} \subsection{Problem Description, Objectives and Context}\label{sec.intro} We consider the problem of computing the minimum of a set of numbers over a network, and we propose a distributed, iterative solution achieving \emph{global} and \emph{uniform}, albeit \emph{approximate}, asymptotic stability. We are given a set $\mathcal N$ of $N$ decision makers (or \emph{agents}), where each agent $i\in\mathcal N$ is provided with a number ${\rm M}_i\in \mathbb R_{\ge 0}$ not known a priori by the others. The agents exchange information over a communication network with only a subset of other agents (called their \emph{neighborhood}). The approximate minimum sharing problem consists in the design of an algorithm guaranteeing that each agent asymptotically obtains a ``sufficiently good'' estimate of the quantity \begin{equation}\label{d.uM} {\rm M}^\star := \min_{i\in\mathcal N} {\rm M}_i . \end{equation} Clearly, ``$x_i={\rm M}^\star,\ \forall i\in\mathcal N$'' is also the {\em unique} solution to every constrained optimization problem of the form \begin{equation}\label{s.min_opt} \begin{aligned} &\max \, \sum_{i\in\mathcal N} \psi_i(x_i) \\ &\ \text{s.t.}\ \ x_i \le {\rm M}_i , & \forall &i\in\mathcal N\\ &\phantom{\ \text{s.t.}}\ \ x_i=x_j,& \forall & i,j\in\mathcal N \end{aligned} \end{equation} obtained with $\psi_i$, $i\in\mathcal N$, continuous and strictly increasing functions. Therefore, the minimum sharing problem is equivalent to the constrained distributed optimization problem \eqref{s.min_opt}, thus intersecting the wide research field of distributed optimization \cite{NotarstefanoTutorial}.
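The equivalence above can be checked numerically on a toy instance. In the sketch below (the numbers ${\rm M}_i$ and the increasing functions are illustrative choices of ours), the consensus constraint $x_i=x_j$ reduces the problem to a scalar one, and a brute-force search recovers $\min_{i}{\rm M}_i$ as the maximizer:

```python
# Sketch checking the equivalence claimed above on a toy instance: under
# the consensus constraint x_i = x for all i, the problem reduces to
# maximizing sum_i psi_i(x) subject to x <= min_i M_i, which is attained
# exactly at x = min_i M_i. Data below is our own illustrative example.
import math

M = [7.0, 3.0, 9.0, 5.0]
psis = [math.atan, lambda s: s, lambda s: s**3, math.sinh]  # strictly increasing

grid = [k * 0.25 for k in range(0, 41)]          # candidate common values in [0, 10]
feasible = [x for x in grid if x <= min(M)]      # enforce x <= M_i for all i
best = max(feasible, key=lambda x: sum(p(x) for p in psis))
print(best)  # -> 3.0, i.e. the minimum of the M_i
```

Since every $\psi_i$ is strictly increasing, the objective is strictly increasing in the common value $x$, so the constrained maximum sits at the boundary $\min_i{\rm M}_i$.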
The problem of computing a minimum (or, equivalently, a maximum) over a network of decision makers is a classical problem in multi-agent control, with applications in distributed estimation and filtering, synchronization, leader election, and computation of network size and connectivity (see, e.g., \cite{Bullo2009,Santoro2006,nejad_maxconsensus_2009,iutzeler_analysis_2012,golfar_convergence_2019} and the references therein). Perhaps the most elementary existing algorithms solving the minimum sharing problem are \emph{FloodMax} \cite{Bullo2009} and \emph{Max-Consensus} \cite{nejad_maxconsensus_2009,iutzeler_analysis_2012,golfar_convergence_2019}. In its simplest form, Max-Consensus\footnote{For brevity, we only focus on Max-Consensus. However, the same conclusions apply to FloodMax as well.} requires each agent $i\in\mathcal N$ to store an estimate $x_i\in\mathbb R$ of ${\rm M}^\star$ which is updated iteratively according to the update rule \begin{subequations}\label{s.ex.maxconsensus} \begin{equation}\label{s.ex.maxconsensus_updatelaws} x_i^{t+1} = \min_{j\in [i]} x_j^t ,\qquad \forall i\in\mathcal N , \end{equation} with the initialization \begin{equation}\label{s.ex.maxconsensus_initialization} x_i^{t_0}= {\rm M}_i ,\qquad \forall i\in\mathcal N, \end{equation} \end{subequations} where $t$ is the iteration variable, $t_0$ its initial value, and $[i]\subset\mathcal N$ denotes the neighborhood of agent $i$ (we assume $i\in[i]$). The update law \eqref{s.ex.maxconsensus_updatelaws} is decentralized and scalable, in that each agent needs only information coming from its neighbors and each agent stores only one variable. However, although \eqref{s.ex.maxconsensus_updatelaws} guarantees convergence of each $x_i$ to ${\rm M}^\star$ when the estimates $x_i$ are initialized as specified in~\eqref{s.ex.maxconsensus_initialization}, \emph{convergence is not guaranteed for an arbitrary initialization}.
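As a concrete illustration of the update rule \eqref{s.ex.maxconsensus}, the following Python sketch simulates it on a four-agent path network; the network and the numbers ${\rm M}_i$ are illustrative choices of ours, not taken from the paper:

```python
# Illustrative sketch of the Max-Consensus update (here computing a
# minimum, as in the paper). The network and the numbers M_i are ours.
M = [7.0, 3.0, 9.0, 5.0]                                   # private numbers M_i
nbhd = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2, 3], 3: [2, 3]}  # path graph, i in [i]

x = list(M)                                # correct initialization: x_i^{t0} = M_i
for t in range(len(M)):                    # diameter-many rounds suffice
    x = [min(x[j] for j in nbhd[i]) for i in range(len(M))]

print(x)  # -> [3.0, 3.0, 3.0, 3.0]: every estimate equals min(M)
```

With the prescribed initialization, the smallest value propagates one hop per round, so all estimates reach $\min_i {\rm M}_i$ after at most diameter-many iterations.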
In fact, if \begin{equation}\label{s.ex.init2} \exists i\in\mathcal N \ {\rm s.t.}\ x_i^{t_0}<{\rm M}^\star, \end{equation} then the corresponding estimate $x_i^t$ produced by \eqref{s.ex.maxconsensus_updatelaws} satisfies $x^t_i<{\rm M}^\star$ for all subsequent $t$, so that $x_i^t\to {\rm M}^\star$ cannot hold\footnote{In this specific case, we also observe that any \emph{consensual} configuration (i.e., $x_i=x_j$ for all $i,j\in\mathcal N$) is an equilibrium of \eqref{s.ex.maxconsensus_updatelaws}. This, in turn, is intimately linked to the unfeasibility result of \cite[Theorem 3.1.1]{Santoro2006}, and to the \emph{detectability} issues appearing in many control problems, such as \emph{Extremum Seeking} \cite{Ariyur2003,Tan2006}.}. Therefore, since convergence to ${\rm M}^\star$ holds only for some specific initial values $x_i^{t_0}$, the Max-Consensus algorithm~\eqref{s.ex.maxconsensus} is not \emph{globally convergent}. While there are application domains for which attaining global convergence is not strictly necessary, there are many others in which it is a crucial requirement. This is the case, for instance, when the quantities ${\rm M}_i$ can change at run time (see the two use-cases illustrated in Section~\ref{sec.app}). To see how this may be a problem for the update law \eqref{s.ex.maxconsensus}, assume by way of example that the estimates $x_i^t$ have reached the value ${\rm M}^\star$ at a given $t_1$, i.e. $x_i^{t_1}={\rm M}^\star$ for all $i\in\mathcal N$, and assume that there is a unique $k\in\mathcal N$ such that ${\rm M}^\star={\rm M}_k$. Now, suppose that at some $t_2>t_1$ the value of ${\rm M}_k$ increases, thus increasing ${\rm M}^\star$ as well. Then, the condition \eqref{s.ex.init2} holds for $t_0=t_2$, so that, in view of the discussion above, the update law \eqref{s.ex.maxconsensus_updatelaws} fails to track the new minimum.
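The failure mode just described is easy to reproduce numerically. In the sketch below (an illustrative four-agent network of our own choosing), one estimate is initialized below the true minimum $\min_i {\rm M}_i$, and the min-update can never raise it:

```python
# Sketch of the non-global convergence discussed above: an estimate
# initialized below the true minimum is never raised by the min-update.
# The network and numbers are our own illustrative example.
M = [7.0, 3.0, 9.0, 5.0]                                   # true minimum is 3.0
nbhd = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2, 3], 3: [2, 3]}

x = [7.0, 3.0, 1.0, 5.0]                   # agent 2 starts below min(M)
for t in range(10):
    x = [min(x[j] for j in nbhd[i]) for i in range(len(M))]

print(x)  # -> [1.0, 1.0, 1.0, 1.0]: all estimates stuck below min(M) = 3.0
```

The spurious low value propagates exactly like a legitimate one, so the whole network locks onto it.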
Global attractiveness is not the only desirable property one may be interested in when the minimum sharing problem is considered over large networks with possibly changing conditions. In fact, a crucial role is also played by \begin{enumerate} \item \emph{Uniformity of the convergence}: the convergence rate does not depend on the initial value $t_0$ of the iteration variable and is constant over compact subsets of initial conditions. \item \emph{Stability of the steady state}: small variations in the parameters and initial conditions map into small deviations from the unperturbed trajectories. \item \emph{Scalability}: the number of variables stored by each agent does not grow with the network size or the number of interconnections. \item \emph{Decentralization of the updates}: the update law of each agent uses only local information and depends on parameters that are independent of those of the other agents. \end{enumerate} Indeed, uniform global attractiveness and stability of the steady state confer robustness against uncertain and time-varying conditions and parameters (see e.g. \cite[Chapter~7]{Goebel2012}), making the minimum sharing method suitable for applications in which the quantities ${\rm M}_i$ vary in time. Moreover, scalability and decentralization enable the application to large-scale networks. In this direction, in this paper we propose a novel solution to the minimum sharing problem having scalability and decentralization properties similar to those of Max-Consensus~\eqref{s.ex.maxconsensus}, but, in addition, possessing the aforementioned globality, uniformity and stability properties. \subsection{Motivating Applications}\label{sec.app} Our methodology is motivated by the two application contexts described below. In both cases, a key element consists in solving an instance of the minimum-sharing problem \eqref{s.min_opt} in which the parameters~${\rm M}_i$, and hence the minimum ${\rm M}^\star$, may change over time.
In these contexts, (i) global attractiveness allows the changing minimum ${\rm M}^\star$ to be tracked, (ii)~uniformity of convergence guarantees that the convergence rate is always the same and does not degrade with time, and (iii) stability guarantees that relatively small variations of the parameters lead to small transitory deviations from the optimal steady state. \subsubsection{Cooperative Control of Traffic Networks} Consider a traffic network consisting of a set of vehicles driving on a highway in an intense traffic situation. Some of the vehicles have self-driving capabilities, and we can assign their driving policies. The other vehicles are instead human-driven and, thus, are not controlled. The whole traffic network is seen as a \emph{plant} that, when not properly controlled, may exhibit undesired behaviors, such as ghost jams. The control goal consists in finding a control policy, distributed among the self-driving vehicles, which guarantees that the ``closed-loop'' traffic network behaves properly, leading to a smooth traffic flow where all the vehicles travel at a common maximal cruise speed. At each time, the maximum attainable cruise speed of each vehicle $i$ is constrained by a \emph{personal maximum value}, denoted by ${\rm M}_i$, which may depend on mechanical constraints, on the traffic conditions, on standing speed limits, or on other exogenous factors. A key part of the control task consists in the distributed computation of the maximum common cruise speed, ${\rm M}^\star$, compatible with all the personal velocity constraints. At each time, the problem of estimating ${\rm M}^\star$ is an instance of~\eqref{s.min_opt}, whose solution is precisely~\eqref{d.uM}. \subsubsection{Dynamic Leader Election} Another important motivating application is the distributed \emph{leader election} problem in dynamic networks, which shares many similarities with the previous application.
Single-leader election has been proved to be an unsolvable problem in general, even under bi-directionality, connectivity, and total reliability assumptions on the communication network \cite[Theorem 3.1.1]{Santoro2006}. A standard additional assumption making the problem well-posed is that each agent is characterized by a \emph{unique identifier}~${\rm M}_i$. Hence, the problem of leader election can be cast as finding the minimum, ${\rm M}^\star$, of such identifiers. The agent whose identifier coincides with ${\rm M}^\star$ declares itself the leader, while the others become followers. \subsection{Related Works and State of the Art}\label{sec.literature} Classical algorithmic approaches to the minimum sharing problem in arbitrary networks have been developed in the context of distributed algorithms and robotic applications. They include \emph{FloodMax} \cite{Bullo2009}, \emph{Max-Consensus}~\cite{nejad_maxconsensus_2009,iutzeler_analysis_2012,golfar_convergence_2019} (see \eqref{s.ex.maxconsensus}), \emph{MegaMerger} \cite{Gallager1983}, and the \emph{Yo-Yo} algorithm. See \cite{Santoro2006,Bullo2009} for a more detailed overview. Some of these approaches, such as the basic Max-Consensus~\eqref{s.ex.maxconsensus}, have nice scalability and decentralization properties: the update laws do not depend on \emph{centralized quantities}, such as parameters that need to be known in advance by all the agents, and employ a number of local variables which does not grow with the network size or topology. However, all such approaches require a correct initialization or a pre-processing synchronization phase, which are undesired limitations in applications of interest such as, for example, the ones discussed in Section~\ref{sec.app}.
If the minimum sharing problem is cast in terms of the optimization problem \eqref{s.min_opt}, then one can rely on a well-developed literature on discrete-time distributed optimization (see \cite{NotarstefanoTutorial} for a recent overview). If the functions $\psi_i$ in~\eqref{s.min_opt} are convex, indeed, different approaches can be used, such as {consensus-based (sub)gradient methods} \cite{nedic_distributed_2009,nedic_constrained_2010,lobel_distributed_2011,shi_extra_2015,shi_proximal_2015,yuan_convergence_2016}, {second-order methods} \cite{varagnolo_newton-raphson_2016,mokhtari_network_2017}, projected \cite{xie_distributed_2018} and primal-dual \cite{zhu_distributed_2012,chang_distributed_2014} methods with inequality constraints, methods based on the distributed Alternating Direction Method of Multipliers (ADMM) \cite{Boyd2011,mota_d-admm_2013,shi_linear_2014,jakovetic_linear_2015,ling_dlm_2015,chang_proximal_2016,makhdoumi_convergence_2017,NotarstefanoTutorial,bastianello_asynchronous_2020}, and methods based on gradient tracking \cite{xu_augmented_2015,nedic_achieving_2017,nedic_geometrically_2017,qu_harnessing_2018,xi_add-opt_2018,Bin2019}. Gradient methods typically achieve global attractiveness. However, among the cited references only \cite{nedic_constrained_2010} deals with constrained problems with different \emph{local constraints}\footnote{By the term ``local constraints'' we refer to private constraints an agent may have on its own variables that do not depend on the other agents' variables, e.g. the constraints $x_i\le {\rm M}_i$ in \eqref{s.min_opt}.} such as \eqref{s.min_opt}. Yet, \cite{nedic_constrained_2010} requires a vanishing stepsize, which makes convergence non-uniform. Gradient methods employing a fixed stepsize, and thus guaranteeing uniformity, are given in \cite{nedic_distributed_2009,lobel_distributed_2011,shi_extra_2015,shi_proximal_2015,yuan_convergence_2016,varagnolo_newton-raphson_2016,mokhtari_network_2017}.
However, they do not cover constrained problems of the kind~\eqref{s.min_opt}. Moreover, the first-order methods in \cite{nedic_distributed_2009,lobel_distributed_2011,yuan_convergence_2016} lead to an approximate convergence result in which the convergence speed and the approximation error need to be traded off. This, in turn, is consistent with our results, in which a compromise is more generally established between uniformity, approximation error and convergence rate. The approaches~\cite{xie_distributed_2018,zhu_distributed_2012,chang_distributed_2014} deal with inequality constraints including Problem~\eqref{s.min_opt}. Nevertheless, they require a correct initialization and, hence, they do not provide global attractiveness. The same issue applies to gradient-tracking methods~\cite{xu_augmented_2015,nedic_achieving_2017,nedic_geometrically_2017,qu_harnessing_2018,xi_add-opt_2018,Bin2019} (which, anyway, are developed for unconstrained problems), and to the ``node-based'' formulations of ADMM \cite{mota_d-admm_2013,shi_linear_2014,jakovetic_linear_2015,makhdoumi_convergence_2017,ling_dlm_2015}. Instead, the ``edge-based'' formulations of ADMM (e.g. \cite[Section 3.3]{NotarstefanoTutorial}, \cite{bastianello_asynchronous_2020}) do not suffer from this initialization issue, and they provide a solution which is global and uniform. Nevertheless, the number of variables that each agent has to store grows with the dimension of its neighborhood, thus incurring scalability issues. Moreover, stability is not usually considered in the analysis of the aforementioned designs, and typically the update laws employ coefficients (e.g. stepsizes) which must be common\footnote{Exceptions are given in the gradient-tracking designs of \cite{xu_augmented_2015,nedic_geometrically_2017}, where agents employ uncoordinated stepsizes. In both designs, the discrepancy between the stepsizes must be small enough.
Hence, these results may be seen as a ``robustness'' property relative to variations of the stepsizes with respect to their average. In turn, this property comes \emph{for free} if the algorithm is proved to be \emph{asymptotically stable} with a common stepsize (see, e.g., \cite[Chapter 7]{Goebel2012}).} to all agents (i.e., they are \emph{centralized} quantities). \subsection{Contributions \& Organization of the Paper} \label{sec.contribution} We propose a new approach to the minimum sharing problem that provides an adjustable \emph{approximate} (or \emph{sub-optimal} in terms of \eqref{s.min_opt}) solution enjoying the globality, uniformity, scalability and decentralization properties stated in Section~\ref{sec.intro}, which no existing algorithm appears to possess simultaneously. The proposed update laws have the form \begin{equation}\label{s.xi_fi} x_i^{t+1} = f_i(t,x^t), \end{equation} for some suitable functions $f_i$, where $x_i\in\mathbb R$ represents the estimate of ${\rm M}^\star$ stored by agent $i$, and $x:=(x_i)_{i\in\mathcal N}$ is the aggregate estimate. As formally specified later on in Section~\ref{sec.comm}, the actual structure of the functions $f_i$ encodes the decentralization constraints, allowing an agent's update to depend only on the estimates of a subset of the other agents (see Remark~\ref{rmk.decentralization}). We show that all the estimates $x_i$ converge, globally and uniformly, to a stable neighborhood of ${\rm M}^\star$ whose size can be made arbitrarily small by suitably tuning some control parameters. More precisely, the proposed approach enjoys the following properties: \begin{enumerate}[(a)] \item The algorithm is distributed and scalable, since only one variable is stored by each agent. \item\label{item.dec} The update law of each agent employs a gain which can be tuned independently from the others.
\item The estimates $x_i$ converge globally and uniformly to a stable steady state which can be made arbitrarily close to ${\rm M}^\star$. \item Exact convergence (i.e., all the estimates converge to ${\rm M}^\star$) can be achieved, at the price, however, of losing uniformity. \end{enumerate} In view of Item \eqref{item.dec}, the proposed method has good decentralization properties compared to most of the approaches mentioned in Section~\ref{sec.literature}. Nevertheless, we underline that the proposed method is not fully decentralized, as the agents are supposed to know a lower bound on ${\rm M}^\star$ (Assumption~\ref{ass.M_eps}) which explicitly enters the update laws. The paper is organized as follows. After providing preliminary definitions and remarks in Section~\ref{sec.prelim}, in Section~\ref{sec:min:share} we formulate the minimum-sharing problem and we describe the proposed solution methodology. The main convergence results are given in Section~\ref{sec:conv} and proved in Section~\ref{sec.proof}. Finally, numerical results and concluding remarks are reported in Sections~\ref{sec:simul} and~\ref{sec:concl}, respectively. \section{Preliminaries} \label{sec.prelim} \subsection{Notation} We denote by $\mathbb R$ and $\mathbb N$ the sets of real and natural numbers, respectively. If $a\in\mathbb R$, $\mathbb R_{\ge a}$ denotes the set of all real numbers greater than or equal to $a$, and similar definitions apply to other ordered sets and ordering relations. We denote by $\# A$ the cardinality of a set $A$. If $A,B\subset \mathbb R$, $A\setminus B:=\{ a\in A\,\mid\, a\notin B \}$ denotes the set difference between $A$ and $B$. We identify singletons with their unique element and, for $b\in\mathbb R$, we thus write $A\setminus b$ in place of $A\setminus \{b\}$. We denote norms by $|\cdot|$ whenever they are clear from the context.
With $A\subset\mathbb R^n$ and $x\in\mathbb R^n$, $\setdist{x}{A}:= \inf_{a\in A}|x-a|$ denotes the distance from $x$ to $A$. Sequences indexed by a set $S$ are denoted by $(x_s)_{s\in S}$. For a non-empty interval $[a,b]\subset \mathbb R$, we define the projection map $\projOp{[a,b]}:\mathbb R\to[a,b]$ as $\projOp{[a,b]}(s) := \min\{\max\{ s,\, a \},\, b\}$. A function $f:\mathbb R^n\to\mathbb R^m$, $n,m\in\mathbb N$, is \emph{locally bounded} if $f(K)$ is bounded for each compact set $K\subset\mathbb R^n$. In this paper, we consider discrete-time systems whose solutions are signals defined on a non-empty subset $\dom x$ of $\mathbb N$. For ease of notation, we will use $x^t$ in place of $x(t)$ to denote the values of a signal $x$. With $t_0\in\mathbb N$, we say that $x$ \emph{starts at $t_0$} if $\min \dom x = t_0$. \subsection{Communication Networks}\label{sec.comm} Throughout the paper, $\mathcal N$ denotes the (finite) set of agents in the network, and we let $N:=\#\mathcal N$. The network communication constraints are formally captured by the concept of ``communication structure'' defined below\footnote{A common way to define a communication structure on $\mathcal N$ is to consider an undirected graph $(\mathcal N,\mathcal E)$ with vertex set $\mathcal N$ and edge set $\mathcal E\subset\mathcal N\times\mathcal N$ such that if $(i,j)\in\mathcal E$ then agents $i$ and $j$ can communicate. In this case, $[i]:=\{i\}\cup\{ j\in\mathcal N\,\mid\, (j,i)\in\mathcal E\}$.}. \begin{definition}\label{def.com_struct} A \emph{communication structure} on $\mathcal N$ is a sequence $\mathcal C=([i])_{i\in\mathcal N}$ of subsets $[i]$ of $\mathcal N$ satisfying $i\in[i]$. \end{definition} For each $i\in\mathcal N$, the set $[i]$ is called the \emph{neighborhood} of $i$. A \emph{communication network} is a pair $(\mathcal N,\mathcal C)$, in which $\mathcal N$ is a set and $\mathcal C$ is a communication structure on $\mathcal N$.
For a given $I\subset\mathcal N$, we define the sequence of sets \begin{equation}\label{d.nbds} \begin{array}{lcl} [I]^0 &:=& I \\{} [I]^n &:=& \bigcup_{j\in [I]^{n-1}} [j],\quad n\in\mathbb N_{\ge 1} \end{array} \end{equation} so that, in particular, $[\{i\}]^1=[i]$. If $I=\{i\}$ is a singleton, we use the short notation $[\{i\}]^n=[i]^n$. Moreover, for $n,m\in\mathbb N$ we let \begin{equation*} [I]_m^n := [I]^n \setminus [I]^m. \end{equation*} We consider networks that are \emph{connected} according to the following definition. \begin{definition}\label{def.Iconnected} With $I\subset\mathcal N$, a communication network $(\mathcal N,\mathcal C)$ is said to be $I$-\emph{connected} if there exists $n_I\le N$ such that $[I]^{n_I} = \mathcal N$. \end{definition} The notion of $I$-connectedness is in general weaker than the usual notion of \emph{strong connectedness}, which requires the existence of a path between any two agents. Later on, we shall assume that $\mathcal N$ is given a communication structure $\mathcal C$ which is $I^\star$-connected for a specific subset $I^\star\subset\mathcal N$. For the purpose of analysis, this communication structure is assumed static. Likewise, the quantities ${\rm M}_i$ are assumed constant. In fact, this corresponds to a well-defined ``nominal setting'' for the proposed method in which we can prove the desired uniform global attractiveness and stability properties. Proving such properties in the nominal case, in turn, guarantees that the proposed method can be applied also to relevant classes of problems where the communication structure and the parameters ${\rm M}_i$ (and hence their minimum ${\rm M}^\star$) may change over time. Indeed, as already mentioned in Section~\ref{sec.intro}, uniform global attractiveness and stability ensure a proper approximate tracking of a time-varying minimum ${\rm M}^\star$ provided that its dynamics is sufficiently slow.
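The iterated neighborhoods $[I]^n$ and the resulting $I$-connectedness test can be sketched in a few lines of Python; the helper names and the example network below are our own, introduced only for illustration:

```python
# Sketch of the set iteration [I]^n defined above and of the resulting
# I-connectedness test. Function names and the example network are ours.
def iterate_nbhd(I, nbhd):
    """One step I -> [I]^1: union of the neighborhoods of the members of I."""
    out = set()
    for j in I:
        out |= set(nbhd[j])
    return out

def is_I_connected(I, nbhd, N):
    """True iff [I]^n covers the whole agent set for some n <= N."""
    agents = set(range(N))
    reach = set(I)
    for _ in range(N):
        reach = iterate_nbhd(reach, nbhd)
        if reach == agents:
            return True
    return reach == agents

nbhd = {0: [0, 1], 1: [1, 2], 2: [2], 3: [2, 3]}   # directed neighborhoods, i in [i]
print(is_I_connected({0}, nbhd, 4))     # -> False: agent 3 is never reached from 0
print(is_I_connected({0, 3}, nbhd, 4))  # -> True
```

The second call illustrates how $I$-connectedness can hold for a suitable seed set $I$ even when the network is not strongly connected.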
Moreover, classical results in the context of control under different time-scales (see, e.g., \cite{Kokotovic1999,Teel2003,Tan2006,Wang2012}) also guarantee good tracking performance under changes of the communication structure $\mathcal C$ that are, on average, sufficiently slow with respect to the dynamics of the update laws. In this respect, Section~\ref{sec:simul} provides numerical results in a scenario in which the communication structure and the numbers ${\rm M}_i$ are subject to impulsive changes separated by relatively large intervals of time. \subsection{Stability and Convergence Notions}\label{sec.convergence} We consider discrete-time systems of the form \begin{equation} \label{pre.s.x} x^{t+1} = f(t,x^t), \end{equation} with state $x^t\in\mathbb R^n$, $n\in\mathbb N$. Given a closed set $\mathcal A\subset\mathbb R^n$, we say that $\mathcal A$ is \emph{stable} for \eqref{pre.s.x} if for each $\epsilon>0$ there exists $\delta(\epsilon)>0$ such that every solution of~\eqref{pre.s.x} satisfying $\setdist{x^{t_0}}{\mathcal A}\le \delta(\epsilon)$ also satisfies $\setdist{x^{t}}{\mathcal A}\le \epsilon$ for all $t\ge t_0$. We say that $\mathcal A$ is \emph{attractive} for \eqref{pre.s.x} if there exists an open superset $\mathcal O$ of $\mathcal A$ such that, for every $t_0\in\mathbb N$, every solution $x$ to \eqref{pre.s.x} with $x^{t_0}\in \mathcal O$, and every $\epsilon>0$, there exists $t^\star(t_0,x^{t_0},\epsilon)\in\mathbb N$ such that $\setdist{x^t}{\mathcal A}\le \epsilon$ holds for all $t\ge t_0+t^\star(t_0,x^{t_0},\epsilon)$. Different qualifiers can enrich this attractiveness property. In particular, the set $\mathcal A$ is said to be: \begin{itemize} \item \emph{Globally attractive} if $\mathcal O=\mathbb R^n$. \item \emph{Finite-time attractive} if the condition ``$\epsilon>0$'' can be replaced by ``$\epsilon\ge 0$''. \item \emph{Uniformly attractive in the initial time} $t_0$ if the map $t^\star(\cdot)$ does not depend on $t_0$.
\item \emph{Uniformly attractive in the initial conditions} $x^{t_0}$ if for each $(t_0,\epsilon)\in\mathbb N\times\mathbb R_{\ge 0}$, the map $t^\star(t_0,\cdot,\epsilon)$ is locally bounded. \item \emph{Uniformly attractive} if it is both uniformly attractive in the initial time and in the initial conditions. \item {\emph{$\epsilon$-approximately attractive}} (with $\epsilon>0$) if the set $\{ x\in\mathbb R^n\,\mid\, \setdist{x}{\mathcal A}\le \epsilon \}$ is attractive. \end{itemize} If $\mathcal A$ is both stable and attractive, it is said to be \emph{asymptotically stable}. Moreover, with $(f_\gamma)_{\gamma\in \Gamma}$ representing a family of functions $f_\gamma:\mathbb N\times\mathbb R^n\to\mathbb R^n$ indexed by a set $\Gamma$, consider the family of systems \begin{equation}\label{pre.s.xa} x^{t+1} = f_\gamma(t,x^t),\qquad\gamma\in \Gamma. \end{equation} Then, we say that the set $\mathcal A$ is \emph{practically attractive} for the family \eqref{pre.s.xa} if, for each $\epsilon>0$, there exists $\gamma^\star(\epsilon)\in\Gamma$ such that the set $\mathcal A$ is $\epsilon$-approximately attractive for the system~\eqref{pre.s.xa} obtained with $\gamma=\gamma^\star(\epsilon)$. \section{Distributed Minimum Sharing}\label{sec:min:share} \subsection{Problem Formulation} We are given a communication network $(\mathcal N,\mathcal C)$. Each agent $i\in\mathcal N$ is provided with a number ${\rm M}_i$, not known a priori by the others, and it stores and updates a local estimate $x_i\in\mathbb R$ of the quantity ${\rm M}^\star$ defined in \eqref{d.uM}. Thus, the problem at hand consists in designing an update law for each agent $i\in\mathcal N$ of the form \eqref{s.xi_fi} such that the resulting estimates $x_i^t$ converge to ${\rm M}^\star$, in one of the senses defined in Section \ref{sec.convergence}. The resulting family $f:=(f_i)_{i\in\mathcal N}$ is called the \emph{distributed methodology}.
In the following, we let $x:=(x_i)_{i\in\mathcal N}$ and we compactly rewrite \eqref{s.xi_fi} as \begin{equation}\label{s.x} x^{t+1} = f(t,x^t). \end{equation} As each agent is allowed to exchange information only with the agents belonging to its neighborhood $[i]$, the functions $f_i$ must respect this constraint. This is formally expressed by the following definitions. \begin{definition} With $V\subset\mathcal N$, a function $g$ on $\mathbb N\times\mathbb R^N$ is said to be \emph{adapted to $V$} if it satisfies $g(t,x)=g(t,z)$ for every $t\in\mathbb N$ and every $x,z\in\mathbb R^N$ satisfying $x_i=z_i$ for all $i\in V$. \end{definition} \begin{definition}\label{def.decentralized} The function $f=(f_i)_{i\in\mathcal N}$ is said to be $\mathcal C$-\emph{decentralized} if, for each $i\in\mathcal N$, the map $f_i$ is adapted to~$[i]$. \end{definition} Then, the \emph{distributed minimum sharing} problem is defined as follows. \begin{problem}\label{prob.1} Design a $\mathcal C$-decentralized function $f$ such that the set \begin{equation}\label{d.A} \mathcal A := \{{\rm M}^\star\}^N \end{equation} is globally attractive for \eqref{s.x}. \end{problem} \begin{remark}\label{rmk.decentralization} We stress that, if $f$ is $\mathcal C$-decentralized, then each function $f_i$ in \eqref{s.xi_fi} depends only on $(x_j)_{j\in [i]}$ and not on the whole state $x$. \end{remark} \begin{remark} Depending on the additional qualifiers that may characterize the attractiveness property of $\mathcal A$ in Problem \ref{prob.1}, we may have solutions to Problem \ref{prob.1} in ``different senses''. In the forthcoming section, we propose a methodology achieving both global attractiveness and global uniform {practical} attractiveness of $\mathcal A$, depending on the value of some user-decided control parameters. We will show that a compromise between how close we can get to $\mathcal A$ and uniformity in the initial time is necessary.
In particular, we show that exact attractiveness is possible only at the price of losing uniformity in the initial time, and that, if such a property is needed, then global practical uniform attractiveness is the best we can achieve. \end{remark} \subsection{Standing Assumptions} We consider Problem \ref{prob.1} under two main assumptions specified hereafter. We define the set \begin{equation}\label{d.Isr} \begin{array}{lcl} I^\star &:=& \displaystyle\argmin_{i\in\mathcal N} {\rm M}_i . \end{array} \end{equation} With the following assumption, we require the communication network to be connected with respect to $I^\star$. \begin{assumption}[Connectedness]\label{ass.connected} The communication network $(\mathcal N,\mathcal C)$ is $I^\star$-connected in the sense of Definition~\ref{def.Iconnected}. \end{assumption} The second assumption, instead, requires each agent to know a lower bound on ${\rm M}^\star$. \begin{assumption}[Consistency]\label{ass.M_eps} Each agent $i\in\mathcal N$ knows a number $\mu_i\in\mathbb R_{>0}$ such that $\mu_i\le{\rm M}^\star$. \end{assumption} It is worth noting that Assumption \ref{ass.M_eps} is a ``centralized'' assumption, in that it asks each agent to know a lower bound on the common, unknown quantity ${\rm M}^\star$. Nevertheless, it introduces almost no loss of generality in many applications of interest, including those mentioned in Section \ref{sec.app}, where knowing a lower bound on ${\rm M}^\star$ is a mild requirement. For instance, in both the traffic control and leader election problems we can assume that the quantities ${\rm M}_i$ are integers, so that ``$\mu_i\in(0,1)$ for all $i\in\mathcal N$'' is a feasible choice requiring no further knowledge of ${\rm M}^\star$. Furthermore, this assumption is not in principle needed if an approximate or practical attractiveness result is sought.
In fact, if for some $I\subset \mathcal N$ we have $\epsilon:=\max_{i\in I}\mu_{i}>{\rm M}^\star$, then ${\rm M}^\star\in[0,\epsilon)$ and, as clarified later on by the asymptotic analysis, we are able to claim that the set $[0,\epsilon]^N$ (which includes ${\rm M}^\star$) is practically attractive for $x$, where $\epsilon$ can be made arbitrarily small by choosing the $\mu_i$ accordingly. In the following we let \begin{equation}\label{d.ulb} \underline{\mu} := \displaystyle\min_{i\in\mathcal N} \mu_i. \end{equation} \subsection{The Update Laws} The proposed update law is obtained by choosing $f$ so that, for each $i\in\mathcal N$, Equation \eqref{s.xi_fi} reads as follows\footnote{Recall that $\projOp{[a,b]}(s) := \min\{\max\{ s,\, a \},\, b\}$.} \begin{equation}\label{s.xi} x_i^{t+1} = \projOp{[\mu_i,\, {\rm M}_i]}\left({\rm e}^{h_i^t} x_i^t + k_i \sum_{j\in[i]}\big(x_j^t-x_i^t \big)\right), \end{equation} in which $\mu_i>0$ is the same quantity as in Assumption \ref{ass.M_eps}, $k_i>0$ is a free control gain chosen to satisfy \begin{equation} \label{inq.ki_1} 0< k_i \le \dfrac{1}{\#([i]\setminus i)} \end{equation} and $h_i:\mathbb N\to \mathbb R_{\ge 0}$ is a time signal to be designed later on. Notice that, as in \cite{nedic_constrained_2010}, the update laws \eqref{s.xi} have the form of a projected (onto the interval $[\mu_i,\,{\rm M}_i]$) consensus-like protocol. Unlike \cite{nedic_constrained_2010}, however, the matrix defining the estimates' dynamics need {\em not} be column- or row-stochastic, and the coefficients $k_i$ are only constrained by \eqref{inq.ki_1}; hence, they can be chosen in a completely decentralized way. Moreover, unlike all the aforementioned distributed optimization approaches, the restriction of the dynamics to the \emph{consensus manifold}\footnote{That is, the set $\{x\in\mathbb R^N\,\mid\, x_i=x_j,\ \forall i,j\in\mathcal N\}$.} is not marginally stable. Rather, it is deliberately made unstable by the terms ${\rm e}^{h_i^t}$.
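A minimal numerical sketch of the projected, excited, consensus-like update above is given below. The network, the gains, the lower bounds $\mu_i$, and the periodic excitation pulses are illustrative choices of ours, not the paper's simulation setup; the intent is only to show the qualitative behavior (convergence to a neighborhood of the minimum from an arbitrary initialization):

```python
import math

# Sketch (our reading of the update law above, with illustrative data):
#   x_i <- proj_[mu_i, M_i]( e^{h_i(t)} x_i + k_i * sum_{j in [i]} (x_j - x_i) )
def proj(s, lo, hi):
    return min(max(s, lo), hi)

M = [7.0, 3.0, 9.0, 5.0]                                   # true minimum is 3.0
mu = [0.5] * 4                                             # lower bounds mu_i
nbhd = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2, 3], 3: [2, 3]}
k = [0.5 / (len(nbhd[i]) - 1) for i in range(4)]           # gains within the bound

def h(i, t):
    return 0.05 if t % 10 == 0 else 0.0                    # periodic excitation

x = [1.0, 8.0, 0.2, 6.0]                                   # arbitrary initialization
for t in range(500):
    x = [proj(math.exp(h(i, t)) * x[i]
              + k[i] * sum(x[j] - x[i] for j in nbhd[i]),
              mu[i], M[i])
         for i in range(4)]

print(x)  # estimates settle near min(M) = 3.0
```

The projection caps the minimizing agent's estimate at its own ${\rm M}_i$, the excitation keeps pushing the estimates upward so that no consensual point below the minimum is an equilibrium, and the consensus term ties the remaining agents to the capped one; the residual offset scales with the excitation amplitude.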
\subsection{Excitation Properties} The signals $h_i$ will be chosen to guarantee one of the following \emph{excitation properties}. \begin{definition}[Sufficiency of Excitation]\label{d.SE} With $t_0\in\mathbb N$, the family $(h_i)_{i\in\mathcal N}$ is said to be \emph{sufficiently exciting from $t_0$} if there exist $\underline h(t_0)>0$ and $\Delta(t_0)\in\mathbb N_{\ge 1}$ such that, for each $m\in\mathbb N_{\ge 1}$ satisfying \begin{align}\label{e.SE.m} m &\le \dfrac{1}{\underline h(t_0)}\log\left( \dfrac{\M\sr}{\und{\lb}} \right) \end{align} and each $i\in\mathcal N$, there exists at least one $s_i\in\{t_0+1+(m-1)\Delta(t_0),\,\dots,\,t_0+m\Delta(t_0)\}$ such that $h_i^{s_i}\ge \underline h(t_0)$. \end{definition} In qualitative terms, given an initial time $t_0$, sufficiency of excitation implies that the signals $h_i$ are positive ``frequently enough'' for a ``large enough'' amount of time succeeding $t_0$. When $(h_i)_{i\in\mathcal N}$ is sufficiently exciting from \emph{every} $t_0$, and independently of it, then we say that $(h_i)_{i\in\mathcal N}$ enjoys the \emph{uniformity of excitation} property. \begin{definition}[Uniformity of Excitation]\label{d.PE} The family $(h_i)_{i\in\mathcal N}$ is said to be \emph{uniformly exciting} if it is sufficiently exciting from every $t_0$, with $\underline h$ and $\Delta$ not dependent on $t_0$. \end{definition} Uniformity of excitation can be seen as a ``uniform in $t_0$'' version of sufficiency of excitation and, in particular, it implies that all the signals $h_i$ take positive values infinitely often. Defined in this way, both these properties are ``centralized'', in that they employ quantities common to all the agents. However, both can be easily obtained by means of decentralized design policies in which the signals $h_i$ are chosen independently of each other.
This is the case, for instance, when the signals $h_i$ are \emph{periodic} (with possibly different periods) and not identically zero, as formalized in the following lemma (proved in \ref{apd.lemma_PE}). \begin{lemma}\label{lem.PE} Suppose that, for each $i\in\mathcal N$, $h_i$ is periodic and there exists $t\in\mathbb N$ for which $h_i^t >0$. Then, the family $(h_i)_{i\in\mathcal N}$ is uniformly exciting. \end{lemma} \begin{remark} If $h_i^t=0$ for all $i\in\mathcal N$ and $t\in\mathbb N$, each of the infinitely many points of the consensus manifold $\mathcal M$ is an equilibrium for \eqref{s.xi}. Since $\M\sr\in\mathcal M$, this implies that $\M\sr$ is a well-defined steady state for \eqref{s.xi}. However, in this case $\M\sr$ cannot be reached from any of the initial conditions in $\mathcal M$, as they are themselves equilibria. This, in turn, is related to the impossibility result \cite[Theorem 3.1.1]{Santoro2006} in the leader election problem in the absence of unique identifiers, and is at the basis of the non-globality of the FloodMax and Max-Consensus algorithms (see Section \ref{sec.intro}). In order to prevent the consensual states in $\mathcal M$ from being equilibria, the signals $h_i^t$ must carry enough excitation, in the sense of Definitions \ref{d.SE} or \ref{d.PE}. As formally stated later on in Theorem \ref{thm.main}, this indeed permits recovering globality, although it ruins ``exactness'' of convergence of each estimate $x_i$ to $\M\sr$, which is itself a consensual state. In these terms, the signals $h_i$ play the same role as the \emph{dithering} signals in Extremum Seeking approaches \cite{Tan2006,Ariyur2003}. \end{remark} \section{Convergence Results}\label{sec:conv} \subsection{Main result} For ease of notation, we write the update laws \eqref{s.xi} in the compact form \eqref{s.x}.
The following theorem -- which is the main result of the paper -- relates the excitation properties of the signals $h_i$ to the asymptotic convergence of the estimates $x_i$ produced by the update laws \eqref{s.xi} to $\M\sr$. In particular, it shows that sufficiency of excitation implies convergence (possibly exact), while uniformity of excitation implies uniform convergence, but ruins exactness. Further remarks and insights on the results given in the theorem follow thereafter in Section~\ref{sec.remarks}. \begin{theorem}\label{thm.main} Under Assumptions \ref{ass.connected} and \ref{ass.M_eps}, consider the update laws \eqref{s.xi}, in which $k_i$ satisfies \eqref{inq.ki_1}. Suppose that, for a given $t_0\in\mathbb N$, the family $(h_i)_{i\in\mathcal N}$ is sufficiently exciting from $t_0$ in the sense of Definition \ref{d.SE}. Then, the following claims hold: \begin{enumerate} \item \label{thm.main.item1} There exists $t^\star=t^\star(t_0)$ such that every solution $x$ to~\eqref{s.x} starting at $t_0$ satisfies \begin{equation*} \begin{array}{lclcl} x_i^t &\ge& \M\sr, && \forall t\ge t^\star(t_0),\ \forall i \in\mathcal N\setminus I^\star\\ x_i^t &=& \M\sr, && \forall t\ge t^\star(t_0),\ \forall i\in I^\star, \end{array} \end{equation*} with $I^\star$ given by \eqref{d.Isr}. \item \label{thm.main.item2} For each $\epsilon>0$, there exists $\delta(\epsilon)>0$ such that, if \begin{equation}\label{in.ls_hi} \limsup_{t\to\infty} h_i^t \le \delta(\epsilon),\quad \forall i\in\mathcal N , \end{equation} then each solution $x$ starting at $t_0$ satisfies \begin{equation}\label{e.lim_xi_Ai} \limsup_{t\to\infty}|x_i^t-\M\sr| \le \epsilon ,\quad \forall i\in\mathcal N. \end{equation} In particular, the set \begin{equation*} \mathcal A_\epsilon:= \prod_{i\in\mathcal N} \big[\M\sr,\,\min\{\M\sr+\epsilon,\,{\rm M}_i\}\big] \end{equation*} is globally attractive for \eqref{s.x}.
\item \label{thm.main.item3} If the family $(h_i)_{i\in\mathcal N}$ is uniformly exciting in the sense of Definition \ref{d.PE}, then $\mathcal A_\epsilon$ is globally uniformly attractive. \item If all the signals $h_i$ are periodic (with possibly different periods) and not identically zero, then there exists a compact set $\mathcal A_\epsilon^u\subset\mathcal A_\epsilon$ which is globally uniformly attractive and stable, hence, globally uniformly asymptotically stable. \item \label{thm.main.item4} If \begin{equation*} \lim_{t\to\infty} h_i^t =0,\quad \forall i\in\mathcal N \end{equation*} then the set $\mathcal A$, given by \eqref{d.A}, is globally attractive for \eqref{s.x}, i.e. \begin{equation*} \lim_{t\to\infty} x_i^t=\M\sr ,\quad\forall i\in\mathcal N. \end{equation*} \end{enumerate} \end{theorem} For the reader's convenience, the proof of Theorem~\ref{thm.main} is postponed to Section \ref{sec.proof}. \subsection{Remarks on the Result}\label{sec.remarks} Claim 1 of Theorem \ref{thm.main} states that, if the family $(h_i)_{i\in\mathcal N}$ is sufficiently exciting, then within a finite time $t^\star$ the estimates $x_i$ of the agents $i\in I^\star$ satisfying ${\rm M}_i=\M\sr$ reach the target value $\M\sr$, while the estimates $x_i$ of the remaining agents $i\in\mathcal N\setminus I^\star$ become larger than $\M\sr$. The time $t^\star$ is, however, a centralized quantity which depends on the excitation properties of all the signals $h_i$. Claim 2 characterizes the asymptotic behavior of the remaining agents, by stating that the update laws \eqref{s.xi} are able to drive the estimates $x_i$ arbitrarily close to $\M\sr$, provided that the amplitude of the signals $h_i^t$ is eventually reduced accordingly.
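To illustrate this practical-attractiveness mechanism numerically, the following sketch simulates the update law on a hypothetical fully connected $3$-agent network with constant excitation $h_i^t \equiv h$ and measures the residual error for two amplitudes; all numerical values are illustrative assumptions, not data from the paper.

```python
import numpy as np

def residual_error(h_amp, steps=40000):
    """Run the update law with constant excitation h_i^t = h_amp on a
    hypothetical fully connected 3-agent network; return max_i |x_i - M*|."""
    M  = np.array([10.0, 12.0, 13.0])   # so M* = min_i M_i = 10
    mu = np.full(3, 0.5)                # lower bounds mu_i <= M*
    k  = np.full(3, 0.3)                # satisfies k_i <= 1/#([i] \ i) = 1/2
    x  = mu.copy()
    for _ in range(steps):
        consensus = x.sum() - 3 * x     # sum_{j in [i]} (x_j - x_i), vectorized
        x = np.clip(np.exp(h_amp) * x + k * consensus, mu, M)
    return float(np.max(np.abs(x - M.min())))
```

Reducing the amplitude tightens the attractive set: the residual error obtained with `h_amp = 1e-4` is roughly an order of magnitude smaller than with `h_amp = 1e-3`, at the price of a slower transient ``from below''.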
Since the approximation $\mathcal A_\epsilon$ can be made arbitrarily tight by acting on the asymptotic bounds of the $h_i$ accordingly, this is a \emph{global practical attractiveness} result for the target set $\mathcal A$ (defined in \eqref{d.A}). More precisely, let $\Gamma$ be the set of all the families $\gamma:=(h_i)_{i\in\mathcal N}$ of functions $h_i:\mathbb N\to\mathbb R_{\ge 0}$, and consider a family of systems of the form \eqref{pre.s.xa}, with $x^t\in\mathbb R^N$ and $f_\gamma:=(f_{\gamma}^i)_{i\in\mathcal N}$ satisfying \begin{equation}\label{d.fgamma} f_\gamma^i(t,x) := \proj{[\mu_i,\, {\rm M}_i]}{{\rm e}^{h_i^t} x_i + k_i \sum_{j\in[i]}\big(x_j-x_i \big)} . \end{equation} Then, the second claim of the theorem can be restated as follows. \begin{corollary}\label{cor.p} Under the assumptions of Theorem \ref{thm.main}, the set $\mathcal A$ is globally practically attractive for the family \eqref{d.fgamma}. \end{corollary} Claim 3 of the theorem further strengthens Corollary~\ref{cor.p} to a \emph{uniform} global practical asymptotic stability property of $\mathcal A$ in the presence of uniformity of excitation. Moreover, in the relevant case in which the signals $h_i$ are periodic, Claim~4 guarantees the existence of a compact set included in $\mathcal A_\epsilon$ which is globally uniformly asymptotically stable. Finally, Claim 5 states that, if all the signals $h_i^t$ converge to zero, then a \emph{global attractiveness} result for the target set $\mathcal A$ holds (i.e. $x_i^t \to \M\sr$ for all $i\in\mathcal N$). However, we observe that, if $h_i^t\to 0$ for some $i\in\mathcal N$, then the family $(h_i)_{i\in\mathcal N}$ fails to be uniformly exciting, and thus the convergence of the estimates $x_i$ to $\M\sr$ is \emph{not} in general uniform in the initial time $t_0$.
This underlines an important difference between sufficiency and uniformity of excitation: sufficiency of excitation allows exact convergence, but prevents uniformity in the initial time. Uniformity of excitation, instead, guarantees uniform convergence and stability, but precludes exact convergence, guaranteeing only a weaker practical result. This, in turn, reveals a seemingly necessary compromise between exactness and uniformity of convergence. \subsection{On the Design of the Signals $h_i$} The signals $h_i$ are the only degrees of freedom left to be chosen in the update laws \eqref{s.xi}. In this respect, Theorem \ref{thm.main} links their amplitude and excitation properties to the corresponding asymptotic behavior of the estimates $x_i$, thus providing guidelines for their design. Based on the claims of Theorem \ref{thm.main}, in this section we discuss some possible designs guaranteeing sufficiency or uniformity of excitation. \subsubsection{Sufficiently Exciting Designs} Sufficiency of excitation of the family $(h_i)_{i\in\mathcal N}$ is guaranteed if each $h_i$ takes ``enough'' positive values. According to Definition \ref{d.SE}, and in particular to \eqref{e.SE.m}, how much is ``enough'' depends on centralized quantities. In turn, a design of the signals $h_i$ based on the knowledge of $t_0$ and of the quantities appearing in \eqref{e.SE.m} is undesirable, as it is inevitably centralized and not robust. A simple decentralized way to design a sufficiently exciting family $(h_i)_{i\in\mathcal N}$ amounts to choosing bounded signals $h_i$ satisfying \begin{equation}\label{e.sum_hi} \sum_{t\in\mathbb N} h_i^t = \infty ,\qquad\forall i\in\mathcal N. \end{equation} This, for instance, can be achieved by simply letting $h_i^t = a_i/(1+t)$ for some arbitrary $a_i>0$. \begin{lemma}\label{lem.SE} Suppose that, for each $i\in\mathcal N$, the signal $h_i$ is bounded and satisfies \eqref{e.sum_hi}.
Then, the family $(h_i)_{i\in\mathcal N}$ is sufficiently exciting in the sense of Definition \ref{d.SE}. \end{lemma} The proof of Lemma~\ref{lem.SE} follows directly from~\eqref{e.sum_hi}, hence it is omitted. \smallskip In view of Claim 5 of Theorem \ref{thm.main}, exact convergence of the estimates $x_i$ to $\M\sr$ is obtained if $\lim_{t\to\infty} h_i^t=0$ for all $i\in\mathcal N$. Moreover, convergence of $h_i$ to zero is implied by (although not equivalent to) the following property \begin{equation}\label{e.sum_hi2} \sum_{t\in\mathbb N} \big(h_i^t\big)^2 < \infty . \end{equation} It is interesting to notice that Properties \eqref{e.sum_hi}-\eqref{e.sum_hi2} are standard assumptions imposed on the \emph{stepsize} in classical \emph{stochastic approximation algorithms} \cite{Robbins1951,Kushner1997}, as well as in modern distributed optimization algorithms using vanishing step sizes \cite{NotarstefanoTutorial,nedic_constrained_2010,Simonetto2016}. In the context of this paper, these two conditions are simply sufficient conditions for sufficiency of excitation, which can be easily satisfied by decentralized designs of the signals $h_i$. \subsubsection{Uniformly Exciting Designs}\label{sec.design_hi} In view of Lemma \ref{lem.PE}, if every signal $h_i$ is periodic and not identically zero, then $(h_i)_{i\in\mathcal N}$ is uniformly exciting. While periodicity is not necessary for uniformity of excitation, it certainly is a relevant design choice due to its simplicity and effectiveness. Possible decentralized design choices for periodic signals $h_i$ leading to a uniformly exciting family $(h_i)_{i\in\mathcal N}$ are listed below, where the quantities $A_i,T_i,\rho_i>0$ are arbitrary. From the theoretical viewpoint, all the following options are equally fine. Depending on the application domain, however, some choices may be more convenient than others.
\begin{enumerate} \item \emph{Constant signals}: this is the simplest design choice and consists in choosing $h_i^t = A_i$ for all $t\in\mathbb N$ and all $i\in\mathcal N$. \item \emph{Rectified sinusoids}: different versions can be defined; for instance, $h_i^t = A_i |\sin(\pi t/T_i)|$ and $h_i^t=A_i\max\{0,\, \sin(2\pi t/T_i) \}$ both have period $T_i$. \item \emph{Square waves:} with $\rho_i\in(0,1]$ playing the role of a duty cycle, square waves have the form \begin{equation}\label{d.square_f} h_i^t = A_i\, {\rm step}\big({\rm mod}(t,T_i)- (1-\rho_i)T_i\big) \end{equation} in which ${\rm mod}(s,T) := s-\max\{nT \,\mid\, n\in\mathbb N,\ nT\le s\}$, and ${\rm step}(\cdot)$ denotes the \emph{step function} satisfying ${\rm step}(s)=0$ for $s<0$ and ${\rm step}(s)=1$ for $s\ge 0$. The signal \eqref{d.square_f} has period $T_i$ and satisfies $h_i^t=A_i$ for $\rho_iT_i$ time steps in each period. \end{enumerate} \section{Numerical Simulations}\label{sec:simul} \begin{figure}[h] \vspace*{-.2cm} \centering \includegraphics[width=\linewidth,trim=5em 1em 5em 0em,clip]{ex1_network} \vspace*{-.7cm} \caption{Communication structure of Simulation 1: \textbf{(a)} $[1]= \{1,3,4\}$, $[2]=\{2,3,4\}$, $[3]=\{1,2,3\}$, $[4]=\{1,2,4\}$; \textbf{(b)} $[1]= \{1,3,4\}$, $[2]=\{2,4\}$, $[3]=\{1,3,5,6\}$, $[4]=\{1,2,4,5\}$, $[5]=\{3,4,5\}$ and $[6]=\{3,6\}$; \textbf{(c)} $[1]=\{1,4\}$, $[4]=\{1,4,5,6\}$, $[5]=\{4,5\}$ and $[6]=\{4,6\}$.\vspace{-.3cm}} \label{Fig.ex1.top} \end{figure} \begin{figure*} \includegraphics[width=\linewidth,trim=4em 2em 4em 2em,clip]{ex1_sim} \vspace*{-.7cm} \caption{Evolution of the estimates $x_i$ in Scenario 1. The trajectory of the optimal value $\M\sr$ is shown as a dashed gray line. Colored lines depict instead the trajectories of the estimates $x_i$, $i=1,\dots,6$. In abscissa: iteration variable $t$.\vspace{-.25cm}} \label{Fig.ex1.sim} \end{figure*} In this section, we present two illustrative numerical simulation scenarios.
In Scenario~1, a network with a time-changing topology (see Figure~\ref{Fig.ex1.top}) is considered, while in Scenario~2, for a fixed network topology, the use of different signals $h_i$ is evaluated. \subsection{Scenario 1: Uniform Convergence} The first simulation, shown in Figure \ref{Fig.ex1.sim}, is obtained as follows. The simulation starts with a network of $4$ agents (Agents $1$, $2$, $3$, and $4$), provided with the communication structure shown in Figure \ref{Fig.ex1.top}-(a) and with numbers $({\rm M}_1,\,{\rm M}_2,\,{\rm M}_3,\,{\rm M}_4)=(10,\,12,\,13,\,13)$, implying $\M\sr={\rm M}_1=10$. The update laws \eqref{s.xi} are implemented with $\mu_i=1/2$ for all $i\in\{1,\dots,4\}$, with $(k_1,\,k_2,\,k_3,\,k_4)=(0.1,\, 0.08,\,0.05,\,0.09)$, and with the signals $h_i$ chosen as the square waves discussed in Section \ref{sec.design_hi} with parameters $(T_1,A_1,\rho_1)=(15,10^{-3},0.2)$, $(T_2,A_2,\rho_2)=(10,5\cdot10^{-4},0.5)$, $(T_3,A_3,\rho_3)=(5, 10^{-3},0.3)$, $(T_4,A_4,\rho_4)=(10,5\cdot10^{-4},0.5)$. At time $t=500$, two new agents (Agents $5$ and~$6$) are added to the network, and the communication structure is changed to the one shown in Figure~{\ref{Fig.ex1.top}-(b)}. The new agents have numbers $({\rm M}_5,{\rm M}_6)=(7,11)$, lower bounds $\mu_5=\mu_6=1/2$, coefficients $(k_5,k_6)=(0.07,0.1)$, and signals $h_i$ given by the square waves presented in Section \ref{sec.design_hi} with $(T_5,A_5,\rho_5) = (5,10^{-3},0.4)$ and $(T_6,A_6,\rho_6) = (7,25\cdot10^{-4},0.1)$. Furthermore, the numbers of Agents $1$ and~$3$ are changed to $({\rm M}_1,{\rm M}_3)=(11,13)$. The new optimum is thus $\M\sr={\rm M}_5=7$. At time $t=1500$, Agents $2$ and $3$ leave the network, and the communication structure is changed to that depicted in Figure \ref{Fig.ex1.top}-(c). Moreover, the numbers of the agents are changed to $({\rm M}_1,{\rm M}_4,{\rm M}_5,{\rm M}_6)=(12,16,11,16)$, leading to $\M\sr={\rm M}_5=11$.
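The square-wave excitation signals used above can be sketched as follows; this is a minimal Python sketch assuming the convention of \eqref{d.square_f}, i.e. an on-interval of length $\rho T$ placed at the end of each period.

```python
def square_wave(t, T, A, rho):
    """Square wave of period T, amplitude A and duty cycle rho in (0, 1]:
    equals A when mod(t, T) >= (1 - rho) * T, and 0 otherwise."""
    return A if (t % T) >= (1.0 - rho) * T else 0.0
```

For instance, `square_wave(t, T=10, A=5e-4, rho=0.5)` reproduces the on/off pattern used for agent $2$ in this scenario.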
Finally, at time $t=5000$, the number of Agent $4$ is changed to ${\rm M}_4=8$, so that $\M\sr={\rm M}_4=8$. As Figure \ref{Fig.ex1.sim} shows, convergence to the (time-varying) optimum $\M\sr$ is approximate, and the trajectories of the agents show residual oscillations. Figure~\ref{Fig.ex1.sim} also underlines that convergence to $\M\sr$ ``from below'' (i.e. when the initial values of the agents are smaller than $\M\sr$) is slower than convergence ``from above'' (i.e. when the initial values of the agents are larger than $\M\sr$). As shown in the analysis of Section \ref{sec.proof}, this is due to the fact that (i) the convergence rate ``from below'', proved in Section~\ref{sec.proof.1}, is determined by the values of the signals $h_i^t$, while (ii) the convergence rate ``from above'', proved in Sections \ref{sec.proof.2}-\ref{sec.proof.3}, is determined by the values of the coefficients $k_i$. \begin{figure}[h] \vspace*{-.25cm} \centering \includegraphics[width=\linewidth,trim=4em 0em 3em 0em,clip]{maxc_1} \vspace*{-.7cm} \caption{Evolution of the Max-Consensus estimates (update law~\eqref{s.ex.maxconsensus}) in the setting of Scenario 1 (cf. Figure~\ref{Fig.ex1.sim}). In abscissa: iteration variable $t$.\vspace{-.25cm}} \label{Fig.maxc1} \end{figure} For the sake of comparison, Figure~\ref{Fig.maxc1} shows a simulation in which the Max-Consensus~\eqref{s.ex.maxconsensus} is employed in the same setting. As shown in Figure~\ref{Fig.maxc1}, although it exhibits faster convergence for the first two changes of $\M\sr$, the Max-Consensus fails to track the subsequent changes. As illustrated in Section~\ref{sec.intro}, this is due to the fact that it is not globally attractive. \subsection{Scenario 2: Non-Uniform Convergence} \begin{figure*} \includegraphics[width=\linewidth,trim=4em .5em 4em 2em,clip]{ex2_sim} \vspace*{-.7cm} \caption{Evolution of the estimates $x_i$ in Scenario 2.
The trajectory of the optimal value $\M\sr$ is shown as a dashed gray line. Dark to light orange lines depict the trajectories of the estimates $x_i$, $i=1,\dots,4$, of the first network. Dark to light blue lines depict the trajectories of the estimates $x_{i'}$, $i'=1',\dots,4'$, of the second network. In abscissa: iteration variable $t$. \vspace{-.45cm}} \label{Fig.ex2.sim} \end{figure*} \begin{figure}[h] \vspace*{-.1cm} \centering \includegraphics[width=\linewidth,trim=4em 0em 3em 0em,clip]{maxc_2} \vspace*{-.7cm} \caption{Evolution of the Max-Consensus estimates (update law~\eqref{s.ex.maxconsensus}) in the setting of Scenario 2 (cf. Figure~\ref{Fig.ex2.sim}). In abscissa: iteration variable $t$.\vspace{-.5cm}} \label{Fig.maxc2} \end{figure} In the second scenario, we compare two simple networks having the same data and communication structures, but different signals $h_i$. The first network, $\mathcal N$, includes Agents $1$, $2$, $3$ and $4$, and it is given the communication structure depicted in Figure \ref{Fig.ex1.top}-(a). Initially, the agents are given numbers $({\rm M}_1,{\rm M}_2,{\rm M}_3,{\rm M}_4)=(3,6,9,15)$, so that $\M\sr={\rm M}_1=3$. At time $t=500$, ${\rm M}_1$ is changed to $15$, so that $\M\sr={\rm M}_2=6$. At time $t=20000$, ${\rm M}_2$ is changed to $15$, so that $\M\sr={\rm M}_3=9$. At time $t=35000$, ${\rm M}_3$ is changed to $12$, so that $\M\sr={\rm M}_3=12$. Finally, at time $t=150000$, ${\rm M}_3$ is changed to $15$, so that $\M\sr={\rm M}_1={\rm M}_2={\rm M}_3={\rm M}_4=15$. The update laws are implemented with $(k_1,k_2,k_3,k_4) = (0.1,0.08,0.05,0.09)$, $\mu_1=\mu_2=\mu_3=\mu_4=1/2$, and with a family $(h_i)_{i\in\mathcal N}$ of uniformly exciting signals defined as square waves with parameters $(T_1,A_1,\rho_1)=(15,10^{-3},0.2)$, $(T_2,A_2,\rho_2)=(10,5\cdot10^{-4},0.5)$, $(T_3,A_3,\rho_3)=(5, 10^{-3},0.3)$, $(T_4,A_4,\rho_4)=(10,5\cdot10^{-4},0.5)$.
The second network, $\mathcal N'$, includes Agents $1'$, $2'$, $3'$ and $4'$ and has the same communication structure and data as~$\mathcal N$. The update laws have the same parameters $k_{i'}=k_i$ and $\mu_{i'}=\mu_i$, $i\in\mathcal N$, except for the family $(h_{i'})_{i'\in\mathcal N'}$, which is given by $h_{i'}^t = (1+t)^{-1}$ for all $i'\in\mathcal N'$. The signals $h_{i'}$ satisfy \eqref{e.sum_hi}-\eqref{e.sum_hi2} and, thus, $(h_{i'})_{i'\in\mathcal N'}$ is sufficiently exciting. However, it fails to be uniformly exciting. The simulation shown in Figure \ref{Fig.ex2.sim} compares the time behavior of the estimates $x_i$, $i\in\mathcal N$, and $x_{i'}$, $i'\in\mathcal N'$. As shown in the figure, each ``step'' of $\M\sr$ is followed by the estimates $x_i$ with the same convergence rate. On the contrary, ${\M\sr}'=\M\sr$ is followed by the estimates $x_{i'}$ with a convergence rate which degrades in time. This is due to the fact that the family $(h_i)_{i\in\mathcal N}$ is uniformly exciting, while the family $(h_{i'})_{i'\in\mathcal N'}$ is only sufficiently exciting. Thus, uniformity of convergence is not guaranteed for the estimates $x_{i'}$. Nevertheless, the zoomed part of the plot clearly shows that the estimates $x_{i'}$ reach ${\M\sr}'$ with higher precision (by Claim 5 of Theorem~\ref{thm.main}, indeed, since $h_{i'}^t\to 0$, the convergence of the estimates $x_{i'}$ is asymptotically exact if ${\M\sr}'$ remains constant), whereas the estimates $x_i$ exhibit a non-zero residual error. The above simulations underline the necessary compromise, already mentioned in different parts of the paper and formally characterized by Claims 3 and 5 of Theorem \ref{thm.main}, between \emph{exact convergence} and \emph{uniformity in time}, which characterizes the proposed methodology. Finally, Figure~\ref{Fig.maxc2} shows a simulation of the Max-Consensus \eqref{s.ex.maxconsensus} in the same setting (cf.
Figure~\ref{Fig.ex2.sim}). Again, the Max-Consensus fails to track the time-varying $\M\sr$. To see why this is the case, consider for instance the change of value of ${\rm M}_1$ at $t=500$. This determines an increment of $\M\sr$, bringing the Max-Consensus algorithm into a situation in which \eqref{s.ex.init2} holds at $t_0=500$. Hence, as explained in Section~\ref{sec.intro}, $x^{500}$ falls outside the domain of attraction of the new $\M\sr$, and thus convergence fails. \section{Concluding Remarks}\label{sec:concl} As detailed in the proof of the main result (Section~\ref{sec.proof}) and shown in the numerical simulations, the proposed solution is characterized by a necessary compromise between convergence rate and asymptotic error, as both are determined in the worst case by the signals $h_i$. In particular, if $(h_i)_{i\in\mathcal N}$ is uniformly exciting, uniform convergence is guaranteed, but the estimates will have a non-zero steady-state error. We stress that this residual error can be reduced arbitrarily by reducing the maximum value of the signals $h_i$ accordingly. But we also remark that, in general, this results in a reduction of the convergence rate. Larger values of the signals $h_i$ are associated instead with faster convergence, but lead to larger steady-state errors. Moreover, in the limit case in which $h_i^t\to 0$ for all $i\in\mathcal N$, asymptotic convergence is obtained whenever $(h_i)_{i\in\mathcal N}$ is sufficiently exciting. The convergence rate, however, is not uniformly lower-bounded, and thus uniformity is lost. Clearly, ``smart'' choices of the signals $h_i$ are possible by adapting their values at run time, increasing them when fast convergence is needed and decreasing them when, instead, a low residual error is desired. ``Adaptive'' design choices of this kind will be the subject of future research.
We proved all the properties of the proposed solution under the assumption that the communication structure and the parameters remain constant during the execution. Although uniform global asymptotic stability already guarantees a good behavior for ``slowly varying'' structures (as also shown by the numerical simulations), additional work is needed to extend the analysis to handle time-varying networks with communication delays and noise. This extension, in turn, calls for a stochastic framework in which the aleatory nature of those phenomena is fully captured, and is the subject of future research. \section{Proof of Theorem \ref{thm.main}} \label{sec.proof} \subsection{Proof of Claim 1}\label{sec.proof.1} In this subsection we prove Claim 1. In particular, we show that if the family $(h_i)_{i\in\mathcal N}$ is sufficiently exciting from some $t_0\in\mathbb N$, then there exists $t^\star=t^\star(t_0)>t_0$ such that, for each $i\in\mathcal N$, $x_i^t\ge \M\sr$ holds for all $t\ge t^\star$ and, for each $i\in I^\star$, $x_i^t= \M\sr$ holds for all $t\ge t^\star$. Define the function $\underline{i}:\mathbb R^N\to\mathcal N$, $x\mapsto \underline{i}(x):= \argmin_{i\in\mathcal N} x_i$. Then, $x_j\ge x_{\underline{i}(x)}$ holds for all $j\in\mathcal N$. Moreover, $h_i^t\ge 0$ and \eqref{inq.ki_1} imply ${\rm e}^{h_i^t}-\#([i]\setminus i)k_i \ge 0$ for all $i\in\mathcal N$.
Since $\projOp{[\mu_i,{\rm M}_i]}$ is increasing, we have \begin{equation}\label{pf.inq.xplus_1} \begin{aligned} x_i^{t+1} &= \proj{[\mu_i,{\rm M}_i]}{\left({\rm e}^{h_i^t}-\#([i]\setminus i)k_i\right) x_i^t + k_i\sum_{j\in[i]\setminus i} x_j^t }\\ &\ge \projOp{[\mu_i,{\rm M}_i]}\bigg[\left({\rm e}^{h_i^t}-\#([i]\setminus i) k_i\right) x_{\underline i(x^t)}^t \\&\hspace{4em}+ \#([i]\setminus i) k_i x_{\underline i(x^t)}^t \bigg]\\ &= \proj{[\mu_i,{\rm M}_i]}{{\rm e}^{h_i^t}x_{\underline i(x^t)}^t } \\ &= \max\left\{ \mu_i, \min\left\{ {\rm e}^{h_i^t}x_{\underline i(x^t)}^t,\, {\rm M}_i \right\} \right\}\\ &\ge \min\left\{ {\rm e}^{h_i^t}x_{\underline i(x^t)}^t,\, {\rm M}_i \right\} \ge \min\left\{ {\rm e}^{h_i^t}x_{\underline i(x^t)}^t,\, \M\sr \right\} \end{aligned} \end{equation} for all $t\ge t_0$ and all $i\in\mathcal N$. First, notice that, if for some $\bar t\in\mathbb N$, $x_{\underline i(x^{\bar t})}^{\bar t}\ge \M\sr$, then \eqref{pf.inq.xplus_1} implies $x_{\underline i(x^{\bar t+1})}^{\bar t+1}\ge \M\sr$, so that by induction it is possible to conclude that $x_i^t \ge \M\sr$ holds for all $t\ge\bar t$. Namely, the claim holds with $t^\star=\bar t$. It thus suffices to show that such a $\bar t$ exists. In doing so, we proceed by contradiction. We first assume that \begin{equation}\label{pf.e.xontr} x_{\underline i(x^{t})}^{t}< \M\sr, \qquad \forall t\ge t_0. \end{equation} Then, we show that, if the signals $h_i$ are sufficiently exciting from $t_0$ (in the sense of Definition \ref{d.SE}), then \eqref{pf.e.xontr} leads to a contradiction, in this way proving the claim. Thus, assume that \eqref{pf.e.xontr} holds. Then, since $h_i^t\ge 0$ for all $i\in\mathcal N$, \eqref{pf.inq.xplus_1} yields \begin{equation}\label{pf.in.xplus_2} x_i^{t+s}\ge {\rm e}^{h_i^t}x_{\underline i(x^t)}^t,\qquad\forall t \ge t_0, \ s\ge 1.
\end{equation} Suppose that the signals $h_i$ are sufficiently exciting from $t_0$, for some parameters $\underline h(t_0)$ and $\Delta(t_0)$. Then, for each $i\in\mathcal N$, there exists $s_i\in\{t_0+1,\,\dots,\,t_0+\Delta(t_0)\}$ such that $h_i^{s_i}\ge \underline h(t_0)$. In view of \eqref{pf.in.xplus_2}, this yields \begin{equation*} x^{t_0+1+\Delta(t_0)}_i \ge {\rm e}^{\underline h(t_0)}\, x_{\underline i(x^{t_0+1})}^{t_0+1},\qquad\forall i\in\mathcal N , \end{equation*} and thus, in particular, \begin{equation*} x^{t_0+1+\Delta(t_0)}_{\underline i\big(x^{t_0+1+\Delta(t_0)}\big)} \ge {\rm e}^{\underline h(t_0)}\, x_{\underline i(x^{t_0+1})}^{t_0+1} . \end{equation*} In the same way, in view of sufficiency of excitation of the signals $h_i$, for each $i\in\mathcal N$, there exists $s_i\in\{t_0+1+\Delta(t_0),\,\dots,\,t_0+2\Delta(t_0)\}$ such that $h_i^{s_i}\ge \underline h(t_0)$. Then, in view of \eqref{pf.in.xplus_2}, one has \begin{align*} x^{t_0+1+2\Delta(t_0)}_{\underline i\big(x^{t_0+1+2\Delta(t_0)}\big)} \ge {\rm e}^{\underline h(t_0)}\, x^{t_0+1+\Delta(t_0)}_{\underline i\big(x^{t_0+1+\Delta(t_0)}\big)} \ge {\rm e}^{2\underline h(t_0)}\, x_{\underline i(x^{t_0+1})}^{t_0+1}. \end{align*} By repeating the same arguments, it is thus possible to conclude that, for each $m\in\mathbb N$ satisfying \eqref{e.SE.m}, one has \begin{equation}\label{pf.in.xplus_3} x^{t_0+1+m\Delta(t_0)}_{\underline i\big(x^{t_0+1+m\Delta(t_0)}\big)} \ge {\rm e}^{m\underline h(t_0)}\, x_{\underline i(x^{t_0+1})}^{t_0+1} \ge {\rm e}^{m\underline h(t_0)} \und{\lb} , \end{equation} in which we used the fact that, by definition of $\projOp{[\mu_i,{\rm M}_i]}$, $x_i^t\ge \mu_i\ge\und{\lb}$ for all $i\in\mathcal N$ and all $t\ge t_0+1$. The latter relation holds in particular for \begin{equation*} m^\star(t_0) = \dfrac{1}{\underline h(t_0)}\log\left( \dfrac{\M\sr}{\und{\lb}} \right) .
\end{equation*} Then, with $\bar t:= t_0+1+m^\star(t_0)\Delta(t_0)$, from \eqref{pf.in.xplus_3} we obtain \begin{align*} x_i^{\bar t} \ge x^{\bar t}_{\underline i(x^{\bar t})} \ge {\rm e}^{m^\star(t_0)\underline h(t_0)} \und{\lb} = \M\sr ,\qquad\forall i\in\mathcal N \end{align*} which contradicts \eqref{pf.e.xontr} and, thus, proves that $x_i^t \ge \M\sr$ holds for all $i\in\mathcal N$ and all $t\ge t^\star:=\bar t$. Finally, for all $i\in I^\star$, the projection in \eqref{s.xi} gives $x_i^t\le {\rm M}_i=\M\sr$ for all $t\ge t_0+1$, and this, together with the bound $x_i^t\ge \M\sr$ above, implies $x_i^t=\M\sr$ for all $i\in I^\star$ and $t\ge t^\star$. \subsection{Proof of Claim 2}\label{sec.proof.2} Since by Claim 1 each $x_i$ satisfies $x_i^t\ge \M\sr$ for all $t\ge t^\star$, in view of Assumption \ref{ass.M_eps} each $x_i$ also satisfies $x_i^t\ge \mu_i$ for all $t\ge t^\star$, and the lower projection bound in \eqref{s.xi} is therefore inactive. This, in turn, allows us to write \begin{equation*} x_i^{t+1} = \min\left\{ {\rm M}_i,\ {\rm e}^{h_i^t} x_i^t + k_i \sum_{j\in[i]} \big(x_j^t-x_i^t\big) \right\} \end{equation*} for all $i\in\mathcal N$ and all $t\ge t^\star$, which implies both \begin{equation}\label{pf.e.xi_le_Mi} x_i^{t} \le {\rm M}_i \end{equation} and \begin{equation}\label{pf.s.xi_le} x_i^{t+1} \le {\rm e}^{h_i^t} x_i^t + k_i \sum_{j\in[i]} \big(x_j^t-x_i^t\big) \end{equation} for all $i\in\mathcal N$ and all $t\ge t^\star$. From \eqref{pf.e.xi_le_Mi} we also obtain \begin{equation}\label{pf.ineq.limsup_1} \limsup_{t\to\infty} |x_i^t| \le {\rm M}_i <\infty,\qquad \forall i\in\mathcal N. \end{equation} In the following we rely on the forthcoming lemma, whose proof is postponed to \ref{apd.Lemma_limsup}. \begin{lemma}\label{lem.limsup} With $n\in\mathbb N$, let $x,\,y:\mathbb N\to \mathbb R^n$.
Suppose that $y$ is {bounded} and that, for some $t_0\in\mathbb N$, some $\nu\in[0,1)$, and some $\lambda:\mathbb N\to\R_{\ge 0}$ fulfilling $\lambda^t \le \nu$ for all $t\ge t_0$, $x$ and $y$ satisfy \begin{equation}\label{e.lem.xy} x^{t+1} \le \lambda^t x^t + y^t \end{equation} for all $t\ge t_0$. Then \begin{equation}\label{pf.s.ls_xi_le} \limsup_{t\to\infty}|x^t|\le \dfrac{1}{1-\limsup_{t\to\infty}\lambda^t}\limsup_{t\to\infty}|y^t|. \end{equation} \end{lemma} With $I^\star$ defined in \eqref{d.Isr}, let $n^\star$ be the least integer such that $[I^\star]^{n^\star}=\mathcal N$ (which exists and is finite in view of Assumption~\ref{ass.connected}). The case in which $n^\star=0$ (i.e. $I^\star=\mathcal N$) directly follows from Claim 1. Hence, we consider $n^\star>0$. Assume that, for some $m\in\{0,\dots, n^\star-1 \}$, there exist $\alpha_m\in[0,1)$ and $\beta_m>0$ such that\footnote{Here we let $[I^\star]^{-1}:=\emptyset$.} \begin{equation}\label{pf.in.induct0} \begin{aligned} &\max_{i\in [I^\star]^{m}_{m-1}}\limsup_{t\to\infty}|x_i^t| \le \alpha_m \max_{j\in [I^\star]^{m+1}_{m}} \limsup_{t\to\infty}|x_j^t| + \beta_m \M\sr . \end{aligned} \end{equation} We will now prove that, if this is the case, then a similar property also holds for $m+1$. First notice that, for each $i\in [I^\star]^{m+1}_{m}$, every $j\in[i]$ belongs to exactly one among the sets $[I^\star]^{m+2}_{m+1}$, $[I^\star]^{m+1}_{m}$, and $[I^\star]^{m}_{m-1}$. Hence, in view of \eqref{pf.s.xi_le}, we can write \begin{equation}\label{pf.in.xi_1} \begin{aligned} x_i^{t+1} & \le\big({\rm e}^{h_i^t} - k_i \#([i]\setminus i)\big)x_i^t + k_i\sum_{j\in [i]\cap [I^\star]^{m} } x_j^t \\&\qquad + k_i\sum_{j\in ([i]\setminus i)\cap [I^\star]^{m+1}_{m}} x_j^t + k_i\sum_{j\in [i]\cap [I^\star]^{m+2}_{m+1}} x_j^t \end{aligned} \end{equation} for all $i\in[I^\star]^{m+1}_{m}$ and all $t\ge t^\star$, in which we used the fact that $[i]\cap [I^\star]^{m}_{m-1} = [i]\cap [I^\star]^{m}$ for all $i\in[I^\star]^{m+1}_{m}$.
If \eqref{inq.ki_1} holds, then $1+k_i \#([i]\setminus i)>1$. With $\nu_1>0$ sufficiently small so that $\log(1+k_i \#([i]\setminus i))-2\nu_1> 0$, let \begin{equation*} \bar h_{i,1} := \log(1+k_i \#([i]\setminus i)) -2\nu_1. \end{equation*} If $\limsup_{t\to\infty} h_i^t \le \bar h_{i,1}$ for all $i\in\mathcal N$, then there exists $T^\star>t^\star$ such that \begin{equation}\label{pf.in.barh} h_i^t \le \bar h_{i,1}+\nu_1 = \log(1+k_i \#([i]\setminus i)) - \nu_1 \end{equation} for all $t\ge T^\star$ and all $i\in\mathcal N$. Thus, \eqref{inq.ki_1} and \eqref{pf.in.barh} imply \begin{equation*} 0\le {\rm e}^{h_i^t} - k_i \#([i]\setminus i) \le {\rm e}^{\bar h_{i,1}+\nu_1}- k_i \#([i]\setminus i) < 1, \end{equation*} for all $t\ge T^\star$ and all $i\in\mathcal N$, so that \eqref{pf.ineq.limsup_1}, \eqref{pf.in.xi_1} and Lemma~\ref{lem.limsup} imply \begin{equation}\label{pf.in.xi_2} \begin{aligned} \limsup_{t\to\infty} |x_i^{t}| & \le \gamma_{i} \sum_{j\in [i]\cap [I^\star]^{m}} \limsup_{t\to\infty}|x_j^t| \\ &\qquad + \gamma_{i}\sum_{j\in ([i]\setminus i)\cap [I^\star]^{m+1}_{m}} \limsup_{t\to\infty}|x_j^t| \\&\qquad +\gamma_{i}\sum_{j\in [i]\cap [I^\star]^{m+2}_{m+1}} \limsup_{t\to\infty}|x_j^t| \end{aligned} \end{equation} for all $i\in [I^\star]^{m+1}_m$, in which we let \begin{equation}\label{pf.d.gammai} \gamma_i := \dfrac{k_i}{1- \limsup_{t\to\infty}\, \big({\rm e}^{h_i^t} - k_i \#([i]\setminus i)\big)} \end{equation} which exists and is finite in view of Lemma~\ref{lem.limsup}. In view of \eqref{pf.in.induct0}, equation \eqref{pf.in.xi_2} implies \begin{equation}\label{pf.in.xi_3} \begin{aligned} \limsup_{t\to\infty} |x_i^{t}| & \le \big( c_{i,1} \alpha_m +c_{i,2}\big) \max_{j\in [I^\star]^{m+1}_{m}} \limsup_{t\to\infty}|x_j^t|\\ &\qquad+ c_{i,3} \max_{j\in [I^\star]^{m+2}_{m+1}} \limsup_{t\to\infty}|x_j^t|+ c_{i,1} \beta_m {\rm M}^\star.
\end{aligned} \end{equation} for all $i\in [I^\star]^{m+1}_m$, where, for convenience, we let \begin{equation}\label{pf.d.ci} \begin{aligned} c_{i,1} &:= \gamma_i \#\left( [i] \cap [I^\star]^{m}\right) \\ c_{i,2} &:= \gamma_i \#\left(([i]\setminus i)\cap [I^\star]^{m+1}_{m}\right) \\ c_{i,3} &:= \gamma_i \#\left([i]\cap [I^\star]^{m+2}_{m+1}\right) . \end{aligned} \end{equation} With $\nu_2>0$ sufficiently small so that $k_i(1-\alpha_m)- \nu_2>0$ for all $i\in\mathcal N$ (recall that $\alpha_m<1$ by assumption), define \begin{equation*} \bar h_i := \min\Big\{ \bar h_{i,1},\ \log\big(1+k_i(1-\alpha_m)- \nu_2\big) \Big\}. \end{equation*} If \begin{equation}\label{pf.in.ls_hi} \limsup_{t\to\infty} h_i^t \le \bar h_i \end{equation} for all $i\in [I^\star]^{m+1}_m$, then, since $\#([i]\cap [I^\star]^{m} )\ge 1$, it holds that \begin{equation}\label{pf.in.hsr_0} \begin{aligned} 1-{\rm e}^{\limsup_{t\to\infty} h_i^t} & \ge 1-{\rm e}^{\bar h_i} \ge -k_i(1-\alpha_m)+\nu_2 \\ &\ge - k_i(1-\alpha_m)\#([i]\cap [I^\star]^{m}) + \nu_2 \end{aligned} \end{equation} for all $i\in [I^\star]^{m+1}_m$. Since, for all $i\in [I^\star]^{m+1}_m$, \begin{align*} &\#\big(([i]\setminus i)\cap [I^\star]^{m+1}_{m}\big)\\ &= \#\left( [i]\setminus i\right) -\#\left( [i] \cap [I^\star]^{m}\right) - \#\left([i]\cap [I^\star]^{m+2}_{m+1}\right) \\ &\le \#\left( [i]\setminus i\right) -\#\left( [i] \cap [I^\star]^{m}\right), \end{align*} we conclude that \begin{equation}\label{pf.in.cis} \begin{aligned} c_{i,1}& \alpha_m + c_{i,2} \\ &\le \dfrac{ k_i(\alpha_m-1) \#\left( [i] \cap [I^\star]^{m}\right)+k_i \#\left( [i]\setminus i\right)}{1-{\rm e}^{\bar h_i}+ k_i \#([i]\setminus i) }\\ &\le \dfrac{(\alpha_m-1) k_i \#\left( [i] \cap [I^\star]^{m}\right) +k_i \#([i]\setminus i)}{(\alpha_m-1) k_i \#\left( [i] \cap [I^\star]^{m}\right) +k_i \#([i]\setminus i)+\nu_2 }\\&<1. \end{aligned} \end{equation} for all $i\in [I^\star]^{m+1}_m$.
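As an illustrative numerical spot-check of the chain of bounds in \eqref{pf.in.cis} (the counts $\rho_1,\rho_2,\rho_3$, the gain $k$, and the constants $\alpha_m$, $\nu_2$ below are hypothetical), one can verify the ordering of the three quantities directly:

```python
import math

# Spot-check of (pf.in.cis): with rho1 = #([i] cap [I*]^m),
# rho2 = #(([i]\i) cap [I*]^{m+1}_m), rho3 = #([i] cap [I*]^{m+2}_{m+1}),
# the quantity c1*alpha_m + c2 is dominated by the two displayed bounds,
# the last of which is strictly below 1.
k, rho1, rho2, rho3 = 0.3, 2, 1, 3
n = rho1 + rho2 + rho3                 # #([i] \ i)
alpha_m, nu2 = 0.5, 0.01
bar_h = math.log(1.0 + k * (1.0 - alpha_m) - nu2)   # second branch of bar_h_i
gamma = k / (1.0 - math.exp(bar_h) + k * n)         # gamma_i of (pf.d.gammai)
c1, c2 = gamma * rho1, gamma * rho2

lhs = c1 * alpha_m + c2
mid = (k * (alpha_m - 1.0) * rho1 + k * n) / (1.0 - math.exp(bar_h) + k * n)
top = (k * (alpha_m - 1.0) * rho1 + k * n) / (k * (alpha_m - 1.0) * rho1 + k * n + nu2)
assert lhs <= mid <= top < 1.0
```

The second inequality in the chain rests on $\#([i]\cap[I^\star]^m)\ge 1$, exactly as in the proof; with $\rho_1\ge 1$ the check passes with strict inequality.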
Now, since \eqref{pf.in.xi_3} holds for each $i\in [I^\star]^{m+1}_m$, it in particular holds for $\bar i$ satisfying \begin{equation}\label{pf.d.bari} \bar i\in \argmax_{i\in [I^\star]^{m+1}_m} \limsup_{t\to\infty}|x_i^t|, \end{equation} so that \eqref{pf.in.xi_3} implies \begin{equation*} \begin{aligned} \max_{i\in [I^\star]^{m+1}_{m}}& \limsup_{t\to\infty}|x_i^t| \le ( c_{\bar i,1} \alpha_m +c_{\bar i,2} ) \max_{i\in [I^\star]^{m+1}_{m}} \limsup_{t\to\infty}|x_i^t|\\ &\qquad\qquad+ c_{\bar i,3} \max_{j\in [I^\star]^{m+2}_{m+1}} \limsup_{t\to\infty}|x_j^t| + c_{\bar i,1}\beta_m {\rm M}^\star \end{aligned} \end{equation*} which, in view of \eqref{pf.in.cis}, yields \begin{equation}\label{pf.in.induct1} \begin{aligned} \max_{i\in [I^\star]^{m+1}_{m}} \limsup_{t\to\infty}|x_i^t| &\le \alpha_{m+1} \max_{j\in [I^\star]^{m+2}_{m+1}} \limsup_{t\to\infty}|x_j^t| \\&\qquad + \beta_{m+1} {\rm M}^\star \end{aligned} \end{equation} with \begin{equation}\label{pf.d.a_b} \begin{aligned} \alpha_{m+1} &= \dfrac{c_{\bar i,3}}{1-\big( c_{\bar i,1} \alpha_m +c_{\bar i,2}\big)}, \\ \beta_{m+1} &=\dfrac{c_{\bar i,1}}{1-\big( c_{\bar i,1} \alpha_m +c_{\bar i,2}\big)} \beta_m . \end{aligned} \end{equation} Furthermore, since $\limsup_{t\to\infty} h_i^t\le \bar h_i$, in view of \eqref{pf.in.hsr_0}, $\alpha_{m+1}$ satisfies \begin{align*} \alpha_{m+1} &\le \dfrac{k_{\bar i} \# \big( [{\bar i}]\cap [I^\star]^{m+2}_{m+1} \big) }{ k_{\bar i}\#([{\bar i}]\setminus {\bar i})- k_{\bar i} \#(([{\bar i}]\setminus {\bar i})\cap[I^\star]^{m+1})+\nu_2 }\\ &\le \dfrac{k_{\bar i} \# \big( [{\bar i}]\cap [I^\star]^{m+2}_{m+1} \big) }{ k_{\bar i} \# \big( [{\bar i}]\cap [I^\star]^{m+2}_{m+1} \big)+\nu_2 } <1. \end{align*} Therefore, we have shown that if \eqref{pf.in.induct0} holds for some $m\in\{0,\dots, n^\star-1\}$ with $\alpha_m<1$ and $\beta_m\ge 0$, then \eqref{pf.in.induct1} holds for $m+1$ as well, with $\alpha_{m+1}<1$ and $\beta_{m+1}\ge 0$ given above.
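The induction step just established can also be exercised numerically. The sketch below iterates the recursion \eqref{pf.d.a_b} with hypothetical neighbourhood counts $\rho_1,\rho_2,\rho_3$ (so that $\#([\bar i]\setminus\bar i)=\rho_1+\rho_2+\rho_3$), gain $k$, and a small residual excitation level $\upsilon$, checking at each step that the hypothesis $\upsilon\le\log(1+k(1-\alpha_m))$ holds and that the contraction factor stays below $1$:

```python
import math

# Iterate alpha_{m+1} = c3 / (1 - (c1*alpha_m + c2)) and
# beta_{m+1} = c1 / (1 - (c1*alpha_m + c2)) * beta_m, with
# c_j = gamma * rho_j and gamma as in (pf.d.gammai); all numerical
# values are hypothetical and purely illustrative.
def next_alpha_beta(alpha_m, beta_m, k, rho1, rho2, rho3, upsilon):
    n = rho1 + rho2 + rho3                       # #([i] \ i)
    gamma = k / (1.0 - math.exp(upsilon) + k * n)
    c1, c2, c3 = gamma * rho1, gamma * rho2, gamma * rho3
    denom = 1.0 - (c1 * alpha_m + c2)
    return c3 / denom, c1 / denom * beta_m

alpha, beta = 0.0, 1.0                           # base case m = 0 (Claim 1)
k, rho1, rho2, rho3 = 0.3, 2, 1, 3
upsilon = 0.01                                   # small residual excitation
for _ in range(4):
    assert upsilon <= math.log(1.0 + k * (1.0 - alpha))  # hypothesis on h_i
    alpha, beta = next_alpha_beta(alpha, beta, k, rho1, rho2, rho3, upsilon)
    assert 0.0 <= alpha < 1.0 and beta > 0.0     # contraction preserved
```

Note that, as in the proof, the bound $\bar h_i$ shrinks as $\alpha_m$ grows, which is why the hypothesis on $\upsilon$ is re-checked at every step.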
Since, by Claim 1, \eqref{pf.in.induct0} trivially holds for $m=0$ with $\beta_0=1$ and $\alpha_0=0$, we conclude by induction that, if \begin{equation}\label{pf.d.barh} \limsup_{t\to\infty} h_i^t \le \bar h:= \min_{i\in\mathcal N} \bar h_i,\qquad \forall i\in\mathcal N, \end{equation} then \eqref{pf.in.induct0} holds for each $m\in\{0,\dots,n^\star\}$. Now, for $m=n^\star$, we have $[I^\star]^{m+1}\setminus [I^\star]^m = \emptyset$, so that \eqref{pf.in.induct0} yields \begin{equation*} \limsup_{t\to\infty} x_i^t \le \beta_{n^\star}{\rm M}^\star , \qquad \forall i\in [I^\star]^{n^\star}_{n^\star-1}. \end{equation*} Thus, iterating \eqref{pf.in.induct0} backwards and using \eqref{pf.e.xi_le_Mi} yields \begin{equation}\label{ps.LS_xi} \limsup_{t\to\infty}x_i^t \le \min\Big\{ {\rm M}_i,\ (1+\varepsilon_i){\rm M}^\star \Big\} \end{equation} where \begin{equation*} \varepsilon_i = 0,\qquad \forall i\in I^\star \end{equation*} and \begin{equation}\label{pf.d.vep_i} \varepsilon_i = \sum_{\ell=0}^{n^\star-m} \left( \prod_{k=\ell+1}^{n^\star-m} \alpha_{n^\star-k} \right) \beta_{n^\star-\ell} -1, \end{equation} for all $i\in [I^\star]^{m}_{m-1}$ and all $m=1,\dots,n^\star$. Moreover, \eqref{pf.d.vep_i} directly implies that the quantities $\varepsilon_i$ also satisfy \begin{equation}\label{pf.d.vep_i_2} \max_{i\in[I^\star]^m_{m-1}} \varepsilon_i = \alpha_m \left(1+\max_{i\in[I^\star]^{m+1}_{m}}\varepsilon_i\right) + \beta_m -1 \end{equation} for all $m=1,\dots,n^\star$. We now prove that $\varepsilon_i$ in \eqref{ps.LS_xi}-\eqref{pf.d.vep_i} can be reduced arbitrarily by reducing $\limsup_{t\to\infty} h_i^t$ accordingly for each $i\in\mathcal N$. For convenience, let \begin{equation}\label{pf.d.upsilon} \upsilon_i := \limsup_{t\to\infty} h_i^t \in [0,\bar h_i].
\end{equation} Then, the quantities $\gamma_i$, defined in \eqref{pf.d.gammai}, satisfy \begin{equation*} \gamma_i(\upsilon_i) = \dfrac{k_i}{1-{\rm e}^{\upsilon_i} + k_i\#([i]\setminus i)}. \end{equation*} Thus, $\gamma_i$ is continuous on $[0,\bar h_{i,1}]$, and \begin{equation*} \lim_{\upsilon_i\to 0} \gamma_i(\upsilon_i) = \dfrac{1}{\#([i]\setminus i)}. \end{equation*} In view of the definitions \eqref{pf.d.ci}, the quantities $\alpha_m$ and $\beta_m$ defined in \eqref{pf.d.a_b} also depend on $\upsilon_{\bar i}$ through $\gamma_{\bar i}$, where $\bar i$ satisfies \eqref{pf.d.bari}. We now prove by induction that, by letting $\upsilon:=(\upsilon_1,\dots,\upsilon_N)$, the following holds \begin{equation}\label{pf.e.lim_ab_1} \lim_{\upsilon\to 0} \alpha_m(\upsilon)+\beta_m(\upsilon) = 1,\qquad\forall m=0,\dots,n^\star. \end{equation} First notice that \eqref{pf.e.lim_ab_1} trivially holds for $m=0$, as indeed $\alpha_0=0$ and $\beta_0=1$ regardless of the value of $\upsilon$. It thus suffices to show that if \eqref{pf.e.lim_ab_1} holds for a given $m\in\{0,\dots,n^\star-1\}$, then the same relation holds for $m+1$ as well. Indeed, assume that \eqref{pf.e.lim_ab_1} holds for a given $m\in\{0,\dots,n^\star-1\}$. Then, we can write $\lim_{\upsilon\to 0} \beta_m(\upsilon)=1-\lim_{\upsilon\to 0}\alpha_m(\upsilon)$.
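As an aside, the consistency between the closed form \eqref{pf.d.vep_i} and the recursion \eqref{pf.d.vep_i_2} can be cross-checked numerically for arbitrary coefficients $\alpha_m\in[0,1)$ and $\beta_m>0$ (randomly generated here, purely for illustration):

```python
import random

# Check that the closed form (pf.d.vep_i),
#   eps(m) = sum_{l=0}^{n-m} ( prod_{k=l+1}^{n-m} alpha[n-k] ) * beta[n-l] - 1,
# satisfies the recursion (pf.d.vep_i_2),
#   eps(m) = alpha[m] * (1 + eps(m+1)) + beta[m] - 1.
def eps(m, n, alpha, beta):
    total = 0.0
    for l in range(n - m + 1):
        prod = 1.0
        for k in range(l + 1, n - m + 1):
            prod *= alpha[n - k]
        total += prod * beta[n - l]
    return total - 1.0

random.seed(0)
n = 5                                             # plays the role of n*
alpha = [random.uniform(0.0, 0.99) for _ in range(n + 1)]
beta = [random.uniform(0.1, 2.0) for _ in range(n + 1)]
for m in range(1, n):
    lhs = eps(m, n, alpha, beta)
    rhs = alpha[m] * (1.0 + eps(m + 1, n, alpha, beta)) + beta[m] - 1.0
    assert abs(lhs - rhs) < 1e-12
```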
Thus, by letting for convenience $\rho_1:=\#([\bar i]\cap[I^\star]^m)$, $\rho_2:=\#(([\bar i]\setminus \bar i)\cap ([I^\star]^{m+1}_m))$, $\rho_3:=\#([\bar i]\cap([I^\star]^{m+2}_{m+1}))$, and noting that $\#([\bar i]\setminus \bar i)-\rho_2 = \rho_1+\rho_3$, we obtain \begin{equation*} \begin{aligned} &\lim_{\upsilon\to 0} \alpha_{m+1}(\upsilon)+ \beta_{m+1}(\upsilon) \\&\quad = \dfrac{\rho_3 + \left(1-\lim_{\upsilon\to 0}\alpha_m(\upsilon)\right) \rho_1}{\#([\bar i]\setminus \bar i) - \lim_{\upsilon\to 0}\alpha_m(\upsilon)\rho_1 - \rho_2}\\&\quad = \dfrac{\rho_3 + \left(1-\lim_{\upsilon\to 0}\alpha_m(\upsilon)\right) \rho_1}{\rho_3 + \left(1-\lim_{\upsilon\to 0}\alpha_m(\upsilon)\right) \rho_1}=1. \end{aligned} \end{equation*} Thus, by induction, we conclude \eqref{pf.e.lim_ab_1} for all $m\in\{0,\dots,n^\star\}$. Since for every $i\in[I^\star]^{n^\star}_{n^\star-1}$, $c_{i,3} = 0$ (indeed $[I^\star]^{n^\star+1}_{n^\star}=\emptyset$), we have $\alpha_{n^\star}=0$. Thus, \begin{equation*} \lim_{\upsilon\to 0}\beta_{n^\star}(\upsilon)=1. \end{equation*} In view of \eqref{pf.d.vep_i}, this implies \begin{equation*} \lim_{\upsilon\to 0}\max_{i\in[I^\star]^{n^\star}_{n^\star-1}} \varepsilon_{i}(\upsilon) = 0. \end{equation*} In view of \eqref{pf.d.vep_i_2}, $\lim_{\upsilon\to 0}\max_{i\in[I^\star]^{m+1}_{m}}\varepsilon_i(\upsilon)=0$ implies \begin{align*} \lim_{\upsilon\to 0} \max_{i\in[I^\star]^{m}_{m-1}}\varepsilon_{i}(\upsilon) = \lim_{\upsilon\to 0} (\alpha_{m}(\upsilon)+\beta_m(\upsilon)) -1 = 0, \end{align*} so that, by induction, we conclude that \begin{equation*} \lim_{\upsilon\to 0} \max_{i\in[I^\star]^{m}_{m-1}}\varepsilon_i(\upsilon) = 0,\quad \forall m\in\{0,\dots,n^\star\}, \end{equation*} i.e. \begin{equation}\label{pf.eq.lim_vep} \lim_{\upsilon\to 0} \varepsilon_i(\upsilon) = 0,\quad \forall i\in\mathcal N.
\end{equation} The latter equation thus implies that, given any $\epsilon>0$, there exists $\delta'(\epsilon)> 0$ such that $|\upsilon|\le \delta'(\epsilon)$ implies ${\rm M}^\star \varepsilon_i\le \epsilon$ for all $i\in\mathcal N$. Therefore, if \begin{equation}\label{pf.inq.ls_hi} \limsup_{t\to\infty}h_i^t \le \delta(\epsilon) := \min\left\{\bar h,\,\dfrac{\delta'(\epsilon)}{N}\right\},\qquad\forall i\in\mathcal N \end{equation} then $|\upsilon|\le \delta'(\epsilon)$, which implies ${\rm M}^\star\varepsilon_i\le \epsilon$. In turn, in view of \eqref{ps.LS_xi}, this implies \begin{equation}\label{ps.LS_xi2} \limsup_{t\to\infty}x_i^t \le \min\Big\{ {\rm M}_i,\ {\rm M}^\star +\epsilon \Big\}. \end{equation} Claim 2 thus follows from \eqref{ps.LS_xi2} and by noticing that Claim~1 implies $\limsup_{t\to\infty} x_i^t\ge {\rm M}^\star$. \subsection{Proof of Claim 3}\label{sec.proof.3} The third claim of the theorem, i.e., that uniformity of excitation (in the sense of Definition \ref{d.PE}) of $(h_i)_{i\in\mathcal N}$ implies uniform attractiveness of $\mathcal A_\epsilon:= \prod_{i\in\mathcal N} \big[{\rm M}^\star,\,\min\{{\rm M}^\star+\epsilon,\,{\rm M}_i\}\big]$, follows directly from the fact that, if the family $(h_i)_{i\in\mathcal N}$ is uniformly exciting, then in the above analysis $t^\star$ does not depend on $t_0$ and, therefore, the convergence \eqref{ps.LS_xi2} is uniform in the initial time. \subsection{Proof of Claim 4} In this subsection we prove the fourth claim of the theorem. With $(\tau_i)_{i\in\mathcal N}\in\mathbb N^N$ arbitrary, let $F_i\in\mathbb R^{\tau_i\times \tau_i}$ and $C_i\in\mathbb R^{1\times \tau_i}$ denote the matrices \begin{align*} F_i &:= \begin{bmatrix} 0_{(\tau_i-1)\times 1 } & I_{(\tau_i-1)\times (\tau_i-1)}\\ 1 & 0_{1\times(\tau_i-1)} \end{bmatrix}, & C_i&:= \begin{bmatrix} 1 & 0_{1\times(\tau_i-1)} \end{bmatrix}.
\end{align*} Then, each $\tau_i$-periodic signal $h_i$ satisfies \begin{equation}\label{pf.s.xxi} \begin{aligned} \xi^{t+1}_i &= F_i\xi^t_i, & h^t_i &= C_i \xi^t_i \end{aligned} \end{equation} for a suitable initial condition $\xi^{t_0}_i\in\mathbb R^{\tau_i}$. Moreover, if all the signals $h_i$ are non-zero, then, by Lemma \ref{lem.PE}, $(h_i)_{i\in\mathcal N}$ is uniformly exciting in the sense of Definition \ref{d.PE} for some $\underline h>0$. For a fixed $\epsilon>0$, let $\delta(\epsilon)$ be defined as above in~\eqref{pf.inq.ls_hi}, and let \begin{align*} \Xi_i :=\Big\{ \xi_i\in\mathbb R^{\tau_i} \,\mid\,\, & \forall j\in\{1,\dots,\tau_i\},\, \xi_{i,j}\in[0,\delta(\epsilon)],\text{ and }\\ & \exists j\in\{1,\dots, \tau_i\},\, \xi_{i,j}\ge \underline h\Big\}, \end{align*} where $\xi_{i,j}$ denotes the $j$-th component of $\xi_i$. Then, $\Xi_i$ is compact and invariant for \eqref{pf.s.xxi}. We now consider the interconnection between \eqref{pf.s.xxi} and the update laws \eqref{s.xi} for all $i\in\mathcal N$, with the dynamics restricted to the invariant set $Z:=\Xi\times\mathbb R^N$, where $\Xi:=\prod_{i\in\mathcal N} \Xi_i$. We rewrite this interconnection compactly as \begin{equation}\label{pf.s.z} z^{t+1} = \phi(z^t),\qquad z^t\in Z \end{equation} with $\phi$ suitably defined and $z^t:=(\xi^t,x^t)\in\mathbb R^r\times\mathbb R^N$, where $\xi:=(\xi_i)_{i\in\mathcal N}$ and $r:=\sum_{i\in\mathcal N}\tau_i$. Clearly, for every solution $x_a$ to \eqref{s.xi} starting at a given $t_0\in\mathbb N$ and subject to the signals $(h_i)_{i\in\mathcal N}$, there is a solution $z_b=(\xi_b,x_b)$ to \eqref{pf.s.z} starting at $0$ and such that $x_b^t=x_a^{t_0+t}$ for all $t\in\mathbb N$.
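To illustrate the realization \eqref{pf.s.xxi} (a sketch with hypothetical numerical values): $F_i$ cyclically shifts the state, and $C_i$ reads its first component, so the output $h_i$ cycles through the entries of the initial condition $\xi_i^{t_0}$ with period $\tau_i$.

```python
# The state update xi^{t+1} = F_i xi^t moves components 2..tau up by one
# position and wraps the first component to the end; h_i^t = C_i xi^t
# reads the first component.  The output is therefore tau-periodic.
def step(xi):
    return xi[1:] + xi[:1]     # cyclic shift implemented on a plain list

tau = 3
xi = [0.2, 0.0, 0.5]           # one period of the signal h_i (hypothetical)
h = []
for t in range(2 * tau):
    h.append(xi[0])            # h_i^t = C_i xi^t
    xi = step(xi)
assert h == [0.2, 0.0, 0.5, 0.2, 0.0, 0.5]   # tau-periodic output
```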
For each compact $K\subset\Xi\times\mathbb R^N$, let $\mathcal S(K)$ denote the set of solutions to \eqref{pf.s.z} starting at $0$ from $K$ and, for each $t\in\mathbb N$, define the reachable set from $K$ as $\mathcal R^t(K) := \big\{ (\xi^s,x^s)\in\Xi\times \mathbb R^{N}\,\mid\, (\xi,x)\in\mathcal S(K),\, s\ge t \big\}$. In view of the above analysis, and since $\Xi$ is invariant for~\eqref{pf.s.z}, it follows that $\mathcal R^{t}(K)$ is included in $\Xi\times\mathbb R^N$ and bounded uniformly in $K$ and in $t\ge 1$. Thus, the limit set $\Omega(K) := \bigcap_{t\in\mathbb N} \closure{\mathcal R^t(K)}$ (where $\closure{\mathcal R^t(K)}$ denotes the closure of $\mathcal R^t(K)$) is compact, non-empty, and included in $\Xi\times\mathbb R^N$. Moreover, since $\phi$ is continuous by construction, $\Omega(K)$ is also forward invariant, uniformly globally attractive for \eqref{pf.s.z} from $K$ (see e.g. \cite[Proposition 6.26]{Goebel2012}), and it is the smallest set having the above properties. Furthermore, we notice that, by definition of the update laws \eqref{s.xi}, $x_i^t\in[\mu_{i},{\rm M}_i]$ for all $t\ge t_0$ regardless of the initial conditions and of $t_0$, so that we conclude that $\Omega(K_1)=\Omega(K_2)$ for all $K_1,K_2$ that are supersets of $K^\star:=\prod_{i\in\mathcal N} [\mu_{i},{\rm M}_i]$. In the following we let $\Omega:=\Omega(K^\star)$. As $(h_i)_{i\in\mathcal N}$ is uniformly exciting, by Claim 3 the convergence \eqref{ps.LS_xi2} holds uniformly in the initial time. By the properties of $\Omega$, this implies that $\Omega\subset \Xi\times\mathcal A_\epsilon$, and the projection $\mathcal A_\epsilon^u := \big\{ x\in\mathbb R^N\,\mid\, (\xi,x)\in \Omega \text{ for some } \xi\in\Xi \big\}$ satisfies $\mathcal A_\epsilon^u\subset \mathcal A_\epsilon$. Therefore, it remains to show that $\mathcal A_\epsilon^u$ is stable for $x$, i.e.
that for each $\ell>0$, there exists $b(\ell)>0$, such that every solution to \eqref{pf.s.z} satisfying $\setdist{x^0}{\mathcal A_\epsilon^u}\le b(\ell)$ also satisfies $\setdist{x^t}{\mathcal A_\epsilon^u}\le \ell$ for all $t\in\mathbb N$. This, in turn, can be proved by arguments similar to those of \cite[Proposition 7.5]{Goebel2012}. In particular, fix $\ell>0$ arbitrarily and suppose, toward a contradiction, that $\mathcal A_\epsilon^u$ is not stable. Then, for each $m\in\mathbb N$ there exist $\tau_m\in\mathbb N$ and a solution $z_m=(\xi_m,x_m)\in\mathcal S(Z)$ such that $\setdist{x^{0}_m}{\mathcal A_\epsilon^u}\le 2^{-m}$ and $\setdist{x^{\tau_m}_m}{\mathcal A_\epsilon^u}>\ell$. This, in turn, implies \begin{equation}\label{pf.Omg} \setdist{z^{\tau_m}_m}{\Omega}>\ell. \end{equation} Since $X_0:=\{x\in\mathbb R^N\,\mid\, \setdist{x}{\mathcal A_\epsilon^u}\le 1\}$ is compact, $Z_0:=\Xi\times X_0$ is compact. Thus, since $z^0_m\in Z_0$ for all $m\in\mathbb N$, by uniform attractiveness of $\Omega$, there exists $\bar\tau=\bar\tau(\ell)\in\mathbb N$ such that $\tau_m\le \bar\tau$ for all $m\in\mathbb N$. We are thus given a sequence $(z_m|_{\le \bar\tau})_{m\in\mathbb N}$ of uniformly bounded signals $z_m|_{\le \bar\tau}$, obtained by restricting the solutions $z_m$ to $\{0,\dots, \bar\tau\}$, which satisfies $\lim_{m\to\infty} \setdist{z^0_m}{\Omega} =0$. Since $\phi$ is continuous, $Z$ is closed, and $\Omega$ is forward invariant, in view of \cite[Theorem 6.8]{Goebel2012} we can extract a subsequence of $(z_m|_{\le \bar\tau})_{m\in\mathbb N}$ (which we do not re-index) that satisfies $\lim_{m\to\infty} \setdist{z^t_m}{\Omega} =0$ for all $t\in\{0,\dots,\bar\tau\}$. This, however, contradicts \eqref{pf.Omg} and proves the claim. \subsection{Proof of Claim 5} The last claim of the theorem, i.e.
that if $(h_i)_{i\in\mathcal N}$ is sufficiently exciting according to Definition \ref{d.SE} and $\lim_{t\to\infty} h_i^t=0$, then $\lim_{t\to\infty} x_i^t = \M\sr}%{\und{\M}$ for all $i\in\mathcal N$, follows directly from \eqref{pf.d.upsilon}-\eqref{pf.eq.lim_vep}. \ignorespaces~\hfill$\blacksquare$
{ "timestamp": "2021-06-28T02:14:18", "yymm": "2012", "arxiv_id": "2012.08940", "language": "en", "url": "https://arxiv.org/abs/2012.08940" }