and the arboricity $a = a(G)$ of $G$, i.e., the least number of acyclic subgraphs of $G$ whose union is $G$.
Local algorithms: Consider a problem $\Pi$ and a collection of instances $\mathcal{F}$ for $\Pi$. An algorithm for $\Pi$ and $\mathcal{F}$ takes as input an instance $(G, \mathbf{x}) \in \mathcal{F}$ and must terminate with an output vector $\mathbf{y}$ such that $(G, \mathbf{x}, \mathbf{y}) \in \Pi$. We consider the LOCAL model (cf. [35]). During the execution of a local algorithm $\mathcal{A}$, all processors are woken up simultaneously and computation proceeds in fault-free synchronous rounds. In each round, every node may send messages of unrestricted size to its neighbors and may perform arbitrary computations on its data. A message sent in round $r$ arrives at its destination before round $r+1$ starts. It must be guaranteed that, after a finite number of rounds, each node $v$ terminates by writing some final output value $\mathbf{y}(v)$ in its designated output variable (informally, this means that we may assume that a node "knows" that its output is indeed its final output). The algorithm $\mathcal{A}$ is correct if, for every instance $(G, \mathbf{x}) \in \mathcal{F}$, the resulting output vector $\mathbf{y}$ satisfies $(G, \mathbf{x}, \mathbf{y}) \in \Pi$.
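To make the round structure concrete, the following is a minimal sketch of a synchronous-round simulator; it is our own illustration, not part of the model's formal definition, and the names `run_local`, `init`, and `step` are hypothetical. Each iteration of the loop is one round: every non-terminated node reads the messages delivered at the end of the previous round, updates its state, and sends one message to all its neighbors; a message sent in round $r$ is delivered before round $r+1$.

```python
def run_local(graph, init, step, max_rounds):
    """Simulate a synchronous LOCAL algorithm (illustrative sketch).

    graph: {node: [neighbors]} -- the communication network.
    init(v) -> initial state of node v.
    step(v, state, inbox) -> (new_state, message, output_or_None);
        the message is sent to all neighbors of v.
    """
    states = {v: init(v) for v in graph}
    outputs = {}                       # nodes that have terminated
    inboxes = {v: [] for v in graph}   # no messages before round 1
    for _ in range(max_rounds):
        outboxes = {}
        for v in graph:
            if v in outputs:           # terminated nodes do nothing
                continue
            states[v], msg, out = step(v, states[v], inboxes[v])
            outboxes[v] = msg
            if out is not None:        # v writes its final output
                outputs[v] = out
        # messages sent in round r arrive before round r+1 starts
        inboxes = {v: [outboxes[u] for u in graph[v] if u in outboxes]
                   for v in graph}
        if len(outputs) == len(graph):
            break
    return outputs

# Example algorithm: after T rounds of flooding, every node outputs
# the maximum identifier it has seen, i.e., the maximum id in its
# radius-T ball.
T = 2

def init(v):
    return (0, v)          # (rounds elapsed, max id seen so far)

def step(v, state, inbox):
    r, m = state
    m = max([m] + inbox)
    if r == T:
        return ((r, m), m, m)      # terminate with output m
    return ((r + 1, m), m, None)   # keep flooding
```

For instance, on the path `{1: [2], 2: [1, 3], 3: [2]}` with `T = 2`, every node's radius-2 ball contains the whole path, so `run_local(graph, init, step, 10)` gives every node the output 3.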
Let $\mathcal{A}$ be a local deterministic algorithm for $\Pi$ and $\mathcal{F}$. The running time of $\mathcal{A}$ over a particular instance $(G, \mathbf{x}) \in \mathcal{F}$, denoted $T_{\mathcal{A}}(G, \mathbf{x})$, is the number of rounds from the beginning of the execution of $\mathcal{A}$ until all nodes terminate. The running time of $\mathcal{A}$ is typically evaluated with respect to a collection $\Lambda$ of parameters $\mathbf{q}_1, \dots, \mathbf{q}_{\ell}$. Specifically, it is compared to a non-decreasing function $f: \mathbb{N}^{\ell} \to \mathbb{R}^{+}$; we say that $f$ is an upper bound for the running time of $\mathcal{A}$ with respect to $\Lambda$ if $T_{\mathcal{A}}(G, \mathbf{x}) \le f(\mathbf{q}_1^*, \dots, \mathbf{q}_{\ell}^*)$ for every instance $(G, \mathbf{x}) \in \mathcal{F}$ with parameters $\mathbf{q}_i^* = \mathbf{q}_i(G, \mathbf{x})$ for $i \in [1, \ell]$. Let us stress that we assume throughout the paper that all the functions bounding running times of algorithms are non-decreasing.
For an integer $i$, the algorithm $\mathcal{A}$ restricted to $i$ rounds is the local algorithm $\mathcal{B}$ that consists of running $\mathcal{A}$ for precisely $i$ rounds. The output $\mathbf{y}(u)$ of $\mathcal{B}$ at a vertex $u$ is defined as follows: if, during the $i$ rounds, $\mathcal{A}$ outputs a value $y$ at $u$ then $\mathbf{y}(u) = y$; otherwise we let $\mathbf{y}(u)$ be an arbitrary value, e.g., “0”.
A randomized local algorithm is a local algorithm that allows each node to use random bits in its local computation, the random bits used by different nodes being independent. A randomized (local) algorithm $\mathcal{A}$ is Las Vegas if its correctness is guaranteed with probability 1. The running time of a Las Vegas algorithm $\mathcal{A}_{LV}$ over a particular configuration $(G, \mathbf{x}) \in \mathcal{F}$, denoted $T_{\mathcal{A}_{LV}}(G, \mathbf{x})$, is a random variable, which may be unbounded; however, the expected value of $T_{\mathcal{A}_{LV}}(G, \mathbf{x})$ is bounded. A Monte-Carlo algorithm $\mathcal{A}_{MC}$ with guarantee $\rho \in (0, 1]$ is a randomized algorithm that takes a configuration $(G, \mathbf{x}) \in \mathcal{F}$ as input and is guaranteed to terminate by a predetermined time $T_{\mathcal{A}_{MC}}(G, \mathbf{x})$, called the running time of $\mathcal{A}_{MC}$; its output vector is a solution to $\Pi$ with probability at least $\rho$. Finally, a weak Monte-Carlo algorithm $\mathcal{A}_{WMC}$ with guarantee $\rho \in (0, 1]$ guarantees that, with probability at least $\rho$, the algorithm outputs a correct solution by its running time $T_{\mathcal{A}_{WMC}}(G, \mathbf{x})$. (Observe that it is not certain that any execution of the weak Monte-Carlo algorithm will terminate by the prescribed time $T_{\mathcal{A}_{WMC}}(G, \mathbf{x})$, or even terminate at all.) Note that a Monte-Carlo algorithm is in particular a weak Monte-Carlo algorithm, with the same running time and guarantee. Moreover, for any constant $\rho \in (0, 1]$, a Las Vegas algorithm running in expected time $T$ is a weak Monte-Carlo algorithm with guarantee $\rho$ running in time $\frac{T}{1-\rho}$, by Markov's inequality.
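The last claim can be verified directly. Let $X = T_{\mathcal{A}_{LV}}(G, \mathbf{x})$ and suppose $\mathbb{E}[X] \le T$. By Markov's inequality,
\[
\Pr\!\left[ X \ge \frac{T}{1-\rho} \right] \;\le\; \frac{\mathbb{E}[X]\,(1-\rho)}{T} \;\le\; 1 - \rho,
\]
so with probability at least $\rho$ the Las Vegas algorithm has terminated, necessarily with a correct output, by time $\frac{T}{1-\rho}$, which is exactly the weak Monte-Carlo guarantee.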
Synchronicity and time complexity: Many LOCAL algorithms happen to have different termination times at different nodes. On the other hand, most algorithms rely on all nodes waking up simultaneously. This becomes an issue when one wants to run an algorithm $\mathcal{A}_1$ and subsequently an algorithm $\mathcal{A}_2$ that takes the output of $\mathcal{A}_1$ as input. Indeed, this amounts to running $\mathcal{A}_2$ with non-simultaneous wake-up times: a node $u$ starts $\mathcal{A}_2$ when it terminates $\mathcal{A}_1$.
As observed (e.g., by Kuhn [22]), the concept of a synchronizer [2], used in the context of local algorithms, allows one to transform a synchronous local algorithm into one that runs in an asynchronous setting within the same asymptotic time complexity. Hence, the synchronicity assumption can actually be removed. Although the standard asynchronous model still assumes a simultaneous wake-up time, it is easily verified that the technique also applies with non-simultaneous wake-up times, provided a node can buffer messages received before it wakes up, which is the case when running one algorithm after another.
However, we have to adapt the notion of running time. The computation that a node performs in time $t$ depends on its interactions with nodes at distance at most $t$ in the network. More precisely, we say that a node $u$ terminates in time $t$ if it terminates at most $t$ rounds after all nodes in $B_G(u,t)$ have woken up. The termination time of $u$ is the least $t$ such that $u$ terminates in time $t$. We finally define the running time of an algorithm as the maximum termination time over all nodes and all wake-up patterns.
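In symbols (the notation $w$ and $\tau_w$ is ours, introduced only for this restatement): if a wake-up pattern assigns a wake-up round $w(v)$ to each node $v$, then the termination time of a node $u$ under $w$ is
\[
\tau_w(u) \;=\; \min\Bigl\{\, t \ge 0 \;:\; u \text{ terminates at most } t \text{ rounds after } \max_{v \in B_G(u,t)} w(v) \,\Bigr\},
\]
and the running time of the algorithm is $\max_{w} \max_{u \in V(G)} \tau_w(u)$, the maximum taken over all wake-up patterns $w$ and all nodes $u$.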
Given two local algorithms $\mathcal{A}_1$ and $\mathcal{A}_2$, we let $\mathcal{A}_1; \mathcal{A}_2$ be the process of running $\mathcal{A}_2$ after $\mathcal{A}_1$. It turns out that