the running time of $\mathcal{A}_1 ; \mathcal{A}_2$ is bounded from above by the sum of the running times of $\mathcal{A}_1$ and $\mathcal{A}_2$. This can be shown as follows. Let $t_1$ and $t_2$ be the running times of $\mathcal{A}_1$ and $\mathcal{A}_2$, respectively. Consider a node $u$ and let $t_0$ be the last wake-up time of a node in the ball $B_G(u, t_1+t_2)$. For every node $v \in B_G(u, t_2)$, the ball $B_G(v, t_1)$ is contained in $B_G(u, t_1+t_2)$, so all nodes in $B_G(v, t_1)$ are awake by time $t_0$. Hence, by time $t_0 + t_1$, all nodes in $B_G(u, t_2)$ have terminated $\mathcal{A}_1$ and are thus considered as woken up for the execution of $\mathcal{A}_2$. Node $u$ thus terminates before time $(t_0+t_1)+t_2$. As this holds for every node $u$, independently of the wake-up pattern, $\mathcal{A}_1 ; \mathcal{A}_2$ has running time at most $t_1+t_2$. This establishes the following observation.
Observation 2.1 For any two local algorithms $\mathcal{A}_1$ and $\mathcal{A}_2$, the running time of $\mathcal{A}_1 ; \mathcal{A}_2$ is bounded by the sum of the running times of $\mathcal{A}_1$ and $\mathcal{A}_2$.
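To make the composition bound concrete, the following sketch (not from the paper; the path graph, wake-up times, and worst-case timing model are our own illustrative assumptions) models a $t$-round local algorithm as terminating at each node $t$ steps after the last wake-up in its radius-$t$ ball, and composes two such algorithms by feeding the termination times of the first as wake-up times to the second:

```python
# Sketch (illustrative, not from the paper): worst-case timing model
# for local algorithms with non-simultaneous wake-up.

def ball(adj, u, r):
    """Nodes within hop distance r of u (simple BFS)."""
    frontier, seen = {u}, {u}
    for _ in range(r):
        frontier = {w for v in frontier for w in adj[v]} - seen
        seen |= frontier
    return seen

def worst_finish(adj, wake, t):
    """Worst-case termination time of a t-round local algorithm at
    each node: t steps after the last wake-up in its radius-t ball."""
    return [max(wake[v] for v in ball(adj, u, t)) + t
            for u in range(len(adj))]

def composed_finish(adj, wake, t1, t2):
    """A1 ; A2 -- the second algorithm treats the termination times
    of the first as its wake-up times."""
    return worst_finish(adj, worst_finish(adj, wake, t1), t2)
```

In this model the composition behaves exactly like a single $(t_1+t_2)$-round algorithm, reflecting the fact that the union of the balls $B_G(v, t_1)$ over $v \in B_G(u, t_2)$ is precisely $B_G(u, t_1+t_2)$.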
Another useful remark is that a simultaneous wake-up algorithm running in time $t$ can be emulated in a non-simultaneous wake-up environment with running time at most $t$ using the simple $\alpha$ synchronizer. Indeed, consider a node $u$ and let $t_0$ be the last wake-up time of a node in the ball $B_G(u,t)$. At time $t_0$, all nodes in $B_G(u,t)$ perform (or have performed) round 0. Using the $\alpha$ synchronizer, a node performs round $i$ once all its neighbors have performed round $i-1$. We can thus show by induction on $i$ that all nodes in $B_G(u,t-i)$ perform (or have performed) round $i$ by time $t_0+i$. Node $u$ thus terminates by time $t_0+t$, i.e., within time $t$ of the last wake-up in $B_G(u,t)$. This implies that the running time of the emulation of the algorithm with the $\alpha$ synchronizer is at most $t$. Therefore, in the remainder of the paper we may assume without loss of generality that all nodes wake up simultaneously at time 0.
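The $\alpha$-synchronizer emulation can be illustrated by a small simulation (a sketch under our own assumptions, namely a path graph with arbitrary wake-up times; not code from the paper): a node performs round 0 upon waking up, and round $i \ge 1$ only once every neighbor has performed round $i-1$:

```python
# Sketch (illustrative, not from the paper): emulating a simultaneous
# wake-up algorithm of t rounds with the alpha synchronizer.

def emulate_alpha(adj, wakeup, t):
    """Global time at which each node completes round t, where node v
    performs round 0 when it wakes up, and round i >= 1 at the first
    step after all its neighbors have performed round i - 1."""
    n = len(adj)
    last_round = [-1] * n            # highest round performed so far
    finish = [None] * n
    time = 0
    while any(f is None for f in finish):
        # all eligible nodes progress simultaneously, judged on the
        # state as it was before this time step
        progressed = [v for v in range(n)
                      if time >= wakeup[v] and last_round[v] < t
                      and (last_round[v] == -1   # round 0: just wake up
                           or all(last_round[u] >= last_round[v]
                                  for u in adj[v]))]
        for v in progressed:
            last_round[v] += 1
            if last_round[v] == t:
                finish[v] = time
        time += 1
    return finish
```

On such an instance each node $u$ indeed finishes by $t_0 + t$, where $t_0$ is the last wake-up time in $B_G(u,t)$, matching the induction above.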
Local algorithms requiring parameters: Fix a problem $\Pi$ and let $\mathcal{F}$ be a collection of instances for $\Pi$. Let $\Gamma$ be a collection of parameters $\mathbf{p}_1, \dots, \mathbf{p}_r$ and let $\mathcal{A}$ be a local algorithm. We say that $\mathcal{A}$ requires $\Gamma$ if the code of $\mathcal{A}$, which is executed by each node of the input configuration, uses a value $\tilde{\mathbf{p}}$ for each parameter $\mathbf{p} \in \Gamma$. (Note that this value is thus the same for all nodes.) The value $\tilde{\mathbf{p}}$ is a guess for $\mathbf{p}$. A collection of guesses for the parameters in $\Gamma$ is denoted by $\tilde{\Gamma}$ and an algorithm $\mathcal{A}$ that requires $\Gamma$ is denoted by $\mathcal{A}^\Gamma$. An algorithm that does not require any parameter is called uniform.
Consider an instance $(G, \mathbf{x}) \in \mathcal{F}$, a collection $\Gamma$ of parameters and a parameter $\mathbf{p} \in \Gamma$. A guess $\tilde{\mathbf{p}}$ for $\mathbf{p}$ is termed good if $\tilde{\mathbf{p}} \ge \mathbf{p}(G, \mathbf{x})$, and the guess $\tilde{\mathbf{p}}$ is called correct if $\tilde{\mathbf{p}} = \mathbf{p}(G, \mathbf{x})$. We typically write correct guesses and collections of correct guesses with a star superscript, as in $\mathbf{p}^*$ and $\Gamma^*(G, \mathbf{x})$, respectively. When $(G, \mathbf{x})$ is clear from the context, we may use the notation $\Gamma^*$ instead of $\Gamma^*(G, \mathbf{x})$.
An algorithm $\mathcal{A}^\Gamma$ depends on $\Gamma$ if for every instance $(G, \mathbf{x}) \in \mathcal{F}$, the correctness of $\mathcal{A}^\Gamma$ over $(G, \mathbf{x})$ is guaranteed only when $\mathcal{A}^\Gamma$ uses a collection $\tilde{\Gamma}$ of good guesses.
Consider an algorithm $\mathcal{A}^\Gamma$ that depends on a collection $\Gamma$ of parameters $\mathbf{p}_1, \dots, \mathbf{p}_r$ and fix an instance $(G, \mathbf{x})$. Observe that the running time of $\mathcal{A}^\Gamma$ over $(G, \mathbf{x})$ may be different for different collections of guesses $\tilde{\Gamma}$; in other words, the running time over $(G, \mathbf{x})$ may be a function of $\tilde{\Gamma}$. Recall that when we consider an algorithm that does not require parameters, we still typically evaluate its running time with respect to a collection of parameters $\Lambda$. We generalize this to the case where the algorithm depends on $\Gamma$ as follows.
Consider two collections $\Gamma$ and $\Lambda$ of parameters $\mathbf{p}_1, \dots, \mathbf{p}_r$ and $\mathbf{q}_1, \dots, \mathbf{q}_\ell$, respectively. Some parameters may belong to both $\Gamma$ and $\Lambda$. Without loss of generality, we shall always assume that $\{\mathbf{p}_{r'+1}, \dots, \mathbf{p}_r\} \cap \{\mathbf{q}_{r'+1}, \dots, \mathbf{q}_\ell\} = \emptyset$ for some $r' \in [0, \min\{r, \ell\}]$ and $\mathbf{p}_i = \mathbf{q}_i$ for every $i \in [1, r']$. Notice that $\Gamma \setminus \Lambda = \{\mathbf{p}_{r'+1}, \mathbf{p}_{r'+2}, \dots, \mathbf{p}_r\}$. A function $f : (\mathbf{R}^+)^{\ell} \to \mathbf{R}^+$ upper bounds the running time of $\mathcal{A}^\Gamma$ with respect to $\Gamma$ and $\Lambda$ if the running time $T_{\mathcal{A}^\Gamma}(G, \mathbf{x})$ of $\mathcal{A}^\Gamma$ for $(G, \mathbf{x}) \in \mathcal{F}$ using a collection of good guesses $\tilde{\Gamma} = \{\tilde{\mathbf{p}}_1, \dots, \tilde{\mathbf{p}}_r\}$ is at most $f(\tilde{\mathbf{p}}_1, \dots, \tilde{\mathbf{p}}_{r'}, \tilde{\mathbf{q}}_{r'+1}^*, \dots, \tilde{\mathbf{q}}_\ell^*)$, where $\tilde{\mathbf{q}}_i^* = \mathbf{q}_i(G, \mathbf{x})$ for $i \in [r' + 1, \ell]$. Note that we do not put any restriction on the running time of $\mathcal{A}^\Gamma$ over $(G, \mathbf{x})$ if some of the guesses in $\tilde{\Gamma}$ are not good. In fact, in such a case, the algorithm may not even terminate, and it may also produce wrong results.
For simplicity of notation, when $\Gamma$ and $\Lambda$ are clear from the context, we say that $f$ upper bounds the running time of $\mathcal{A}^\Gamma$, without writing that it is with respect to $\Gamma$ and $\Lambda$.
The set $\Gamma$ is weakly-dominated by $\Lambda$ if for each $j \in [r'+1, r]$, there exist an index $i_j \in [1, \ell]$ and an ascending function $g_j$ such that $g_j(\mathbf{p}_j(G, \mathbf{x})) \le \mathbf{q}_{i_j}(G, \mathbf{x})$ for every instance $(G, \mathbf{x}) \in \mathcal{F}$. (For example, $\Gamma = \{\Delta\}$ is weakly-dominated by $\Lambda = \{n\}$, since $\Delta(G, \mathbf{x}) \le n(G, \mathbf{x})$ for any $(G, \mathbf{x})$.)
3 Pruning Algorithms
3.1 Overview
Consider a problem $\Pi$ in the centralized setting and an efficient randomized Monte Carlo algorithm $\mathcal{A}$ for $\Pi$. A known method for transforming $\mathcal{A}$ into a Las Vegas algorithm is based on repeatedly doing the following. Execute $\mathcal{A}$ and, subsequently, execute an algorithm that checks the validity of the output. If the check fails, continue; otherwise, terminate, i.e., break the loop. This transformation can yield a Las Vegas algorithm whose expected running time is similar to the