The (2, β)-ruling set pruning algorithm: Let $\beta$ be a constant integer. We define a pruning algorithm $P_{(2,\beta)}$ for the (2, $\beta$)-ruling set problem as follows. Given a triplet $(G, \mathbf{x}, \hat{\mathbf{y}})$, let $W$ be the set of nodes $u$ satisfying one of the following two conditions.
$\hat{\mathbf{y}}(u) = 1$ and $\hat{\mathbf{y}}(v) = 0$ for all $v \in N(u)$, or
$\hat{\mathbf{y}}(u) = 0$ and $\exists v \in B_G(u, \beta)$ such that $\hat{\mathbf{y}}(v) = 1$ and $\hat{\mathbf{y}}(w) = 0$ for all $w \in N(v)$.
The question of whether a node $u$ belongs to $W$ can be determined by inspecting $B_G(u, 1 + \beta)$, the ball of radius $1 + \beta$ around $u$. Hence, we obtain the following.
Observation 3.2 Algorithm $P_{(2,\beta)}$ is a pruning algorithm for the (2, $\beta$)-ruling set problem, running in time $1 + \beta$. (In particular, $P_{(2,1)}$ is a pruning algorithm for the MIS problem running in time 2.) Furthermore, $P_{(2,\beta)}$ is monotone with respect to any non-decreasing parameter.
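To make the local test concrete, the following sketch checks membership in the set $W$ computed by $P_{(2,\beta)}$. The adjacency-list representation, the function names, and the BFS helper are our own illustrative assumptions, not part of the formal model; the sketch only mirrors the two conditions above, inspecting no node beyond $B_G(u, 1+\beta)$.

```python
from collections import deque

def ball(adj, u, r):
    """Nodes at distance <= r from u, by breadth-first search."""
    dist = {u: 0}
    queue = deque([u])
    while queue:
        v = queue.popleft()
        if dist[v] == r:
            continue
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return set(dist)

def is_locally_maximal(adj, y, v):
    """Condition on v: y(v) = 1 and y(w) = 0 for every neighbor w."""
    return y[v] == 1 and all(y[w] == 0 for w in adj[v])

def in_W(adj, y, u, beta):
    """Does u survive the pruning step of P_{(2,beta)}?"""
    if y[u] == 1:
        # First condition: u itself is a locally maximal 1.
        return is_locally_maximal(adj, y, u)
    # Second condition: y(u) = 0 and some locally maximal 1
    # lies within distance beta of u.
    return any(is_locally_maximal(adj, y, v) for v in ball(adj, u, beta))
```

On the path $a - b - c$ with tentative outputs $\hat{\mathbf{y}} = (1, 0, 0)$ and $\beta = 1$, the nodes $a$ and $b$ belong to $W$ while $c$ does not; increasing $\beta$ to $2$ brings $c$ into $W$ as well.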
The maximal matching problem: We define a pruning algorithm $P_{MM}$ as follows. Given a tentative output vector $\hat{\mathbf{y}}$, recall that $u$ and $v$ are matched when $u$ and $v$ are neighbors, $\hat{\mathbf{y}}(u) = \hat{\mathbf{y}}(v)$ and $\hat{\mathbf{y}}(w) \neq \hat{\mathbf{y}}(u)$ for every $w \in (N_G(u) \cup N_G(v)) \setminus \{u, v\}$. Set $W$ to be the set of nodes $u$ satisfying one of the following conditions.
$\exists v \in N(u)$ such that $u$ and $v$ are matched, or
$\forall v \in N(u), \exists w \neq u$ such that $v$ and $w$ are matched.
Observation 3.3 Algorithm $P_{MM}$ is a pruning algorithm for MM whose running time is 3. Furthermore, $P_{MM}$ is monotone with respect to any parameter.
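The two conditions defining $W$ for $P_{MM}$ can likewise be phrased as a purely local test. The sketch below is an illustrative assumption on our part (adjacency-list graph, hypothetical function names), transcribing the definition of matched pairs and the two membership conditions verbatim.

```python
def matched(adj, y, u, v):
    """u and v are matched: they are neighbors with equal tentative
    outputs, and every other node around the pair differs from them."""
    if v not in adj[u] or y[u] != y[v]:
        return False
    others = (set(adj[u]) | set(adj[v])) - {u, v}
    return all(y[w] != y[u] for w in others)

def in_W_mm(adj, y, u):
    """Pruning test of P_MM: either u is matched to a neighbor, or
    every neighbor of u is matched to some node other than u."""
    if any(matched(adj, y, u, v) for v in adj[u]):
        return True
    return all(
        any(matched(adj, y, v, w) for w in adj[v] if w != u)
        for v in adj[u]
    )
```

For instance, on the path $a - b - c - d$ with $\hat{\mathbf{y}} = (0, 1, 1, 0)$, the pair $\{b, c\}$ is matched and all four nodes belong to $W$; on the path $a - b - c$ with $\hat{\mathbf{y}} = (0, 1, 0)$, no pair is matched and $a \notin W$.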
We exhibit several applications of pruning algorithms. The main application appears in the next section, where we show how pruning algorithms can be used to transform non-uniform algorithms into uniform ones. Before we continue, we need the concept of alternating algorithms.
3.3 Alternating Algorithms
A pruning algorithm can be used in conjunction with a sequence of algorithms as follows. Let $\mathcal{F}$ be a collection of instances for some problem $\Pi$. For each $i \in \mathbb{N}$, let $\mathcal{A}_i$ be an algorithm defined on $\mathcal{F}$. Algorithm $\mathcal{A}_i$ does not necessarily solve $\Pi$; it is only assumed to produce some output.
Let $\mathcal{P}$ be a pruning algorithm for $\Pi$ and $\mathcal{F}$, and for $i \in \mathbb{N}$, let $\mathcal{B}_i = (\mathcal{A}_i; \mathcal{P})$, that is, given an instance
$(G, \mathbf{x})$, Algorithm $\mathcal{B}_i$ first executes $\mathcal{A}_i$, which returns an output vector $\mathbf{y}$ for the nodes of $G$ and, subsequently, Algorithm $\mathcal{P}$ is executed over the triplet $(G, \mathbf{x}, \mathbf{y})$. We define the alternating algorithm $\pi$ for $(\mathcal{A}_i)_{i \in \mathbb{N}}$ and $\mathcal{P}$ as follows. The alternating algorithm $\pi = \pi((\mathcal{A}_i)_{i \in \mathbb{N}}, \mathcal{P})$ executes the algorithms $\mathcal{B}_i$ for $i = 1, 2, 3, \dots$ one after the other: let $(G_1, \mathbf{x}_1) = (G, \mathbf{x})$ be the initial instance given to $\pi$; for $i \in \mathbb{N}$, Algorithm $\mathcal{A}_i$ is executed on the instance $(G_i, \mathbf{x}_i)$ and returns the output vector $\mathbf{y}_i$. The subsequent pruning algorithm $\mathcal{P}$ takes the triplet $(G_i, \mathbf{x}_i, \mathbf{y}_i)$ as input and produces the instance $(G_{i+1}, \mathbf{x}_{i+1})$. See Figure 1 for a schematic view of an alternating algorithm. The definition extends to a finite sequence $(\mathcal{A}_i)_{i=1}^k$ of algorithms in a natural way; the alternating algorithm for $(\mathcal{A}_i)_{i=1}^k$ and $\mathcal{P}$ being $\mathcal{A}_1; \mathcal{P}; \mathcal{A}_2; \mathcal{P}; \dots; \mathcal{A}_k; \mathcal{P}$.
The alternating algorithm $\pi$ terminates on an instance $(G, \mathbf{x}) \in \mathcal{F}$ if there exists $k$ such that $V(G_k) = \emptyset$. Observe that in such a case, the tail $\mathcal{B}_k; \mathcal{B}_{k+1}; \dots$ of $\pi$ is trivial. The output vector $\mathbf{y}$ of a terminating alternating algorithm $\pi$ is defined as the combination of the output vectors $\mathbf{y}_1, \mathbf{y}_2, \mathbf{y}_3, \dots$. Specifically, for $s \in [1, k-1]$, let $W_s = V(G_s) \setminus V(G_{s+1})$. (Observe that $W_s$ is precisely the set of nodes pruned by the execution of the pruning algorithm $\mathcal{P}$ in $\mathcal{B}_s$.) Then, the collection $\{W_s : 1 \le s \le k-1\}$ forms a partition of $V(G)$, i.e., $W_s \cap W_{s'} = \emptyset$ if $s \ne s'$, and $\bigcup_{s=1}^{k-1} W_s = V(G)$. Observe that the final output $\mathbf{y}$ of $\pi$ satisfies $\mathbf{y}(u) = \mathbf{y}_s(u)$ for every node $u$, where $s$ is such that $u \in W_s$. In other words, the output of $\pi$ restricted to the nodes in $W_s$ is precisely the corresponding output of Algorithm $\mathcal{A}_s$. The next observation readily follows from the definition of pruning algorithms.
Observation 3.4 Consider a problem $\Pi$, a collection of instances $\mathcal{F}$, a sequence of algorithms $(\mathcal{A}_i)_{i \in \mathbb{N}}$ defined on $\mathcal{F}$, and a pruning algorithm $\mathcal{P}$ for $\Pi$ and $\mathcal{F}$. Consider the alternating algorithm $\pi = \pi((\mathcal{A}_i)_{i \in \mathbb{N}}, \mathcal{P})$ for $(\mathcal{A}_i)_{i \in \mathbb{N}}$ and $\mathcal{P}$. If $\pi$ terminates on an instance $(G, \mathbf{x}) \in \mathcal{F}$, then it produces a correct output $\mathbf{y}$, that is, $(G, \mathbf{x}, \mathbf{y}) \in \Pi$.
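The control flow of an alternating algorithm can be summarized by the following sketch. The interfaces of `algorithm`, `prune`, and the instance representation are illustrative assumptions of ours, not the paper's formal model; the loop simply runs $\mathcal{B}_i = (\mathcal{A}_i; \mathcal{P})$ until the residual graph is empty and combines the outputs over the pruned sets $W_i$, as in the definition above.

```python
def alternating(G, x, algorithm, prune):
    """Run B_i = (A_i; P) for i = 1, 2, ... until V(G_k) is empty.

    algorithm(i, G, x) -> output vector y_i on the nodes of G
    prune(G, x, y)     -> (W, G', x'): the pruned node set W and the
                          residual instance (G', x')
    """
    final = {}                     # combined output vector y
    i = 1
    while G:                       # termination: V(G_k) = empty set
        y = algorithm(i, G, x)     # A_i runs on (G_i, x_i)
        W, G, x = prune(G, x, y)   # P produces (G_{i+1}, x_{i+1})
        for u in W:                # pruned nodes keep their y_i value
            final[u] = y[u]
        i += 1
    return final
```

As in Observation 3.4, whenever the loop terminates, each node's final output is the one it received from the round in which it was pruned.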
In what follows, we often produce a sequence of algorithms $(\mathcal{A}_i)_{i \in \mathbb{N}}$ from an algorithm $\mathcal{A}^\Gamma$ requiring a collection $\Gamma$ of non-decreasing parameters. The general idea is to design a sequence of guesses $\tilde{\Gamma}_i$ and let $\mathcal{A}_i$ be Algorithm $\mathcal{A}^\Gamma$ provided with the guesses $\tilde{\Gamma}_i$. Given a pruning algorithm $\mathcal{P}$, we obtain a uniform alternating algorithm $\pi = \pi((\mathcal{A}_i)_{i \in \mathbb{N}}, \mathcal{P})$. The sequence of guesses is designed such that for any instance $(G, \mathbf{x}) \in \mathcal{F}$, there exists some $i$ for which $\tilde{\Gamma}_i$ is a collection of good guesses for $(G, \mathbf{x})$. The crux is to obtain an execution time for $\mathcal{A}_1; \mathcal{P}; \dots; \mathcal{A}_i; \mathcal{P}$ of the same order as the exe-