Applying Theorem 3 to the work of Barenboim and Elkin [6] (see Theorem 6.3 therein) with $\Gamma = \{a, n\}$ and $\Lambda = \{n\}$ yields the following result, since $a \le n$.
Corollary 4 The following uniform deterministic algorithms solving MIS exist:
For the family of graphs with arboricity $a = o(\sqrt{\log n})$, running in time $o(\log n)$;
For any constant $\delta \in (0, 1/2)$, for the family of graphs with arboricity $a = O(\log^{1/2-\delta} n)$, running in time $O(\log n / \log \log n)$.
4.6 Running as Fast as the Fastest Algorithm
To illustrate the topic of the next theorem, consider the non-uniform algorithms for MIS on general graphs: the algorithm of Barenboim and Elkin [4] and that of Kuhn [22], which run in time $O(\Delta + \log^* n)$ and use the knowledge of $n$ and $\Delta$, and the algorithm of Panconesi and Srinivasan [34], which runs in time $2^{O(\sqrt{\log n})}$ and requires the knowledge of $n$. Furthermore, consider the MIS algorithms of Barenboim and Elkin [5,6], which are very efficient for graphs with small arboricity $a$. If $n$, $\Delta$ and $a$ are contained in the inputs of all nodes, then one can compare the running times of these algorithms and use the fastest one. That is, there exists a non-uniform algorithm $\mathcal{A}^{\{n,\Delta,a\}}$ that runs in time $T(n, \Delta, a) = \min\{g(n), h(\Delta, n), f(a,n)\}$, where $g(n) = 2^{O(\sqrt{\log n})}$, $h(\Delta, n) = O(\Delta + \log^* n)$, and $f(a,n)$ is defined as follows: $f(a,n) = o(\log n)$ for graphs of arboricity $a = o(\sqrt{\log n})$; $f(a,n) = O(\log n / \log \log n)$ for arboricity $a = O(\log^{1/2-\delta} n)$, for some constant $\delta \in (0, 1/2)$; and otherwise $f(a,n) = O(a + a^\epsilon \log n)$, for an arbitrarily small constant $\epsilon > 0$.
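To make the comparison concrete, the minimum above can be sketched numerically. The following Python snippet is an illustration only: it drops all $O(\cdot)$ constants, uses base-2 logarithms, and instantiates $f$ with its general-arboricity branch $a + a^\epsilon \log n$; the names `running_time_T` and `log_star` are ours, not from the paper.

```python
import math

def log_star(n):
    """Iterated logarithm: number of times log2 must be applied to reach <= 1."""
    count = 0
    while n > 1:
        n = math.log2(n)
        count += 1
    return count

def running_time_T(n, delta, a, eps=0.1):
    """Proxy for T(n, Delta, a) = min{ g(n), h(Delta, n), f(a, n) },
    with all O(.) constants dropped (illustration only)."""
    g = 2 ** math.sqrt(math.log2(n))   # g(n)        = 2^{O(sqrt(log n))}
    h = delta + log_star(n)            # h(Delta, n) = O(Delta + log^* n)
    f = a + (a ** eps) * math.log2(n)  # general branch of f(a, n)
    return min(g, h, f)
```

For instance, on a bounded-degree graph the $h$ term wins, whereas for large $\Delta$ and small arboricity the $f$ term takes over; the non-uniform algorithm simply runs whichever candidate minimizes the bound.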
Unfortunately, the theorems established so far do not allow us to transform $\mathcal{A}^{\{n,\Delta,a\}}$ into a uniform algorithm, the reason being that the function $T(n, \Delta, a)$ bounding the running time does not have a sequence number. On the other hand, as mentioned in Corollary 2, Theorem 1 does allow us to transform each of the algorithms in [4, 22, 34] into a uniform MIS algorithm, with time complexity $O(\Delta + \log^* n)$ for the first two and $2^{O(\sqrt{\log n})}$ for the third. Moreover, Corollaries 3 and 4 show that Theorems 1 and 3 allow us to transform the algorithms in [5,6] into uniform algorithms that, over the appropriate graph families, run as fast as the corresponding non-uniform algorithms. Nevertheless, unless $n$, $\Delta$ and $a$ are provided as inputs to the nodes, it is not clear how to obtain from these transformed algorithms a uniform algorithm running in time $T(n, \Delta, a)$. The following theorem solves this problem.
Theorem 4 Consider a problem $\Pi$ and a family of instances $\mathcal{F}$. Let $k$ be a positive integer and let $\Lambda_1, \dots, \Lambda_k$ be $k$ sets of non-decreasing parameters. Let $\mathcal{P}$ be a $(\Lambda_1 \cup \dots \cup \Lambda_k)$-monotone pruning algorithm for $\Pi$ and $\mathcal{F}$. For $i \in \{1, 2, \dots, k\}$, consider a uniform algorithm $\mathcal{U}_i$ whose running time is bounded with respect to $\Lambda_i$ by a function $f_i$. Then there is a uniform algorithm with running time $O(f_{\min})$, where $f_{\min} = \min\{f_1(\Lambda_1^*), \dots, f_k(\Lambda_k^*)\}$.
Proof. Clearly, it is sufficient to prove the theorem for the case $k=2$. The basic idea behind the proof is to run in iterations, where iteration $i$ consists of running the quadruple $(\mathcal{U}_1; \mathcal{P}; \mathcal{U}_2; \mathcal{P})$, in which $\mathcal{U}_1$ and $\mathcal{U}_2$ are executed for precisely $2^i$ rounds each. Hence, a correct solution is produced in iteration $s = \lceil \log f_{\min} \rceil$ or before. Since each iteration $i$ takes at most $O(2^i)$ rounds (recall that the running time of $\mathcal{P}$ is constant), the running time is $O(f_{\min})$.
Formally, we define a sequence of uniform algorithms $(\mathcal{A}_i)_{i \in \mathbb{N}}$ as follows. For $i \in \mathbb{N}$, set $\mathcal{A}_{2i+1} = \tilde{\mathcal{U}}_1$ and $\mathcal{A}_{2i+2} = \tilde{\mathcal{U}}_2$, where $\tilde{\mathcal{U}}_j$ is $\mathcal{U}_j$ restricted to $2^i$ rounds for $j \in \{1, 2\}$. Let $\pi$ be the uniform alternating algorithm with respect to $(\mathcal{A}_i)_{i \in \mathbb{N}}$ and $\mathcal{P}$, that is, $\pi = \mathcal{B}_1; \mathcal{B}_2; \mathcal{B}_3; \dots$ where $\mathcal{B}_{2i+j} = \tilde{\mathcal{U}}_j; \mathcal{P}$ for every $i \in \mathbb{N}$ and every $j \in \{1, 2\}$. Letting $T_0$ be the running time of $\mathcal{P}$, the running time of $\mathcal{B}_i$ is at most $2^{i/2} + T_0$, for every $i \in \mathbb{N}$.
Consider an instance $(G, \mathbf{x}) \in \mathcal{F}$. For each $(\mathbf{p}, \mathbf{q}) \in \Lambda_1 \times \Lambda_2$, let $\mathbf{p}^* = \mathbf{p}(G, \mathbf{x})$ and $\mathbf{q}^* = \mathbf{q}(G, \mathbf{x})$. Algorithm $\mathcal{B}_i$ operates on the configuration $(G_i, \mathbf{x}_i)$. Let $\mathbf{p} \in \Lambda_1 \cup \Lambda_2$. Because $\mathcal{P}$ is monotone with respect to $\Lambda_1 \cup \Lambda_2$, it follows by induction on $i$ that $\mathbf{p}^* \ge \mathbf{p}(G_i, \mathbf{x}_i)$. Hence, the running time of $\mathcal{U}_j$ over $(G_i, \mathbf{x}_i)$ is bounded from above by $f_j(\Lambda_j^*)$ for every $i \in \mathbb{N}$ and each $j \in \{1, 2\}$. Thus, $V(G_{2s+2}) = \emptyset$ for the smallest $s$ such that $2^s \ge f_{\min}$. In other words, $\pi = \mathcal{B}_1; \mathcal{B}_2; \dots; \mathcal{B}_{2s+1}$. Consequently, by Observation 3.4, Algorithm $\pi$ correctly solves $\Pi$ on $\mathcal{F}$ and, since $\mathcal{B}_i$ runs in at most $2^{i/2} + T_0$ rounds, the running time of $\pi$ is $O(2^s) = O(f_{\min})$, as asserted. □
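The doubling schedule in the proof above can be sketched as a simple simulation. This is a sketch under assumptions: the distributed algorithms $\mathcal{U}_1$ and $\mathcal{U}_2$ are abstracted into their (unknown to the algorithm) true running times `f1` and `f2`, the pruning algorithm $\mathcal{P}$ is folded into a constant `T0` added per phase, and all names are ours, not the paper's.

```python
T0 = 1  # constant running time of the pruning algorithm P (assumption)

def alternating_schedule(f1, f2):
    """Simulate pi = B_1; B_2; ... where iteration i runs U_1 for 2^i rounds,
    then P, then U_2 for 2^i rounds, then P. An algorithm U_j produces a
    correct solution once its round budget 2^i reaches its true running
    time f_j. Returns the total number of rounds spent by the schedule."""
    total = 0
    i = 0
    while True:
        for f in (f1, f2):       # phase for U_1, then phase for U_2
            total += 2**i + T0   # 2^i rounds of the algorithm, plus P
            if 2**i >= f:        # budget suffices: solution produced
                return total
        i += 1
```

Because the budgets grow geometrically, the total work up to the first sufficient budget is dominated by its last term, so the schedule spends $O(\min(f_1, f_2))$ rounds without ever knowing $f_1$ or $f_2$ in advance.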
Now, we can combine Theorem 4 with Corollaries 3 and 4 to establish a uniform algorithm for MIS that runs in time $f(a,n)$. Combining this algorithm with Corollary 2, and applying Theorem 4 once more, yields Corollary 1(i).
5 Uniform Coloring Algorithms
In general, we could not find a way to directly apply our transformers (e.g., the one given by Theorem 3) for the coloring problem. The main reason is that we could not find an efficient pruning algorithm for the coloring problem. Indeed, consider for example the $O(\Delta)$-coloring problem. The checking property of a pruning