
Fig. 1 Schematic view of an alternating algorithm for $(\mathcal{A}_i)_{i \in \mathbb{N}}$ and $\mathcal{P}$.

cution time of $\mathcal{A}^{\Gamma}$ provided with the collection $\Gamma^*(G, \mathbf{x})$ of correct guesses.

4 The General Method

We now turn to the main application of pruning algorithms discussed in this paper, that is, the construction of a transformer taking a non-uniform algorithm $\mathcal{A}^\Gamma$ as a black box and producing a uniform one that enjoys the same (asymptotic) time complexity as the original non-uniform algorithm.

We begin with a few illustrative examples of our method in Subsection 4.1. Then, the general framework of our transformer is given in Subsection 4.2. This subsection introduces the concept of “sequence-number functions” as well as a fundamental construction used in our forthcoming algorithms.

Then, in Subsection 4.3, we consider the deterministic setting: a somewhat restrictive, yet useful, transformer is given in Theorem 1. This transformer considers a single set $\Gamma$ of non-decreasing parameters $p_1, \dots, p_\ell$, and assumes that (1) the given non-uniform algorithm $\mathcal{A}^\Gamma$ depends on $\Gamma$ and (2) the running time of $\mathcal{A}^\Gamma$ is evaluated with respect to the parameters in $\Gamma$. Such a situation is customary, and occurs, for instance, for the best currently known MIS algorithms [4,22,34] as well as for the maximal matching algorithm of Hanckowiak et al. [19]. As a result, the transformer given by Theorem 1 can be used to transform each of these algorithms into a uniform one with asymptotically the same time complexity.

The transformer of Theorem 1 is extended to the randomized setting in Subsection 4.4. In Subsection 4.5, we establish Theorem 3, which generalizes both Theorem 1 and Theorem 2. Finally, we conclude the section with Theorem 4 in Subsection 4.6, which shows how to combine several uniform algorithms that run in unknown times into a single uniform algorithm that runs as fast as the fastest of the given algorithms.

4.1 Some Illustrative Examples

The basic idea is very simple. Consider a problem for which we have a pruning algorithm $\mathcal{P}$, and a non-uniform algorithm $\mathcal{A}$ that requires upper bounds on some parameters to be part of the input. To obtain a uniform algorithm, we execute the pair of algorithms $(\mathcal{A}; \mathcal{P})$ in iterations, where each iteration executes $\mathcal{A}$ using a specific set of guesses for the parameters. Typically, as iterations proceed, the guesses grow larger and larger until we reach an iteration $i$ where all the guesses are larger than the actual values of the corresponding parameters. In this iteration, executing $\mathcal{A}$ on $G_i$ (the graph induced by the set of nodes that were not pruned in previous iterations) with such guesses guarantees a correct solution on $G_i$. The solution detection property of the pruning algorithm then guarantees that the execution terminates in this iteration and hence, Observation 3.4 guarantees that the outputs of all nodes combine into a global solution on $G$. To bound the running time, we shall make sure that the total running time is dominated by the running time of the last iteration, and that this last iteration is relatively fast.

There are various delicate points when using this general strategy. For example, in iterations where incorrect guesses are used, we have no control over the behavior of the non-uniform algorithm $\mathcal{A}$ and, in particular, it may run for too many rounds, perhaps even indefinitely. To overcome this obstacle, we allocate a prescribed number of rounds for each iteration; if $\mathcal{A}$ reaches this time bound without outputting at some node $u$, then we force it to terminate with an arbitrary output. Subsequently, we run the pruning algorithm and proceed to the next iteration.
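The iteration scheme described above can be sketched in code. The following is a minimal illustration, not the paper's construction: `run_guess`, `prune`, and the toy problem instance are all hypothetical stand-ins, and the per-iteration round budget is represented only by capping the non-uniform algorithm's behavior inside `run_guess`.

```python
def uniform_from_nonuniform(nodes, run_guess, prune):
    """Run the pair (A; P) in iterations with doubling guesses.

    run_guess(g, guess): simulates the non-uniform algorithm A on the
        residual graph g for its allocated round budget; nodes that do
        not decide in time are forced to an arbitrary (incorrect) output.
    prune(g, partial): simulates the pruning algorithm P; it keeps the
        correctly solved nodes and returns the residual graph.
    """
    solution = {}
    g = set(nodes)
    guess = 1
    while g:  # terminates once the guess exceeds the true parameter
        partial = run_guess(g, guess)
        solved, g = prune(g, partial)
        solution.update(solved)
        guess *= 2  # guesses grow from iteration to iteration
    return solution

# Toy instance (purely illustrative): node v is solved iff guess >= v,
# mimicking a non-uniform algorithm that is only correct once its guess
# dominates the actual parameter value.
def run_guess(g, guess):
    return {v: guess >= v for v in g}

def prune(g, partial):
    solved = {v: out for v, out in partial.items() if out}
    remaining = {v for v in g if not partial[v]}
    return solved, remaining

result = uniform_from_nonuniform({1, 3, 5}, run_guess, prune)
```

With nodes `{1, 3, 5}`, the guess sequence 1, 2, 4, 8 prunes node 1 first, then 3, then 5, after which the residual graph is empty and the loop halts, mirroring how the last iteration (the first with all guesses large enough) is the one in which the execution terminates.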

Obviously, this simple approach of running in iterations and increasing the guesses from iteration to iteration is hardly new. It was used, for example, in the context of wireless networks to compute estimates of parameters (cf., e.g., [8,31]), or to estimate the number of faults [25]. It was also used by Barenboim and Elkin [6] to avoid the necessity of having an upper bound on the arboricity $a$ in one of their MIS algorithms, although their approach increases the running time by $\log^* n$.