quant-ph/0005018
Fix MATH. Apply REF with MATH, and let MATH be the value from the theorem. Let MATH be the string whose quantum NAME complexity we want to bound from above. By REF, the length of the encoding is as given in the statement of the theorem. By simulating the decoding algorithm to a precision of MATH, together with REF and REF, we get that the fidelity of the encoding is at least MATH. This completes the proof.
quant-ph/0005055
A simple solution to this problem is to embed the classical heuristic MATH into the function used in algorithm NAME. Let MATH and MATH, so that MATH. By REF, for each function MATH, the expected running time is in MATH. Let MATH denote the probability that MATH occurs. Then MATH, and the expected running time is of the order of MATH, which can be rewritten as MATH by the NAME-NAME inequality.
quant-ph/0005055
For MATH, using standard trigonometric identities, we obtain MATH . The inequality follows directly.
quant-ph/0005055
MATH .
quant-ph/0005055
Clearly MATH; thus, using REF, we directly obtain the first part of the theorem. We use this fact to prove the next part of the theorem. MATH . For the last part, we use the fact that for MATH, the given expression attains its minimum at MATH in the range MATH. MATH .
quant-ph/0005055
After REF , by REF , we have state MATH . After REF , ignoring global phase, we have MATH and after applying MATH we have MATH . We then apply MATH to the first register and measure it in the computational basis. The rest of the proof follows from REF . Tracing out the second register in the eigenvector basis, we see that the first register is in an equally weighted mixture of MATH and MATH. Thus the measured value MATH is the result of measuring either the state MATH or the state MATH. The probability of measuring MATH given the state MATH is equal to the probability of measuring MATH given the state MATH. Since MATH, we can assume we measured MATH given the state MATH and MATH estimates MATH as described in REF . Thus we obtain bounds on MATH that translate, using REF , into the appropriate bounds on MATH.
quant-ph/0005055
When MATH, the analysis is straightforward. For MATH, let MATH denote MATH and MATH. From REF we have that the probability that REF outputs Count-MATH for MATH is MATH . The previous inequalities are obtained by using the fact that MATH for any MATH and MATH, which can be readily seen by considering the NAME expansion of MATH at MATH. Now, assuming REF has output MATH at least MATH times (note that MATH), after REF we have MATH, and by REF (and the fact that MATH) the probability that Count-MATH outputs an integer MATH satisfying MATH is at least MATH. Let us suppose this is the case. If MATH, then MATH and, since MATH and MATH are both integers, we must have MATH. If MATH, then rounding off MATH to MATH introduces an error of at most MATH, making the total error at most MATH. Therefore the overall probability of outputting an estimate with error at most MATH is at least MATH. To upper bound the number of applications of MATH, note that by REF , for any integer MATH, the probability that Count-MATH outputs REF is less than MATH. Thus the expected value of MATH at REF is in MATH.
quant-ph/0005055
Apply REF with MATH. For each MATH, with probability greater than MATH, outcome MATH satisfies MATH, in which case we also have that MATH. Thus, with probability greater than MATH, we have MATH . Suppose this is the case. Then by REF , with probability at least MATH, MATH and consequently MATH . Hence, with probability at least MATH, we have MATH. The number of applications of MATH is MATH. Consider the expected value of MATH for MATH. Since MATH for any MATH, we just need to upper bound the expected value of MATH. By REF , for any MATH, MATH with probability at least MATH. Hence MATH is less than MATH with probability at least MATH. In particular, the minimum of MATH and MATH is greater than the expression given in REF with probability at most MATH. Since any positive random variable MATH satisfying MATH has expectation upper bounded by a constant, the expected value of MATH is in MATH.
quant-ph/0005055
To find MATH, we run REF to REF of algorithm Basic NAME and then set MATH. A proof analogous to that of REF gives that CASE: MATH with probability at least MATH, and CASE: the expected value of MATH is in MATH. This requires a number of evaluations of MATH in MATH, and thus the expected number of evaluations of MATH so far is in MATH. In REF , for some constant MATH to be determined below, we use MATH evaluations of MATH to find an integer MATH satisfying CASE: MATH with probability at least MATH, and CASE: the expected value of MATH is in MATH. Since MATH, finding such an MATH boils down to estimating, with accuracy in MATH, the square root of the probability that MATH takes the value REF on a random point in its domain, or, equivalently, the probability that MATH takes the value REF, where MATH. Suppose that, for some constant MATH, we run MATH twice, with outputs MATH and MATH. By REF , each output MATH REF satisfies MATH with probability at least MATH for every MATH. It follows that MATH has expected value in MATH. Setting MATH, MATH, and MATH ensures that MATH satisfies the two properties mentioned above. The number of evaluations of MATH in REF is in MATH, which is in MATH. In REF , we set MATH. Note that CASE: MATH with probability at least MATH, and CASE: the expected value of MATH is of the order of MATH. In REF , analogously to algorithm NAME, a number of evaluations of MATH in MATH suffices to find an integer MATH such that CASE: MATH with probability at least MATH, and CASE: the expected value of MATH is in MATH. Fortunately, since MATH, we shall only need MATH if MATH. We obtain that, after REF , CASE: MATH is greater than MATH with probability at least MATH, and CASE: the expected value of MATH is in MATH. To derive the latter statement, we use the fact that the expected value of the minimum of two random variables is at most the minimum of their expectations.
Finally, by REF , applying algorithm MATH given such a MATH, produces an estimate MATH of MATH such that MATH (which implies that MATH) with probability at least MATH. Hence our overall success probability is at least MATH, and the expected number of evaluations of MATH is in MATH.
quant-ph/0005106
From the definition of the binary entropy function, we have MATH . Using the expansion MATH for MATH, and simplifying, we get MATH which is the claimed bound.
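The expansion of the binary entropy around its maximum can be checked numerically. The following sketch assumes the standard leading term H(1/2 + eps) = 1 - 2*eps^2/ln 2 + O(eps^4); the redacted formula above may differ, so this is an illustration of the technique, not a reconstruction of it.

```python
from math import log2, log

def H(p):
    """Binary entropy in bits."""
    return -p * log2(p) - (1 - p) * log2(1 - p)

eps = 0.01
exact = H(0.5 + eps)
# Hypothetical leading term of the Taylor expansion around p = 1/2
approx = 1 - 2 * eps ** 2 / log(2)
assert abs(exact - approx) < 1e-6  # remaining error is of order eps**4
```

The quartic remainder shrinks fast enough that even eps = 0.01 agrees to six decimal places.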
quant-ph/0005106
We start with the special case of MATH. By REF , there is a measurement MATH on MATH that realizes the trace norm distance MATH between MATH and MATH. Using NAME's strategy (see, for example, CITE), the resulting distributions can be distinguished with probability MATH. Let MATH denote the classical random variable holding the result of this entire procedure. We have MATH. Thus, by NAME's inequality, MATH . We complete the proof for MATH by noticing that measurements can only reduce the entropy, hence MATH, and that MATH. To prove the theorem for general MATH we reduce it to the MATH case. We do this by partitioning the set of strings into pairs with ``easily'' distinguishable encodings. There is a set of MATH disjoint pairs MATH which together cover MATH such that MATH . The expectation of the LHS over a random pairing is MATH; hence there is a pairing that achieves this MATH. We now fix this pairing. Let MATH denote the set of elements in the MATH-th pair, that is, MATH and MATH. We know that MATH. Let us also denote MATH. From the base case MATH, we know that for any MATH, MATH. Thus we get: MATH . Averaging all the MATH equations yields: MATH . By the concavity of the entropy function, MATH, and by REF. Therefore, MATH . Since MATH is convex, MATH. Also, MATH is monotone increasing for MATH, so MATH. Together this yields MATH, as required.
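The base-case step, distinguishing two states via the measurement that realizes their trace distance, can be illustrated numerically. A minimal sketch, assuming the standard Helstrom bound (success probability (1 + D)/2 for equiprobable states at trace distance D); the particular states |0> and |+> are my own illustrative choice, not taken from the redacted proof.

```python
from math import sqrt

# Two example single-qubit density matrices (real 2x2): |0><0| and |+><+|
rho = [[1.0, 0.0], [0.0, 0.0]]
sigma = [[0.5, 0.5], [0.5, 0.5]]

# Difference matrix delta = rho - sigma, written as [[a, b], [b, c]]
a = rho[0][0] - sigma[0][0]
b = rho[0][1] - sigma[0][1]
c = rho[1][1] - sigma[1][1]

# Eigenvalues of a real symmetric 2x2 matrix
mean, radius = (a + c) / 2, sqrt(((a - c) / 2) ** 2 + b ** 2)
eigs = (mean + radius, mean - radius)

# Trace distance D = (1/2) * (sum of absolute eigenvalues of rho - sigma)
D = 0.5 * (abs(eigs[0]) + abs(eigs[1]))

# Helstrom: optimal success probability for distinguishing equiprobable states
p_success = 0.5 * (1 + D)

assert abs(D - 1 / sqrt(2)) < 1e-12
assert abs(p_success - (0.5 + 1 / (2 * sqrt(2)))) < 1e-12
```

For |0> versus |+> this gives D = 1/sqrt(2) and success probability about 0.854, the familiar cos^2(pi/8).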
quant-ph/0005106
Let MATH. We have: MATH . By REF , MATH, and by REF we have MATH. Thus, MATH.
quant-ph/0005106
By REF , there exists a purification MATH of MATH such that MATH. Since MATH and MATH have the same reduced density matrix on MATH, by REF there is a (local) unitary transformation MATH on MATH such that MATH. Moreover, by REF we have MATH . By REF , MATH, so MATH . This, when combined with the earlier bound on the trace distance between MATH, gives us the required result.
quant-ph/0005106
It is enough to show the lower bound for the two cases when the protocol starts either with MATH or with the other player. Let MATH be the player to start. Note that if we set MATH to a fixed value, say MATH, then we get an instance of MATH. So MATH. But MATH, so the bound of REF applies. Let player MATH be the one to start. Then, observe that if we allow one more message (that is, MATH messages in all), the complexity of the problem only decreases: MATH. So we again get the same bound from REF .
quant-ph/0005106
We prove the theorem by induction on MATH. The case MATH is handled by REF . Suppose the theorem holds for MATH. We prove by contradiction that it holds for MATH as well. If MATH, then by REF there is a MATH message protocol for MATH with the wrong player starting, with error MATH, and with the same communication complexity MATH. This contradicts the induction hypothesis.
quant-ph/0005106
Let MATH denote the message sent by NAME. For a prefix MATH of length MATH, let MATH be the encoding which is prepared by first fixing MATH and then choosing MATH at random and sending the state MATH. Its density matrix is given by MATH . On the one hand, MATH, the number of qubits in MATH. On the other hand, for MATH, let MATH be the error probability when MATH, MATH, and the index MATH. Note that MATH. Moreover, we have MATH, since NAME has a measurement that predicts MATH with probability MATH given MATH. We now claim that MATH. Given this claim, MATH, using the concavity of the entropy function.
quant-ph/0005106
By the definition of mutual information, and using REF , MATH . Moreover, from REF , MATH which proves the claim.
quant-ph/0005106
For concreteness, we assume that MATH is even, so that MATH is NAME. Let MATH be a protocol that solves MATH with respect to MATH with MATH message qubits, error MATH, and MATH messages starting with NAME. We would like to concentrate on inputs where MATH is fixed to a particular value in MATH. This would give rise to an instance of MATH that is also solved by MATH, but with MATH messages. An easy argument shows that the first message carries almost no information about MATH, and we would like to argue that it is not relevant for solving MATH. However, the correctness of the protocol relies on the message, so we try to reconstruct the message with NAME starting the protocol instead. We give the details below. We first derive a protocol MATH which has low error on an input for MATH generated as below (we call the resulting distribution MATH): MATH are chosen uniformly at random from MATH, MATH is set to MATH, MATH is chosen uniformly at random from MATH, and for all MATH, register MATH is initialized to the state MATH (normalized). Let MATH denote the error of MATH with respect to the distribution MATH. Note that MATH, since having the MATH in a uniform superposition over all possible inputs has the same effect on the result of the protocol as having it randomly distributed over the inputs (recall that we require that the input registers are not changed during a quantum protocol). Let MATH be the mutual information MATH in the protocol MATH when run on the mixed state MATH with MATH being chosen randomly. There is a protocol MATH which solves MATH with respect to the distribution MATH with error MATH, MATH message qubits and MATH rounds starting with NAME, such that MATH. The protocol MATH is obtained by slightly modifying the first message in protocol MATH so that it is completely independent of MATH. This only affects the average probability of error.
Moreover, in MATH the first message does not carry any information about MATH and is therefore completely independent of it. Intuitively this means that NAME does not need to get that message at all, or equivalently that she can recreate it herself. This gives a protocol for solving MATH with MATH messages and with NAME starting. There is a protocol MATH that solves MATH with respect to MATH with MATH error, MATH message qubits and MATH messages starting with NAME. Together we get MATH as claimed.
quant-ph/0005106
First consider the case when MATH is fixed to some MATH, but the rest of the inputs are as in MATH. In protocol MATH, NAME applies a unitary transformation MATH on his qubits and computes MATH in register MATH (for the message) and MATH (for NAME 's ancilla and input). In MATH the message computation is slightly different. Instead of computing MATH, NAME computes MATH, where MATH is the uniform superposition over MATH. Clearly, in MATH the state MATH, and hence the message MATH, does not depend on MATH, hence MATH when MATH is chosen randomly. Let us denote by MATH the reduced density matrix of the message register MATH in MATH when the input is drawn according to MATH and MATH, and let the corresponding density matrix for MATH be MATH. Clearly, MATH. Let MATH. By REF we know that MATH. Protocol MATH generates the pure state MATH, while the desired pure state is MATH. NAME, who knows MATH, knows both MATH and MATH. By REF there is a local unitary transformation MATH acting on register MATH alone, such that MATH . The next step in protocol MATH is that NAME applies the transformation MATH to his register MATH. After that, protocol MATH proceeds exactly as in MATH. Therefore, for a given MATH, the probability that MATH and MATH disagree on the result is at most MATH, and the error probability of MATH on MATH is at most MATH where the second step follows from NAME 's inequality.
quant-ph/0005106
Protocol MATH solves an instance of MATH. NAME is given an input MATH and NAME is given an input MATH. The protocol proceeds as follows. NAME and NAME first reduce the problem to a MATH instance taken from the distribution MATH for a random MATH. To do that, NAME picks MATH at random, sets MATH and sends it to NAME; NAME sets MATH and NAME sets MATH; NAME picks MATH for MATH; and NAME initializes each register MATH for MATH with MATH (normalized). Notice that if NAME and NAME run the protocol MATH over this input, then they get the answer MATH with probability of error at most MATH . We claim that MATH, where MATH is the length of the message MATH. Hence MATH. NAME and NAME do not run the protocol MATH itself, but a modification of it in which NAME sends the first message instead of NAME, thus reducing the number of rounds to MATH. Let MATH be the reduced density matrix of register MATH holding the first message that NAME sends to NAME in MATH, for the input given above. By REF , we know that MATH does not depend on MATH. So MATH is known in advance to NAME. NAME starts the protocol MATH by purifying MATH. More specifically, let MATH be an eigenvector basis for MATH with real and positive eigenvalues MATH. NAME constructs the superposition MATH over two registers MATH (containing the eigenvectors) and MATH (containing the index MATH), and sends register MATH to NAME. The state of the system after this message in MATH is MATH whereas in MATH it is MATH . The reduced density matrix of MATH on registers MATH is the same as the reduced density matrix of MATH on registers MATH. By REF , NAME has a local unitary transformation MATH (operating on his register MATH) that transforms MATH to MATH. NAME applies MATH, and NAME and NAME then simulate the rest of the protocol MATH. From this stage on, the runs of the protocols MATH and MATH are identical and have the same communication complexity and success probability.
quant-ph/0005106
Note that MATH is the same as the mutual information MATH when MATH is run on the uniform distribution on MATH. So we prove the claim for the latter. For any MATH, MATH. Therefore by REF (compare REF) we have MATH . As the first message MATH contains only MATH qubits, we have MATH.
cond-mat/0006367
We describe a proof which uses knowledge about NAME functions. Let MATH be a partition, that is, a nonincreasing sequence of nonnegative integers. Then the NAME function MATH is defined by (see CITE or CITE) MATH . It is well-known that a combinatorial description of NAME functions may be given in terms of (semistandard) tableaux. A filling of the cells of the NAME diagram of MATH with elements of the set MATH which is weakly increasing along rows and strictly increasing along columns is called a (semistandard) tableau of shape MATH. REF shows such a semistandard tableau of shape MATH. The weight MATH of a tableau MATH is defined as MATH where the product is over all entries MATH of MATH. Given this terminology, the NAME function MATH is also given by (see CITE) MATH where the sum is over all tableaux MATH of shape MATH with entries MATH. In CITE it was proved that the number of stars with MATH branches, as described above, can be determined by using a standard bijection between stars and tableaux; see REF . First, label each down-step by the MATH-coordinate of its end point, so that a step from MATH to MATH is labelled by MATH; see REF . Then, out of the labels of the MATH-th branch, form the MATH-th column of the corresponding tableau. The resulting array of numbers is indeed a tableau. This can be readily seen, since the entries are trivially strictly increasing along columns, and they are weakly increasing along rows because the branches do not touch each other. Thus, given a star with MATH branches, the MATH-th branch running from MATH to MATH, MATH, one obtains a tableau with column lengths MATH. The shape (the vector of row lengths) can be easily extracted from the column lengths. This correspondence between stars and tableaux is a bijection between stars with MATH branches, the MATH-th branch running from MATH to MATH, MATH, and tableaux with entries at most MATH and column lengths MATH.
Clearly, the number of these tableaux is given by REF with MATH and MATH the partition whose NAME diagram has column lengths MATH. On the other hand, it is well-known that (see CITE, CITE) MATH where MATH and MATH are the content and the hook length of the cell MATH. The content MATH of a cell MATH is MATH, whereas the hook length MATH of a cell MATH is the number of cells in the same row to the right of MATH, plus the number of cells in the same column below MATH, plus one. The expression obtained can, with some work, be converted into REF.
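The hook lengths and contents just defined can be exercised on a small example. A minimal sketch, assuming the classical hook content formula s_lambda(1^n) = prod over cells of (n + content)/(hook length) (consistent in spirit with, though not necessarily identical to, the redacted formula): it counts semistandard tableaux by brute force and compares with the product.

```python
from itertools import product
from fractions import Fraction

def ssyt_count(shape, n):
    """Brute force: semistandard tableaux of a partition shape with entries
    in {1,...,n} -- rows weakly increasing, columns strictly increasing."""
    cells = [(r, c) for r, length in enumerate(shape) for c in range(length)]
    count = 0
    for fill in product(range(1, n + 1), repeat=len(cells)):
        t = dict(zip(cells, fill))
        count += all((c == 0 or t[(r, c - 1)] <= t[(r, c)]) and
                     (r == 0 or t[(r - 1, c)] < t[(r, c)])
                     for (r, c) in cells)
    return count

def hook_content_count(shape, n):
    """Hook content formula: s_lambda(1^n) = prod (n + content) / hook."""
    conj = [sum(1 for length in shape if length > c) for c in range(shape[0])]
    value = Fraction(1)
    for r, length in enumerate(shape):
        for c in range(length):
            content = c - r                        # column index minus row index
            hook = (length - c) + (conj[c] - r) - 1  # arm + leg + 1
            value *= Fraction(n + content, hook)
    return value

assert ssyt_count((2, 1), 3) == hook_content_count((2, 1), 3) == 8
assert ssyt_count((2, 2), 3) == hook_content_count((2, 2), 3) == 6
```

Exact rational arithmetic via `Fraction` guarantees the product is an integer whenever the formula applies.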
cond-mat/0006367
Using the correspondence between stars and tableaux described in the proof of REF , we see that we must count tableaux with entries at most MATH having at most MATH columns. This enumeration problem (actually, the corresponding ``MATH-enumeration'' problem) is known as the NAME-NAME conjecture, and was first proved by NAME around REF (but appeared only much later as CITE). Since then, many further proofs have been given; see CITE, CITE, CITE, CITE, CITE, CITE for a selection. What all these proofs share, more or less explicitly, is the following identity, which relates NAME functions and odd orthogonal characters of rectangular shape, MATH . The odd orthogonal characters MATH, where MATH is a shorthand notation for MATH, etc., and where MATH is a MATH-tuple MATH of integers, or of half-integers, are defined by MATH . Recall that the NAME functions MATH are defined by REF. While NAME functions are polynomials in MATH (compare REF), odd orthogonal characters MATH are polynomials in MATH. They have a combinatorial description in terms of certain tableaux as well; see CITE, CITE, CITE. A variety of different proofs of REF have been given. There are proofs by a combination of combinatorial and manipulatory arguments (compare CITE, CITE, CITE), by use of the theory of NAME-NAME functions (compare CITE), and by use of combinatorial descriptions of orthogonal characters coming from algebraic geometry, due to NAME, NAME, NAME, NAME and NAME (compare CITE, CITE, CITE). Eventually, a completely elementary proof was found by CITE. However, what we now require is the evaluation of the left-hand side of REF at MATH, because this yields, in view of REF, exactly the number of tableaux under consideration here.
In order to evaluate the right-hand side of REF for MATH, we may use well-known formulae for the evaluation of odd orthogonal characters at these values of the MATH's, namely (see CITE, CITE), MATH where, again, MATH is the hook length of cell MATH, and MATH is given by MATH . Here, MATH denotes the partition conjugate to MATH (see CITE for the definition of the conjugate partition). Using this formula for MATH in REF with MATH finally leads to REF.
cond-mat/0006367
We know that the number of stars with MATH branches of length MATH is given by the product formula MATH . Therefore, proving REF amounts to appropriately rewriting REF and then applying NAME's formula. For convenience, let us introduce the notations MATH and MATH. Then the product REF can be rewritten as follows, MATH . Our aim is to write this as a product whose range depends only on MATH. To do so, we need to distinguish between the cases of MATH being even or odd. If MATH is even, then REF can be written as MATH . Application of NAME's formula, and some simplification, yields the first line of REF. If MATH is odd, then REF can be written as MATH . A second application of NAME's formula, and some simplification, yields the second line of REF.
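The asymptotic step applies a redacted factorial formula. If it is Stirling's approximation n! ~ sqrt(2*pi*n)*(n/e)^n (an assumption on my part, since the name is hidden behind NAME), a quick numeric check looks as follows.

```python
from math import factorial, sqrt, pi, e

def stirling(n):
    """Stirling's approximation: n! ~ sqrt(2*pi*n) * (n/e)**n.
    Offered as a hypothetical stand-in for the redacted formula."""
    return sqrt(2 * pi * n) * (n / e) ** n

# The relative error decays like 1/(12n)
for n in (10, 30, 50):
    assert abs(factorial(n) / stirling(n) - 1) < 1 / (10 * n)
```

Even at n = 10 the approximation is within one percent, which is why it is the standard tool for extracting leading-order asymptotics from factorial products like the one above.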
cond-mat/0006367
The situation is more difficult here, as we do not have a nice closed product formula (such as REF) for MATH-friendly stars. For simplicity, we treat the case of even MATH only, the case of odd MATH being completely analogous. Consider a MATH-friendly star with MATH branches of length MATH. It consists of a family MATH of non-crossing lattice paths, MATH running from MATH to some point on the line MATH, MATH. REF displays an example for MATH and MATH. Shifting the MATH-th path, MATH, by MATH units up, we obtain a family MATH of non-intersecting paths, MATH running from MATH to some point on the line MATH, MATH; see REF . Clearly, this correspondence is a bijection. The standard way to find the number of these families of non-intersecting lattice paths is to resort to REF and thus obtain a Pfaffian for this number. However, it seems difficult to derive asymptotic estimates from this Pfaffian, in particular since NAME's reductions (CITE, see also CITE) do not seem to apply. Therefore we choose a different path. For fixed MATH, the number of families MATH of non-intersecting lattice paths, MATH running from MATH to MATH, is given by the corresponding NAME-NAME determinant (see REF ), MATH . We have to sum REF over all MATH, and approximate the sum as MATH tends to infinity. (It is here that we use the assumption that MATH is even: any path from a point MATH reaches the vertical line MATH in a point with even MATH-coordinate.) We content ourselves with a rough outline, as our approach is very much in the spirit of NAME's asymptotic computation CITE for NAME diagrams in a strip, and as the proof of REF contains a detailed computation of the same kind, showing all the essentials in the simpler case of the estimation of a one-fold sum (as opposed to the MATH-fold sum that we are considering here).
As in NAME's computation, the expression to be estimated is transformed until an integral is obtained, which can then be evaluated by a limit case of NAME's famous integral CITE. To begin with, we bring the determinant REF into a more convenient form by taking out some common factors, MATH where MATH denotes the standard shifted factorial, MATH, MATH, MATH. The determinant is a polynomial in MATH and the MATH's. It suffices to extract the leading term, because the contributions of the lower-order terms to the overall asymptotics are negligible. In order to do so, we observe that, more precisely, the determinant in REF is a polynomial in MATH and the MATH's of degree MATH which is divisible by MATH. The leading term is MATH . Here we used the NAME determinant evaluation to evaluate the determinant in the second line. On again ignoring terms whose contribution to the overall asymptotics is negligible, we obtain MATH as the dominant term in the determinant in REF. Now we have to multiply this expression by the product on the right-hand side of REF, and then sum the resulting expression over all MATH. In fact, we may extend the range of summation and sum over all MATH, because REF is zero if any two of the MATH's are equal. For each MATH separately, the sum over MATH is estimated in the same way as is done in the proof of REF for the sum over MATH, now using REF also for MATH other than REF. The result is that we obtain MATH as an estimate for the number of stars under consideration. In the integral we perform the substitution MATH. This gives MATH . At this point, the absolute values in the integrand are superfluous. However, with the absolute values, the integrand is invariant under permutations of the MATH's. Hence, the integral equals MATH . This integral is the special case MATH of NAME's integral (see CITE) MATH where MATH means MATH even if MATH is not an integer.
Substitution of this into REF and application of NAME 's formula to the factorials in the product in REF yields REF after some simplification.
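Determinant evaluations of the kind used in this computation can be checked numerically on small cases. As an illustration only (not necessarily the redacted evaluation named above), the classical Vandermonde determinant det(x_i^(j-1)) = prod over i < j of (x_j - x_i):

```python
from itertools import permutations
from math import prod

def det(m):
    """Leibniz-formula determinant (exact for integer matrices; fine for
    small sizes, since it sums over all n! permutations)."""
    n = len(m)
    total = 0
    for perm in permutations(range(n)):
        # Sign of the permutation via its inversion count
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        total += sign * prod(m[i][perm[i]] for i in range(n))
    return total

x = [2, 3, 5, 7]
V = [[xi ** j for j in range(len(x))] for xi in x]
assert det(V) == prod(x[j] - x[i]
                      for i in range(len(x)) for j in range(i + 1, len(x)))
```

Integer arithmetic keeps the check exact, so the identity is verified with no floating-point slack.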
cond-mat/0006367
The first assertion follows immediately from REF since the number of MATH-friendly stars is bounded below by the number of ``genuine'' stars, and is bounded above by the number of MATH-friendly stars in the TK model. So, explicitly, we may choose MATH and MATH . The second assertion can be proved as follows. Clearly, for any MATH we have MATH. To see that in fact strict inequality holds, we identify a set of MATH-friendly stars which are not MATH-friendly stars, with the property that its cardinality is MATH (as is the cardinality of MATH-friendly stars). As this set of MATH-friendly stars we may choose families MATH of paths such that MATH runs from MATH through MATH to the line MATH, MATH, and MATH and MATH touch each other along MATH consecutive edges. (This is indeed possible: let MATH start with an up-step and MATH start with a down-step, then let MATH and MATH go up and down in parallel for MATH steps, then let MATH continue with a down-step, thus reaching MATH, and MATH continue with an up-step, thus reaching MATH. As MATH, such paths MATH and MATH do indeed touch each other along MATH consecutive edges.) If we disregard the portion of the paths between MATH and MATH, then what remains is a MATH-friendly star with MATH branches of length MATH. The cardinality of these is at least the cardinality of ``genuine'' stars with MATH branches of length MATH, which, asymptotically, is given by REF with MATH replaced by MATH. Up to some constant, this is MATH, which is MATH, as desired.
cond-mat/0006367
CASE: By first principles. As in CITE, we could directly use the main theorem on non-intersecting lattice paths (see REF ) to write the number of stars in question in the form MATH where MATH denotes the set of all lattice paths from MATH to MATH that do not go below the MATH-axis. By the ``reflection principle'' (see, for example, CITE), each path number MATH could then be written as a difference of two binomials. It was shown in CITE how to evaluate the resulting determinant (actually, a MATH-analogue was evaluated there). The evaluation relies on the determinant lemma CITE. However, since we are only interested in plain enumeration, there is a simpler way. As a first step, we may freely attach MATH up-steps at the beginning of the MATH-th branch, MATH (see REF ). It is obvious that the number of stars with starting points MATH (instead of MATH) and end points as in the statement of the theorem, each branch not going below the MATH-axis (see REF ), is exactly the same as the number of stars in the statement of the theorem (see REF ). If we apply the main theorem on non-intersecting lattice paths now, then we again obtain a determinant for the number in question, namely the determinant REF with MATH replaced by MATH, where MATH denotes the set of all lattice paths from MATH to MATH that do not go below the MATH-axis. Again by the reflection principle, the path number MATH can be easily computed, so that the determinant REF equals MATH . Now we remove as many factors from the determinant as possible. In that way we obtain MATH . The determinant in REF can clearly be rewritten as MATH . This determinant can be reduced by elementary row manipulations to MATH, which is evidently a NAME determinant and therefore equals MATH . Substituting this in REF gives REF. CASE: Using knowledge about symplectic characters. The (even) symplectic character MATH is defined by (see CITE) MATH .
Proctor CITE also defined odd symplectic characters MATH, which can, for example, be defined by MATH where MATH denotes the MATH-th complete homogeneous symmetric function. It is well-known that a combinatorial description of symplectic characters is given in terms of symplectic tableaux. Let MATH be a partition. A symplectic tableau of shape MATH is a semistandard tableau of shape MATH with the additional property that MATH . It is obvious that, because of the weak increase along rows, this condition may be restricted to the entries in the first column. Let MATH be fixed, and let MATH be a symplectic tableau with entries at most MATH. The weight MATH of the symplectic tableau MATH is defined by MATH where MATH denotes the entry in cell MATH of MATH. Note that entries MATH do not contribute to the weight. Given this terminology, the (even) symplectic character MATH is also given by (see CITE, CITE) MATH where the sum is over all symplectic tableaux MATH of shape MATH with entries MATH, whereas the odd symplectic character MATH is also given by (see CITE) MATH where the sum is over all symplectic tableaux MATH of shape MATH with entries MATH. The formula for symplectic characters needed here is (see CITE, CITE and CITE) MATH where MATH may be even or odd, where, again, MATH is the hook length of cell MATH, and MATH is given by MATH . Here again, MATH denotes the partition conjugate to MATH. Now we use a slight variant of the correspondence described in the proof of REF . Given a star, we label each down-step by the MATH-coordinate of its starting point, that is, a step from MATH to MATH is labelled by MATH; see REF . Then, again, from the labels of the MATH-th branch, we form the MATH-th column of the corresponding tableau. It is evident that, under this correspondence, the condition that the branches do not go below the MATH-axis translates exactly into REF .
Therefore, in that manner we obtain a bijection between stars with MATH branches, the MATH-th branch running from MATH to MATH, MATH, and never going below the MATH-axis, and symplectic tableaux with entries at most MATH and with column lengths MATH. So REF solves this enumeration problem and gives REF upon some manipulation.
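The reflection-principle path counts invoked in the first proof can be verified by brute force on small cases. A minimal sketch (my own illustration of the principle, not the redacted determinant entries themselves): the number of n-step, step-size +-1 paths from height 0 to height k >= 0 that never go below the x-axis equals C(n, (n+k)/2) - C(n, (n+k)/2 + 1).

```python
from itertools import product
from math import comb

def paths_brute(n, k):
    """Count +-1 step paths of length n from height 0 to height k
    that never dip below the x-axis."""
    count = 0
    for steps in product((1, -1), repeat=n):
        h, ok = 0, True
        for s in steps:
            h += s
            if h < 0:
                ok = False
                break
        count += ok and (h == k)
    return count

def paths_reflection(n, k):
    """Reflection principle: a difference of two binomial coefficients."""
    if k < 0 or (n + k) % 2:
        return 0
    m = (n + k) // 2
    return comb(n, m) - comb(n, m + 1)

for n in range(9):
    for k in range(n + 1):
        assert paths_brute(n, k) == paths_reflection(n, k)
```

For k = 0 and even n this recovers the Catalan numbers, e.g. paths_reflection(4, 0) = 2.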
cond-mat/0006367
Using the correspondence between stars restricted by a wall and symplectic tableaux described in the second proof of REF , we see that we want to count symplectic tableaux with entries at most MATH having at most MATH rows and at most MATH columns. This problem was encountered before by Proctor CITE. (He was actually interested in enumerating plane partitions of trapezoidal shape. However, he demonstrates in CITE that these are in bijection with the symplectic tableaux we are considering here.) The solution of the problem lies in the following identity, which relates symplectic characters and NAME functions of rectangular shape: MATH . Here MATH is short for MATH, with MATH occurrences of MATH. Recall that in the argument of a symplectic character MATH the term MATH denotes the two arguments MATH. Actually, an identity for ``universal'' characters holds true, see CITE. This underlying ``universal'' character identity is proved by a combinatorial rule due to CITE; see CITE and CITE. Clearly, REF , used in REF with MATH, MATH, MATH, immediately gives what we want. With some work the resulting expression can be transformed into REF.
cond-mat/0006367
We know that the number of stars with MATH branches of length MATH which do not go below the MATH-axis, and whose end points have MATH-coordinates at least MATH, MATH, is given by the product formula MATH . Application of NAME 's formula yields REF after a short calculation.
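The redacted formula applied here is presumably Stirling's formula; assuming that identification, a quick numerical illustration (our own, unrelated to the specific product above) shows how sharp even the leading-order approximation is:

```python
import math

def stirling(n):
    """Leading-order Stirling approximation: n! ~ sqrt(2*pi*n) * (n/e)**n."""
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

# The relative error behaves like 1/(12n), so it shrinks as n grows.
for n in (10, 50, 150):
    rel = abs(math.factorial(n) - stirling(n)) / math.factorial(n)
    assert rel < 1 / (12 * n)
```

This is exactly the kind of estimate that turns factorial products into the polynomial-in-MATH asymptotics quoted in the theorem.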
cond-mat/0006367
Again, the situation is more difficult here, as we do not have a nice closed product formula (such as REF) for MATH-friendly stars. We shall follow very closely the line of argument of the proof of REF . Again, for simplicity, we treat only the case of even MATH and MATH, the other cases being completely analogous. As in the proof of REF , we begin by transforming MATH-friendly stars into families of non-intersecting lattice paths by shifting the MATH-th path up by MATH units. Thus, MATH-friendly stars with MATH branches of length MATH which do not go below the MATH-axis are in bijection with families MATH of non-intersecting lattice paths, MATH running from MATH to MATH, MATH, for some integers MATH with MATH. For fixed MATH, the number of such families of non-intersecting lattice paths is given by the corresponding NAME - NAME determinant (see REF , and compare also the arguments in the first proof of REF , particularly the application of the reflection principle), MATH . We have to sum REF over all MATH, and approximate the sum as MATH tends to infinity. As in the proof of REF , the expression to be estimated is transformed until an integral is obtained, which can then be evaluated by a limit case of NAME 's famous integral CITE. It is, however, a different limit case that we need here. To begin with, we bring the determinant REF into a more convenient form by taking out some common factors, MATH where, as before, MATH denotes the standard shifted factorial, MATH, MATH, MATH. The determinant is a polynomial in MATH and the MATH's. It suffices to extract the leading term, because the contributions of the lower-order terms to the overall asymptotics are negligible. In order to do so, we consider the more general determinant MATH . Clearly, we regain the determinant in REF for MATH. The determinant in REF is a polynomial in MATH and the MATH's of degree MATH. It is divisible by MATH. The leading term of REF is MATH . 
Here we used REF to evaluate the determinant in the second line. Now we recall the observation (see the paragraph containing REF) that the odd orthogonal character is a certain NAME polynomial in its variables. In the present context it implies that the very last line in our computation is a polynomial in the quantities MATH, MATH, MATH, consisting of a sum of exactly MATH monomials, the evaluation of the orthogonal character at all REF's being given by REF itself. Hence, the determinant REF equals MATH . The leading term of the determinant in REF is obtained from this expression under the substitution of MATH, MATH. This substitution turns REF into MATH . Now we have to multiply this expression by the product on the right-hand side of REF, and then sum the resulting expression over all MATH. In fact, we may extend the range of summation and sum over all MATH, because REF is zero if any two of the MATH's coincide. For each MATH separately, the sum over MATH is estimated in the same way as the sum over MATH in the proof of REF , now using REF also for MATH other than REF. The result is that we obtain MATH as an estimate for the number of stars under consideration. In the integral we perform the substitution MATH. After dropping terms which are asymptotically negligible, we obtain MATH . At this point, the absolute values in the integrands are superfluous. However, with the absolute values, the integrand is invariant under permutations of the MATH's and under sign changes of the MATH's. Hence, the integral equals MATH . This integral is the special case MATH, MATH of the integral (see CITE) MATH . Substituting this into REF and applying NAME 's formula to the factorials in the product in REF yields REF after some simplification.
cond-mat/0006367
We know that the number of watermelons with MATH branches of length MATH and with deviation MATH (where MATH) is given by REF. Therefore our task is to estimate the sum MATH . We follow the standard way of carrying out such estimations, as described in CITE. The dominant terms in the sum on the right-hand side are those corresponding to MATH's which are near MATH. Consequently, we split the sum into three parts: the terms with ``small'' MATH, the terms with ``large'' MATH, and the terms with MATH. Let MATH denote the summand on the right-hand side of REF, MATH . Then the precise way we split the sum in REF is MATH where it is tacitly understood that all sums are only over those MATH which are of the same parity as MATH. We will show that the contributions of the first and second terms in REF are negligible, and we will compute the contribution of the third term. To see that the first term in REF is negligible, it is enough to observe that every summand MATH with MATH is bounded above by MATH, and to compute the asymptotics of MATH by means of NAME 's formula, MATH because the term MATH which appears in the denominator of the expression in the next-to-last line grows superexponentially. Therefore, the first term in REF is MATH . The second term in REF is equal to the first term, and hence has the same order of magnitude. Now we turn to the third term in REF. To carry out our computations, we would have to distinguish between two cases, depending on whether MATH is even or odd. The computations in the two cases are, however, rather similar. Therefore we carry them out in detail just for the case that MATH is even and leave it to the reader to complete the computations for the other case. In the case that MATH is even, the third term in REF, after replacement of MATH by MATH, becomes MATH . For nonnegative MATH, we may rewrite the summand as MATH . For nonpositive MATH there is a similar computation which leads to the same result. 
We have to sum REF over MATH between MATH and MATH. In that range, the MATH term is at worst MATH. Thus, the sum REF turns into MATH . If we extend the sum to run over all integers MATH then we make an error which is bounded by MATH. The complete sum MATH can be approximated by REF with MATH, MATH and MATH. The asymptotics of the product in REF is easily determined by using NAME 's formula. Thus we obtain REF.
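The splitting into negligible tails and dominant central terms is the usual local-limit picture for binomial-type sums. As a toy illustration (our own example, unrelated to the specific sum above), the mass of a sum Σ_j C(2m, m+j) f(j/√m) concentrates near j = 0 and is captured by a Gaussian integral:

```python
import math

def central_binomial_average(m, f):
    """Compute 4**(-m) * sum_j C(2m, m+j) * f(j / sqrt(m)).  The terms with
    |j| much larger than sqrt(m) are negligible, and as m grows the sum
    tends to (1/sqrt(pi)) * integral of f(t) * exp(-t**2) dt."""
    return sum(math.comb(2 * m, m + j) * f(j / math.sqrt(m))
               for j in range(-m, m + 1)) / 4 ** m

# For f(t) = t**2 the Gaussian limit is (1/sqrt(pi)) * sqrt(pi)/2 = 1/2; in
# fact the value is exactly 1/2 for every m, since sum_j C(2m, m+j) * j**2
# equals 4**m * m/2 (the variance of a symmetric binomial).
assert abs(central_binomial_average(200, lambda t: t * t) - 0.5) < 1e-9
```

The same mechanism underlies the estimate above: only the central MATH-range contributes, and the tail terms are killed by the superexponential denominator.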
cond-mat/0006367
The first step is the same as in the proof of REF . We transform MATH-friendly watermelons into families of non-intersecting lattice paths by shifting the MATH-th path up by MATH units. Thus, MATH-friendly watermelons with MATH branches of length MATH and deviation MATH are in bijection with families MATH of non-intersecting lattice paths, MATH running from MATH to MATH, MATH. The number of such families of non-intersecting lattice paths is given by the corresponding NAME - NAME determinant (see REF ), MATH . Consequently, our task is to sum REF over all MATH, MATH, and approximate the sum as MATH tends to infinity. The procedure is quite similar to the proof of REF , with the complication that REF cannot be written in closed form. To begin with, we bring the determinant REF into a more convenient form by taking out some factors, MATH . The determinant in REF is a polynomial in MATH and MATH, of degree at most MATH, say MATH. We claim that, for our asymptotic considerations, it is sufficient to consider just the leading terms on the right-hand side of REF. To see this, consider a single term MATH on the right-hand side of REF. It has to be multiplied by the product on the right-hand side of REF, and the resulting expression is summed over all MATH, MATH, so that one obtains MATH . This is now handled in the same way as the sum REF. In essence, it is the expression (compare REF) MATH that needs to be estimated. By REF with MATH, MATH and MATH, this is MATH . Obviously, the larger MATH and MATH are, the larger the contribution of the corresponding term, the largest contributions coming from those terms for which MATH is maximal. As it turns out, the actual degree in MATH and MATH of the leading terms of the determinant in REF is significantly smaller than MATH. To see what it is, and what the leading terms are, we consider a more general determinant, MATH . Clearly, we regain the determinant in REF for MATH. The determinant in REF is a polynomial in MATH and the MATH's of degree MATH. 
It is divisible by MATH. Therefore, after the substitution of MATH, MATH, the degree in MATH and MATH of the determinant in REF is at most MATH. The leading term of REF is MATH . (Clearly, the determinant evaluation used to get from the first to the second line is the NAME determinant evaluation.) The leading term of the determinant in REF is this expression under the substitution of MATH, MATH, which equals MATH . Hence, in order to compute the asymptotics of the sum of REF over all MATH, it suffices to determine the asymptotics of the sum over all MATH of (recall REF and the remark after REF) MATH . As the considerations leading to REF showed, this can be handled completely analogously to the computation of the asymptotics of the sum REF. The result is exactly REF.
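The redacted determinant evaluation invoked in the parenthetical remark is plausibly the Vandermonde determinant, det(x_i^j) = Π_{i&lt;j}(x_j − x_i); assuming that identification, here is a quick brute-force confirmation on sample values (our own example):

```python
import math
from itertools import permutations

def det(M):
    """Determinant via the Leibniz formula (fine for small matrices)."""
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        inversions = sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
        total += (-1) ** inversions * math.prod(M[i][p[i]] for i in range(n))
    return total

# Vandermonde evaluation: det(x_i^j), 0 <= j < n, equals prod_{i<j} (x_j - x_i).
xs = [2, 5, 7, 11]
V = [[x ** j for j in range(len(xs))] for x in xs]
expected = math.prod(xs[j] - xs[i]
                     for i in range(len(xs)) for j in range(i + 1, len(xs)))
assert det(V) == expected
```

Evaluations of exactly this type are what reduce the leading term of the general determinant to a closed product.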
cond-mat/0006367
The number of watermelons with MATH branches of length MATH and with deviation MATH (where MATH) which do not go below the MATH-axis is given by REF. We wish to sum this expression over all MATH with MATH and then approximate it. This is done completely analogously to the proof of REF . The only difference is that here the sum does not extend to negative MATH. Again, the dominant terms are those for MATH. Therefore we concentrate on the terms for MATH. The other terms are negligible, as is seen in the same way as in the proof of REF . To carry out the computations, we would again have to distinguish between the two cases of MATH even or odd. Here too, the computations are rather parallel. So let us assume in the following that MATH is even. If we carry out the computations parallel to those leading from REF to REF, then we see that the sum over all even MATH of REF is asymptotically MATH . The sum in this expression can be split into a linear combination of sums of the form MATH. The asymptotics of the latter are given by REF . In particular, this implies that the largest contribution comes from the term where the exponent MATH of MATH is maximal. This term is MATH, which by REF gives a contribution of MATH. The asymptotics of the product in REF is easily determined by using NAME 's formula. Putting everything together, we obtain REF.
cond-mat/0006367
As we have already seen, in the context of MATH-friendly models the situation is more difficult, because we do not have a nice closed product formula (such as REF) for the number of MATH-friendly watermelons. The first step is the same as in the proof of REF . We transform MATH-friendly watermelons into families of nonintersecting lattice paths by shifting the MATH-th path up by MATH units. Thus, MATH-friendly watermelons with MATH branches of length MATH and deviation MATH which do not go below the MATH-axis are in bijection with families MATH of nonintersecting lattice paths which do not go below the MATH-axis, MATH running from MATH to MATH, MATH. The number of these families of nonintersecting lattice paths is given by the NAME - NAME - NAME determinant (see REF ), MATH where MATH denotes the set of all lattice paths from MATH to MATH that do not go below the MATH-axis. By the ``reflection principle'' (see, for example, CITE), each path number MATH can then be written as a difference of two binomials. Thus we obtain the determinant MATH for the number of watermelons under consideration. We have to sum REF over all MATH, MATH, and approximate the sum as MATH tends to infinity. Having carried out many similar proofs before, in particular the proof of REF , we have seen how to approach this problem. As in the proof of REF , we need to determine the leading terms of REF. Once more, we bring the determinant REF into a more convenient form by taking out some factors, MATH . This determinant is exactly the determinant REF with MATH. Thus, the leading term of the determinant in REF is obtained from REF under the substitution of MATH, MATH. This substitution turns REF into MATH . When expanded, the term in this expression which will give the largest contribution is MATH . 
What remains is to multiply this expression by the product on the right-hand side of REF, then sum the resulting expression over all MATH with MATH mod REF, manipulate the summand analogously to the manipulations in REF, and finally estimate the result using REF . After simplification, the final result is REF.
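The reflection-principle step used in this proof, writing each path count as a difference of two binomials, is easy to verify numerically. In this sketch the normalization is our own (±1 steps, length n, start height a, end height b, both nonnegative, path confined to heights ≥ 0); reflecting the starting point across −1 produces the subtracted binomial:

```python
from math import comb
from itertools import product

def paths_above_axis(n, a, b):
    """Count (+1/-1)-step paths of length n from height a to height b that
    never go below 0.  Reflection principle: subtract the paths starting
    from the reflected height -a-2; these biject with the bad paths."""
    k = (n + b - a) // 2                      # number of up-steps needed
    if (n + b - a) % 2 or k < 0 or k > n:
        return 0
    return comb(n, k) - comb(n, (n + b + a + 2) // 2)

def brute_force(n, a, b):
    """Direct enumeration over all 2**n step sequences, for checking."""
    count = 0
    for steps in product((1, -1), repeat=n):
        h, ok = a, True
        for s in steps:
            h += s
            if h < 0:
                ok = False
                break
        count += ok and h == b
    return count

for n in range(9):
    for a in range(4):
        for b in range(5):
            assert paths_above_axis(n, a, b) == brute_force(n, a, b)
```

The determinant in the proof has exactly such binomial differences as its entries, one per pair of endpoints.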
cond-mat/0006367
We first prove REF for MATH. The NAME summation theorem (see CITE, CITE) says that MATH for suitable functions MATH. It is valid, for example, for continuous, absolutely integrable functions MATH of bounded variation. The choice of MATH in REF gives REF for MATH after a little manipulation. From now on let MATH. In REF we choose MATH . This function does satisfy the above-mentioned requirements. Thus we obtain MATH . (To justify that we may take the latter integral over real MATH instead of over MATH with imaginary part MATH, it suffices to observe that the contour integral of the integrand along the rectangle connecting the extremal points MATH, MATH, MATH, MATH vanishes, and that the integrals along the vertical sides of the rectangle tend to zero as MATH approaches infinity.) Next we approximate the integral which appears in the sum. We split the integral into two parts, MATH . For the first part, that is, for MATH and MATH real, we have MATH. For the second part, that is, for MATH and MATH real, we have MATH. Thus we obtain MATH . The integrals in the last line are easily evaluated by recalling one of the definitions of the gamma function (see CITE), MATH . Indeed, substitution of MATH for MATH and replacement of MATH by MATH yields MATH . Combining all this and substituting back into REF, we get MATH . The appearance of the exponential MATH makes the MATH term ``arbitrarily'' small. Thus, REF with MATH follows immediately. To establish REF in full generality, one proceeds in the same way: one applies the NAME summation REF with MATH instead of MATH, with MATH given by REF as before, and MATH some suitably ``nice'' function with MATH for MATH and MATH for MATH, MATH. Everything else is completely analogous. The result is then obtained by letting MATH.
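The summation theorem used here, stated for continuous, absolutely integrable functions of bounded variation, matches the classical Poisson summation formula; assuming that identification, a numerical sanity check with a Gaussian (our own example, whose Fourier transform is again a Gaussian) recovers the theta functional equation:

```python
import math

def gaussian_sum(t, terms=60):
    """theta(t) = sum over all integers n of exp(-pi * n**2 * t)."""
    return 1 + 2 * sum(math.exp(-math.pi * n * n * t) for n in range(1, terms))

# Poisson summation applied to f(x) = exp(-pi * x**2 * t) gives the
# functional equation theta(t) = theta(1/t) / sqrt(t), since the Fourier
# transform of f is f^hat(k) = t**(-1/2) * exp(-pi * k**2 / t).
for t in (0.5, 1.0, 2.3):
    assert abs(gaussian_sum(t) - gaussian_sum(1.0 / t) / math.sqrt(t)) < 1e-12
```

The proof above applies the same identity to a less symmetric function, which is why the contour-shifting justification is needed there.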
cs/0006008
It is easy to see that from the time process MATH becomes active, it performs each unit of work at most once, partially checkpoints each subchunk at most once (and hence performs at most MATH partial checkpoints), and fully checkpoints each chunk at most once (and hence performs at most MATH full checkpoints). Each partial checkpoint consists of a broadcast to process MATH's group, and hence involves at most MATH messages and one round. Thus, process MATH spends at most MATH rounds on partial checkpoints, and sends at most MATH messages when performing partial checkpoints. During a full checkpoint, process MATH broadcasts once to each group other than its own, and broadcasts at most MATH times to its own group. Each broadcast involves at most MATH messages and one round, and there are MATH groups. Thus, process MATH sends fewer than MATH messages when performing full checkpoints, and takes fewer than MATH rounds doing so. The required bounds follow immediately.
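The round and message accounting in this proof can be mirrored in a small cost model. Everything below is hypothetical: the parameter names (g groups of s processes each, `chunks` chunks, `subchunks` subchunks per chunk) and the bound of s own-group broadcasts per full checkpoint are our own placeholders for the quantities redacted above.

```python
def worst_case_costs(g, s, chunks, subchunks):
    """Hypothetical worst-case round/message counts for one process,
    following the accounting in the proof: each subchunk is partially
    checkpointed at most once, each chunk fully checkpointed at most once,
    and every broadcast costs at most s - 1 messages and one round."""
    broadcasts_per_partial = 1             # one broadcast to the process's own group
    broadcasts_per_full = (g - 1) + s      # once per other group, at most s to its own
    partial_rounds = chunks * subchunks * broadcasts_per_partial
    full_rounds = chunks * broadcasts_per_full
    partial_msgs = partial_rounds * (s - 1)
    full_msgs = full_rounds * (s - 1)
    return partial_rounds + full_rounds, partial_msgs + full_msgs

rounds, msgs = worst_case_costs(g=4, s=8, chunks=10, subchunks=5)
assert msgs == rounds * 7                  # every checkpoint round carries <= s-1 messages
assert rounds == 10 * 5 + 10 * (3 + 8)     # partial rounds + full-checkpoint rounds
```

The point of the model is only that both totals are linear in the number of checkpoints, which is what the stated bounds capture.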
cs/0006008
REF is immediate from REF and the definition of MATH. We prove REF simultaneously. To do so, we need a careful way of counting the total number of messages sent and the total amount of work done. A given unit of work may be performed a number of times. If it is performed more than once, say by processes MATH, we say that MATH redoes that unit of work of MATH, MATH redoes the work of MATH, etc. It is important to note that MATH does not redo the work of MATH in this case; only that of MATH. Similarly, we can talk about a message sent during a partial checkpoint of a subchunk or a full checkpoint of a chunk done by MATH as being resent by MATH. In particular, a message MATH sent by MATH as part of a broadcast is resent by MATH if MATH sends exactly the same message as part of a broadcast (not necessarily to the same set of recipients). For example, if MATH sends MATH to the remainder of MATH as part of a partial checkpoint, and later MATH sends MATH to the remainder of MATH, then, whether or not MATH, the messages in the second broadcast are considered to be resendings. Since the completion of a chunk is followed by a full checkpoint, it is not hard to show that when a new group becomes active, it will redo at most one chunk of work that was already done by previous active groups. It will also redo at most one full checkpoint that was done already on the previous chunk, and MATH partial checkpoints (one for each subchunk of work redone). In all, it is easy to see that at most MATH units of work done by previous groups are redone when a new group becomes active, and MATH messages are resent. 
Similarly, since the completion of a subchunk is followed by a partial checkpoint, it is not hard to show that when a new process, say MATH, in a group that is already active becomes active, and the last message it received was of the form MATH (that is, a partial checkpoint of subchunk MATH), it will redo at most one subchunk that was already done by previous active process (namely, MATH), and may possibly resend the messages in two partial checkpoints: the one sent after subchunk MATH, and the one sent after subchunk MATH (if the previous process crashed during the checkpointing of MATH without MATH receiving the message). If the last message that MATH received was MATH for MATH (that is, the checkpointing of a checkpoint in the middle of a full checkpoint), then similar arguments show that it may resend MATH messages: the checkpoint of MATH to its own group, the checkpoint MATH to group MATH, and the checkpointing of MATH to its own group. Thus, the amount of work done by an active group that is redone when a new process in that group becomes active is at most MATH, and the number of messages resent is at most MATH. The maximum amount of unnecessary work done is: (number of groups) MATH (amount of work redone when a new group becomes active) + (number of processes) MATH (amount of work redone when a new process in an already active group becomes active) MATH. Similarly, the maximum number of unnecessary messages that may be sent is no more than: (number of groups) MATH (number of messages resent when a new group becomes active) + (number of processes) MATH (number of messages resent when a new process in an already active group becomes active) MATH. Clearly MATH units of work must be done; by REF , at most MATH messages are necessary. Thus, no more than MATH units of work will be done altogether, and no more than MATH messages will be sent altogether.
cs/0006008
REF follows from the fact that in each useful round, the chain either performs work, or checkpoints to some group MATH the fact that a subchunk MATH was performed, or checkpoints the fact that group MATH was informed that chunk MATH was performed. The discussion above shows that no unit of work is repeated, and hence there are at most MATH useful rounds in which the chain performs work. Similarly, each subchunk is partially checkpointed at most once, and hence there are at most MATH useful rounds in which the chain performs partial checkpoints of subchunks. Also, the completion of a chunk is checkpointed to each group at most once, yielding at most MATH useful rounds in which such chunks are checkpointed. Finally, the fact that group MATH was informed about chunk MATH is checkpointed at most once, yielding at most MATH additional useful rounds. Summing the above, the claim follows. REF follows because, as reasoned above, the useful operations done by the chain follow the same order as if they were done by a single active process, and hence within MATH rounds the chain must complete a chunk and a full checkpoint.
cs/0006008
The proof is straightforward. We start with REF . In the calculations below, we use ``(MATH)'' to denote the value REF if MATH and REF otherwise. Similarly, MATH denotes REF if MATH, and REF otherwise. Recall that MATH denotes MATH mod MATH. MATH . If MATH, then MATH and REF follows. (In the first equality we replaced MATH by MATH, since in this case they are identical, and the second equality follows because MATH.) If MATH, then MATH and again REF follows. (The second equality follows by a case analysis on whether or not MATH, using the fact that MATH, and the fourth equality follows since MATH and MATH imply MATH.) The proof of REF is similar. Observe that here, by assumption, MATH and hence also MATH. If MATH, then MATH and REF follows. If MATH, then MATH and REF follows.
cs/0006008
We first show that if MATH is in MATH and becomes active at round MATH with MATH, then there are at most MATH useless rounds in MATH. We proceed by induction on MATH. If MATH, the result is trivial. If MATH, then MATH is MATH's successor for some MATH in the activation chain and MATH received its last message from MATH at some round MATH. (There is such a MATH and such a message since by convention process REF sent an ordinary message to everybody just before the execution begins.) By definition, we have MATH. If MATH, we are done, since no round in MATH is useless, so there are at most MATH useless rounds in MATH. If MATH, then suppose MATH becomes active at MATH. By the inductive hypothesis, there are at most MATH useless rounds in MATH. All the rounds in MATH are useful. Thus, there are at most MATH useless rounds in MATH. Since MATH by REF , the inductive step follows. Suppose that MATH becomes active at round MATH. By the argument above, there are at most MATH useless rounds in MATH. If MATH, it immediately follows that there are at most MATH useless rounds in MATH. On the other hand, if MATH, since MATH is still active at MATH, it follows that there are no useless rounds in MATH. Hence, we again get that there are at most MATH useless rounds in MATH. The lemma follows.
cs/0006008
Fix an execution MATH of Protocol MATH. The proof proceeds by induction on the round MATH. The base case of MATH holds trivially, since only process REF is active then. Assume the claim for MATH; we will show it for MATH. If MATH, the claim holds trivially. Thus, we can assume MATH. Suppose that the last ordinary message that MATH received before round MATH came from MATH, and was received at round MATH. (Note that there must have been such an ordinary message, given our assumption that process REF sent an ordinary message to all the processes before the execution begins.) We first prove REF . Assume, by way of contradiction, that some process MATH with MATH does not retire by round MATH. Since, by assumption, MATH was the latest round MATH at which MATH received an ordinary message, to complete the proof it is enough to show that if MATH does not retire before round MATH, MATH must have received an ordinary message at some round MATH with MATH. In fact, we will show that MATH must have received a message in the interval MATH from some process in MATH. To do this, we use REF . Notice that both of these lemmas require MATH to be active. In fact, we can assume without loss of generality that MATH is active at some round MATH of MATH, and that MATH. If not, we can just consider the execution MATH, which is identical to MATH up to round MATH, after which all processes other than MATH crash. It is clear that eventually MATH becomes active in MATH, with the same activation chain it has in round MATH. Moreover, if MATH receives an ordinary message in the interval MATH in MATH, then it must also receive the same message in MATH, since the two executions agree up to round MATH. Since MATH becomes active at some round prior to MATH, the inductive hypothesis on REF implies that all processes MATH have retired by round MATH. Thus without loss of generality MATH. We consider two cases: CASE: MATH is in MATH; CASE: MATH is not in MATH. 
In REF , since MATH is in MATH's activation chain and is active at round MATH, by the inductive hypothesis, it must be the current process in MATH at round MATH. Applying REF to MATH, we get MATH . By definition, MATH becomes preactive in round MATH, and hence MATH. Substituting this into the above inequality we get MATH . Since MATH, REF implies that MATH, and substituting this fact in the above inequality we get MATH . Thus REF implies that MATH must have received an ordinary message at some round in the interval MATH, contradicting the assumption that it does not, and the claim follows. In REF , let MATH be the greatest process MATH in MATH's activation chain, and let MATH be the smallest process MATH in MATH's activation chain. Suppose MATH gets its last message before becoming active from MATH at round MATH. (Note that this means that the last message received by MATH before becoming active came at MATH.) Since the inductive hypothesis on REF implies that MATH must retire before MATH becomes active, and since MATH must become active at least one round before sending a message to MATH (since, by REF , process MATH checkpoints to its own group before it sends a message to another group), we have MATH. Furthermore, since the processes succeeding MATH in MATH's chain are greater than MATH, the same inductive hypothesis implies that these processes can become active only after process MATH retires, and hence after round MATH. Since, by definition, any message received by MATH after round MATH from MATH's chain must be sent by one of the processes succeeding MATH in the chain, it follows that if MATH receives a message from MATH's chain after round MATH, this message is sent after round MATH. To complete the proof we show that MATH must have received some message from MATH's chain at some round in the interval MATH, and hence in the interval MATH, contradicting the assumption that it does not. As argued above, to show this it is enough to show that MATH . 
Applying REF to MATH's activation chain, we get MATH . To bound MATH, we need to bound MATH. To do this we will compute two terms: CASE: MATH; and REF MATH. The first term is equal to MATH, as argued above. To compute the second term, we first show: CASE: MATH, and REF MATH. For REF , clearly MATH, since MATH. If MATH, MATH must have received a message from MATH at round MATH before MATH sent a message to MATH (since, by REF , process MATH checkpoints to its own group just before it sends a message to another group). As we have observed, MATH, so this contradicts the assumption that the last message received by MATH before becoming active came at MATH. Thus MATH. For REF , clearly MATH. If MATH, this means that MATH did not receive a message from a process in MATH before becoming active (because if it did, then by the inductive hypothesis on REF this message arrives after MATH retires, and hence after round MATH). But since MATH, and MATH sent a message to MATH at MATH, some process in MATH must have sent a message to MATH before round MATH, and hence before MATH becomes active. This gives us the desired contradiction. To complete the proof of REF we use the following claim: every process MATH with MATH that becomes active does so no earlier than round MATH. We proceed by induction. Assume MATH and that the claim holds for all MATH with MATH. We prove it for MATH. We first show that the last ordinary message that any process MATH in MATH receives from any process MATH with MATH is sent no earlier than round MATH. Observe that since MATH is in MATH, so are MATH and MATH. If MATH, the claim trivially follows, since MATH must send a message to its own group at round MATH just before it sends a message to MATH. Otherwise, by the induction hypothesis, MATH became active no earlier than round MATH, and the claim again follows. Let MATH be the last process from which MATH receives an ordinary message. Observe that MATH. 
(Because, as reasoned above, MATH has received a message from MATH, and hence the message from MATH was sent at or after the time of the message from MATH; the inductive hypothesis on REF therefore implies that MATH.) It follows from the claim above that the message from MATH was sent no earlier than round MATH. In addition, the inductive hypothesis on REF implies that MATH becomes active only after MATH retires, and hence only after receiving its message. Now, assume that MATH does not receive a go ahead message. It then becomes preactive MATH rounds after it receives the last ordinary message from MATH, and then MATH starts sending go ahead messages to lower-numbered processes in its group. Since, by assumption, MATH does not receive a message in response, it becomes active MATH rounds after receiving this last message from MATH, and hence no earlier than round MATH. Next assume MATH receives a go ahead message. Let MATH be the process sending this message. Let MATH be the last process from which MATH received an ordinary message before sending the go ahead message to MATH. Since MATH sends a go ahead message to MATH, it follows that MATH. Just as above, we can show that MATH, and hence that MATH received the ordinary message from MATH no earlier than round MATH. Clearly, MATH sends the go ahead message to MATH no earlier than MATH rounds after it receives its ordinary message from MATH, and the claim follows as above. This completes the proof of the inductive step. Now, to compute MATH, observe that MATH, the round in which MATH sends a message to MATH, is at least one round after MATH becomes active (because MATH and MATH first broadcasts to its own group), and hence REF immediately implies that MATH. Thus we get that MATH . (The fourth inequality follows because MATH, and the fifth inequality follows because MATH.) Again, REF implies that MATH, and hence MATH . This completes the proof of the inductive step for REF . 
For REF , suppose by way of contradiction that MATH becomes active at round MATH and process MATH has not retired by round MATH. First assume MATH does not receive a go ahead message. If MATH, we get an immediate contradiction using the inductive step for REF , since MATH becomes active at or after it becomes preactive. Otherwise, recall that MATH is the last process from which MATH receives an ordinary message before becoming active, and this message is received at round MATH. If MATH, then since MATH became active before round MATH, the inductive hypothesis on REF implies that MATH must have retired before MATH became active and hence before round MATH. If MATH, then MATH becomes preactive only after MATH additional rounds in which it does not hear from MATH. We claim that MATH must have retired by that time. Because otherwise, in this period MATH would have either performed a subchunk and informed its group, or would have checkpointed a subchunk to a group MATH and informed its group about the checkpoint. Since MATH, in both cases, MATH must have heard from MATH. Finally, if MATH, then before MATH becomes active it sends a go ahead message to MATH and waits for a message from MATH for MATH additional rounds. Exactly as above, it follows again that since MATH does not receive any message from MATH, MATH must have retired. Next assume MATH does get a go ahead message before becoming active. However, the same reasoning as above shows that by the time a process sends a go ahead message to process MATH, all processes MATH have retired, and we are done.
cs/0006008
REF were argued at the beginning of REF. For REF , let MATH be the last process that is active, and consider its activation chain. We want to find the last round MATH in which MATH is active. It follows from REF that the maximal number of useful rounds performed by any chain is MATH. Therefore, applying REF with MATH, we get that MATH . Thus MATH .
cs/0006008
By assumption, one of the processes is correct, say MATH. At some point process MATH will become active, since once every other process has retired process MATH will not extend its deadline. It is straightforward from inspection of the algorithm that at any time during the execution of the algorithm MATH if and only if the first MATH units of work have been performed, and that when it becomes active, process MATH performs all units of work from MATH through MATH.
cs/0006008
It is immediate from the description of the algorithm that all nonretired processes have received a message from MATH by the time it has performed MATH units of work (at level MATH) after round MATH. Thus, we compute an upper bound on the time it takes for MATH to perform MATH units of work starting at round MATH. In the worst case, MATH has just become active at the beginning of round MATH, and must do failure detection before reaching level MATH and doing work. While doing this failure detection, MATH sends ``are you alive?" messages to at most MATH processes (the extra MATH is due to the fact that at each level, it may send one ``are you alive?" message to a process that is alive, but crashes later while MATH is doing failure detection on a larger group). After discovering a failure, process MATH sends an ordinary message; thus, it sends at most MATH ordinary messages. Each message sent takes up one round; in addition, process MATH waits one round for a response after each ``are you alive?" message. This means that MATH spends at most MATH rounds in levels MATH. Clearly, MATH spends MATH rounds working at level MATH in the course of doing MATH units of work (since it sends an ordinary message between each unit of work). The required bound follows.
cs/0006008
The proof is an easy induction on MATH, since when a MATH-th rank process becomes active, it knows about everything its parent knew when it became active, and at least one more piece of work or failure.
cs/0006008
The proof is by induction on MATH. The base case, MATH, is straightforward. Let MATH, and assume that all parts of the lemma hold for smaller values of MATH. We prove it for MATH. For REF , observe that by the inductive hypothesis, REF holds at the beginning of round MATH. If no process is active in round MATH, then no process' knowledge changes, so REF holds at the beginning of round MATH as well. If process MATH is active in round MATH, then by REF of the inductive hypothesis, MATH knows at least as much as MATH. Thus, by REF , it must be the case that MATH is either MATH or some process in the MATH-th generation with respect to MATH, MATH, and MATH, for some MATH (since, by assumption, MATH is not active at the beginning of round MATH). The only process whose knowledge changes during round MATH is one to which MATH sends an ordinary message. It is immediate from the definition that this process must be in the MATH-th generation with respect to MATH, MATH, and MATH, for some MATH. For REF , we must consider two cases: MATH and MATH. If MATH, let MATH be the process that wrote to MATH at MATH. By REF we have that only MATH and processes in generation MATH with respect to MATH, MATH, and MATH are as knowledgeable as MATH at any round in the interval MATH. By REF , these can be the only processes active in this interval. Thus, it suffices to argue that MATH and all processes of generation MATH with respect to MATH, MATH, and MATH are retired by the beginning of round MATH. Since a reduced view is at most MATH, the highest rank a process could be in is MATH. We now argue that by the beginning of round MATH all processes of ranks REF through MATH have retired. More generally, we argue by induction on MATH that for every MATH with MATH, by the beginning of round MATH, every process in ranks REF to MATH has retired. 
If MATH, note that since MATH received an ordinary message from MATH at round MATH, by REF , every rank REF process receives a message from MATH before round MATH. By REF , the reduced view of any such process is at least MATH. Since MATH receives no message from MATH by round MATH, it must be the case that MATH has retired by round MATH. By definition, no rank REF process can receive any messages at any round in MATH (otherwise it would have a rank higher than REF). Thus, any rank REF process MATH became active before MATH, so by definition of MATH and the fact that MATH, MATH would have heard from MATH before MATH. It is easy to check that MATH. Since MATH did not receive any messages by the beginning of round MATH, MATH must have retired by then. In general, consider a rank MATH process MATH, and assume inductively that every rank MATH or lower process has retired by the beginning of round MATH. By definition of rank, MATH received an ordinary message from a rank MATH process, and, since these are all retired by round MATH, MATH must have received this message before round MATH. By the inductive hypothesis on MATH, MATH must have received its last ordinary message by the beginning of round MATH (again using the fact that MATH if MATH). By REF , the reduced view of MATH when it received its last ordinary message before round MATH was at least MATH. Thus, it must have become active before round MATH, if it became active at all. Since MATH received no messages from MATH, it follows that MATH must have retired before round MATH. This completes the induction on MATH. If MATH we need the fact that MATH, which follows easily from the definitions. We claim that, for every MATH, by round MATH, every process in ranks REF to MATH has retired. To see this, note that a rank REF process MATH (one with a higher number than MATH that received no messages) must have become active at round MATH, and therefore must have retired by round MATH. 
Thus a level REF process received its last message by MATH. We now proceed as in the case MATH. To prove REF , observe that the result is immediate from the inductive hypothesis applied to MATH if there is no active process at the beginning of round MATH (for in that case, no process' reduced view changes). Otherwise, suppose that MATH is active at the beginning of round MATH. If MATH does not send an ordinary message in round MATH, again the result follows immediately from the inductive hypothesis (since no process' reduced view changes). If MATH does send an ordinary message to, say, process MATH, it is immediate that MATH and MATH know more at the beginning of round MATH than any other non-retired process, and that MATH's reduced view is greater than that of any other non-retired inactive process. It remains to show REF . Observe that the result is immediate if no process becomes active at round MATH. Now suppose that process MATH becomes active at the beginning of round MATH. We must show that no process that was active prior to round MATH is still active at the beginning of round MATH, and that no process besides MATH becomes active at round MATH. Let MATH be the last round in which MATH received a message (as usual, if MATH received no messages prior to round MATH, then we take MATH), and suppose that MATH was MATH's reduced view at round MATH. Then we must have MATH. From REF , it follows that no non-retired process knows more than MATH at the beginning of round MATH. From REF , it follows that any process that was active in the interval MATH must know more than MATH. This shows that all processes that were active before round MATH must have retired by the beginning of round MATH. Suppose some other process MATH becomes active at round MATH. We have just shown that MATH does not know more than MATH. From REF it follows therefore that MATH knows less than MATH. Thus REF provides a contradiction to the assumption that MATH becomes active at round MATH.
cs/0006008
If process MATH's reduced view is MATH and it does not receive a message within MATH steps, then it becomes active. Each message that MATH receives increases its reduced view. Thus, MATH becomes active in at most MATH rounds. Once it becomes active, arguments similar to those used in REF show that it retires in at most MATH rounds. Thus, the running time of the algorithm is at most MATH rounds.
cs/0006008
We proceed by induction on MATH. The case MATH is vacuous. Assume that MATH and the result holds for MATH. If MATH, then it must be the case that MATH received its message from MATH, MATH, and MATH is the successor of MATH in the cyclic order on MATH, as computed by MATH in round MATH. It is easy to see that the result follows immediately in this case, because all processes in the interval MATH must be retired. Suppose MATH. If MATH is also active at round MATH, then the result is immediate from the inductive hypothesis unless MATH changes during round MATH. The description of the algorithm shows that MATH changes only if MATH and MATH is operating on group MATH, in which case MATH is set to MATH at the end of round MATH, and MATH is the successor of MATH in the cyclic order on MATH. In this case it is easy to see that the result follows from the inductive hypothesis; we leave details to the reader. Thus, we have reduced to the case that MATH becomes active at round MATH. Let MATH and let MATH. If MATH, then it must be the case that MATH received a message from MATH at some earlier round MATH such that MATH and MATH. Since we must have MATH, the result now follows from the induction hypothesis (using MATH and MATH instead of MATH and MATH). It remains only to consider the case MATH. Let MATH be the process that sent the ordinary message to process MATH at round MATH, and suppose that MATH became active at the beginning of round MATH. We claim that we have the following chain of inequalities: MATH. Every inequality in this chain is immediate from our assumptions except the first one. Suppose that MATH. From REF , it follows that MATH for all processes MATH not retired by round MATH. This means that no process not retired at MATH knows that a message was sent at round MATH. But at round MATH, process MATH knows this fact (by REF ). This is impossible. Thus, we must have MATH. Note that MATH, by assumption. 
Thus, by the inductive hypothesis, all processes in the cyclic order on MATH in the interval MATH are either retired by the beginning of round MATH or receive an ordinary message in the interval MATH from a process operating on MATH. Since we also know that MATH receives a message at round MATH from a process operating on MATH, this proves the first half of REF . Since, by REF , all processes not retired by round MATH must be less knowledgeable than MATH at the beginning of round MATH, it follows from REF that all the processes in the interval MATH in the cyclic order have in fact retired by round MATH. From the description of the algorithm, it follows that MATH will detect this fact before it starts operating on MATH.
cs/0006008
Given MATH, MATH, and an execution MATH of Protocol MATH, we consider the sequence of triples MATH, with one triple in the sequence for every time a process MATH sends an ordinary message reporting a unit of work MATH to a process MATH, listed in the order that the work was performed. We must show that the length of this sequence is no greater than MATH. We say that a triple MATH is repeated in this sequence if there is a triple MATH later in the sequence where the same work unit MATH is performed. Clearly there are at most MATH nonrepeated triples in the sequence, so it suffices to show that there are at most MATH repeated triples. To show this, it suffices to show that the third components of repeated triples (denoting which process was informed about the unit of work) are distinct. Suppose, by way of contradiction, that there are two repeated triples MATH and MATH with the same third component. Suppose that MATH informed MATH about MATH in round MATH, and MATH informed MATH about MATH in round MATH. Without loss of generality, we can assume that MATH. Since MATH is a repeated triple, there is a triple MATH after MATH in the sequence. Let MATH be the round in which MATH became active, and let MATH be the round in which MATH became active. Let MATH, for MATH. By REF , if MATH, then either MATH's knowledge at the beginning of round MATH is greater than MATH's knowledge at the end of MATH, or MATH, and if MATH, then MATH before MATH starts operating on MATH. Since MATH sends a message to MATH while operating on MATH, it cannot be the case that MATH before MATH starts operating on MATH, so it must be the case that MATH and MATH's knowledge at the beginning of round MATH is greater than MATH's knowledge at the end of round MATH. In particular, this means that MATH must know that MATH informed MATH about MATH at the beginning of MATH. 
We next show that every process MATH that is active at some round MATH between MATH and MATH must know that MATH informed MATH about MATH at the beginning of round MATH. For suppose not. Then, by REF , MATH must have retired by the beginning of round MATH. Since, by REF , MATH is the most knowledgeable process at the beginning of round MATH, it follows that no process that is not retired knows that MATH was informed about MATH. Thus, there is no way that MATH could find this out by round MATH. It is easy to see that MATH does not know that MATH was informed about MATH (for if it did, it would not repeat the unit of work MATH). Therefore, MATH must come after MATH in the sequence. Since MATH, and MATH received an ordinary message from MATH while operating on MATH at round MATH, it follows from REF that between rounds MATH and MATH, every process in MATH that is not retired must receive an ordinary message. In particular, this means that MATH must receive an ordinary message. Since all active processes between round MATH and MATH know that MATH was informed about MATH, it follows that MATH must know it too by the end of round MATH. But then MATH would not redo MATH, giving us the desired contradiction.
cs/0006008
REF implies that the number of real work units that are performed and reported to MATH is at most MATH. In addition, each of the MATH processes may perform one unit without reporting it (because it retired immediately afterwards). Summing the two, REF follows. For REF , each MATH performs at most MATH reported units of work when operating on MATH. (Here a unit may be either a real work unit or an `are you alive?' message.) Let MATH. Notice that if we consider groups of the form MATH for MATH, we count all the groups exactly once. The argument above tells us that the total number of reported units of work is MATH . The reason for the factor of REF is that if MATH, then MATH occurs three times in the left-hand sum: once when considering the work performed by group MATH operating on MATH, once when considering the work performed by MATH when operating on MATH, and once when considering the work performed by MATH when operating on MATH. Clearly, the MATH reported units performed on MATH result in one message each, and the remaining ones result in two messages each (because then the unit itself is also a message). So the number of messages corresponding to reported units of work is at most MATH . In addition, the unreported units may result in messages. These consist both of `are you alive?' messages sent by a process but not reported by it due to the fact that it crashes or terminates immediately afterwards, and of `are you alive?' messages that were not reported because the recipient of the `are you alive?' message responded. Each process in MATH can perform at most one such unreported unit when operating on MATH, and hence each group MATH performs no more than MATH such units. In addition, we have to count the answers of alive processes in MATH to `are you alive?' messages sent by MATH. Again, there are at most MATH such answers. 
Finally, each process MATH sends messages to the other processes in MATH just before it starts operating, which together with the answers sums up to a total of no more than MATH messages. Therefore, the number of messages corresponding to unreported units of work is at most MATH . Summing the messages due to the reported units of work and the messages due to the unreported units of work, REF follows. REF is immediate from REF .
cs/0006008
For REF , an easy induction on MATH shows that by the end of phase MATH, no more than MATH units of work remain to be done, and no more than MATH units of work have been done. It follows that at most MATH units of work are done altogether. (We remark that there is nothing special about the factor ``half" in our requirement that we revert to Protocol MATH if more than half the processes that were correct at the beginning of the phase are discovered to have failed during the phase. We could have chosen any factor MATH; a similar proof would show that by the end of phase MATH, at most MATH units of work remain to be done, and no more than MATH units of work have been done, so that no more than MATH units of work are done altogether. However, it follows from results of CITE that if we allow an arbitrary fraction of the processes to fail at every step, and do not revert to Protocol MATH, it is possible to construct an execution where MATH processes fail and MATH units of work are done altogether. Indeed, it follows from the arguments in CITE that this result is tight; there is a matching upper bound.) Since each nonfaulty process broadcasts to all the other nonfaulty processes in each round of an agreement phase, at most MATH messages are sent in each such round. If MATH is the number of failures discovered during the MATH-th agreement phase, then the first agreement phase lasts at most MATH rounds, while for MATH, the MATH-th agreement phase lasts at most MATH rounds, because of the grace round. Thus, altogether, the agreement phases last at most MATH rounds, where MATH is the number of agreement phases. Since MATH, the agreement phases last at most MATH rounds, and at most MATH messages are sent. Finally, to compute an upper bound on the total number of rounds, it remains only to compute how many rounds are required to do the work (since we know the agreement phases last altogether at most MATH rounds). 
Recall that at the end of phase MATH, at most MATH units of work need to be done. Since no more than half the processes fail during any phase, at least MATH processes are nonfaulty. Thus, at most MATH rounds are spent during each work phase doing work. Since there are at most MATH work phases, this gives the required bound on the total number of rounds. For REF , first observe that if we revert to Protocol MATH at the end of phase MATH, then by our earlier observations it is known to the remaining processes that no more than MATH units of work remain to be done, and no more than MATH units of work have been done. It is also easy to see that at least MATH processes are discovered as faulty. Moreover, by the bounds in REF , at most MATH messages have been sent and MATH rounds have elapsed. Now applying REF , we see that at most MATH work is performed by protocol MATH, no more than MATH messages are sent, and MATH rounds are required. By taking MATH (the worst case), we get the bounds claimed in the statement of the theorem.
cs/0006009
Given that MATH, we have by REF that MATH iff MATH. Since MATH, this holds iff MATH. Again by REF this is true iff MATH, and we are done.
cs/0006009
Let MATH be a correct (joint) protocol for the coordinated attack problem, with MATH being the corresponding system. Consider a ground language consisting of a single fact MATH ``both generals are attacking", let MATH assign a truth value to this formula in the obvious way at each point MATH, and let MATH be the corresponding complete-history interpretation. Assume that the generals attack at the point MATH of MATH. We show that MATH. Our first step is to show that MATH is valid in the system MATH. Assume that MATH is an arbitrary point of MATH. If MATH, then we trivially have MATH. If MATH, then both generals attack at MATH. Suppose that MATH is a point of MATH in which MATH has the same local history as in MATH. Since MATH is executing a deterministic protocol and MATH attacks in MATH, MATH must also attack in MATH. Furthermore, given that the protocol is a correct protocol for coordinated attack, if MATH attacks in MATH, then so does MATH, and hence MATH. It follows that MATH; similarly we obtain MATH. Thus MATH, and again we have MATH. We have now shown that MATH is valid in MATH. By the induction rule it follows that MATH is also valid in MATH. Since MATH, we have that MATH and we are done.
cs/0006009
Fix MATH. Without loss of generality, we can assume MATH. Let MATH be the number of messages received in MATH up to (but not including) time MATH. We show by induction on MATH that if MATH, then MATH iff MATH. We assume that all the runs mentioned in the remainder of the proof have the same initial configuration and the same clock readings as MATH. First assume that MATH. Thus no messages are received in MATH up to time MATH. Since MATH and MATH have the same initial configuration and clock readings, it follows that MATH. By REF we have MATH iff MATH, as desired. Assume inductively that the claim holds for all runs MATH with MATH, and assume that MATH. Let MATH be the latest time at which a message is received in MATH before time MATH. Let MATH be a processor that receives a message at time MATH in MATH. Let MATH be a processor in MATH such that MATH (such a MATH exists since MATH). From REF in the definition of communication not being guaranteed, it follows that there is a run MATH extending MATH such that MATH for all MATH and all processors MATH receive no messages in MATH in the interval MATH. By construction, MATH, so by the inductive hypothesis we have that MATH iff MATH. Since MATH, by REF we have that MATH iff MATH. Thus MATH iff MATH. This completes the proof of the inductive step.
cs/0006009
Recall that communication between the generals is not guaranteed (that is, it satisfies REF above), and we assume that in the absence of any successful communication neither general will attack. Thus, if we take MATH to be ``both generals are attacking", then MATH does not hold at any point in a run in which no messages are received (since MATH does not hold at any point of that run). REF implies that the generals will never attain common knowledge of MATH in any run, and hence by REF the generals will never attack.
cs/0006009
We sketch the proof for MATH; the proof for MATH is analogous. We assume that all runs mentioned in this proof have the same initial configuration and the same clock readings as MATH. If MATH is a run such that MATH holds at some point in MATH, let MATH be the first time in MATH that processor MATH knows MATH. Let MATH, and let MATH be the number of messages that are received in MATH up to (but not including) MATH. We show by induction on MATH that if MATH is a run such that MATH holds at some point in MATH, then MATH. This will show that in fact MATH can never hold. If MATH and MATH holds at some point in MATH, choose some MATH and let MATH. Then we have that MATH. Clearly MATH, so MATH. By the knowledge axiom, we have that MATH, contradicting the hypothesis of the theorem. For the inductive step, assume that MATH and let MATH. We now proceed as in the proof of REF . Let MATH be a processor receiving the last message received in MATH before time MATH. Let MATH be the time at which MATH receives this message. Let MATH be a processor in MATH such that MATH and let MATH. Since communication is not guaranteed, there exists a run MATH extending MATH such that REF no messages are received in MATH at or after time MATH, REF MATH for all MATH, and REF all processors MATH receive no messages in the interval MATH. By construction, at most MATH messages are received altogether in MATH, so MATH. By the induction hypothesis we have that MATH for all MATH. It follows that MATH. But since we assumed MATH and MATH, this gives us a contradiction.
cs/0006009
The proof is analogous to that of REF . Assume that MATH is a joint protocol that guarantees that if either party attacks then they both eventually attack, and let MATH be the corresponding system. Let MATH be ``At least one of the generals has started attacking". We first show that when either general attacks, eventual common knowledge of MATH must hold. Since the protocol guarantees that whenever one general attacks the other one eventually attacks, it is easy to see that a general that has decided to attack knows MATH and knows that eventually both generals will know MATH. Thus, by the induction rule for MATH, when a general attacks, MATH holds. Since in every run of the protocol in which no messages are received no party attacks (and hence neither MATH nor MATH holds in such runs), by REF , the protocol MATH guarantees that neither general will ever attack.
cs/0006009
Fix a run MATH, time MATH, and formula MATH. Since MATH is MATH-reachable from MATH in the graph corresponding to the complete-history interpretation, there exist points MATH, MATH, ..., MATH such that MATH, MATH, and for every MATH there is a processor MATH that has the same history at MATH and at MATH. We can now prove by induction on MATH, using REF , that MATH iff MATH. The result follows.
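The chain argument above is exactly reachability in a graph whose edges connect points that some processor cannot distinguish. A minimal sketch of that reachability computation, using a toy model of points and local histories of our own invention (the paper's actual structures are anonymized here as MATH):

```python
from collections import deque

def reachable(start, points, same_history):
    """Points reachable from `start` by chains in which each
    consecutive pair is indistinguishable to some processor."""
    seen, queue = {start}, deque([start])
    while queue:
        p = queue.popleft()
        for q in points:
            if q not in seen and any(same_history(i, p, q) for i in PROCS):
                seen.add(q)
                queue.append(q)
    return seen

# Toy model: a point is a tuple of local histories, one per processor;
# processor i cannot distinguish two points with equal i-th component.
PROCS = (0, 1)
same_history = lambda i, p, q: p[i] == q[i]

points = [("a", "x"), ("a", "y"), ("b", "y"), ("c", "z")]
component = reachable(("a", "x"), points, same_history)
# ("c", "z") shares no local history with the others, so it is not
# reachable; the induction in the proof says a common-knowledge fact
# at ("a", "x") must also hold at ("a", "y") and ("b", "y").
assert component == {("a", "x"), ("a", "y"), ("b", "y")}
```

The induction in the proof corresponds to walking one edge of this graph at a time: each step preserves the fact in question, so it holds on the whole connected component.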
cs/0006009
Let MATH be a system with temporal imprecision and MATH be a point of MATH. Suppose MATH (otherwise clearly MATH is reachable from MATH). Let MATH be the greatest lower bound of the set MATH. We will show that MATH is reachable from MATH and that MATH. Since MATH is a system with temporal imprecision, there exists a MATH such that for all MATH with MATH, there exists a run MATH such that for all MATH, we have MATH and MATH for MATH. If MATH, it follows that MATH is reachable from MATH and MATH is reachable from MATH. By transitivity of reachability, we have that MATH is reachable from MATH, and by symmetry, that MATH is reachable from MATH. It now follows that MATH is reachable from MATH for all MATH. Thus MATH. Furthermore, if MATH, then we know that MATH is reachable from both MATH and MATH. It thus follows that MATH is reachable from MATH. Finally, if MATH, then we know that MATH is reachable from MATH (and hence from MATH) for all MATH. But this contradicts our choice of MATH. Thus MATH, and MATH is reachable from MATH.
cs/0006010
The first point is true by definition of MATH. Let us focus on the second point. Assume that: MATH . The MATH above is the worst possible assumption with respect to the number of cut-elimination steps necessary to normalize MATH at level MATH, because: CASE: we assume that all the contraction nodes at MATH, that is, as many as MATH, have maximal weight. We saw that the weight of a contraction node MATH is the maximal number of MATH-boxes at MATH that MATH can duplicate. Necessarily, the MATH-boxes at MATH cannot be more than MATH. This defines as many leftmost components of MATH as MATH; CASE: we assume there are as many MATH/MATH-boxes as possible at MATH, namely MATH, defining the second component of MATH from its right; CASE: we assume there are as many nodes as possible at MATH in MATH, namely MATH, defining the rightmost component of MATH. Then, we make the hypothesis that every contraction node, MATH-box, and MATH-box at MATH, and every node at MATH, contributes to form a redex. Finally, we apply MATH, and we observe the behavior of MATH: MATH for some MATH. All this means that we have just rewritten MATH to MATH after, at most, MATH steps, since MATH, for every MATH. Finally, the third point. If we find MATH, we get MATH as well, which is the sum of all the components of MATH. Assume again that we start from MATH, and that we rewrite it under MATH. We have: MATH for some MATH. At this point, MATH can be normalized at levels MATH and MATH by reducing all MATH-redexes, which simply erase structure. We can safely state that, after at most MATH steps, the MATH above is a bound for MATH. This implies the third point we want to prove.
cs/0006012
Assume a pair of crossing constituents appears in the output of the constituent voting technique. Each of the constituents must have received at least MATH votes from the MATH parsers. Let MATH be the sum of the votes for the assumed constituents. MATH because none of the parsers contains crossing brackets, and so none of them votes for both of the assumed constituents. But by addition MATH, a contradiction.
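The pigeonhole argument above can be checked concretely. A minimal sketch, under assumptions of ours rather than the paper's notation: constituents are half-open spans `(i, j)`, "crossing" means partial overlap with neither span containing the other, and the voting threshold is a strict majority:

```python
from collections import Counter

def crosses(a, b):
    """Two spans cross if they overlap partially,
    with neither containing the other."""
    (i1, j1), (i2, j2) = a, b
    return (i1 < i2 < j1 < j2) or (i2 < i1 < j2 < j1)

def majority_constituents(parses):
    """Keep every span receiving strictly more than half the votes.
    Each parse is a set of spans, assumed internally crossing-free."""
    m = len(parses)
    votes = Counter(span for parse in parses for span in parse)
    return {span for span, v in votes.items() if 2 * v > m}

# Toy input: three parsers, each internally free of crossing brackets.
parses = [
    {(0, 4), (0, 2), (2, 4)},
    {(0, 4), (0, 2), (1, 3)},   # (1, 3) crosses (2, 4) across parsers
    {(0, 4), (2, 4), (3, 4)},
]
result = majority_constituents(parses)
# Two crossing survivors would need more than m/2 votes each, hence
# more than m votes total; but no single parser votes for both.
assert all(not crosses(a, b) for a in result for b in result)
```

The final assertion is the content of the lemma: no pair of crossing constituents can both clear the majority threshold.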
cs/0006012
The technique for this proof comes from NAME's work on multiple sequence alignment, although his goal was to show that a particular biological sequence alignment technique was good under a given goodness measure CITE. The edit distance in question must be symmetric. That is, it must take the same number of edits to transform parse A into parse B as it does to transform parse B into parse A. This is reasonable, given that the concept of an edit includes the ability to ``undo" it. Also, the edit distance should satisfy the triangle inequality. It should be at least as easy to edit parse A into parse B directly as it is to edit parse A into parse C and then edit parse C into parse B. This is also obviously reasonable. The first observation is that the centroid we've chosen is minimal among the choices we could make. That is, the number of edits incurred by transforming it into each of the other parses is at least as small as the total number of edits required using each of the other candidate points as the centroid. That comes from the decision rule we used to pick it. Next we will relate the cost of editing this chosen parse into all of the other parses to the cost of editing the optimal parse into all of the other parses. Remember, the optimal parse is some parse hidden in the parse space that is too large to simply search. We define MATH to be the total cost of editing all parses into all other candidate parses, and we give a quick bound on how much work we will do using this centroid. A diagrammatic view of what we intend to accomplish is presented in REF . The filled points are the parses given as input. The point marked MATH is the true parse, hidden from us unless we are willing to explore the entire space. The dotted lines represent the minimum possible edit distance. Those lengths are the cost of editing the true parse into the observed parses. 
Point MATH is also marked MATH because it is the centroid chosen by minimizing the sum of pairwise distances (using the algorithm given). The cost we incur by using it is represented by the solid lines. We are claiming that the edit distance using MATH is less than twice the edit distance using MATH. MATH . The next observation of interest is that even the optimal choice for a centroid must obey the triangle inequality. The true parse, the best parse in parse space, is denoted here by MATH. MATH . Now we have bounded our hypothesis, MATH, from above with respect to MATH, and the optimal parse, MATH, from below with respect to MATH. This gives us a way to bound the extra cost we incur by using this suboptimal choice, by simple substitution from REF . MATH . In REF we see that the number of edits required to change our hypothesis, MATH, into each of the other parses is less than twice the number of edits required to change the optimal centroid hypothesis, MATH (from the space of all parses), into the observed parses. We take this to be a reassuring bound on this approximation, as it was unlikely we could explore the space of parses to find MATH in the first place.
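The argument above is the standard factor-2 guarantee for a medoid under any symmetric metric with the triangle inequality. A small sketch illustrating it on a toy metric space (the points and distances are our own invention; the paper's edit distance on parses is anonymized here):

```python
def medoid_cost(dist, points, c):
    """Total distance from candidate centre c to all points."""
    return sum(dist(c, p) for p in points)

def pick_medoid(dist, points):
    """Choose the input point minimising the sum of distances to the
    others -- the decision rule described in the text."""
    return min(points, key=lambda c: medoid_cost(dist, points, c))

# Toy metric: integers with absolute difference (symmetric, and it
# satisfies the triangle inequality, the two properties required).
dist = lambda x, y: abs(x - y)
points = [1, 2, 9, 10, 10]

m = pick_medoid(dist, points)
# The optimal centre may live anywhere in the space, not just among
# the inputs; search a range that certainly contains it.
opt = min(medoid_cost(dist, points, t) for t in range(0, 12))
assert medoid_cost(dist, points, m) <= 2 * opt
```

The final assertion mirrors the proof: the medoid's total cost never exceeds twice the cost of the unknown optimal centre, which is why restricting the search to the observed parses is safe.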
cs/0006033
Let MATH be the MGU of MATH and MATH. By REF , we have that MATH and MATH are nicely moded and MATH is input-linear. Thus by REF MATH is nicely moded, and hence MATH is MATH-nicely moded.
cs/0006033
Suppose that MATH is MATH-robustly typed. By REF we have MATH. Suppose MATH. Since MATH is obtained by unifying MATH with a head of a clause MATH, and MATH, it follows that MATH. By REF , MATH. Since MATH is MATH-nicely moded, MATH and so MATH.
cs/0006033
The proof is by induction on the position MATH in the derivation. The base case MATH is trivial since MATH. Now suppose the result holds for some MATH and MATH exists. By REF , MATH is permutation simply typed. Thus the result follows for MATH by REF .
cs/0006033
Since MATH is robustly typed and types are closed under instantiation, there exists a substitution MATH such that MATH, MATH, and MATH is correctly typed. Since MATH is nicely moded, MATH. Since MATH, it follows that MATH and hence MATH is nicely moded. Since MATH is well typed, it follows by REF that MATH is well typed. Therefore, as MATH is robustly typed and MATH, it follows that MATH is robustly typed.
cs/0006033
Let MATH be an infinite delay-respecting derivation of MATH. Assume, for the purpose of deriving a contradiction, that MATH contains only finitely many steps where a non-robust atom is resolved. Then there exists an infinite suffix MATH of MATH containing no steps where a non-robust atom is resolved. Consider the first query MATH of MATH. Then there is at least one atom in MATH that has infinitely many descendants. Let MATH be the leftmost of these atoms. Then as MATH is robust, we have a contradiction to REF .
cs/0006033
Suppose MATH is MATH-robustly typed (note that MATH exists by REF ). Let MATH be an atom in MATH with all its ancestors in safe positions. By REF , MATH is correctly typed in its input positions and hence selectable. Moreover, since MATH is a safe position, MATH. It follows that if the proper ancestors of MATH are not waiting, then MATH is not waiting. The result follows by induction on MATH. When MATH, MATH has no proper ancestors and hence, by the above paragraph, MATH is not waiting. When MATH, then all proper ancestors of MATH are in safe positions (by hypothesis) and hence, by the inductive hypothesis, they are not waiting. Thus, by the above paragraph, MATH is not waiting.
cs/0006033
By REF , MATH, and thus MATH is a vector of constants. Since MATH is already a vector of non-variable terms, it follows that MATH is a vector of constants and thus MATH. Therefore MATH.
cs/0006033
The proof is by induction on the length of MATH. Let MATH and MATH. The base case holds by the assumption that MATH is MATH-ground. Now consider some MATH where MATH and MATH exists. By REF , MATH and MATH are permutation simply typed and hence type-consistent in all argument positions. The induction hypothesis is that MATH is MATH-ground. Let MATH be the selected atom, MATH be the clause and MATH the MGU used in the step MATH. Consider an arbitrary MATH such that MATH. If MATH, then by the condition on selectability in REF , MATH is non-variable in the MATH-positions of MATH and hence, since the MATH-positions are of constant type, MATH is ground in the MATH-positions of MATH. If MATH, then MATH is ground in all input positions by the induction hypothesis, and hence MATH is a fortiori ground in all MATH-positions of MATH. Thus it follows that MATH is ground. Since the choice of MATH was arbitrary and because of the induction hypothesis, it follows that MATH is MATH-ground.
cs/0006033
By REF , MATH is non-variable in all bound positions, and MATH is a linear vector having flat terms in all bound positions, and variables in all other positions. Thus there is a substitution MATH such that MATH and MATH, which shows REF . Since MATH is a linear vector of variables, there is a substitution MATH such that MATH and MATH, which shows REF . Since MATH is MATH-nicely moded, MATH, and therefore MATH. Thus it follows by REF that MATH is a unifier of MATH and MATH. REF follows from REF , and REF follows from REF because of linearity. By REF , the resolvent is MATH-nicely moded and MATH-well typed. By REF , the vector of the output arguments of the resolvent is a linear vector of variables, and hence REF follows.
cs/0006033
We show how MATH is computed, where we consider three stages. In the first, MATH and MATH are unified. In the second, the output positions are unified where the bindings go from MATH to MATH. In the third, the output positions are unified where the bindings go from MATH to MATH. REF illustrates which variables are bound in each stage. The first three parts of the proof correspond to the three stages of the unification. REF (unifying MATH and MATH). By REF , MATH is a vector of flat terms, where MATH is a vector of variables, and by assumption, MATH is linear. By assumption, MATH is a vector of non-variable terms and, since MATH, MATH. Thus there is a (minimal) substitution MATH such that MATH. We show that the following hold: CASE: MATH. CASE: MATH. CASE: Let MATH be a variable occurring directly in a position of type MATH in MATH. Then MATH. Moreover, MATH can only occur in MATH in a bound position of type MATH, and the occurrence must be direct. CASE: MATH. REF holds by the construction of MATH. REF holds since by REF and since MATH is input-linear, MATH is linear. Let MATH be a variable occurring directly in a position of type MATH in MATH. Let MATH be the variable in the same position in MATH. Suppose, for the purpose of deriving a contradiction, that MATH. Then by REF , MATH occurs directly in MATH, and since MATH is a vector of non-variable terms, MATH is not a variable, which is a contradiction. Therefore MATH. Hence MATH and thus MATH and MATH. Furthermore it follows by REF that MATH can only occur in MATH in a bound position of type MATH, and the occurrence must be direct. Thus REF holds. Since MATH is permutation nicely moded, MATH and hence MATH. Thus REF holds. REF (unifying MATH and MATH in each position where either the argument in MATH is a variable, or the arguments in MATH and MATH are both non-variable). Note that this includes all positions in MATH and MATH, but may also include positions in MATH and MATH. 
Since, by REF , MATH, REF covers precisely the output positions where the binding "goes from MATH to MATH" (see REF ). We denote by MATH the projection of MATH onto the positions where the argument in MATH is a variable, or the arguments in MATH and MATH are both non-variable, and by MATH the projection onto the other positions, and likewise for MATH. By REF , MATH. Thus there is a minimal substitution MATH such that MATH. Let MATH. Then by REF , MATH. We show the following: CASE: MATH. CASE: MATH. CASE: Let MATH be a variable occurring directly in a position of type MATH in MATH. Then MATH. Moreover, MATH can only occur in MATH in a bound position of type MATH, and the occurrence must be direct. CASE: MATH. Since MATH, MATH. This and REF imply REF . REF holds because REF holds and MATH is linear. By REF , MATH. This together with REF implies REF . Furthermore, because of the linearity of MATH, REF follows. REF (unifying MATH and MATH). By REF , MATH, and thus MATH. Therefore, by the definition of the superscript MATH in REF , MATH is a vector of variables. By REF , MATH, so that there is a minimal substitution MATH such that MATH. Let MATH. Then, by REF , we have MATH. We show REF . CASE: MATH. CASE: MATH is linear and has flat type-consistent terms in all bound positions and variables in all free positions. By REF , MATH. This and REF imply REF . Suppose MATH is a variable in MATH occurring in a position MATH of type MATH, and MATH also occurs in MATH. By REF , the latter occurrence of MATH is in a bound position of type MATH, and is the only occurrence of MATH in MATH. Let MATH be the set of positions where MATH occurs in MATH, and let MATH be the set of terms occurring in MATH in positions in MATH. Then MATH is a set of variable-disjoint, flat terms. Therefore their most general common instance MATH is a flat term and MATH is type-consistent with respect to MATH. 
Moreover, since MATH is linear, we have MATH and therefore it follows that MATH is a linear vector of type-consistent terms. This and REF imply REF . CASE: Defining MATH it follows that MATH. By REF , the resolvent of MATH and MATH is MATH-robustly typed. By REF , we have MATH.
cs/0006033
For simplicity assume that MATH and each clause body do not contain two identical atoms. Let MATH, MATH and MATH be a delay-respecting derivation of MATH. The idea is to construct an NAME MATH of MATH such that whenever MATH uses a clause MATH, then MATH uses the corresponding clause MATH in MATH. It will then turn out that if MATH is finite, MATH must also be finite. We call an atom MATH resolved in MATH at MATH if MATH occurs in MATH but not in MATH. We call MATH resolved in MATH if for some MATH, MATH is resolved in MATH at MATH. Let MATH and MATH. We construct an NAME MATH of MATH showing that for each MATH the following hold: CASE: If MATH is an atom in MATH that is not resolved in MATH, then MATH for all MATH. CASE: Let MATH be a variable such that, for some MATH, MATH. Then MATH is either a variable or MATH. We first show these properties for MATH. Let MATH be an atom in MATH that is not resolved in MATH. Since MATH, MATH. Furthermore, by REF and since MATH is not resolved in MATH, we have MATH for all MATH. Thus REF holds. REF holds because MATH. Now assume that for some MATH, MATH is defined, MATH is not empty, and REF hold. Let MATH be the leftmost atom of MATH. We define a derivation step MATH with MATH as the selected atom, and show that REF hold for MATH. CASE: MATH is resolved in MATH at MATH for some MATH. Consider the simply typed clause MATH corresponding to the uniquely renamed clause (using the same renaming) used in MATH to resolve MATH. Since MATH is resolved in MATH at MATH, MATH is non-variable in all bound input positions. Thus each bound input position of MATH must be filled by a non-variable term or a variable MATH such that MATH for some MATH. Moreover, MATH must have non-variable terms in all bound input positions since MATH is well typed. Thus it follows by REF that in each bound input position, MATH has the same top-level functor as MATH, and since MATH has flat terms in the bound input positions, there is an MGU MATH of MATH and MATH. 
We use MATH for the step MATH. We must show that REF hold for MATH. Consider an atom MATH in MATH other than MATH. By REF , MATH. Thus for the atoms in MATH that occur already in MATH, REF is maintained. Now consider an atom MATH in MATH that is not resolved in MATH. By REF , MATH. Since MATH is not resolved in MATH, for all MATH we have that MATH occurs in MATH and thus by REF , MATH. Thus REF follows. REF holds since it holds for MATH and MATH is resolved using the same clause head as in MATH. CASE: MATH is not resolved in MATH. Since MATH is non-speculative, there is a (uniquely renamed) clause MATH in MATH such that MATH and MATH have an MGU MATH. We use MATH for the step MATH. We must show that REF hold for MATH. Consider an atom MATH in MATH other than MATH. By REF , MATH. Thus for the atoms in MATH that occur already in MATH, REF is maintained. Now consider an atom MATH in MATH. Clearly MATH is not resolved in MATH. Since MATH for all MATH and since by REF , we have MATH, REF holds for MATH. By REF for MATH, we have MATH for all MATH. By REF , we have MATH. Thus we have MATH for all MATH. Moreover, REF holds for MATH. Thus REF holds for MATH. Since this construction can only terminate when the query is empty, either MATH is empty for some MATH, or MATH is infinite. Thus we show that if MATH is finite, then every atom resolved in MATH is also resolved in MATH. So let MATH be finite of length MATH. Assume for the sake of deriving a contradiction that MATH is the smallest number such that the atom MATH selected in MATH is never selected in MATH. Then MATH since MATH and MATH are permutations of each other and all atoms in MATH are eventually selected in MATH. Thus there must be a MATH such that MATH does not occur in MATH but does occur in MATH. Consider the atom MATH selected in MATH. Then by the assumption that MATH was minimal, MATH must be the selected atom in MATH for some MATH. 
Hence MATH must occur in MATH, since the clause used to resolve MATH in MATH is a simply typed clause corresponding to the clause used to resolve MATH in MATH. Thus MATH must occur in MATH, contradicting that MATH terminates with the empty query. Thus MATH can only be infinite if MATH is also infinite.
cs/0006033
In this proof, by a MATH-step we mean a MATH-step, for some MATH; likewise we define a MATH-step. By REF , no MATH-step can instantiate any descendant of MATH or MATH. Thus the MATH-steps can be disregarded, and without loss of generality, we assume MATH is empty. Suppose MATH is a delay-respecting derivation for MATH containing only finitely many MATH-steps. All MATH-steps are contained in a finite prefix of MATH. Moreover, by REF , no MATH-step can instantiate any descendant of MATH. Therefore, we can repeatedly apply the Switching Lemma CITE to this prefix of MATH to obtain a delay-respecting derivation MATH such that MATH contains only MATH-steps and MATH contains only MATH-steps. Now construct the delay-respecting derivation MATH by removing the prefix MATH in each query in MATH. By REF , MATH is robustly typed. Thus by REF , there exists a substitution MATH such that MATH is robustly typed, and MATH, where MATH is the set of variables occurring in the output arguments of MATH. By REF , no MATH-step in MATH, and hence no derivation step in MATH, can instantiate a variable in MATH. Since MATH, it thus follows that we can construct a delay-respecting derivation MATH by applying MATH to each query in MATH. Since MATH is a robustly typed query and MATH is robust, MATH is finite. Therefore MATH, MATH, and finally MATH are finite.
cs/0006033
If MATH is an atom using a predicate in MATH such that the set MATH is non-empty and bounded, we define MATH. Thus, for each atom MATH and substitution MATH such that MATH and MATH are defined MATH . To measure the size of a query, we use the multiset containing the level of each atom whose predicate is in MATH. The multiset is formalised as a function MATH, which takes as arguments a query and a natural number: MATH . Note that if a query contains several identical atoms, each occurrence must be counted. We define MATH if and only if there is MATH such that MATH and MATH for all MATH. Intuitively, there is a decrease when an atom in a query is replaced with a finite number of smaller atoms. All descending chains with respect to MATH are finite CITE. Let MATH be a robustly typed query. Then MATH and thus MATH is defined. Let MATH be a delay-respecting derivation of MATH. Since all predicates MATH with MATH are robust, it follows by REF that there cannot be an infinite suffix of MATH without any steps where an atom MATH such that MATH is resolved. We show that for all MATH, if the selected atom in MATH is MATH and MATH, then MATH, and otherwise MATH. This implies that MATH is finite, and, as the choice of the initial query MATH was arbitrary, MATH is robust. By REF , each position in each atom in MATH is filled with a type-consistent term. MATH . Consider MATH and let MATH be the clause, MATH the selected atom and MATH the MGU used in MATH. If MATH, then MATH for all MATH, and hence by REF and MATH it follows that MATH. Intuitively, the set of atoms that are measured by MATH does not change in this step (although the level of each atom might decrease). Now consider MATH. Since MATH is well-recurrent and because of MATH, we have MATH for all MATH with MATH. This together with REF implies MATH. Intuitively, one atom has been replaced by smaller atoms in this step, but apart from that, the set of atoms that are measured by MATH does not change.
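The multiset measure used in this proof can be made concrete. The sketch below is my own illustration, assuming the redacted level order is the usual order on natural numbers; the comparison implements the characterization used here (a decrease occurs exactly when, at the largest level where the two multisets differ, the earlier query has more atoms), which is the well-founded multiset extension of the order.

```python
from collections import Counter

def multiset_greater(m, n):
    # m, n: multisets of atom levels (natural numbers), as Counters.
    # m > n iff at the largest level where the counts differ, m has more.
    # This multiset extension of > on naturals is well-founded, so every
    # chain of decreases (one atom replaced by finitely many atoms of
    # strictly smaller level) must terminate.
    for level in sorted(set(m) | set(n), reverse=True):
        if m[level] != n[level]:
            return m[level] > n[level]
    return False

# Resolving one atom of level 3 into five atoms of level 2 is a decrease,
# even though the query grows: the measure compares levels, not lengths.
before = Counter([3, 1])
after = Counter([2, 2, 2, 2, 2, 1])
```

Note that `Counter` returns zero for absent levels, which matches the convention that an atom level not occurring in a query contributes no copies to the multiset.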
cs/0006033
Suppose there is an infinite left-based derivation MATH of MATH. Then letting MATH, MATH, we can write MATH where MATH are the queries in MATH where a non-robust atom is selected. By REF , there are infinitely many such queries. We derive a contradiction. By REF , the non-robust atoms in each query in MATH have only ancestors in safe positions. Thus by REF , for each MATH, where MATH is MATH-robustly typed, the MATH'th atom in MATH is selected in MATH. Now consider an arbitrary query MATH in MATH and assume it is MATH-robustly typed. By REF and the previous paragraph it follows that there exists a query in MATH that contains no descendants of the MATH'th atom MATH. Intuitively, for each query in MATH, the atom that is "leftmost according to its permutation" will eventually be resolved completely. By repeatedly applying the Switching Lemma to prefixes of MATH, we can construct a derivation MATH of MATH such that in each query MATH in MATH that is MATH-robustly typed, the MATH'th atom is selected using the same clause (copy) used in MATH. Note that this construction is possible by the previous paragraph. Also note that MATH is infinite. Now consider the derivation MATH obtained from MATH by replacing each MATH-robustly typed query MATH with MATH, that is, the robustly typed query corresponding to MATH. The derivation MATH is an NAME of MATH, and it is infinite. This is a contradiction.
cs/0006046
An assignment of colors to the original MATH-CSP instance's variables solves the problem if and only if, for each constraint, there is at least one pair MATH in the constraint that does not appear in the coloring. In our transformed problem, we choose one variable per original constraint, with the colors available to the new variable being these pairs MATH in the corresponding constraint in the original problem. Choosing such a pair in a coloring of the transformed problem is interpreted as ruling out MATH as a possible color for MATH in the original problem. We then add constraints to our transformed problem to ensure that for each MATH there remains at least one color that is not ruled out: we add one constraint for each MATH-tuple of colors of new variables (recall that each such color is a pair MATH) such that all colors in the MATH-tuple involve the same original variable MATH and exhaust all the choices of colors for MATH.
cs/0006046
Let the two colors allowed at MATH be MATH and MATH. Define MATH to be the set of pairs MATH is a constraint. We then add MATH to our set of constraints. Adding any such pair MATH does not reduce the space of solutions to the original problem, since if both MATH and MATH were present in a coloring there would be no possible color left for MATH. Conversely, if all such constraints are satisfied, one of the two colors for MATH must be available. Therefore we can now find a smaller equivalent problem by removing MATH, as shown in REF .
cs/0006046
It is safe to choose the colors MATH and MATH, since these two choices do not conflict with each other nor with anything else in the CSP instance.
cs/0006046
Any solution involving MATH can be changed to one involving MATH without violating any additional constraints, so it is safe to remove the option of coloring MATH with color MATH. Once we remove this option, MATH is restricted to two colors, and we can apply REF .
cs/0006046
We may safely assign color MATH to MATH and remove it from the instance.
cs/0006046
No coloring of the instance can use MATH, so we can restrict MATH to the remaining two colors and apply REF .
cs/0006046
If no constraint exists, we can solve the problem immediately. Otherwise choose some constraint MATH. Rename the colors if necessary so that both MATH and MATH have available the same three colors MATH, MATH, and MATH, and so that MATH. Restrict the colorings of MATH and MATH to two colors each in one of four ways, chosen uniformly at random from the four possible such restrictions in which exactly one of MATH and MATH is restricted to colors MATH and MATH REF . Then it can be verified by examination of cases that any valid coloring of the problem remains valid for exactly two of these four restrictions, so with probability MATH it continues to be a solution to the restricted problem. Now apply REF and eliminate both MATH and MATH from the problem.
cs/0006046
We perform the reduction above MATH times, taking polynomial time and giving probability at least MATH of finding a correct solution. If we repeat this method until a solution is found, the expected number of repetitions is MATH.
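The restart argument above is the standard geometric-distribution bound: if one full pass of the randomized reduction succeeds with some probability p, repeating it until success takes 1/p passes in expectation. A small, self-contained simulation, with a toy success probability of 1/4 standing in for the redacted bound:

```python
import random

def repeat_until_success(trial, rng):
    # Las Vegas wrapper: rerun a Monte Carlo trial until it reports success.
    # If each independent trial succeeds with probability p, the attempt
    # count is geometrically distributed with expectation 1/p.
    attempts = 0
    while True:
        attempts += 1
        result = trial(rng)
        if result is not None:
            return result, attempts

rng = random.Random(0)  # fixed seed for reproducibility
# Toy trial succeeding with probability 1/4, standing in for one full
# pass of the randomized sequence of reductions.
trial = lambda rng: "solution" if rng.random() < 0.25 else None
counts = [repeat_until_success(trial, rng)[1] for _ in range(10000)]
mean = sum(counts) / len(counts)  # should be close to 1/0.25 = 4
```

The simulation is only a sanity check of the expectation; the lemma's polynomial bound per pass is what makes the overall expected time polynomial times the number of repetitions.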
cs/0006046
If MATH and MATH are both three-color variables, then the instance can be colored if and only if we can color the instance formed by replacing them with a single four-color variable, in which the four colors are the remaining choices for MATH and MATH other than MATH REF . Thus in this case we can reduce the problem size by MATH, with no additional work. Otherwise, if there exists a coloring of the given instance, there exists one in which exactly one of MATH and MATH is given color MATH. Suppose first that MATH has four colors while MATH has only three. Thus we can reduce the problem to two instances, in one of which MATH is used (so MATH is removed from the problem, and MATH is removed as a choice for variable MATH, allowing us to remove the variable by REF ) and in the other of which MATH is used REF . The first subproblem has its size reduced by MATH since both variables are removed, while the second's size is reduced by MATH since MATH is removed while MATH loses one of its colors but is not removed. Thus the work factor is MATH. Similarly, if both are four-color variables, the work factor is MATH. For the given range of MATH, this second work factor is smaller than the first.
cs/0006046
The second constraint for MATH cannot involve MATH, or we would be able to apply REF . We choose either to use color MATH or to restrict MATH to avoid that color REF . If we use color MATH, we eliminate choice MATH and another choice on the other neighbor of MATH. If we avoid color MATH, we may safely use color MATH. In the worst case, the other neighbor of MATH has four colors, so removing one only reduces the problem size by MATH. There are four cases depending on the number of colors of MATH and MATH: If both have three colors, the work factor is MATH. If only MATH has four colors, the work factor is MATH. If only MATH has four colors, the work factor is MATH. If both have four colors, the work factor is MATH. These factors are all dominated by the one in the statement of the lemma.
cs/0006046
We assume that the instance has no color choice with only a single constraint, or we could apply one of REF to achieve the given work factor. We say that MATH implies MATH if there are constraints from MATH to every other color choice of MATH. If the target MATH of an implication is not the source of another implication, then using MATH eliminates MATH and at least two other colors, while avoiding MATH forces us to also avoid MATH REF . Thus, in this case we achieve work factor either MATH if MATH has three color choices, or MATH if it has four. If the target of every implication is the source of another, then we can find a cycle of colors each of which implies the next in the cycle REF . If no other constraints involve colors in the cycle (as is true in the figure), we can use them all, reducing the problem by the length of the cycle for free. Otherwise, let MATH be a color in the cycle that has an outside constraint. If we use MATH, we must use the colors in the rest of the cycle, and eliminate the (variable,color) pair outside the cycle constrained by MATH. If we avoid MATH, we must also avoid the colors in the rest of the cycle. The maximum work factor for this case is MATH, and arises when the cycle consists of only two variables, both of which have only three allowed colors. Finally, if the situation described in the lemma exists without forming any implication, then MATH must have four color choices, exactly two of which are constrained by MATH. In this case restricting MATH to those two choices reduces the size by at least MATH, while restricting it to the remaining two choices reduces the size by MATH, again giving work factor MATH.
cs/0006046
We can assume from REF that each constraint connects MATH to a different variable. Then if we choose to use color MATH, we eliminate MATH and remove a choice from each of its neighbors, either eliminating them or reducing their number of choices from four to three. If we don't use MATH, we eliminate that color only. So if MATH has four choices, the work factor is at most MATH, and if it has three choices and four or more constraints, the work factor is at most MATH.
cs/0006046
For convenience suppose that the four-color neighbor is MATH. We can assume MATH has only two constraints, else it would be covered by a previous lemma. Then, if MATH and MATH do not form a triangle with a third (variable,color) pair (REF , left), we choose either to use or avoid color MATH. If we use MATH, we eliminate MATH and the three adjacent color choices. If we avoid MATH, we create a dangling constraint at MATH, which we have seen in REF allows us to further subdivide the instance with work factor MATH in addition to the elimination of MATH. Thus, the overall work factor in this case is MATH. On the other hand, suppose we have a triangle of constraints formed by MATH, MATH, and a third (variable,color) pair MATH, as shown in REF , right. Then MATH and MATH are the only choices constraining MATH, so if MATH and MATH are both not chosen, we can safely choose to use color MATH. Therefore, we make three smaller instances, in each of which we choose to use one of the three choices in the triangle. We can assume from the previous cases that MATH has only three choices, and further its third neighbor (other than MATH and MATH) must also have only three choices or we could apply the previous case of the lemma. In the worst case, MATH has only two constraints and MATH has only three color choices. Therefore, the size of the subproblems formed by choosing MATH, MATH, and MATH is reduced by at least MATH, MATH, and MATH respectively, leading to a work factor of MATH. If instead MATH has four color choices, we get the better work factor MATH. For the given range of MATH, the largest of these work factors is MATH.
cs/0006046
Let MATH be the neighbor with two constraints. Note that (since the previous lemma is assumed not to apply) all neighbors of MATH have only three color choices. First, suppose MATH and MATH are not part of a triangle of constraints (REF , top). Then, if we choose to use color MATH we eliminate four variables, while if we avoid using it we create a dangling constraint on MATH which we further subdivide into two more instances according to REF . Thus, the work factor in this case is MATH. Second, suppose that MATH and MATH are part of a triangle with a third (variable,color) pair MATH, and that MATH has three constraints (REF , bottom left). Then (as in the previous lemma) we may choose to use one of the three choices in the triangle, resulting in work factor MATH. Finally, suppose that MATH, MATH, and MATH form a triangle as above, but that MATH has only two constraints (REF , bottom right). Then if we choose to use MATH we eliminate four variables, while if we avoid using it we create an isolated constraint between MATH and MATH. Thus in this case the work factor is MATH.
cs/0006046
Let MATH and MATH be variables in a small component MATH. Then each (variable,color) pair in MATH from variable MATH has exactly one constraint to a distinct (variable,color) pair from variable MATH, so the number of pairs from MATH equals the number of pairs from MATH. The assertions that each variable has the same number of pairs, and that the total number of pairs is a multiple of four, then follow.
cs/0006046
A component with MATH uses up all color choices for all four variables. Thus we may consider these variables in isolation from the rest of the instance, and either color them all (if possible) or determine that the instance is unsolvable. The remaining small components have MATH. Such a component may be drawn with the four variables at the corners of a square, and the top, left, and right pairs of edges uncrossed REF . If only the center two pairs were crossed, we would actually have two MATH components, and if any other two or three of the remaining pairs were crossed, we could reduce the number of crossings in the drawing by swapping the colors at one of the variables. Thus, the only possible small components with MATH are the one with all six pairs uncrossed, and the one with only one pair crossed. The first of these allows all four variables to be colored and removed, while in the other case there exist only three maximal subsets of variables that can be colored. (In the figure, these three sets are formed by the bottom two vertices, and the two sets formed by removing one bottom vertex). We split into instances by choosing to color each of these maximal subsets, eliminating all four variables in the component and giving work factor MATH.
cs/0006046
Choose some arbitrary pair MATH as a starting point, and perform a breadth first search in the graph formed by the pairs and constraints in the component. Let MATH be the first pair reached by this search where MATH is not one of the variables adjacent to MATH, let MATH be the grandparent of MATH in the breadth first search tree, and let the other three pairs be the neighbors of MATH. Then it is easy to see that MATH and its neighbors must use the same four variables as MATH and its neighbors, while MATH by definition uses a different variable.
cs/0006046
Let MATH, MATH, MATH, MATH, and MATH be a witness for the component. Then we distinguish subcases according to how many of the neighbors of MATH are pairs in the witness. CASE: If MATH has a constraint with only one pair in the witness, say MATH, then we choose either to use color MATH or to avoid it. If we use it, we eliminate four variables. If we avoid it, then we cause MATH to have only two constraints. If MATH is also constrained by one of MATH or MATH, we then have a triangle of constraints (REF , top left). We can assume without loss of generality that the remaining constraint from this triangle does not connect to a different color of variable MATH, for if it did we could instead use the same five variables in a different order to get a witness of this form. We then further subdivide into three more instances, in each of which we choose to use one of the pairs in the triangle, as in the second case of REF . This gives overall work factor MATH. On the other hand, if MATH and MATH are not part of a triangle (REF , top right), then (after avoiding MATH) we can apply the first case of REF , again achieving the same work factor. CASE: If MATH has constraints with two pairs in the witness (REF , bottom left), then choosing to use MATH eliminates four variables and causes MATH to dangle, while avoiding MATH eliminates a single variable. The work factor is thus MATH. CASE: If MATH has constraints with all three of MATH, MATH, and MATH (REF , bottom right), then choosing to use MATH also allows us to use MATH, eliminating five variables. The work factor is MATH. The largest of the three work factors arising in these cases is the first one, MATH.
cs/0006046
We split into subcases: CASE: Suppose the cycle passes through five consecutive distinct variables, say MATH, MATH, MATH, MATH, and MATH. We can assume that, if any of these five variables has four color choices, then this is true of one of the first four variables. Any coloring that does not use both MATH and MATH can be made to use at least one of the two colors MATH or MATH without violating any of the constraints. Therefore, we can divide into three subproblems: one in which we use MATH, eliminating three variables, one in which we use MATH, again eliminating three variables, and one in which we use both MATH and MATH, eliminating all five variables. If all five variables have only three color choices, the work factor resulting from this subdivision is MATH. If some of the variables have four color choices, the work factor is at most MATH, which is smaller for the given range of MATH. CASE: Suppose two colors three constraints apart on a cycle belong to the same variable; for instance, the sequence of colors may be MATH, MATH, MATH, MATH. Then any coloring can be made to use one of MATH or MATH without violating any constraints. If we form one subproblem in which we use MATH and one in which we use MATH, we get work factor at most MATH (the worst case occurring when only MATH has four color choices). CASE: Any long cycle which does not contain one of the previous two subcases must pass through the same four variables in the same order one, two, or three times. If it passes through two or three times, all four variables may be safely colored using colors from the cycle, reducing the problem with work factor one. And if the cycle has length exactly four, we may choose one of two ways to use two diagonally opposite colors from the cycle, giving work factor at most MATH. For the given range of MATH, the largest of these work factors is MATH.
cs/0006046
We form a bipartite graph, in which the vertices correspond to the variables and components of the instance. We connect a variable to a component by an edge if there is a (variable,color) pair using that variable and belonging to that component. Since each pair in a good three-component or small two-component is connected by a constraint to every other pair in the component, any solution to the instance can use at most one (variable,color) pair per component. Thus, a solution consists of a set of (variable,color) pairs, covering each variable once, and covering each component at most once. In terms of the bipartite graph constructed above, this is simply a matching. So, we can solve the problem by using a graph maximum matching algorithm to determine the existence of a matching that covers all the variables.
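The reduction to matching described above can be sketched directly: variables on the left, components on the right, and the instance is solvable iff a matching covers every variable. The following is a minimal, self-contained illustration using Kuhn's augmenting-path algorithm (the source does not specify which maximum matching algorithm is used; a faster method such as Hopcroft-Karp would also serve). The helper name `has_perfect_matching_on_left` is hypothetical:

```python
def has_perfect_matching_on_left(adj):
    """adj maps each variable (left vertex) to an iterable of the
    components (right vertices) it may be matched to.  Returns True iff
    some matching covers every variable, by repeatedly searching for an
    augmenting path from each variable in turn."""
    match = {}  # right vertex -> left vertex currently matched to it

    def try_augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            # v is free, or its current partner can be re-matched elsewhere
            if v not in match or try_augment(match[v], seen):
                match[v] = u
                return True
        return False

    return all(try_augment(u, set()) for u in adj)
```

For instance, variables x, y with x allowed in components C1, C2 and y only in C1 admit a covering matching, but two variables both restricted to C1 do not.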
cs/0006046
We employ a backtracking (depth first) search in a state space consisting of MATH-CSP instances. At each point in the search, we examine the current state, and attempt to find a set of smaller instances to replace it with, using one of the reduction lemmas above. Such a replacement can always be found in polynomial time by searching for various simple local configurations in the instance. We then recursively search each smaller instance in succession. If we ever reach an instance in which REF applies, we run a matching algorithm to test whether it is solvable. If so, we find a solution and terminate the search. If not, we backtrack to the most recent branching point of the search and continue with the next alternative at that point. A bound of MATH on the number of recursive calls in this search algorithm, where MATH is the maximum work factor occurring in our reduction lemmas, can be proven by induction on the size of an instance. The work within each call is polynomial and does not add appreciably to the overall time bound. To determine the maximum work factor, we need to set a value for the parameter MATH. We used NAME to find a numerical value of MATH minimizing the maximum of the work factors involving MATH, and found that for MATH the work factor is MATH. For MATH near this value, the two largest work factors are MATH (from REF ) and MATH (from REF ); the remaining work factors are below REF . The true optimum value of MATH is thus the one for which MATH. As we now show, for this optimum MATH, MATH, which also arises as a work factor in REF . Consider subdividing an instance of size MATH into one of size MATH and another of size MATH, and then further subdividing the first instance into subinstances of size MATH, MATH, and MATH. This four-way subdivision combines subdivisions of type MATH and MATH, so it must have a work factor between those two values. 
But by assumption those two values equal each other, so they also equal the work factor of the four-way subdivision, which is just MATH.
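The "true optimum" described above is the crossing point of two work-factor curves: the optimal parameter equalizes the two largest work factors. A hedged sketch of that equalization step, with generic functions standing in for the elided work-factor expressions (the helper name `equalize` and the bracketing interval are assumptions):

```python
def equalize(f, g, lo, hi, tol=1e-9):
    """Find eps in [lo, hi] with f(eps) == g(eps), assuming the
    difference f - g changes sign exactly once on the interval
    (bisection on the difference)."""
    assert (f(lo) - g(lo)) * (f(hi) - g(hi)) < 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if (f(lo) - g(lo)) * (f(mid) - g(mid)) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

With the two actual work factors substituted for f and g, the returned value is the optimal parameter, and the common work factor at that point gives the overall time bound.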
cs/0006046
Randomly choose a subset of four values for each variable and apply our algorithm to the resulting MATH-CSP problem. Repeat with a new random choice until finding a solvable MATH-CSP instance. The random restriction of a variable has probability MATH of preserving solvability, so the expected number of trials is MATH. Each trial takes time MATH. The total expected time is therefore MATH.
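The step from per-trial success probability to expected number of trials is just the mean of a geometric distribution: independent trials each succeeding with probability p take 1/p attempts on average. A small simulation illustrates this, with a hypothetical numeric value of p standing in for the elided probability:

```python
import random

def trials_until_success(p, rng):
    """Number of independent trials, each succeeding with probability p,
    up to and including the first success; geometrically distributed
    with mean 1/p."""
    n = 1
    while rng.random() >= p:
        n += 1
    return n

rng = random.Random(0)  # fixed seed for reproducibility
p = 0.2                 # hypothetical per-trial success probability
samples = 100_000
mean = sum(trials_until_success(p, rng) for _ in range(samples)) / samples
# mean is close to 1/p = 5
```

The same argument gives the theorem's bound: each trial costs the MATH-CSP running time, and on average MATH trials are needed.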
cs/0006046
Let the cycle MATH consist of vertices MATH, MATH, MATH, MATH. We can assume without loss of generality that it has no chords, since otherwise we could find a shorter cycle in MATH; therefore each MATH has a unique neighbor MATH outside the cycle, although the MATH need not be distinct from each other. Note that, if any MATH and MATH are adjacent, then MATH is REF-colorable iff MATH is; for, if we have a coloring of MATH, then we can color MATH by giving MATH the same color as MATH, and then proceeding to color the remaining cycle vertices in order MATH, MATH, MATH, MATH, MATH, MATH, MATH, MATH. Each successive vertex has only two previously-colored neighbors, so there remains at least one free color to use, until we return to MATH. When we color MATH, all three of its neighbors are colored, but two of them have the same color, so again there is a free color. As a consequence, if MATH has even length, then MATH is REF-colorable iff MATH is; for if some MATH and MATH are given different colors, then the above argument colors MATH, while if all MATH have the same color, then the other two colors can be used in alternation around MATH. The first remaining case is that MATH (REF , left). Then we divide the problem into two smaller instances, by forcing MATH and MATH to have different colors in one instance (by adding an edge between them, REF top right) while forcing them to have the same color in the other instance (by collapsing the two vertices into a single supervertex, REF bottom right). If we add an edge between MATH and MATH, we may remove MATH, reducing the problem size by three. If we give them the same color as each other, the instance is only colorable if MATH is also given the same color, so we can collapse MATH into the supervertex and remove the other two cycle vertices, reducing the problem size by four. Thus the work factor in this case is MATH. If MATH is odd and larger than three, we form three smaller instances, as shown in REF . 
In the first, we add an edge between MATH and MATH, and remove MATH, reducing the problem size by MATH. In the second, we collapse MATH and MATH, add an edge between the new supervertex and MATH, and again remove MATH, reducing the problem size by MATH. In the third instance, we collapse MATH, MATH, and MATH. This forces MATH and MATH to have the same color as each other, so we also collapse those two vertices into another supervertex and remove MATH, reducing the problem size by four. For MATH this gives work factor at most MATH. For MATH the subproblem with MATH vertices contains a triangle of degree-three vertices, and can be further subdivided into two subproblems of MATH and MATH vertices, giving the claimed work factor.
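The two instance transformations used throughout this argument, adding an edge (forcing two vertices to receive different colors) and collapsing two vertices into a supervertex (forcing them to receive the same color), can be sketched on an adjacency-set representation. The helper names are hypothetical; the source gives no implementation:

```python
def add_edge(adj, u, v):
    """Force u and v to receive different colors by making them adjacent."""
    adj[u].add(v)
    adj[v].add(u)

def collapse(adj, u, v):
    """Force u and v to receive the same color by merging v into u:
    v's neighbors become u's neighbors.  Assumes u and v are not
    adjacent, as in the lemma's use (adjacent vertices cannot share a
    color)."""
    for w in adj.pop(v):
        adj[w].discard(v)
        if w != u:
            adj[w].add(u)
            adj[u].add(w)
```

For example, collapsing the endpoints of a three-vertex path leaves a single edge between the supervertex and the middle vertex.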
cs/0006046
Suppose the subset forms a MATH-vertex tree, and let MATH be a vertex in this tree such that each subtree formed by removing MATH has at most MATH vertices. Then, if MATH is REF-colored, some two of the three neighbors of MATH must be given the same color, so we can split the instance into three smaller instances, each of which collapses two of the three neighbors into a single supervertex. This collapse reduces the number of vertices by one, and allows the removal of MATH (since after the collapse MATH has degree two) and the subtree connected to the third vertex. Thus we achieve work factor MATH where MATH and MATH. The worst case is MATH, achieved when MATH and the tree is a path.
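The splitting vertex described above is essentially a tree centroid: a vertex whose removal leaves components of at most half the tree's vertices (the exact bound is elided as MATH in the text, so halving is an assumption here). A minimal sketch of finding one by comparing subtree sizes; the helper name `centroid` is hypothetical:

```python
def centroid(tree):
    """Vertex of a tree (dict: vertex -> set of neighbors) whose removal
    leaves components of at most len(tree) // 2 vertices each.  Roots the
    tree arbitrarily, computes subtree sizes bottom-up, then checks each
    vertex's heaviest remaining component."""
    n = len(tree)
    root = next(iter(tree))
    order, parent, stack = [], {root: None}, [root]
    while stack:  # iterative DFS; children appear after their parents
        u = stack.pop()
        order.append(u)
        for w in tree[u]:
            if w != parent[u]:
                parent[w] = u
                stack.append(w)
    size = {}
    for u in reversed(order):  # children are processed before parents
        size[u] = 1 + sum(size[w] for w in tree[u] if w != parent[u])
    for u in order:
        # components after removing u: each child subtree, plus the rest
        heaviest = max([size[w] for w in tree[u] if w != parent[u]]
                       + [n - size[u]])
        if heaviest <= n // 2:
            return u
```

On a three-vertex path the middle vertex is returned; on a star, the center.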