quant-ph/0011046
Let MATH be the semi-density matrix MATH. From the first form it can be seen that it is a semi-density matrix; from the second form, that it is lower semicomputable. By REF, we have MATH. Since MATH, we have MATH. On the other hand, MATH.
quant-ph/0011046
Given a strong enumeration MATH, the sum MATH clearly defines the projector in a form from which the possibility of approximating it from below can be seen. Assume now that MATH is a projector and MATH is a sequence of elementary nonnegative operators approximating it. Note that for a nonnegative operator MATH, we have MATH iff MATH. Now for any of the MATH and any vector MATH, if MATH then MATH, which implies MATH and thus MATH. Hence the kernel of MATH contains the kernel of MATH, and hence the space of eigenvectors of MATH with nonnegative eigenvalues is contained in MATH. This shows that from MATH, MATH we can build up a sequence MATH of orthogonal vectors spanning MATH.
quant-ph/0011046
Assume MATH and expand MATH in the basis MATH as MATH. By the assumption, we have MATH. Let MATH be the first MATH with MATH. Since MATH, we have MATH. Also, MATH, hence MATH, which proves REF. Now assume MATH; then we have MATH. Let MATH be the first MATH with MATH. Since MATH, we have MATH. Also, MATH, hence MATH.
quant-ph/0011046
We start from the end of the proof of REF. We use REF with MATH, and note that one term, say MATH, of the sum MATH must be at least MATH. We would be done if we could upper-bound MATH appropriately. It would seem that MATH can be bounded approximately by MATH since MATH. But unfortunately, neither the vectors MATH nor their sequence is computable; so an approximation is needed. Let MATH be the largest binary number of length MATH smaller than MATH. Then there is a program MATH of length MATH computing a lower approximation MATH of MATH such that MATH. Indeed, let MATH specify the binary digits of MATH and then compute an approximation of MATH that exceeds MATH. The condition MATH implies MATH. We can now proceed with MATH as with MATH. We compute eigenvectors MATH for MATH, and find an elementary vector MATH with MATH. The extra MATH in MATH comes from the program MATH above.
quant-ph/0011046
Let MATH; then MATH. Hence MATH, and therefore MATH, giving MATH.
quant-ph/0011046
For each MATH, let MATH be the projection to the space MATH of MATH-length inputs. The operator MATH is a semicomputable semi-density matrix on the set of all inputs. For each time MATH, the semi-density matrix MATH is semicomputable. As it is increasing in MATH, the limit MATH is a semicomputable semi-density matrix, and therefore MATH. Let MATH; then MATH, hence MATH, hence for each MATH we have MATH. Since also MATH, we can assert, with MATH, that MATH. Assume that MATH. Then by REF, if MATH has the eigenvalue decomposition MATH, then MATH and MATH. The matrix MATH can be written as MATH. Hence, with MATH, and using REF, MATH. In the last inequality, the first two terms come from the first term of the previous sum, while MATH comes from the rest of the terms.
quant-ph/0011046
For the second inequality, we can use REF with MATH and MATH. The oracle MATH allows us to compute the space MATH with arbitrary precision. Then our quantum Turing machine can simply map the space of MATH-length qubit strings into the (approximate) MATH. Similarly, for the first inequality, if MATH is computable then we can compute the subspaces corresponding to MATH with arbitrary precision.
quant-ph/0011046
Straightforward.
quant-ph/0011046
See CITE.
quant-ph/0011046
Let MATH; then MATH is a density matrix, and hence by REF, MATH. It follows that MATH. On the other hand, since MATH, the monotonicity of the logarithm gives MATH, which gives the other inequality.
quant-ph/0011046
Direct computation.
quant-ph/0011046
The density matrix MATH over the space MATH is lower semicomputable, therefore REF follows. Hence MATH, which gives REF for MATH. For MATH, note that by the monotonicity of the logarithm, identity REF implies MATH. Taking the expectation (multiplying by MATH on the left and MATH on the right) gives the desired result.
quant-ph/0011046
Let MATH. Then MATH is a semicomputable semi-density matrix over MATH and thus MATH. At the same time, for any fixed vector MATH, the matrix MATH is a lower semicomputable semi-density matrix, hence MATH. Taking the partial trace gives MATH. This proves REF, which implies the inequality for MATH. Let MATH be any orthogonal basis of MATH with MATH. Then we have MATH, which proves MATH. Taking logarithms and noting that MATH, we get REF, which proves the inequality for MATH.
quant-ph/0011046
The upper bound follows from the fact that MATH and from REF. For simplicity, let us write, for the moment, MATH. For the lower bound, let us first set MATH. We have MATH for all states MATH. Let MATH be the projection to MATH. Let MATH be the uniform distribution on the unit sphere in MATH. Then MATH is a density matrix over MATH. It commutes with all unitary transformations of the form MATH, and therefore, according to REF, MATH. Integrating REF by MATH we get MATH. Taking the negative logarithm, we get the lower bound on MATH.
quant-ph/0011046
We can restrict ourselves to matrices MATH with MATH. Then, with MATH, MATH, MATH, where MATH is the transpose of MATH (without conjugation). By singular value decomposition (see CITE), every matrix can be written in the form MATH, where MATH is a nonnegative diagonal matrix and MATH are unitary transformations. If the elements of MATH are all distinct, positive, and in decreasing order, then MATH are unique. In this case, clearly, if MATH is symmetric then MATH. This can be generalized to the case when the elements of MATH are not all positive and distinct, using, for example, limits. Thus, MATH. This gives MATH. As MATH runs through all possible vectors with MATH, so does MATH. Let MATH be the largest element on the diagonal of MATH; then MATH. MATH since MATH. The maximum of MATH is achieved by the element MATH, and then it is MATH.
quant-ph/0011046
Using the notation of REF , let MATH be the subspace of dimension MATH of vectors MATH on which the minimum MATH is achieved. NAME MATH, let MATH be the semi-density matrix defined in the proof of REF . Similarly to REF we have, for any MATH: MATH . Note that MATH for some MATH, hence by REF we have MATH, hence the last term of the right-hand side is MATH.
quant-ph/0011046
The reasoning of REF implies that MATH lower-bounds the left-hand side in the above lemma. Thus, MATH .
quant-ph/0011046
The proof of the existence of a universal test is similar to the proof of REF. The proof of MATH is similar to the one showing MATH in REF. Let us prove MATH. To see that MATH is lower semicomputable, note that, as direct computation shows, for any operator MATH the function MATH is monotonic on the set of self-adjoint operators MATH with respect to the relation MATH. By the cyclic property of the trace, we also have MATH. This proves MATH; it remains to prove that MATH. This is equivalent to MATH. But the left-hand side is a lower semicomputable nonnegative definite matrix whose trace is MATH, again due to the cyclic property of the trace. Therefore, by the defining property of MATH, it is MATH.
quant-ph/0011046
Let MATH be all programs of length MATH for which MATH. Then MATH. Let MATH be the set of elements of MATH orthogonal to all vectors of the form MATH.
quant-ph/0011046
We have, for MATH: MATH. So we need to estimate MATH. The method used (also called the ``NAME'' method) works for any twice-differentiable function with a single maximum. Let MATH; then it can be checked that MATH, MATH, MATH for MATH. The NAME expansion around MATH gives, for MATH: MATH, where MATH. Hence, since MATH is increasing, we have for MATH: MATH. On the other hand, by REF, MATH, showing MATH. Hence MATH.
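The Laplace-type estimate used here (a second-order Taylor expansion around a single maximum of a twice-differentiable function) can be checked numerically. The sketch below is only illustrative: the function `f`, the interval, and the parameter `n` are hypothetical stand-ins, since the actual expressions are elided in this excerpt.

```python
import math

def f(x):
    # Hypothetical twice-differentiable function with a single maximum at x0 = 1.
    return -(x - 1.0) ** 2

def riemann(n, a=0.0, b=2.0, steps=200_000):
    # Midpoint Riemann sum of exp(n * f(x)) over [a, b].
    h = (b - a) / steps
    return h * sum(math.exp(n * f(a + (k + 0.5) * h)) for k in range(steps))

def laplace(n, x0=1.0, f2=-2.0):
    # Laplace approximation: exp(n * f(x0)) * sqrt(2*pi / (n * |f''(x0)|)).
    return math.exp(n * f(x0)) * math.sqrt(2 * math.pi / (n * abs(f2)))

n = 200
exact, approx = riemann(n), laplace(n)
# The two values agree to within a small relative error for large n.
assert abs(exact - approx) / exact < 1e-3
```

For larger `n` the relative error shrinks, which mirrors the role of the second-order expansion in the bound above.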
quant-ph/0011046
We view MATH as a MATH-dimensional Euclidean space. Assume MATH. If MATH is the angle between MATH and MATH, then this means MATH, giving MATH. For a fixed MATH, the relative volume (with respect to MATH) of the set of vectors with MATH is therefore, by REF, MATH. Let MATH be all programs of length MATH for which MATH. Then MATH. The volume of all vectors MATH that are close in the above sense to at least one of the vectors MATH is thus MATH.
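The volume-fraction bound used here — vectors within a fixed angle of a given direction occupy a tiny spherical cap in high dimension — can be illustrated by a Monte Carlo estimate. The dimensions, threshold, and trial counts below are hypothetical choices for illustration only.

```python
import math
import random

random.seed(0)

def random_unit_vector(d):
    # Uniform direction on the sphere via normalized Gaussian coordinates.
    v = [random.gauss(0.0, 1.0) for _ in range(d)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def cap_fraction(d, cos_threshold, trials=20000):
    # Fraction of uniformly random unit vectors whose inner product with
    # the fixed basis vector e1 exceeds cos_threshold (a spherical cap).
    hits = sum(1 for _ in range(trials)
               if random_unit_vector(d)[0] > cos_threshold)
    return hits / trials

# In high dimension, the cap around any fixed vector occupies a tiny
# fraction of the sphere (concentration of measure); in low dimension
# the same cap is a sizable fraction.
small = cap_fraction(d=50, cos_threshold=0.5)
large = cap_fraction(d=3, cos_threshold=0.5)
assert small < 0.01 < large
```

The union bound over the finitely many programs in the proof then multiplies this small cap fraction by the number of vectors.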
quant-ph/0011046
According to REF , there is a subspace MATH of MATH, of dimension MATH with the property that for all MATH, for all MATH we have MATH. Let MATH. We can apply REF to this subspace MATH of dimension MATH, and obtain that for a certain constant MATH, the volume fraction of vectors with MATH is MATH . If MATH is large this is smaller than REF, so there are states MATH with MATH and MATH. For these, clearly MATH .
quant-ph/0011067
The first step is a standard setup used in quantum algorithms. The only difference is that MATH evaluates to zero in one position; in this case, just treat it as a one. After this, the state is exponentially close to the state shown. Recall that the NAME symbol MATH is zero when MATH, so one amplitude is zero. The result of applying the NAME transform is (where we replace MATH with MATH) MATH. Factoring out the MATH term, using the change of variable MATH, and using the facts that MATH and MATH, we have MATH. So we are left to evaluate MATH, which is the NAME sum MATH, and is MATH if MATH and MATH if MATH. Hence, up to a global constant, which we can ignore, the state follows.
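The character sum evaluated at the end behaves like a quadratic Gauss sum, whose magnitude over an odd prime modulus is the square root of the modulus. The stdlib sketch below verifies this for a few small primes (hypothetical choices); the Legendre symbol is computed by Euler's criterion.

```python
import cmath
import math

def legendre(a, p):
    # Legendre symbol (a|p) for odd prime p, via Euler's criterion:
    # a^((p-1)/2) mod p is 1 for residues, p-1 for non-residues, 0 if p | a.
    r = pow(a % p, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def gauss_sum(p):
    # G(p) = sum_x (x|p) * exp(2*pi*i*x/p); the x = 0 term vanishes,
    # mirroring the single zero amplitude mentioned in the proof.
    return sum(legendre(x, p) * cmath.exp(2j * cmath.pi * x / p)
               for x in range(p))

for p in (7, 11, 13):   # small odd primes, chosen for illustration
    assert abs(abs(gauss_sum(p)) - math.sqrt(p)) < 1e-9
```

The value of the sum is a square root of the modulus up to a phase, which is exactly why it can be factored out as a global constant.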
quant-ph/0011067
We start with the uniform superposition of MATH and calculate the function value MATH for each element: MATH. Next, we measure whether the rightmost value is non-zero. If this is the case, which happens with probability MATH (where MATH is NAME's phi function, obeying MATH), the state has collapsed to the superposition: MATH. Otherwise, we simply try the same procedure again. (The success probability MATH is lower bounded by MATH, see CITE, hence we can expect to be successful after MATH trials.) We continue with the reduced state by changing the phase of MATH to MATH and uncomputing the function value again, giving MATH. Let MATH be the prime decomposition of MATH such that MATH. Using NAME's algorithm CITE, we can determine these factors efficiently. Because MATH, we can just consider each MATH component separately (with MATH, MATH, et cetera). Hence, by performing the `inverse NAME remainder' map MATH, we obtain the state MATH. But now we use REF on each factor to get MATH, after which the NAME remainder theorem gives us the answer MATH.
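The final step recombines the per-factor results via the remainder theorem. A stdlib sketch of that recombination, for hypothetical pairwise coprime moduli (the modular inverse uses the three-argument `pow` with exponent -1, Python 3.8+):

```python
from math import prod

def crt(residues, moduli):
    # Chinese-remainder recombination: find x mod prod(moduli) with
    # x = r_i (mod m_i) for pairwise coprime moduli m_i.
    n = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        q = n // m
        # pow(q, -1, m) is the inverse of q modulo m.
        x += r * q * pow(q, -1, m)
    return x % n

# Split 17 modulo 3 * 5 * 7 = 105 into its components, then recombine.
moduli = [3, 5, 7]
residues = [17 % m for m in moduli]
assert crt(residues, moduli) == 17
```

The `inverse' map in the proof is the left-to-right direction of this bijection; the recombination above is its inverse.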
quant-ph/0011067
Let MATH be the state after the setup in REF and let MATH be the repeated version in REF, where MATH is the normalizing constant. We can relate the distributions induced by NAME sampling MATH and MATH using the discussion in REF. If MATH, then REF implies that MATH is uniformly distributed over MATH, and we would be done, since the denominator returned by continued fractions is MATH in this case. However, this will still be the case even if MATH. If MATH is a multiple of MATH and the NAME transform of MATH is MATH, then the NAME transform of MATH is MATH, so we get what we want. If MATH is not a multiple but is large enough, the distributions as discussed in REF are MATH-close.
quant-ph/0011067
First, we note that we can rewrite the output as MATH by substituting MATH with MATH in the summation and using the fact that MATH for all MATH. The amplitudes between the square brackets depend on MATH in the following way. First, we consider the case when MATH is co-prime to MATH, that is, MATH, so that there also exists an inverse MATH. We then see that MATH, where we used the substitution MATH and the multiplicativity of the NAME symbol. Next, we look at the case where MATH and MATH have a common non-trivial factor MATH. Write MATH and MATH; we know that MATH and MATH are co-prime (because MATH is square-free). The NAME remainder theorem tells us that there is a bijection between the elements MATH and the coordinates MATH, which also establishes a one-to-one mapping between MATH and MATH. This allows us to rewrite the expression as follows: MATH. Because MATH is odd and square-free, MATH, and hence the above term equals zero. This concludes the proof of the lemma.
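The two properties the argument leans on — multiplicativity of the symbol across the factorization, and its vanishing when the argument shares a factor with the modulus — can be checked with a small stdlib implementation of the Jacobi symbol (the standard binary algorithm based on quadratic reciprocity); the modulus below is a hypothetical example.

```python
from math import gcd

def jacobi(a, n):
    # Jacobi symbol (a|n) for odd n > 0, via the binary algorithm.
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):        # (2|n) = -1 iff n = 3, 5 (mod 8)
                result = -result
        a, n = n, a                    # quadratic reciprocity: swap ...
        if a % 4 == 3 and n % 4 == 3:  # ... and flip if both = 3 (mod 4)
            result = -result
        a %= n
    return result if n == 1 else 0

n = 15  # hypothetical modulus: odd, square-free, composite (3 * 5)
# Multiplicativity across the factorization: (a|3) * (a|5) = (a|15).
assert all(jacobi(a, 3) * jacobi(a, 5) == jacobi(a, n) for a in range(n))
# The symbol vanishes exactly when a shares a factor with n.
assert all((jacobi(a, n) == 0) == (gcd(a, n) > 1) for a in range(n))
```

The second assertion is the vanishing used in the non-coprime case of the proof.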
quant-ph/0011067
Assume that the mapping MATH can be computed in polynomial time. First apply this map, and then compute the NAME transform over MATH. This gives us the final state MATH. We will now show that the map MATH is reversible. Let MATH. MATH is a linear functional since MATH is, so if MATH then MATH is the zero vector. We will show that MATH is not the zero vector except for MATH. Suppose MATH is the zero vector. Since MATH is not the zero map, choose MATH such that MATH. Choose MATH such that MATH. Then MATH, since MATH for all MATH. But this is a contradiction by the choice of MATH. So MATH is one-to-one. We will now show that the map is computable in polynomial time. It is enough if MATH can be computed from MATH. But the equations MATH, MATH, ..., MATH are MATH linear equations in MATH unknowns, and the values MATH and MATH for all MATH are known, so the coefficients MATH of MATH can be solved for using linear algebra.
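The last step recovers the coefficients of a linear functional from finitely many evaluations by solving a linear system. As a stand-in (the actual functional is elided in this excerpt), the sketch below recovers polynomial coefficients from point evaluations with exact Gaussian elimination over the rationals.

```python
from fractions import Fraction

def solve(matrix, rhs):
    # Gauss-Jordan elimination with exact rational arithmetic: solve A x = b.
    n = len(matrix)
    a = [[Fraction(v) for v in row] + [Fraction(b)]
         for row, b in zip(matrix, rhs)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if a[r][col] != 0)
        a[col], a[pivot] = a[pivot], a[col]
        for r in range(n):
            if r != col and a[r][col] != 0:
                factor = a[r][col] / a[col][col]
                a[r] = [x - factor * y for x, y in zip(a[r], a[col])]
    return [a[i][n] / a[i][i] for i in range(n)]

# Hypothetical unknown functional: f(x) = 3 + 2x - x^2, known only
# through its values at a few points.
coeffs = [3, 2, -1]
points = [0, 1, 2]
values = [sum(c * x ** k for k, c in enumerate(coeffs)) for x in points]

# Three evaluations give three linear equations in the three coefficients.
vandermonde = [[x ** k for k in range(3)] for x in points]
assert solve(vandermonde, values) == coeffs
```

As in the proof, the number of evaluations matches the number of unknowns, so the system determines the coefficients uniquely.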
quant-ph/0011067
For the first step we create, with one call to MATH, the superposition MATH, where MATH denotes a `dummy state'. Next, we measure whether the rightmost bit is zero. If this is the case (probability MATH), the state has collapsed to MATH, which tells us the value MATH immediately. Otherwise, we are left with the superposition of the entries MATH with MATH and the dummy state. This enables us to create the proper phases MATH and uncompute (with a second MATH query) the rightmost bit, which we will ignore from now on. At REF, we apply the NAME transform to the state, yielding MATH. We rewrite this expression as follows: replace MATH with MATH, and use the multiplicativity of MATH and the linearity of the trace in MATH. This proves the validity of REF.
quant-ph/0011067
Every multiplicative group MATH has a generator MATH with period MATH. Use this in combination with the proof method of the preceding paragraphs.
quant-ph/0011067
Let MATH be the decomposition of MATH into its prime factors. The definition of the NAME symbol in combination with the NAME remainder theorem yields the equality MATH. By the previous lemma we know that each MATH is zero, hence the above product is zero as well.
cond-mat/0012045
Upon substituting MATH we find MATH. The sequences MATH and MATH are bounded, so there exist numbers MATH and MATH such that MATH and MATH for all MATH. The sequence MATH converges to MATH, so for any MATH there exists a MATH such that for all MATH: MATH. We now choose MATH such that for all MATH, MATH and MATH. Then we find for all MATH: MATH. Hence the limit is as claimed.
cond-mat/0012045
The proof proceeds by induction. For MATH, the statement is trivially true. Suppose now that it is true for all MATH. Then MATH. The sequence MATH satisfies the conditions of the preceding lemma, application of which gives MATH. Hence the claim holds for MATH, and by induction it is proved for all MATH.
cs/0012001
The MATH data structure operations generate MATH collection attempts, and for each attempt finding count odd, segment MATH is chosen. Thus either MATH or MATH collection attempts occur at MATH, and the remaining collection attempts occur in some MATH, MATH. A similar observation holds for MATH, and applies recursively, which verifies the lemma.
cs/0012001
The proof is a straightforward verification that all operations (insert, delete, find) consult and modify only the active tree, and background activities do not remove items from the active tree. Thus the content of the search tree is defined as the set of items contained in the MATH nodes of the active tree.
cs/0012001
The proof relies on arguments using truncation and background cleaning to show that MATH operations are sufficient to visit, check, and correct all nodes of the initial base tree (including the possibility that successful insert operations increase the size of the base tree during this sequence of MATH operations).
cs/0012001
In the worst case, each of the MATH tree nodes in MATH has three children, so MATH of the MATH operations can split these initially present nodes. The usual maximum rate of splitting is once per MATH insert operations, and the lemma states both of these observations.
cs/0012001
To reason about progress over the course of a sequence of operations on the data structure, a type of variant function is useful. We use for each segment MATH a four-tuple MATH, where MATH, MATH, and MATH are defined above, and MATH is the number of collection attempts that have previously occurred in MATH (from the initial state to the current state). The evolution of this tuple for different types of operations is summarized as follows. MATH. In this table, MATH represents a nondeterministic number of collection attempts in MATH (ranging between zero and five) addressed by REF for a single operation. The types of operations in the table are REF a node split, REF a key insertion without a split, REF a find, unsuccessful insert or delete, or a successful insert or delete affecting segments below MATH but making no change at MATH, REF a node merge, and REF removal of a key without a merge. Only the transition REF increases the number of tree nodes in a segment, and we introduce simpler notation for this, summing MATH and MATH to make a triple: MATH. Although the additive factor MATH in the table above is indeterminate, REF does provide a lower bound for a sequence of operations. Since our goal is to establish sufficient free chain size, we consider the worst-case sequence of operations to deplete a free chain, namely a sequence of type REF transitions. For a sequence of MATH data structure operations starting from the initial state, with MATH, the result of the sequence satisfies MATH, where MATH, the number of type REF transitions in segment MATH, satisfies MATH by REF, and MATH by REF. Because we require only bounds, we can write MATH and MATH. Approximate bounds are expressed by MATH. Also, the initial value MATH lies in the range MATH by the definition of a REF tree, so a conservative bound on the free chain size is given by MATH. We now distinguish between two cases, MATH and MATH.
For the case MATH, recall that each segment MATH has approximately half of the elements of MATH, with MATH having about MATH elements (so that, if each element is a node with two items, the capacity MATH has been attained). It follows that within MATH operations, every element of every segment undergoes a collection attempt. Thereafter, each element of MATH is either on the free chain for MATH or is a node in the active tree. In such a state, an insert operation fails only if the active tree contains at least MATH items, which establishes the theorem's conclusion. For the case MATH, we examine a history of MATH operations (MATH). For bounding the free chain size, we then have MATH. An overestimate of the count of nodes and the size of the free chain is obtained by the substitution of MATH for MATH, given by MATH. Thus we see that the MATH collection attempts in MATH exceed the number of active tree nodes by at least MATH; this implies that after the MATH operations, at least MATH collection attempts occur outside of active tree nodes. Of course, some or all of these collection attempts could apply to elements already in the free chain. So, while not every collection attempt outside the active tree results in an increase in the free chain size, the MATH collection attempts do ensure a free chain size of at least MATH, less any elements consumed by splits during the period of these collection attempts. Since the number of elements consumed is MATH during this period, it follows that the free chain size is at least MATH. Thus, if MATH (the minimum needed to permit all the splits), then the size of the free chain after MATH operations is at least MATH in segment MATH. A conclusion of this analysis is that MATH operations at most triple the number of tree nodes in MATH, while multiplying the free list size by a factor of six. The analysis also shows that MATH is sufficient to supply all node allocation.
Hence, the result of applying MATH operations supplies sufficiently many free list elements for a subsequent sequence of MATH operations (because MATH is twice MATH).
cs/0012001
The analysis presented in the proof of REF holds for purposes of bounding the free chain size even when operations are not of type REF, which shows that after MATH operations every free chain either includes all non-tree nodes or has doubled in size.
cs/0012001
REF show stability for a tree in safe state. REF states that MATH operations suffice to reach a normal state, and REF implies that within MATH subsequent operations, the state is safe.
cs/0012004
CASE: Suppose MATH is a witness to the safety of MATH. There are two cases: CASE: Let MATH be an atomic code call condition of the form MATH; then by the definition of safety, MATH, where MATH, and either MATH is a root variable or MATH. Then there exist MATH, MATH, MATH, MATH, MATH such that MATH and MATH. But then MATH is dependent on each of the MATH, MATH, MATH, MATH, MATH by definition. Hence, there exist edges MATH. Therefore MATH, MATH, MATH, MATH, MATH precede MATH, hence MATH is also a topological sort of the cceg of MATH. CASE: If MATH is an equality/inequality of the form MATH op MATH, then at least one of MATH, MATH is a constant or a variable MATH such that MATH. Suppose at least one of MATH, MATH is a variable. Then there exists a MATH, MATH, such that MATH, as MATH. But then MATH is dependent on MATH by definition, and there exists an edge MATH in the cceg of MATH. Hence, MATH precedes MATH in the topological sort of the cceg. If both MATH and MATH are constants, then their nodes have in-degree REF in the cceg, and no code call condition needs to precede MATH in the topological sort order; that is, they are unrestricted. Therefore, MATH is also a topological sort of the cceg of MATH. CASE: Suppose MATH is a topological sort of the cceg of MATH. Let MATH be code call conditions such that there exist edges MATH in the cceg of MATH. Then, by definition, each MATH, MATH, depends on MATH. If MATH is an atomic code call condition of the form MATH, then MATH. As MATH, MATH, by definition of MATH, hence MATH. On the other hand, if MATH is an equality/inequality of the form MATH op MATH, then either MATH is a variable and MATH, where MATH, or MATH is a variable and MATH, where MATH, or both. But MATH. Hence, MATH. If both MATH and MATH are constants, then they are unrestricted in the topological sort. Therefore, MATH is also a witness to the safety of MATH.
cs/0012004
The proof is by induction on the structure of condition lists. CASE: REF are the cases when the condition list consists of MATH, where MATH and each of MATH, MATH is either a variable or a constant. We suppress the cases when both MATH are constants: the relation either holds (in that case we can eliminate MATH) or it does not (in that case we can eliminate the whole invariant). CASE: We have to consider terms of the form MATH (respectively, MATH) and distinguish the following cases. For each case we define expressions MATH such that MATH is equivalent to MATH. CASE: MATH is a constant MATH: Then MATH is a variable. We modify MATH by introducing a new variable MATH and adding the following ccc to all subexpressions of MATH containing MATH. We note that MATH now becomes an auxiliary variable and MATH is a base variable. MATH is defined to be the modified MATH just described. CASE: MATH is a constant MATH: Then MATH is a variable. We modify MATH by introducing a new variable MATH and adding the following ccc to all subexpressions of MATH containing MATH. Again, MATH becomes an auxiliary variable and MATH is a base variable. MATH is defined to be the modified MATH just described. CASE: Both MATH are variables: We modify MATH by introducing a new variable MATH and adding the following ccc to all subexpressions of MATH containing MATH. Again, MATH becomes an auxiliary variable and MATH is a base variable. MATH is defined to be the modified MATH just described. The case MATH is completely analogous: just switch MATH with MATH. Note that the above covers all possible cases, as any variable in the condition list must be a base variable (see REF). CASE: Analogous to the previous case; just replace MATH by MATH. CASE: If in MATH the term MATH is a variable, then we replace each occurrence of MATH in MATH by MATH. If MATH is a constant and MATH is a variable, replace each occurrence of MATH in MATH by MATH.
CASE: As the condition list is just a conjunction of the cases mentioned above, we can apply our modifications of MATH one after another. Once all modifications have been performed, we arrive at an equivalent formula of the form MATH .
cs/0012004
CASE: Let MATH be an invariant. We can assume that ic is in DNF: MATH. Thus we can write MATH as follows: MATH. Let MATH be any ground instance of MATH. If MATH, then either MATH evaluates to false in state MATH, or MATH is true in MATH. Assume that MATH evaluates to false; then each MATH has to be false in MATH. Hence, MATH. Assume MATH evaluates to true in MATH. Then there exists at least one MATH that evaluates to true in state MATH. Let MATH be the set of conjunctions that are true in MATH. As all other MATH evaluate to false, MATH. But MATH, hence MATH is true in MATH. As a result, MATH. Since each MATH is an ordinary invariant, the result follows from REF. CASE: Assume that MATH and suppose MATH. Then by REF, MATH. There exists at least one MATH which evaluates to true in MATH. But then MATH is true in state MATH. Hence, MATH.
cs/0012004
MATH . REF follows from REF . Note that it also holds for invariants of the form MATH because they can be written as two separate invariants: MATH and MATH. REF is immediate by the very definition.
cs/0012004
We show that the containment problem CITE in the relational model of data is an instance of the problem of checking implication between invariant expressions. The results then follow from REF and the fact that the containment problem in relational databases is well known to be undecidable. To be more precise, we use the results in CITE, where it has been shown that in the relational model of data, the containment of conjunctive queries containing inequalities is undecidable. It remains to show that our implication check problem between invariant expressions can be reduced to this problem. Let MATH be a code call that takes as input an arbitrary set of subgoals corresponding to the conjunctive query MATH and returns as output the result of executing MATH. Let MATH and MATH be arbitrary conjunctive queries which may contain inequalities. We define MATH. Then, clearly, MATH. Hence the implication check problem is also undecidable.
cs/0012004
Clearly, by REF, it suffices to prove the proposition for NAME. For an invariant expression, the set of all substitutions MATH such that MATH is ground is finite (because of our assumption about the finiteness of the domains of all datatypes). Thus, our atomic code call conditions MATH can all be seen as propositional variables. Therefore, using this restriction, we can view our formulae as propositional formulae, and a state corresponds to a propositional valuation. With this restriction, our problem is certainly in MATH, because computing MATH is nothing but evaluating a propositional formula (the valuation corresponds to the state MATH). Thus ``MATH for all MATH and all assignments MATH'' translates to checking whether a propositional formula is a tautology: a problem known to be in MATH. To show completeness, we use the fact that checking whether MATH is a logical consequence of MATH (where MATH is an arbitrary clause and MATH an arbitrary consistent set of clauses) is well known to be MATH-complete. We prove our proposition by a polynomial reduction of implication between atomic invariant expressions to this problem. Let MATH be an atomic invariant expression, that is, an atomic code call condition: it takes as input a set of clauses, and returns as output all valuations that satisfy that set of clauses. Let MATH denote the set of results of evaluating MATH on MATH with respect to a state MATH. Then MATH. Hence, checking whether an arbitrary atomic invariant expression MATH implies another atomic invariant expression MATH is MATH-hard.
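The membership argument — evaluating a propositional formula under every valuation and checking for a tautology — can be sketched directly. The formulas below are hypothetical examples standing in for the elided invariant expressions.

```python
from itertools import product

def is_tautology(formula, variables):
    # Brute-force check: the formula must evaluate to True under every
    # propositional valuation of its variables (exponentially many,
    # matching the coNP flavor of the problem).
    return all(formula(dict(zip(variables, values)))
               for values in product([False, True], repeat=len(variables)))

# A hypothetical implication check rephrased as a tautology:
# (p and q) -> p holds under every valuation ...
assert is_tautology(lambda v: (not (v["p"] and v["q"])) or v["p"], ["p", "q"])
# ... whereas p -> (p and q) fails under some valuation.
assert not is_tautology(lambda v: (not v["p"]) or (v["p"] and v["q"]),
                        ["p", "q"])
```

Here a valuation plays the role of a state, exactly as in the reduction above.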
cs/0012004
We translate each simple invariant to a predicate logic formula by induction on the structure of the invariant. CASE: For each MATH-ary code call MATH we introduce a MATH-ary predicate MATH. Note that we interpret MATH as a set of elements; the additional argument is used for containment in this set. CASE: We then replace each simple invariant expression MATH by the universal closure (with respect to all base variables) of the formula MATH. CASE: A simple ordinary invariant of the form MATH is translated into MATH, where MATH denotes the universal closure with respect to all remaining variables. CASE: A simple invariant of the form MATH is translated into the following MATH statements REF MATH, where MATH denotes the universal closure with respect to all remaining variables. Note that, according to the definition of a simple ordinary invariant and according to the definition of a code call condition (in front of REF), MATH and the MATH are conjunctions of equalities MATH and inequalities MATH, MATH, MATH, MATH, where MATH are real numbers or variables. The statement MATH is easily proved by structural induction on simple invariants and condition lists.
cs/0012004
We use the translation of REF. The assumption MATH expresses that there is a state MATH and a substitution MATH of the base variables in MATH such that MATH, and there is an object MATH such that MATH and MATH. As MATH entails MATH, MATH is not satisfied by MATH. Thus there is MATH such that MATH, and there is an object MATH with MATH and MATH. Now suppose MATH. Then we simply modify the state MATH (note that a state is just a collection of ground code call conditions) so that MATH. We do this for all MATH that are counterexamples to the truth of MATH. Because MATH, this modification does not affect the truth of MATH and MATH. But this is a contradiction to our assumption that MATH entails MATH. Thus we have proved MATH. Similarly, we can also modify MATH by changing the extension of MATH and guarantee that MATH holds in MATH. So we also get a contradiction as long as MATH. Therefore we have proved that MATH. Our second claim follows trivially from MATH and MATH.
cs/0012004
Let MATH and MATH. Then, by the computation performed by the Combine algorithm, the derived invariant has the following form: MATH, where MATH is determined by REF. If MATH, we are done: in this case, there is no state MATH satisfying a ground instance of MATH. We assume that we are given a state MATH of the agent that satisfies MATH, MATH and MATH. Let MATH and MATH be any ground instances of MATH and MATH. Then either MATH evaluates to false, or MATH is true in MATH. Similarly, either MATH is false or MATH is true in MATH. If either MATH or MATH evaluates to false, then MATH also evaluates to false, and MATH is also satisfied. Let us assume both MATH and MATH evaluate to true. Then so does MATH, and both MATH and MATH are true in MATH, as MATH satisfies both MATH and MATH. If MATH, then both MATH and MATH, MATH and MATH (in all states satisfying MATH and MATH). Then we have MATH, hence MATH is true in MATH, and MATH satisfies MATH. If MATH, then MATH (in all states satisfying MATH and MATH), and we have MATH, and MATH. As MATH is satisfied by any MATH that also satisfies MATH, MATH and MATH, we have MATH.
cs/0012004
Let MATH and MATH. Then, either Combine returns NIL or the derived invariant has the following form MATH . In the latter case, MATH, MATH and MATH as implied by the Combine algorithm. We assume that we are given a state MATH of the agent that satisfies both MATH and MATH. Let MATH and MATH be any ground instances of MATH and MATH. Then, either MATH evaluates to false, or MATH is true in MATH. Similarly, either MATH is false or MATH is true in MATH. We have four possible cases. CASE: Both MATH and MATH evaluate to false. Then MATH also evaluates to false, and MATH is also satisfied. CASE: MATH evaluates to false and MATH evaluates to true. Since MATH, MATH is true in MATH. Then MATH also satisfies MATH. CASE: MATH evaluates to true and MATH evaluates to false. In this case, MATH is true in MATH, since MATH satisfies MATH. Hence MATH also satisfies MATH. CASE: Both MATH and MATH evaluate to true. Again, since MATH satisfies both MATH and MATH, MATH is true in MATH and MATH is also satisfied.
cs/0012004
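Since the concrete invariant formulas are elided above (MATH placeholders), the four-case argument can at least be checked in a propositional analogue. The disjunctive shape `(p or q) -> (r or s)` for the combined invariant is an assumption made purely for illustration, not the algorithm's actual output:

```python
from itertools import product

def implies(a, b):
    """Material implication on Booleans."""
    return (not a) or b

# Exhaustive check over all 16 valuations, mirroring the four cases
# above: whenever both given implications hold, so does the combined,
# disjunctive-shaped one.
for p, q, r, s in product([False, True], repeat=4):
    if implies(p, r) and implies(q, s):
        assert implies(p or q, r or s)
```

The four branches of the proof correspond exactly to the four truth-value combinations of `p` and `q` that the loop ranges over.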
Suppose MATH and MATH. We need to show that MATH. By definition of MATH, there are five possible cases: CASE: MATH, hence MATH by definition of MATH. CASE: MATH. As MATH, MATH. Hence, MATH by definition of MATH. CASE: MATH = Combine REF where MATH. But then MATH as MATH. Hence, MATH. CASE: MATH = Combine REF where MATH. But then MATH as MATH. Hence, MATH. CASE: MATH = Combine REF where MATH. But then MATH as MATH. Hence, MATH.
cs/0012004
Let MATH. We need to show MATH. By the definition of MATH, there are three possible cases: CASE: MATH, then MATH by the definition of MATH. CASE: MATH, which is trivial. CASE: MATH = Combine(MATH) (or MATH = Combine REF or MATH = Combine(MATH)) such that MATH. There exists a smallest integer MATH (i=REF) such that MATH. Let MATH. Then, MATH. By definition of MATH and as MATH, MATH. Hence, MATH.
cs/0012004
Suppose MATH. Then, there exists a smallest integer MATH, such that MATH. The proof is by induction on MATH. Let the inductive hypothesis be defined as MATH, if MATH, then MATH. CASE: MATH, MATH, then there are four possible cases: CASE: MATH, hence MATH, REF MATH where MATH. As MATH, MATH. Then, by REF , MATH. Therefore, MATH. CASE: MATH = Combine REF where MATH. Since MATH, MATH. Then, by REF , MATH. Therefore, MATH. CASE: MATH = Combine REF where MATH. As MATH, MATH. Then, by REF , MATH. Therefore, MATH. CASE: MATH. Let MATH. Then, there exist MATH, such that MATH is derived by one of Combine, Combine or Combine operators. That is, either MATH, or MATH, or MATH. Because this is the only possibility, as MATH, by definition of MATH. By the inductive hypothesis MATH and MATH. By REF , MATH. Hence, MATH.
cs/0012004
The correctness of the system is obvious, as all rules have this property. The completeness follows by adapting the classical completeness proof of first-order logic and taking into account the special form of the inv-formulae. Let MATH be a formula that follows from MATH . Then the set MATH is unsatisfiable (because MATH). Therefore it suffices to show the following claim: given a set MATH of inv-formulae, whenever MATH, then MATH is satisfiable; the assumption MATH then leads to a contradiction. Therefore, taking into account REF , we can conclude that at least one inv-formula of the form MATH with MATH must be derivable. The claim can be shown by establishing that each consistent set MATH containing inv-formulae and their negations can be extended to a maximally consistent set MATH which contains witnesses. For such sets, the following holds: CASE: MATH implies MATH, REF for all MATH: MATH or MATH, and REF MATH implies that there is a term MATH with MATH. These properties induce in a natural way an interpretation which is a model of MATH.
cs/0012004
The proof is by reducing the statement into predicate logic using REF . We are then in a situation to apply REF . Note that the inference rules of REF act on inv-formulae exactly as CombineREF and CombineREF on invariants. Therefore there is a bijection between proofs in the proof system described in REF and derivations of invariants using CombineREF and CombineREF.
cs/0012004
We are reducing the statement to REF . We transform each invariant with MATH, into two separate invariants with MATH. If MATH is of the form MATH, we are done, because CASE: the set of transformed invariants is equivalent to the original ones, and CASE: although deriving invariants with MATH is possible (such invariants are contained in the set Taut and new ones will be generated by Combine and by Combine), for all such invariants we have also both their MATH counterparts (this can be easily shown by induction). Let's suppose therefore that MATH has the form MATH. We know that both MATH and MATH are entailed by MATH and we apply REF to these cases. We can assume without loss of generality that none of these two invariants is a tautology (otherwise we are done). Thus there are MATH (for MATH) and MATH (for MATH). We apply REF and get that MATH (respectively, MATH) has the form MATH (respectively, MATH). By symmetry MATH is equivalent (in fact, by using a deterministic strategy it can be made identical) to MATH. Thus by our Combine, there is also a derived invariant of the form MATH and this derived invariant clearly entails MATH (because MATH entails MATH and MATH entails MATH).
cs/0012004
CASE: The proof is by induction on the iteration of the while loop in the NAME algorithm. Let the inductive hypothesis be MATH if MATH is inserted into MATH in iteration i, then MATH. CASE: Base Step: For MATH, MATH, MATH, hence MATH. CASE: Inductive Step: Let MATH be inserted into MATH in iteration MATH, and MATH = Combine(MATH), where MATH and MATH are inserted into MATH at step MATH or earlier. Then, by the inductive hypothesis, MATH and MATH. By REF , MATH, hence MATH. CASE: First note that the NAME algorithm computes and returns MATH. The result follows from REF .
cs/0012005
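The while-loop structure appealed to in the proof above (derived invariants are inserted until nothing new is produced, and everything inserted is entailed by the seed) can be sketched generically. The helper below is hypothetical, not the algorithm itself: `combine` stands in for the Combine operator and may return `None` for the NIL case.

```python
def closure(seed, combine):
    """Saturate a finite set under a binary operator, mirroring the
    while loop above: combined items are inserted into the derived set
    until no new item appears.  'combine' may return None when two
    items do not combine (a stand-in for the NIL result)."""
    derived = set(seed)
    changed = True
    while changed:
        changed = False
        for a in list(derived):
            for b in list(derived):
                c = combine(a, b)
                if c is not None and c not in derived:
                    derived.add(c)
                    changed = True
    return derived
```

For instance, with `combine` chaining pairs `(a, b)` and `(b, c)` into `(a, c)`, the loop computes a transitive closure, and the inductive loop invariant of the proof is immediate: every inserted item is derivable from items inserted earlier.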
CASE: apply the definition with MATH ``reduced" to a solution. REF: because reduction operators are monotonic.
cs/0012005
CASE: apply the definition with MATH such that MATH. REF: because reduction operators are monotonic.
cs/0012005
Let MATH be the downward closure of MATH by MATH. Let MATH be a chaotic iteration of MATH from MATH with respect to MATH. Let MATH be the limit of the chaotic iteration. Let MATH denote: for each MATH, MATH. For each MATH, MATH, by induction: MATH. Assuming MATH, by monotonicity, MATH. MATH: There exists MATH such that MATH because MATH is a well-founded ordering. The run is fair, hence MATH is a common fix-point of the reduction operators, thus MATH (the greatest common fix-point).
cs/0012005
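A minimal executable sketch of a fair chaotic iteration, under the simplest instance of the setting above: the finite lattice of subsets ordered by inclusion, with monotone reduction operators that shrink their argument. A round-robin schedule is fair, so on a finite lattice the run stabilizes at the greatest common fixpoint below the start. The operators used in the usage example are assumptions for illustration.

```python
def chaotic_iteration(start, operators):
    """Fair chaotic iteration of reduction operators on the finite
    lattice of subsets: each operator maps a set to a subset of it and
    is monotone.  Round-robin scheduling makes the run fair; the loop
    stops exactly when a common fixpoint of all operators is reached."""
    current = frozenset(start)
    while True:
        previous = current
        for op in operators:      # round-robin: every operator recurs
            current = frozenset(op(current))
        if current == previous:   # common fixpoint reached
            return current
```

With, say, an operator keeping only even numbers and another keeping only numbers below 10, the limit is their greatest common fixpoint below the start, independently of the (fair) order of application.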
The last equivalence is proved by REF . About the first one: CASE: Let MATH be the chaotic iteration (with respect to the run MATH). There exists MATH such that MATH but MATH. We define a tree MATH which is an explanation for MATH. MATH is inductively defined as follows: CASE: MATH; CASE: the root of the tree MATH is labeled by MATH; CASE: (we have previously observed that MATH because MATH and, for each MATH, MATH) the deduction rule used to connect the root to its children, which are labeled by the MATH, is MATH; CASE: the immediate subtrees of MATH are the MATH for MATH. REF: let us consider a numbering MATH of the nodes of the explanation such that the traversal according to the numbering from MATH to MATH corresponds to a breadth first search algorithm. For each MATH, let MATH be the name of the rule which links the node MATH to its children, and let MATH be the prefix of every iteration with respect to a run which starts by MATH. By induction we show that MATH, so MATH for every iteration whose run starts by MATH.
cs/0012008
Straightforward.
cs/0012008
By REF , there is an infinite sequence of nodes MATH, such that for each MATH, MATH is an offspring of MATH. To each call branch from MATH to a MATH the mapping MATH assigns one of the elements of the finite set MATH. By NAME's theorem CITE we get that there is a subsequence MATH, such that for each MATH the mapping MATH assigns to the branch from MATH to MATH the same element.
cs/0012010
Consider a common fixpoint MATH of the functions from MATH. We prove that MATH. Let MATH be the iteration in question. For some MATH we have MATH for MATH. It suffices to prove by induction on MATH that MATH. The claim obviously holds for MATH since MATH. Suppose it holds for some MATH. We have MATH for some MATH. By the monotonicity of MATH and the induction hypothesis we get MATH, so MATH since MATH is a fixpoint of MATH.
cs/0012010
MATH . Consider the predicate MATH defined by: MATH . Note that MATH is established by the assignment MATH. Moreover, it is easy to check that by virtue of assumptions A, B and C MATH is preserved by each while loop iteration. Thus MATH is an invariant of the while loop of the algorithm. (In fact, assumptions A, B and C are so chosen that MATH becomes an invariant.) Hence upon its termination MATH holds, that is MATH . MATH . This is a direct consequence of MATH and the Stabilization REF . MATH . Consider the lexicographic ordering of the strict partial orderings MATH and MATH, defined on the elements of MATH by MATH . We use here the inverse ordering MATH defined by: MATH iff MATH and MATH. Given a finite set MATH we denote by MATH the number of its elements. By assumption all functions in MATH are inflationary so, by virtue of assumption B, with each while loop iteration of the modified algorithm the pair MATH strictly decreases in this ordering MATH. But by REF MATH is finite, so MATH is well-founded and consequently so is MATH. This implies termination.
cs/0012010
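The invariant-and-termination argument above concerns a generic iteration loop; a sketch of that loop, under the stated assumptions (inflationary, monotone functions on a finite partial order), may help. The `depends` parameter is a hypothetical stand-in for the update of the scheduled set performed in assumption B.

```python
def generic_iteration(bottom, functions, depends):
    """Generic iteration sketch over a finite partial order.

    The functions are assumed inflationary and monotone.  After
    applying g and changing d, the scheduled set G is extended with
    the functions whose outcome may have been invalidated
    ('depends(g)').  The pair (d, G) decreases in the lexicographic
    ordering used in the termination argument above, so the loop
    terminates on finite orders."""
    d = bottom
    G = set(functions)
    while G:
        g = G.pop()
        new_d = g(d)
        if new_d != d:
            G |= depends(g)   # re-schedule possibly affected functions
            d = new_d
    return d
```

Each loop iteration either leaves `d` unchanged and strictly shrinks `G`, or strictly increases `d`; that is exactly the lexicographic decrease invoked in the termination proof.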
It suffices to establish in each case assumption A and C. Let MATH . MATH . After introducing the GI algorithm we noted already that MATH. So assumption A implies MATH and a fortiori MATH. For assumption C it suffices to note that MATH implies that MATH is not idempotent, that is, that MATH. MATH . Consider MATH. Suppose that MATH. Then MATH which is a contradiction. So MATH. Consequently, assumption A implies MATH. For assumption C it suffices to use the fact that MATH.
cs/0012010
To deal with assumption A take a function MATH such that MATH. Then MATH for any MATH that coincides with MATH on all components that are in the scheme of MATH. Suppose now additionally that MATH. By the above MATH is not such a MATH, that is, MATH differs from MATH on some component MATH in the scheme of MATH. In other words, MATH depends on some MATH such that MATH. This MATH is then in the scheme of MATH and consequently MATH. The proof for assumption B is immediate. Finally, to deal with assumption C it suffices to note that MATH implies MATH, which in turn implies that MATH.
cs/0012010
The termination and MATH are immediate consequences of the counterpart of the CD REF for the CDI algorithm and of the NAME Consistency REF . To prove MATH note that the final CSP MATH can be obtained by means of repeated applications of the projection functions MATH starting with the initial CSP MATH. (Conforming to the discussion at the end of REF we view here each such function as a function on CSP's). As noted in Apt CITE each of these functions transforms a CSP into an equivalent one.
cs/0012010
See Appendix.
cs/0012010
MATH is a direct consequence of the Alternative Path Consistency Note REF. The proof of MATH is straightforward. These properties of the functions MATH, MATH and MATH were already mentioned in Apt CITE.
cs/0012010
The proof is analogous to that of the HYPER-ARC Algorithm REF . The termination and MATH are immediate consequences of the counterpart of the CD REF for the CDI algorithm and of the Path Consistency REF . To prove MATH we now note that the final CSP MATH can be obtained by means of repeated applications of the functions MATH, MATH and MATH starting with the initial CSP MATH. (Conforming to the discussion at the end of REF we view here each such function as a function on CSP's). As noted in Apt CITE each of these functions transforms a CSP into an equivalent one.
cs/0012010
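The proof relies on the fact that the final CSP arises by repeated application of domain-reduction (projection) functions, each of which transforms a CSP into an equivalent one. A sketch for the binary case, with hypothetical names `revise` and `arc_consistency`, illustrates why equivalence is preserved: each step removes only values that have no support, hence belong to no solution.

```python
def revise(domains, constraint, x, y):
    """Projection step: remove from the domain of x every value with
    no support in the domain of y under the binary constraint.  Only
    values belonging to no solution are removed, so the CSP stays
    equivalent."""
    supported = {a for a in domains[x]
                 if any(constraint(a, b) for b in domains[y])}
    changed = supported != domains[x]
    domains[x] = supported
    return changed

def arc_consistency(domains, constraints):
    """Apply all projection functions until a common fixpoint."""
    changed = True
    while changed:
        changed = False
        for (x, y), c in constraints.items():
            flipped = lambda v, u, c=c: c(u, v)
            changed |= revise(domains, c, x, y)
            changed |= revise(domains, flipped, y, x)
    return domains
```

For example, under the constraint `x < y` with both domains `{1, 2, 3}`, the fixpoint has `x` in `{1, 2}` and `y` in `{2, 3}`, and every solution of the original CSP survives.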
The following intuitive argument may help to understand the more formal justification given in Appendix. First, both considered functions have three arguments but share precisely one argument, the one from MATH, and modify only this shared argument. Second, both functions are defined in terms of the set-theoretic intersection operation MATH applied to two, unchanged, arguments. This yields commutativity since MATH is commutative.
cs/0012010
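The intuition stated above, that two functions modifying only their one shared argument by intersecting it with arguments the other leaves untouched must commute, can be made concrete. The tuple representation below is an illustrative assumption, not the paper's encoding.

```python
def make_reducer(shared, other):
    """A function on state tuples that replaces component 'shared' by
    its intersection with component 'other', leaving every other
    component unchanged -- the shape described above."""
    def reduce(state):
        new = list(state)
        new[shared] = state[shared] & state[other]
        return tuple(new)
    return reduce

# f and g share only the component they modify; since each intersects
# it with an argument the other leaves untouched, and intersection is
# associative and commutative, the application order is irrelevant.
f = make_reducer(1, 0)
g = make_reducer(1, 2)
state = (frozenset({1, 2, 3}), frozenset({1, 2, 3, 4}), frozenset({2, 3, 5}))
assert f(g(state)) == g(f(state))
```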
We prove first that for MATH we have MATH . Indeed, by REF we have the following string of inclusions, where the last one is due to the idempotence of the considered functions: MATH . Additionally, by the inflationarity of the considered functions, we also have MATH for MATH. So MATH is a common fixpoint of the functions from MATH. This means that any iteration of MATH that starts with MATH, MATH, MATH eventually stabilizes at MATH. By the Stabilization REF we get the desired conclusion.
cs/0012010
See Appendix.
cs/0012010
The termination is obvious. MATH is an immediate consequence of the counterpart of the SI REF for the SI algorithm refined for the compound domains and of the Directional Arc Consistency REF . The proof of MATH is analogous to that of the HYPER-ARC Algorithm REF MATH.
cs/0012010
See Appendix.
cs/0012010
MATH . It suffices to notice that for each MATH-tuple MATH of subsets of the domains of the respective variables we have MATH, where MATH and where we assumed that MATH. MATH . Let the considered CSP be of the form MATH. Assume that some common variable of MATH and MATH is identical to the variable MATH. Further, let MATH denote the set of MATH such that MATH and MATH, where MATH is the scheme of MATH and MATH is the scheme of MATH. Finally, let MATH denote the MATH function of MATH and MATH the MATH function of MATH. It is easy to check that for each MATH-tuple MATH of subsets of MATH, respectively, we have MATH, where MATH.
cs/0012010
Note first that the ``relative" positions of MATH and of MATH with respect to MATH and MATH are not specified. There are in total three possibilities concerning MATH and three possibilities concerning MATH. For instance, MATH can be ``before" MATH, ``between" MATH and MATH or ``after" MATH. So we have to consider in total nine cases. In what follows we limit ourselves to an analysis of three representative cases. The proof for the remaining six cases is completely analogous. Recall that we write MATH to indicate that MATH is a subsequence of the variables of MATH. CASE: MATH and MATH. It helps to visualize these variables as in REF . Informally, the functions MATH and MATH correspond, respectively, to the upper and lower triangle in this figure. The fact that these triangles share an edge corresponds to the fact that the functions MATH and MATH share precisely one argument, the one from MATH. Ignoring the arguments that do not correspond to the schemes of the functions MATH and MATH we can assume that the functions MATH and MATH are both defined on MATH . Each of these functions changes only the first argument. In fact, for all elements MATH of, respectively, MATH and MATH, we have REF . MATH. The intuitive explanation is analogous to that in REF . We confine ourselves to noting that MATH and MATH are now defined on MATH but each of them changes only the second argument. In fact, we have REF . MATH and MATH. In this case the functions MATH and MATH are defined on MATH but each of them changes only the third argument. In fact, we have MATH.
cs/0012010
Suppose that the constraint MATH is on the variables MATH and the constraint MATH is on the variables MATH, where MATH. Denote by MATH the MATH function of MATH and by MATH the MATH function of MATH. The following cases arise. CASE: MATH. Then the functions MATH and MATH commute since their schemes are disjoint. CASE: MATH. CASE: MATH. Then the functions MATH and MATH commute by virtue of the Commutativity REF MATH. CASE: MATH. Let the considered CSP be of the form MATH. We can rephrase the claim as follows, where we denote now MATH by MATH: For all MATH we have MATH . To prove it note first that for some MATH such that MATH we have MATH, MATH and MATH. We now have MATH, where MATH and MATH, whereas MATH, where MATH. By the NAME Consistency REF MATH each function MATH is inflationary and monotonic with respect to the componentwise ordering MATH. By the first property applied to MATH we have MATH, so by the second property applied to MATH we have MATH. This establishes the claim. CASE: MATH. This subcase cannot arise since then the variable MATH precedes the variable MATH whereas by assumption the converse is the case. CASE: MATH. We can assume by REF that MATH. Then the functions MATH and MATH commute since each of them can change only its first component and this component does not appear in the scheme of the other function. This concludes the proof.
cs/0012010
Recall that we assumed that MATH, MATH and MATH. We are supposed to prove that the function MATH semi-commutes with the function MATH with respect to the componentwise ordering MATH. The following cases arise. CASE: MATH. In this and other cases by an equality between two pairs of variables we mean that both the first component variables, here MATH and MATH, and the second component variables, here MATH and MATH, are identical. In this case the functions MATH and MATH commute by virtue of the Commutativity REF . CASE: MATH. Then MATH and MATH differ, since MATH. Ignoring the arguments that do not correspond to the schemes of the functions MATH and MATH we can assume that the functions MATH and MATH are both defined on MATH . The following now holds for all elements MATH of, respectively, MATH, MATH, MATH, MATH and MATH: CASE: MATH. In this case MATH and MATH differ as well, since MATH. Again ignoring the arguments that do not correspond to the schemes of the functions MATH and MATH we can assume that the functions MATH and MATH are both defined on MATH . The following now holds for all elements MATH of, respectively, MATH, MATH, MATH, MATH and MATH: CASE: MATH. Then also MATH, since MATH and MATH as MATH. Thus the functions MATH and MATH commute since each of them can change only its first component and this component does not appear in the scheme of the other function. This concludes the proof.
cs/0012015
The proof is by structural induction. For the base case, suppose MATH where MATH. Then MATH and hence MATH. Thus MATH. Now consider MATH where the inductive hypothesis holds for MATH. By REF , there exists a type substitution MATH such that MATH and MATH for each MATH. By the inductive hypothesis, MATH for each MATH, and hence by REF , MATH. The rest of the proof is now trivial.
cs/0012015
The proof is by structural induction. For the base case, suppose MATH where MATH. If MATH, there is nothing to show. If MATH, then by definition of a typed substitution, MATH. Now consider MATH where the inductive hypothesis holds for MATH. By REF , there exists a type substitution MATH such that MATH, and MATH for each MATH. By the inductive hypothesis, MATH for each MATH, and hence by REF , MATH. The rest of the proof is now trivial.
cs/0012015
We show that the result is true when MATH is computed using the well-known NAME algorithm CITE which works by transforming a set of REF into a set of the form required in the definition of a typed substitution. Only the following two transformations are considered here. The others are trivial. CASE: If MATH and MATH does not occur in MATH, then replace all occurrences of MATH in all other equations in MATH with MATH, to obtain MATH. CASE: If MATH, then replace this equation with MATH, to obtain MATH. We show that if MATH and MATH is obtained by either of the above transformations, then MATH. For REF , this follows from REF . For REF , suppose MATH and MATH where MATH. By Rule (Query), we must have MATH, and hence by Rule (Atom), MATH and MATH for some type substitution MATH. On the other hand, by REF , MATH and MATH for some type substitutions MATH and MATH, and moreover for each MATH, we have MATH and MATH. Since MATH, it follows that MATH. Therefore MATH, and so MATH.
cs/0012015
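The two Martelli-Montanari-style transformations singled out in the proof, variable elimination and decomposition, can be sketched for first-order terms. The term representation (tuples for function applications, strings for variables) is an assumption made for illustration; the occurs check and some failure cases are deliberately omitted, since only these two transformations matter in the argument above.

```python
def is_var(t):
    return isinstance(t, str)

def substitute(term, var, value):
    """Replace every occurrence of 'var' in 'term' by 'value'."""
    if is_var(term):
        return value if term == var else term
    return (term[0],) + tuple(substitute(a, var, value) for a in term[1:])

def solve(equations):
    """Transform a list of term equations into solved form; returns a
    substitution as a dict, or None on a function-symbol clash.
    Occurs check omitted (sketch)."""
    eqs = list(equations)
    subst = {}
    while eqs:
        s, t = eqs.pop()
        if is_var(s):
            # variable elimination: replace s by t everywhere else
            eqs = [(substitute(l, s, t), substitute(r, s, t))
                   for l, r in eqs]
            subst = {v: substitute(u, s, t) for v, u in subst.items()}
            subst[s] = t
        elif is_var(t):
            eqs.append((t, s))
        elif s[0] == t[0] and len(s) == len(t):
            # decomposition: f(s1,...,sn) = f(t1,...,tn)  ==>  si = ti
            eqs.extend(zip(s[1:], t[1:]))
        else:
            return None
    return subst
```

The proof's point is that applying either transformation to a solvable equation set preserves the induced typing judgements; the sketch only exhibits what the transformations do to the equations themselves.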
The first statement is a straightforward consequence of REF . For the second statement, assume MATH, let MATH, and MATH be the derivation tree for MATH corresponding to MATH (by REF ). By hypothesis, there exists a variable typing MATH such that for each incomplete node MATH of MATH, we have MATH. To show that this also holds for complete nodes, we transform MATH into a derivation which ``records the entire tree MATH". This is done as follows: Let MATH be the program obtained from MATH by replacing each clause MATH with MATH. Let us call the atoms in the second occurrence of MATH unresolvable. Clearly MATH for each such clause. By induction on the length of derivations, one can show that MATH has operational subject reduction. For a single derivation step, this follows from the operational subject reduction of MATH. Now let MATH be the derivation for MATH using in each step the clause corresponding to the clause used in MATH for that step, and resolving only the resolvable atoms. First note that since MATH has operational subject reduction, there exists a variable typing MATH such that MATH. Moreover, since the unresolvable atoms are not resolved in MATH, it follows that MATH contains exactly the non-root node atoms of MATH. This however shows that for each node atom MATH of MATH, we have MATH. Since the choice of MATH was arbitrary, MATH has subject reduction.
cs/0012015
Let MATH be an arbitrary proper skeleton for MATH with head MATH, where MATH. Let MATH and MATH. For each node MATH in MATH, labelled MATH in MATH and MATH in MATH, let MATH be the variable typing such that MATH. Let MATH . Consider a pair of nodes MATH, MATH in MATH such that MATH is a child of MATH, and the equation MATH corresponding to this pair (see REF ). Consider also the equation MATH corresponding to the pair MATH, MATH in MATH. Note that MATH and MATH. By REF , MATH and MATH. Moreover, since MATH, we have MATH. Therefore MATH. Since the same reasoning applies for any equation in MATH, by REF , MATH is a typed substitution. Consider a node MATH in MATH with node atom MATH. Since MATH, by REF , MATH. and by REF , MATH. Therefore MATH has subject reduction with respect to MATH.
cs/0012015
Let MATH be a semi-generic query and MATH a type skeleton corresponding to a skeleton for MATH with head MATH. Each equation in MATH originates from a pair of nodes MATH where MATH is labelled MATH and MATH is labelled MATH, and the equation is MATH. Let MATH be obtained from MATH by replacing each such equation with the two equations MATH, MATH. Clearly MATH and MATH are equivalent. Because of the renaming of parameters for each node and since MATH is a tree, it is possible to define an order MATH on the equations in MATH such that for each label MATH defined as above, MATH, where for each MATH, MATH denotes a sequence containing all equations MATH with MATH. We show that MATH fulfills the conditions of REF . By REF , MATH fulfills REF . By REF , it follows that MATH is a subrelation of MATH, and hence MATH fulfills REF . By REF , MATH fulfills REF . Thus MATH has a solution, so MATH is proper, and so by REF , MATH has subject reduction with respect to the set of semi-generic queries.
cs/0012018
By induction on the structure of resource proofs. Assume that MATH has a resource proof. In the base case, the sequent is the conclusion of either the Axiom, MATH-L or REFR rules. CASE: We have MATH where MATH and MATH. Let this assignment of Boolean variables be MATH. Then MATH is just MATH, and clearly the linear proof tree corresponding to MATH is a proof of this sequent. CASE: We have MATH where MATH and MATH. Let this assignment of Boolean variables be MATH. Then MATH is just MATH, and clearly the linear proof tree corresponding to MATH is a proof of this sequent. CASE: We have MATH where MATH and MATH. Let this assignment of Boolean variables be MATH. Then MATH is just MATH, and clearly the linear proof tree corresponding to MATH is a proof of this sequent. Hence we assume that the result holds for all provable sequents whose resource proof is no more than a given size. Consider the last rule used in the proof. There are REF cases: CASE: In this case, we have that MATH where MATH, and there is a resource proof of MATH. By the hypothesis, there is a linear proof of MATH for some Boolean assignment MATH, and hence there is a linear proof of MATH, which is just MATH, as required. CASE: In this case, we have that MATH where MATH, and there is a resource proof of MATH. By the hypothesis, there is a linear proof of MATH for some Boolean assignment MATH, and hence there is a linear proof of MATH, which is just MATH, as required. CASE: In this case, we have that MATH where MATH, and MATH and MATH both have resource proofs for some disjoint sets of Boolean variables MATH and MATH. By the hypothesis, we have that there are linear proofs of MATH and MATH (recall that MATH must be an assignment of all Boolean variables in the proof), and so there is a linear proof of MATH which is just MATH, as required. CASE: In this case, we have that MATH where MATH, and there is a resource proof of MATH. 
By the hypothesis, there is a linear proof of MATH for some Boolean assignment MATH, and hence there is a linear proof of MATH, which is just MATH, as required. CASE: In this case, we have that MATH where MATH, and there is a resource proof of MATH. By the hypothesis, there is a linear proof of MATH for some Boolean assignment MATH, and hence there is a linear proof of MATH, which is just MATH, as required. CASE: In this case, we have that MATH where MATH, and MATH and MATH both have resource proofs for some disjoint sets of Boolean variables MATH and MATH. By the hypothesis, we have that there are linear proofs of MATH and MATH (recall that MATH must be an assignment of all Boolean variables in the proof), and so there is a linear proof of MATH which is just MATH, as required. CASE: In this case, we have that MATH where MATH, and MATH and MATH both have resource proofs for some disjoint sets of Boolean variables MATH and MATH. By the hypothesis, we have that there are linear proofs of MATH and MATH (recall that MATH must be an assignment of all Boolean variables in the proof), and so there is a linear proof of MATH which is just MATH, as required. CASE: In this case, we have that MATH where MATH, and there is a resource proof of MATH. By the hypothesis, there is a linear proof of MATH for some Boolean assignment MATH, and hence there is a linear proof of MATH, which is just MATH, as required. CASE: In this case, we have that MATH where MATH, and there is a resource proof of MATH. By the hypothesis, there is a linear proof of MATH for some Boolean assignment MATH, and hence there is a linear proof of MATH, which is just MATH, as required. CASE: In this case, we have that MATH where MATH, and there is a resource proof of MATH. By the hypothesis, there is a linear proof of MATH for some Boolean assignment MATH, and hence there is a linear proof of MATH, which is just MATH, as required.
cs/0012018
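The Boolean variables attached to formulas in a resource proof range over splits of the context: a total assignment sends each formula to one premise of a multiplicative rule, and the disjointness conditions in the proof above say that the two premises receive complementary parts. The enumeration below is only an illustration of that bookkeeping, with hypothetical names; the calculus itself is elided.

```python
from itertools import product

def splits(context):
    """Enumerate the context splits the Boolean-variable scheme ranges
    over: each formula carries one Boolean variable, and a total
    assignment sends it to the left premise (True) or the right
    premise (False) of a multiplicative rule."""
    for bits in product([True, False], repeat=len(context)):
        left = [f for f, b in zip(context, bits) if b]
        right = [f for f, b in zip(context, bits) if not b]
        yield left, right

# Every assignment yields a genuine partition of the context, which is
# why a total assignment of the Boolean variables determines a linear
# sequent for each premise.
context = ["A", "B", "C"]
for left, right in splits(context):
    assert sorted(left + right) == sorted(context)
    assert set(left).isdisjoint(right)
```

The gain of the scheme is laziness: instead of branching over all `2^n` splits up front, proof search constrains the Boolean variables incrementally and fixes an assignment only when the branches close.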
By induction on the structure of resource derivations. Assume that MATH has a closed resource derivation. In the base case, the sequent is the conclusion of either the Axiom, MATH-L or REFR rules, and it is clear that the addition of MATH to either the antecedent or the succedent of MATH will result in a closed resource derivation of the appropriate sequent. Clearly the linear proof tree property is satisfied in these cases. Hence we assume that the result holds for all provable sequents whose resource proof is no more than a given size. Consider the last rule used in the proof. There are REF cases, of which we only give the argument for MATH-L and MATH-R; the others are similar. CASE: In this case, we have that MATH where MATH, and there is a closed resource derivation of MATH. By the hypothesis, there are closed resource derivations of both MATH and MATH, and so there are closed resource derivations of MATH and MATH, as required. Clearly the linear proof tree property is satisfied in this case. CASE: In this case, we have that MATH where MATH, and MATH and MATH both have closed resource derivations for some disjoint sets of Boolean variables MATH and MATH. By the hypothesis, there are closed resource derivations of both MATH and MATH and of both MATH and MATH. Hence there are disjoint sets of variables MATH and MATH such that MATH and MATH such that MATH and MATH both have closed resource derivations, as do MATH and MATH. Thus we have closed resource derivations of both MATH and MATH, as required. Clearly the linear proof tree property is satisfied in this case.
cs/0012018
By induction on the structure of resource proofs. Assume that MATH has a resource proof. In the base case, the sequent is the conclusion of either the Axiom, MATH-L or REFR rules. CASE: We have MATH, so clearly there is a resource proof of MATH and the linear proof corresponding to this resource proof is MATH. CASE: We have MATH, so clearly there is a resource proof of MATH and the linear proof corresponding to this resource proof is MATH. CASE: We have MATH, so clearly there is a resource proof of MATH and the linear proof corresponding to this resource proof is MATH. Hence we assume that the result holds for all provable sequents whose resource proof is no more than a given size. Consider the last rule used in the proof. There are ten cases. CASE: In this case, we have that MATH, and there is a proof of MATH which is a subproof of MATH. By the hypothesis, there are disjoint sets of variables MATH and MATH such that there is a resource proof of MATH, and hence there is a resource proof of MATH, and clearly the linear proof corresponding to this resource proof is MATH. CASE: In this case, we have that MATH, and there is a proof of MATH which is a subproof of MATH. By the hypothesis, there are disjoint sets of variables MATH and MATH such that there is a resource proof of MATH, and hence there is a resource proof of MATH, and clearly the linear proof corresponding to this resource proof is MATH. CASE: In this case, we have that MATH, MATH such that MATH and MATH both have proofs in the linear sequent calculus which are subproofs of MATH. Hence by the hypothesis there are disjoint sets of Boolean variables MATH such that MATH and MATH have resource proofs, (and moreover the linear proofs corresponding to each resource proof is the appropriate subproof of MATH) and so by REF , there are closed resource derivations of MATH and MATH. 
Hence there are new disjoint sets of Boolean variables (that is, not occurring anywhere in the above two resource sequents) MATH and MATH and a total assignment MATH of MATH such that MATH and MATH have resource proofs, and so there is a resource proof of MATH, that is, MATH for some disjoint sets of Boolean variables MATH and MATH, and clearly the linear proof corresponding to this resource proof is MATH. CASE: In this case, we have that MATH, and there is a proof of MATH which is a subproof of MATH. By the hypothesis, there are disjoint sets of variables MATH and MATH such that there is a resource proof of MATH, and hence there is a resource proof of MATH, and clearly the linear proof corresponding to this resource-proof is MATH. CASE: In this case, we have that MATH, and there is a proof of MATH which is a subproof of MATH. By the hypothesis, there are disjoint sets of variables MATH and MATH such that there is a resource proof of MATH, and hence there is a resource proof of MATH, and clearly the linear proof corresponding to this resource proof is MATH. CASE: In this case, we have that MATH, MATH such that MATH and MATH both have proofs in the linear sequent calculus which are subproofs of MATH. Hence by the hypothesis there are disjoint sets of Boolean variables MATH such that MATH and MATH have resource proofs, (and moreover the linear proofs corresponding to each resource proof is the appropriate subproof of MATH) and so by REF , there are closed resource derivations of MATH and MATH. Hence there are new disjoint sets of Boolean variables (that is, not occurring anywhere in the above two resource sequents) MATH and MATH and a total assignment MATH of MATH such that MATH and MATH have resource proofs, and so there is a resource proof of MATH, that is, MATH for some disjoint sets of Boolean variables MATH and MATH, and clearly the linear proof corresponding to this resource proof is MATH. 
CASE: In this case, we have that MATH, MATH such that MATH and MATH both have proofs in the linear sequent calculus which are subproofs of MATH. Hence by the hypothesis there are disjoint sets of Boolean variables MATH such that MATH and MATH have resource proofs, (and moreover the linear proofs corresponding to each resource proof is the appropriate subproof of MATH) and so by REF , there are closed resource derivations of MATH and MATH. Hence there are new disjoint sets of Boolean variables (that is, not occurring anywhere in the above two resource sequents) MATH and MATH and a total assignment MATH of MATH such that MATH and MATH have resource proofs, and so there is a resource proof of MATH, that is, MATH for some disjoint sets of Boolean variables MATH and MATH, and clearly the linear proof corresponding to this resource proof is MATH. CASE: In this case, we have that MATH, and there is a proof of MATH which is a subproof of MATH. By the hypothesis, there are disjoint sets of variables MATH and MATH such that there is a resource proof of MATH, and hence there is a resource proof of MATH, and clearly the linear proof corresponding to this resource proof is MATH. CASE: In this case, we have that MATH, and there is a proof of MATH which is a subproof of MATH. By the hypothesis, there are disjoint sets of variables MATH and MATH such that there is a resource proof of MATH, and hence there is a resource proof of MATH, and clearly the linear proof corresponding to this resource proof is MATH. CASE: In this case, we have that MATH, and there is a proof of MATH which is a subproof of MATH. By the hypothesis, there are disjoint sets of variables MATH and MATH such that there is a resource proof of MATH, and hence there is a resource proof of MATH, and clearly the linear proof corresponding to this resource proof is MATH.
cs/0012018
By induction on the structure of resource proofs. Assume that MATH has a resource proof. In the base case, the sequent is the conclusion of either the Axiom, MATH-L, REFR, REFL or MATH-R rules, and it is clear that the result holds in each of these cases. Hence we assume that the result holds for all provable sequents whose resource proofs are of no more than a given size. Consider the last rule used in the proof. We only give the argument for the cases MATH-L, MATH-R, MATH-L, MATH-R, C!L; the others are similar. CASE: In this case, we have that MATH, and there is a resource proof of MATH. By the hypothesis, there is a linear proof of MATH for some Boolean assignment MATH, which is clearly either MATH or MATH, as required. CASE: In this case, we have that MATH, and MATH and MATH both have resource proofs. By the hypothesis, we have that there are linear proofs of MATH and MATH, and so there is a linear proof of MATH which is just MATH, as required. CASE: In this case, we have that MATH, and there is a resource proof of MATH. By the hypothesis, there is a linear proof of MATH for some Boolean assignment MATH, and hence there is a linear proof of MATH, which is just MATH, as required. CASE: In this case, we have that MATH, and MATH and MATH both have resource proofs for some disjoint sets of Boolean variables MATH and MATH. By the hypothesis, we have that there are linear proofs of MATH and MATH (recall that MATH must be an assignment of all Boolean variables in the proof), and so there is a linear proof of MATH which is just MATH, as required. CASE: In this case, we have that MATH, and there is a resource proof of MATH. By the hypothesis, there is a linear proof of MATH for some Boolean assignment MATH, and hence there is a linear proof of MATH, which is just MATH, as required.
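Throughout these inductions, the key correspondence is that a nondeterministic context split at a multiplicative rule is captured by a total assignment of the Boolean variables attached to the formulae. A minimal sketch of that idea, assuming a list-of-formulae representation of contexts (the function name `splits` and this representation are ours, not the paper's):

```python
from itertools import product

def splits(context):
    """Attach a fresh Boolean variable x_i to each formula: under a
    total assignment, a formula goes to the left premiss if x_i = 1
    and to the right premiss if x_i = 0, so total assignments of the
    x_i are in bijection with context splits at a multiplicative rule."""
    for assignment in product((1, 0), repeat=len(context)):
        left = [f for f, x in zip(context, assignment) if x == 1]
        right = [f for f, x in zip(context, assignment) if x == 0]
        yield assignment, left, right

# A two-formula context admits exactly 2**2 = 4 distributions.
distributions = list(splits(["A", "B"]))
```

This is why the proofs above can trade "there exists a way to split the context" for "there exists a Boolean assignment": proof search never commits to a split, and the assignment is determined only at the leaves.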
cs/0012018
The proof is similar to that of REF , and hence is omitted.
cs/0012018
By induction on the structure of resource proofs. Assume that MATH has a resource proof. In the base case, the sequent is the conclusion of either the Axiom, MATH-L, REFR, REFL or MATH-R rules, and it is clear that the result holds in each of these cases. Hence we assume that the result holds for all provable sequents whose resource proofs are of no more than a given size. Consider the last rule used in the proof. We only give the argument for the cases MATH-L, MATH-R, MATH-L, MATH-R, C!L; the others are similar. CASE: In this case, we have that MATH, and there is a proof of MATH which is a subproof of MATH. By the hypothesis, there are disjoint sets of variables MATH and MATH such that there is a resource proof of MATH, and hence there is a resource proof of MATH, and clearly the linear proof corresponding to this resource proof is MATH. CASE: In this case, we have that MATH, MATH such that MATH and MATH both have proofs in the linear sequent calculus which are subproofs of MATH. Hence by the hypothesis there are disjoint sets of Boolean variables MATH such that MATH and MATH have resource proofs (and, moreover, the linear proof corresponding to each resource proof is the appropriate subproof of MATH), and so by REF , there are closed resource derivations of MATH and MATH. Hence there are new disjoint sets of Boolean variables (that is, not occurring anywhere in the above two resource sequents) MATH and MATH and a total assignment MATH of MATH such that MATH and MATH have resource proofs, and so there is a resource proof of MATH, that is, MATH for some disjoint sets of Boolean variables MATH and MATH, and clearly the linear proof corresponding to this resource proof is MATH. CASE: In this case, we have that MATH, and there is a proof of MATH which is a subproof of MATH. Without loss of generality, let MATH. 
By the hypothesis, there are disjoint sets of variables MATH and MATH such that there is a resource proof of MATH, and by REF there is a resource proof of MATH, that is, there is a resource proof of MATH, giving us a resource proof of MATH, and clearly the linear proof corresponding to this resource proof is MATH. CASE: In this case, we have that MATH such that MATH and MATH both have proofs in the linear sequent calculus which are subproofs of MATH. Hence by the hypothesis there are disjoint sets of Boolean variables MATH such that MATH and MATH have resource proofs (and, moreover, the linear proof corresponding to each resource proof is the appropriate subproof of MATH). Now as each of these has the property that the Boolean expression attached to each formula in the conclusion evaluates to REF, we can choose MATH and MATH, and hence we have a resource proof of MATH, and clearly the linear proof corresponding to this resource proof is MATH. CASE: In this case, we have that MATH, and there is a proof of MATH which is a subproof of MATH. By the hypothesis, there are disjoint sets of variables MATH and MATH such that there is a resource proof of MATH, and hence there is a resource proof of MATH, and clearly the linear proof corresponding to this resource proof is MATH.
cs/0012018
We proceed by induction on the size of the resource derivation. In the base case, the resource proof consists of just one of the leaf rules. Hence there are four cases, of which we only give the argument for Axiom, the others being similar. In this case, the endsequent of the resource proof is just MATH with MATH and MATH. Hence MATH and the LBI proof corresponding to this resource proof is just MATH. Hence we assume that the result holds for all proofs of no more than a given size. There are numerous cases, of which we only give the argument for MATH-L, MATH-R, MATH-L, MATH-R and MATH L, the others being similar. CASE: In this case, the endsequent is MATH where MATH, and the premisses are MATH and MATH. Now as the endsequent has a resource proof, there is a Boolean assignment MATH of MATH such that both premisses have resource proofs. Hence by the induction hypothesis we have that the LBI proof tree corresponding to the resource derivation of MATH is an LBI proof of MATH, and the LBI proof tree corresponding to the resource derivation of MATH is an LBI proof of MATH. Now as MATH is the maximal multiplicative super-bunch of MATH in MATH, and as MATH is a total assignment of Boolean variables, we have that MATH and MATH where MATH, and so the LBI proof tree corresponding to the resource derivation of MATH is an LBI proof of MATH. CASE: In this case, the endsequent is MATH, and the premiss is MATH. Hence by the induction hypothesis we have that the LBI proof tree corresponding to the resource derivation of MATH is an LBI proof of MATH, and hence the LBI proof tree corresponding to the resource derivation of MATH is an LBI proof of MATH. CASE: In this case, the endsequent is MATH where MATH, and the premiss is MATH. Hence by the induction hypothesis we have that the LBI proof tree corresponding to the resource derivation of MATH is an LBI proof of MATH, and hence the LBI proof tree corresponding to the resource derivation of MATH is an LBI proof of MATH. 
CASE: In this case, the endsequent is MATH, and the premisses are MATH and MATH, and so there is a Boolean assignment MATH such that both premisses have a resource proof. Hence by the induction hypothesis we have that the LBI proof tree corresponding to the resource derivation of MATH is an LBI proof of MATH, and that the LBI proof tree corresponding to the resource derivation of MATH is an LBI proof of MATH. As MATH is a total assignment of Boolean variables, we have that MATH and MATH where MATH, and so the LBI proof tree corresponding to the resource derivation of MATH is an LBI proof of MATH. CASE: In this case, the endsequent is MATH where MATH, and the premisses are MATH and MATH. Hence by the induction hypothesis we have that the LBI proof tree corresponding to the resource derivation of MATH is an LBI proof of MATH, and the LBI proof tree corresponding to the resource derivation of MATH is an LBI proof of MATH, and hence the LBI proof tree corresponding to the resource derivation of MATH is an LBI proof of MATH.
cs/0012018
We proceed by induction on the size of the resource derivation. In the base case, the resource derivation consists of just one of the leaf rules. Hence there are four cases, of which we only give the argument for Axiom, the others being similar. In this case, MATH and the endsequent of the resource derivation is just MATH with MATH and MATH. Hence it is clear that there is also a closed resource derivation of MATH. Hence we assume that the result holds for all proofs of no more than a given size. There are numerous cases, of which we only give the argument for MATH-L, MATH-R, MATH-L, MATH-R and MATH L, the others being similar. CASE: In this case, the endsequent is MATH where MATH, and the premisses are MATH and MATH. Hence by the hypothesis we have that MATH and MATH both have closed resource derivations (recall that the position of the bunch MATH may be arbitrary), and so MATH has a closed resource derivation. CASE: In this case, the endsequent is MATH, and the premiss is MATH where MATH. Hence by the induction hypothesis we have that MATH has a closed resource derivation, and hence so does MATH. CASE: In this case, the endsequent is MATH where MATH, and the premiss is MATH. Hence by the induction hypothesis we have that MATH has a closed resource derivation, and hence so does MATH. CASE: In this case, the endsequent is MATH, and the premisses are MATH and MATH. Hence by the induction hypothesis we have that MATH and MATH have closed resource derivations, and as MATH, so does MATH. CASE: In this case, the endsequent is MATH where MATH, and the premisses are MATH and MATH. Hence by the induction hypothesis we have that MATH and MATH have closed resource derivations, and hence so does MATH.
cs/0012018
We proceed by induction on the height of the LBI proof. In the base case, the rule used is one of Axiom, MATH-R, MATH-R and MATH-L. We only give the argument for Axiom, the others being similar: In this case, the sequent is just MATH, and it is clear that this LBI proof corresponds to the resource proof MATH with MATH. Hence we assume that the result holds for all proofs of no more than a given size. There are numerous cases, of which we only give the argument for MATH, MATH-L, MATH-R, MATH-L, MATH-R, MATH L, MATH R, and MATH-R, the others being similar. CASE: In this case, the conclusion is MATH and the premiss is MATH, and so by the hypothesis there is a resource proof of MATH. Now as in such a resource proof all the expressions in MATH must be mapped to REF under the corresponding Boolean assignment, and MATH, it is clear that there is a resource proof of MATH for which MATH is the corresponding LBI proof. CASE: In this case, the conclusion is MATH and the premisses are MATH and MATH, and so by the hypothesis there are disjoint sets of variables MATH and MATH such that there are resource proofs of MATH and MATH. Now, as these are resource proofs, by REF , there exists MATH such that there are resource proofs of MATH and MATH. Hence there is a resource proof MATH of MATH for which MATH is the corresponding LBI proof. CASE: In this case, the conclusion is MATH and the premiss is MATH, and so by the hypothesis there is a resource proof of MATH, and as MATH must be MATH, we have that there is a resource proof of MATH, and hence there is a resource proof MATH of MATH for which MATH is the corresponding LBI proof. CASE: In this case, the conclusion is MATH and the premiss is MATH, and so by the hypothesis there is a resource proof of MATH, and hence there is a resource proof MATH of MATH for which MATH is the corresponding LBI proof. 
CASE: In this case, the conclusion is MATH, and the premisses are MATH and MATH, and so by the hypothesis there are disjoint sets of variables MATH and MATH such that there are resource proofs of MATH and MATH. Now, as these are resource proofs, by REF , there exists MATH such that there are resource proofs of MATH and MATH. Hence there is a resource proof MATH of MATH for which MATH is the corresponding LBI proof. CASE: In this case, the conclusion is MATH and the premisses are MATH and MATH, and so by the hypothesis there are resource proofs of MATH and MATH. Now MATH must be MATH under MATH, and as we must have all formulae in the premisses mapped to REF, we have that there is a set MATH of distinct variables MATH such that there are resource proofs of MATH and MATH, and hence there is a resource proof MATH of MATH for which MATH is the corresponding LBI proof. CASE: In this case, the conclusion is MATH and the premiss is MATH, and so by the hypothesis there is a resource proof of MATH, and as MATH must be MATH, there is a set of distinct variables MATH such that there is a resource proof of MATH, and hence there is a resource proof MATH of MATH for which MATH is the corresponding LBI proof. CASE: In this case, the conclusion is MATH and the premisses are MATH and MATH, and so by the hypothesis there are resource proofs of MATH and MATH. Now we must have all formulae in the premisses mapped to REF, and so we have that there is a set MATH of distinct variables MATH such that there are resource proofs of MATH and MATH, and hence there is a resource proof MATH of MATH for which MATH is the corresponding LBI proof.
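The recurring existence claims of the form "there exists MATH such that both premisses have resource proofs" amount to satisfiability of the constraints collected at the leaves under one total Boolean assignment. A brute-force sketch of that check, assuming constraints are modelled as predicates on an assignment environment (all names here are illustrative, not the paper's):

```python
from itertools import product

def satisfying_assignments(variables, constraints):
    """Enumerate total Boolean assignments of `variables` under which
    every collected leaf constraint evaluates to True."""
    for values in product((False, True), repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(constraint(env) for constraint in constraints):
            yield env

# Toy leaf constraint: at an axiom, exactly one of the two formulae
# governed by the split variables x and y may be present.
constraints = [lambda env: env["x"] != env["y"]]
solutions = list(satisfying_assignments(["x", "y"], constraints))
```

In this toy case exactly the two assignments sending one variable to True and the other to False survive, mirroring how an axiom leaf forces a unique distribution of each resource.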
hep-th/0012133
See REF.
hep-th/0012164
We will prove the lemma by induction on the sum of the finite NAME labels MATH. The equation obviously holds for MATH and for MATH (fundamental weights). Suppose now that the assertion is valid for labels with MATH. For a label MATH with MATH we denote by MATH the number between MATH and MATH satisfying MATH for MATH and MATH for MATH. Clearly the equation holds for weights satisfying MATH. By induction we show that it holds for all MATH and therefore for all weights with MATH. Let MATH be a weight with MATH. Then this weight appears once in the fusion of the weight MATH with MATH and the fundamental weight MATH. The other weights MATH appearing in the fusion have MATH. Assuming that the equation is valid for these MATH and using the associativity of the fusion product, we show that the equation holds for MATH, MATH . This completes the proof of REF .
hep-th/0012164
Let us first remark that the equation certainly holds for MATH, because then the fusion matrices MATH coincide with the finite tensor-product coefficients. We are now going to prove the statement: For all MATH and MATH the following is true: CASE: MATH CASE: MATH . We prove this proposition by induction on MATH and MATH. We start with MATH. REF is fulfilled because of REF. For REF consider a weight MATH with MATH. Then MATH for some MATH. This is just the truncated weight in the fusion of MATH and MATH, therefore MATH . We note that the statements A and B for MATH are equivalent to the statements for MATH. For the induction process we only have to show the step MATH. Assume that MATH and MATH are valid. Let MATH be a label with MATH. The fusion of MATH and MATH differs from the finite tensor-product decomposition just by representations MATH with MATH and MATH. From MATH we know that their dimensions vanish modulo MATH and hence MATH . Now we will prove MATH. Let MATH be a label with MATH and MATH, MATH for MATH. We then define MATH . MATH occurs once in the fusion of MATH and MATH; all the other labels occurring in the fusion fulfil the requirements of MATH. Hence MATH . Now we have to show MATH. Let MATH be a label of the form MATH with MATH and define MATH . Then MATH appears once in the finite tensor product of MATH and MATH. It belongs to the representations that are truncated by going over to the fusion rules of the affine NAME algebra. For the other truncated representations MATH we know from MATH that MATH. But since MATH is applicable to MATH, we get MATH. This completes the proof.
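The truncation invoked here, in which the affine fusion product discards certain representations that do appear in the finite tensor-product decomposition, can be illustrated by the standard affine su(2) fusion rules at level $k$ (a textbook example, not necessarily the algebra considered in this paper):

```latex
% Finite tensor product of su(2) spins j_1, j_2:
%   j_1 \otimes j_2 = \bigoplus_{j=|j_1-j_2|}^{j_1+j_2} j .
% Affine su(2)_k fusion truncates the upper limit of the sum:
\[
  j_1 \times j_2 \;=\; \bigoplus_{j=|j_1-j_2|}^{\min(j_1+j_2,\; k-j_1-j_2)} j .
\]
```

The representations with $j > k - j_1 - j_2$ are exactly those truncated in passing from the finite decomposition to the affine fusion rules, which is the role played by the truncated representations MATH in the argument above.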
hep-th/0012164
We are only going to give a sketch of the proof. Let us rewrite the numbers MATH by introducing MATH as MATH . An important observation is that the first factor in MATH is always an integer. As the binomial coefficient is also an integer, we can see that MATH is a divisor of all MATH. It remains to show that it is in fact the greatest common divisor. Let MATH be a prime number. We determine the maximum MATH and the corresponding MATH such that MATH. Then one can show that MATH.
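The strategy of this sketch, identifying the greatest common divisor of a family of integers by determining, for each prime, the minimal power dividing every member, can be checked numerically. A toy version with a hypothetical family of inner binomial coefficients (the family and all names are illustrative stand-ins for the paper's MATH):

```python
from functools import reduce
from math import comb, gcd

def p_adic_valuation(n, p):
    """Exponent of the prime p in n, by repeated division."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def is_prime(m):
    return m > 1 and all(m % d for d in range(2, int(m**0.5) + 1))

# Hypothetical family: the inner binomial coefficients C(8, k).
family = [comb(8, k) for k in range(1, 8)]

# gcd computed directly...
g = reduce(gcd, family)

# ...and prime by prime, via minimal p-adic valuations over the family,
# mirroring the argument in the proof sketch.
g_from_valuations = 1
for p in (q for q in range(2, max(family) + 1) if is_prime(q)):
    g_from_valuations *= p ** min(p_adic_valuation(a, p) for a in family)
```

For this family both computations give 2, consistent with the known fact that the inner coefficients C(n, k) have gcd p exactly when n is a power of the prime p.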
math-ph/0012011
We heavily rely on CITE, which itself follows CITE. Our metastable free energies are defined as the real part of the metastable free energies of CITE, which are complex in general. The first step consists in defining the metastable free energies. This can be done by introducing truncated contour activities and truncated partition functions following the inductive procedure of REF - REF . One obtains metastable free energies MATH (that depend on MATH). One can then prove the claims of REF. We then set MATH. At this point we have well-defined metastable free energies depending on MATH and MATH (that is, they are functionals on the NAME space of interactions), and the free energy of the system is given by the minimum of the metastable free energies, as stated in REF . It is also clear that MATH, and that MATH is real analytic in MATH on MATH. What remains to be done is to check differentiability properties. For given MATH and MATH, we consider MATH as a function of MATH. This is a mild complication of the situation in CITE, since the metastable free energies here depend on MATH parameters instead of MATH. One then gets REF: the partial derivatives with respect to MATH of the truncated contour activities and of the partition function with given external label satisfying the claims of the lemma with a constant MATH instead of MATH. Finally, the metastable free energies are given as convergent series of clusters of contours, whose weights obey suitable bounds. This leads to REF .
math-ph/0012011
REF of metastable free energies (with MATH) imply that there exists MATH such that MATH for all MATH, and that the matrix of derivatives MATH has a bounded inverse, uniformly in MATH in a neighborhood MATH of MATH. Let us define MATH and, for MATH, MATH (notice that MATH). By the implicit function theorem, each MATH is described by a MATH function from an open subset of MATH into MATH. If we set MATH, the phase diagram satisfies the NAME phase rule, provided there are exactly MATH tangent functionals at MATH for each MATH. Each metastable free energy MATH, MATH, defines a tangent functional MATH: for all MATH, we set MATH. Notice that REF ensures boundedness of the tangent functional. We now show that these tangent functionals are linearly independent, and that any other tangent functional is a linear combination of them. We examine the manifold where MATH phases coexist; without loss of generality, we can choose MATH with MATH. The determinant of REF can be written as a linear combination of determinants of MATH with MATH being MATH different indices. Since the determinant of REF differs from REF, at least one of the determinants in the previous equation differs from REF. Without loss of generality we can assume that MATH is not singular. Our analysis is local, so we can take MATH and MATH. Then REF implies that MATH, and non-singularity of REF shows that MATH, MATH, are linearly independent. Furthermore, it also implies that for every tangent functional MATH, the system of equations for MATH, MATH has a unique solution with MATH . Now we consider any MATH; we define MATH, MATH, and MATH . We have MATH, MATH is an isomorphism, and MATH is a map of class MATH by REF on page REF. By the implicit function theorem there exists a map MATH such that MATH. We introduce the interactions MATH . Then using REF we have MATH . Differentiating with respect to MATH, we obtain (recall that MATH is tangent to MATH at MATH) MATH . 
Then obviously MATH, and it follows by linearity of the tangent functionals that MATH .
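The repeated use of the implicit function theorem above, solving an equation for some coordinates as smooth functions of the remaining parameters wherever the matrix of derivatives is non-singular, can be mimicked numerically by a Newton iteration. A one-dimensional toy sketch (the equation below is an illustrative stand-in, not the paper's free-energy differences):

```python
def implicit_solve(g, dg_dx, t, x0=0.0, tol=1e-12, max_iter=100):
    """Newton iteration solving g(x, t) = 0 for x at a fixed parameter t.
    The implicit function theorem guarantees a locally unique smooth
    solution branch x(t) wherever dg/dx is invertible along it."""
    x = x0
    for _ in range(max_iter):
        step = g(x, t) / dg_dx(x, t)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

# Toy equation: g(x, t) = x**3 + x - t, with dg/dx = 3*x**2 + 1 > 0
# everywhere, so the branch x(t) is globally well defined and smooth.
x_at_2 = implicit_solve(lambda x, t: x**3 + x - t,
                        lambda x, t: 3 * x**2 + 1, t=2.0)
```

Here the uniform invertibility of dg/dx plays the role of the bounded inverse of the matrix of derivatives in the proof: it is what makes the coexistence coordinates well defined functions of the remaining interaction parameters.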