math/0009250
First let MATH be the unit vector basis for MATH and set MATH. The tree MATH is clearly a MATH-block basis tree on MATH isomorphic to MATH, so that MATH. By REF the block basis index is strictly greater than the order of any block basis tree on the space, so that MATH. But now, by REF , the block basis index is of the form MATH for some MATH, so that MATH. As we noted after REF , MATH, and since the node basis for MATH is shrinking, it follows from REF that MATH when MATH and MATH when MATH. It is clear that MATH, and finally we showed in REF that MATH embeds into MATH as a block basis, and hence we have the inequalities MATH, which completes the proof.
math/0009250
The argument follows the same lines as the proof that MATH is isomorphic to MATH in CITE. Note that for MATH the node basis of MATH is a family of indicator functions with nested or disjoint supports, and the nested functions are at most MATH sets deep. The required map MATH is found by sending the MATH element of the admissible enumeration of the node basis of MATH to the MATH element MATH of the node basis of MATH, MATH. (Note that MATH is not in the image.) It is easy to see that MATH. The general case is similar. We view the node basis of MATH (in the ordering MATH) as a disjoint union of MATH trees MATH, MATH, with MATH isomorphic to the replacement tree MATH and MATH the unique initial node of the tree MATH. Thus MATH implies MATH with MATH. For each MATH let MATH be the defining map for the replacement tree. Recall that MATH is one or a countable union of trees, each isomorphic to MATH. Let MATH be an enumeration of all of these trees for MATH. For each MATH let MATH be the sequence of initial nodes of MATH, so that MATH is equivalent to the node basis of MATH under the natural map. Let MATH be the given admissible enumeration of the node basis of MATH and let MATH be an admissible enumeration of the node basis of MATH. To avoid confusion between domain and range we shall let MATH, for MATH denote the elements of the node basis of MATH in the image. Thus MATH. We define a map MATH inductively to satisfy the following conditions: CASE: MATH is not in the image of MATH; CASE: if MATH and MATH or MATH for some MATH, then MATH for some MATH; CASE: MATH is increasing, that is, if MATH, then MATH; CASE: if MATH and MATH, then MATH; CASE: if MATH and MATH, then the order of MATH in MATH is less than or equal to the order of MATH in MATH, where the sets are trees in the usual order MATH and the order of a node MATH in a tree MATH is simply the order of the subtree MATH of MATH. 
It is easy to see that the inductive definition of MATH will succeed because if MATH have been chosen, then there are infinitely many candidates for MATH satisfying REF - REF . It is also not difficult to see that if MATH is the induced map from MATH into MATH, then MATH.
math/0009250
By REF and the proof of REF we have MATH . To complete the proof we show that for each MATH there does not exist a MATH-block basis tree on MATH of order MATH, and hence MATH. Then, since MATH for any space MATH with basis MATH, and MATH where MATH is any admissible enumeration of the node basis for MATH, it follows that MATH. We prove this result by induction on MATH. For MATH we first note that MATH. Since the unit vector basis of MATH does not contain MATH's uniformly as block bases, it follows that MATH contains no MATH-block basis tree of order MATH. We assume that the result is true for MATH, and let MATH be an admissible enumeration of the node basis of MATH. Suppose that MATH is a MATH-block basis tree of order MATH on MATH which, without loss of generality, we assume consists of finitely supported vectors with respect to MATH, and is isomorphic to the minimal replacement tree MATH. We write MATH, where MATH is a tree isomorphic to MATH and the elements from different trees MATH are unrelated. Choose MATH and let MATH, where MATH, be the defining map for the replacement tree MATH. Let MATH so that MATH. Let MATH be a terminal node in MATH. Define MATH and let MATH . Let MATH, so that MATH is isomorphic to MATH, and let MATH be the restricted tree MATH. The tree MATH is a MATH-block basis tree of order MATH. By REF and the induction hypothesis there is no MATH-block basis tree on MATH of order MATH. Consider the tree MATH; note that MATH and MATH when MATH and MATH (since MATH is a block basis tree). If there exists MATH such that for every MATH and MATH, MATH, then the tree MATH would be a MATH-block basis tree on MATH of order MATH, contradicting the induction hypothesis. Therefore there is a terminal node MATH and MATH such that MATH and MATH. Define MATH and let MATH. As before we consider the tree MATH, so that MATH is isomorphic to MATH, and we let MATH be the restricted tree MATH.
Arguing as above, there is a terminal node MATH and MATH such that MATH, and setting MATH gives MATH. Continuing in this way we get MATH, a block basis of some node MATH of MATH, such that MATH for some sequence MATH with MATH, together with a sequence MATH such that MATH (where MATH). Let MATH, so that MATH since MATH was a MATH-tree. On the other hand (with MATH) MATH . Thus there exists no such tree MATH of order MATH on MATH, which completes the proof.
math/0009250
Find MATH so that MATH, let MATH, and find MATH such that MATH. Now, MATH as required.
math/0009250
We may write MATH as MATH where MATH. We shall prove the result using induction on MATH. Let MATH and let MATH be the map MATH, restricted from MATH to MATH, let MATH be the NAME functions on MATH, and let MATH be the extension of these to MATH with MATH. Let MATH be a tree with constant REF and order MATH on MATH; we construct a tree of order MATH on MATH. Let MATH and let MATH with the usual ordering by extension. The subtree of MATH given by MATH has order MATH, and after every terminal node is a tree of order MATH, so that MATH. It is clear from the previous lemma that MATH is a MATH-tree with constant REF. If the result is true for MATH, then given a MATH-REF-tree on MATH of order MATH, there exists a MATH-REF-tree on MATH of order MATH, but now by the case MATH there exists a MATH-REF-tree on MATH of order MATH. Finally, if MATH is a limit ordinal and the result has been proven for every MATH, then let MATH and MATH; hence we may take the union of MATH-REF-trees MATH on MATH of order MATH to obtain a tree on MATH of order MATH as required.
math/0009250
From REF we know that the MATH-index of MATH is either MATH or MATH and hence MATH or MATH. For each MATH we shall construct a MATH-tree on MATH of order MATH so that MATH by REF and the result follows. This is clear for MATH since MATH embeds isometrically into MATH for each MATH, which immediately yields a MATH-REF-tree of order MATH. We may now complete the proof by induction on MATH. If there is a MATH-REF-tree on MATH of order MATH, then by the previous lemma there exists a tree of order MATH on MATH for every MATH. Taking the union over MATH of these we obtain a MATH-REF-tree on MATH of order MATH as required. This completes the inductive step and hence the proof.
math/0009250
Again, from REF we know that MATH is either MATH or MATH. To demonstrate that it is the former we show that for each MATH there does not exist a MATH-tree on MATH of order MATH. We prove this by induction on MATH based on the following lemmas. The idea of the proof is that if we do have a MATH-tree of order MATH on MATH, then we can find a node in that tree which admits an absolute convex combination with arbitrarily small norm. This contradicts the hypothesis that it was a MATH-tree.
math/0009250
Fix MATH and let MATH be as in the statement of the lemma. Suppose that MATH for each MATH. Then MATH for each MATH. We may assume that each MATH has finite support with respect to the unit vector basis of MATH, and let MATH. Thus MATH embeds into MATH with constant MATH via the map MATH, and hence MATH has a lower MATH estimate with constant MATH. By CITE, for fixed MATH and MATH, if MATH is sufficiently large, then there exists a normalized block basis MATH of MATH such that MATH. Now if we take MATH to be very small, depending on MATH, then we see that for each MATH the size of one of the sets MATH and MATH must be at least MATH. We calculate the norm of MATH in MATH supposing that MATH. Let MATH be the second half of MATH, so that if MATH, then MATH, where MATH. Clearly MATH, MATH and MATH . On the other hand MATH, so this is impossible for large MATH, and hence for MATH large enough. This contradicts our initial assumption that MATH for each MATH and hence there exists MATH with MATH.
math/0009250
Choose MATH, set MATH, and let MATH be a tree on MATH as above. If there exist MATH and MATH such that MATH then set MATH and MATH, and we have MATH while MATH as required. Otherwise set MATH, where MATH. Then MATH is a MATH-tree on MATH of order MATH, and from the previous lemma we can find MATH and MATH such that MATH. Now, MATH and MATH so that MATH; also MATH for each MATH, thus if we set MATH, then MATH. Clearly MATH has supremum norm less than that of MATH, so that MATH. Finally MATH, so that setting MATH, we obtain MATH with MATH and MATH as required.
math/0009250
We shall prove the result by induction on MATH. Let MATH and let MATH be a tree on MATH of order MATH satisfying the hypotheses of the lemma. We may assume that MATH consists of finitely supported vectors with respect to the basis of MATH, and that MATH is isomorphic to the minimal tree MATH. We write MATH where MATH is a tree isomorphic to MATH and the elements from different trees MATH are unrelated. Choose MATH, and let MATH, where MATH, be the defining map for the replacement tree MATH. Let MATH, so that MATH. Let MATH be any terminal node in MATH. Define MATH and let MATH. Let MATH, so that MATH is isomorphic to MATH, and let MATH be the restricted tree MATH. The tree MATH on MATH has order MATH, and satisfies the conditions of the previous lemma; thus we may find a terminal node MATH and MATH such that MATH and MATH. Define MATH and let MATH. As before we consider the tree MATH, so that MATH is isomorphic to MATH, and we let MATH be the restricted tree MATH. Arguing as above there is a terminal node MATH and MATH such that MATH, and setting MATH gives MATH. Continuing in this way we get MATH, a block basis of some node MATH of MATH, such that MATH for some sequence MATH with MATH, together with a sequence MATH such that MATH, where MATH, and MATH. Let MATH, so that for MATH, if MATH, and MATH is chosen so that MATH, then MATH . Thus MATH so that MATH is the vector we seek. This completes the proof in the case MATH. We next suppose the result has been proven for MATH and let MATH be a tree on MATH of order MATH. We may assume that MATH consists of finitely supported vectors in MATH, is isomorphic to MATH, and may be written as MATH with MATH. As before, choose MATH, and let MATH be the defining map for the replacement tree MATH. Define MATH as for the case MATH. This time the tree MATH has order MATH on MATH, but we may also consider it as a tree of order MATH on MATH, and hence it satisfies the conditions of the lemma for MATH.
Thus, by the induction hypothesis, there exists a terminal node MATH and MATH such that MATH and MATH. Continuing in this way we obtain MATH a block basis of a node MATH of MATH such that MATH for some sequence MATH with MATH, together with a sequence MATH such that MATH, MATH and MATH. Let MATH, let MATH, MATH and choose MATH so that MATH. We may write MATH where MATH and MATH. Now, MATH and hence MATH as required. This completes the proof.
nlin/0009006
of REF : We shall adopt the identifications MATH. For MATH, the left-hand side of the first relation should read as MATH. The six elements in REF are interpreted as MATH, respectively. This immediately leads to the theorem.
nlin/0009006
The lemma is equivalent to MATH. Since REF is linear in MATH, it suffices to show that the coefficients agree on both sides. First consider the coefficient of MATH. We need to show the equality MATH. To verify this, we prepare a matrix MATH. Denote by MATH the minor, i.e., the determinant of the matrix obtained by deleting MATH rows and MATH columns from MATH. Then REF is represented as MATH, where MATH. Obviously this is the NAME identity. Thus the equality of the coefficients of MATH on both sides is established. The equalities are proven similarly up to those of MATH. For the MATH case, the first term of the right-hand side in REF does not contribute. We can check, however, the equality of the remaining terms. Thus the lemma is proved.
nlin/0009006
First we substitute MATH into REF and obtain MATH. The application of NAME's formula yields MATH in the form MATH. We apply REF to the second term in the right-hand side to obtain MATH. It is now obvious that repeated applications of REF to the last term result in REF.
nlin/0009034
Since MATH takes values in the corresponding orthogonal group, we find from REF that MATH, MATH, and analogous relations hold for the vector MATH. As a result we get that MATH. Let us now insert REF into REF and take the limit of the right-hand side of REF for MATH. This immediately gives REF. In order that REF be satisfied identically with respect to MATH, we must also put into REF the residues of its right-hand side at MATH and MATH. This gives us the following system of equations for the projectors MATH and MATH: MATH, where we have to keep in mind that MATH is given by REF. Taking into account REF and the relation between MATH and MATH, REF reduces to MATH. One can check by a direct calculation that REF satisfies REF identically. The theorem is proved.
quant-ph/0009004
We use a lemma from CITE. If MATH and MATH are two quantum states and MATH then the total variational distance between the probability distributions generated by the same measurement on MATH and MATH is at most MATH. We also use a lemma from CITE. Let MATH. There are subspaces MATH, MATH such that MATH and CASE: If MATH, then MATH and MATH, CASE: If MATH, then MATH when MATH. REF can be viewed as a quantum counterpart of the classification of states for NAME chains CITE. The classification of states divides the states of a NAME chain into ergodic sets and transient sets. If the NAME chain is in an ergodic set, it never leaves it. If it is in a transient set, it leaves it with probability MATH for an arbitrary MATH after sufficiently many steps. In the quantum case, MATH is the counterpart of an ergodic set: if the quantum random process defined by repeated reading of MATH is in a state MATH, it stays in MATH. MATH is a counterpart of a transient set: if the state is MATH, MATH is left (for an accepting or rejecting state) with probability arbitrarily close to REF after sufficiently many MATH's. The next Lemma is our generalization of REF for the case of two different words MATH and MATH. Let MATH. There are subspaces MATH, MATH such that MATH and CASE: If MATH, then MATH and MATH and MATH and MATH, CASE: If MATH, then for any MATH, there exists a word MATH such that MATH. Proof. We use MATH to denote the space MATH from REF for a word MATH. We define MATH. MATH consists of all vectors in MATH orthogonal to MATH. Next, we check that both REF are true. CASE: It is easy to see that, for all MATH, MATH due to MATH. We also need to prove that MATH and MATH. For a contradiction, assume there are MATH and MATH such that MATH. Then, by definition of MATH, there also exists MATH such that MATH does not belong to MATH. REF implies that the norm of MATH can be decreased by repeated applications of MATH. A contradiction with MATH for all MATH. 
CASE: Clearly, if MATH belongs to MATH then for all MATH the superposition MATH also belongs to MATH because MATH and MATH are unitary and map MATH to itself (and, therefore, any vector orthogonal to MATH is mapped to a vector orthogonal to MATH). MATH does not increase if we extend the word MATH to the right and it is bounded from below by REF. Hence, for any fixed MATH we can find a MATH such that MATH for all MATH. We define a sequence of such words MATH for MATH. MATH is a bounded sequence in a finite dimensional space. Therefore, it has a limit point MATH. We will show that MATH. First, notice that MATH because it is a limit of a subsequence of MATH and all MATH belong to MATH. Therefore, if MATH then MATH has a nonzero MATH component for some MATH. Reading sufficiently many MATH would decrease this component, decreasing the norm of MATH. This contradicts the fact that, for any MATH, MATH (since MATH is less than any MATH, which is true because MATH is the limit of MATH). Therefore, MATH. This completes the proof of the lemma. Let MATH be a language such that its minimal automaton MATH contains the "forbidden construction" and let MATH be a QFA. We show that MATH does not recognize MATH. Let MATH be a word after reading which MATH is in the state MATH. Let MATH, MATH, MATH. We find a word MATH such that after reading MATH, MATH is in the state MATH and the norm of MATH is at most some fixed MATH. (Such a word exists due to REF.) We also find a word MATH such that MATH. Because of the unitarity of MATH and MATH on MATH REF, there exist integers MATH and MATH such that MATH and MATH. Let MATH be the probability of MATH accepting while reading MATH. Let MATH be the probability of accepting while reading MATH with a starting state MATH, MATH be the probability of accepting while reading MATH with a starting state MATH and MATH, MATH be the probabilities of accepting while reading MATH and MATH with a starting state MATH. Let us consider four words MATH, MATH, MATH, MATH.
MATH accepts MATH with probability at least MATH and at most MATH. Proof. The probability of accepting while reading MATH is MATH. After that, MATH is in the state MATH and reading MATH in this state causes it to accept with probability MATH. The remaining state is MATH. If it was MATH, the probability of accepting while reading the rest of the word REF would be exactly MATH. It is not quite MATH but it is close to MATH. Namely, we have MATH . By REF , this means that the probability of accepting during MATH is between MATH and MATH. Similarly, on the second word MATH accepts with probability between MATH and MATH. On the third word MATH accepts with probability between MATH and MATH. On the fourth word MATH accepts with probability MATH and MATH. This means that the sum of accepting probabilities of two words that belong to MATH (the first and the fourth words) differs from the sum of accepting probabilities of two words that do not belong to MATH (the second and the third) by at most MATH. Hence, the probability of correct answer of MATH on one of these words is at most MATH. Since such REF words can be constructed for arbitrarily small MATH, this means that MATH does not recognize MATH.
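The classical classification of Markov chain states into ergodic and transient sets, which the argument above uses as the counterpart of the decomposition of the state space, can be sketched in a few lines. This is a minimal illustration, not from the source; the encoding of the chain as a reachability map and the function name are our own assumptions.

```python
def classify_states(transitions):
    """Split the states of a finite Markov chain into recurrent
    ("ergodic") and transient states.  `transitions` maps each state
    to the set of states reachable in one step with positive
    probability.  A state is recurrent iff every state reachable
    from it can also reach it back."""
    states = list(transitions)

    # reach[s]: all states reachable from s (reflexive-transitive closure),
    # computed by a simple fixed-point iteration.
    reach = {s: set(transitions[s]) | {s} for s in states}
    changed = True
    while changed:
        changed = False
        for s in states:
            new = set().union(reach[s], *(reach[t] for t in reach[s]))
            if new != reach[s]:
                reach[s], changed = new, True

    recurrent = {s for s in states if all(s in reach[t] for t in reach[s])}
    return recurrent, set(states) - recurrent

# State 0 is transient (it leaks into {1, 2} and never returns);
# {1, 2} is an ergodic set the chain never leaves.
rec, tra = classify_states({0: {0, 1}, 1: {2}, 2: {1}})
print(rec, tra)  # → {1, 2} {0}
```

This mirrors the dichotomy used above: once the chain enters a recurrent (ergodic) class it stays there, while every transient state is eventually left with probability arbitrarily close to one.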
quant-ph/0009004
Let MATH be the minimal deterministic automaton of a language MATH. If it contains at least one of "forbidden constructions" of REF , then MATH cannot be recognized by a REF-way QFA. We now show that, if MATH does not contain any of the two ``forbidden constructions" and does not contain ``two cycles in a row" construction then MATH can be recognized by a QFA. Let MATH be the starting state of MATH and MATH be the transition function of the automaton MATH. MATH denotes the state to which MATH goes if it reads the word MATH in the state MATH. We will construct a QFA for MATH by splitting MATH into pieces MATH, MATH, MATH, MATH, constructing a reversible finite automaton for each of those pieces and then combining these reversible automata. Let MATH be the set of all states MATH such that after reading any word in MATH, there exists a word such that MATH passes back to the state MATH. We split MATH into connected components MATH, MATH, MATH, MATH. Two different states MATH and MATH belong to the same MATH iff MATH is reachable from MATH and MATH is reachable from MATH. Let MATH be the set of all remaining states, i. e., the states that do not belong to MATH. For every letter MATH and every state MATH of MATH, there is exactly one MATH such that reading MATH in MATH leads to MATH, i. e., every letter induces a permutation of states in MATH. Let MATH be a state in MATH and MATH. For a contradiction, assume that there are two states MATH and MATH such that MATH. Then, there exist words MATH and MATH such that MATH and MATH. (This is true because every state in MATH is reachable from every other state in MATH.) However, this means that MATH contains the ``forbidden construction" of REF , with MATH, MATH, MATH and MATH. A contradiction. Such automata MATH are called permutation automata. MATH does not contain a fragment of the form shown in REF with MATH and MATH being two different states of MATH and MATH. For a contradiction, assume that MATH contains such a fragment. 
By definition of MATH and MATH, MATH implies that there is a word MATH such that reading MATH in MATH leads to a state MATH and MATH is not reachable from MATH. Consider the states MATH, MATH, MATH. MATH has a finite number of states. Therefore, there must be MATH and MATH such that MATH. Notice that this implies MATH for all MATH. Let MATH be the smallest number such that MATH and MATH is divisible by MATH. Define MATH. Then, MATH (because MATH is divisible by MATH), i. e., MATH. We have shown that MATH, MATH, MATH form a ``two cycles in a row" construction with MATH. A contradiction. By a theorem from CITE, any language recognizable by a deterministic automaton which does not contain the construction of REF is recognizable by a reversible finite automaton (RFA). (A reversible finite automaton is a deterministic automaton in which, for every state MATH and letter MATH, there is at most one state MATH such that reading MATH in MATH leads to MATH.) Any reversible automaton is a special case of a quantum automaton. (If, for every state MATH and every letter MATH, there is one MATH such that reading MATH leads to MATH, the letter MATH induces a permutation on the states of the automaton and the corresponding transformation of a quantum automaton is clearly unitary.) Therefore, the language recognized by MATH is recognized by a QFA as well. Also, the permutation automata MATH, MATH, MATH are special cases of reversible automata. Therefore, they can be replaced by equivalent QFAs. We will construct a QFA for MATH by combining those QFAs. However, before that, we must solve one problem. NAME if the state of MATH after reading a word MATH is in MATH, the starting state of MATH can be in MATH. If we want to use the permutation automaton for MATH to recognize a part of MATH, we must define one of the states in MATH as the starting state. The next two lemmas show that this is possible.
If the minimal automaton MATH contains the construction of REF , the states MATH and MATH cannot be in the same MATH. Let us suppose the opposite. MATH and MATH are different states of the minimal deterministic automaton. Therefore, there exists a word MATH such that MATH is an accepting state (or a rejecting state) and MATH is a rejecting state (or an accepting state). Also, there is no word MATH such that MATH is a rejecting state (or an accepting state) and MATH is an accepting state (or a rejecting state) because, otherwise, MATH would contain the construction of REF . We denote MATH by MATH and MATH by MATH. There exists a word MATH such that MATH (because all states in MATH can be reached from one another). Moreover, the states MATH and MATH are accepting states (MATH is accepting because MATH and MATH is accepting because, if it was rejecting, MATH, MATH and MATH would form the construction of REF with MATH and MATH as MATH and MATH.). Similarly, the states MATH and MATH are accepting states, the states MATH and MATH are accepting states and so on. However, there exists MATH such that MATH (because MATH is a permutation automaton and, therefore, it must return to the starting state after some number of MATH's). This gives us the contradiction. For each part MATH, there is a state MATH such that if MATH belongs to MATH then MATH. Let MATH. Then, there is a unique MATH such that MATH. (This is true because MATH is a permutation automaton and every state has a unique preceding state.) We must show that this state MATH does not depend on the word MATH. Assume this is not true. Then, there are words MATH and MATH such that MATH and MATH and MATH (and MATH, MATH, MATH, MATH are all in MATH). Let MATH and MATH. Then, there exist MATH and MATH such that MATH and MATH. (Again, we are using the fact that MATH is a permutation automaton, and, therefore, if it reads the same word many times, it returns to the same state at some point.) 
This implies MATH and MATH (because MATH and MATH and, in a permutation automaton, the MATH such that MATH must be unique). Therefore, MATH and MATH. Similarly, MATH. This means that MATH contains the states MATH and MATH from the construction shown in REF (with MATH and MATH instead of MATH and MATH, and MATH instead of MATH). By REF, this is impossible. A contradiction. We denote these states as MATH. Let MATH be the automaton MATH with MATH as the starting state. Let MATH be the language recognized by MATH. For any MATH, either MATH or MATH. Let MATH and MATH be such that MATH and MATH. (MATH and MATH exist because, otherwise, MATH or MATH would be unreachable from the starting state MATH.) By REF, MATH and MATH. For a contradiction, assume that neither MATH nor MATH is true. Then, there are words MATH and MATH and we get the ``forbidden construction" of REF. Let MATH denote the number of MATH such that MATH. MATH denotes the corresponding reversible automaton for the automaton MATH with one modification: when the automaton MATH passes to a state of MATH, it accepts with probability MATH and rejects with probability MATH. Next, we define a QFA recognizing the language MATH: it works as MATH with probability MATH (with amplitude MATH) and as MATH with probability MATH (with amplitude MATH) for each MATH. CASE: MATH. The QFA recognizes MATH with probability MATH. CASE: MATH and MATH. The automaton MATH accepts with probability MATH. Moreover, MATH is accepted by at least MATH automata from MATH. This means that the total probability of accepting is at least MATH. CASE: MATH and MATH. Similarly to the previous case, the total probability of rejecting is at least MATH.
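The remark that a permutation automaton is a special case of a quantum automaton rests on the fact that the matrix of a permutation is unitary. A minimal numerical check (the encoding of a letter as a permutation of state indices is our own illustrative assumption):

```python
import numpy as np

def letter_matrix(perm):
    """Transition matrix of a letter that sends state i to perm[i]:
    column i has a single 1 in row perm[i]."""
    n = len(perm)
    U = np.zeros((n, n))
    for i, j in enumerate(perm):
        U[j, i] = 1.0
    return U

# A letter acting on 3 states as the cycle 0 -> 1 -> 2 -> 0.
U = letter_matrix([1, 2, 0])

# U is unitary (U^T U = I), so it is a legal QFA transition,
# and it moves the basis states exactly as the classical automaton does.
assert np.allclose(U.T @ U, np.eye(3))
assert np.allclose(U @ np.array([1.0, 0.0, 0.0]), [0.0, 1.0, 0.0])
```

Since every letter of a permutation automaton acts this way on the basis states, the whole automaton lifts directly to a QFA with the same acceptance behaviour.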
quant-ph/0009004
We have a QFA MATH which accepts MATH with probability MATH and a QFA MATH which accepts MATH with probability MATH. We will make a QFA MATH which works as follows: CASE: it runs MATH with probability MATH; CASE: it runs MATH with probability MATH; CASE: it accepts the input with probability MATH. CASE: If MATH and MATH, the input is accepted with probability MATH. CASE: If MATH and MATH, the input is accepted with probability at least MATH. CASE: If MATH and MATH, the input is accepted with probability at least MATH. CASE: If MATH and MATH, the input is rejected with probability at least MATH. So the automaton MATH recognizes MATH with probability at least MATH.
quant-ph/0009004
Follows immediately from REF .
quant-ph/0009004
For a contradiction, assume that MATH is a QFA that recognizes MATH with a probability MATH. We construct REF words such that MATH gives a wrong answer on at least one of them. Let MATH be a superposition of QFA corresponding to the state MATH (the superposition after reading some word MATH that leads to MATH). Similarly to the proof of REF , we consider decompositions MATH for all MATH and take MATH, MATH. Let MATH, MATH, MATH. Similarly to the proof of REF , there is a word MATH with the first letter MATH such that MATH and MATH where MATH. Also, there are words MATH and MATH with the first letters MATH and MATH and the same property. Next, we consider the decompositions MATH for all MATH and take MATH, MATH. Let MATH, MATH, MATH. Let MATH be words with the first letters MATH, MATH and MATH such that MATH and MATH (and similar inequalities hold for MATH and MATH). Let MATH be the probability of accepting while reading the left endmarker MATH and the word MATH that leads to the superposition MATH. Let MATH, MATH, MATH be the probabilities of accepting while reading MATH, MATH and MATH if the starting superposition is MATH. Let MATH, MATH and MATH be the probabilities of accepting while reading MATH, MATH and MATH if the starting superposition is MATH. Let MATH, MATH and MATH be the probabilities of accepting while reading MATH, MATH and MATH if the starting superposition is MATH. Let MATH be the probability of accepting the word MATH. Then, MATH . MATH is the sum of probabilities of accepting while reading MATH, accepting while reading MATH, accepting while reading MATH and accepting while reading MATH. The first two probabilities are exactly MATH and MATH. The probability of accepting while reading MATH may differ from MATH because the state of MATH after reading MATH is MATH and the state used to define MATH is MATH. 
However, these two probabilities differ by at most MATH because MATH and the probability distributions resulting from observing MATH and MATH can differ by at most twice the distance between superpositions REF . Similarly, the distance between the state of MATH after reading MATH and MATH is at most MATH and this implies that the probability of accepting while reading MATH portion of MATH differs from MATH by at most MATH. Therefore, the difference between MATH and MATH is at most MATH. Similar bounds are true for probabilities of accepting MATH, MATH, MATH, MATH, MATH. (We denote these probabilities MATH, MATH, MATH, MATH, MATH.) By putting the bounds for MATH, MATH, MATH together, we get MATH . Putting the bounds for MATH, MATH, MATH together gives MATH . However, each of MATH, MATH, MATH is the probability of accepting a word in MATH and must be at least MATH and each of MATH, MATH, MATH is the probability of accepting a word not in MATH and must be at most MATH. Therefore, MATH . A contradiction.
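The estimate used repeatedly above — measurement statistics of two nearby superpositions differ in variational distance by at most twice the norm distance between them — can be checked numerically for computational-basis measurements. The dimension, perturbation size, and random states below are illustrative assumptions, not from the source.

```python
import numpy as np

def variational_distance(p, q):
    """Total variational distance sum_i |p_i - q_i| between two
    probability distributions."""
    return np.abs(p - q).sum()

rng = np.random.default_rng(0)
for _ in range(100):
    # Two nearby unit vectors (pure states) in dimension 4.
    psi = rng.normal(size=4) + 1j * rng.normal(size=4)
    phi = psi + 0.05 * (rng.normal(size=4) + 1j * rng.normal(size=4))
    psi /= np.linalg.norm(psi)
    phi /= np.linalg.norm(phi)
    # Outcome distributions of a computational-basis measurement.
    p, q = np.abs(psi) ** 2, np.abs(phi) ** 2
    assert variational_distance(p, q) <= 2 * np.linalg.norm(psi - phi) + 1e-12
```

The bound follows from |(|a|^2 - |b|^2)| <= |a - b|(|a| + |b|) together with the Cauchy-Schwarz inequality, which is what makes the error terms in the probabilities above proportional to the norms of the discarded components.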
quant-ph/0009004
The minimal automaton of MATH has a structure similar to REF , with some more states. Similarly to REF , the states of the minimal automaton of MATH can be partitioned into REF levels: CASE: The starting state (nothing read so far). CASE: The states after reading MATH, MATH or MATH. CASE: The states after reading MATH, MATH or MATH and MATH, MATH or MATH. CASE: The states after reading MATH, MATH or MATH, MATH, MATH or MATH and MATH, MATH or MATH. If we look for the ``forbidden construction" of REF , then MATH and MATH should be in the MATH or MATH level. (MATH or MATH cannot be in the MATH level because, after reading any letter in the starting state, MATH leaves it and never returns. Also, MATH or MATH cannot be in the MATH level because every state in it is ``all-accepting" or ``all-rejecting".) This leaves us with REF possible cases. CASE: MATH and MATH are both states on the MATH level (after the automaton has read MATH, MATH or MATH). The sets of words that are accepted from MATH and MATH correspond to black pieces in MATH squares in REF . For any two of these three squares, the black pieces in one of them are a subset of the black pieces in the other. (And this means there are no words MATH such that MATH gets accepted from MATH but not from MATH and MATH gets accepted from MATH and not from MATH.) CASE: MATH and MATH are two of REF states on the MATH level (after reading one of MATH, MATH or MATH and one of MATH, MATH or MATH). The sets of words that lead to acceptance correspond to rows in REF . One can easily see that, for any two of them, one is a subset of the other. CASE: One of MATH and MATH is on the MATH level and the other is on the MATH level. W. l. o. g., assume that MATH is on the MATH level and MATH is on the MATH level. Then, the word MATH that leads the automaton MATH from MATH to MATH must contain one of the letters MATH, MATH and MATH.
However, reading MATH, MATH or MATH in the state MATH would lead MATH to a state in the MATH level from which it cannot return to MATH (and, therefore, REF is violated). In all REF cases, we see that one of conditions of REF is violated. Therefore, the minimal automaton MATH does not contain the ``forbidden construction" of REF .
quant-ph/0009007
(Existence) We presuppose the canonical isomorphism between MATH and MATH (see CITE). Every state MATH on MATH gives rise to a function MATH via the equation MATH. Conversely, a function MATH gives rise to a state MATH on MATH just in case MATH and the map MATH is a positive definite kernel CITE. Define MATH by MATH where MATH is the characteristic function of MATH. Obviously, MATH. Now define MATH by MATH . To see that MATH is positive-definite, let MATH and let MATH with MATH. Fix MATH. It follows then that MATH where MATH . Define a relation MATH on MATH by MATH . An inspection of REF shows that MATH is an equivalence relation. Thus, there are disjoint subsets MATH of MATH such that MATH, and MATH . Therefore, MATH is positive-definite. (Uniqueness) Let MATH and let MATH be a state of MATH such that MATH for all MATH. Fix MATH, let MATH, and let MATH. Thus, MATH and MATH. Since MATH are unitary, it follows from CITE that MATH for any MATH. Let MATH. Then, using the NAME relations, we have MATH and MATH . Using REF , and REF, it follows that MATH . Since this is true for all MATH, it follows that MATH when MATH. Similarly, REF , and REF entail that MATH . Since this is true for all MATH, it follows that MATH when MATH. When MATH and MATH, we have MATH . Thus, MATH agrees with MATH on all NAME operators. Since the values of a state on the NAME operators fixes its values on MATH, it follows that MATH. CASE: Let MATH denote the abelian subalgebra of MATH generated by MATH and MATH, with MATH. We have seen that MATH is multiplicative, and hence is a pure state. Thus, MATH has an extension to a pure state MATH on MATH. On the other hand, we have shown that MATH has a unique extension. Therefore, MATH, and MATH is pure.
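The existence half of the proof reduces to checking that a map on pairs is a positive definite kernel, i.e. that every Gram matrix it generates is positive semidefinite. A hedged numerical sketch of that check, using the Laplacian kernel exp(-|s-t|) as a stand-in (our choice, not the paper's kernel):

```python
import numpy as np

# Hedged illustration: a function gives rise to a state iff the associated
# map (s, t) -> k(s, t) is a positive definite kernel, i.e. every Gram
# matrix [k(s_i, s_j)] is positive semidefinite.  We demo the check with
# the Laplacian kernel exp(-|s - t|), a stand-in for the paper's kernel.

def gram(points, k):
    return np.array([[k(s, t) for t in points] for s in points])

rng = np.random.default_rng(0)
pts = rng.normal(size=12)
G = gram(pts, lambda s, t: np.exp(-abs(s - t)))

eigs = np.linalg.eigvalsh(G)          # G is symmetric, so eigvalsh applies
assert eigs.min() > -1e-10            # positive semidefinite
print("minimum eigenvalue:", eigs.min())
```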
quant-ph/0009007
Since MATH is faithful, we have MATH . Using the NAME density theorem, and the fact that multiplication is jointly continuous (in the strong operator topology) on bounded sets, it follows that MATH is contained in the strong-operator closure of MATH. The conclusion then follows immediately.
quant-ph/0009007
We show first that MATH is cyclic for MATH. By the GNS construction, MATH is cyclic for MATH. Since MATH is generated by products of NAME operators, it follows that the set MATH is a total set in MATH. Let MATH . Then it will suffice for our conclusion to show that MATH is a total set in MATH. Let MATH. That is, MATH, for some quadruple MATH of real numbers. Let MATH. Note that since NAME operators are unitary, MATH. Now, MATH where MATH. Thus, MATH . Hence, MATH; that is, MATH is a scalar multiple of MATH. Since MATH is a total set in MATH and since MATH was an arbitrary vector in MATH, it follows that MATH is a total set in MATH. Therefore, MATH is cyclic for MATH. By symmetry, MATH is cyclic for MATH. Since MATH, MATH is separating for MATH and MATH. In order to show that MATH is a trace vector for MATH, let MATH. If MATH, then MATH. If MATH, then MATH . Similarly, MATH. In either case, MATH . By taking linear combinations of NAME operators and norm limits, it follows that MATH is a tracial state of MATH. By taking weak limits in the GNS representation, it follows that MATH is a trace vector for MATH. By symmetry, MATH is a trace vector for MATH.
quant-ph/0009007
Let MATH be the GNS representation of MATH induced by MATH, and let MATH. By REF , it will suffice to find a NAME operator MATH for MATH such that MATH. Since MATH is type II, there is a projection MATH such that MATH is equivalent to MATH CITE. That is, there is a partial isometry MATH such that MATH and MATH. [Note that MATH.] For each MATH, the operator MATH is self-adjoint and unitary. Moreover, for MATH, we have MATH . Since MATH is a trace vector for MATH, it follows that MATH. Hence, MATH, and MATH . Since MATH is also cyclic for MATH, there is a MATH anti-isomorphism MATH of MATH onto MATH such that MATH for all MATH CITE. Defining self-adjoint unitaries MATH by MATH, one obtains (compare CITE) MATH .
quant-ph/0009009
Following NAME 's diagonalization proof CITE, let us consider the following family of unary predicates over MATH depending on the parameter MATH : MATH . Clearly: MATH and so: MATH . In any case: MATH , implying the formula REF .
quant-ph/0009032
By definition, MATH, and by REF , MATH. Write MATH as a telescoping sum. Then MATH, and the theorem follows.
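The telescoping step is the standard hybrid argument; a hedged sketch in generic notation (ours, not the paper's symbols):

```latex
% Hedged sketch in generic notation.  Let |h_t> denote the "hybrid" final
% state in which oracle O is used for the first t queries and O' for the
% rest.  The difference of the two runs telescopes:
\[
  |h_T\rangle - |h_0\rangle
  \;=\; \sum_{t=1}^{T}\bigl(|h_t\rangle - |h_{t-1}\rangle\bigr),
\]
% and, since the evolution after query t is unitary (norm preserving),
% the triangle inequality gives
\[
  \bigl\| |h_T\rangle - |h_0\rangle \bigr\|
  \;\le\; \sum_{t=1}^{T} \bigl\| (O - O')\,|\psi_{t-1}\rangle \bigr\|,
\]
% where |\psi_{t-1}> is the state just before query t when O is used
% throughout.
```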
quant-ph/0009032
For any oracle MATH, we will think of MATH as an infinite bit-string where MATH for all MATH. Operator MATH defined by REF is then given by MATH . Let MATH denote the identity operator. For every MATH, let MATH denote the projection operator onto the subspace querying the MATH-th oracle bit. Let MATH. By REF . For every MATH and MATH, let MATH, where MATH is such that MATH (MATH for 'answer'). Then MATH where MATH denotes the complex conjugate of MATH. Rewrite the above equation in terms of distances MATH, MATH . For every MATH, let MATH denote the total mass that queries the oracle at MATH index-positions above and below the leftmost REF. By the NAME - NAME inequality, MATH . The right hand side is the written-out product of REF matrices. Let MATH and MATH, where MATH denotes transposition, and let MATH denote the MATH matrix with entry MATH defined by MATH for all MATH. Then MATH where MATH denotes the induced matrix norm. Since MATH, we have that MATH. Matrix MATH is a NAME matrix, and its norm is upper bounded by the norm of the MATH NAME matrix MATH defined by MATH for all MATH. The norm of any NAME matrix is upper bounded by MATH (see for example CITE for a neat argument), and hence MATH.
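The final step bounds the spectral norm of a Hankel-type matrix by the norm of a Hilbert-type matrix. A hedged numerical check using the classical Hilbert matrix, whose infinite version has operator norm pi and whose finite sections stay strictly below it (our stand-in for the paper's matrix):

```python
import math
import numpy as np

# Hedged numerical check of the final bound: the (infinite) Hilbert matrix
# H[i,j] = 1/(i+j+1) has operator norm pi, and every finite section has
# spectral norm strictly below pi.  We use the classical Hilbert matrix as
# a stand-in for the paper's Hankel-type matrix.

def hilbert_section(n):
    i = np.arange(n)
    return 1.0 / (i[:, None] + i[None, :] + 1.0)

for n in (5, 20, 80):
    norm = np.linalg.norm(hilbert_section(n), 2)   # spectral norm
    assert norm < math.pi                          # uniform upper bound
print("spectral norms of finite sections stay below pi =", math.pi)
```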
quant-ph/0009038
As given in Ref. CITE.
quant-ph/0009038
The proof of REF is given in CITE and CITE. We stress that the MATH direction holds in any OL.
quant-ph/0009038
REF fail in lattice OREF, so they imply the orthomodular law. For the converse with REF we start with MATH. It is easy to show that MATH and MATH. By applying the NAME theorem (which we shall subsequently refer to as F-H) CITE to our starting expression we obtain: MATH. The first conjunction is by orthomodularity equal to MATH. The disjunction is thus equal or less than MATH and we arrive at MATH. By multiplying both sides by MATH we get MATH. By symmetry we also have MATH. A combination of the latter two equations proves the theorem. We draw the reader's attention to the fact that MATH does not hold in all OMLs (it is violated by REF). For the converse with REF we start with REF and obtain MATH. On the other hand, starting with MATH we obtain MATH. Therefore the conclusion. As for the statements with MATH substituted for MATH they fail in OREF, so they imply the orthomodular law. For the converse it is sufficient to note that in any OML the following holds: MATH .
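The recurring pattern here is that an equation fails in a small ortholattice and hence implies the orthomodular law. A hedged sketch of the standard counterexample lattice, the benzene ring O6, with the failure of the orthomodular law exhibited by brute force (our encoding; the paper's lattice OREF is only named, not specified):

```python
# Hedged illustration of the "fails in a small lattice, hence implies
# orthomodularity" pattern: the benzene-ring ortholattice
# O6 = {0, a, b, b', a', 1} with a <= b and b' <= a' is an ortholattice in
# which the orthomodular law  x <= y  =>  y = x v (y ^ x')  fails.

ELEMS = ['0', 'a', 'b', "b'", "a'", '1']
LOWER = {'0': {'0'}, 'a': {'0', 'a'}, 'b': {'0', 'a', 'b'},
         "b'": {'0', "b'"}, "a'": {'0', "b'", "a'"}, '1': set(ELEMS)}
COMP = {'0': '1', '1': '0', 'a': "a'", "a'": 'a', 'b': "b'", "b'": 'b'}

def leq(x, y):
    return x in LOWER[y]

def meet(x, y):
    common = LOWER[x] & LOWER[y]                 # always a chain in O6
    return max(common, key=lambda z: len(LOWER[z]))

def join(x, y):
    return COMP[meet(COMP[x], COMP[y])]          # De Morgan dual

violations = [(x, y) for x in ELEMS for y in ELEMS
              if leq(x, y) and join(x, meet(y, COMP[x])) != y]
assert ('a', 'b') in violations
print("orthomodular law fails in O6 at:", violations)
```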
quant-ph/0009038
To obtain REF from REF we apply REF CASE : MATH. Reversing the steps yields REF from REF . To get REF we first note that one can easily derive MATH from REF . Then one gets REF by applying REF . To arrive at REF starting from REF we apply REF and reduce the right-hand side of REF to MATH, which yields REF .
quant-ph/0009038
REF follows from REF , and by REF an ortholattice that admits a strong set of classical states is orthomodular. Let now MATH and MATH be any two lattice elements. Assume, for state MATH, that MATH. Since the lattice admits a strong set of classical states, this implies MATH, so MATH. But MATH for any state, so MATH. Hence we have MATH, which means (since the ortholattice admits a strong set of classical states) that MATH. This is another way of saying MATH. By F-H CITE, an orthomodular lattice in which any two elements commute is distributive.
quant-ph/0009038
CITE By REF , to any two orthogonal atoms MATH and MATH there correspond orthogonal one-dimensional subspaces (vectors) MATH and MATH from MATH such that MATH and MATH. The unitary orthoautomorphism MATH maps into the unitary operator MATH so as to give MATH for some MATH. From this and from the unitarity of MATH we get: MATH. Hence, there is an infinite orthogonal sequence MATH, such that MATH, for all MATH. Then NAME 's CITE and NAME 's CITE theorems prove the claim.
quant-ph/0009038
We need only to use pure states defined by unit vectors: If MATH and MATH are closed subspaces of NAME space, MATH such that MATH is not contained in MATH, there is a unit vector MATH of MATH belonging to MATH. If for each MATH in the lattice of all closed subspaces of MATH, MATH, we define MATH as the square of the norm of the projection of MATH onto MATH, then MATH is a state on MATH such that MATH and MATH. This proves that MATH admits a strong set of states, and this proof works in each of REF cases where the underlying field is the field of real numbers, of complex numbers, or of quaternions. We can formalize the proof as follows: MATH .
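The displayed construction is the standard vector state m(C) = ||P_C x||^2. A hedged numeric sketch in R^3 with a concrete pair of subspaces A not contained in B (our toy choice, not the paper's spaces):

```python
import numpy as np

# Hedged numeric sketch of the argument: for closed subspaces A, B with A
# not contained in B, pick a unit vector x in A orthogonal to B; the vector
# state m(C) = ||P_C x||^2 then satisfies m(A) = 1 and m(B) = 0 < 1.
# We use A = span{e1, e2}, B = span{e2} in R^3 (our toy choice).

def projector(columns):
    Q, _ = np.linalg.qr(columns)          # orthonormal basis of the span
    return Q @ Q.T

e1, e2 = np.eye(3)[:, [0]], np.eye(3)[:, [1]]
P_A = projector(np.hstack([e1, e2]))
P_B = projector(e2)

x = e1.ravel()                            # unit vector in A, orthogonal to B
m = lambda P: float(np.linalg.norm(P @ x) ** 2)

assert abs(m(P_A) - 1.0) < 1e-12 and m(P_B) < 1e-12
print("m(A) =", m(P_A), " m(B) =", m(P_B))
```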
quant-ph/0009038
The proof is similar to that in CITE. By REF we have MATH etc., because MATH, that is, MATH in any ortholattice. Assuming MATH we get MATH. Hence, MATH. Therefore, MATH. Thus, by REF for strong quantum states, we obtain: MATH, MATH, and MATH, from which we get MATH. By symmetry, we get MATH. Thus MATH. MATH-Go is orthomodular because REF-Go fails in OREF, and MATH-Go implies MATH-Go in any OL REF . It is a variety smaller than OML because REF-Go fails in the NAME lattice from REF a.
quant-ph/0009038
We use induction on MATH. The basis is simply the definition of MATH. Suppose MATH. Multiplying both sides by MATH, we have MATH . F-H was used in the last two steps, whose details we leave to the reader.
quant-ph/0009038
Lattice OREF violates all of the above equations as well as MATH-Go. Thus for the proof we can presuppose that any OL in which they hold is an OML. REF follows from definitions, replacing variables with their orthocomplements in MATH-Go. Assuming REF , we make use of MATH to obtain the equivalent equation MATH, so MATH. By renaming variables, the other direction of the inequality also holds, establishing MATH-Go. Conversely, MATH-Go immediately implies MATH. The proof for REF is similar. For REF , we demonstrate only REF , MATH. From REF , by rearranging factors on the left-hand side we have MATH, so MATH (from REF), etc.; this way we build up REF . For the converse, REF obviously follow from REF . For REF , using REF we can write REF as MATH. Multiplying both sides by MATH and using F-H we obtain MATH. Conversely, disjoining both sides of MATH with MATH and using F-H and REF we obtain REF . The proof for REF is similar.
quant-ph/0009038
These obviously follow from REF and (for MATH) the fact that MATH.
quant-ph/0009038
For REF , MATH. For REF , we have MATH . In the third step we used REF ; in the fourth MATH and MATH; in the fifth MATH and MATH. Rearranging the left-hand side, this proof also gives us MATH and thus MATH. The other direction of the inequality follows from MATH.
quant-ph/0009038
We have already proved the converses in REF . From REF , we have MATH since MATH when MATH for MATH. By rearranging the left-hand side we also have MATH and MATH. Thus MATH, which is the REFGO law by REF . In the penultimate step we used REF . The proof for REF is similar, and from REF we obtain REF .
quant-ph/0009038
We illustrate the proof by showing that REF-variable equation MATH holds in any REFGO. The essential identities we use are MATH which hold in any OML. Starting with REF , we have MATH where in the penultimate step we used REF [or more generally REF ] and in the last REF . Substituting MATH for MATH and using REF we obtain MATH . Using REF for the other direction of the inequality, we obtain REF . The reader should be able to construct the general proof.
quant-ph/0009038
Substitute MATH for MATH in REF-Go.
quant-ph/0009038
For REF : MATH. From hypotheses, MATH commutes with MATH and MATH. Using F-H twice, MATH. For REF : From hypotheses, MATH. For REF : From REF , MATH[from hypothesis] MATH. Thus MATH, which by REF is MATH.
quant-ph/0009038
Substituting MATH for MATH, MATH for MATH, MATH for MATH, MATH for MATH, and MATH for MATH, we satisfy the hypotheses of REF and obtain REF . Conversely, suppose the hypotheses of REF hold. From the hypotheses and REF , we obtain MATH. Thus MATH. Applying REF to the right-hand side, we obtain MATH. Then REF gives us REF .
quant-ph/0009038
For REF : This is the same as REF for MATH. For REF : Using REF we express the REFGO law as MATH . We define MATH, MATH, and MATH. In REF we substitute MATH for MATH, MATH for MATH, MATH for MATH, MATH for MATH, MATH for MATH, MATH for MATH, MATH for MATH, MATH for MATH, MATH for MATH, MATH for MATH, MATH for MATH, and MATH for MATH. With this substitution, all hypotheses of REF are satisfied by the hypotheses of REF . The conclusion becomes MATH . We simplify REF using MATH[since MATH and MATH commute by hypothesis] MATH[since MATH] MATH; MATH similarly; MATH; and MATH. This gives us MATH . Now, in any OML we have MATH. Thus the left-hand side of REF absorbs MATH, so MATH which after rearranging is exactly REF . For REF : Using REF we obtain from the REFGO law MATH . Therefore MATH . In any OML we have MATH; applying this to the right-hand side we obtain REF .
quant-ph/0009038
To show this equation holds in REFGO, we start with REF that occurs in the proof of NAME 's REF , rewriting it as: MATH . We substitute MATH for MATH, MATH for MATH, MATH for MATH, MATH for MATH, MATH for MATH, and MATH for MATH. With these substitutions, the hypotheses of REF are satisfied. This results in REF , showing that REF holds in REFGO. We show independence as follows. On the one hand, REF fails in the NAME OML (REF a) but holds in OML GREF (REF a). On the other hand, the REFGO law REF holds in the NAME OML but fails in GREF.
quant-ph/0009038
If MATH has REF or more variables, we replace it with a chained identity per REF ; otherwise, we replace it with the extended definition we mention after REF . The proof is then obvious. (In many cases the equation may also hold for smaller MATH or even in OML or OL, for example, when MATH.)
quant-ph/0009038
We illustrate the case MATH. In any OML we have MATH. Thus MATH.
quant-ph/0009038
In MATH, set MATH. On the other hand, lattice LREF (REF b) is a REFOA because it is an OML in which REF holds, but it is not a REFOA because it violates REF .
quant-ph/0009038
For REF - REF : We omit the easy proofs. For REF : If MATH then MATH using REF , so MATH (via F-H) MATH. Conversely, if MATH, then using REF , MATH.
quant-ph/0009038
We will work with the dual of REF , MATH . First we show that the REFOA law implies REF . In any OL we have MATH . Substituting MATH for MATH, MATH for MATH, and MATH for MATH; simplifying with REF ; and applying REF to the left-hand side of the conclusion we obtain MATH . We convert the REFOA law MATH to MATH to MATH using REF . We substitute MATH for MATH and simplify with REF to obtain MATH. Combining with REF yields MATH . Letting MATH we have MATH [for example, for REF , using REF we have MATH] from which we obtain REF . Conversely, assume REF holds. Let MATH. The hypotheses of REF are satisfied using REF . Noticing [with the help of REF ] that the right-hand side of the resulting inequality is MATH, we have MATH, so MATH. Applying REF , we have the REFOA law MATH.
quant-ph/0009038
Using the definitions, REF can be written in the dual form MATH. We substitute MATH for MATH and MATH for MATH throughout; simplifying with REF we obtain MATH. This is easily shown to be equivalent to REF .
quant-ph/0009038
The proof is analogous to that for REF .
quant-ph/0009038
For REF , assuming MATH we have MATH by REF . Conversely, from REF MATH and MATH and from the hypothesis MATH, so MATH. For REF , the proof is similar, noticing for its converse that MATH.
quant-ph/0009038
This equation is obviously a substitution instance of the REFOA law REF . On the one hand, it fails in lattice LREF but holds in lattice MATH (REF a). On the other hand, the REFOA law REF holds in lattice LREF but fails in lattice MATH.
quant-ph/0009038
Writing the REFOA law as MATH, we weaken the left-hand side of the inequality with MATH, etc. to obtain MATH . This equation fails in OML LREF but holds in LREF.
quant-ph/0009038
It is obvious by definition that MATH is the REFOA law REF . It is also obvious MATH implies MATH and thus the REFOA law: each subexpression of MATH is greater than or equal to the subexpression of MATH that replaces it. To show that MATH holds in MATH, we closely follow the proof of the orthoarguesian equation in CITE. We recall that in lattice MATH, the meet corresponds to set intersection and MATH to MATH. We replace the join with subspace sum MATH throughout: the orthogonality hypotheses permit us to do this on the left-hand side of the conclusion CITE, and on the right-hand side we use MATH. Suppose MATH is a vector belonging to the left-hand side of REF . Then there exist vectors MATH such that MATH. Hence MATH for MATH. In REF we assume, for our induction hypothesis, that the components of vector MATH can be distributed over the leftmost terms on the right-hand side of the conclusion as follows: MATH . In particular if we eliminate the right-hand ellipses we obtain a MATH proof of the starting equation MATH, which is the REFOA law; this is the basis for our induction. Let us first extend REF by adding variables MATH and MATH to the hypothesis and left-hand side of the conclusion. The extended REF so obtained obviously continues to hold in MATH. Suppose MATH is a vector belonging to the left-hand side of this extended REF . Then there exist vectors MATH such that MATH. Hence MATH for MATH. On the right-hand side of the extended REF , for any arbitrary subexpression of the form MATH, where MATH, the vector components will be distributed (possibly with signs reversed) as MATH and MATH. If we replace MATH with MATH, components MATH and MATH can be distributed as MATH so that MATH remains an element of the replacement subexpression. We continue to replace all subexpressions of the form MATH, where MATH, as above until they are exhausted, obtaining REF . 
That MATH cannot be inferred from the REFOA law follows from the fact that the REFOA law holds in LREF while MATH fails in it. LREF is the only other lattice with this property among all NAME REF-atoms-in-a-block lattices with REF atoms and REF blocks. LREF and LREF are most probably the smallest NAME REF-atoms-in-a-block lattices with that property: we scanned some REF of smaller lattices and did not find any other.
quant-ph/0009038
The proof is analogous to the proof of REF .
quant-ph/0009038
In any OML, from REF we have MATH. Using this, we rewrite MATH as MATH. In any OML we also have MATH. Repeatedly applying the commutation law MATH, we prove MATH. Similarly, for any MATH we have MATH. Repeatedly applying the commutation laws MATH and MATH, we can build up MATH for any expression MATH constructed from variables MATH. As an exercise, the reader is invited to show an alternate proof using REF .
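The bookkeeping above (closure of the commutation relation under lattice operations, feeding Foulis-Holland) can be checked exhaustively in a small OML. A hedged finite check in the Chinese-lantern lattice MO2 (our choice of test lattice, not the paper's):

```python
# Hedged finite check of the commutation closure above, run in the OML
# MO2 = {0, x, x', y, y', 1} (two incomparable atom pairs): whenever a
# commutes with b and with c, Foulis-Holland distributivity holds and a
# also commutes with b ^ c.

ELEMS = ['0', 'x', "x'", 'y', "y'", '1']
LOWER = {e: {'0', e} for e in ELEMS}
LOWER['0'] = {'0'}
LOWER['1'] = set(ELEMS)
COMP = {'0': '1', '1': '0', 'x': "x'", "x'": 'x', 'y': "y'", "y'": 'y'}

def meet(a, b):
    return max(LOWER[a] & LOWER[b], key=lambda z: len(LOWER[z]))

def join(a, b):
    return COMP[meet(COMP[a], COMP[b])]

def commutes(a, b):                    # a = (a ^ b) v (a ^ b')
    return a == join(meet(a, b), meet(a, COMP[b]))

for a in ELEMS:
    for b in ELEMS:
        for c in ELEMS:
            if commutes(a, b) and commutes(a, c):
                # Foulis-Holland distributivity under the hypotheses
                assert meet(a, join(b, c)) == join(meet(a, b), meet(a, c))
                assert commutes(a, meet(b, c))
print("commutation closure verified in MO2")
```

Note that in MO2 the atoms x and y do not commute, so the hypotheses genuinely restrict the triples being checked.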
quant-ph/0009038
REF .
quant-ph/0009038
For REF : MATH[using MATH, MATH] MATH[using MATH, MATH] MATH[using MATH, MATH] MATH[using MATH, MATH] MATH[since MATH and MATH] MATH. For REF : MATH[using MATH, MATH] MATH[using MATH] MATH. The other direction of the inequality is obvious. For REF : MATH[using MATH, MATH] MATH.
quant-ph/0009038
For REF : Let MATH abbreviate MATH. By REF we have MATH. Hence from the first hypothesis MATH, so MATH and by REF MATH. Using REF , we obtain MATH . Since MATH, it follows that MATH, MATH, and MATH. In a similar way we obtain REF from REF respectively. To obtain the MATH-GO law from REF , we substitute MATH for MATH, MATH for MATH and MATH, and MATH for MATH. The hypotheses of REF are satisfied in any OML, and the conclusion becomes MATH[using REF ] MATH, which is REF . To obtain the MATH-GO law from REF , we make the same substitutions as above. The conclusion becomes MATH[using REF ] MATH. Therefore MATH. The left-hand side evaluates as MATH, establishing REF . To obtain the MATH-GO law from REF , we substitute MATH for MATH, MATH for MATH and MATH, and MATH for MATH. After that the proof is the same as for REF .
quant-ph/0009038
For REF : By F-H we have MATH. From REF we have MATH. Combining these we have MATH. Substituting MATH for MATH and MATH for MATH and simplifying with REF gives the result. For REF , MATH: Expanding the definition of MATH REF and applying F-H, we have MATH [using REF ] MATH. For REF , MATH: Expanding the definition of MATH and applying F-H, we have MATH [using REF ] MATH. Substituting MATH for MATH and MATH for MATH and simplifying with REF gives the result.
quant-ph/0009038
Assume that REF holds. Substitute MATH for MATH, MATH for MATH, and MATH for MATH. It is easy to see the hypotheses of REF are satisfied [use REF to establish the third hypothesis]. Using REF , the left-hand side of the conclusion evaluates to MATH. The right-hand side is MATH, which by REF is MATH, establishing the REFOA law REF . Conversely, we show the REFOA law implies REF . In any OML we have from the third hypothesis MATH. From the first hypothesis and the REFOA law REF we obtain MATH. From the second hypothesis we have MATH. Thus MATH. Since MATH holds in any ortholattice, we conclude MATH.
quant-ph/0009038
Assume that REF holds. Since MATH, we also have that REF holds. So by REF we have MATH . By REF we also have MATH, so MATH; applying REF again to the right-hand side we obtain MATH . In REF we substitute MATH for MATH, MATH for MATH, and MATH for MATH. It is easy to see the hypotheses of REF are satisfied, and the conclusion gives us MATH . From REF , and REF we conclude the REFOA law REF . For the converse, the proof that the REFOA law implies REF is essentially identical to that for REF .
quant-ph/0009039
The theorem is a special case of one in CITE, and the reader is referred to that paper for a strictly formal proof. Here we will give a slightly less formal sketch. Let us say that a diagram MATH is accepted by the algorithm if a call scan-MATH occurs. We will first prove that at least one member of each isomorphism class of diagram in MATH with at most MATH blocks is accepted. Then we will prove that at most one member of each isomorphism class is accepted. These two facts together will obviously imply the truth of the theorem. Suppose that the first assertion is false: there is an isomorphism class in MATH, with at most MATH blocks, that is never accepted. Let MATH be a member of such a missing isomorphism class which has the least number of blocks. MATH cannot be irreducible, since all irreducible diagrams are accepted explicitly. Thus, we can choose MATH and consider MATH. Since MATH and MATH has fewer blocks than MATH, at least one isomorph MATH of MATH is accepted. The isomorphism from MATH to MATH maps MATH onto some subset of MATH. Let MATH be a set of atoms consisting of that subset plus enough new atoms to make MATH the same size as MATH. The for loop considers some extension MATH equivalent to MATH, since it considers all equivalence classes of extensions. Moreover, since MATH we can infer that MATH and consequently that MATH. This means that the algorithm will perform the call scan-MATH, which is a contradiction, as MATH is isomorphic to MATH and the isomorphism class of MATH was supposed not to be accepted at all. This proves that all isomorphism classes are accepted at least once. Next suppose that some isomorphism type is accepted twice. Namely, there are two isomorphic but distinct diagrams MATH and MATH in MATH, with at most MATH blocks, such that both MATH and MATH are accepted. Choose such a pair MATH with the least number of blocks.
As before, MATH and MATH cannot be irreducible, so they must be accepted by some calls scan-MATH and scan-MATH which arise from the calls scan-MATH and scan-MATH, respectively, where MATH and MATH. The properties of MATH ensure that MATH and MATH are isomorphic, so they must in fact be the same diagram MATH (since isomorphism classes with fewer blocks than MATH are accepted at most once by assumption). However, MATH and MATH are equivalent but distinct extensions of MATH, which violates the for loop specification. This contradiction completes the proof.
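The invariant being proved, that exactly one member of each isomorphism class is accepted, can be illustrated by brute force on a tiny universe. A hedged sketch (the paper's scan algorithm is incremental; this only shows the accept-one-canonical-representative idea): enumerate all graphs on 3 labeled vertices and accept a graph only if it equals the lexicographically least among its relabelings.

```python
from itertools import combinations, permutations

# Hedged, brute-force illustration of "exactly one diagram per isomorphism
# class": accept a graph only if it is the lexicographically least among
# its relabelings.  (Not the paper's algorithm, just its invariant.)

VERTS = range(3)
ALL_EDGES = list(combinations(VERTS, 2))   # (0,1), (0,2), (1,2)

def relabel(edges, perm):
    return frozenset(frozenset((perm[u], perm[v])) for u, v in edges)

def canonical(edges):
    images = [relabel(edges, dict(zip(VERTS, p)))
              for p in permutations(VERTS)]
    return min(sorted(map(sorted, img)) for img in images)

accepted = set()
for k in range(len(ALL_EDGES) + 1):
    for edge_set in combinations(ALL_EDGES, k):
        accepted.add(str(canonical(edge_set)))

# 4 isomorphism classes of graphs on 3 vertices: 0, 1, 2, or 3 edges
assert len(accepted) == 4
print("graphs on 3 vertices up to isomorphism:", len(accepted))
```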
quant-ph/0009039
Consider a diagram MATH with more than one block. If MATH is not restricted to connected diagrams, MATH for any MATH, so MATH is reducible. Suppose instead that MATH contains only connected diagrams. Choose a longest possible sequence MATH of distinct blocks MATH, where MATH for MATH. Let MATH and MATH be two atoms of MATH. Since MATH is connected, there is a chain of blocks from MATH to MATH. This same chain is in MATH unless it contains MATH. However, all the blocks MATH intersecting MATH are in MATH (or else MATH can be made longer), so MATH can be replaced in MATH by some portion of MATH. Hence MATH is connected, so MATH is reducible.
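The connectivity notion used in this lemma, any two atoms linked by a chain of pairwise-intersecting blocks, can be sketched concretely. A hedged toy check (our example blocks) that deleting the end block of a chain preserves connectivity, while a genuinely disconnected family is detected:

```python
# Hedged sketch of the connectivity notion used above: a diagram (a family
# of blocks, i.e. sets of atoms) is connected if every block can be reached
# from the first via chains of pairwise-intersecting blocks.

def connected(blocks):
    blocks = [frozenset(b) for b in blocks]
    if not blocks:
        return True
    reached = set(blocks[0])
    remaining = blocks[1:]
    changed = True
    while changed:
        changed = False
        for b in list(remaining):
            if b & reached:          # block shares an atom with the component
                reached |= b
                remaining.remove(b)
                changed = True
    return not remaining

chain = [{1, 2}, {2, 3}, {3, 4}]        # B1 - B2 - B3, a maximal chain
assert connected(chain)
assert connected(chain[:-1])            # dropping the end block B3
assert not connected([{1, 2}, {3, 4}])  # two components
print("end-block deletion preserved connectivity in the toy chain")
```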
quant-ph/0009091
For each individual pair MATH, where MATH, by the definitions of MATH and MATH, we have MATH . By the properties of the inner product and the fact that MATH is unitary, the above expressions simplify to MATH . The effects of MATH and MATH cancel out on most base vectors, except for those MATH such that MATH. Therefore REF is bounded from above by MATH . By the NAME inequality, this expression can be further upper-bounded by MATH . Now we are ready to bound MATH. By definitions and pulling summations out of the absolute value, we obtain MATH . Plugging in the upper bound of REF , this is then upper-bounded by MATH . By reordering the terms of the summation and applying the NAME inequality, this is further upper-bounded by MATH . Let MATH be a column vector with MATH . Clearly, MATH and MATH . Let MATH be a NAME matrix with MATH . Now REF can be upper-bounded by MATH . Since every MATH is a unit vector, we have MATH . Clearly MATH, by REF . Applying the NAME inequality and by the definition of the spectral norm, we obtain the desired upper bound: MATH .
quant-ph/0009102
Let MATH be fixed; putting MATH we find that MATH satisfies REF , since if MATH then MATH is in MATH. As a consequence of uniqueness, we have the desired result.
quant-ph/0009102
Using the properties of integration of projection valued measures, it is easy to see that the MATH-timelike component is a c-number if and only if MATH is constant almost everywhere according to MATH. It is constant only on the two-dimensional affine subspaces of MATH parallel to MATH. But considering the transformation REF , it is impossible that the support of MATH is in one of these subspaces.
cs/0010005
We prove the lemma by induction on MATH. Let MATH be a total, associative function. CASE: For MATH, both REF above hold trivially. CASE: Let MATH be such that MATH. Since MATH is total, MATH. Since MATH, either MATH or MATH (or both). Therefore, for MATH, MATH generates one of the sets that satisfies one of REF or REF above. CASE: Let MATH such that MATH. Suppose that no set of size greater than or equal to MATH exists that satisfies one of REF or REF above for MATH. By the induction hypothesis, there exists a MATH such that MATH generates a set of size MATH that satisfies one of REF or REF above. Suppose that REF is satisfied (the argument in the other case is analogous). By the conditions of REF , there exist strings MATH (where MATH are distinct, and distinct from MATH) such that MATH . Choose distinct MATH satisfying MATH . Since MATH is associative, for each MATH, MATH (the equation on line REF holds because, by assumption, for all MATH, MATH). Set MATH. If at least one such MATH is not a member of MATH, then MATH satisfies REF for MATH and thus contradicts our assumption that no such set of size MATH exists. Otherwise, every such MATH is a member of MATH. Since MATH, by the pigeonhole principle, there exists some MATH such that MATH . Let MATH, and observe that MATH and for each MATH, MATH . Since we chose distinct MATH, this set is large enough to contradict our assumption that no such set of size MATH exists.
cs/0010005
Let MATH be a nondecreasing, unbounded, total, recursive function. We will construct a MATH-to-REF, total, commutative, associative, recursive function MATH. Our construction uses a downward self-reducibility trick that results in a total, single-valued, one-to-one function MATH (recall that MATH is the ``power multiset" of MATH) with the following property: MATH if and only if MATH, where MATH. Since MATH is associative and commutative, all elements in MATH are of the form MATH, where MATH is a permutation of MATH. It follows from simple combinatorics that MATH. Conversely, MATH if and only if MATH (MATH is so named because the properties mentioned above are very similar to certain properties that prime factorizations have over the natural numbers). Thus, if MATH can first compute MATH and MATH before it computes MATH, it can choose a value for MATH so that MATH satisfies the ambiguity bound MATH. This can be done as follows: on input MATH, MATH performs the following two-phase process. The first phase starts with an empty set MATH (so named because it contains the portion of MATH that is currently ``known"), to which MATH will add ordered pairs as elements, in a well-defined order that is independent of the values MATH. In effect, MATH at any time MATH constitutes a partial definition of MATH. We will denote the partial function defined by MATH at time MATH of MATH running on input MATH by MATH; that is, MATH . Phase one concludes at some time MATH such that both MATH and MATH are defined. If, at time step MATH, there exists a MATH such that MATH is defined and equal to MATH, then MATH outputs MATH. Otherwise, MATH chooses MATH so that CASE: MATH is not defined, and CASE: MATH. MATH then adds MATH to MATH, outputs MATH, and halts.
The partial functions MATH are, in a sense, analogous to the stages of a finite extension construction used in relativization proofs (in fact, our construction is in some sense a diagonalization of the ambiguity bound MATH, one that is computable, of course). In order for these partial functions to ``add up" to a single function (that is, MATH) that has all the properties we desire, it is crucial that for each pair of input strings MATH and time step MATH, the definition of MATH is consistent with all other MATH in a significant way. By this we mean that MATH . It is also necessary that every MATH be one-to-one. We claim that MATH, defined on input MATH by the following procedure, gives rise to such a family of functions. CASE: (Phase one) IF MATH, LET MATH (where MATH is the string that immediately precedes MATH in the lexicographical order), and discard MATH. CASE: LET MATH, CASE: LET MATH, CASE: (Phase two) OUTPUT MATH, where MATH, on input MATH, is defined by the following procedure: CASE: IF, for some MATH, MATH, OUTPUT MATH, CASE: ELSE LET MATH, and OUTPUT MATH, and on input MATH, MATH is defined by the following procedure: CASE: IF, for some MATH, MATH, OUTPUT MATH, CASE: ELSE CASE: LET MATH (where MATH is defined relative to the lexicographic ordering), CASE: LET MATH, CASE: OUTPUT MATH. Note that MATH and MATH are the only places where elements are added to MATH. Before we prove our claims, we need the following definition: for all (possibly partial) functions MATH and MATH defined over the same domain and range, we say that MATH extends MATH if, wherever MATH is defined, MATH is also defined, and for all MATH where MATH and MATH are both defined, MATH. Now, from the definition of MATH, the following claims follow easily: CASE: For all inputs MATH and at every time step MATH during the execution of MATH, MATH is one-to-one and single-valued. This can easily be proved by induction over the lexicographic order of all paired input strings MATH.
CASE: For every two pairs of input strings MATH, MATH, and corresponding time steps MATH and MATH, either MATH extends MATH or MATH extends MATH (this captures our intuition that the partial functions must be significantly consistent). This is because the order in which the functions MATH and MATH are called on particular input values is independent of the input values to MATH (although, of course, the number of calls in this sequence that are made is not), because MATH never removes elements from MATH, and because the actions that MATH and MATH take depend only on their respective inputs and on the current value of MATH. Clearly, for every MATH, there are infinitely many partial functions MATH such that MATH is defined; thus any function extending all such MATH must be total. It follows from item two that there is a unique, single-valued function that extends all partial functions MATH. We will define MATH to be this unique, total, single-valued function. We make the following claims: CASE: MATH . Otherwise, since each MATH is one-to-one and single-valued, MATH would not extend any MATH on which both MATH and MATH are defined. CASE: MATH . This follows immediately from the definitions of MATH and MATH. We are now ready to prove our main claims. CASE: Clearly, MATH halts and outputs on every input, therefore it must be total. CASE: For all MATH, and by REF , MATH . By REF , MATH. CASE: For all MATH, by REF, MATH. By REF, MATH. CASE: By REF above, for all MATH, and all MATH, MATH. There are no more than MATH such pairs MATH. For all MATH for which MATH is defined, we have MATH, and MATH was added to MATH during a call to MATH. Since, by the construction of MATH, MATH, we conclude that MATH must be MATH-to-one. We conclude that MATH is a MATH-to-one, total, commutative, associative, recursive function.
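The counting step above rests on a simple fact: for an associative, commutative f, a product of m distinct factors depends only on the multiset of factors, so all m! orderings collide onto one value. A hedged stand-in (our function, not the paper's construction) using a canonical multiset-union encoding:

```python
from functools import reduce
from itertools import permutations
from math import factorial

# Hedged stand-in for the construction's counting step: define f as a
# canonical encoding of multiset union (sorted concatenation of tuples).
# f is total, commutative, and associative, so all m! orderings of m
# distinct factors fold to the same value -- the ambiguity phenomenon the
# proof controls.  (This f is ours, not the paper's.)

def f(x, y):
    # canonical multiset union of the two encoded multisets
    return tuple(sorted(x + y))

atoms = [('a',), ('b',), ('c',), ('d',)]
values = {reduce(f, perm) for perm in permutations(atoms)}

assert f(atoms[0], atoms[1]) == f(atoms[1], atoms[0])             # commutative
assert f(f(*atoms[:2]), atoms[2]) == f(atoms[0], f(*atoms[1:3]))  # associative
assert len(values) == 1 and len(list(permutations(atoms))) == factorial(4)
print("all 4! orderings collapse to", values.pop())
```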
cs/0010005
Let MATH, MATH, and MATH be as assumed above. Let MATH be a nondeterministic NAME machine that accepts MATH, runs in polynomial time, and on input MATH has no more than MATH accepting paths. We will use MATH to build an associative, one-way function MATH that is strong, total, and MATH-to-one. First, we introduce some notation. Let MATH, and let MATH be such that MATH. Define MATH and MATH as follows: if MATH, then MATH is the MATH-th character (counting from the left) of MATH, and MATH is the substring of MATH consisting of all characters in MATH starting from the MATH-th; if MATH, then MATH. We define the set of witnesses for MATH with respect to MATH by MATH. Since MATH has at most MATH accepting paths, MATH, and MATH if and only if MATH. We will assume, without loss of generality, that there exists a strictly increasing polynomial MATH that depends only on MATH such that for each MATH, and for each MATH, MATH and MATH. To make MATH easier to understand, we will construct it from several subroutines. The first plays the role of a ``one-way gate''. We define the subroutine MATH as follows: MATH. Clearly, MATH is total, and for all MATH, MATH. For MATH, MATH is defined as follows: MATH. Clearly, MATH is total. Suppose that MATH. Consider the maximum size of MATH. First, from the definition of MATH, MATH. Consider each case below: CASE: If MATH, then MATH; therefore MATH. CASE: If MATH, then MATH and MATH. It follows that MATH. CASE: If MATH, then MATH, where MATH; therefore MATH. We define the REF-ary function MATH as MATH, where MATH is scalar multiplication. Finally, we define the REF-ary function MATH as MATH. Clearly, MATH is total and honest. We claim that MATH is MATH-to-one, associative, one-way, and strong. CASE: Let MATH and MATH. First, observe that MATH. Now, using the above equation where necessary, MATH. CASE: Suppose that MATH is in the image of MATH. It follows that MATH, and that there are exactly MATH pairs of string suffixes MATH such that MATH.
By the construction of MATH, MATH. The following table lists all of the possible preimage values MATH of MATH, given MATH, MATH, MATH, and MATH. MATH . It is easy to see (by counting the number of distinct elements for a given set of MATH) that for each MATH there are at most MATH elements MATH such that MATH, and likewise for MATH. In sum, then, since MATH is nondecreasing, there are no more than MATH preimage elements MATH such that MATH, so MATH must be MATH-to-one. CASE: Suppose that there is some polynomial-time computable function MATH that inverts MATH. We could then decide MATH in polynomial time as follows: Given any input string MATH, to decide if MATH, compute MATH and accept MATH if and only if MATH is defined and is equal to MATH, where MATH. Therefore, we conclude that MATH must be one-way. CASE: Suppose that there is some polynomial-time computable function MATH such that for all strings MATH, and for all MATH, if MATH for some MATH, then MATH is defined and MATH. We could then decide MATH in polynomial time as follows: Given any input string MATH, to decide if MATH, compute MATH and accept MATH if and only if MATH is defined and is equal to MATH, where MATH. By an analogous argument, if we assume that there is some function MATH such that for all strings MATH in the image of MATH, and for all MATH, if MATH for some MATH, then MATH is defined and MATH, then we arrive at the same contradiction. We conclude that MATH is a strong, total, MATH-to-one, associative, one-way function.
cs/0010005
If MATH UP MATH P, then MATH is accepted by a nondeterministic NAME machine that runs in polynomial time and has, at most, one accepting path. Taking MATH, by REF there exists a MATH-to-one AOWF.
cs/0010005
For the ``only if" direction, suppose that MATH is a language accepted by a nondeterministic NAME machine that runs in polynomial time and, on input MATH, has at most MATH accepting paths (where MATH is a polynomial). We can easily find another polynomial MATH that is nondecreasing and greater than or equal to MATH. By REF, there exists a MATH-to-one strong, total AOWF. For the ``if" direction, if there exists a MATH-to-one, strong, total AOWF MATH, then there exists a REF-ary MATH-to-one one-way function (just compose MATH with the inverse of a standard pairing function). CITE proves that NAME MATH P if there exists a REF MATH-to-one one-way function, therefore NAME MATH P.
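The composition with the inverse of a standard pairing function mentioned in the ``if'' direction can be sketched minimally. The fragment below (illustrative, with names of our own choosing) uses the Cantor pairing function over the natural numbers to turn a 2-ary function into a 1-ary one:

```python
# Cantor pairing: pair(x, y) = (x + y)(x + y + 1)/2 + y, a bijection
# between pairs of naturals and naturals.
def pair(x, y):
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z):
    """Inverse of the Cantor pairing function."""
    w = int(((8 * z + 1) ** 0.5 - 1) // 2)   # w = x + y
    t = w * (w + 1) // 2
    y = z - t
    return w - y, y

def lift(sigma):
    """1-ary version of the 2-ary function sigma: compose with unpair."""
    return lambda z: sigma(*unpair(z))

# Usage: lifting addition through the pairing bijection.
add = lift(lambda a, b: a + b)
assert add(pair(3, 4)) == 7
```

Since the pairing function and its inverse are polynomial-time computable, the lifted function inherits the relevant complexity properties of the original 2-ary function, which is what the cited argument needs.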
cs/0010005
Let MATH be an associative function in MATH. We will prove the above lemma by induction over MATH. First, assume that MATH. Clearly, MATH satisfies the conditions of the lemma. Next, suppose that MATH. By the induction hypothesis, there exists MATH such that MATH satisfies one of REF or MATH from REF, that MATH, and that MATH. Assume that, for MATH, no MATH exist with the above properties. Assume, by the induction hypothesis, and without loss of generality, that MATH satisfies REF from REF (the argument in the case that REF is satisfied is analogous). By assumption and by the induction hypothesis, the cardinality of the set MATH is equal to MATH, where MATH. We choose the set MATH subject to the following constraints: CASE: MATH, CASE: MATH, CASE: MATH (the third constraint means that the elements of MATH are the shortest possible strings that will produce the results desired below). Clearly, such a MATH exists. It follows from the proof of REF that for at least one MATH, the string MATH satisfies REF or REF. Also, if MATH, then MATH will be one of the shortest MATH strings in MATH. Thus MATH. But since, by the induction hypothesis, MATH, MATH satisfies condition MATH above.
cs/0010005
Suppose that MATH satisfies REF . We can write REF equivalently as MATH . We will use induction over MATH to prove that MATH satisfies the conditions of the lemma. Suppose that MATH. It follows immediately that, for all MATH, MATH. Next, suppose that MATH. By associativity, MATH for our choice of MATH. Now, MATH . Suppose that MATH. Let MATH be a natural number satisfying MATH. By the induction hypothesis, MATH . By associativity, MATH (to see why MATH, consider that MATH).
cs/0010005
Suppose that MATH is a total, associative function that satisfies REF. By REF, there exists MATH, where MATH, such that for all MATH where MATH, and all MATH, MATH. We will prove, by contradiction, that MATH is not MATH-to-one, where MATH inverts MATH, defined as MATH. Assume that, for all MATH, MATH is MATH-to-one. Let MATH. By assumption, MATH. Suppose that MATH. Choose MATH such that MATH satisfies REF. Let MATH. By REF, for some MATH, there exists MATH such that CASE: MATH, CASE: MATH. Let MATH. By REF above, MATH. Since MATH, we have MATH. By REF (and because MATH), MATH. By REF above, MATH; therefore MATH. Now, MATH, which, since MATH and MATH, gives MATH. Since MATH is nondecreasing, MATH; thus, for MATH and MATH, and for all MATH, there exists MATH such that MATH. But this contradicts our assumption that MATH is MATH-to-one.
cs/0010007
Note that MATH is usually not accounted for in the I/O model, but we will keep track of the internal memory computation done in MATH in our emulation. The idea behind the emulation is as follows. We will mimic the behavior of the I/O algorithm MATH in the cache model, using an array NAME of MATH blocks to play the role of the fast memory. We will view the main memory in the cache model as an array Mem of MATH-element blocks. Although NAME is also part of the memory, we are using different notations to make their roles explicit in this proof. Likewise, we will view the cache as an array of sets and denote the MATH-th set by MATH. As discussed above, we do not have explicit control over the contents of the cache locations. However, we can control the memory access pattern through a level of indirection so as to maintain a REF correspondence between NAME and the cache. We assume that NAME maps block MATH of NAME to cache set MATH for MATH. We divide the I/O algorithm into rounds, where in each round, the I/O algorithm MATH transfers a block between the slow memory and the fast memory and (possibly) does some computations. The cache algorithm MATH transfers the same blocks between NAME and NAME and then does the identical computations in NAME. REF formally describes the procedure. Note that the MATH elements must be explicitly copied in the cache model. It should be clear that the final outcome of REF is the same as that of algorithm MATH. The more interesting issue is the cost of the emulation. A block of size MATH is transferred into cache if its image does not exist in the cache at the time of reference. The invariant that we try to maintain at the end of each round is that there is a REF correspondence between NAME and NAME. This will ensure that all the MATH operations are done within the cache at minimal cost. Assume that we have maintained the above invariant at the end of round MATH. In round MATH, we transfer block MATH into MATH.
Accessing the memory block MATH will displace the existing block in cache set MATH, where MATH. From the invariant, we know that the block displaced from MATH is MATH, which must be restored to cache to restore the invariant. We can bring it back by a single memory reference and charge this to the round MATH itself, which is MATH. (Actually it will be brought back during the subsequent reference, so the previous step is only to simplify the accounting.) The cost of copying MATH to MATH is MATH assuming that MATH and MATH are not mapped to the same cache set (MATH). Otherwise it will cause alternate cache misses (thrashing) of the blocks MATH and MATH leading to MATH steps for copying. This can be prevented by transferring through an intermediate memory block MATH such that MATH. Having two such intermediate buffers that map to distinct cache sets would suffice in all cases. So, we first transfer MATH to MATH followed by MATH to MATH. The first copying has cost MATH since both blocks must be fetched from main memory. The second transfer is between blocks, one of which is present in the cache, so it has cost MATH. To this we must also add cost MATH for restoring the block of NAME that was mapped to the same cache set as MATH. So, the total cost of the safe method is MATH. The internal processing remains identical. If MATH denotes the internal processing cost of REF, the total cost of the emulation is at most MATH.
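The safe copying method can be illustrated concretely. In the sketch below (block numbers and set count are hypothetical), a direct-mapped cache with `S` sets maps block `b` to set `b % S`; a copy between two blocks that share a set would thrash, so it is routed through an intermediate block chosen to map to a different set, as in the argument above:

```python
# Illustrative direct-mapped cache with S sets; block b maps to set b % S.
S = 8  # number of cache sets (assumption for the example)

def cache_set(block):
    return block % S

def copy_blocks(src, dst, temps):
    """Return the copy route: direct, or via an intermediate block."""
    if cache_set(src) != cache_set(dst):
        return [(src, dst)]              # single copy, no thrashing
    # src and dst collide: route through a temp whose set differs,
    # so each of the two copies is between non-conflicting blocks.
    t = next(t for t in temps if cache_set(t) != cache_set(src))
    return [(src, t), (t, dst)]

# Blocks 3 and 11 collide (3 % 8 == 11 % 8); temp 16 maps to set 0,
# so routing through it avoids alternating misses.
assert copy_blocks(3, 5, [16, 17]) == [(3, 5)]
assert copy_blocks(3, 11, [16, 17]) == [(3, 16), (16, 11)]
```

Keeping two candidate temporary blocks that map to distinct cache sets guarantees the `next(...)` search always succeeds, matching the observation that two intermediate buffers suffice in all cases.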
cs/0010007
Any lower bound on the number of block transfers in MATH carries over to MATH. Since the lower bound is the maximum of the lower bound on the number of comparisons and the bound in REF, the theorem follows by dividing the sum of the two terms by REF.
cs/0010007
The MATH-way mergesort algorithm described in CITE has an I/O complexity of MATH. The processing time involves maintaining a heap of size MATH, at a cost of MATH per output element. For MATH elements, the number of phases is MATH, so the total processing time is MATH. From REF and REF, the cost of this algorithm in the cache model is MATH. Optimality follows from REF.
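The heap-based merging underlying the multiway mergesort can be sketched as follows: a heap over the leading elements of the runs yields each output element at a cost logarithmic in the merge degree.

```python
import heapq

# Minimal k-way merge: the heap holds one (value, run, position) triple
# per non-exhausted run, so each output element costs O(log k).
def kway_merge(runs):
    heap = [(run[0], i, 0) for i, run in enumerate(runs) if run]
    heapq.heapify(heap)
    out = []
    while heap:
        val, i, j = heapq.heappop(heap)
        out.append(val)
        if j + 1 < len(runs[i]):
            heapq.heappush(heap, (runs[i][j + 1], i, j + 1))
    return out

assert kway_merge([[1, 4, 7], [2, 5], [3, 6, 8]]) == [1, 2, 3, 4, 5, 6, 7, 8]
```

This is only an in-memory illustration of the per-element heap cost; the I/O behavior analysed in the proof additionally depends on how the runs are blocked in slow memory.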
cs/0010007
See REF.
cs/0010007
The probability of conflict misses is MATH when MATH is MATH. Therefore, the expected total number of conflict misses is MATH for MATH elements. The I/O mergesort uses MATH-way merging at each of the MATH levels; hence the second part of the theorem follows.
cs/0010007
If MATH and MATH map to the same cache set in MATH cache then their MATH-th level memory block numbers (to be denoted by MATH and MATH) differ by a multiple of MATH. Let MATH. Since MATH (both are powers of two), MATH where MATH. Let MATH be the corresponding sub-blocks of MATH and MATH at the MATH-th level. Then their block numbers MATH differ by MATH, that is, a multiple of MATH as MATH. Note that blocks are aligned across different levels of cache. Therefore MATH and MATH also collide in MATH.
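The collision argument can be checked on concrete numbers. The sketch below (block sizes and set counts are illustrative, all powers of two, with blocks aligned across levels) tests whether the blocks containing two addresses map to the same cache set at a given level:

```python
# Two addresses collide at a level when their block numbers agree
# modulo the number of cache sets at that level.
def colliding(addr1, addr2, block_size, num_sets):
    """Do the blocks containing addr1 and addr2 map to the same set?"""
    return (addr1 // block_size) % num_sets == (addr2 // block_size) % num_sets

# Level i: 64-byte blocks, 256 sets; level i-1: 32-byte blocks, 64 sets.
# Addresses differing by 64 * 256 collide at level i, and since
# 64 * 256 is a multiple of 32 * 64, they collide at level i-1 too.
a, b = 0, 64 * 256
assert colliding(a, b, 64, 256)
assert colliding(a, b, 32, 64)
```

The example reflects the divisibility argument in the proof: with power-of-two parameters, the collision period of the smaller cache divides that of the larger one, so a collision propagates downward.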
cs/0010007
Let the set of blocks of size MATH be MATH (we are assuming that the blocks are aligned). Let the target block in the contiguous area for each block MATH be in the corresponding set MATH, where each block MATH is also aligned with a cache line in the MATH-cache. Let block MATH map to MATH, MATH, where MATH denotes the set of cache lines in the MATH-cache. (Since MATH is of size MATH, it will occupy several blocks in lower levels of cache.) Let the MATH block map to set MATH of the MATH-cache. Let the target block MATH map to set MATH. In the worst case, MATH is equal to MATH. Thus in this case the line MATH has to be moved to a temporary block, say MATH (mapped to MATH), and then moved back to MATH. We choose MATH such that MATH and MATH do not conflict and also MATH and MATH do not conflict. Such a choice of MATH is always possible because our temporary storage area MATH of size MATH has at least MATH lines of MATH-cache (MATH and MATH will take up two blocks of MATH-cache, thus leaving at least one block free to be used as temporary storage). This is why we have the assumption that MATH. That is, by dividing the MATH-cache into MATH zones, there is always a zone free for MATH. For convenience of analysis, we maintain the invariant that MATH is always in MATH-cache. By application of the previous corollary to our choice of MATH (such that MATH), we also have MATH for all MATH. Thus we can move MATH to MATH and MATH to MATH without any conflict misses. The number of cache misses involved is three for each level: one for getting the MATH block, one for writing the MATH block, and one to maintain the invariant, since we have to touch the line displaced by MATH. Thus we get a factor of MATH, and the cost of this process is MATH, where MATH is the amount of data moved.
cs/0010007
The proof of NAME and NAME can be modified to disregard block transfers that merely rearrange data in the external memory. Then it can be applied separately to each cache level, noting that the data transfers in the higher levels do not contribute to any given level.
cs/0010007
We perform a MATH-way mergesort using the variation proposed by CITE in the context of parallel disk I/Os. The main idea is to shift each sorted stream cyclically by a random amount MATH for the MATH-th stream. If MATH, then the leading element is in any of the cache sets with equal likelihood. Like CITE, we divide the merging into phases where a phase outputs MATH elements, where MATH is the merge degree. In the previous section we counted the number of conflict misses for the input streams, since we could exploit symmetry based on the random input. It is difficult to extend the previous arguments to a worst case input. However, it can be shown easily that if MATH (where MATH is the number of cache sets), the expected number of conflict misses is MATH in each phase. So the total expected number of cache misses is MATH in the level MATH cache for all MATH. The cost of writing a block of size MATH from level MATH is spread across several levels. The cost of transferring MATH blocks of size MATH from level MATH is MATH. Amortizing this cost over MATH transfers gives us the required result. Recall that MATH block transfers suffice for MATH-way mergesort.
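The effect of the random cyclic shifts can be sketched as follows (all parameters illustrative, and the function names are ours): each stream's storage is rotated by a random number of blocks, so its leading block lands in a uniformly random cache set, which is what makes collisions among the leading blocks of the merged streams rare.

```python
import random

# Each stream of `blocks_per_stream` blocks is rotated by a random
# shift r_i; the cache set of its leading block is then r_i mod S,
# uniform over the sets when the set count divides the stream length.
def leading_sets(num_streams, blocks_per_stream, num_sets, rng):
    sets = []
    for _ in range(num_streams):
        shift = rng.randrange(blocks_per_stream)  # random cyclic shift r_i
        sets.append(shift % num_sets)             # set of the leading block
    return sets

rng = random.Random(0)
sets = leading_sets(num_streams=4, blocks_per_stream=64, num_sets=8, rng=rng)
assert all(0 <= s < 8 for s in sets)
```

With the leading blocks spread uniformly over the sets, a birthday-style calculation bounds the expected number of conflicts per phase, which is the quantity the proof amortizes over the merge.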
cs/0010007
The two input buffers and the output buffer, even if they map to the same cache set, can reside simultaneously in the cache. Since at any stage only one REF-merger is active, there will be no conflict misses at all, and the cache misses will only be in the form of capacity or compulsory misses.
cs/0010007
If MATH and MATH are such that MATH, then the total number of conflict misses is MATH. Note that the condition is satisfied for MATH for any fixed MATH, which is similar to the tall-cache assumption made by NAME. The set-associative case is proved by REF.
cs/0010007
We will split up the summation of REF into two parts, namely, MATH and MATH. One can obtain better approximations by refining the partitions, but our objective here is to demonstrate the existence of MATH and MATH and not necessarily obtain the best values. MATH. The first term can be upper bounded by MATH, which is MATH. The second term can be bounded using REF with MATH. MATH. The first term of the previous equation is less than MATH, and the second term can be bounded by MATH for sufficiently large MATH (MATH suffices). This can be bounded by MATH, so REF can be bounded by MATH. Adding this to the first term of REF, we obtain an upper bound of MATH for MATH. Subtracting this from REF gives us MATH, that is, MATH.
cs/0010008
The proof is based on the characterisation of polynomial-space computable functions by means of ramified recurrence, reported in CITE. We shall represent ramified functions by MATH-programs. Firstly, let us recall briefly how ramified functions are specified. Let MATH be the set of constructors, and suppose that each constructor is of arity MATH or MATH. Let MATH be copies of the set MATH. We shall say that MATH is (a copy) of tier MATH. A ramified function MATH of arity MATH must satisfy the following condition: the domain of a ramified function MATH of arity MATH is MATH, and the range is MATH, where the output tier is MATH and MATH. The class of ramified functions is generated from constructors in MATH, for all tiers MATH, and is closed under composition, recursion with parameter substitution, and flat recursion. A ramified function MATH is defined by flat recursion if MATH, where MATH, and MATH and MATH are defined previously. We see that the flat recursion template is ordered by MATH because we can set MATH. A ramified function MATH of output tier MATH is defined by recursion with substitution of parameters if for some MATH where MATH. The functions MATH and MATH are ramified functions. The crucial requirement imposed on the ramified recursion schema is that the output tier MATH of MATH be strictly smaller than the tier MATH of the recursion parameter. It follows that MATH. Now the above template is ordered by MATH by assigning the valencies as follows: MATH. The termination proof is similar to the proof of REF. We conclude that each ramified function is computable by a MATH-program, and so, following CITE, each polynomial-space computable function is represented by a MATH-program.
cs/0010008
REF is a consequence of REF, of REF, and of REF. REF is a consequence of REF by observing that for each MATH, we have MATH.
cs/0010008
Given a program MATH, the operational semantics of call by value are provided by a relation MATH, where MATH is a substitution over MATH. The relation MATH is defined as the union of the family MATH defined below: CASE: MATH, if MATH and MATH. CASE: MATH, if MATH is a MATH-ary constant of MATH. CASE: MATH, if MATH. CASE: MATH, if MATH, ..., MATH, and MATH, where MATH is a rule, and MATH and MATH. It is routine to verify that MATH iff MATH, where MATH. The rules of the operational semantics described above form a recursive algorithm that is an interpreter of MATH-programs. Put MATH. The computation of MATH consists of determining MATH such that MATH for some MATH. Actually, MATH is the height of the computation tree. It remains to show that this evaluation procedure runs in space bounded by a polynomial in the sum of the sizes of the inputs. For this, put MATH. By induction on MATH, we can establish that MATH implies MATH. As an immediate consequence of REF, we have MATH. Now, at any stage of the evaluation, the number of variables assigned by a substitution is less than or equal to the maximal arity of a function symbol MATH. Next, the size of the value of each variable is bounded by MATH because of REF. Consequently, the space required to store a substitution is always less than MATH. We conclude that the whole running space is bounded by MATH.
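The recursive interpreter described by the rules above can be sketched minimally (the representation and all names are ours, not the paper's): terms are nested tuples, variables are resolved through the substitution, and arguments are evaluated before a defining rule is applied, as in call by value.

```python
# Terms: a variable is a string; an application is ('f', arg1, ...).
# Rules map a function symbol to a (parameters, body) pair; symbols
# without a rule are treated as constructors/constants.
def evaluate(term, rules, subst):
    if isinstance(term, str):                  # variable: look it up
        return subst[term]
    head, *args = term
    vals = [evaluate(a, rules, subst) for a in args]   # call by value
    if head not in rules:                      # constructor application
        return (head, *vals)
    params, body = rules[head]                 # defining rule for head
    return evaluate(body, rules, dict(zip(params, vals)))

# Usage: id(x) -> x and swap(x, y) -> pair(y, x), over unary constructors.
rules = {'id': (('x',), 'x'),
         'swap': (('x', 'y'), ('pair', 'y', 'x'))}
assert evaluate(('id', ('s', ('z',))), rules, {}) == ('s', ('z',))
assert evaluate(('swap', ('a',), ('b',)), rules, {}) == ('pair', ('b',), ('a',))
```

The space analysis in the proof corresponds to the observation that each substitution built here binds at most arity-many variables, each to a value whose size is bounded by the interpretation.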
cs/0010008
The demonstration of the theorem above is tedious. The theorem is a consequence of REF, whose proof is detailed in the next three subsections. For each ground substitution MATH and rule MATH, we have MATH by REF below. The result follows from the monotonicity of the interpretation, as stated in REF.
cs/0010008
Suppose that MATH is a function symbol of rank MATH. Put MATH. By definition, we have MATH. Thus, MATH, since MATH.
cs/0010008
Straightforward by induction on MATH.
cs/0010008
The proof is by induction on MATH. CASE: Assume MATH. CASE: Suppose that MATH is a constant of MATH. We have MATH, by REF. CASE: Suppose that MATH is a variable of MATH. We have MATH for some MATH satisfying MATH and MATH. By the hypotheses of the lemma, MATH. So, MATH by REF. CASE: Assume MATH. CASE: Suppose that MATH and that MATH. Again, by the hypotheses of the lemma, MATH, where MATH. So, MATH by REF. CASE: Suppose MATH, where MATH is a function symbol of MATH of rank MATH, and that for all MATH, MATH. We have MATH. Lastly, the case when MATH, where MATH is a constructor of MATH, is similar to the previous one, and so we skip it.
cs/0010008
By induction on MATH. CASE: Assume MATH. CASE: Suppose that MATH is the constant MATH. We obtain MATH, by REF and so REF holds. CASE: Suppose that MATH is a variable. We have MATH for some MATH. By the hypotheses of the lemma we have MATH. Therefore MATH. CASE: Assume MATH. Suppose that MATH. The hypotheses of the lemma give MATH. So REF holds. CASE: Suppose that MATH where MATH is a function symbol of MATH of rank MATH. Then, for all MATH, if MATH, we have MATH. REF yields MATH, so we have MATH . Otherwise, MATH and we have MATH. It follows by the induction hypothesis and by monotonicity of MATH that MATH . By definition, MATH. From the above inequalities, we get the following bound on MATH . The case when MATH is similar to the case above.
cs/0010008
The proof goes by induction on MATH. However, all the cases are the same as in REF, except the following one. Suppose that MATH, where MATH is a function symbol of MATH of rank MATH. We have MATH for each position MATH such that MATH. Now, MATH implies MATH, by REF. So we can apply the lemma hypothesis, and we obtain MATH. The inequality is strict because MATH for at least one MATH such that MATH. On the other hand, if MATH, we know that MATH. By REF, MATH. Consequently, MATH. From the two former inequalities, we conclude that MATH.
cs/0010008
The proof is by induction on MATH. We have MATH. Assume that MATH, MATH. We have MATH, and so MATH by the induction hypothesis. Assume that MATH. REF yields that MATH. By REF, MATH. Therefore, MATH.