| paper | proof |
|---|---|
quant-ph/9905042 | The proof proceeds in two stages. First, we show that every NAME for MATH is, in fact, a maximal MATH-beable algebra for MATH ( REF below). Second, we show that every MATH-beable algebra for MATH is contained in some NAME for MATH ( REF below). CASE: Suppose that MATH is an NAME for MATH. That is MATH, where MATH is a maximal abelian subalgebra of MATH, and MATH. (MATH-Priv) Since MATH leaves MATH invariant and MATH, it follows that MATH. CASE: Let MATH. By construction of MATH, MATH contains MATH and is invariant under each element of MATH. However, MATH is the smallest subspace of MATH that contains MATH and is invariant under each element in MATH. Hence, MATH. Conversely, MATH is invariant under each element of MATH since MATH is contained in MATH. Thus, MATH and MATH. But since MATH is an abelian subalgebra of MATH, it is an abelian subalgebra of MATH. By REF , MATH is beable for MATH. (Def) We show first that MATH is in the center of MATH. Since MATH is a self-adjoint set, MATH. Let MATH, and let MATH be a generator of MATH. (That is, MATH and MATH.) Since MATH commutes with MATH, MATH leaves MATH invariant. Further, MATH since MATH. Thus, MATH, and we may conclude (by linearity and continuity of MATH) that MATH. Since the same argument applies to MATH (which is also contained in MATH), MATH reduces MATH; and thus, MATH. On the other hand, MATH is clearly contained in MATH since MATH is invariant under the action of MATH and under the action of the self-adjoint set MATH. Let MATH. Since MATH, it follows that MATH. Now let MATH. Then, MATH where MATH. Since MATH reduces MATH, the spatial isomorphism MATH induced by MATH factors into MATH, the spatial automorphism on MATH induced by MATH, and MATH, the spatial automorphism on MATH induced by MATH. Hence, MATH . Trivially, MATH. Furthermore, since MATH is a subset of MATH, it follows that MATH is the identity automorphism on MATH. 
To see this, note that, MATH where the final equality follows from CITE and the fact that MATH. Thus, MATH commutes with every operator in MATH, and MATH is the identity automorphism on MATH. It then follows that MATH which is obviously contained in MATH. Since MATH was an arbitrary element of MATH, it follows that MATH. Moreover, this map is onto, for given MATH, MATH . Therefore, MATH. (Maximality) To see that MATH is a maximal MATH-beable algebra for MATH, it suffices to show that REF every MATH-beable algebra for MATH is contained in an NAME for MATH, and REF if MATH and MATH are distinct NAME for MATH, then MATH. We establish REF below. For REF it suffices to note that if MATH and MATH are distinct NAME for MATH, then MATH and MATH, where MATH and MATH are distinct maximal abelian subalgebras of MATH (each containing MATH). Thus, MATH and MATH. CASE: Suppose that MATH is MATH-beable for MATH. Since MATH, it will suffice to show that MATH is contained in an NAME for MATH because, by REF , MATH is MATH-beable for MATH. Thus, we may assume that MATH is a NAME algebra. Once again, let MATH. Obviously, MATH reduces MATH, and MATH. Since MATH is a NAME algebra MATH is a NAME algebra acting on MATH CITE. Likewise, MATH is a NAME algebra acting on MATH. Let MATH. Then we have MATH, where each summand is a NAME algebra. Since MATH is beable for MATH, MATH is in fact an abelian subalgebra of MATH REF . We show that MATH and that MATH. (Clearly, once MATH has been established, we will automatically have MATH, since MATH and MATH leaves MATH invariant.) MATH is clearly a subspace of MATH since MATH. In order to show that MATH, let MATH be a projection in MATH. Since MATH reduces MATH, MATH. Choose MATH such that MATH. Let MATH, and let MATH. Since MATH, MATH leaves MATH invariant, and (by construction) MATH leaves MATH invariant. Thus, MATH leaves MATH invariant, and MATH. Furthermore, MATH, and MATH. Thus, MATH. 
Since MATH, it follows by (Def) that MATH; and since MATH reduces MATH, it also follows that MATH. In particular, both MATH and MATH are in the abelian algebra MATH. Since MATH and MATH commute, there are mutually orthogonal projections MATH on MATH such that MATH, MATH. To see that MATH, let MATH. Then, MATH . Thus, MATH. But, by REF , MATH. Hence, MATH and MATH. Now we may repeat a similar argument with MATH replaced by MATH, and MATH. (The only change to the argument is that, throughout, MATH must be interchanged with MATH, since MATH is interchanged with MATH). It follows that MATH as well, and thus MATH. We chose MATH, however, so that if MATH, then MATH. Indeed, a routine calculation shows that MATH. Furthermore, MATH since MATH. Thus, MATH, for all projections MATH. Since MATH is (by hypothesis) a NAME algebra, each MATH is a norm-limit of linear combinations of projections in MATH. Thus, MATH, for all MATH and MATH reduces MATH. Since MATH, and since MATH is the smallest subspace that contains MATH and reduces MATH, it follows that MATH. We have now shown that MATH and that, accordingly, MATH is an abelian NAME subalgebra of MATH. All that remains is to show that MATH. Since MATH is a NAME algebra, it will suffice to show that for any projection MATH, MATH. Let MATH. Then, by (Def), MATH. Since MATH is reduced by MATH, MATH. In particular, MATH. Since MATH is abelian, MATH. But MATH was an arbitrary unitary operator in MATH. Applying REF , with MATH, we may conclude that MATH. Moreover, MATH, since MATH. Thus, MATH and MATH is contained in an NAME for MATH. |
quant-ph/9905042 | CASE: Since elements of MATH pairwise commute, and MATH, it follows that MATH is abelian, as is MATH. Therefore, MATH is itself the unique maximal MATH-beable algebra for MATH. CASE: Recall that MATH. Thus, in this case, MATH. Since MATH is abelian and MATH is a cyclic vector for MATH, it follows that MATH is maximal abelian as a subalgebra of MATH CITE. Accordingly, MATH is the unique maximal abelian subalgebra MATH of MATH with the property that MATH. |
quant-ph/9905042 | CASE: Suppose that MATH and that MATH. Using REF for the MATH-algebra MATH, it follows that MATH leaves MATH invariant. Since MATH is self-adjoint, MATH also leaves MATH invariant and MATH. Since MATH, the commutant of MATH relative to MATH is MATH CITE. Clearly, then, MATH is in the commutant of MATH. However, since MATH is maximal abelian, MATH. Therefore, MATH. REF is trivial, since each element in MATH leaves MATH invariant. |
quant-ph/9905042 | Clearly, MATH is a MATH-algebra, since it is the inverse image under MATH of a MATH-algebra. Furthermore, (MATH-Priv) follows by the construction of MATH. CASE: Let MATH. By REF , it will be sufficient to show that MATH is abelian. Clearly, MATH is the smallest subspace (of MATH) that contains MATH and that is invariant under MATH. However, MATH contains MATH by construction, and MATH is invariant under MATH (since MATH maps MATH into MATH). Thus MATH. Conversely, MATH leaves MATH invariant, since MATH. Therefore MATH and MATH. The conclusion then follows by noting that MATH is abelian (since MATH is a mutually commuting family of operators). (Def) Let MATH be a unitary element of MATH such that MATH and MATH. In this case (that is, where MATH is pure), we can actually prove the stronger result that MATH, from which it follows immediately that MATH. We show first that MATH is an eigenvector of MATH. For this, let MATH and let MATH. Since MATH is pure, the representation MATH of MATH is irreducible CITE. Thus, MATH. In particular, there is a net MATH such that MATH. However, MATH, for all MATH. Since MATH and MATH are NAME, it follows that MATH . Hence, MATH, and by the NAME inequality, MATH for some MATH, which is what we wanted to show. Now, since MATH is an eigenvector of MATH, it follows that MATH. Moreover, since MATH, it follows that MATH. Thus, by REF , MATH and MATH. |
quant-ph/9905042 | Fix MATH and let MATH be the characteristic function of the clopen set MATH . By the construction of MATH, we have MATH where MATH is the spectral-measure for MATH and MATH (compare REF ). In other words, MATH, where MATH. Recall that MATH is defined to be the unique closest continuous function to MATH, where MATH if MATH and MATH otherwise. However, MATH, for if MATH, then MATH. Now applying the considerations prior to this lemma (identifying sets with their characteristic functions), we have MATH and MATH. That is, MATH for all MATH. Moreover, since MATH for all MATH, MATH. Since MATH, it follows that MATH. |
quant-ph/9905042 | Let MATH be the set of points in MATH at which MATH is not defined, and suppose that MATH. Since MATH is closed and nowhere dense, there is a net MATH such that MATH. Using the fact that MATH for each MATH (since MATH), and the fact that MATH, it follows that MATH, and thus MATH. A similar argument shows that if MATH, then MATH. |
quant-ph/9905042 | Let MATH. Then, MATH, for some BUC function MATH on MATH. Let MATH, where MATH, and let MATH and MATH be the unique continuous functions on MATH corresponding, respectively, to MATH and MATH. (Clearly, MATH and MATH have identical range and it follows from CITE that MATH.) We now show that MATH. For this, let MATH. CASE: Suppose that MATH. Then MATH and MATH. Therefore, MATH . CASE: Suppose that MATH. Then, by definition, MATH, since MATH and MATH are not defined at MATH. We now show that MATH. CASE: Suppose that MATH. By Case REFa, it will be sufficient to show that MATH and MATH. In order to establish this, note that MATH and MATH are continuous on MATH (since each is the composition of two continuous functions). Moreover, by definition, MATH may not disagree with the continuous function MATH on any open set (in MATH), and MATH may not disagree with the continuous function MATH on any open set (in MATH). Therefore, MATH and MATH. CASE: Suppose that MATH. Since MATH is nowhere dense, there is a net MATH such that MATH. For each MATH, MATH . If we take the limit over MATH of the right-hand side of REF , the first and third terms go to zero since MATH and MATH are continuous. Thus, MATH. If we let MATH and MATH, then it follows from the continuity of MATH that MATH and MATH (see Case REFa). Thus, MATH. Now, using the fact that MATH is continuous at MATH, and MATH, we have MATH and MATH. But this, in conjunction with the fact that MATH is uniformly continuous, entails that MATH. Therefore, MATH. |
quant-ph/9905042 | CASE: Since MATH is a MATH-algebra, it will be sufficient to show that for every unitary element MATH, MATH. Moreover, since MATH commutes with all elements in MATH, the result would follow from REF if we could show that MATH. We proceed to show this. From REF , there is a unitary MATH such that MATH, for all MATH. Since each MATH is dispersion-free, it follows that MATH. Thus, MATH where we have used REF in the second and final equalities. From REF it follows that, in the GNS representation (of MATH) for MATH, MATH. Hence, we may use the fact that MATH is a MATH-homomorphism in combination with the NAME inequality to conclude that MATH, for some MATH. In particular, MATH, as we wished to show. CASE: Since MATH is beable for MATH, it has a dispersion-free state MATH. We show that this entails that MATH for all MATH. In order to see this, note first that MATH, for all MATH, since MATH is uniformly continuous on MATH. Suppose, for reductio ad absurdum, that MATH for some MATH. Since MATH and MATH satisfy the NAME of the CCR, we have MATH for all MATH, and MATH for all MATH. Fix MATH such that MATH for any MATH. Since MATH is dispersion-free on MATH, it follows that MATH. Moreover, MATH and MATH since MATH must assign each unitary operator a value in its spectrum REF . Thus, we have MATH contrary to our assumption that MATH for any MATH. Therefore MATH when MATH. |
quant-ph/9905077 | Let MATH be bases for MATH . MATH . |
quant-ph/9905099 | Notice MATH corresponds to a random choice of a coset state MATH. By basic results in NAME analysis on Abelian groups CITE we have MATH and we may write the coset state as MATH . Therefore for all MATH we have MATH. This means MATH and therefore MATH. For the other inclusion, notice that for MATH with MATH, we have MATH. This means that MATH. But MATH. The lemma follows. |
quant-ph/9905099 | We first prove MATH is an efficient elimination observable. The argument is actually the standard argument that the quantum algorithm efficiently finds a hidden subgroup but phrased in the present terminology. Suppose the hidden subgroup is MATH, that is, MATH, and let MATH. If MATH then MATH, and further, if MATH is strictly contained in MATH, then MATH. Using the formula above we see MATH. If MATH then MATH and thus MATH. We now show optimality and uniqueness. Let MATH. Then note that for any MATH, MATH if and only if MATH. So by the first lemma MATH if and only if MATH. This means that MATH is a refinement of any elimination observable. Unique optimality follows from this. |
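The two rows above follow the standard Fourier-sampling analysis for the abelian hidden subgroup problem, with the concrete group and operators masked. As an illustrative sketch (the group Z_N, the subgroup H = dZ_N, and all identifiers below are our own choices, not the paper's), the Fourier transform of any coset state is supported exactly on the orthogonal subgroup H⊥ = {k : kd ≡ 0 (mod N)}, independently of the coset representative — which is why measurement outcomes eliminate candidate subgroups:

```python
import cmath

def coset_state(N, d, x):
    # Uniform superposition over the coset x + H, where H = {0, d, 2d, ...}
    # is the subgroup dZ_N of Z_N (we assume d divides N).
    H = range(0, N, d)
    amp = 1 / len(H) ** 0.5
    v = [0.0] * N
    for h in H:
        v[(x + h) % N] = amp
    return v

def fourier_support(N, d, x):
    # Indices k with nonzero amplitude after the Fourier transform over Z_N;
    # these always lie in H^perp = {k : k*d = 0 (mod N)}, for every coset x.
    v = coset_state(N, d, x)
    support = []
    for k in range(N):
        a = sum(v[n] * cmath.exp(2j * cmath.pi * k * n / N) for n in range(N))
        if abs(a / N ** 0.5) > 1e-9:
            support.append(k)
    return support

print(fourier_support(12, 3, 0))   # [0, 4, 8], the subgroup 4*Z_12
print(fourier_support(12, 3, 5))   # [0, 4, 8], the same for every coset
```

The support depends only on H, not on x, matching the masked claim that the outcome distribution is determined by the hidden subgroup alone.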
solv-int/9905003 | Let us rewrite the discrete linear system REF as MATH and expand both sides in powers of MATH: MATH . Comparing coefficients of the powers of MATH, we have MATH . For MATH this is satisfied trivially; for MATH we obtain the continuous linear system REF MATH . Let MATH be a solution of REF and hence MATH a solution of the continuous matrix NAME REF . In order for MATH to be a solution of the discrete system REF , and hence MATH a solution of the discrete matrix NAME REF , MATH must also satisfy REF for all MATH. For MATH we have MATH since MATH and MATH. For MATH, REF follows by induction; hence every solution of REF with constant MATH satisfies the difference REF . |
cs/9906007 | By REF , MATH is closed under composition. We prove the lemma by decomposing a given REFdgsm-rla MATH into a series of REF's, together realizing the transduction of MATH. The final REFdgsm performs the required transduction, whereas all the other transductions `preprocess the tape', by adding to the original input the outcome of the various tests of MATH. As we also need this information for the positions containing the end-of-tape markers MATH and MATH, we start with a transduction that maps input MATH to the string MATH, where MATH and MATH are new symbols. Information concerning the end-of-tape positions is added to these new symbols. The other machines may ignore MATH and MATH, and treat MATH and MATH as if they were these end-of-tape markers. For each look-around test MATH of MATH we introduce a REFdgsm MATH that copies the input, while adding to each position the outcome of the test MATH for that position in the original string (ignoring any other additional information a previous transduction added to the string). The machine MATH itself can be seen as the work of three consecutive REFdgsm's. The first one, simulating a finite state automaton recognizing MATH, checks on each position whether the prefix read belongs to MATH. It adds this information to the symbol at that position. The second transducer, processing the input from right to left, simulating a finite state automaton for the mirror image of MATH, adds information concerning the suffix. Note that the input has been reversed in the process. This can be undone by another reversal performed by a third REFdgsm. Once the value of each look-around test of MATH is added to the original input string, obviously the transduction of MATH can be simulated by an ordinary REFdgsm. |
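The two-pass preprocessing in this row — a left-to-right pass for prefix tests and a right-to-left pass simulating an automaton for the mirror image of the suffix language — can be sketched concretely. The example languages and every identifier below are illustrative, not taken from the paper:

```python
def prefix_marks(dfa, symbols):
    # One left-to-right DFA pass: mark, for each position, whether the
    # prefix consumed so far is accepted.
    delta, start, finals = dfa
    q, marks = start, []
    for a in symbols:
        q = delta[q, a]
        marks.append(q in finals)
    return marks

def annotate(w, prefix_dfa, mirror_suffix_dfa):
    # Prefix tests: a single left-to-right pass.
    pre = prefix_marks(prefix_dfa, list(w))
    # Suffix tests: a pass over the reversed input with a DFA for the mirror
    # image of the suffix language, then reverse the marks back.
    suf = prefix_marks(mirror_suffix_dfa, list(reversed(w)))[::-1]
    return list(zip(w, pre, suf))

# Illustrative tests: "prefix has an even number of a's", and "suffix contains
# a b" (a language equal to its own mirror image, keeping the example short).
even_a = ({(0, 'a'): 1, (1, 'a'): 0, (0, 'b'): 0, (1, 'b'): 1}, 0, {0})
has_b = ({(0, 'a'): 0, (0, 'b'): 1, (1, 'a'): 1, (1, 'b'): 1}, 0, {1})
print(annotate("abba", even_a, has_b))
```

Each position of the output carries its symbol plus the outcome of both look-around tests, which is exactly the information the final one-way machine needs.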
cs/9906007 | Let MATH be a deterministic finite automaton accepting MATH. Every path (in the state transition diagram of MATH) from the initial state to a final state passes exactly one transition labelled by a symbol from MATH. For any such transition MATH of MATH let MATH consist of all strings that label a path starting in the initial state of MATH and ending in MATH, and symmetrically, let MATH consist of all strings that label a path from MATH to one of the final states of MATH. Obviously, MATH and MATH are regular, and MATH is the union of the languages MATH taken over all such transitions. Since MATH is deterministic, these languages are easily seen to be disjoint. |
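The decomposition in this row can be checked on a toy instance: every accepted string passes exactly one transition on a marked symbol, and determinism makes the resulting sub-languages disjoint. The DFA below (for the language (aa)*#a* ∪ a(aa)*#(aa)*, with '#' the marked symbol) is a made-up example of ours:

```python
from itertools import product

# States 'e'/'o' track the parity of a's read before '#'; the two '#'-labelled
# transitions lead to the two accepting tails A = a* and B0/B1 = (aa)*.
DELTA = {('e', 'a'): 'o', ('o', 'a'): 'e', ('e', '#'): 'A', ('o', '#'): 'B0',
         ('A', 'a'): 'A', ('B0', 'a'): 'B1', ('B1', 'a'): 'B0'}
START, FINALS, MARKED = 'e', {'A', 'B0'}, {'#'}

def marked_transitions(w):
    # Run the DFA, recording every transition taken on a marked symbol.
    q, used = START, []
    for a in w:
        if (q, a) not in DELTA:
            return None, used
        if a in MARKED:
            used.append((q, a, DELTA[q, a]))
        q = DELTA[q, a]
    return q in FINALS, used

groups = {}
for n in range(7):
    for w in map(''.join, product('a#', repeat=n)):
        ok, used = marked_transitions(w)
        if ok:
            # every accepted string passes exactly one marked transition ...
            assert len(used) == 1
            # ... which determines the (disjoint) sub-language it belongs to
            groups.setdefault(used[0], set()).add(w)
print(sorted(groups))
```

The two groups partition the accepted strings by the unique '#'-transition used, mirroring the union of disjoint languages in the masked statement.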
cs/9906007 | We show how to simulate the instructions of a REFgsm-mso by a REFgsm-rla. Recall that such an instruction is specified as MATH, where MATH is a formula MATH with one free node variable, and the moves MATH are (functional) formulas MATH with two free node variables. Tests: unary node predicates. Consider a test MATH in MATH. It can easily be simulated by regular look-around tests. Identifying MATH with MATH, consider the language MATH, which is regular by REF . As each string of this language contains exactly one symbol with MATH as its second component, it can be written as a finite union of languages MATH, with regular languages MATH, and MATH, see REF . This implies that the test MATH can be simulated by a finite disjunction of the look-around tests MATH, where each MATH is obtained from the corresponding MATH by dropping the second component (REF-part) of the symbols. Of course, this disjunction is computed by testing each of its alternatives consecutively. Moves: binary node predicates. Once the test of an instruction is evaluated, one of its moves is executed, and the output is written. This move is given as a formula MATH, specifying a functional relation between the present position MATH and the next position MATH on the input. Whereas the REFdgsm-mso may `jump' to its next position, independent of the relative positions of MATH and MATH, a REFdgsm-rla can only step to one of the neighbouring positions of the tape, and has to `walk' to the next position when simulating this jump. Before starting the excursion from MATH to MATH the REFdgsm-rla determines the direction (left, right, or stay) by evaluating the tests MATH, MATH, and MATH using the method that we have explained above. Since MATH is functional, at most one of these tests is true. In the sequel we assume that our target position MATH lies to the left of the present position MATH, that is, test MATH is true. The right-case can be treated in an analogous way; the stay-case is trivial. 
Similarly to the case of tests, identify MATH with MATH, and consider MATH. Each string of this language contains exactly one symbol with MATH as its second component, the position of MATH, and it precedes a unique symbol with MATH as its second component, the position of MATH; all other symbols carry MATH. It can be written as a finite disjoint union of languages MATH, with regular languages MATH and MATH, by applying REF twice. Our moves are functional, meaning that there is a unique position MATH that satisfies the predicate MATH with MATH the present position. Still before starting the excursion from MATH to the new position MATH, the REFdgsm-rla determines which language in the union above describes this position by performing the regular look-around tests MATH, where each MATH is obtained from the corresponding MATH by deleting the second component (the MATH -part) of the symbols. The REFdgsm-rla now moves to the left. In each step it checks whether the segment of the input string between the present position (candidate MATH) and the starting position (corresponding to MATH) belongs to the regular language MATH. This can be done by simulating a finite automaton for (the mirror image of) MATH in the finite state control. Each time this segment belongs to MATH, it performs the rla-test MATH, to verify the requirement on the initial segment of the input. Once this last test is satisfied, it has found the position MATH and writes the output string. |
cs/9906007 | Rather direct, using NAME 's result REF . We consider one implication (from right to left) only. Let the string language MATH be defined by the closed formula MATH of MATH, as in the statement of the lemma (using the edge representation). We show that there exists a formula defining MATH using the node representation. Consider the mso definable graph transduction ndREFed mapping MATH to MATH for all non-empty MATH, compare REF . The graph language MATH is mso definable, say by an mso formula MATH of MATH. It defines the string language MATH. If MATH, then we are done; otherwise, consider MATH. |
cs/9906007 | Consider MATH fixed by the copy set MATH and formulas MATH, MATH, and MATH. We may assume that MATH and MATH are disjoint. The domain formula for the union is the disjunction MATH; its copy set is MATH. The node formulas and the edge formulas for both transductions are also taken together (by disjunction), but we ensure that they are applicable only for the appropriate input by changing MATH to MATH, and similarly for the edge formulas. We add MATH for MATH, MATH, MATH. |
cs/9906007 | CASE: From left to right; assume MATH, that is, MATH. We split MATH into the mappings MATH, and MATH. As MATH, also MATH is mso definable, by REF . By REF , the domain of MATH is mso definable as it is the inverse image of MATH for the transduction MATH. Now it is easily seen that MATH using for MATH the formula defining the domain of MATH, MATH, MATH, and MATH. The union MATH is mso definable by REF . Hence, MATH. We have already discussed that the image of MATH under MATH must be MATH (provided MATH belongs to the domain of MATH) as MATH has no nodes to copy. CASE: From right to left; assume MATH, that is, MATH. Then also MATH is mso definable, where MATH. We are done when MATH does not belong to the domain of MATH. Otherwise, as the transduction MATH, mapping the empty graph to itself, is easily seen to be mso definable, MATH follows by REF . |
cs/9906007 | Let MATH be a REFdgsm realizing the string transduction MATH, and consider a fixed input string MATH, MATH for MATH. Additionally we use MATH and MATH. We can visualize the `computation space' of MATH on MATH by constructing a graph MATH that has as its nodes the pairs MATH, where MATH is a state of MATH, and MATH is one of the positions of the input tape carrying MATH. The edges of MATH are chosen in accordance with the instruction set MATH of MATH: for each instruction MATH in MATH there is an edge from MATH to MATH if MATH equals MATH, and an edge from MATH to MATH otherwise. The edge is labelled by the output symbol MATH. In this context we will consider MATH as a labelling symbol (rather than as a string of length zero) in order to avoid notational complications. In REF we illustrate the computation space for the REFdgsm from REF on input MATH (with output MATH omitted, as usual). The computation on that input is represented as a bold path (compare REF ). As MATH is deterministic, every node of MATH has at most one outgoing edge. The output of the computation of MATH on MATH can then be read from MATH by starting in node MATH, representing MATH in its initial configuration, and following the path along the outgoing edges. The computation is successful if it ends in a final configuration MATH. We will mark the initial and final nodes of MATH by special labels MATH and MATH, the other nodes remain unlabelled (represented in our specification by MATH). Note that the graph MATH does not only represent the computation of MATH on MATH starting in the initial state and MATH-th position of the tape (marked by MATH) but rather all possible computations that result from placing MATH on an arbitrary position of the tape, in an arbitrary state. We construct a series of mso graph transductions, the composition of which maps MATH to MATH for each MATH. As grMSO is closed under composition REF , this proves the lemma. 
The first graph transduction MATH maps MATH to MATH. The second graph transduction MATH selects the path in MATH corresponding to the successful computation of MATH on MATH (if it exists) by keeping only those nodes that are reachable from the initial configuration and lead to a final configuration. The last graph transduction MATH removes edges labelled by MATH (used as a symbol representing the empty string) while contracting paths consisting of these edges. Step one: constructing MATH. Let MATH be the graph transduction that constructs MATH. We follow the general description above, and formalize MATH as an mso transduction. The domain formula of the transduction specifies that the graph is of the form MATH for some string MATH. The copy set equals MATH, where MATH is the set of states of MATH. The node MATH of MATH is identified with MATH, the MATH-copy of the node MATH of MATH corresponding to the MATH-th position of the input tape, labelled with MATH. The labels of the edges are chosen according to the instructions of MATH. For MATH, MATH, and MATH let MATH be the following disjunction, where the unspecified `dots' range over their respective components: MATH . Then, MATH . All copies of the nodes are present, with special labels for initial and final nodes: CASE: MATH, when MATH, and MATH, otherwise. CASE: MATH, when MATH, and MATH, otherwise. CASE: MATH. Note that we assume that MATH, in order to avoid both MATH and MATH being defined for the initial node. This is the case when MATH accepts any input in its initial state without executing instructions. We satisfy the assumption by adding additional instructions to a new final state. Step two: selecting the computation path. The transduction MATH removes nodes that are not on the path from the node labelled by MATH to a node labelled by MATH (if it exists). Nodes that are not on such a path do not correspond to the configurations that are part of the (successful) computation of MATH on MATH. 
Note that if such a path exists, then it is unique. Recall that the predicate MATH specifies the existence of a path from MATH to MATH. By MATH we restrict ourselves below to a path containing only edges with label MATH. Formally, CASE: MATH, CASE: MATH, CASE: MATH CASE: and, for MATH, MATH. Step three: contracting MATH-paths. The last of the three graph transductions, MATH, deletes all nodes that have an outgoing MATH-labelled edge, and contracts each MATH-path to its last node. This can be specified with the trivial copy set MATH, node formula MATH, and edge formulas MATH, for MATH. |
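The three transductions of this row can be mimicked operationally: build the configuration graph of a two-way machine on a fixed input, follow the unique outgoing path from the initial configuration, and contract the silently labelled edges. The toy machine below (which reverses its input between end markers '<' and '>') is our own illustration, not the paper's:

```python
def computation_graph(instr, tape):
    # Nodes are configurations (state, position); each node has at most one
    # outgoing edge, labelled with the output written ('' plays the role of
    # the empty-string label).
    edges = {}
    for (q, a), (p, move, out) in instr.items():
        for i, b in enumerate(tape):
            if b == a and 0 <= i + move < len(tape):
                edges[(q, i)] = ((p, i + move), out)
    return edges

def output_of(edges, init, finals):
    # Follow the unique path from the initial configuration; joining the
    # edge labels implicitly contracts the ''-labelled edges.
    node, out, seen = init, [], set()
    while node not in seen:
        seen.add(node)
        if node[0] in finals:
            return ''.join(out)
        if node not in edges:
            return None          # stuck configuration: no successful run
        node, label = edges[node]
        out.append(label)
    return None                  # silent loop: the machine never halts

# State 'r' walks to the right end marker, state 'l' walks back emitting
# each symbol, 'f' is final.
instr = {('r', '<'): ('r', 1, ''), ('r', '>'): ('l', -1, ''),
         ('l', '<'): ('f', 0, '')}
for a in 'ab':
    instr[('r', a)] = ('r', 1, '')
    instr[('l', a)] = ('l', -1, a)

print(output_of(computation_graph(instr, '<ab>'), ('r', 0), {'f'}))   # ba
```

Because the machine is deterministic, every node has at most one outgoing edge, so "selecting the computation path" is just following successors from the initial configuration.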
cs/9906007 | Starting with the mso transduction MATH we build a REFdgsm-mso MATH for MATH that closely follows the mso specification of MATH. Assume MATH is specified by domain formula MATH, copy set MATH, node formulas MATH, MATH, and edge formulas MATH, MATH, MATH. The state set of MATH is (in principle) equal to the copy set MATH: when MATH is true for a pair MATH of nodes, then MATH, visiting the position corresponding to MATH of the input tape in state MATH, may move to the position corresponding to MATH changing to state MATH, while writing MATH to the output tape. Note that, for each input graph MATH, MATH defines a graph representation of a string, hence at most one of these formulas defines an edge in a given position (node) and a given state (copy). However, in general the formula MATH is only functional as far as graphs MATH satisfying the domain formula MATH are concerned, and for these graphs only when restricted to nodes for which the respective MATH and MATH copies are defined. Since our formal definition of REF demands functional moves, we consider the formulas MATH. The instructions of MATH are of the form MATH - but this is REF-tuple notation, and has to be replaced by REF where for a fixed state MATH each of the alternatives MATH has to be tested consecutively, as explained in the paragraph about REFgsm in REF (using additional states). If none of the edge formulas gives a positive result, the present node has no successor, which indicates the last position of the output string. In that case, the series of consecutive tests ends up in the final state MATH. Initially MATH has to find the unique node of the output graph that has no incoming edges. We solve this by adding the new initial state MATH from which this node is found by testing all possibilities, but again in a consecutive fashion, for MATH: MATH where MATH abbreviates MATH. |
cs/9906007 | The identity on MATH is easily performed by an REFdgsm. Hence MATH, by REF . As for the inverse MATH, note that mapping MATH to MATH is mso definable because MATH has at least one node, which may be copied to provide the additional nodes that are connected by edges labelled by MATH and MATH to the original graph. We now compose this mapping with REF, which is mso definable by REF . |
cs/9906007 | By our previous lemma, the transduction MATH from MATH to MATH, for MATH, is an element of MATH, as is its inverse MATH. By the equalities MATH, and MATH, and the closure of grMSO under composition REF , we have MATH iff (by definition) MATH iff MATH. The result now follows from REF demonstrating MATH iff MATH. |
cs/9906007 | Assume MATH is realized by a (nondeterministic) REFgsm MATH with MATH states. Choose MATH such that MATH. Consider the behaviour of MATH on input MATH. The input tape, containing MATH, has MATH positions. Hence, MATH has MATH configurations on this input. Consider the configuration assumed by MATH when it has just written the symbol MATH on its output tape. As there are MATH possible output strings MATH for MATH, there exist two strings MATH and MATH for which this configuration is the same. This means that we can switch the computation of MATH halfway to the computation of MATH, obtaining a computation for MATH with MATH, which is not an element of MATH. |
cs/9906007 | The proof of the first inclusion MATH is implicit in our definition of grNMSO. The nondeterminism of an mso transduction MATH with parameters MATH can be `pre-processed' by a relabelling MATH that maps each node label MATH nondeterministically to a symbol MATH, where MATH. The valuation of MATH has now become a part of the labelling, and we change the domain formula MATH, the node formulas MATH, and the edge formulas MATH that specify the mso transduction accordingly. Each atomic subformula MATH in such a formula is replaced by the disjunction MATH, and each atomic subformula MATH is replaced by MATH. In this way we obtain `deterministic' equivalents MATH, MATH, MATH for mso transduction MATH. We now have MATH which follows by observing that for a graph MATH, MATH if and only if MATH, and similarly for the other formulas. For the converse inclusion MATH, it suffices to note that each nondeterministic node relabelling is a nondeterministic mso definable graph transduction. The inclusion then follows from the closure of MATH under composition, REF . Let MATH define a graph node relabelling. We formalize it as an mso graph transduction from MATH to MATH by choosing parameters MATH, MATH, with the intended meaning that a node belonging to MATH will be relabelled into MATH. The domain formula MATH expresses that the MATH form an `admissible' parameter set by demanding each node to be in exactly one of the MATH, and additionally, if a node has label MATH, then MATH containing this node satisfies MATH: MATH . Each node is copied once, relabelled according to MATH: CASE: MATH, CASE: MATH, MATH, CASE: MATH, MATH. |
cs/9906007 | First, the inclusion from left to right. Let MATH, that is, MATH. Consider the string transduction MATH. Then MATH is an element of MATH, as MATH equals the composition MATH of (nondeterministic) mso definable graph transductions, where MATH is the mapping from MATH to MATH, compare REF . By the corollary above, and REF , MATH. Consequently, as MATH equals the `marking' from MATH to MATH followed by MATH, MATH. For the reverse inclusion, MATH, note that every marked relabelling can be decomposed into a marking and a relabelling, each of which we will show to be a (nondeterministic) mso transduction. The inclusion then follows from the closure of NMSOS under composition. The marking mapping MATH to MATH is easily seen to be an element of MSOS, either by direct construction, or by constructing a REFdgsm for that task, and applying REF . Finally, to show that MATH one closely follows the argumentation in the proof of MATH, REF . As we relabel edges, rather than nodes, in the representation MATH of a string MATH, but still have parameters ranging over nodes, we use the parameters for the source node of an edge to determine the new label of its outgoing edge (compare REF ): MATH is as before, but we now have MATH, and MATH. |
cs/9906007 | Clearly, the length of the output of a MATH-visiting computation on input MATH is at most MATH times the length of MATH. Hence the implication from left to right. As for the other implication, assume that the finitary transduction MATH is realized by a REFgsm MATH. If during a (successful) computation for MATH, MATH visits the same position twice in the same state, then it did not write symbols to the output in the meantime, because otherwise MATH has infinitely many output strings for the present input, as an easy pumping argument shows. Hence we may omit this excursion from the computation. Consequently, there is a computation of MATH for MATH that does not visit each of the tape positions more than MATH times, where MATH is the number of states of MATH. Hence MATH itself is finite visit. |
cs/9906007 | Let MATH be a REFgsm, finite visit for constant MATH; each pair MATH in the transduction realized by MATH can be computed by a MATH-visiting computation. We may decompose the behaviour of MATH on input MATH as follows. First, a relabelling of MATH guesses a string of MATH-visiting sequences, one for each position of the input tape. Then, a REFdgsm verifies in a left to right scan whether the string specifies a valid computation, a track, of MATH for MATH, compare REF . If this is the case, the REFdgsm returns to the left tape marker MATH and simulates MATH on this input, following the MATH-visiting computation previously guessed. When changing from one tape position to a neighbouring position, the REFdgsm records the `crossing number' of that move, that is, the number of times it crossed the border between these two tape positions (in one direction or another). The crossing number can be read by inspecting the directions of the moves stored in the visiting sequence. It is used to `enter' the next visiting sequence at the right visit, compare REF . |
cs/9906007 | By the last lemma, MATH. As the right-hand side of this inclusion is closed under composition REF we have the inclusion MATH. According to REF , MATH equals MATH. The inclusion from left to right follows from the fact that both MATH and MATH. |
cs/9906007 | Obviously MATH, while MATH by REF , which proves the inclusion from right to left. The reverse implication is immediate from REF : recall that transductions in NMSOS are finitary because the number of parameter valuations is finite. |
cs/9906007 | Assume that the finitary transduction MATH is a composition MATH as in the statement of the lemma; MATH, MATH realized by the REFgsm MATH, and MATH realized by the REFdgsm MATH. As is to be expected, the unknown family MATH will not feature in our arguments, but it will enable us to apply the result in a later context. In fact, we show how to replace MATH by MATH such that MATH. Hence MATH equals MATH on the range of MATH. Reconsider the proof of REF , where a MATH-visit REFgsm is decomposed into a relabelling that guesses a MATH-visiting sequence for each position of the input tape, and a REFdgsm that verifies in a single left-to-right pass whether the resulting string defines a MATH-track, and then deterministically simulates the specified computation for the original input. Alternatively, by combining the verification phase with the relabelling, we may decompose the MATH-visit REFgsm into a one-way gsm that nondeterministically writes a MATH-track, and a REFdgsm simulating the computation. We apply that new decomposition to MATH, and immediately observe that the first phase (guessing and writing a track) can be performed by MATH using a straightforward direct product construction. Summarizing: we have replaced the composition MATH by a new composition MATH realized by MATH followed by MATH, where MATH is a REFgsm that writes valid tracks for the REFdgsm MATH. Let MATH be MATH-visit. We continue by demonstrating that we need not consider all computations of MATH; instead, it suffices to put a bound on the number of visits that the machine makes to each of the positions of its input. This will change the transduction MATH realized by MATH, but not the composition MATH (due to MATH being finitary). Consider the behaviour of MATH on input MATH, where MATH is in the range of MATH. 
Fix a position on the tape MATH and a state of MATH, and split the output of MATH during the computation into segments, corresponding to the consecutive visits to the selected position in the selected state. MATH writes MATH where MATH is written during the excursions in between consecutive visits. We assume MATH. Returning to the same position and state, each of the excursions can be repeated in (or omitted from) the computation of MATH, so the machine may produce every string MATH, MATH as possible output on input MATH. By our previous construction, each output of MATH forms a MATH-track for the second machine MATH. This implies that MATH does not generate output during any of its visits to the segments MATH, as MATH is supposed to be finitary. At first glance, the excursion of MATH writing MATH can be omitted: the second machine MATH does not generate output when it visits the segment MATH during its simulation of the specified computation. However, the previous example shows that MATH (or in fact any segment MATH) may have its effect on the output of MATH by rearranging parts of the adjacent computation that leave the segment MATH (to the left or to the right) in order to return there later. We consider the computation of MATH specified by the track MATH from the viewpoint of the segment MATH. Starting from the leftmost position of MATH, the computation enters MATH from the left. Before leaving the segment for the last time, the computation makes several tours outside MATH. Such a tour of MATH to the left of the segment MATH, in MATH, corresponds to two consecutive visits MATH and MATH in the first visiting sequence of MATH, meaning the computation leaves the segment to the left in state MATH, returning there later in state MATH. A symmetric observation holds for tours to the right, in MATH, and consecutive visits in the last visiting sequence of MATH. 
Hence, the relative order of those tours that leave to the left is fixed by the last visiting sequence of MATH; similarly for the tours to the right. The relative order of all tours (left and right taken together) is determined by the segment MATH. Replacing MATH by another string in MATH will not change the tours in MATH and MATH, but it may rearrange the relative order of tours to the left and tours to the right. A visiting sequence for MATH contains at most MATH visits. Hence, there are fewer than MATH tours to each side of the segment. Together these at most MATH tours may be ordered in fewer than MATH ways (the orders of the tours on the same side of the segment are fixed). Now we are able to apply a pumping argument to the segment MATH. If MATH, then two of the prefixes MATH, MATH, MATH, define the same rearrangement on the adjacent tours, and thus we may replace MATH by MATH in the output MATH of MATH. The resulting track MATH defines a computation for MATH that results in the same output as the original track MATH. Thus, we may assume that MATH. Consequently, we allow for all possible rearrangements, and hence for all possible outputs of MATH, by taking MATH as the bound on the number of visits of MATH to a fixed position in a fixed state. Now that we have limited the number of visits of MATH to MATH times the size of its state set, we can replace MATH by a decomposition in MATH, again using the argument of REF . Thus, MATH is replaced by a composition in MATH. The result follows, as MATH is closed under composition, REF . |
cs/9906007 | Observe that MATH by an obvious construction. Let MATH. Assume that MATH is finitary. We have by the previous lemma, MATH, which equals MATH for MATH (and which equals MATH for MATH). Hence, by induction on MATH, MATH implies MATH, for a finitary string transduction MATH. As MATH, the theorem follows. |
cs/9906007 | By REF , MATH. Additionally, elements of NMSOS are necessarily finitary. This proves the implication from left to right. The reverse implication follows from the last result and the characterization MATH from REF . |
cs/9906007 | In view of REF it suffices to prove the equality MATH. The inclusion of MATH in MATH can be proved as REF , which states this inclusion for MATH: the relabelling guesses a string of visiting sequences for the computation of the NAME machine on the input string; the REFdgsm verifies that this string is a track and simulates the computation. Note that a visiting sequence of a NAME machine should also record the symbol at the position of the input tape at each visit. It is straightforward to adapt the notions of visiting sequence and MATH-track in this way, such that REF still holds (see CITE). The reverse inclusion is almost immediate. In two phases the NAME machine may simulate the composition, first writing the image of the marked relabelling on the tape, and then simulating the REFdgsm on this new tape. There is a minor technicality: for a given input MATH the initial tape contains MATH, and the NAME machine is supposed to overwrite this string with its relabelling and add two new tape markers (for the simulation of the REFdgsm). Instead, it keeps the relabelling of the tape markers in its finite state memory, rather than overwriting them. |
cs/9906007 | In view of REF it suffices to prove the equality MATH. The inclusion MATH is immediate. We demonstrate the reverse inclusion, much along the lines sketched in CITE; see also CITE. By REF , MATH. Hence, any NAME transduction MATH can be decomposed into a marked relabelling MATH and a deterministic REFgsm transduction MATH. We will argue that for a deterministic NAME transduction this (nondeterministic) marked relabelling can be realized by a deterministic REFgsm, which shows MATH by the closure of REF under composition. Let MATH be a deterministic NAME transduction, and let MATH be the decomposition as above. Let MATH be an input string. As MATH is functional, MATH for any marked relabelling MATH that belongs to the domain of MATH. As this domain MATH is a regular language CITE, a REFdgsm-rla can be constructed that finds and outputs such a marked relabelling in one pass from left to right over the input, using its look-around to check the remainder of the input for a relabelling of the present input symbol that leads to an element of MATH. This means that the REFdgsm-rla looks ahead to test the suffix of the tape for membership in the language MATH, where MATH is a (fixed) one-way deterministic finite state automaton accepting MATH except that the initial state is changed to MATH, which is the state where MATH would be after reading the output generated by the REFdgsm-rla on the prefix, including the relabelling chosen for the present symbol. |
cs/9906008 | Let the list to be sorted consist of a permutation MATH of the elements MATH. Consider a MATH . NAME algorithm MATH where MATH is the increment in the MATH-th pass and MATH. For any MATH and MATH, let MATH be the number of elements in the MATH-chain containing element MATH that are to the left of MATH at the beginning of pass MATH and are larger than MATH. Observe that MATH is the number of inversions in the initial permutation of pass MATH, and that the insertion sort in pass MATH requires precisely MATH comparisons. Let MATH denote the total number of inversions: MATH . Given all the MATH's in an appropriate fixed order, we can reconstruct the original permutation MATH. The MATH's trivially specify the initial permutation of pass MATH. In general, given the MATH's and the final permutation of pass MATH, we can easily reconstruct the initial permutation of pass MATH. MATH . Let MATH as in REF be a fixed number. Let permutation MATH be an incompressible permutation having NAME complexity MATH where MATH is the encoding program in the following discussion. The description in REF is effective and therefore its minimum length must exceed the complexity of MATH: MATH . Any MATH as defined by REF such that every division of MATH into MATH's contradicts REF would be a lower bound on the number of inversions performed. There are MATH possible divisions of MATH into MATH nonnegative integral summands MATH's. Every division can be indicated by its index MATH in an enumeration of these divisions. Therefore, a self-delimiting description of MATH followed by a description of MATH effectively describes the MATH's. The length of this description must by definition exceed its NAME complexity. That is, MATH . We know that MATH since every MATH. We can assume MATH. Together with REF , we have MATH . By REF MATH is bounded above by MATH . By REF we have MATH for MATH. Therefore, the second term on the right-hand side equals MATH for MATH. 
Since MATH and MATH, MATH for MATH. Therefore, the total right-hand side goes to MATH for MATH. Together with REF this yields MATH . Therefore, the running time of the algorithm is as stated in the theorem for every permutation MATH satisfying REF . By REF at least a MATH-fraction of all permutations MATH have such high complexity. Therefore, the following is a lower bound on the expected number of inversions of the sorting procedure: MATH . This gives us the theorem. MATH . |
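The pass structure analysed in this proof is that of standard Shellsort; the following Python sketch is illustrative (the paper's increment sequence and symbols are masked). The move counter corresponds to the per-pass inversion counts that the proof sums and lower-bounds by incompressibility.

```python
def shellsort(a, increments):
    """Shellsort sketch: one insertion-sort pass per increment h.

    Returns the sorted list together with the total number of element
    moves; the moves made in the pass for increment h are exactly the
    inversions removed within the h-chains, as in the analysis above.
    """
    a = list(a)
    moves = 0
    for h in increments:            # the increment sequence must end with 1
        for i in range(h, len(a)):
            x, j = a[i], i
            while j >= h and a[j - h] > x:
                a[j] = a[j - h]     # one inversion inside an h-chain removed
                j -= h
                moves += 1
            a[j] = x
    return a, moves
```

With the final increment equal to 1 the output is sorted, and the move count is the quantity whose expected value the incompressibility argument bounds from below.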
cs/9906008 | Fix an incompressible permutation MATH such that MATH where MATH is an encoding program to be specified in the following. Assume that MATH stacks suffice to sort MATH. We now encode such a sorting process. For every stack, exactly MATH elements pass through it. Hence we need to perform precisely MATH pushes and MATH pops on every stack. Encode a push as MATH and a pop as MATH. It is easy to prove that different permutations must have different push/pop sequences on at least one stack. Thus with MATH bits, we can completely specify the input permutation MATH. Then, as before, MATH . Hence, approximately MATH for incompressible permutations MATH. Since most permutations (a MATH-th fraction) are incompressible, we can calculate the average-case lower bound as: MATH . MATH . |
cs/9906008 | Consider an incompressible permutation MATH satisfying MATH . We use the following trivial algorithm (which is described in CITE) to sort MATH with stacks in the parallel arrangement. Assume that the stacks are named MATH and the input sequence is denoted as MATH. Algorithm NAME CASE: For MATH to MATH do CASE: Scan the stacks from left to right, and push MATH on the first stack MATH whose top element is larger than MATH. If such a stack does not exist, put MATH on the first empty stack. CASE: NAME the stacks in ascending order of their top elements. We claim that algorithm NAME uses MATH stacks on the permutation MATH. First, we observe that if the algorithm uses MATH stacks on MATH then we can identify an increasing subsequence of MATH of length MATH as in CITE. This can be done by a trivial backtracing starting from the top element of the last stack. Then we argue that MATH cannot have an increasing subsequence of length longer than MATH, where MATH is the natural constant, since it is compressible by at most MATH bits. Suppose that MATH is a longest increasing subsequence of MATH and MATH is the length of MATH. Then we can encode MATH by specifying: CASE: a description of this encoding scheme in MATH bits; CASE: the number MATH in MATH bits; CASE: the combination MATH in MATH bits; CASE: the locations of the elements of MATH in MATH in at most MATH bits; and CASE: the remaining MATH with the elements of MATH deleted in MATH bits. This takes a total of MATH bits. Using NAME approximation and the fact that MATH, the above expression is upper-bounded by: MATH . This description length must exceed the complexity of the permutation, which is lower-bounded in REF . This requires that (approximately) MATH. This yields an average complexity of NAME of: MATH . MATH . |
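The greedy parallel-stack algorithm described in the cases above admits a direct Python sketch (identifiers are illustrative, since the paper's names are masked). When the algorithm opens k stacks, an increasing subsequence of length k can be recovered by backtracing from the top of the last stack, which is the fact the proof exploits.

```python
def greedy_stacks(perm):
    """Greedy parallel-stack sorter: push each element on the leftmost
    stack whose top element is larger; open a new stack otherwise.
    Afterwards the stacks can be popped in ascending order of their tops."""
    stacks = []
    for x in perm:
        for s in stacks:
            if s[-1] > x:       # tops stay decreasing within each stack
                s.append(x)
                break
        else:
            stacks.append([x])  # no suitable stack: open a new one
    return stacks
```

For example, on an increasing input every element forces a fresh stack, matching the longest-increasing-subsequence bound used in the argument.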
cs/9906008 | Let MATH be any sorting algorithm using parallel stacks. Fix an incompressible permutation MATH with MATH, where MATH is the program to do the encoding discussed in the following. Suppose that MATH uses MATH parallel stacks to sort MATH. This sorting process involves a sequence of moves, and we can encode this sequence of moves as a sequence of the following items: ``push to stack MATH" and ``pop stack MATH", where the element to be pushed is the next unprocessed element from the input sequence and the popped element is written as the next output element. Each of these terms requires MATH bits. In total, we use precisely MATH terms, since every element has to be pushed once and popped once. Such a sequence is unique for every permutation. Thus we have a description of an input sequence with length MATH bits, which must exceed MATH. It follows that MATH. This yields the average-case complexity of MATH: MATH . MATH . |
cs/9906008 | The proof is very similar to the proof of REF . We use a slightly modified greedy algorithm as described in CITE: Algorithm NAME CASE: For MATH to MATH do CASE: Scan the queues from left to right, and append MATH on the first queue whose rear element is smaller than MATH. If such a queue does not exist, put MATH on the first empty queue. CASE: Delete the front elements of the queues in ascending order. Again, we can claim that algorithm NAME uses MATH queues on any permutation MATH that cannot be compressed by more than MATH bits. We first observe that if the algorithm uses MATH queues on MATH then a decreasing subsequence of MATH of length MATH can be identified, and we then argue that MATH cannot have a decreasing subsequence of length longer than MATH, in a way analogous to the argument in the proof of REF . MATH . |
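The modified greedy algorithm is the queue-dual of the stack version; a Python sketch (names illustrative) makes the symmetry explicit: here k queues witness a decreasing subsequence of length k, traced back from the rear of the last queue.

```python
def greedy_queues(perm):
    """Greedy parallel-queue sorter: enqueue each element on the leftmost
    queue whose rear element is smaller; open a new queue otherwise.
    Each queue stays increasing, so the fronts can be deleted in
    ascending order afterwards."""
    queues = []
    for x in perm:
        for q in queues:
            if q[-1] < x:       # rears stay increasing within each queue
                q.append(x)
                break
        else:
            queues.append([x])  # no suitable queue: open a new one
    return queues
```

On a decreasing input every element opens a fresh queue, matching the longest-decreasing-subsequence bound in the proof.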
cs/9906008 | The proof is the same as the one for REF except that we should replace ``push" with ``enqueue" and ``pop" with ``dequeue". MATH . |
cs/9906024 | MATH . The equations are justified in the following manner: REF by definition of the inner product, REF by the choice of MATH, REF by identification of MATH with MATH, REF by definition of the tensor product and REF by REF . |
cs/9906024 | For every MATH let MATH be the configuration which is MATH at cell MATH and quiescent elsewhere. Then for every MATH we have MATH. Thus, if MATH is well-formed, REF hold. For the converse, suppose that both conditions are satisfied. Then REF implies that the columns of MATH have unit norm. Now we show that for any two distinct configurations MATH and MATH, the associated columns of the evolution operator are orthogonal. Since MATH and MATH are different, there exists a cell MATH such that MATH. Thus MATH by REF and MATH by REF . |
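The two well-formedness conditions (unit-norm columns and pairwise-orthogonal columns) can be illustrated on a finite-dimensional toy matrix; the sketch below is not the paper's construction, just a direct check of the two conditions on a matrix given as nested lists of (possibly complex) numbers.

```python
def columns_orthonormal(M, tol=1e-9):
    """Check the two conditions of the lemma on a finite matrix M
    (a list of rows): every column has unit norm, and distinct
    columns are mutually orthogonal."""
    rows, cols = len(M), len(M[0])
    for j in range(cols):
        norm_sq = sum(abs(M[i][j]) ** 2 for i in range(rows))
        if abs(norm_sq - 1.0) > tol:
            return False                    # a column of norm != 1
    for j in range(cols):
        for k in range(j + 1, cols):
            inner = sum(M[i][j].conjugate() * M[i][k] for i in range(rows))
            if abs(inner) > tol:
                return False                # two non-orthogonal columns
    return True
```

For a square matrix the two conditions together say the matrix is unitary, which is the conclusion drawn in the next step of the argument.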
cs/9906024 | Suppose MATH is well-formed. By the previous lemma MATH is described by a unitary matrix. Let MATH be the local function described by the inverse of this matrix, that is for all MATH we have MATH. Let MATH be the trivial LQCA MATH. Clearly MATH, which concludes the proof. |
cs/9906024 | Let MATH denote the set of MATH-cycles of MATH. We define a mapping MATH. Let MATH be a configuration with interval domain MATH. Let MATH, and for MATH, let MATH. Then by REF . We then have MATH . Since the mapping MATH is clearly surjective, the statement of the lemma follows. |
cs/9906024 | Algorithm MATH will construct the graph MATH of REF and then determine if it has a MATH-cycle of weight different from REF. This will be done by two consecutive algorithms MATH and MATH, of which the first will check if there is a column of norm less than REF, and the second will check if there is a column of norm greater than REF. They are both modifications of the NAME single-source shortest-paths algorithm CITE (BF for short), when MATH is taken for the source. They are based on the fact that BF detects negative cycles going through the source. (Actually, for our purposes any shortest-paths algorithm can be used that uses sum and min as arithmetic operations and detects negative cycles; NAME's algorithm would be another example.) Algorithm MATH replaces every sum operation in BF by a product operation, and initializes the shortest path estimate for the source to MATH (the shortest path estimates for the other vertices are initialized to MATH as in BF), and then runs it on MATH. This way it computes the shortest paths when the weight of a path is defined as the product of the edge weights. To see this, let MATH be the same graph as MATH except the edge weights are replaced by their logarithm. Then the weight of a shortest path in MATH given by BF will be the logarithm of the shortest path in MATH given by MATH. For the same reason, negative cycles in MATH through the source will correspond to MATH-cycles in MATH with weight less than REF, which will therefore be detected by MATH. REF replaces every min operation in MATH by max and the default initial shortest path estimate MATH by REF, and then runs it on MATH. This way it computes the shortest paths when the weight of a path is defined as the product of the reciprocal of the edge weights. If we define MATH with negative-logarithm edge weights, then negative cycles in MATH will correspond to cycles in MATH with weight greater than REF and will be detected by MATH. 
The complexity of BF is MATH. In the graph MATH we have MATH. Every vertex has MATH outgoing edges, therefore MATH. Thus the complexity of the algorithm MATH is MATH. |
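The two Bellman-Ford modifications can be combined into one sketch via the logarithm trick described above. The interface below is illustrative, not the paper's construction on its graph: it assumes positive real edge weights and vertices numbered 0..n-1, and (like standard BF) it reports cycles of weight different from 1 that are reachable from the source.

```python
import math

def has_bad_cycle(n, edges, source):
    """Detect a cycle reachable from `source` whose multiplicative
    weight differs from 1.  Taking logarithms turns products into sums,
    so weight < 1 becomes a negative cycle for log-weights, and
    weight > 1 becomes a negative cycle for negated log-weights;
    either is caught by Bellman-Ford's extra relaxation round."""
    def negative_cycle(w):
        dist = [math.inf] * n
        dist[source] = 0.0
        for _ in range(n - 1):                  # standard relaxation rounds
            for u, v, c in edges:
                if dist[u] + w(c) < dist[v]:
                    dist[v] = dist[u] + w(c)
        # any further improvement witnesses a negative cycle
        return any(dist[u] + w(c) < dist[v] for u, v, c in edges)

    return (negative_cycle(math.log)                    # cycle weight < 1
            or negative_cycle(lambda c: -math.log(c)))  # cycle weight > 1
```

As the proof notes, any shortest-paths routine using only sum and min and capable of negative-cycle detection would serve equally well here.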
cs/9906024 | Let MATH, and let MATH denote the set of MATH-cycles. We will define a mapping MATH. For MATH, let MATH be an interval such that MATH. Let MATH, and for MATH we define MATH, and MATH. Then by REF . Since MATH, REF implies that MATH is indeed a MATH-cycle in MATH. Also, it is clear that MATH is surjective. Finally MATH if and only if MATH since both are equivalent to the existence of MATH such that MATH. |
cs/9906024 | The algorithm MATH constructs the graph MATH and computes the strongly connected component of the node MATH. By REF there exist two distinct configurations such that the corresponding column vectors in MATH are not orthogonal if and only if in this component there is a vertex MATH with MATH. This can be checked easily. Finding the strongly connected components in a graph can be done in time MATH, for example with NAME's algorithm CITE. In MATH the number of vertices is MATH. Since every vertex has outdegree MATH, the number of edges is MATH. Therefore the complexity of the algorithm MATH is MATH. |
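The proof invokes a linear-time strongly-connected-components routine (Tarjan's). For a single distinguished vertex, as needed here, the component can also be obtained by intersecting forward and backward reachability; the illustrative sketch below does this in O(V + E) time per query.

```python
from collections import defaultdict

def scc_of(n, edges, s):
    """Return the strongly connected component of vertex s:
    the vertices reachable from s in the graph whose reachability
    in the reversed graph also includes s."""
    forward, backward = defaultdict(list), defaultdict(list)
    for u, v in edges:
        forward[u].append(v)
        backward[v].append(u)

    def reach(adj, start):
        seen, stack = {start}, [start]
        while stack:                      # iterative depth-first search
            u = stack.pop()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return seen

    return reach(forward, s) & reach(backward, s)
```

One would then scan the returned component for a vertex with the property stated in the lemma.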
cs/9906024 | The local transition function of MATH is the composition of two separate local transition functions, thus its time evolution operator is also the composition of time evolution operators of the associated LQCAs, that is MATH. Since MATH is unitary we have that MATH preserves the norm (respectively, is unitary) if and only if MATH preserves the norm (respectively, is unitary). The theorem follows from REF . |
gr-qc/9906021 | Since MATH is parallel along each MATH, MATH . We may view MATH on each MATH as a curve in the NAME group MATH parameterised by MATH. Then MATH where the dot denotes differentiation with respect to MATH. Now let MATH be the curve in the NAME algebra corresponding to MATH by right translation, that is, MATH corresponds to the right-invariant vector field equal to MATH at MATH by MATH. (It might seem more natural to choose left translation, but then we would have to solve for MATH instead.) Thus MATH . Differentiating with respect to MATH and using that MATH and MATH we get MATH in the frame MATH. Integrating and solving MATH gives MATH . |
gr-qc/9906021 | The first expression follows immediately by letting MATH and MATH in REF . Using REF and the symmetries of the NAME tensor we get MATH and hence the second formula. |
gr-qc/9906021 | Note that MATH implies that MATH on the whole disc. First we need estimates for MATH and MATH. Since MATH, MATH so MATH and MATH for some MATH. But from REF, MATH and MATH so MATH and since MATH, MATH and MATH . Put MATH . Then MATH so MATH . Next we replace the integral in MATH with an expression involving only the value of the NAME tensor at the origin. The mean value theorem gives MATH for some MATH. Since MATH and MATH, MATH . Applying the mean value theorem again to the first factor in the last term and using that MATH and MATH gives MATH for some MATH. Thus from REF, MATH . Integrating and using REF along with MATH we get MATH . Adding REF and applying REF gives MATH and we have established the first part of the lemma. The b-length of MATH is given by MATH . Now MATH is a geodesic with MATH, so the first and third terms are MATH and MATH by REF. The second term is MATH since the norm of a NAME transformation equals the norm of its inverse. But MATH is given by REF , and applying REF and MATH we get MATH so using REF again gives MATH . Adding REF together we get the desired bound on MATH. |
gr-qc/9906021 | We start by decomposing MATH as MATH where MATH and MATH are dual independent simple bivectors. Inverting this relation and using that for any bivector MATH, MATH we get MATH . Define a disc by MATH, such that MATH, MATH and MATH at MATH, as in REF. Then REF gives MATH so MATH. Put MATH . Then REF give MATH so REF applies. From REF we have a loop MATH at MATH and a NAME transformation MATH generated by parallel propagation around MATH. Replacing MATH and MATH with MATH and MATH and repeating the above procedure we get another loop MATH at MATH which generates a NAME transformation MATH. Put MATH and MATH . From REF we know that MATH . Let MATH. Then MATH is generated by parallel propagation around the concatenation MATH of MATH and MATH, and we may write MATH . Using first REF and then REF we get MATH and similarly MATH and MATH . Inserting REF into REF and using that MATH and MATH and REF gives MATH . The length of MATH is the sum of the lengths of MATH and MATH. From REF we find that MATH . The same holds for MATH except that we have to correct for the starting frame being MATH instead of MATH. From REF, MATH and thus MATH . |
gr-qc/9906021 | Let MATH. To construct the first square, put MATH. Since MATH REF gives MATH . Applying REF to the first factor of MATH and REF to the second factor and then taking the square root gives that REF is fulfilled. Similarly, applying REF to the first factor of MATH, REF to the other two factors and taking the third root gives that REF is fulfilled. Thus REF applies and we have a loop MATH which generates a first approximation MATH to MATH. Also, MATH so from REF , MATH and REF gives MATH . Next we repeat the construction for the NAME transformation MATH. We first have to check that the conditions are satisfied. But from REF, MATH and from REF and the fact that the norm of a NAME transformation equals the norm of its inverse, MATH . It follows that we can write MATH with MATH . Thus MATH satisfies the conditions as long as the generating curve stays in MATH. Repeating the above process we get a series of loops MATH corresponding to a sequence MATH of NAME algebra elements, generating NAME transformations MATH. The products MATH are generated by parallel propagation along the concatenation of the curves MATH, and MATH from REF and repeated application of REF. But REF gives MATH so MATH as MATH. It remains to show that the resulting curve is contained in MATH. From REF , MATH . For MATH, we have to take into account that the starting point is MATH instead of MATH, so MATH from REF. Summing over MATH we get the desired bound on the length, and it is evident that the generating curve stays in MATH. |
gr-qc/9906021 | We start by generating the NAME transformation MATH where MATH is chosen sufficiently large for REF to hold on a subset of MATH, which gives REF. By REF there exists a horizontal curve MATH in MATH from MATH to MATH. Let MATH and MATH for MATH. Then MATH is a horizontal curve from MATH to MATH since the action of the NAME group preserves horizontal curves. Let MATH be the combined curve obtained by joining the curves MATH in sequence. Then MATH generates MATH and since MATH and MATH the result follows. |
gr-qc/9906021 | Let MATH be a curve in MATH, parameterised by b-length, with MATH. Let MATH be the tangent vector of MATH, and let MATH be the tangent vector of MATH with components MATH in the fixed frame MATH. Also, let the frame MATH of MATH be given by MATH. We want to show that MATH. From CITE, the fundamental REF-form MATH at MATH is given by MATH, where MATH is regarded as a map MATH, so MATH . Next, the connection form MATH is given by MATH where MATH is the canonical isomorphism from MATH to the vertical subspace of MATH, and MATH denotes the vertical component of MATH CITE. By definition, if MATH and MATH is any curve in MATH with MATH and MATH, then MATH at MATH. The vertical component of MATH is given by MATH where MATH is the matrix with components MATH and MATH are the rotation coefficients of the frame MATH. Combining REF gives MATH . Since MATH is parameterised by b-length, MATH so from REF, MATH and from REF, MATH . Put MATH. Then MATH and integration gives MATH since MATH at MATH. Thus MATH and the result follows from MATH and MATH. |
gr-qc/9906076 | Let us choose a region MATH of the form MATH, where we mean the domain of dependence of some open subset MATH with compact closure of the NAME surface MATH, that is, the set of all MATH such that every past- respectively future-directed timelike or null curve starting at MATH hits MATH. Let us first show that the restrictions of the states to a subalgebra MATH, MATH are quasiequivalent. Regions of this particular shape are convenient, because the algebras MATH are then isomorphic to the algebras MATH constructed from the NAME space MATH, which is a closed subspace of MATH. The restricted states MATH then correspond to the operators MATH on MATH, where MATH denotes the projection on this subspace. One then knows, by a well-known result of CITE, that the states MATH and MATH on the algebra MATH are quasiequivalent if and only if MATH where we mean the NAME norm in MATH. Using the NAME inequality MATH valid for any two bounded operators (we mean the trace norm), one concludes that REF hold provided MATH . As above, let MATH be the NAME operator on MATH. Trivially, we can write REF MATH . By assumption, the operator MATH is bounded for MATH, therefore REF will hold if MATH can be shown to be NAME for such a MATH, since the product of two NAME operators is in the trace class. To see this, let us pick an orthonormal basis MATH of spinors in MATH. Then MATH . In order to estimate the right-hand side of REF , we exchange the MATH-integration and the summation over MATH (this is justified, because the resulting expression turns out to be absolutely convergent). We have MATH . The sum over MATH at fixed MATH of the integrand is independent of MATH and equal to twice the spectral function MATH, defined by MATH therefore we have found MATH . The spectral function MATH is given by the following expressions: MATH . For MATH a derivation of these may be found in CITE; the expression for MATH is derived in the Appendix. 
MATH grows as MATH for large MATH for all the homogeneous spaces MATH, ensuring that MATH is NAME for MATH. We have therefore shown that MATH is quasiequivalent to MATH for any set MATH of the form MATH. Since it is enough to verify local quasiequivalence on a cofinal set of open subsets (such as the set of regions of the type MATH), this then proves the theorem. |
gr-qc/9906076 | The NAME connection can be calculated from REF , taking MATH (this means that MATH in that formula). Introducing an orthonormal frame MATH for the metric tensor, MATH, and going to local coordinates, we find MATH and MATH where MATH is the spinor connection. According to REF , this means that MATH . That this expression agrees with the geometric expression for MATH in the statement of this proposition now immediately follows, because MATH, MATH, and because the differential of the projection map is given by MATH. |
gr-qc/9906076 | We aim at using the propagation of singularities theorem, REF , combined with a deformation argument due to Fulling, CITE, first applied in a similar context in CITE. By REF , the polarisation set of MATH must be a union of NAME orbits corresponding to the operators MATH and MATH. By REF , sections over MATH, annihilated by MATH are pull-backs to MATH of sections in MATH over null-geodesics which are parallel with respect to MATH. Therefore, two elements MATH and MATH of MATH are in the same Hamiltonian orbit if MATH for some MATH. Now let MATH and MATH be a NAME surface of MATH through that point. Then there is a convex normal MATH of MATH and a convex normal neighbourhood MATH of MATH, containing MATH, such that there is another spacetime MATH with NAME surface MATH and a corresponding causal normal neighbourhood MATH with the properties that: CASE: MATH is isometric to MATH and REF MATH contains a NAME surface MATH and a convex flat neighbourhood MATH contained in a convex normal neighbourhood MATH of MATH such that MATH (we mean the domain of dependence), where MATH corresponds to MATH under the isometry. By the propagation of singularities theorem, it will be enough to show that MATH has the desired polarisation set, because any pair of null related points can be transported along a null geodesic into a region of that kind. Let MATH be the pull-back of the twopoint functions to the deformed spacetime MATH. By the propagation of singularities theorem and the equations of motion on the deformed spacetime it will induce a NAME distribution on all of MATH. Furthermore MATH will have the required polarisation set if MATH has, again by the propagation of singularities theorem. But MATH is contained in a flat portion of spacetime, so effectively our theorem has to be shown for NAME space only. So let MATH be twopoint functions of a NAME state in NAME space. 
By our REF , all NAME states differ by a smooth piece only, so we may restrict attention to the vacuum in NAME space, MATH where MATH are the ordinary positive-, respectively negative-, frequency twopoint functions for the NAME MATH in flat space. It is not difficult to see from the definition of the polarisation set and the definition of MATH that one must have MATH . Now MATH; therefore, using that MATH is a principal symbol of MATH, one can conclude (by the definition of the polarisation set) that MATH where MATH. Since the form of MATH is already restricted by REF , it is easy to see that these equations imply MATH and MATH. From the equation MATH it follows that MATH where MATH is the totally antisymmetric tensor in MATH dimensions and MATH . Using standard identities for traces of gamma matrices and MATH we find that MATH. Since MATH, this implies MATH, which in NAME space is just the condition on the polarisation that was claimed. |
gr-qc/9906076 | Let us denote by MATH the advanced, retarded, NAME, anti-Feynman parametrices of the spinorial NAME operator MATH. They are known to be determined (modulo MATH) by the equations MATH and their wave front sets CITE. For the NAME and anti-Feynman parametrices, these wave front sets read: MATH and MATH . Actually, in CITE only the case of a scalar operator with metric principal part is treated. Inspection of the proof, however, shows that it may be extended to operators with metric principal part acting in vector bundles such as MATH. We also need the advanced, retarded, NAME and anti-Feynman parametrices for the NAME operator, given by MATH . By the anticommutation relations, one infers that MATH . Let us define MATH . Our aim is now to prove that MATH modulo a smooth kernel. To this end, we first show that MATH . In order to see why this must be true, consider a point MATH in the wave front set of MATH such that MATH. Then, because MATH must be zero for such points by the support properties of MATH, it must hold that MATH. Since (by the microlocal NAME condition, REF ) MATH we find that MATH can be in the wave front set of MATH if and only if MATH. A similar reasoning can be applied for MATH, this time using the representation MATH and exploiting the microlocal NAME condition, REF , for MATH. Altogether one concludes from this that MATH is in the wave front set of MATH if and only if MATH and MATH for MATH, respectively MATH, if MATH, which is just the set REF . We have therefore shown the first inclusion in REF . The second inclusion is treated in just the same way. Now, by definition, we have MATH. Applying the operator MATH to the relation CITE MATH we find from this that MATH modulo smooth and hence that MATH . 
We had already shown that MATH and we also have MATH (since the wave front set of a distribution cannot become larger when acting upon it with a differential operator), so the set on the left hand side of this equation is contained in the set MATH. By the same arguments the set on the right hand side of REF must be contained in MATH. It therefore trivially follows that MATH where MATH . Now by REF modulo smooth. We are thus in a position to use the propagation of singularities theorem and we conclude that the polarisation set of the distribution MATH (and hence its wave front set) must be a union of Hamiltonian orbits for the operator MATH. Using the same arguments as in the proof of the preceding theorem, it is then easy to show that if MATH is in MATH, then this set must also contain nonzero vectors away from the diagonal, a contradiction. Hence MATH, that is, MATH as we wanted to show. Inserting this into REF one gets MATH . It can be extracted from the analysis of the propagators in CITE that MATH . This holds because all propagators have the same structure as MATH, that is, the same functions MATH multiply different singular parts. These can be combined to give the above equation. Combining this equation with REF then proves the theorem. |
gr-qc/9906076 | We treat the case of infinite order first and suppress the subscript MATH. We also introduce the notation MATH for operators MATH acting on spinors. We need to show that MATH can be modified modulo MATH to an operator MATH such that MATH and MATH. Taking the transpose of REF we obtain MATH . It is not difficult to see that MATH and MATH. From this it follows that MATH . But multiplying REF with MATH from both sides and using that MATH, we also have that MATH . Therefore, since the principal symbols of MATH and MATH are equal and since the factorisation is unique modulo MATH once this information is known, we have shown that MATH . We redefine MATH by MATH and MATH by MATH and MATH by inserting the modified definition of MATH. These redefined operators will then satisfy (remembering that MATH and using that MATH for any operator MATH acting on spinor fields over MATH) MATH and MATH where MATH. Since the left hand side of this equation is invariant under charge and hermitian conjugation and positive, we find MATH, MATH and MATH. Since MATH is compact, MATH is a compact operator and the projectors on all nonzero eigenspaces have a smooth kernel. Let MATH be the projector on the (finite dimensional) kernel of MATH. Then MATH is strictly positive, MATH, MATH, MATH, MATH. Let us write MATH . Then MATH, MATH, and the operator MATH defined by MATH satisfies MATH modulo smooth, MATH and MATH, providing us thus with a modified operator with the desired properties. For arbitrary MATH, one proceeds in a similar way, this time using that MATH modulo MATH. |
gr-qc/9906076 | The argument establishing the existence of MATH as in the lemma is the same for all MATH, therefore, to lighten the notation we will only treat the case MATH and drop the reference to MATH. We set MATH. The MATH are then related to the PDO's MATH in the statement of the lemma by REF . One finds after the first iteration, MATH modulo MATH. Further iterations change MATH only by symbols of order less than or equal to MATH and will therefore not affect the above form of MATH. Only the above form of the MATH is used to argue the existence of MATH, therefore the adiabatic order MATH is not important for our argument, as long as MATH . The proof of the lemma then amounts to showing that MATH can be modified by a symbol of class MATH such that one can find a MATH (or rather one for each helicity), taking values in the complex REF by REF matrices, such that (MATH denotes the hermitian adjoint of a matrix and MATH the identity matrix) MATH . The PDO MATH corresponding to MATH by REF then obviously fulfills the claim of the lemma (note that the integral/sum in REF is over positive MATH only). Taking the matrix adjoint of REF , one observes that MATH can be taken to be hermitian. One may regard REF as a linear equation for MATH at each value of MATH and the helicity index MATH, and thus write it as a REF by REF matrix system for the matrix entries of MATH where MATH is a REF by REF matrix determined from the entries of MATH. Only using the above form of MATH and the fact that MATH one finds from REF MATH . Hence, if MATH, then MATH has a matrix inverse in MATH for large MATH, MATH . REF may therefore be inverted for large MATH and gives us a solution MATH to REF . It follows directly from the above form of MATH that MATH has MATH as a principal symbol, therefore MATH is positive definite for large MATH. We have thus constructed a solution to REF if MATH is greater than some MATH. 
To find such a MATH also for MATH, we may redefine MATH arbitrarily for MATH, since MATH by definition does not depend on MATH for MATH. By what we have already shown, such a MATH trivially allows for a hermitian, positive solution of REF , but it is not yet a symbol (because its dependence on MATH is not smooth). We may, however, change the above definition of MATH in an arbitrarily small neighbourhood of MATH to make it smooth (and hence a symbol), without making the corresponding matrix MATH singular. As we mentioned earlier, the resulting matrix MATH is automatically hermitian. By easy arguments based on the continuity of the construction, it will also remain positive, if that change is made arbitrarily small. |
gr-qc/9906076 | According to REF , an adiabatic state of infinite order (defined, as described above, by a symbol MATH) is NAME, so it is sufficient to show that adiabatic states of order MATH (described by a symbol MATH) are locally quasiequivalent to such a state. Now MATH is by definition a symbol of order MATH, so in particular, MATH . The criterion on local quasiequivalence, REF , then immediately proves that the states are locally quasiequivalent if MATH. Let MATH be the operator associated to the symbol MATH via REF at some MATH. By standard theorems, for example, in CITE, the associated kernel on MATH is in MATH for MATH. The difference of the twopoint function of an adiabatic state of infinite order and one of order MATH is MATH . Now the causal propagator MATH propagates MATH times differentiable initial data to MATH times differentiable solutions. Therefore the above difference must be MATH times differentiable in MATH. |
gr-qc/9906076 | In order to prove that MATH has the wave front set described in REF , we shall employ the following result due to W. Junker CITE. This result has originally been obtained for scalar fields, but a careful analysis of the proof shows that it can be adapted to the spinor case. We present here a modified version which is tailored to our situation. MATH is the causal propagator for the spinorial NAME operator MATH. Let MATH be an elliptic PDO on MATH. Let MATH be an interval containing MATH and MATH such that there exist PDO's MATH which have the property MATH modulo smooth and MATH where MATH is defined in REF . Then the spinorial bidistributions MATH have wave front set MATH. We apply the lemma to MATH and MATH and MATH as in the definition of the twopoint functions, REF . Then REF , and REF provide us with operators MATH as in the statement of the above theorem. Clearly, since MATH, MATH . Noting that MATH and using the fact that the wave front set cannot become larger upon acting with a PDO on a distribution, we can apply Junker's theorem to MATH (given by REF ) and obtain MATH. It remains to show equality in the above inclusions. The anticommutation relations imply that MATH. If the causal propagator MATH had wave front set MATH (and this will indeed be shown), then MATH; thus, in fact, equality would hold in the above inclusions. We have to show that MATH indeed has wave front set MATH. From MATH and the antilinearity of the charge conjugation it follows that MATH . One has (see CITE), MATH where MATH means the identity on the NAME surface. From CITE, one knows that MATH where MATH is the embedding map. Now assume that MATH is in MATH but not in the wave front set of MATH. By the propagation of singularities, we can assume that there is an element MATH, MATH, which is in MATH but not in MATH. By REF , also MATH. Since MATH must be a nonzero null covector, it is impossible that the nonzero element MATH is in MATH. 
But the latter set is actually equal to MATH and so must contain any element of that form, a contradiction. |
hep-th/9906225 | For MATH and MATH we have MATH using the differential REF of MATH and the braided NAME REF . Applying MATH, we can ignore the total differential and obtain MATH . This immediately gives us MATH for MATH. We rewrite REF to find MATH which gives us a recursive definition of MATH leading to the formulas stated. |
hep-th/9906225 | This is obtained from REF by reversing arrows or, equivalently, by turning diagrams upside down in the diagrammatic language of braided categories. |
hep-th/9906225 | Let MATH be an element of MATH. In particular, MATH . Applying the antipode to the last component and multiplying with the MATH-th component we obtain MATH . Thus, MATH is the identity on MATH. On the other hand, applying the inverse antipode and then MATH to the last component of REF we get MATH . This is to say that MATH is indeed right MATH-invariant. Conversely, it is clear that MATH is the identity. Now take MATH in MATH. Its image under MATH is MATH . Applying MATH to the last component we get MATH by right MATH-invariance. Applying MATH we arrive at MATH . We observe that this is the same as applying MATH to REF . Thus, the last component of REF lives in MATH and the application of the antipode sends it to MATH as required. That the result is right MATH-invariant is also clear by the defining property of the antipode. |
math-ph/9906013 | This is a simple consequence of the triangle inequality MATH . |
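Since the inequality instance itself is masked in the source, here is the generic triangle inequality for an arbitrary norm, which is the only ingredient this proof cites:

```latex
% Triangle inequality for a norm, and its iterated form for finite sums:
\| A + B \| \;\le\; \| A \| + \| B \|,
\qquad
\Bigl\| \sum_{i=1}^{n} A_i \Bigr\| \;\le\; \sum_{i=1}^{n} \| A_i \| .
```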
math-ph/9906013 | Using the majorization REF the proof is basically reduced to the right choice of notation. Let MATH be the nonnegative compact operator in MATH, given by the integral kernel MATH. Furthermore let MATH be the NAME distribution and MATH be the group of unitary multiplication operators MATH on MATH. Passing to the NAME representation of the NAME function in REF we obtain MATH . Of course, MATH. In particular, REF immediately imply MATH. The NAME distribution is a convolution semigroup, that is, MATH. If we insert this into REF and change variables using the group property of the unitary operators MATH, then REF yields MATH . This completes the proof. |
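The semigroup identity used in the change-of-variables step above can be stated generically (the particular distribution is masked in the anonymized source):

```latex
% Convolution semigroup property of a family (\mu_t)_{t>0} of measures:
\mu_s * \mu_t \;=\; \mu_{s+t}, \qquad s,\, t > 0 ,
% which, for an associated one-parameter family of unitary
% multiplication operators, yields the group law used to change variables:
U(s)\, U(t) \;=\; U(s+t) .
```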
math-ph/9906013 | Note that REF is equivalent to MATH . Scaling gives the simple identity for all MATH, where MATH is the Beta function. Let MATH be the eigenvalues of MATH. Then MATH . |
math-ph/9906013 | We apply an induction argument similar to the one used in CITE. For MATH and MATH the bound REF is identical to REF. Consider the operator REF in the (external) dimension MATH. We rewrite the quadratic form MATH for MATH as MATH . The form MATH is closed on MATH for a.e. MATH and it induces the self-adjoint operator MATH on MATH. For a fixed MATH this is a NAME operator in MATH dimensions. Its negative spectrum is discrete, hence MATH is compact on MATH. Assume that we have REF - REF for the dimension MATH and all MATH from the interval MATH. Then MATH satisfies the bound MATH for a.e. MATH. Here MATH . Indeed, REF follows from REF , which in turn follows from REF - REF in dimension MATH. Let MATH be the quadratic form corresponding to the operator MATH on MATH. We have MATH and MATH for all MATH. According to REF the form on the right-hand side of REF can be closed to MATH and induces the self-adjoint operator MATH on MATH. Then REF implies MATH . The assumption MATH implies that MATH is an integrable function and we can apply REF to the right-hand side of REF. In view of REF we find MATH for MATH. The bounds REF or REF and the calculation MATH complete the proof. |
math-ph/9906019 | Let MATH be a foliation in acausal NAME surfaces and write MATH. We first show that it suffices to restrict one's attention to the NAME surface MATH. More precisely, we show that MATH is a strong deformation retract of MATH. In fact, using MATH to parametrize MATH and defining MATH by MATH we have a homotopy of the identity on MATH onto the projection, MATH, onto MATH leaving MATH fixed. The only non-trivial point is to show that the image of MATH lies in MATH and this is where the causal structure enters. However, two remarks suffice: first, causal disjointness reduces to disjointness on an acausal NAME surface and hence is preserved if we pass from one acausal NAME surface to another by changing the value of MATH. Secondly, if we take causally disjoint points MATH, MATH with distinct values of MATH then the curve MATH is timelike and connects MATH with that NAME surface of the foliation containing MATH. Its range must lie in MATH or there would be a causal curve coming arbitrarily close to connecting MATH and MATH, contrary to assumption. We now know that the inclusion of MATH in MATH induces an isomorphism in homotopy and, in particular, an isomorphism of path-components. Now unless MATH is one dimensional and non-compact, the complement of a point of MATH is path-connected and MATH is then also path-connected. If MATH is one dimensional and non-compact it is isomorphic to MATH so that MATH has two path-components. |
math-ph/9906019 | As the product of endomorphisms localized in MATH is again localized in MATH, it suffices to observe that if MATH and MATH are unitary then MATH is unitary. |
math-ph/9906019 | We first show that MATH. This relation is trivial if MATH and MATH are causally disjoint in the sense that there is a MATH such that MATH contains an initial and final support of MATH and MATH an initial and final support of MATH. The idea of the proof is to reduce to this trivial case. Replace MATH and MATH by MATH and MATH, where MATH and MATH are unitary. Then MATH, to be understood as valid in some MATH for MATH sufficiently large. Thus if MATH and MATH are causally disjoint, the validity or not of our relation is unaffected by the passage from MATH, MATH to MATH, MATH. But MATH and MATH lie in a connected component MATH by hypothesis, so after a finite number of steps we can arrange that the initial and final supports of both intertwiners coincide. This is again the trivial case so MATH, as required. It only remains to show that MATH . The above computations show that the kernel of the left hand side does not change if we shift to MATH and MATH. However, by hypothesis, given MATH, we can find MATH with MATH and we can take MATH when MATH, completing the proof. |
math-ph/9906019 | The uniqueness claim tells us how to go about defining MATH: given MATH pick MATH and unitaries MATH where MATH and we have no option but to set MATH . By REF , such a choice, however made, automatically satisfies MATH. We have MATH, where MATH and the product of supports of MATH and MATH is contained in MATH. Set MATH then, by REF , MATH and rearranging this identity gives MATH and completes the proof of the theorem. |
math-ph/9906019 | These equalities follow easily from the formula MATH used to define MATH in the proof of REF . |
math-ph/9906019 | MATH being connected, the result will follow from REF once we show that MATH . But if MATH, MATH and since we have a coherent choice of components, MATH giving an inclusion. The reverse inclusion is trivial, completing the proof. |
math-ph/9906019 | Let MATH denote the projection onto MATH; then the conditional expectation MATH of MATH onto MATH may either be defined by integrating over the action of MATH or by MATH . Now MATH . Since MATH is cyclic and separating for each MATH and MATH, MATH . Now using the fact that MATH is separating for each MATH and that MATH is path-connected, we obtain MATH since MATH satisfies twisted duality. |
math-ph/9906019 | Any two points of MATH are contained in elements of MATH so if this is connected and each of its elements are path - connected the two points can be joined by a path in MATH. Conversely, given MATH, there is a path in MATH beginning in MATH and ending in MATH, if MATH is pathwise connected. Since MATH is a base for the topology, it is easy to construct a path in MATH joining MATH and MATH. |
math-ph/9906019 | If MATH is a union of components and MATH with MATH then MATH so MATH and MATH is a union of components. Conversely, if each MATH is a union of components and MATH with MATH, then MATH for some MATH. But MATH is a sieve so MATH and MATH. Since MATH is a union of components, MATH so MATH is a union of components. Now MATH is a component if any given pair MATH and MATH can be joined by a path in MATH. But MATH being connected, we may as well suppose MATH and MATH have an upper bound MATH. If MATH is a component, MATH and MATH can even be joined by a path in MATH, completing the proof of the lemma. |
math-ph/9906019 | Since MATH is connected, it suffices to prove the result when the path MATH is a MATH-simplex MATH with MATH. But then, MATH. |
math-ph/9906019 | We pick unitaries MATH, MATH, as above, for each object MATH of Rep-MATH. Given an arrow MATH in that category, we define for MATH, MATH . Then MATH is a faithful MATH-functor and our computations above show that it is full. Hence, it remains to show that each object MATH of MATH is equivalent to an object in the image of MATH. We show this by constructing a representation MATH. We pick unitaries MATH, MATH, on MATH such that MATH and define MATH . This is well defined since MATH is connected and for any path MATH with MATH, MATH we have MATH. Furthermore, the definition respects the net structure since MATH . Hence we get a representation of the net MATH, trivial on the covering by construction, and MATH is an associated REF-cocycle. This completes the proof. |
math-ph/9906019 | Let MATH, MATH be unitaries realizing the equivalence of MATH and MATH on MATH. Then MATH, MATH is an associated object of MATH. Since each MATH is connected, MATH is at the same time an object of MATH by REF . If we define MATH this gives a well defined element of Rep-MATH just as in the proof of REF . Furthermore, MATH obviously extends MATH by the choice of the MATH. If we make another choice MATH of the MATH then MATH so that MATH remains unchanged and is consequently the unique extension of MATH to an object of Rep-MATH. Passing to the extensions does not change the intertwiners by REF . |
math-ph/9906019 | Since MATH is a group of isometries leaving MATH invariant, MATH . |
math-ph/9906019 | By assumption, we have MATH, and MATH. Since MATH is part of a NAME surface, it follows that MATH. Hence MATH. Now define MATH, MATH. Then MATH since MATH (see CITE), and MATH. Therefore we obtain MATH where the last equality is a consequence of the fact that MATH and MATH are disjoint open subsets of a NAME surface. The boundary of MATH and MATH is in both cases the smooth manifold MATH . Hence MATH and MATH are diamonds, and since MATH and MATH are disjoint and their union yields MATH up to the common boundary MATH of MATH and MATH, this entails MATH and MATH. Now we define the following sets: MATH, MATH, MATH, MATH. One can see that MATH, for there would otherwise be causal curves joining pairs of points on MATH and this is excluded. It follows that MATH is the union of three disjoint parts, and MATH. The common boundary of MATH and MATH is the smooth manifold MATH, implying that MATH is a diamond. Moreover, it is obvious that MATH, MATH, and by standard arguments it follows that MATH and MATH. Let us check that MATH. First we notice that MATH is fairly obvious (MATH is causally closed, that is, MATH, and MATH is an acausal hypersurface in MATH), and so is MATH, implying MATH. To show the reverse inclusion it is sufficient to prove that MATH. We have MATH, and MATH and MATH imply that MATH. Now consider an arbitrary past-directed causal curve MATH starting at some point on MATH. For MATH to meet MATH, it must intersect MATH. However, any intersection of MATH with MATH must be contained in MATH since MATH is past-directed and we have seen that MATH. Thus, since only the part of MATH lying in the causal past of its intersection with MATH can enter MATH, MATH never meets MATH, showing that MATH. Therefore MATH is a diamond. An analogous argument works for MATH. |
math-ph/9906019 | We shall only give the proof of the first equality, since the remaining cases are completely analogous, requiring some largely obvious notational changes. We recall that MATH for any MATH, and also the notation MATH, MATH, MATH and MATH used in the proof of REF . Then we define the subsets MATH, MATH and MATH, and analogous sets with MATH replaced by MATH. Next, we define MATH, and aim at demonstrating that this set is a NAME surface. It is fairly obvious that MATH is achronal, that is, MATH. It is also not difficult to check that MATH where the sets forming the union are pairwise disjoint except for the intersection MATH. Now let MATH be an arbitrary endpointless causal curve in MATH. If MATH enters MATH or MATH, it must intersect MATH or MATH, hence MATH. Suppose that MATH enters MATH. Since MATH is past-compact, MATH must intersect one of the regions MATH, MATH or MATH, as MATH would otherwise have a past-endpoint. On the other hand, a causal curve without endpoint intersecting MATH can only meet MATH if it intersects MATH, too. Hence, if MATH enters MATH, it must also intersect MATH. Using the same argument with obvious modifications for the case that MATH enters MATH, one arrives at the same conclusion. This shows that every causal curve without endpoints in MATH intersects MATH, implying MATH, and therefore MATH is a NAME surface. Now we note that MATH for each open neighbourhood MATH of MATH in MATH since MATH has empty intersection with MATH. Thus MATH is an intersection of diamonds. Moreover, whenever MATH is any diamond, it is obvious that we can find some open subset MATH of MATH with piecewise smooth boundary MATH, implying MATH. Hence, to establish the lemma, it suffices to consider diamonds of the form MATH. Obviously, the causal complement MATH of each such MATH may be written as MATH where MATH and MATH are both diamonds. Notice that the union of MATH and MATH over all MATH yield MATH and MATH, respectively. 
Consequently we have MATH where the second equality follows from NAME duality, the third has been justified above, the fourth and fifth equalities use additivity and the last but one again follows from NAME duality. |
math-ph/9906019 | The first part is a variant of NAME 's result CITE, compare also CITE. We supply the relevant argument as REF in Sec. CASE: If REF is added so that MATH intertwines MATH and MATH, the adjoint action of MATH on the net MATH is geometrically correct, that is, MATH, MATH, MATH. Thus the net MATH together with its dilation and translation symmetries coincides with both MATH and MATH (derived from the nets MATH and MATH as in REF ) and their respective translation and dilation symmetries. Thus the corresponding extensions to conformally covariant theories coincide. |
math-ph/9906019 | If MATH is cyclic for MATH for a given MATH, then it is cyclic for MATH, too. However this NAME algebra is invariant under the modular group of MATH, and hence coincides with MATH by NAME 's theorem. Conversely, let MATH be orthogonal to MATH. Then for any MATH we have MATH, hence the function MATH is analytic on the strip MATH and continuous on the boundary. But as we have a MATH-hsm inclusion, it vanishes for negative real MATH and hence everywhere. Thus MATH is orthogonal to MATH, completing the proof. |
math-ph/9906019 | Set MATH . By the previous lemma MATH is cyclic for MATH, MATH, therefore we may apply a result of NAME and NAME CITE to the MATH-hsm inclusion MATH and get a one parameter group of unitaries MATH on MATH with positive generator satisfying MATH . Hence we have MATH and this equation is used to define MATH for negative MATH. We now set MATH and the definition of MATH clearly agrees with REF when MATH. Furthermore, MATH . Moreover, the operators MATH restricted to MATH give the modular conjugation and operator of MATH. Similarly, using the results of CITE anew, the restriction of MATH to MATH (again denoted by MATH) coincides with the unitary group derived from the +hsm inclusion MATH. Now a standard NAME argument, based on the positivity of the generator of MATH, shows that MATH is independent of MATH, while the "modular" NAME argument in REF shows that MATH is independent of MATH. Thus the inclusion MATH is standard. We have proved that MATH for any MATH, and that MATH gives a translation-dilation covariant net of NAME algebras on MATH. Then we get a conformally covariant net by a result of NAME (CITE, see also CITE). |
math-ph/9906019 | REF immediately gives REF . Then let MATH and choose MATH as an eigenvector with eigenvalue MATH of MATH, normalized in such a way that MATH is a rotation through an angle MATH. Then REF is obviously satisfied and choosing MATH we get property (MATH). When MATH, REF follows by REF . |
math-ph/9906019 | Let MATH. By REF , for any MATH there exist MATH, MATH such that MATH and MATH, MATH, in particular MATH. Then MATH where the first equality follows by additivity and the second by duality. |
math-ph/9906019 | Pick a representation MATH equivalent to MATH and localized in MATH. Then arguing as in CITE, we see that MATH and MATH yield conjugate endomorphisms of the NAME algebra of the wedge MATH. The next step is to deduce from REF that MATH and MATH are conjugate representations. This circumstance is obscured by the fact that the product even of localized representations is defined only up to equivalence. For this reason, we use cocycles from MATH instead of representations, recalling REF . We have a faithful tensor MATH-functor MATH taking a cocycle MATH into the associated endomorphism MATH in MATH and an arrow MATH into MATH. If MATH, then there is a tensor MATH-functor from MATH into the category of endomorphisms of the NAME algebra of the wedge MATH, mapping an object MATH onto its restriction to the algebra of the wedge MATH and acting as the identity on arrows. REF means that the composition of these functors is even full. Thus if MATH and MATH are the images of MATH and MATH and are conjugates, MATH and MATH are conjugates. If MATH is a cocycle associated with MATH and MATH is a cocycle associated with MATH, then the endomorphisms of MATH obtained by restriction are conjugates and so are the equivalent endomorphisms MATH and MATH. Hence MATH has a left inverse and finite statistics. |
math-ph/9906019 | CASE: Since MATH is irreducible, MATH is fixed up to a one-dimensional representation. By REF , one-dimensional representations are trivial on MATH, hence MATH does not depend on the chosen representation. Since MATH is the identity element in MATH, the corresponding element in MATH is a central element, so MATH is a scalar by irreducibility. REF shows that MATH does not depend on the representative MATH. REF is obvious. |
math-ph/9906019 | We remark that the existence of conjugates for finite statistics depends on REF and was discussed in the proof of REF. Since we are dealing with a sector, REF implies that MATH is one dimensional and contained in MATH. In fact, let MATH yield MATH in MATH, that is, MATH for MATH, then the cocycle MATH, defined by MATH yields MATH in MATH. Let MATH be defined by MATH, MATH and MATH. A simple computation shows that MATH . Thus by REF, MATH. But MATH. Hence MATH as claimed. Obviously, an isometry MATH in MATH will implement MATH. Now a simple computation shows that MATH. But MATH since MATH and MATH are orthogonal. Hence, we may suppose that MATH, and it differs at most by a sign from the standard implementation of the restriction of MATH to MATH. |
math-ph/9906019 | We have MATH, hence MATH and MATH, therefore MATH . Thus by covariance MATH . |
math-ph/9906019 | By REF , MATH. Furthermore, by the previous lemma, MATH belongs to the same one dimensional space of intertwiners. |
math-ph/9906019 | We first observe that if MATH for some unitary MATH, then MATH and this implies that MATH. Then we note that MATH, where MATH, because MATH establishes an isomorphism between the original structure and the structure transformed by MATH. Since MATH and MATH are associated with the same sector and both localized in MATH and MATH, the result now follows. |
math-ph/9906019 | Choose associated endomorphisms localized in MATH and denote the involutions associated with MATH and MATH by MATH and MATH, respectively. Then from the definition of MATH for the pairs MATH and MATH and the equality MATH one obtains the relation MATH which means that the function MATH is a local groupoid character. Then, making use of REF we get MATH . |
math-ph/9906019 | Since MATH is a local representation, it is locally trivial on the commutator of MATH, hence, by REF , there exists MATH such that MATH for MATH. Because of REF the result follows by applying REF sufficiently often. |
math-ph/9906019 | As in REF , we first show MATH; indeed if MATH is localized in MATH and MATH is a unitary in MATH in End-MATH, then MATH. Since MATH, MATH, for MATH. But MATH by REF . Thus MATH. Now MATH is localized in MATH and, again since MATH, MATH, MATH and MATH are comparable and MATH. Thus MATH, where MATH is the left inverse of MATH. Hence MATH. Now MATH and implements MATH on MATH. MATH is localized in MATH and since MATH, MATH so we have MATH . |
math-ph/9906019 | Take MATH and MATH orthogonal to each other and contained in MATH, and set MATH. Clearly MATH is causally disjoint from MATH and invariant under MATH and MATH. |