math/0106264
Observe first that the duality pairing of MATH with MATH given by MATH establishes a NAME transform isomorphism of MATH to MATH. To see that the action MATH restricted to MATH is conjugate to the semigroup action of MATH on MATH from REF, it is enough to carry out the computation on the characters MATH. Thus MATH yields MATH which, when evaluated at MATH, equals MATH or zero, according to whether MATH holds or not. Hence MATH, where MATH is the obvious inclusion. Since for every MATH there is a positive integer MATH such that MATH, the semigroup MATH of multiplicative positive integers is cofinal in MATH, and from this it follows easily that MATH is dense in MATH, because MATH is dense in MATH. The result now follows from REF.
math/0106265
By viewing MATH as the top right-hand corner in the linking algebra MATH, we can convert MATH into a function with values in a MATH-algebra, and integrate as usual. The inner product on MATH is given by multiplication in MATH, and hence can be pulled in and out of the integral.
math/0106265
Let MATH be a compact neighbourhood of MATH, and denote its interior by MATH. Then by uniform continuity, the function MATH is continuous from MATH into the MATH-algebra MATH. Thus it has an integral MATH in MATH; since evaluation at MATH is a continuous homomorphism on MATH, we have MATH for every MATH. The inclusion MATH of MATH into MATH is a bounded linear map, and hence MATH. Putting REF together shows that the integral MATH is given by the continuous function of compact support defined by the right-hand side of REF.
math/0106265
Let MATH and MATH. Then MATH, and so for every MATH, we have MATH which implies MATH; since we know from REF that MATH, this implies that MATH is an algebra. Similarly, we have MATH which implies that MATH. This proves both that MATH is a MATH-algebra, so its closure MATH is a MATH-algebra, and that MATH has the algebraic properties of an inner product. To show positivity of MATH, let MATH be a faithful nondegenerate representation of MATH, and note that MATH. Since MATH is dense in MATH and MATH is nondegenerate, it is enough to show this when MATH for MATH and MATH. Well, MATH, which is positive because each MATH is positive. Because MATH is continuous in MATH, this calculation also shows that MATH, so MATH is definite.
math/0106265
We first show that MATH as elements of MATH, so that MATH is bounded on MATH. To do this, we again choose a faithful nondegenerate representation MATH of MATH, and it is enough to prove that MATH for all MATH and MATH. We know that MATH is positive in MATH, so MATH for all MATH, MATH and MATH. Integrating this over MATH and pulling the integral inside the inner product gives MATH which is REF. We deduce that MATH is bounded. Since MATH is just another element of MATH, MATH is also bounded. We can therefore show that MATH is adjointable with adjoint MATH by checking that MATH, and this follows easily from MATH. Thus MATH, and it is easy to check that MATH is a homomorphism of MATH-algebras. To verify that MATH is unitary, we let MATH, MATH and calculate: MATH. This calculation shows that MATH is bounded with MATH, as is MATH, and that MATH, so MATH is adjointable with MATH. Since we trivially have MATH on MATH, MATH is a homomorphism into MATH, as claimed. For MATH, the covariance condition MATH follows easily from the formulas and the identity MATH, and it then extends by continuity to MATH.
math/0106265
We begin by noting that if MATH and MATH, then MATH which is finite by REF. Since MATH, it follows that MATH maps MATH into MATH. To see that MATH is isometric, we fix two vectors MATH and MATH in MATH, and compute: MATH. Thus MATH extends to an isometry on MATH. Now for MATH, we have MATH which proves the first intertwining relation. Checking that MATH is even easier.
math/0106265
For MATH, MATH and MATH, we have MATH. At this stage we want to whip the MATH past MATH: then the integral would be that defining MATH, and we'd be done. Unfortunately, the resulting integral converges in the norm coming from the MATH-valued inner product, so we have to work to pull it through a balanced tensor product defined using the MATH-valued inner product. The MATH-valued integral in REF is characterized by its inner products with vectors of the form MATH for MATH: MATH. The two elements MATH and MATH of MATH are there to ensure that the integrand in this double integral is integrable on MATH, so that we can apply NAME's Theorem to continue: MATH. We can now go backwards through the previous analysis to see that MATH and the result follows.
math/0106265
For the first identity, we compute MATH. The second identity follows from a simple algebraic manipulation.
math/0106265
Because MATH belongs to MATH, the formulas in REF show that MATH is a MATH-subalgebra of MATH. We know from REF that the MATH-homomorphism MATH restricts to a MATH-homomorphism of MATH into MATH such that MATH, and MATH gives the required action of MATH on MATH: MATH is by REF. For any bounded operator MATH, we have MATH; because MATH is isometric and MATH is decreasing for the reduced norm, we have MATH, and the inequality follows.
math/0106265
We know from REF that MATH; since the left action of MATH satisfies MATH, it follows that MATH for all MATH. REF also shows that MATH, so MATH has the required algebraic properties. To see positivity, we fix a representation MATH of MATH on MATH, and consider the left-regular representation MATH on MATH given by MATH. Now we let MATH and consider MATH of the form MATH for MATH. Then we have, by a variant of REF, MATH. Since the function MATH belongs to MATH, we can apply NAME's Theorem, and then substitute MATH to reduce this to MATH which has the form MATH for the element MATH of MATH defined by MATH and hence is positive. Since MATH of the given form span a dense subspace of MATH, this proves that MATH is positive. Thus MATH is a pre-inner product; it is definite because the regular representation is faithful on the reduced crossed product where MATH sits. If now MATH, then MATH, so we can repeat the calculation of the previous paragraph to see that MATH which is positive because MATH acts as bounded operators on MATH. This proves REF.
math/0106265
To make things a little easier on the eye, we shall write MATH. We fix MATH, and compute: MATH using standard properties of MATH-valued integrals. On the other hand, MATH. Because the inside integral converges in MATH, we can pull it through the MATH-valued inner product with MATH; now we have an ordinary MATH-valued integral, and we can pull the automorphisms and representation through to recover MATH. But now we're talking about ordinary scalar-valued integrals; the element MATH is a sum of elements of the form MATH, and for such an element the integrand MATH is integrable on MATH. Thus an application of NAME's Theorem identifies REF with REF, and we are done.
math/0106265
For REF, we need to verify the two items of CITE. Write MATH for MATH, and let MATH and MATH be typical spanning elements of MATH. Then MATH and hence MATH. The function MATH and its product with MATH are in MATH because MATH is proper, so it follows that MATH and MATH are integrable. This gives the first item of CITE. Set MATH. Then MATH and MATH. Now REF says there is a multiplier MATH such that MATH. We claim that MATH multiplies MATH (we already know it multiplies MATH). If MATH, then MATH because MATH. But MATH is back in MATH, so MATH. Thus MATH. We now define MATH, and REF gives the second item of CITE. Notice that by definition of MATH, we have MATH. Since MATH is a sub-module of MATH, it follows from the NAME correspondence that MATH is an ideal of MATH. This gives REF. For REF, we have to verify the three properties of REF for MATH. Since MATH, the integrability properties are clear. So it suffices to check REF and to show that MATH and MATH coincide. If MATH then MATH is in MATH, so MATH is in MATH. Thus MATH multiplies MATH, and MATH has the properties described in REF. But with this definition of MATH we trivially have MATH.
math/0106265
Since MATH, it is easy to see that MATH is a MATH-module; on MATH, the MATH-valued inner product takes values in MATH, and with MATH, MATH becomes a full left NAME MATH-module. The right actions and inner products are already defined; the only thing we need to worry about is whether MATH is full as a NAME MATH-module. So let MATH be the ideal in MATH spanned by the elements MATH. Then MATH which is MATH because MATH is full. We can therefore deduce from the NAME correspondence that MATH. Thus MATH is a MATH - MATH imprimitivity bimodule. Similarly, MATH is a MATH - MATH imprimitivity bimodule. Note that the map MATH is bilinear, so there is a well-defined map MATH on the algebraic tensor product MATH satisfying REF, and which is MATH-linear. To see that it is MATH-linear, recall that the action of MATH on MATH is given by the product of the embedded copies in MATH; thus for MATH and MATH, we have MATH. In the same way, the inner product MATH is given by the product MATH in MATH, so MATH, and MATH extends to an isometry of MATH into MATH. To see that MATH has dense range and is therefore onto, note that MATH acts nondegenerately on MATH, so that MATH because MATH is full. Since MATH is a bimodule isomorphism which preserves the MATH-valued inner product, it must preserve the MATH-valued inner product as well.
math/0106265
We know from CITE that MATH is a pre-inner product module over MATH, so we can complete to get a NAME MATH-module. We could also complete using the left-hand inner product, but the two satisfy the relation MATH, so the usual argument shows that the two semi-norms are equal, and the completions are the same. It follows that the left and right actions extend to actions of the full crossed products, and the relation REF implies that the completion is an imprimitivity bimodule.
math/0106265
Let MATH and MATH. For REF we need to verify that MATH and its product with MATH are in MATH. Since the action of MATH is proper with respect to MATH and MATH, it suffices to check that the functions MATH are integrable. We know from REF for the proper NAME equivalence MATH that MATH and its product with MATH are integrable. Thus the integrability of REF follows from the estimate MATH. For REF, note that MATH and MATH and their products with MATH are integrable using REF for MATH and MATH. To verify REF, we write MATH for the generalized fixed-point algebra associated to the action MATH, and MATH, and define MATH. Note that MATH multiplies MATH, and that the right-hand side belongs to MATH because MATH by REF. Thus MATH. Straightforward calculations show that MATH and that MATH for MATH and MATH.
math/0106265
Recall that MATH is a MATH - MATH imprimitivity bimodule. The result follows from the NAME correspondence: MATH is the corresponding ideal in MATH.
math/0106265
As usual, we write MATH for the ideal in MATH spanned by functions of the form MATH where MATH and MATH. Since MATH, two applications of REF imply that MATH. Thus functions of the form REF are dense in MATH if and only if MATH is spanned by the functions MATH if and only if MATH is spanned by the functions MATH. The result follows.
math/0106265
REF were proved earlier, and the statement about saturation is part of REF. We know from REF that MATH and MATH are proper and saturated with respect to MATH and MATH, respectively. Thus REF gives two imprimitivity bimodules MATH where MATH is an ideal of MATH. Since MATH, we can apply REF to the imprimitivity bimodules MATH and MATH. Thus to see the existence of the isomorphism, it suffices to prove that MATH as imprimitivity bimodules; given this, it then follows from the NAME correspondence that MATH because the imprimitivity bimodules in REF based on MATH are completed in the same norm. That MATH and MATH are MATH - MATH and MATH - MATH imprimitivity bimodules, respectively, is proved in REF (after again identifying MATH with MATH). Recall that MATH and that MATH is dense in MATH. Since MATH implies MATH, we obtain that MATH. That MATH is now clear because the inclusion of MATH into MATH preserves both inner products and the MATH-action. Similarly, MATH. Finally, to get the formula for the isomorphism, we need to chase through our identifications. Here, MATH means the left action of MATH on MATH. Thus we have a formula for the action provided MATH which means that MATH must have the form MATH. If so, MATH which gives the right formula.
math/0106265
A compactness argument shows that the action MATH is continuous on MATH for the inductive limit topology, hence on MATH. The action on MATH is continuous by CITE. To see that MATH is continuous on MATH, note first that for any MATH, MATH using REF. Because MATH acts properly, MATH is compact, and we have MATH. Now let MATH, use a compactness argument to see that MATH uniformly with support in a fixed compact neighbourhood as MATH, and take MATH in the inequality to see that MATH as MATH. Thus MATH is continuous on MATH. It is easy to check that the triple MATH has the required algebraic properties, and hence MATH is a NAME equivalence, as claimed. Because MATH vanishes unless MATH is nonempty, the function MATH has compact support. It is continuous because MATH is, and hence the integrability conditions in REF are trivially satisfied. Similar considerations show that REF holds. To verify the existence of the multiplier MATH of MATH, we consider the function MATH defined by the right-hand side of REF. This is a continuous function of compact support from MATH to MATH. Since the inclusion of MATH in MATH is equivariant for the actions MATH, it induces a homomorphism of MATH into MATH; we take MATH to be the multiplier defined by the function in MATH. For MATH, the product MATH is given by the usual convolution formula MATH which has compact support because MATH and MATH do. Because evaluation at MATH is a homomorphism on MATH, it pulls through the integral, and MATH. But now we notice that MATH and changing the order of integration in REF shows that MATH. REF now follows from REF. Pulling the variable MATH through the integrals (see REF) shows that the action of MATH on MATH is the integrated form of the action MATH of MATH by pointwise multiplication on MATH and the action of MATH defined by MATH.
Since the restriction of a representation MATH of MATH to MATH is MATH and MATH is again pointwise multiplication, it follows from REF that the action of MATH is given by the same REF. This implies that MATH also has compact support in MATH, and hence belongs to MATH. This completes the proof that MATH acts properly. To verify that MATH is saturated, we note that MATH. Thus MATH is the subalgebra of MATH spanned by the range of the inner product REF; since the symmetric imprimitivity theorem asserts, inter alia, that this is dense in MATH, we deduce that MATH is saturated with respect to MATH.
math/0106265
Suppose REF is satisfied. Then MATH, so MATH extends to an isometry of MATH onto a closed subspace MATH of MATH; the properties in REF imply that MATH is a MATH - MATH submodule of MATH. Since MATH, the triple MATH extends to an imprimitivity-bimodule homomorphism. Since the ranges of the inner products on MATH span dense ideals in MATH and MATH, and since MATH and MATH are isomorphisms on MATH and MATH, the ranges of the inner products on MATH span dense ideals in MATH and MATH. Thus it follows from the NAME correspondence that MATH. If REF holds, the map MATH extends as before, but now we use that MATH preserves the inner products to see that MATH for MATH of the form MATH, and extend this by continuity to MATH and MATH. Now REF applies.
math/0106265
To identify MATH with MATH, recall from the proof of REF that the multipliers MATH which span MATH are the images of MATH under the natural homomorphism of MATH into MATH induced by the equivariant embedding of MATH in MATH. A faithful nondegenerate representation of MATH extends to a faithful nondegenerate representation MATH of MATH, and extending the regular representation MATH to MATH and restricting gives the regular representation MATH of MATH. Thus MATH embeds faithfully in MATH, with range the generalized fixed-point algebra MATH. When we view MATH as MATH, the right-hand inner product on MATH becomes the inner product REF on MATH, and we have already observed in the proof of REF that the right action is given by REF. The calculation REF shows that the left inner product agrees too, and it follows from REF that MATH as imprimitivity bimodules. The algebra MATH is the dense subalgebra of MATH spanned by the functions of the form MATH for MATH. These functions belong to MATH; we aim to use REF to prove that the inclusion MATH of MATH in MATH extends to an isomorphism of MATH onto the Combes bimodule MATH. For the map MATH we use the identification of MATH with MATH. The action of MATH on MATH is by the usual formula for convolution in MATH, so MATH which agrees with the action of MATH on MATH given by REF. In other words, MATH for MATH. To prove that MATH, we show that MATH for every MATH. From the characterizing property of the MATH-valued inner product associated to the proper action MATH on MATH, and from REF, we have MATH. On the other hand, from REF we have MATH which on substituting MATH gives MATH. Thus MATH preserves the right inner product. The left inner product on MATH is given by MATH which is the left inner product of MATH described in REF. In other words, with MATH defined by MATH, we have MATH.
Since the homomorphisms MATH, MATH certainly extend to isomorphisms on the completions, we can deduce from REF that MATH extends to an isomorphism of MATH onto MATH. It remains to verify the formula for the isomorphism. Using REF and then REF gives MATH when MATH and MATH has the form MATH for MATH and MATH. Inserting the variable MATH (see REF) gives MATH. REF only guarantees this formula for MATH and MATH of a particular form. However, functions of these forms are dense in MATH and MATH for the inductive limit topologies, which are stronger than the topologies arising from the imprimitivity bimodule structure, and we can extend the formulas to these submodules by continuity.
math/0106265
That MATH induces the actions on MATH and MATH is standard. Because MATH is compatible with the maps MATH and MATH CITE, it is easy to check that MATH is compatible with the module actions and inner products. In particular, this implies that each MATH is isometric, and hence extends to an action on MATH implementing the desired NAME equivalence of systems. For the submodule MATH, the functions in REF have finite support, and hence are trivially integrable. For MATH, the function MATH defined by MATH also has finite support; the embedding of MATH in MATH carries this function into a multiplier MATH of MATH which satisfies REF. Thus the action of MATH is proper. To see that it is saturated, we use CITE to see that the function MATH in MATH is given by MATH when MATH and MATH.
math/0106265
Let MATH be the kernel of the quotient map from MATH to MATH. Then, by the NAME correspondence CITE, there are a closed submodule MATH of the bimodule MATH of CITE and an ideal MATH in MATH such that MATH is a MATH - MATH imprimitivity bimodule. In particular, this implies that the semi-norms on MATH induced by the quotient norms on MATH and MATH coincide CITE. The semi-norm coming from the right inner product is that induced by the reduced norm on MATH. However, we know by applying CITE to the bimodule of REF that this coincides with the seminorm induced by the left inner product and the reduced norm on MATH. Thus the seminorm on MATH pulled back from the quotient MATH is the reduced seminorm, the quotient is the reduced crossed product, and MATH is the kernel of the quotient map onto MATH. Since MATH if and only if MATH, the result follows.
math/0106268
Let MATH be the image of a regular function MATH of degree MATH, and let MATH be the tautological subbundle on MATH. Then MATH for integers MATH with sum MATH, and MATH is given by an inclusion MATH, that is, a point MATH is mapped to the fiber over MATH of the image of this bundle map. If MATH are homogeneous coordinates on MATH then MATH has the basis MATH, so each map MATH has the form MATH for vectors MATH (which will depend on the chosen identification of MATH with MATH). The span of MATH must therefore be contained in the span of the set MATH which has cardinality MATH. On the other hand, at least MATH of the integers MATH must be zero, and the kernel of MATH contains the span of the corresponding vectors MATH.
math/0106268
Let MATH. Since MATH, the NAME conditions on MATH imply that MATH for all MATH, which says exactly that MATH belongs to MATH.
math/0106268
The first sum is dictated by the classical NAME formula. Notice that this classical case is equivalent to the following statement. If MATH and MATH are partitions such that MATH then MATH. Now suppose MATH for some MATH and let MATH be a rational curve of degree MATH in MATH which meets each of the varieties MATH, MATH, and MATH for general flags MATH, MATH, MATH. Let MATH be a subspace of dimension MATH which contains the span of MATH. Then MATH lies in the intersection MATH where MATH and MATH are the results of removing the leftmost MATH columns from MATH and MATH, and MATH. Since the flags MATH, MATH, MATH are general, this implies that MATH. Since we also have MATH we deduce that MATH and MATH. Using this, the quantum NAME formula becomes equivalent to the statement that if MATH then MATH if MATH for MATH and MATH for MATH; otherwise MATH. In other words, the quantum NAME formula states that MATH where the right hand side is a coefficient of the classical NAME formula for MATH. If MATH is zero then the space MATH cannot exist, so neither can MATH. On the other hand, if MATH then there exists a unique subspace MATH of dimension MATH which is contained in the intersection MATH. Furthermore, since the flags are general, MATH must lie in the interior of each of these NAME varieties. In particular, each of the spaces MATH and MATH has dimension MATH. Notice also that MATH and MATH. Since MATH we deduce that MATH, so MATH has dimension MATH. We conclude that the only rational curve of degree MATH in MATH which meets the NAME varieties for MATH, MATH, and MATH is the line MATH of MATH-dimensional subspaces between MATH and MATH.
math/0106268
We claim that if MATH for MATH then MATH, that is, no MATH-terms show up when the first product is expanded in the quantum ring. Using induction, this can be established by proving that if MATH then the expansion of MATH involves no MATH-terms and no partitions of lengths greater than MATH. The claim therefore follows from REF. Since the determinant of the quantum NAME formula is a signed sum of products of the form MATH, we conclude from the classical NAME REF that MATH as required.
math/0106268
If MATH and MATH is a partition such that MATH, then any intersection MATH of general NAME varieties in MATH must be empty since MATH. This shows that MATH.
math/0106268
If MATH contains MATH times a NAME class then some curve of degree MATH meets each of the NAME varieties MATH and MATH where MATH and MATH are general flags. If MATH has dimension MATH and contains the span of this curve then MATH lies in the intersection MATH in MATH. In particular this intersection is not empty, which implies that MATH in MATH. Since this is equivalent to MATH in MATH, this proves the inequality MATH of the theorem. Now let MATH be the smallest number for which MATH. Notice that this implies that MATH contains a MATH rectangle, that is, MATH. Set MATH and let MATH be the partition given by MATH for MATH. If the NAME diagram for MATH is put in the upper-left corner of a MATH by MATH rectangle, then MATH is the complement of MATH in the top MATH rows of this rectangle, turned MATH degrees, and MATH is the complement of MATH in the leftmost MATH columns, also turned. It follows from the NAME rule that the product MATH contains the class MATH and that MATH contains MATH. Since the structure constants of MATH are all non-negative, this implies that MATH contains the product MATH. Since MATH by assumption, we conclude that MATH contains MATH times some NAME class. In particular, at least one term of the product MATH must involve a power of MATH which is less than or equal to MATH. This proves the other inequality MATH in the theorem. Notice that the identities we have used in this argument follow easily from REF and the dual version of REF, combined with the classical NAME rule.
math/0106268
Both sides of the identity are zero for MATH. It follows from REF with MATH that MATH, so MATH by REF. For MATH we finally obtain MATH by induction, which proves the lemma.
math/0106273
This is true because the homomorphism (see CITE and CITE) MATH has kernel MATH and sends MATH to MATH.
math/0106273
This can be seen from a calculation using the explicit form of a possible isomorphism (see CITE). Alternatively, one may use the theory of twisting CITE: the condition MATH implies that the cocycle MATH is trivial in MATH, and hence of the form MATH for some MATH. Then MATH has order MATH, from which the lemma easily follows.
math/0106273
Since MATH, we know that MATH. From REF and the isomorphisms MATH, MATH and MATH one concludes MATH. The equivalence of REF follows directly from REF. If MATH is not NAME isomorphic, then MATH must be non-square, and none of the curves MATH, MATH and MATH is isomorphic to MATH since they are all NAME isomorphic. This implies (again using REF) that MATH.
math/0106273
Since MATH is supersingular, it has MATH-invariant MATH (compare CITE). Hence there exists an elliptic curve MATH such that MATH. Multiplication by MATH on MATH is purely inseparable of degree MATH (again CITE), and therefore it factors as MATH for some automorphism MATH of MATH. Assuming MATH for a moment implies MATH, hence MATH and MATH. As MATH is even, MATH has an equation MATH with MATH. From MATH we conclude MATH and therefore MATH. For MATH, the latter fact follows from an easy calculation. Therefore we may consider MATH. Comparing the list in CITE (see also CITE) with the condition MATH leaves us with the cases MATH, so MATH for suitable MATH. Now MATH, which means in particular that MATH. By REF, this implies MATH. Hence also MATH is a square in MATH and thus MATH.
math/0106273
By REF, we have MATH and MATH with MATH. In particular, REF is satisfied with MATH, hence MATH. Let us fix square roots MATH. By CITE, MATH has an equation MATH, so MATH with MATH. Because MATH is supersingular, too, we can conclude MATH. This shows that MATH is a fourth power in MATH. Applying this result to MATH instead of MATH yields MATH. The point MATH has order MATH, namely MATH. As in the proof of REF, the group homomorphism MATH has kernel MATH and sends MATH to MATH. Together with REF we obtain the equivalence MATH. Since we already knew that MATH, the desired result drops out.
math/0106273
Note that MATH and MATH and MATH. Assume MATH for the moment. Then MATH has MATH elements and, using REF, exactly one from each pair MATH, MATH and MATH yields a curve isomorphic to MATH over MATH. The remaining case MATH corresponds to MATH. These three values are different since we assume the characteristic to be MATH. The curves MATH and MATH are obviously isomorphic. Moreover, MATH. Since MATH, one of MATH is a square in MATH, hence MATH, and again we find MATH values of MATH giving the same curve.
math/0106273
This can be shown by naively computing MATH. Then MATH.
math/0106273
Recall that MATH is ordinary, that is, MATH, if and only if (after a suitable choice of coordinates) it has an equation MATH with MATH and MATH, and then MATH (see CITE). Thus we may assume that MATH has such an equation. For MATH, we denote by MATH the elliptic curve with equation MATH. Then MATH if and only if MATH, where MATH denotes the trace from MATH to MATH. Otherwise MATH is a quadratic twist of MATH and MATH. It therefore remains to verify that MATH. Treating the point at infinity and MATH separately, and dividing the equation by MATH, we obtain MATH with MATH which is odd because MATH is an involution on MATH with precisely one fixed point.
quant-ph/0106016
In the NAME space MATH any two normalized pure states can be transformed into one another by a unitary operator MATH, MATH. Also, for any unitary operator on MATH there is a hermitian operator MATH such that MATH. In this proof we think of MATH as the Hamiltonian of a dynamical quantum system and MATH as the initial condition. Now for any state MATH in a neighborhood of the coherent state on the north pole, MATH, there is a Hamiltonian such that after some time MATH the dynamical state of the system is MATH. It suffices to prove the statement on the moment functions. Due to invariance under rotation one may choose the coherent state on the north pole REF with the NAME function MATH. Choose this state as initial state for NAME's equation with some arbitrary Hamiltonian MATH. Using REF we conclude that at MATH the first derivatives of these moments vanish for every Hamiltonian MATH. In more tedious calculations for the second derivatives at MATH one uses REF. For MATH REF the second derivatives of the moments are either negative (positive) or they vanish. The Hamiltonians MATH which lead to the vanishing second derivative either have the coherent state on the north pole as an eigenstate or are locally equivalent to a rotation. Thus in all cases with vanishing second derivative the state remains within the manifold of coherent states for infinitesimal times.
quant-ph/0106016
It suffices to prove the statement on the moments MATH. In REF the moments were calculated in the form MATH with coefficients MATH. Now MATH follows immediately from MATH. The statement on equality follows from the condition MATH.
quant-ph/0106016
The statements on MATH and MATH are equivalent. The statement on MATH can be reduced to the statements on MATH using MATH. For any non-coherent state MATH described by its NAME function MATH we will construct a state MATH such that MATH, using the entropy reducing map MATH defined in REF and rotations MATH. As MATH is a bounded function on a compact manifold (the projective space MATH) and the coherent states provide a local maximum, finding such a state is equivalent to showing that the coherent states are a global maximum and that every global maximum is at a coherent state. In general we have MATH with strict inequality if MATH is a coherent state while MATH is not. So there are no noncoherent states that are mapped to a coherent state by MATH without increasing MATH. It follows that we may assume in the following that all coefficients MATH of MATH are real and nonnegative, MATH (and we still assume that MATH is not a coherent state). Since MATH does not describe a coherent state, there is no MATH such that MATH. Further action of MATH cannot increase the value of MATH. We now show that the combined action of a suitable rotation MATH as given by REF followed by MATH gives the desired result. The action of rotations on NAME functions is described in REF. It suffices to consider the transformation MATH with real MATH, which represents the rotation around the y-axis. The rotated NAME function for an arbitrary MATH is then given by MATH. Such a rotation leaves the moments MATH invariant. We now want to find a value MATH so that the combined action MATH increases these moments by a finite amount when applied to MATH. We consider the coefficient MATH of the rotated NAME function as a function of MATH. Since all the coefficients MATH are positive and MATH when MATH, MATH has a maximum value for some MATH. As MATH is not a coherent state, some MATH with MATH does not vanish and we have MATH.
Also for the rotated NAME function, MATH, there exist some nonvanishing coefficients, MATH, since rotating a non-coherent state still results in a non-coherent state. At least one of the non-vanishing coefficients must be negative, MATH: if no coefficients were negative, there would be a rotation MATH that increases MATH further. Now define MATH . We then have MATH. Only if the signs of the coefficients of MATH are related according to MATH do we have MATH, according to REF . In this case one may replace the rotation MATH by MATH for sufficiently small MATH. For small MATH we have MATH, since MATH has an isolated maximum at MATH and MATH. We can choose MATH small enough that the negative coefficients remain negative (we have shown above that at least one coefficient is negative). Since now MATH and MATH, the relation MATH cannot be fulfilled, and we have MATH.
quant-ph/0106017
First, note that REF is a priori weaker than REF , since REF only requires that an operator map MATH to MATH and MATH to itself. In fact, the two circuits shown in REF both do this, even though they differ on other initial states. To prove MATH, we simply need to notice that the parity gate is a fanout gate going the other way conjugated by a layer of NAME gates, since parity is simply a product of controlled-nots with the same target qubit, and conjugating with MATH reverses the direction of a controlled-not. This is shown in REF . Clearly the number of work bits used to perform either gate will be the same. (We prove this equivalence in greater detail and generality in REF below.) To prove MATH, we use a slightly more elaborate circuit shown in REF . Here we use the identity shown in REF to convert the parity gate into a product of controlled MATH-shifts. Since these are diagonal, they can be parallelized as in CITE by copying the target qubit onto MATH work bits, and applying each one to a different copy. While we have drawn the circuit with two fanout gates, any gate that satisfies the conditions in REF , and its inverse after the MATH-shifts, will do. Finally, MATH is obvious.
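The direction-reversal identity invoked here can be checked numerically. The sketch below, a minimal illustration and not code from the source, verifies that conjugating a fanout gate (controlled-nots sharing a control) with a layer of Hadamard gates yields the parity gate (controlled-nots sharing a target); the helper `cnot` and the convention that qubit 0 is the most significant bit are our assumptions.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def cnot(n, control, target):
    """CNOT on n qubits as a 2^n x 2^n permutation matrix (qubit 0 = MSB)."""
    dim = 2 ** n
    M = np.zeros((dim, dim))
    for x in range(dim):
        y = x ^ (1 << (n - 1 - target)) if (x >> (n - 1 - control)) & 1 else x
        M[y, x] = 1
    return M

n = 3
# fanout: copy qubit 0 onto qubits 1..n-1 (controlled-nots sharing a control)
fanout = np.eye(2 ** n)
for t in range(1, n):
    fanout = cnot(n, 0, t) @ fanout
# parity: XOR qubits 1..n-1 into qubit 0 (controlled-nots sharing a target)
parity = np.eye(2 ** n)
for c in range(1, n):
    parity = cnot(n, c, 0) @ parity

Hn = H
for _ in range(n - 1):
    Hn = np.kron(Hn, H)

# Hadamards on every qubit reverse each controlled-not, turning fanout into parity
assert np.allclose(Hn @ fanout @ Hn, parity)
```

Since the controlled-nots in each gate commute (common control, respectively common target), the order of the products is immaterial.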
quant-ph/0106017
Let MATH, and let MATH be a Boolean matrix on MATH qubits where the zero state has period MATH. For instance, if we write MATH as shorthand for MATH where MATH is the MATH digit of MATH's binary expansion and MATH, we can define MATH so that it permutes the MATH as follows: MATH . Then if we start with MATH work bits in the state MATH and apply a controlled-MATH gate to them from each input, the state will differ from MATH on at least one qubit if and only if the number of true inputs is not a multiple of MATH. (Note that this controlled-MATH gate applies to MATH target qubits at once in an entangled way.) We can then apply a MATH-ary OR of these MATH qubits to the target qubit, that is, a NAME gate with its inputs conjugated with MATH and its target qubit negated before or after the gate. We end by applying the inverse series of controlled-MATH gates to return the MATH work bits to MATH. Now we use REF to parallelize this set of controlled-MATH gates. We can convert them to diagonal gates by conjugating the MATH qubits with a unitary operator MATH, where MATH and MATH is diagonal. If we have a parity gate, we can fan out the MATH work bits to MATH copies each using REF . We can then simultaneously apply the MATH controlled-MATH gates from each input to the corresponding copy, and then uncopy them back. This is shown in REF . For MATH, for instance, MATH, MATH, and MATH. The operators MATH, MATH, and the controlled-MATH gate can be carried out in some finite depth by controlled-nots and one-qubit gates by the results of CITE. The total depth of our MATH gate is a function of these and so of MATH, but not of MATH. Finally, the number of work bits used is MATH as promised.
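The counting idea behind the construction can be illustrated classically: a work register cycles through q states, one step per true input, and returns to the zero state iff the number of true inputs is a multiple of q. A minimal sketch (the function name is ours, not from the source):

```python
def mod_counter_detects(inputs, q):
    """Cycle a q-state counter once per true input; a nonzero final state
    signals that the number of true inputs is not a multiple of q."""
    state = 0
    for bit in inputs:
        if bit:
            state = (state + 1) % q
    return state != 0  # True iff the count of ones is not a multiple of q

q = 3
for n_true in range(10):
    inputs = [1] * n_true + [0] * 4
    assert mod_counter_detects(inputs, q) == (n_true % q != 0)
```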
quant-ph/0106017
We apply the operators MATH, MATH, and MATH in that order to the state MATH, and check that the result has the same effect as MATH. The operator MATH simply applies MATH to each of the MATH qudigits of MATH, which yields, MATH where MATH is a compact notation for MATH, and MATH denotes MATH. Then applying MATH to the above state yields, MATH . By a change of variable, the above can be re-written as, MATH . Finally, applying MATH to the above undoes the NAME transform and puts the coefficient of MATH in the exponent into the last slot of the state. The result is, MATH which is exactly what MATH would yield.
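The three-step pattern (transform, diagonal phase, inverse transform) is the standard Fourier-basis increment. The numpy sanity check below assumes, for concreteness, that the per-digit operator is the discrete Fourier transform on d levels and that the phase is diagonal in that basis; the conjugated phase then shifts a basis state by k modulo d.

```python
import numpy as np

d, k = 5, 3                      # qudit dimension and shift amount (our example values)
omega = np.exp(2j * np.pi / d)
# discrete Fourier transform: F[j, x] = omega^{jx} / sqrt(d)
F = np.array([[omega ** (j * x) for x in range(d)] for j in range(d)]) / np.sqrt(d)
D = np.diag([omega ** (j * k) for j in range(d)])  # diagonal phase

shift = F.conj().T @ D @ F       # maps |x> to |x + k mod d>

for x in range(d):
    e = np.zeros(d); e[x] = 1
    assert np.allclose(shift @ e, np.eye(d)[(x + k) % d])
```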
quant-ph/0106017
By CITE, any fixed dimension unitary matrix can be computed in fixed depth using one-qubit gates and controlled nots. Hence MATH can be computed in MATH, as can MATH. The result now follows immediately from REF .
quant-ph/0106017
First note that MATH and MATH are equivalent, since a MATH gate can be simulated by a MATH gate with MATH extra inputs set to the constant REF. Since MATH and MATH gates are equivalent, we can freely use MATH gates in place of MATH gates and vice versa. It is easy to see that, given a MATH gate, we can simulate a MATH gate. Applying MATH to MATH digits (represented as bits, but each digit only taking on the values REF or REF) transforms, MATH . Now send the bits of the last block REF to a MATH-ary OR gate with control bit MATH (see the proof of REF ). The resulting output is exactly MATH. The bits in the last block can be erased by reversing the MATH gate. This leaves only MATH, MATH work bits, and the output MATH. The converse (simulating MATH given MATH) requires some more work. The first step is to show that MATH can also determine if a sum of digits is divisible by MATH. Let MATH be a set of digits represented as MATH bits each. For each MATH, let MATH REF denote the bits of MATH. Since the numerical value of MATH is MATH, it follows that MATH . The idea is to express this last sum in terms of a set of Boolean inputs that are fed into a MATH gate. To account for the factors MATH, each MATH is fanned out MATH times before plugging it into the MATH gate. Since MATH, this requires only constant depth and MATH work bits (which of course are set back to REF in the end by reversing the fanout). Thus, just using MATH and constant fanout, we can determine if MATH. More generally, we can determine if MATH using just a MATH gate and constant fanout. Let MATH denote the resulting circuit, that determines if a sum of digits is congruent to MATH mod MATH. The construction of MATH is illustrated in REF for the case of MATH. We can get the bits in the value of the sum MATH using MATH circuits. This is done, essentially, by implementing the relation MATH. For each MATH, MATH, we compute MATH (where now the MATH's are digits). 
This can be done by applying the MATH circuits in series (for each MATH) to the same inputs, introducing a REF work bit for each application, as illustrated in REF . Let MATH denote the MATH bit of MATH. For each MATH and for each MATH, we take the AND of the output of the MATH with MATH (again by applying the AND's in series, which is still constant depth, but introduces MATH extra work inputs). Let MATH denote the output of one of these AND's. For each MATH, we OR together all the MATH's, that is, compute MATH, again introducing a constant number of work bits. Since only one of the MATH's will give a non-zero output from MATH, this collection of OR gates outputs exactly the bits in the value of MATH. Call the resulting circuit MATH, and the sum it outputs MATH. Finally, to simulate MATH, we need to include the input digit MATH. To do this, we apply a unitary transformation MATH to MATH that transforms it to MATH. By NAME, et al. CITE (as in the proof of REF ), MATH can be computed in fixed depth using one-qubit gates and controlled NOT gates. Now using MATH and all the other work inputs, we reverse the computation of the circuit MATH, thus clearing the work inputs. This is illustrated in REF . The result is an output consisting of MATH, MATH work bits, and MATH, which is the output of a MATH gate.
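The bit-weighting step (fanning bit j of each digit out 2^j times so that a plain count of ones reproduces the numerical digit sum) can be sketched classically; the function name and example values are ours:

```python
def digit_sum_mod(digits, n_bits, q):
    """Fan bit j of each digit out 2**j times and count ones mod q,
    mirroring the reduction of a digit sum to a Mod_q of single bits."""
    fanned = []
    for x in digits:
        for j in range(n_bits):
            bit = (x >> j) & 1
            fanned.extend([bit] * (2 ** j))   # constant fanout: 2**j copies of bit j
    return sum(fanned) % q

digits = [3, 5, 7, 2, 6]
assert digit_sum_mod(digits, n_bits=3, q=5) == sum(digits) % 5
```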
quant-ph/0106017
By the preceding lemmas, MATH and MATH are MATH-equivalent. By REF , MATH is MATH-reducible to MATH. Hence MATH is MATH-reducible to MATH. Conversely, arrange each block of MATH input bits to a MATH gate as follows. For the control-bit block (which contains the bit we want to fan out), set all but the last bit to zero, and call the last bit MATH. Set all bits in the MATH input-bit block to REF. Now the MATH output of the MATH circuit is MATH, represented as MATH bits with only one possibly nonzero bit. Send this last output bit MATH and the input bit MATH to a controlled-NOT gate. The outputs of that gate are MATH and MATH. Now apply MATH to the bits that were the outputs of the MATH gate (which are all left unchanged by the controlled-not's). This returns all the MATH's to REF except for the control bit which is always unchanged. The outputs of the controlled-not's give the desired MATH. Thus the resulting circuit simulates MATH with MATH work bits.
quant-ph/0106017
By the preceding lemmas, fanout of bits is equivalent to the MATH function. Thus we can do fanout, and hence MATH, if we can do MATH. By the result of REF , we can do MATH if we can do fanout in constant depth. Hence MATH.
quant-ph/0106017
We will abuse notation in this proof and identify the encoding MATH with its value MATH. So MATH and MATH will mean the encoding of MATH and MATH respectively. CASE: To do sums, the first thing we do is form the list MATH. Then we create a flattened list MATH from this with elements which are the MATH's from the MATH's. MATH is in MATH using our definition of sequence from the preliminaries, and closure under sums and MATH to find the length of the longest MATH. To flatten MATH we use MATH to find the length MATH of the longest MATH for MATH. Then using max twice we can find the length of the longest MATH. This will be the second coordinate in the pair used to define sequence MATH. We then do a sum of size MATH over the subentries of MATH to get the first coordinate of the pair used to define MATH. Given MATH, we make a list MATH of the distinct MATH's that appear as MATH in some MATH for some MATH. This list can be made from MATH using sums, MATH and MATH. We sum over the MATH and check if there is some MATH such that the MATH-th element of MATH has same MATH as MATH and if not add the MATH-th elements MATH times REF raised to the appropriate power. We know what power by computing the sum of the number of smaller MATH that passed this test. Using MATH and closure under sums we can compute in MATH a function which takes a list like MATH and a MATH and returns the sum of all the MATH's in this list. So using this function and the lists MATH and MATH we can compute the desired encoding. For products, since the MATH's of MATH are algebraically independent, MATH is isomorphic to the polynomial ring MATH under the natural map which takes MATH to MATH. We view our encodings MATH as MATH-variate polynomials in MATH. We describe for any MATH a circuit that works for any MATH computable MATH such that MATH is of degree less than MATH viewed as a MATH-variate polynomial. 
In MATH we define MATH to consist of the sequence of polynomially many integer values which result from evaluating the polynomial encoded by MATH at the points MATH where MATH and MATH. To compute MATH at a point involves computing a polynomial sum of a polynomial product of integers, and so will be in MATH. Using closure under polynomial integer products we compute MATH where MATH is the sequence projection function from the preliminaries. Our choice of points is what is called by CITE the MATH-th order principal lattice of the MATH-simplex given by the origin and the points MATH from the origin in each coordinate axis. By REF of that paper (proved earlier by a harder argument in CITE) the multivariate NAME Interpolant of degree MATH through the points MATH is unique. This interpolant is of the form MATH where the MATH's are polynomials which do not depend on the function MATH. An explicit formula for these MATH's is given in REF as a polynomial product of linear factors. Since these polynomials are all of degree less than MATH, they have only polynomial in MATH many coefficients and in PTIME these coefficients can be computed by iteratively multiplying the linear factors together. We can then hard code these MATH's (since they don't depend on MATH) into our circuit and with these MATH's, MATH, and closure under sums we can compute the polynomial of the desired product in MATH. CASE: We do sums first. Assume MATH. One immediate problem is that the MATH and MATH might use different MATH's for their denominators. Since MATH is closed under poly-sized maximum, it can find the maximum value MATH to which MATH is raised. Then it can define a function MATH which encodes the same element of MATH as MATH but where the denominators of the MATH's are now MATH. If MATH was MATH we need to compute the encoding MATH. This is straightforward from REF . Now MATH where MATH's are the numerators of the MATH's in MATH. From REF we can compute the encoding MATH of MATH in MATH. 
So the desired answer MATH is in MATH. For products MATH, we play the same trick as in the MATH product case. We view our encodings of elements of MATH as d-variate polynomials in MATH under the map taking MATH to MATH. (Note that this map is not necessarily an isomorphism.) We then create a function MATH which consists of the sequence of values obtained by evaluating MATH at polynomially many points in a lattice as in the first part of this lemma. Evaluating MATH at a point can easily be done using the first part of this lemma. We then use REF of this lemma to compute the products MATH. We then get the interpolant MATH. We non-uniformly obtain the encoding of MATH expressed as an element of MATH, that is, in the form MATH. Thus, the product MATH is MATH . The encoding of the products is the d-tuple given by MATH. Each of its components is a polynomial sum of a product of two things in MATH and can be computed using the first part of the lemma.
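The evaluate-multiply-interpolate pattern used in both product cases can be illustrated in the univariate setting: evaluate the factors at enough points, multiply pointwise, and recover the unique interpolant of bounded degree. A numpy sketch (the polynomials are arbitrary examples of ours):

```python
import numpy as np
from numpy.polynomial import polynomial as P

f = np.array([1.0, 2.0, 0.0, 3.0])   # 1 + 2x + 3x^3 (coefficients, low degree first)
g = np.array([2.0, -1.0, 4.0])       # 2 - x + 4x^2
deg = (len(f) - 1) + (len(g) - 1)    # degree bound for the product

xs = np.arange(deg + 1, dtype=float)        # deg+1 interpolation nodes suffice
vals = P.polyval(xs, f) * P.polyval(xs, g)  # pointwise product of the evaluations
prod = P.polyfit(xs, vals, deg)             # unique interpolant of degree <= deg

assert np.allclose(prod, P.polymul(f, g))   # agrees with direct multiplication
```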
quant-ph/0106017
The proof is by induction on MATH. In the base case, MATH, we do not multiply any layers, and we can easily represent this as a tensor graph of width REF. Assume for MATH that MATH can be written as a color-consistent tensor graph of width MATH and polynomial size. There are two cases to consider: in the first case the layer is a tensor product of matrices MATH where the MATH's are NAME gates, one-qubit gates, or fan-out gates (since MATH); in the second case the layer is a controlled-not layer. For the first case we ``multiply" MATH against our current graph by ``multiplying" each MATH in parallel against the terms in our sum corresponding to MATH's domain, say MATH. If MATH with domain MATH is a one-qubit gate, then we multiply the two amplitudes in each vertical edge of height MATH in our tensor graph by MATH. This does not affect the width, size, or number of paths through the graph. If MATH is a NAME gate, then for each good term MATH in MATH in our tensor graph we add one new term to the resulting graph. This term is added by attaching a horizontal edge going out from the source node of MATH, followed by the new MATH-term, followed by a horizontal edge into the terminal node of MATH. The new term is obtained from MATH by setting to MATH the left-hand amplitudes of all edges in MATH of height between MATH and MATH, and then, if MATH is the amplitude of an edge of height MATH in the new term, we change it to MATH. This new term adjusts the amplitude for the case of a MATH vector in MATH tensored with either a MATH or MATH. This operation increases the width of the new tensor graph by the width of the good MATH-term for each good MATH-term in the graph. Since the original graph has width MATH there are at most this many starting and ending vertices for such terms. So there are at most MATH such terms. Each of these terms has width at most MATH. Thus, the new width is at most MATH . Notice this action adds one new path through the MATH part of the graph for every existing one.
Now suppose MATH is a fan-out gate, let MATH be a good MATH-term in our tensor graph and let MATH be any vertical edge in MATH in MATH. Suppose MATH has amplitude MATH for MATH and amplitude MATH for MATH. In the new graph we change the amplitude of MATH to MATH. We then add a horizontal edge out of the source node of MATH followed by a new MATH-term followed by a horizontal edge into the terminal node of MATH. The new term is obtained from MATH by changing the amplitude for edges in MATH with amplitudes MATH in MATH to MATH. The amplitudes of the non-MATH edges in this term are the reverse of the corresponding edge in MATH, that is, if the edge in MATH had amplitude MATH then the new term edge would have amplitude MATH. The same argument as in the NAME case shows the new width is bounded by MATH and that this action adds one new path through the MATH part of the graph for every existing one. For the case of a controlled-not layer, suppose we have a controlled-not going from line MATH onto line MATH. Let MATH be a new color, anti-color pair not yet appearing in the graph. Let MATH be a vertical edge of height MATH in the graph and let MATH be respectively its color product and two amplitudes. Similarly, let MATH be a vertical edge of height MATH in the graph and MATH be its color product and two amplitudes. In the new graph we multiply MATH times the color product of MATH and MATH and change the amplitude of MATH to MATH. We then add a horizontal edge going out from the starting node of MATH, followed by a vertical edge with values MATH followed by a horizontal edge into the terminal node of MATH. In turn, we add a horizontal edge going out of the starting node of MATH, followed by a vertical edge with values MATH followed by a horizontal edge into the terminal node of MATH. We handle all other controlled gates in this layer in a similar fashion (recall they must go to disjoint lines). 
We add at most a new vertex of a given height for every existing vertex of a given height. So the total width is at most doubled by this operation and MATH. In the MATH case, simulating a layer which is a NAME product of spaced controlled-not gates and identity matrices, notice we would at most add one to the color depth at any place. So if a controlled-not layer is a composition of MATH many such layers it will increase the color depth by MATH. In the MATH case, notice that simulating a single controlled-not we add one new path for each existing path through the graph at each of the two heights affected. This gives three new paths on the whole subspace for each old one. Since we have handled the two possible layer cases and the changes we needed to make only increase the resulting tensor graph polynomially, we thus have established the induction step and REF. For REF , observe for each multi-line gate we handle in adding a layer we at most quadruple the number of paths through the subspace where that gate applies. Since there are at most logarithmically many such gates, the number of paths through the graph increases polynomially.
quant-ph/0106017
Let MATH be a particular graph in the family and let MATH be the vector whose amplitude we want to compute. Assume that all graphs in our family have fewer than MATH colors in any color product and have a width bounded by MATH. We will proceed from the source to the terminal node one height at a time to compute the amplitude. Since the width is MATH the number of MATH-terms is at most MATH and each of these must have width at most MATH. Let MATH (some of which may be zero) denote the amplitudes in MATH of MATH in each of these terms. The MATH are each sums of at most MATH amplitudes times the color products of at most MATH colors and anticolors, so the encoding of these MATH amplitudes is MATH computable. Because of the restriction on the width of MATH there are at most MATH many MATH-terms, MATH many MATH-terms, and MATH many MATH-terms. Fixing some ordering on the nodes of height MATH and MATH let MATH be the amplitude of MATH in the MATH-term with source the MATH-th node of height MATH and with terminal node the MATH-th node of height MATH. The amplitude is zero if there is no such MATH-term. Then the amplitudes MATH of the MATH-terms can be computed from the amplitudes MATH of the MATH-terms using the formula MATH . Thus MATH can be computed from the MATH using a polynomial sized circuit to do these adds and multiplies. Similarly, each MATH can be computed by polynomial sized circuits from the MATH's and so on. Since we have log-color depth the number of terms consisting of elements in our field times color products in a MATH will be polynomial. So the size of the MATH's MATH, MATH will be polynomial in the input MATH. So the size of the circuits for each MATH where MATH and MATH will be polynomial size. There is only one MATH-term in MATH and its amplitude is that of MATH, so this shows it has polynomial sized circuits. 
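The height-by-height recurrence for the amplitudes is a dynamic program: one matrix-vector product per layer, whose result agrees with the sum over all paths of the products of edge amplitudes. A small numerical sketch; the layered graph here is a random example, not one arising from a circuit:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
# a layered graph: 4 layers over 3 nodes per height, with complex edge amplitudes
layers = [rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
          for _ in range(4)]

# dynamic programming: propagate amplitudes one height at a time
amp = np.zeros(3, dtype=complex); amp[0] = 1   # start at node 0
for A in layers:
    amp = A @ amp                              # amp[v] = sum_u A[v, u] * amp[u]
dp_amplitude = amp[0]                          # amplitude of ending at node 0

# brute force: sum the product of edge amplitudes over every path
brute = 0j
for path in product(range(3), repeat=3):       # choices at the 3 interior heights
    nodes = (0,) + path + (0,)
    w = 1 + 0j
    for A, (u, v) in zip(layers, zip(nodes, nodes[1:])):
        w *= A[v, u]
    brute += w

assert np.allclose(dp_amplitude, brute)
```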
For the MATH result, if the number of paths is polynomially bounded, then the amplitude can be written as the polynomial sum of the amplitudes in each path. The amplitude in a path can in turn be calculated as a polynomial product of the amplitudes times the colors on the vertical edges in the path. Our condition on every color appearing at exactly two heights guarantees the color product along the whole path will be REF or REF, and will be zero iff we get a color and its anticolor on the path. This is straightforward to check in MATH, so this sum of products can thus be computed in MATH using REF .
quant-ph/0106017
Given a family MATH of MATH operators and a family MATH of states, we can use REF to get a family MATH of log color depth, color-consistent tensor graphs representing the amplitudes of MATH. Note MATH is also a family of MATH operators, since NAME and fan-out gates are their own inverses, the inverse of any one-qubit gate is also a one-qubit gate (albeit usually a different one), and finally a controlled-not layer is its own inverse. REF shows there is a P/poly circuit computing the amplitude of any vector MATH in this graph. This amounts to calculating MATH . If this is nonzero, then MATH, and we know MATH is in the language. In the MATH case everything is rational, so P/poly can explicitly compute the magnitude of the amplitude and check if it is greater than MATH. The MATH result follows similarly from the MATH part of REF .
quant-ph/0106030
First we prove sufficiency: Let MATH be a set of pure states for which REF is satisfied. Choosing an arbitrary probability distribution MATH, we can form a decomposition MATH representing the state MATH. Every other decomposition MATH of MATH is obtained via a right unitary matrix MATH by MATH . From REF we get MATH with MATH. Summing over MATH and using the right unitarity of MATH, we arrive at MATH which proves the optimality of the decomposition MATH and therefore the optimality of the set MATH. Next we prove necessity: Let MATH be an optimal set of pure states. Then the decomposition MATH representing the state MATH is optimal. We now define a family of unitary MATH-matrices by MATH, where t is a real parameter and MATH is the skew-Hermitian matrix defined by MATH, MATH and MATH otherwise, that is, MATH . Selecting the first MATH columns of MATH, we get a right unitary MATH-matrix MATH. Applying MATH to the optimal decomposition MATH according to REF, we get a new decomposition MATH of MATH which is defined by MATH . This decomposition cannot have less entanglement than the decomposition MATH, which was assumed to be optimal; thus MATH holds true. At MATH=REF, both sides of this inequality are equal. In the following, we will show that the first derivatives with respect to MATH of both sides are also equal at MATH. Hence the inequality must hold for the second derivatives at MATH, which will lead to REF . To proceed, we need the following lemma: Lemma: Let MATH be a family of positive matrices which depend differentiably on MATH. Then we have MATH NAME, if MATH is zero at some point MATH and if MATH is twice differentiable at MATH, we have MATH . Note that REF is always well defined due to the positivity and differentiability of MATH, whereas REF may not be well defined. Let MATH with MATH unitary. Then we have MATH .
In the next-to-last step we have used that MATH, that the trace is invariant under cyclic permutations of the factors, and that MATH and MATH commute. To prove the second part of the lemma, we use the fact that if MATH is zero then MATH is zero, and we can always choose MATH such that MATH. This choice leads to MATH having used MATH, which holds true because MATH. Applying this lemma to MATH, MATH, we get MATH . According to REF , MATH is zero at MATH. Therefore we can calculate the second derivative of MATH at MATH using the second part of the lemma, which leads to MATH . Using REF we can further evaluate the right-hand side. Defining MATH, we arrive at MATH . This expression is always well defined because MATH is well defined, even if MATH has a nonvanishing kernel. This can easily be seen by expressing MATH and MATH in the NAME of MATH and then evaluating MATH in this basis. To calculate the derivatives of MATH we use the fact that MATH which leads to MATH and MATH . Note that the first and second derivatives of MATH are well defined. As we have shown, the first derivatives of the right- and left-hand sides of REF are equal to REF. Therefore MATH necessarily holds true, which in turn can be seen to be equivalent to REF by substituting REF into the right-hand side, as well as taking into account that the left-hand side of REF does not depend on MATH and is therefore zero.
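The trace derivative asserted by the lemma can be sanity-checked numerically. The sketch below takes f(x) = x^3 for concreteness (our choice, not the source's), so the lemma predicts d/dt Tr rho(t)^3 = Tr(3 rho^2 rho'), compared here against a central finite difference:

```python
import numpy as np

rng = np.random.default_rng(1)
def herm(n):
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (A + A.conj().T) / 2

rho0, V = herm(4), herm(4)
rho = lambda t: rho0 + t * V          # a differentiable Hermitian family, rho' = V

# f(x) = x^3, so f'(x) = 3x^2 and the lemma predicts Tr(3 rho^2 V)
f_trace = lambda t: np.trace(np.linalg.matrix_power(rho(t), 3)).real
h = 1e-6
numeric = (f_trace(h) - f_trace(-h)) / (2 * h)   # central difference at t = 0
predicted = np.trace(3 * rho(0) @ rho(0) @ V).real

assert abs(numeric - predicted) < 1e-4
```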
quant-ph/0106070
Using REF we may express MATH as MATH. Let MATH and MATH denote the columns of MATH and MATH respectively. Assume that MATH is finite-dimensional, and let MATH. We distinguish between the case MATH (that is, MATH has at least as many rows as columns), and the case MATH (that is, MATH has more columns than rows). In the case MATH, define MATH; then MATH is the MATH-dimensional subspace spanned by MATH. The projection of MATH onto MATH is MATH . Moreover, the columns of MATH are orthonormal, since its NAME matrix is MATH . In the case MATH, first embed MATH in a MATH-dimensional space MATH in an expanded complex NAME space MATH, and let MATH be an orthonormal basis for MATH of which the first MATH vectors are the MATH-basis. Then proceed as before, using MATH in place of MATH.
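The Gram-matrix step, showing that multiplying a full-column-rank matrix by the inverse square root of its Gram matrix orthonormalizes the columns without changing their span, is easy to verify numerically; the matrix below is a random example of ours, not from the source:

```python
import numpy as np

rng = np.random.default_rng(2)
# a 6 x 3 complex matrix; its columns are linearly independent almost surely
A = rng.standard_normal((6, 3)) + 1j * rng.standard_normal((6, 3))

G = A.conj().T @ A                     # Gram matrix of the columns
w, U = np.linalg.eigh(G)               # G is positive definite
G_inv_sqrt = U @ np.diag(w ** -0.5) @ U.conj().T

Q = A @ G_inv_sqrt                     # columns orthonormalized via G^{-1/2}
P = Q @ Q.conj().T                     # orthogonal projection onto span(A)

assert np.allclose(Q.conj().T @ Q, np.eye(3))   # Gram matrix of Q is the identity
assert np.allclose(P @ P, P)                    # P is idempotent
assert np.allclose(P @ A, A)                    # P fixes the original span
```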
quant-ph/0106084
Let MATH, and denote its matrix elements with respect to the number states as MATH. Assume that MATH for all MATH. Since MATH is a NAME series with absolute convergence, it follows that the NAME coefficient MATH for all MATH and MATH. Due to the uniform convergence, MATH for all MATH, the matrix elements MATH for all MATH. Similarly, one proves that MATH for all MATH, so that MATH. Since REF equals the condition MATH for all MATH, MATH, and MATH, the theorem follows.
quant-ph/0106084
It is well-known that REF equals REF , see, for example, CITE. Let MATH be a sequence of vectors in MATH, and put, for all MATH, MATH. Then MATH where MATH and MATH. Suppose then that MATH and MATH for all MATH. Then, by the NAME inequality [for MATH], the series MATH converges for all MATH and it is nonnegative when MATH. Defining, for all MATH, MATH, we see that MATH is a positive sesquilinear form, that is, MATH is positive semidefinite.
quant-ph/0106084
If MATH, MATH, then MATH for all MATH, and thus MATH. Hence MATH for all MATH. Conversely, if MATH for all MATH, then MATH, with MATH.
quant-ph/0106084
Suppose that MATH is a POM for which MATH. Choosing MATH one sees that MATH where MATH and MATH. Conversely, if MATH, MATH, for some MATH, MATH, then MATH is a covariant positive operator measure. The normalization condition MATH equals the condition MATH which equals the following two conditions: CASE: for MATH, if MATH then MATH, CASE: MATH for all MATH. Thus, if MATH is as above and satisfies REF , then define MATH and the first part of the theorem is proved. Let MATH for all MATH. If MATH, MATH, where MATH and MATH, then it is easy to confirm that MATH is a projection measure. Conversely, suppose that MATH is a covariant normalized projection measure, that is, MATH for all MATH. By direct calculation, one sees that this equals the fact that MATH for all MATH and MATH. Multiply both sides of this equation by MATH and sum up with respect to MATH to get MATH which holds for all MATH and for all MATH for which MATH. Substituting MATH one gets MATH for all MATH. Use REF to get MATH . From MATH we see that MATH. If MATH then REF clearly holds. Since the operator MATH is positive and MATH, it follows that MATH for all MATH. Using REF with MATH and MATH and the fact that MATH it follows that MATH for all MATH. By positivity, if MATH then MATH for all MATH for which MATH. Using this and the condition MATH one gets by direct calculation that MATH, MATH, where MATH. If MATH or MATH this holds trivially.
quant-ph/0106115
First, notice that MATH satisfy the commutation relations MATH where we used the NAME symbol MATH. We proceed by induction on MATH. If MATH, then we have, for MATH: MATH thus REF follow immediately from the basic commutation relations REF . To prove the inductive step, we first show, again by induction on MATH, that: MATH . We will prove only the first of the previous equalities, since the other ones may be obtained in the same way. If MATH, then MATH where to get the last equality we have used REF . Now let MATH: MATH . By the inductive assumption, we have: MATH . Using REF , we obtain, for MATH, MATH and MATH . Now putting together REF , we get: MATH as desired. Thus, we have proved REF . Now notice that, for example, MATH has the same form as MATH except that the MATH's have been replaced by MATH; therefore, using the same arguments as above, one may show that: MATH . More generally, considering the NAME bracket between MATH and MATH, we get MATH. Proceeding this way, we obtain all the matrices MATH and MATH. The matrices in REF form a basis in MATH since the MATH do and the linear transformation in REF is nonsingular. In fact, the corresponding determinant is a NAME determinant which is different from zero because all the MATH's are different from each other. The same is true for the elements in REF which form a basis in MATH and MATH, respectively. Finally, the commutation relations REF follow immediately from REF .
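The bracket manipulations in this and the following proofs rest on the su(2) commutation relations and on the tensor identity [A (x) I, B (x) C] = [A, B] (x) C. A numpy check, assuming the operators involved are the standard Pauli matrices (our reading of the anonymized notation):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

comm = lambda A, B: A @ B - B @ A

# single-spin relations: [X, Y] = 2i Z and cyclic permutations
assert np.allclose(comm(X, Y), 2j * Z)
assert np.allclose(comm(Y, Z), 2j * X)
assert np.allclose(comm(Z, X), 2j * Y)

# a two-spin bracket: [X (x) I, Z (x) Z] = [X, Z] (x) Z = -2i Y (x) Z
lhs = comm(np.kron(X, I2), np.kron(Z, Z))
assert np.allclose(lhs, np.kron(comm(X, Z), Z))
assert np.allclose(lhs, -2j * np.kron(Y, Z))
```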
quant-ph/0106115
We show that all the matrices of the form MATH can be obtained as repeated commutators of MATH, MATH, MATH, MATH, for every MATH. REF gives the result for MATH. We first prove that this is true for MATH as well, and then proceed by induction on MATH. If MATH, we want to show that we can obtain all the matrices of the form MATH, MATH, MATH. From our assumption on the connectedness of MATH, there exists a path joining the node representing the MATH particle and the node representing the MATH-th particle. Let us denote by MATH the length of this path, namely the number of edges between MATH and MATH. We proceed by induction on MATH. If MATH, then at least one among MATH and MATH is different from zero. If MATH, we have: MATH and MATH . Since MATH, from the matrix MATH, using (repeated) NAME brackets with elements MATH and/or MATH, with MATH one can obtain all of the elements of the form MATH, with MATH. If MATH, but MATH, the same can be proved by taking the commutator with MATH first and then the commutator with MATH and analogously, if MATH, by taking the commutator with MATH first and then with MATH. Now, assume it is possible to obtain every MATH for every MATH whose distance is MATH. Let MATH and MATH have a path with distance MATH and let MATH represent a particle/node in between MATH and MATH in the path. Let us also assume just for notational convenience that MATH. From the inductive assumption, we know that MATH and MATH can be obtained for every MATH. We need to show that we can also obtain every MATH for every MATH. Using REF in Appendix MATH, we get MATH and MATH where we have used the following property of the NAME matrices MATH . As before, we can now take repeated NAME brackets of the matrix obtained in REF with matrices of the form MATH and/or MATH, with MATH, to obtain all of the matrices MATH, for MATH. This concludes the proof that every NAME product with two matrices different from the identity can be obtained, namely MATH in the above notations. 
We now show that every matrix MATH can be obtained. Consider the NAME bracket MATH . Both elements MATH and MATH are available because of the inductive assumption. If MATH, we are done; otherwise, the NAME bracket with the matrix MATH or MATH leads to the desired result. This concludes the proof of the Theorem.
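Although the concrete operators are redacted here, the bracket manipulations in this proof are of the standard Pauli-product type, where the commutator of two tensor products of Pauli matrices is again, up to a scalar, a Pauli product. A minimal numerical sketch of one such bracket (assuming the NAME matrices are the Pauli matrices; the actual operators and signs of the original may differ):

```python
import numpy as np

# Pauli matrices (assumed; the redacted "NAME matrices" of the proof).
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron(*ms):
    out = np.array([[1]], dtype=complex)
    for m in ms:
        out = np.kron(out, m)
    return out

def comm(a, b):
    return a @ b - b @ a

# [Z (x) I, X (x) X] = [Z, X] (x) X = 2i * (Y (x) X):
lhs = comm(kron(Z, I), kron(X, X))
rhs = 2j * kron(Y, X)
print(np.allclose(lhs, rhs))  # True
```

Iterating brackets of this kind along a path in the coupling graph is what produces the two-site products the proof needs.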
quant-ph/0106115
First notice that, from REF, it follows immediately: MATH . Since the values MATH are all different, from REF we have that all the elements of the form MATH, MATH, MATH, are in MATH. We can write the matrix MATH as MATH using the fact that MATH if MATH and MATH are in two different connected components. Taking the NAME brackets with elements MATH, MATH, with MATH (here if MATH, we put MATH), one may show, as in the proof of REF , that it is possible to obtain all the elements in MATH, MATH. Moreover from REF , it follows that these and their linear combinations are the only matrices that can be generated by MATH, MATH, MATH, MATH.
quant-ph/0106115
From REF , all we have to show is that, in the given situation, the NAME algebra MATH is a subalgebra of MATH. Rewrite the drift matrix MATH as MATH . From REF , the matrices MATH, MATH and MATH, where MATH is the number of sets MATH, are available to generate the NAME algebra MATH. In particular, since we have assumed that the last MATH sets are singletons, the matrices MATH, MATH, MATH are MATH. Now, assume that in the set MATH there are two elements MATH and MATH such that REF is verified for some MATH, and assume, for the sake of concreteness, that the inequality is verified for the MATH coefficient (minor changes are needed in the other cases). By taking the NAME bracket of MATH with MATH, the first term gives zero, since it does not involve any term in the set MATH (see REF in Appendix MATH and the definition of the MATH's in REF ). The NAME bracket of the second term with MATH gives a matrix which is a linear combination of matrices of the form MATH, MATH and MATH. We call this matrix MATH. Thus, we have MATH . By taking the NAME bracket of REF with MATH, and using REF in Appendix MATH, we obtain MATH . From this matrix, by taking NAME brackets with MATH and/or MATH, MATH, it is possible to obtain all the matrices of the form REF with all the possible combinations of MATH and MATH in place of MATH and MATH, respectively. Using REF, it is not difficult to see that MATH . By taking the NAME bracket of this with MATH, we obtain MATH and, repeating the calculation as in REF , we obtain MATH . Continuing this way, it is possible to obtain all the matrices of the form MATH and, with minor changes in the choice of the NAME brackets, we can obtain MATH . Now consider, for example, the matrices MATH and assume, without loss of generality, that the elements MATH are arranged so that elements that have the same value for MATH appear one after the other in the sum. The associated determinant is (compare the proof of REF ) a NAME determinant, and therefore by appropriate linear combinations we can obtain all the matrices of the form MATH where MATH is a generic subset of MATH such that all the values of MATH are the same, for all the MATH. In particular, if MATH contains a single element, then we place that element in the set of singletons MATH. The other subsets of MATH are arranged in new sets. It is clear that we can repeat this procedure for the other sets MATH, and then for the subsets obtained, as described in REF . If the procedure ends with all the elements in MATH, then we have that MATH is in MATH and the Theorem follows from REF .
quant-ph/0106128
It is clear that we can look at the system REF as having state varying on MATH. The topology on MATH is the one induced by that of MATH. First, we show that the identity MATH is a NAME stable point (see CITE) for the flow MATH on MATH. Assume, by way of contradiction, that MATH is not a NAME stable point. Then there exists a time MATH and an open set MATH, with MATH a ball in MATH of radius MATH, such that MATH is not an element of MATH for any MATH. In particular, for MATH, MATH is never an element of MATH. Fix MATH and consider the ball MATH in MATH. We have, for any MATH, MATH positive integers, MATH . If this were not the case, the distance between MATH, MATH, and MATH would be less than MATH, contradicting what we said before. Therefore we have found an infinite sequence of disjoint open balls of the same radius, which contradicts the compactness of the whole space MATH. Thus MATH is a NAME stable point. Using this fact, we may apply REF to conclude that the set attainable from the identity for system REF is MATH.
quant-ph/0106128
If the system is pure state controllable then MATH is transitive on the complex sphere MATH, therefore its realification REF is transitive on the real sphere MATH. Thus, from REF , it must contain a NAME group locally isomorphic to one of the groups listed in REF . As a consequence, the NAME algebra MATH must contain a NAME algebra isomorphic to one of the corresponding NAME algebras. Assume first MATH odd, then REF are excluded. REF is also excluded since MATH, when MATH (recall that MATH is the realification of MATH). Therefore MATH must be either MATH or MATH in this case. If MATH then MATH so REF coincide. If MATH is even and MATH, then REF is excluded as above and REF through REF all imply that MATH up to isomorphism of MATH, which from REF gives MATH or MATH up to isomorphism (with or without the identity matrix). REF is excluded by REF . This proves that the only possible NAME algebras MATH that correspond to a transitive NAME group are the ones given in the statement of the Theorem. The converse follows from the well known properties of transitivity of MATH and MATH as well as of any group conjugate to them via elements in MATH, and from REF .
quant-ph/0106128
We have the following isomorphisms between the two coset spaces MATH and MATH and the two manifolds MATH and MATH, respectively: MATH where MATH means isomorphic. Therefore, if the two orbits coincide, the two coset spaces must coincide as well. So, in particular, their dimensions have to be equal, which gives REF . Conversely, assume that REF is verified. Then the dimensions of the two coset spaces on the left-hand sides of REF are the same, and so are the dimensions of the manifolds on the right-hand side, namely MATH and MATH. Notice also that these two manifolds are connected, since both MATH and MATH are connected. Since MATH is closed in MATH and therefore compact, from REF we have that MATH is closed in MATH. On the other hand, since the two coset spaces have the same dimensions, MATH is open in MATH. By connectedness, we deduce that the two coset spaces must coincide, and therefore the two orbits coincide as well.
quant-ph/0106128
First notice that MATH is a semigroup. In fact, assume MATH and MATH are in MATH for every MATH. Then there exist two sequences of elements MATH and MATH in MATH converging to MATH and MATH, respectively. The elements of the sequence MATH are all in MATH, and by continuity MATH, so that MATH. Since MATH is arbitrary, this proves MATH. Consider now an element MATH. MATH is also in MATH, for every MATH, and by compactness the sequence of MATH's has a converging subsequence MATH. The sequence MATH converges, as MATH tends to infinity, to MATH, and therefore MATH (compare CITE). Since MATH is a closed subgroup of the NAME group MATH, it is itself a NAME subgroup (CITE pg. REF). To prove connectedness notice that, if MATH is not empty, MATH implies MATH (see REF ). Therefore MATH is the intersection of a decreasing sequence of compact and connected sets MATH (connectedness of these sets was proven in CITE); thus MATH is itself compact and connected (CITE pg. REF).
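The step showing that the closed semigroup contains inverses rests on compactness: powers of an element return arbitrarily close to the identity, so high powers approximate the inverse. A toy illustration of this mechanism (an assumed example, not the paper's system): for a planar rotation by an irrational multiple of 2π, some positive power already approximates the inverse.

```python
import numpy as np

# Rotation by an irrational multiple of 2*pi; its powers are dense in the
# circle group, so some power g^k comes close to g^{-1}.
theta = 2 * np.pi * (np.sqrt(2) - 1)
g = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
g_inv = g.T  # the inverse of a rotation is its transpose

power = np.eye(2)
best = np.inf
for k in range(1, 1001):
    power = power @ g              # power == g^k
    best = min(best, np.linalg.norm(power - g_inv))
print(best < 1e-2)  # a positive power approximates the inverse
```

In the proof the same effect is obtained abstractly: a converging subsequence of powers yields elements of the semigroup arbitrarily close to the inverse, so the closure is a group.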
quant-ph/0106128
It follows immediately from the fact (see REF) that the connected subgroup corresponding to MATH is a subgroup of MATH.
quant-ph/0106128
Assume that the system is state controllable in arbitrary time. This means that the set MATH is transitive on MATH and since MATH, so is MATH. It follows from the assumptions on MATH and the fact that MATH that MATH has to be equal to MATH.
quant-ph/0106151
Compare REF.
quant-ph/0106151
This result is obtained straightforwardly by differentiating with respect to time the expression of REF, in which we use REF for MATH, and identifying the previous definitions.
quant-ph/0106151
It is elementary, using REF with MATH and the cyclic property of the trace.
quant-ph/0106151
Compare REF.
quant-ph/0106158
Let us proceed in steps. Let the quantum system described by MATH be a closed system. Postulate REF states that the physical information we obtain from a quantum system is the probability of getting a certain (eigen)value of an observable, say MATH. If the system is in a state MATH, then by postulate REF and NAME's theorem the probability of measuring the value MATH at the initial time is MATH and at a time MATH is MATH. Now this MATH always admits an ensemble interpretation, that is, it may always be understood that the system is in state MATH with probability MATH and in state MATH with probability MATH. This is proven in the following lemma: MATH always admits an ensemble interpretation for any MATH and MATH. As has been shown in the preceding REF, a statistical mixture of states MATH with probability MATH and MATH with probability MATH is represented by the density operator MATH, whatever MATH and MATH are. On the contrary, given MATH, we cannot conclude that it represents a statistical mixture, but only that, within the set of all possible interpretations, the ensemble interpretation must be present. This stems from the fact that one can always construct an ensemble of two arbitrary states MATH and MATH with arbitrary probabilities MATH and MATH. Finally, it must be remarked that all these deductions rest only on postulates REF. Now, under the ensemble interpretation, the probability of getting the value MATH after a time MATH is given, by the theorem on compound probabilities and the linearity of the trace operation, by MATH for any MATH. We then conclude that MATH under this interpretation. Let now the evolution of the quantum system described by MATH be non-linear, that is, MATH. Then it is clear that MATH does not admit the ensemble interpretation, in clear contradiction to the previous lemma. Thus the evolution of a closed quantum system must be linear. Let now the quantum system described by MATH be an open system. 
Then, as proven in CITE, its corresponding density operator MATH can always be obtained by enlarging its NAME space MATH, adjoining an auxiliary NAME space MATH in such a way that MATH, where MATH is the density operator corresponding to MATH. This settles the impossibility of distinguishing proper from improper mixtures CITE by local operations. The adjoining is made in such a way that the system plus the auxiliary system can be considered a closed system. Then any family of evolution operators MATH defined over MATH can be obtained by tracing out over the auxiliary NAME space MATH: MATH . So we can apply the previous result to MATH; then, by the linearity of MATH and MATH, we have MATH .
quant-ph/0106158
Let a quantum system composed of three spin-REF/REF particles be described by the so-called GHZ state MATH where henceforth MATH will denote the state with positive/negative spin component along the MATH axis. The set of all possible events may be divided into four disjoint sets of results. Note that these four disjoint sets are equiprobable; thus MATH. Let now the three particles be spatially separated from each other, so that by postulate REF no physical phenomenon can interrelate the three parties, that is, no possible physical influence from one particle on the others can be established. Then for event MATH the following chain of implications can be readily proven: MATH where postulates REF have implicitly been used to calculate probabilities (see above), the definition of an EPR element of reality has been applied to establish implication REF, and implication REF follows elementarily from its premises. Similarly, for events MATH, MATH and MATH analogous chains can be written: MATH . Now, since these four cases (each chain corresponds to one MATH) exhaust all possibilities, we arrive at the conclusion that with probability MATH the quantity MATH must have an element of reality, and its value must be MATH . But applying the quantum formalism (postulates REF) we may also calculate MATH, since MATH. Thus the hypotheses are self-contradictory.
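Assuming the standard conventions for the GHZ argument (state (|000⟩+|111⟩)/√2 and Pauli spin observables; the redacted signs of the original may differ), the final contradiction can be checked numerically:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def obs(*ps):
    m = np.array([[1]], dtype=complex)
    for p in ps:
        m = np.kron(m, p)
    return m

# GHZ state (|000> + |111>)/sqrt(2)  (assumed sign convention)
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

def expect(m):
    return float(np.real(ghz.conj() @ m @ ghz))

# Each "mixed" product X.Y.Y, Y.X.Y, Y.Y.X has the definite value -1 ...
vals = [expect(obs(X, Y, Y)), expect(obs(Y, X, Y)), expect(obs(Y, Y, X))]
# ... so EPR elements of reality would force the X.X.X value to equal
# their product, namely -1, while quantum mechanics gives +1:
print(vals, np.prod(vals), expect(obs(X, X, X)))
```

Local elements of reality predict the product of the three mixed values for the all-X observable, i.e. -1, whereas the quantum expectation is +1, reproducing the contradiction in the proof.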
quant-ph/0106160
MATH .
quant-ph/0106160
First we perform the usual success amplification to boost the success probability of the quantum protocol to MATH, increasing the communication to MATH at most, since MATH is assumed to be a constant. Using standard techniques CITE we can assume that all amplitudes used in the protocol are real. Now we employ the following fact, proved in CITE and CITE. The final state of a quantum protocol exchanging MATH qubits on an input MATH can be written MATH where MATH are pure states and MATH are real numbers from the interval MATH. Now let the final state of the protocol on MATH be MATH and let MATH be the part of the state which yields output REF. The acceptance probability of the protocol on MATH is now the inner product MATH. Using the convention MATH this can be written as MATH. Viewing MATH and MATH as MATH-dimensional vectors, and summing their outer products over all MATH, yields a sum of MATH rank-REF matrices containing reals between -REF and REF. Rewrite this sum as MATH with MATH to save notation. The resulting matrix is an approximation of the communication matrix within componentwise error MATH. In the next step, define for all MATH the set MATH of indices of positive entries in MATH, and the set MATH of indices of negative entries of MATH. Define MATH and MATH analogously. We want all rank-REF matrices to have either only positive or only negative entries. For this we split the matrices into REF matrices each, depending on the positivity/negativity of MATH and MATH. Let MATH and analogously for MATH, then set the positive entries in MATH and MATH to REF. Consider the sum MATH . This sum equals the previous sum, but here all matrices are either nonnegative or nonpositive. Again rename the indices so that the sum is written MATH (to save notation). At this point we have a set of rank-one matrices which are either nonnegative or nonpositive with the above properties. We want to round entries and split matrices into uniformly weighted matrices. 
Let MATH denote the number of matrices used until now. Consider the intervals MATH, and MATH, for all MATH up to the least MATH, so that the last interval includes REF. Obviously there are MATH such intervals. Round every positive MATH and MATH to the upper bound of the first interval it is included in, and change the negative entries analogously by rounding to the upper bounds of the corresponding negative intervals. The overall error introduced on an input MATH in the approximating sum MATH is at most MATH . The sum of the matrices is now between MATH and MATH for inputs in MATH and between MATH and MATH for inputs in MATH. Add a rectangle with weight MATH covering all inputs. Dividing all weights by MATH renormalizes again without increasing the error beyond MATH. Now we are left with MATH rank REF matrices MATH containing entries from a MATH size set only. Splitting the rank REF matrices into rectangles containing only the entries with one of the values yields MATH weighted rectangles, whose (weighted) sum approximates the communication matrix within error MATH. In a last step we replace any rectangle with an absolute weight value of MATH by MATH rectangles with weights MATH for MATH. The rectangle weighted MATH can be replaced by a set of rectangles with weight MATH each, introducing negligible error.
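The rounding-and-splitting step can be sketched generically (a hedged sketch with assumed parameter names `eps` and `delta`; the exact interval grid of the redacted text may differ): rounding each weight up in absolute value to the next multiple of delta = eps/m adds at most eps of total error, after which each rectangle splits into equal pieces of weight ±delta.

```python
import math

def round_and_split(weights, eps):
    """Round m weights in [-1, 1] to multiples of delta = eps/m and split
    each into uniformly weighted pieces (total rounding error <= eps)."""
    m = len(weights)
    delta = eps / m
    pieces = []
    for w in weights:
        t = math.ceil(abs(w) / delta)      # number of delta-sized pieces
        pieces += [math.copysign(delta, w)] * t
    return pieces

ws = [0.305, -0.52, 0.071]                 # assumed toy weights
ps = round_and_split(ws, eps=0.03)
err = abs(sum(ps) - sum(ws))
print(len(ps), err <= 0.03)
```

Each original rectangle with weight w becomes ⌈|w|/delta⌉ copies of the same rectangle with weight ±delta, which is the uniformly weighted family the proof continues with.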
quant-ph/0106160
We start with the result of the previous lemma. The obtained set of rectangles approximates the communication matrix within error MATH for some small constant MATH . Call these rectangles MATH and their weights MATH. Doing the same construction for the rejecting part of the final state of the original protocol we get a set of MATH weighted rectangles, such that the sum of these is between REF and MATH on every MATH and between MATH and REF for every MATH. Call these rectangles MATH. Due to the previous construction their weights can be assumed to be also MATH. Note that for all MATH . We construct our new set of rectangles as follows. For every ordered MATH tuple of rectangles containing at least MATH rectangles MATH and at most MATH rectangles MATH we form a new rectangle by intersecting all of the rectangles in the tuple. The weight of the new rectangle is the product of the weights of its constituting rectangles. Now we consider the sum of all rectangles obtained this way. The number of new rectangles is at most MATH. The sum of the weights of rectangles adjacent to a zero input MATH of the function is MATH for some MATH (see for example, REF for the last inequality). The same sum of weights is also clearly at least REF. The sum of the weights of rectangles adjacent to a one input MATH of the function is MATH for some MATH. The same sum of weights is also clearly at most REF. So choosing MATH large enough yields the desired set of rectangles.
quant-ph/0106160
We are given any quantum protocol for MATH with error REF/REF and some worst-case communication MATH. We have to put the stated lower bound on MATH. Following REF we can find a set of MATH weighted rectangles, so that the sum of these approximates the communication matrix up to error MATH for any MATH, where the weights are either MATH, or MATH for some real MATH between REF. We will fix MATH later. Let MATH denote that set. Furthermore let MATH denote the function that maps MATH to MATH. First we give a lower bound on the sum of absolute values of the NAME coefficients in MATH for MATH, in terms of the respective sum for MATH, using the fact that MATH approximates MATH. Obviously MATH. The identity of NAME then gives us MATH . We make use of the following simple consequence of REF . Let MATH, and MATH. Then MATH . Hence MATH . Thus the sum of absolute values of the chosen NAME coefficients of MATH must be large, if there are not too many such coefficients, or if the error is small enough to suppress their number in the above expression. Call MATH, so MATH. Now, due to the decomposition of the quantum protocol used to obtain MATH, the function is the weighted sum of MATH rectangles. Since the NAME transform is a linear transformation, the NAME coefficients of MATH are weighted sums of the NAME coefficients of the rectangles. Furthermore the NAME coefficients of a rectangle are the products of the NAME coefficients of the characteristic functions of the sets constituting the rectangle, as argued in REF. So MATH and MATH . For all rectangles MATH we have MATH by the identity of NAME. Using the NAME inequality REF we get MATH . But according to REF the weighted sum of these values, with weights between -REF and REF, adds up to at least MATH, and so there are at least MATH rectangles; thus MATH. If now MATH, then let MATH, and we get the lower bound MATH. Otherwise set MATH to get MATH as well as MATH.
quant-ph/0106160
It suffices to prove that the nondeterministic rank is polynomial. Define rectangles MATH, which include inputs with MATH and MATH, and MATH, which include inputs with MATH and MATH. Let MATH denote the all one matrix. Then let MATH. This is a matrix which is REF exactly at those inputs with MATH. Furthermore MATH is composed of MATH weighted rectangles and thus the nondeterministic rank of MATH is MATH.
quant-ph/0106160
We already know that the complexity of MATH is MATH. Now consider functions MATH for smaller MATH. The logarithmic lower bound is obvious from the at most exponential speedup obtainable by quantum protocols CITE. Fixing MATH pairs of input variables to the same values leaves us with MATH pairs of free variables, and the function accepts if MATH accepts on these inputs. Thus the lower bound follows.
quant-ph/0106160
The protocol determines (and removes) positions in which MATH are different, until no more such positions are present or until MATH such positions are found; in both cases the function value can be decided. Nisan CITE has given a protocol in which NAME and NAME, given MATH-bit strings MATH, compute the leftmost bit in which MATH differ. The protocol needs communication MATH to solve this problem with error MATH. Hence we can find such a position with error MATH and communication MATH, since MATH. So NAME and NAME can determine with error REF/REF whether there are exactly MATH differences between MATH and MATH, using communication MATH as claimed.
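The control flow of this protocol can be simulated classically (the actual subprotocol of Nisan locates the leftmost difference with hashed prefix comparisons at logarithmic cost; the sketch below replaces those by direct prefix comparisons, so only the logic, not the communication cost, is modeled):

```python
def leftmost_diff(x, y):
    """Index of the leftmost position where x and y differ (None if equal),
    found by binary search on prefixes."""
    if x == y:
        return None
    lo, hi = 0, len(x)          # invariant: a difference lies in x[lo:hi]
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if x[lo:mid] == y[lo:mid]:
            lo = mid            # difference is further right
        else:
            hi = mid            # difference is in the left half
    return lo

def count_diffs_up_to(x, y, d):
    """Count differing positions, removing each one found, and stop once
    more than d differences have been seen."""
    count = 0
    while count <= d:
        i = leftmost_diff(x, y)
        if i is None:
            break
        x, y = x[:i] + x[i + 1:], y[:i] + y[i + 1:]
        count += 1
    return count

print(count_diffs_up_to("10110", "10011", 3))  # 2
```

Repeating the leftmost-difference search at most d+1 times is exactly the loop described above: either the strings become equal, or more than d differences are found, and in both cases the function value is determined.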
quant-ph/0106160
We prove the bound for MATH with range MATH. Obviously the bound itself changes only by a constant factor with this change, and the communication complexity is unchanged. Let MATH be the index of any NAME coefficient of MATH. Let MATH. Basically MATH measures how well MATH approximates MATH, the parity function on the MATH variables which are REF in MATH. Consider the following distribution MATH on MATH: each variable is set to one with probability MATH and to zero with probability MATH. Then every MATH is one, respectively zero, with probability MATH. So under this distribution on the inputs MATH to MATH we get the uniform distribution on the inputs MATH to MATH. We will get an approximation of MATH under MATH with error MATH by taking the outputs of a protocol for MATH under a suitable distribution. We then use a hardness result for MATH given by the following lemma. Let MATH be the distribution on MATH that is the MATH-wise product of the distribution on MATH in which REF is chosen with probability MATH. Then MATH . Clearly, with REF we get that computing MATH with error MATH under the distribution MATH needs quantum communication MATH. Let us prove the lemma. Lindsey's lemma (see, for example, CITE) states the following. Let MATH be any rectangle with MATH entries in the communication matrix of MATH. Then let MATH . The above fact allows us to compute the discrepancy of MATH under the uniform distribution, and will also be helpful for MATH. MATH is uniform on the subset of all inputs MATH containing MATH ones. Consider any rectangle MATH. There are at most MATH inputs with exactly MATH ones in that rectangle. Furthermore, if we intersect the rectangle containing all inputs MATH containing MATH ones in MATH and MATH ones in MATH with MATH, we get a rectangle containing at most MATH inputs. In this way MATH is partitioned into MATH rectangles, on which MATH is uniform and Lindsey's lemma can be applied. 
Note that we partition the set of inputs with overall MATH ones into up to MATH rectangles. Let MATH. The probability of any input with MATH ones is MATH. We get the following upper bound on the discrepancy under MATH: MATH . This concludes the proof of REF . To describe the way we use this hardness result, first assume that the quantum protocol for MATH is errorless. The NAME coefficient for MATH measures the correlation between MATH and the parity function MATH on the variables that are ones in MATH. We first show that MATH can be computed with error MATH from MATH (or its complement). To see this, consider MATH . Without loss of generality assume that the first MATH variables of MATH are its ones. So we can rewrite to MATH . Note that MATH depends only on the first MATH variables. In other words, if we fix a random MATH, the output of MATH has an expected advantage of MATH over a random choice in computing parity on the cube spanned by the first MATH variables. Consequently there must be some MATH realizing that advantage. We fix that MATH, and use MATH (or MATH) to approximate MATH. The error of this approximation is MATH. Next we show that MATH, respectively MATH, is correlated with MATH under some distribution. Let MATH be the distribution resulting from MATH when all MATH and MATH for MATH are fixed so that MATH, and all other variables are chosen as for MATH. Then MATH . Hence computing MATH on MATH with no error is at least as hard as computing MATH on distribution MATH with error MATH, which needs at least MATH qubits of communication due to the discrepancy bound. We assumed previously that MATH is computed without error. Now assume the error of a protocol for MATH is MATH. Then reduce the error probability to MATH by repeating the protocol MATH times and taking the majority output. Computing MATH on MATH with error MATH is at least as hard as computing MATH on distribution MATH with error MATH, which needs at least MATH qubits of communication. 
The error introduced by the protocol is smaller than the advantage of the function MATH in computing MATH. So a lower bound of MATH holds for the task of computing MATH with error MATH. This implies a lower bound of MATH for the task of computing MATH with error REF/REF.
quant-ph/0106160
First note that MATH by REF . So we can read off the bound MATH . The MATH define a probability distribution on the MATH. If we choose a MATH randomly, then the expected NAME weight of MATH is MATH. Also the expectation of MATH is MATH. We use the following lemma. Let MATH be nonnegative and MATH be positive numbers, and let MATH be a probability distribution. Then there is a MATH with: MATH . To see the lemma, let MATH and MATH, and assume that for all MATH we have MATH. Then also for all MATH with MATH we have MATH, and hence MATH, a contradiction. So there must be one MATH such that MATH. Using that MATH in the bound of REF yields the lower bound.
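The lemma is a pigeonhole/averaging argument: if every index satisfied a_i < c·b_i, with c the ratio of the two expectations, then taking expectations would give a strict inequality between two equal quantities. A randomized sanity check, with assumed generic names for the masked quantities:

```python
import random

# Averaging lemma (generic form): for nonnegative a_i, positive b_i and a
# probability distribution p, some index attains a_i / b_i >= E_p[a] / E_p[b].
random.seed(0)
for _ in range(1000):
    n = random.randint(1, 10)
    a = [random.random() for _ in range(n)]
    b = [random.uniform(0.1, 1.0) for _ in range(n)]
    p = [random.random() for _ in range(n)]
    s = sum(p)
    p = [x / s for x in p]                     # normalize to a distribution
    ratio = (sum(pi * ai for pi, ai in zip(p, a))
             / sum(pi * bi for pi, bi in zip(p, b)))
    assert max(ai / bi for ai, bi in zip(a, b)) >= ratio - 1e-12
print("ok")
```

In the proof, a and b stand for the masked per-index quantities and p for the distribution defined by the weights; the index guaranteed by the lemma is the one substituted into the bound.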
quant-ph/0106160
Consider any quantum protocol for MATH with communication MATH. As described in REF , we can find a set of MATH weighted rectangles so that their sum yields a function MATH that approximates MATH entrywise within error MATH. Consequently, due to REF , the sum of certain NAME coefficients of MATH is bounded: MATH . Also MATH due to REF . But on the other hand MATH, which we will use to relate MATH to MATH. We employ the following lemma. Let MATH with MATH. Then MATH . Let us prove the lemma. Define MATH and MATH . Then MATH and MATH . Due to the triangle inequality we have MATH and MATH which implies MATH and MATH . REF is proved. So the distribution given by the squared MATH-Fourier coefficients of MATH is close to the vector of the squared MATH-Fourier coefficients of MATH. Then also the entropies are quite close, by the following fact (see REF). Let MATH be distributions on MATH with MATH. Then MATH. Actually the fact also holds if MATH are subdistributions, that is, if they consist of nonnegative numbers summing up to at most REF. So we get MATH . Remembering that MATH we get MATH . This concludes the proof.
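The entropy-continuity fact invoked at the end is of Fannes type. A numerical check of one common form of the bound (the constant in the redacted statement may differ):

```python
import math

def entropy(p):
    return -sum(x * math.log2(x) for x in p if x > 0)

def binary_entropy(d):
    if d in (0.0, 1.0):
        return 0.0
    return -d * math.log2(d) - (1 - d) * math.log2(1 - d)

# Fannes-type bound: for distributions p, q on n points with l1-distance
# d <= 1/2,  |H(p) - H(q)| <= d * log2(n) + h(d),  h the binary entropy.
p = [0.4, 0.3, 0.2, 0.1]
q = [0.38, 0.32, 0.19, 0.11]
d = sum(abs(x - y) for x, y in zip(p, q))
lhs = abs(entropy(p) - entropy(q))
bound = d * math.log2(len(p)) + binary_entropy(d)
print(lhs <= bound)  # True
```

This is the mechanism by which a small l1-distance between the squared Fourier-coefficient vectors translates into a small entropy difference in the proof.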
quant-ph/0106160
We first consider the entropy bound and proceed similarly as in the proof of REF . Let MATH be the considered function and let MATH be the function computed by a protocol decomposition with error MATH, consisting of MATH rectangles with MATH, for the communication complexity MATH of some protocol computing MATH with error REF/REF. Let MATH denote the communication matrix of MATH divided by MATH, and let MATH be the corresponding matrix for MATH. Using the NAME norm on the matrices, we have MATH. Then the singular values of the matrices are also close, due to the NAME theorem for singular values; see REF. Let MATH be two square matrices with singular values MATH and MATH. Then MATH . As in REF we can use REF to show that the MATH-distance between the vector of squared singular values of MATH and the corresponding vector for MATH is bounded, and REF to show that the entropies of the squared singular values of MATH and MATH are at most MATH apart. It remains to show that MATH is upper bounded by MATH. Due to REF , MATH. Due to the NAME inequality we have MATH . The last step holds since MATH is the sum of MATH rank-REF matrices. We get the desired lower bound. To prove the remaining part of the theorem, we argue as in the proof of REF that the sum of the selected singular values of MATH is large compared to the sum of the selected singular values of MATH, then upper bound the former as above by the rank of MATH and thus by MATH. The remaining argument is as in the proof of REF .
quant-ph/0106160
We change the range of MATH to MATH. Now consider the NAME coefficient with index MATH. MATH for a function MATH that is REF if at least MATH of its inputs are one. Without loss of generality let MATH be an odd integer. Thus any input to MATH with MATH ones is accepted by both MATH and MATH. Call the set of these inputs MATH. Similarly, every input to MATH with an odd number of ones larger than MATH is accepted by both MATH and MATH, and every input to MATH with an even number of ones smaller than MATH is rejected by both MATH and MATH. On all other inputs MATH and MATH disagree. Thus there are MATH more inputs classified correctly by MATH than classified incorrectly. The NAME coefficient MATH is MATH. So the method of REF gives the claimed lower bound.
quant-ph/0106160
First consider MATH. This function is equivalent to a function MATH, in which MATH is REF if the number of ones in its input is MATH, and MATH else. Consider the NAME coefficient for MATH. For simplicity assume that MATH is even and MATH is odd. Then clearly MATH. Thus the method of REF gives us the lower bound MATH. Note that finding this lower bound is much easier than the computations in REF for MATH, since we have to consider only one coefficient. Now consider functions MATH for smaller MATH. The logarithmic lower bound is obvious from the at most exponential speedup obtainable by quantum protocols CITE. Fixing MATH pairs of input variables to ones and MATH pairs of input variables to zeroes leaves us with MATH pairs of free variables, and the function accepts if MATH accepts on these inputs. Thus the lower bound follows.
quant-ph/0106160
Let MATH. We first construct a protocol with public randomness, constant communication, and error MATH, using the NAME principle, and then switch to a usual weakly unbounded protocol (with private randomness) with communication MATH and the same error using a result of NAME. We know that for all distributions MATH there is a rectangle with discrepancy at least MATH. Then the weight of ones is MATH and the weight of zeroes is MATH or vice versa on that rectangle (for some MATH). We take that rectangle and partition the rest of the communication matrix into REF more rectangles. Assign to each rectangle the label REF or REF depending on the majority of function values in that rectangle according to MATH. The error of the rectangles is at most REF/REF. If a protocol outputs the label of the adjacent rectangle for every input, the error according to MATH is only MATH. This holds for all MATH. Furthermore the rectangle partitions lead to deterministic protocols with MATH communication and error MATH: NAME sends the names of the rectangles that are consistent with her input. NAME then picks the label of the only rectangle consistent with both inputs. We now invoke the following lemma due to NAME (as in CITE). The following statements are equivalent for all MATH: CASE: For each distribution MATH there is a deterministic protocol for MATH with error MATH and communication MATH. CASE: There is a randomized protocol in which both players can access a public source of random bits, so that MATH is computed with error probability MATH (over the random coins), and the communication is MATH. So we get a MATH communication randomized protocol with error probability MATH using public randomness. We employ the following result from CITE to get a protocol with private randomness. Let MATH be computable by a probabilistic protocol with error MATH, that uses public randomness and MATH bits of communication. Then MATH. 
We may now choose MATH small enough to get a weakly unbounded error protocol for MATH with cost MATH.
quant-ph/0106160
The lower bound is trivial, since the quantum protocol can simulate the classical protocol. For the upper bound we have to construct a classical protocol from a quantum protocol. Consider a quantum protocol with error MATH and communication MATH. By REF this yields a set of MATH weighted rectangles such that the sum of the rectangles approximates the communication matrix entrywise within error MATH. The weights are reals MATH with absolute value smaller than REF. Label the MATH weighted rectangles with REF and the other rectangles with REF, and add MATH rectangles covering all inputs and bearing label REF. This clearly yields a majority cover of size MATH, which by REF is equivalent to a classical weakly unbounded error protocol using communication MATH.
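The entrywise approximation by weighted rectangles can be illustrated with a small sketch. The rectangles and weights below are invented toy data (the actual bounds on the weights and the error are the elided quantities above); the point is that when the weighted sum approximates each Boolean entry within error below 1/2, thresholding recovers the matrix entry.

```python
# Toy weighted rectangles (rows, cols, weight); illustrative data only,
# not from the paper.
weighted_rects = [
    ({0, 1}, {0, 1}, 0.9),
    ({1}, {0, 1, 2}, 0.2),
    ({0, 1, 2}, {2}, -0.15),
]

def approx_entry(x, y):
    # Sum of the weights of all rectangles containing the input (x, y).
    return sum(w for rows, cols, w in weighted_rects
               if x in rows and y in cols)

def recovered_entry(x, y):
    # If the approximation error is below 1/2 entrywise, thresholding the
    # weighted sum at 1/2 recovers the Boolean matrix entry.
    return 1 if approx_entry(x, y) > 0.5 else 0

print(recovered_entry(0, 0), recovered_entry(2, 2))  # -> 1 0
```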
cs/0107002
See CITE. It follows from the monotonicity of the reduction functions.
cs/0107002
See CITE. It follows from the monotonicity and contractance of the reduction functions, and the well-foundedness of the ordering of the semilattice.
cs/0107002
The proof proceeds by induction. Every atomic operator is contracting and monotonic by definition of a reduction function. Now consider a set MATH of contracting and monotonic composition operators. CASE: By hypothesis the composition operators from MATH are contracting. It is then immediate to prove the contractance of MATH, MATH, and MATH. CASE: Given MATH, suppose that MATH. CASE: Since every MATH is assumed to be monotonic, we have MATH, then MATH, and so on. It follows that MATH is monotonic. CASE: To prove the monotonicity of MATH we consider the function MATH. We first prove that MATH is monotonic (by the third item). It follows that MATH. It is then immediate to prove (by a double inclusion) that the set of fixed-points of MATH coincides with the set of common fixed-points of the functions from MATH. As a consequence we have MATH, which completes the proof. CASE: For MATH, we have MATH by monotonicity of MATH. It follows that MATH, which ends the proof.
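The statement that composition preserves contractance and monotonicity can be checked exhaustively on a small powerset lattice. The two pruning functions and the five-element universe below are hypothetical illustrations, not the paper's reduction functions; the sketch verifies a sequential composition on every pair of comparable subsets.

```python
from itertools import chain, combinations

def subsets(s):
    # All subsets of s, as frozensets (the powerset lattice ordered by inclusion).
    s = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

f = lambda d: frozenset(v for v in d if v % 2 == 0)  # contracting, monotonic
g = lambda d: frozenset(v for v in d if v < 4)       # contracting, monotonic
h = lambda d: g(f(d))                                # sequential composition

lattice = subsets(range(5))
for d in lattice:
    assert h(d) <= d                 # contractance: h(d) is a subset of d
    for e in lattice:
        if d <= e:
            assert h(d) <= h(e)      # monotonicity: d <= e implies h(d) <= h(e)
```

An exhaustive check is of course no proof, but it mirrors the induction step: the composed operator inherits both properties from its components.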
cs/0107002
We prove by induction the equivalence MATH. It obviously holds for an atomic operator MATH. Now consider a set of composition operators MATH on MATH and assume that the equivalence holds for each MATH. If we have MATH for MATH, then it follows that MATH. Taking MATH completes the proof.
cs/0107002
We first prove that MATH is a common fixed-point of the functions from MATH. By REF, it is a fixed-point of each MATH. By REF it follows that MATH for each MATH. This part of the proof is completed since by hypothesis the set of generators covers MATH. We now prove that MATH is the greatest common fixed-point of the functions from MATH. Consider a common fixed-point MATH of the functions from MATH. It suffices to prove that MATH is included in every element of the iteration, namely MATH. This obviously holds for MATH. Suppose now that it holds for MATH, that is, MATH, and assume that MATH for some MATH. By monotonicity of MATH we have MATH. By REF we have MATH, which completes the proof.
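The construction of the greatest common fixed-point by downward iteration can be sketched as follows. This is a minimal illustration on a hypothetical lattice of finite sets, with two invented pruning functions standing in for the reduction functions; it is not the paper's construction, only its shape.

```python
def gfp(functions, top):
    # Iterate all functions from the top element until nothing changes.
    # Each function is assumed contracting (f(d) is a subset of d) and
    # monotonic, so the iteration descends to the greatest common fixed-point.
    current = top
    while True:
        nxt = current
        for f in functions:
            nxt = f(nxt)
        if nxt == current:   # common fixed-point reached
            return current
        current = nxt

# Illustrative reduction functions on subsets of {0, ..., 9}.
f1 = lambda d: {v for v in d if v % 2 == 0}  # keep even values
f2 = lambda d: {v for v in d if v < 6}       # keep small values

print(gfp([f1, f2], set(range(10))))  # -> {0, 2, 4}
```

Any common fixed-point contained in the top element is contained in every iterate, matching the inclusion argument in the second half of the proof.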
cs/0107002
The proof is a direct adaptation of Apt's CITE. To prove termination it suffices to show that the pair MATH strictly decreases at each iteration of the while loop with respect to the ordering MATH, and to note that this ordering is well-founded. The correctness is implied by the invariant of the while loop, namely that every MATH satisfies MATH. It follows that the final domain is a common fixed-point of the functions from MATH (since MATH). The second part of the proof of REF ensures that it is the greatest one included in the initial domain.
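The shape of the while loop in Apt-style generic (chaotic) iteration can be sketched as follows. The code is a simplified illustration, not the paper's algorithm: the dependency bookkeeping is coarsened so that any change re-schedules all other functions, and the pruning functions used in the test are invented.

```python
def generic_iteration(functions, top):
    # Worklist of function indices still to be applied; the loop invariant
    # is that the greatest common fixed-point is contained in `current`.
    pending = set(range(len(functions)))
    current = top
    while pending:
        i = pending.pop()
        reduced = functions[i](current)
        if reduced != current:
            current = reduced
            # A change may re-enable other functions. Refined versions
            # re-add only the functions that depend on the change.
            pending = set(range(len(functions))) - {i}
    return current
```

Termination follows because the domain strictly shrinks whenever the worklist grows, and the inclusion ordering on finite sets is well-founded; on exit every function fixes the current domain.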
cs/0107002
See Apt CITE.
cs/0107002
It suffices to prove that in all these cases MATH is idempotent. The proof is then completed by REF, that is, each element computed by MATH is a common fixed-point of the functions from its generator. CASE: The proof is obvious. CASE: See Apt CITE. CASE: Since the relation of strongness is transitive, MATH is stronger than MATH for MATH. Now it suffices to prove that MATH. We have MATH since MATH is idempotent, MATH is stronger than MATH, and MATH is contracting. This ends the proof if we set MATH. CASE: Let us prove that the function MATH is idempotent. We prove by induction on MATH that MATH is idempotent for MATH and that it is independent of MATH for MATH. This holds for MATH since by hypothesis MATH is idempotent, and MATH and MATH are independent. Now fix MATH, and assume that MATH is idempotent and that MATH and MATH are independent. We prove that the function MATH is idempotent and independent of MATH. Given an element MATH, we have MATH. Hence MATH is idempotent. Now we prove that MATH and MATH are independent, that is, MATH. It suffices to prove that each term of this formula is equivalent to MATH. This obviously holds for the last term. For the first term we have MATH. The independence of each pair MATH ends the proof. For the second term we have MATH. It suffices to remark that MATH and MATH are independent by hypothesis, and then to prove that MATH and MATH commute, that is, MATH. This is easily proved by induction, since MATH by independence of each pair MATH, then MATH by independence of MATH and MATH, and so on.
cs/0107002
CASE: It suffices to show that the operator MATH is idempotent, that is, MATH for all MATH. The proof then follows by REF. Given a particular element MATH, suppose that MATH and that MATH. A simple induction, using the hypothesis of independence, then allows us to rewrite MATH as MATH. CASE: The proof is obvious since, by definition, MATH is a fixed-point of MATH. Hence it is also a fixed-point of MATH.
cs/0107002
The proof is very similar to that of REF.
cs/0107007
Straightforward.