| paper | proof |
|---|---|
math/0008220 | Without loss of generality MATH. Suppose MATH and MATH . Then MATH must be equal to one of MATH or MATH; but then MATH, a contradiction. |
math/0008220 | We will show that the sum MATH converges to the desired integral REF , where the sum is over MATH. Similar arguments hold for MATH, MATH, and MATH. The value REF is then a weighted average of these sums, with weights MATH. Since MATH (see REF ), the weights are bounded in absolute value (less than MATH) and sum to MATH. Therefore the weighted average also converges to REF . We separate the proof into four cases. In the first case, where one of MATH is greater than the sum of the other three, there are no singularities (MATH is never zero), so the summand is a continuous function on MATH. Therefore REF converges to the integral in REF . For the second case, suppose each of MATH is strictly less than the sum of the others, but we are not in the case where both MATH and MATH. Since MATH, none of the four MATH can have a term with MATH within MATH of a singularity. We claim that the sum REF converges to the integral REF . This is proved in the same manner as in REF : one needs only check that the contribution on a small neighborhood of the singularities is small. As before, let MATH be a MATH-neighborhood of a singularity. Ignore for a moment the single term closest to the singularity. REF give us the estimate MATH for constants MATH where MATH. Now MATH for some positive constants MATH. Using MATH for small MATH, where MATH we get the bound MATH . Taking the MATH term out of the summation turns it into a MATH. Summing over annuli concentric about the singularity, we may bound the left-hand side of REF by MATH . The single term closest to the singularity contributes a negligible amount MATH . In the third case, when both MATH and MATH, we have MATH. Then MATH and the other MATH are nonzero. Furthermore the pairs MATH appearing in the products for MATH do not come within distance MATH of the singularity. Since near MATH, MATH is a polynomial taking non-negative values which is zero at MATH, it must have a double root there. 
Thus its derivative with respect to MATH is zero at MATH. We can therefore remove MATH from REF and just deal with the remaining three MATH. When MATH and MATH, REF becomes MATH where we used MATH. This is MATH (sum over annuli as before). Finally we consider the case when one of MATH is equal to the sum of the other three. Suppose first that MATH. Note that MATH is a monotonic increasing function of MATH: the expected number of MATH-edges increases with their relative weight. For each MATH sufficiently small, choose MATH so that MATH . Such a MATH exists because MATH is in the domain of case two, above. By monotonicity MATH for this MATH. Take a sequence of MATH's tending to MATH. On the corresponding sequence of MATH's, MATH tends to MATH, which is equal to the value of the integral MATH (see REF below). The set MATH in this case is obtained from the concatenation of the appropriate subintervals of the sets MATH that are defined for each quadruple MATH. Since MATH we must have that MATH . This (and symmetry) takes care of the remaining cases. |
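The control of the Riemann sums near the singularities in the proof above rests on the standard fact that a locally integrable singularity contributes only a small amount on a small neighborhood, which can be seen by summing over annuli. A minimal numeric sketch, with a hypothetical integrand 1/r in place of the paper's (redacted) summand:

```python
import math

def riemann_sum(f, n):
    """Midpoint Riemann sum of f over [-1, 1]^2 on a mesh of side 1/n."""
    h = 1.0 / n
    total = 0.0
    for i in range(-n, n):
        for j in range(-n, n):
            total += f((i + 0.5) * h, (j + 0.5) * h) * h * h
    return total

def near_singularity(n, delta):
    """Part of the Riemann sum of 1/r coming from the delta-neighborhood of 0."""
    h = 1.0 / n
    total = 0.0
    for i in range(-n, n):
        for j in range(-n, n):
            r = math.hypot((i + 0.5) * h, (j + 0.5) * h)
            if r < delta:
                total += h * h / r
    return total

# 1/r is integrable at the origin in two dimensions, so the Riemann sums
# converge to the integral (over [-1,1]^2 it equals 8*log(1 + sqrt(2))).
f = lambda x, y: 1.0 / math.hypot(x, y)
s_coarse, s_fine = riemann_sum(f, 40), riemann_sum(f, 80)
assert abs(s_fine - 8 * math.log(1 + math.sqrt(2))) < 0.1
assert abs(s_fine - s_coarse) < 0.1

# The delta-neighborhood contributes O(delta), matching the annulus bound
# (the exact integral over r < delta is 2*pi*delta).
assert near_singularity(80, 0.1) < 1.0
assert near_singularity(80, 0.05) < 0.5
```

The sums stabilize as the mesh is refined, and the contribution of a delta-neighborhood of the singularity shrinks linearly with delta, as the annulus-by-annulus bound predicts.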
math/0008220 | The graph MATH has MATH-edges. For MATH let MATH be the MATH-valued random variable indicating the presence of the MATH-th MATH-edge in a random matching. Then MATH, and so MATH . We have MATH and so MATH. In the case when one of MATH is greater than the sum of the others, we know from REF that MATH converges to MATH or MATH; as a consequence MATH, and the covariances converge to MATH also, so MATH as well. Similarly when one of MATH equals the sum of the others; then MATH converges to MATH or MATH along MATH, and so MATH is MATH on this same subsequence. The remaining cases require more work. By (a straightforward extension of) REF, we have MATH where MATH and where MATH are the vertices of the edge associated with MATH (MATH being the left vertex) and MATH the vertices of the edge associated with MATH (MATH being the right vertex). The inverses MATH are only defined when the corresponding MATH are nonzero. When MATH tends to MATH the diagonal entries MATH tend to MATH (see REF ). Writing each MATH determinant in REF as the product of the two diagonal entries minus the product of the off-diagonal entries, the diagonal entries can be taken out of the quotient in REF and contribute MATH. It remains to estimate the contribution of the off-diagonal entries. We can compute the inverses of the MATH as follows. Note that from REF we have MATH where MATH (we won't need the expression for MATH). Recall the definition of the matrix MATH of REF . Let MATH be the vector MATH . We have MATH . From REF we therefore find, for example, MATH and MATH . We also have MATH when MATH or MATH. Similar expressions hold for inverses of MATH. An argument identical to the proof of REF (the only difference is the factor MATH, which has modulus MATH) shows that the parts of the sums REF over a MATH-neighborhood MATH of the singularities are MATH. We will show that for all MATH with MATH (later we will set MATH) the value MATH tends to zero as MATH in MATH. 
Similar results hold for MATH, MATH, and MATH. For simplicity of notation let MATH denote the vertices MATH and MATH. REF has the form MATH where MATH is a smooth function on the complement of the region MATH. We already know that the sum over MATH is MATH, so let us replace MATH by a new function MATH which agrees with MATH outside MATH and is zero on MATH. We sum by parts over the variable MATH to get MATH . Since MATH, the sum over MATH of the exponentials is MATH for each MATH. The difference MATH is bounded by MATH times the supremum of MATH on the complement of MATH, except that at the points adjacent to the boundary of MATH, the difference is bounded by the supremum of MATH near the boundary. One can check (see the proof of REF ) that the sup of MATH on the complement of MATH is MATH, and the supremum of MATH on the boundary of MATH is MATH. Only MATH pairs MATH correspond to points adjacent to the boundary of MATH, so we have MATH . Choosing MATH, we have MATH. A similar argument holds in the case where both MATH and MATH. From REF we have MATH but MATH and MATH are MATH as soon as the edges MATH and MATH are separated by at least MATH in their MATH-coordinates. Therefore (using MATH) MATH . Here the term MATH comes from edges whose MATH coordinate is less than MATH or greater than MATH, and the term MATH consists of the remaining pairs of edges. As a consequence MATH. |
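The summation-by-parts step in the proof above is Abel summation: sum_k a_k b_k = A_{n-1} b_{n-1} - sum_{k<n-1} A_k (b_{k+1} - b_k), with A_k the partial sums; bounded partial sums of the exponentials plus small increments of the smooth factor give the decay. A self-contained sketch with an illustrative exponential kernel and slowly varying weights (not the paper's function):

```python
import cmath, math

def abel_sum(a, b):
    """Summation by parts: sum a_k b_k = A_{n-1} b_{n-1} - sum_{k<n-1} A_k (b_{k+1} - b_k),
    where A_k = a_0 + ... + a_k are the partial sums."""
    partial, A = [], 0
    for x in a:
        A += x
        partial.append(A)
    total = partial[-1] * b[-1]
    for k in range(len(a) - 1):
        total -= partial[k] * (b[k + 1] - b[k])
    return total

theta = 0.3   # partial sums of e^{2 pi i k theta} stay bounded for non-integer theta
n = 1000
a = [cmath.exp(2j * cmath.pi * k * theta) for k in range(n)]
b = [1.0 / (k + 1) for k in range(n)]   # slowly varying weights with small increments

direct = sum(x * y for x, y in zip(a, b))
via_parts = abel_sum(a, b)
assert abs(direct - via_parts) < 1e-9

# Bounded partial sums (|A_k| <= 1/|sin(pi*theta)|) and decreasing positive
# weights give the classical bound M * b_0 on the whole sum.
M = 1.0 / abs(math.sin(math.pi * theta))
assert abs(direct) <= M * b[0] + 1e-9
```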
math/0008220 | Let MATH denote the coefficient of MATH in MATH. As we computed earlier, on the toroidal graph MATH the MATH-probability of a MATH-edge (respectively, MATH-, MATH-, MATH-edge) is given by MATH (respectively, MATH, MATH, MATH). The expected number of MATH-edges is MATH. Let MATH . Let MATH be the corresponding set of matchings, that is, those where the corresponding quadruples MATH are in MATH. Because the variance is MATH, for all MATH there exists MATH such that for MATH greater than MATH we have MATH . Note that if MATH are probabilities and MATH, then MATH (since the left-hand side is maximized when the MATH's are equal). Thus for the entropy we may write MATH but for MATH, we have MATH, and similarly for MATH, so MATH . Note also that MATH. Letting MATH as MATH we have finally that the limiting entropy per dimer is MATH . Without loss of generality we may assume that MATH and MATH; then from REF we have MATH. Plugging in from REF now gives MATH . To prove the equivalence of this formula and REF , we show that they agree when MATH, and show that their partial derivatives are equal for all MATH. REF gives MATH . This is two times the value of the entropy per site given in REF, as it should be since the entropy MATH as we defined it is the entropy per dimer. NAME also shows that this value equals MATH, where MATH is NAME 's constant MATH . From the expansion MATH (see CITE) we have MATH, so the two formulas agree when MATH. It remains to compute the derivatives. REF is symmetric under the full symmetry group MATH, and REF is by definition symmetric under the operations of exchanging MATH and MATH, exchanging MATH and MATH, and exchanging MATH with MATH. These operations are transitive on MATH, so it suffices to show equality of the derivatives with respect to MATH. We have MATH (recall that MATH). On the other hand when MATH is one of MATH we have MATH . 
Taking MATH of REF and recalling that MATH, etc., gives MATH . Since MATH the proof is complete. |
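The inequality invoked for the entropy above (the left-hand side is maximized when the probabilities are equal) is the standard bound H(p) <= log n. A quick numeric check with illustrative random distributions:

```python
import math, random

def entropy(p):
    """Shannon entropy (natural logarithm) of a probability vector."""
    return -sum(x * math.log(x) for x in p if x > 0)

random.seed(0)
n = 8
for _ in range(100):
    w = [random.random() for _ in range(n)]
    s = sum(w)
    p = [x / s for x in w]
    # The entropy of any distribution on n outcomes is at most log n ...
    assert entropy(p) <= math.log(n) + 1e-12

# ... with equality exactly for the uniform distribution.
assert abs(entropy([1.0 / n] * n) - math.log(n)) < 1e-12
```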
math/0008220 | We show that the Hessian (matrix of second derivatives) is negative definite, that is, MATH, and MATH, for all MATH, except at the four points MATH or MATH. A computation using REF gives MATH . A second differentiation yields MATH and this quantity is strictly negative except at the points MATH. A similar calculation holds for MATH: MATH . We have MATH . Finally, MATH which is clearly positive. |
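The negative-definiteness criterion used in this proof is the usual second-derivative test for a symmetric 2x2 Hessian: the (1,1) entry is negative and the determinant is positive. A sketch on a hypothetical function (not the paper's entropy functional):

```python
import math

def is_negative_definite_2x2(a, b, c):
    """Symmetric [[a, b], [b, c]] is negative definite iff a < 0 and det = a*c - b*b > 0."""
    return a < 0 and a * c - b * b > 0

def eigenvalues_2x2(a, b, c):
    """Eigenvalues of the symmetric matrix [[a, b], [b, c]]."""
    t, d = a + c, a * c - b * b
    disc = math.sqrt(t * t - 4 * d)
    return (t - disc) / 2, (t + disc) / 2

# Hypothetical Hessian of f(x, y) = -x**2 - x*y - y**2, a strict maximum at 0:
a, b, c = -2.0, -1.0, -2.0
assert is_negative_definite_2x2(a, b, c)
# The minor criterion agrees with the eigenvalue test: both eigenvalues negative.
assert all(ev < 0 for ev in eigenvalues_2x2(a, b, c))
```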
math/0008222 | The number MATH is an algebraic integer, so its MATH-adic absolute value is at most MATH. To determine how much smaller it is, first notice that MATH . In order for MATH to reduce to MATH modulo MATH, we must have MATH . However, this is impossible unless MATH, because MATH has order MATH in the residue field. Since MATH, the only possibility is MATH. In that case, MATH. In order to have MATH, the second factor would need to reduce to MATH. However, that could happen only if MATH, which is impossible. |
math/0008222 | In this proof, we will write MATH to indicate an unspecified power of MATH. Because the product in question is real and the only real power of MATH is MATH, we will in several cases be able to see that factors of MATH equal MATH without having to count the MATH's. Start by observing that MATH . (To prove the last line, check that MATH and MATH together run over the same range as MATH.) In the factors where MATH, replace MATH with MATH. Now for every MATH, it is easy to check that MATH . When MATH is odd, pairing MATH with MATH in this way takes care of every factor except for a power of MATH, which must be real and hence MATH. Thus, the whole product is MATH when MATH is odd, as desired. In the case when MATH is even, the pairing between MATH and MATH leaves the MATH factor unpaired. The product is thus MATH . Notice that MATH . Hence, since every power of MATH has a square root among the powers of MATH (because MATH is odd), MATH . Substituting this result into REF shows that the product we are trying to evaluate must equal MATH, since the MATH factor must be real and therefore MATH. All that remains is to determine the sign. Since MATH and MATH are reciprocals, it is enough to answer the question for the second one (which is notationally slightly simpler). We know that it is plus or minus a power of MATH, and need to determine which. Since MATH, we have MATH . The product MATH is real, so it must be MATH; to determine which, we just need to determine its sign. For that, we write MATH which is negative iff MATH is odd (assuming MATH). Thus, the sign of the product is negative iff there are an odd number of odd numbers from MATH to MATH, that is, iff MATH (since MATH is even). Therefore, the whole product is MATH iff MATH, and is MATH otherwise. |
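The pairing of the factor for MATH with the factor for MATH in the proof above exploits that zeta^k and zeta^(n-k) are complex conjugates, so each paired factor is real. A classical product evaluation of this type (illustrative, not the paper's product) is prod_{k=1}^{n-1} (1 - zeta^k) = n:

```python
import cmath

def root_of_unity_product(n):
    """prod_{k=1}^{n-1} (1 - zeta^k) for zeta = exp(2*pi*i/n)."""
    zeta = cmath.exp(2j * cmath.pi / n)
    prod = 1 + 0j
    for k in range(1, n):
        prod *= 1 - zeta ** k
    return prod

# Evaluating 1 + x + ... + x^{n-1} = prod_{k=1}^{n-1} (x - zeta^k) at x = 1
# gives prod (1 - zeta^k) = n.
for n in range(2, 20):
    assert abs(root_of_unity_product(n) - n) < 1e-8

# Pairing the factor for k with the factor for n - k (zeta^{n-k} is the
# complex conjugate of zeta^k) makes each paired factor real and positive.
n = 11
zeta = cmath.exp(2j * cmath.pi / n)
for k in range(1, n):
    paired = (1 - zeta ** k) * (1 - zeta ** (n - k))
    assert abs(paired.imag) < 1e-12 and paired.real > 0
```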
math/0008222 | The proof is based on the observation that for any non-zero numbers, the power sums of their reciprocals are minus the NAME coefficients of the logarithmic derivative of the polynomial with those numbers as roots, that is, MATH . To apply this fact to MATH, define MATH . Then MATH . This function is invariant under interchanging MATH with MATH (equivalently, interchanging MATH with MATH), so its NAME coefficients are as well. By the observation above, the coefficient of MATH is MATH. Straightforward calculus shows that these coefficients are polynomials over MATH in MATH, MATH, and MATH. Using the fact that MATH and MATH completes the proof. |
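The observation the proof is based on can be checked directly: for p(x) = prod (x - a_i), the Taylor coefficients of -p'(x)/p(x) at 0 are the power sums sum_i a_i^{-(k+1)} of the reciprocals of the roots. A sketch with hypothetical roots and exact rational arithmetic:

```python
from fractions import Fraction

def poly_from_roots(roots):
    """Coefficients [c0, c1, ...] of p(x) = prod (x - a_i)."""
    p = [Fraction(1)]
    for r in roots:
        shifted = [Fraction(0)] + p                              # x * p(x)
        scaled = [-Fraction(r) * c for c in p] + [Fraction(0)]   # -r * p(x)
        p = [u + v for u, v in zip(shifted, scaled)]
    return p

def series_inverse(p, trunc):
    """Power-series inverse of p at 0 (requires p[0] != 0), truncated."""
    inv = [Fraction(0)] * trunc
    inv[0] = 1 / p[0]
    for k in range(1, trunc):
        s = sum(p[j] * inv[k - j] for j in range(1, min(k, len(p) - 1) + 1))
        inv[k] = -s / p[0]
    return inv

roots = [2, 3, 5]   # hypothetical nonzero roots
p = poly_from_roots(roots)
dp = [k * p[k] for k in range(1, len(p))]   # coefficients of p'(x)

trunc = 6
inv = series_inverse(p, trunc)
# Taylor coefficients of -p'(x)/p(x) at 0:
series = [sum(-dp[i] * inv[k - i] for i in range(min(k, len(dp) - 1) + 1))
          for k in range(trunc)]

# They equal the power sums of the reciprocals of the roots.
for k in range(trunc):
    assert series[k] == sum(Fraction(1, r) ** (k + 1) for r in roots)
```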
math/0008225 | The first step of the proof will be to show that the domain of MATH is indeed the whole space MATH. Using the positivity of the MATH operator we show that MATH which means that MATH. We need NAME 's inequality CITE MATH where MATH, MATH is the dimensionality of the space and for us MATH. Applying this inequality we obtain the following bound MATH which means that MATH. Since MATH inside MATH we conclude that MATH as given by REF is well defined over the whole space. MATH is also continuous. To prove this, let us rewrite the free energy functional as MATH and using the bounds REF it is easy to show that MATH where the constant MATH is given by MATH and MATH. We will also need to show that MATH is coercive MATH . Using REF one may show that indeed MATH and thus the quotient tends to infinity as the norm grows. The continuity and coercivity of MATH allow us to use REF of CITE, which guarantees the existence of at least one minimizer MATH of MATH provided it is a weakly lower semicontinuous and coercive map MATH. So we know that there is at least one minimum, but we do not know where to look for it. Let us now show how the MATH parameter allows us to select different targets for the minimization problem. To do so we define a real one-dimensional function for any given direction MATH . This function is nothing but a polynomial over MATH where MATH and MATH are constants that depend only on the precise direction MATH. By differentiating the polynomial and imposing MATH we find that MATH at the origin for all possible directions. This means that, as we mentioned above, our NAME "multiplier" MATH allows us to avoid the useless solution MATH. Furthermore, we can restrict the location of the minimizer to a surface of a certain norm. By differentiating MATH we obtain MATH . This equation has a single solution which gives us the norm of minimal energy along the MATH-direction MATH , a value which is bounded above by our NAME multiplier MATH. 
From REF it also follows that for any absolute minimum of the functional, MATH, the expected value of the MATH operator must be bounded by the MATH norm MATH . Otherwise the trivial solution MATH would have less energy than MATH. We can thus, instead of working with an unknown surface, delimit the location of the minimum to a set which is given by two inequalities, REF MATH . |
math/0008225 | For the type of spaces that we work with, a set is compact iff it is closed and we can build a MATH-net for any positive number MATH. The MATH-net is a finite set MATH such that for each MATH there is an element MATH verifying MATH. Thus compactness is equivalent to the possibility of building an arbitrarily good approximation of our minimizer using a finite but sufficiently large basis of functions. It is evident that the set MATH is closed in the subspace MATH of MATH. Let MATH be a convergent sequence and let MATH be its limit. Since MATH holds for each element of the sequence, it follows that MATH as well, and thus the limit belongs to MATH. The compactness of the closed set MATH essentially follows from the fact that the eigenstates of MATH form a complete basis of MATH, and that the eigenvalues of MATH form a monotonically increasing unbounded set of positive numbers. These eigenstates are of the form MATH where MATH are NAME 's generalized polynomials, and the corresponding eigenvalues are MATH . Let us choose any natural number MATH such that MATH. We can split the whole space as a direct sum MATH where MATH . The important point is that since MATH is isomorphic to MATH for some natural number MATH, and MATH lies inside a compact MATH-dimensional ball of radius MATH, we can find an MATH-net for MATH. Furthermore, by separating MATH and using the definition of MATH we show that the projection of MATH outside of MATH can be made arbitrarily small MATH . A direct consequence of this is that for MATH, a MATH-net of MATH is also a MATH-net of MATH, which proves the compactness. Finally, by inspecting the eigenvalues of MATH and using Eq. REF it is not difficult to see that the absolute minimum of MATH must lie in MATH. |
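The compactness mechanism above rests on an elementary tail bound: if sum_k lam_k c_k^2 <= E with the eigenvalues lam_k increasing without bound, then the mass outside the first N modes is at most E/lam_N. A numeric sketch with hypothetical oscillator-type eigenvalues lam_k = k + 1/2:

```python
import math, random

random.seed(1)
E = 1.0
lam = [k + 0.5 for k in range(200)]   # hypothetical: increasing, unbounded eigenvalues

# A random coefficient vector on the energy shell sum_k lam_k c_k^2 = E.
g = [random.gauss(0, 1) for _ in lam]
norm = math.sqrt(sum(l * x * x for l, x in zip(lam, g)))
c = [math.sqrt(E) * x / norm for x in g]

for N in (10, 50, 100):
    tail = sum(x * x for x in c[N:])
    # Mass beyond mode N: sum_{k>=N} c_k^2 <= (1/lam_N) sum_k lam_k c_k^2 <= E/lam_N,
    # which tends to zero, so a finite-dimensional ball approximates the whole set.
    assert tail <= E / lam[N] + 1e-12
```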
math/0008229 | By construction the MATH-invariants defining MATH are decomposable (as sums of products of one-dimensional classes), hence they are elements in MATH which restrict to zero on the subgroup MATH. Thus the extension restricted to this subgroup defines an elementary abelian subgroup of rank equal to MATH which is the kernel of the natural projection MATH. This of course has maximal rank in MATH and is central, hence the proposition follows. |
math/0008229 | Let MATH. We know that MATH is NAME, and by a standard result in commutative algebra it follows that under this condition the cohomology will be a free module over any polynomial subring over which it is finitely generated (see CITE). Hence we only need to prove that MATH is a finitely generated MATH-module. To prove this we will use the more geometric language of cohomological varieties (see CITE for background). Let MATH denote the homogeneous hypersurface in MATH (the maximal ideal spectrum for MATH) defined by MATH. Then MATH will be finitely generated over MATH if and only if MATH. If we represent the class MATH by an epimorphism MATH with kernel MATH, then we know that MATH, the variety associated to the annihilator of MATH. Moreover, using basic properties of these varieties, we have that MATH. Now the cohomological variety of a module will be MATH if and only if the module is projective, hence what we need to prove is that the module MATH is projective. However by NAME 's Theorem (see CITE) we know that it is enough to check this by restricting to maximal elementary abelian subgroups; in this case MATH is the only such group and projectivity follows from our hypothesis, as MATH is a polynomial subalgebra over which it is finitely generated (note that by NAME 's detection theorem, the kernel of MATH is nilpotent, hence MATH embeds in MATH under this map). Hence we conclude that MATH form a homogeneous system of parameters and so MATH is free and finitely generated as a module over MATH. |
math/0008229 | Indeed MATH can be expressed as a central extension MATH, which by construction yields a mod MATH LHS spectral sequence that collapses at MATH. |
math/0008229 | We consider the NAME-NAME-NAME spectral sequence with mod MATH coefficients for the group extension MATH . Recall that MATH, where, if MATH denotes the usual NAME operator, MATH for MATH. If MATH is the natural projection, it induces an isomorphism between the subrings of the mod MATH cohomologies generated by REF classes. The MATH-invariants defining the extension MATH are precisely MATH. As a consequence, the differential MATH is zero on the two-dimensional polynomial generators in the fiber (indeed the ideal generated by the transgressions of the one-dimensional classes includes the entire ideal generated by the MATH, hence the NAME of these transgressions are in the ideal already). Now choose MATH to be classes in MATH restricting to the two-dimensional permanent cocycle polynomial classes in the edge of the spectral sequence. These are our desired elements. |
math/0008229 | Let MATH denote the MATH-NAME subgroup of MATH, that is, the kernel of the natural projection MATH. This group fits into a commutative diagram of extensions: MATH . Note that MATH is MATH. As before, the MATH-invariants defining MATH are trivial mod MATH, hence MATH is also an exterior algebra on MATH one-dimensional generators, and the map MATH induces a surjection in mod MATH cohomology. In particular, if we let MATH then we can assume that MATH, MATH. On the other hand we can choose MATH such that MATH. Hence we obtain the following basis for the kernel of MATH in dimension equal to two: MATH. Now consider the mod MATH LHS spectral sequence for the bottom row. Since MATH, the five-term exact sequence associated to this spectral sequence shows that MATH . By comparing dimensions, we see that MATH must act trivially on MATH and hence on MATH. Furthermore, MATH embeds MATH naturally as a subspace of MATH. Since MATH is central in MATH, we have that MATH acts trivially on MATH and we conclude that MATH acts trivially on MATH by the natural MATH-embedding above, and hence MATH acts trivially on all of MATH. Let MATH denote the ideal generated by the MATH-transgressions. Then evidently MATH is generated by the basis of MATH above, which one can easily verify to be a regular sequence in the cohomology of MATH. Next we consider the spectral sequence for MATH given by the middle row of our diagram. We have seen that MATH acts trivially on MATH. Comparing spectral sequences, we see that MATH is generated by transgressions MATH, MATH which restrict to a regular sequence of maximal length in MATH. By REF and the comments immediately following it, MATH where MATH is a polynomial algebra on MATH. From this, it is easy to compute that MATH and hence MATH. Since MATH is concentrated on a single horizontal row, there are no problems in lifting the ring structure to that of MATH. Hence we conclude that MATH . This completes the proof. |
math/0008229 | By a result due to NAME and NAME (see CITE), the LHS spectral sequence associated to the defining extension above collapses at MATH if coefficients are taken in MATH for MATH sufficiently large. The improved lower bound MATH was obtained in CITE. This implies the collapse of the mod MATH spectral sequence; the statement is readily derived from the algebraic interpretation of this MATH-term. |
math/0008229 | Indeed, the integral MATH term can only have MATH-torsion for finitely many primes MATH; the result follows from the mod MATH reduction sequence for MATH. |
math/0008230 | In the quotient we have MATH, while MATH. Also MATH while MATH, which is equal to MATH in the quotient. Thus MATH and these two copies of MATH commute with each other, giving a copy of MATH in the quotient. Next, note that MATH exchanges MATH, MATH, and also exchanges MATH, MATH, hence the two copies of MATH above, and the extension MATH. |
math/0008230 | We note that the lift of MATH is given as MATH . On the other hand, note that MATH commutes with MATH, MATH, and consequently with each of the remaining four generators, while the first two generators commute with the third and fourth, and MATH, MATH, and all three have the central element MATH in common. The verification for the second group is similar. For the third, note that MATH so that MATH has order MATH. Also, conjugation by MATH takes the element to its fifth power, while conjugation by MATH takes it to its inverse. This gives the extension MATH. The final generator can again be chosen as MATH. It remains to check the lift of MATH. Note first that MATH as a subgroup of MATH, while MATH, MATH both commute with the elements of this MATH. On the other hand MATH as well, and the final statement follows. |
math/0008230 | The only thing that needs to be pointed out is that if there were a MATH which did not contain the central element MATH (and hence a MATH obtained by adjoining MATH), then it would project non-trivially to one of the four MATH's in the quotient MATH. But we have identified the lifts of these groups as copies of groups having rank three! Hence, every possible MATH contains MATH, and hence projects to one of the two conjugacy classes of extremal MATH's. Consequently, it must lie in one or the other of the four copies of MATH that we have constructed. |
math/0008230 | We use the description of MATH as MATH where we write MATH . But we can replace this copy of MATH by MATH and MATH is defined as the identity on MATH and the correspondence above on MATH. |
math/0008230 | First we check the result for MATH. We have MATH and this is a presentation of MATH. The same calculation applies to MATH, using the automorphism above. Moreover, MATH commutes with MATH while MATH inverts it. So this change cancels out in the squares for the pair MATH, and we have verified that the groups asserted to be MATH's in fact are. The remaining statements are now easily checked by comparing with the MATH's already constructed above. But this gives a complete list of possible pairs and the result follows. |
math/0008230 | The normalizer of MATH is obtained by adjoining MATH, MATH. Thus there are three degree two extensions of MATH in MATH. The extension by MATH is MATH. Clearly, the extension by MATH does not give a MATH. Finally, consider the extension by MATH. Replace the second MATH by MATH . Then these two copies of MATH commute with each other and their span is MATH. Moreover, MATH exchanges them, giving an isomorphism of this group with a second copy of MATH. |
math/0008230 | For any triple of indecomposable modules MATH, MATH, MATH appearing in the direct sum decompositions of MATH, MATH, and MATH above, we have MATH. Thus MATH. This shows that the map above is an isomorphism on the socle, hence it is injective. The NAME series of the two sides are equal, by a calculation, so we have an isomorphism. |
math/0008230 | Consider the long exact sequence in cohomology arising from the short exact sequence MATH. The maps MATH are just the restriction homomorphisms for MATH, which can be shown to be surjective in all degrees MATH. This implies that MATH is isomorphic to the kernel of the restriction map MATH for all MATH. This means that we can write the NAME series of the cohomology of MATH as MATH . The module generators may be taken to be in degrees MATH, MATH. |
math/0008230 | MATH and MATH are interchanged by an outer automorphism of MATH. |
math/0008230 | There is an exact sequence MATH. |
math/0008230 | There is an exact sequence MATH. |
math/0008230 | There is an exact sequence MATH, and we consider the associated long exact sequence in cohomology. Since MATH, we see that MATH is one-dimensional if MATH. Furthermore, the induced maps MATH are surjective for all MATH. This, plus the fact that the socle of MATH is two-dimensional, determines the NAME series. For the information on the degrees of the generators, we must study the kernel of the map MATH. This kernel is generated by elements in degrees MATH. |
math/0008230 | Use the exact sequence MATH. |
math/0008230 | A computer calculation using MAGMA for MATH shows that through degree REF the coefficients of the polynomial MATH agree with the ranks of the cohomology. Now according to the preceding tables, all algebra generators occur by this degree. Hence using the multiplicative structure of the spectral sequence we infer that it must collapse at MATH, that is, MATH. |
math/0008230 | First it was shown that the elements of MATH are in the kernels of restriction by computing the restriction maps on the elements. Then for each degree MATH the restriction maps MATH for MATH in the list were written as linear transformations, and the intersection of the null spaces was computed to be zero. |
math/0008230 | From the lemma we see that MATH for MATH. But by the NAME series REF we have that MATH has the same dimension as MATH. On the other hand we know that MATH is generated by elements of degree at most MATH. So MATH, since all generators are in MATH. So MATH induces a surjective homomorphism MATH. But again because the NAME series are the same this is an isomorphism. |
math/0008240 | First suppose that the algebraic number MATH of negative complex points with respect to MATH is zero. Then, according to REF , the adjunction equality is fulfilled and, by REF, we can find an almost complex structure MATH homotopic to MATH and hence to MATH such that MATH is pseudoholomorphic with respect to MATH. If conversely MATH is pseudoholomorphic with respect to MATH, the adjunction equality holds for MATH, and if we choose a generic MATH homotopic to MATH (and MATH), REF implies that MATH. |
math/0008240 | For the sake of simplicity, let us assume that there is only one double point MATH. First consider the case that MATH is positive. Let MATH and choose an orientation-preserving chart MATH around MATH such that CASE: MATH CASE: For small disks MATH in MATH around MATH, MATH, MATH and the restrictions MATH map the orientation of MATH to the canonical orientations of the planes MATH and MATH. Then a trivialization for the normal bundle MATH restricted to MATH is given by MATH, followed by the projection onto the second plane MATH, and with respect to this chart, a section of MATH is given by the affine plane MATH for a small MATH. If we choose a tubular neighborhood MATH which coincides with the map given by MATH and MATH around the MATH, the image of this section will be contained in MATH and will intersect MATH in one positive point. A similar section can be constructed over MATH, and combined with a generic section of MATH outside of the MATH, we obtain an immersion MATH of MATH which will intersect MATH in MATH points, counted with signs. If the double point MATH is negative, the sections constructed over the MATH will contribute with sign MATH to the intersection number of MATH and MATH, and we obtain MATH. This proves our assertion in the case that there is only one double point; the proof in the general case is similar. |
math/0008240 | We have a decomposition MATH, to which we can apply REF to obtain MATH . By REF , MATH. Substituting this into the last equation leads to the desired result. |
math/0008240 | As demonstrated in CITE, we can find an immersed surface in MATH having MATH positive double points, MATH negative double points and representing the class MATH. In fact, pick two generic lines in MATH and reverse the orientation of one of them to obtain two spheres MATH which intersect in one point with intersection number MATH. Remove a small ball MATH around this point and a similar ball MATH around one of the negative double points of MATH. We can then glue MATH and MATH along their boundaries in such a way that the boundary links MATH and MATH get identified. This yields a new immersed surface in MATH as desired. By iterating the construction, one can construct an immersed surface in MATH having MATH positive self-intersection points and representing the class MATH. Cutting out small disks and gluing in handles at the remaining double points leads to a surface MATH having genus MATH and self-intersection number MATH. Since the homology class MATH was divisible by MATH, the same is true for MATH. Now let MATH denote the branched cover of order MATH with branch locus MATH. As in CITE, one can use this cover to obtain estimates for the genus of MATH. We have MATH and MATH . Now the real cohomology MATH appears in MATH as the subspace invariant under the action of MATH, and the splitting of MATH into the eigenspaces of the action is orthogonal with respect to the intersection form (see CITE), hence we have the inequality MATH and in particular MATH . If we substitute the values for MATH and MATH from the above equations, we see that the terms containing MATH cancel out, and this leads to the desired result. |
math/0008240 | This follows immediately from REF . |
math/0008242 | Let MATH be a MATH-bridge link; let MATH be a rational-form diagram for MATH, and write MATH for MATH. The classification of MATH-bridge links implies that MATH is isotopic to any rational-form diagram associated to the fraction MATH. (If MATH is an integer, then it is easy to see that MATH is the trivial knot, which is not MATH-bridge.) Define a sequence MATH of rational numbers by MATH, MATH. This sequence terminates at, say, MATH, where MATH is an integer. Write MATH. It is easy to see that MATH for all MATH, and that MATH. Then MATH is isotopic to MATH, which is in Legendrian rational form. |
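The terminating sequence of rationals in the proof above behaves like a continued-fraction expansion; the paper's precise recursion is redacted, so the iteration below is only a standard one with the same terminating shape (expand, descend to a smaller denominator, stop at an integer):

```python
from fractions import Fraction

def continued_fraction(q):
    """Expand a rational q as [a0; a1, ...] by iterating q -> 1/(q - floor(q))."""
    q = Fraction(q)
    terms = []
    while True:
        a = q.numerator // q.denominator   # floor of q
        terms.append(a)
        if q == a:
            return terms
        q = 1 / (q - a)

def fold(terms):
    """Reassemble the rational from its continued-fraction terms."""
    val = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        val = a + 1 / val
    return val

assert continued_fraction(Fraction(17, 5)) == [3, 2, 2]
assert fold([3, 2, 2]) == Fraction(17, 5)
# The expansion of any rational terminates at an integer, just as the
# sequence in the proof terminates.
assert all(fold(continued_fraction(Fraction(p, 7))) == Fraction(p, 7)
           for p in range(1, 14))
```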
math/0008242 | None of MATH, MATH, and MATH contains negative powers of MATH; the lemma will be proved if we can show that MATH, where MATH . Define the auxiliary matrices MATH . Then MATH and MATH, and so MATH . But if we define a sequence of functions MATH, then an easy induction yields the recursion MATH with MATH and MATH. In particular, for all MATH, MATH has degree MATH and is thus nonzero. From the given conditions on MATH, it follows that MATH, as desired. |
math/0008242 | Let MATH be a Legendrian rational form for a MATH-bridge link MATH. The crossings of MATH are counted, with the same signs, by both the writhe of MATH and the NAME number of the Legendrian link MATH obtained from MATH; MATH, however, also subtracts half the number of cusps. Hence MATH by REF . Since MATH is ambient isotopic to MATH, we conclude that MATH is at least MATH; by the NAME bound, equality must hold. |
math/0008243 | Let MATH where MATH and MATH are defined as in the statement of REF. To approximate the creation rate, we need to approximate MATH, which is the constant term of MATH. The constant term is given by the usual contour integral, which we will approximate using the saddle point method. Write MATH and MATH, so that MATH. Note that the MATH coordinates are related to the coordinates in the statement of the proposition by MATH and MATH. We will keep MATH and MATH fixed as we send MATH to infinity. The critical points of MATH are MATH and MATH . Because MATH, we have MATH. It follows that MATH and MATH are complex conjugates on the circle MATH. We now apply the saddle point method. To find the constant term of MATH, we integrate MATH about the circle of radius MATH centered at the origin. One can check that, on this circle, MATH is greatest at the critical points MATH and MATH. (To check it, parametrize the circle by the angle MATH formed with the real axis. One has MATH iff MATH is one of the two critical points or MATH lies on the real axis. At the critical points, MATH, so MATH has maxima at these points. It must have minima on the real axis, since between any two local maxima there must be a local minimum.) As MATH goes to infinity, the integral is given asymptotically by the integrals over the parts of the path near the critical points, which can be estimated straightforwardly. This is the saddle point method. We will omit the details of the argument leading to the approximation, because they are standard, and can be found, for example, in CITE. The saddle point method tells us that the constant term of MATH is the sum MATH, where MATH and MATH . (For the proof of REF we will not need to determine the signs of the square roots in REF, but they must be chosen so that MATH and MATH are complex conjugates.) Simplifying MATH yields MATH . From this, one can check that at either critical point of MATH, MATH has absolute value MATH . 
Let MATH be the phase of MATH, so that MATH . Then MATH and MATH is approximated by MATH . Of course, MATH . Since MATH, MATH, and MATH, we see that MATH . We have MATH by REF. Interchanging MATH and MATH corresponds to interchanging MATH and MATH. Let MATH denote the result of interchanging MATH and MATH (and also MATH and MATH) in the expression MATH, so that, for example, MATH. When we substitute REF into REF , we see that MATH . Hence, by REF MATH . (To see that the error term is MATH, one uses the fact that it is MATH and that MATH.) Now we check that MATH . The identity MATH suggests that this should be so, but does not seem to prove it. If we set MATH, we find (after some simplification) that MATH . Thus, MATH and MATH have the same phase. If we combine the formulas MATH and MATH with MATH we find that MATH equals MATH times a positive factor, so their phases are equal. Because MATH we see that MATH and MATH have the same phase, to within a sign, so MATH . Finally, we change to the coordinates of our generating function by the substitutions MATH and MATH. We set MATH. Then when MATH, the creation rate at the MATH location in an Aztec diamond of order MATH is MATH . Because creation rates must be non-negative, the MATH sign in this formula can always be taken to be MATH. The constant implicit in the big MATH depends continuously on MATH and MATH. Thus, for fixed MATH the constant can be chosen uniformly for all MATH and MATH with MATH. We have therefore proved the result claimed in the statement of the proposition. |
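The saddle point computation in this proof can be illustrated on a toy generating function. The example below is my own generic stand-in, not the masked MATH of the proposition: the constant term of (1+z)^(2n)/z^n is the central binomial coefficient C(2n, n); the exponent h(z) = 2n*log(1+z) - n*log(z) has a saddle at z = 1 with h''(1) = n/2, so the saddle point method predicts exp(h(1))/sqrt(2*pi*h''(1)) = 4^n/sqrt(pi*n).

```python
import cmath, math

def constant_term(f, radius=1.0, samples=4096):
    """Constant term of a Laurent polynomial via the contour integral
    (1/(2*pi*i)) * integral of f(z)/z dz, i.e. the mean of f on a circle.

    Sampling at `samples` roots of unity is exact when all exponents of f
    are smaller than `samples` in absolute value."""
    total = 0j
    for k in range(samples):
        z = radius * cmath.exp(2j * math.pi * k / samples)
        total += f(z)
    return (total / samples).real

n = 30
f = lambda z: (1 + z) ** (2 * n) / z ** n  # constant term is C(2n, n)

exact = math.comb(2 * n, n)
numeric = constant_term(f)
# Saddle at z = 1: the saddle point method predicts 4^n / sqrt(pi * n),
# the familiar asymptotic for the central binomial coefficient.
saddle = 4.0 ** n / math.sqrt(math.pi * n)
```

The numeric contour integral matches the exact coefficient to machine precision, and the saddle point prediction is already within about half a percent at n = 30.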
math/0008243 | We have MATH where MATH and MATH. For fixed MATH, this function is clearly maximized at MATH. When MATH, it becomes a linear function maximized at MATH (for MATH). This yields MATH. Therefore, MATH if MATH is small enough compared to MATH. For MATH, we have MATH . It follows that MATH . Therefore, MATH . If we are not worrying about dependence on MATH, this is MATH. This completes the proof. |
math/0008243 | From REF , we see that we can choose MATH to be the imaginary part MATH . If we substitute MATH for MATH and MATH for MATH and differentiate, then the first term of REF contributes the MATH-terms in the formula we are proving. To see this, recall that (up to an irrelevant multiple of MATH) MATH . After we express this in terms of MATH, MATH, and MATH and substitute MATH for MATH and MATH for MATH, the right hand side becomes MATH where MATH is the function of MATH that results from making the substitutions in MATH. Denote by MATH the function REF . When we differentiate MATH with respect to MATH (holding MATH, MATH, and MATH fixed), we get MATH . Because MATH is a critical point of MATH, MATH, so MATH. Now expressing the imaginary parts of the logarithms in terms of MATH gives the desired terms from the formula we are proving. (To simplify the terms to the form found in the statement of the lemma, one has to use the fact that for MATH, MATH.) When we substitute and differentiate, the remaining terms in REF clearly give algebraic results. We omit the details of the calculations, since they are tedious and straightforward. The claim that the last term is MATH for MATH is a consequence of REF. The only thing to check is that although the denominator vanishes at MATH, these two points are never in MATH (or near enough to cause problems). To see that, note that REF of MATH is equivalent to the set of MATH for which MATH . Note that MATH is impossible (since then we must have MATH and MATH, so MATH). However, substituting MATH in the left hand side of REF gives MATH. Thus, the factors MATH and MATH in the denominator of the last term in our main formula cannot become arbitrarily small, and the last term is indeed MATH. |
math/0008243 | We will use the formula for MATH from REF. Let MATH be a small, simply-connected neighborhood in MATH of the smallest real interval containing MATH, such that the points MATH are not in MATH. (We checked at the end of the proof of REF that these points are not in MATH.) It follows from the definition of MATH that MATH on MATH. If MATH, then MATH and MATH, so MATH (contradicting MATH). Thus, MATH on MATH, and hence there is a holomorphic square root of MATH on MATH, if MATH was chosen to be sufficiently small. It follows that the third term (the algebraic term) of the formula for MATH in REF is holomorphic on MATH. The derivative of that term is thus algebraic and holomorphic on MATH, so to complete the proof we just need to check this for the other two terms. The first two terms can be expressed in terms of the arctangent. If we do so, we find that the derivative with respect to MATH of the sum of those two terms is MATH . This is also algebraic and holomorphic on MATH. Thus, MATH is holomorphic on MATH and algebraic, as desired. |
math/0008243 | We assume that MATH, since otherwise MATH. As in the proof of REF, we will integrate MATH around a circle about the origin, where, as in REF , MATH . However, since we are looking only for an upper bound and not for an asymptotic estimate, we will not need the full saddle point method. We will only sketch the proof, because the details are straightforward but somewhat tedious to check. We will use the same notation as in the proof of REF; for example, we write MATH and MATH. Since MATH, the critical points MATH and MATH of MATH are real. (Of course, the case MATH has to be handled separately, but this will not cause problems.) We will integrate MATH around a circle of radius MATH, where MATH will be either MATH or MATH. We choose MATH where MATH is the lesser of MATH and MATH. To bound the integral, we will use the fact that the absolute value of the integral is at most as large as the greatest value of MATH on the circle. It is not hard to check by straightforward manipulation of inequalities that MATH if MATH, and MATH if MATH. (Since MATH and MATH, we cannot have MATH.) Thus, MATH if MATH, and MATH otherwise. Take MATH so that MATH. On the circle of radius MATH about MATH, MATH is greatest when MATH; in fact, the second derivative test shows that this is the only local maximum. Thus, the integral is bounded by MATH, so MATH. Because the sign of MATH doesn't change when MATH and MATH are interchanged, MATH is the lesser of MATH and MATH. Hence, MATH. It follows that MATH . A simple calculation gives MATH. The inequalities MATH and MATH, together with the fact that the only dependence on MATH in any of these expressions is in the exponent, imply that the creation rate at MATH is MATH for some MATH. A little more care in the estimates shows that this bound can be chosen uniformly for MATH, as desired. |
math/0008243 | Since the discriminant of the polynomial is MATH, we see that it has two real roots whenever MATH. Clearly, MATH is a root iff MATH, and since the sum of the roots is MATH, it is the greater root iff also MATH. One can check the other claims similarly. |
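The root analysis above uses only standard facts about quadratics (discriminant sign, Vieta's formulas for the sum and product of the roots). A minimal generic check, with made-up coefficients in place of the masked expressions:

```python
import math

def real_roots(a, b, c):
    """Real roots of a*x^2 + b*x + c; nonempty iff the discriminant
    b^2 - 4*a*c is non-negative."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return []
    r = math.sqrt(disc)
    return sorted([(-b - r) / (2 * a), (-b + r) / (2 * a)])

a, b, c = 2.0, -7.0, 3.0  # illustrative coefficients, not from the paper
roots = real_roots(a, b, c)
# Vieta: the roots sum to -b/a and multiply to c/a, so checking whether a
# given value is a root, and which root is the greater one, reduces to
# exactly the kind of sign conditions used in the proof above.
```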
math/0008243 | This simply amounts to showing that MATH is bounded as a function of MATH, MATH, and MATH, for MATH sufficiently small. If one computes MATH using the quadratic formula, and then differentiates it with respect to MATH, one finds that it equals a continuous function of MATH, MATH, and MATH (for MATH near MATH) divided by MATH . If MATH is small enough compared to MATH, then MATH will be continuous, and hence bounded, for all MATH, MATH, and MATH with MATH. Then MATH, as desired. |
math/0008243 | From REF , we see that MATH is approximated by MATH . (The error here is bounded by a constant multiple of MATH, which is MATH and hence MATH by REF.) Since MATH, we see that MATH is given to within MATH by the sum of MATH with MATH and its complex conjugate. REF will show that the latter two sums are MATH as MATH goes to infinity. Assuming REF, we can prove the desired limit by approximating the sum REF with an integral. The sum is equal to MATH; to see that the error is MATH, note that the summand (viewed as a function of a real variable MATH) is MATH by REF, and is monotonic on MATH and MATH. By REF, the polynomial MATH has real roots MATH. Let MATH be the lesser root, and MATH the greater root. Then MATH if MATH, and MATH if MATH. By REF, we have MATH iff MATH and MATH, and MATH iff MATH and MATH. In both of these cases, we have MATH and MATH. Thus, we need only deal with the case MATH. Suppose MATH and MATH, that is, MATH. It follows from REF that MATH, and MATH. We will approximate the integral in REF by MATH . This approximation introduces further error. To see how large the error is, first rescale by a factor of MATH, so that the function under the square root sign becomes MATH. Around a root MATH, this function can be expanded as MATH (with the sign depending on which root MATH is). Because MATH, the coefficient of MATH cannot become arbitrarily small. Thus, for small MATH, the error introduced by the approximation is bounded by a constant (depending on MATH) times MATH, and hence by MATH . One can evaluate the integral REF explicitly, because MATH . As MATH, the right hand side of REF approaches MATH (since the numerator of the fraction is positive as its denominator approaches MATH). We see that REF evaluates to MATH . The case with MATH (that is, MATH and MATH) is completely analogous, except that the integral is over the interval MATH, rather than MATH. This integral is MATH, so we get that MATH, which agrees with MATH.
This proves the desired result. |
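The sum-versus-integral step above relies only on the summand being bounded and piecewise monotonic: on each monotone piece, the difference between the Riemann sum and the rescaled integral telescopes to O(1). A generic sketch of that estimate, with the semicircle function as an illustrative stand-in for the masked summand:

```python
import math

def riemann_sum(f, lo, hi, n):
    """Sum of f(k/n) over the mesh points k/n in [lo, hi]."""
    return sum(f(k / n) for k in range(lo * n, hi * n + 1))

# Monotone increasing on [-1, 0] and decreasing on [0, 1], so the
# discrepancy between the sum and n times the integral is bounded by
# (max - min) on each of the two monotone pieces, uniformly in n.
f = lambda x: math.sqrt(max(0.0, 1.0 - x * x))
exact = math.pi / 2  # integral of f over [-1, 1]

errs = [abs(riemann_sum(f, -1, 1, n) - n * exact) for n in (100, 1000, 10000)]
```

The errors stay bounded (here they are in fact tiny) even though both the sum and the integral grow linearly in n.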
math/0008243 | We need to show that MATH approximates MATH. Since REF implies that the creation rates are all non-negative, and MATH is the sum of a subset of the creation rates appearing in the sum giving MATH, the placement probability must be at least MATH. Also, given any point in the Aztec diamond, the north-going placement probabilities at the four points obtained by rotating it by multiples of MATH about the origin sum to MATH. This is true because by rotational symmetry these placement probabilities are equal to the placement probabilities in each of the four directions at the original point, which must sum to REF. This is the content of REF , except here it is expressed in terms of the placement probabilities, rather than the asymptotic formula. One can check by direct computation that MATH . If the difference between MATH and the placement probability were not MATH, then the four placement probabilities would have to sum to more than MATH, which is impossible. |
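The bookkeeping used here, that every cell of the Aztec diamond is covered by exactly one domino, so the four directional placement probabilities at a cell sum to 1, can be verified by brute-force enumeration for small orders. The sketch below is my own enumeration (the coordinate convention is an assumption), not the paper's asymptotic formula; it also confirms the classical count of 2^(n(n+1)/2) tilings.

```python
def aztec_cells(n):
    """Unit cells (i, j) of the Aztec diamond of order n."""
    return sorted((i, j) for i in range(-n, n) for j in range(-n, n)
                  if abs(i + 0.5) + abs(j + 0.5) <= n)

def count_tilings(n):
    """Count domino tilings by backtracking; also record, for each cell,
    how many tilings cover it by a domino extending E, W, N, or S."""
    cells = aztec_cells(n)
    cellset = set(cells)
    orient = {c: {"E": 0, "W": 0, "N": 0, "S": 0} for c in cells}
    covered, assignment = set(), {}
    total = 0

    def rec():
        nonlocal total
        c = next((x for x in cells if x not in covered), None)
        if c is None:
            total += 1
            for cell, d in assignment.items():
                orient[cell][d] += 1
            return
        i, j = c
        # The lexicographically first free cell can only pair east or north:
        # its west and south neighbours precede it, so they are covered.
        for d, other, dback in (("E", (i + 1, j), "W"), ("N", (i, j + 1), "S")):
            if other in cellset and other not in covered:
                covered.update((c, other))
                assignment[c], assignment[other] = d, dback
                rec()
                covered.discard(c); covered.discard(other)
                del assignment[c], assignment[other]

    rec()
    return total, orient

total, orient = count_tilings(2)
```

For order 2 this finds 8 = 2^3 tilings, and at every cell the four orientation counts sum to the total number of tilings, i.e. the four placement probabilities sum to 1.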
math/0008243 | First suppose that MATH. The desired result will follow from the equation MATH together with the estimate given by REF. First, we show that REF applies to the creation rates appearing in the sum. Consider MATH as a function of MATH. Its first derivative at MATH is MATH which is greater than MATH since MATH. The only root of the derivative is MATH . Thus, the function REF is increasing for MATH. (Note that in REF we need only sum up to MATH, since beyond that point MATH and hence MATH. Thus, MATH never reaches the pole in REF at MATH.) Therefore, MATH and REF applies to bound the creation rates in REF . Thus, for some constant MATH between MATH and MATH, MATH . This geometric series is bounded by MATH . This proves the desired bound, with MATH. For MATH, we use the trick of summing the placement probabilities at the four points obtained by rotating by multiples of MATH about the origin. As in the proof of REF, the sum must be MATH, and we know that three of the terms are MATH. Therefore, the fourth must be MATH, as desired. |
math/0008243 | Since MATH is algebraic, it satisfies an equation MATH with MATH polynomials (not all identically zero). Let MATH. We will show that MATH has the desired property, using induction on MATH. We can choose the coefficients MATH so that they have no (non-constant) common factor. Fix MATH, and let MATH. Define MATH. Since the coefficients were taken to have no common factor, they do not all vanish when we set MATH. Their degrees do not increase when we set MATH (or when we remove common factors), so our lemma follows by induction on MATH (applied to MATH and MATH), assuming we can prove it in the case MATH. Suppose MATH. Assuming MATH is not identically zero, we can divide REF by some power of MATH to get an equation satisfied by MATH with non-zero constant term, say MATH. (A priori, MATH will satisfy the new equation only where MATH is non-zero. However, since MATH is holomorphic on MATH, its zeros are isolated. By continuity, it satisfies the equation at its zeros as well as elsewhere.) Then any root of MATH is a root of MATH, so MATH has at most MATH roots, and hence at most MATH roots. |
math/0008243 | To prove this, we will apply the NAME Theorem. REF says that MATH satisfies the conditions of REF, so there is an absolute upper bound for the number of roots that it can have as a function of MATH while MATH, MATH, and MATH are held fixed (unless it is identically zero for those values of MATH, MATH, and MATH). Before we apply the NAME Theorem, we break MATH up into a bounded number of subintervals on which MATH is monotonic. We have to look at the behavior of MATH. As in REF, set MATH, MATH, and MATH. REF says that as MATH goes to infinity, MATH equals MATH . We would like to show that when divided by MATH, REF stays away from integers. After REF is divided by MATH, the only possible integral values it can take on are MATH, MATH, and MATH (assuming MATH is large enough). If we ignore the MATH term, the rest of the formula is the difference of the arguments of two points on the same horizontal line (divided by MATH). Thus, it cannot be MATH. It can be MATH only if the points coincide or are on the horizontal axis. It can be MATH only if the points are on the horizontal axis. The points coincide iff MATH, which is impossible (since MATH). They are on the horizontal axis iff MATH . REF of MATH implies that MATH, so no MATH gives a MATH satisfying REF . (Note that MATH is impossible, since then MATH.) In fact, the above argument, combined with continuity considerations, shows that the two points cannot get arbitrarily close to each other or to the horizontal axis, and they clearly cannot get arbitrarily far from the origin. Thus, even taking into account the MATH term, MATH really does stay slightly away from integers as MATH. Hence, the NAME Theorem tells us that the exponential sums are bounded (uniformly in MATH). |
math/0008243 | Let MATH and MATH . For MATH, MATH (by REF) and MATH is bounded (by REF). Suppose MATH for all MATH. To bound the sum in the statement of the proposition, we will apply summation by parts. We have MATH . This sum is bounded in absolute value by MATH. The function MATH is monotonic on MATH and MATH (on the subintervals where it is real, of course), so within each of these intervals, the sum MATH telescopes. The boundary terms are MATH, and hence the entire sum is MATH. |
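The summation-by-parts (Abel summation) step used here can be sketched generically: if the partial sums of one factor are uniformly bounded and the other factor is monotone, the transformed sum telescopes to a bound independent of the number of terms. The choice a_k = cos(k*theta), b_k = 1/k below is a hypothetical stand-in for the masked sequences.

```python
import math

def abel_sum(a, b):
    """Summation by parts:
    sum a_k*b_k = A_N*b_N + sum_{k<N} A_k*(b_k - b_{k+1}),
    where A_k are the partial sums of a."""
    partial, s = [], 0.0
    for x in a:
        s += x
        partial.append(s)
    total = partial[-1] * b[-1]
    for k in range(len(a) - 1):
        total += partial[k] * (b[k] - b[k + 1])
    return total

theta, N = 1.0, 10000
a = [math.cos(k * theta) for k in range(1, N + 1)]
b = [1.0 / k for k in range(1, N + 1)]  # monotone decreasing, positive

direct = sum(x * y for x, y in zip(a, b))
parts = abel_sum(a, b)
# Partial sums of cos(k*theta) are uniformly bounded by 1/|sin(theta/2)|,
# so telescoping gives |sum| <= M * b_1, independent of N.
M = 1.0 / abs(math.sin(theta / 2))
```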
math/0008243 | We simply check that this formula satisfies the differential equations and boundary conditions. |
math/0008243 | We use induction on the size of MATH (holding MATH fixed and varying MATH). The case where this set is empty is trivial. Assume that the lemma is true whenever MATH, and suppose we have a situation in which MATH. It clearly suffices to consider the case in which MATH for some vertex MATH in MATH that is adjacent to at least one vertex in MATH. Let MATH be a vertex in MATH adjacent to MATH. Given that MATH has some specific value, any extension of MATH to MATH would have to give MATH height MATH or MATH (for some particular MATH whose value we do not care about; it is MATH plus or minus REF or REF), while any extension of MATH to MATH would have to give MATH height MATH or MATH (with MATH determined from MATH the same way MATH is determined from MATH). Because MATH and MATH agree modulo MATH on MATH and MATH, we have MATH. Let MATH and MATH be the two extensions of MATH to MATH that assign MATH height MATH and MATH, respectively, and let MATH and MATH be the two extensions of MATH to MATH that assign MATH height MATH and MATH, respectively. (If such extensions do not exist, it is not a problem, as we will see below.) The distribution MATH is a weighted superposition of MATH and MATH, where the MATH-th term (MATH or MATH) is given weight proportional to the number of extensions of MATH to MATH (which should be taken to be zero in the case where the extension to MATH does not exist). Similarly, MATH is a superposition of MATH and MATH. Since MATH for all MATH in MATH, and MATH, we can use our induction hypothesis to conclude that MATH is stochastically dominated by MATH for all MATH, which implies that MATH is stochastically dominated by MATH, as was to be shown. |
math/0008243 | Apply REF to the partial height functions MATH and MATH. |
math/0008243 | Let MATH be the highest extension of MATH to MATH, let MATH be the lowest extension of MATH to MATH, and define MATH and MATH similarly. (It is not hard to show that the complete height functions extending a given partial height function form a lattice under the usual partial ordering, so it makes sense to talk about the highest and lowest extensions.) Let MATH be on the boundary of MATH (and hence on the boundary of MATH or MATH). If MATH is on the boundary of MATH, then we can find a nearby MATH on the boundary of MATH so that MATH while if MATH is on the boundary of MATH, then we can find a nearby MATH on the boundary of MATH so that MATH . Since the two height functions agree modulo MATH at MATH, MATH, where MATH is the greatest multiple of REF that is less than or equal to MATH. It follows from this (and the corresponding inequality in the other direction) that if MATH is any extension of MATH to MATH and MATH any extension of MATH to MATH, then for each MATH on the boundary of MATH, MATH differs from MATH by at most MATH. Now let MATH be any vertex in MATH. If we compute MATH by conditioning on the heights on the boundary of MATH, then it follows from REF that the expected value of MATH under MATH differs by at most MATH (and hence at most MATH) from its expected value under MATH. |
math/0008243 | Let MATH be a lattice-path connecting a point MATH on the boundary of MATH to the point MATH. Let MATH be the partition of the space of possible height functions in which two height functions are regarded as equivalent if they agree at MATH. Let MATH be the conditional expectation MATH, the function from the set of height functions to the reals that assigns to each height function MATH the average value of MATH as MATH ranges over all height functions in the equivalence class of MATH. Note that MATH is just the function MATH itself, while MATH is the average value of the height at MATH, averaged over all height functions. The functions MATH form a martingale; that is, MATH . On each component of MATH, MATH takes on at most two distinct values, according to the two different values of MATH that are consistent with the already-known values of MATH. From REF , we see that these two values of MATH differ by at most REF. Since MATH is their weighted average, it follows that MATH and MATH never differ by more than REF. Then, applying NAME's Inequality (REF on page REF) to the quantities MATH, we get MATH . Replacing MATH by MATH, we get MATH . This completes the proof. |
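The concluding step applies a concentration inequality to a martingale with bounded differences; the setup (bounded increments along a filtration) is of Azuma type, and that generic bound can be sanity-checked numerically. The simple random walk below is a hypothetical stand-in for the height-function martingale, not the paper's construction.

```python
import math, random

random.seed(0)  # fixed seed so the Monte Carlo check is reproducible

# Azuma-type bound: if X_0, ..., X_N is a martingale with |X_k - X_{k-1}| <= c,
# then P(|X_N - X_0| >= t) <= 2 * exp(-t**2 / (2 * N * c**2)).
N, c, t, trials = 100, 1, 30, 20000
bound = 2 * math.exp(-t * t / (2 * N * c * c))

hits = 0
for _ in range(trials):
    # A simple random walk is a martingale with differences of size 1.
    s = sum(random.choice((-1, 1)) for _ in range(N))
    if abs(s) >= t:
        hits += 1
empirical = hits / trials
```

The empirical tail frequency comes out well below the Azuma-type bound, as it must; the bound is loose but, crucially for arguments like the one above, it decays exponentially in t^2/N.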
math/0008245 | We show first that MATH is incompressible. Of course this follows by standard techniques, by thinking of MATH as having a polyhedral metric of non-positive curvature and using the NAME-NAME Theorem to identify the universal covering with MATH (compare CITE). Since MATH is totally geodesic and geodesics diverge in the universal covering space, we see that MATH is covered by a collection of embedded planes MATH. However, we want to use a direct combinatorial argument which generalises to situations in the next section where no such metric is obvious on MATH. Suppose that there is an immersed disk MATH with boundary MATH on MATH. Assume that MATH is in general position relative to MATH, so that the inverse image of MATH is a collection MATH of arcs and loops in the domain of the map MATH with image MATH. MATH can be viewed as a graph with vertices of degree four in the interior of MATH. Let MATH be the number of vertices, MATH the number of edges and MATH the number of faces of the graph MATH, where the faces are the complementary regions of MATH in MATH. We assume initially that these regions are all disks. An NAME characteristic argument gives that MATH, and so since MATH, there must be some faces with degree less than four. We define some basic homotopies on the disk MATH which change MATH so as to eventually decrease the number of vertices or edges. First of all, assume there is a region in the complement of MATH adjacent to MATH with two or three vertices. In the former case we have a MATH-gon MATH of MATH with one boundary arc on MATH and the other on MATH. So MATH has interior disjoint from MATH and its boundary lies on MATH. But by definition of MATH, any such MATH-gon can be homotoped into a double arc of MATH. For the MATH-gon is contained in a cell in the closure of the complement of MATH. The cell has a polyhedral structure which can be described as the cone on the dual cell decomposition of a link of a vertex of the cubing.
The two arcs of the MATH-gon can be deformed into the MATH-skeleton of the link and then define a cycle of length two. By definition such a cycle is an edge taken twice in opposite directions. We now homotope MATH until MATH is pushed into the double arc of MATH, and then push MATH slightly off this double arc. The effect is to eliminate the MATH-gon MATH; that is, one arc or edge of MATH is removed. Next assume there is a region MATH of the complement of MATH bounded by three arcs, two of which are edges of MATH and one is in MATH. The argument is very similar to that in the previous paragraph. Note that when the boundary of MATH is pushed into the MATH-skeleton of the link of some vertex of the cubing, it gives a MATH-cycle which is the boundary of a triangle representing the intersection of the link with a single cube. Therefore we can slide MATH so that MATH is pushed into the triple point of MATH lying at the centre of this cube. Again, by perturbing MATH slightly off MATH, MATH is removed and MATH has two fewer edges and one fewer vertex. Finally, to complete the argument we need to discuss how to deal with internal regions which are MATH-gons, MATH-gons or MATH-gons. Now MATH-gons cannot occur, as there are no MATH-cycles in the link of a vertex. MATH-gons can be eliminated as above. The same move as described above on MATH-gons has the effect of inverting them, that is, moving one of the edges of the MATH-gon past the opposite vertex. This is enough to finish the argument by the following observations. First of all, consider an arc of MATH which is a path with both ends on MATH and passes through vertices by using opposite edges at each vertex (degree MATH), that is, corresponds to a double curve of MATH. If such an arc has a self-intersection, it is easy to see that there are embedded MATH-gons or MATH-gons consisting of subarcs.
Choose an innermost such MATH-gon or MATH-gon; then there must be `boundary' MATH-gons (relative to the subdisk defined by the MATH-gon or MATH-gon) if there are intersections with arcs. Now push all MATH-gons off such a MATH-gon or MATH-gon, starting with the boundary ones. Then we arrive at an innermost MATH-gon or MATH-gon with no arcs crossing it and can use the previous method to obtain a contradiction or to decrease the complexity of MATH. Similarly, if two such arcs meet at more than one point, we can remove MATH-gons from the interior of an innermost resulting MATH-gon and again simplify MATH. Finally, if such arcs meet at one point, we get MATH-gons which can be made innermost. So in all cases MATH can be reduced to the empty graph, with MATH then lying in a face of MATH. So MATH is contractible on MATH. It remains to discuss the situation when some regions in the complement of MATH are not disks. In this case, there are components of MATH in the interior of MATH. We simplify such an innermost component MATH first, by the same method as above, working with the subdisk MATH consisting of all the disks in the complement of MATH and separated from MATH by MATH. So we can get rid of MATH and continue until finally all of MATH is removed by a homotopy of MATH. Note that once MATH has no interior intersections with MATH, then MATH can be homotoped into MATH, as it lies in a single cell, which has the polyhedral structure of the cone on the dual of a link of a vertex of the cubing. This completes the argument showing that MATH is incompressible. Next we wish to show why MATH has the MATH-plane, MATH-line and triple point properties. Before discussing this, it is helpful to explain why the lifts of MATH to the universal covering are planes, without using the polyhedral metric. Suppose some lift of MATH to the universal covering MATH was not embedded.
We know such a lift MATH is an immersed plane, by the previous argument that MATH is incompressible. It is easy to see that we can find an immersed disk MATH with boundary MATH on MATH which represents a MATH-gon. There is one vertex, where MATH crosses a double curve of MATH. But the same argument as in the previous paragraph applies to simplify the intersections of MATH with all the lifts MATH of MATH. We get a contradiction, as there cannot be a MATH-gon with interior mapped into the complement of MATH. This establishes that all the lifts MATH are embedded, as claimed. It is straightforward now to show that any pair of such planes MATH which intersect, meet in a single line. For if there is a simple closed curve of intersection, again the disk MATH bounded by this curve on, say, MATH can be homotoped relative to the other planes to get a contradiction. Similarly, if there are at least two lines of intersection of MATH and MATH, then there is a MATH-gon MATH with boundary arcs on MATH and MATH. Again we can deform MATH to push its interior off MATH, giving a contradiction. This establishes the MATH-line property. The MATH-plane and triple point properties follow once we can show that any three planes of MATH which mutually intersect, meet in a single triple point. For then if four planes all met in pairs, then on one of the planes we would see three lines all meeting in pairs. But this implies there is a MATH-gon between the lines, and the same disk simplification argument as above shows that this is impossible. There are two situations which could occur for three mutually crossing planes MATH, MATH and MATH. First of all, there could be no triple points at all between the three planes. In this case the MATH-gon MATH with three boundary arcs joining the three lines of intersection on each of the planes can be used to give a contradiction. This follows by the same simplification argument, since the MATH-gon can be homotoped to have interior disjoint from MATH.
Secondly, there could be more than one triple point between the planes. But in this case, in say MATH, we would see two lines meeting more than once. Hence there would be MATH-gons MATH in MATH between these lines. The interiors of such MATH-gons can be homotoped off MATH, and the resulting contradiction completes the argument for all the properties claimed in the theorem. |
math/0008245 | The proof is extremely similar to that for REF, so we only remark on the ideas. First of all, the conditions in the statement of REF play the same role as the link conditions in the definition of a cubing of non-positive curvature. So we can homotope disks which have boundary on MATH to reduce the graph of intersection of the interior of the disk with MATH. In this way, MATH-gons and MATH-gons can be eliminated, as well as compressing disks for MATH. This is the key idea, and the rest of the argument is entirely parallel to REF . Note that the closures of the complementary regions of MATH are MATH-injective, by essentially the same proof as in CITE. The only thing that needs to be carefully checked is why it suffices to examine only embedded disks in the complementary regions, to see that any possibly singular MATH-gons, for MATH, are homotopically trivial and there are no singular MATH-gons. Suppose that we have a properly immersed disk MATH in a complementary region, with boundary meeting the set of double curves of MATH at most three times, such that MATH is not homotopic into a double arc, a triple point or a face of MATH. If this disk is not homotopic into the boundary of the complementary region, we can apply NAME's lemma and the loop theorem to replace the singular disk by an embedded one. Moreover, since the boundary of the new disk is obtained by cutting-and-pasting of the old boundary curve, we see the new curve also meets the set of double curves of MATH at most three times. So this case is easy to handle: it does not happen. Next assume that the singular disk is homotopic into the boundary surface MATH of the complementary region. (Note we include the possibility here that the complementary region is a ball and MATH is a MATH-sphere.) Let MATH be the boundary curve of the singular disk and let MATH be a small regular neighbourhood of MATH in MATH. Thus MATH is null homotopic in MATH.
Notice that there are at most three double arcs of MATH crossing MATH. Now fill in the disks MATH bounded by any contractible boundary component MATH of MATH in MATH, to enlarge MATH to MATH. Since MATH shrinks in MATH, it is easy to see by NAME's theorem that MATH contracts also in MATH. Also, if MATH meets the double arcs of MATH, we see the picture in MATH must be either a single arc or three arcs meeting at a single triple point, or else we have found an embedded disk contradicting our assumption. For we only need to check that MATH cannot meet the double arcs in four or more points. If MATH did have four or more intersection points with the singular set of MATH, then one of the double arcs crossing MATH would have both ends on MATH. But this is impossible, as there would be a cycle in the graph of the double arcs on MATH meeting the contractible curve MATH exactly once. Finally, we notice that there must be some disks MATH which meet the double arcs; in fact, at least one point on the end of each double arc in the boundary of MATH must be in such a disk. For otherwise it is impossible for MATH to shrink in MATH, as there is an essential intersection at one point with such an arc. (This immediately rules out the possibility that MATH crosses the double curves exactly once.) So there are either one or two disks MATH with a single arc, and at most one such disk with three arcs meeting in a triple point. But the latter case means that MATH can be shrunk into the triple point, and the former means MATH can be homotoped into the double arc of MATH in MATH, by an easy examination of the possibilities. Hence this shows that it suffices to consider only embedded disks when requiring the properties in REF . This is very useful in applications in CITE. To show the converse, assume we have an incompressible surface which has the MATH-plane, MATH-line and triple point properties.
Notice that in the paper of CITE, the triple point property is enough to show that once the number of triple points has been minimised for a least area representative of MATH, then the combinatorics of the surface are rigid. So we get that MATH has exactly the properties as in REF . |
math/0008245 | This follows immediately from REF , by observing that since the boundary of every meridian disk meets the double curves at least four times, there are no non-trivial MATH - gons in the complement of MATH in MATH for MATH and no MATH - gons. Hence MATH is almost cubed, as MATH in MATH has similar properties to MATH in MATH. |
math/0008245 | The argument is very similar to those for REF above, so we outline the modifications needed. Suppose there is a compressing or boundary compressing disk MATH for one of the surfaces MATH. We may assume that all the previous MATH are incompressible and boundary incompressible by induction. Consider MATH, the graph of intersection of MATH with the previous MATH, pulled back to the domain of MATH. Then MATH is a degree three graph; however at each vertex there is a MATH pattern as one of the incident edges lies in some MATH and the other two in the same surface MATH for some MATH. We argue that the graph MATH can be simplified by moves similar to the ones in REF . First of all, note that by an innermost region argument, there must be either an innermost MATH - gon, MATH - gon or a triangular component of the closure of the complement of MATH. For we can cut up MATH first using the arcs of intersection with MATH, then MATH etc. Using the first collection of arcs, there is clearly an outermost MATH - gon region in MATH. Next the second collection of arcs is either disjoint from this MATH - gon or there is an outermost MATH - gon. At each stage, there must always be an outermost MATH - gon or MATH - gon. (Of course any simple closed curves of intersection just start smaller disks which can be followed by the same method. If such a loop is isolated, one gets an innermost MATH - gon which is readily eliminated, by assumption). By supposition, such a MATH - gon or MATH - gon can be homotoped into either a boundary curve or into a boundary vertex of some MATH. We follow this by slightly deforming the map to regain general position. The complexity of the graph is defined by listing lexicographically the numbers of vertices with a particular label. The label of each vertex is given by the subscript of the first surface of MATH containing the vertex. (compare CITE for a good discussion of this lexicographic complexity). 
The homotopy above can be readily seen to reduce the complexity of the graph. Note that the hypotheses only refer to embedded MATH - gons, but, as in the proof of REF , it is easy to show that if there are only trivial embedded MATH - gons for MATH, then the same is true for immersed MATH - gons, using NAME 's lemma and the loop theorem. Similarly, REF can be converted to a statement about embedded disks, using NAME 's lemma and the loop theorem, since cutting open the manifold using the previous surfaces converts the polygon into a MATH - gon. This completes the proof of REF . |
math/0008245 | Note that the cases where either MATH has incompressible boundary or is non-orientable, are not so interesting, as then MATH and MATH are NAME and the result follows by NAME 's theorem CITE. So we restrict attention to the case where MATH and MATH are closed and orientable. Our method is a mixture of those of CITE and CITE and we indicate the steps, which are all quite standard techniques. CASE: By REF , if MATH is the canonical surface for the cubed manifold MATH and MATH is the cover corresponding to the fundamental group of MATH, then MATH lifts to an embedding denoted by MATH again in MATH. For by REF , all the lifts of MATH to the universal covering MATH are embedded planes and MATH stabilises one of these planes with quotient the required lift of MATH. Let MATH be the homotopy equivalence and assume that MATH has been perturbed to be transverse to MATH. Denote the immersed surface MATH by MATH. Notice that MATH lifts to a map MATH between universal covers and so all the lifts of MATH to MATH are properly embedded non-compact surfaces. In fact, if MATH is the induced cover of MATH corresponding to MATH, that is, with fundamental group projecting to MATH, then there is an embedded lift, denoted MATH again, of MATH to MATH, which is the inverse image of the embedded lift of MATH. The first step is to surger MATH in MATH to get a copy of MATH as the result. We will be able to keep some of the nice properties of MATH by this procedure, especially the MATH - plane property. This will enable us to carry out the remainder of the argument of NAME and NAME quite easily. For convenience, we will suppose that MATH is orientable. The non-orientable case is not difficult to derive from this; we leave the details to the reader. (All that is necessary is to pass to a MATH - fold cover of MATH and MATH, where MATH lifts to its orientable double covering surface.) Since MATH is a homotopy equivalence, so is the lifted map MATH. 
Hence if MATH is not homeomorphic to MATH, then the induced map on fundamental groups of the inclusion of MATH into MATH has kernel in MATH. So we can compress MATH by NAME 's lemma and the loop theorem. On the other hand, the ends of MATH pull back to ends of MATH and a properly embedded line going between the ends of MATH meeting MATH once, pulls back to a similar line in MATH for MATH. Hence MATH represents a non-trivial homology class in MATH and so cannot be completely compressed. We conclude that MATH compresses to an incompressible surface MATH separating the ends of MATH. Now we claim that any component MATH of MATH which is homologically non-trivial, must be homeomorphic to MATH. Also the inclusion of MATH induces an isomorphism on fundamental groups to MATH. The argument is in CITE, for example, but we repeat it for the benefit of the reader. The homotopy equivalence between MATH and MATH induces a map MATH which is non-zero on second homology. So MATH is homotopic to a finite sheeted covering. Lifting MATH to the corresponding finite sheeted cover of MATH, we get a number of copies of MATH, if the map MATH is not a homotopy equivalence. Now the different lifts of MATH must all separate the two ends of the covering of MATH and so are all homologous. (Note as the second homology is cyclic, there are exactly two ends). But then any compact region between these lifts projects onto MATH and so MATH is actually compact, unless MATH is non-orientable, which has been ruled out. Finally to complete this step, we claim that if MATH is projected to MATH and then lifted to MATH, then the result is a collection of embedded planes MATH satisfying the MATH - plane property. Notice first of all that all the lifts MATH of MATH to MATH satisfy the `MATH - surface' property. In other words, if any subcollection of four components of MATH are chosen, then there must be a disjoint pair. This is evident as MATH is the pull-back of MATH and so MATH is the pull-back of MATH. 
Then the MATH - plane property clearly pulls back to the `MATH - surface' property as required. Now we claim that as MATH is surgered and then a component MATH is chosen, this can be done so that the MATH - surface property remains valid. For consider some disk MATH used to surger the embedded lift MATH in MATH. By projecting to MATH and lifting to MATH, we have a family of embedded disks surgering the embedded surfaces MATH. It is sufficient to show that one such MATH can be selected so as to miss all the surfaces MATH in MATH which are disjoint from a given surface MATH containing the boundary of MATH, as the picture in the universal covering is invariant under the action of the covering translation group. This is similar to the argument in CITE. First of all, if MATH meets any such surface MATH in a loop which is non-contractible on MATH, we can replace MATH by the subdisk bounded by this loop. This subdisk has fewer curves of intersection with MATH than the original. Of course the subdisk may not be disjoint from its translates under the stabiliser of MATH. However we can fix this up at the end of the argument. We relabel MATH by MATH if this step is necessary. Suppose now that MATH meets any surface MATH disjoint from MATH in loops MATH which are contractible on MATH. Choose an innermost such loop. We would like to do a simple disk swap and reduce the number of such surfaces MATH disjoint from MATH met by MATH. Note we do not care if the number of loops of intersection goes up during this procedure. However we must be careful that no new planes are intersected by MATH. So suppose that MATH bounds a disk MATH on MATH met by some plane MATH which is disjoint from MATH, but MATH does not already meet MATH. Then clearly MATH must meet MATH in a simple closed curve in the interior of MATH. Now we can use the technique of NAME and NAME to eliminate all such intersections in MATH. 
For by the MATH - surface property, either such simple closed curves are isolated (that is, not met by other surfaces) or there are disjoint embedded arcs where the curves of intersection of the surfaces cross an innermost such loop. But then we can start with an innermost such MATH - gon between such arcs and by simple MATH - gon moves, push all the arcs equivariantly outside the loop. At each stage we decrease the number of triple points and eventually can eliminate the contractible double curves. The conclusion is that eventually we can pull MATH off all the surfaces disjoint from MATH. Finally to fix up the disk MATH relative to the action of the stabiliser of MATH, project MATH to the compact surface in MATH. Now we see that MATH may project to an immersed disk, but all the lifts of this immersed disk with boundary on MATH are embedded and disjoint from the surfaces in MATH which miss MATH. We can now apply NAME 's lemma and the loop theorem to replace the immersed disk by an embedded one in MATH. This is obtained by cutting and pasting, so it follows immediately that any lift of the new disk with boundary on MATH misses all the surfaces which are disjoint from MATH as desired. This completes the first step of the argument. CASE: The remainder of the argument follows that of NAME and NAME closely. By REF we have a component MATH of the surgered surface which gives a subgroup of MATH mapped by MATH isomorphically to the subgroup of MATH corresponding to MATH. Also MATH is embedded in the cover MATH and all the lifts have the MATH - plane property in MATH. All that remains is to use NAME and NAME 's triple point cancellation technique to get rid of redundant triple points and simple closed curves of intersection between the planes over MATH. Eventually we get a new surface, again denoted by MATH, which is changed by an isotopy in MATH and has the MATH - line and triple point properties. 
It is easy then to conclude that an equivariant homeomorphism between MATH and MATH can be constructed. Note that this case is the easy one in NAME and NAME 's paper, as the triple point property means that triangular prism regions cannot occur and so no crushing of such regions is necessary. |
quant-ph/0008030 | Let MATH be a normal state of MATH. Then, by hypothesis, MATH is a normal state of MATH. Define a state MATH on MATH by MATH . Since MATH is normal, MATH. Define a state MATH on MATH by MATH . Since MATH is normal, MATH. Now, REF entail that MATH for any MATH, and thus MATH for any MATH. However, a state of the NAME algebra is uniquely determined (via linearity and uniform continuity) by its action on the generators MATH. Thus, MATH and since MATH, it follows that MATH and MATH are quasi-equivalent. |
quant-ph/0008030 | By hypothesis, the bijective mapping MATH must map the self-adjoint part of MATH onto that of MATH. Extend MATH to all of MATH by defining MATH . Clearly, then, MATH preserves adjoints. Recall that a family of states MATH on a MATH-algebra is called full just in case MATH is convex, and for any MATH, MATH for all MATH only if MATH. By hypothesis, there is a bijective mapping MATH from the ``physical'' states of MATH onto the ``physical'' states of MATH. According to both the conservative and liberal construals of physical states, the set of physical states includes normal states. Since the normal states are full, the domain and range of MATH contain full sets of states of the respective MATH-algebras. By REF and the fact that the domain and range of MATH are full sets of states, MATH arises from a symmetry between the MATH-algebras MATH and MATH in the sense of NAME REF . Their REF then apply to guarantee that MATH must be linear and preserve NAME structure (that is, anti-commutator brackets). Thus MATH is a NAME MATH-isomorphism. Now both MATH and MATH are NAME algebras, and the latter has a trivial commutant. Thus KR REF applies, and MATH is either a MATH-isomorphism or a MATH-anti-isomorphism, which reverses the order of products. However, such reversal is ruled out; otherwise we would have, using the NAME relations REF , MATH for all MATH. This entails that the value of MATH on any pair of vectors is always a multiple of MATH which, since MATH is bilinear, cannot happen unless MATH identically (and hence MATH). It follows that MATH is in fact a MATH-isomorphism. And, by REF , MATH must map MATH to MATH for all MATH. Thus MATH is quasi-equivalent to MATH. |
quant-ph/0008030 | For clarity, we suppress the representation map MATH. Suppose that MATH is a bounded function. We show that if MATH, then MATH for all MATH. The NAME operators on MATH satisfy the commutation relation (BR CITE , REF ): MATH . Using REF , we find MATH and from this, MATH. Now let MATH be in the domain of MATH. Then a straightforward calculation shows that MATH . Let MATH be an infinite orthonormal basis for MATH, and let MATH be the vector whose MATH-th component is MATH and whose other components are zero. Now, for any MATH, we have MATH. Thus, REF gives MATH . Hence, MATH . Since MATH is generated by the MATH, REF holds when MATH is replaced with any element in MATH. On the other hand, MATH is an eigenvector with eigenvalue MATH for MATH while MATH is an eigenvector with eigenvalue MATH for MATH. Thus, MATH while MATH for all MATH. Thus, the assumption that MATH is in MATH (and hence satisfies REF ) entails that MATH. |
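The proof above turns on the standard ladder-operator commutation relation and the spectrum of the associated number operator. As a concrete illustration (my own finite-dimensional sketch, not the paper's redacted operators), the algebra can be realized on an n-level cutoff, where the relation holds exactly away from the cutoff level:

```python
import numpy as np

# Hypothetical finite-dimensional illustration of the ladder-operator
# algebra: a is the truncated annihilation operator, N = a† a the
# number operator on an n-dimensional cutoff space.
n = 6
a = np.diag(np.sqrt(np.arange(1, n)), k=1)   # a|k> = sqrt(k)|k-1>
N = a.conj().T @ a

# [a, a†] = I holds in every diagonal entry except the last one,
# which is an artifact of the cutoff; N has spectrum 0, 1, ..., n-1.
comm = a @ a.conj().T - N
print(np.allclose(np.diag(comm)[:-1], 1.0))               # True
print(np.allclose(np.linalg.eigvalsh(N), np.arange(n)))   # True
```

The cutoff artifact in the last diagonal entry is exactly why the genuine (infinite-dimensional) relation forces the operators involved to be unbounded, as the proof exploits.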
quant-ph/0008030 | By NAME REF , MATH is a ``type III" NAME algebra which, in particular, contains no atomic projections. Since MATH is irreducible and MATH factorial, either MATH and MATH are disjoint, or they are quasi-equivalent. However, since MATH, the weak closure of the NAME representation clearly contains atomic projections. Moreover, MATH-isomorphisms preserve the ordering of projection operators. Thus there can be no MATH-isomorphism of MATH onto MATH, and the NAME and NAME representations of MATH are disjoint. |
quant-ph/0008030 | Again, we use the fact that MATH REF does not contain atomic projections, whereas MATH REF does. Suppose, for reductio ad absurdum, that MATH and MATH are not disjoint. Since both these states are pure, they induce irreducible representations, which therefore must be unitarily equivalent. Thus, there is a weakly continuous MATH-isomorphism MATH from MATH onto MATH such that MATH for each MATH. In particular, MATH maps MATH onto MATH; and, since MATH is weakly continuous, it maps MATH onto MATH. Consequently, MATH contains an atomic projection, a contradiction. |
quant-ph/0008030 | We shall prove the contrapositive. Suppose, then, that there is some extension MATH of MATH to MATH that is dispersion-free on all bounded functions of MATH. Then MATH is multiplicative for the product of the bounded operator MATH with any other element of MATH (KR CITE , REF ). Hence, by REF , MATH for all MATH and MATH. In particular, we may set MATH, and it follows that MATH for all MATH. Since MATH is a one-to-one function of MATH, it follows from REF that MATH and MATH is a real-linear isometry of the NAME space MATH. We next show that MATH is in fact a unitary operator on MATH. Since MATH is a symplectomorphism, MATH for any two elements MATH. We also have MATH using the fact that MATH is isometric. But MATH, since MATH is real-linear. Thus, MATH using again the fact that MATH is isometric. Cancellation with REF then gives MATH. Thus, MATH preserves the inner product between any two vectors in MATH. All that remains to show is that MATH is complex-linear. So let MATH. Then, MATH for all MATH. Since MATH is onto, it follows that MATH for all MATH and therefore MATH. Finally, since MATH is unitary and MATH, it follows that MATH. However, if MATH, then MATH since MATH is a complex structure. Since MATH is also a complex structure, it follows that MATH for all MATH and MATH. Therefore, MATH. |
quant-ph/0008030 | MATH may be thought of as a real NAME space relative to either of the inner products MATH defined by MATH . We shall use NAME and NAME 's REF : MATH are unitarily equivalent if and only if the positive operator MATH on MATH is trace-class relative to MATH. (Since unitary equivalence is symmetric, the same ``if and only if'' holds with MATH.) As we know, we can build any number operator MATH REF on MATH by using the complex structure MATH in REF . In terms of field operators, the result is MATH . Observe that MATH, which had better be the case, since MATH represents the number of MATH-quanta with wavefunction in the subspace of MATH generated by MATH. The expectation value of an arbitrary ``two-point function'' in MATH-vacuum is given by MATH invoking REF in the first equality, and the NAME relations REF together with REF to obtain the second. Plugging REF back into REF and using REF eventually yields MATH . Next, recall that on the NAME space MATH, MATH, where MATH is any orthonormal basis. Let MATH be any extension of MATH to MATH. The calculation that resulted in REF was done in MATH; however, only finitely many degrees of freedom were involved. Thus the NAME uniqueness theorem ensures that REF gives the value of each individual MATH. Since for any finite MATH, MATH as positive operators, we must also have MATH . Thus, MATH will be defined just in case the sum MATH converges. Using REF , this is, in turn, equivalent to MATH . However, it is easy to see that MATH is a MATH-orthonormal basis just in case MATH forms an orthonormal basis in MATH relative to the inner product MATH. Thus, REF is none other than the statement that the operator MATH on MATH is trace-class relative to MATH, which is equivalent to the unitary equivalence of MATH. (The same argument, of course, applies with MATH throughout.) |
quant-ph/0008030 | Suppose that MATH and MATH are disjoint; that is, MATH. First, we show that MATH, where MATH is the quadratic form on MATH which, if densely defined, would correspond to the total MATH-quanta number operator. Suppose, for reductio ad absurdum, that MATH contains some unit vector MATH. Let MATH be the state of MATH defined by MATH . Since MATH, it follows that MATH is a regular state of MATH (since MATH itself is regular), and that MATH. Let MATH be the projection onto the closed subspace in MATH generated by the set MATH. If we let MATH denote the subrepresentation of MATH on MATH, then MATH is a representation of MATH with cyclic vector MATH. By the uniqueness of the GNS representation, it follows that MATH is unitarily equivalent to MATH. In particular, since MATH is the image in MATH of MATH, MATH contains a vector cyclic for MATH in MATH. However, by BR ( CITE , REF , MATH), this implies that MATH, a contradiction. Therefore, MATH. Now suppose, again for reductio ad absurdum, that MATH . Let MATH and let MATH. Since the family MATH of projections is downward directed (that is, MATH implies MATH), we have MATH . Now since MATH, it follows that MATH for all MATH. Thus, MATH and MATH, contradicting the conclusion of the previous paragraph. |
quant-ph/0008031 | Clearly a pure tensor satisfies the exchange property. To prove the converse, we proceed by induction on MATH. We pick a basis MATH of MATH and write MATH where MATH. If MATH for some MATH, the exchange property for the case MATH implies that MATH satisfies the exchange property, and so is a pure tensor by the inductive hypothesis. Next, if MATH and MATH are non-zero, we apply the exchange property to the case where MATH, MATH, MATH and MATH for MATH, and conclude that the tensors MATH and MATH are proportional. It follows that MATH is a pure tensor. |
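The lemma's criterion can be tested numerically on concrete tensors; a minimal sketch (my own illustration, not the paper's redacted construction), assuming finite-dimensional real spaces and using the equivalent characterization that a tensor is pure exactly when every mode unfolding has matrix rank 1:

```python
import numpy as np

def is_pure_tensor(T, tol=1e-10):
    """A tensor is pure (rank 1) iff each of its mode-k unfoldings
    is a rank-1 matrix, i.e. all parallel slices are proportional."""
    for k in range(T.ndim):
        mat = np.moveaxis(T, k, 0).reshape(T.shape[k], -1)
        if np.linalg.matrix_rank(mat, tol=tol) > 1:
            return False
    return True

u, v, w = np.array([1., 2.]), np.array([0., 1.]), np.array([3., 1.])
pure = np.einsum('i,j,k->ijk', u, v, w)      # a pure tensor u⊗v⊗w
ghz = np.zeros((2, 2, 2))
ghz[0, 0, 0] = ghz[1, 1, 1] = 1.0            # sum of two pure tensors

print(is_pure_tensor(pure))   # True
print(is_pure_tensor(ghz))    # False
```

The proportionality of slices checked here is precisely the conclusion drawn from the exchange property in the inductive step.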
quant-ph/0008031 | It is useful to associate to MATH a linear map MATH. If MATH has rank MATH then MATH and we are in REF . So we may assume MATH has rank MATH. We will consider MATH as a MATH by MATH matrix. Consider the homogeneous degree MATH polynomial MATH. There are MATH cases to consider: REF there are exactly two points in MATH where MATH vanishes. Let MATH be homogeneous coordinates for these two points. Then we can make a change of basis in the first MATH so that these MATH points are MATH and MATH. Then both MATH and MATH have rank MATH; they must both be non-zero, otherwise all MATH would have rank MATH. After a change of basis in the second and third copies of MATH, we may assume MATH and MATH for suitable MATH, not both equal to MATH. In this case, the tensor MATH is equal to MATH, so it has rank equal to MATH. By direct computation, we see that MATH if MATH REF , or MATH belongs to MATH (respectively, MATH) if MATH (respectively, MATH), which belongs to REF . REF there is only one point MATH of MATH at which MATH vanishes. We may assume this point is MATH. We can think of MATH as giving a parameterization of a curve in MATH which is tangent to the quadric surface MATH consisting of rank MATH matrices. We can change bases in all three copies of MATH so that MATH. As the tangent plane to MATH at MATH is spanned by MATH and MATH, we can change the basis vector MATH in the first MATH so that MATH. Next, as MATH and MATH must both be non-zero, we can change bases in the other copies to arrange that MATH. Then our tensor MATH is MATH, and has rank exactly MATH. Indeed it has the property that for any non-zero MATH, the tensor in MATH obtained by contracting MATH with MATH has rank equal to MATH; thus MATH cannot be a sum of two pure tensors. We verify easily that MATH. 
Or we can see geometrically that the corresponding point in MATH belongs to the dual variety to the NAME product MATH, which means that the hyperplane defined by MATH is tangent to the NAME variety at some point. The relevant point of MATH is MATH: notice that the tangent space to MATH at MATH is spanned by tensors of the type MATH where MATH. Since MATH is orthogonal to all these tangent vectors, it is orthogonal to the tangent space of MATH at MATH. REF the polynomial MATH vanishes identically; this means that the linear map MATH always has rank MATH. This can happen in either of MATH ways: REF there is a vector MATH and a linear map MATH such that REF there is a vector MATH and a linear map MATH such that MATH . We need only consider REF . Then we have MATH. If MATH and MATH are linearly dependent, the tensor MATH is pure and we are in REF . Otherwise, MATH has rank MATH and after changes of bases in the second and third copies of MATH it takes the form MATH. Then MATH belongs to MATH. |
quant-ph/0008031 | The four cases of the statement correspond to the four cases of REF . REF is obvious. REF follows as MATH is locally equivalent to MATH for some MATH; by NAME 's theorem MATH is locally equivalent to MATH. In REF , we have MATH where MATH and MATH are linearly independent for each MATH. There is no harm in assuming that the vectors MATH have norm MATH. By rescaling MATH by a phase and applying a local transformation we can assume MATH and MATH for MATH. We can also arrange that MATH for MATH and MATH for some MATH. This gives the normal form; note the reduction to MATH is easy to achieve by changing the signs of MATH and MATH in the MATH-th factor MATH. In REF , the tensor MATH has the form MATH where MATH and MATH are linearly independent for each MATH. There are two types of degrees of freedom in the expression of MATH in this form. First we have the transformation MATH. With its help we can arrange that MATH. Secondly we have MATH. This is used to arrange that MATH. By rescaling the MATH, we may assume that MATH and MATH have norm MATH. After applying a local unitary transformation, we obtain MATH and MATH for MATH. Then we have MATH and MATH for suitable MATH. Write MATH for MATH. Thus we have MATH for some MATH. Clearly a phase change for MATH will make MATH real, so we can assume MATH. We can of course assume MATH by changing MATH and MATH appropriately. In the rest of the proof we use the notation MATH to denote the vector MATH in the MATH-th copy of MATH. We will next do a simultaneous phase change MATH . This operation does not change MATH but rescales MATH as well as MATH; so we can assume MATH is real. Finally a phase rescaling of MATH will make MATH real without changing MATH or MATH. This way we easily get the normal form. |
quant-ph/0008031 | Let MATH be the image of an algebraic mapping MATH, where MATH is the locally closed algebraic subvariety comprised of MATH-tuples MATH where MATH are distinct, MATH and MATH belong to some MATH-plane, and MATH. It is easy to see that MATH is irreducible; thus standard results in algebraic geometry say that MATH is constructible and its closure is irreducible. We easily have MATH, so that MATH is constructible. It is clear that MATH for MATH, so that MATH is irreducible. |
quant-ph/0008031 | Let MATH be the rank of MATH and MATH the rank of the range of MATH. Thus MATH is a linear combination of pure tensors MATH. Write MATH where MATH and MATH. Then we have MATH, so that MATH is a linear combination of the pure tensors MATH and MATH. In the other direction, assume that the range of MATH is contained in the linear span of the pure tensors MATH. Then there are linear forms MATH on MATH (so MATH) such that MATH. This means that MATH and MATH. |
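The relationship used above, between the range of the associated linear map and the span of pure tensors in a decomposition, can be made concrete by flattening; a small numpy sketch (my own illustration, assuming finite-dimensional real spaces):

```python
import numpy as np

# A tensor T in V1 ⊗ V2 ⊗ V3 induces a linear map V1* -> V2 ⊗ V3
# whose matrix is the mode-1 unfolding of T; the rank of that matrix
# is the dimension of the span of the slices T[0], T[1], ...
# Each pure tensor in any decomposition of T contributes at most one
# dimension to this range, so tensor rank >= unfolding rank.
rng = np.random.default_rng(0)
T = rng.standard_normal((2, 3, 3))
phi = T.reshape(2, -1)            # rows of phi span the range

print(np.linalg.matrix_rank(phi))
```

For a generic tensor of this shape the printed rank is 2, the maximum possible, matching the lower bound on tensor rank given by the range.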
quant-ph/0008031 | The NAME group MATH acts naturally on MATH and preserves each MATH. Let MATH be the subspace spanned by MATH, MATH and MATH. Clearly the closure of MATH is the closure of the MATH-saturation MATH. We can compute its dimension as follows. We consider the infinitesimal equation of the NAME algebra MATH on MATH. For MATH, we denote by MATH the space of MATH such that MATH. Then we have MATH . This follows as the right-hand side is the rank of the mapping MATH at the point MATH. Now for any MATH, the tensor MATH belongs to MATH. Let MATH be the space comprised of the MATH such that MATH is a linear combination of MATH and MATH. Since MATH is the direct sum of MATH and of the line spanned by MATH, we have MATH. So it suffices to compute MATH. Now for MATH with MATH we compute: MATH . So MATH belongs to MATH iff the coefficients of MATH, MATH, MATH, and the other tensors obtained from these by permutations, all vanish. At first sight this is just a system of linear equations in MATH unknowns, but one can essentially separate them according to the four groups of four variables MATH by introducing the sums MATH. One gets the equations: CASE: For each MATH, REF For each MATH, REF For each permutation MATH of MATH we have MATH. REF easily implies that MATH is independent of MATH. By summing each of the three types of equations over all choices of indices (or of permutations for the third), we get consistency requirements for MATH; these are easily solved to yield: MATH . Here MATH is a free parameter; once it is chosen we can solve for MATH and obtain MATH, where MATH, MATH are matrices independent of MATH, and MATH are some scalars. The fact that MATH is independent of MATH then implies that MATH is too; call this scalar MATH. Then we need the MATH to sum up to MATH, etc. This gives the value MATH for MATH and the requirement MATH. Counting the free parameters we obtain MATH. It follows that MATH has dimension MATH. 
It is clearly contained in the codimension MATH subvariety defined by the vanishing of the MATH; the latter variety is seen to be irreducible, thus it must equal the closure of MATH. |
quant-ph/0008031 | We associate to MATH as before a linear map MATH and compute the rank of its image. If MATH has rank MATH it is clear that MATH has rank MATH, so we can assume MATH is injective. We can think of MATH as parameterizing a line MATH in MATH. If this line is not contained in the hypersurface MATH, then MATH of its points have rank MATH, and it follows using REF that MATH has rank MATH. Thus we need to focus on the case where MATH is contained in MATH. First of all, there is the case where MATH is contained in MATH for some MATH. In that case it is easy to see that MATH is of rank MATH. So we can assume that MATH contains a point MATH which belongs to none of the MATH; so MATH is MATH-conjugate to MATH. For this choice of vector MATH we can write down the equations on a tensor MATH so that the line through MATH and MATH is contained in MATH, that is, MATH vanishes identically. It is natural to consider MATH as a vector modulo scaling in the quotient space MATH, that is, as an element of projective space MATH. Look at the equation giving the vanishing of the coefficient of MATH, as MATH ranges from MATH to MATH. The first equation is MATH. The second is MATH; this is a non-degenerate quadratic form in MATH variables. The third equation involves the new variables MATH and is linear as a function of these MATH variables. The fourth equation involves also the last variable MATH, and is linear in MATH. The variety of MATH such that MATH thus has a dense open set which is obtained by successive fibrations with fibers that are irreducible algebraic varieties; thus it is itself irreducible and its dimension is equal to MATH. Denote by MATH the big MATH-orbit inside MATH, which is the complement of MATH. Now consider the algebraic variety MATH comprised of pairs MATH where MATH and MATH is a line through MATH which lies entirely inside MATH. This is a locally closed subvariety of the product of MATH with the NAME manifold of lines in MATH. 
Then the projection map MATH is a fibration, because it is MATH-equivariant and MATH is a single orbit. The dimension of MATH is therefore MATH. What we are really after, however, is the variety MATH of lines contained in MATH and meeting MATH. There is an obvious map MATH which is a smooth mapping with one-dimensional fibers. Therefore MATH has dimension MATH. Now we claim that any line contained in MATH and not contained in any MATH must meet each MATH. For this purpose consider some tensor in MATH, say MATH, and consider again the set of MATH such that MATH. One checks that this forms a subvariety of MATH of dimension MATH. It follows that the set of lines contained in MATH and meeting MATH in finitely many points has a finite ramified covering which maps to MATH with three-dimensional fiber; therefore it has dimension MATH. Note that the lines completely contained in MATH form a variety of dimension MATH. It then follows that any line contained in MATH must meet each MATH. Thus we can change the basis of the first MATH so that MATH and MATH. Then both these tensors have rank MATH, and by REF MATH itself has rank MATH. |
quant-ph/0008047 | Let MATH be the operation maximizing MATH in the definition of MATH. Clearly, if we compose MATH with any operator of the form MATH, this leaves MATH unchanged. The same must then be true after averaging over MATH (``twirling'' CITE). We may thus assume MATH, where MATH is the twirling superoperator. We find MATH must have the form MATH, and since MATH we can solve for MATH and MATH. It follows that MATH . But then we compute MATH . Setting MATH, we obtain: MATH . This operator is positive if and only if MATH and MATH. We also find MATH . Since MATH are orthogonal projections, we find that MATH is positive if and only if MATH . The theorem follows by noting MATH . |
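The twirling step admits a concrete finite-dimensional illustration. A sketch assuming the standard isotropic twirl over unitaries of the form U ⊗ Ū (the paper's actual group and operators are redacted as MATH here), which projects any state onto the span of the maximally entangled projector Φ and the identity while preserving the fidelity F = tr(Φρ):

```python
import numpy as np

d = 2
psi = np.eye(d).reshape(d * d) / np.sqrt(d)   # maximally entangled vector
Phi = np.outer(psi, psi)                      # projector onto it
I = np.eye(d * d)

def twirl(rho):
    """Isotropic twirl: keeps F = tr(Phi rho) fixed and replaces rho
    by the unique state of the form F*Phi + (1-F)*(I-Phi)/(d^2-1)."""
    F = np.trace(Phi @ rho).real
    return F * Phi + (1 - F) * (I - Phi) / (d * d - 1)

rng = np.random.default_rng(1)
A = rng.standard_normal((d * d, d * d))
rho = A @ A.T
rho /= np.trace(rho)                          # a random density matrix

tau = twirl(rho)
print(np.isclose(np.trace(Phi @ tau), np.trace(Phi @ rho)))  # True
print(np.isclose(np.trace(tau), 1.0))                        # True
```

Because the twirled state lives in a two-parameter family, the optimization can then be solved in closed form, which is what the proof exploits.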
quant-ph/0008047 | Let MATH be primal optimal for MATH. Then MATH . Similarly, let MATH and MATH be primal optimal for MATH and MATH respectively. Then MATH is primal feasible for MATH, thus giving the second inequality. |
quant-ph/0008047 | Let MATH be an operator satisfying the constraints above. Then for any operators MATH, MATH, MATH, we have: MATH . If MATH, MATH, MATH, and MATH then the last four terms are all nonnegative, and we have MATH and thus MATH . In fact, by the theory of duality for SDPs, this inequality can be made tight, to wit: MATH minimizing over operators satisfying the constraints. Upon adding a variable MATH with MATH, the constraints become MATH . We thus find MATH . But we readily see that MATH proving the theorem. |
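The duality argument above rests on the standard weak-duality fact for semidefinite programs: any dual-feasible point gives an upper (or lower) bound on the primal value, and the gap is an inner product of two positive semidefinite matrices, hence nonnegative. A minimal numerical sketch, with matrices `C`, `A` and feasible points chosen purely for illustration:

```python
import numpy as np

# Primal:  minimize <C, X>  subject to  <A, X> = b,  X PSD
# Dual:    maximize b * y   subject to  C - y * A PSD
C = np.diag([1.0, 2.0])
A = np.eye(2)
b = 1.0

X = np.diag([0.5, 0.5])  # primal feasible: trace(X) = 1, X PSD
y = 0.8                  # dual feasible: C - 0.8 * I = diag(0.2, 1.2) is PSD

primal_val = np.trace(C @ X)  # 1.5
dual_val = b * y              # 0.8

# Weak duality: primal - dual = <C - y*A, X> >= 0,
# the trace inner product of two PSD matrices
gap = np.trace((C - y * A) @ X)
print(primal_val, dual_val, gap)
# Here the bound can be made tight: X* = diag(1, 0) and y* = 1 both give 1.
```

As in the proof, "this inequality can be made tight" is exactly strong duality holding for the SDP in question.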
quant-ph/0008047 | If MATH and MATH are dual optimal for MATH and MATH, then MATH . Similarly, if MATH is dual optimal for MATH, then MATH . |
quant-ph/0008047 | For MATH, take MATH, MATH. For MATH, take MATH, MATH. |
quant-ph/0008047 | For the first inequality, let MATH and MATH be primal optimal for MATH and MATH; then MATH is primal feasible for MATH, giving the inequality. For the second inequality, let MATH be dual optimal for MATH. Then, taking MATH, we have MATH . |
quant-ph/0008047 | By the theorem, we have, writing MATH: MATH . |
quant-ph/0008047 | (First proof) Let MATH be primal optimal for MATH. Then MATH is primal feasible for MATH, so MATH . (Second proof) Let MATH be dual optimal for MATH. Then MATH . Here we used the facts that for a positive superoperator MATH and an arbitrary operator MATH, MATH . |
quant-ph/0008047 | We first consider MATH. Writing MATH, we have MATH with MATH subject to the constraints MATH . Since increasing MATH increases the feasible set, the maximum cannot decrease. Dually, MATH which is nondecreasing in MATH for any choice of MATH. For MATH, we proceed similarly; taking MATH, we have: MATH with MATH subject to the constraints MATH . These constraints become harder to satisfy as MATH increases, and thus the maximum cannot increase. Dually, MATH . But MATH . This, of course, is nonincreasing in MATH, so we are done. |
quant-ph/0008047 | For the first claim, take MATH, MATH, at which point MATH, so MATH. For the second claim, take MATH . Finally, for the third claim, take MATH . In each case, the lower bound coming from MATH agrees with the upper bound coming from MATH, and thus both MATH and MATH are optimal. |
quant-ph/0008047 | We need to show that for any MATH, MATH . Choose MATH between MATH and MATH, and consider the dual SDP bound with MATH . Then MATH is p.p.t., so MATH; the first term is bounded below by REF, by the following lemma. |
quant-ph/0008047 | Let MATH be the projection onto the positive part of MATH; then we need to show that MATH is bounded below by REF. Fix MATH, and consider the statement MATH. For this to be true, we must certainly have MATH . Letting MATH be the largest value of MATH such that these inequalities hold simultaneously for infinitely many MATH, we conclude by REF that MATH . In particular, if MATH, then there exists MATH such that MATH, so MATH as required. |
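"The projection onto the positive part" of a Hermitian operator is the orthogonal projection onto the span of its eigenvectors with positive eigenvalue; pairing it with the operator picks out the sum of the positive eigenvalues. A small sketch of that standard construction (the matrix `H` is an arbitrary illustrative example, not the paper's operator):

```python
import numpy as np

def positive_part_projection(H, tol=1e-12):
    """Orthogonal projection onto the span of eigenvectors of the
    Hermitian matrix H with strictly positive eigenvalues."""
    w, V = np.linalg.eigh(H)      # eigenvalues ascending, columns = eigenvectors
    Vpos = V[:, w > tol]          # keep only positive-eigenvalue directions
    return Vpos @ Vpos.conj().T

H = np.diag([2.0, -1.0, 0.5])
P = positive_part_projection(H)
# <P, H> = sum of the positive eigenvalues of H (here 2 + 0.5)
print(np.trace(P @ H).real)
```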
quant-ph/0008047 | Indeed, this is true for each of the functions MATH and MATH individually, so it must be true for their sum. |
quant-ph/0008047 | Let MATH be the superoperator MATH integrating with respect to the uniform probability measure on MATH. This is trace-preserving, MATH-local (thus p.p.t), and satisfies MATH. The first claim thus follows from the theorem. Similarly, if MATH is the superoperator MATH then the theorem applies to MATH. |
quant-ph/0008047 | Let MATH be primal optimal for MATH such that MATH is invariant under MATH and MATH. The representations of this group are in one-to-one correspondence with the integers MATH, with MATH and MATH. Writing MATH we have MATH . We next observe that MATH iff MATH and MATH iff MATH . Similarly, the partial transpose MATH is invariant under MATH and MATH. Again the representations are indexed by MATH, with MATH . Defining MATH we obtain the condition MATH . Finally, the relation between MATH and MATH obtains by noting that MATH . |
quant-ph/0008047 | We observe that MATH. Thus if we show that MATH, the proof of REF will apply to give REF ; taking MATH gives REF , and the equations for MATH follow immediately. It thus remains to show MATH (since the other inequality is immediate). Taking MATH we find MATH . |
quant-ph/0008047 | By the above argument, we may assume MATH. Now, MATH . We find that the optimal MATH satisfies MATH . Plugging in, we obtain the stated bound. |
quant-ph/0008047 | Fix an integer MATH, and consider the set MATH consisting of tensor products MATH with each MATH; note, in particular, that MATH is a set of mutually orthogonal projections. Since MATH we have MATH where we define MATH to be the number of factors equal to MATH. Let us then define an operator MATH . We observe that MATH is a projection, so MATH, and that MATH which tends to MATH as MATH as long as MATH . We also compute MATH . If we take MATH, then we obtain the limit MATH . But then by REF , we conclude that MATH whenever MATH. Since this is decreasing over the range, we obtain the strongest bound by taking the limit as MATH, proving the theorem. |
quant-ph/0008047 | That this is an upper bound was shown in CITE, so it suffices to prove the lower bound. We construct a protocol in two steps. First, suppose MATH possesses a transitive group of symmetries; that is, a transitive group MATH of permutations such that MATH . (For instance, the operator MATH is symmetric under the transitive group MATH of cyclic shifts.) We decompose MATH where MATH is the orthogonal projection onto the MATH-eigenspace of MATH. Then MATH is symmetric under the transitive group MATH, and thus has constant diagonal. If we similarly decompose MATH we find MATH . We can thus apply the following lemma to MATH. Let MATH be a maximally correlated operator of dimension MATH such that MATH has constant diagonal. Then MATH . We compute MATH . This is a block matrix with REF- and REF-dimensional blocks; we thus immediately compute that its eigenvalues are MATH for MATH, and MATH for MATH. Since MATH is positive, we have: MATH and thus the largest eigenvalue of MATH in absolute value is MATH. We thus have MATH . Now, write MATH . Then for MATH, we find that since MATH we have MATH . Since MATH, we have proved the theorem in the symmetric case. To reduce the general case to the symmetric case, we adapt the distillation protocol for pure states given in CITE. Given a word MATH in the numbers MATH, we write MATH for the number of times MATH appears in MATH. Then our first step is, given MATH, to measure MATH for MATH. Then the resulting (random) state MATH is maximally correlated, and MATH admits a transitive action of MATH. Now MATH where MATH is the expected value, and the inequality follows from the fact that the measurement is local, and so cannot increase the expected distillable entanglement. It thus suffices to show that MATH . Now, the measurement has at most MATH different outcomes, and so gives us at most MATH bits of information. But then MATH so we find MATH as required. |
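The reduction at the end uses a method-of-types count: measuring only the letter counts of a length-n word over a d-letter alphabet has polynomially many outcomes (the stars-and-bars number C(n+d-1, d-1), at most (n+1)^d), so it leaks only O(log n) bits, which is negligible per copy. A brute-force sketch of that count, with small illustrative values of `n` and `d`:

```python
import math
from itertools import product

def type_count(n, d):
    """Number of distinct count-vectors ("types") of length-n words
    over an alphabet of size d: the stars-and-bars value C(n+d-1, d-1)."""
    return math.comb(n + d - 1, d - 1)

n, d = 5, 3  # small assumed values so brute force is feasible
# Enumerate all d^n words and collect their count vectors
types = {tuple(w.count(a) for a in range(d))
         for w in product(range(d), repeat=n)}
print(len(types), type_count(n, d))
# Polynomial bound: at most (n+1)^d outcomes, hence O(log n) bits of information
print(type_count(n, d) <= (n + 1) ** d)
```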
quant-ph/0008047 | To any subset MATH, we associate a projection MATH which satisfies MATH . For each MATH and each integer MATH, let MATH be the minimum over MATH of the largest eigenvalue of MATH subject to the constraint MATH. Then MATH (take MATH). Since MATH the theorem follows by the classical analogue of REF . |
quant-ph/0008047 | Since nonnegative linear combinations and limits of positive operators are positive, it suffices to prove the result for MATH. In that case, MATH factors as a tensor product of the following operators: MATH . The first two are clearly positive; that the third is positive is a special case of the following lemma. |
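The factorization step relies on the standard fact that a tensor (Kronecker) product of positive semidefinite operators is positive semidefinite: the eigenvalues of the product are products of the factors' eigenvalues. A generic numerical sketch (the random factors are illustrative, not the operators in the proof):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_psd(n):
    """A random positive semidefinite matrix: A A^dag is always PSD."""
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return A @ A.conj().T

P, Q = random_psd(2), random_psd(3)
K = np.kron(P, Q)

# Spectrum of kron(P, Q) = all pairwise products of spectra of P and Q,
# hence nonnegative whenever P and Q are PSD
ev = np.sort(np.linalg.eigvalsh(K))
prod_ev = np.sort(np.outer(np.linalg.eigvalsh(P),
                           np.linalg.eigvalsh(Q)).ravel())
print(ev.min() > -1e-9)
print(np.allclose(ev, prod_ev))
```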