math-ph/0011050
This follows immediately from the strict convexity of MATH and MATH.
math-ph/0011050
The proof is similar to that of REF . Let MATH be a minimizing sequence. By REF we have MATH . By NAME's theorem we can therefore extract a subsequence, still denoted by MATH, such that MATH . Furthermore, MATH is bounded in MATH, which implies that there exists a further subsequence, again denoted by MATH, with MATH . (This relies on the fact that for a smooth bounded domain MATH, MATH is relatively compact in MATH.) Hence, using NAME's Lemma, we get MATH, and by the weak lower semicontinuity of MATH-norms we deduce MATH . In order to prove MATH, we decompose MATH such that both functions are in MATH. With MATH, REF implies MATH . On the other hand, MATH fulfills MATH, which converges to MATH for MATH, and MATH is bounded. Thus MATH . The uniqueness is an immediate consequence of the strict convexity of the functional.
math-ph/0011050
The uniqueness of the minimum, the spherical symmetry of the functional REF and the fact that MATH imply the continuity of MATH away from the origin. With MATH, REF has a meaning in the sense of distributions on the domain MATH. Consider the set MATH . If MATH, then MATH and MATH . For all MATH we find MATH. Let MATH; then, using MATH, we infer MATH .
math-ph/0011050
Denote MATH . On this domain we have MATH with MATH, since MATH. So we conclude from standard elliptic arguments (for example, REF) that MATH is bounded, hence continuous everywhere. From MATH we get that MATH is twice differentiable away from the origin and as long as MATH. By means of REF and a standard bootstrap argument we conclude MATH.
math-ph/0011050
Suppose, by contradiction, that MATH. We do not assume MATH to be finite. Then, as in REF , we get that there are some MATH and a MATH such that MATH for MATH. Since MATH, we conclude MATH for MATH large enough. Hence, from REF we get MATH, which contradicts MATH.
math-ph/0011050
As we have already argued above, the symmetry of MATH follows from the symmetry of the functional and the uniqueness of MATH. Denote by MATH the non-increasing rearrangement of MATH. (For the definition see, for example, REF.) From the fact that MATH and REF we get MATH. REF implies MATH, and again from CITE we get MATH, which proves the statement.
math-ph/0011050
Inserting MATH into REF yields the following equation for the potential, away from the origin: MATH, where we have replaced the constants by one. Since MATH is spherically symmetric, we can use the ansatz MATH and obtain from REF the following fourth-order equation: MATH . We see that in a neighborhood of each point MATH there exists a local solution MATH with MATH. Since MATH is also a solution of REF , every composed function MATH is a local solution around MATH. Away from MATH the solutions can be uniquely continued up to MATH. Hence, there exist a solution MATH and a MATH such that MATH and MATH. Repeating the argument of REF , one can show that a solution of REF with MATH and MATH, MATH uniquely determines the minimum of the functional REF . Therefore MATH uniquely determines the self-consistent potential MATH, which implies that MATH has compact support, too.
math-ph/0011050
Let MATH denote the angular momentum operator parallel to the magnetic field. Since MATH, we have MATH which implies that the eigenvectors of the operator MATH are of the form MATH. Hence we can write the sum of the negative eigenvalues as MATH showing that the MATH's are the eigenvectors of the one-dimensional operator MATH.
math-ph/0011050
First of all we note that for any comparison wave function MATH and fixed integer MATH we get MATH with MATH . If we set MATH, add and subtract MATH in the scalar product, and use as comparison wave function MATH, where MATH is the eigenvector corresponding to the MATH-th lowest eigenvalue MATH of the one-particle operator MATH, then REF reads MATH . Applying REF to the inequality above, we finally arrive at the upper bound REF . Lower bound: Let MATH denote the minimizer of REF , that is, MATH. After again adding and subtracting MATH and using the NAME inequality CITE, we can write the lower bound on MATH as follows: MATH . Using REF we arrive at REF .
math-ph/0011050
Since MATH, we get MATH . Minimizing over MATH, we get the first part of the lemma. The proof of the second part works analogously to REF .
math-ph/0011050
Using the scaling relation REF and defining MATH we get MATH .
math-ph/0011050
Let us rewrite MATH and define the unitary operator MATH . With MATH and the fact that unitary transformations do not change traces, we derive MATH, where MATH . So we have the self-adjoint NAME operator MATH and its symbol MATH . Using the relation MATH and by means of a cut-off function MATH, REF yields MATH, which together with REF immediately implies REF . Recall that MATH for MATH and MATH otherwise. In order to tackle the outer zone MATH, we note that the potential fulfills MATH . So by the definition of the scaling functions MATH and MATH, we can use REF , which states that with a cut-off function MATH we get MATH . Combining REF with REF completes the proof of REF .
math-ph/0011051
We take the bracket of both sides of REF with MATH to obtain MATH . It follows that MATH is divisible by MATH. Since MATH is of degree less than MATH in MATH and since MATH is monic of degree MATH, we must have MATH and MATH . Similarly, we find MATH and also that MATH . These expressions also allow one to compute the brackets of MATH, MATH, MATH and MATH with MATH, and the check of the NAME identity follows easily from them. Therefore, MATH is a NAME bracket for which the coefficients of MATH are NAME. If we compare the expressions above for the brackets with MATH with REF for the NAME vector fields, we conclude that they are Hamiltonian with respect to MATH with Hamiltonian function MATH. Checking that MATH is NAME can be done by a straightforward (but rather long) computation using the following expressions for the derivatives of MATH: MATH . For the second statement, one easily checks that when MATH is odd, MATH is a NAME involution, so that there is an induced bracket on MATH or on MATH. Explicit formulas for this bracket are computed as in the proof of REF . The other statements in REF then follow from REF .
math-ph/0011051
In view of REF the entries of MATH can be written in the form MATH . Note also that, by using the MATH action, we can assume that MATH, MATH, and that MATH is a disjoint union of MATH, with MATH. Let MATH. Then MATH has the following form: MATH . In the upper right corner the matrix has entry MATH, and the blocks MATH, MATH and MATH, MATH are matrices as follows: CASE: MATH is a tridiagonal matrix of size MATH of the form MATH; CASE: MATH is a diagonal matrix of the form MATH, with the convention that if MATH is MATH then its only entry is MATH; CASE: MATH is a matrix with only one non-zero entry MATH, in the lower left corner. It is clear that the set of eigenvalues of MATH is the union of the sets of eigenvalues of the MATH's and MATH's. Now we have: the eigenvalues of the matrix MATH are MATH. Assuming the lemma to hold, we find that the number of negative eigenvalues of MATH is equal to MATH, so the proposition follows. So we are left with the proof of the lemma. We write MATH for MATH and denote by MATH the MATH-th vector of the standard basis of MATH. In the basis MATH the matrix MATH takes the form MATH, where MATH is the transpose of the matrix MATH . We show that this matrix has eigenvalues MATH . Then the result follows because the eigenvalues of MATH are MATH the eigenvalues of MATH. For MATH, let MATH and let MATH denote the span of MATH. For MATH we have MATH if and only if there exists a polynomial MATH of degree less than MATH such that MATH for MATH. Since the MATH-th component of MATH is given by MATH, we have MATH; more precisely, MATH . This means that in terms of the basis MATH the matrix MATH is upper triangular, with the integers MATH on the diagonal.
math/0011001
The proof is straightforward; see CITE and CITE.
math/0011001
It is sufficient to prove the statement for two reduced decompositions that can be identified by applying a braid relation once. Therefore, it is sufficient to check the statement for rank REF NAME algebras MATH. In this case the only element that has two different reduced decompositions is the maximal element MATH. So we can assume that we are dealing with the two different reduced decompositions of MATH, namely MATH. Since MATH, it is easy to see that for either reduced decomposition, MATH are all the positive roots (each occurring exactly once). Hence, the collection MATH does not depend on the reduced decomposition. Let MATH be the numbers defined above for the two decompositions. The vectors MATH, MATH are singular vectors in MATH of weight MATH (they are nonzero, since the algebra MATH has no zero divisors). For NAME algebras MATH, it is easy to see that the space of singular vectors in MATH of weight MATH is REF-dimensional. Indeed, the module MATH is irreducible, so if there were two independent singular vectors of weight MATH, then the direct sum of two copies of MATH would be contained in MATH. But this is impossible, since some weight multiplicities of this direct sum are bigger than the corresponding weight multiplicities of MATH. Therefore, the vectors MATH are proportional. Since MATH is a free module over the subalgebra MATH, we have MATH in MATH for a suitable MATH. We claim that MATH. Indeed, consider the natural homomorphism from MATH to the algebra generated by MATH with the relations MATH, MATH, sending MATH to MATH (it is easy to check that such a homomorphism exists). The images of the two monomials under this homomorphism differ by a power of MATH, which implies that MATH. On the other hand, since the NAME relations are symmetric under MATH, a similar homomorphism exists if MATH is replaced with MATH, which yields MATH. Thus, MATH, as desired.
math/0011001
The proof of existence and uniqueness of MATH is straightforward (see for example, CITE). To prove the invertibility, it is sufficient to observe that for an irreducible module MATH and large MATH, the map MATH is nonzero. Indeed, the tensor product of a NAME module with a finite dimensional module does not contain finite dimensional submodules. Thus, the operator MATH cannot have finite rank, and hence has to be nonzero on MATH.
math/0011001
The lemma is an easy consequence of the definitions: it expresses the fact that the operation of fusion of intertwiners commutes with the operation of restriction of intertwiners to submodules.
math/0011001
Consider the intertwiner MATH. It satisfies the relation MATH . Therefore, MATH . This implies after a straightforward calculation that MATH . Now let us consider the intertwiner MATH. It satisfies the relation MATH . Therefore, MATH . This implies that MATH as desired.
math/0011001
Let MATH. Consider REF in the weight subspace of weight MATH in the tensor product MATH. Let us identify this weight subspace with the opposite one in any way, and take the determinant of both sides of REF . Since the fusion matrix is triangular with the diagonal elements equal to MATH, its determinant is MATH. Therefore, using the decomposition MATH we obtain for MATH: MATH and for MATH . Now let us substitute the values of MATH and MATH computed in the previous section. Then we get MATH . It is clear that MATH is completely determined from this equation. It remains to check that the expression given in the proposition satisfies the equations, which is straightforward.
math/0011001
The existence follows from the above explicit computation of MATH. The uniqueness is obvious, since the function is defined at infinitely many points. The invertibility follows from REF .
math/0011001
REF from CITE implies that as MATH, one has MATH and MATH. Thus, going to the limit MATH in REF , and using REF from CITE, we obtain the proposition.
math/0011001
Straightforward from REF .
math/0011001
We prove the relations for MATH; the relations for MATH follow automatically since these two operators are proportional. If MATH is REF-dimensional, then MATH is known, and it is straightforward to establish the result. From this and REF it follows that the result is true if MATH is the tensor product of any number of REF representations. But any finite dimensional representation is contained in such a product, so we are done.
math/0011001
First of all observe that in MATH, one has MATH (where MATH is the iterated coproduct). On the other hand, using the expression for MATH for REF-dimensional MATH, and the expression for MATH given in REF , we get MATH . But MATH is the submodule of MATH generated by MATH. Thus, for MATH . Now the result follows from REF by a direct calculation.
math/0011001
The proof of this proposition is straightforward from the previous results.
math/0011001
This is proved by a straightforward calculation with intertwiners, which generalizes to the q-case the calculation of CITE. Another proof is given as a remark in REF.
math/0011001
The statement follows easily from the definitions by induction on the length of MATH.
math/0011001
If MATH is large dominant, the statement is clear from REF , since MATH is independent of the reduced decomposition MATH of MATH by the quantum NAME identities REF . For an arbitrary MATH, the statement follows from the fact that a rational function is completely determined by its values at large dominant weights.
math/0011001
As for MATH, the lemma is an easy consequence of the definitions: it expresses the fact that the operation of fusion of intertwiners commutes with the operation of restriction of intertwiners to submodules.
math/0011001
It is enough to establish the first formula. The proof of this formula is obtained by specializing REF to the case MATH, dualization of the second component, and multiplication of the components.
math/0011001
Let MATH be a reduced decomposition. Using the Main Definition, we have MATH (in the last equality we used that MATH depends only on MATH). This implies that MATH . But for MATH we obviously have MATH on MATH. This, together with the Main Definition, implies the result.
math/0011001
The proof follows easily from the results of the previous section and the Main Definition.
math/0011001
Clear.
math/0011001
To prove the statement for any NAME algebra, it is enough to do so for MATH, in which case the result is an easy consequence of the results of REF.
math/0011001
This follows at once by comparing the actions of both sides on basis vectors of MATH.
math/0011001
The result follows immediately from the equality MATH and REF .
math/0011001
This is immediate from REF and the definition.
math/0011001
It is sufficient to prove the result for MATH. In this case, we have a unique simple reflection MATH, and MATH . But MATH on MATH. Therefore, by the explicit formula for MATH we have MATH, as claimed.
math/0011001
Follows immediately from REF .
math/0011001
The proof of this proposition is straightforward.
math/0011001
Straightforward from the definition.
math/0011001
Straightforward from the definition.
math/0011001
The first statement is immediate from REF , as for MATH, the operator valued function MATH of MATH is regular at integer values of MATH when restricted to a weight subspace of a generic NAME module. Let us now prove the second statement. Because MATH is generic, any nonzero homogeneous vector MATH of weight different from MATH is not singular. Thus, it suffices to show that for any non-singular homogeneous vector MATH, one has MATH. Let MATH be a non-singular homogeneous vector in MATH. Then there exists an index MATH such that MATH. We will assume that MATH is an eigenvector for the NAME operator of MATH, since this does not cause a loss of generality. Let MATH be the submodule of MATH over MATH generated by MATH. Then MATH is a NAME module, and MATH is a nonzero homogeneous vector in MATH which is not singular. Let MATH be the reduced decomposition of MATH, such that MATH. By REF , to this decomposition there corresponds a factorization MATH where MATH is the product of the terms in the product formula for MATH corresponding to all but the last factor in the reduced decomposition of MATH. Thus, it suffices to show that MATH. But this is immediate from the product formula for the operator MATH in REF : the first factor in this product has numerator MATH, and so for MATH, the product vanishes whenever the set of indices over which the product is taken is nonempty. The proposition is proved.
math/0011001
It suffices to assume that MATH is homogeneous with respect to the weight decomposition. Let MATH be the NAME generators of MATH corresponding to some reduced decomposition of the maximal element of MATH. By the PBW theorem (see CITE), the submodule MATH is given by MATH, where MATH is the algebra of polynomials in MATH, and the product is taken over all roots in a suitable order. Since the product is finite, and MATH is a sum of finite dimensional MATH-modules (because MATH is conjugate to some MATH under NAME's braid group action on MATH), we have that MATH is finite dimensional.
math/0011001
We have to check that the operators MATH satisfy the braid relations. This can be checked on finite dimensional NAME algebras of rank REF. But in this case, according to REF , everything reduces to the case when MATH is finite dimensional, where the statement is known.
math/0011001
The statements are obtained from REF by sending MATH to infinity. Namely, the first equation follows from REF. The second equation follows from REF.
math/0011001
We will first transform the equality to a convenient form, and then show that both sides satisfy the same ABRR equation CITE, which has a unique solution. This will imply that the two sides are equal. Let us make a change of variable MATH. Then the equation to be proved takes the form MATH . Let MATH. Then the equation takes the form MATH . Replacing MATH with MATH, we obtain the equation MATH . To establish this equation, let us recall that by REF (see also CITE), the element MATH satisfies the ABRR equation MATH . Therefore, the element MATH satisfies the equation MATH . Thus, using REF , we get that the operator on MATH given by MATH satisfies the equation MATH . Therefore, the operator MATH satisfies MATH . Transforming this using the weight zero property of MATH, we get MATH . Now we note that the same equation is satisfied by MATH, by REF. Both of these solutions are triangular, with the diagonal part equal to MATH. But REF claims that such a solution is unique. Therefore, MATH, and the lemma is proved.
math/0011001
This follows directly from REF , and REF .
math/0011001
The proposition is immediate from REF .
math/0011001
The proof is analogous to the proof of the NAME character formula using the approach of CITE; it is based on the fact that in the NAME group of the category MATH, an irreducible module is an alternating sum of NAME modules.
math/0011001
The proof follows from REF , and REF .
math/0011001
We have MATH (for brevity we drop the subscripts indicating the modules in which the operators act). From REF we easily obtain MATH . Let us substitute this equation into the previous equation, and use the fact that in the second component we are restricting to the zero-weight subspace. It is easy to see that the MATH-factors cancel, and we get MATH as desired.
math/0011001
The proof is similar to the proof of the symmetry of MATH given in CITE. It suffices to assume that MATH. Let MATH denote the right-hand side of the equality to be proved. By REF , MATH, like MATH, is a solution of the NAME equations and the dual NAME equations. Moreover, both MATH, MATH have the form MATH times a finite sum of rational functions of MATH multiplied by rational functions of MATH (where the denominators of the rational functions are products of binomials of the form MATH, respectively MATH). Let us regard MATH as functions with values in MATH. It is easy to see, using power series expansions, that a solution of the NAME equations with the above properties is unique up to right multiplication by an operator depending rationally on MATH. Similarly, a solution of the dual NAME equations with such properties is unique up to left multiplication by an operator depending rationally on MATH. So we have MATH, where MATH are rational operator-valued functions of MATH, and hence MATH . Let us take the limit MATH (for all MATH) in the last equality. It follows from the asymptotics of intertwiners (see CITE) that in this limit MATH is equivalent to MATH. So we get MATH for all MATH. Thus, MATH is a constant operator. Using the symmetry of MATH, we get that MATH is also constant, so MATH, where MATH is a constant operator. Finally, let us show that MATH. We have the identity MATH . Using REF , we can rewrite this equation in the form MATH . Now let us take the limit MATH, MATH. Then, using REF , we get MATH, as desired.
math/0011001
This follows by applying REF several times.
math/0011001
REF follows from REF and the definitions of CITE. REF can be checked directly using REF .
math/0011001
This Proposition is well known, but we will give a proof for the reader's convenience. REF follows from the fact that any finite dimensional representation has a weight decomposition with respect to any (quantum) MATH-subalgebra corresponding to a simple root (by representation theory of quantum MATH). Let us prove REF . By the existence of a weight decomposition, it suffices to prove this for irreducible representations. But in an irreducible representation, MATH (respectively, MATH) acts by a scalar. If MATH, this scalar must be zero, as MATH is a linear combination of MATH and thus MATH. If MATH, it suffices to show that MATH (as the weights are integral). But MATH. Since MATH, in a finite dimensional MATH-module we have MATH. Thus, MATH . Thus, MATH, but it is an integer power of MATH, so MATH as desired. REF is clear from REF .
math/0011001
Let MATH denote the operator defined by the right-hand side. Using the identity MATH, we get that MATH for MATH. Since the statement that MATH is NAME group invariant, and MATH, this is sufficient.
math/0011001
We need to show that MATH lands in MATH and that it is a homomorphism. Let us prove the first statement. So let MATH, MATH, and let us show that MATH. It is clear that if MATH (where MATH is the dual vertex to MATH) then MATH is mapped under MATH to MATH, with MATH. Thus, we need to show that MATH and MATH are also mapped to simple positive roots. A simple computation shows that this property is equivalent to the identity MATH. So let us prove this. Let MATH. Clearly, MATH. So MATH is a positive root of the form MATH, where MATH is a linear combination of positive roots except MATH. Similarly, MATH, and MATH. Thus, MATH . Now, we see that since MATH, the height of the right hand side (that is, the sum of the multiplicities of the simple roots) is MATH, and the equality is possible only if MATH. But the right hand side is a positive root, so MATH and hence MATH. Now we prove the second statement (that MATH is a homomorphism). Since MATH lands in MATH, it is sufficient to check that the map MATH induced by MATH is a homomorphism. But this is obvious from the definition.
math/0011001
Straightforward, as in REF ; see also REF.
math/0011001
The lemma is proved by arguments similar to those in CITE. Namely, similarly to CITE, one can write down an explicit formula for MATH and show that its poles are all of first order and can occur only on hyperplanes MATH for positive roots MATH and MATH such that MATH. If a dominant weight MATH belongs to such a hyperplane, then MATH, so MATH is a real root. But it is clear that there exists a number MATH such that for MATH one has MATH for any weight MATH of MATH and any real root MATH.
math/0011001
The proof is analogous to the proof of NAME REF.
math/0011001
The proof is the same as that of REF .
math/0011001
Clear.
math/0011001
This is, after some transformations, the content of REF. This is also the multicomponent version of the ABRR equation for affine NAME algebras, projected to the product of loop representations (see CITE).
math/0011001
We have seen that the operator MATH conjugates the operators MATH to the diagonal operators MATH, and the dynamical action of the braid group to the multicomponent dynamical action. It is easy to see that the multicomponent dynamical action commutes with the operators MATH. This implies the desired statement.
math/0011001
This is the main result of CITE; see also REF (where the simply laced case is treated). This is also the multicomponent version of the ABRR equation for quantum affine algebras, projected to the product of loop representations (see CITE).
math/0011001
Analogous to REF .
math/0011001
The proof is easy.
math/0011001
Recall that any automorphism of an algebra acts on the set of equivalence classes of representations of this algebra. All we need to show is that the representation MATH is stable under the automorphism MATH for any MATH. It follows from NAME's highest weight theory of finite dimensional representations of MATH that there is a unique, up to a shift of parameter, finite dimensional representation of MATH with the same MATH-character as MATH (namely, all such representations have the form MATH for some MATH). On the other hand, it is easy to check that MATH does not change the MATH-character of a representation. This implies the statement.
math/0011001
We have (dropping MATH from the subscripts and MATH from the superscripts for brevity): MATH . Now recall that in the braid group MATH we have MATH, and hence MATH (with both having length MATH in the affine NAME group). This implies that the product MATH is symmetric under interchanging MATH and MATH (the REF-cocycle relation). Thus, the theorem is equivalent to the statement that the expression MATH is symmetric in MATH and MATH. But this statement is MATH-independent, so it is sufficient to prove the theorem in the limit MATH (respectively, MATH). We can also assume that MATH is a single loop representation. Now, for MATH, this statement is clear, since the operators MATH in the limit MATH are just MATH. In particular, for MATH, conjugation by MATH and conjugation by MATH act in the same way on the generators of MATH. But since MATH are MATH-independent, this action is independent of MATH. Thus, the two actions coincide even at MATH. Since MATH is an irreducible module over MATH, by NAME's lemma this means that MATH, where MATH are nonzero complex numbers, and MATH. Finally, taking determinants, we find that some power of MATH is MATH, so by continuity MATH also. The theorem is proved.
math/0011001
REF is exactly REF . REF follows from the fact that the operator MATH commutes with the KZ (qKZ) equations.
math/0011001
This is immediate from the previous results.
math/0011001
The proposition follows immediately from REF and the definitions.
math/0011001
The proof is by a straightforward comparison of the two systems.
math/0011008
Suppose that MATH does not hold; then MATH is clean. Suppose also that MATH does not hold: then by REF , for MATH, there is a point MATH with MATH clean in MATH. This MATH also satisfies the slope condition MATH and is hence, by REF , reachable from MATH.
math/0011008
By REF , for MATH, there is a point MATH with MATH clean in MATH. This MATH also satisfies the slope condition MATH and is hence, by REF , reachable from MATH.
math/0011008
We will see that all the properties in the condition follow essentially from the form of our definitions. Note that when an existential or universal quantifier is applied to a family of monotonically increasing (decreasing) events, the result is a monotonically increasing (decreasing) event (as a function of the sequence MATH). Indeed, these quantifiers are just the maximum and minimum operations. REF says that for a barrier value MATH, the event MATH is an increasing function of MATH. To check this, consider all possible barriers of MATH. We have the following kinds: CASE: Heavy barrier values MATH of MATH: all heavy barriers of MATH remained barriers of MATH, so monotonicity holds, and the event still depends only on MATH. CASE: Barrier values MATH of MATH of the emerging type: such a barrier is defined by the absence of some holes of MATH on some subintervals of MATH. Since the presence of a hole in MATH is a monotonically decreasing event, the presence of an emerging barrier is an increasing event depending only on MATH. CASE: Barrier values MATH of some compound type: such a barrier appears in MATH if certain barriers appear in MATH in MATH in certain positions. Since barrier events are increasing in MATH, compound barrier events of MATH depend only on MATH, in an increasing way. REF says that for a hole value MATH, the event MATH is a decreasing function of MATH. To check this, consider all possible holes of MATH. We have the following kinds: CASE: Heavy hole values MATH of MATH: all holes of the heavy type of MATH remained holes of MATH, so the event still depends only on MATH, in a decreasing way. CASE: Hole values MATH of MATH of the emerging type: such a hole is defined by the property that MATH is a jump. The jump property is a decreasing function of MATH, and therefore so is the property of being a hole of the emerging type. CASE: Hole values MATH of some compound type: such a hole appears in MATH if two holes and a jump appear in MATH in MATH in certain positions. Since hole and jump events are decreasing functions of MATH in MATH, compound hole events of MATH depend only on MATH, in a decreasing way. REF says first that for every point MATH and integer MATH, the events MATH, MATH are decreasing functions of MATH and MATH respectively. The property that MATH is strongly left MATH-clean in MATH is defined in terms of strong left-cleanness of MATH in MATH and the absence of certain barriers in MATH. NAME left MATH-cleanness in MATH is a decreasing function of MATH, and so is the absence of barriers in MATH. Therefore strong left MATH-cleanness of MATH in MATH is a decreasing function of MATH. Since both strong and regular left MATH-cleanness in MATH are decreasing functions of MATH, and the properties stating the absence of barriers/walls are decreasing functions of MATH, both strong and regular left MATH-cleanness are also decreasing functions of MATH in MATH. The inequality MATH implies that these functions reach their minimum for MATH. Similar relations hold for right-cleanness.
math/0011008
Let MATH be the sequence of maximal external intervals of MATH, of size MATH. (We consider MATH, so that MATH is automatically of size MATH.) Let MATH be the intervals betwen them. By REF of MATH, each MATH can be covered by a sequence of neighbors MATH in MATH. Every wall of MATH intersects an element of this sequence. Each pair of these neighbors will be closer than MATH to each other. Indeed, each point of the hop between them belongs either to a wall intersecting one of the neighbors, or to a maximal external interval of size MATH, so the distance between the neighbors is at most MATH. Now operation REF above puts some new walls of the emerging type between the intervals MATH or into some of the hops, all of them disjoint from all existing walls and each other. If one of these new walls MATH comes closer than MATH to some interval MATH then we add MATH to the sequence MATH; otherwise, we start a new sequence with MATH. If as a result of these additions (or originally), some of these sequences come closer than MATH to each other then we unite them. By the properties of MATH and the construction of walls of emerging type, the external intervals between the sequences are hops. Between the resulting sequences, the distance is MATH. Within these new sequences, every pair of neighbors is closer than MATH. Consider one of the above sequences, let us call its elements MATH. If it consists of a single light wall MATH then it is farther than MATH from all other sequences and operation REF removes it. Since MATH is surrounded by maximal intervals, any potential heavy wall MATH of MATH intersecting MATH is contained in MATH and the same operation removes MATH as well. Let us show that the operations of forming compound walls can be used to create a sequence of consecutive neighbors MATH of MATH spanning the same interval as MATH. 
Assume that walls MATH for MATH have been processed already, and a sequence of neighbors MATH for MATH has been created in such a way that MATH and MATH is not a light wall that is the last in the series. (This condition is satisfied when MATH; indeed, each light wall MATH is part of some new wall via one of the compounding operations, since the ones that are not would have been removed in operation REF.) We show how to create MATH. If MATH is the last element of the series then it is heavy, and we set MATH. Suppose now that MATH is not last. Suppose that it is heavy. If MATH is also heavy, or light but not last, then MATH. Else MATH. Suppose now that MATH is light: then it is not last. If MATH is last or MATH is heavy then MATH. Suppose that MATH is light. If it is last then MATH; otherwise, MATH.
math/0011008
If MATH contains no walls of MATH then it is a hop of MATH and we are done. Let MATH be the union MATH of all walls of MATH in MATH. The inner cleanness of MATH in MATH implies that MATH is farther than MATH from its ends. REF applied to MATH implies that MATH is spanned by a sequence of neighbor walls MATH of MATH. Since MATH contains no walls of MATH (and thus no compound walls), these neighbor walls are farther than MATH from each other. None of these walls MATH is a wall of MATH; therefore each is contained in an outer clean light wall MATH. The sequence MATH satisfies our condition: its members are still separated by hops of size MATH, for the same reason as the MATH were.
math/0011008
We will use MATH, which follows from REF. Consider an interval MATH of size MATH containing no walls of MATH. Let MATH be the middle third of MATH. By REF , it is contained in a hop of MATH. REF implies that MATH is covered by a sequence MATH of light neighbor walls of MATH separated from each other by hops of MATH of size MATH, and surrounded by hops of MATH of size MATH. By REF we have MATH, and removing the MATH from MATH leaves a subinterval MATH of size at least MATH. (If at least two MATH intersect MATH, take the interval between consecutive ones; otherwise MATH is divided into two pieces of total length at least MATH.) Now MATH is an interval of length at least MATH which has distance at least MATH from any wall. There will be a clean point in the middle of MATH, which will then be clean in MATH.
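The parenthetical step is a pigeonhole argument on intervals: removing a few disjoint subintervals from a long interval leaves at least one long gap. A minimal numeric sketch of this general fact (all concrete values below are hypothetical, not from the paper):

```python
# Pigeonhole on intervals: removing k disjoint subintervals of total
# length s from an interval of length L leaves k + 1 (possibly empty)
# gaps of total length L - s, so the longest gap has length at least
# (L - s) / (k + 1).

def longest_gap(L, cuts):
    # cuts: disjoint (a, b) subintervals of [0, L], sorted by position
    points = [0.0] + [x for ab in cuts for x in ab] + [L]
    return max(points[i + 1] - points[i] for i in range(0, len(points), 2))

L = 10.0
cuts = [(1.0, 2.5), (4.0, 4.5), (7.0, 9.0)]
s = sum(b - a for a, b in cuts)
assert longest_gap(L, cuts) >= (L - s) / (len(cuts) + 1)
```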
math/0011008
Suppose that this is not the case. Then the operation of creating emerging barriers would turn every interval of the form MATH with MATH into an emerging barrier. Both MATH and MATH can be chosen to be clean, and this choice would define an emerging wall. Since we assumed that we are at the point of the construction where no more emerging walls can be added, this is impossible.
math/0011008
For MATH, let MATH be the event that MATH is realized by a hole ending at MATH but is not realized by any hole ending at any MATH. Let MATH be the event that MATH is strongly right-clean. Then MATH, MATH, the events MATH and MATH are independent for each MATH, and the events MATH are mutually disjoint. Hence MATH .
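The computation rests on decomposing the union into mutually disjoint events, each the intersection of two independent events, so that the probability of the union is exactly the sum of products. The toy analogue below (with hypothetical events: "the first 1 of an i.i.d. bit string occurs at position m", paired with an event on the next coordinate) illustrates this structure by exhaustive enumeration:

```python
from itertools import product

# Toy analogue of the decomposition in the proof: over i.i.d. fair
# bits x_1..x_n, let A_m = "the first 1 occurs at position m" (these
# events are mutually disjoint, and A_m depends only on x_1..x_m),
# and let B_m = "x_{m+1} = 1" (independent of A_m).  Then
# P(union_m (A_m and B_m)) = sum_m P(A_m) * P(B_m).

n = 6
configs = list(product([0, 1], repeat=n))  # uniform: each has weight 2**-n

def A(m, x):  # first 1 of x at position m (1-indexed)
    return all(b == 0 for b in x[:m - 1]) and x[m - 1] == 1

def B(m, x):  # depends only on coordinate m + 1
    return x[m] == 1

p_union = sum(any(A(m, x) and B(m, x) for m in range(1, n))
              for x in configs) / 2 ** n

rhs = sum((sum(A(m, x) for x in configs) / 2 ** n) *
          (sum(B(m, x) for x in configs) / 2 ** n)
          for m in range(1, n))

assert abs(p_union - rhs) < 1e-12
```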
math/0011008
If MATH then we can apply the hole lower bound REF; suppose therefore that this does not hold. Let MATH. Then REF is applicable to MATH, and we get MATH . Consider the event MATH that MATH is strongly right-clean and the interval MATH contains no barriers. Then MATH. Event MATH is decreasing with MATH. By the FKG inequality, we have MATH.
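The last step uses the FKG (Harris) inequality: on a product measure, two decreasing events are positively correlated, so their joint probability is at least the product of their probabilities. An exhaustive check on a small i.i.d. bit space, with two hypothetical decreasing events standing in for the redacted ones:

```python
from itertools import product

# Harris (FKG) inequality on an i.i.d. product space: if A and B are
# both decreasing events (stable under flipping 1s to 0s), then
# P(A and B) >= P(A) * P(B).  Checked here by exhaustive enumeration.

n, p = 5, 0.3

def prob(event):
    total = 0.0
    for x in product([0, 1], repeat=n):
        if event(x):
            w = 1.0
            for b in x:
                w *= p if b == 1 else 1 - p
            total += w
    return total

A = lambda x: sum(x) <= 2              # decreasing: few 1s overall
B = lambda x: x[0] == 0 and x[1] == 0  # decreasing: first two bits are 0

pA, pB = prob(A), prob(B)
pAB = prob(lambda x: A(x) and B(x))
assert pAB >= pA * pB
```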
math/0011008
We will use the following inequality, which can be checked by direct calculation. Let MATH; then for MATH we have MATH . In view of REF , for the first statement of the lemma, we only need to consider the case MATH. Let MATH, then we have MATH . Let MATH . REF is applicable to MATH, and also to MATH by REF. We have MATH, hence MATH. The events MATH are independent, so MATH where in the last step we used REF. We have MATH . Now, by REF, we have MATH. Substituting into REF: MATH where we used MATH, which follows from REF. The event MATH implies that a hole of type MATH starts in MATH and that the left end of the hole is strongly left-clean. Let MATH be the event that MATH is strongly right-clean and that there is no barrier in MATH. If also MATH holds then there is a jump from MATH to our hole, which is what we need. The event MATH is decreasing, so the FKG inequality implies that MATH can be multiplied by MATH for a lower bound. For the second statement, we must add the requirements that there are no barriers of MATH in MATH and that MATH is strongly right-clean in MATH. We can replace MATH with the larger MATH, and we can replace MATH with MATH.
math/0011008
This follows directly from the reachability REF , if we observe that MATH, so that the minslope is MATH.
math/0011008
Recall the definition of an emerging barrier. Suppose that there are MATH with MATH, MATH, MATH and a light barrier type MATH such that no hole of type MATH is cleanly contained in MATH. Then MATH is a barrier of MATH. This definition implies that if MATH is an emerging barrier then there is a light barrier type MATH such that no hole of type MATH is cleanly contained in MATH. Let us fix MATH, and for any MATH, let MATH be the event that no strongly outer-clean hole of type MATH starts at MATH. For each fixed MATH, these events are independent. Indeed, according to REF , event MATH only depends on MATH. Recall the hole lower bound; it says the following. For MATH and MATH, let MATH be the event that there is a MATH such that MATH is a jump and a hole of type MATH starts at MATH. Then MATH . With MATH, MATH, MATH, this implies that with probability at least MATH, a strongly left-clean hole starts at MATH. REF implies that with probability at least MATH, this hole is also strongly right-clean, and so it is strongly outer-clean. Hence, each of the independent events MATH has a probability upper bound MATH. The probability that all the events MATH hold is bounded by MATH. The probability that one of these events holds for some MATH is at most MATH times larger. The sizes of emerging barriers vary in the range MATH, hence the factor MATH.
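The quantitative core of the step "the probability that all of the independent events hold is bounded by MATH" is the standard estimate: n independent events, each of probability at most 1 - q, occur simultaneously with probability at most (1 - q)^n <= exp(-q n). A direct check of this general bound, with hypothetical values of q and n:

```python
import math

# If each of n independent events has probability at most 1 - q, then
# all n hold simultaneously with probability at most
# (1 - q)**n <= exp(-q * n), since 1 - q <= exp(-q).

def all_fail_bound(q, n):
    return (1 - q) ** n

for q in (0.01, 0.1, 0.5):
    for n in (1, 10, 100):
        assert all_fail_bound(q, n) <= math.exp(-q * n) + 1e-15
```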
math/0011008
Assume REF. As MATH is passed by MATH, there is a path from MATH to MATH. Due to REF there is a path from MATH to MATH. As MATH is passed by MATH, there is a path from MATH to MATH. The total slope of the combination of these three paths is clearly at most REF. (For reachability, the total slope condition is not important, but a compound hole will need to satisfy REF.) The proof of REF is immediate.
math/0011008
Assume MATH; then MATH, and MATH. REF turns into the true inequalities MATH. Assume MATH; then MATH, and MATH, MATH. REF will turn into the true inequalities MATH. Assume MATH; then MATH, MATH, MATH, MATH. REF will turn into the true inequalities MATH. Assume MATH; then MATH, MATH. Given MATH and MATH, what we need to check from REF is MATH, which is true for MATH.
math/0011008
For fixed MATH, let MATH be the event that a compound barrier of any type MATH with MATH, distance MATH between the component barriers, and size MATH appears at MATH. For any MATH, let MATH be the event that a barrier of rank MATH and size MATH starts at MATH. We can write MATH where events MATH, MATH are independent. By REF: MATH . Hence by REF: MATH .
math/0011008
Let MATH, MATH. For each MATH, let MATH be the event that there is a MATH such that MATH is a jump of MATH, and a hole of type MATH starts at MATH and ends at MATH, and that MATH is the smallest possible number with this property. Let MATH be the event that there is a MATH such that MATH is a jump of MATH, and a hole of type MATH starts at MATH. Then MATH, and for each MATH, the events MATH are independent. We have, using the notation of REF : MATH . Further, using the same lemma: MATH . By REF : MATH, hence the operation MATH can be deleted. The same reasoning applies to the second application of MATH. Combining these, using MATH, MATH: MATH .
math/0011008
We have MATH which is satisfied by the choice MATH in REF.
math/0011008
We can choose MATH last, to satisfy REF, so consider just the other inequalities. Choose MATH to satisfy REF; then REF will be satisfied if MATH. This is achieved by MATH .
math/0011008
For the moment, let us denote the largest existing rank by MATH. Emerging types get a rank equal to MATH, and the largest rank produced by the compound operation is at most MATH (since the compound operation is applied twice), hence MATH. Since also MATH (there is only one rank in MATH), we have for MATH: MATH . Now for the number of types. There is only one emerging type. The operation of forming compound types multiplies the number of types at most by the number of values MATH in REF : this is MATH for MATH and MATH otherwise. For the moment, let MATH denote the number of barrier types. Applying the operation of forming compound types once multiplies MATH by at most MATH and adds the result to MATH. We have to repeat this operation twice, and use MATH: MATH if MATH. This recursive inequality leads to the estimate REF; this is straightforward from the recursion MATH, since what we are proving is MATH. The divisor MATH in REF absorbs the effect of the factor MATH in the recursion.
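The recursive inequality has the familiar shape N_{k+1} <= c N_k + 1, whose solution N_k <= c^{k+1} / (c - 1) shows how a constant divisor absorbs the geometric factor of the recursion. The exact constants are among the redacted quantities; the sketch below checks the general pattern for hypothetical values of c:

```python
# The recursion N_{k+1} = c * N_k + 1 with N_0 = 1 has the closed form
# N_k = (c**(k+1) - 1) / (c - 1), hence N_k <= c**(k+1) / (c - 1):
# the divisor c - 1 absorbs the additive term of the recursion.

def iterate(c, k):
    N = 1
    for _ in range(k):
        N = c * N + 1
    return N

for c in (2, 3, 10):
    for k in range(8):
        N = iterate(c, k)
        assert N == (c ** (k + 1) - 1) // (c - 1)
        assert N <= c ** (k + 1) / (c - 1)
```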
math/0011008
Let us prove REF, which says MATH. We have MATH which is smaller than REF if MATH is sufficiently large. For REF , note that MATH which, because of REF, is clearly less than REF if MATH is large.
math/0011008
The probability that a point MATH is strongly clean in MATH but not in MATH is clearly upperbounded by MATH, which upperbounds the probability that a barrier of MATH appears in MATH: MATH . For sufficiently large MATH, we will always have MATH. Indeed, this says MATH, which is satisfied if MATH. This implies that if REF holds for MATH then it also holds for MATH. For REF , since the scale-up REF says MATH, the inequality MATH will be guaranteed if MATH is large.
math/0011008
Recall REF . Let MATH. For any point MATH, the expression MATH is an upper bound on the sum, over all MATH, of the probabilities that an emerging barrier of type MATH (with rank MATH) starts at MATH. We have MATH . Due to REF, this expression grows exponentially in MATH, and MATH decreases double exponentially in MATH. It follows from REF that its multiplier MATH only grows exponentially in a power of MATH. Hence for large enough MATH, the product decreases double exponentially in MATH. So, for sufficiently large MATH, REF follows. To prove REF , let MATH be the emerging barrier type, let MATH and MATH, and let MATH be the event that there is a MATH such that MATH is a jump, and a hole of type MATH starts at MATH. We will be done if we prove MATH . Let MATH be the event that MATH is strongly right-clean in MATH, that MATH is strongly clean and MATH is strongly left-clean in MATH and that no barrier of MATH occurs in MATH. By the definition of emerging holes, MATH implies the event MATH, since MATH will be an emerging hole. Clearly, MATH . REF implies MATH, and we have MATH . By REF, this is MATH if MATH is sufficiently large. Hence the right-hand side of REF can be lowerbounded by MATH. The required lower bound of REF is MATH if MATH is sufficiently large.
math/0011008
Let MATH be two types with ranks MATH. Assume without loss of generality that MATH and that MATH is light: MATH. With these, according to REF of the scale-up algorithm, we can form compound barrier types MATH, as long as MATH. This gives a type of rank MATH, for all MATH. The bound REF and the definition of MATH in REF show that the contribution by this term to the sum (over MATH) of probabilities that a barrier of size MATH and rank MATH starts at MATH is at most MATH . Now we have MATH, hence the above bound reduces to MATH. The total contribution to the sum for rank MATH is therefore at most MATH where in the last step we used MATH, which is satisfied if MATH.
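The last step sums a geometric series over the distance parameter. The standard bound, sum over d >= 1 of r^d equals r / (1 - r), which is at most 2r when r <= 1/2, can be checked numerically (the actual ratio in the paper is among the redacted quantities):

```python
# Geometric tail bound used in the last step: for 0 < r <= 1/2,
# sum_{d >= 1} r**d = r / (1 - r) <= 2 * r.

def geom_sum(r, terms=10000):
    return sum(r ** d for d in range(1, terms + 1))

for r in (0.1, 0.3, 0.5):
    s = geom_sum(r)
    assert abs(s - r / (1 - r)) < 1e-9
    assert s <= 2 * r + 1e-9
```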
math/0011008
By REF , each rank MATH occurs for at most a constant number MATH of values of MATH. For every such value but possibly the last one, the probability sum can only be increased as a result of the two operations of forming compound types. According to REF , the increase is upperbounded by MATH. After these increases, the probability becomes at most MATH. The last contribution, due to the emerging type, is at most MATH by REF ; clearly, if MATH is sufficiently large, the total is still less than MATH.
math/0011008
We will show that compound hole types in MATH satisfy REF if their component types do (they are either in MATH or are formed in the process of going from MATH to MATH). Consider the compound hole type MATH where MATH . Let MATH, then MATH. Let MATH and MATH. Following the notation of REF , let MATH be the event that there is a MATH such that MATH is a jump of MATH, and a compound hole of type MATH starts at MATH. That lemma assumes MATH, which holds in our case. Let us check the condition MATH. We have MATH which, due to REF, is always smaller than MATH if MATH is sufficiently large. The condition MATH of the lemma is satisfied automatically by the definitions. Hence all conditions of the lemma are satisfied. The conclusion is MATH . Let us show that for MATH and then MATH chosen sufficiently large, this is always larger than MATH. First we show MATH . Indeed, recall the definition of MATH in REF. For MATH, we have MATH . For MATH, we have MATH. This proves REF. Using REF gives MATH . Note also that MATH if MATH is large enough. Thus, the second factor on the right-hand side of REF is MATH. Substituting into REF, we get the lower bound MATH for the factors of MATH. This is MATH if MATH is sufficiently large. More exactly, we need MATH, which is satisfied by the choice in REF.