solv-int/9907012
Recall that, from the geometric point of view, the integrability of a reduction means that, if the reduction condition is satisfied on the initial surfaces, then it must propagate in the construction of the lattice. As shown in CITE, the solution MATH of the MQL REF is fixed by the values of the rotation coefficients MATH on the initial surfaces. Therefore, if MATH on the initial surfaces, then MATH throughout the whole lattice, since the backward rotation coefficients MATH satisfy the same equations as MATH. The algebraic content of this result is instead expressed by the following equation MATH where MATH; REF is a simple consequence of the MQL REF and of REF. Again we see that, if the constraint REF is satisfied on the initial surfaces (the right-hand side of REF is zero), then it propagates transversally through the whole lattice (the left-hand side of REF is zero).
The quadrilateral with the initial vertex is described by the following rotation coefficients: MATH, MATH, MATH and MATH, connected by REF. Since MATH, the parallelograms MATH and MATH are similar (see REF) if and only if MATH, which means, due to REF, that the backward and forward MATH's are equal.
The implication MATH is obvious. Let us concentrate on the opposite implication. Start from any set of backward rotation coefficients MATH related to MATH via REF; REF implies MATH which, together with REF and with the corresponding formula satisfied by the backward rotation coefficients MATH, gives, for MATH different from MATH and MATH, MATH; that is, MATH is a function of MATH and MATH only. This, together with REF written in terms of MATH as MATH, and with REF, implies the existence of functions MATH such that MATH. We use the functions MATH to redefine the potentials MATH and obtain new backward rotation coefficients MATH satisfying MATH.
The implication from REF to REF is trivial. To prove that REF is sufficient we notice that, in terms of MATH, it can be rewritten as MATH which leads again to MATH .
CASE: The equivalence of REF follows immediately from the definitions of the potential matrix MATH and of the polar transformation MATH. CASE: The application of MATH to REF gives the equations MATH, which imply equations MATH for some proportionality factor functions MATH. The linear REF and its adjoint REF, satisfied by MATH and MATH, imply that MATH satisfy REF (which allows one to identify MATH with MATH) and lead to the symmetry REF. CASE: Following a similar strategy, one can show that MATH, which implies REF up to some constant of integration.
The proof consists of showing that the circularity property is an admissible constraint for the quadrilateral lattice; that is, once imposed on the initial surfaces, it propagates transversally through the lattice. This was shown in CITE using purely geometric means. The algebraic proof is instead based on the following formula MATH where MATH, which is a direct consequence of REF. We see that, if the circularity constraint REF is satisfied on the initial surfaces (the right-hand side of REF is zero), then it propagates transversally through the lattice (the left-hand side of REF is zero).
CASE: REF is a straightforward consequence of REF and has been found in CITE. CASE: REF follows from the equalities MATH. The first equality follows from rewriting MATH in terms of the backward data; the second equality follows from REF, a straightforward consequence of REF.
The equivalence between REF is a straightforward consequence of the definitions of MATH, MATH and MATH. Furthermore, the quadrilateral lattice on a sphere is obviously circular, the circles being the intersections of the sphere with the planes of the elementary quadrilaterals CITE. CASE: Applying MATH to REF leads to MATH, which implies that MATH for some MATH. Using REF one obtains MATH which, together with REF, leads to the identification of the factors MATH. Notice that REF imply MATH. Applying the shift in the MATH direction to REF and using the above identity leads to REF; the first of these allows for the identification MATH, while the second gives the circularity condition. Finally, REF imply the following relation between the circularity property and its dual: MATH, which implies that REF is also satisfied. CASE: The proof of REF is similar and is left to the reader.
We first prove the case MATH by induction. For MATH the statement follows from the linear REF. When MATH and MATH, and the upper part of REF holds, then MATH, and application of the NAME REF concludes the first part of the proof. Notice that, applying the shifts MATH and MATH in different order, we obtain the following generalized NAME equations MATH. To show the lower part of REF, let us apply the shift MATH to its upper part, obtaining MATH. It remains to prove that, for a generic lattice, MATH, which can be done, again, by simple induction with the help of REF.
REF follows from REF for MATH and from the fact that REF is the compatibility condition of REF.
REF are a straightforward consequence of REF, respectively.
If MATH is quadrilateral, then comparison of REF with REF proves the statement.
Define functions MATH by REF and notice the following identity, MATH, valid for a generic quadrilateral lattice. In the case of the NAME lattice we have MATH, and REF shows that such a constraint is admissible.
The linear REF and the constraint REF imply that MATH, which gives MATH. Because the NAME lattice is circular, MATH can be identified with the potentials MATH; therefore REF leads to the symmetry constraint REF.
The orthogonality REF imply that MATH, where MATH is the linear space spanned by MATH, MATH. In addition, the planarity of the lattice implies that these two linear subspaces coincide; therefore MATH and MATH, which are orthogonal to the same MATH-dimensional linear subspace, must be proportional: MATH. Applying MATH to the linear system REF and using REF, we infer that MATH (MATH without loss of genericity) and MATH.
The proof is standard, in the philosophy of the MATH method. CASE: First, after defining the ``long derivatives'' MATH, one can verify that the functions MATH, where MATH is the MATH component of the matrix MATH defined by MATH, solve the homogeneous version of the MATH REF and go to zero at MATH; therefore uniqueness implies the equations MATH or, equivalently, the equations MATH. These two last equations, written in components, coincide with REF, using also the property MATH, which is a direct consequence of the bilinear identity REF for MATH and MATH. Finally, the MATH limit of REF implies that the coefficients MATH satisfy the MQL REF. The proof of REF is conceptually similar. The function MATH, where MATH, solves the homogeneous version of the MATH REF and goes to zero at MATH; therefore uniqueness implies the equation MATH. This equation is equivalent to MATH, whose component form reduces to REF, taking account of the formulas MATH, which are obtained from the bilinear identity REF for MATH, MATH and MATH, MATH respectively. Finally, the bilinear identity REF for MATH gives MATH or, equivalently, REF; furthermore, REF lead to REF, which, evaluated at MATH, gives REF.
We use the same strategy as in the previous MATH proofs. Comparing REF with REF for MATH, one obtains MATH or, equivalently, REF, using REF. Furthermore, one can verify that MATH satisfies the MATH REF for MATH. Therefore, taking account of the large-MATH asymptotics, one obtains the equation MATH, whose MATH component gives REF, using REF. Finally, REF for MATH gives directly REF, which can be immediately identified with the symmetry constraint REF, using REF. Furthermore, REF imply REF, provided that one uses REF.
REF for MATH give, respectively, REF. Consider REF for MATH; then REF follows from the fact that its right-hand side satisfies REF as well. Analogous considerations lead to REF. Finally, REF, evaluated at MATH, gives the orthogonality REF.
solv-int/9907017
For completeness, we briefly sketch a modification of the proof of REF to prove REF . An elementary calculation shows that for each MATH with MATH the expression MATH is equal to MATH for MATH and to MATH for MATH. Therefore, MATH for MATH. For a positive MATH to be selected below, let MATH. Arguing as in the proof of REF (see the corresponding REF - REF), one infers MATH where MATH depends only on MATH and MATH. Next, pick MATH such that MATH. Note that MATH depends only on MATH and MATH. Moreover, MATH uniformly for MATH since MATH. Since MATH, one obtains MATH . This and the inequality MATH show that MATH . Now REF follows from the inequalities MATH and MATH where MATH is an absolute constant (see REF).
As above, one needs to show that MATH remains bounded as MATH. Denote MATH and consider the following matrix potentials exponentially decaying at infinity, MATH . With this new notation and MATH REF still holds. By transposing the second and third rows and columns in the MATH matrix MATH, we remark that this matrix is similar to the block-diagonal matrix with the blocks MATH on the main diagonal and zero remaining entries. REF , applied to each of the diagonal blocks, shows that the norm of the operator REF in the matrix case remains bounded as MATH. REF is also valid in the new notation. In particular, one infers, MATH . By REF , the norm of each entry in the last matrix decays to zero as MATH. Similarly for MATH, and, as a result, the norms of the off-diagonal entries of MATH, defined as in REF, decay to zero as MATH. To handle the diagonal entries of MATH, let MATH be a matrix-valued function with exponentially decaying MATH. Note that MATH . In the case MATH, one can apply REF to each entry of this matrix. This shows that MATH as MATH and the proof of REF for MATH is finished. In the case MATH we claim that the conclusion of REF holds for the matrix-valued case (and, hence, REF holds). Indeed, to verify REF, one considers the polar decomposition MATH, where MATH and MATH is a partial isometry. To be consistent with our previous notations in REF, we denote MATH and MATH, so that MATH. Similarly to the proof of REF in the scalar case, we put MATH and MATH and observe that MATH . As above, the estimate REF in the matrix case follows from the relation MATH (compare REF ). To verify this relation, we denote MATH and observe that the entries of the matrix operator MATH are scalar operators of the type MATH . But the norm of each summand in these entries tends to zero as MATH. To verify the latter fact one can apply identity REF , and REF for the scalar case, and the result follows.
chao-dyn/9908002
Let us assume that REF is proved. The particular case MATH implies the embedding MATH, provided that MATH. Therefore, there exist three positive constants MATH, MATH and MATH, independent of MATH, such that MATH, where MATH. Since MATH, NAME's inequality gives MATH, which implies REF with MATH. Now we prove REF. As in the introduction, we define the usual NAME spaces MATH by interpolation, when MATH is not an integer. Remarking that, for all MATH, MATH, MATH, we deduce, by interpolation, that, for MATH, MATH. Due to the two-dimensional NAME embedding MATH, where MATH, and to the estimates REF, we obtain MATH, where MATH is a positive constant independent of MATH. On the other hand, one can apply REF with MATH to get the existence of two constants MATH and MATH, independent of MATH, such that, for MATH, MATH, where MATH for MATH. NAME's inequality, adapted to the case of anisotropic spaces, and the equality MATH yield MATH, whence REF with MATH. The proof is completed.
Let MATH. As in the introduction, we can define the operator MATH on MATH, supplemented either with homogeneous NAME boundary conditions in REF case or with periodic boundary conditions in REF case. Let MATH be a sequence of eigenvalues and eigenfunctions of MATH, such that MATH forms an orthonormal basis in MATH and MATH. For MATH, the operator MATH can be written, for any MATH, as MATH, where MATH. We notice that MATH is an orthonormal basis in MATH, and the operator MATH on MATH (where MATH or MATH, according to the boundary conditions) can be written, for any MATH, as MATH, where MATH. Again, as in the introduction, for MATH, we define the NAME spaces MATH and MATH of periodic functions on MATH and MATH. Performing a change of variables from MATH to MATH and using the NAME embedding in dimension REF, MATH, where MATH, we obtain, for any MATH in MATH, MATH. But we have MATH. If MATH satisfies MATH, then, due to the NAME REF, we improve the above inequality and obtain MATH. The estimates REF imply that MATH, where MATH in the general case and MATH when MATH. Let now MATH be a function in MATH and MATH. If MATH, we deduce from REF that MATH, where we could interchange the order of integration, since MATH. Now, the REF-dimensional NAME embedding MATH, with MATH, implies that MATH. But, as MATH, MATH. Finally, we remark that there exists a positive constant MATH such that, for MATH, for any MATH, MATH. Therefore, we deduce from REF that MATH, which proves the embedding MATH. If MATH satisfies MATH, then, according to REF, the above inequality becomes MATH and the estimate REF is proved.
As in REF, we consider the NAME series of MATH and MATH: MATH, where, for MATH, MATH and MATH, MATH, MATH. A straightforward computation gives MATH, where MATH. Hence, MATH. Since MATH, there exists a positive constant MATH, independent of MATH, MATH and MATH, such that MATH. The previous two inequalities imply that MATH, where MATH have the same MATH-norms as MATH and MATH respectively, and where MATH is independent of MATH. NAME's anisotropic inequality together with REF gives MATH. Due to the classical NAME embedding MATH, we also have MATH. We deduce from the relations REF that MATH. The proof is completed.
We can write MATH . But an integration by parts shows that MATH . Therefore, we can use REF to deduce that MATH .
Since MATH is divergence-free, simple integrations by parts give MATH. We deduce from NAME's inequality and from a two-dimensional NAME estimate that MATH, which implies the lemma.
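The integrations by parts invoked here rest on the standard cancellation of the transport term against a divergence-free field; a sketch in generic notation (the symbols $u$ and $\Omega$ are ours, not the redacted ones):

```latex
\int_{\Omega} (u\cdot\nabla)u\cdot u \,dx
  \;=\; \frac12 \int_{\Omega} u\cdot\nabla\,|u|^{2} \,dx
  \;=\; -\frac12 \int_{\Omega} (\nabla\cdot u)\,|u|^{2} \,dx
  \;=\; 0,
```

provided $\nabla\cdot u = 0$ and the boundary terms vanish (homogeneous Dirichlet or periodic boundary conditions).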
Let MATH be a weak NAME solution with the same initial data as MATH. The difference MATH satisfies the following equation in MATH, MATH. We would like to take the inner product in MATH of this equation with MATH and to integrate in space and time. The result would be the first line of REF. Unfortunately, this is not possible without some additional justification, because the integral MATH, which is supposed to vanish, may not converge. Nevertheless, one can argue as in CITE and CITE (see also CITE). The idea is that, instead of multiplying the equation of MATH by MATH, which yields regularity problems, one can multiply the equation of MATH by MATH and the equation of MATH by MATH, and then subtract the two energy inequalities satisfied by MATH and MATH; the result is the same. This argument is detailed below. We saw at the end of the proof of REF that all the terms in the equation of MATH belong to MATH. So we can multiply the equation of MATH by MATH and integrate in space and time to obtain MATH. Unfortunately, we cannot directly multiply the equation of MATH by MATH and then integrate in space and time, because MATH and MATH are only in MATH and MATH respectively. As MATH, by a standard smoothing procedure, we can find a sequence of smooth divergence-free vector fields MATH such that MATH converges strongly to MATH in MATH, MATH converges strongly to MATH in MATH and MATH converges strongly to MATH in MATH. Multiplying the equation of MATH by MATH and integrating by parts yield MATH. We now pass to the limit in MATH in the above equation. With the regularities and convergences at hand, it is easily seen that MATH. On the other hand, by REF, we have MATH. We deduce that MATH. Finally, we integrate by parts to obtain that MATH. As MATH and MATH converge to MATH and MATH in MATH and MATH respectively, we infer from the above equality that MATH. Putting together REF finally yields MATH.
Since MATH and MATH are weak NAME solutions, the following two energy inequalities hold: MATH. We now add both energy inequalities and subtract relations REF to obtain MATH. Arguing as in REF, one shows that the integral MATH is absolutely convergent and vanishes. Thus we deduce from the previous inequality that MATH. Writing MATH and applying REF, we get, for any MATH, MATH. Since the interpolation inequality MATH holds for any MATH, we infer from the above inequality that MATH, that is, MATH. The result then follows from NAME's inequality.
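If, as the structure of the argument suggests, the final redacted inequality is Gronwall's lemma, the concluding step has the familiar form (generic symbols, assumed purely for illustration):

```latex
\varphi(t) \,\le\, C + \int_{0}^{t} g(s)\,\varphi(s)\,ds
\quad\Longrightarrow\quad
\varphi(t) \,\le\, C\,\exp\!\Big( \int_{0}^{t} g(s)\,ds \Big),
\qquad g \ge 0,
```

so that equal initial data ($C = 0$) force $\varphi \equiv 0$, hence uniqueness.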
The proof is based on a NAME approximation, using the first MATH eigenvectors MATH, MATH, ..., MATH of the NAME operator MATH. Since MATH and MATH commute, we can choose these eigenvectors MATH so that either MATH or MATH. Let MATH denote the projector onto the space MATH generated by the first MATH eigenfunctions. We remark that MATH. The above properties imply that, for every MATH and for every MATH, MATH. We know (see REF, for example, or CITE) that, for every MATH, there exists a global solution MATH of REF, or also of REF, where MATH is replaced by MATH and MATH by MATH and where the initial condition is MATH. Moreover, for every MATH, MATH and MATH are uniformly bounded with respect to MATH in the spaces MATH and MATH respectively. We want to show that this solution MATH satisfies the additional estimates and properties given in REF, which will be preserved when MATH goes to MATH. In order to simplify the notation, we shall drop the subscript MATH in all the a priori estimates below, when there is no confusion. Taking the inner product of the modified REF with MATH gives, for MATH, MATH. Since MATH commutes with MATH, we get, by REF, for MATH, MATH. A simple interpolation inequality now yields, for MATH, MATH. REF implies, for MATH, MATH, where MATH denotes a positive constant depending only on MATH. We find again by interpolation that, for MATH, MATH. Due to the estimate REF, we also have, for MATH, MATH. Finally, we obtain, due to REF and the NAME REF, that, for MATH, MATH. We now fix the real number MATH. In particular, we assume that MATH satisfies the following conditions MATH. REF, together with REF, imply, for MATH, MATH. Due to REF and to REF on the initial data, when MATH is small enough, there exists a positive time MATH such that, for MATH, MATH, and that, if MATH, MATH. We shall show by contradiction that MATH. We derive from REF and REF that, for MATH, MATH, which in turn implies MATH. Set MATH.
An application of NAME's lemma in REF gives, for MATH, MATH. The estimate of MATH is simple and comes from the usual MATH-energy estimates on the velocity MATH. If we take the inner product of the modified REF with MATH, we obtain, for MATH, MATH. It follows that, for MATH, MATH, where MATH. NAME's lemma implies, for MATH, MATH. Integrating REF from MATH to MATH, where MATH, one finds MATH. By interpolation, we can write MATH. Since MATH is an orthogonal projection in MATH and MATH, we infer from REF that, for MATH, MATH, where MATH. To simplify, we set MATH and MATH, where MATH and MATH are positive functions of MATH. REF give the following bounds, MATH. We now deduce from REF that, for MATH, MATH. Due to the choice REF of MATH and REF, we infer from REF that, when MATH is small enough, we have, for MATH, MATH. Likewise, we derive from REF that, when MATH is small enough, we have, for MATH, MATH. Finally, we deduce from REF, where MATH is small enough, that, for MATH, MATH, which contradicts REF if MATH. It follows that MATH. Remark that the estimate REF implies, for MATH, MATH. We have just proved that, under REF, for any MATH, the solution MATH of the modified NAME REF with initial data MATH satisfies MATH, where MATH is a positive constant independent of MATH and MATH. Integrating REF from MATH to MATH and using the estimates REF, one also shows that, for any MATH, MATH, where MATH is a positive constant independent of MATH, but depending on MATH. We remark that MATH and MATH converge to MATH and MATH in MATH and MATH respectively. Now, a classical argument (see REF or CITE) shows that MATH belongs to the space MATH, is a weak NAME solution of REF with initial data MATH and that, due to REF, MATH belongs to MATH. The uniqueness of the solution MATH follows from REF, which implies that MATH belongs to the space MATH.
The proof follows the same lines as the proof of REF. So we shall only indicate the main changes in the estimate of MATH, for MATH. Let MATH be fixed so that MATH. Arguing as in REF, we deduce from REF that, for MATH, MATH, where MATH. REF imply that MATH. The application of NAME's lemma to REF and the estimate REF give, for MATH, MATH, which implies, due to REF, where MATH is small enough, that, for MATH, MATH. We now finish the proof by arguing as in the proof of REF.
The function MATH satisfies the following linear equation MATH. We first take the scalar product in MATH of the above equation with MATH. Since MATH and MATH are divergence-free vector fields, we obtain, by integrating by parts, that MATH and that MATH. Applying the estimate REF of REF to the term MATH, we get, for MATH, MATH, or also, by REF, MATH. Integrating REF and using the NAME lemma, we obtain, for MATH and for MATH, MATH. Integrating now REF from MATH to MATH, we deduce from REF that, for MATH, MATH. We now fix a real number MATH. Multiplying REF by MATH, integrating over MATH and remarking, as in REF, that MATH, we obtain, for MATH, MATH. Arguing as in REF, we remark that MATH. Furthermore, we have MATH. Using NAME inequalities, we deduce from the equalities REF that, for MATH, MATH, or also, due to REF, MATH. Since MATH, we derive from REF and the NAME inequality for MATH that MATH. Integrating the estimate REF, we obtain, for MATH, MATH. Using the uniform NAME lemma, we also deduce from REF that, for MATH, MATH. But, from the NAME embedding MATH, for MATH, and REF, we infer that MATH. Finally, the estimates REF imply that, for MATH, MATH, where MATH.
We first recall the equation satisfied by MATH, that is, MATH. Taking the scalar product in MATH of the above equation with MATH and remarking, as in CITE, that MATH, we obtain the equality MATH. Since, by the estimate REF, MATH, we infer from REF, by using also a NAME inequality, that, for MATH, MATH, or also MATH. Integrating REF and using the NAME lemma, we obtain, for MATH and for MATH, MATH, which at once implies the estimate REF.
As in the proof of REF, we consider a NAME approximation, using the first MATH eigenfunctions MATH, MATH, ..., MATH of the NAME operator MATH. As in REF, these eigenfunctions MATH are chosen so that either MATH or MATH. Moreover, if the eigenvector MATH is independent of the third variable MATH, it can be chosen so that either MATH or MATH. These properties imply that, if MATH denotes the projector onto the space MATH generated by the first MATH eigenfunctions, then, for every MATH and for every MATH, the inequalities MATH as well as REF hold. We recall that MATH (respectively, MATH) converges to MATH (respectively, MATH) in MATH (respectively, MATH), as MATH goes to MATH. As in the proof of REF, we know (see REF, for example, or CITE) that, for every MATH, there exists a global solution MATH of REF, or also of REF, where MATH is replaced by MATH and MATH by MATH and where the initial condition is MATH. Moreover, for every MATH, MATH and MATH are uniformly bounded with respect to MATH in the spaces MATH and MATH respectively. We want to show that this solution MATH satisfies the additional estimates and properties given in REF. In order to simplify the notation, we drop the subscript MATH, when there is no confusion. As in the proof of REF, we take the scalar product in MATH of the modified REF with MATH and obtain the equality REF. Applying REF, we have, for MATH, MATH. In order to estimate the term MATH, we apply REF and obtain, for MATH, MATH. To estimate the third nonlinear term MATH, we can use the estimate REF as follows, MATH. Finally, as in the proof of REF, we write, for MATH, MATH. Due to the estimates REF, we have, for MATH, MATH. Due to REF and to REF on the initial conditions, where MATH, MATH are small enough, there exists a positive time MATH such that, for MATH, MATH, and that, if MATH, MATH. We shall show by contradiction that MATH. To this end, we shall estimate separately the terms MATH, MATH and MATH.
The estimate of the term MATH will be a consequence of REF. We derive from the estimates REF that, for MATH, MATH, which in turn implies that MATH. The NAME lemma then gives, for MATH, MATH. On the other hand, integrating REF, we get, for MATH and for MATH, MATH. We now fix a positive number MATH satisfying MATH. We deduce from the estimates REF that, for MATH, MATH, where MATH. On the one hand, we remark that, for MATH, where MATH, MATH. On the other hand, for MATH, we notice that MATH. From these remarks and from the estimate REF, we finally infer that, for MATH, MATH, where MATH. REF imply that, for MATH, MATH, where MATH. It remains to estimate the term MATH. Taking the scalar product in MATH of the modified REF with MATH and applying the estimate REF as well as the estimate REF, we obtain, for MATH, MATH, or also MATH. By integration, it follows from REF that, for MATH, MATH. We infer from REF that, for MATH, MATH, where MATH. Finally, REF give, for MATH, MATH. If MATH, MATH, MATH, MATH, MATH and MATH are small enough, REF together with the estimates REF imply that, for MATH, MATH, which contradicts the equality REF. It follows that MATH. We have just proved that, under REF, for any MATH, the solution MATH of the modified NAME REF with initial data MATH satisfies MATH, where MATH is a positive constant independent of MATH and MATH. Integrating REF and using the estimates REF as well as REF, one also shows that, for any MATH, MATH, where MATH is a positive constant independent of MATH and MATH. As in the proof of REF, a classical argument (see REF or CITE), together with the estimates REF, shows that MATH belongs to the space MATH, is a weak NAME solution of REF with initial data MATH and that MATH. The uniqueness of the solution MATH follows from REF. Arguing as in REF, we actually show that MATH belongs to MATH. Indeed, we deduce from the equality REF that, for any MATH, for any MATH, MATH, which implies that MATH belongs to MATH.
It follows, since MATH and MATH also belong to this space, that MATH belongs to MATH. As MATH, MATH is also in the space MATH. The vector MATH actually lies in the space MATH. Indeed, applying the estimate REF , we obtain, for MATH and MATH, MATH which implies that MATH belongs to the space MATH. As MATH and MATH, we deduce from REF that MATH and thus that MATH.
We use the same NAME basis as in the proof of REF, so that REF hold. Since MATH converges to MATH in MATH, MATH also converges to MATH in MATH. Hence, there exists MATH such that, for MATH, MATH, and thus MATH. Likewise, for any MATH, MATH converges to MATH in MATH, when MATH goes to MATH. But, since MATH is a bounded set in MATH, MATH is a compact set in MATH and thus there exists MATH such that, for MATH, for MATH, MATH and MATH. We set MATH. As in the proof of REF, for every MATH, we know that there exists a global solution MATH of REF, where MATH is replaced by MATH and MATH by MATH and where the initial condition is MATH. We shall prove a priori estimates on the solution MATH. We again drop the subscript MATH, when there is no confusion. As in the proof of REF, taking the inner product in MATH of the modified REF with MATH, we are led to estimate MATH, MATH and MATH. The estimate of the first term does not change and is given in REF. Decomposing MATH into MATH and applying REF to MATH, we can write, for MATH, MATH. But an anisotropic NAME inequality and REF imply that, for MATH, MATH, or also, due to the NAME inequality for MATH, MATH. To estimate the third term, we again write MATH as MATH, apply REF to MATH and remark that MATH, which implies that, for MATH, MATH. It remains to bound the term MATH. A quick computation using NAME series shows that we have, for any MATH, MATH. Since MATH is independent of MATH, it follows from the above inequality that MATH. But the NAME inequality, REF and the NAME REF imply that MATH. On the other hand, applying the periodic version of the multiplication property given in REF, we can write MATH, where MATH. Using the two-dimensional NAME embedding theorems, we infer from the above estimate that MATH, where MATH and MATH. Applying then the following NAME inequality (see CITE or CITE) MATH, we deduce from REF that MATH.
Finally, due to the estimates REF , the solution MATH satisfies the following inequality, for MATH, MATH . Since MATH belongs to the space MATH, we infer from REF on the initial conditions, where MATH, MATH, MATH are small enough, that there exists a positive time MATH such that, for MATH, MATH and, if MATH, MATH . Then, as shown in the proof of REF , MATH, MATH and MATH satisfy the estimates REF , for MATH. Moreover, we deduce from REF , that, for MATH, MATH . Using the estimates REF , one shows that, if REF are satisfied for sufficiently small constants MATH, MATH, MATH, MATH, MATH and MATH, then, for MATH, MATH which contradicts the equality REF . It follows that MATH. Thus, we have proved that, under REF , for every integer MATH, MATH, the solution MATH of the modified NAME REF with initial data MATH satisfies MATH where MATH is a positive constant independent of MATH and MATH. We now finish the proof, by arguing as in the proof of REF .
cond-mat/9908326
Because of commutation relations REF - REF , MATH and therefore MATH are invariant under the replacements.
Let MATH and consider a new continuous function MATH that coincides with MATH for MATH (MATH). By definition, the set MATH derived from MATH satisfies MATH. Next, we introduce new spectral parameters MATH. These spectral parameters MATH still obey the NAME ansatz REF, because MATH depends only on MATH, and MATH is equal to MATH by the definitions of MATH and MATH (MATH). We define MATH. Evaluating MATH helps us to prove the lemma. Compute MATH in two ways, letting MATH and MATH operate either to the left or to the right. It thus follows that MATH. Since MATH is a continuous function for MATH, MATH must be MATH. Due to the definition of MATH, the proof is complete.
MATH is reduced to MATH with the help of the commutation relation REF and relations REF - REF. Letting both MATH and MATH act on the right NAME vector, we obtain MATH. Here we have used l'Hôpital's rule. Notice that extra terms whose right NAME vectors still contain MATH do not generate MATH, because such a term arises only in the case that both NAME vectors depend on MATH and l'Hôpital's rule is applied. REF implies the lemma.
The proof is straightforward with MATH.
It is obvious that this expression satisfies REF - REF. We prove the converse by induction on MATH. Let MATH. By REF it follows that MATH. Let us assume that MATH for MATH. By REF we have MATH. By the induction hypothesis, the right-hand side is equal to MATH. Thus MATH is independent of MATH. By REF, MATH does not depend on any MATH (MATH). Hence we obtain MATH owing to REF. This completes the proof.
cs/9908003
Because the polyhedron is homeomorphic to a sphere, its graph MATH must be planar. It remains to show that MATH is REF-connected. If there is a vertex MATH whose removal disconnects MATH into at least two components MATH and MATH, then there must be a ``belt'' wrapping around the polyhedron that separates MATH and MATH. This belt has only one vertex REF connecting MATH and MATH, so it must consist of a single face. This face touches itself at MATH, which is impossible for a convex polygon. Similarly, if there is a pair MATH of vertices whose removal disconnects MATH into at least two components MATH and MATH, then again there must be a ``belt'' separating the two components, but this time the belt connects MATH and MATH at two vertices (MATH and MATH). Thus, the belt can consist of up to two faces, and these faces share the two vertices MATH and MATH. This is impossible for strictly convex polygons.
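The connectivity notion being established can be checked mechanically on small examples. Below is a minimal brute-force sketch (the helper functions, vertex labels, and the octahedron example are ours, purely illustrative; the proof itself is geometric):

```python
from itertools import combinations

def is_connected(vertices, edges):
    """Depth-first search connectivity test on an undirected graph."""
    vertices = set(vertices)
    if not vertices:
        return True
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = set(), [next(iter(vertices))]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v] - seen)
    return seen == vertices

def is_k_connected(vertices, edges, k):
    """Brute force: G is k-connected if it has more than k vertices and
    removing any set of fewer than k vertices leaves it connected."""
    vertices = list(vertices)
    if len(vertices) <= k:
        return False
    for r in range(k):
        for cut in combinations(vertices, r):
            rest = set(vertices) - set(cut)
            kept = [(u, v) for u, v in edges if u in rest and v in rest]
            if not is_connected(rest, kept):
                return False
    return True

# Octahedron graph: all pairs of 6 vertices except the 3 opposite pairs.
opposite = {(0, 5), (1, 4), (2, 3)}
octa_edges = [(u, v) for u, v in combinations(range(6), 2)
              if (u, v) not in opposite]
octa_3conn = is_k_connected(range(6), octa_edges, 3)         # True
path_2conn = is_k_connected([0, 1, 2], [(0, 1), (1, 2)], 2)  # False: removing
                                                             # the middle vertex
                                                             # disconnects a path
```

The brute force is exponential in k and is only meant to make the definition concrete; polynomial-time algorithms exist for vertex connectivity.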
cs/9908003
If a cutting contained a cycle, with positive area both interior and exterior to the cycle, then the resulting unfolding would be disconnected, a contradiction. (Above we excluded the possibility of the cutting containing a cycle on the boundary of an open polyhedron MATH.) If the cutting omitted some particular (nonboundary) point with nonzero curvature, neighborhoods of that point could not be flattened without overlap.
cs/9908003
By REF , the cutting is a forest. Suppose for contradiction that the forest has multiple connected components. Then there is a closed path MATH on the surface of the polyhedron that avoids all cuts and strictly encloses a connected component of the cutting. In particular, MATH avoids all vertices of the polyhedron. Let MATH denote the total turn angle along the path MATH. Because MATH avoids vertices, it unfolds to a connected (uncut) closed path in the planar layout; thus, MATH. Because the polyhedron is homeomorphic to a sphere, the NAME theorem CITE applied to MATH says that MATH, where MATH is the curvature enclosed by MATH. By convexity, MATH. Further, MATH encloses a connected component of the cutting, which contains at least one vertex of the polyhedron, so MATH. Therefore, MATH, contradicting the fact that MATH unfolds flat in the plane.
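The planar half of this argument can be checked numerically: a simple closed polygonal path that unfolds flat in the plane has total turn exactly 2*pi, so by Gauss-Bonnet it can enclose only zero curvature. A minimal illustrative sketch (the function and example polygon are ours, not the paper's):

```python
import math

def total_turn(points):
    """Total signed turning angle along a closed planar polygonal path,
    traversed in the given (counterclockwise) order."""
    n = len(points)
    total = 0.0
    for i in range(n):
        ax, ay = points[i]
        bx, by = points[(i + 1) % n]
        cx, cy = points[(i + 2) % n]
        # Exterior (turning) angle at the middle vertex (bx, by).
        v1 = (bx - ax, by - ay)
        v2 = (cx - bx, cy - by)
        total += math.atan2(v1[0] * v2[1] - v1[1] * v2[0],
                            v1[0] * v2[0] + v1[1] * v2[1])
    return total

# A closed path that lays out flat in the plane turns by exactly 2*pi,
# which is the value compared against 2*pi minus the enclosed curvature.
square_turn = total_turn([(0, 0), (1, 0), (1, 1), (0, 1)])
```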
cs/9908003
Suppose some cutting MATH includes only a single cut incident to MATH. Let MATH be MATH, where MATH is a small ball around MATH. Neighborhood MATH unfolds to a small disk that self-overlaps by precisely the absolute value of the curvature of MATH.
cs/9908003
The first claim follows simply by summing the angles incident to a middle vertex and checking when that sum is greater than MATH. For basic hats, we have MATH and for triangulated hats, we have MATH . Now MATH is only restricted by MATH for validity and MATH for negative curvature. Because MATH, these two conditions are satisfied precisely if MATH, which is satisfiable because MATH.
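The curvature condition used here is the standard angular defect: a vertex has negative curvature exactly when its incident face angles sum to more than 2*pi. A small illustrative check (the specific angle values below are hypothetical, not the paper's):

```python
import math

def vertex_curvature(incident_angles):
    """Angular defect at a polyhedron vertex: 2*pi minus the sum of the
    incident face angles; negative iff the angles sum to more than 2*pi."""
    return 2 * math.pi - sum(incident_angles)

# Hypothetical middle vertex: six incident angles of 65 degrees each
# sum to 390 degrees > 360, so the curvature is negative.
defect = vertex_curvature([math.radians(65)] * 6)
```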
cs/9908003
The first claim follows from REF by halving the coefficient of MATH, because only a single MATH is now included in the sum of angles. The constraints now become MATH and MATH. Because MATH, these are equivalent to MATH, which is achievable because MATH.
cs/9908003
By REF , any edge cutting is a forest of nonboundary edges that covers the tip and middle vertices. Every connected component of the cutting is a tree, and so must have at least two leaves. Note that no two corners of the hat can be leaves of a common connected component of the cutting, because otherwise the path of cuts connecting them would disconnect the polyhedron. (Recall we excluded the possibility of a boundary edge being in a cutting.) Thus, at most one corner is a leaf of each connected component of the cutting. By REF , the middle vertices have negative curvature, so by REF , they cannot be leaves of the cutting. Hence, the cutting must in fact be a single path from a corner to the tip, visiting all of the middle vertices. It is possible to argue by case analysis that, for basic hats, there is precisely one such path up to symmetry (see REF , left), and for triangulated hats, there are two such paths up to symmetry (see REF , center and right). Each of these cuttings has a vertex with only one spike triangle cut away (marked by a gray circle in REF ), which means the remainder has negative curvature, leading to overlap by REF . However, this can be argued more simply as follows. Because the spike remains connected to the rest of the polyhedron, there must be a spike triangle MATH that remains connected to a brim face MATH; see REF . Because there is only one cut in the hat incident to a corner, this cut is not incident to one of the two vertices shared by faces MATH and MATH, say MATH. Therefore the brim faces MATH, MATH, and (in the case of a triangulated hat) MATH incident to MATH and the spike face MATH remain connected in the unfolding along edges incident to MATH. But by REF , these faces have total angle at MATH of more than MATH, causing overlap.
cs/9908003
This lemma follows directly from REF : an edge cutting of the spiked tetrahedron cannot induce an edge cutting of a constituent hat, and cannot cause overlap, so it must cut each hat into at least two pieces, by way of a path with the claimed properties.
cs/9908003
Suppose there were an edge cutting. By REF , inside each of the four hats would be paths of cuts joining two corners, and these paths share no cuts because they use only nonboundary edges. But because there are only four corners in total, these paths would form a cycle in the cutting, contradicting REF .
cs/9908003
A cutting could only have leaves on the boundary, because MATH has negative curvature, and because the cut incident to any other leaf could be glued (uncut) without affecting the unfolding. But any cutting has at least two leaves, so it must disconnect the polyhedron, a contradiction.
cs/9908004
Note that the deductive closure of the reduct MATH is a closure, and that for every MATH that is a closure, the deductive closure of MATH is a subset of MATH.
cs/9908004
Let MATH be the atoms not covered by MATH. We prove the claim by induction on the size of MATH. Assume that the set MATH. Then, MATH covers MATH by REF and MATH returns true if and only if MATH returns false. By REF, CREF, and CREF, this happens precisely when there is a stable model of MATH agreeing with MATH. Assume MATH. If MATH returns true, then MATH returns false, and by REF and CREF there is no stable model agreeing with MATH. On the other hand, if MATH returns false and MATH covers MATH, then MATH returns true, and by REF and CREF there is a stable model that agrees with MATH. Otherwise, induction together with EREF and EREF shows that MATH or MATH returns true if and only if there is a stable model agreeing with MATH.
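The stable models reasoned about here are, in the standard sense, fixed points of the Gelfond-Lifschitz reduct. As an illustration of that textbook definition (not of the paper's specific algorithm), a minimal checker for tiny propositional programs:

```python
def least_model(rules):
    """Least model of a negation-free program, by fixpoint iteration.
    Each rule is (head, positive_body) with positive_body a set of atoms."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if body <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_stable(program, candidate):
    """Gelfond-Lifschitz test: delete rules whose negative body meets the
    candidate, drop the remaining negative literals, and check that the
    candidate is the least model of the resulting reduct."""
    reduct = [(h, set(pos)) for h, pos, neg in program
              if not (set(neg) & candidate)]
    return least_model(reduct) == candidate

# Illustrative program:  a :- not b.   b :- not a.
# Rules are triples (head, positive_body, negative_body).
prog = [('a', [], ['b']), ('b', [], ['a'])]
```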
cs/9908004
Observe that the function MATH is monotonic and that the function MATH is anti-monotonic. Hence, MATH are monotonic with respect to MATH. Assume that there exists MATH such that MATH for only one MATH and MATH. If MATH and MATH, then MATH . Consequently, both MATH and therefore MATH . It follows that MATH. Thus, MATH is monotonic and has a least fixed point. Finally, notice that MATH has the same fixed points as MATH.
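The least fixed point guaranteed here for a monotone operator (Knaster-Tarski) can be computed constructively on a finite lattice by Kleene iteration from the bottom element. A small sketch with a hypothetical operator of our own:

```python
def least_fixed_point(op, bottom=frozenset()):
    """Kleene iteration: for a monotone operator on a finite powerset
    lattice, iterating op from the bottom element reaches the least
    fixed point."""
    x = bottom
    while True:
        y = op(x)
        if y == x:
            return x
        x = y

# Hypothetical monotone operator on subsets of {1, ..., 5}: always add 1,
# and add n + 1 for every n already present (capped at 5).
def step(s):
    return frozenset(s) | {1} | {n + 1 for n in s if n < 5}

lfp = least_fixed_point(step)
```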
cs/9908004
Note that MATH is anti-monotonic in its first argument, that is, MATH implies MATH, and monotonic in its second argument. Fix a program MATH, a stable model MATH of MATH, and a set of literals MATH such that MATH agrees with MATH. Define MATH and MATH . Let MATH be the least fixed point of MATH. Since MATH agrees with MATH, MATH and MATH. Hence, the least fixed point of MATH, which is equal to the least fixed point of MATH, is a subset of MATH. In other words, MATH.
cs/9908004
Assume that MATH covers MATH and that MATH. Then, MATH. As MATH for MATH, MATH for every MATH. Thus, MATH is the least fixed point of MATH from which we infer that MATH is a stable model of MATH.
cs/9908004
Define MATH . Then, MATH implies MATH, which in turn implies MATH by the monotonicity of MATH. Hence, MATH, and consequently MATH . Now, MATH implies MATH, which by the definition of MATH implies MATH. Thus, MATH. Moreover, for any fixed point MATH, MATH and hence MATH by definition.
cs/9908008
A correct process MATH signs an acknowledgement for MATH by a correct sender only when it receives MATH over an authenticated channel from sender(MATH). The first property then follows from the fact that a signed acknowledgement from MATH of the form MATH contains the identity of the sender MATH = sender(MATH). To see that the second property holds, recall that a valid acknowledgement set contains acknowledgements from MATH distinct processes, which must contain at least MATH processes. Hence, a valid set of acknowledgements must contain an acknowledgement signed by a correct process. Since MATH is correct, by REF, MATH was multicast by MATH. Finally, note that two sets of valid acknowledgements must intersect in at least one correct process. Since a correct process never signs conflicting acknowledgements, the third property follows.
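The intersection step is the standard Byzantine quorum calculation: with at most f faulty processes, any two sufficiently large acknowledgement sets share a correct process. A brute-force check for the usual thresholds n = 3f + 1 and quorum size 2f + 1 (the paper's exact thresholds are elided as MATH, so these concrete values are an assumption):

```python
from itertools import combinations

def two_quorums_share_correct(n, f, q):
    """Exhaustively check (for small n) that every two quorums of size q
    among n processes intersect in at least one correct process, for
    every possible choice of f faulty processes."""
    procs = range(n)
    for faulty in combinations(procs, f):
        correct = set(procs) - set(faulty)
        for q1 in combinations(procs, q):
            for q2 in combinations(procs, q):
                if not (set(q1) & set(q2) & correct):
                    return False
    return True

# Standard thresholds n = 3f + 1 and quorum size 2f + 1, with f = 1.
ok = two_quorums_share_correct(n=4, f=1, q=3)
```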
cs/9908008
That MATH suppresses duplicate deliveries is immediate from the protocol. It is left to show that if sender-MATH is correct, it must have sent MATH. To prove this fact, consider that for MATH to deliver MATH, MATH must have obtained a set MATH of valid acknowledgments for MATH. By REF , it follows that MATH must have been sent by sender-MATH.
cs/9908008
Notice that MATH; thus there are at least MATH correct processes in MATH. Hence, if MATH sends MATH to every process in MATH, then at least MATH correct processes MATH will receive it. Since no correct process receives a conflicting message from MATH, each will acknowledge it, sending back a MATH message to MATH, thereby enabling delivery of MATH by MATH.
cs/9908008
For MATH to deliver MATH, MATH must have obtained a set MATH of valid acknowledgments for MATH from MATH processes. If MATH learns that MATH delivered MATH, then by NAME we are done. Alternatively, if MATH does not learn that MATH delivered MATH, then after a timeout period MATH sends MATH to MATH. By REF , MATH cannot have received a conflicting set of acknowledgements, so at the latest, upon receipt of MATH from MATH, process MATH performs NAME(MATH).
cs/9908008
For MATH to deliver MATH, MATH must have obtained a set MATH of valid acknowledgments for MATH from MATH processes in MATH. Likewise, MATH must have obtained a set MATH of MATH valid acknowledgments for MATH. By REF , MATH and MATH do not conflict, and by the security of MATH, MATH.
cs/9908008
A correct process MATH signs a T acknowledgement for MATH only when it receives MATH over an authenticated channel from sender(MATH). Likewise, a correct process signs an AV acknowledgement only when the AV message contains a valid signature of sender(MATH). The first property then follows from the fact that a signed acknowledgement of the form MATH contains the identity of the sender MATH = sender(MATH), and a signed acknowledgement MATH contains both the sender's identity and its signature. For the second property, note that if the run contains a valid set of T acknowledgements for MATH, at least one of which must be from a correct process, then by REF MATH was multicast by MATH. Likewise, if a valid set of AV acknowledgements is formed for MATH, then again by REF MATH was multicast by MATH.
cs/9908008
That MATH suppresses duplicate deliveries is immediate from the protocol. It is left to show that if sender-MATH is correct, it must have sent MATH. For MATH to deliver MATH, MATH must have obtained a valid set of either AV acknowledgments or of T acknowledgements for MATH. By REF , MATH was WAN-multicast-MATH by a correct sender(MATH).
cs/9908008
The theorem easily follows from the NAME property of the T protocol, which is employed within some timeout from NAME REF unless a valid set of AV acknowledgements for MATH is received first, enabling its delivery by MATH.
cs/9908008
For MATH to deliver MATH, MATH must have obtained a valid set MATH of either AV acknowledgments or of T acknowledgements for MATH. If MATH learns that MATH delivered MATH satisfying MATH and MATH, then by NAME we are done. Alternatively, if MATH does not learn that MATH delivered such MATH, then after a timeout period MATH sends MATH to MATH. If MATH has not delivered any conflicting message, then upon receipt of MATH from MATH, process MATH performs NAME(MATH). Otherwise, MATH delivers some message MATH satisfying MATH and MATH. In either case, we are done. (We note that, if sender REF is correct, then by Integrity MATH.)
cs/9908008
As argued in REF above, for MATH and MATH to deliver MATH and MATH, respectively, they must each have obtained corresponding sets of valid acknowledgments MATH and MATH. Denote by MATH the set of processes represented in MATH, and likewise MATH. If MATH and MATH intersect in a correct process, then we argue that MATH as in REF . It remains to compute the probability that conflicting message delivery is enabled in the case that MATH does not intersect MATH at any correct member. CASE: MATH. Thus, MATH contains faulty members only. By assumption, the adversary chooses which processes are faulty without knowledge of MATH, and hence for any MATH, MATH randomizes the choice of processes as a function of MATH independently of failures. Hence, the probability MATH for this event is at most MATH. CASE: MATH, MATH. Note that MATH, and in this case, MATH must intersect in a correct process, leading to a contradiction. CASE: (Without loss of generality) MATH, MATH, MATH. To distinguish from REF , assume that MATH contains at least one correct member MATH. Note that the correct member MATH chooses peers randomly, and does not disclose the composition of peers-MATH to sender-MATH. Moreover, by assumption there is a positive probability for each message from sender-MATH to MATH to reach its destination (independent of the choice of peers-MATH). Thus, the choice of peers-MATH is independent from the choice of any process in MATH. Therefore, the probability that MATH does not reach any correct member in MATH in MATH probes is at most MATH. Thus, the overall probability for a conflicting message to be deliverable is bounded by MATH.
cs/9908010
Let MATH be any update, and let MATH denote the total number of times MATH is sent by correct processes in rounds MATH in MATH. Denote by MATH the number of correct replicas that have accepted update MATH by the time round MATH completes. Since MATH copies of update MATH need to reach a replica (not in MATH) in order for it to accept the update, we have that MATH. Furthermore, since at most MATH new updates are sent by correct processes in round MATH, we have that MATH, where MATH. By induction on MATH, it can be shown that MATH. Therefore, for MATH we have that MATH, which implies that not all the replicas are active for update MATH.
cs/9908010
Let MATH be any update. Since the MATH-amortized fan-in of MATH is MATH, with probability MATH (where MATH is arbitrarily chosen here as some constant between MATH and MATH), the number of messages received (from correct replicas) by any replica in rounds MATH is less than MATH. From now on we will assume that every replica MATH receives at most MATH messages in rounds MATH. This means that for each MATH, if MATH is updated by a set MATH of replicas during rounds MATH, then MATH. Some replica MATH becomes active for MATH if out of the updates in MATH at least MATH are from MATH, that is, MATH. In order to show the lower bound, we need to exhibit an initial set MATH, such that if MATH is too small then no replica becomes active. More specifically, for MATH, we show that there exists a set MATH such that for each MATH, we have MATH. We choose the initial set MATH as a random subset of MATH of size MATH. Let MATH denote the number of replicas in MATH from which messages are received by replica MATH during rounds MATH, that is, MATH. Since MATH receives at most MATH messages in these rounds, we get MATH where the constant MATH is at most MATH if MATH, and hence we have that MATH. By our assumption that MATH, we have that MATH. This implies that the probability that all the MATH are at most MATH is at least MATH. We have shown that, for most subsets MATH, if MATH then no new replica becomes active. Therefore it also holds for some specific MATH (in fact, for most). Recall that at the start of the proof we assumed that the MATH-amortized fan-in is at most MATH. This holds with probability at least MATH. Therefore in MATH of the runs the delay is at least MATH, which implies that the expected delay is MATH.
cs/9908010
The outline of the proof is as follows. For the most part, we consider bounds on the number of messages sent, rather than directly on the number of rounds. It is more convenient to argue about the number of messages, since the distribution of the destination of each replica's next message is fixed, namely uniform over all replicas. As long as we know that there are between MATH and MATH replicas active for MATH, we can translate an upper bound on the number of messages to an approximate upper bound on the number of rounds. More specifically, so long as the number MATH of active replicas does not reach a quarter of the system, that is, MATH, we study MATH, an upper bound on the number of messages needed to be sent such that with high probability, MATH, we have MATH new replicas change state to active. We then analyze the algorithm as composed of phases starting with MATH. The upper bound on the number of messages to reach half the system is MATH, the bound on the number of rounds is MATH, and the error probability is at most MATH, where MATH. In the analysis we assume for simplicity that MATH for some MATH, and this implies that in the last component we study, there are at most MATH active replicas. At the end, we consider the case where MATH, and bound from above the number of rounds needed to complete the propagation algorithm. This case adds only an additive factor of MATH to the total delay. We start with the analysis of the number of messages required to move from MATH active replicas to MATH, where MATH. For any MATH, let MATH be the number of messages that MATH received, out of the first MATH messages, and let MATH be the number of distinct replicas that sent the MATH messages. Let MATH be an indicator variable such that MATH if MATH receives messages from MATH or more distinct replicas after MATH messages are sent, and MATH otherwise. That is, MATH if and only if MATH. 
We now use the coupon collector's analysis to bound the probability that MATH when MATH messages are received. Thus, a replica needs to get an expected MATH messages before MATH, and so with probability MATH it would need more than MATH messages to collect MATH different messages. For MATH we have that MATH . Let MATH denote the number of replicas that received messages from MATH or more replicas after MATH messages are sent, that is, MATH, where the active replicas are MATH. For MATH we have, MATH where the right inequality uses the fact that MATH. Our aim is to analyze the distribution of MATH. More specifically, we would like to find MATH such that, MATH for any MATH. Generally, the analysis is simpler when the random variables are independent. Unfortunately, the random variables MATH are not independent, but using a classical result by CITE, the dependency works only in our favor. Namely, let MATH be i.i.d. binary random variables with MATH, and MATH. Then, MATH . From now on we will prove the bounds for MATH and they will apply also to MATH. First, using a NAME bound (see CITE) we have that, MATH . For MATH, we have MATH, and hence MATH . For the analysis of the Random algorithm, we view the algorithm as running in phases so long as MATH. There will be MATH phases, and in each phase we start with MATH initial replicas, for MATH. The MATH-th phase runs for MATH rounds. We say that a phase is ``good" if by the end of the phase the number of active replicas has at least doubled. The probability that some phase is not good is bounded by, MATH for MATH. Assuming that all the phases are good, at the end half of the replicas are active. The number of rounds until half the system is active is at most, MATH where we used here the fact that MATH is a decreasing function in MATH. We now reach the last stage of the algorithm, when MATH. 
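The coupon-collector expectation invoked here has the standard closed form sum_{i=0}^{k-1} n/(n-i) for the expected number of uniform draws from n types needed to see k distinct types. A quick numeric sketch (the concrete n = 10 is illustrative, not a value from the paper):

```python
def expected_draws(n, k):
    """Coupon collector: expected number of uniform draws from n coupon
    types needed to see k distinct types, i.e. sum of n/(n - i) for
    i = 0 .. k-1."""
    return sum(n / (n - i) for i in range(k))

# Collecting all n = 10 types takes n * H_n, roughly 29.29 draws on
# average; the first half of the types is much cheaper than the second.
e_full = expected_draws(10, 10)
e_half = expected_draws(10, 5)
```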
Unfortunately, there are too few passive replicas to use the analysis above for MATH, since we cannot drive the expectation of MATH any higher than MATH. We therefore employ a different technique here. We give an upper bound on the expected number of rounds for completion at the last stage. Fix any replica MATH, and let MATH be the number of new updates in round MATH that MATH receives. Since MATH, we have MATH, and so: MATH . Let MATH denote the number of new updates received by MATH in MATH rounds, hence MATH. Then, MATH. Using the NAME bound we have, MATH . Let MATH. The probability that MATH is less than MATH is at most MATH. The probability that some replica receives fewer than MATH new updates in MATH rounds is thus less than MATH, and so the algorithm terminates in an expected MATH rounds. Putting the two bounds together, we have an expected MATH number of rounds.
cs/9908010
The probability that a replica receives MATH messages or more in one round is bounded by MATH, which is bounded by MATH. For MATH, this bound is MATH, for some MATH. Hence the probability that any replica receives more than MATH in a round is small. Therefore, the fan-in is bounded by MATH. If MATH, then for MATH, this bound is MATH, for some MATH. Therefore, in this case, the fan-in is bounded by MATH. The probability that in MATH rounds a specific replica receives more than MATH messages is bounded by MATH which is bounded by MATH. The probability that any replica receives more than MATH messages is bounded by MATH. Thus, the MATH-amortized fan-in is at most MATH.
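The fan-in events bounded here are binomial tails: a fixed replica receives each of m uniformly addressed messages with probability 1/n. The Chernoff-style estimates in the proof upper-bound the exact tail, which for small parameters can be computed directly (the parameter values below are illustrative assumptions):

```python
from math import comb

def binom_tail(m, p, t):
    """Exact P(X >= t) for X ~ Binomial(m, p): the probability that a
    fixed replica receives at least t of m uniformly addressed messages
    when each lands on it with probability p = 1/n."""
    return sum(comb(m, j) * p ** j * (1 - p) ** (m - j)
               for j in range(t, m + 1))

# Illustrative numbers: n = 100 replicas, m = 100 messages in a round;
# a fixed replica expects 1 message, and rarely sees 5 or more.
tail = binom_tail(m=100, p=0.01, t=5)
```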
cs/9908010
Let MATH be any update. We say that a node in the tree is active for MATH if MATH correct replicas (out of the MATH replicas in the node) are active for MATH. We start by bounding the expected number of rounds, starting from MATH, for the root to become active. The time until the root is active can be bounded by the delay of the Random algorithm with MATH replicas. Since on average one of every three messages is targeted at the root, within expected MATH rounds the root becomes active. The next step of the proof is to bound how much time it takes from when a node becomes active until its child becomes active. We will not be interested in the expected time, but rather focus on the time until there is at least a constant probability that the child is active, and show a bound of MATH rounds. Given that MATH correct replicas in the parent node are active, each replica in the child node expects to receive MATH updates from new replicas in every round. Using a NAME bound, this implies that in MATH rounds each replica has a probability of MATH of not becoming active. The probability that the child node is not active (that is, fewer than MATH of its replicas are active) after MATH rounds is bounded by MATH for MATH. In order to bound the delay we consider the delay until a leaf node becomes active. We show that for each leaf node, with high probability its delay is bounded by MATH. Each leaf node has MATH nodes on the path leading from the root to it. Partition the rounds into meta-rounds, each containing MATH rounds. For each meta-round there is a probability of at least MATH that another node on the path becomes active. This implies that in MATH meta-rounds, we have an expected number of MATH active nodes on the path. Therefore, the probability that we have less than MATH is at most MATH. We have MATH nodes on the path; this gives the constraint that MATH.
In addition, we would like the probability that some leaf node fails to become active to be less than MATH, which holds for MATH. Consider MATH meta-rounds. Since there are at most MATH leaves in the tree, with probability at least MATH the number of meta-rounds is at most MATH. Thus, the delay is MATH. This implies that the total expected delay is bounded by MATH.
cs/9908010
Any replica at the root has a probability of MATH of receiving a message from any other replica. This implies that the expected number of messages per round is MATH, which establishes the lower bound. The probability that a replica receives more than MATH is bounded by MATH (using the NAME bound). Since MATH, the probability is bounded by MATH, and the theorem follows.
cs/9908010
The proof of the fan-in bound is identical to that for the NAME algorithm. We have MATH replicas at the root. Each replica sends to each replica at the root with probability MATH. Therefore the expected number of updates to each replica in the root is MATH, which establishes the lower bound on the fan-in. With probability MATH, each replica receives fewer than MATH updates in a round. The proof of the delay bound has two parts. The first is bounding the time it takes to make all the replicas in the root active. This can be bounded by the delay of the Random algorithm with MATH replicas, and so is MATH. The second part is the propagation down the tree, which is similar to the Tree algorithm. As before, in each node at each round, each replica has a constant probability of receiving messages from MATH new replicas. This implies that with some constant probability MATH all the replicas in a node are active after MATH rounds. The analysis of the propagation to a leaf node is identical to before; thus this second stage takes MATH meta-rounds, and its total delay is MATH.
cs/9908016
Compute the NAME diagram of the circles within this region; that is, a partition of the region into cells, each of which contains points closer to one of the circles than to any other circle REF . Because the region is simply connected, the cell boundaries of this diagram form a tree. We choose a vertex MATH of this tree such that each of the subtrees rooted at MATH has at most half the leaves of the overall tree, and draw a circle centered at MATH and tangent to the circles having NAME cells incident at MATH. This splits the region into simpler regions. We continue recursively within these regions, stopping when we reach regions bounded by only four arcs (in which no further simplification is possible). Adding each new circle to the NAME diagram can be done in time linear in the number of arcs bounding the region, so the total time to subdivide all regions is MATH.
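The vertex MATH chosen here is a leaf-balanced centroid of the boundary tree; such a vertex always exists and can be found by examining each vertex's subtree leaf counts. A naive quadratic-time sketch (the linear-time bound in the text uses a more careful method; the graph representation below is our assumption):

```python
def leaf_centroid(adj, leaves):
    """Return a vertex v of a tree such that each subtree hanging off v
    contains at most half of the tree's leaves; such a vertex always
    exists.  adj maps each vertex to the set of its neighbours."""
    half = len(leaves) / 2

    def count(v, parent):
        # Number of leaves in the subtree rooted at v, away from parent.
        if v in leaves:
            return 1
        return sum(count(u, v) for u in adj[v] if u != parent)

    for v in adj:
        if all(count(u, v) <= half for u in adj[v]):
            return v
    return None

# A star tree: the center is the balanced vertex.
star = {'c': {'a', 'b', 'd', 'e'},
        'a': {'c'}, 'b': {'c'}, 'd': {'c'}, 'e': {'c'}}
center = leaf_centroid(star, {'a', 'b', 'd', 'e'})
```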
cs/9908016
The vertex protection step can be performed in MATH time using circles with radius half the minimum distance between vertices. (This minimum distance is an edge of the NAME triangulation and can be found in MATH time.) Next we find a set of MATH circles to add to our packing, so that any remaining gaps are simply connected CITE. If in this step we ever add a circle MATH tangent to the boundary, we replace it by a set of circles: a circle with radius MATH centered on the boundary at the point of tangency, circles with radius MATH tangent to MATH at each of its other points of tangency, and one circle concentric to MATH with radius reduced by MATH; this replacement is depicted in REF . Finally, as in the algorithm of Bern et al. CITE we repeatedly find the NAME diagram of the circles bounding any remaining gap and place a circle on a NAME vertex so as to divide the gap into two smaller parts of roughly equal complexity. However, in order to avoid placing circles tangent to the boundary in this step, we change their method by using only the circles around a gap as NAME sites, omitting any diagram edges that may bound the gap. Our NAME diagram's edges (together with any boundary edges of the gap) form a tree, so we can find a vertex which splits the tree's leaves roughly evenly. Placing a circle at that vertex produces two simpler gaps and does not cause essential tangencies with the domain boundary. Unlike Bern et al. CITE, we do not bother eliminating bad four-sided gaps. We form a mesh by connecting each center of a circle in the packing to the circumcenters of adjacent gaps REF . In the four-sided gaps along the domain boundary, we place an additional edge from the boundary to the center of the opposite circle, bisecting the chord between the tangencies with the two other circles. These edges form a quadrilateral mesh since each face surrounds a point of tangency, and each point of tangency is surrounded by the vertices from two circles and two gaps. 
Each mesh element is the NAME cell of the point of tangency it contains; its boundary is composed of perpendicular bisectors of dual NAME edges. Each dual NAME edge has a circumscribing circle from the circle packing as witness to the empty-circle property of NAME graphs.
cs/9908016
MATH has one circle centered at each vertex; the two circles corresponding to the endpoints of an edge overlap in a lune. The lune's two corners are points of tangency in the original circle packing (or, if the edge is on the domain boundary, the corners are one such point of tangency and its reflection) and are contained in the two quadrilaterals on either side of the edge. These corners have power zero with respect to the two circles, and are not interior to any other circles; therefore they have those two circles (and possibly some others) as nearest power neighbors. Since power diagram cells are convex, those two circles must continue to be the nearest neighbors to each point along the center line of the lune; in other words this center line lies along an edge in the power diagram corresponding to the given mesh edge. Conversely, we must show that every power diagram adjacency corresponds to a mesh edge. But the power diagram boundaries described above form a convex polygon completely containing the center of the cell's circle; therefore there can be no other adjacencies than the ones we have already found, which correspond to mesh edges.
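The "power" used throughout is the standard power of a point with respect to a circle, |p - c|^2 - r^2; a power-diagram cell belongs to the circle minimizing this quantity. A minimal sketch reproducing the tangency-point observation (function names and the two concrete circles are ours):

```python
def power(point, circle):
    """Power of a point with respect to a circle (cx, cy, r):
    |p - c|^2 - r^2; zero on the circle, negative inside it."""
    (px, py), (cx, cy, r) = point, circle
    return (px - cx) ** 2 + (py - cy) ** 2 - r ** 2

def nearest_power_neighbour(point, circles):
    """Index of the circle whose power-diagram cell contains the point,
    i.e. the circle of minimum power."""
    return min(range(len(circles)), key=lambda i: power(point, circles[i]))

# Two unit circles tangent at the origin: the tangency point has power
# zero with respect to both, so it lies on their common power-diagram edge.
c1, c2 = (-1.0, 0.0, 1.0), (1.0, 0.0, 1.0)
p0 = power((0.0, 0.0), c1)
```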
cs/9908016
We form the NAME quadrilateralization of REF , and subdivide each quadrilateral MATH into four smaller quadrilaterals by dropping perpendiculars from the NAME site contained in MATH to each of MATH's four sides. On edges where two cells of the NAME quadrilateralization meet, the two perpendiculars end at a common vertex because they are the two halves of a chord connecting two tangent points on the same circle. For the same reason, each perpendicular meets the edge to which it is perpendicular without crossing any other cell boundaries first.
cs/9908016
As in the algorithm of Bern et al., we find a circle packing; however as discussed below we place some further constraints on the placement of circles. We then connect pairs of tangent circles by radial line segments through their points of tangency, and apply a case analysis to the resulting set of polygons. As shown in REF , all interior gaps can be subdivided into kites: three-sided gaps result in three kites, good four-sided gaps result in four, and bad four-sided gaps result in seven. Also shown in the figure are three types of gaps on the boundary of the polygon: three-sided gaps along the edge, reflex vertices protected by two equal tangent circles, and convex vertices packed by a single circle. There are two remaining cases, in which one or two of the sides of a four-sided gap are portions of the domain boundary, and the four-sided gap has a high aspect ratio preventing these boundary edges from being covered by a small number of three-sided gaps. In the simpler of these cases, two opposite sides of the four-sided gap are both boundary edges. Such a gap is necessarily good. If it has aspect ratio MATH, we can line the domain edges with MATH additional circles, as in the next case. Otherwise, our construction is illustrated in REF . We find a mesh using an auxiliary set of circles, perpendicular to the original packing. We first place at each end of the four-sided gap a pair of identical circles, tangent to each other and crossing the boundary edges perpendicularly at their points of tangency. These are the medium-sized circles in the figure. We next place two more circles, each perpendicular to one of the boundary edges and crossing it at the same points already crossed by the previously added circles; these are the large overlapping circles in the figure. Finally, each end of the original four-sided gap now contains a gap formed by four circles, but two of these circles cross rather than sharing a tangency. 
We fill each gap with an additional circle; these are the small circles in the figure. The resulting set of eight circles forms six three-sided gaps and one good four-sided gap, and can be meshed as shown in the figure. The final case consists of four-sided gaps (not necessarily good) involving one boundary edge. To make this case tractable, we restrict our initial placement of circles so that, if we place a circle MATH within a gap involving boundary edges, then MATH is either tangent to those edges or separated from them by a distance of at least MATH times its radius, for some sufficiently small value MATH. Then, any remaining four-sided boundary gap must have bounded aspect ratio, and we can place MATH small circles along the boundary edge leaving only three-sided gaps on that edge REF . The interior of the gap can then be packed with MATH additional circles leaving only the previously solved three- and four-sided internal gap cases.
cs/9908016
Suppose we have such a simple polygon, and a quadrilateral mesh on it. Let MATH denote the number of mesh vertices on the boundary of the polygon, MATH denote the number of interior vertices, MATH denote the number of mesh edges, and MATH denote the number of mesh quadrilaterals. Then, since each quadrilateral has four edges, each interior edge appears twice, and there are MATH boundary edges, we have the relation MATH. Combining this with NAME 's formula MATH and cancelling MATH leaves MATH. However, if all interior vertices of the mesh were incident to four or more edges, and all exterior vertices were incident to three or more edges, we would have MATH (since each edge contributes two to the sum of vertex degrees), a contradiction. So, the mesh has either an interior vertex with degree three, or an exterior vertex with degree two, and in either case at least one of the angles at that vertex must be at least MATH.
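The counting identities in this argument can be checked on a concrete mesh; the following toy check (ours, not from the paper) uses a 2-by-2 grid of unit squares meshing a square:

```python
# A 2x2 grid of unit squares meshing a square polygon.
V_b = 8   # boundary vertices: 4 corners + 4 edge midpoints
V_i = 1   # interior vertices: the center
E = 12    # mesh edges
Q = 4     # quadrilaterals

# Each quad has four edges; interior edges are shared by two quads and
# there are V_b boundary edges, so 4Q = 2E - V_b.
assert 4 * Q == 2 * E - V_b

# Euler's formula for a disk: V - E + F = 2, with F = Q + 1 (outer face).
assert (V_b + V_i) - E + (Q + 1) == 2

# Combining the two identities gives 2E = 4*V_i + 3*V_b - 4, which is
# strictly less than the bound 4*V_i + 3*V_b required if every interior
# vertex had degree >= 4 and every boundary vertex degree >= 3.
assert 2 * E == 4 * V_i + 3 * V_b - 4
```

The contradiction in the proof is exactly the gap of 4 between the degree-sum bound and the actual value of 2E.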
cs/9908016
The result follows from REF , since any kite (which we can assume without loss of generality to have a vertical axis of symmetry) can be divided into six MATH quadrilaterals in one of three ways depending on how the top and bottom angles of the kite compare to MATH. Specifically, we add new subdivision points on the midpoints of each kite edge. Then, if both the top and bottom angle of the kite are sharp (less than MATH), we can split the kite along a line between the left and right vertices, and subdivide both of the resulting triangles into three MATH quadrilaterals REF . If both angles are large (greater than MATH), we can similarly split the kite vertically along a line from top to bottom and again subdivide both of the resulting triangles REF . In both of these cases the subdivisions are axis-aligned or at MATH angles to the axes. In the final case, the top angle is large (at least MATH) and the bottom angle is sharp (less than MATH). In this case, like the second, we partition the kite vertically into two triangles, and again partition each triangle into three; however in this final case the subdivisions are along lines between the bottom of the triangle and the two opposite edge midpoints, and at MATH angles to those lines. It is easily verified that with the given assumptions on the angles of the original kite, all vertices of the subdivision lie as depicted in the figures and all angles are at most MATH.
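The splitting of a triangle into three quadrilaterals through its edge midpoints can be sketched as follows; we use the centroid as the interior point, which is an assumption made for illustration (the paper's cases place the subdivision lines differently), and verify that the three quads exactly tile the triangle by area:

```python
def shoelace(poly):
    """Signed area of a polygon given as a list of (x, y) vertices."""
    s = 0.0
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        s += x1 * y2 - x2 * y1
    return s / 2.0

def mid(p, q):
    return ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)

def triangle_to_three_quads(a, b, c):
    """Split triangle abc into three quads via edge midpoints and centroid."""
    g = ((a[0] + b[0] + c[0]) / 3.0, (a[1] + b[1] + c[1]) / 3.0)
    mab, mbc, mca = mid(a, b), mid(b, c), mid(c, a)
    return [
        [a, mab, g, mca],
        [b, mbc, g, mab],
        [c, mca, g, mbc],
    ]

quads = triangle_to_three_quads((0.0, 0.0), (4.0, 0.0), (0.0, 3.0))
total = sum(abs(shoelace(q)) for q in quads)  # should equal the triangle area, 6
```

The angle bounds claimed in the proof depend on the specific choice of interior point and are not checked by this sketch.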
cs/9908018
Since translation by a constant doesn't alter the recognizability of a set, as recalled in the introduction (see CITE for details), we can assume that MATH. We have to construct a regular language MATH such that the number of words of length MATH is exactly MATH. Since MATH only contains powers of MATH with non-negative integral coefficients, the construction of MATH can easily be achieved by a union of languages MATH on distinct alphabets (one has a small restriction for the language MATH; we explain it in the following example to keep this proof simple). To conclude the proof, the reader must recall that if a language MATH is regular then the language MATH formed of the smallest words of each length for the lexicographic ordering is still regular CITE. One can check that MATH.
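The word-counting function of a regular language, which this argument manipulates throughout, can be computed from any DFA by dynamic programming on path counts (equivalently, by powering the transition-count matrix). A generic illustration, not the construction of the proof:

```python
def count_words(delta, start, accept, n):
    """Number of words of length n accepted by a DFA.

    delta: dict mapping (state, symbol) -> state (may be partial;
    missing transitions behave as a dead state); start: initial state;
    accept: set of accepting states.
    """
    states = {s for (s, _) in delta} | set(delta.values()) | {start}
    counts = {s: (1 if s == start else 0) for s in states}
    for _ in range(n):
        nxt = {s: 0 for s in states}
        for (s, _sym), t in delta.items():
            nxt[t] += counts[s]
        counts = nxt
    return sum(counts[s] for s in accept)

# DFA for a*b* over {a, b}: state 0 = only a's seen, state 1 = a b seen.
delta = {(0, 'a'): 0, (0, 'b'): 1, (1, 'b'): 1}
# a*b* has exactly n + 1 words of length n.
vals = [count_words(delta, 0, {0, 1}, n) for n in range(6)]
```

Unions of such languages on disjoint alphabets add their counting functions, which is the mechanism the proof exploits.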
cs/9908018
Assume that MATH. Let MATH be the minimal alphabet of MATH. Then MATH where MATH. For MATH, MATH has exactly MATH words of length MATH with MATH in position MATH. From this observation, one can check that MATH have exactly MATH words of length MATH for MATH. Notice that MATH if MATH. If MATH, then we have to remove the first MATH words of each length from MATH, MATH . Notice once more that MATH if MATH.
cs/9908018
We proceed as in REF and consider the polynomial MATH. Observe that since MATH, the coefficient of the dominant power in MATH is positive and thus the same remark holds for MATH. By adding extra terms of the form MATH if MATH, we can assume that MATH, where MATH, MATH and MATH. Let MATH. Using REF , for MATH we construct languages MATH such that for all MATH, MATH. The reader can easily construct a language MATH such that MATH, MATH, by a union of the languages MATH and MATH. If we want to consider the smallest word of each length, as in REF , then the language MATH must contain exactly MATH words of length at most MATH (in this case, the first word of length MATH is the MATH word of MATH and its numerical value is thus MATH). This can be achieved by adding or removing a finite number of words from the regular language MATH (this operation doesn't alter the regularity of MATH). Thus MATH . To conclude, we have to add a finite number of words for the representation of MATH and MATH .
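One standard source of polynomial counting functions, in the spirit of this construction (our illustration, not the paper's): the language a1* a2* ... ak* has C(n+k-1, k-1) words of length n, a polynomial of degree k-1 in n. A brute-force check for k = 3:

```python
from itertools import product
from math import comb

def count_sorted_words(alphabet, n):
    """Words of length n over `alphabet` whose letters appear in
    non-decreasing alphabet order, i.e. the language a1* a2* ... ak*."""
    return sum(1 for w in product(alphabet, repeat=n)
               if list(w) == sorted(w, key=alphabet.index))

# a* b* c* has C(n + 2, 2) words of length n: a degree-2 polynomial.
vals = [count_sorted_words(['a', 'b', 'c'], n) for n in range(5)]
```

Taking unions of such languages on disjoint alphabets then realizes any prescribed non-negative integer combination of binomial polynomials as a counting function.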
cs/9908018
Let MATH with MATH and MATH. Let MATH be the least common multiple of MATH. One has MATH with MATH by REF; thus MATH. As in REF , there exist a constant MATH and a language MATH such that MATH, MATH . We modify MATH (by adding or removing a finite number of words) to have MATH . It was proved in CITE that the arithmetic progression MATH is recognizable for any numeration system. Let MATH; then MATH is a regular language such that MATH . We conclude as in REF .
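The recognizability of arithmetic progressions invoked here can be illustrated in the classic special case of base-2 numeration (the cited result covers general numeration systems): the binary representations of the multiples of 3 form a regular language, accepted by a DFA that tracks the value of the prefix modulo 3.

```python
def divisible_by_3_binary(s):
    """DFA over {'0','1'}: state = value mod 3 of the binary prefix."""
    r = 0
    for ch in s:
        r = (2 * r + int(ch)) % 3
    return r == 0

# Compare the DFA's verdict with direct arithmetic on 0..63.
checks = [divisible_by_3_binary(format(n, 'b')) == (n % 3 == 0)
          for n in range(64)]
```

The same remainder-tracking idea gives a DFA for any progression a + b*n once the numeration system satisfies a suitable recurrence.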
hep-th/9908011
One has MATH for all MATH, and similarly MATH .
hep-th/9908011
We have MATH . Now, if MATH, we have MATH where MATH means MATH evaluated at MATH. The proof for MATH is similar.
hep-th/9908011
We have MATH . For MATH, we may write MATH; similarly, the last line displayed above becomes MATH which has the desired derivative.
hep-th/9908011
This follows by integration by parts; one has only to note that the terms can be arranged so that this is sensible.
hep-th/9908011
We shall first show that the function is measurable. Let MATH be a positive real number and MATH a fixed NAME - NAME operator. The inverse image of the ball of radius MATH at MATH is MATH where MATH form an orthonormal basis for MATH. But the sum is a pointwise-convergent sum of non-negative continuous functions, hence lower semicontinuous, and hence measurable. Now for positive integers MATH, consider the sets MATH. These form a decreasing sequence of sets of finite measure, with intersection MATH. Thus, for large enough MATH, the set MATH has positive measure. We may similarly find a set MATH, symmetric about the origin, of positive measure, and with MATH bounded on MATH. However, the set MATH will then contain an interval about the origin, and from the "twisted group law" and the NAME - NAME - NAME Theorem we see that MATH is bounded on this interval. Thus there exists an interval containing the origin on which MATH is bounded. We integrate the twisted group law in the form MATH (for MATH close enough to zero) over a closed interval MATH near the origin: MATH . The first term of the last line tends to MATH as MATH. The second term does, too, since we have shown that conjugation by MATH is strongly continuous on the NAME - NAME operators. Thus MATH is continuous at the origin. Finally, at any value of MATH, for small enough MATH, we have MATH which tends to zero as MATH does.
hep-th/9908011
A little algebra shows MATH . From the NAME - NAME - NAME Theorem, we know MATH for some MATH, MATH. Thus MATH . For any MATH with MATH, write MATH, where MATH is an integer, MATH, and MATH has the same sign as MATH. Then we have MATH for MATH. The result now follows from elementary considerations.
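The decomposition used here, writing a real number as an integer multiple of a fixed step plus a small remainder, can be sketched numerically; we use the nearest-integer convention, which is an assumption made for illustration (the paper's sign convention for the remainder may differ):

```python
def decompose(t, delta):
    """Write t = n * delta + r with n an integer and |r| <= delta / 2.

    Uses the nearest-integer convention for n (an illustrative choice).
    """
    n = round(t / delta)
    r = t - n * delta
    return n, r

n, r = decompose(2.7, 1.0)  # 2.7 = 3 * 1.0 + (-0.3)
```

Bounding the remainder uniformly is what makes the subsequent "elementary considerations" go through.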
hep-th/9908011
Using a subscript minus to denote MATH-antilinear parts, we have MATH . A priori, this integral is known to exist only in the strong sense. However, it follows from the previous two results that MATH is a locally integrable NAME function and that the integral converges for the real part of MATH sufficiently positive. Multiplying by MATH, one easily shows that the resulting integral tends to MATH as MATH.
hep-th/9908011
We have MATH where the asterisk denotes the real adjoint.
hep-th/9908011
We have MATH . Since the first factor varies continuously in NAME - NAME norm and the second continuously in operator norm, the product varies continuously in NAME - NAME norm.
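The continuity argument rests on the standard inequality bounding the Hilbert-Schmidt norm of a product by the Hilbert-Schmidt norm of one factor times the operator norm of the other. A finite-dimensional numerical check (where the Hilbert-Schmidt norm is the Frobenius norm), ours rather than the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
B = rng.standard_normal((5, 5))

hs = lambda M: np.linalg.norm(M, 'fro')  # Hilbert-Schmidt norm
op = lambda M: np.linalg.norm(M, 2)      # operator (spectral) norm

# ||AB||_HS <= ||A||_HS * ||B||_op and ||AB||_HS <= ||A||_op * ||B||_HS:
# a product of an HS-continuous factor and an operator-norm-continuous
# factor therefore varies continuously in HS norm.
ok = (hs(A @ B) <= hs(A) * op(B) + 1e-12 and
      hs(A @ B) <= op(A) * hs(B) + 1e-12)
```

These two inequalities together give exactly the continuity of the product claimed in the proof.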
hep-th/9908011
Note that MATH is a NAME space, and so will be the space of linear forms on it. We have MATH. The idea will be to integrate this against MATH. However, since MATH is only weakly defined, it will be easier to approach this integral as a limit. Consider then, for a NAME - NAME operator MATH, MATH . Here the spectrum of MATH must lie in a strip MATH for some MATH, and so the integral converges for MATH sufficiently positive. Now, as MATH approaches MATH (in the space of forms on MATH), the left-hand side of this equation approaches MATH in the space of forms, but this integral is in fact a NAME - NAME operator. (This follows from REF .) Therefore, as MATH tends to MATH as a form, the quantity MATH tends to a limit, which we denote MATH, equal to the integral. For the converse, we note that MATH and the right-hand side is NAME - NAME in view of REF .
hep-th/9908011
We shall work with MATH, to avoid conjugating MATH and MATH. For this, we must derive the precise relation between the MATH's, MATH's, and MATH. This arises from the canonical quantization prescription, which in our case amounts to the replacement of the variables MATH, MATH with the operators MATH, MATH. We see that MATH is precisely MATH, the MATH-linear part of MATH. We can work out MATH from the identity MATH from which we find MATH and thus MATH . It was shown in REF (and the comments following that proposition) that MATH is continuous in NAME - NAME norm.
hep-th/9908011
Let MATH be the image of the vacuum at time MATH. Then MATH . Since MATH varies continuously in NAME - NAME norm, for any fixed MATH, this can be made as close to zero as desired by choosing MATH close enough to MATH.
hep-th/9908011
It is enough to establish this for trigonometric monomials. For any MATH, let MATH be the corresponding NAME operator. Here MATH is MATH as an element of MATH, and MATH is its complex conjugate. Then a trigonometric monomial is (a constant times) MATH for some MATH. The image of this state at time MATH is MATH where MATH. We find MATH . Since, as MATH approaches MATH, both MATH and MATH tend to MATH in NAME - NAME norm, this tends to unity as MATH.
hep-th/9908011
With the choice of phases MATH, we have a projective unitary representation MATH. The cocycle representing its deviation from a true representation is MATH. This can be computed by lengthy but straightforward means. In matrix notation, we find it is MATH . Here the quantity whose determinant is to be taken is of the form MATH, where MATH varies continuously in trace norm in MATH and MATH. From this it follows that the cocycle is continuous, and so a continuous choice of phase is possible, making MATH into a one-parameter unitary group. Let such a choice be made. Finally, we must show that this group is strongly continuous. Since the phases vary continuously, it is enough to show that the original, projective representation is strongly continuous. While this could probably be done directly from the formula above, it is clearer to give an indirect argument. In the previous subsection it was shown that MATH is strongly continuous on a dense family of states. (Recall that now the phase has been chosen so that MATH is a one-parameter unitary group.) For any such state MATH and any MATH, the state MATH is in the domain of MATH. It follows that MATH has a densely-defined generator, which, because MATH is unitary, must be self-adjoint.
hep-th/9908012
Let MATH be the domain of MATH. The spectrum of MATH is the set of points MATH at which the map MATH is not invertible. More precisely, since MATH is not canonically a complex vector space, we work with the complexification. This will be done in the usual way, without introducing unnecessary notation for complexifications. Then MATH is a Hermitian form bounded away from zero (as a form). Suppose MATH is not one-to-one. Then MATH has an eigenvector MATH with eigenvalue MATH. Then MATH . This can hold for all MATH only if MATH is purely imaginary (or zero). Now suppose that MATH is not onto, and its image lies in some hyperplane MATH. This means that MATH is an eigenvector of MATH with eigenvalue MATH. Now MATH is the generator of MATH. The domain of MATH is MATH, and the energy function is MATH. As in the previous paragraph, we find MATH, forcing the real part of MATH to vanish. We now take up the more delicate case, where MATH is one-to-one but not onto, but its image is dense. In this case, we may find a sequence MATH of unit vectors in MATH which are elements of MATH such that MATH is bounded and MATH tends to zero. We have MATH for any MATH. Thus MATH where we have taken MATH for simplicity (and we have used the NAME - NAME - NAME bound MATH). Now, suppose MATH. We may choose MATH so that MATH is as large as desired. For such a MATH, and any MATH, we may choose MATH large enough so that MATH for all MATH. In this case we will have MATH . However, this is a contradiction, for the right-hand side is uniformly bounded, and MATH can be made as large as desired. Thus MATH. Consideration of MATH similarly rules out the case MATH, and so we must have MATH.
hep-th/9908012
The idea will be to define a positive complex structure MATH relative to which MATH is orthogonal. The conclusions will follow almost directly from this. At a formal level, one has MATH . However, the sense in which this integral converges as MATH needs to be made precise. Even if MATH were known to be bounded, the existence of this integral would be a bit delicate. In the present case, there are a number of technicalities, which arise because we need to gradually establish enough properties of MATH to show that it exists as a bounded operator. Once this is done, the remainder of the proof will be routine algebraic computations. We begin by showing that MATH exists (as a potentially unbounded operator) on the dense domain MATH. The NAME integral for MATH converges strongly on MATH in the sense of a NAME principal value. To see this, first note that since MATH converges near infinity as a NAME principal value, it is enough to show that MATH converges, where "P" indicates the principal value and MATH. The integrand can be re-written: MATH . On the other hand, the NAME - NAME - NAME estimate MATH implies MATH for sufficiently large MATH (and similarly for negative MATH). From this estimate, we have, for MATH, MATH of order MATH as MATH. This is integrable at infinity, and so MATH exists for MATH. It is straightforward to verify that MATH in a suitable sense, namely on MATH. This is a singular-integral computation using the definition of MATH. We first re-write the principal-value integral: MATH (understood strongly on MATH). Now we have MATH (understood strongly on MATH). After some algebra, this can be re-written as MATH . While the existence of this as a NAME integral requires the cancellation of MATH against MATH in order to compensate for the singularity in the denominator MATH at MATH, we may break the integral into two if each is interpreted as a principal value: MATH .
(This will of course be compatible with the definition of the NAME integral, since it simply corresponds to a particular way of forming the limit which is the integral.) If we do this, then we can use the distributional identity MATH on the first integral, and its counterpart for integration over MATH on the second, to obtain MATH immediately, strongly on MATH. In the next stage of the analysis, we shall want to consider the MATH-linear and antilinear portions of MATH. Formally, these are given by MATH. However, in order to make sense of this, we must show that MATH and MATH have a common dense domain. We shall show that the MATH-invariant dense set MATH; then MATH are naturally defined on this domain. (Here MATH for some MATH.) It is clear that MATH; we must show that MATH. So let MATH. Then MATH, which is an element of MATH. The same sort of argument shows MATH for MATH. Now in fact MATH exists as a bounded operator. To see this, we will show that the MATH-antilinear part MATH of the resolvent is MATH in operator norm. This will be accomplished by estimating MATH . (Using the symmetry MATH, MATH, it is enough to consider the case of positive MATH.) In REF of paper I, we considered a quantity MATH and showed MATH where MATH . Using this, we find that for MATH sufficiently large MATH . These NAME - NAME - NAME estimates, and the boundedness of MATH, now imply that this is MATH at infinity in operator norm. We remark that the boundedness of MATH implies that MATH extends naturally to exist on MATH, for we can define MATH. However, we shall see shortly that in fact MATH is itself bounded. We next note that MATH is a positive-definite symmetric form on MATH. (This follows easily from the fact that MATH is a generator of symplectomorphisms and that it is classically positive.) Similarly, conjugating by MATH, we have MATH a positive-definite symmetric form on MATH (or MATH).
Thus MATH also defines a positive-definite symmetric form, and hence MATH is a MATH-symmetric operator defining a positive-definite form MATH on MATH. Thus MATH has a canonical extension to a positive self-adjoint operator on a dense domain in MATH. (We shall continue to denote this operator by MATH.) We now re-write the equation MATH in terms of the self-adjoint operators MATH. We have MATH and the MATH-linear and MATH-antilinear parts of this are MATH . Multiplying through by MATH, we get MATH . From the first equation and the boundedness of MATH, we see that MATH (and hence MATH) is bounded. According to the second, the operators MATH, MATH commute. Thus, again using the first equation, we may find a MATH-symmetric, MATH-antilinear operator MATH such that MATH . (The factor of two is for later convenience.) It will be useful to rewrite these. Note that MATH where the MATH-antilinearity of MATH has been used (note that MATH). We now let MATH . Since MATH the operator MATH is a generator of symplectomorphisms and MATH is a symplectomorphism. We also have MATH . Now we shall show that MATH is MATH-orthogonal: MATH . (Here we have used the fact that, by construction, the operator MATH is invariant under conjugation by MATH.) Since MATH is MATH-orthogonal and a symplectomorphism, it is MATH-linear. We may thus set MATH .
hep-th/9908012
The MATH-antilinear part of MATH is MATH . Multiplying on the left by MATH and on the right by MATH (both bounded operators with bounded inverses), we see that the antilinear part of MATH is NAME - NAME iff MATH is, or equivalently if MATH is. The idea now will be to think of MATH as a vector in the space of symmetric bounded operators, and consider the action of MATH on this space by conjugation. However, this space is not a NAME space, and in order to take advantage of the spectral theory of operators on NAME space, it is more convenient to regard MATH as a sort of unbounded form on the NAME - NAME operators. Let MATH be the space of NAME - NAME MATH-antilinear, MATH-symmetric endomorphisms of MATH. This space is naturally a complex NAME space, with complex structure given by MATH and inner product MATH . Conjugation by MATH is a strongly continuous unitary map on this space. We may apply the usual spectral theory of one-parameter unitary groups on NAME space to this. In fact, we can work out the spectral resolution explicitly in terms of that for MATH. For we have MATH where we have defined MATH. One can check that MATH is a projection-valued measure on MATH, and the equation above provides the spectral resolution of conjugation by MATH. Note that since MATH is supported for MATH, the measure MATH is supported for MATH. (This may be counterintuitive, as one thinks of the generator of a one-parameter group of conjugations as having eigenvalues which are differences of the eigenvalues of the generator of the original group. However, in the present case there is a very curious interaction between the fact that the spectrum is imaginary and the antilinearity of the elements of MATH. This is in some sense the central point of the proof.) The above analysis does not quite apply directly to MATH or MATH, since MATH may not be NAME - NAME. 
However, the operator MATH is bounded, and so can be regarded as a linear functional on the space MATH of trace-class elements of MATH. The space MATH is dense in MATH and invariant under conjugation by MATH, and so the spectral resolution derived above for this conjugation can be applied, by duality, to MATH, and similarly to MATH. That MATH be NAME - NAME is thus equivalent to requiring MATH to be so. Since MATH resolves MATH into the orthogonal direct integral of NAME - NAME operators, the integral above, restricted to any compact interval of MATH-values, must be NAME - NAME. This may be seen to be equivalent to the requirement that MATH be NAME - NAME by elementary arguments. And since MATH is MATH-symmetric, this is equivalent to MATH being NAME - NAME. We now take up the boundedness below. If MATH is NAME - NAME, then MATH provides a restricted symplectomorphism taking MATH to MATH. The image of the restricted symplectomorphism is unitarily implementable, and the quantum Hamiltonian induced by MATH is MATH, which is bounded below.
hep-th/9908012
We have mentioned everything except the negativity. But MATH is a positive symmetric form.
hep-th/9908012
Let MATH be a MATH-orthonormal basis, and let MATH, where MATH. Then we are given that MATH converges. The sum and the integral are of non-negative terms, so the convergence is absolute. This quantity dominates MATH (where MATH is the identity). Since MATH, this implies MATH is NAME - NAME, which implies MATH is.
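The final implication uses that a trace-norm bound dominates the Hilbert-Schmidt norm: singular values are non-negative, so the sum of their squares is at most the square of their sum. A finite-dimensional numerical check (ours, for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((6, 6))

s = np.linalg.svd(M, compute_uv=False)  # singular values, all >= 0
trace_norm = s.sum()                    # sum of singular values
hs_norm = np.sqrt((s ** 2).sum())       # Frobenius / Hilbert-Schmidt norm

# sum(s_i^2) <= (sum s_i)^2, so trace-class implies Hilbert-Schmidt.
dominated = hs_norm <= trace_norm + 1e-12
```

In infinite dimensions the same inequality, applied to partial sums, shows that convergence of the trace-norm series forces convergence of the Hilbert-Schmidt series.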