paper: string (length 9–16)
proof: string (length 0–131k)
math/0007195
For MATH, we compute MATH . By mirroring this calculation and switching MATH and MATH, we obtain: MATH . But by REF , we have MATH . Hence, MATH, so that MATH.
math/0007195
REF follows from REF . To get REF , apply REF . Then, REF follows by using REF .
math/0007195
Let MATH. Then MATH .
math/0007195
For MATH, we compute MATH .
math/0007195
By the NAME equation, MATH. Hence, MATH, so that (by REF ) MATH. Now use REF .
math/0007205
Its self-adjointness follows from the form REF of MATH. Let us show that this operator is compact. We estimate the NAME norm of MATH: MATH . Thus MATH is a NAME operator. Hence, it is a compact operator CITE. To prove the positivity of MATH, consider the scalar product MATH as MATH.
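The compactness and positivity steps can be illustrated numerically. The following is a minimal sketch (the paper's kernel is redacted, so a hypothetical smooth Gaussian kernel on [0, 1] stands in): the Hilbert-Schmidt norm is the L^2 norm of the kernel, whose finiteness gives compactness, and positivity is checked through the discretized quadratic form.

```python
import numpy as np

# Hypothetical stand-in for the paper's operator: discretize a symmetric
# kernel K(x, y) on [0, 1], estimate its Hilbert-Schmidt norm as the L^2
# norm of the kernel, and check positivity of the quadratic form <Ku, u>.
n = 200
x = (np.arange(n) + 0.5) / n                   # midpoint grid on [0, 1]
h = 1.0 / n
K = np.exp(-(x[:, None] - x[None, :]) ** 2)    # assumed smooth Gaussian kernel

# ||K||_HS^2 = int int |K(x, y)|^2 dx dy, approximated by a Riemann sum
hs_norm = np.sqrt(np.sum(K ** 2) * h * h)
assert np.isfinite(hs_norm)                     # finite HS norm => compact operator

# positivity: the symmetric discretized operator has nonnegative spectrum
eigvals = np.linalg.eigvalsh(K * h)
print(hs_norm, eigvals.min())
```

The Gaussian kernel is positive definite, so the smallest eigenvalue is nonnegative up to roundoff; any symmetric kernel with finite L^2 norm gives a compact (Hilbert-Schmidt) operator in the same way.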
math/0007205
Let us represent REF in the operator form in MATH where MATH has the form REF , and MATH, MATH. Due to the positivity of MATH the homogeneous equation MATH has only the trivial solution. Since MATH is a compact operator, by the NAME theorem CITE the inhomogeneous REF has a unique solution given by MATH with MATH. Due to Condition C and the fact that MATH lies inside the upper half-plane at a positive distance from the MATH-axis (Condition B), MATH is an infinitely differentiable function with respect to all variables. Moreover, MATH as MATH, and MATH are bounded for all fixed MATH. One can show CITE that the function MATH has the same properties. Let us prove that MATH is a real function. After multiplying REF by MATH and integrating with respect to MATH from MATH to MATH, we obtain MATH . The self-adjointness of MATH implies that the imaginary part of the left-hand side of REF is equal to zero: MATH . Applying conjugation to REF for MATH gives MATH . It follows from REF and the reality of MATH that MATH is real.
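The Fredholm-alternative step can be sketched with a discretized second-kind equation (the paper's kernel and right-hand side are redacted; the toy kernel below is a hypothetical stand-in): when the homogeneous equation has only the trivial solution, the inhomogeneous equation is uniquely solvable.

```python
import numpy as np

# Toy Nystrom discretization of a second-kind Fredholm equation
# (I + K)u = f, with an assumed small smooth kernel so that the
# homogeneous equation (I + K)u = 0 has only the trivial solution.
n = 100
x = (np.arange(n) + 0.5) / n
h = 1.0 / n
K = 0.5 * np.cos(np.pi * (x[:, None] - x[None, :])) * h   # hypothetical kernel with weights
A = np.eye(n) + K
f = np.sin(np.pi * x)

# unique solvability: the discretized operator I + K is invertible
assert np.linalg.matrix_rank(A) == n
u = np.linalg.solve(A, f)
residual = np.linalg.norm(A @ u - f)
print(residual)
```

Because the kernel's norm is below one here, invertibility is immediate; the Fredholm alternative extends the same conclusion to any compact perturbation whose homogeneous equation has only the zero solution.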
math/0007206
Consider a variation MATH of MATH with fixed end-points MATH, MATH. Suppose MATH (yielding the second equation of motion) and MATH. For the variation of the action functional MATH one obtains, MATH . (Note that MATH.) This yields the first equation of motion. The total energy MATH is always a first integral; MATH and MATH are easily checked by direct calculation. The integral MATH is due to the rotational symmetry around the vertical axis and MATH is due to the rotational symmetry around the top's axis. To achieve the reduction, note that in terms of the angular velocity, the momentum integrals are MATH and MATH. Thus one obtains MATH and MATH. Furthermore, MATH. Unless MATH, one finds the following representation of MATH in the basis MATH: MATH . Now one can express the energy integral MATH in terms of MATH,MATH,MATH and MATH. This yields the reduced equation of motion REF .
math/0007206
This follows from the fact that for the dynamical constants MATH, MATH and MATH to be physically feasible, there has to be a value for MATH between MATH and MATH such that REF leads to real MATH. This means that the polynomial on the right hand side must be nonnegative somewhere in the interval MATH. But at MATH it takes the nonpositive values MATH, and it goes to MATH for MATH. Hence there have to be three real zeroes, situated as stated.
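The sign argument above can be checked with a worked example. The paper's constants are redacted, so the cubic below uses the standard Lagrange-top form with hypothetical dynamical constants p, q, r, s: it is nonpositive at the endpoints, positive somewhere inside the interval for feasible data, and tends to plus infinity, forcing three real zeros located as stated.

```python
import numpy as np

# f(u) = (p - q*u)*(1 - u**2) - (r - s*u)**2 with hypothetical constants:
# nonpositive at u = +-1, positive somewhere in (-1, 1), and -> +infinity
# as u -> +infinity, so there must be three real zeros.
p, q, r, s = 2.0, 1.0, 0.5, 0.3
f = lambda u: (p - q * u) * (1 - u ** 2) - (r - s * u) ** 2

assert f(1.0) <= 0 and f(-1.0) <= 0    # nonpositive at the endpoints
assert f(0.0) > 0                      # feasible: positive inside (-1, 1)
assert f(100.0) > 0                    # grows like q*u**3 for large u

# expand f as a cubic in u and locate its zeros
coeffs = [q, -(p + s ** 2), 2 * r * s - q, p - r ** 2]
r_all = np.roots(coeffs)
assert np.allclose(r_all.imag if np.iscomplexobj(r_all) else 0.0, 0.0)
roots = np.sort(r_all.real)
print(roots)
```

The sign pattern f(-1) <= 0 < f(0), f(1) <= 0, f(+inf) > 0 places one zero in each of (-1, 0), (0, 1), and (1, +inf), exactly the configuration the proof asserts.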
math/0007206
First, the expressions for the logarithmic differentials of MATH are derived in the general case. The reduction to the case of spherical tops is then immediate. Finally, the analytic properties of the differentials are examined. Observe that MATH . This implies MATH such that MATH . By REF , the components of the angular velocity vector in the direction of the vertical axis and the top's symmetry axis are given by MATH . This implies MATH . With REF one obtains MATH . Now it follows from MATH and MATH that MATH . Note that from REF , MATH . Hence the signs of MATH can be chosen such that MATH or, from REF , MATH . Together with REF and the equation for MATH obtained from REF , this yields MATH . Substituting these expressions into REF , one obtains MATH . This implies REF and the reduction to the spherical case. Now assume that MATH. The assertions regarding the poles of the logarithmic differentials away from MATH follow straightforwardly. Regarding the asymptotics at MATH, note that near MATH the logarithmic differentials are MATH . Introduce the parameter MATH. It is well defined (up to sign) and holomorphic around MATH. One finds that the singular part of the logarithmic differentials is MATH.
math/0007206
First, project the vector MATH stereographically from the north pole of the unit sphere into the complex plane. The result is the image of MATH under the NAME transformation REF , that is, MATH. Since MATH, the formula for MATH follows from REF : MATH . This proves the proposition.
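The projection step can be sketched concretely (the proposition's vector and transformation are redacted, so the point and Moebius coefficients below are hypothetical): projecting (x, y, z) on the unit sphere from the north pole gives the complex number (x + iy)/(1 - z), on which a Moebius transformation then acts.

```python
import numpy as np

# stereographic projection from the north pole and its inverse
def stereographic(v):
    x, y, z = v
    return (x + 1j * y) / (1 - z)

def inverse_stereographic(zeta):
    d = 1 + abs(zeta) ** 2
    return np.array([2 * zeta.real / d, 2 * zeta.imag / d,
                     (abs(zeta) ** 2 - 1) / d])

v = np.array([0.6, 0.0, 0.8])            # a sample point on the unit sphere
zeta = stereographic(v)
assert np.allclose(inverse_stereographic(zeta), v)

# a Moebius transformation z -> (a*z + b)/(c*z + d) then acts on the image
a, b, c, d = 1, 1j, 0, 1                 # hypothetical coefficients, a*d - b*c != 0
w = (a * zeta + b) / (c * zeta + d)
print(zeta, w)
```
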
math/0007206
The angle MATH is the argument of the curve MATH. It follows that MATH is the imaginary part of the logarithmic differential MATH. REF then follows from REF .
math/0007206
Loops occur if MATH changes sign as MATH moves from MATH to MATH. It follows from REF that this happens if MATH has a solution for MATH in the interval MATH. That equation is equivalent to MATH . Since the left hand side of this equation is smaller than zero for MATH, this equation cannot be fulfilled if MATH and MATH have the same sign. This proves the first part of the proposition. Now assume that MATH and MATH have different signs. Then MATH . In the interval MATH, the function MATH is continuously decreasing from MATH to MATH. Hence, REF leads to a MATH if MATH . This is equivalent to MATH . The inequality on the right is always fulfilled, because the right hand side is greater than one and the left hand side is smaller than one. The inequality on the left is equivalent to MATH. This proves the condition for loops. If MATH, then the zero of MATH occurs at MATH, hence there are cusps.
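The existence step used here is the intermediate-value argument for a continuously decreasing function: a level is attained on the interval exactly when it lies between the endpoint values, and the solution can be located by bisection. A minimal sketch, with a hypothetical decreasing function standing in for the redacted one:

```python
# Locate the unique solution of g(t) = c on [lo, hi] for a continuous,
# strictly decreasing g, assuming g(hi) <= c <= g(lo).
def solve_decreasing(g, c, lo, hi, tol=1e-12):
    assert g(hi) <= c <= g(lo)          # the existence condition from the text
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) >= c:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

g = lambda t: 1.0 / (1.0 + t) - t       # hypothetical example, decreasing on [0, 1]
t_star = solve_decreasing(g, 0.0, 0.0, 1.0)
print(t_star)
```

Here g decreases from g(0) = 1 to g(1) = -0.5, so the level 0 is attained; the root solves t^2 + t - 1 = 0.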
math/0007206
REF imply that the pullback of the holomorphic differential MATH to the MATH-plane is MATH. The MATH-integral therefore becomes MATH . Since MATH, this implies MATH . The constant has to be MATH for MATH to vanish at MATH.
math/0007206
Consider the asymptotic behavior of the logarithmic differential of MATH: MATH . It has only two simple poles on MATH, one at MATH and the other at MATH. For MATH, the asymptotic expansion is MATH . To calculate the asymptotic expansion around the branchpoint MATH, introduce the local parameter MATH. The result is MATH . Since the residues of the logarithmic differential of MATH are MATH and MATH, respectively, MATH itself is not branched on MATH, but has a zero at MATH and a simple pole at MATH. It is a multiply valued function, because the additive periods of the logarithmic differential lead to multiplicative periods of MATH. It follows that, in terms of the uniformizing variable MATH, MATH is of the form MATH . Similarly, one obtains the other REF . The constants MATH and MATH are determined by the initial conditions. Note first that MATH implies MATH and MATH implies MATH. Just substitute MATH into REF and observe that MATH is an odd function. Further, since MATH for MATH, it follows from REF that MATH and MATH . From this one obtains REF . The signs of MATH and MATH are determined so that MATH is positive and MATH has a positive imaginary part. Concerning the constants MATH, note first that MATH is a doubly periodic function of MATH. This implies MATH since the quotient of MATH-functions is already doubly periodic. Equally, considering MATH yields MATH. From the logarithmic derivative of MATH with respect to MATH one obtains MATH since MATH. The first of equations then REF follows from MATH . Since MATH as MATH, one has to show that MATH . From the first REF and MATH one obtains MATH . Since MATH, this yields the expansion MATH . The last equality uses MATH . The asymptotical expansion REF follows from REF and MATH . To see this last equation, just take the NAME expansion MATH divide by MATH and MATH, and use REF again. The equation for MATH is derived analogously.
math/0007206
Put REF in the form MATH and consider separately the integrals MATH and MATH . Substituting the uniformizing variable in the first integral, we get MATH . Using the formula MATH one obtains MATH . The last equality follows from MATH . Now the last integral can easily be solved since MATH. Note that MATH to obtain MATH . Requiring that MATH for MATH, one obtains MATH where that branch of the logarithmic term is chosen that vanishes for MATH. The other integral is easier to deal with: MATH since MATH. If the constant is chosen to make MATH vanish for MATH, this becomes MATH . The formula for MATH follows from REF .
math/0007206
The formula for MATH and MATH follows elementarily from the corresponding integrals REF and the initial conditions. Now consider the logarithmic differential of MATH from REF : MATH . As one can read off from this expression, the differential has three simple poles away from infinity, one at MATH with residue MATH and two more at MATH with residues MATH. Furthermore, the term MATH contributes one more simple pole at infinity with residue MATH. The location of the poles with their residues is summarized in REF . It also lists the poles of the logarithmic differential of MATH, which are obtained similarly. Now the poles with residue MATH lead to zeroes and poles of MATH and MATH, while the other poles give rise to branchpoints. It follows that, as functions of MATH, MATH and MATH are of the form MATH . Choose the branches of these multiply valued functions as explained in the proposition. Then the values MATH follow from the initial condition MATH. Since MATH is a doubly periodic function of MATH, it follows that MATH. It is left to show that MATH as given in the proposition. Since from REF MATH this will be achieved if we can prove that for MATH, MATH . This follows by a calculation analogous to the one in REF.
math/0007207
See, for example, CITE .
math/0007207
We refer to CITE .
math/0007207
We refer to REF .
math/0007207
The estimate REF is a consequence of REF , if we take REF into account. By using the coercivity of MATH and the NAME inequality we get MATH . By REF we obtain, using the NAME inequality and the boundedness of MATH, MATH . For MATH small enough, MATH, and REF implies that MATH, where MATH is independent of MATH. Therefore MATH, which tends to zero as MATH by the dominated convergence theorem. The estimate REF now readily follows by the continuity of MATH with respect to MATH.
math/0007207
We refer to REF .
math/0007207
By referring to the NAME estimates in REF , the proof is analogous to the proof of REF .
math/0007207
Put MATH, MATH and MATH. Then we have MATH . For every MATH we denote by MATH the union of all closed cubes MATH such that MATH and MATH. For MATH we define the sets MATH and MATH . Further, we define MATH as the union of all closed cubes MATH with MATH and MATH, and we define MATH as the union of all closed cubes MATH with MATH and MATH. If we choose MATH small enough, then, for MATH, MATH according to REF . Thus, the definition of MATH yields MATH . Let us put MATH . Repeated application of the NAME and NAME inequalities yields, according to REF , MATH . Hence MATH . Now recall that MATH for MATH. Thus, MATH as MATH for every MATH and the lemma is proved.
math/0007207
Let us define MATH and MATH . We apply REF to obtain MATH . Now MATH approaches MATH and MATH tends to zero as MATH. Thus, REF follows by REF and the proof is complete.
math/0007207
By the strict monotonicity assumption it follows that MATH . Consequently, REF is proved if we can prove that MATH as MATH. The proof of REF will be split into four steps. CASE: We start by showing that MATH . Let us write MATH where the last equality follows from REF . According to REF the map MATH is continuous from MATH into MATH and an application of REF , using this fact, yields MATH and, thus, MATH . By the uniform continuity assumption we have MATH . By arguing as in REF we conclude that MATH . Thus, by taking REF into account we have shown REF . CASE: We proceed by showing that MATH . Let MATH be arbitrary. For MATH there exists a simple function MATH which satisfies the assumptions in REF , such that MATH . We write MATH . It follows, for the first integral on the right hand side, that MATH where MATH and where MATH and MATH are defined as in the previous section. By REF , the functions MATH are bounded in MATH. By the structure conditions this implies that MATH is uniformly bounded in MATH for some MATH. From REF it further follows that the sequence MATH is bounded in MATH. Therefore there exists a number MATH such that MATH uniformly with respect to MATH. Hence, up to a subsequence, MATH as MATH. By REF we know that MATH . This enables us to use the compensated compactness result REF and conclude that MATH in the sense of distributions. Consequently MATH and MATH . By using REF this gives MATH . For the second integral on the right hand side of REF we observe that the growth condition on MATH together with the NAME inequality gives MATH . By REF the sequences MATH and MATH are bounded in MATH. Therefore, by using REF , the last inequality in REF gives MATH . By taking REF into account we obtain MATH . Again using the NAME inequality, and REF , yields MATH . Thus REF follows by the arbitrariness of MATH, and REF is established. CASE: We show that MATH . Let us fix MATH and let MATH be defined as in REF . We write MATH . 
By arguments similar to those in REF we conclude that MATH . It also follows, by the NAME inequality, that MATH . Therefore, according to REF , MATH . REF now follows by an argument analogous to that in the final lines of REF . In order to conclude the proof, let us show that MATH . First we observe that MATH or equivalently MATH . Since MATH is continuously embedded in MATH we can pass to the limit in the right hand side and, consequently, MATH . By collecting the results from REF , the claim follows and the proof is complete.
nlin/0007024
We construct MATH by taking a quotient of a neighbourhood of the identity section MATH in MATH by a distribution MATH constructed from the linear system. Let MATH be a neighbourhood of a pole MATH not containing any of the other poles, and let MATH be a coordinate on MATH such that MATH at MATH. Then MATH in MATH, where MATH is holomorphic and has a pole of order MATH at MATH. Define MATH to be the distribution on MATH tangent to the non-vanishing vector field MATH where MATH is the right-invariant vector field on MATH generated by MATH. If MATH is an open set not containing any other poles, then we define MATH in the same way on MATH, but without the factor MATH; that is MATH is tangent to MATH. The vector fields are proportional on MATH, so MATH is well defined globally as a distribution on MATH. Under the condition on MATH, we have MATH at every MATH. So it is possible to choose an open neighbourhood MATH of MATH in MATH such that the quotient MATH is a NAME complex manifold of the same dimension as MATH. We then have a double fibration and a smooth curve MATH, which we also denote by MATH. Because we are looking only at a neighbourhood of the identity section, the right action of MATH on MATH does not pass to MATH; but the corresponding NAME algebra action does. Each MATH can be identified with a left-invariant vector field on MATH, and hence with a vector field on MATH tangent to the fibres of MATH. Its projection MATH by MATH is a holomorphic vector field on MATH, and the map MATH is a NAME algebra representation, satisfying the conditions in the definition of a twistor space. The singular hypersurface has components given by the poles of MATH, and MATH meets these transversally. It remains to show that MATH. To do this, we note that the meromorphic vector field MATH on MATH is tangent to MATH, and so its projection into MATH vanishes. On the other hand, at the identity, the right- and left-invariant vector fields generated by an element of MATH coincide. 
Hence MATH. The proposition follows.
nlin/0007024
Under either of REF , the eigenvalues of MATH can be assumed to be distinct, since they are distinct at MATH and since we can, if necessary, replace MATH by a smaller neighbourhood. So we can find a holomorphic gauge transformation MATH such that MATH is the sum of a diagonal polynomial and a term that vanishes to order MATH at MATH, and so can be absorbed into MATH. If we assume first that MATH is actually diagonal, then we deduce successively that MATH are diagonal (for MATH) and that MATH . For MATH, this implies that MATH since the diagonal terms on the left-hand side vanish, and hence that MATH is also diagonal. For MATH, it gives MATH since MATH has no pair of eigenvalues differing by REF. Thus, whether or not MATH is diagonal, we have that MATH is holomorphic at MATH when MATH; and that when MATH, MATH where MATH is a diagonal polynomial of degree MATH.
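The order-by-order diagonalization step rests on a standard linear-algebra fact, sketched below with hypothetical matrices: when D = diag(d_1, ..., d_n) has distinct entries, the commutator equation [D, X] = B has a unique off-diagonal solution X_ij = B_ij / (d_i - d_j) for any off-diagonal B, which is what lets each correction term be absorbed.

```python
import numpy as np

# Solve [D, X] = B entrywise for off-diagonal X, assuming distinct d_i.
d = np.array([1.0, 2.0, 4.0])            # hypothetical distinct eigenvalues
D = np.diag(d)
B = np.array([[0.0, 1.0, -2.0],
              [3.0, 0.0, 0.5],
              [-1.0, 2.0, 0.0]])         # hypothetical off-diagonal right-hand side

X = np.zeros_like(B)
n = len(d)
for i in range(n):
    for j in range(n):
        if i != j:
            X[i, j] = B[i, j] / (d[i] - d[j])   # (D@X - X@D)[i, j] = (d_i - d_j) X_ij

check = D @ X - X @ D
print(X)
```

If some pair of eigenvalues coincided, the corresponding entry of B would obstruct the solution, which is exactly why the proof assumes the eigenvalues distinct.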
nlin/0007024
Any twistor space is full at a singularity of rank MATH since any MATH satisfying REF is then holomorphic at MATH, and can therefore be generated by a holomorphic vector field in any twistor space. In the irregular case, we construct MATH from the `minimal' twistor space in REF , by cutting out and replacing a neighbourhood of each component of MATH corresponding to an irregular singularity. Suppose, to begin with, that the system has a singularity of rank MATH at MATH and that in a neighbourhood MATH of MATH we have MATH, where MATH with MATH a diagonal polynomial of degree MATH with distinct diagonal entries throughout MATH and MATH holomorphic. By making a diagonal gauge transformation, we can make MATH off-diagonal. Pick constant diagonal matrices MATH which, together with MATH, form a basis for the diagonal subalgebra of MATH, and for each MATH let MATH be the off-diagonal matrix with entries MATH where MATH and MATH, MATH, are the diagonal entries in MATH and MATH. Thus MATH . Now introduce evolution equations for the diagonal matrix MATH and the off-diagonal matrix MATH as functions of the complex variables MATH by putting MATH where MATH, MATH. The integrability of this system is established by showing that MATH . Since both sides are off-diagonal, this follows from REF and MATH . So the evolution equations extend MATH and MATH to functions of MATH on a neighbourhood MATH of the origin in MATH. It follows from the definitions that MATH is a flat meromorphic connection on the trivial principal bundle MATH. Let MATH denote the quotient of a neighbourhood of the identity section in MATH by the horizontal foliation. The foliation extends holomorphically to MATH since it is spanned by MATH where MATH are interpreted as right-invariant vector fields on MATH. 
The quotient is a `local twistor space' for MATH in the sense that it carries a holomorphic MATH-action, which is free and transitive except on the hypersurface MATH, and contains a copy of MATH on which the induced system is MATH. Moreover, the fullness condition holds at MATH since MATH and the MATH-s span the diagonal subalgebra (in the case MATH, MATH on its own does that). Any generic MATH can be reduced to the form REF by a holomorphic gauge transformation MATH; so more generally a local twistor space can be constructed by applying the same gauge transformation to MATH. By using the MATH action, we can identify MATH with MATH, where MATH is a neighbourhood in MATH of the MATH intersection point of MATH and MATH. Then the embedded copy of MATH is mapped onto a punctured neighbourhood of the singularity in MATH. The identification allows us to replace MATH by MATH. By repeating this for the other irregular singularities, we obtain a full twistor space.
nlin/0007024
Choose a coordinate MATH on a small disc in MATH such that the pole is at MATH, and extend this to a neighbourhood of MATH in MATH so that MATH is given by MATH. Then we can identify a neighbourhood of MATH with MATH, as before. For each MATH, we have a holomorphic map MATH and hence an element MATH of MATH such that MATH. Let MATH be the corresponding map into the second twistor space. Then the required biholomorphic map is MATH.
nlin/0007024
Let MATH be a solution to MATH with constant monodromy, and with constant connection matrices MATH to the special solutions at the poles. Let MATH be nearby points (neither a pole) and let MATH be close to the identity. Then, by integrating the action of MATH on MATH, we have two points MATH, MATH in MATH near MATH. These are the same if MATH . Let MATH vary continuously with MATH, and suppose that MATH is not a pole for any small MATH. Put MATH (the right-hand side is interpreted by regarding MATH as a point of MATH and by using the local action of MATH on MATH). This is independent of the choice of branch of MATH and MATH (so long as we make the choice of branch continuously) since MATH and MATH have the same monodromy. Moreover, MATH depends only on MATH, and not on the path, by REF . So if we exclude a small neighbourhood of each pole in MATH, then we can embed the complement in MATH by MATH. By fixing MATH and moving MATH, we see from REF that MATH. It remains to show that MATH extends holomorphically to the poles. Consider one of the poles (a point of MATH, varying continuously with MATH). We can choose a coordinate MATH in a neighbourhood MATH of the pole on each MATH so that MATH is the unit disc and the pole is at MATH. Then, for small MATH, since MATH is full, there exists a holomorphic map MATH such that MATH has the same singularity data at MATH as MATH has at MATH. Since MATH is an isomonodromic deformation, MATH and MATH also have the same NAME's matrices. Let MATH be a solution to MATH with the same monodromy and connection matrices to the special solutions in the sectors at MATH as MATH. Then MATH . Further MATH is holomorphic at MATH. This is because it is single-valued, since MATH and MATH have the same holonomy, and bounded since in any sector MATH at MATH where MATH, MATH are the corresponding special solutions and MATH , MATH are the formal gauge transformations to diagonal form. So the embedding MATH extends by mapping MATH to MATH.
nlin/0007024
First we note that the Hamiltonians MATH generate the constant gauge transformations. Consider next the flow generated by MATH. We shall find the value of the Hamiltonian vector field at a point of MATH constructed from a global meromorphic REF-form MATH. To do this, we must evaluate the gradient of MATH at such a point. We have MATH . However MATH . We conclude that the value of the Hamiltonian vector field at such a point is MATH . The claim is that this is tangent to an isomonodromic deformation. To see this, let MATH be a solution to MATH, let MATH be a disc containing MATH, but no other pole, and let MATH be a second disc not containing MATH such that MATH is an open cover of MATH. For small MATH, put MATH. Then MATH is single-valued, holomorphic, and equal to the identity when MATH. By NAME's theorem, MATH for some holomorphic maps MATH, MATH, with MATH. Put MATH and MATH. Then the definitions agree on MATH and MATH is a global meromorphic REF-form with poles at MATH REF and MATH. Moreover MATH, where MATH . Since MATH and MATH are holomorphic in MATH and MATH, it follows that the deformation is isomonodromic (see REF ). At MATH, we have MATH and MATH; we also have at MATH that MATH, MATH for MATH. So the tangent to the deformation is the Hamiltonian vector field constructed above: these deformations move the poles, but leave the singularity data unchanged. Now consider the flow generated by MATH. Proceeding as before to calculate the value of the Hamiltonian vector field at a point given by a global REF-form MATH, we have MATH . So in this case, the value of the Hamiltonian vector field is MATH which is clearly isomonodromic. These deformations change the singularity data at MATH, leaving the position of the poles unchanged.
nlin/0007024
By fixing a base point (disjoint from the poles) and a frame at the base point, we can find a solution MATH for each MATH which depends smoothly on MATH. If the monodromy representation is constant, then we can find a matrix MATH for each MATH such that the monodromy matrices of MATH are constant. If we take MATH close to MATH and exclude small discs around the poles of MATH, then MATH is single-valued, and we can construct MATH in REF by putting MATH . (This is holomorphic except at the poles of MATH). Conversely, if we are given MATH, then MATH is a flat connection on the trivial bundle over MATH . Its holonomy coincides with the monodromy MATH for each MATH, and so the monodromy representation must be constant up to conjugacy.
nlin/0007024
We shall look at the proof in outline. Suppose that the deformation is isomonodromic. Let MATH be a solution to MATH, depending continuously on MATH and with constant monodromy (we have to keep in mind that MATH is multi-valued and singular at the poles). Let MATH be a disc containing a pole (at MATH) and put MATH where MATH and MATH are the special solutions at MATH and MATH in the corresponding sectors at one of the poles. Then, for MATH close to MATH, MATH is a single-valued holomorphic map MATH; it is independent of sector, because the NAME's matrices are the same at MATH and MATH. Once it is established that it is possible to differentiate the asymptotic expansions term-by-term, it is immediate that MATH is meromorphic, and of the required form. On a disc MATH that does not contain a singularity, we put MATH, and define MATH in the same way. If we choose the branch of MATH to vary continuously with MATH, MATH is independent of the choice of branch because the monodromy of MATH is independent of MATH. Given MATH, the only freedom in the construction of MATH is in the choice of the coordinate MATH, and hence in the local identification of the discs on the different NAME surfaces. A different choice for each MATH will add MATH to MATH for some local holomorphic vector field MATH. Thus REF holds on the overlap of two discs. To prove the converse, suppose that MATH is meromorphic, as stated. Choose a continuously varying sector MATH at the pole, and let MATH be the corresponding special solution. Then, by writing MATH and dropping the subscripts, we have MATH . It follows that MATH for some matrix MATH, which can depend on MATH but not on MATH. Therefore MATH . The left-hand side is asymptotic to a power series, divided by MATH, as MATH in MATH (the same series for each sector at the pole). 
In the case MATH, each off-diagonal entry on the right-hand side has an exponential factor which must blow up as MATH along some directions in MATH since the angle of the sector MATH is more than MATH. This is a contradiction unless the off-diagonal entries in MATH all vanish. Thus MATH is a MATH-independent diagonal matrix. It can be absorbed into the special solutions to give that MATH and hence that the MATH matrices are constant. This is also true, more simply, in the regular case since the formal solutions then converge.
nlin/0007024
From the definitions, MATH and, in variational notation, MATH . We must show that MATH is skew-symmetric, closed, and non-degenerate. From the first constraint, we have MATH. It follows that MATH . However MATH . The skew-symmetry follows. A similar calculation, starting from MATH shows that MATH is closed. To show that MATH is nondegenerate, we note that if MATH, then MATH is anti-diagonal, MATH is lower triangular for each MATH, and MATH is upper triangular. However, from REF , MATH . Therefore MATH is diagonal and so MATH for each MATH. It then follows from the second constraint REF in the definition of MATH that MATH. If we make a different choice for MATH at each point, then the effect is to replace MATH by MATH, where MATH is independent of MATH. This adds MATH to MATH, which vanishes by REF .
quant-ph/0007016
Let MATH denote the worst-case number of comparisons required if MATH and MATH have domain of size MATH. We show that MATH for some (small) constant MATH. Let MATH and consider the subproblem MATH. Using at most MATH comparisons, we can find a claw in MATH with probability at least MATH, provided there is one. We do that by using binary search to find the minimum MATH for which MATH, at the cost of MATH comparisons, and then recursively determining if the subproblem MATH contains a claw at the cost of at most MATH additional comparisons. There are MATH subproblems, so by applying amplitude amplification we can find a claw among any one of them with probability at least MATH, provided there is one, in the number of comparisons given in REF . We pick MATH. Since MATH, REF implies MATH for some constant MATH. Furthermore, our choice of MATH implies that the depth of the recursion defined by REF is on the order of MATH, so unfolding the recursion gives the theorem.
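The binary-search subroutine invoked above can be sketched as follows (the actual comparison oracle is redacted; a monotone predicate stands in for it): find the minimum index satisfying the predicate using a logarithmic number of evaluations.

```python
# Find the minimum index i in [lo, hi) with pred(i) True, assuming pred
# is monotone (False then True), counting predicate evaluations.
def min_index(pred, lo, hi):
    count = 0
    while lo < hi:
        mid = (lo + hi) // 2
        count += 1
        if pred(mid):
            hi = mid          # answer is at mid or to its left
        else:
            lo = mid + 1      # answer is strictly to the right of mid
    return lo, count

n = 1024
i0, comparisons = min_index(lambda i: i >= 700, 0, n)   # hypothetical oracle
print(i0, comparisons)
```

The search costs O(log n) evaluations, matching the O(log) comparison cost the recursion charges for this step before recursing on the subproblem.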
quant-ph/0007016
To apply REF , we will describe a relation MATH for each of our problems. For functions MATH and MATH, we denote by MATH the cardinality of the set MATH. For each problem MATH will be defined by MATH for some appropriate sets MATH and MATH. CASE: Here we suppose that MATH divides MATH. Let MATH be the set of functions MATH such that MATH and MATH for all MATH. Let MATH be the set of functions MATH such that MATH and MATH for all MATH. Then a simple computation gives that the relation MATH satisfies MATH, MATH, MATH, and MATH. CASE: Now we suppose that MATH is odd. Let MATH be the set of functions MATH such that MATH, and MATH, for all MATH. Then MATH satisfies that MATH and MATH. CASE: Let MATH be the set of functions MATH such that MATH. Then a similar computation gives MATH and MATH. Note that the no-collision problem and the no-range problem are not functions in general (several outputs may be valid for one input), but that they are functions on the sets MATH and MATH chosen above (there is a unique correct output for each input). Thus, REF implies a lower bound of MATH for the evaluation-complexity of each of our three problems.
quant-ph/0007021
We use the notation of REF. For any subset MATH, MATH, let us define MATH . We first claim that these vectors are linearly independent. Suppose there is a nontrivial linear combination MATH . Let MATH be a set of largest cardinality such that MATH and let MATH, MATH. We define a vector MATH . Applying MATH to the linear combination above, we have MATH . For any set MATH, MATH . CASE: If MATH, MATH for all MATH, MATH . Hence MATH. CASE: If MATH, there exists an element MATH in MATH (by choice of MATH). MATH. Hence MATH. In fact, MATH is orthogonal to MATH. Hence, in the above linear combination REF , the only vector which has a nontrivial projection along MATH is MATH. Hence, MATH, leading to a contradiction. We next claim that MATH lie in a vector space of dimension at most MATH. By definition, for any set MATH, MATH, MATH where MATH are unitary transformations (matrices) independent of the set stored. For any pair of indices MATH, MATH where, recalling the notation of REF, MATH is the string stored by the storage scheme for set MATH and MATH is either the single location in the string corresponding to index MATH or the empty set. Therefore, if we define MATH and MATH to be the parity of the bits stored in MATH at the locations of MATH, we have MATH . Let us define for every set MATH, MATH, a matrix MATH as follows: MATH . Then we have, MATH . Hence, MATH where for MATH, MATH . Hence, we see that MATH span MATH. So, MATH lie in a vector space of dimension at most MATH. Now the theorem is an easy consequence of the above two claims.
quant-ph/0007021
Essentially the same proof as that of REF goes through. Since the query scheme can make an error only if the element is present, we observe that the only vector in the linear combination REF that has a non-zero projection on the space MATH is the vector MATH. Hence MATH, and the operators MATH continue to be linearly independent. Hence, the same tradeoff equation holds in this case too.
quant-ph/0007021
Since we are looking at a one probe quantum scheme, MATH. We start by picking a family MATH of sets, MATH, MATH, MATH and MATH for all MATH. By picking the sets greedily CITE, one obtains a family MATH with MATH . Let MATH. Since MATH, MATH . We claim that MATH are linearly independent. Suppose there is a non-trivial linear combination MATH . Fix a MATH. Let MATH. Define MATH . Applying MATH to the above linear combination, we get MATH . Taking the inner product of the above combination with the vector MATH we get MATH . CASE: For any MATH, MATH. CASE: For any MATH, MATH where MATH and MATH, MATH and MATH. For any MATH, MATH where MATH and MATH and MATH and MATH. Hence MATH . We now note that for every MATH, we have a linear combination as in REF above. We can write the linear combinations in the matrix form as MATH, where MATH and MATH is a MATH matrix whose rows and columns are indexed by members of MATH. For MATH, MATH where MATH. The diagonal entries of MATH, MATH, are MATH. The non-diagonal entries satisfy MATH. Using the lower bound on MATH from REF , we get MATH . Hence MATH. This implies that MATH is non-singular. [Suppose not. Let MATH be a vector such that MATH. Let MATH be the location of the largest coordinate of MATH. We can assume without loss of generality that MATH. Now, the MATH-th coordinate of the vector MATH is at least MATH in absolute value, which is a contradiction.] So, MATH for all MATH. Hence MATH are linearly independent. Next, MATH lie in a vector space of dimension at most MATH; this follows as in the proof of REF . Using the two claims above, MATH . Using the upper bound on MATH from REF , we get MATH . For values of MATH such that MATH, that is MATH, using REF and the lower bound on MATH from REF , we get MATH . For MATH, we recall the fact that MATH is always a lower bound (the information-theoretic lower bound) for the storage space. Thus, for these values of MATH too MATH . Hence, the theorem is proved.
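The greedy construction of the set family can be sketched on small hypothetical parameters (the paper's universe size, set size, and intersection bound are redacted; m, k, t below are illustrative): scan candidate k-subsets of an m-element universe and keep one whenever it intersects every previously kept set in fewer than t elements.

```python
import itertools

# Greedy family of k-subsets of an m-element universe with pairwise
# intersections of size < t (hypothetical small parameters).
m, k, t = 12, 4, 2
family = []
for cand in itertools.combinations(range(m), k):
    s = set(cand)
    if all(len(s & other) < t for other in family):
        family.append(s)

# every pair of kept sets has small intersection by construction
small = all(len(a & b) < t for a, b in itertools.combinations(family, 2))
print(len(family))
```

Each kept set excludes only a bounded number of later candidates, which is what yields the lower bound on the family size in the counting argument.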
quant-ph/0007021
The proof of this theorem is similar to the proof of REF . Pick a family MATH of sets, MATH, MATH, MATH, MATH for all MATH, such that MATH. One can prove that MATH, MATH, are linearly independent in exactly the same fashion as REF was proved. The difference is that MATH lie in a vector space of dimension at most MATH instead of MATH. This statement can be proved just as REF was proved. Therefore, by an argument similar to that at the end of the proof of REF , we get a lower bound MATH .
quant-ph/0007021
For MATH, let MATH denote the function for query MATH, which maps bit strings of length MATH to MATH; that is, MATH maps MATH to MATH iff the query scheme, given query MATH and bit string MATH, evaluates to MATH. Consider a mapping MATH; that is, MATH takes a subset of the universe of size at most MATH to a function from bit strings of length MATH to the reals. MATH is defined as follows: MATH. We claim that these are linearly independent over MATH. Suppose there exists a non-trivial linear combination MATH . Pick a set MATH of smallest cardinality such that MATH. Let MATH be the string stored by the storage scheme. Applying MATH to the above linear combination, we get MATH . If MATH, there exists an element MATH such that MATH. Then, MATH, and hence, MATH. If MATH, then MATH. Hence, MATH, which is a contradiction. Hence the claim is proved. MATH lie in a vector space of dimension at most MATH. Since the query scheme is deterministic and makes at most MATH (classical) bit probes, given a query MATH, MATH, the function MATH is modelled by a decision tree of depth at most MATH. Hence MATH can be represented over MATH as a sum of products of at most MATH linear functions, where the linear functions are either MATH (representing the value stored at location MATH in the bit string) or MATH (representing the negation of the value stored at location MATH). Note that for any MATH, at most one of these products evaluates to MATH. Such a function can be represented as a multilinear polynomial in MATH of degree at most MATH. A product of at most MATH such functions can be represented as a multilinear polynomial of degree at most MATH. Hence, MATH lie in the span of at most MATH functions from MATH to MATH. From this, the claim follows. From the above two claims, the theorem follows.
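The step that converts a shallow decision tree into a low-degree multilinear polynomial can be made concrete. The tiny tree below is a made-up example (the depth and variable count in the proof are masked): walking the tree, each node contributes a factor of the queried bit or its negation, and summing over accepting paths gives a polynomial whose degree is at most the depth.

```python
def tree_to_poly(tree):
    """Convert a decision tree into a multilinear polynomial, represented
    as a dict mapping a frozenset of variable indices (a monomial) to its
    real coefficient.  A tree is either a constant 0/1 leaf or a triple
    (i, t0, t1): query bit i, recurse into t0 if it is 0, t1 if it is 1."""
    if tree in (0, 1):
        return {frozenset(): float(tree)} if tree else {}
    i, t0, t1 = tree
    p0, p1 = tree_to_poly(t0), tree_to_poly(t1)
    poly = {}
    # (1 - x_i) * p0  +  x_i * p1, using x_i^2 = x_i on {0,1} inputs.
    for mono, c in p0.items():
        poly[mono] = poly.get(mono, 0.0) + c          # p0
        grown = mono | {i}
        poly[grown] = poly.get(grown, 0.0) - c        # - x_i * p0
    for mono, c in p1.items():
        grown = mono | {i}
        poly[grown] = poly.get(grown, 0.0) + c        # + x_i * p1
    return {mono: c for mono, c in poly.items() if c != 0.0}

def eval_poly(poly, bits):
    return sum(c for mono, c in poly.items() if all(bits[i] for i in mono))

def eval_tree(tree, bits):
    while tree not in (0, 1):
        i, t0, t1 = tree
        tree = t1 if bits[i] else t0
    return tree

# Depth-2 tree on 3 bits: if x0 = 0 output x2, else output x1.
tree = (0, (2, 0, 1), (1, 0, 1))
poly = tree_to_poly(tree)
assert max(len(mono) for mono in poly) <= 2          # degree <= depth
for n in range(8):
    bits = [(n >> k) & 1 for k in range(3)]
    assert eval_poly(poly, bits) == eval_tree(tree, bits)
```

Here the tree computes (1 - x0)·x2 + x0·x1, a degree-2 multilinear polynomial matching the depth bound used in the dimension count.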
quant-ph/0007021
A proof very similar to that of REF goes through. We just observe that now the query scheme is a logical disjunction over a family of deterministic query schemes. If the query element is present in the set stored, there is a decision tree in this family that outputs REF. If the query element is not present in the set stored, then all the decision trees output REF. Let us denote by MATH the family of decision trees corresponding to query element MATH, MATH. For any decision tree MATH in MATH, let MATH be the function it evaluates. Let us now define MATH. Then MATH . With this choice of MATH, the rest of the proof is the same as in the deterministic case.
quant-ph/0007021
Suppose there is a classical scheme which stores subsets of size MATH from a universe of size MATH using MATH bits of storage, and answers membership queries using one bit probe with two-sided error at most MATH. Define MATH. Since MATH, MATH. Therefore, MATH . We repeat the query scheme MATH times and accept only if more than MATH trials accept. Then the probability of making an error on a positive instance (that is, the query element is present in the set stored) is bounded by MATH . The probability of making an error on a negative instance (that is, the query element is not present in the set stored) is bounded by MATH . From the lower bound on MATH in REF , we get MATH . Hence, the probability that a random sequence of coin tosses gives the wrong answer on some query MATH and a particular set MATH stored is at most MATH . Call a sequence of coin tosses bad for a set MATH if, when MATH is stored, there is some query MATH for which the query scheme with these coin tosses gives the wrong answer. Thus, at most half of the coin toss sequences are bad for a fixed set MATH. By an averaging argument, there exists a sequence of coin tosses which is bad for at most half of the sets MATH. By setting the coin tosses to that sequence, we now get a deterministic scheme which answers membership queries correctly for at least half the sets MATH, and uses MATH bit probes. From the proof of REF , we have that MATH . Using the upper bound on MATH in REF and the fact that MATH, we get MATH . Arguing as in the last part of the proof of REF , and recalling that since MATH, MATH is always a lower bound (the information-theoretic lower bound) for the storage space, we get MATH .
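The amplification step above repeats the query scheme and takes a threshold vote; the repetition count and threshold are masked, so the numbers below are purely illustrative. With independent trials, the amplified error is a binomial tail, which can be computed exactly:

```python
from math import comb

def majority_error(eps, k):
    """Upper bound on the error of k-fold repetition with a majority vote,
    when each trial errs independently with probability eps: the chance
    that more than half of the k trials err."""
    return sum(comb(k, j) * eps**j * (1 - eps)**(k - j)
               for j in range(k // 2 + 1, k + 1))

eps = 0.2          # assumed single-trial two-sided error, not the proof's MATH
amplified = majority_error(eps, 15)
assert amplified < 0.01 < eps
```

Fifteen repetitions already push a 0.2 error below 0.01; the proof chooses the repetition count so that the amplified error is small enough for the averaging argument over coin-toss sequences.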
quant-ph/0007021
The proof of this theorem is similar to the proof of REF above. We repeat the query scheme MATH times and accept only if more than MATH trials accept. We ``derandomise" the new query scheme in a manner similar to what was done in the proof of REF . We thus get a deterministic query scheme making MATH bit probes and answering membership queries correctly for at least half the sets MATH. The rest of the proof now follows in the same fashion as the proof of REF .
quant-ph/0007036
Since MATH for all MATH, we have MATH . Let MATH be the MATH-dimensional vector which has entries indexed by strings MATH and which has MATH as its MATH-th entry. Note that the MATH norm MATH is MATH for all MATH . For any MATH let MATH be defined as MATH . The quantity MATH can be viewed as the total query magnitude with respect to MATH at time MATH of those strings which distinguish MATH from MATH . Note that MATH is a MATH-dimensional vector whose MATH-th element is precisely MATH . Since MATH and MATH, by the basic property of matrix norms we have MATH; that is, MATH . Hence MATH . If we let MATH, then by NAME 's inequality we have MATH . Finally, if MATH, then MATH . REF then implies that MATH .
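The inequality invoked by name above is, in this kind of hybrid argument, an instance of Cauchy-Schwarz: a sum of square roots of nonnegative query magnitudes over an index set is bounded by the square root of the set's size times the total magnitude. A quick numeric check with arbitrary vectors:

```python
import math
import random

random.seed(0)
# For nonnegative query magnitudes q_1, ..., q_n and any index set S,
# Cauchy-Schwarz gives  sum_{i in S} sqrt(q_i) <= sqrt(|S| * sum_i q_i).
for _ in range(100):
    q = [random.random() for _ in range(32)]
    S = random.sample(range(32), 10)
    lhs = sum(math.sqrt(q[i]) for i in S)
    rhs = math.sqrt(len(S) * sum(q))
    assert lhs <= rhs + 1e-12
```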
quant-ph/0007036
Suppose that MATH is a quantum exact learning algorithm for MATH which makes at most MATH quantum membership queries. If we take MATH, then REF implies that there is a set MATH of cardinality at most MATH such that for all MATH we have MATH . Let MATH be any two concepts in MATH . By REF , the probability that MATH outputs a circuit equivalent to MATH can differ by at most MATH if MATH's oracle gates are MATH as opposed to MATH, and likewise for MATH versus MATH . It follows that the probability that MATH outputs a circuit equivalent to MATH can differ by at most MATH if MATH's oracle gates are MATH as opposed to MATH; but this contradicts the assumption that MATH is a quantum exact learning algorithm for MATH .
quant-ph/0007036
Let MATH be a quantum network which learns MATH and has query complexity MATH . For all MATH we have the following: if MATH's oracle gates are MATH gates, then with probability at least MATH the output of MATH is a representation of a Boolean circuit MATH which computes MATH . Let MATH be all of the concepts in MATH and let MATH be the corresponding vectors in MATH . For all MATH let MATH be the collection of those basis states such that, if the final observation performed by MATH yields a state from MATH, then the output of MATH is a representation of a Boolean circuit which computes MATH . Clearly, for MATH, the sets MATH and MATH are disjoint. By REF , for each MATH there is a real-valued multilinear polynomial MATH of degree at most MATH such that for all MATH the value of MATH is precisely the probability that the final observation on MATH yields a representation of a circuit which computes MATH, provided that the oracle gates are MATH gates. The polynomials MATH thus have the following properties: CASE: MATH for all MATH; CASE: For any MATH we have MATH (since the total probability across all possible observations is REF). Let MATH . For any MATH let MATH be the column vector which has a coordinate for each monic multilinear monomial over MATH of degree at most MATH . Thus, for example, if MATH and MATH we have MATH and MATH . If MATH is a column vector in MATH, then MATH corresponds to the degree-MATH polynomial whose coefficients are given by the entries of MATH . For MATH, let MATH be the column vector which corresponds to the coefficients of the polynomial MATH . Let MATH be the MATH matrix whose MATH-th row is MATH; note that multiplication by MATH defines a linear transformation from MATH to MATH. Since MATH is precisely MATH, the product MATH is a column vector in MATH which has MATH as its MATH-th coordinate. Now let MATH be the MATH matrix whose MATH-th column is the vector MATH .
A square matrix MATH is said to be diagonally dominant if MATH for all MATH . REF above imply that the transpose of MATH is diagonally dominant. It is well known that any diagonally dominant matrix must be of full rank (a proof is given in REF). Since MATH is full rank and each column of MATH is in the image of MATH it follows that the image under MATH of MATH is all of MATH and hence MATH . Finally, since MATH we have MATH which proves the theorem.
quant-ph/0007036
Suppose that MATH is not exact learnable from classical membership queries, that is, for any polynomial MATH there are infinitely many values of MATH such that any learning algorithm for MATH requires more than MATH queries in the worst case. By REF , this means that for any polynomial MATH there are infinitely many values of MATH such that MATH . At least one of the following conditions must hold: REF for any polynomial MATH there are infinitely many values of MATH such that MATH; or REF for any polynomial MATH there are infinitely many values of MATH such that MATH . REF show that in either case MATH cannot be exact learnable from a polynomial number of quantum membership queries.
quant-ph/0007036
If MATH is an eigenvalue of MATH with corresponding eigenvector MATH, then, since MATH, we have MATH . Without loss of generality we may assume that MATH, so that MATH for some MATH and MATH for MATH . Thus MATH, and hence MATH is in the disk MATH .
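The argument above is the Gershgorin disk theorem: every eigenvalue lies in some disk centered at a diagonal entry, with radius the sum of the absolute off-diagonal entries in that row. A quick 2x2 numerical check, with an arbitrarily chosen matrix and eigenvalues from the quadratic formula:

```python
import cmath

# Arbitrary 2x2 matrix [[a, b], [c, d]]; eigenvalues via the quadratic formula.
a, b = 5.0, 1.0
c, d = 2.0, -3.0
disc = cmath.sqrt((a - d) ** 2 + 4 * b * c)
eigenvalues = [(a + d + disc) / 2, (a + d - disc) / 2]

# Each eigenvalue must lie in the Gershgorin disk of some row:
# |lam - a| <= |b|  or  |lam - d| <= |c|.
for lam in eigenvalues:
    assert abs(lam - a) <= abs(b) + 1e-12 or abs(lam - d) <= abs(c) + 1e-12
```

For this matrix the eigenvalues are roughly 5.243 and -3.243, each within distance 0.25 of a diagonal entry, well inside its disk.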
quant-ph/0007045
We begin by deriving a more usable expression for MATH. MATH where we have used the fact that MATH is periodic of period MATH. Since MATH is one-to-one when restricted to its period MATH, all the kets MATH are mutually orthogonal. Hence, MATH . If MATH, then since MATH is a MATH-th root of unity, we have MATH . On the other hand, if MATH, then we can sum the geometric series to obtain MATH where we have used the fact that MATH is the primitive MATH-th root of unity given by MATH . The remaining part of the proposition is a consequence of the trigonometric identity MATH .
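The case split above is the standard orthogonality relation for roots of unity: the geometric series over the MATH-th roots sums to the period when the exponent is a multiple of the period, and to zero otherwise. A numeric check with an arbitrary assumed period (r = 8 below, not the proof's MATH):

```python
import cmath

r = 8  # illustrative period
for k in range(2 * r):
    s = sum(cmath.exp(2j * cmath.pi * j * k / r) for j in range(r))
    if k % r == 0:
        assert abs(s - r) < 1e-9   # every term equals 1, so the sum is r
    else:
        assert abs(s) < 1e-9       # geometric series with ratio != 1 sums to 0
```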
quant-ph/0007045
We begin by noting that MATH, where we have made use of the inequalities MATH . It immediately follows that MATH . As a result, we can legitimately use the inequality MATH to simplify the expression for MATH. Thus, MATH . The remaining case, MATH, is left to the reader.
quant-ph/0007045
Since MATH we know that MATH which can be rewritten as MATH . But, since MATH, it follows that MATH . Finally, since MATH and hence MATH, the above theorem can be applied. Thus, MATH is a convergent of the continued fraction expansion of MATH.
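The theorem applied here is the classical fact that any fraction p/q with |x - p/q| < 1/(2q^2) occurs among the convergents of the continued fraction expansion of x. A small check with assumed values (true fraction 7/15, nine bits of precision; these numbers are illustrative, not the masked ones above):

```python
from fractions import Fraction

def convergents(x):
    """Convergents of the continued fraction expansion of a Fraction x,
    via the standard recurrence h_n = a_n h_{n-1} + h_{n-2}."""
    result, a = [], x
    h0, h1, k0, k1 = 0, 1, 1, 0
    while True:
        q = a.numerator // a.denominator
        h0, h1 = h1, q * h1 + h0
        k0, k1 = k1, q * k1 + k0
        result.append(Fraction(h1, k1))
        frac = a - q
        if frac == 0:
            return result
        a = 1 / frac

# Assumed illustration: true fraction s/r = 7/15, observed value y / 2^m.
s, r, m = 7, 15, 9
y = round(s * 2**m / r)                       # y = 239
x = Fraction(y, 2**m)
assert abs(x - Fraction(s, r)) < Fraction(1, 2 * r * r)
assert Fraction(s, r) in convergents(x)
```

Here 239/512 approximates 7/15 to within 1/7680 < 1/(2·15²), and indeed 7/15 shows up as a convergent (the expansion is [0; 2, 7, 34]).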
quant-ph/0007045
From the above theorem, we know that MATH where MATH is a monotone decreasing sequence of positive reals converging to zero. Thus, MATH .
quant-ph/0007045
CASE: The probability of error MATH of finding the hidden state MATH is given by MATH where MATH where MATH is the function that rounds to the nearest integer. Hence, MATH . Thus, MATH CASE: The computational cost of the NAME transform MATH is MATH single qubit operations. The transformations MATH and MATH each carry a computational cost of MATH. MATH REF is the computationally dominant step. In MATH REF there are MATH iterations. In each iteration, the NAME transform is applied twice. The transformations MATH and MATH are each applied once. Hence, each iteration comes with a computational cost of MATH, and so the total cost of MATH REF is MATH.
quant-ph/0007060
Since the NAME spaces MATH and MATH have the same (infinite) dimension, it follows from REF of CITE that MATH has a dense set of cyclic vectors in MATH.
quant-ph/0007060
Let MATH denote the closed linear span of MATH. Since the infinitesimal generator MATH of the group MATH is positive, NAME 's ``little NAME theorem" REF entails that MATH. However, MATH; that is, MATH is cyclic under operators NAME in MATH over all times (NAME REF). Therefore, MATH is cyclic for MATH.
quant-ph/0007122
We first note that the NAME MATH can be obtained as MATH . Therefore MATH. From this, we have MATH for any given MATH. Therefore MATH contains the maximal torus MATH.
quant-ph/0007122
For each rotation matrix MATH we easily verify that MATH .
quant-ph/0007122
This follows immediately from REF - REF .
quant-ph/0007122
We first quote the following fact CITE: For any MATH, there exists a collection of unitary matrices MATH, MATH, and a MATH such that MATH, where MATH is a rotation involving MATH and MATH and satisfying REF . For the benefit of the reader and for the sake of self-containment, we include a direct proof of REF in the Appendix, condensed from REF . Now we can break up MATH into MATH, where MATH and MATH for MATH. It is easy to see that MATH acts trivially except on MATH and MATH, and the other MATH's act nontrivially only on MATH. In addition, the MATH's commute with each other, and each MATH commutes with MATH, MATH as well. Thus, MATH . For MATH, define MATH . Therefore we arrive at MATH, where each MATH is a unitary matrix which acts nontrivially only on the states MATH and MATH, satisfying REF .
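The quoted fact is the classical decomposition of a unitary into two-level matrices by Givens-style elimination. Below is a hedged sketch on a 3x3 example built from known two-level factors; the matrices and the elimination order are illustrative, not the paper's specific construction.

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def dagger(A):
    n = len(A)
    return [[complex(A[j][i]).conjugate() for j in range(n)] for i in range(n)]

def two_level(n, i, j, a, b):
    """Identity except on coordinates i, j, where it acts as the
    unitary [[a, b], [-conj(b), conj(a)]] (requires |a|^2 + |b|^2 = 1)."""
    G = [[1.0 + 0j if r == c else 0j for c in range(n)] for r in range(n)]
    G[i][i], G[i][j] = a, b
    G[j][i], G[j][j] = -b.conjugate(), a.conjugate()
    return G

def decompose(M):
    """Factor a unitary M into two-level unitaries times a diagonal
    phase matrix, zeroing subdiagonal entries column by column."""
    n = len(M)
    M = [[complex(x) for x in row] for row in M]
    factors = []
    for c in range(n):
        for r in range(c + 1, n):
            x, y = M[c][c], M[r][c]
            if abs(y) < 1e-12:
                continue
            s = (abs(x) ** 2 + abs(y) ** 2) ** 0.5
            G = two_level(n, c, r, x.conjugate() / s, y.conjugate() / s)
            M = matmul(G, M)           # now M[r][c] == 0
            factors.append(dagger(G))  # G is unitary, so G^{-1} = G^dagger
    factors.append(M)  # unitary and upper triangular => diagonal phases
    return factors

# Build a 3x3 unitary from two known two-level factors, then re-decompose it.
U = matmul(two_level(3, 0, 1, 0.6, 0.8), two_level(3, 1, 2, 0.8j, 0.6))
prod = [[1.0 + 0j if i == j else 0j for j in range(3)] for i in range(3)]
for F in decompose(U):
    prod = matmul(prod, F)
assert max(abs(prod[i][j] - U[i][j]) for i in range(3) for j in range(3)) < 1e-9
```

Each eliminating rotation touches only two coordinates, so the factors returned are exactly the two-level matrices the quoted fact asserts to exist.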
quant-ph/0007122
This is a basic fact which can be found in most standard algebra or group theory books.
quant-ph/0007123
Straightforward verification.
quant-ph/0007123
The action of the evolution dynamics MATH on the invariant subspace MATH is, by REF , MATH, the identity operator on MATH. Since the component of MATH in MATH is the zero vector, the action of MATH (the MATH identity matrix) on it remains zero for all MATH.
quant-ph/0007123
Solve REF for MATH: MATH . Substituting REF into REF , we obtain REF in bra-ket form.
quant-ph/0007123
It is obvious that REF holds. To see REF , we have MATH .
quant-ph/0007123
The representation REF follows easily from REF . To calculate MATH, write MATH analogous to [REF ], and apply REF to obtain REF ; or apply REF and the properties of the MATH generators commonly used in quantum mechanics [REF , p. REF].
quant-ph/0007123
Use REF .
quant-ph/0007123
As in [REF ], we have MATH and, therefore MATH . Let MATH be the orthogonal projection of MATH onto MATH. Then MATH from which, we apply the NAME inequality and obtain MATH . Combining REF , we have established the left half of REF . Next, mimicking [REF - REF ], we have MATH and from MATH, by an application of the NAME inequality again, we obtain MATH . Integrating REF from REF to MATH, noting that MATH, we have verified the right half of REF .
quant-ph/0007123
We have, from REF , MATH . The conclusion follows.
quant-ph/0007123
Substituting REF into REF and simplifying, we obtain MATH . The proof follows.
quant-ph/0007123
Straightforward verification.
quant-ph/0007123
Use the matrix representation REF .
quant-ph/0007123
This is the major theorem in REF ; see REF and particularly REF therein. Note also the work by NAME who considered some measurement effects in REF .
quant-ph/0007123
By the very definition of MATH in REF , we know that MATH . Therefore REF follows.
quant-ph/0007123
By NAME 's formula, MATH, we have MATH .
quant-ph/0007123
From REF , we have MATH . Now, applying REF , we obtain MATH . Substituting REF into REF , we obtain MATH .
quant-ph/0007124
For any MATH, denote MATH . CASE: We have, for MATH, MATH CASE: MATH .
quant-ph/0007124
It follows immediately from REF .
quant-ph/0007124
Using REF , we have MATH . Again, from the definition of MATH in REF , we see that REF gives MATH . Therefore REF follows.
cond-mat/0008422
Define MATH by MATH . Since these satisfy the properties MATH, the product MATH can be written as MATH, where MATH. The square norms of MATH are calculated as MATH .
cond-mat/0008422
For a reduced expression MATH, we have MATH . From the NAME REF , we calculate the scalar product as follows: MATH .
cond-mat/0008422
It is sufficient to require the conditions MATH and MATH in order to determine the coefficients MATH such that MATH.
cond-mat/0008422
The orthogonality for MATH is straightforward from that of the nonsymmetric NAME polynomials REF . We have MATH where the last equality follows from REF .
cond-mat/0008422
There exists a MATH-homomorphism MATH defined by MATH . Since MATH does not depend on MATH, as in REF , we have MATH . Thus we obtain the following relation: MATH . We show that the sum on the left-hand side of the above equation can be replaced by the sum on MATH. Consider the isotropy group MATH for the dominant weight MATH (MATH for MATH). Since an element MATH can be written as a product of simple reflections fixing MATH, MATH (see CITE), there exists at least one simple root MATH associated with the reflection MATH in the set MATH. Hence, for MATH, we have MATH . Define MATH. For MATH, there is a unique MATH and a unique MATH such that MATH. We obtain the above lemma since the sum on MATH on the left-hand side of REF can be replaced by that on MATH, which is equivalent to that on MATH.
cs/0008001
If. Suppose there is such a cycle. Letting MATH be the vertex at one end of the REF-edge, we can trace around the cycle, giving a sequence of vertices MATH, where MATH is the vertex at the other end of the REF-edge. The assignment has MATH for MATH, and MATH, and hence it violates REF . Only If. Suppose the assignment violates a transitivity constraint given by REF . Then, we construct a cycle MATH of vertices such that only edge MATH is a REF-edge.
cs/0008001
The ``if" portion of this proof is covered by REF . The ``only if" portion is proved by induction on the number of variables in the antecedent of the transitivity constraint (REF .) That is, assume a transitivity constraint containing MATH variables in the antecedent is violated and that all other violated constraints have at least MATH variables in their antecedents. If there are no values MATH and MATH such that MATH and MATH, then the cycle MATH is simple. If such values MATH and MATH exist, then we can form a transitivity constraint: MATH . This transitivity constraint contains fewer than MATH variables in the antecedent, but it is also violated. This contradicts our assumption that there is no violated transitivity constraint with fewer than MATH variables in the antecedent.
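The correspondence between violated transitivity constraints and cycles with exactly one false edge can be brute-force checked on a small complete graph. The setup below uses K4 with symmetric Boolean edge variables, an illustrative encoding rather than the paper's; every one of the 64 assignments is tested.

```python
from itertools import combinations, permutations, product

V = range(4)
edges = list(combinations(V, 2))  # symmetric edge variables on K4

def violates_transitivity(x):
    # Some triple a, b, c with x[a,b] = x[b,c] = true but x[a,c] = false.
    return any(x[frozenset((a, b))] and x[frozenset((b, c))]
               and not x[frozenset((a, c))]
               for a, b, c in permutations(V, 3))

def has_single_false_cycle(x):
    # Some simple cycle whose edges are all true except exactly one.
    for k in (3, 4):
        for cyc in permutations(V, k):
            vals = [x[frozenset((cyc[i], cyc[(i + 1) % k]))] for i in range(k)]
            if vals.count(False) == 1:
                return True
    return False

for bits in product([False, True], repeat=len(edges)):
    x = {frozenset(e): v for e, v in zip(edges, bits)}
    assert violates_transitivity(x) == has_single_false_cycle(x)
```

On K4 the check confirms both directions: a violated constraint yields a triangle with one false edge, and a longer single-false-edge cycle always collapses, via a chord, to a shorter violated constraint, mirroring the induction above.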
cs/0008001
The ``if" portion of this proof is covered by REF . The ``only if" portion is proved by induction on the number of variables in the antecedent of the transitivity constraint (REF .) Assume a transitivity constraint with MATH variables is violated, and that no transitivity constraint with fewer variables in the antecedent is violated. If there are no values of MATH and MATH such that there is a variable MATH with MATH and either MATH or MATH, then the corresponding cycle is chord-free. If such values of MATH and MATH exist, then consider the two cases illustrated in REF , where REF-edges are shown as dashed lines, REF-edges are shown as solid lines, and the wavy lines represent sequences of REF . CASE: NAME MATH is a REF-edge (shown on the left). Then the transitivity constraint: MATH is violated and has fewer than MATH variables in its antecedent. CASE: NAME MATH is a REF-edge (shown on the right). Then the transitivity constraint: MATH is violated and has fewer than MATH variables in its antecedent. Both cases contradict our assumption that there is no violated transitivity constraint with fewer than MATH variables in the antecedent.
cs/0008001
We consider the case where MATH. The general statement of the proposition then holds by induction on MATH. Define assignment MATH to be: MATH . We consider two cases: CASE: If MATH, then any cycle in MATH through MATH must contain a REF-edge other than MATH. Hence adding this edge does not introduce any transitivity violations. CASE: If MATH, then there must be some path MATH of REF-edges between nodes MATH and MATH in MATH. In order for the introduction of REF-edge MATH to create a transitivity violation, there must also be some path MATH between nodes MATH and MATH in MATH containing exactly one REF-edge. But then we could concatenate paths MATH and MATH to form a cycle in MATH containing exactly one REF-edge, implying that MATH. We conclude therefore that adding REF-edge MATH does not introduce any transitivity violations.
cs/0008001
We note that any cycle in MATH must be present in MATH and have the same edge labeling. Thus, if MATH has no cycle with a single REF-edge, then neither does MATH.
cs/0008001
Suppose that MATH is satisfiable, that is, there is some assignment MATH such that MATH. Then by REF we can find an assignment MATH such that MATH. Furthermore, since the construction of MATH by REF preserves the values assigned to all variables in MATH, and these are the only relational variables occurring in MATH, we can conclude that MATH. Therefore MATH is satisfiable. Suppose on the other hand that MATH is satisfiable, that is, there is some assignment MATH such that MATH. Then by REF we also have MATH, and hence MATH is satisfiable.
cs/0008001
Our proof is an adaptation of a proof by CITE that MATH has a bisection bandwidth of at least MATH. That is, one would have to remove at least MATH edges to split the graph into two parts of equal size. Observe that MATH has MATH vertices and MATH edges. These edges are split so that MATH are in MATH and MATH are in MATH. Let MATH denote the planar dual of MATH. That is, it contains a vertex MATH for each face MATH of MATH, and edges between pairs of vertices such that the corresponding faces in MATH have a common edge. In fact, one can readily see that this graph is isomorphic to MATH. Partition the vertices of MATH into sets MATH, MATH, and MATH according to the types of their corresponding faces. Let MATH, MATH, and MATH denote the number of elements in each of these sets. Each face of MATH has four bordering edges, and each edge is the border of at most two faces. Thus, as an upper bound on MATH, we must have MATH, giving MATH, and similarly for MATH. In addition, since a face of type A cannot be adjacent in MATH to one of type B, no vertex in MATH can be adjacent in MATH to one in MATH. Consider the complete, directed, bipartite graph having as edges the set MATH, that is, a total of MATH edges. Given the bounds: MATH, MATH, and MATH, the minimum value of MATH is achieved when either MATH and MATH, or vice-versa, giving a lower bound: MATH . We can embed this bipartite graph in MATH by forming a path from vertex MATH to vertex MATH, where either MATH and MATH, or vice-versa. By convention, we will use the path that first follows vertical edges to MATH and then follows horizontal edges to MATH. We must have at least one vertex in MATH along each such path, and therefore removing the vertices in MATH would cut all MATH paths. For each vertex MATH, we can bound the total number of paths passing through it by separately considering paths that enter from the bottom, the top, the left, and the right. 
For those entering from the bottom, there are at most MATH source vertices and MATH destination vertices, giving at most MATH paths. This quantity is maximized for MATH, giving an upper bound of MATH. A similar argument shows that there are at most MATH paths entering from the top of any vertex. For the paths entering from the left, there are at most MATH source vertices and MATH destinations, giving at most MATH paths. This quantity is maximized when MATH, giving an upper bound of MATH. This bound also holds for those paths entering from the right. Thus, removing a single vertex would cut at most MATH paths. Combining the lower bound on the number of paths MATH, the upper bound on the number of paths cut by removing a single vertex, and the fact that we are removing MATH vertices, we have: MATH . We can rewrite MATH as MATH. Observing that MATH for all values of MATH, we have: MATH .
cs/0008001
Classify the parity of face MATH as ``even" when MATH is even, and as ``odd" otherwise. Observe that no two faces of the same parity can have a common edge. Divide the set of split faces into two subsets: those with even parity and those with odd. Both of these subsets are edge independent, and one of them must have at least REF/REF of the elements of the set of all split faces.
cs/0008001
Suppose there is an edge-independent set of MATH split faces. For each split face, choose one edge in MATH and one edge in MATH bordering that face. For each value MATH, define assignment MATH (respectively, MATH), to the variables representing edges in MATH (respectively, MATH) as follows. For an edge MATH that is not part of any of the MATH split faces, define MATH (respectively, MATH). For an edge MATH that is part of a split face, but it was not one of the ones chosen specially, let MATH (respectively, MATH). For an edge MATH that is the chosen variable in face MATH, let MATH (respectively, MATH). This will give us an assignment MATH to all of the variables that evaluates to REF. That is, for each independent, split face MATH, we will have two REF-edges when MATH and four REF-edges when MATH. All other cycles in the graph will have at least two REF-edges. On the other hand, for any MATH such that MATH the assignment MATH will cause an evaluation to REF, because for any face MATH where MATH, all but one edge will be assigned value REF. Thus, the set of assignments MATH forms an OBDD fooling set, as defined in CITE, implying that the OBDD must have at least MATH vertices.
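The fooling-set argument used here is the standard communication-style lower bound: pairwise ``crossing" assignments to the two variable blocks force distinct OBDD nodes at the cut, so the OBDD size is at least the number of fooling assignments. A minimal illustration with the equality function on an assumed block size of three bits (not the paper's circuit):

```python
from itertools import product

m = 3
# f(a, b) = 1 iff a == b; read the a-variables first, then the b-variables.
# Each setting a of the first block induces a subfunction b -> f(a, b).
subfunctions = {a: tuple(int(a == b) for b in product([0, 1], repeat=m))
                for a in product([0, 1], repeat=m)}

# All 2^m induced subfunctions are distinct, so any OBDD with this variable
# order needs at least 2^m nodes at the cut: the fooling-set lower bound.
assert len(set(subfunctions.values())) == 2 ** m
```

The proof above plays the same game: the assignments MATH to the split faces induce pairwise distinct behaviour across the cut, giving the exponential bound on the OBDD size.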
cs/0008001
Partition the set of split faces into four sets: EE, EO, OE, and OO, where face MATH is assigned to a set according to the values of MATH and MATH: CASE: Both MATH and MATH are even. CASE: MATH is even and MATH is odd. CASE: MATH is odd and MATH is even. CASE: Both MATH and MATH are odd. Each of these sets is vertex independent. At least one of the sets must contain at least MATH of the elements. Since there are at least MATH split faces, one of the sets must contain at least MATH vertex-independent split faces.
cs/0008001
For any ordering of the variables in MATH, partition them into two sets MATH and MATH such that those in MATH come before those in MATH, and such that the variables in MATH are split equally between MATH and MATH. Suppose there is a vertex-independent set of MATH split faces. For each value MATH, we define assignments MATH to the variables in MATH and MATH to the variables in MATH. These assignments are defined as they are in the proof of REF with the addition that each variable MATH in MATH is assigned value REF. Consider the set of assignments MATH for all values MATH. The only cycles in MATH that can have fewer than two REF-edges will be those corresponding to the perimeters of split faces. As in the proof of REF , the set MATH forms an OBDD fooling set, as defined in CITE, implying that the OBDD must have at least MATH vertices.
cs/0008010
REF illustrates the recursive construction of such a polygon, for all MATH of the form MATH. The shortest flipturn sequence for the polygon includes only diagonal flipturns and therefore has length MATH. Another sequence, which we believe to be the longest, requires twelve flipturns to remove every MATH vertices. REF illustrates this long sequence of standard flipturns; the corresponding extended flipturn sequence is essentially equivalent.
cs/0008010
When MATH is a multiple of MATH, the polygon consists of a horizontally symmetric rectangular `comb' with MATH `teeth'; if MATH is not a multiple of MATH, we add a small rectangular notch in a bottom corner of the polygon. See REF . (We consider a rectangle to be a comb with one tooth.) Both the teeth and the gaps between them decrease in height as they approach the middle of the polygon. Since the polygon is symmetric about its vertical bisecting line, standard and extended flipturns have exactly the same effect. The only way to eliminate the comb is through a sequence of orthogonal flipturns across the top edge of the polygon's bounding box; each such flipturn eliminates exactly one tooth. It easily follows that every flipturn sequence for this polygon has length MATH.
cs/0008010
CASE: Suppose some corner of MATH is not a vertex of MATH. Some edge of MATH lies on a line separating the missing corner from the interior of MATH. This edge contains a diagonal lid. CASE: Without loss of generality, suppose MATH does not contain the top left and top right vertices of MATH. REF implies that MATH has at least two diagonal pockets. Let MATH be the result of flipturning one of these pockets. Since the width of the flipturned pocket is less than the width of MATH, and thus less than the width of MATH, at least one of the upper corners of MATH is not a vertex of MATH. (As REF shows, flipturning one pocket can capture the opposite corner.) Thus, by REF , MATH still has at least one diagonal pocket.
cs/0008010
We achieve the stated upper bound by performing an orthogonal extended flipturn only when no diagonal pockets are available. By REF , we are forced to perform an orthogonal flipturn on a polygon MATH if and only if all four corners of MATH are also vertices of MATH. Let MATH be a nonconvex orthogonal MATH-gon with bounding box MATH, and suppose MATH has no diagonal pockets. Without loss of generality, suppose MATH has an extended orthogonal pocket whose lid MATH is the top edge of MATH. This pocket obviously lies strictly between the vertical lines through MATH and MATH. Let MATH be the polygon that results when this extended pocket is flipturned. The highest vertices of MATH are vertices of the newly flipturned pocket, and thus must lie strictly between the vertical lines through MATH and MATH. Thus, neither of the top vertices of MATH is a vertex of MATH, and by REF , we can perform at least two consecutive diagonal flipturns on MATH. See REF . In other words, any orthogonal extended flipturn can be followed by at least two diagonal flipturns. Thus, if we perform orthogonal flipturns only when no diagonal flipturn is available, any three consecutive flipturns eliminate at least four vertices.