nlin/0012034
The proof is by direct construction. In the scalar boundary-value problem for genus MATH, use MATH to make the change of variables MATH . Note that MATH, and since MATH is analytic in the gaps while in the bands it satisfies MATH, the conditions satisfied by the boundary values of MATH on the oriented contour MATH are then MATH while for MATH, the conjugate boundary conditions hold: MATH . Since the quotient MATH defined by REF necessarily decays at infinity if MATH, exists, and has at worst inverse square-root singularities at the isolated endpoints MATH and MATH, by the same kind of reasoning as in the proof of REF it follows that MATH must agree with the NAME integral of the difference of its boundary values REF . That is, MATH must be given by MATH . Now, the function MATH defined from such a solution MATH by REF will only satisfy the decay condition MATH as MATH if MATH in the same limit. Expanding the explicit REF for large MATH gives a NAME series whose leading term is MATH. Therefore, for existence it is necessary that the first MATH coefficients in this NAME series vanish identically. These coefficients are computed as moments of the densities. Expanding MATH inside the integrals in geometric series, one sees that the NAME series of MATH, convergent for MATH sufficiently large, is MATH where the quantities MATH, easily seen to be real-valued by using the complex-conjugation symmetry of the contours, are defined in REF . Thus, the conditions for existence are exactly the moment conditions recorded in REF . When the moment conditions are satisfied, the function MATH is the solution (unique, by REF ) of the scalar boundary-value problem for genus MATH. The claimed analyticity of MATH defined by REF then follows from the explicit REF for MATH and the analyticity of the boundary values of MATH.
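The step of reading off existence conditions as vanishing moments rests on a general fact: expanding the Cauchy kernel in a geometric series shows that the large-argument series coefficients of a Cauchy-type integral are exactly the moments of its density. Since the actual density and contours are elided here, the following numerical sketch checks this identity for a toy real density; the function `rho` and the interval are illustrative assumptions only, not the densities of the proof.

```python
import cmath

# Sketch (toy data): for F(z) = integral of rho(s)/(z - s) ds over [a, b],
# expanding 1/(z - s) = sum_k s^k / z^(k+1) for large |z| shows that the
# coefficient of 1/z^(k+1) in the series of F is the k-th moment of rho.
# The density rho and the interval [a, b] below are illustrative choices.

def rho(s):
    return 1.0 - s * s  # toy density on [-1, 1]

a, b, n = -1.0, 1.0, 4000
h = (b - a) / n
nodes = [a + (j + 0.5) * h for j in range(n)]

def cauchy_transform(z):
    return sum(rho(s) / (z - s) for s in nodes) * h

def moment(k):
    return sum(s ** k * rho(s) for s in nodes) * h

# Recover the series coefficients by averaging F(z) * z^(k+1) over a
# large circle |z| = R, which isolates the 1/z^(k+1) term.
R, m = 50.0, 64
coeffs = []
for k in range(3):
    c = 0j
    for j in range(m):
        z = R * cmath.exp(2j * cmath.pi * j / m)
        c += cauchy_transform(z) * z ** (k + 1) / m
    coeffs.append(c.real)

errors = [abs(coeffs[k] - moment(k)) for k in range(3)]
```

Forcing the first several series coefficients to vanish therefore forces the corresponding moments of the density to vanish, which is the shape of the moment conditions invoked above.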
It suffices to show that the sum of the first and third terms in REF has this property. Consider the integral MATH . This integral defines an analytic function of MATH. Therefore for MATH in a band MATH, we have in particular MATH where for concreteness we suppose MATH to lie to the left of the band MATH. For such MATH, we can augment the contour of integration by writing MATH since MATH is not contained in any band MATH and since MATH. By standard analyticity deformations, this expression can be written as MATH where MATH are contours lying just to the left and right of MATH (see REF ). Now, passing to the limit REF , there is a residue contribution as MATH crosses MATH, and one obtains the formula MATH . Using this expression for MATH in REF for MATH, we finally obtain the desired representation with MATH . This function is clearly analytic for all MATH in between MATH and MATH, which completes the proof. Note that the final term in this formula for MATH can be rewritten: MATH where the conjugate contours are presumed to be oriented from MATH toward the origin. With this substitution, the function MATH is also defined and analytic for MATH in the lower half-plane between MATH and MATH, where it satisfies MATH.
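The residue contribution picked up as the variable crosses the contour is the standard Cauchy-integral bookkeeping. Since the concrete functions are elided here, the sketch below illustrates the mechanism with a toy analytic function: a loop around the pole of f(s)/(s - z) yields 2*pi*i*f(z), while a loop avoiding it yields zero, and the difference is exactly the residue term that appears when a contour is deformed across the point.

```python
import cmath

# Sketch (toy data): integrating f(s)/(s - z) around a closed loop picks up
# 2*pi*i*f(z) exactly when the loop encloses s = z; this is the residue
# term that appears when a contour of integration is moved across that point.

def f(s):
    return cmath.exp(s)  # entire, so s = z is the only singularity

z = 0.3 + 0.0j

def circle_integral(center, radius, npts=2048):
    total = 0j
    for j in range(npts):
        th = 2 * cmath.pi * (j + 0.5) / npts
        s = center + radius * cmath.exp(1j * th)
        ds = radius * 1j * cmath.exp(1j * th) * (2 * cmath.pi / npts)
        total += f(s) / (s - z) * ds
    return total

enclosing = circle_integral(z, 1.0)       # loop around the pole
avoiding = circle_integral(z + 3.0, 1.0)  # loop missing the pole
residue_term = 2j * cmath.pi * f(z)
```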
Starting from a terminal endpoint MATH of a band MATH in which the condition is satisfied, we can ensure that the condition is satisfied as well in the next band along MATH by integrating MATH along the intermediate gap MATH, and insisting that the real part vanish. It is easy to see that since MATH is defined in terms of the average of the boundary values of a logarithmic integral, its derivative with respect to MATH is the average of boundary values of a NAME integral, which explains the integrand in REF . Although the integral is taken over the gap MATH of the contour MATH between the endpoints MATH and MATH, REF depend only on the ordered sequence of endpoints MATH and not on the particular contour gaps MATH. This is because the integrand has an analytic continuation from each gap of MATH to either side; using the jump condition satisfied by the boundary values of MATH in the gaps, one finds that MATH . Therefore, the integrand continues to the left as MATH and to the right as MATH.
First observe that from REF for MATH in terms of the function MATH (compare REF ), the continuity of the endpoint functions MATH in MATH and the continuous dependence of the analytic function MATH on the endpoints, we immediately find that for MATH in some sufficiently small disk MATH, the statement that MATH vanishes exactly like a square root at all endpoints MATH carries over to MATH as well. Consider the deformation of a band MATH for MATH. We seek a map MATH that satisfies the implicit relation MATH where MATH is a real constant chosen so that MATH, that is, MATH . Also, we restrict attention to maps for which MATH. First, we show that for MATH sufficiently small, MATH for some MATH. Since the denominator of MATH is real and strictly nonzero by assumption, this will follow if we can argue that the numerator of MATH is differentiable at MATH. Differentiating the numerator with respect to MATH or MATH, we may take the derivative operator inside the integral, since the integrand vanishes at the endpoints for MATH and MATH. The derivatives of the integrand with respect to MATH and MATH have contributions from explicit MATH and MATH dependence and from MATH and MATH dependence through the endpoints MATH. From the explicit REF , it is easy to see that the partial derivatives with respect to MATH and MATH are both integrable in MATH for MATH and MATH. Then, the chain rule terms are integrable in MATH because the endpoints are continuously differentiable by assumption and because the derivatives of MATH with respect to the endpoints are integrable in MATH, although they blow up like inverse square roots at the two endpoints of the contour of integration. Next, introduce the change of variables MATH where MATH . Note that from the differentiability and distinctness properties of the endpoints in the neighborhood MATH, we have the estimates MATH for some positive constants MATH and MATH and all sufficiently small MATH. The implicit relation REF therefore becomes MATH . 
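The interchange of derivative and integral used above is justified by integrability of the differentiated integrand. Numerically, the identity d/dt ∫ f(x,t) dx = ∫ ∂f/∂t dx can be checked by comparing a central difference of the integral against the integral of the partial derivative; the smooth integrand below is a toy stand-in, not the one in the proof.

```python
import math

# Sketch (toy data): differentiation under the integral sign, checked by
# comparing a central finite difference of I(t) = integral of f(x, t) dx
# with the integral of the t-partial derivative of f.

def f(x, t):
    return math.sin(t * x)

def df_dt(x, t):
    return x * math.cos(t * x)  # partial derivative in t

def integral(t, n=20000):
    h = 1.0 / n
    return sum(f((j + 0.5) * h, t) for j in range(n)) * h

def integral_dt(t, n=20000):
    h = 1.0 / n
    return sum(df_dt((j + 0.5) * h, t) for j in range(n)) * h

t0, eps = 0.7, 1e-5
finite_diff = (integral(t0 + eps) - integral(t0 - eps)) / (2 * eps)
exact = integral_dt(t0)
```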
Now, consider the function MATH defined by the integral MATH for MATH in a lens-shaped neighborhood of MATH. If the neighborhood is sufficiently thin, then the map MATH is one-to-one, since by REF its derivative is strictly nonzero in the interior of MATH. By the reality condition satisfied by MATH, the image of the lens-shaped neighborhood of MATH is a lens-shaped neighborhood of the open real interval MATH . The inverse function MATH is defined and analytic in the open real interval MATH. Near the endpoints, we have MATH near MATH and MATH near MATH for some constants MATH, MATH, and MATH. Letting MATH and MATH, the relation REF can be rewritten as MATH where MATH . We want to consider solving this equation for MATH by fixed-point iteration, that is, by choosing some MATH and constructing the sequence MATH by the recursion MATH. If this sequence converges, then we have a solution. For MATH consider the rectangular region MATH with corner points MATH and MATH. The interval MATH is contained in MATH. We claim that for MATH sufficiently small the function MATH in the integrand of REF has an analytic extension as a function of MATH to some open set containing MATH for which MATH . To show the analyticity of MATH it suffices to examine the endpoints MATH and MATH. On the one hand, MATH in REF blows up exactly like a negative one-third power at each endpoint. But on the other hand, the inverse map MATH vanishes like a two-thirds power at each endpoint, and since we are working in MATH, the function MATH vanishes like a square root at each endpoint; thus, the function of MATH in square brackets in REF vanishes exactly like a one-third power at each endpoint. Analyticity at the endpoints thus follows for the product MATH. Next, to establish REF , we note that by analyticity in MATH, there exists a uniform bound and the only question is its behavior as MATH. Clearly, MATH converges pointwise to zero in this limit for all MATH except possibly at the endpoints MATH and MATH.
But by analyticity at the endpoints and compactness of MATH, the convergence to zero is in fact uniform for MATH, and the result follows. Note that for all MATH the disk MATH is contained in MATH . We claim that for sufficiently small MATH, the transformation MATH maps this disk into itself. Indeed MATH which can be made arbitrarily small and in particular less than MATH for MATH and MATH close enough to MATH and MATH respectively in view of REF . Let MATH denote the disk in the MATH-plane centered at MATH in which the last line is bounded above by MATH. Then, for MATH and MATH both in the disk MATH, we also have for MATH, MATH . From REF , the contraction mapping theorem guarantees that the iteration MATH will converge when MATH whenever MATH is taken in the disk MATH. Moreover, the limit MATH is the unique solution of REF in this disk. We therefore have a function MATH defined for all MATH in the closure of the open interval MATH. This function is continuously differentiable in the closed interval of its definition since for all MATH defined above and for MATH, we have MATH and consequently MATH holding even at the endpoints. At these endpoints, we know that the unique solution in the disk is given simply by MATH and MATH. The curve MATH is therefore homotopic to the closed real interval MATH. The function MATH is defined on the closed real interval MATH and has a unique analytic continuation to the curve MATH. The inverse function so defined on MATH is continuous, and we then obtain MATH . For each MATH and MATH in MATH, we therefore obtain a curve with the same endpoints, MATH and MATH. By our estimates, the curves contract uniformly to MATH as MATH. Finally, set MATH . This is a continuous function of MATH. Each point in the image satisfies REF and consequently the image is a smooth curve MATH connecting MATH to MATH. 
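The contraction-mapping step above can be miniaturized: an implicit relation w = t*g(w) with |t*g'| < 1 on a disk is solved by fixed-point iteration, and the fixed point depends continuously on the parameter, which is what produces the continuously deforming curves. Here g and t are toy stand-ins chosen only to make the contraction bound obvious; they are not the functions of the proof.

```python
import math

# Sketch (toy data): solve w = t * cos(w) by fixed-point iteration.
# Since |d/dw (t * cos(w))| <= t = 0.4 < 1, the map is a contraction,
# so the iteration converges to the unique fixed point in a small disk.

def solve_fixed_point(t, w0=0.0, tol=1e-12, max_iter=200):
    w = w0
    for _ in range(max_iter):
        w_next = t * math.cos(w)
        if abs(w_next - w) < tol:
            return w_next
        w = w_next
    raise RuntimeError("iteration did not converge")

t = 0.4
w_star = solve_fixed_point(t)
residual = abs(w_star - t * math.cos(w_star))

# Continuous dependence on the parameter, as used for the endpoint curves:
w_near = solve_fixed_point(t + 1e-6)
drift = abs(w_near - w_star)
```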
Moreover, by continuity, MATH will lie in the set MATH for MATH sufficiently small, and to achieve this, we restrict MATH and MATH to some slightly smaller disk MATH. Finally, to see that for all MATH the differential MATH is nonvanishing in MATH and of the same sign as MATH in MATH, simply differentiate REF to obtain MATH from which the required result follows from the estimate REF . To verify the continuity of the exceptional band MATH, one repeats the above arguments, substituting zero everywhere for MATH. Thus, one uses MATH and obtains an estimate analogous to REF . Also, one takes MATH and MATH and obtains estimates analogous to REF . By similar arguments based on contraction mapping, one verifies the continuity of MATH for MATH and MATH in some sufficiently small disk neighborhood MATH of MATH. Finally, we restrict MATH to lie in MATH where MATH which is nonempty for finite MATH. This completes the proof.
Using the general relations REF , we see that the function MATH may be expressed in terms of an integral of the corresponding candidate density MATH. In this connection, the desingularized representation REF of the density is useful. Let MATH and MATH denote the square root function MATH (defined in REF) and the analytic function MATH (compare REF ) defined from the endpoints MATH for MATH. Consider an ``internal'' gap MATH intended to connect the endpoints MATH and MATH for MATH. Since MATH, the results of REF hold and throughout MATH the band contours MATH exist as smooth curves. Therefore, in MATH, the function MATH may be written as MATH where we recall that by construction MATH is an imaginary constant when restricted to each band MATH. Let MATH for MATH be a parametrization of the given gap path MATH. Therefore MATH and MATH. We will show that for all MATH in the sufficiently small disk MATH, the path parametrized by MATH where MATH admits the relevant strict inequality for all MATH. By continuity of the endpoints in MATH and MATH, this is a near-identity linear transformation. We are given that MATH and must show that MATH for MATH sufficiently small. First, we consider a neighborhood of the endpoint MATH. Since by assumption we are working in the neighborhood MATH of REF , and therefore in the bigger neighborhood MATH (compare the proof of REF ), the integrand MATH vanishes at MATH exactly like MATH for all MATH. This implies that in a sufficiently small (independent of MATH and MATH) neighborhood MATH in the complex plane that contains MATH for all MATH close enough to MATH, the region where MATH holds is a generalized sector whose boundary curves have tangents at MATH that meet at an angle of MATH. The band contour MATH has a tangent at its upper endpoint MATH that bisects this angle. Without loss of generality, we now suppose that the given gap contour MATH has a tangent at MATH whose angle lies strictly between the tangents to the boundary curves.
Indeed, if this is not true of the given gap contour, it may be achieved sacrificing neither smoothness nor the inequality MATH by a small deformation near MATH. Now, the boundary curves in MATH satisfy MATH and from the fixed point theory used in the proof of REF it follows that these boundary curves deform continuously in MATH near MATH. Since the same is true of the path MATH by construction, it is clear that the disk MATH can be taken small enough that the inequality is satisfied on MATH for all MATH. To handle the other endpoint, MATH, choose an analogous fixed neighborhood MATH containing MATH for all MATH sufficiently close to MATH. Then, a similar argument can be used to show that, possibly by replacing MATH with a smaller disk, the inequality is satisfied on MATH for all MATH. Let MATH be defined so that the interval MATH parametrizes the curve MATH by the function MATH. Similarly, let MATH be defined so that MATH parametrizes MATH. Let MATH . It remains to verify the inequality (again, possibly by replacing MATH with a smaller disk) for MATH. Now, because we are avoiding the endpoints, there exists some MATH depending only on MATH, MATH, MATH, and MATH such that in this closed interval, we have MATH. Consequently, it is sufficient to show that MATH for MATH close enough to MATH. We have MATH . The first factor is uniformly bounded, and by simple continuity arguments using the fact that the map MATH is a near-identity transformation, the second factor can be made arbitrarily small for MATH near MATH and in particular the product can be made less than MATH. This completes the proof of existence of the ``internal'' gap MATH. Having established the persistence of the gaps connecting pairs of endpoints in MATH, we must now show that the ``final'' gap MATH, which must connect MATH to the origin, also persists for MATH near MATH. In this case, the near-identity transformation of the path MATH parametrized by MATH is given simply by MATH .
The local analysis near MATH corresponding to the endpoint MATH goes through exactly as before. For the local analysis near MATH corresponding to MATH for all MATH, we first consider REF of the function MATH. For MATH in the interior of the gap MATH, we can use the analyticity of the given eigenvalue density MATH to rewrite REF for MATH in the form MATH where MATH is the complementary density function corresponding to MATH via REF . Now it follows from the boundary value REF that the function MATH extended by complex conjugation MATH to MATH is analytic at MATH. Therefore, the first two integrals on the right-hand side of REF can be combined and the path of integration may be deformed slightly either to the right (for MATH) or left (for MATH) in a small neighborhood of the origin. Thus, we deduce that MATH extends analytically to a neighborhood of the final endpoint MATH of the gap MATH. Now, for MATH real, it follows from reality of the logarithm that MATH . Since for MATH (respectively MATH) the portion of MATH near the origin necessarily lies in the second (respectively first) quadrant, this together with the analyticity of MATH at MATH shows that in some neighborhood MATH of the origin, the given gap contour MATH lies in some generalized sector bounded by the real axis and some boundary curve that makes a nonzero angle with the real axis at MATH and MATH (recall that MATH). Without loss of generality, we may assume that MATH has a tangent line at the origin making a nonzero angle with both the real axis and the tangent line of the boundary curve. Then, since the boundary curve again satisfies MATH the fixed point theory predicts smooth deformation of this curve with respect to MATH and MATH near MATH and MATH, which in conjunction with the continuity of the near-identity map MATH gives the necessary inequality in MATH. This concludes the analysis near MATH corresponding to the origin in the MATH-plane. 
With the endpoints taken care of in this way, the argument that the inequality holds on parts of MATH that are bounded away from the two endpoints MATH and MATH is analogous to the corresponding argument we used in proving the persistence of the ``internal'' gaps. Therefore, for all MATH in the sufficiently small disk MATH, the ``final'' gap contour MATH exists as well. This completes the proof.
From the NAME integral representation REF of MATH, the result will follow if it is true that MATH . Now, using the conjugation symmetry of the contours, REF , and analyticity of MATH, this is equivalent to the condition MATH . The first term then vanishes because the given asymptotic eigenvalue measure is real on the imaginary axis, and the second term is equivalent to a sum of integrals of MATH over the bands MATH of MATH. The reality of each of these integrals is exactly the content of REF , which proves the lemma.
This follows immediately from the series expansion REF for the function MATH, along with the fact that MATH where MATH for large MATH, and the large MATH asymptotic behavior of MATH guaranteed by REF .
To prove REF , note that MATH . Differentiating REF with respect to MATH, we find MATH and we have proved REF . To prove REF , replace MATH with MATH in REF , and differentiate with respect to MATH. To prove REF , we use the formula MATH and the NAME series representation REF for MATH and differentiate with respect to MATH: MATH where the second equality results from factoring out MATH and rearranging the sum. The relation REF then follows by using REF . To prove REF , one differentiates with respect to MATH, and uses REF .
The complexification of the moment MATH is the formula MATH that is, when MATH is taken to be the complex conjugate of MATH, this formula agrees with REF . Using now-familiar contour deformations and the paths MATH and MATH introduced in the proof of REF to represent the function MATH, one rewrites the moment as MATH where MATH is an arbitrarily large positively oriented loop contour and where the conjugate paths MATH are taken to be oriented from MATH toward the origin. Note that the paths MATH may be taken to be the same for all choices of the path MATH; the path MATH may be taken as the imaginary interval MATH and the path MATH may be deformed toward infinity with the only obstruction being any points of nonanalyticity of MATH. With the moment MATH rewritten in this way, the only dependence on the endpoints enters through the function MATH. Since all cuts of this function are contained inside the closed contour MATH and also between the contours MATH and MATH or between MATH and MATH, and since the branch of the square root is determined by asymptotic behavior at infinity, MATH is easily seen to be completely independent of MATH and an analytic function of the MATH independent complex variables MATH. The permutation symmetry of swapping any pair of endpoints MATH or any pair MATH follows from similar considerations.
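The independence of the rewritten moment from the particular contours rests on Cauchy's theorem: deforming a loop within a region where the integrand is analytic, with all cuts kept inside, leaves the integral unchanged. The sketch below checks this for a toy function with a bounded branch cut; the function R and the radii are illustrative assumptions only.

```python
import cmath

# Sketch (toy function): R(z) = z * sqrt(1 + 1/z^2) is analytic outside a
# bounded branch cut joining the branch points +i and -i, with R(z) ~ z at
# infinity.  By Cauchy's theorem, the loop integral of R(z)/z^2 is the
# same for every loop enclosing the cut, here checked at two radii.

def R(z):
    return z * cmath.sqrt(1.0 + 1.0 / (z * z))

def loop_integral(radius, npts=4096):
    total = 0j
    for j in range(npts):
        th = 2 * cmath.pi * (j + 0.5) / npts
        z = radius * cmath.exp(1j * th)
        dz = 1j * z * (2 * cmath.pi / npts)
        total += R(z) / (z * z) * dz
    return total

small_loop = loop_integral(2.0)
large_loop = loop_integral(10.0)  # same value: integrand analytic in between
```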
Since we have already shown that MATH, it remains to show that MATH, the constant value of the function MATH in the band MATH, is identically zero as a function of MATH. To do this, we first establish a general formula, holding for an arbitrary genus zero ansatz (that is, not only for MATH), for the derivative of MATH with respect to MATH. In REF we observed that this quantity had the interpretation of a local wavenumber MATH. Let us calculate the wavenumber MATH in terms of the endpoints MATH and MATH. First, from the definition of the constant MATH, we have MATH . Next, using REF , we find MATH where we recall that the overbar on the logarithm indicates the average of the two boundary values taken on the contour MATH. We now show that the derivative with respect to MATH can be moved inside the integral and put onto the complementary density MATH. Recalling REF and using the facts that the logarithmic integral of the function MATH is independent of MATH and that MATH for MATH, REF implies MATH . Here, the MATH-derivative can be brought inside the integral since MATH vanishes at the moving endpoints, and the last step follows because the density function MATH is independent of MATH. In terms of the function MATH, we then have MATH . Using the expression for MATH obtained in the final remark in REF, we find that for MATH, MATH and therefore MATH where we have used the definition of MATH as an average of boundary values. Now the function MATH is the boundary value of MATH as MATH approaches the origin in the oriented contour MATH from the ``plus" side. This means that for MATH, MATH has an analytic extension as a function of MATH to the ``minus" side of MATH. A similar argument shows that MATH is analytic in MATH on the ``plus" side of MATH. And of course MATH extends analytically to the ``plus" side of MATH while MATH extends to the ``minus" side. 
These observations allow us to move the path of integration away from the integrable singularity at the origin in each integral. Namely, if we let MATH be a path from MATH to MATH lying to the right of MATH and MATH be a path from MATH to MATH lying to the left of MATH, then it is easy to see that MATH . Now, integrate by parts in each integral using the fact that MATH vanishes at the endpoints to find MATH . The paths of integration may now be combined into a single counterclockwise loop surrounding MATH and the singularity at the origin. Calculating this loop integral by a residue at infinity (again using detailed information about the form of MATH valid for genus MATH), one finds at last that MATH . Using the fact that for the genus zero ansatz at MATH the endpoint is purely imaginary, we see from this general formula that MATH is independent of MATH. We now show that with MATH, MATH . This follows from two observations. First, for MATH fixed on the imaginary axis below MATH, the function MATH converges pointwise to zero as MATH. This can be seen by noting that for MATH small enough MATH lies in the band MATH; a direct estimate of the boundary values of the functions MATH and MATH that vanishes as MATH is then easily obtained from the exact REF . Next, since as noted above there is an effective upper constraint for MATH on the measure MATH on the imaginary axis, it follows from a dominated convergence argument that the function MATH converges pointwise to zero for MATH as MATH. These results imply that for all MATH at MATH, MATH, and the proof is complete.
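Evaluating the combined loop integral ``by a residue at infinity'' means reading off the 1/z coefficient of the integrand's expansion at infinity: for a counterclockwise loop enclosing all finite singularities, the integral equals 2*pi*i times that coefficient. The rational integrand below is a toy stand-in for the explicit genus-zero expression used in the proof.

```python
import cmath

# Sketch (toy data): f(z) = (2z + 3)/(z^2 + 1) = 2/z + 3/z^2 - 2/z^3 + ...
# at infinity, so a large counterclockwise loop integral equals
# 2*pi*i * 2, i.e. minus 2*pi*i times the residue at infinity (-2).

def f(z):
    return (2 * z + 3) / (z * z + 1)

def loop_integral(radius, npts=4096):
    total = 0j
    for j in range(npts):
        th = 2 * cmath.pi * (j + 0.5) / npts
        z = radius * cmath.exp(1j * th)
        dz = 1j * z * (2 * cmath.pi / npts)
        total += f(z) * dz
    return total

value = loop_integral(50.0)
predicted = 2j * cmath.pi * 2  # the 1/z coefficient is 2
```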
We need to show that the Jacobian determinant MATH does not vanish. Since MATH is a linear combination of MATH and MATH, it is equivalent to show that MATH. In REF, it was shown that MATH . To calculate the partial derivatives, we first establish a simple formula for MATH that involves the initial data MATH. For this purpose, we define a quantity MATH from REF by writing MATH, and to calculate MATH we momentarily suppose that MATH and MATH with MATH and MATH being independent real numbers with MATH and MATH. Then, substituting REF for MATH into REF , one exchanges the order of integration to find MATH . Now, define a square-root function MATH satisfying MATH, and defined in the MATH-plane slit along the real intervals MATH and MATH with the normalization MATH . Letting MATH (respectively MATH) be a small counter-clockwise oriented loop encircling MATH (respectively encircling MATH) as shown in REF , we may therefore write MATH . This formula will be analytic in MATH and MATH for MATH in a complex neighborhood of the imaginary interval MATH if the initial data MATH is analytic. With the help of REF , we may now easily compute derivatives of MATH with respect to MATH and MATH at MATH and MATH. First, note that MATH and MATH . Setting MATH and MATH, the final two terms in each of the above formulae can be combined, and the integrand of the MATH integral then can be calculated by a residue at MATH which vanishes identically as a function of MATH. Thus, only the first term survives in each case, and these too can be computed by residues. Using MATH and MATH, and expressing the derivatives of the inverse function MATH in terms of derivatives of MATH, one obtains MATH . Finally, we use these formulae to evaluate the Jacobian for MATH. We find MATH . By our monotonicity assumptions on the initial data, this Jacobian is finite and strictly nonzero for all MATH. Thus, the lemma is proved by appealing to the implicit function theorem.
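The appeal to the implicit function theorem requires exactly the nonvanishing Jacobian determinant computed above. The small Newton solve below is a toy illustration of why that determinant matters: each step inverts the 2x2 Jacobian, so the iteration (and the local solution it approximates) is well defined only while the determinant stays away from zero. The system F here is an assumed stand-in, not the endpoint equations of the lemma.

```python
# Sketch (toy system): solve F(x, y) = 0 by Newton's method, which at each
# step inverts the 2x2 Jacobian; a finite, strictly nonzero determinant
# (as in the lemma) is what makes the local solution exist and the
# iteration well defined.

def F(x, y):
    return (x * x + y * y - 2.0, x - y)

def jacobian(x, y):
    return ((2 * x, 2 * y), (1.0, -1.0))

def newton(x, y, tol=1e-13, max_iter=50):
    for _ in range(max_iter):
        f1, f2 = F(x, y)
        if abs(f1) + abs(f2) < tol:
            break
        (a, b), (c, d) = jacobian(x, y)
        det = a * d - b * c  # must be nonzero, cf. the lemma
        dx = (-f1 * d + f2 * b) / det
        dy = (-f2 * a + f1 * c) / det
        x, y = x + dx, y + dy
    return x, y

x_star, y_star = newton(2.0, 1.0)  # converges to the root (1, 1)
```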
From REF , the endpoint function MATH is differentiable in a neighborhood of MATH for each nonzero MATH. The proofs of the local continuation results given in REF can easily be applied here to show that, if the endpoint leaves the imaginary axis by moving into the right half-plane for a MATH ansatz or by moving into the left half-plane for a MATH ansatz, then a gap contour MATH will exist in MATH connecting MATH to MATH, on the interior points of which the inequality MATH holds strictly. Similarly, these proofs show that for small MATH a band contour MATH will exist on which the differential MATH is real and strictly negative. However, in this case the difficulty is that for MATH the band MATH lies on the boundary of MATH and we must therefore prove that the band MATH lies entirely on one side or the other of the imaginary axis for small time. Note that as MATH moves along the contour MATH (whose existence for small time is guaranteed by the arguments in REF) from MATH to MATH, the function MATH defined in REF is real and strictly decreasing. Therefore, by the NAME equations, MATH is negative (respectively positive) for all MATH in a small lens-shaped region just to the left (respectively right) of MATH. We use the expression for the total derivative of the function MATH with respect to MATH obtained in the final remark in REF to calculate MATH . For the purely imaginary endpoint configuration at MATH, this quantity is strictly nonzero with sign MATH for all MATH on the positive imaginary axis below the endpoint MATH. Using the relation REF , we therefore see that for MATH the interior points of the band MATH all move into the right half-plane for MATH small and positive and into the left half-plane for MATH small and negative. Similarly, for MATH the band moves to the left for MATH and to the right for MATH. This shows that the ansatz corresponding to MATH always deforms for small time so that all inequalities remain valid, which proves the theorem.
To prove this, we need to examine the equations MATH and MATH for such endpoint configurations, which requires in particular that MATH can be defined for MATH on the imaginary axis above MATH. We will now show that under the condition MATH, the function MATH is analytic at MATH, and therefore has a unique analytic continuation for some distance along the imaginary axis above MATH. We begin with the WKB REF that defines MATH for MATH on the imaginary axis between MATH and MATH. In this formula, MATH and MATH are respectively the negative and positive real roots of the equation MATH. For even bell-shaped functions MATH, MATH and both functions are well-defined for MATH in the imaginary interval in question. To show the analyticity at MATH, we use the fact that MATH is real-analytic and for MATH just below MATH define a function MATH satisfying MATH in a neighborhood MATH of MATH containing only MATH as roots of MATH. MATH is taken to be cut on the real axis between MATH and MATH and has positive boundary values on the upper half-plane side of the cut. With this normalization, we also have MATH . Then, the WKB density REF is rewritten as MATH where MATH is a clockwise-oriented loop surrounding the cut of MATH and lying in MATH. Because we are assuming that MATH, we can choose the neighborhood MATH and the loop contour MATH to be independent of MATH below but sufficiently near MATH such that for all such MATH the contour MATH only ever encloses the two roots MATH. If MATH, then more than two roots would have to coalesce at MATH when MATH, and the contour MATH would have to shrink as MATH approaches MATH in order to exclude the unwanted roots. Now, with MATH, the two roots MATH coalesce as MATH moves up the axis through MATH, and reemerge as a purely imaginary complex-conjugate pair for MATH just above MATH.
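The mechanism of the coalescing roots can be seen in the simplest possible model: the two roots of w^2 = a form a real pair for a > 0, collide at w = 0 when a = 0, and reemerge as a purely imaginary conjugate pair for a < 0. This toy quadratic is an assumed stand-in for the root equation above.

```python
import cmath

# Sketch (toy model): the roots of w^2 = a are a real pair for a > 0,
# coalesce at w = 0 when a = 0, and reemerge as a purely imaginary
# conjugate pair for a < 0, mirroring the behavior of the two roots as
# the spectral parameter moves up through the critical point.

def roots(a):
    r = cmath.sqrt(a)
    return (-r, r)

real_pair = roots(0.25)   # a > 0: real roots near -0.5 and 0.5
imag_pair = roots(-0.25)  # a < 0: imaginary roots near -0.5j and 0.5j
```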
For MATH just above MATH, we still have only two roots within MATH and enclosed by MATH, and we define MATH to be cut along the imaginary axis between these two roots and to be normalized by the same relation as before REF . With this choice, it is then clear that REF defines the analytic continuation of the original REF for MATH through the point MATH. For MATH on the imaginary axis above MATH, the function MATH takes positive imaginary boundary values on the left of the vertical cut. Thus, for such MATH we can write REF in the form MATH where the positive square root is taken, and where MATH is the positive imaginary number MATH that satisfies MATH. Or, changing variables to MATH, MATH . This formula, representing the analytic continuation of MATH, is positive imaginary for MATH above MATH on the imaginary axis. The moment condition MATH is satisfied automatically for MATH by any MATH configuration with endpoint MATH on the imaginary axis with MATH. In this situation, the moment MATH explicitly takes the form MATH . Now, with the band MATH connecting the origin to MATH in the first quadrant of the MATH-plane for MATH and the second quadrant of the MATH-plane for MATH (we are assuming that MATH does not coincide identically with an interval of the positive imaginary axis), it is easy to see that the function MATH satisfying MATH, cut on the band MATH and normalized to MATH for large MATH, is purely real for MATH in the imaginary interval MATH, and in fact MATH for such MATH, where the positive square root is intended. The contour MATH may be taken to coincide with the interval MATH oriented from MATH down to MATH, and correspondingly, MATH coincides with the interval MATH oriented from MATH down to MATH. So, the moment becomes MATH . Using MATH and changing variables MATH in the second term, one sees that MATH holds identically for all MATH. We will now show that the reality condition MATH will determine the endpoint MATH at MATH as a function of MATH. 
Using REF for MATH, and the fact that MATH, the relevant quantity to consider is MATH or with MATH and a change of variables MATH, MATH . Using REF and MATH with MATH real and positive, we get MATH . The equation MATH can evidently only have solutions if MATH. In this case we have MATH . We simplify further by exchanging the order of integration using MATH and thus find MATH where the positive square root is meant. The inner integral is identified as (compare page REF) MATH where MATH denotes the complete elliptic integral of the second kind with modulus MATH. Thus, the condition MATH becomes the relation REF which completes the proof of our claim.
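The inner integral is identified with the complete elliptic integral of the second kind. As a sanity check on the convention (modulus k, with E(k) the integral of sqrt(1 - k^2 sin^2 t) over [0, pi/2]), the sketch below evaluates it by a simple midpoint rule at the two classical values E(0) = pi/2 and E(1) = 1; the quadrature is an illustration, not the identification performed in the proof.

```python
import math

# Sketch: E(k) = integral over [0, pi/2] of sqrt(1 - k^2 sin^2(t)) dt,
# the complete elliptic integral of the second kind with modulus k,
# evaluated here by a midpoint rule.

def ellip_e(k, n=20000):
    h = (math.pi / 2) / n
    return sum(math.sqrt(1.0 - (k * math.sin((j + 0.5) * h)) ** 2)
               for j in range(n)) * h

e0 = ellip_e(0.0)  # classical value: pi/2
e1 = ellip_e(1.0)  # classical value: 1
```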
The evaluation of the moments on the degenerate solution is completely straightforward using REF that makes clear the analytic dependence of the moments on the endpoints. Thus, for any MATH, we find MATH . Since for the degenerate set of endpoints MATH, where MATH is the square root function for genus MATH corresponding to the single complex endpoint MATH, we can again use REF to identify the right-hand side as exactly MATH, which in particular proves REF . Similarly, using REF for the moments along with the degenerate form of the square root function for MATH as described above, we find MATH . Evaluating the first integral exactly by residues, and identifying the result with REF proves REF . Finally, REF is proved either by repeating the above arguments, or by simply noting the symmetry MATH and using the reality of the moments.
nlin/0012034
As described in REF, it follows from REF and the representation REF of MATH that MATH . Since MATH, we have MATH, and consequently the right-hand sides of REF vanish. Since MATH, the right-hand sides of REF vanish as well. The determinant of the left-hand side of these relations is MATH, and the corollary is proved.
nlin/0012034
This essentially follows from the uniqueness result for the scalar boundary-value problem for MATH described in REF . Indeed, both functions satisfy the same decay conditions at infinity, and by the explicit formulae are analytic in MATH with boundary values that are NAME continuous with exponent MATH. The fact that both functions satisfy the same boundary conditions almost everywhere on MATH follows from taking the limit MATH and MATH in the scalar boundary-value problem for genus MATH. Therefore both functions satisfy the scalar boundary-value problem for MATH and are equal by REF .
nlin/0012034
From REF , the candidate density function MATH obtained for the degenerate genus MATH configuration from MATH agrees with the nondegenerate genus MATH candidate density function. Since this function is NAME continuous it is bounded. This implies that in the degenerate limit, MATH remains uniformly bounded on MATH, the path of integration from MATH to MATH. Since MATH and MATH both converge to MATH in the degenerate limit, the result follows from the definition of the function MATH by REF as an integral along the shrinking band MATH.
nlin/0012034
By REF, MATH is the real part of the integral of MATH from MATH to MATH, where the analytic function MATH corresponds to the endpoints MATH, MATH, and MATH. By construction, MATH, so equivalently we have MATH. According to REF, the MATH function MATH agrees with MATH when MATH, since they are both derived from the same unique solution of the scalar boundary-value problem for MATH with MATH. Evaluating at MATH completes the proof.
nlin/0012034
The fact that the degenerate triple of endpoints satisfies the MATH endpoint equations follows from the chain of results already presented in this section. It remains to verify the final claim: that the inequalities persist under reinterpretation of the MATH configuration subject to the additional REF at MATH as a degenerate MATH configuration. But this follows from REF , which implies that the functions MATH and MATH are exactly the same in both cases.
nlin/0012034
We begin the proof by computing the partial derivatives of the MATH moment MATH with respect to the endpoints, as well as the partial derivatives of the first four moments with respect to MATH, and evaluating them on the degenerate solution. First, from REF , one finds that for the moment MATH corresponding to a general genus MATH ansatz MATH . Recall that the contours MATH and MATH are bounded away from all of the endpoints MATH (see REF ). From this and the relation MATH holding for the degenerate MATH configuration, we find by taking linear combinations, MATH where on the left-hand side the derivatives of genus MATH moments are evaluated on the degenerate configuration and on the right-hand side the derivatives of MATH are evaluated on the corresponding critical MATH configuration. Using the relations developed in REF that allow derivatives of higher moments to be expressed in terms of derivatives of MATH, these relations imply MATH . Second, from the permutation invariance of the moments (compare REF) a chain rule calculation shows that MATH where on the right-hand side MATH is first evaluated on the degenerate endpoint configuration, and then differentiated with respect to MATH. Therefore, using REF , we find MATH where we have simplified the result with the help of REF . At the same time, the left-hand side can be expressed in terms of derivatives of MATH by the reasoning of REF, yielding MATH . Putting these results together along with a similar calculation involving derivatives with respect to MATH and the linear combination MATH, one finds MATH . Note that this identity, together with REF , implies that MATH . Third, to calculate the partial derivatives of the moments MATH with respect to MATH, we use REF and the expansion of MATH for large MATH to find MATH . Now, according to the calculations presented in REF, the relevant Jacobian matrix for eliminating MATH, MATH, MATH, and MATH is a product of a NAME matrix and a diagonal matrix: MATH . 
Evaluating the Jacobian determinant on the degenerate configuration, we find that the NAME determinant factor is nonzero because MATH, MATH, MATH, and MATH are all distinct, and the diagonal determinant is nonzero according to the above calculations and REF . It follows from the implicit function theorem that we can solve for MATH, MATH, MATH, and MATH. The partial derivatives of these implicitly-defined functions with respect to the remaining independent variables MATH, MATH, and MATH, are then obtained by NAME 's rule, using the expressions for the derivatives of the moments obtained above. This yields REF and completes the proof.
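The argument uses that the structured determinant factor is nonzero when the four points are distinct. Assuming that factor is of Vandermonde type (the matrix name is elided above), the sketch below checks the classical identity det V = ∏_{i&lt;j}(x_j − x_i), which vanishes exactly when two nodes coincide:

```python
from functools import reduce
from itertools import combinations

def vandermonde_det(xs):
    """Determinant of V with V[i][j] = xs[j]**i, via Gaussian
    elimination with partial pivoting."""
    n = len(xs)
    m = [[float(x) ** i for x in xs] for i in range(n)]
    det = 1.0
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(m[r][c]))
        if m[p][c] == 0.0:
            return 0.0          # singular: two nodes coincide
        if p != c:
            m[c], m[p] = m[p], m[c]
            det = -det
        det *= m[c][c]
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for k in range(c, n):
                m[r][k] -= f * m[c][k]
    return det

def pairwise_product(xs):
    """Closed form: product over i < j of (xs[j] - xs[i])."""
    return reduce(lambda a, b: a * b,
                  (xs[j] - xs[i]
                   for i, j in combinations(range(len(xs)), 2)), 1.0)
```

With four distinct nodes the two computations agree and are nonzero; repeating a node drives the determinant to zero, mirroring the nondegeneracy condition in the proof.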
nlin/0012034
Using the relation REF and rewriting the integral over MATH as half of a loop integral around the band, we find MATH where MATH denotes a sufficiently small positively oriented contour surrounding MATH that is held fixed as MATH tends to zero. On both paths of integration, the following approximation for MATH holds uniformly: MATH where the partial derivatives with respect to MATH may be obtained explicitly from REF if desired. But in fact, they are not necessary for the present calculation; since MATH and MATH are analytic and uniformly bounded inside each loop as MATH tends to zero (MATH and MATH taken small enough to exclude MATH and MATH for all sufficiently small MATH), we find from the residue theorem that MATH . Now, as MATH, by analytic dependence on the endpoints, MATH and MATH converge to the corresponding quantities evaluated on the degenerate MATH endpoint configuration. That is, from REF with MATH and using the degenerate configuration, MATH . Comparing with REF for MATH or MATH evaluated on the degenerate configuration, we find MATH . Finally, using the explicit representation of these derivatives of MATH given in REF , we find MATH where we have used the definition of the parameters MATH and MATH. Passing to the limit MATH then completes the first part of the proof. To establish the convergence of the partial derivatives with respect to MATH and MATH, we note that since MATH, MATH, MATH and MATH depend differentially on MATH, MATH, and MATH, and since the functions MATH and MATH depend analytically on MATH, MATH, MATH, MATH, and MATH, it follows from REF that MATH is differentiable uniformly in MATH. This yields the convergence of derivatives of MATH expressed in REF . Now we carry out a similar analysis of the function MATH, which is a bit more complicated due to a logarithmic divergence. First, we show that MATH is differentiable with respect to MATH at MATH, MATH and MATH. 
Using the chain rule and the expressions for the partial derivatives of MATH with respect to endpoints obtained in REF, we find MATH . One can solve for the products MATH explicitly (it is an exact NAME system), and evaluate the result on the degenerate configuration, yielding MATH . It follows that as MATH, MATH with the error being uniformly small for MATH in any compact neighborhood of MATH. Therefore, we have MATH . Thus, it remains to analyze MATH with MATH. Recall from REF the formula for MATH in terms of the functions MATH and MATH: MATH . We note several asymptotic properties of MATH and MATH that follow from the form of the linear terms in the series expansions of the endpoints in positive powers of MATH (for MATH). For fixed MATH, we have MATH where the order MATH terms cancel because the symmetric contributions from MATH and MATH at this order cancel exactly (compare REF ). The error is uniformly small for MATH in any compact set not containing the points MATH, MATH or their conjugates. Using this result in REF for MATH, we see from the fact that the contours of integration lie a fixed distance from these points that for all MATH in between the contours MATH and MATH, MATH where MATH means the MATH function MATH constructed for MATH, that is, on the degenerate configuration. While the approximation REF of MATH holds for intermediate points on the contour of integration in REF for MATH, it fails near both limits. To give an approximation uniformly valid near the lower limit of integration, let MATH be the function defined by the relation MATH, cut along the band MATH and the negative imaginary axis, and normalized so that for MATH sufficiently large and positive real, MATH is positive real. Then we have MATH holding uniformly in a sufficiently small (but fixed as MATH) neighborhood of MATH. 
To approximate the square root near the upper limit of integration, let MATH be the function defined by the relation MATH, cut along the shrinking band MATH and normalized so that for large MATH, MATH. Then, MATH, holding uniformly in a sufficiently small fixed neighborhood of MATH. We stress that these expansions are only valid when MATH. For MATH, larger terms come into play that we have already taken into account by computing the derivative with respect to MATH. We take the path of integration in REF to pass through two points MATH and MATH that are fixed as MATH and lie respectively in the regions of validity of REF. Then, we have MATH. To handle the first term, make the change of variables MATH: MATH. The quantity in square brackets has a convergent expansion in odd powers of MATH with coefficients MATH that are independent of MATH. Similarly, MATH has the convergent expansion MATH, where MATH are the NAME coefficients of MATH. Since the convergence is uniform on the path of integration, the order of integration and summation may be exchanged: MATH, since for MATH, MATH. Upon dividing by MATH, the main contribution must necessarily come from the third term, MATH. Here, the quantity in square brackets has a uniformly convergent expansion in positive powers of MATH, with coefficients MATH that are independent of MATH, while MATH has the uniformly convergent NAME expansion MATH, where MATH. Note that for MATH, the terms proportional to MATH in the square brackets cancel (compare REF), and therefore MATH is a quantity of order MATH. Exchanging the order of summation and integration by uniform convergence, we have MATH. As long as MATH, MATH. These terms give constant contributions only for MATH, with all other terms being of order at least MATH. On the other hand, if MATH, then there are logarithmic contributions. Thus, using MATH, we have MATH. Regardless of the path of integration, as MATH, MATH, and consequently MATH.
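The dichotomy invoked at the end, constant contributions versus logarithmic ones, is the elementary behavior of power integrals over a shrinking interval: only the exponent −1 produces a logarithm of the cutoff. A minimal sketch with hypothetical exponents:

```python
import math

def tail_integral(n, delta):
    """Exact value of the integral of t**n over [delta, 1]:
    a logarithm of the cutoff appears only for n = -1."""
    if n == -1:
        return -math.log(delta)
    return (1.0 - delta ** (n + 1)) / (n + 1)

# as the cutoff shrinks, n = -1 diverges like -log(delta),
# while the other exponents converge to constants
values = {n: tail_integral(n, 1e-8) for n in (-1, 0, 1, 2)}
```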
When we combine this expression with the other components of MATH at MATH, we recall that the sum of the constant terms in MATH vanishes because MATH, and thus we find MATH as MATH. This establishes the desired convergence of MATH. To verify the convergence of the corresponding partial derivatives, we use the following exact formula, which can be obtained by direct application of the chain rule and substitution from the NAME system used to eliminate MATH, MATH, MATH, and MATH: MATH, where MATH denotes the MATH NAME matrix. In this formula, the notation MATH refers to the derivative after MATH, MATH, MATH, and MATH have been eliminated in favor of MATH, MATH, and MATH. Evaluating the determinants explicitly gives MATH. Substituting the expansions of the endpoints in terms of MATH, one finds that as MATH, MATH. Combining this relation with its complex conjugate, and using the chain-rule relations MATH, one obtains the desired convergence of the partial derivatives of MATH with respect to MATH and MATH. We complete the proof with a simple remark about the uniformity of these limits. The statement that for fixed MATH the region of uniform validity is an arbitrary fixed annulus in the MATH polar plane is mirrored in the expressions for the functions MATH and MATH, which become meaningless if MATH tends to infinity or zero.
nlin/0012034
By REF , it is sufficient to prove that the equations MATH and MATH can be solved for MATH and MATH when MATH is near MATH. Fix an arbitrary MATH, and for all MATH define MATH . By REF , these functions are differentiable with respect to MATH and MATH and the partial derivatives are continuous down to MATH. Of course when MATH, we have MATH . When MATH, the Jacobian determinant of these relations is MATH which is not zero when evaluated on either of the two explicit solutions of the model equations for MATH. It follows from the implicit function theorem that for each of the two solutions of the model problem and for sufficiently small positive MATH there is a solution MATH and MATH, continuous in MATH, of the equations MATH. The corresponding solution of the endpoint equations for genus MATH is given in terms of these functions for MATH by MATH with similar formulae for the complex conjugates.
nlin/0012034
With the orientation MATH of the contour MATH, the admissible differential MATH is a real nonnegative NAME measure on MATH with finite mass. Let MATH. Then MATH where MATH with MATH defined on MATH by the orientation MATH. First, note that the term that is quadratic in MATH is always nonnegative, being the NAME 's energy of a signed measure with finite positive and negative parts, each of which has finite NAME 's energy. Indeed, the nonnegativity of the NAME 's energy for such measures is, for example, the content of REF. Next, observe that for MATH, and with the value of the interpolant index MATH chosen according to REF , we have MATH . Thus we have MATH . Since according to REF we have MATH for MATH in the support of MATH, the integral on the right-hand side may be taken over the gaps of MATH. Therefore, MATH because MATH is a nonnegative measure and according to REF we have MATH for MATH in the gaps of MATH.
nlin/0012034
For each MATH, we have MATH. By definition of the deformed measure MATH, we have MATH . First, we expand the quadratic term for MATH small, using the fact that for any branch of the logarithm, MATH, MATH . The second integral proportional to MATH above is nonsingular because MATH is analytic on the support of MATH. Upon regularization by interpreting one or the other of the iterated integrals in the sense of the NAME principal value, the terms in the numerator can be separated. Thus, MATH . The first integral proportional to MATH can be handled without regularization. Here, for the real part we find MATH . Next, we expand the linear term in the energy for small MATH. We find MATH . In a now-familiar step (compare REF), we introduce a path of integration MATH that agrees with MATH in the support of MATH and then connects the final point of support to MATH. Then, using analyticity of MATH, this expression becomes MATH . Combining these calculations, and making an identification with the derivative of MATH along the contour MATH, we find that MATH where the derivative along the contour is meant. It is sufficient to integrate over the support of MATH. By REF , the function MATH is constant along each component of the support of MATH, which proves the theorem.
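The regularization above interprets iterated integrals as principal values. The sketch below (with hypothetical integrands) illustrates the symmetric-excision definition of the Cauchy principal value on [−1, 1]: the odd singular part cancels, while regular parts survive unchanged.

```python
def principal_value(f, eps=1e-4, n=100000):
    """PV of the integral of f(x)/x over [-1, 1], approximated by
    integrating over [-1, -eps] and [eps, 1] with the midpoint rule;
    the symmetric excision cancels the odd singular part."""
    def quad(a, b):
        h = (b - a) / n
        return h * sum(f(a + (i + 0.5) * h) / (a + (i + 0.5) * h)
                       for i in range(n))
    return quad(-1.0, -eps) + quad(eps, 1.0)

# f = 1: integrand 1/x is odd, so the principal value is 0
pv_odd = principal_value(lambda x: 1.0)
# f = x: integrand is identically 1, so the value is the total length
pv_even = principal_value(lambda x: x)
```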
nlin/0012034
Under the transformation MATH, the image of the straight line making an angle MATH with the positive real axis in the MATH-plane is the graph of MATH, a circle for all MATH, which always contains the two points MATH. This transformation fixes the straight line in the MATH-plane making an angle MATH with the positive real axis, and takes the origin to MATH and the point at infinity to MATH. Furthermore, the real MATH line is mapped to a complete circle (for MATH), and all other ray components of MATH are mapped to various circular arcs connecting MATH to MATH. The circle of radius MATH centered at the origin in the MATH-plane is mapped to a circle of radius MATH centered at MATH. Since MATH differs from the angles of all components of MATH, and since we are assuming that MATH, the image MATH of the contour is compact. For contours MATH with the additional symmetry REF, which by definition contain the real axis in the MATH-plane, the condition MATH is necessary for compactness of MATH. The image of the contour MATH under this transformation is illustrated in REF. It is immediate from the definition and the unimodularity of MATH that the jump matrix MATH is also unimodular. The fact that MATH satisfies the interior smoothness condition follows from the corresponding local NAME smoothness of MATH on finite parts of MATH, which is preserved under composition with the smooth map MATH; the local smoothness near MATH follows from the decay condition satisfied by MATH. The jump matrix MATH is compatible at the self-intersection points MATH and MATH by the corresponding property of MATH and the continuity of the map MATH at MATH and MATH. At the other self-intersection point MATH, the compatibility condition follows from the decay of MATH at infinity. The correspondence between solutions of the two NAME problems is set up as follows. Given a matrix MATH solving the local NAME REF, the matrix MATH is a solution of the umbilical NAME REF.
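The geometric facts used here, that a fractional-linear transformation carries straight lines to circles passing through the images of the origin and the point at infinity, can be checked numerically. The Cayley-type map below is only a hypothetical stand-in for the elided transformation:

```python
def cayley(z):
    """A model fractional-linear map: z -> (z - i)/(z + i) sends the
    real line onto the unit circle, with 0 -> -1 and infinity -> 1.
    (Hypothetical stand-in for the paper's elided transformation.)"""
    return (z - 1j) / (z + 1j)

# images of real points all land on the unit circle
images = [cayley(complex(x, 0.0))
          for x in (-100.0, -2.0, -0.5, 0.0, 0.5, 2.0, 100.0)]
```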
Conversely, given a matrix MATH satisfying the umbilical NAME REF, the matrix MATH satisfies the local NAME REF. These formulae make sense because, by taking determinants of the jump conditions for the two problems and using the unimodularity of the jump matrices in conjunction with the NAME smoothness of the boundary values and NAME's theorem, one sees that MATH and MATH. A similar argument (taking ratios of matrices and using the jump relations, NAME boundary conditions, and NAME's theorem) shows that these formulae are injective transformations.
nlin/0012034
Consider first MATH acting on MATH. Using the representation REF, the restriction of the result to a component MATH of the boundary can be written as MATH. That this operator is bounded into MATH is clear from REF. Summing the norm estimates over the components MATH then gives the boundedness of MATH. Now consider MATH acting on MATH. Using the representation REF, the restriction of the result to a component MATH of the boundary can be written as MATH. The boundary value on the right-hand side is ``minus'' because the approach to the boundary is from the right of MATH with its orientation (recall that by convention we are taking the boundary MATH of a simply-connected domain MATH to be oriented with MATH on the left). By similar arguments, it is then clear that MATH is bounded on MATH. However, in this case, more is true. If MATH is one of the self-intersection points of MATH, then at the point MATH the above formula reads MATH, where the sum is taken over all components MATH, and it is clear that the value at the mutual corner point MATH is the same in all components MATH. This, along with the usual argument (as in CITE, page REF) that piecewise NAME functions that are continuous are globally NAME, implies that MATH is actually bounded from MATH to MATH. Similar arguments establish the analogous results for MATH.
nlin/0012034
Again, one proceeds by decomposing the NAME operators MATH according to REF or REF depending on the space, and then applies REF componentwise. When the range of the transformation is MATH, one shows continuity at the intersection points exactly as in the proof of REF .
nlin/0012034
We construct such a factorization algorithmically as follows. The factorization will be carried out locally at each intersection point, so first select for each MATH a number MATH sufficiently small that MATH, where MATH is the open disk of radius MATH centered at MATH, contains no other intersection points and that each circle of radius MATH centered at MATH meets each arc terminating at MATH exactly once. For MATH where the superscript MATH denotes the complement, that is, outside all disks, set MATH and MATH. Now, letting MATH be an intersection point, we will specify the factorization for MATH. Let the open arcs meeting at MATH be enumerated in counterclockwise order about MATH: MATH (here MATH is even but may depend on MATH). Let the unique point in MATH common to the boundary of MATH be denoted MATH. For MATH begin the factorization by setting MATH, and therefore MATH. Now suppose that a factorization MATH and MATH has been constructed on MATH. We now describe how to extend the factorization to MATH. Suppose first that the region bounded by MATH, MATH, and the boundary of the disk is a component MATH of the ``plus" region MATH. Let MATH be a MATH map from MATH into MATH with MATH and MATH. Such a map exists because MATH is arcwise connected CITE. Then, for MATH, we set MATH, and MATH. On the other hand, if the region bounded by MATH, MATH, and the boundary of the disk is a component of MATH, then we take MATH to be a MATH map from MATH into MATH with MATH and MATH, and then for MATH we set MATH and then MATH. Using this algorithm, we then construct factorizations on the part of each open arc MATH within MATH starting from MATH and working counterclockwise about MATH. This construction, when carried out under the compatibility condition (compare REF ) satisfied by the limiting jump matrices MATH at each endpoint, guarantees that the restrictions MATH uniformly satisfy the NAME continuity condition with exponent MATH on each open arc MATH. 
This follows from the interior smoothness condition, the continuity of the factorization at the disk boundaries, the NAME algebra property of NAME continuous functions, and the fact that MATH functions are NAME continuous with any exponent less than or equal to one. The construction also guarantees that MATH may be defined at each intersection point MATH to be continuous at the corner points for each MATH, and likewise that MATH may be defined to be continuous at the corner points for each MATH. It then follows (see CITE, page REF and the appendices) that the functions MATH and MATH are in MATH and MATH, respectively. Finally, the invertibility (and in fact the unimodularity) of MATH follows directly from the above algorithm and the unimodularity of MATH.
nlin/0012034
First suppose that we are given a solution MATH of the integral REF in MATH. For MATH define MATH . Then, MATH is a solution of the umbilical NAME REF . The analyticity of MATH in MATH and the normalization MATH follows directly from the representation REF and the properties of elements of MATH. That the MATH boundary values of MATH satisfy the jump relations follows from simply inserting REF into the jump relations and using the NAME formula in conjunction with REF . To show the injectivity of the map MATH observe that the NAME integral representation REF implies that MATH if and only if MATH. At the same time, since MATH and MATH both satisfy REF , it follows that MATH . Putting these two together gives MATH where we have used the NAME formula. From the invertibility of MATH it follows that MATH. On the other hand, suppose we are given a solution MATH of the umbilical NAME REF . For MATH, set MATH . That these two expressions yield the same result follows from the factorization of the jump matrix established in REF and the jump conditions satisfied by MATH on MATH. Moreover, it is clear that MATH is in both MATH and MATH; therefore it is an element of MATH. Also, the map MATH is injective because the matrix functions MATH are invertible. Now observe that MATH . Here, in the next-to-last step we have used the NAME formula, and in the final step we have used the continuity of the boundary values and computed a residue at MATH, which necessarily occurs within a component of either MATH or MATH. Finally, a similar argument shows that the composition of these two correspondences is the identity mapping. Consider REF evaluated for MATH: MATH with the last equality following from NAME 's theorem.
nlin/0012034
It is sufficient to prove the result for MATH. Consider the second term. Because MATH on MATH, we can write MATH . Similarly, because MATH on MATH, the third term can be written in the form MATH . To handle the first term, we first use the NAME formula to decompose: MATH where MATH has an analytic continuation into MATH and MATH has an analytic continuation into MATH. In whichever region contains MATH the corresponding function decays like MATH. Using this decomposition, we find MATH . The term that vanishes above does so because it is of the form MATH acting on a product of functions each having an analytic extension to MATH and decaying appropriately if MATH. Now, since by REF , MATH, we can again use MATH on this NAME algebra to finally write the first term of MATH in the form MATH . Similarly writing MATH with MATH, and applying similar reasoning, one finds that the fourth term in MATH can be written as MATH . With each term of MATH written in this way, it is not hard to see the compactness from more basic results already summarized. Consider MATH written in terms of the commutator as REF . The multiplication operator MATH is bounded from MATH to MATH. Then, from REF , the commutator is a bounded operator from MATH to the better space MATH. The result of this operation can be reinterpreted as an element of MATH, and from REF , the inclusion map MATH is a compact operator. The trivial inclusion map MATH is of course bounded, and finally composition with the operator MATH, bounded from MATH to MATH by REF does not alter the compactness. The term MATH expressed by REF is handled similarly. The term MATH given in REF is shown to be compact as follows. The multiplication operator MATH is a bounded map from MATH to MATH. By REF , the commutator is then a bounded map from MATH to the better space MATH. Again, the inclusion map MATH is compact by REF . Finally, by REF the operator MATH is bounded from MATH to MATH, and compactness is retained. 
The term MATH written in the form REF is handled similarly. All four terms of MATH have thus been shown to be compact on MATH.
nlin/0012034
By REF the operator MATH serves as both a left and a right pseudoinverse for MATH, and therefore MATH and MATH are both finite. This proves that MATH is a NAME operator on MATH. To calculate the index, we invoke continuity of the index for NAME operators with respect to uniform operator norm in MATH. The same arguments as above prove that the family of operators MATH is NAME for all MATH, and in particular for those real MATH between MATH and MATH. By the boundedness of MATH on MATH, this family of operators is continuous in operator norm as a function of MATH. Since for MATH the MATH, and since the index is a continuous integer-valued function over the whole range of MATH, we conclude that MATH.
nlin/0012034
Since MATH, MATH implies MATH, and then the boundedness of MATH implies, via the closed graph theorem, that MATH. Therefore the inverse MATH exists and is defined on the whole space MATH. Since MATH is bounded and hence closed, the inverse is also closed, and is therefore bounded by the closed graph theorem.
nlin/0012034
This follows from standard functional analysis results concerning the invertibility of bounded perturbations of the identity operator. The solution of the equation is furnished by NAME series.
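The series solution mentioned here (presumably a Neumann series) has a simple finite-dimensional shadow: for a hypothetical small-norm K, the partial sums of ∑ Kⁿ b solve (I − K)x = b.

```python
def mat_vec(K, v):
    """Matrix-vector product for nested-list matrices."""
    return [sum(K[i][j] * v[j] for j in range(len(v)))
            for i in range(len(K))]

def neumann_solve(K, b, terms=300):
    """Solve (I - K) x = b by the Neumann series x = sum of K^n b,
    which converges whenever the norm of K is below 1."""
    x = list(b)
    term = list(b)
    for _ in range(terms):
        term = mat_vec(K, term)
        x = [xi + ti for xi, ti in zip(x, term)]
    return x

K = [[0.2, 0.1], [0.05, 0.3]]   # hypothetical small-norm perturbation
b = [1.0, 2.0]
x = neumann_solve(K, b)
Kx = mat_vec(K, x)
residual = [b[i] - (x[i] - Kx[i]) for i in range(len(b))]
```

The residual of (I − K)x against b is negligible because the spectral radius of this K is well below 1, so the series converges geometrically.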
nlin/0012034
It follows directly from the integral equation satisfied by MATH and the NAME formula that, wherever the boundary values taken on MATH by MATH are finite, they satisfy the jump relation REF. The estimate REF also follows directly from the representation REF.
nlin/0012042
The proof of REF goes as follows. First, from REF we find MATH, which for MATH gives MATH. REF now follows by virtue of the resolvent identity REF.
nlin/0012042
The proof goes as follows. First notice that REF implies MATH. The conventional ``dressing'' argument compares grades on both sides of REF. The left-hand side involves terms with grades MATH, while the grades on the right-hand side lie between MATH and MATH. Consequently, each side of REF lies in the zero-grade sub-algebra. Correspondingly, the contributions of the above terms are equal to MATH. The last equality ensures that MATH is in MATH, and therefore the transformation generated by REF or REF is well-defined. To complete the proof of REF we will show that the algebra of transformations from REF closes and commutes with the isospectral flows. Let us first discuss the closure of the algebra. Consider MATH. To proceed we need to find MATH: MATH. Inserting these results into REF, we get MATH after comparing with the algebra of generators in MATH and noticing that the left-hand side, after dressing by MATH and projecting on the negative modes, becomes MATH. Now, consider commutation with the isospectral flows. Since MATH, we have MATH, and the same arguments as above yield this time MATH.
nlin/0012042
The proof follows by taking MATH in REF, which produces MATH. Since MATH is equal to MATH and MATH, we get from REF: MATH.
nlin/0012042
We make use of the property MATH satisfied by the trace MATH. Accordingly, MATH, and therefore MATH, which is equal to MATH. The middle term vanishes, being equal to MATH, while the first and last terms combine to give MATH, as announced in the proposition.
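The trace property invoked at the start is elided; if it is the cyclic property tr(AB) = tr(BA) (a guess consistent with the cancellations in the computation), it is easy to check numerically:

```python
import random

random.seed(7)

def trace(M):
    """Sum of diagonal entries of a nested-list square matrix."""
    return sum(M[i][i] for i in range(len(M)))

def mat_mul(A, B):
    """Product of two nested-list square matrices of equal size."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# two random 3x3 matrices: tr(AB) and tr(BA) agree
A = [[random.uniform(-1.0, 1.0) for _ in range(3)] for _ in range(3)]
B = [[random.uniform(-1.0, 1.0) for _ in range(3)] for _ in range(3)]
```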
nlin/0012042
The proof follows from the direct calculation MATH, where we used relations REF. REF now follows from the commutativity of MATH and MATH, and from the fact that MATH.
nlin/0012042
Indeed, MATH. The desired result now follows by recalling that the quantities MATH are local, as shown in REF.
quant-ph/0012127
See CITE.
quant-ph/0012127
Diagonalize MATH, that is, MATH, and let MATH. Clearly MATH. Now consider the quantum hypergraph MATH, whose edges also obey the upper bound MATH, and which has edge average MATH. Now let MATH be i.i.d. with MATH. Then we can estimate, with REF applied to the variables MATH: MATH, which is smaller than MATH if MATH, in which case the desired covering exists.
quant-ph/0012127
We may assume that the support of MATH contains the support of MATH; otherwise the theorem is trivial. Consider the positive random variable MATH, which has expectation MATH. Since the events MATH and MATH coincide, we have to show that MATH. This is seen as follows: MATH. Taking traces, and observing that a positive operator which is not less than or equal to MATH must have trace at least REF, we find MATH, which is what we wanted.
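The trace step, that a positive operator not bounded by the stated operator must have trace at least the stated amount, is an operator version of Markov's inequality. A scalar sketch with a hypothetical exponential sample:

```python
import random

random.seed(11)

# scalar analogue of the trace argument: for a nonnegative random
# variable X and threshold a > 0, P(X >= a) <= E[X] / a
samples = [random.expovariate(1.0) for _ in range(200000)]
a = 3.0
mean = sum(samples) / len(samples)
tail = sum(1 for x in samples if x >= a) / len(samples)
```

For the exponential distribution the true tail e⁻³ ≈ 0.05 sits far below the Markov bound of roughly 1/3, illustrating how crude but robust the estimate is.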
quant-ph/0012127
Observing MATH (because MATH is operator monotone, see REF ) we find MATH (by REF ).
quant-ph/0012127
Observe that MATH is equivalent to MATH, and apply the previous theorem.
quant-ph/0012127
A direct calculation: MATH . Here the second line is because the mapping MATH is bijective and preserves the order, the third because for commuting operators MATH, MATH is equivalent to MATH, and the last line by REF .
quant-ph/0012127
Using the previous theorem with MATH and MATH, we find MATH. Here everything is straightforward, except for the third line, which is by the NAME-NAME inequality REF.
quant-ph/0012127
The second part follows from the first by considering MATH, and the observation that MATH. To prove it we apply REF with MATH: MATH. Now using MATH (which follows from the validity of the estimate for real MATH, MATH: MATH, which in turn is just the convexity of MATH) we find MATH. Hence MATH, and choosing MATH the right-hand side becomes exactly MATH. To prove the last claim of the theorem, consider the variables MATH with expectation MATH and MATH, by hypothesis. Because of MATH we can apply what we just proved to obtain MATH, the last line by the inequality MATH already used above.
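The parenthetical step is described as "just the convexity" of an elided function. The prototypical estimate of this kind bounds the exponential on [0, 1] by its chord, as in the standard Bernstein-Chernoff argument; a numerical sketch:

```python
import math
import random

random.seed(13)

def chord_gap(lam, x):
    """Convexity of t -> exp(lam * t): on [0, 1] the function lies
    below its chord, i.e. exp(lam*x) <= 1 + x*(exp(lam) - 1).
    Returns the (nonnegative) gap between chord and function."""
    return 1.0 + x * (math.exp(lam) - 1.0) - math.exp(lam * x)

# random spot checks of the inequality over lam in [-3, 3], x in [0, 1]
gaps = [chord_gap(random.uniform(-3.0, 3.0), random.uniform(0.0, 1.0))
        for _ in range(5000)]
```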
quant-ph/0012127
Draw edges at random; that is, consider i.i.d. random variables MATH with MATH. Then we obtain, using REF: MATH, the third line holding only if MATH. Now the last expression is smaller than MATH for MATH, justifying our estimates ex post. Hence for MATH as in the theorem there exists a covering with MATH edges.
quant-ph/0012127
The second estimate follows by applying REF with the distribution MATH. The first is proved by induction on MATH. The case MATH is trivial, and the case MATH is seen as follows: let MATH be a minimal-weight generalized covering of MATH, that is, MATH. What we have to show is that MATH, which means we have to find a distribution MATH such that MATH. With MATH this is obviously satisfied. Now assume MATH, and let MATH be a minimal-weight generalized covering of MATH. Define a probability distribution MATH on MATH by MATH. Multiplying the relation MATH by MATH (for a one-dimensional projector MATH on MATH) from both sides and taking the trace over the last factor, we find MATH. This means that we have a generalized covering of MATH, and hence MATH. Thus for all MATH, which implies MATH, which in turn implies MATH.
quant-ph/0012128
See CITE: this is just the monotonicity of the mutual information (data processing inequality) under the completely positive and trace preserving map MATH from the commutative algebra generated by the MATH as mutually orthogonal idempotents to the algebra of linear operators on MATH. MATH .
quant-ph/0012128
This is essentially the sub-additivity of entropy (see CITE, p. REF). MATH .
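On diagonal (classical) states, subadditivity of von Neumann entropy specializes to the Shannon-entropy statement H(A,B) ≤ H(A) + H(B). A quick numeric check of that classical special case (the random joint distribution below is illustrative):

```python
import math, random

def shannon_entropy(p):
    """Shannon entropy (in nats) of a probability vector; zero terms skipped."""
    return -sum(x * math.log(x) for x in p if x > 0)

random.seed(2)
# random joint distribution on a 4 x 5 alphabet
joint = [[random.random() for _ in range(5)] for _ in range(4)]
z = sum(map(sum, joint))
joint = [[x / z for x in row] for row in joint]

marg_a = [sum(row) for row in joint]
marg_b = [sum(joint[i][j] for i in range(4)) for j in range(5)]
h_ab = shannon_entropy([x for row in joint for x in row])

# subadditivity: H(A, B) <= H(A) + H(B), with equality iff A, B independent
assert h_ab <= shannon_entropy(marg_a) + shannon_entropy(marg_b) + 1e-12
```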
quant-ph/0012128
See CITE: it is another special case of monotonicity, known as coarse graining. For a direct proof observe that MATH and by the concavity of NAME entropy MATH . MATH .
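The concavity invoked here can be sanity-checked numerically in the classical case: the Shannon entropy of a mixture dominates the corresponding mixture of entropies. A small sketch on random distributions (illustrative data only):

```python
import math, random

def shannon_entropy(p):
    """Shannon entropy (in nats) of a probability vector; zero terms skipped."""
    return -sum(x * math.log(x) for x in p if x > 0)

def random_dist(k, rng):
    v = [rng.random() for _ in range(k)]
    z = sum(v)
    return [x / z for x in v]

rng = random.Random(3)
p, q = random_dist(6, rng), random_dist(6, rng)
for lam in (0.1, 0.3, 0.5, 0.7, 0.9):
    mix = [lam * a + (1 - lam) * b for a, b in zip(p, q)]
    # concavity: entropy of a mixture dominates the mixture of entropies
    assert shannon_entropy(mix) >= (lam * shannon_entropy(p)
                                    + (1 - lam) * shannon_entropy(q)) - 1e-12
```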
quant-ph/0012128
See CITE, p. CASE: MATH .
quant-ph/0012128
Because of MATH we may assume that MATH. Thus we have to prove that MATH . But this is equivalent to MATH which in turn is equivalent to MATH, and this is obvious. MATH .
quant-ph/0012128
CASE: is equivalent to MATH, which is immediate from the definition. REF. is essentially REF. REF. follows from REF. Finally, REF. and REF. are easy consequences of REF. MATH .
quant-ph/0012128
CASE: follows from MATH, and the definition. To prove REF., we first do the lower bound (the other follows from this straightforwardly): MATH where the inequality is with MATH, MATH (by REF . and REF), and by REF . Hence the subsequent equalities are valid with MATH and MATH . By REF we conclude MATH. Finally, REF. and REF. are easy consequences of REF. MATH .
quant-ph/0012128
CASE: follows from NAME's inequality (compare CITE). REF. is seen as follows: with the eigenstates MATH of MATH define MATH with the conditional expectation MATH. Then it is obvious that MATH . From the definitions it can be directly verified that, with MATH, MATH, hence by REF MATH, and with REF. the claim follows. For REF. observe MATH and denoting the right-hand side by MATH, the claim follows from REF., observing that MATH by REF . and REF. Again, REF. and REF. are easy consequences. MATH .
quant-ph/0012128
CASE: is obvious, and REF. follows from REF. To prove REF., first calculate MATH with MATH . Observe that by REF MATH . But because of MATH we get MATH thus we conclude MATH where MATH . MATH .
quant-ph/0012128
From REF , we use the following estimates: for fixed MATH and MATH there exists a constant MATH such that MATH . Instead of REF , use REF in all steps of the proof in REF. Then, choosing MATH such that MATH and with MATH, the theorem follows. MATH .
cs/0101006
By the convexity-preserving properties of the NAME model, MATH can be represented as an intersection of halfspaces MATH for some index set MATH. But then MATH is an intersection MATH of convex hyperballs.
cs/0101006
Since we can represent MATH as an intersection of halfspaces, we can represent MATH as an intersection of hyperradius-MATH hyperballs. By the convexity preserving property of intersection, the result follows from its special case in which MATH is a halfspace and MATH is a hyperball. So, for the rest of the proof, we assume that we have this special case. Form a halfspace model of MATH by choosing as the point at infinity a point MATH that is on the boundary of MATH and exterior to MATH; if no such point exists, then all of MATH is contained in MATH so the intersection is trivially convex. In this halfspace model, as illustrated in REF , MATH is modeled as a tilted Euclidean hyperplane, while MATH is modeled as a Euclidean ball, so their intersection is modeled by a Euclidean disk, which must correspond to one of five types of sets in the intrinsic hyperbolic geometry on MATH: a ball, horoball, hyperball, halfspace, or the complement of a hyperball. We must rule out the last of these cases, which is not convex, and would be modeled by a Euclidean disk that is less than half contained within the halfspace model. In the halfspace model, hyperradius is modeled by the angle at which a surface meets the boundary of the model, so the condition that MATH implies that the hyperplane modeling MATH meets the boundary more steeply than does the ball modeling MATH. Consider the Euclidean line MATH through the Euclidean center of the ball modeling MATH, and perpendicular to the Euclidean hyperplane modeling MATH. Because of this relation on the angles of the two surfaces, the two points where MATH meets the boundary of MATH are both contained within the halfspace modeling MATH. Therefore, the Euclidean center of the disk modeling MATH (which lies on MATH) is also contained in the halfspace, so the portion of the disk contained in the halfspace is more than half. 
Thus, the disk cannot model a hyperball complement, and must be one of the other four possibilities, which are all convex.
cs/0101006
Embed MATH as the unit sphere in MATH, and consider it to be the sphere at infinity of a NAME model of a hyperbolic space MATH. Then each of the input spheres MATH is the boundary of a unique hyperplane MATH in MATH. For each input sphere, define a function MATH giving the (spherical or Euclidean) radius of the image of the sphere under a NAME transformation preserving the unit ball and taking MATH to the viewpoint of the transformed NAME model. Then by symmetry, MATH depends only on the hyperbolic distance from MATH to MATH, so its level sets are the lens-shaped neighborhoods of MATH REF . Thus we again have a nested convex family MATH and can solve our optimal viewpoint problem as a quasiconvex program.
cs/0101006
Embed MATH as the unit sphere in MATH, and consider it to be the sphere at infinity of a NAME model of a hyperbolic space MATH. Then the endpoints of each input edge MATH form the (infinite) endpoints of a unique hyperbolic line MATH in MATH. For each input edge, define a function MATH giving the arc length of the image of the edge under a NAME transformation preserving the unit ball and taking MATH to the viewpoint of the transformed NAME model. Then by symmetry, MATH depends only on the hyperbolic distance from MATH to MATH, so its level sets are the banana-shaped neighborhoods of MATH REF . Thus we again have a nested convex family MATH and can solve our optimal viewpoint problem as a quasiconvex program.
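Both this argument and the previous one reduce the optimal viewpoint problem to a quasiconvex program: minimize the pointwise maximum of functions whose sublevel sets are convex. The papers solve these as LP-type problems in general dimension; purely as a one-dimensional illustration, the maximum of quasiconvex functions is unimodal and can be minimized by ternary search (the toy "sites" below are illustrative, not from the paper):

```python
def minimize_max_quasiconvex(fs, lo, hi, iters=200):
    """Ternary search for the minimizer of F(x) = max_i f_i(x) on [lo, hi].
    Valid when F is unimodal, which holds here since a maximum of quasiconvex
    functions is quasiconvex. (The papers solve the general LP-type problem;
    this is only the 1-D case.)"""
    F = lambda x: max(f(x) for f in fs)
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if F(m1) <= F(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

# toy 1-D "viewpoint" problem: minimize the largest distance to given sites
sites = [-2.0, 0.5, 3.0]
fs = [(lambda s: (lambda x: abs(x - s)))(s) for s in sites]
x_star = minimize_max_quasiconvex(fs, -10.0, 10.0)
assert abs(x_star - 0.5) < 1e-6   # midpoint of the extreme sites
```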
cs/0101006
The NAME triangulation of the points can be computed in MATH time, is NAME (due to its definition in terms of empty circles), forms a planar graph with MATH edges, and is guaranteed to contain the shortest transformed distance among the points. Therefore, applying REF to the NAME triangulation gives the desired result.
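The guarantee that the triangulation contains the shortest distance rests on the empty-circle property: the disk having the closest pair as diameter contains no third point (such a point would be closer to one of the two endpoints than they are to each other), so the closest pair is always a Delaunay (indeed Gabriel) edge. A brute-force check of this witness on random points:

```python
import itertools, random

random.seed(4)
pts = [(random.random(), random.random()) for _ in range(40)]

def dist2(p, q):
    """Squared Euclidean distance."""
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

# closest pair by brute force
p, q = min(itertools.combinations(pts, 2), key=lambda e: dist2(*e))

# empty-circle witness: the disk with diameter pq contains no third point
# (a point inside it would be closer to p or to q than p is to q),
# so pq is an edge of the Delaunay triangulation
center = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
r2 = dist2(p, q) / 4
assert all(dist2(center, s) >= r2 - 1e-12 for s in pts if s not in (p, q))
```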
cs/0101006
Initialize a graph MATH to be empty, then repeat the following process: choose a set MATH of MATH random pairs of vertices, apply REF to maximize the minimum distance MATH among pairs in MATH, and then add to MATH all pairs of transformed vertices that have distance less than MATH. Each iteration of the process adds a set of edges to MATH with expected cardinality MATH CITE, including at least one edge involved in the optimal basis of the problem for the complete graph. Therefore, the algorithm terminates in MATH iterations after having solved MATH quasiconvex programs with expected size MATH. The pairs closer than MATH can be listed in time MATH, where MATH is the number of pairs CITE.
cs/0101006
We view the unit ball as a NAME model of MATH, turning the problem into one of finding the optimal viewpoint for a different NAME model of the same hyperbolic space. As we now show, there is a convex set of viewpoints such that a particular edge MATH has some length MATH or greater, so the optimal viewpoint selection problem forms a quasiconvex program. To see this, we return to our unit ball in MATH. We view the whole of MATH as the boundary of a halfspace model of MATH. There is a unique halfspace MATH in MATH that intersects MATH in the complement of our original unit ball. Replace each edge MATH by a hyperbolic line; in the halfspace model, this is a semicircle perpendicular to MATH, disjoint from MATH, and having endpoints at MATH and MATH. Any NAME transformation of the unit ball can be extended uniquely to an isometry of MATH that preserves MATH. For each such transformation, consider the Euclidean hyperplane MATH in the halfspace model, parallel to the halfspace boundary and tangent to the smallest semicircle; the (Euclidean) height of MATH is just half the length of the shortest transformed edge. So another way of describing our problem is that we are seeking the transformation for which MATH is as high as possible; that is, it intersects the boundary of MATH in a MATH-sphere of minimum (hyperbolic) radius. Viewed as a hyperbolic object, MATH is just a horosphere. Thus, we can describe our problem in purely hyperbolic terms: we are given a halfspace MATH, and a collection of hyperbolic lines. We seek a horosphere, with its infinite point in MATH, which intersects the boundary of MATH in a minimum radius MATH-sphere and which touches all the lines. In order to analyze the level sets for this problem, let two values MATH be given, and from now on fix both halfspace MATH and line MATH in MATH. 
We say that a point MATH on the boundary of MATH is MATH-good if there exists a sphere MATH in MATH, with radius MATH, with its center in MATH, with MATH touching MATH, and with MATH forming a radius-MATH-sphere centered at MATH REF . We wish to show that the set of MATH-good points is convex. Let MATH be the hypersphere in MATH containing the centers of radius-MATH spheres that intersect the boundary of MATH in radius-MATH-spheres; then the hyperradius of MATH is some function MATH. The set of MATH-good points is then just the projection onto the boundary of MATH of MATH. So, the convexity of this set follows immediately from REF . Similarly, define a MATH-good point to be the center of a radius-MATH-sphere on the boundary of MATH, such that some horosphere with tangency in MATH intersects the boundary at that MATH-sphere and touches MATH. Then the set of MATH-good points is just the intersection of the sets of MATH-good points, as MATH ranges over all positive real numbers. Thus, it is also convex. Let MATH be the set of viewpoints in the original hyperbolic space MATH for which a given pair MATH maps to points at distance at least MATH. Then MATH is the perpendicular projection from the boundary of MATH of the set of MATH-good points, where MATH is some monotonic function of MATH. Since projection preserves convexity, this set is convex, so the problem of selecting an optimal viewpoint forms a quasiconvex program.
cs/0101010
Without loss of generality, we assume MATH. The statements are proved as follows. CASE: For any node MATH in MATH, let MATH be the number of edges incident to MATH. Suppose MATH where MATH, MATH, and MATH. We partition MATH into MATH and MATH. Note that except for MATH, every set has MATH nodes. For any MATH, let MATH denote the subgraph of MATH induced by all the edges incident to MATH. Suppose that MATH is a maximum weight matching of MATH. Let MATH. Note that a maximum weight matching of MATH is also one of MATH. Therefore, we can compute MATH using the following algorithm: CASE: Compute a maximum weight matching MATH of MATH. CASE: For MATH to MATH, CASE: let MATH; CASE: compute a maximum weight matching MATH of MATH. CASE: Return MATH. The running time is analyzed below. Let MATH be the total number of edges in MATH. For MATH, MATH, and MATH. Using the matching algorithm by CITE, we can compute MATH in MATH time. Note that MATH and MATH. Hence, the whole algorithm uses MATH time. CASE: Since we suppose MATH, any matching of MATH contains at most MATH edges. Thus, for every MATH, we can discard the edges incident to MATH that are not among the MATH heaviest ones; the remaining MATH edges must still contain a maximum weight matching of MATH. Note that we can find these MATH edges in MATH time, and from REF , we can compute MATH from them in MATH time. The total time taken is MATH. CASE: The algorithm in the proof of REF can be adapted to find MATH in MATH time by using the MATH-time matching algorithm of CITE to compute each MATH. For any MATH, we redefine MATH to be the total weight of edges incident to MATH. Then we can use the same analysis to show that the adapted algorithm runs in MATH time.
cs/0101010
First, we show that using MATH time, we can find a set MATH of only MATH edges such that MATH still contains a maximum weight matching of MATH. In the proof of REF , it has already been shown that we can find in MATH time a set of MATH edges that contains MATH. Thus, it suffices to find in MATH time another set of MATH edges that contains MATH; MATH is just the smaller of these two sets. Let MATH be the subset of nodes of MATH that are endpoints of MATH. For any MATH, we select, among the edges incident to MATH, a subset of edges MATH, which is the union of the following two sets: CASE: MATH; CASE: MATH is among the MATH heaviest edges with MATH. Observe that MATH must contain a maximum weight matching of MATH, and these MATH edges can be found in MATH time. By discarding all unnecessary edges (that is, edges neither in MATH nor in MATH), we can assume that MATH has only MATH edges, while still containing MATH and MATH. This preprocessing requires an extra MATH time for finding MATH. Below, we describe a procedure which, given any bipartite graph MATH and any node MATH of MATH, finds MATH from MATH in MATH time, where MATH is the number of edges of MATH. Then, starting from MATH, we can apply this procedure repeatedly MATH times to find MATH from MATH. Since MATH is assumed to have only MATH edges, this process takes MATH time. The lemma follows. Let MATH and MATH be a maximum weight matching of MATH and MATH, respectively; denote by MATH the set of augmenting paths and cycles formed in MATH, and let MATH be the augmenting path in MATH starting from MATH. Note that the augmenting paths and cycles in MATH cannot improve the matching MATH; otherwise, MATH would not be a maximum weight matching of MATH. Thus, we can transform MATH to MATH using MATH. Note that MATH is indeed a maximum augmenting path starting from MATH, which can be found in MATH time CITE.
cs/0101010
The statements are proved as follows. CASE: Let MATH be the set of nodes MATH in MATH such that REF MATH and REF either MATH is a leaf or MATH for all children MATH of MATH. Since the subtrees rooted at the nodes of MATH are disjoint, MATH. Thus, MATH. Let MATH be the tree in MATH induced by MATH, that is, MATH contains exactly the nodes of MATH and the least common ancestor of every two nodes of MATH. Note that MATH has at most MATH leaves and at most MATH internal nodes. On the other hand, every node of MATH is an internal node of MATH; thus, MATH. CASE: For every node MATH, let MATH be the set of MATH's children that have a weight at least MATH each. Let MATH be the set of the rest of MATH's children. Note that MATH because MATH has critical degree MATH. Since the weight of MATH is at most MATH and MATH, by REF we can compute MATH in time MATH . Since MATH has at most MATH edges and MATH, by REF , we can compute MATH from MATH in time MATH . From REF , we can compute MATH for all MATH in time MATH . Since the subtrees rooted at some node in MATH are disjoint, MATH. This statement follows.
cs/0101010
The two statements are proved as follows. CASE: Observe that for any MATH, MATH has at most MATH edges relevant to the computation of MATH, and they can be found in MATH time. Let MATH be this set of edges. Below, we assume that, for every MATH, MATH has only edges in MATH. Otherwise, it costs MATH extra time to find all MATH and the assumption holds. For every MATH, let MATH. Obviously, the nonempty sets MATH form a partition of MATH. Below, we show that for any nonempty MATH, we can compute MATH for all MATH in MATH time. Thus, the time for computing MATH for all MATH is MATH, and REF follows. We now give the details of computing MATH for all MATH. Let MATH be the child of MATH where MATH is the largest over all children of MATH. Since MATH, every edge of MATH has weight at most MATH. By REF , and the fact MATH, we can find MATH in MATH time. By REF , MATH. Thus, we can compute MATH for all MATH in time MATH . CASE: We partition MATH as follows. Let MATH be the set of nodes in MATH with weight greater than MATH. For any MATH, let MATH . Obviously, MATH. Since MATH and MATH for all nodes MATH in MATH, MATH has critical degree one. By REF , we can compute MATH for all nodes in MATH using MATH time. Each MATH is handled as follows. For every node MATH, MATH has at most MATH children with weight at least MATH; otherwise MATH and MATH. Thus, MATH has critical degree MATH. By REF , we can compute MATH for all MATH in MATH time. In summary, we can compute MATH for all MATH in MATH time.
cs/0101010
It follows from REF and the fact that MATH and MATH.
cs/0101010
The two statements are proved as follows. REF . A centroid path MATH is attached to another centroid path MATH if the root of MATH is the child of a node on MATH. We define the level of a centroid path as follows. The root centroid path has level zero. A centroid path has level MATH if it is attached to some centroid path with level MATH. Note that any subtree attached to a centroid path with level MATH has size at most MATH. Thus, there are at most MATH different levels. Moreover, subtrees attached to centroid paths with the same level are all disjoint. For any MATH, denote by MATH the set of all centroid paths in MATH with level MATH. Then MATH. CASE: We divide the centroid paths into REF groups. We first consider the centroid paths on level MATH where MATH. For any such MATH, MATH . Thus, MATH. Next, we consider the centroid paths on level MATH where MATH. Note that for a path MATH on level MATH, MATH. Thus, MATH is at most MATH .
cs/0101010
The two statements are proved as follows. CASE: Let MATH be the root of MATH. By definition, every edge in MATH for any MATH corresponds to a pair of side trees MATH where MATH and MATH for some MATH such that MATH and MATH contain some common labels. We call MATH an intersecting side tree pair. Thus, MATH is at most the total number of intersecting side tree pairs in MATH. To simplify our discussion, let MATH and MATH. Consider any node MATH. MATH is a side tree in MATH. Let MATH be MATH. Note that each path in MATH starting from a node MATH to its descendant MATH corresponds to a simple path MATH in MATH from MATH to MATH. Let MATH be the node on MATH which is the child of MATH. By the definition of side trees, among all the side trees in MATH, at most MATH have roots on MATH. For all side trees MATH, MATH is an intersecting side tree pair if and only if either REF MATH is a node on the path from the root of MATH to the root MATH; or REF MATH is a node on some path MATH on MATH where MATH is an edge in MATH. The number of side trees MATH in REF is less than MATH. The number of side trees MATH in REF is less than MATH. Let sum-MATH denote MATH. Below we prove sum-MATH. In total, MATH, as claimed in this statement. It remains to prove sum-MATH. For any leaf MATH of MATH, let MATH be the maximal path in MATH ending at MATH such that every node on MATH has at most one child; denote MATH as the root of MATH. Let MATH, where MATH is a leaf of MATH. As MATH is a set of disjoint subtrees of MATH, MATH. Note that MATH. Thus, MATH. Let MATH be the tree obtained by removing all the paths in MATH. We have sum-MATH sum-MATH. Note that MATH contains at most half the leaves of MATH. Hence, sum-MATH. CASE: By REF , MATH is in the order of MATH .
cs/0101010
The two statements are proved as follows. CASE: Let MATH be the set of labels used in the side trees in MATH. First, we show that for all MATH, MATH can be computed in MATH time. Second, we recover MATH for all nontrivial MATH where MATH in MATH time. Then this statement follows. To compute MATH for all MATH, we apply the hierarchical bipartite matching algorithm of REF. Let MATH. For every node MATH, we associate with MATH the bipartite graph MATH and let MATH. Observe that MATH. In addition, for every node MATH, the total weight of all the edges incident to MATH in MATH is at most MATH. Hence, MATH and the associated bipartite graphs MATH satisfy the conditions for the hierarchical bipartite matching problem. For the time complexity, note that, for every MATH, the two node sets of MATH have size bounded by MATH and MATH, respectively. Thus, by REF , we can find MATH for all nodes MATH of MATH in MATH time. Next, we show how to recover MATH for all MATH, where MATH denotes the set of nodes MATH of MATH such that MATH and MATH is nontrivial. Note that every node MATH of MATH is also a node in MATH and every edge MATH of MATH corresponds to a path in MATH. Also observe that every node MATH must be a node in MATH; otherwise, MATH lies on a path corresponding to an edge MATH of MATH, and MATH contains a singleton node set and is trivial because MATH. Therefore, we can compute MATH for all MATH by traversing MATH once using MATH time. CASE: By REF , MATH .
cs/0101010
To derive the time for computing MATH, we simply sum the time bound stated in REF over all centroid paths of MATH. Observe that MATH . Thus, we can compute MATH in MATH time. By REF , MATH. Thus, this lemma follows.
cs/0101010
By REF , MATH. Thus, by REF , mast-MATH can be computed in MATH time. Since MATH, this theorem follows.
cs/0101015
First, we show that for each MATH assignment to MATH there is a unique value of MATH that minimizes MATH. Choose some MATH, and suppose that either MATH or MATH is MATH. Then MATH is also MATH by the constraint MATH or MATH. Alternatively, suppose MATH and MATH are both MATH; then if MATH is MATH, MATH can be decreased by MATH by setting MATH to MATH without violating any constraints. Thus, in any optimal integral solution to Linear Program MATH , MATH. Note that substituting MATH for MATH in MATH gives precisely MATH; thus minimizing MATH is equivalent to minimizing MATH. We now must show that all solutions to Linear Program MATH are integral. Every element of the constraint matrix is either zero or MATH. Each row has either a single nonzero element (for example, for the MATH bounds) or consists of zeroes and exactly one MATH and one MATH. Thus the matrix is totally unimodular, for example, using CITE. Since the right-hand side is integral, any vertex of the polytope defined by Linear Program MATH is integral CITE. Thus, all basic feasible solutions to Linear Program MATH are MATH vectors. So if MATH is a basic optimal solution to Linear Program MATH , then MATH. Conversely, if MATH, then the vector MATH in which MATH whenever MATH is nonzero is an optimal solution to Linear Program MATH , which is a basic optimal solution since an appropriate subset of the constraints MATH and MATH form a basis.
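The total-unimodularity step can be verified directly on small instances: for a matrix whose rows have either a single nonzero ±1 entry or exactly one +1 and one −1 (the pattern described in the proof), every square submatrix has determinant in {−1, 0, 1}. A brute-force check on an illustrative matrix of that shape (not the paper's actual constraint matrix):

```python
import itertools
from fractions import Fraction

def det(M):
    """Exact determinant via fraction-arithmetic Gaussian elimination."""
    M = [[Fraction(x) for x in row] for row in M]
    n, sign = len(M), 1
    for i in range(n):
        piv = next((r for r in range(i, n) if M[r][i] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != i:
            M[i], M[piv] = M[piv], M[i]
            sign = -sign
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            M[r] = [a - f * b for a, b in zip(M[r], M[i])]
    prod = Fraction(sign)
    for i in range(n):
        prod *= M[i][i]
    return prod

# rows are either a single +-1 (bound constraints) or one +1 and one -1
A = [[1, 0, 0, 0], [0, -1, 0, 0], [1, -1, 0, 0],
     [0, 1, -1, 0], [0, 0, 1, -1], [-1, 0, 0, 1]]

dets = {det([[A[r][c] for c in cols] for r in rows])
        for k in range(1, 5)
        for rows in itertools.combinations(range(len(A)), k)
        for cols in itertools.combinations(range(4), k)}
assert dets <= {Fraction(-1), Fraction(0), Fraction(1)}   # totally unimodular
```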
cs/0101015
We will show that the minimum MATH cuts in MATH correspond to MATH-minimizing protein sequences via Linear Program MATH of REF . Given a minimum MATH cut in MATH, let MATH be REF if MATH is in the MATH component, and REF otherwise. Similarly, let MATH be REF if MATH is in the MATH component, and MATH otherwise. Since no infinite-capacity edge MATH or MATH can appear in the cut, if MATH is in the MATH-component then MATH and MATH are as well. In terms of the MATH and MATH variables, we have MATH and MATH whenever MATH is nonzero, precisely the same constraints as in Linear Program MATH . Conversely, any MATH assignment MATH for which these constraints hold defines a MATH cut that does not include any infinite-capacity edge. Turning to the objective function, the total capacity of all edges in the cut is MATH where MATH is a constant and MATH is the objective function of Linear Program MATH . Thus, the capacity of the cut is minimized when MATH is. The rest follows from REF .
cs/0101015
Given a digraph MATH as input, the NAME maximum-flow algorithm takes MATH time CITE. We first apply REF to MATH to obtain MATH. We next use this maximum-flow algorithm to find a minimum MATH cut in MATH and then an optimal MATH from this cut. All these steps take MATH total time.
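The pipeline "run max flow, then read a minimum MATH cut off the residual graph" can be sketched with the textbook Edmonds-Karp algorithm (not the faster algorithm cited in the proof): once the flow is maximum, the nodes reachable from the source in the residual graph form the source side of a minimum cut. A self-contained sketch on a toy network:

```python
from collections import deque, defaultdict

def max_flow_min_cut(edges, s, t):
    """Edmonds-Karp max flow. Returns (flow value, source side of a minimum
    s-t cut); the cut side is the residual-reachable node set, as in the proof."""
    cap = defaultdict(int)
    adj = defaultdict(set)
    for u, v, c in edges:
        cap[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)            # reverse arcs live in the residual graph
    flow = 0
    while True:
        parent, q = {s: None}, deque([s])
        while q and t not in parent:      # BFS for a shortest augmenting path
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow, set(parent)      # BFS failed: parent = reachable set
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(cap[e] for e in path)     # bottleneck capacity
        for u, v in path:
            cap[(u, v)] -= b
            cap[(v, u)] += b
        flow += b

# toy network with source 0 and sink 3
flow, side = max_flow_min_cut([(0, 1, 3), (0, 2, 2), (1, 2, 1),
                               (1, 3, 2), (2, 3, 3)], 0, 3)
assert flow == 5 and 0 in side and 3 not in side
```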
cs/0101015
The graph MATH is obtained by applying REF . Let MATH be the contraction map from REF . Let MATH and MATH. The mapping MATH is defined as MATH. To show that MATH has at most MATH nodes, consider any node MATH in MATH. Let MATH be the maximum flow used to define MATH. If MATH, then MATH is reachable from MATH in the residual graph MATH, and MATH is contracted onto the MATH supernode; if MATH, then at least one of MATH or MATH is nonzero and MATH is in the same strongly connected component in MATH as at least one of MATH and MATH. In either case MATH is contracted onto a supernode that contains MATH or some MATH; since the same thing happens to all MATH, there are at most MATH supernodes in MATH: one for each MATH, plus one for each of MATH and MATH. Using ideals of MATH is justified by the observation that requiring MATH to be in an ideal and MATH to be out of it has no effect on the presence or absence of other nodes, as MATH has no predecessors and MATH has no successors in MATH; thus there is a one-to-one correspondence preserving all nodes except MATH and MATH between the ideals of MATH containing MATH but not MATH and the ideals of MATH.
cs/0101015
Represent each node MATH in MATH by the variable MATH. For each MATH, let MATH if there is a directed edge MATH in MATH, and MATH otherwise. To define MATH, let MATH for each MATH; and, for each MATH, let MATH equal MATH's out-degree MATH. Apply REF to the resulting function MATH to get a graph MATH. Define a flow MATH in MATH as follows: MATH . Note that this flow is a maximum flow because it saturates all edges leaving MATH as well as all edges entering MATH. (It happens that this is the unique maximum flow, but we do not need this fact, as the NAME construction works for any maximum flow.) Our next goal is to show that the residual graph MATH of this flow contracts to MATH. MATH has the following classes of edges: Since MATH has no successors and MATH has no predecessors in MATH, the supernodes in MATH containing MATH and MATH consist of only MATH and MATH, respectively. Each node MATH is in the same strongly-connected component as at least one of MATH or MATH, so no other supernodes exist in MATH that do not contain at least one of the nodes MATH. Note that every node-simple path from MATH to MATH in MATH is of the form MATH (with the subscripts of each MATH possibly reversed), which corresponds to a node-simple path MATH in MATH. The converse also holds. Therefore, MATH and MATH are in the same strongly connected component in MATH if and only if MATH and MATH are in the same strongly connected component in MATH. Now recall that for each MATH, MATH is defined in REF as the supernode in MATH containing MATH. So an edge from MATH to MATH in MATH corresponds to a node-simple path from MATH to MATH. By the above path-to-path correspondence, every directed edge from MATH to MATH in MATH corresponds to an edge from MATH to MATH, and vice versa. In summary, MATH is isomorphic to MATH, with the correspondence MATH for all MATH.
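The contraction of the residual graph used throughout these proofs is the standard strongly-connected-component condensation. A sketch via Kosaraju's two-pass algorithm, applied to a toy digraph (the paper's residual graph and flow are elided):

```python
from collections import defaultdict

def condensation(n, edges):
    """Kosaraju's two-pass SCC algorithm plus contraction. Returns a
    component id per node and the edge set of the condensed DAG."""
    g, gr = defaultdict(list), defaultdict(list)
    for u, v in edges:
        g[u].append(v)
        gr[v].append(u)
    order, seen = [], [False] * n
    for s in range(n):                       # pass 1: record DFS finish order
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, iter(g[s]))]
        while stack:
            u, it = stack[-1]
            nxt = next(it, None)
            if nxt is None:
                order.append(u)
                stack.pop()
            elif not seen[nxt]:
                seen[nxt] = True
                stack.append((nxt, iter(g[nxt])))
    comp, c = [-1] * n, 0
    for s in reversed(order):                # pass 2: sweep the reversed graph
        if comp[s] != -1:
            continue
        comp[s], stack = c, [s]
        while stack:
            u = stack.pop()
            for v in gr[u]:
                if comp[v] == -1:
                    comp[v] = c
                    stack.append(v)
        c += 1
    dag = {(comp[u], comp[v]) for u, v in edges if comp[u] != comp[v]}
    return comp, dag

# a 3-cycle with a 2-node tail condenses to a 3-node path
comp, dag = condensation(5, [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4)])
assert comp[0] == comp[1] == comp[2]
assert len({comp[0], comp[3], comp[4]}) == 3 and len(dag) == 2
```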
cs/0101015
The statements are proved as follows. CASE: Let MATH be a positive constant at most MATH, where MATH and MATH is the common denominator of all coefficients MATH and MATH. Below we show how to find a desired fittest protein sequence by minimizing MATH. First of all, since MATH and MATH are REF sequences, MATH. Then, since MATH is given, MATH can be minimized using REF in MATH time. Now suppose that MATH and MATH are two protein sequences with MATH and MATH. Then, MATH. Also, MATH. Therefore, MATH. Thus every MATH that minimizes MATH must also minimize MATH. The NAME distance term in MATH guarantees that from all MATH that do minimize MATH, minimizing MATH selects one that also minimizes this distance. CASE: To find a MATH with the largest (respectively, smallest) possible number of MATH residues, apply REF with all MATH and all MATH (respectively, MATH).
cs/0101015
By REF , the conditions below are necessary and sufficient for MATH to minimize MATH: CASE: MATH if MATH. CASE: MATH if MATH. CASE: MATH if MATH. CASE: MATH if MATH is an edge in MATH. We will build a graph MATH whose nodes are MATH, MATH, and MATH, and put in an edge MATH between any nodes for which the constraint MATH is required to minimize some MATH. In particular, we have the following classes of edges, for each MATH, where each class represents one of the above conditions: CASE: MATH and MATH whenever MATH. CASE: MATH and MATH whenever MATH. CASE: MATH and MATH whenever MATH. CASE: MATH whenever MATH. An assignment of MATH to MATH, MATH to MATH, and MATH to each node MATH satisfies MATH whenever MATH if and only if all of the constraints required for MATH to simultaneously minimize all MATH are satisfied. Note that such an assignment might not exist. To convert MATH into the desired graph MATH, and to check whether there exist any assignments meeting the constraints, contract each strongly connected component of MATH. If MATH and MATH are in the same strongly connected component, no simultaneous fittest solutions exist. Otherwise, let MATH in MATH be the supernode that contains MATH from MATH; let MATH be the supernode that contains MATH. Also, for each MATH in MATH, let MATH be the supernode into which MATH is contracted. Then MATH simultaneously minimizes all MATH if and only if MATH when MATH, MATH when MATH, and MATH when MATH is an edge in MATH - precisely the condition that the zeroes in MATH correspond to nodes in some ideal of MATH that contains MATH but not MATH. To show the running time, observe that constructing the graph MATH takes MATH time, which dominates the contraction step.
cs/0101015
REF enumerates the ideals of MATH in time MATH per ideal. For each ideal, invert the mapping MATH (in MATH time) to recover the corresponding protein sequence MATH.
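For intuition, ideals (downward-closed sets) of a small partial order can be enumerated by brute force; the algorithm cited in the proof achieves the stated per-ideal time bound, which the exponential sketch below makes no attempt to match:

```python
import itertools

def ideals(n, relations):
    """All ideals (downward-closed sets) of the partial order generated by
    pairs (u, v) meaning u < v. Brute force over all subsets: exponential,
    for tiny posets only; the algorithm cited in the proof is output-sensitive."""
    below = set(relations)
    for k in range(n):                        # Warshall transitive closure
        for i in range(n):
            for j in range(n):
                if (i, k) in below and (k, j) in below:
                    below.add((i, j))
    result = []
    for r in range(n + 1):
        for subset in itertools.combinations(range(n), r):
            s = set(subset)
            # downward closed: every element below a member is also a member
            if all(u in s for v in s for u in range(n) if (u, v) in below):
                result.append(frozenset(s))
    return result

# chain 0 < 1 < 2 has ideals {}, {0}, {0,1}, {0,1,2}
assert len(ideals(3, [(0, 1), (1, 2)])) == 4
# a 3-element antichain has all 8 subsets as ideals
assert len(ideals(3, [])) == 8
```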
cs/0101015
Any two fittest protein sequences MATH and MATH can differ only at indices MATH where MATH. Let MATH be the total weight of indices MATH. Then, MATH is an upper bound on the diameter. It is also a lower bound, as MATH and MATH are both ideals of MATH, and these ideals correspond to two protein sequences at distance MATH from each other.
cs/0101015
The statements are proved as follows. CASE: Let MATH be the ideals in MATH such that the sets of zeros in MATH are MATH, respectively. Let the maximal elements of MATH be the sets MATH over all MATH, where MATH is the symmetric difference operator. Thus, MATH consists of these sets and all of their subsets. We will show that MATH is the smallest downward-closed set system such that there is a MATH-chain between MATH and MATH in MATH. First, consider some set system MATH where for some MATH, MATH is not in MATH. Recall that MATH must be constant at the positions indexed by elements of MATH, where the constant depends on whether or not MATH is in the ideal in MATH corresponding to MATH. Partition MATH into sets MATH and MATH, where MATH consists of all MATH with MATH for all MATH. Since MATH and MATH differ on MATH, one of them is in MATH and the other is in MATH. However, since MATH, no protein sequence in MATH is MATH-adjacent to one in MATH. So there is no MATH-chain between MATH and MATH. Conversely, to exhibit a MATH-chain from MATH to MATH, it suffices to construct, by iteration, a MATH-chain from MATH to the protein sequence MATH whose zeroes are given by MATH; the case of MATH is symmetric. Let MATH. If at any iteration MATH, we are done. Otherwise, let MATH be a maximal element in MATH. Then, MATH is also an ideal. Since MATH, MATH and the protein sequences corresponding to MATH and MATH are MATH-adjacent. After at most MATH such iterations, we reach MATH. CASE: Let MATH and MATH be the protein sequences in MATH with the largest and the smallest possible numbers of MATH residues, respectively. In other words, MATH and MATH correspond to MATH and its empty ideal, respectively. If MATH includes MATH for all MATH, then by REF , there are MATH-chains between any MATH and MATH and thus between any two protein sequences in MATH. If it does not, then there is no MATH-chain between MATH and MATH. In summary, the maximal elements of MATH are the sets MATH over all MATH.
cs/0101015
For any length-MATH sequence MATH, let MATH be the set of all length-MATH sequences MATH with MATH for each MATH with MATH. Let MATH be the empty sequence. Then, MATH is the set of all length-MATH sequences. Observe that we can find an element MATH that minimizes MATH over MATH in time MATH by setting MATH in MATH for each MATH and applying REF . Furthermore, the set MATH is the disjoint union of the sets MATH for MATH, where MATH. To enumerate MATH in order of increasing MATH, we maintain a data structure that represents all protein sequences less those already returned as a disjoint union of sets of the form MATH, together with a MATH-minimizing element MATH for each, organized as a priority queue with key MATH. Initially, the queue contains only MATH, where MATH is computed using REF in time MATH. At each step, the smallest pair MATH is removed from the priority queue and is replaced by up to MATH pairs MATH, where MATH is a MATH-minimizing element of MATH; MATH is then returned. Each such step requires no more than MATH applications of REF , and the cost of the at most MATH priority queue operations is at most MATH, giving a total cost of MATH per value returned.
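The priority-queue enumeration scheme above can be sketched in code. The following Python sketch is illustrative only, not the paper's implementation: it assumes a simplified setting in which the sequences have fixed length over a small alphabet and the objective is a hypothetical sum of per-position costs, so the minimizer of each set of extensions can be found greedily.

```python
import heapq

def enumerate_by_cost(n, alphabet, cost):
    """Lazily yield all length-n sequences over `alphabet` in order of
    increasing total cost, where cost(s) = sum(cost[i][s[i]]).  The
    remaining search space is kept as a disjoint union of sets
    S(prefix) = {sequences extending prefix}, each represented in a
    priority queue by its cost-minimizing element, keyed by that cost."""
    def best(prefix):
        # Cheapest completion: pick the cost-minimizing letter per free slot.
        tail = tuple(min(alphabet, key=lambda a: cost[i][a])
                     for i in range(len(prefix), n))
        seq = prefix + tail
        return sum(cost[i][seq[i]] for i in range(n)), seq

    heap = [(*best(()), ())]          # (key, representative, prefix)
    while heap:
        key, seq, prefix = heapq.heappop(heap)
        yield key, seq
        # Split S(prefix) \ {seq} into disjoint subsets: for each position i
        # not fixed by prefix, fix seq's letters before i and take every
        # other letter at i.
        for i in range(len(prefix), n):
            for a in alphabet:
                if a != seq[i]:
                    child = seq[:i] + (a,)
                    k, s = best(child)
                    heapq.heappush(heap, (k, s, child))
```

Each pop returns the next-cheapest sequence and replaces its set by a bounded number of disjoint subsets, mirroring the per-step queue-operation bound in the proof.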
cs/0101015
The statements are proved as follows. CASE: The concavity follows from the minimality of MATH and the fact that for any fixed MATH, MATH is linear in MATH with slope MATH. Then, the continuous piecewise linearity and the counts of segments and corners follow from the fact that MATH. CASE: By the concavity of MATH, MATH. Let MATH. For all MATH, MATH is minimized if and only if MATH is at distance MATH from MATH. Similarly, for all MATH, MATH is minimized if and only if MATH is at distance MATH from MATH. Therefore, MATH and MATH. REF . REF is straightforward. To prove REF , let MATH be a protein sequence that has the smallest MATH over all protein sequences at distance MATH from MATH. Then, MATH, and MATH. On the other hand, by the minimality of MATH, MATH. Furthermore, by REF . Thus, MATH . REF follows from algebra and these two inequalities. CASE: For given MATH and MATH, let MATH be the line through the point MATH and with slope MATH. Let MATH (respectively, MATH) be the protein sequence MATH such that MATH is the largest (respectively, smallest) possible over MATH. Note that MATH, MATH, MATH, and MATH can be computed in MATH total time using REF . Furthermore, MATH and MATH contain the segments of MATH immediately to the left and the right of MATH, respectively. Consequently, MATH is a corner of MATH if and only if MATH. To compute the corners and slopes of MATH, we first describe a recursive corner-slope finding subroutine as follows. The subroutine takes as input an interval MATH where MATH together with MATH and MATH. It outputs all the corners MATH of MATH together with slopes MATH and MATH where MATH. There are two cases. CASE: MATH. Then, there is no corner over the interval MATH, and thus the subroutine call ends without reporting any new corner or slope. CASE: MATH. Then, compute MATH at which MATH and MATH intersect; by the concavity of MATH stated in REF , MATH. Also, compute MATH and MATH. There are two subcases: CASE: MATH. 
Then the subroutine returns MATH as a new corner together with slopes MATH and MATH and recurses on the intervals MATH and MATH. CASE: MATH. The subroutine returns no new corner or slope but recurses on the intervals MATH and MATH. In this case, the subroutine has found the line containing a new segment of MATH, that is, the segment through the point MATH. This completes the description of the subroutine. The running time of this subroutine is dominated by that for computing MATH and MATH and thus is MATH. With this subroutine, we can find the corners and slopes of MATH as follows. Recall MATH from the proof of REF . Note that if MATH or MATH, then MATH has no corner at MATH. So we compute MATH and MATH and apply the subroutine to the interval MATH to find all the corners and slopes of MATH. This algorithm makes MATH recursive calls to the subroutine since by REF , there are only MATH corners and segments, and each recursive call finds at least one new corner or segment. The running time of the algorithm is dominated by the total running time of these calls and thus is MATH, as claimed.
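As a concrete illustration of the intersect-and-evaluate recursion, the following Python sketch (my own, not from the paper) finds the corners of the concave lower envelope of an explicitly given set of lines. In the paper, each evaluation of the envelope is itself an optimization over protein sequences via REF ; here it is a direct minimum over the line list.

```python
def envelope_corners(lines, lo, hi):
    """Corners of the concave lower envelope F(x) = min_{(a,b)} (a*x + b)
    over [lo, hi].  `lines` is a list of (slope, intercept) pairs."""
    EPS = 1e-9

    def best(x):                       # minimizing line at x
        return min(lines, key=lambda ab: ab[0] * x + ab[1])

    def val(ab, x):
        return ab[0] * x + ab[1]

    corners = []

    def recurse(xl, ll, xr, lr):
        if abs(ll[0] - lr[0]) < EPS:   # same slope: one segment, no corner
            return
        xm = (lr[1] - ll[1]) / (ll[0] - lr[0])   # intersection of ll, lr
        fm = val(best(xm), xm)
        if fm >= val(ll, xm) - EPS:
            # The envelope attains the intersection: xm is a corner, and by
            # concavity F coincides with ll on [xl, xm] and lr on [xm, xr].
            corners.append(xm)
        else:
            # A third line lies strictly below: search both halves.
            lm = best(xm)
            recurse(xl, ll, xm, lm)
            recurse(xm, lm, xr, lr)

    recurse(lo, best(lo), hi, best(hi))
    return sorted(corners)
```

Each recursive call either terminates the interval or discovers a line carrying a new segment, which is what bounds the number of calls by the number of corners and segments.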
cs/0101015
Let MATH and MATH. Let MATH, that is, the set of indices MATH for which MATH changes from MATH to MATH when MATH changes from MATH to MATH. Similarly, let MATH. Further, let MATH and MATH. Let MATH. Then, MATH. Let MATH. Let MATH, which is the sum of the terms MATH that MATH loses when MATH changes from MATH to MATH. Similarly, MATH is the sum of the terms MATH that MATH gains when MATH changes from MATH to MATH. To show MATH, we need to prove MATH or equivalently MATH. To do so by contradiction, suppose MATH. There are two cases: CASE: MATH. Notice that the protein sequence MATH with MATH has a smaller fitness value for MATH than MATH does, contradicting the minimality of MATH. CASE: MATH. Then, MATH. Therefore, the protein sequence MATH with MATH has a smaller fitness value for MATH than MATH does, contradicting the minimality of MATH.
cs/0101015
The statements are proved as follows. CASE: The concavity follows from the minimality of MATH and the fact that for any fixed MATH, MATH is linear in MATH with slope MATH. Then, the continuous piecewise linearity and the counts of segments and corners follow from REF . The proofs for the case that MATH is the rightmost corner and the complementary case are similar. So we only detail the proof for the former. Note that MATH is not a corner. So for every fixed MATH, the line MATH goes through the corner MATH and the point MATH. Thus, MATH and MATH. By symmetry, for every MATH, we have MATH. In summary, MATH and MATH. CASE: For any given MATH and MATH, let MATH be the line through the point MATH and with slope MATH. For each MATH, let MATH (respectively, MATH) be the protein sequence MATH such that MATH has the largest (respectively, smallest) possible cardinality over MATH. Note that MATH, MATH, MATH, and MATH can be computed in MATH total time using REF . Furthermore, MATH and MATH contain the segments of MATH immediately to the left and the right of MATH, respectively. Consequently, for MATH, MATH is a corner of MATH if and only if MATH. Also, MATH is the leftmost corner, and the segment of MATH to the right of MATH is contained in MATH. To compute the corners of MATH, we first describe a recursive corner-finding subroutine as follows. The subroutine takes as input an interval MATH where MATH together with MATH and MATH. It outputs all the corners MATH of MATH with MATH. There are two cases. CASE: MATH. Then, there is no corner over the interval MATH, and thus the subroutine call ends without reporting any new corner. CASE: MATH. Then, compute MATH at which MATH and MATH intersect; by the concavity of MATH stated in REF , MATH. Also, compute MATH and MATH. There are two subcases: CASE: MATH. Then the subroutine returns MATH as a new corner and recurses on the intervals MATH and MATH. CASE: MATH. 
The subroutine returns no new corner but recurses on the intervals MATH and MATH. In this case, the subroutine has found the line containing a new segment of MATH, that is, the segment through the point MATH. This completes the description of the subroutine. The running time of this subroutine is dominated by that for computing MATH and MATH and thus is MATH. With this subroutine, we can find the corners of MATH as follows. If every MATH, then MATH is the only corner. Otherwise, let MATH be MATH divided by the smallest nonzero MATH. Note that for every MATH, MATH has no corner at MATH. Then, we compute MATH and MATH. We report the leftmost corner MATH and apply the subroutine to the interval MATH to find all the other corners of MATH. This algorithm makes MATH recursive calls to the subroutine since by REF , there are only MATH corners and segments, and each recursive call finds at least one new corner or segment. The running time of the algorithm is dominated by the total running time of these calls and thus is MATH as stated in the lemma.
cs/0101015
The proofs for the cases of unweighted and weighted NAME distances are similar. So we only detail the proof for the unweighted case. Our algorithm for finding all distance-minimizing choices of MATH has three stages. Use REF to find all the corners of MATH. For each corner MATH, use REF to compute the closest NAME distance MATH between MATH and any protein sequence in MATH. Let MATH be the smallest MATH over all corners. Then, by REF , report all MATH with MATH as desired choices of distance-minimizing MATH. Consider each segment of MATH. Let MATH and MATH be the vertical coordinates of the left and right endpoints of the segment. Find a suitable MATH in the open interval MATH as follows. If MATH is finite, then set MATH; otherwise, set MATH. Then use REF to compute the closest unweighted or weighted NAME distance MATH between MATH and any protein sequence in MATH. If MATH, then by REF , report that every MATH in the interval MATH is a desired distance-minimizing MATH. This completes the description of the algorithm. By REF , Stage REF takes MATH time. By REF , Stages REF also take MATH time. Thus, the total running time is as stated in the theorem.
cs/0101015
Note that each of the problems is in P, because we can recognize an element of MATH in polynomial time using REF . So to prove NAME we must only show that each problem is NAME. CASE: Reduce from the problem of counting the number of ideals in a dag, which is CITE. Given a dag MATH, apply REF to get a function MATH for which MATH is isomorphic to MATH. By REF , counting MATH is then equivalent to counting the number of ideals of MATH. CASE: The problem in REF is a special case of the problem in this statement. CASE: Using the same construction as in REF , we can reduce from the problem of computing the average cardinality of an ideal in MATH. To see that this latter problem is NAME, suppose that we can compute the average cardinalities of ideals in a MATH-node MATH and in an augmented graph MATH obtained from MATH by adding a single new node MATH and edges from every MATH to MATH. Let MATH be the average for MATH and MATH the average for MATH. Let MATH be the number of ideals in MATH. Then MATH for some MATH, while MATH, since the only new ideal in MATH consists of MATH and all other nodes, and thus has size MATH. Solving for MATH gives MATH, which can be computed from MATH, MATH, and MATH. CASE: This problem has the problem in REF as a special case with all MATH. CASE: To reduce the problem of counting protein sequences in MATH to counting protein sequences at a given unweighted NAME distance, take the dag MATH given by REF , and add to each node MATH a node MATH with edges MATH and MATH. Apply REF to this new graph MATH to obtain a function MATH for which MATH is in one-to-one correspondence with the set of ideals of the strongly-connected component graph MATH of MATH. Each strongly connected component of MATH consists of MATH and MATH for some MATH, so MATH is isomorphic to MATH. Now we choose MATH with MATH and MATH. Then, the contribution to MATH of each pair MATH is MATH regardless of their common value. 
Thus, the number of MATH-minimizing protein sequences equals the number of MATH-minimizing protein sequences at distance MATH from MATH, where MATH is the number of nodes in MATH.
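The algebraic step in the average-cardinality argument — recovering the ideal count N of G from the two averages A and A' via N*A + (n+1) = (N+1)*A', i.e. N = (n+1 - A')/(A' - A) — can be checked on a toy dag. The following Python sketch is illustrative only; the brute-force ideal enumeration and the example dag are my own.

```python
from fractions import Fraction
from itertools import combinations

def ideals(n, edges):
    """All downward-closed node sets (ideals) of a dag on nodes 0..n-1:
    if edge (u, v) is present, any ideal containing v must contain u.
    Brute force over all subsets; for illustration only."""
    out = []
    for r in range(n + 1):
        for s in combinations(range(n), r):
            s = set(s)
            if all(u in s for (u, v) in edges if v in s):
                out.append(frozenset(s))
    return out

def avg_ideal_size(n, edges):
    I = ideals(n, edges)
    return Fraction(sum(len(s) for s in I), len(I))

def ideal_count_from_averages(n, edges):
    """Recover the number of ideals of G from the average ideal size of G
    and of the augmented graph G' (a new node n that any ideal may contain
    only together with all old nodes, so the single new ideal has size
    n+1), using N*A + (n+1) = (N+1)*A' as in the proof."""
    A = avg_ideal_size(n, edges)
    A2 = avg_ideal_size(n + 1, edges + [(u, n) for u in range(n)])
    return (n + 1 - A2) / (A2 - A)
```

On the dag with edges 0 -> 2 and 1 -> 2 there are five ideals, and the formula recovers that count exactly from the two averages, showing why computing the average would let one count ideals.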
cs/0101015
REF follows from the fact that the problem in REF can be reduced to the problem in this statement in polynomial time. REF is proved as follows. Since we can recognize MATH-minimizing protein sequences using REF , the problem is clearly in NP. To show that it is NAME, we reduce from PARTIALLY ORDERED KNAPSACK, REF from CITE. The input to PARTIALLY ORDERED KNAPSACK consists of a partially-ordered set MATH, each element MATH of which is assigned a size MATH and a value MATH, together with an upper bound MATH on total size and a lower bound MATH on total value. The problem is to determine whether there exists an ideal MATH in MATH such that MATH and MATH. NAME and NAME note that the problem, even with MATH for all MATH, is NAME in the strong sense (meaning that there is some polynomial bound on the size of all numbers in the input with which it remains NAME). Given an instance of PARTIALLY ORDERED KNAPSACK with MATH for all MATH and all numbers bounded by some polynomial MATH, build a graph MATH where each MATH is represented by a clique MATH of MATH nodes, and there is an edge from MATH to MATH if and only if MATH. Note that because MATH, MATH has polynomial size. Apply REF to generate a function MATH (in polynomial time) such that MATH is isomorphic to the component graph MATH obtained by contracting all strongly connected components of MATH. Since the strongly connected components of MATH are precisely the cliques MATH, MATH is isomorphic to MATH, interpreted as a dag. In particular, any ideal of MATH corresponds to an ideal MATH of MATH. Let MATH. The norm of the corresponding vector MATH is MATH. Set MATH and MATH, and we have the problem stated in the theorem.
cs/0101016
If the peptide sequence is known, we can identify the nodes of MATH corresponding to the prefix subsequences of this peptide. These nodes form a directed path from MATH to MATH. Generally, the mass of a prefix subsequence does not equal the mass of any suffix subsequence, so the path contains exactly one of MATH and MATH for each MATH. A satisfying directed path from MATH to MATH contains all observed prefix subsequences. If each edge on the path corresponds to one amino acid, we can visit the edges on the path from left to right and concatenate these amino acids to form a peptide sequence that displays the tandem mass spectrum. If some edge corresponds to multiple amino acids, we obtain more than one peptide sequence. NAME, if the mass of a prefix subsequence coincidentally equals the mass of a suffix subsequence, so that the directed path contains both MATH and MATH, we can remove either MATH or MATH from the path and form a new path corresponding to multiple peptide sequences, among which is the real sequence.
cs/0101016
These statements are proved as follows. CASE: Given a mass MATH, MATH, MATH if and only if MATH equals one amino acid mass, or there exists an amino acid mass MATH such that MATH. If MATH is computed in the order from MATH to MATH, each entry can be determined in constant time since there are only REF amino acids. The total time is MATH. CASE: For any two nodes MATH and MATH of MATH, we create an edge for MATH and MATH, MATH, if and only if MATH and MATH. There are MATH pairs of nodes. With MATH, MATH can be constructed in MATH time.
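The table described in the first case is a subset-sum style recurrence over the fixed residue alphabet: an entry is true if the mass equals a single residue mass or some residue mass can be subtracted to reach an earlier true entry. A minimal Python sketch, using a few nominal integer residue masses as a stand-in for the full amino-acid table (an assumption of this sketch, not the paper's exact table):

```python
def decomposable_masses(M, residue_masses):
    """T[m] is True iff integer mass m can be written as a nonempty sum
    of values in `residue_masses`.  Entries are filled in increasing
    order of m; each looks only at the fixed-size residue alphabet, so
    the whole table costs O(M) time for a constant-size alphabet."""
    T = [False] * (M + 1)
    for m in range(1, M + 1):
        # Base case a == m (single residue), or recurse on T[m - a].
        T[m] = any(a <= m and (a == m or T[m - a]) for a in residue_masses)
    return T
```

For example, with nominal residue masses 57 (Gly), 71 (Ala), 87 (Ser), and 99 (Val), the table marks 128 = 57 + 71 as decomposable but not 56, which is below every residue mass.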
cs/0101016
Let MATH and MATH be the paths that correspond to MATH. If MATH, by definition, after removing node MATH from MATH, MATH contains exactly one of MATH and MATH for all MATH. If MATH, then MATH, and either MATH or MATH, which corresponds to REF or REF , respectively, in the algorithm, because either MATH or MATH, but not both, is in MATH. A similar analysis holds for the cases of REF or REF . The loop at REF uses previously computed MATH and MATH to fill up MATH and MATH. Thus the algorithm computes MATH correctly. Note that MATH, REF , and REF take MATH time, and thus the total time is MATH.
cs/0101016
These statements are proved as follows. CASE: Note that MATH. Without loss of generality, assume that a feasible solution MATH contains node MATH. Then there exists some MATH such that MATH is an edge in MATH and MATH. Therefore, we search the non-zero entries in the last row of MATH and find a MATH that satisfies both MATH and MATH. This takes MATH time. With MATH, we backtrack MATH to search for the next edge of MATH as follows. If MATH, the search starts from MATH to MATH until both MATH and MATH are satisfied; otherwise MATH, and then MATH and MATH. We repeat this process to find every edge of MATH. The process visits every node of MATH at most once in the order from MATH to MATH and from MATH to MATH. The total cost is MATH time. CASE: We compute MATH by means of REF and find a feasible solution by means of REF . The total cost is MATH time and MATH space. CASE: The proof is similar to that of REF . We can find all the feasible solutions by backtracking MATH, and each feasible solution costs MATH time and MATH space. Computing MATH and finding MATH solutions cost MATH time and MATH space in total.
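The fill-then-backtrack pattern can be illustrated on a simplified one-dimensional analogue of the matrix: a reachability table over a dag, filled forward in topological order and then walked backward from the sink to recover one feasible path. This Python sketch is my own simplification, not the paper's two-index matrix over prefix/suffix node pairs.

```python
def one_path(n, adj, s, t):
    """Recover one s-to-t path in a dag whose edges go from lower to
    higher node index.  First fill a reachability table D (D[v] True iff
    v is reachable from s) in index order; then backtrack from t, at
    each step choosing any in-edge whose tail is marked reachable,
    mirroring the backtracking step in the proof."""
    D = [False] * n
    D[s] = True
    for u in range(n):                 # forward fill, O(|V| + |E|)
        if D[u]:
            for v in adj[u]:
                D[v] = True
    if not D[t]:
        return None                    # no feasible solution
    pred = [[] for _ in range(n)]      # in-edges for backtracking
    for u in range(n):
        for v in adj[u]:
            pred[v].append(u)
    path = [t]
    while path[-1] != s:
        # Some predecessor must be reachable, since path[-1] is.
        u = next(u for u in pred[path[-1]] if D[u])
        path.append(u)
    return path[::-1]
```

As in the proof, the forward fill pays the table cost once, after which each recovered solution costs only a single backward walk over the nodes of the path.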
cs/0101016
Without loss of generality, let MATH be the entry we want to compute, where MATH. If MATH, then MATH as defined; otherwise MATH, and MATH if and only if MATH and MATH, which is equivalent to MATH and MATH. Thus both cases can be solved in MATH time.
cs/0101016
We retrieve consecutive edges starting from MATH, MATH, MATH, until the first MATH with MATH and MATH. Then we can fill MATH, MATH, MATH, and MATH immediately. Next, we start a new retrieval-and-fill process from MATH, and repeat this until MATH is visited. Eventually we retrieve MATH consecutive edges. A similar process can be applied to MATH. Using a common graph data structure such as a linked list, each consecutive edge can be retrieved in constant time, and thus MATH can be computed in MATH time. By definition, MATH if and only if there exists some MATH with MATH, MATH, and MATH. If we have computed MATH and MATH, then MATH can be computed in constant time by means of the proof in REF . To find the MATH for MATH, we can visit every inside edge that ends at MATH. Therefore the computation of MATH visits every inside edge exactly once, and the total time is MATH.