cs/0008010
As in the previous theorem, we achieve the upper bound by performing orthogonal flipturns only when no diagonal flipturn is available. However, we also choose orthogonal flipturns carefully if more than one is available. Say that an orthogonal flipturn is good if it can be followed by at least two diagonal flipturns and bad otherwise. We will perform a bad orthogonal flipturn only if no good orthogonal flipturn or diagonal flipturn is available. Let MATH be an orthogonal polygon. Without loss of generality, consider a forced orthogonal flipturn whose lid MATH lies on the top edge of MATH, and let MATH be the polygon resulting from this flipturn. See REF . The lid endpoints MATH and MATH must lie in two different pockets of MATH, since the flipturned pocket touches the top of MATH. The horizontal width of the pocket must be less than the horizontal width of MATH, so MATH cannot have both the upper left and upper right corners of MATH as vertices. Thus, by REF , any forced orthogonal flipturn can be followed first by a diagonal flipturn and then by at least one more (possibly orthogonal) flipturn. In particular, any bad flipturn can be followed by exactly one diagonal flipturn. Let MATH be a polygon with no diagonal pockets or good orthogonal pockets. Consider a bad orthogonal flipturn whose lid MATH is a subset of the top edge MATH of MATH, and let MATH be the resulting polygon. Exactly one of the top corners of MATH is a vertex of MATH. If this is the top right corner, call pocket MATH dexter; otherwise, call it sinister. Without loss of generality, suppose the pocket MATH is dexter. Let MATH be the polygon resulting from the only available diagonal flipturn, whose lid is the upper left edge of MATH. Since MATH must have no diagonal pockets, this flipturn moves vertex MATH to the upper left corner of MATH. See REF . 
If some pocket had a lid in MATH, that pocket would be inverted by the diagonal flipturn on MATH and MATH would have a diagonal pocket, contradicting our assumption that pocket MATH is bad. Similarly, if there is a bad pocket with lid in MATH, it cannot be dexter. Suppose there is a sinister pocket with lid MATH. Let MATH be a leftmost point in pocket MATH, and let MATH be a rightmost point in pocket MATH. See REF . The horizontal distance from MATH to MATH must be equal to MATH, and the horizontal distance from MATH to MATH must equal MATH, since both pockets are bad. But this is impossible, since MATH. We conclude that MATH must be the only lid on the top edge of MATH. Now consider the orthogonal pocket of MATH created when pocket MATH is flipturned. Its lid MATH lies on the right edge of MATH. We claim that this pocket must be good. Let MATH be the resulting polygon when this pocket is flipturned. Since MATH is the bottommost edge of pocket MATH, nothing in MATH lies above and to the right of vertex MATH, so the upper right vertex of MATH is not a vertex of MATH. Since the height of pocket MATH is less than the height of the original polygon MATH, the bottom right vertex of MATH is also not a vertex of MATH. Therefore, by REF , MATH can undergo at least two consecutive flipturns. We have just shown that any forced bad flipturn is immediately followed by a diagonal flipturn, a good orthogonal flipturn, and then two diagonal flipturns. Thus, any sequence of five consecutive flipturns contains at least three diagonal flipturns, which remove at least six vertices from the polygon.
cs/0008010
We call an edge of an orthogonal polygon a bracket if both its vertices are convex or both its vertices are concave. An orthogonal MATH-gon has at least four brackets (the highest, leftmost, lowest, and rightmost edges) and at most MATH brackets. We claim that flipturns do not increase the number of brackets, and that any orthogonal flipturn decreases the number of brackets by two. Let MATH be an orthogonal polygon and let MATH be the result of one flipturn. Any bracket of MATH that lies completely outside the flipturned pocket is still a bracket in MATH; any bracket completely inside the flipturned pocket is inverted, but remains a bracket. Thus, to prove our claim, it suffices to consider just four edges, namely, the two edges adjacent to each endpoint of the lid. After symmetry considerations, there are only three cases to check for orthogonal pockets and ten cases for diagonal pockets. These cases are illustrated in REF . Since each orthogonal flipturn removes two brackets, and no diagonal flipturn adds brackets, there can be at most MATH orthogonal flipturns. Since each diagonal flipturn removes two vertices, and no orthogonal flipturn adds vertices, there can be at most MATH diagonal flipturns. Thus, there can be at most MATH flipturns altogether.
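The bracket count used in this argument is easy to compute from a vertex list. Below is a minimal sketch, assuming the polygon is given as a counterclockwise list of integer coordinates; the function name and representation are illustrative, not from the paper.

```python
def count_brackets(poly):
    """Count brackets of an orthogonal polygon.

    poly: list of (x, y) vertices in counterclockwise order.
    A vertex is convex iff the polygon turns left there; an edge is a
    bracket iff both of its endpoints are convex or both are concave.
    (Names and representation are illustrative, not from the paper.)
    """
    n = len(poly)

    def is_convex(i):
        ax, ay = poly[i - 1]
        bx, by = poly[i]
        cx, cy = poly[(i + 1) % n]
        # Cross product of incoming and outgoing edge vectors:
        # positive means a left (convex) turn for a CCW polygon.
        return (bx - ax) * (cy - by) - (by - ay) * (cx - bx) > 0

    return sum(1 for i in range(n)
               if is_convex(i) == is_convex((i + 1) % n))
```

For a rectangle the count is 4 (all four edges), matching the lower bound stated above; an L-shaped hexagon also has exactly four brackets.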
cs/0008010
In any convexifying sequence, there are exactly MATH diagonal flipturns. Every orthogonal flipturn increases the perimeter of the polygon's bounding box by at least MATH. The initial bounding box has perimeter at least MATH, and the final rectangle has perimeter exactly MATH, so the maximum number of orthogonal flipturns is at most MATH.
cs/0008010
This follows directly from REF .
cs/0008010
We construct an orthogonal MATH-gon MATH essentially by following the proof of REF . MATH is a rectangle. MATH is an NAME hexagon, which is convexified by one flipturn. MATH is a rectangle with a rectangular orthogonal pocket in one side, which requires three flipturns to convexify. For all MATH, MATH consists of a rectangle with a single NAME pocket, where the tail of the L is an inverted and reflected copy of MATH. See REF . In the language of the proof of REF , MATH's only pocket is bad: flipturning it creates one diagonal pocket and one orthogonal pocket. If we flipturn diagonal pockets whenever possible, the first five flipturns eliminate six vertices and leave a distorted MATH. The theorem follows by induction.
cs/0008010
Let MATH be a simple polygon and let MATH be the result of one flipturn. As we argued in the proof of REF , it suffices to focus on brackets touching the endpoints of the lid. Let MATH and MATH denote the number of boundary brackets in MATH and MATH, respectively, so that MATH. For nondegenerate flipturns, it suffices to consider only flipturns with MATH, since MATH is never more than MATH. There are three cases to consider: no boundary brackets, one outer boundary bracket, and one inner boundary bracket. For each of these, there are nine subcases, depending on whether each lid endpoint becomes a convex vertex, becomes a concave vertex, or disappears after the flipturn. These cases are illustrated in REF . Standard degenerate flipturns always have two outer brackets, and both lid endpoints always become concave vertices. Thus, there are only three cases to consider, depending on the number of inner brackets, exactly as in REF . Similar arguments apply to extended flipturns.
cs/0008010
We define the potential MATH of a polygon to be its discrete angle plus half the number of brackets, that is, MATH. For the initial polygon MATH, we have MATH and MATH, so the initial potential MATH is at most MATH. For the final convex polygon, we have MATH and MATH, so the final potential MATH is at least MATH. By REF , every flipturn reduces the potential by at least one. Thus, after any sequence of MATH flipturns, the polygon must be convex.
cs/0008010
It suffices to store only the slopes and lengths of the edges in the proper order, without explicitly storing the vertex coordinates. Any flipturn reverses a contiguous chain of edges, namely, the edges of the flipturned pocket. Our goal, therefore, is to maintain a circular list of items subject to the operation MATH, which reverses the sublist starting with item MATH and ending with item MATH. For example, if the initial list is MATH, then MATH produces the list MATH, after which MATH produces MATH. We store the edges in the leaves of a balanced binary search tree, initially in counterclockwise order around the polygon. Rather than explicitly reversing chains of edges, we will store a reversal bit MATH at every node MATH, indicating whether that subtree should be considered reversed, relative to the orientation of the subtree rooted at MATH's parent. Initially, all reversal bits are set to MATH. Our algorithm for MATH is illustrated in REF . Without loss of generality, we assume that MATH appears before MATH in the linear order stored in the tree; otherwise, we simply toggle the reversal bit at the root and call MATH. First, we split the tree into three subtrees, containing the items to the left of MATH, the items between MATH and MATH, and the items to the right of MATH. Second, we toggle the reversal bit at the root of the middle tree. Finally, we merge the three trees back together. Each split or merge can be performed with MATH rotations (using red-black trees CITE, splay trees CITE, or treaps CITE, for example), and we can easily propagate the reversal bits correctly at each rotation.
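The reversal-bit technique described above can be sketched with an implicit treap in place of a red-black or splay tree (the proof allows any of the three). This sketch maintains a linear rather than circular list, and all names are illustrative:

```python
import random

class Node:
    __slots__ = ("val", "prio", "size", "rev", "left", "right")
    def __init__(self, val):
        self.val = val
        self.prio = random.random()
        self.size = 1
        self.rev = False          # lazy reversal bit, as in the proof
        self.left = self.right = None

def _size(t):
    return t.size if t else 0

def _push(t):
    # Propagate the reversal bit one level down before descending.
    if t and t.rev:
        t.left, t.right = t.right, t.left
        for c in (t.left, t.right):
            if c:
                c.rev = not c.rev
        t.rev = False

def _update(t):
    t.size = 1 + _size(t.left) + _size(t.right)

def _merge(a, b):
    if not a or not b:
        return a or b
    if a.prio > b.prio:
        _push(a)
        a.right = _merge(a.right, b)
        _update(a)
        return a
    _push(b)
    b.left = _merge(a, b.left)
    _update(b)
    return b

def _split(t, k):
    # Split into (first k items, the rest).
    if not t:
        return None, None
    _push(t)
    if _size(t.left) < k:
        l, r = _split(t.right, k - _size(t.left) - 1)
        t.right = l
        _update(t)
        return t, r
    l, r = _split(t.left, k)
    t.left = r
    _update(t)
    return l, t

def build(items):
    t = None
    for x in items:
        t = _merge(t, Node(x))
    return t

def reverse(t, i, j):
    """Reverse items i..j (0-based, inclusive): split, toggle bit, merge."""
    a, bc = _split(t, i)
    b, c = _split(bc, j - i + 1)
    if b:
        b.rev = not b.rev
    return _merge(_merge(a, b), c)

def to_list(t, out=None):
    if out is None:
        out = []
    if t:
        _push(t)
        to_list(t.left, out)
        out.append(t.val)
        to_list(t.right, out)
    return out
```

Each `reverse` costs expected logarithmic time: two splits, one toggled bit, and two merges, with reversal bits pushed down lazily as the tree is traversed.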
cs/0008010
We maintain the polygon vertices in a balanced binary tree, similarly to the proof of REF . The coordinates of the points are represented implicitly by storing a triple MATH at each internal node MATH, encoding an affine transformation to be applied to all edges in the subtree of MATH. Specifically, MATH is a translation vector for all edges in MATH's subtree if MATH and a point of reflection if MATH. Initially, MATH for all nodes MATH. The actual position of any vertex can be recovered in MATH time by composing the transformations along the path up to the root. We can easily maintain these triples under rotations, splits, and merges, similarly to the MATH algorithm described earlier. We omit the unenlightening details. Each node in this tree also stores the portion of its subhull not included in its parent's subhull. Specifically, we store the vertices of this convex chain in a secondary balanced binary tree. Instead of explicitly storing the coordinates of the vertices of this chain, however, we store only pointers to the appropriate leaves in the primary binary tree. The coordinates of any point can be recovered in MATH time by composing the linear transformations stored on the path up from the point's leaf. It remains only to show that we can merge any two sibling subhulls quickly. If we can merge two sibling subhulls in time MATH when all vertex coordinates are given explicitly, then we can update the convex hull of MATH in time MATH per flipturn. One logarithmic factor is the number of merges we must perform for each flipturn; the other is the cost of accessing the implicitly-stored vertex coordinates. Let MATH be the chain of polygon edges associated with some node MATH in the primary binary tree, and let MATH and MATH be the subchains associated with the left and right children of MATH, respectively. Since MATH has no self-intersections, the boundaries of the convex hulls MATH and MATH can intersect in at most two points. 
If the hull boundaries do not intersect, then the hulls can be either disjoint or nested. See REF . If MATH and MATH are nested, then one of them is the convex hull of MATH. In general, deciding whether two given convex polygons are nested requires MATH time, but the special structure of our problem allows a faster solution. We define the entrance and exit of a polygonal chain MATH as follows. The entrance of MATH is a pair of rays whose common basepoint is the first vertex of MATH that is also a vertex of MATH; the rays contain the convex hull edges on either side of this vertex. The exit of MATH is a similar pair of rays based at the last vertex of MATH that is also a vertex of MATH. See REF . Let MATH be the last vertex of MATH, and let MATH be the first vertex of MATH. The segment MATH does not intersect MATH, so if MATH is outside the convex hull of MATH, then MATH must be outside the entrance of MATH (that is, on the opposite side of the entrance from MATH). More generally, MATH if and only if MATH lies completely inside the entrance of MATH. Similarly, MATH if and only if MATH lies completely inside the exit of MATH. We can test in MATH time whether a convex polygon (represented as an array of vertices in counterclockwise order) lies inside a wedge. Thus, if we can compute the entrance and exit of any chain given those of its children, then we can test for nested sibling subhulls in MATH time. Fortunately, this is quite easy. If both edges of MATH defining the entrance of MATH are also edges of MATH, then the entrance of MATH is just the entrance of MATH. Otherwise, the entrance of MATH contains one of the two outer common tangents between MATH and MATH. Now suppose MATH and MATH are not nested. Using REF and CITE, we can decide in MATH time whether MATH and MATH intersect. If the two convex hulls have disjoint interiors, their algorithm also returns a separating line MATH.
If we use MATH as a local vertical direction, we can divide MATH and MATH into separate upper and lower hulls, such that one outer common tangent joins the two upper hulls and the other joins the two lower hulls. This is precisely the setup required by REF and NAME, which finds these two common tangents in MATH time CITE. Finally, suppose the boundaries of MATH and MATH intersect at two points. In this case, we can find the two outer common tangent lines between them, and thus compute MATH, in MATH time. To find (say) the upper common tangent of MATH and MATH, we perform a modified binary search over the vertices of MATH. At each step of this binary search, we find the upper tangent line MATH to MATH (if any) passing through a vertex MATH in MATH time, using a second-level binary search. Thus, we can compute the convex hull, entrance, and exit of MATH from the convex hulls, entrances, and exits of MATH and MATH in MATH time. By our earlier argument, it follows that we can maintain the convex hull of MATH in MATH time per flipturn. We can build the original data structure in MATH time by explicitly computing the convex hulls of each subchain, each in linear time.
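The implicit coordinate representation used at the start of this proof, a triple at each node encoding either a translation or a point reflection, can be sketched as follows. Representing a transformation as a triple (s, tx, ty) with s = ±1, meaning p ↦ s·p + t, is an assumption made here for illustration:

```python
def translation(vx, vy):
    # p -> p + v
    return (1, vx, vy)

def point_reflection(cx, cy):
    # p -> 2c - p  (a 180-degree rotation about c, i.e. a flipturn's motion)
    return (-1, 2 * cx, 2 * cy)

def compose(outer, inner):
    # (outer o inner)(p) = outer(inner(p)):
    # s1*(s2*p + t2) + t1 = (s1*s2)*p + (s1*t2 + t1)
    s1, tx1, ty1 = outer
    s2, tx2, ty2 = inner
    return (s1 * s2, s1 * tx2 + tx1, s1 * ty2 + ty1)

def apply(t, p):
    s, tx, ty = t
    return (s * p[0] + tx, s * p[1] + ty)

def recover(path, p):
    """Recover a vertex's actual position by composing the transforms
    stored on its leaf-to-root path (the leaf's transform applies first)."""
    t = (1, 0, 0)          # identity
    for node_t in path:    # path listed from leaf up to root
        t = compose(node_t, t)
    return apply(t, p)
```

Since a flipturn rotates the pocket by 180 degrees about the lid midpoint, updating a subtree under a flipturn amounts to composing one point-reflection triple into its root, and recovering a vertex costs one composition per node on its leaf-to-root path.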
cs/0008010
We can construct the data structures to maintain the polygon and its convex hull in MATH time. In addition to the convex hull itself, we maintain a separate list of the lids of MATH, which requires only trivial additions to our data structures. This allows us to choose a legal flipturn in constant time. By REF , we can maintain both the polygon and its convex hull in MATH time per flipturn.
cs/0008010
Fix an index MATH. We prove the theorem by induction on the number of inner regions in the flipturned pocket. If the pocket contains no inner regions, it must be MATH-monotone, and we easily observe that MATH and MATH. The inner regions of MATH have a natural forest structure, defined by connecting each region to the next region encountered on a shortest path to infinity. The roots of this forest are inner regions directly adjacent to outer regions, and its leaves are inner regions adjacent to only one other region. Let MATH be some leaf region inside pocket MATH, let MATH, and define MATH and MATH analogously to MATH and MATH for this new polygon. Finally, let MATH be the result of flipturning the now-simpler pocket MATH, let MATH be the image of MATH under this flipturn (so MATH), and define MATH and MATH analogously. Since MATH has one less inner region than MATH, the inductive hypothesis implies that MATH. It suffices to consider the case where MATH lies either in strip MATH or in strip MATH, since otherwise we have MATH, MATH, MATH, and MATH, and so there is nothing to prove. Moreover, if MATH is in strip MATH, then MATH. Suppose MATH is an up-region, so MATH. Some region MATH of MATH is split into two regions by MATH. If we imagine a continuous transformation from MATH to MATH, the trapezoid MATH grows upward from the bottom edge of MATH. We have four cases to consider, illustrated in the first two rows of REF . (The last row shows the corresponding cases when MATH is a down-region.) CASE: In this case MATH splits MATH into two up-regions, so MATH. If MATH is in strip MATH, then MATH is in strip MATH, so MATH and MATH (but these might be either MATH or MATH). CASE: In this case MATH splits MATH into an up-region and a down-region, so MATH. If MATH is in strip MATH, then MATH and MATH. CASE: In this case MATH splits MATH into an up-region and a side region, so MATH. If MATH is in strip MATH, then MATH and MATH. 
CASE: Since MATH is an up-region, MATH must touch the bottom edge of MATH, which means that MATH must lie above MATH. In this case MATH splits MATH into two side regions, so MATH. Since MATH is in strip MATH, we have MATH but MATH. The lemma holds in every case. Four similar cases arise when MATH is a down-region and MATH. In each case, we have MATH, MATH, and MATH. We omit further details.
cs/0008010
Let MATH denote the vertical width of strip MATH (and strip MATH). REF implies that MATH . Let MATH and MATH denote the MATH-coordinates of the top of MATH and MATH, respectively, and let MATH be the MATH-coordinate of the lid midpoint MATH. We easily observe that MATH . Finally, define MATH and MATH. Combining REF , we obtain the identity MATH. In other words, the total height of all the up-regions plus the maximum MATH-coordinate of the polygon is an invariant preserved by any flipturn. Let MATH be the convex polygon produced by some sequence of flipturns starting from MATH, and define MATH and MATH analogously to MATH and MATH. Obviously, MATH has no up-regions, so MATH. Thus, by induction on the number of flipturns, we have the identity MATH. Since MATH is independent of the convexifying flipturn sequence, so is the vertical position of MATH. We can compute MATH in linear time by computing a horizontal trapezoidal decomposition of MATH, using NAME's algorithm CITE or its recent randomized variant by CITE, and then performing a depth-first search of its dual graph. The argument for the horizontal position of MATH is symmetric.
cs/0008010
It suffices to consider the special case of orthogonal polygons. A flipturn sequence for an orthogonal polygon has length greater than MATH if and only if it contains an orthogonal flipturn. Thus, to prove the theorem, we only need to show the NAME of the decision problem Orthogonal NAME: Given an orthogonal polygon, does any flipturn sequence contain an orthogonal flipturn? We prove this problem is NAME by a reduction from Subset Sum: Given a set of positive integers MATH and another integer MATH, does any subset of MATH sum to MATH? The reduction algorithm is given in REF . The algorithm constructs a polygon in linear time by walking along its edges in clockwise order, starting and ending at the top of the first step. (The algorithm assumes without loss of generality that MATH is even.) REF shows an example of the reduction. The basic structure of the polygon is a staircase, with one square step for each of the MATH, plus one long step of height MATH splitting the other steps in half. Just below each of the upper steps is an inward horizontal spike; just above each of the lower steps is an outward horizontal spike; and just behind the long step is a vertical test spike of length exactly MATH. The horizontal spikes all have length greater than MATH, and they increase in length as they get closer to the top and bottom of the polygon. At any point during the flipturning process, the polygon has one main pocket containing the test spike and several secondary pockets containing one or more smaller steps, each of whose heights is some MATH. Initially, there is just one secondary pocket, of height and width MATH. The MATH-th step (that is, the one with height MATH) is exposed the MATH-th time the main pocket is flipturned. No matter which flipturns we perform before flipturning the test spike, the vertical distance MATH between the top endpoint of the main pocket's lid and the top edge of the polygon's bounding box is always the sum of elements of MATH.
Specifically, if we flipturn every step whose size is an element of some subset MATH as soon as it becomes available, then just before the test spike is flipped, MATH is the sum of the elements of MATH; see REF . Thus, since the test spike has length MATH, flipturning it can create an orthogonal pocket if and only if some subset of MATH sums to MATH.
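The source problem of the reduction can be made concrete with the standard pseudo-polynomial dynamic program; this is only an illustration of Subset Sum itself, not part of the reduction:

```python
def subset_sum(xs, k):
    """Decide whether some subset of the positive integers xs sums to k.
    Standard dynamic program over the set of achievable sums
    (pseudo-polynomial time), shown only to make the source problem
    of the reduction concrete.
    """
    reachable = {0}
    for x in xs:
        # Extend every achievable sum by x, discarding sums above k.
        reachable |= {s + x for s in reachable if s + x <= k}
    return k in reachable
```

For example, {3, 5, 7} has a subset summing to 12 (namely 5 + 7) but none summing to 11.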
cs/0008011
If MATH then dist-prod computes the distance product of MATH and MATH using the naive algorithm that runs in MATH time and we are done. Assume, therefore, that MATH. To see that the algorithm correctly computes the distance product of MATH and MATH in this case, note that for every MATH we have MATH where indices MATH for which MATH or MATH are excluded from the summation, and therefore MATH . We next turn to the complexity analysis. If MATH then fast-prod performs MATH arithmetic operations on MATH-bit integers. The NAME integer multiplication algorithm multiplies two MATH-bit integers using MATH bit operations. Letting MATH, we get that the complexity of each arithmetic operation is MATH. Finally, the logarithms used in the computation of MATH can be easily implemented using binary search. The complexity of the algorithm in this case is therefore MATH, as required.
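The naive base case of dist-prod is the ordinary min-plus (distance) product. A minimal sketch, with `INF` marking the excluded indices mentioned above; the names are illustrative:

```python
INF = float("inf")

def dist_prod(A, B):
    """Naive distance (min-plus) product of two square matrices:
    C[i][j] = min_k (A[i][k] + B[k][j]), with INF marking 'no entry'.
    This is the cubic-time base case of the algorithm; indices k with
    A[i][k] = INF or B[k][j] = INF contribute nothing to the minimum.
    """
    n = len(A)
    return [[min((A[i][k] + B[k][j] for k in range(n)
                  if A[i][k] < INF and B[k][j] < INF),
                 default=INF)
             for j in range(n)] for i in range(n)]
```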
cs/0008011
Property MATH clearly holds when MATH is initialized to MATH. In each iteration, the algorithm chooses a set MATH and then lets MATH for every MATH. For every MATH, we have MATH, as follows from the induction hypothesis and the triangle inequality, and thus the new value of MATH is again an upper bound on MATH. REF also follows easily by induction. Initially, MATH and MATH, for every MATH, so the condition is satisfied. Whenever MATH is assigned a new value, we have MATH and MATH. Until the next time MATH is assigned a value we are thus guaranteed to have MATH, as the value of MATH does not change, while the values of MATH and MATH may only decrease. Finally, if the conditions of REF hold, then at the end of the iteration we have MATH . As MATH, by REF, we get that MATH, as required.
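The update analyzed in this proof, relaxing every pair of vertices through the chosen set, can be sketched as follows; the matrix stores the upper bounds, and the chosen set B is assumed given:

```python
INF = float("inf")

def relax_through(d, B):
    """One iteration step: improve the upper-bound matrix d in place by
    allowing paths that pass through a vertex of the chosen set B, i.e.
    d[i][j] <- min(d[i][j], min over k in B of d[i][k] + d[k][j]).
    By the triangle inequality, every entry remains an upper bound on
    the true distance. (A schematic sketch, not the paper's exact code.)
    """
    n = len(d)
    for i in range(n):
        for j in range(n):
            for k in B:
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d
```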
cs/0008011
We prove the lemma by induction on MATH. It is easy to check that the claim holds for MATH. We show next that if the claim holds for MATH, then it also holds for MATH. Let MATH and MATH be two vertices connected by a shortest path that uses at most MATH edges. Let MATH be such a shortest path from MATH to MATH. If the number of edges on MATH is at most MATH then, by the induction hypothesis, after the MATH-st iteration we already have MATH (with very high probability). Suppose, therefore, that the number of edges on MATH is at least MATH and at most MATH. To avoid technicalities, we 'pretend' at first that MATH is an integer. We later indicate the changes needed to make the proof rigorous. Let MATH and MATH be vertices on MATH such that MATH and MATH are separated, on MATH, by exactly MATH edges, and such that MATH and MATH, and MATH and MATH are separated, on MATH, by at most MATH edges. See REF . Such vertices MATH and MATH can always be found as the path MATH is composed of at least MATH and at most MATH edges. Let MATH be the set of vertices lying between MATH and MATH (inclusive) on MATH. Note that MATH. Let MATH. As MATH lies on a shortest path from MATH to MATH, we have MATH. As MATH lies between MATH and MATH, there are shortest paths from MATH to MATH, and from MATH to MATH that use at most MATH edges. By the induction hypothesis, we get that at the beginning of the MATH-th iteration we have MATH and MATH, with very high probability. We also have MATH. It follows, therefore, from REF MATH, that if there exists MATH, where MATH is the set of vertices chosen at the MATH-th iteration, then at the end of the MATH-th iteration we have MATH, as required. What is the probability that MATH? Let MATH. If MATH, then MATH and clearly MATH. Suppose, therefore, that MATH. Each vertex then belongs to MATH independently with probability MATH. As MATH, the probability that MATH is at most MATH .
As there are fewer than MATH pairs of vertices in the graph, the probability of failure during the entire operation of the algorithm is at most MATH. (We do not have to multiply the probability by the number of iterations, as each pair of vertices should only be considered at one of the iterations. If a pair MATH violates the condition of the lemma, then it also does so at the MATH-th iteration, where MATH is the smallest integer such that there is a shortest path from MATH to MATH that uses at most MATH edges.) Unfortunately, MATH is not an integer. To make the proof go through, we prove by induction a slight strengthening of the lemma. Define the sequence MATH and MATH, for MATH. Note that MATH. We show by induction on MATH that, with high probability, for every MATH, if there is a shortest path from MATH to MATH that uses at most MATH edges, then at the end of the MATH-th iteration we have MATH. The proof is almost the same as before. If MATH is a shortest path from MATH to MATH that uses at most MATH edges, we consider vertices MATH and MATH on MATH such that MATH and MATH are separated by exactly MATH edges, and such that MATH and MATH, and MATH and MATH are separated by at most MATH edges. Repeating the above arguments we obtain a rigorous proof of the (strengthened) lemma.
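The "very high probability" claims rest on a standard sampling calculation. Under the generic assumption (the constant $c$ is illustrative, not the paper's) that each vertex enters the sampled set $B$ independently with probability $p = (c \ln n)/s$ and that failure requires $B$ to miss a set $D$ of at least $s$ vertices on the path:

```latex
\Pr[\,B \cap D = \emptyset\,]
  \;=\; (1-p)^{|D|}
  \;\le\; e^{-p\,|D|}
  \;\le\; e^{-\frac{c \ln n}{s}\cdot s}
  \;=\; n^{-c},
```

and a union bound over the fewer than $n^2$ pairs of vertices then bounds the total failure probability by $n^{2-c}$.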
cs/0008011
Condition MATH follows, as mentioned, from REF , the fact that in the last iteration MATH, and the fact that if MATH, and if there are no negative weight cycles in the graph, then there is a shortest path from MATH to MATH that uses at most MATH edges. Suppose now that MATH. By REF MATH we get that after the last iteration we have MATH and MATH, or equivalently, MATH. But, by the triangle inequality we have MATH. Thus, MATH, as required.
cs/0008011
For every MATH, let MATH be the number of the iteration of rand-short-path in which MATH was set for the last time. If MATH, let MATH. We need the following claim: If MATH, then MATH. Suppose that MATH was set for the last time at the MATH-th iteration. Let MATH be the elements of the matrix MATH at the beginning of the MATH-th iteration, and MATH be these elements at the end of the MATH-th iteration. By our assumption and by REF , we get that MATH . As MATH and MATH (see REF MATH), we get that MATH and MATH. Thus, MATH and MATH are already assigned their final values at the beginning of the MATH-th iteration, and therefore MATH, as required. We now prove REF by induction on MATH. If MATH, then MATH, and MATH returns the edge MATH which is indeed a shortest path from MATH to MATH. Suppose now that MATH returns a shortest path from MATH to MATH for every MATH and MATH for which MATH. Suppose that MATH. By REF , we get that MATH. By the induction hypothesis, the recursive calls MATH and MATH return shortest paths from MATH to MATH and from MATH to MATH. As MATH REF , the concatenation of these two shortest paths is indeed a shortest path from MATH to MATH, as required.
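The recursive reconstruction argued here can be sketched directly from a witness matrix; the convention that -1 marks a direct edge is an assumption made for illustration:

```python
def get_path(w, i, j):
    """Reconstruct a shortest path from i to j from a witness matrix w:
    w[i][j] == -1 means the edge (i, j) itself is a shortest path, and
    w[i][j] == k >= 0 means some shortest path from i to j passes
    through vertex k. (Schematic; the -1 convention is illustrative.)
    """
    k = w[i][j]
    if k == -1:
        return [i, j]
    # Concatenate shortest paths i -> k and k -> j, dropping the
    # duplicated middle vertex k from the second half.
    return get_path(w, i, k) + get_path(w, k, j)[1:]
```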
cs/0008011
Algorithm wit-to-suc begins by initializing all the elements of the MATH matrix MATH to REF. It then constructs, for each iteration number MATH, the set MATH of pairs MATH for which MATH. It is easy to construct all these sets in MATH time by bucket sorting. (In the description of wit-to-suc, MATH denotes the maximal element in MATH. Note that MATH.) Next, for every MATH such that MATH, it sets MATH. It then performs MATH iterations, one for each iteration of rand-short-path in which values are changed. We prove, by induction on the order in which the elements of the matrix MATH are assigned nonzero values, that if MATH, then MATH returns a simple shortest path from MATH to MATH in the graph. This clearly holds after wit-to-suc sets MATH for every MATH, as the edge MATH is then a simple shortest path from MATH to MATH in the graph. Suppose that wit-to-suc is now about to perform the while loop for a pair MATH for which MATH. If MATH, then no new entries are assigned nonzero values. Suppose, therefore, that MATH. Let MATH. By REF , we get that MATH and MATH. Thus, MATH and MATH are already assigned nonzero values and by the induction hypothesis, the calls MATH and MATH return simple shortest paths in the graph from MATH to MATH, and from MATH to MATH. Let MATH be the first vertex on the path MATH for which MATH. The vertex MATH is well defined as MATH. As MATH, we get, by the induction hypothesis, that MATH traces a simple shortest path from MATH to MATH. The concatenation of the portion of MATH from MATH to MATH, and of MATH is clearly a shortest path from MATH to MATH. It is also simple as both portions are simple, and as for every MATH on the first portion, except MATH, we have MATH, while for every MATH on the second portion we have MATH. After the while loop corresponding to MATH, MATH returns this simple shortest path.
Furthermore, if MATH is changed by this while loop, then MATH lies on the first portion of this simple shortest path, and MATH is the corresponding suffix of this simple shortest path, which is also a simple shortest path. Finally, the complexity of the algorithm is MATH as each iteration of the while loop reduces the number of zero elements of MATH by one.
cs/0008011
The proof is almost identical to the proof of REF . We show again, by induction on MATH, that if MATH, then after the MATH-th iteration of the algorithm we have MATH. The basis of the induction is easily established. Suppose, therefore, that the claim holds for MATH. We show that it also holds for MATH. Let MATH and MATH be two vertices such that MATH, where MATH. As in REF , let MATH be a shortest path from MATH to MATH that uses MATH edges, let MATH and MATH be two vertices on MATH such that MATH and MATH are separated, on MATH, by MATH edges, and such that MATH and MATH, and MATH and MATH, are separated, on MATH, by at most MATH edges (see REF ). As MATH, the set used in the MATH-th iteration, is assumed to be a strong MATH-bridging set, and as MATH, a vertex MATH is guaranteed to lie on a shortest path from MATH to MATH that uses MATH edges. This shortest path from MATH to MATH is not necessarily the portion of MATH going from MATH to MATH. Nonetheless, we still have MATH and MATH. As MATH and MATH we get, by the induction hypothesis, that MATH and MATH. After the distance product of the MATH-th iteration we therefore have MATH, as required.
cs/0008011
By the definition of bridging sets, we get that there exists MATH such that MATH. If MATH, we are done. Assume, therefore, that MATH. Let MATH be the next-to-last vertex on a shortest path from MATH to MATH. Clearly, MATH, MATH and MATH. Thus, there exists MATH such that MATH, and therefore also MATH. There is, therefore, a shortest path from MATH to MATH that passes through MATH, then through MATH, and then through MATH. As there are no nonpositive weight cycles in the graph, a shortest path must be simple and therefore MATH. In general, suppose that we have found so far MATH distinct vertices MATH such that there is a shortest path from MATH to MATH that visits all these vertices. If MATH, then we are done. Otherwise, we can find another vertex MATH, distinct from all the previous vertices, such that there is a shortest path from MATH to MATH that passes through MATH. As the graph is finite, this process must eventually end with a vertex from MATH satisfying our requirements.
cs/0008011
We prove, by induction, that after the MATH-th iteration of short-path we have: CASE: MATH, for every MATH. CASE: If MATH then MATH. Otherwise, MATH and MATH. CASE: If MATH, then MATH. The proofs of REF are analogous to the proofs of REF . We concentrate, therefore, on the proof of REF . It is easy to check that REF holds before the first iteration. We show now that if it holds at the end of the MATH-st iteration, then it also holds after the MATH-th iteration. Let MATH be such that MATH. If MATH, then the condition MATH holds already after the MATH-st iteration. Assume, therefore, that MATH. Let MATH be a shortest path from MATH to MATH that uses MATH edges. Let MATH be the vertex on MATH for which MATH. (See REF .) Note that MATH. By the induction hypothesis, after the MATH-st iteration we have MATH and MATH. As MATH is a MATH-bridging set, we get, by REF , that there exists MATH such that MATH and MATH. Furthermore, as MATH, we get, by REF , that MATH and MATH. (The fact that MATH follows also from the induction hypothesis, as MATH.) As MATH, we get that MATH. Thus MATH. To sum up, we have MATH . As MATH and MATH, after the first distance product of the MATH-th iteration, we get that MATH and thus MATH and MATH. As MATH and MATH, after the second distance product we get that MATH and thus MATH, as required.
cs/0008011
The inequalities MATH follow from the fact that elements are always rounded upwards by scale. We next show that MATH. Let MATH be a witness for MATH, that is, MATH. Assume, without loss of generality, that MATH. Suppose that MATH, where MATH (the cases MATH and MATH are easily dealt with separately). If MATH, then in the first iteration of approx-dist-prod, when MATH, we get MATH. Assume, therefore, that MATH. In the iteration of approx-dist-prod in which MATH we get that MATH . Thus, after the call to dist-prod we have MATH as required.
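The key property used here, that scale only rounds upwards, can be made concrete. A hedged Python sketch follows: the signature `scale(A, M, R)` and the convention that entries above `M` become infinite are illustrative assumptions, not the paper's exact definition.

```python
import math

INF = float("inf")

def scale(A, M, R):
    """Round each entry of A up to one of R resolution levels.

    Entries x with x <= M map to ceil(x * R / M); larger entries are
    treated as infinite. Rounding up guarantees that descaled values
    (times M / R) never underestimate the originals.
    """
    out = []
    for row in A:
        out.append([math.ceil(x * R / M) if x <= M else INF for x in row])
    return out
```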
cs/0008025
As described above, we reduce REF-SAT to the given problem by forming a configuration of men with two horizontal lines of men for each variable, and three vertical lines for each clause. We connect these lines by the fan-in and fan-out gadget depicted in REF . If variable MATH occurs as the MATH-th term of clause MATH, we place an interaction gadget REF at the point where the bottom horizontal line in the MATH-th pair of horizontal lines crosses the MATH-th vertical line in the MATH-th triple of vertical lines. If instead the negation of variable MATH occurs in clause MATH, we place an interaction gadget similarly, but on the top horizontal line in the pair. At all other crossings of horizontal and vertical lines we place the crossing gadget depicted in REF . Finally, we form a path of men from the final fan-in gadget (the arrow in REF ) to the goal line of the NAME board. The lines from any two adjacent interaction gadgets must be separated by four or more units, but other crossing types allow a three-unit separation. By choosing the order of the variables in each clause, we can make sure that the first variable differs from the last variable of the previous clause, avoiding any adjacencies between interaction gadgets. Thus, we can space all lines three units apart. If the REF-SAT instance has MATH variables and MATH clauses, the resulting NAME board requires MATH rows and MATH columns, polynomial in the input size. Finally, we must verify that the REF-SAT instance is solvable precisely if the NAME instance has a winning jump sequence. Suppose first that the REF-SAT instance has a satisfying truth assignment; then we can form a jump sequence that uses the top horizontal line for every true variable, and the bottom line for every false variable. If a clause is satisfied by the MATH-th of its three variables, we choose the MATH-th of the three vertical lines for that clause.
This forms a valid jump sequence: by the assumption that the given truth assignment satisfies the formula, the jump sequence uses at most one of every two lines in every interaction gadget. Conversely, suppose we have a winning jump sequence in the NAME instance; then as discussed above it must use one of every two horizontal lines and one or three of every triple of vertical lines. We form a truth assignment by setting a variable to true if its upper line is used and false if its lower line is used. This must be a satisfying truth assignment: the vertical line used in each clause gadget must not have had its interaction gadget crossed horizontally, and so must correspond to a satisfying variable for the clause.
cs/0008025
Piece MATH can king precisely if there is a directed path in MATH from MATH to one of the squares along the opponent's side of the board. A winning move exists precisely if there exists a piece MATH for which MATH includes all opposing pieces and contains an NAME path starting at MATH; that is, precisely if MATH is connected and has at most one odd-degree vertex other than the initial location of MATH.
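The final condition is an Euler-path-style criterion and can be tested mechanically. A sketch in Python, assuming the jump graph is given as an undirected edge list; the representation and the function name are illustrative.

```python
from collections import defaultdict, deque

def has_euler_path_from(edges, start):
    """Check the Euler-path-style condition from the proof: the (multi)graph
    is connected and at most one vertex other than `start` has odd degree.

    `edges` is a list of (u, v) pairs; vertices may be any hashable labels.
    """
    deg = defaultdict(int)
    adj = defaultdict(list)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
        adj[u].append(v)
        adj[v].append(u)
    verts = list(deg)
    if not verts:
        return True
    # breadth-first search for connectivity over vertices carrying edges
    seen = {verts[0]}
    q = deque([verts[0]])
    while q:
        x = q.popleft()
        for y in adj[x]:
            if y not in seen:
                seen.add(y)
                q.append(y)
    if seen != set(verts):
        return False
    odd_others = sum(1 for v in verts if deg[v] % 2 == 1 and v != start)
    return odd_others <= 1
```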
cs/0008036
For each minimal model MATH of MATH: MATH is a model of MATH CASE: for every model MATH of MATH base equivalent to some minimal model MATH of MATH: MATH is a model of MATH, since MATH by REF CASE: MATH is a logical consequence of MATH. CASE: Only if: MATH is a logical consequence of MATH CASE: every model of MATH is a model of MATH, by REF CASE: MATH is a model of MATH.
cs/0008036
MATH CASE: MATH for every model MATH of MATH, by REF and transitivity of MATH CASE: for every model MATH of MATH: MATH, since for every model MATH of MATH: MATH CASE: for every model MATH of MATH: MATH is a model of MATH, by REF CASE: MATH is a logical consequence of MATH.
cs/0008036
The result is proven by induction on MATH. CASE: Goals with multiset complexity MATH have to take the form of a satisfiable MATH-constraint MATH. Then MATH and MATH is a MATH-answer of itself. CASE: Suppose the result holds for goals with multiset complexity less than some multiset MATH. CASE: MATH and MATH CASE: there exists a clause MATH of MATH and a goal MATH such that MATH and MATH and MATH, by REF CASE: there exists a MATH-answer MATH of MATH such that MATH and MATH, by the hypothesis CASE: there exists a MATH-answer MATH of MATH such that MATH and MATH, and by REF , MATH is a logical consequence of MATH. CASE: The result follows by arithmetic induction.
cs/0008036
We have to show that the supremum MATH can be attained for some MATH. CASE: For MATH, we have MATH. CASE: For MATH, we have to show that for any real MATH, MATH, the set MATH is finite. Let MATH be the finite set of real numbers of factors of clauses in MATH, MATH be the greatest element in MATH such that MATH and let MATH be the smallest integer such that MATH. Then, since each real number MATH is a product of a sequence of elements of MATH, the number of different products MATH is not greater than MATH, the number of permutations of MATH different things taken MATH at a time with repetition, and thus finite. Hence, the supremum is the maximum attained for some MATH.
cs/0008036
We have to show that MATH. We prove by induction on MATH showing for each constraint language MATH, for each quantitative definite clause specification MATH in MATH, for each MATH-interpretation MATH, for each MATH-chain MATH of MATH-interpretations extending some MATH-interpretation MATH, for each n-ary relation symbol MATH, for each MATH, for each MATH, for each MATH: MATH. CASE: MATH. CASE: Suppose MATH. CASE: MATH CASE: there exists a variant MATH of a clause in MATH such that MATH and MATH, by REF CASE: MATH and MATH, by the hypothesis CASE: MATH, by definition of MATH CASE: MATH. CASE: MATH it follows immediately that MATH. CASE: REF follows by arithmetic induction. CASE: We have to show that MATH is a model of MATH extending MATH. We prove that for each clause MATH in MATH, for each MATH: If MATH, then MATH. CASE: Note that since every MATH is a MATH-interpretation extending MATH, MATH is a MATH-interpretation extending MATH. CASE: Now let MATH be a clause in MATH such that for some MATH: MATH and MATH. CASE: Then there exists some MATH such that MATH, by REF and since for all MATH such that MATH CASE: MATH, by REF CASE: MATH, since MATH CASE: MATH. CASE: This completes the proof for REF . CASE: We have to show that MATH is the minimal model of MATH extending MATH. We prove for every base equivalent model MATH of MATH: MATH, which gives MATH, by induction on MATH showing for each constraint language MATH, for each quantitative definite clause specification MATH in MATH, for each MATH-interpretation MATH, for each MATH-chain MATH of MATH-interpretations extending some MATH-interpretation MATH, for each n-ary relation symbol MATH, for each MATH, for each MATH, for each MATH: MATH. CASE: MATH. CASE: Suppose MATH. CASE: MATH CASE: there exists a variant MATH of a clause in MATH such that MATH and MATH, by REF CASE: MATH and MATH, by the hypothesis CASE: MATH, since MATH is a model of MATH CASE: MATH. CASE: MATH it follows immediately that MATH.
CASE: REF follows by arithmetic induction.
cs/0008036
CASE: For each minimal model MATH of MATH: MATH is a model of MATH CASE: for every model MATH of MATH base equivalent to some minimal model MATH of MATH: MATH is a model of MATH, since MATH by REF CASE: MATH is a logical consequence of MATH. CASE: MATH is a logical consequence of MATH CASE: every model of MATH is a model of MATH, by REF CASE: MATH is a model of MATH.
cs/0008036
The result is proven by induction on the depth MATH of the quantitative proof tree, where one unit of depth is from max-node to max-node. CASE: We know that quantitative proof trees of depth MATH have to take the form of a single max-node labeled by a satisfiable MATH-constraint MATH with root value REF. Then MATH is a logical consequence of MATH. CASE: Suppose the result holds for quantitative proof trees of depth MATH. CASE: Let MATH be a goal labeling a quantitative proof tree of depth MATH with answer constraint MATH and root value MATH, let MATH be a goal labeling the min-node obtained from MATH via MATH using the variant MATH of a clause MATH in MATH, and let MATH be goals labeling max-nodes obtained from MATH via MATH. Then each goal MATH labels a quantitative proof tree of depth MATH with respective answer constraint MATH and root value MATH such that MATH and for each model MATH of MATH: MATH, by definition of min/max tree CASE: MATH are logical consequences of MATH, by the hypothesis CASE: for each model MATH of MATH, for each MATH: MATH and if MATH, then MATH, by definition of logical consequence CASE: for each model MATH of MATH, for each MATH: MATH and if MATH, then MATH, since each model MATH of MATH is a model of MATH iff MATH is a model of MATH CASE: for each model MATH of MATH, for each MATH: MATH and if MATH, then MATH CASE: MATH is a logical consequence of MATH. CASE: The result follows by arithmetic induction.
cs/0008036
The result is proven by induction on MATH. CASE: We know that goals with complexity MATH have to take the form of a satisfiable MATH-constraint MATH. Then there exists a quantitative proof tree for MATH from MATH consisting of a single max-node labeled with MATH and root value REF. CASE: Suppose the result holds for goals with complexity MATH. CASE: Let MATH, MATH, MATH, MATH, MATH, MATH, MATH and MATH. CASE: First we observe that MATH, since MATH CASE: there exists a variant MATH such that MATH and MATH and MATH, by REF , renaming closure of MATH, the finiteness of V, and the availability of infinitely many variables in VAR CASE: MATH such that MATH and MATH, by definition of the inference rules. CASE: Next, MATH, since MATH, MATH, MATH, MATH and MATH. CASE: Finally, MATH, since MATH. CASE: Now we can obtain goals MATH from MATH such that MATH, MATH, MATH and MATH CASE: for each goal MATH, there exists a quantitative proof tree from MATH with respective answer constraint MATH and respective root value MATH and MATH, by the hypothesis CASE: there exists a quantitative proof tree for MATH from MATH with answer constraint MATH and root value MATH and MATH. CASE: The result follows by arithmetic induction.
cs/0008036
MATH .
cs/0008036
MATH .
cs/0008036
MATH .
cs/0008036
MATH .
cs/0008036
MATH .
cs/0008036
MATH . The equality MATH holds iff MATH is a fixed point of MATH, that is, MATH with MATH. Furthermore, MATH is a fixed point of MATH iff MATH, MATH, MATH, MATH, by REF MATH .
cs/0008036
Let MATH be a subsequence of MATH converging to MATH. Then for all MATH: MATH and in the limit as MATH, for continuous MATH and MATH: MATH. Thus MATH is a maximum of MATH, using REF , and MATH is a fixed point of MATH. Furthermore, MATH, using REF , and MATH is a critical point of MATH.
cs/0008036
MATH .
hep-lat/0008007
The result in REF follows from MATH . The result in REF follows from the inequality between the geometric and arithmetic mean: MATH . From the NAME characterization of eigenvalues it follows that MATH for all MATH. Hence MATH holds. Now note that MATH . Thus the result in REF is proved.
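The second step invokes the inequality between the geometric and arithmetic mean. A small generic numeric illustration, not tied to the specific eigenvalues in the proof:

```python
def am_gm(xs):
    """Return (arithmetic mean, geometric mean) of positive numbers.

    AM >= GM always holds, with equality iff all entries coincide;
    the proof applies this inequality to the eigenvalues of a matrix.
    """
    n = len(xs)
    am = sum(xs) / n
    gm = 1.0
    for x in xs:
        gm *= x
    gm **= 1.0 / n
    return am, gm
```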
hep-lat/0008007
The equations in REF can be represented as MATH where MATH is the MATH-th row of MATH. Consider a MATH with MATH. Note that MATH, hence MATH. For the unknown entries in MATH we obtain the system of REF which is equivalent to MATH . The matrix MATH is symmetric positive definite and thus MATH must satisfy MATH . Using MATH we obtain the result in REF . The construction in this proof shows that the solution is unique.
hep-lat/0008007
Let MATH be the MATH-th basis vector in MATH. Take MATH. The MATH-th rows of MATH and MATH are denoted by MATH and MATH, respectively. Now note MATH . The minimum of the functional REF is obtained if in REF we minimize the functionals MATH for all MATH with MATH. If we write MATH, then for MATH the functional REF can be rewritten as MATH . The unique minimum of this functional is obtained for MATH, that is, MATH for all MATH with MATH. Using REF it follows that MATH is the unique minimizer of the functional REF .
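The decoupling used here, minimizing a Frobenius-norm functional row by row, can be sketched directly. The NumPy version below drops the sparsity constraint of the proof, so each row problem becomes an unconstrained least-squares problem; the function name is illustrative.

```python
import numpy as np

def rowwise_min_frobenius(A):
    """Minimize ||I - M A||_F over all matrices M, one row at a time.

    Row i of M solves the independent least-squares problem
    min_m ||e_i - m A||_2, illustrating how the functional decouples.
    """
    n = A.shape[0]
    M = np.zeros((n, n))
    I = np.eye(n)
    for i in range(n):
        # m A = e_i  <=>  A^T m^T = e_i; solve in the least-squares sense
        M[i], *_ = np.linalg.lstsq(A.T, I[i], rcond=None)
    return M
```

With a sparsity pattern imposed on each row, each row problem shrinks to a small system with a symmetric positive definite matrix, as in the surrounding proofs.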
hep-lat/0008007
The construction of MATH in REF is as in REF with MATH, MATH. Hence REF is applicable with MATH. It follows that MATH is the unique minimizer of MATH . Decompose MATH as MATH with MATH strictly upper triangular. Then MATH and MATH are lower and strictly upper triangular, respectively, and we obtain: MATH . Hence the minimizers in REF are the same.
hep-lat/0008007
From the construction in REF it follows that MATH, that is, MATH is such that MATH for all MATH. This is of the form REF with MATH, MATH. From REF we obtain that MATH is the unique minimizer of the functional MATH, that is, of the functional REF . From the proof of REF , with MATH, it follows that the minimization problem MATH decouples into separate minimization problems (compare REF ) for the rows of MATH: MATH for all MATH with MATH. Here MATH and MATH are the MATH-th rows of MATH and MATH, respectively. The minimization problem corresponding to REF is MATH . This decouples into the same minimization problems as in REF . Hence the functionals in REF have the same minimizer. Let MATH. Using the construction of MATH in REF we obtain MATH . Hence MATH holds for all MATH, that is, REF holds.
hep-lat/0008007
For MATH we use the decomposition MATH, with MATH diagonal and MATH. Furthermore, for MATH, MATH. Now note MATH . The inequality in REF follows from the inequality between the arithmetic and geometric mean: MATH for MATH. For MATH in REF we use the decomposition MATH. For the approximate inverse MATH we then have MATH. From REF it follows that MATH for all MATH. Furthermore from REF we obtain that for MATH we have MATH and thus equality in REF for MATH . We conclude that MATH is the unique minimizer of the functional in REF .
hep-lat/0008007
The right inequality in REF is already given in REF . We introduce the notation MATH for the eigenvalues of MATH. From REF we obtain MATH and from this it follows that MATH holds. Furthermore, MATH yields MATH and thus MATH. We now use the left inequality in REF applied to the matrix MATH. Note that MATH . A simple computation yields MATH and MATH . Substitution of REF results in MATH . Using this the left inequality in REF follows from the left inequality in REF .
hep-lat/0008007
From the assumptions it follows that MATH is a MATH-matrix. In CITE REF it is proved that then MATH is a MATH-matrix, too. Let MATH. Because MATH has only nonnegative entries it follows that MATH . Hence MATH. Using MATH we obtain the result REF .
hep-th/0008095
The representation function is NAME decomposed by inserting MATH twice into the right-hand side of MATH. We use hermiticity MATH, MATH-invariance MATH, and transversality MATH to obtain MATH . Since the NAME measure is bi-invariant, all terms vanish except those corresponding to the MATH, MATH, which are equivalent to the trivial representation.
hep-th/0008095
Consider an arbitrary lattice point MATH. The gauge transformation REF multiplies all `incoming' links by MATH and all `outgoing' links by MATH. Since MATH in REF is an intertwiner, MATH is unchanged. This holds for all lattice points MATH.
math-ph/0008002
A proof can be found in CITE. We sketch only the idea of the proof of REF. Using the known formula MATH where MATH is the transformation kernel corresponding to the potential MATH, MATH (see also REF below), and substituting REF into REF , one gets, after a change of the order of integration, a homogeneous NAME integral equation for MATH. Thus MATH.
math-ph/0008002
REF can be proved in several ways. One way CITE is to recover the spectral function MATH from MATH, MATH. This is possible since MATH, MATH, and MATH where MATH are the bound states of the NAME operator MATH in MATH, MATH is the delta-function, and MATH . Note that MATH and the number MATH in REF can be found as the simple poles of MATH in MATH and the number of these poles, and MATH where MATH, so MATH. It is well known that MATH determines MATH uniquely CITE, CITE. An algorithm for recovery of MATH from MATH is known REF . In CITE a characterization of the class of MATH-functions corresponding to potentials in MATH, MATH is given. Here we give a very simple new proof of REF (compare CITE): Assume that MATH and MATH generate the same MATH, that is, MATH. Subtract from REF for MATH this equation for MATH and get: MATH . Multiply REF by MATH and integrate by parts: MATH where we have used REF to conclude that at infinity the boundary term vanishes. From REF and property MATH REF it follows that MATH. REF is proved.
math-ph/0008002
This result is due to CITE. We give a new short proof based on property C CITE. We prove that data REF determine MATH uniquely, and then REF follows from REF . To determine MATH we determine MATH and MATH from data REF . First, let us prove that data REF determine uniquely MATH. Suppose there are two different functions MATH and MATH with the same data REF . Then MATH . The left-hand side in REF is analytic in MATH since MATH and MATH are, and the zeros of MATH in MATH are the same as those of MATH, namely MATH, and they are simple. The right-hand side of REF has similar properties in MATH. Thus MATH is an entire function which tends to REF as MATH, so MATH and MATH. The relation MATH follows from the representation MATH . Various estimates for the kernel MATH in the formula MATH are given in CITE. We mention the following: MATH where MATH here and below stands for various estimation constants. From REF this follows. Thus, we have proved MATH . Let us prove MATH . We use the Wronskian: MATH . The function MATH, and therefore MATH (where the overbar stands for complex conjugate), has already been uniquely determined from data REF . Assume there are two functions MATH and MATH corresponding to the same data REF . Let MATH . Subtract REF with MATH in place of MATH from REF and get MATH , or: MATH is analytic in MATH and vanishes at infinity and MATH is analytic in MATH and vanishes at infinity. If this claim holds, then MATH, and therefore MATH, so MATH. To complete the proof, let us prove the claim. From REF one gets: MATH . Taking MATH in REF , integrating by parts and using REF , one gets: MATH . Thus MATH . Since MATH is uniquely determined by data REF , so is the constant A MATH (by REF ). Therefore REF imply: MATH . It remains to be checked that REF implies MATH . This follows from REF : if MATH and MATH are the same, so are MATH, and MATH as the difference of equal numbers MATH . REF is proved.
math-ph/0008002
Our proof is new and short. We prove that, if MATH is compactly supported or decays faster than any exponential, for example, MATH, MATH, then MATH determines uniquely MATH and MATH, and, by REF , MATH is uniquely determined. We give the proof for compactly supported potentials. The proof for potentials decaying faster than any exponential is exactly the same. The crucial point is: under both assumptions the NAME function is an entire function of MATH. If MATH is compactly supported, MATH for MATH, then MATH is an entire function of exponential type MATH, that is, MATH CITE. Therefore MATH is meromorphic in MATH (see REF ). Therefore the numbers MATH, MATH, can be uniquely determined as the only poles of MATH in MATH. One should check that MATH . This follows from REF : if one takes MATH and uses MATH, then REF yields MATH . Thus MATH. Therefore MATH determines uniquely the numbers MATH and MATH. To determine MATH, note that MATH as follows from REF . Thus the data REF are uniquely determined from MATH if MATH is compactly supported, and REF implies REF .
math-ph/0008002
We claim that MATH determines uniquely MATH if REF holds. Thus, REF follows from REF . To check the claim, note that MATH, so MATH and use REF to get MATH for MATH, so MATH . From REF the claim follows. REF is proved.
math-ph/0008002
If MATH for MATH, then MATH for MATH, MATH, so data REF determine MATH and, by REF , MATH is uniquely determined. REF is proved. Of course, this theorem is a particular case of REF .
math-ph/0008002
First, assume MATH. If there are MATH and MATH which produce the same data, then, as above, one gets MATH where MATH, MATH, MATH. Thus MATH . The function MATH is an entire function of MATH of order MATH (see REF with MATH), and is an entire even function of MATH of exponential type MATH. One has MATH . The indicator of MATH is defined by the formula MATH where MATH. Since MATH, one gets from REF the following estimate MATH . It is known CITE that for any entire function MATH of exponential type one has: MATH where MATH is the number of zeros of MATH in the disk MATH. From REF one gets MATH . From REF and the known asymptotics of the NAME eigenvalues: MATH one gets for the number of zeros the estimate MATH . From REF it follows that MATH . Therefore, if MATH, then MATH. If MATH then, by REF , MATH. REF is proved in the case MATH. Assume now that MATH and MATH . We claim that if an entire function MATH in REF of order MATH vanishes at the points MATH and REF holds, then MATH. If this is proved, then REF is proved as above. Let us prove the claim. Define MATH and recall that MATH . Since MATH, the function MATH is entire, of order MATH. Let us use a NAME lemma (CITE): If an entire function MATH of order MATH has the property MATH, then MATH. If, in addition, MATH as MATH, then MATH . We use this lemma to prove that MATH. If this is proved, then MATH and REF is proved. The function MATH is entire of order MATH. Let us check that MATH and that MATH . One has, using REF and taking into account that MATH: MATH . Here we have used elementary inequalities: MATH with MATH, MATH, and REF . We also used the relation: MATH . Estimate REF implies REF . An estimate similar to REF has been used in the literature (see, for example, CITE). REF is proved.
math-ph/0008002
NAME REF - REF to get MATH . Assume that there are MATH and MATH which generate the same data MATH, MATH. Let MATH. Subtract from REF with MATH, similar equations with MATH, and get MATH . Multiply REF by MATH, where MATH, MATH, MATH, integrate over MATH and then by parts on the left-hand side, using REF . The result is: MATH . Note that MATH is an entire function of MATH. Since MATH and is compactly supported, the function MATH is an entire function of MATH, so it has a discrete set of zeros. Therefore MATH where MATH for almost all MATH. REF imply MATH. REF is proved.
math-ph/0008002
NAME REF to get MATH where MATH . It follows from REF that MATH where MATH is the NAME solution to REF . From REF one gets MATH where MATH. From REF one obtains MATH . From REF one concludes MATH . From REF one gets MATH . Thus MATH is known for all MATH. Since MATH is compactly supported, the data MATH determine MATH uniquely by REF . REF is proved.
math-ph/0008002
Let MATH be arbitrary, MATH, MATH if MATH. Suppose MATH . Denote by MATH and MATH the transformation operators corresponding to potentials MATH and MATH which generate spectral functions MATH and MATH, MATH. Then MATH where MATH and MATH are NAME operators. REF implies: MATH where MATH is the adjoint operator and the norm in REF is the MATH-norm. Note that MATH and MATH . From REF it follows that MATH where MATH is a unitary operator in the NAME space MATH. If MATH is unitary and MATH are NAME operators, then REF implies MATH. This is proved in REF below. If MATH, then MATH, and therefore MATH and MATH. Here we have used the assumption about MATH being in the limit-point at infinity case: this assumption implies that the spectral function is uniquely determined by the potential (in the limit-circle case at infinity there are many spectral functions corresponding to the given potential). Thus if MATH, then MATH. REF is proved.
math-ph/0008002
From REF one gets MATH and, using MATH, one gets MATH . Denote MATH where MATH are NAME operators. From REF one gets: MATH or MATH . Since the left-hand side in REF is a NAME operator of the type REF while the right-hand side is a NAME operator of the type REF , they can be equal only if each equals zero: MATH and MATH . From REF one gets MATH or MATH. Thus MATH and MATH as claimed. REF is proved.
math-ph/0008002
If MATH and MATH have the same spectral function MATH, then MATH for any MATH, where MATH; the function MATH solves REF with MATH, and MATH, satisfies the first two REF , and MATH where MATH is the transformation operator: MATH . Note that MATH . From REF it follows that MATH . Since Range MATH, REF implies that MATH is unitary (an isometry whose range is the whole space MATH). Thus MATH where MATH is a NAME operator of the type REF . Therefore MATH and this implies MATH. Therefore MATH and MATH. REF is proved.
math-ph/0008002
CASE: Step MATH is done by REF . Let us prove MATH. Assume there are MATH and MATH corresponding to the same MATH. Then MATH . Therefore MATH . By REF , relation REF implies MATH, so MATH. CASE: Step MATH is done by solving REF for MATH. The unique solvability of this equation for MATH has been proved below REF . Let us prove MATH. From REF one gets MATH . Let MATH in REF and write REF as MATH . Note that MATH. Thus REF can be written as: MATH . This is a NAME integral equation for MATH. Since it is uniquely solvable, MATH is uniquely recovered from MATH and the step MATH is done. CASE: Step MATH is done by REF . The converse step MATH is done by solving the NAME problem: MATH . One can prove that any twice differentiable solution to REF solves REF with MATH given by REF . The NAME REF -REF is known to have a unique solution. REF -REF is equivalent to a NAME equation (CITE, CITE). Namely, if MATH, MATH, MATH, then REF take the form MATH . Therefore MATH . This NAME equation is uniquely solvable for MATH. REF is proved.
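The middle step turns on the unique solvability of a second-kind integral equation. As a generic illustration only (this is not the equation of the proof; `g` and `K` are placeholder data), a Volterra equation of the second kind f(x) = g(x) + ∫₀ˣ K(x,t) f(t) dt can be solved by marching with the trapezoidal rule:

```python
def solve_volterra(g, K, x_max, n=200):
    """Solve f(x) = g(x) + integral_0^x K(x, t) f(t) dt on [0, x_max].

    Trapezoidal quadrature, marching in x; the diagonal term is moved
    to the left-hand side at each step. Second-kind Volterra equations
    are uniquely solvable, which is what the marching exploits.
    """
    h = x_max / n
    xs = [i * h for i in range(n + 1)]
    f = [0.0] * (n + 1)
    f[0] = g(xs[0])
    for i in range(1, n + 1):
        s = 0.5 * K(xs[i], xs[0]) * f[0]
        for j in range(1, i):
            s += K(xs[i], xs[j]) * f[j]
        f[i] = (g(xs[i]) + h * s) / (1.0 - 0.5 * h * K(xs[i], xs[i]))
    return xs, f
```

For example, with g = 1 and K = 1 the exact solution is f(x) = eˣ, and the marching scheme reproduces it to quadrature accuracy.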
math-ph/0008002
CASE: The step MATH is done by REF , as we have already mentioned. The step MATH is done by finding MATH, MATH and MATH from the asymptotics of the function REF as MATH. As a result, one finds the function MATH . If MATH and MATH are known, then the function MATH is known. Now the function MATH can be found by the formula MATH . So the step MATH is done. CASE: The step MATH is done by solving REF for MATH. This step is discussed in the literature in detail (see CITE, CITE). If MATH (actually a weaker condition MATH is used in the half-line scattering theory), then one proves that conditions REF - REF are satisfied, that the operator MATH is compact in MATH and in MATH for any fixed MATH, and that the homogeneous version of REF : MATH has only the trivial solution MATH for every MATH. Thus, by the NAME alternative, REF is uniquely solvable in MATH and in MATH. The step MATH is done. Consider the step MATH. Define MATH . The function MATH determines uniquely MATH by the formula: MATH and consequently it determines the numbers MATH as the only zeros of MATH in MATH, the number MATH of these zeros, and MATH. To find MATH, one has to find MATH. REF allows one to calculate MATH if MATH and MATH are known. To find MATH, use REF and put MATH in REF . Since MATH is known for MATH, REF allows one to calculate MATH. Thus MATH are found and MATH can be calculated by REF . Step MATH is done. The above argument proves that the knowledge of two functions MATH and MATH for all MATH determines MATH uniquely. Note that: REF we have used the following scheme MATH in order to get the implication MATH, and REF since MATH and MATH, we have also proved the following non-trivial implication MATH. CASE: REF is done by REF . The converse step MATH is done by solving the NAME problem: MATH REF -REF is equivalent to a NAME integral equation for MATH (see CITE). MATH . One can prove that any twice differentiable solution to REF solves REF with MATH given by REF . 
A proof can be found in CITE, CITE and CITE. REF is proved.
math-ph/0008002
Clearly, every MATH solution to REF solves REF . Let us prove the converse. Let MATH solve REF . Define MATH . We wish to prove that MATH solves REF . Take the NAME transform of REF in the sense of distributions. From REF one gets MATH and from REF one obtains: MATH . Add MATH to both sides of REF and use REF to get MATH . From REF one gets: MATH REF is equivalent to REF since all the transformations which led from REF to REF are invertible. Thus, REF hold (or fail to hold) simultaneously. REF clearly holds because MATH since MATH are zeros of MATH. REF is proved.
math-ph/0008002
The solution to REF is MATH where MATH, MATH, MATH, MATH, MATH, MATH is defined in REF , MATH is defined in REF and MATH is defined in REF . The functions MATH are the data REF . Since MATH when MATH, REF implies MATH, so one knows MATH . From REF one derives MATH and MATH . From REF one gets MATH and MATH . Eliminate MATH from REF to get MATH where MATH REF is a NAME problem for the pair MATH, the function MATH is analytic in MATH, MATH and MATH is analytic in MATH. The functions MATH and MATH tend to one as MATH tends to infinity in MATH and, respectively, in MATH, see REF . The function MATH has finitely many simple zeros at the points MATH, MATH, where MATH are the negative eigenvalues of the operator MATH defined by the differential expression MATH in MATH. The zeros MATH are the only zeros of MATH in the upper half-plane MATH. Define MATH . One has MATH where MATH is the number of negative eigenvalues of the operator MATH, and, using REF , one gets MATH . Since MATH has no negative eigenvalues by REF , it follows that MATH. In this case MATH (see REF below), so MATH, and MATH is uniquely recovered from the data as the solution of REF which tends to one at infinity. If MATH is found, then MATH is uniquely determined by REF and so the reflection coefficient MATH is found. The reflection coefficient determines a compactly supported MATH uniquely by REF . If MATH is compactly supported, then the reflection coefficient MATH is meromorphic. Therefore, its values for all MATH determine uniquely MATH in the whole complex MATH-plane as a meromorphic function. The poles of this function in the upper half-plane are the numbers MATH. They determine uniquely the numbers MATH, MATH, which are a part of the standard scattering data MATH, MATH, MATH, MATH, where MATH are the norming constants. Note that if MATH then MATH, otherwise REF would imply MATH in contradiction to REF . 
If MATH is meromorphic, then the norming constants can be calculated by the formula MATH, where the dot denotes differentiation with respect to MATH, and MATH denotes the residue. So, for a compactly supported potential the values of MATH for all MATH determine uniquely the standard scattering data, that is, the reflection coefficient, the bound states MATH, and the norming constants MATH. These data determine the potential uniquely. REF is proved. MATH .
math-ph/0008002
We prove MATH. The proof of the equation MATH is similar. Since MATH equals the number of zeros of MATH in MATH, we have to prove that MATH does not vanish in MATH. If MATH, then MATH, MATH, and MATH is an eigenvalue of the operator MATH in MATH with the boundary condition MATH. From the variational principle one can find the negative eigenvalues of the operator MATH in MATH with the NAME condition at MATH as consecutive minima of the quadratic functional. The minimal eigenvalue is: MATH where MATH is the NAME space of MATH-functions satisfying the condition MATH. On the other hand, if MATH, then MATH . Since any element MATH of MATH can be considered as an element of MATH if one extends MATH to the whole axis by setting MATH for MATH, it follows from the variational REF that MATH. Therefore, if MATH, then MATH and therefore MATH. This means that the operator MATH on MATH with the NAME condition at MATH has no negative eigenvalues. Therefore MATH does not have zeros in MATH if MATH. Thus MATH implies MATH. REF is proved.
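The variational principle used here, that the minimal eigenvalue is the minimum of the quadratic (Rayleigh) functional and can only decrease when the admissible space is enlarged, can be illustrated numerically; the matrix below is an arbitrary stand-in for the operator.

```python
import numpy as np

def rayleigh(A, x):
    """Rayleigh quotient (x, Ax) / (x, x); its minimum over nonzero x
    equals the smallest eigenvalue of the symmetric matrix A."""
    return float(x @ A @ x) / float(x @ x)

def min_rayleigh_sample(A, trials=200, seed=0):
    """Sample the Rayleigh quotient at random vectors. Every sample is an
    upper bound for the minimal eigenvalue, mirroring the 'consecutive
    minima of the quadratic functional' in the variational principle."""
    rng = np.random.default_rng(seed)
    return min(rayleigh(A, rng.standard_normal(A.shape[0])) for _ in range(trials))
```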
math-ph/0008002
This is an immediate consequence of the following theorem CITE: If MATH is holomorphic in MATH and MATH is of NAME in MATH, that is, MATH and MATH, where MATH, then MATH. The function MATH maps MATH conformally onto MATH, MATH and if MATH, then MATH is holomorphic in MATH, MATH for MATH and MATH, and MATH . REF follows from REF and the above theorem.
math-ph/0008002
Subtract from REF with MATH the same equation with MATH and get: MATH where MATH . Multiply REF by MATH, integrate over MATH, then integrate by parts on the left, and get MATH . By assumption, MATH if MATH, so MATH and MATH vanish at infinity. At MATH the left-hand side of REF vanishes since MATH . Thus REF implies REF .
math-ph/0008002
One can prove CITE that the kernel MATH of the transformation operator must solve the NAME problem MATH and conversely: the solution to this NAME problem is the kernel of the transformation operator REF . The difficulty in the study of this problem comes from the fact that the coefficients in front of the second derivatives degenerate at MATH, MATH. To overcome this difficulty, let us introduce new variables: MATH . Put MATH . Then REF becomes MATH where MATH is defined in REF and MATH . Note that MATH for any MATH and any MATH, where MATH is some constant. Let MATH . Write REF as MATH . Integrate REF with respect to MATH and use REF , and then integrate with respect to MATH to get: MATH . Consider REF in the NAME space MATH of continuous functions MATH defined for MATH, with the norm MATH where MATH is chosen so that the operator MATH is a contraction mapping in MATH. Let us estimate MATH: MATH where MATH is a constant which depends on MATH, and on MATH. If MATH, then MATH is a contraction mapping in MATH and REF has a unique solution in MATH for any MATH and MATH. Let us now prove that estimate REF holds for the constructed function MATH. One has MATH . The last inequality follows from the estimate: MATH where MATH and MATH are arbitrarily small numbers, MATH . The proof of REF is complete once REF is proved. Estimate REF holds. From REF one gets MATH where MATH and MATH . Without loss of generality we can take MATH in REF : if REF is derived from REF with MATH, it will hold for any MATH (with a different MATH in REF ). Thus, consider REF with MATH and solve this inequality by iteration. One has MATH . One can prove by induction that MATH . Therefore REF with MATH implies MATH . Consider MATH . This is an entire function of order MATH and type REF. Thus MATH . From REF estimate REF follows. REF is proved. REF is proved.
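The device of choosing the norm weight so that the integral operator becomes a contraction can be sketched on a model Volterra equation. All concrete choices below (the equation u = 1 + the integral of u from 0 to x on [0, 2], whose exact solution is e^x, and the weight exponent) are illustrative assumptions, not the objects of the proof: in the plain sup norm the integral operator has norm 2 here, but in the weighted norm sup of e^(-gx)|u(x)| its contraction ratio is 1/g.

```python
import numpy as np

n, L, g = 4001, 2.0, 4.0
x = np.linspace(0.0, L, n)
h = x[1] - x[0]
w = np.exp(-g * x)        # weight of the norm ||u||_g = sup w(x)|u(x)|

def T(u):
    # (Tu)(x) = 1 + integral of u from 0 to x, by the trapezoid rule
    I = np.concatenate(([0.0], np.cumsum((u[1:] + u[:-1]) * h / 2.0)))
    return 1.0 + I

u = np.zeros(n)
errs = []
for _ in range(30):
    u = T(u)
    errs.append(np.max(w * np.abs(u - np.exp(x))))

# the weighted error decays geometrically (ratio roughly 1/g per step)
assert errs[10] < errs[5]
assert errs[20] < 1e-5 * errs[0]
```

The decay of the weighted error, roughly by the factor 1/g per iteration, mirrors the contraction estimate that gives unique solvability of the integral equation above.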
math-ph/0008002
The idea of the proof is to consider MATH in REF as a parameter and to reduce REF to a NAME equation with constant integration limits and a kernel depending on the parameter MATH. Let MATH, MATH, MATH . Then REF can be written as MATH . REF is equivalent to REF ; it is a NAME equation with kernel MATH, which is an entire function of MATH and of the parameter MATH. The free term MATH is an entire function of MATH and MATH. This equation is uniquely solvable for all MATH by assumption. Therefore its solution MATH is an analytic function of MATH in a neighborhood of any point MATH, and it is an entire function of MATH CITE. Thus MATH is an analytic function of MATH in a neighborhood of the positive semiaxis MATH.
math-ph/0008002
First, we prove convergence of the process REF in MATH. The proof makes it clear that this process will converge in MATH and that in a finite number of steps one recovers MATH uniquely on CITE. Let MATH, MATH, MATH. Let us start with the following claim: the map MATH maps MATH into itself and is a contraction on MATH if MATH, MATH. Let MATH, MATH . One has: MATH . Here we have used the estimate MATH and the assumption MATH, MATH. If MATH, then MATH is a contraction on MATH. Let us check that the map MATH maps MATH into itself if MATH. Using the inequality MATH if MATH, MATH, one gets: MATH . Thus, if MATH, then the map MATH is a contraction on MATH in the space MATH. REF is proved. From REF it follows that process REF converges at the rate of a geometric progression with common ratio REF . The solution to REF is therefore unique in MATH. Since for the data MATH which come from a potential MATH the vector MATH solves REF in MATH, it follows that this vector satisfies REF . Thus, process REF allows one to reconstruct MATH on the interval from the data REF , MATH, where MATH is defined in REF . If MATH and MATH are found on the interval MATH, then MATH and MATH can be calculated for MATH. Now one can repeat the argument for the interval MATH, and in a finite number of steps recover MATH on the whole interval CITE. Note that one can use a fixed MATH if one chooses MATH so that REF holds for MATH defined by REF with any MATH. Such an MATH exists if MATH. REF is proved.
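The "rate of a geometric progression" convergence asserted for the process is quantified by the standard a priori bound for a contraction with ratio q, namely that the n-th iterate is within q^n/(1-q) times the first step of the fixed point. A minimal sketch; the map cos and the interval [0, 1] below are illustrative assumptions, not the operator of the proof.

```python
import math

T = lambda x: math.cos(x)   # a sample contraction on [0, 1]; an assumption
q = math.sin(1.0)           # Lipschitz constant: |T'(x)| <= sin(1) < 1 there

x0 = 0.5
x1 = T(x0)
d = abs(x1 - x0)
xs, x = [x1], x1
for _ in range(59):
    x = T(x)
    xs.append(x)
xstar = xs[-1]              # numerically the fixed point of cos

# a priori bound: |x_n - x*| <= q**n / (1 - q) * |x_1 - x_0| for all n
for n, xn in enumerate(xs, start=1):
    assert abs(xn - xstar) <= q ** n / (1.0 - q) * d + 1e-12
```

The same bound is what guarantees that, once the contraction constant on each subinterval is below one, finitely many iterations determine the solution to any prescribed accuracy.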
math-ph/0008002
It is sufficient to prove that, for any MATH, the function MATH . Since MATH, and since MATH provided that MATH (see REF ), it is sufficient to check that MATH . One has MATH, thus MATH where MATH . From REF one obtains REF since MATH. REF is proved.
math-ph/0008002
The proof goes as above with one difference: if MATH, then MATH is present in REF , and in REF with MATH one has MATH . Thus, using REF , one gets MATH where MATH is a constant. Similarly one checks that MATH if MATH. REF is proved.
math-ph/0008002
Write MATH . Clearly MATH . By the NAME theorem CITE, one has MATH . Actually, the NAME theorem yields MATH. However, since MATH, one can prove that MATH. Indeed, MATH and MATH are related by the equation: MATH which implies MATH or MATH where MATH is the convolution operation. Since MATH and MATH the convolution MATH. So, differentiating REF one sees that MATH, as claimed. From REF one gets: MATH where MATH is a constant defined in REF below, the constants MATH are defined in REF and the function MATH is defined in REF . We will prove that MATH (see REF ). To derive REF , we have used the formula: MATH and made the following transformations: MATH where MATH . Comparing REF one concludes that MATH . To complete the proof of REF one has to prove that MATH, where MATH is defined in REF . This is easily seen from the asymptotics of MATH as MATH. Namely, one has, as in REF : MATH . From REF it follows that MATH . From REF it follows that MATH. REF is proved.
math-ph/0008002
One has MATH . From REF one gets: MATH . If MATH, then MATH . Here by MATH we mean the right-hand side of REF , since MATH is, in general, not analytic in a disc centered at MATH: it is analytic in MATH and, in general, cannot be continued analytically into MATH. Let us assume MATH. In this case MATH is continuously differentiable in MATH. From the Wronskian formula MATH, taking MATH, one gets MATH . Therefore, if MATH and MATH, then MATH and MATH. One can prove CITE that if MATH, then MATH is bounded as MATH, MATH. From REF it follows that MATH . From REF one gets: MATH . Since MATH is a real-valued function if MATH is real-valued (this follows from the integral representation REF ), REF shows that MATH, and REF implies MATH. REF is proved.
math-ph/0008003
The idea of the proof in the MATH direction is as follows. One constructs a functor MATH by taking tensor products: on objects one has MATH for MATH, and on arrows one puts, in obvious notation, MATH. To go in the opposite direction, one repeats the above procedure, defining a functor MATH by means of MATH, etc. Using REF , it easily follows that MATH and MATH. In the MATH direction, one constructs MATH, given an equivalence functor MATH, by putting MATH. The left MATH action on MATH is turned into a left MATH action on MATH by definition of MATH, and the right MATH action on MATH is turned into a right MATH action on MATH through MATH, since MATH. Thus MATH. Similarly, define MATH. The definition of equivalence of categories then trivially implies that the isomorphisms in REF hold, with MATH. For details, compare no. REF .
math-ph/0008003
The MATH claim is part of ``NAME I", compare no. REF for REF, and REF for REF. The converse follows from nos. REF .
math-ph/0008003
This is essentially REF (Schweizer works with the category of MATH-algebras with equivalence classes of NAME bimodules as arrows, rather than with the bicategory whose arrows are the NAME bimodules themselves, but his proof may trivially be adapted to our situation). We are indebted to NAME for drawing our attention to this result. NAME in addition provided a second proof in case that the algebras are unital: combine REF with REF.
math-ph/0008003
This follows from a combination of REF above with REF. A direct proof would be desirable.
math-ph/0008003
The second part of the proposition, which at the same time proves the MATH claim in the first part, is proved by the argument following REF. For the MATH claim in the first part, we are given a regular bibundle MATH, with regular inverse MATH. This leads to two isomorphisms, as displayed in REF for rings. From MATH as MATH bibundles we infer that the left MATH action on MATH is proper (since the canonical left MATH action on MATH is), which by reductio ad absurdum implies that the left MATH action on MATH is proper. Since the target projection MATH is a surjective submersion, so is the map MATH, and therefore MATH must be a surjective submersion as well. In a similar vein, the isomorphism MATH as MATH bibundles implies that the right MATH action on MATH is free and transitive on the MATH-fibers. Together with the assumed regularity of the bibundle MATH, we have now proved both conditions in the proposition.
math-ph/0008003
This follows from REF .
math-ph/0008003
The equivalence of REF is REF. By REF , REF is equivalent to MATH in REF , which by REF means that there is an invertible bibundle MATH in REF . By REF this is equivalent to the invertibility of the associated symplectic bimodule MATH in CITE, which by REF is REF .
math-ph/0008010
Note that MATH, MATH, where MATH are defined in REF above. Without loss of generality take MATH, and let MATH. One has MATH . Choose MATH such that REF holds with an arbitrary fixed MATH. Then MATH . Since the set MATH is total in MATH, MATH, and MATH is a bounded domain, the conclusion of REF follows.
math-ph/0008010
The conclusion of REF follows from REF .
math-ph/0008010
The function MATH is analytic with respect to MATH and MATH on the variety REF . Therefore its values on MATH extend uniquely by analyticity to MATH. In particular MATH is uniquely determined in MATH. By REF one gets: MATH . By REF and by REF , the orthogonality relation REF implies MATH.
math-ph/0008010
The proof is the same as the proof of REF - REF and is based on the following estimate CITE, CITE: MATH . The proof of REF is not simple CITE. It is given in REF.
math-ph/0008010
Multiply both sides of REF by MATH, where MATH, MATH, MATH, and integrate with respect to MATH and MATH over MATH, to get: MATH . Choose MATH and MATH such that MATH where MATH, and note that MATH as MATH, MATH, MATH, MATH, where MATH is an arbitrarily large but fixed number. From REF one gets MATH where MATH. One can choose MATH and MATH such that (CITE, see also REF below) MATH where MATH stands for various different constants. Thus REF yields: MATH where MATH and MATH are some positive constants, MATH, MATH means that MATH is small and MATH means that MATH is large. However, our argument is valid for MATH and MATH. One gets MATH and the minimizer is MATH . From REF one gets REF .
math-ph/0008010
MATH . The rest of the proof consists of the following steps: CASE: We prove that MATH where the norm is defined in REF . This estimate and REF imply (see the proof of REF ) that MATH . CASE: We prove that MATH where MATH, and the pair MATH solves REF approximately in the sense specified above (see REF ). This estimate follows from REF and from the inequality MATH . Let us prove REF . One has MATH where MATH is given in REF and MATH. Using REF one gets MATH . Here we have used the estimate MATH which follows from REF and the NAME equality, and implies MATH . We also took into account that there are MATH spherical harmonics MATH with MATH, because MATH, and MATH. For large MATH one has MATH, MATH, so we write MATH, MATH. To estimate MATH, use REF - REF and get: MATH . Minimizing the function MATH with respect to MATH, one gets MATH where MATH is given in REF and MATH is defined by REF . Thus, from REF - REF one gets REF . REF is proved.
math-ph/0008010
Using REF , one gets: MATH . As stated below REF , one has MATH . From REF one gets: MATH . It follows from REF that MATH . From REF one gets MATH where we have used the monotone decrease of MATH as a function of MATH. Using estimate REF in order to estimate MATH, MATH, one gets: MATH . Minimization of the right-hand side of REF with respect to MATH yields, as in REF , an estimate similar to REF : MATH . Since MATH solves the equation MATH one can use the known elliptic estimate: MATH where MATH is a strictly inner subset of MATH and MATH, and get: MATH where MATH is any annulus MATH. By the embedding theorem, REF implies REF in MATH.
math-ph/0008010
As in the proof of REF , the data determine uniquely MATH on MATH so that MATH. If one has already proved that MATH, that is, MATH, then the boundary condition on MATH is uniquely determined, because the scattering solution MATH is uniquely determined by the scattering amplitude in MATH (and is analytically determined by REF in MATH). Thus, the limiting values of MATH on MATH are uniquely determined. If the limit is zero (almost everywhere on MATH), then MATH; if the limit is infinity (almost everywhere on MATH), then MATH; and if the limit is a function MATH, then MATH. Therefore the main point is to prove that MATH is uniquely determined by MATH. Let us prove this. Assume the contrary: MATH. Let MATH, MATH, MATH. Denote by MATH a connected component of MATH. We want to show that MATH is an empty set. An important tool is the formula CITE similar to REF : MATH . This formula holds for domains with finite perimeter. If MATH, then REF yields: MATH . From REF one derives: MATH . From REF and NAME 's formula one gets: MATH . This leads to a contradiction unless MATH. Indeed, if MATH, that is, if MATH is not empty, take a point MATH on the boundary of MATH which belongs to MATH. This point is an interior point for MATH. Thus MATH . On the other hand, since MATH one has MATH, that is, MATH if MATH, MATH if MATH, MATH if MATH. In all three cases MATH while MATH . From REF one gets a contradiction. REF is proved.
math-ph/0008010
Let us sketch the steps of the proof. CASE: MATH . This follows from the uniqueness REF and from the compactness of the set MATH in the space MATH, MATH. For simplicity of the presentation we assume that MATH, that is, there is just one patch in the covering of MATH; for example, MATH is star-shaped. CASE: There exists an integer MATH such that MATH where MATH is the distance from a point MATH to MATH; we assume that MATH, MATH, MATH, MATH, and MATH here and below stand for various constants independent of MATH and MATH, and we assume that MATH. The symbol MATH means that MATH with some constants MATH. Let us show that REF implies REF . From REF one gets, replacing MATH by MATH, dropping MATH and taking MATH: MATH . Recall that MATH stands for different constants. From REF one gets MATH . Since MATH as MATH and MATH, estimate REF implies MATH where we have used the definition of MATH, namely MATH, which implies MATH. Let us give the details of the proof. CASE: Assume that REF is false. Then MATH as MATH. Since MATH is a compact set in MATH, MATH, one can select sequences MATH and MATH which converge in MATH to MATH and MATH, respectively, as MATH. Since MATH depends continuously on MATH (see CITE, CITE) in the sense MATH where the limit is taken in the process MATH in MATH, MATH, one concludes that MATH for the limiting surfaces MATH and MATH. By the uniqueness REF it follows that MATH. Therefore MATH. However, MATH. This is a contradiction which proves REF . CASE: The function MATH, where MATH is the scattering solution corresponding to the obstacle MATH, MATH solves the equation MATH, satisfies the radiation condition MATH, and MATH . It is proved in CITE, vol. REF, p. REF, that solutions to elliptic second-order equations with smooth coefficients cannot have zeros of infinite order up to the boundary without vanishing identically. This implies the existence of an integer MATH for which the left inequality REF holds. The proof of the right inequality requires some preparations.
Let us sketch the steps of this proof. CASE: By the result of REF one gets estimate REF in the region MATH, MATH. Although in REF the functions MATH and MATH were the solutions to the NAME equations with potentials vanishing in MATH, the estimate REF is proved for any solutions to REF whose difference satisfies REF . In particular, estimate REF holds for our MATH. Let us define MATH, MATH. CASE: Let us prove the right inequality REF . Extend MATH from MATH into MATH so that an estimate similar to REF holds: MATH . This is possible: using, for example, the known NAME 's theorem, one can extend MATH into MATH since MATH is MATH-smooth, MATH, and then MATH will be the MATH-smooth extension of MATH from MATH into MATH. Define MATH, where MATH denotes the function MATH extended to MATH. Then MATH in MATH, MATH, and MATH . Denote MATH, MATH. Set MATH. Choose any point MATH, in a neighborhood of MATH, a connected component of MATH, such that MATH. By REF , MATH as MATH, so that MATH is small for small MATH. Consider the analytic continuation of MATH, defined by REF , to the complex MATH-plane (as in CITE). Let MATH be the angle between the vectors MATH and MATH, MATH. Since MATH, this analytic continuation, that is, the replacement MATH in the expression MATH, is possible if MATH and MATH. Choose a point MATH on MATH closest to MATH and a coordinate system in which the origin is at MATH and the MATH plane is tangent to MATH at the point MATH. Since MATH is sufficiently smooth, there exists a cone MATH with an opening MATH and vertex at MATH which belongs to MATH and whose axis passes through the point MATH. The function MATH admits analytic continuation in the MATH-plane from the ray MATH to the sector MATH. Since there are no points of MATH inside the cone MATH, the expression MATH does not vanish in the region MATH, MATH.
Therefore the function REF , which we denote by MATH, considered as a function of MATH, admits analytic continuation in the sector MATH and satisfies the following inequalities there: MATH where MATH . One can map the sector MATH conformally onto the half-plane MATH, MATH . Then MATH, where MATH is analytic in the half-plane MATH and satisfies there the inequalities MATH . The known two-constants theorem (see CITE) and REF imply: MATH where MATH is the harmonic measure corresponding to the domain MATH on the complex plane MATH with the boundary consisting of the lines MATH and MATH. Recall that MATH, the harmonic measure, is a harmonic function which solves the problem: MATH is bounded at infinity, MATH. By the maximum principle, MATH in MATH, and, by the NAME lemma (see CITE, p. REF), MATH . Thus MATH where MATH is a sufficiently small number. From REF it follows that in a sufficiently small neighborhood of the origin one has MATH. Returning to the MATH-variable one gets MATH where MATH stands for various constants. Since MATH, REF is identical to the right inequality REF . REF is proved.
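The two ingredients of this step, a conformal map of a sector onto a half-plane and the harmonic measure of a boundary ray, can be sketched concretely. A minimal illustration with assumed parameters (sector opening pi/3, the map w to w^(pi/theta), and the classical half-plane harmonic measure 1 - arg(z)/pi of the positive real axis); these are standard model choices, not the paper's specific domain.

```python
import cmath

theta = cmath.pi / 3                        # sector opening; an assumption
to_half_plane = lambda w: w ** (cmath.pi / theta)   # conformal map of the sector

w = 0.7 * cmath.exp(1j * theta / 4)         # a point inside the sector
z = to_half_plane(w)
assert 0.0 < cmath.phase(z) < cmath.pi      # it lands in the upper half-plane

# harmonic measure of the ray arg z = 0, seen from z: here arg z = pi/4
omega = 1.0 - cmath.phase(z) / cmath.pi
assert abs(omega - 0.75) < 1e-9

# discrete mean-value test: omega(z) = 1 - arg(z)/pi is harmonic away from 0
h, zc, m = 1e-3, 1.0 + 1.0j, 16
avg = sum(1.0 - cmath.phase(zc + h * cmath.exp(2j * cmath.pi * k / m)) / cmath.pi
          for k in range(m)) / m
assert abs(avg - (1.0 - cmath.phase(zc) / cmath.pi)) < 1e-9
```

With this harmonic measure in hand, the two-constants theorem interpolates the bounds on the two boundary rays, exactly as in the estimate above.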
math-ph/0008012
If MATH: MATH is compact, then MATH is a composition of a bounded linear operator MATH and a compact operator MATH, so MATH is compact.
math-ph/0008012
Choose a sequence MATH such that MATH. Denote by MATH the projection MATH. If MATH is compact then MATH implies MATH, and for any MATH there exists a subsequence MATH and a number MATH such that MATH for any MATH. Without loss of generality we can suppose that the sequence MATH is a subsequence of MATH and MATH for MATH . Therefore MATH for any MATH and for any MATH . For the subsequence MATH REF implies MATH. By the choice of the subsequence MATH this implies MATH for any MATH . We have proved the convergence of MATH in MATH. Because the original sequence MATH was arbitrary, the compactness of the operator MATH is proved.
math-ph/0008012
One takes the NAME domain MATH such that MATH. By the known embedding theorem for NAME domains the embedding MATH is compact. Since MATH, one obtains the conclusion of the lemma.
math-ph/0008012
It is obvious that MATH is a standard elementary domain. Fix MATH. The open set MATH is a finite union of domains MATH and domains MATH, MATH, MATH, MATH. Join any two points MATH, MATH by a smooth curve MATH and any pair MATH, MATH by a smooth curve MATH . The set MATH is a closed NAME curve that is the boundary of a NAME domain MATH. By construction MATH . Therefore MATH is a standard elementary domain of the class MATH.
math-ph/0008012
Because MATH is continuous in MATH, for any MATH the open set MATH is a finite union of domains of the same type as in REF . Therefore the domain MATH is a standard elementary domain of the class MATH.
math-ph/0008012
Since smooth functions are dense in MATH, it is sufficient to prove the desired estimate only for smooth functions MATH. Integrating the inequality MATH with respect to MATH over the segment MATH and using the NAME inequality, we obtain MATH . For any normed space MATH and any MATH the following inequality holds: MATH . Combining this inequality with the previous one, we obtain MATH . Because MATH, we finally obtain MATH .
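The two steps of this proof, averaging the pointwise identity over the segment and applying the Cauchy-Schwarz inequality, yield an interpolation bound of the standard form sup|u|^2 <= L^(-1)||u||^2 + 2||u|| ||u'|| (L^2 norms on [0, L]; the constants in the masked estimate may differ). A minimal numerical check on a sample function; the function and the interval are assumptions for illustration.

```python
import numpy as np

L, n = 3.0, 20001
x = np.linspace(0.0, L, n)
h = x[1] - x[0]
u = np.sin(2.0 * x) * np.exp(-x) + 0.3 * x      # a sample smooth function
du = np.gradient(u, x)                           # numerical derivative

def l2(f):
    # trapezoid-rule L^2 norm on [0, L]
    return np.sqrt(np.sum((f[:-1] ** 2 + f[1:] ** 2) * h / 2.0))

lhs = np.max(np.abs(u)) ** 2
rhs = l2(u) ** 2 / L + 2.0 * l2(u) * l2(du)
assert lhs <= rhs
```

Applying the elementary inequality 2ab <= eps*a^2 + b^2/eps to the cross term then gives the weighted form of the estimate, as in the final step above.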
math-ph/0008012
Using REF , one gets: MATH .