math/0103224
Let MATH, MATH, and MATH be the thickness, reach, and normal injectivity radius of MATH. We will show that MATH. Suppose some point MATH has two nearest neighbors MATH and MATH at distance MATH. Thus MATH is tangent at MATH and MATH to the sphere around MATH, so a nearby sphere cuts MATH four times, giving MATH. Similarly, suppose some MATH is on two normal circles of MATH of radius MATH. This MATH has two neighbors on MATH at distance MATH, so MATH. We know that MATH is less than the infimal radius of curvature of MATH. Furthermore, the midpoint of a chord of MATH realizing the infimal doubly self-critical distance of MATH is on two normal disks of MATH. Using REF , this shows that MATH, completing the proof.
math/0103224
This follows immediately from the definition, since MATH is a continuous function (from the set of triples of distinct points in space) to MATH. For, if curves MATH approach MATH, and MATH nearly realizes the thickness of MATH, then nearby triples of distinct points bound from above the thicknesses of the MATH.
math/0103224
We must prove that MATH has NAME constant MATH on MATH; it then has a NAME extension. By the triangle inequality, it suffices to prove, for any fixed MATH, that MATH whenever MATH and MATH are sufficiently close along MATH. Setting MATH, we have MATH using the law of sines and the definition MATH.
math/0103224
The two points MATH and MATH must be on the same component of MATH, and one of the arcs of MATH connecting them is contained in the ball with diameter MATH. By REF , the curvature of MATH is less than MATH. Thus by NAME 's lemma, the length of this arc of MATH is at most MATH, as claimed. Note that NAME 's proof CITE of NAME 's lemma for space curves, while stated only for MATH curves, applies directly to MATH curves, which have NAME tantrices on the unit sphere. (As NAME notes, the lemma actually applies even to curves with corners, when correctly interpreted.)
math/0103224
To show MATH convergence, we will show that the secant maps of the MATH converge (in MATH) to the secant map of MATH. Note that when we talk about convergence of the secant maps, we view them (in terms of constant-speed parametrizations of the MATH) as maps from a common domain. Since these maps are uniformly NAME, it suffices to prove pointwise convergence. So consider a pair of points MATH, MATH in MATH. Take MATH. For large enough MATH, MATH is within MATH of MATH in MATH, and hence the corresponding points MATH, MATH in MATH have MATH and MATH. We have moved the endpoints of the segment MATH by relatively small amounts, and expect its direction to change very little. In fact, the angle MATH between MATH and MATH satisfies MATH. That is, the distance in MATH between the points MATH and MATH is given by MATH. Therefore, the secant maps converge pointwise, which shows that the MATH converge in MATH to MATH. Since the limit link MATH has thickness at least MATH by REF , it is surrounded by an embedded normal tube of diameter MATH. Furthermore, all (but finitely many) of the MATH lie within this tube, and by MATH convergence are transverse to each normal disk. Each such MATH is isotopic to MATH by a straight-line homotopy within each normal disk.
math/0103224
Consider the compact space of all MATH curves of length at most MATH. Among those isotopic to a given link MATH, find a sequence MATH supremizing the thickness. The lengths of MATH approach MATH, since otherwise rescaling would give thicker curves. Also, the thicknesses approach some MATH, the reciprocal of the infimal ropelength for the link type. Replace the sequence by a subsequence converging in the MATH norm to some link MATH. Because length is lower semicontinuous, and thickness is upper semicontinuous (by REF ), the ropelength of MATH is at most MATH. By REF , all but finitely many of the MATH are isotopic to MATH, so MATH is isotopic to MATH. By REF , tight links must be MATH, since they have positive thickness.
math/0103224
We may assume that MATH has nonnegative geodesic curvature almost everywhere. If not, we simply replace it by the boundary of its convex hull within MATH, which is well-defined since MATH has nonpositive curvature. This boundary still surrounds MATH at unit distance, is MATH, and has nonnegative geodesic curvature. For MATH, let MATH denote the inward normal pushoff, or parallel curve to MATH, at distance MATH within the cone. Since the geodesic curvature of MATH is bounded by MATH, these are all smooth curves, surrounding MATH and hence surrounding the cone point. If MATH denotes the geodesic curvature of MATH in MATH, the formula for first variation of length is MATH where the last equality comes from NAME - NAME, since MATH is intrinsically flat except at the cone point. Thus MATH; since MATH surrounds MATH for every MATH, it has length at least MATH, and we conclude that MATH.
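The first-variation computation above has the following standard shape (stated here in generic notation, not the paper's masked symbols; the cone is flat away from its apex, so Gauss-Bonnet reduces the curvature integral to a constant):

```latex
% First variation of length for the inward parallel curves c_t on the
% flat cone, combined with Gauss-Bonnet (generic notation): for a curve
% surrounding the apex,
\[
  \frac{d}{dt}\,\operatorname{Len}(c_t)
  \;=\; -\int_{c_t} \kappa_g \, ds
  \;=\; -\,\Theta,
\]
% where Theta is the cone angle at the apex. Hence Len(c_t) decreases
% linearly in t, which is what converts the lower bound on the lengths
% of the parallel curves into the stated bound on the original curve.
```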
math/0103224
Recall that the cone angle at MATH is given by the length of the radial projection of MATH onto the unit sphere centered at MATH. If we choose MATH on a chord of MATH, this projection joins two antipodal points, and thus must have length at least MATH. On any doubly critical chord (for instance, the longest chord) the point MATH at distance MATH from either endpoint must lie outside the thick tube, by REF . Note that the cone angle approaches MATH at points far from MATH. The cone angle is a continuous function on the complement of MATH in MATH, a connected set. When MATH has positive thickness, even the complement of its thick tube is connected. Thus if the cone angle at MATH is greater than MATH, the intermediate value theorem lets us choose some MATH (outside the tube) from which the cone angle is exactly MATH. REF shows such a cone on a trefoil knot.
math/0103224
By REF we can find a point MATH in space, outside the unit-radius tube surrounding MATH, so that coning MATH to MATH gives a cone of cone angle MATH, which is intrinsically flat. Each of the sublinks MATH nontrivially linked to MATH must puncture this spanning cone in some point MATH. Furthermore, the fact that the link has unit thickness implies that the MATH are separated from each other and from MATH by distance at least MATH in space, and thus by distance at least MATH within the cone. Thus in the intrinsic geometry of the cone, the MATH are surrounded by disjoint unit-radius disks, and MATH surrounds these disks while remaining at least unit distance from them. Since MATH has unit thickness, it is MATH with curvature bounded above by MATH. Since the geodesic curvature of MATH on the cone surface is bounded above by the curvature of MATH in space, we can apply REF to complete the proof.
math/0103224
As in the proof of REF , we apply REF to show that we can find an intrinsically flat cone surface MATH bounded by MATH. We know that MATH is surrounded by an embedded unit-radius tube MATH; let MATH be the portion of the cone surface outside the tube. Each component of MATH is also surrounded by an embedded unit-radius tube disjoint from MATH. Let MATH be the MATH unit vectorfield normal to the normal disks of these tubes. A simple computation shows that MATH is a divergence-free field, tangent to the boundary of each tube, with flux MATH over each spanning surface inside each tube. A cohomology computation (compare CITE) shows that the total flux of MATH through MATH is MATH. Since MATH is a unit vectorfield, this implies that MATH . Thus MATH. The isoperimetric inequality within MATH implies that any curve on MATH surrounding MATH has length at least MATH. Since MATH has unit thickness, the hypotheses of REF are fulfilled, and we conclude that MATH completing the proof.
math/0103224
To prove the symmetry assertions, take any planar projection with MATH crossings of MATH over MATH. Turning the plane over, we get a projection with MATH crossings of MATH over MATH; the signs of the crossings are unchanged. The last two statements are immediate from the definitions in terms of signed and unsigned sums.
math/0103224
By the definition of parallel overcrossing number, we can isotope MATH and its parallel MATH so that, except for MATH simple clasps, MATH lies above, and MATH lies below, a slab in MATH. Next, we can use the embedded annuli which cobound corresponding components of MATH and MATH to isotope the part of MATH below the slab to the lower boundary plane of the slab. This gives a presentation of MATH with MATH bridges, as in REF .
math/0103224
Let MATH be the union of MATH and the exterior cone on MATH from MATH. Consider the area ratio MATH, where MATH is the ball of radius MATH around MATH in MATH. As MATH, the area ratio approaches MATH, the number of sheets of MATH passing through MATH; as MATH, the ratio approaches the density of the cone on MATH from MATH, which is the cone angle divided by MATH. NAME has shown that the monotonicity formula for minimal surfaces continues to hold for MATH in this setting CITE: the area ratio is an increasing function of MATH. Comparing the limit values at MATH and MATH we see that the cone angle from MATH is at least MATH.
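For reference, the monotonicity formula invoked here has the following classical shape (generic symbols, not the paper's masked notation; the text extends this to the cone-augmented surface):

```latex
% Classical monotonicity for a minimal surface S through a point p:
\[
  r \;\longmapsto\;
  \frac{\operatorname{Area}\bigl(S \cap B_r(p)\bigr)}{\pi r^{2}}
  \quad \text{is nondecreasing in } r,
\]
% so comparing the limit as r -> 0 (the number of sheets through p)
% with the limit as r -> infinity (the density of the cone at infinity)
% bounds the cone angle from below.
```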
math/0103224
By the solution to the classical NAME problem, each component of MATH bounds some minimal disk. Let MATH be the union of these disks. Since MATH is nontrivially linked, MATH is not embedded: it must have a self-intersection point MATH. By the lemma, the cone angle at MATH is at least MATH.
math/0103224
Replacing MATH with a slightly bigger smooth solid torus if necessary, we may assume that MATH is transverse to the boundary torus MATH of MATH. The intersection MATH is then a union of closed curves. If there is a self-intersection, we are done. Otherwise, MATH is a disjoint union of simple closed curves, homologous within MATH to the core curve MATH (via the surface MATH). Hence, within MATH, its homology class MATH is the latitude plus some multiple of the meridian. Considering the possible arrangements of simple closed curves in the torus MATH, we see that each intersection curve is homologous to zero or to MATH. Our strategy will be to first eliminate the trivial intersection curves, by surgery on MATH, starting with curves that are innermost on MATH. Then, we will find an essential intersection curve which is innermost on MATH: it is isotopic to MATH and bounds a subdisk of MATH outside MATH, which must have self-intersections. To do the surgery, suppose MATH is an innermost intersection curve homologous to zero in MATH. It bounds a disk MATH within MATH and a disk MATH within MATH. Since MATH is an innermost curve on MATH, MATH is empty; therefore we may replace MATH with MATH without introducing any new self-intersections of MATH. Push MATH slightly off MATH to simplify the intersection. Repeating this process a finite number of times, we can eliminate all trivial curves in MATH. The remaining intersection curves are each homologous to MATH on MATH and thus isotopic to MATH within MATH. These do not bound disks on MATH, but do on MATH. Some such curve MATH must be innermost on MATH, bounding an open subdisk MATH. Since MATH is nontrivial in MATH, and MATH is empty, the subdisk MATH must lie outside MATH. Because MATH is knotted, MATH must have self-intersections, clearly outside MATH. Since we introduced no new self-intersections, these are self-intersections of MATH as well.
math/0103224
Span MATH with a minimal disk MATH, and let MATH be a sequence of closed tubes around MATH, of increasing radius MATH. Applying REF , MATH must necessarily have a self-intersection point MATH outside MATH. Using REF , the cone angle at MATH is at least MATH. Now, cone angle is a continuous function on MATH, approaching zero at infinity. So the MATH have a subsequence converging to some MATH, outside all the MATH and thus outside the thick tube around MATH, where the cone angle is still at least MATH.
math/0103224
Let MATH be the thick tube (the unit-radius solid torus) around MATH, and let MATH be the MATH unit vectorfield inside MATH as in the proof of REF . Using REF , we construct a cone surface MATH of cone angle MATH from a point MATH outside MATH. Let MATH be the cone defined by deleting a unit neighborhood of MATH in the intrinsic geometry of MATH. Take any MATH farthest from the cone point MATH. The intersection of MATH with the unit normal disk MATH to MATH at MATH consists only of the unit line segment from MATH towards MATH; thus MATH is disjoint from MATH. In general, the integral curves of MATH do not close. However, we can define a natural map from MATH to the unit disk MATH by flowing forward along these integral curves. This map is continuous and distance-decreasing. Restricting it to MATH gives a distance-decreasing (and hence area-decreasing) map to MATH, which we will prove has unsigned degree at least MATH. Note that MATH is isotopic to MATH within MATH, and thus MATH. Furthermore, each integral curve MATH of MATH in MATH can be closed by an arc within MATH to a knot MATH parallel to MATH. In the projection of MATH and MATH from the perspective of the cone point, MATH must overcross MATH at least MATH times. Each of these crossings represents an intersection of MATH with MATH. Further, each of these intersections is an intersection of MATH with MATH, since the portion of MATH not in MATH is contained within the disk MATH. This proves that our area-decreasing map from MATH to MATH has unsigned degree at least MATH. (An example of this map is shown in REF .) Since MATH it follows that MATH . The isoperimetric inequality in a MATH cone is affected by the negative curvature of the cone point. However, the length MATH required to surround a fixed area on MATH is certainly no less than that required in the Euclidean plane: MATH . Since each point on MATH is at unit distance from MATH, we know MATH is surrounded by a unit-width neighborhood inside MATH. 
Applying REF we see that MATH which by REF is at least MATH.
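The planar comparison used above is the classical isoperimetric inequality (stated here in generic notation; the nonpositive curvature concentrated at the cone point only helps):

```latex
% Euclidean isoperimetric inequality, used as the comparison: a closed
% plane curve of length L enclosing area A satisfies
\[
  L^{2} \;\ge\; 4\pi A .
\]
% On a cone of angle at least 2*pi, the length required to enclose a
% given area is no smaller than in the Euclidean plane, so the same
% bound applies to the curves on the cone surface.
```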
math/0103224
As before, we use REF or REF to construct a cone surface MATH of cone angle MATH or MATH. We let MATH be the complement of a unit neighborhood of MATH, and set MATH, isotopic to MATH. Our goal is to bound the area of MATH below. As before, take the collection MATH of embedded tubes surrounding the components of MATH, and let MATH be the MATH unit vectorfield normal to the normal disks of MATH. Fix some component MATH of MATH (where MATH may be the same as MATH), and any normal disk MATH of the embedded tube MATH around MATH. The flow of MATH once around the tube defines a map from MATH to MATH. The geometry of MATH implies that this map is an isometry, and hence this map is a rigid rotation by some angle MATH. Our first claim is that we can make a MATH-small perturbation of MATH which ensures that MATH is a rational multiple of MATH. Fix a particular integral curve of MATH. Following this integral curve once around MATH defines a framing (or normal field) on MATH which fails to close by angle MATH. If we define the twist of a framing MATH on a curve MATH by MATH it is easy to show that this framing has zero twist. We can close this framing by adding twist MATH, defining a framing MATH on MATH. If we let MATH be the writhe of MATH, then the NAME - NAME formula CITE tells us that MATH, where MATH is a normal pushoff of MATH along MATH. Since the linking number MATH is an integer, this means that MATH is a rational multiple of MATH if and only if MATH is rational. But we can alter the writhe of MATH to be rational with a MATH-small perturbation of MATH (see CITE for details), proving the claim. So we may assume that, for each component MATH of MATH, MATH is a rational multiple MATH of MATH. Now let MATH be the least common multiple of the (finitely many) MATH. We will now define a distance- and area-decreasing map of unsigned degree at least MATH from the intersection of MATH and the cone surface MATH to a sector of the unit disk of angle MATH. 
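The writhe identity used in the rationality argument above is presumably the Calugareanu-White formula; since the citation is masked, we record its standard form here for reference (generic symbols, not the paper's):

```latex
% Standard link-twist-writhe relation (assumed form of the masked
% "NAME - NAME formula"): for a framed curve K with framing pushoff K',
\[
  \operatorname{Lk}(K, K') \;=\; \operatorname{Tw}(K) + \operatorname{Wr}(K).
\]
% Since Lk is an integer, the twist is rational exactly when the writhe
% is rational, which is why a small perturbation making the writhe
% rational suffices above.
```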
Any integral curve of MATH must close after MATH trips around MATH. Thus, the link MATH defined by following the integral curves through MATH points spaced at angle MATH around a normal disk to MATH is a degree-MATH satellite of MATH. Further, if we divide a normal disk to MATH into sectors of angle MATH, then MATH intersects each sector once. We can now define a distance-decreasing map from MATH to the sector by projecting along the integral curves of MATH. Letting MATH be the union of all the integral curves MATH, and identifying the image sectors on each disk gives a map from MATH to the sector. By the definition of MATH, MATH so MATH overcrosses MATH at least MATH times. Thus we have at least MATH intersections between MATH and MATH, as in the proof of REF . Since the sector has area MATH, this proves that the cone MATH has area at least MATH, and thus perimeter at least MATH. The theorem then follows from REF as usual.
math/0103224
By REF , there exists some cone point MATH for which the cone of MATH to MATH has cone angle MATH. Unrolling the cone onto the plane is an isometry, and so produces a plane curve MATH of the same arclength. Further, each chord length of MATH is a distance measured in the intrinsic geometry of the cone, which is at least the corresponding distance in MATH.
math/0103231
This result is a version of a well-known fact in the projective case (see for example REF). Because our spaces are not proper, using the adjunction theorems in our context requires more care. We first prove that MATH where the labels for the different copies of MATH will help with the bookkeeping. We first note that MATH with the appropriate object MATH . The fiber square MATH shows that MATH where MATH is the projection map MATH . By the projection formula, we then obtain that MATH where the proper maps MATH and MATH are defined as the compositions MATH . By adjunction, we obtain that MATH . We have also used the fact that, since the projections MATH are flat, and MATH the object MATH is isomorphic to a bounded complex of locally free sheaves of finite rank (even though MATH might be singular). The fiber square MATH with MATH flat and MATH proper, and the base change theorem (REF, or REF) shows that MATH . By combining this fact with REF and the adapted version of REF , we see that MATH . Recall that MATH . Hence, we can write that MATH since MATH where we use REF (MATH is flat) and the flat base change theorem quoted above. Combining this fact with REF , we see that we have indeed proved that MATH . The proof of the second isomorphism of REF is completely analogous. The proof of the isomorphisms of REF is also very similar, but the existence of the invertible sheaf MATH on MATH enters the argument in a crucial way. For the first isomorphism of REF the only change occurs in REF as follows: MATH . The proof of the second isomorphism of REF is again analogous.
math/0103231
The exact functor MATH is induced by the kernel MATH . Consider the fiber square (with MATH flat) MATH . Therefore MATH . By the projection formula, we can then write MATH . But MATH and MATH . Hence MATH . Set MATH . Then MATH with MATH the diagonal embedding, and it follows that MATH . Now note that there exists a NAME system in MATH of the type REF with MATH and MATH . On MATH these sheaves are the NAME MATH and can be computed with the help of a local NAME resolution of MATH in MATH (see, for example, VII. REF). It follows that MATH . The lemma follows, since this NAME system on MATH is mapped by an exact functor into the required NAME system on MATH .
math/0103231
Note that the first isomorphisms in each of the three parts of the proposition follow directly from REF , while the isomorphisms in the second groups of each part follow easily provided we prove the second isomorphisms in each part. The notation follows REF . CASE: By adjunction, we have that MATH . Since MATH the projection formula gives MATH . On the other hand MATH . Note that for MATH there exists a NAME system in MATH of the type REF with MATH and MATH . Indeed, on MATH these sheaves are in fact the NAME MATH . Since MATH we have that MATH so MATH for MATH . On MATH these sheaves are again the NAME MATH . Therefore, we see that MATH where MATH is the diagonal embedding. Our goal is to apply REF for the cohomological functor MATH induced by the right hand side of REF , MATH where MATH with the NAME system described above, and MATH . The hypotheses of REF require us to show that the group MATH is zero for MATH and MATH . To prove this claim, note first that the projection formula implies that MATH and the already quoted flat base change theorem (REF, or REF) shows that MATH . We can then apply adjunction for the pair of functors MATH associated to the diagonal of MATH to obtain that the group REF is isomorphic to MATH . Now consider the following commutative diagram MATH where MATH . Since MATH is flat, the base change theorem for the above fiber square allows us to write MATH . By adjunction, we conclude that the group REF is isomorphic to MATH . Since MATH is MATH - spherical, we have that MATH for MATH . For MATH REF shows that MATH the group above is zero if MATH that is when MATH . This proves the above claim that the group REF is zero for MATH and MATH . Therefore, REF indeed implies that MATH is isomorphic to the group obtained by setting MATH in REF , that is MATH which is isomorphic (for a MATH - spherical object MATH) to MATH .
Note that in the case MATH the same conclusion is obtained without having to use REF , since in this case the MATH-condition implies the existence of a distinguished triangle of the form MATH . MATH . The proof of the second isomorphism in this part is almost identical to the previous argument; REF has to be replaced by MATH since, by REF MATH . MATH . We now proceed with the proof of the second isomorphism in this part. We have that MATH with MATH defined by MATH . According to REF , there exists a NAME system with MATH and MATH for MATH where MATH is the normal sheaf of MATH in MATH . In order to use again part MATH of REF for the cohomological functor MATH induced by the right hand side of REF , MATH with MATH we need to show that MATH is zero for MATH and MATH . Moreover, the NAME formula implies that it is enough to show that the groups MATH are zero for MATH . Here the normal sheaves MATH and MATH give the decomposition of the normal sheaf MATH along the two directions corresponding to the embeddings MATH . After regrouping the various tensor products we obtain that the group REF is isomorphic to MATH . By REF , we have that MATH so adjunction and the projection formula imply that the group above is isomorphic to MATH . Since MATH is flat, the NAME formula for the interior fiber square in REF says that MATH for any MATH . Indeed, MATH . Hence, we have shown that the group REF is isomorphic to MATH . Clearly, if MATH is MATH - spherical, and MATH the group is MATH which is zero for MATH . We also have to consider the case MATH . Then the group is MATH which is again zero for MATH . In all the other situations with MATH the group is zero. It is immediate then that REF implies the required isomorphism MATH for MATH . As in the proof of REF the case MATH from the above calculation follows without invoking REF .
math/0103231
Apply the cohomological functor MATH to the distinguished triangle MATH and look at the following piece of the resulting long exact sequence: MATH . Note that, since MATH is MATH - spherical, by MATH and MATH of REF , the last two groups on the right are isomorphic to MATH (and in fact to the field MATH). The morphism MATH was chosen to correspond to the identity in the group MATH which shows that the induced homomorphism MATH is in fact an isomorphism. Since part MATH of the previous proposition shows that the group MATH is zero, we obtain indeed that MATH is zero. The proof of the other half of the proposition is completely analogous.
math/0103231
(of the lemma) We need to show that MATH . Consider the fiber square (with MATH flat) MATH where we have used subscripts to distinguish between different copies of MATH . We have that MATH . By the projection formula, we can write MATH . By using another fiber square (with MATH flat) MATH . Hence MATH and again by the projection formula we obtain that the right hand side of REF is isomorphic to MATH since MATH . This ends the proof of the lemma.
math/0103231
Write MATH with MATH . By REF and the associativity of the composition of correspondences we can write that MATH . We can now interpret the last line of the previous formula as an exact functor MATH given by MATH . REF shows that there exists a NAME system of the type REF with MATH and MATH for MATH where MATH is the diagonal embedding. In order to apply REF , we need to examine the action of the functor MATH on the cohomology sheaves MATH . We have that MATH where we have used REF , and the fact that MATH (see REF ). We have that MATH . But MATH . Since MATH after regrouping the terms, we obtain that MATH . Since MATH is MATH - spherical, the last line in the above formula is zero unless MATH . For MATH the last line is MATH and for MATH it is, by REF , MATH since MATH by the starting REF made on MATH . Part MATH of REF then implies that there exists a distinguished triangle MATH . Note that the proof also works in the case MATH . In that case, REF is not needed, and the above calculation shows that MATH and the distinguished triangle REF defining the MATH - condition in this case finishes the argument. The proof of the existence of the second distinguished triangle is very similar, and requires reversing the roles of MATH and MATH , the replacement of REF by MATH , and the use of the required analog of REF , namely MATH .
math/0103231
For the first isomorphism, note that the operation MATH with one argument fixed is an exact functor of triangulated categories, so REF provides the triangle MATH . We also consider the distinguished triangle of REF MATH . We start with a lemma. The morphism MATH is an isomorphism. (of the lemma) Since MATH is MATH - spherical, REF MATH implies that MATH so it is enough to show that the morphism MATH is non - zero. In fact, we will show that their composition MATH induces group homomorphisms of Hom groups (all isomorphic to MATH) by REF MATH that are isomorphisms, where REF MATH shows that MATH and REF MATH imply that MATH . In order to show that the composition MATH is non-zero, we will ``probe'' it with the help of the cohomological functor MATH . First, we apply this functor to the distinguished triangle REF , and we look at a piece of the resulting long exact sequence of groups. MATH . But by REF MATH the leftmost group is zero, hence the morphism MATH is in fact an isomorphism MATH . We now apply the same cohomological functor to the distinguished triangle REF and investigate a piece of the corresponding long exact sequence of groups. MATH . REF implies that MATH and REF shows that the group is zero. Hence the morphism MATH is an isomorphism MATH which finishes the proof of the lemma. The lemma shows that we can consider the following ``REF - diagram'' (page REF) which is essentially a version of the octahedron axiom in triangulated categories. MATH . The starting point is the commutative square located in the lower left corner. The ``REF - diagram'' proposition REF shows that we can fill in the diagram, so MATH . This ends the proof of the first isomorphism of the proposition. The proof of the second one is very similar. Of course, we have to replace the distinguished triangles REF by MATH and then show that the morphism MATH is an isomorphism.
Everything works as above; the only modification is that the isomorphism REF has to be replaced by MATH which is true due to REF . We are now in a position to finish the proof of the theorem. The argument is similar to the one used to prove the previous proposition. Consider the distinguished triangle REF defining MATH with a choice of a (non - canonical) morphism MATH . Since MATH is MATH - spherical, we apply the cohomological functor MATH to this distinguished triangle and write a relevant piece of the associated long exact sequence: MATH . By REF MATH we know that MATH and by REF MATH REF MATH (in this order), we have that MATH . We conclude that the morphism MATH is a generator of this group. Let us now look at the distinguished triangle MATH obtained by applying the exact functor MATH to the distinguished triangle REF defining MATH . Due to REF , we have that MATH so MATH by REF . We claim that the morphism MATH is a generator of this group (that is, non - zero). Indeed, after applying the cohomological functor MATH to the distinguished triangle REF , we can write a relevant piece of the resulting long exact sequence as follows: MATH . REF MATH show that MATH . Moreover, we can write that MATH where the first and the last isomorphisms follow from REF , and the middle one from REF , which says that the last group is zero; this indeed proves that MATH is non - zero. In other words, we have shown that the morphisms MATH and MATH coincide up to an isomorphism. We can then look at the diagram MATH . We have just argued that the right hand square is commutative. REF of a triangulated category implies the existence of the dotted morphism. Another well-known property of triangulated categories (see, for example, REF , page REF) shows that it is also an isomorphism, which concludes the proof of REF .
math/0103234
The idea is to construct monic polynomials MATH with integer coefficients and squarefree discriminant MATH, so that MATH is unramified over MATH. To do so, we construct MATH and set MATH. Then the discriminant MATH of MATH factors as MATH. Since each factor is linear in MATH, a simple sieving argument shows that the discriminant is squarefree for a positive proportion of tuples MATH in suitable ranges. In order to make this program work, we must add constraints on the MATH and MATH to ensure that various conditions are met. We first make sure that MATH has integer coefficients. It suffices to require that MATH be of the form MATH with MATH an integer coprime to MATH but divisible by MATH and by all primes less than MATH not dividing MATH, that MATH be integers divisible by MATH, and that MATH be an integer coprime to MATH. Now MATH and so the coefficient of MATH is divisible by MATH for MATH. Consequently, MATH has integer coefficients. Next, we force MATH to have a specific number of real roots, which determines the number of real embeddings of the field MATH. By homogeneity, the number of real roots of MATH depends only on the tuple MATH. Thus we can construct intervals MATH and MATH such that if MATH for all MATH and MATH for some MATH, then MATH has the desired number of real roots. Next, we ensure that the splitting field of MATH over MATH has NAME group MATH. We do this by imposing congruence conditions modulo some auxiliary primes. Pick any degree MATH polynomial MATH over MATH with NAME group MATH such that the splitting fields of MATH and MATH are linearly disjoint. Then by the NAME Density Theorem, there exist infinitely many primes MATH and MATH such that MATH factors completely modulo MATH and MATH, while MATH is irreducible over MATH and factors into one quadratic factor and MATH linear factors over MATH. We may impose congruence conditions on the MATH and MATH so that MATH, which forces MATH to have NAME group MATH over MATH.
Let us rewrite the factorization of MATH as MATH . The first term can be written as MATH plus a multiple of MATH, so is coprime to MATH; the remaining terms are each MATH plus a multiple of MATH, so are also coprime to MATH. Hence MATH is coprime to MATH. Also, if MATH is a prime less than MATH not dividing MATH, then MATH and so none of the factors of MATH is divisible by MATH either. For future convenience, we restrict MATH to a very special form. We require them to be of the form MATH for MATH fixed once and for all and MATH a prime. Then by homogeneity, we can write MATH for some integers MATH. By imposing congruence conditions on MATH and MATH modulo the primes dividing MATH, we may ensure that no prime except possibly MATH divides more than one of the factors MATH. Finally, we ensure that each factor of MATH is squarefree; this step is analogous to the proof that MATH of the positive integers are squarefree. (We have followed CITE in this stage of the argument.) Fix MATH and pick a prime MATH such that MATH for all MATH; we will sieve over integers MATH such that MATH. As noted above, the only prime that can divide more than one of the factors MATH and MATH is MATH. Thus we must exclude MATH of the possible values MATH in the range of interest. Under this restriction, the factors MATH are pairwise coprime, so it suffices to ensure that each one is squarefree for a positive proportion of MATH among the values of interest. Let MATH denote the set of MATH for which MATH and no MATH is divisible by MATH. Let MATH, let MATH denote the number of MATH such that each MATH is squarefree, let MATH denote the number of MATH such that no MATH is divisible by the square of any prime less than MATH, and let MATH denote the number of MATH such that MATH is divisible by the square of a prime greater than MATH. These are related by the equation MATH .
Now MATH can be written, by inclusion-exclusion, as a sum over squarefree numbers MATH whose prime factors are all less than MATH. Any such number is at most MATH, so, with MATH denoting the NAME function at MATH and MATH the number of divisors of MATH, we have MATH . As for MATH, we have the estimate MATH . Putting this together, we conclude that a positive proportion of MATH yield squarefree MATH. We now have produced MATH unramified MATH-extensions of prescribed signature over quadratic fields of discriminant at most MATH. Moreover, the number of distinct values of MATH occurring is also MATH. Thus at least this many quadratic fields of discriminant less than MATH admit unramified MATH-extensions of the desired signature.
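The sieving step above, counting values for which each factor is squarefree in analogy with the classical fact that a 6/pi^2 proportion of the positive integers is squarefree, can be illustrated numerically. The sketch below is a generic illustration (not the paper's sieve): it counts squarefree integers up to n by marking multiples of squares and compares the proportion with 6/pi^2.

```python
from math import isqrt, pi

def squarefree_count(n):
    # Mark every multiple of a square d^2 > 1; the survivors are squarefree.
    is_sf = [True] * (n + 1)
    for d in range(2, isqrt(n) + 1):
        for m in range(d * d, n + 1, d * d):
            is_sf[m] = False
    return sum(is_sf[1 : n + 1])

n = 100_000
ratio = squarefree_count(n) / n
# The density of squarefree integers tends to 6/pi^2 ~ 0.6079 as n grows.
```

Restricting the sieve to primes d would suffice, but marking all d is simpler and only re-marks entries already excluded.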
nlin/0103039
The proof of REF can be found in CITE or in CITE. The proof of REF is a straightforward extension of that of REF . To prove REF , let us first consider the case when MATH. Then we have MATH . Recall the following NAME inequalities in MATH . Then by the above inequalities we have: MATH . Moreover, it is clear that for MATH . Since MATH is dense in MATH we conclude the proof of REF . Let us now prove REF . Again we consider first the case where MATH . Recall NAME 's inequality in MATH: MATH . The above gives MATH . To prove REF we again take MATH and we use REF to find MATH . By (REF b-c) and REF inequalities we finish our proof. The proof of REF is similar to that of REF . From REF we have MATH . By (REF a) and REF inequalities we finish our proof.
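Since the inequalities recalled in this proof are elided, we can only illustrate the flavor of such interpolation inequalities. The sketch below checks the two-dimensional Ladyzhenskaya inequality, ||u||_{L^4}^2 <= C ||u||_{L^2} ||grad u||_{L^2}, for the periodic test function u = sin(x) sin(y); the test function, grid, and the choice C = 1 (which suffices for this particular u) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Numeric check of Ladyzhenskaya's 2D inequality for u = sin(x) sin(y)
# on the torus [0, 2*pi)^2, using exact derivatives and the (spectrally
# exact for trigonometric polynomials) rectangle rule.
N = 256
x = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.sin(X) * np.sin(Y)
ux = np.cos(X) * np.sin(Y)      # exact partial derivatives
uy = np.sin(X) * np.cos(Y)

dA = (2 * np.pi / N) ** 2       # area element of the uniform grid
L2 = np.sqrt(np.sum(u**2) * dA)
L4 = (np.sum(u**4) * dA) ** 0.25
grad_L2 = np.sqrt(np.sum(ux**2 + uy**2) * dA)

lhs = L4**2                     # = 3*pi/4 for this u
rhs = L2 * grad_L2              # = sqrt(2)*pi^2 for this u
```

For this u the exact values are lhs = 3 pi / 4 and rhs = sqrt(2) pi^2, so the inequality holds with ample room.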
nlin/0103039
We use the NAME procedure to prove global existence and to establish the necessary a priori estimates. Let MATH be an orthonormal basis of MATH consisting of eigenfunctions of the operator MATH. Denote MATH and let MATH be the MATH-orthogonal projection from MATH onto MATH. The NAME procedure for REF is the ordinary differential system MATH . Since the nonlinear term is quadratic in MATH, by the classical theory of ordinary differential equations the system REF has a unique solution for a short interval of time MATH. Our goal is to show that the solutions of REF remain finite for all positive times, which implies that MATH. MATH-estimates We take the inner product of REF with MATH and use REF to obtain MATH . Notice that MATH and by NAME 's inequality we have MATH . Denoting by MATH, from the above inequalities we get: MATH . By NAME 's inequality we obtain MATH and then by NAME 's inequality we reach MATH . That is MATH . MATH-estimates We first integrate REF over the interval MATH . Now, take the inner product of REF with MATH to obtain MATH . Notice that MATH . We denote MATH. Then we have MATH . We use REF to obtain MATH . By NAME 's inequality we have MATH . We integrate the above equation over MATH and use REF to obtain: MATH . Now, we integrate with respect to MATH over MATH and use REF to get MATH for all MATH. For MATH we integrate with respect to MATH over the interval MATH . From REF we conclude: MATH for all MATH, where MATH enjoys the following properties: CASE: MATH is finite for all MATH. CASE: MATH is independent of MATH. CASE: If MATH, but MATH, then MATH depends on MATH and MATH. Moreover, in this case MATH. CASE: MATH. Returning to REF and integrating over the interval MATH, for MATH and MATH and using REF we get MATH where MATH as a function of MATH satisfies REF as MATH above. Also, there exists MATH large enough, depending on MATH but independent of MATH, such that MATH . MATH estimate (via the vorticity) Let us denote MATH and MATH.
The NAME system REF is equivalent to MATH . Let us take the curl of the above equation, keeping in mind that we have periodic boundary conditions, to obtain MATH . Notice that MATH and that MATH. Let us take the inner product of the above equation with MATH . We use the identity MATH to reach MATH . Notice that MATH; therefore MATH, and upon applying REF , MATH . For every divergence-free function MATH, and for every MATH, we have the identity MATH . As a result, we have MATH . Thanks to the identity REF we have MATH. Now, we estimate the right-hand side of the above to get: MATH . We use the NAME inequality (REF a) and NAME 's inequality to find MATH and we use NAME 's inequality again to obtain MATH . Let us denote MATH, then MATH . We use REF to obtain MATH for every MATH. From the definition of MATH we observe MATH . Now, we integrate with respect to MATH over MATH and use REF to get MATH . Here again MATH enjoys REF of MATH, mentioned above. Notice that by establishing the estimate REF for MATH one is indeed providing an upper bound for the MATH-norm of MATH. Similar estimates for the MATH-norm of MATH can also be obtained by first considering the NAME system REF MATH, taking the inner product with MATH, and then following a sequence of inequalities and estimates to achieve an upper bound for MATH. Let us now summarize our estimates. For any MATH we have CASE: From REF : MATH CASE: From REF we have MATH . CASE: From REF MATH for any MATH, where MATH as MATH. Next, we establish uniform estimates, in MATH, for MATH and MATH. Recall REF MATH . From the above estimates and REF we have MATH and MATH . Consequently MATH . Therefore MATH and in particular MATH where MATH is a constant which depends on MATH and MATH. By NAME 's Compactness Theorem (see, for example, CITE and CITE ) we conclude that there is a subsequence MATH such that MATH or equivalently MATH where MATH is given in REF . Let us relabel MATH and MATH by MATH and MATH respectively.
Let MATH, then from REF we have MATH for all MATH. Since MATH weakly in MATH, we have MATH weakly in MATH for every MATH, where MATH. In particular, there is a subsequence of MATH, which we will also denote MATH, such that MATH strongly in MATH and MATH for every MATH. Now, it is clear that MATH and also that MATH. On the other hand MATH . MATH: by REF we have MATH; applying the NAME - NAME inequality, MATH, and hence MATH. MATH . Again, thanks to REF , MATH, and by NAME , MATH . Since MATH is bounded in MATH and MATH in MATH, we conclude that MATH . Finally, MATH by virtue of REF , and since MATH weakly in MATH, we obtain MATH . As a result of the above we have, for every MATH and for every MATH. Notice that since MATH, and since MATH strongly in MATH for every MATH, we have MATH. Moreover, because MATH is dense in MATH, REF implies that MATH or equivalently MATH. In particular, from REF we conclude the existence of a regular solution for the system REF . Uniqueness of regular solutions Next we will show the continuous dependence of regular solutions on the initial data and, in particular, the uniqueness of regular solutions. Let MATH and MATH be any two solutions of REF on the interval MATH, with initial values MATH and MATH respectively. Let us denote MATH, MATH, MATH, and MATH. Then from REF we get: MATH . The above equation holds in MATH; since MATH belongs to MATH, the dual space of MATH, we use REF to obtain MATH . Notice that MATH (see, for example, CITE , REF ). As a result we have: MATH . Now we use REF to get MATH and by NAME 's inequality we have: MATH . Hence, MATH . Since MATH we conclude the continuous dependence of the solutions of REF on the initial data on any bounded interval MATH. In particular, we conclude the uniqueness of regular solutions.
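The Galerkin step at the start of this proof can only be illustrated on a toy model, since the actual system is elided here. The sketch below is not the paper's system: it projects the 1D viscous Burgers equation (chosen because it shares a quadratic nonlinearity) onto its first m Fourier modes, the eigenfunctions of the periodic Laplacian, and integrates the truncated ODE system with explicit Euler. All parameters are arbitrary illustrative choices.

```python
import numpy as np

# Minimal Galerkin sketch: project u_t + u u_x = nu * u_xx, periodic on
# [0, 2*pi), onto the first m Fourier modes and evolve the ODE system.
m, nu, dt, steps = 16, 0.1, 1e-3, 2000
N = 4 * m                                # physical grid for the products
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
k = np.fft.fftfreq(N, d=1.0 / N)         # integer wavenumbers

def project(vhat):                       # Galerkin projection onto span{e^{ikx}: |k| <= m}
    vhat = vhat.copy()
    vhat[np.abs(k) > m] = 0.0
    return vhat

uhat = project(np.fft.fft(np.sin(x)))
for _ in range(steps):                   # explicit Euler on the truncated system
    u = np.fft.ifft(uhat).real
    ux = np.fft.ifft(1j * k * uhat).real
    uhat = uhat + dt * (project(np.fft.fft(-u * ux)) - nu * k**2 * uhat)

energy = np.mean(np.fft.ifft(uhat).real ** 2)   # L2 energy stays bounded and decays
```

The point mirrored here is the a priori estimate: the quadratic term redistributes energy among modes while viscosity dissipates it, so the truncated solution remains finite for all positive times.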
nlin/0103039
Let MATH, to be chosen later. By NAME 's REF we have MATH and by NAME MATH . Since MATH we have MATH . Let MATH be fixed; we choose MATH, and from the above we have MATH . Now we sum over MATH, MATH, to reach MATH, which concludes our proof.
nlin/0103039
The linearized REF equation about a regular solution MATH takes the form MATH where MATH and MATH. Notice that MATH evolves according to the equation MATH which we write symbolically as MATH . Let MATH, for MATH, be a set of linearly independent vectors in MATH, and let MATH be the corresponding solutions of REF with initial value MATH, for MATH. We denote MATH where MATH, and MATH is the orthogonal projector of MATH onto MATH with respect to the inner product MATH given in REF . Let MATH be an orthonormal basis, with respect to the inner product MATH, of the space MATH, that is, MATH. We set MATH . Notice that MATH. We set MATH . Notice that by the NAME REF MATH where MATH. Let us denote MATH, for MATH. From REF we have MATH and by virtue of REF we have MATH . Observe that MATH . Thus MATH where MATH. Let us now estimate MATH. Using REF we have MATH; again by REF , MATH; integrating by parts and using REF , MATH . Therefore, MATH where MATH. As a result we have MATH . Thanks to REF we have MATH and by the NAME inequality we get MATH . Since MATH we have MATH. Therefore, the above gives MATH . Applying the NAME - NAME REF we obtain MATH that is MATH . Applying NAME 's inequality, MATH; then using NAME 's inequality, MATH; and by virtue of the NAME inequality (REF c) we obtain MATH . Using NAME 's inequality again we reach MATH . Substituting in REF we get: MATH . Now, we require MATH to be large enough that MATH . According to the trace formula (see CITE, CITE or CITE) such a MATH will be an upper bound for the fractal and NAME dimensions of the global attractor. Observe that from the asymptotic behavior of the eigenvalues of the operator MATH there is a constant MATH such that MATH . Therefore, since MATH is the trace of the operator MATH restricted to some subspace of dimension MATH, we have MATH . Let us require MATH to be large enough so that MATH and MATH .
For such a MATH, REF is satisfied, and thus MATH provides an upper bound for the fractal and NAME dimensions of the global attractor. By NAME 's inequality we have MATH and thanks to REF we get MATH . Therefore, from the above and REF , we have MATH, which concludes our proof.
nlin/0103039
Let MATH be fixed. From the proof of REF and by passing to the limit one can show that the estimates REF also hold for the exact solution of the system REF . That is MATH and MATH . This implies that there are subsequences MATH and MATH, and corresponding functions MATH and MATH, such that: MATH and MATH as MATH. Next we will use the above estimates and REF to show that MATH for some constant MATH which depends on MATH, but is independent of MATH. Indeed, from REF (or equivalently REF ) we have MATH . Thus, MATH . In order to prove REF we only need to find the proper estimate for MATH . Applying REF we obtain MATH . As a result of the above estimates we have MATH and by integrating the above estimate over the interval MATH we have MATH . From all the above we conclude REF . By virtue of the above estimates and NAME 's Compactness Theorem (see, for example, CITE, CITE, or CITE) there exists a subsequence, which will also be labeled by MATH, that converges to MATH strongly in MATH. Furthermore, since MATH we have that MATH strongly in MATH, as MATH; and that MATH a.e. in MATH. As a result of these estimates one can extract subsequences, which will also be labeled by MATH and MATH, respectively, and show that as MATH by following an approach similar to that used in the proof of REF . This finishes the proof of the Theorem.
nlin/0103052
If MATH is a cocycle we have MATH for any function MATH. For MATH, this equation shows that MATH is a symmetry of MATH . Hence MATH, since both MATH and MATH are symmetries of MATH and MATH is a NAME function. This shows that MATH as claimed. If MATH is a coboundary, we find MATH, showing that MATH with MATH . Therefore we find MATH with MATH . This proves the second part of the Lemma.
nlin/0103052
The first REF entails that the vector fields MATH are tangent to the symplectic leaves. Hence MATH . Thus REF and the second REF lead to MATH . Set MATH . As in the proof of REF we get MATH . Finally, we notice that the third REF entails MATH . Hence the previous equation becomes MATH as claimed.
nlin/0103052
The identity REF is nothing but a disguised form of the NAME identity MATH used to define the formal adjoint of the operator MATH. In this identity MATH and MATH are arbitrary, and the bracket denotes the usual scalar product in MATH. We notice that, by REF , the vector MATH is the opposite of the NAME operator associated with the Lagrangian density MATH, MATH and we write the identity REF , for MATH, in the operator form MATH where MATH is the NAME derivative of the scalar differential operator MATH. One easily recognizes in this equation the identity REF by recalling that MATH .
nlin/0103052
The point is to show that the homogeneity assumption (together with the cocycle condition) entails the involutivity REF used to define our class of cocycles. To prove this result we exploit the well-known property that, for every cocycle, the bracket MATH is still a NAME function of MATH. This means that MATH, that is, MATH . Let us write MATH . By the above condition the functions MATH are constant, and therefore MATH . Accordingly MATH . REF entails MATH, since MATH should have degree at least one. So the combined action of the cocycle and of the homogeneity condition entails the involutivity REF , as required.
nlin/0103052
Let us first check the formula for MATH. We know that the first coefficient MATH is a homogeneous bivector verifying the cocycle condition MATH. Hence, by the final proposition of REF , there exists a homogeneous vector field MATH such that MATH. This proves the first case of identity REF . To prove the remaining cases by induction, we use the identity MATH . It follows from the transformation law MATH with respect to the special one-parameter family of local diffeomorphisms MATH constructed as follows. First we compose the flows MATH associated with the vector fields MATH so as to obtain the multiparameter family of local diffeomorphisms MATH . Then we reduce this family by setting MATH . By expanding REF in powers of MATH, and by equating the coefficients of MATH, we obtain exactly REF. Assume now that the representation REF is true for the first MATH coefficients MATH. To prove that it is also true for MATH we consider REF for MATH. We notice that this equation holds for any choice of the vector fields MATH. In particular, it also holds for MATH. Let us denote by MATH the restriction of the operator MATH to the first MATH vector fields of the sequence. Then we can write MATH . By REF, and MATH . Therefore REF becomes: MATH . Let us compare this equation with MATH, expressing the NAME identity MATH at order MATH in MATH. It takes the form of a cocycle condition: MATH . Therefore there exists a vector field MATH such that MATH . By induction this proves the representation REF for any MATH.
quant-ph/0103041
The formal proof corresponds directly to NAME 's informal proof. Thus, let MATH be a subset of some spatial hypersurface MATH. If MATH then obviously MATH for all MATH. So, suppose that MATH, and let MATH be a unit vector such that MATH. Since MATH is a manifold, and since MATH, there is a family MATH of subsets of MATH such that, for each MATH, the distance between the boundaries of MATH and MATH is nonzero, and such that MATH. Fix MATH. By NIWS and time-translation covariance, there is a MATH such that MATH whenever MATH. That is, MATH whenever MATH. Since energy is bounded from below, we may apply REF with MATH to conclude that MATH for all MATH. That is, MATH for all MATH. Since this holds for all MATH, and since (by monotonicity) MATH, it follows that MATH for all MATH. Thus, MATH for all MATH.
quant-ph/0103041
By no absolute velocity, there is a pair MATH of timelike translations such that MATH is in MATH and is disjoint from MATH. By time-translation covariance, we have MATH . Thus, localizability entails that MATH is orthogonal to itself, and so MATH.
quant-ph/0103041
Since MATH is a covering of MATH, probability conservation entails that MATH. Thus, MATH where the third equality follows from time-translation covariance.
quant-ph/0103041
If MATH then the conclusion obviously holds. Suppose then that MATH, and let MATH be a unit vector in the range of MATH. Fix MATH. Using REF and NAME 's lemma, it follows that MATH for all MATH. Then, MATH for all MATH. Thus, MATH for all MATH, and consequently, MATH. Since MATH, and since (by assumption) MATH, it follows that MATH for all MATH.
quant-ph/0103041
Let MATH be an open subset of MATH. If MATH then probability conservation and time-translation covariance entail that MATH for all MATH. If MATH then, since MATH is a manifold, there is a covering MATH of MATH such that the distance between MATH and MATH is nonzero for all MATH. Let MATH, and let MATH for MATH. Then REF entails that MATH when MATH. If we let MATH then probability conservation entails that MATH for all MATH (see REF ). By time-translation covariance and microcausality, for each MATH there is a MATH such that MATH . Since the energy is bounded from below, REF entails that MATH for all MATH. That is, MATH for all MATH.
quant-ph/0103041
We prove by induction that MATH, for each MATH, and for each bounded MATH. For this, let MATH denote the spectral measure for MATH. (REF: MATH) Let MATH. We verify that MATH satisfies the hypotheses of NAME 's theorem. Clearly, no absolute velocity and energy bounded below hold. Moreover, since unitary transformations preserve spectral decompositions, translation covariance holds; and since spectral projections of compatible operators are also compatible, microcausality holds. To see that localizability holds, let MATH and MATH be disjoint bounded subsets of a single hyperplane. Then microcausality entails that MATH, and therefore MATH is a projection operator. Suppose for reductio ad absurdum that MATH is a unit vector in the range of MATH. By additivity, MATH, and we therefore obtain the contradiction: MATH . Thus, MATH, and NAME 's theorem entails that MATH for all MATH. Therefore, MATH has spectrum lying in MATH, and MATH for all bounded MATH. (Inductive step) Suppose that MATH for all bounded MATH. Let MATH. In order to see that NAME 's theorem applies to MATH, we need only check that localizability holds. For this, suppose that MATH and MATH are disjoint subsets of a single hyperplane. By microcausality, MATH, and therefore MATH is a projection operator. Suppose for reductio ad absurdum that MATH is a unit vector in the range of MATH. Since MATH is bounded, the induction hypothesis entails that MATH. By additivity, MATH, and therefore we obtain the contradiction: MATH . Thus, MATH, and NAME 's theorem entails that MATH for all MATH. Therefore, MATH for all bounded MATH.
quant-ph/0103041
Let MATH be the unique total number operator obtained from taking the sum MATH where MATH is a disjoint covering of MATH. Note that for any MATH, we can choose a covering containing MATH, and hence, MATH, where MATH is a positive operator. By microcausality, MATH, and therefore MATH. Furthermore, for any vector MATH in the domain of MATH, MATH. Let MATH be the spectral measure for MATH, and let MATH. Then, MATH is a bounded operator with norm at most MATH. Since MATH, it follows that MATH for any unit vector MATH. Thus, MATH. Since MATH is dense in MATH, and since MATH is in the domain of MATH (for all MATH), it follows that if MATH, for all MATH, then MATH. We now concentrate on proving the antecedent. For each MATH, let MATH. We show that the structure MATH satisfies the conditions of REF . Clearly, energy bounded below and no absolute velocity hold. It is also straightforward to verify that additivity and microcausality hold. To check translation covariance, we compute: MATH . The third equality follows from number conservation, and the fourth equality follows from translation covariance. Thus, MATH for all MATH. Since this holds for all MATH, MATH for all MATH.
quant-ph/0103098
If NAME is honest, the security of the bit hiding scheme implies that the dishonest Bobs can learn at most MATH bits of information before the open phase. If NAME is honest, the most general strategy for a dishonest NAME is to prepare a pure state MATH with five parts, MATH for MATH. In the commit phase, she gives MATH and MATH to REF and NAMEREF. In the open phase, she applies some quantum operation MATH on MATH. Then, she sends MATH and MATH to REF and NAMEREF. If MATH are indeed MATH singlets, the test is passed and MATH does not change the state of MATH and MATH, and NAMEREF indeed obtains MATH in the same state as in the commit phase after teleportation. (MATH can only be changed during the teleportation steps when interacting with MATH if they are not singlets.) In this case, the distribution of k, and therefore the committed bit MATH, is the same in both phases: NAME cannot change or delay her commitment. In case MATH and MATH are not singlet states, the analysis of the failure probability of the random hashing method follows the quantum key distribution security proof of CITE. If MATH are in some state orthogonal to MATH singlets, the test is passed with probability MATH only. In general, let MATH be the fidelity of MATH with respect to MATH singlets. The probability to change the commitment without being caught is MATH, which is less than MATH.
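The role of the fidelity in this bound can be illustrated for pure states, where fidelity reduces to the squared overlap |<phi|psi>|^2. The sketch below is illustrative only (the states and the helper name are ours): it computes the fidelity of a supplied pure state with n copies of the singlet, the quantity that bounds the cheater's probability of passing the verification test undetected.

```python
import numpy as np

# Fidelity of a pure state with n copies of the singlet.
# For pure states, F = |<target|psi>|^2; a cheater whose pairs have
# fidelity F with the singlets passes the test with probability <= F.
singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)   # (|01> - |10>)/sqrt(2)

def fidelity_with_singlets(psi, n):
    target = singlet
    for _ in range(n - 1):
        target = np.kron(target, singlet)
    return abs(np.vdot(target, psi)) ** 2

# An orthogonal Bell state passes with probability 0 in this model.
triplet = np.array([0, 1, 1, 0]) / np.sqrt(2)    # (|01> + |10>)/sqrt(2)
```

An honest commitment (exact singlets) gives F = 1, while any state orthogonal to the singlets gives F = 0, matching the two extreme cases in the proof.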
quant-ph/0103098
We first derive bounds on the conditional probability distribution of any MATH given MATH. We consider MATH implied by any MATH and MATH. When we maximize MATH over MATH with fixed MATH, we obtain an overestimate of MATH because we can maximize MATH over a superset of MATH. Hence we can focus on a particular decoding process MATH defined as: MATH where MATH . Given these we can compute MATH . Substituting REF into REF , the first inequality becomes trivial and the second inequality gives a necessary condition on the set MATH: MATH, MATH . We now use REF to bound MATH. We introduce the notations MATH and MATH for the conditional probabilities, and MATH and MATH for the fixed prior probabilities for MATH. We can then write the mutual information as MATH; each term represents the information on MATH obtained from the outcome MATH, weighted by MATH. Note also that the outcomes in MATH do not contribute to the mutual information. We will maximize REF subject to the constraints MATH and MATH . The maximization of MATH is made tractable by noting that any optimal MATH can be replaced by another MATH with MATH, and satisfying the same constraints REF . We prove this using the fact that MATH is linear REF and convex, giving MATH . Absorbing the nonnegative factors MATH into the probability vectors, this becomes simply MATH . The convexity of MATH is proved by showing that the Hessian matrix MATH is positive semidefinite (it is straightforward to show that its eigenvalues are MATH and MATH). Given any MATH, we first construct an intermediate MATH as follows: For each outcome MATH with unequal conditional probabilities MATH, introduce two outcomes in MATH with conditional probabilities: MATH . All outcomes in MATH occur in MATH unchanged. Note that the numbers of outcomes are such that MATH and MATH. The constraints REF are satisfied by MATH because the quantities involved are conserved by construction. MATH by applying REF to each replacement.
Finally, as all MATH outcomes have MATH, and all MATH outcomes have MATH, we introduce the desired random variable MATH with just three outcomes, with conditional probabilities MATH, MATH, and MATH. MATH still satisfies REF , and the linearity of MATH implies MATH. So, we have established that there exists an optimal MATH with conditional probabilities MATH, MATH, and MATH; these may be interpreted as the ``certainly REF", ``certainly REF", and ``don't know" outcomes. It is now trivial to show that the best choice of these parameters consistent with the constraints is given by MATH, MATH. These parameters lead to the mutual information MATH, proving the theorem.
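The three-outcome reduction can be checked on a concrete instance. With a uniform prior and conditional probabilities over the outcomes ("certainly 0", "certainly 1", "don't know") equal to (p, 0, 1-p) for bit 0 and (0, p, 1-p) for bit 1, the mutual information evaluates to exactly p bits. The distribution below is an illustrative choice, not the paper's optimum.

```python
from math import log2

def mutual_information(prior, cond):
    # I(B; Y) = H(Y) - sum_b prior[b] * H(Y | B = b)
    ny = len(cond[0])
    py = [sum(prior[b] * cond[b][y] for b in range(len(prior)))
          for y in range(ny)]
    H = lambda ps: -sum(q * log2(q) for q in ps if q > 0)
    return H(py) - sum(prior[b] * H(cond[b]) for b in range(len(prior)))

p = 0.3                       # probability of a conclusive outcome
prior = [0.5, 0.5]
cond = [[p, 0.0, 1 - p],      # outcomes: "certainly 0", "certainly 1", "?"
        [0.0, p, 1 - p]]
I = mutual_information(prior, cond)   # equals p bits exactly
```

Algebraically, H(Y) exceeds H(Y|B) by exactly the p log 2 contributed by splitting the conclusive mass between the two revealing outcomes.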
quant-ph/0103098
We write the density matrices MATH of the hiding states in the NAME decomposition MATH where the sum is over all MATH possible MATH-bit strings MATH, and MATH is identified with a tensor product of MATH NAME matrices as defined in REF. We restrict our choice of LOCC measurements to those that measure the eigenvalues of a particular optimal MATH, which can be MATH or MATH. This measurement is in the LOCC class because it is the product of the eigenvalues of all the NAME matrix components, which can be measured locally and communicated classically to obtain the final result. If we associate the outcomes MATH and MATH with MATH and MATH respectively, the POVM elements are MATH and MATH. Using the fact that MATH, we can calculate the conditional probabilities of interest: MATH . In this notation, MATH is given by MATH . If the right-hand side of REF is negative, then we can always do better by inverting the assignment of outcomes, flipping the sign of this factor. So, we can always achieve MATH . Our goal is to establish a lower bound on this quantity due to the orthogonality of MATH and MATH. Thus we consider MATH . Let MATH be fixed, and let MATH, which depends on MATH, be the corresponding MATH that maximizes MATH. Let MATH . We can rephrase the optimization in REF as a minimization over MATH for MATH (MATH is fixed by the normalization of MATH), subject to the following constraints: CASE: Optimality of MATH: MATH . CASE: Orthogonality of MATH and MATH, implying that MATH, or MATH . Note that we do not impose the positivity of MATH, and obtain a valid, though possibly loose, bound. The above constraints are imposed by introducing the NAME multipliers MATH and MATH for MATH, transforming the problem to the unconstrained minimization: MATH . We can fix MATH and minimize over MATH and MATH for MATH.
If MATH whenever MATH (the other case will be discussed later), this function is analytic and we obtain the minimum by setting the derivatives of REF with respect to the independent variables MATH, MATH, and MATH to zero: MATH . Here MATH if MATH and MATH if MATH. We need to solve REF , and REF for MATH. First of all, we eliminate the MATH by substituting REF into the constraints REF : MATH . We can obtain two other equations from REF : MATH . We now have REF , and REF in four variables MATH, MATH, MATH, and MATH. We can perform standard eliminations and obtain an expression for the minimum of MATH: MATH where MATH. To reexpress REF in the notation of the theorem statement, we use MATH and MATH, which follow from REF . We have MATH . This can be simplified by changing variables MATH and MATH and solving for MATH. We obtain MATH . To achieve the desired lowest minimum in REF , we replace MATH by its upper bound. Since MATH, MATH, and we obtain the statement REF to be proven. The analysis for the cases when MATH for some s is similar to the one just presented. We obtain values of MATH which are always greater than in REF , so these cases can be excluded.
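The decomposition used at the start of this proof, expanding an n-qubit density matrix over tensor products of Pauli matrices with coefficients tr(rho * sigma_s) / 2^n, can be sketched concretely. The two-qubit singlet below is an illustrative choice of state.

```python
import numpy as np
from itertools import product

# Pauli decomposition: rho = (1 / 2^n) * sum_s tr(rho sigma_s) sigma_s,
# where s ranges over all 4^n tensor products of {I, X, Y, Z}.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = [I2, X, Y, Z]

def pauli_string(s):
    sigma = np.array([[1.0 + 0j]])
    for i in s:
        sigma = np.kron(sigma, PAULIS[i])
    return sigma

def pauli_coefficients(rho, n):
    # Hermiticity of rho and sigma_s makes each trace real.
    return {s: np.trace(rho @ pauli_string(s)).real / 2**n
            for s in product(range(4), repeat=n)}

def reconstruct(coeffs, n):
    return sum(c * pauli_string(s) for s, c in coeffs.items())

# Example: the singlet, whose decomposition is (II - XX - YY - ZZ) / 4.
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())
coeffs = pauli_coefficients(rho, 2)
rho_rebuilt = reconstruct(coeffs, 2)
```

Each coefficient is obtainable by measuring the corresponding product of local Pauli eigenvalues, which is what places the measurement discussed above in the LOCC class.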
quant-ph/0103135
We will exemplify the proofs by proving the case MATH. The reader can prove the other cases analogously. We first use REF to write the premise as MATH (MATH should be used for all cases - in it the subscript REF is not MATH) and then we find the canonical expression of MATH by typing the appropriate command (see REF for details on our program beran). The program responds with an expression which is nothing but MATH. Using REF we get the desired conclusion.
quant-ph/0103135
Since CITE proved the cases MATH, we only sketch the proofs of these cases for the sake of completeness. For MATH, REF , given the premise REF and the NAME REF [MATH, etc.] we have (since MATH): MATH. Thus, the conclusion from REF reads MATH . REF follow analogously. Since MATH and MATH, we have proved the theorem for MATH. For MATH, (again we have MATH, etc.) both sides of the conclusion of REF , and REF reduce to MATH, MATH, and MATH, respectively. Let us now consider the case MATH, REF . According to the first definition of MATH from REF we have, given the premises (MATH and MATH) and the orthomodularity property CITE: MATH and therefore, using NAME REF and the second premise and REF : MATH . The right-hand side of the conclusion in REF reads: MATH . Now MATH, and since we also have MATH and MATH, and therefore MATH, MATH, and MATH, we have MATH as well. Hence, using REF we reduce REF to: MATH which is nothing but REF . Hence, REF is proved. Let us next consider REF . Here we have: MATH and MATH and therefore: MATH . On the other hand, we have MATH which is nothing but REF , and this proves REF . As for REF , here we again have MATH and MATH. Thus we get: MATH . For the right-hand side we have: MATH which is nothing but REF , which proves REF . Since MATH and MATH, we have proved the theorem for MATH.
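The orthomodularity property used here (if a <= b then b = a v (a' ^ b)) can be checked by brute force in a small orthomodular lattice. The sketch below uses MO2, the horizontal sum of two four-element Boolean algebras, which is a standard small test lattice and only an illustrative choice, not anything from the paper.

```python
from itertools import product

# MO2: elements {0, a, a', b, b', 1}; a, b, a', b' are atoms, any two
# distinct atoms join to 1, and complementation pairs a<->a', b<->b'.
ELEMS = ["0", "a", "a'", "b", "b'", "1"]
COMP = {"0": "1", "1": "0", "a": "a'", "a'": "a", "b": "b'", "b'": "b"}

def leq(x, y):
    return x == "0" or y == "1" or x == y

def join(x, y):
    if leq(x, y):
        return y
    if leq(y, x):
        return x
    return "1"            # distinct incomparable atoms join to the top

def meet(x, y):
    return COMP[join(COMP[x], COMP[y])]   # De Morgan duality

# Orthomodular law: x <= y  implies  y = x v (x' ^ y).
for x, y in product(ELEMS, repeat=2):
    if leq(x, y):
        assert join(x, meet(COMP[x], y)) == y, (x, y)
```

MO2 is not distributive (a ^ (b v b') = a but (a ^ b) v (a ^ b') = 0), so passing the check above is a genuinely non-Boolean test of the law.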
quant-ph/0103135
In this and all other proofs of this section, we will implicitly make use of the rules MATH, MATH, MATH, and MATH, MATH. Also, MATH, MATH, MATH, MATH, MATH, and MATH. We will use F-H implicitly. Recall that MATH. For REF , MATH. For REF , MATH. For REF , MATH.
quant-ph/0103135
REF and the fact that MATH, MATH.
quant-ph/0103135
For REF , using MATH and MATH and F-H we can write the conclusion as MATH . To prove that the right-hand side boils down to the left-hand one is straightforward and can be done in complete analogy with the case MATH already done above - REF . For example, for MATH we have: MATH . For REF , the proof follows from REF by symmetry. The proof of REF seems a little tricky, so we show it in some detail. First, we show that (under the hypotheses) MATH . From MATH and MATH we have MATH. Therefore MATH, establishing REF . The left-hand side of REF reduces to MATH. The right-hand side reduces to MATH. Using REF , we see that they are the same. For REF we use REF and MATH, MATH.
quant-ph/0103135
Expanding definitions and using F-H, MATH.
quant-ph/0103135
For REF , we have MATH . In the second step, we use REF from Ref. CITE. In the third and fifth steps we apply the NAME REF , and in the fourth and sixth steps we apply absorption laws. For REF , MATH and MATH in any OL, so MATH [from REF ]MATH. For REF , we have MATH where in the second and third steps we apply F-H and in the last step we apply REF of Ref. CITE. Finally, REF is proved as follows. REF of Ref. CITE, which we repeat below as REF , was shown to hold in all REFGOs. MATH . Using REF of Ref. CITE and renaming variables, we see that this is the same as MATH . Substituting MATH for MATH and MATH for MATH, MATH . Since MATH holds in any OML, we have MATH where in the third step we use REF and in the last step we use REF .
quant-ph/0103135
The result follows immediately from REF of Ref. CITE.
cs/0104001
The proof easily follows by telescoping the sum that defines MATH: MATH .
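Telescoping in miniature: summing consecutive differences collapses to the difference of the endpoint values. A generic sketch (the sequence is an arbitrary illustration, not the paper's MATH):

```python
# sum_{i=0}^{n-1} (a[i+1] - a[i]) = a[n] - a[0]: every intermediate
# term appears once with each sign and cancels.
a = [3, 1, 4, 1, 5, 9, 2, 6]
total = sum(a[i + 1] - a[i] for i in range(len(a) - 1))
# total collapses to a[-1] - a[0]
```

This is the entire content of the proof: the sum defining the quantity in question is a chain of such differences.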
cs/0104001
We first recall that MATH is the binarized version of MATH, as follows from the implementation of Lookup. To prove that MATH, observe that NAME increases MATH (line MATH), and possibly MATH (line MATH), only if both MATH and MATH: this implies that MATH and MATH. To prove that MATH, notice that at time MATH after performing an operation Op-MATH=SetRow-MATH on the MATH-th row of MATH, MATH is satisfied for any triple MATH such that MATH and MATH, thanks to the operation MATH (line MATH). For MATH the proof is analogous. Now, all such triples MATH are enumerated by NAME (lines MATH - MATH): for each of them such that MATH was false at time MATH, MATH is increased and possibly MATH is increased as well (lines MATH). If MATH flips from MATH to MATH, then necessarily MATH flips from MATH to MATH for some MATH, and then, as stated above with respect to MATH, MATH gets increased. Thus, recalling that MATH is the binarized version of MATH, we have for any MATH: MATH . From the definition of MATH in REF we have that: MATH . This proves the relation MATH. A similar argument also holds for NAME, while NAME does not affect MATH at all. To complete the proof we remark that MATH just after any Init operation and that Reset leaves the data structure as if reset entries were never set to MATH. Indeed, Reset can be viewed as a sort of ``undo" procedure that cancels the effects of previous NAME, NAME or Init operations.
cs/0104001
It is straightforward to see from the pseudocode of the operations that any NAME, NAME and NAME operation requires MATH time in the worst case. Init takes MATH in the worst case: in more detail, each MATH can be directly computed via matrix multiplication, and any other initialization step requires no more than MATH worst-case time. To prove that the cost of any Reset operation can be charged to previous NAME, NAME and NAME operations, we use a potential function MATH associated with each term MATH of the polynomial. From the relation MATH given in Invariant REF, it follows that MATH for all MATH. Thus, MATH. Now, observe that NAME increases MATH by at most MATH units per operation, while Init increases MATH by at most MATH units per operation. Note that MATH does not affect MATH. We can finally address the case of Reset operations. Consider the distinction between the two cases MATH in line MATH and MATH in line MATH. In the first case, we can charge the cost of processing any triple MATH to some previous operation on the MATH-th row of MATH or to some previous operation on the MATH-th column of MATH; in the second case, we consider only those MATH for which some operation on the MATH-th column of MATH was performed after both MATH and MATH were set to MATH. In both cases, any Reset operation decreases MATH by at most MATH units for each reset entry of MATH, and this can be charged to previous operations which increased MATH.
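The charging scheme can be illustrated with a toy potential argument, which is not the paper's data structure: take Phi = number of true entries; each Set raises Phi by at most one, and a Reset's cost equals its drop in Phi, so the total Reset cost never exceeds the number of Set operations.

```python
import random

# Toy amortized analysis with a potential function Phi = #true entries.
# Each Set increases Phi by at most one; each Reset pays one unit per
# entry it actually clears, i.e. exactly its decrease in Phi.  Hence
# total reset cost <= total number of Sets, regardless of the schedule.
random.seed(0)
n = 50
bits = [False] * n
sets = reset_cost = 0
for _ in range(1000):
    if random.random() < 0.7:
        bits[random.randrange(n)] = True           # Set: Phi grows by <= 1
        sets += 1
    else:
        idx = random.sample(range(n), 5)           # Reset a few entries
        reset_cost += sum(bits[i] for i in idx)    # pay per cleared entry
        for i in idx:
            bits[i] = False
```

Every cleared entry was made true by some earlier Set, so the inequality holds for any interleaving of operations, not just this random one.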
cs/0104001
To prove the claim, it suffices to check that MATH . Unrolling the recursion for MATH, we obtain: MATH . Likewise, MATH holds. Thus, by idempotence of the closed semiring of Boolean matrices, we finally have: MATH .
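The idempotence invoked here is the defining identity of the closed semiring of Boolean matrices, where addition is entry-wise OR and product is AND-OR. A minimal self-contained check (code and names are illustrative, not the paper's implementation):

```python
# Boolean matrices over {0, 1}: addition is entry-wise OR,
# multiplication is the AND-OR product.
def bool_mult(A, B):
    n = len(A)
    return [[int(any(A[i][k] and B[k][j] for k in range(n)))
             for j in range(n)] for i in range(n)]

def bool_add(A, B):
    return [[int(a or b) for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[0, 1], [1, 0]]
# Idempotence of addition in the closed semiring of Boolean matrices:
# A + A = A, so repeated terms in an unrolled recursion collapse.
assert bool_add(A, A) == A
```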
cs/0104001
Since MATH by REF , we prove that: MATH . To this end, it suffices to prove that any MATH that appears (or disappears) in the correct value of MATH due to an operation different from NAME appears (or disappears) in MATH as well, and that any entry of MATH equal to MATH is also equal to MATH in MATH. CASE: NAME/NAME: assume a NAME operation is performed on the MATH-th column of variable MATH (see REF ). By induction, we assume that all new MATH's are correctly revealed in the MATH-th column of our data structure for MATH after the MATH-th iteration of SetCol-MATH in line MATH. Notice that MATH, that is, changes of MATH are limited to the MATH-th column: this implies that these changes can be correctly propagated by means of a NAME operation to any polynomial that features MATH as a variable. As a consequence, by REF , the MATH-th iteration of SetCol-MATH in line MATH correctly reveals all new MATH's in our data structure for MATH, and again these new MATH's all lie on its MATH-th column. Thus, at the end of the loop in lines MATH - MATH, all new MATH's appear correctly in the MATH-th column of MATH. Similar considerations also apply to MATH. To prove that lines MATH - MATH insert correctly in MATH all new MATH's that appear in MATH and that MATH we again use REF and the fact that any MATH that appears in MATH also appears in MATH. Indeed, for any entry MATH that flips from MATH to MATH due to a change of the MATH-th column of MATH or the MATH-th row of MATH there is a sequence of indices MATH such that MATH, MATH, and either one of MATH or MATH just flipped from MATH to MATH due to the NAME/NAME operation. The proof for NAME is completely analogous. CASE: Reset: assume a Reset operation is performed on variable MATH. 
As Reset-MATH can reset any subset of entries of variables, and not only those lying on a row or a column as in the case of SetRow-MATH and SetCol-MATH, the correctness of propagating any changes of MATH to the polynomials that depend on it easily follows from REF . CASE: Init: each Init operation recomputes from scratch all polynomials in Data Structure REF. Thus MATH after each Init operation.
cs/0104001
The proof easily follows from REF .
cs/0104001
An amortized update bound follows trivially from amortizing the cost of the rectangular matrix multiplication MATH against MATH update operations. This bound can be made worst-case by standard techniques, that is, by keeping two copies of the data structures: one is used for queries and the other is updated by performing matrix multiplication in the background. As far as Lookup is concerned, it answers queries on the value of MATH in MATH worst-case time, where MATH.
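The two-copy deamortization mentioned above is a standard global-rebuilding technique. The following is a deliberately simplified hedged sketch (the container, the rebuild threshold, and all names are illustrative, not the paper's structure): queries are served from a finished copy plus a short pending list, while accumulated updates are periodically folded into a rebuilt copy.

```python
# Simplified sketch of the two-copy / background-rebuilding idea.
class TwoCopy:
    def __init__(self, items):
        self.front = set(items)   # finished copy; answers queries
        self.pending = []         # updates accumulated since last rebuild

    def update(self, x):
        self.pending.append(x)
        if len(self.pending) >= 2:                      # illustrative threshold
            # "background" rebuild: fold pending updates into a fresh copy
            self.front = self.front | set(self.pending)
            self.pending = []

    def query(self, x):
        # the front copy may lag, so also scan the short pending list
        return x in self.front or x in self.pending


d = TwoCopy({1, 2})
d.update(3)
assert d.query(3)          # answered from the pending list
d.update(4)                # triggers a rebuild
assert d.query(4) and d.query(1)
```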
cs/0104001
A rectangular matrix multiplication of a MATH matrix by a MATH matrix can be performed by computing MATH multiplications between MATH matrices. This can be done in MATH time. The amortized time of the reconstruction operation MATH is thus MATH. The rest of the claim follows from REF .
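The reduction of a rectangular product to square block products can be sketched as follows (Boolean matrices; names are illustrative, and a real implementation would substitute a fast square-multiplication routine for the cubic one below):

```python
# Cubic square Boolean product; stands in for a fast multiplication routine.
def bool_mult_square(A, B):
    n = len(A)
    return [[int(any(A[i][k] and B[k][j] for k in range(n)))
             for j in range(n)] for i in range(n)]

def rect_mult(A, B, s):
    # A is s x n, B is n x s, with s dividing n: reduce to n/s products
    # of s x s blocks and OR the partial results together.
    n = len(B)
    C = [[0] * s for _ in range(s)]
    for b in range(0, n, s):
        Ablk = [row[b:b + s] for row in A]
        Bblk = B[b:b + s]
        P = bool_mult_square(Ablk, Bblk)
        C = [[int(c or p) for c, p in zip(rc, rp)] for rc, rp in zip(C, P)]
    return C


A = [[1, 0, 0, 1]]            # 1 x 4
B = [[0], [1], [0], [1]]      # 4 x 1
assert rect_mult(A, B, 1) == [[1]]
```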
cs/0104001
The proof is by induction on MATH. The base REF is trivial. We assume by induction that the claim is satisfied for MATH and we prove that it is satisfied for MATH as well. Sufficient condition: Any path of length up to MATH between MATH and MATH in MATH is either of length up to MATH or it can be obtained as concatenation of three paths of length up to MATH in MATH. Since all these paths are correctly reported in MATH by the inductive hypothesis, it follows that MATH or MATH or MATH. Thus MATH. Necessary condition: If MATH then at least one among MATH, MATH and MATH is MATH. If MATH, then by the inductive hypothesis there is a path of length up to MATH. If MATH, then there are two paths of length up to MATH whose concatenation yields a path no longer than MATH. Finally, if MATH, then there are three paths of length up to MATH whose concatenation yields a path no longer than MATH.
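The inductive step above can be sketched as a doubling recurrence on Boolean matrices: if a reflexive matrix X covers all paths of length up to L, then X + X·X + X·X·X covers all paths of length up to 3L (the concatenation of three covered paths). A hedged Python illustration (the toy graph and all names are ours, not the paper's):

```python
# Boolean matrix helpers: OR-addition and AND-OR product.
def bmul(A, B):
    n = len(A)
    return [[int(any(A[i][k] and B[k][j] for k in range(n)))
             for j in range(n)] for i in range(n)]

def badd(A, B):
    return [[int(a or b) for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def step(X):
    # If X covers all paths of length <= L (and is reflexive), the result
    # covers all paths of length <= 3L: X + X^2 + X^3.
    X2 = bmul(X, X)
    return badd(badd(X, X2), bmul(X2, X))


# Directed path 0 -> 1 -> 2 -> 3, with self-loops for reflexivity.
n = 4
X = [[int(i == j or j == i + 1) for j in range(n)] for i in range(n)]
X = step(X)                      # now covers paths of length up to 3
assert X[0][3] == 1 and X[3][0] == 0
```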
cs/0104001
The proof easily follows from REF and from the observation that the length of the longest simple path in MATH is no longer than MATH. MATH is required to guarantee the reflexivity of MATH.
cs/0104001
By induction. The base is trivial. We assume that the claim holds inductively for MATH, and we show that, after any operation, the claim holds also for MATH. CASE: Init-MATH: since any Init-MATH operation rebuilds MATH from scratch, the claim follows from REF . CASE: Set-MATH: let us assume that a Set-MATH operation is performed on the MATH-th row and column of MATH and a new path MATH of length up to MATH, say MATH, appears in MATH due to this operation. We prove that MATH after the operation. Observe that Set-MATH puts in place any new MATH's in any occurrence of the variable MATH in data structure MATH. We remark that, although the maintained value of MATH in data structure MATH is not updated by NAME and therefore the correctness of the current operation is not affected, this step is very important: indeed, new MATH's corresponding to new paths of length up to MATH that appear in MATH will be useful in future Set-MATH operations for detecting the appearance of new paths of length up to MATH. If both the portions MATH and MATH of MATH have length up to MATH, then MATH gets recorded in MATH, and therefore in MATH, thanks to one of SetRow-MATH or SetCol-MATH. On the other hand, if MATH is close to (but does not coincide with) one endpoint of MATH, the appearance of MATH may be recorded in MATH, but not in MATH. This is the reason why degree MATH does not suffice for MATH in this dynamic setting. CASE: Reset-MATH: by inductive hypothesis, we assume that MATH flips to zero after a Reset-MATH operation only if no path of length up to MATH remains in MATH between MATH and MATH. Since any MATH operation on MATH leaves it as if cleared MATH's in MATH were never set to MATH, MATH flips to zero only if no path of length up to MATH remains in MATH.
cs/0104001
The proof follows from REF by considering the time bounds of operations on polynomials described in REF. As each maintained polynomial has constant degree MATH, it follows that the space used is MATH.
cs/0104001
We prove that MATH, MATH, MATH and MATH are sub-matrices of MATH: MATH . We first observe that, by definition of NAME closure, MATH. Thus, since MATH, MATH and MATH are all closures, we can replace MATH with MATH, MATH with MATH and MATH with MATH. This implies that MATH and then MATH by REF . Now, MATH is a sub-matrix of MATH and encodes explicitly all paths in MATH with both endpoints in MATH, and since MATH, MATH. It follows that MATH. With a similar argument, we can prove that MATH, MATH and MATH are sub-matrices of MATH. In particular, for MATH we also need to observe that MATH.
cs/0104001
It is possible to compute MATH, MATH, MATH and MATH with three recursive calls of MATH, a constant number MATH of multiplications, and a constant number MATH of additions of MATH matrices. Thus: MATH where MATH is the time required to multiply two MATH Boolean matrices. Solving the recurrence relation, since MATH, we obtain that MATH (see, for example, the Master Theorem in CITE).
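The recurrence can be illustrated with the classical divide-and-conquer closure of a Boolean matrix. The sketch below uses two recursive closure calls plus a constant number of block products (the decomposition in the text uses three recursive calls so as to also maintain sub-closures, but the cost recurrence has the same shape). All names are illustrative; with M = [[A, B], [C, D]], the standard block formula is E = A*, F = (D + C E B)*, and M* = [[E + E B F C E, E B F], [F C E, F]].

```python
# Boolean matrix helpers.
def bmul(A, B):
    n = len(A)
    return [[int(any(A[i][k] and B[k][j] for k in range(n)))
             for j in range(n)] for i in range(n)]

def badd(A, B):
    return [[int(a or b) for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def closure(M):
    # Reflexive-transitive closure via the classical block decomposition.
    n = len(M)
    if n == 1:
        return [[1]]                 # closure of a single vertex is reflexive
    h = n // 2                       # assumes n is a power of two
    A = [r[:h] for r in M[:h]]; B = [r[h:] for r in M[:h]]
    C = [r[:h] for r in M[h:]]; D = [r[h:] for r in M[h:]]
    E = closure(A)                               # first recursive call
    F = closure(badd(D, bmul(bmul(C, E), B)))    # second recursive call
    EB, CE = bmul(E, B), bmul(C, E)
    TL = badd(E, bmul(bmul(EB, F), CE))
    return [tl + tr for tl, tr in zip(TL, bmul(EB, F))] + \
           [bl + br for bl, br in zip(bmul(F, CE), F)]


M = [[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [0, 0, 0, 0]]
K = closure(M)
assert K[0][3] == 1 and K[3][0] == 0
```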
cs/0104001
The following equalities prove the first condition of MATH-transitivity: MATH . The other conditions can be proved analogously. The hypothesis that MATH is MATH-centered is necessary for the MATH-completeness.
cs/0104001
Since MATH it holds that MATH and MATH. The proof follows from REF and from the facts that: MATH, MATH and MATH.
cs/0104001
Let MATH and MATH. By definition, we have: MATH and MATH. By exploiting the monotonic behavior of polynomials and closures over Boolean matrices, we have: MATH. Thus: MATH.
cs/0104001
The proof is by induction on the size MATH of matrices in Data Structure REF. The base is trivial. We assume that the claim holds for instances of size MATH and we prove that it holds also for instances of size MATH. CASE: Op-MATH=Init-MATH: since Init-MATH performs Init operations on each object, MATH. CASE: Op-MATH=Set-MATH: we first prove that MATH. Observe that MATH is obtained as a result of a composition of functions that relax the correct intermediate values of polynomials and closures of Boolean matrices in our data structure, allowing them to contain fewer MATH's. Indeed, by the properties of Lookup described in REF, we know that, if MATH is the correct value of a polynomial at any time, then P.Lookup-MATH. Similarly, by inductive hypothesis, if MATH is a NAME closure of a MATH Boolean matrix, then at any time K.Lookup-MATH. The claim then follows by REF , which states that the composition of relaxed functions computes values containing at most the MATH's contained in the values computed by the correct functions. To prove that MATH, based on the definition of MATH, it suffices to verify that MATH, where MATH and MATH. In particular, we prove that if MATH flips from MATH to MATH due to operation Set-MATH, then either MATH flips from MATH to MATH (due to lines MATH - MATH when MATH), or MATH flips from MATH to MATH (due to lines MATH - MATH when MATH). Without loss of generality, assume that the Set-MATH operation is performed with MATH (the proof is completely analogous if MATH). As shown in REF , sub-matrices MATH, MATH and MATH may undergo MATH-centered updates due to this operation and so their variation can be correctly propagated through NAME and NAME operations to polynomial MATH (line MATH) and to polynomials MATH, MATH and MATH (lines MATH - MATH). 
As MATH is also MATH-centered due to line MATH, any variation of MATH, which is assumed to be elsewhere correct from previous operations, can be propagated to closure MATH through a recursive call of Set-MATH in line MATH. By the inductive hypothesis, this propagation correctly reveals any new MATH's in MATH. We remark that MATH may contain fewer MATH's than MATH due to any previous NAME operations done in line MATH. Observe now that MATH occurs in polynomials MATH, MATH and MATH and that MATH is not necessarily MATH-centered. This means that we cannot directly propagate changes of MATH to these polynomials, as no efficient operation for doing so was defined in REF. However, by REF , MATH is MATH-transitive and MATH-complete with respect to MATH. Since MATH, by REF performing both SetRow-MATH and SetCol-MATH operations on data structures MATH, MATH and MATH in lines MATH - MATH is sufficient to correctly reveal new MATH's in MATH, MATH and MATH. Again, note that MATH, MATH and MATH may contain fewer MATH's than MATH, MATH and MATH, respectively, due to any previous NAME operations done in lines MATH - MATH. We have then proved that lines MATH - MATH correctly propagate any MATH-centered update of MATH to MATH. To conclude the proof, we observe that MATH also occurs in polynomials MATH, MATH, MATH, MATH and indirectly affects MATH. Unfortunately, we cannot update MATH efficiently as MATH is neither MATH-centered, nor MATH-transitive/MATH-complete with respect to MATH. So in lines MATH - MATH we limit ourselves to updating MATH explicitly and to logging any changes of MATH by performing NAME operations on polynomials MATH, MATH, and MATH and a LazySet-MATH operation on MATH. This is sufficient to guarantee the correctness of subsequent Set-MATH operations for MATH. CASE: Op-MATH=Reset-MATH: this operation runs in judicious order through the objects in the data structure and undoes the effects of previous Set-MATH and Init-MATH operations. 
Thus, any property satisfied by MATH still holds after performing a Reset-MATH operation.
cs/0104001
Since MATH, the proof easily follows by telescoping the sum that defines MATH: MATH .
cs/0104001
Since all the polynomials in Data Structure REF are of constant degree and involve a constant number of terms, the amortized cost of any NAME, NAME, NAME, and Reset operation on them is quadratic in MATH (see REF ). Let MATH be the time complexity of any Set-MATH, LazySet-MATH and Reset-MATH operation. Then: MATH for some suitably chosen constant MATH. As MATH, this implies that MATH. Init-MATH recomputes recursively MATH from scratch using Init operations on polynomials, which require MATH worst-case time each. We can then prove that the running time of Init-MATH is MATH exactly as in REF . To conclude the proof, observe that if MATH is the space used to maintain all the objects in Data Structure REF, and MATH is the space required to maintain a polynomial with the data structure of REF, then: MATH . Since MATH by REF , then MATH.
cs/0104001
We recall that, by REF , each entry of MATH can be queried in MATH worst-case time, and each Update operation can be performed in MATH worst-case time. Since MATH and MATH can be computed in MATH worst-case time by means of MATH queries on MATH, we can support both insertions and deletions in MATH worst-case time, while a reachability query for any pair of vertices MATH can be answered in MATH worst-case time by simply querying the value of MATH.
cs/0104001
Balancing the two terms in the update bound MATH yields that MATH must satisfy the equation MATH. The current best bounds on MATH CITE imply that MATH CITE. Thus, the smallest update time is MATH, which gives a query time of MATH.
cs/0104003
First, we resolve MATH with predicate substitutivity as follows. If the occurrence of MATH is in the head of MATH, then we select the subgoal of predicate substitutivity which is not an equation. Next, we apply symmetry to all equations of the resolvent. The resulting clause has the form: CASE: MATH . If, on the other hand, the occurrence of MATH is in a subgoal MATH of MATH, then we select such a subgoal. The resulting clause has the form: CASE: MATH . In either case, the occurrence of MATH is in some equation MATH. Next, we apply function substitutivity to MATH as many times as necessary to make such an occurrence appear at the top level of an equation MATH. Finally, we apply reflexivity to all equations except MATH, thus disposing of all unwanted equations. The resulting clause has the claimed form. Hence the lemma holds.
cs/0104003
First, we apply equation introduction to the definitions of MATH and the MATH's. MATH . Let MATH, MATH, be renaming substitutions CITE for MATH such that: MATH where MATH if MATH. Since the variables in MATH do not occur in MATH or in any other MATH (MATH), the application of MATH to a variable MATH renames MATH uniquely. Hence, we can think of such an application as the addition of the subscript MATH to MATH. Next, we apply equation introduction to MATH and rename variables, obtaining: MATH where MATH . Subsequently, we resolve with REF and rename MATH and MATH. MATH . Let MATH and MATH (that is, MATH occurs in some (and only one) MATH and some MATH, for MATH). We now define MATH and apply transitivity and factoring, replacing the equations by (up to variable renaming): CASE: MATH where MATH . Note that for all MATH and all MATH: MATH . Equivalently, for all MATH and all MATH: CASE: MATH . Hence, MATH for MATH, so that we have exactly all equations for constructing the MATH's. Next, we repeatedly fold using function substitutivity, and arrive at: MATH NAME form is now obtained by folding with REF : MATH . Finally, we apply predicate substitutivity and symmetry, and then fold with the completed definitions of the MATH predicates. The resulting clause has the desired form; hence we conclude that the theorem holds.
cs/0104003
First, we use equation introduction on the definitions of the MATH's. MATH . Next, we apply equation introduction to MATH in such a way that no two MATH's have variables in common, except for MATH and MATH. We obtain: MATH where the MATH's are as in REF and MATH . Now, we apply MATH to the if-part of the definition of MATH, where MATH is the mgu of MATH and MATH, and use equation introduction in the resulting instance, getting (up to variable renaming): MATH where MATH . Subsequently, we resolve REF with REF : MATH . Before folding, we add the following equations, recalling that subgoal addition preserves soundness: MATH (Such equations, after applying transitivity, will enable us to have the same MATH in each subgoal of the resulting clause in prechain form.) Now we add the equations: MATH (Such equations will enable us to fold with respect to REF .) We now apply transitivity and factoring in such a way that the equations are replaced by (up to variable renaming): MATH where MATH . Next, we repeatedly fold using function substitutivity and arrive at: MATH where MATH NAME form is now obtained by folding with REF : MATH . Finally, we apply predicate substitutivity and symmetry, and then fold with the completed definitions of the MATH predicates. The resulting clause has the desired form; hence we conclude that the theorem holds.
gr-qc/0104057
From the fact that MATH we have that MATH, where MATH corresponds to the evaluation of the spin network obtained by dropping an arbitrary link from the original one. MATH . As shown in CITE, MATH . From this we have MATH and REF follows.
gr-qc/0104057
The case MATH corresponds to REF . For MATH we observe that in the definition of MATH in REF we can obtain a bound containing MATH different MATH's in the denominator by bounding MATH of the three MATH's by MATH. For the case MATH we can write four inequalities as in the previous Lemma, choosing different triplets. Multiplying the four inequalities, each representation appears three times, so we obtain the exponent MATH in the bound.
gr-qc/0104057
We study the integral MATH for any choice of numbers MATH for MATH and a point MATH. First we integrate out MATH using REF , obtaining MATH where MATH. We can bound the previous expression by MATH . Let us concentrate on the MATH integration. In order to do so we use a coordinate system in which two of the coordinates are MATH while the third is the angle MATH between MATH and a given plane containing the geodesic between MATH and MATH. The ranges of these coordinates are MATH where we set MATH. In terms of these coordinates the measure MATH becomes MATH . In terms of these coordinates REF becomes MATH . Finally, if we insert the form of the measure MATH, that is, MATH (where MATH is the measure of the unit sphere), we can complete the integration to obtain the sought-after bound, namely MATH which concludes the proof.
gr-qc/0104057
The MATH-simplex amplitude MATH corresponds to introducing four additional MATH in the multiple integral REF together with an additional integration, corresponding to the four new edges and the additional vertex, respectively. Using REF this additional integration can be bounded by a constant, so that after using REF we have MATH for any arbitrary triple MATH in the same triangle.
gr-qc/0104057
We observe that a different bound, containing two or one representations in the denominator respectively, can be obtained for MATH if we bound either two or one of the three MATH in REF by MATH instead of just taking the absolute value. The integration on the right still converges (see CITE).
gr-qc/0104057
We divide each integration region MATH into the intervals MATH, and MATH so that the multiple integral decomposes into a finite sum of integrations of the following types: CASE: All the integrations are in the range MATH. We denote this term MATH, where MATH is the number of REF in the triangulation. This term in the sum is finite by REF . CASE: All the integrations are in the range MATH. This term MATH is also finite since, using REF for MATH and MATH respectively, we have MATH CASE: MATH integrations in MATH, and MATH in MATH. In this case MATH can be bounded using REF as before. The idea is to choose the appropriate subset of representations in the bounds (and the corresponding values of MATH) so that only the MATH representations integrated over MATH appear in the corresponding denominators. Since this is clearly possible, the MATH terms are all finite. We have bounded MATH by a finite sum of finite terms, which concludes the proof.
hep-th/0104158
CASE: Any MATH will satisfy REF with MATH . However, direct calculation shows that MATH if the NAME equation of motion is satisfied. REF : REF implies that MATH where MATH, MATH are two linearly independent real solutions of MATH, which can and will be normalized such that MATH . It is no loss of generality to assume that MATH in REF since this may always be achieved by forming suitable linear combinations of the MATH, MATH. By direct calculation using REF one may then verify that MATH satisfies the NAME equation of motion.
hep-th/0104183
Let MATH be a Hermitian line bundle with a Hermitian connection over MATH. We can take an open covering MATH such that there are local sections MATH. We define the transition functions MATH by MATH. Because MATH is equipped with a Hermitian metric, MATH takes values in MATH. By the Hermitian connection MATH we define the connection forms MATH by MATH. Then we have a Čech cochain MATH. This cochain is a cocycle because the following relations are satisfied: MATH . If we take another local section MATH, then we have the Čech cocycle MATH which is cohomologous to MATH. This construction of the Čech cocycle gives a homomorphism from the isomorphism classes of Hermitian line bundles with Hermitian connection to MATH. If local sections MATH give MATH whose cohomology class is trivial, then we have a global horizontal section of MATH. This implies that MATH is trivial and the homomorphism is injective. If we are given a cohomology class of MATH, then we express the class by a Čech cocycle with respect to an open covering. Obviously the Čech cocycle gives rise to a Hermitian line bundle with a Hermitian connection. Hence the homomorphism is surjective and the theorem is proved.
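In standard notation (the symbols below are illustrative, since the paper's own symbols are anonymized here: local sections s_alpha, transition functions g_{alpha beta}, connection forms A_alpha), the data constructed in this proof satisfy relations of the following shape:

```latex
% Illustrative standard notation for a Hermitian line bundle with connection.
s_\alpha = g_{\alpha\beta}\, s_\beta
  \quad\text{on } U_\alpha \cap U_\beta,
  \qquad g_{\alpha\beta}\colon U_\alpha \cap U_\beta \to U(1),
\\[4pt]
g_{\alpha\beta}\, g_{\beta\gamma}\, g_{\gamma\alpha} = 1
  \quad\text{on } U_\alpha \cap U_\beta \cap U_\gamma
  \quad\text{(\v{C}ech cocycle condition)},
\\[4pt]
A_\alpha - A_\beta = d\log g_{\alpha\beta}
  \quad\text{on } U_\alpha \cap U_\beta
  \quad\text{(up to normalization conventions)}.
```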
hep-th/0104183
We can directly verify REF by the definition of the induced map MATH. Using REF we can prove REF .
hep-th/0104183
By the construction of MATH we have MATH . We can calculate the other terms using the property of the Čech cocycle, that is, MATH. So the lemma is proved.
hep-th/0104183
This is shown by direct computation using the cocycle condition of the Čech cocycle and NAME 's theorem.
hep-th/0104183
The calculation of MATH using NAME 's theorem gives the formula.
hep-th/0104183
The well-definedness of MATH is proved by REF . If we fix an open covering of MATH, we can prove that MATH is independent of the cocycle representation by REF . In order to prove that MATH is independent of the choice of open covering of MATH, we consider a refinement of an open covering. In this case we can take the induced covering of MATH and a cocycle representation of MATH that does not change the value of MATH. So this proposition is proved.
hep-th/0104183
These properties are consequences of the local sum definition of the action. For REF , we construct a triangulation MATH of MATH and a map MATH by pulling back MATH and MATH of MATH. Then the functoriality of the integration shows REF . If one reverses the orientation of MATH, then the integration over the oriented simplex changes sign. Thus REF is proved. The formula in REF is obtained by separating the summation of the simplices.
hep-th/0104183
Let MATH be a cocycle representation of MATH; then we have MATH by definition. We triangulate MATH by MATH and choose a map MATH. We compute the right-hand side of the formula as follows. MATH . Applying NAME 's theorem and the cocycle conditions, and canceling the summation over the simplices in the interior of MATH, we obtain the left-hand side of the formula.
hep-th/0104183
We prove MATH. By using the cocycle condition of MATH and NAME 's theorem, we have MATH and MATH . Because MATH, the cocycle condition holds.
hep-th/0104183
First we fix an open covering MATH and show that MATH induces MATH . For MATH we define MATH as follows. MATH . We can verify MATH by direct computation using NAME 's theorem. This implies that MATH . Hence MATH induces the homomorphism MATH of Čech cohomologies. Secondly, we show that the homomorphism of NAME cohomologies induced from MATH is well-defined. An open covering MATH is a refinement of MATH if there exists a map of indices MATH such that MATH. The refinement of the open covering of MATH induces a refinement of the coverings of the map space as follows. If a triangulation MATH of MATH and a map MATH are given, then we have an open set MATH where MATH. We can define a map MATH of indices of the coverings of the map space by MATH. It is easy to see that MATH. So we have a refinement MATH of MATH. These refinements induce homomorphisms of Čech cohomologies MATH which commute with MATH as follows. MATH . Taking the direct limit, we obtain the well-defined homomorphism MATH which is induced from MATH.
hep-th/0104183
By REF an isomorphism class of a line bundle with a connection corresponds uniquely to a NAME cohomology class. The NAME cohomology class that corresponds to the isomorphism class of MATH is expressed by MATH. So the isomorphism class is uniquely determined by MATH.
hep-th/0104183
Let MATH be a cocycle representation of MATH. Note that MATH. We obtain the curvature form by computing MATH using the following formula: MATH .
hep-th/0104183
Let MATH be two open sets such that MATH. By REF we have MATH. So MATH is well-defined. Let MATH be open sets of MATH. On the non-empty intersection MATH we have MATH, using REF again. This shows that MATH is indeed a section of MATH. By REF , its values have unit norm.