paper | proof |
|---|---|
cs/0011023 | Let MATH denote the bid on the MATH-th object of the MATH-th disadvantaged bidder. Let MATH be MATH's bid on the MATH-th object. Because the bids of the MATH disadvantaged bidders are within MATH, MATH has no incentive to bid over MATH. Thus, MATH, and MATH. Since bids from different disadvantaged bidders are independent, MATH . From the fact that MATH, MATH wins exactly MATH objects on average. |
cs/0011023 | Because MATH are symmetric for MATH, we need only show that the probability distribution of MATH is as described in REF . Let MATH . Then MATH . Let MATH . Note that MATH. The probability distribution of MATH equals MATH . |
cs/0011023 | If MATH, the lemma is the same as REF . If MATH, we divide the objects into MATH groups of MATH objects each and employ REF to obtain bids for the first group. We then set the bids for the other MATH groups to the corresponding bids for the first group. We scale every bid by a factor of MATH so that the bids sum to MATH. This gives the desired probability distribution. |
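The replicate-and-rescale construction above can be sketched as follows; the function name, the number of groups `k`, and the total `budget` are illustrative stand-ins for the elided quantities.

```python
def replicate_and_rescale(first_group_bids, k, budget):
    """Replicate one group's bids across k identical groups, then scale
    every bid uniformly so the bids sum to the budget.  A sketch of the
    construction in the proof; all names are illustrative."""
    bids = list(first_group_bids) * k   # same bids for every group
    scale = budget / sum(bids)          # uniform scaling factor
    return [b * scale for b in bids]
```

Uniform scaling preserves the relative distribution of the bids, which is why the rescaled sequence inherits the desired probability distribution from the first group.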
cs/0011023 | From REF and the fact that our game is a zero-sum game, the MATH disadvantaged bidders win MATH objects in total. Since they all use the same bidding algorithm, by symmetry, each of them wins MATH objects. This upper bound of MATH is also a lower bound since the adversary can always win at least MATH objects by employing the same bidding algorithm as the disadvantaged bidders. |
cs/0011023 | For each MATH, CASE: MATH denotes the expected number of objects MATH wins if MATH uses the uniform probability distribution while the disadvantaged bidders may use any arbitrary probability distribution; CASE: MATH denotes the expected number of objects MATH wins if MATH does not permute his initial bid sequence and the disadvantaged bidders employ the uniform probability distribution; CASE: MATH denotes the expected number of objects MATH wins if MATH uses a given probability distribution and the disadvantaged bidders employ the uniform probability distribution. Since MATH for MATH, it suffices to prove that MATH. Without loss of generality, assume that MATH. Let MATH be the largest index such that MATH; if no such MATH exists, let MATH. Since MATH for MATH, MATH . To calculate MATH, let MATH be the probability that MATH places MATH on the MATH-th object. Then, MATH . Since MATH, MATH . To calculate MATH, let MATH be the probability that a disadvantaged bidder places MATH on the MATH-th object. Then, MATH . Since MATH for each MATH, by NAME 's inequality, MATH . |
cs/0011023 | Given an initial bid sequence MATH of the MATH disadvantaged bidders, MATH chooses his initial bid sequence to be MATH. Since MATH's bids are different from MATH, in light of REF , we may assume that the disadvantaged bidders permute their bids with the uniform probability distribution. Consequently, the expected number of objects won by MATH is as desired. |
cs/0011023 | Given an optimal initial bid sequence MATH of MATH, we show that this sequence can be transformed into a desired sequence MATH without decreasing MATH. Let MATH be the number of MATH's bids that are in MATH. There are three cases. CASE: MATH. For each MATH, let MATH where MATH is the largest index such that MATH. Then the expected number of objects won by MATH is the same as that of MATH, and the new sequence is as desired. CASE: MATH. This case is impossible since MATH can increase MATH by decreasing one of his bids outside MATH by MATH and increasing the one that is in MATH by MATH. CASE: MATH. Without loss of generality, let MATH be MATH's MATH bids in MATH in increasing order. We first decrease MATH by MATH and increase MATH by MATH for MATH. As shown below, this adjustment never decreases MATH. Then, since MATH's adjusted bids are not in MATH, his new initial bid sequence can be further transformed into a desired sequence as in REF . Let MATH be the decrease in MATH resulting from decreasing MATH. Let MATH be the increase in MATH resulting from increasing MATH for MATH. We need to show that MATH. It suffices to prove that MATH. Let MATH denote the expected number of objects MATH wins if MATH. Then, MATH . Assume that MATH and MATH. Then, MATH and MATH. Note that MATH increases with MATH. Since MATH, MATH is minimized when MATH and thus MATH. Consequently, MATH . |
cs/0011023 | From REF , MATH has an optimal initial bid sequence MATH, such that for all MATH, MATH. If MATH, then it cannot win any object. If MATH, then it can win MATH objects on average. The unit price MATH pays for these objects is strictly greater than MATH . Since the expected number of objects won by such MATH is an integral multiple of MATH, MATH for some integer MATH, and MATH . Since MATH is an integer, MATH and thus MATH. |
cs/0011023 | By REF , MATH wins at most MATH objects on average. By REF , this upper bound is also the lower bound of the expected number of objects MATH can win. Then this theorem follows from the fact that our auction is a zero-sum game. |
cs/0011023 | Assume that MATH employs this bidding algorithm. From his budget constraint, he wins at most MATH objects. This upper bound is also a lower bound. To prove this claim by contradiction, assume that MATH wins fewer than MATH objects and thus does not exhaust his entire budget. Then, the total number of objects won by the other bidders exceeds MATH. Because MATH is a multiple of MATH and MATH has not exhausted his budget, every object's winning bid must be at least MATH. Therefore, the total of the winning bids of the other bidders exceeds MATH. Since this contradicts the budget constraint, MATH can win at least MATH objects. |
cs/0011023 | Let MATH denote the bid on the MATH-th object by the MATH-th disadvantaged bidder. Let MATH be MATH's bid on the MATH-th object. Because MATH, MATH has no incentive to set MATH greater than MATH. Thus, MATH and MATH. Since bids from different disadvantaged bidders are independent, MATH . MATH maximizes MATH as follows: MATH . |
cs/0011025 | Immediate from the fact that the vector defining MATH is a prefix of the vector defining MATH. |
cs/0011025 | Immediate from the definitions. MATH. |
cs/0011025 | Let MATH be a vector that denotes MATH. Then, by the definition of the characteristic function, MATH. That is, MATH holds for any MATH. Since MATH, MATH. Thus, MATH implies MATH for any sequences MATH and MATH. In particular, this holds for the sequences appearing in MATH. Repeated application of this argument proves the lemma. |
cs/0011025 | Immediate from the definitions. MATH. |
cs/0011025 | Let MATH be a vector that denotes MATH. Then, by the definition of the characteristic function, MATH. That is, MATH holds for any MATH. Since MATH, MATH. Thus, MATH for any sequences MATH and MATH. In particular, this holds for the sequences appearing in MATH. Repeated application of this argument and the transitivity of MATH prove the lemma. |
cs/0011025 | CASE: Let MATH, and assume that MATH has an infinite NAME. This derivation contains an infinite directed subsequence, that is, a subsequence of goals MATH such that the selected atom of MATH, MATH, is a direct descendant of the selected atom of MATH, MATH. There is some MATH such that MATH holds for any MATH. Let MATH be greater than MATH. Then there is a clause MATH such that the MATH exists and MATH holds for some MATH, where MATH is a computed answer substitution for MATH. Observe that the choice of MATH implies that MATH. Since MATH is one of the selected atoms in MATH, the condition of term-acceptability with respect to MATH is applicable. Thus, MATH, that is, MATH. By proceeding in this way we construct an infinitely decreasing chain of atoms, contradicting the well-foundedness of MATH. CASE: Let MATH be in MATH, and let MATH be a clause in MATH such that the MATH exists. We define a relation MATH such that MATH for any MATH, where MATH is the computed answer substitution for MATH. Let MATH be the transitive closure of MATH. We start from the following observation. If MATH, there is a derivation starting from MATH that has a goal with selected atom MATH in it. Similarly, if MATH and MATH, there is a derivation starting from MATH that has a goal with selected atom MATH in it, followed (not necessarily immediately) by a goal with the selected atom MATH. We extend this observation to MATH, and claim that if MATH then there is a derivation for MATH having a goal with the selected atom MATH in it. An additional observation is that if MATH is defined for some MATH and MATH, then MATH and MATH are in MATH. In particular, if MATH there is a derivation for MATH that has a goal MATH with the selected atom MATH in it. Thus, we can continue the derivation from MATH in the same way we have constructed a derivation from MATH to MATH. This process can be repeated forever, contradicting the fact that MATH terminates for MATH. 
If MATH and MATH, then from the observation above we construct a derivation from MATH that has a goal with the selected atom MATH, and then continue by mimicking the derivation from MATH that has a goal with the selected atom MATH. Thus, MATH. This also proves asymmetry (if MATH and MATH, then MATH by transitivity, which contradicts irreflexivity). The well-foundedness follows from the finiteness of all the derivations. The term-acceptability with respect to MATH follows immediately from the definition of MATH. |
cs/0011025 | Assume that MATH. This means that there exists an occurrence MATH of a variable MATH and a replacement MATH such that MATH. On the other hand, MATH is rigid on MATH. Thus, the replacement cannot be extended to a substitution MATH such that MATH. This means that MATH is non-linear in its variables, that is, MATH appears among the variables of MATH at least twice. Let MATH be all occurrences of MATH in MATH; MATH is one of them. We distinguish the following cases: CASE: MATH. Let MATH be a term obtained from MATH by a simultaneous replacement of MATH in MATH by MATH. Let MATH be a term obtained from MATH by a simultaneous replacement of MATH in MATH by MATH. Then, by the monotonicity of MATH, MATH holds. However, since MATH does not appear in MATH except for MATH, and all of those, and only them, have been replaced by new terms, the compositions of the replacements above are substitutions. This means that there are two substitutions MATH and MATH such that MATH, contradicting the rigidity of MATH. CASE: MATH. Similar to the previous case. CASE: MATH. Let MATH be a term obtained from MATH by a simultaneous replacement of MATH in MATH by MATH. Then, by the subterm property of MATH, MATH holds. By the same reasoning as above, MATH is an instance of MATH. Thus, MATH is equal to it with respect to MATH (rigidity), that is, MATH, contradicting the incomparability. |
cs/0011025 | Assume for the sake of contradiction that MATH is not ground. Thus, MATH has at least one variable occurrence, say MATH. Then, MATH, contradicting that MATH. |
cs/0011025 | CASE: Let MATH. This means that for any term MATH, MATH. In particular, MATH. Clearly, MATH is a subterm of MATH. Thus, by REF MATH, that is, MATH. Thus, MATH. CASE: Let MATH. If MATH, then MATH holds. Thus, at least one of those terms is not equal to MATH and MATH. Hence, MATH, proving the second statement of the lemma as well. |
cs/0011025 | Let MATH be non-empty and let MATH. Then, REF allow us to conclude the monotonicity and subterm properties for the argument positions occupied by the instances of MATH and to mimic the proof of REF . |
cs/0011025 | Let MATH be a substitution. If MATH, then MATH and the proof is done. Otherwise, there exists some MATH such that MATH. By REF , MATH. Thus, MATH. This means that for any MATH, MATH holds for all occurrences MATH. But now the pseudo-rigidity condition is applicable, and MATH. |
cs/0011025 | Suppose the above condition is satisfied for MATH. Take any MATH and any clause MATH such that MATH exists. Suppose that MATH is a body atom such that MATH and that MATH is a computed answer substitution for MATH. Then MATH is identical to MATH, and thus, MATH is identical to MATH. Since MATH is rigid on MATH and MATH, MATH. Finally, since MATH is a computed answer substitution, MATH for all MATH. Thus, by the definition of a valid interargument relation, the arguments of MATH satisfy MATH for all MATH. Thus, by the rigid term-acceptability assumption, MATH. Combined with MATH, we get MATH. |
cs/0011025 | We base our proof on the notion of a directed sequence. Let MATH be non-terminating, that is, MATH has an infinite derivation. By REF it has an infinite directed derivation as well. Let MATH be this infinite directed derivation. We denote MATH and MATH. There is a clause MATH and substitutions MATH and MATH such that MATH, for some MATH, MATH, MATH and MATH is ground. Note that MATH is ground due to the well-modedness. The term-acceptability condition implies that MATH, that is, MATH. Since MATH and MATH are well-founded, MATH is ground. Thus, MATH and, by the output-independence of MATH, MATH. By transitivity, MATH. Thus, the selected atoms of the goals in the infinite directed derivation form an infinitely decreasing chain with respect to MATH, contradicting the well-foundedness of the order. |
cs/0011025 | We define MATH as follows: MATH, where MATH is the MATH-th element in the sequence generated by REF for the directed sequence MATH with MATH. The sequence MATH is well-defined: if MATH, on the one hand we get that MATH, and on the other hand, MATH, that is, MATH. For MATH only one of these definitions is applicable. The requirements of the lemma are clearly fulfilled. |
cs/0011025 | Since MATH and MATH are queries in some of the NAME of the simply moded queries and of the simply moded program, they are simply moded CITE. Thus, the output positions of both MATH and MATH are occupied by distinct variables. Since MATH, we can claim that MATH, up to variable renaming. Thus, REF becomes applicable (note that we never required in the lemma that both directed sequences originate from derivations of the same NAME), and we can obtain a new directed sequence as required. |
cs/0011025 | We base the choice of MATH on the NAME. More precisely, we define MATH if there is a well-moded and simply moded goal MATH and there is a directed sequence MATH in the NAME for MATH, such that the selected atom of MATH is MATH and the selected atom of MATH is MATH and MATH, MATH. Let MATH be a least NAME model of MATH. We have to prove that: CASE: MATH is an order relationship, that is, it is irreflexive, asymmetric and transitive; CASE: MATH is output-independent; CASE: MATH is well-founded; CASE: MATH is term-acceptable with respect to MATH and MATH. CASE: MATH is an order relationship, that is, MATH is irreflexive, asymmetric and transitive. CASE: NAME. If MATH holds, then there exists a directed sequence MATH such that the selected atom of MATH is MATH, the selected atom of MATH is MATH, MATH and MATH. By repeated application of REF an infinite branch is built, contradicting the finiteness of the NAME. CASE: Asymmetry. If MATH holds, then there exists a directed sequence MATH such that the selected atom of MATH is MATH, the selected atom of MATH is MATH, MATH, MATH. If MATH holds, then there exists a directed sequence MATH such that the selected atom of MATH is MATH, the selected atom of MATH is MATH, MATH, MATH. By repeated application of REF an infinite branch is built, contradicting the finiteness of the NAME. CASE: Transitivity. If MATH holds, then there exists a directed sequence MATH such that the selected atom of MATH is MATH, the selected atom of MATH is MATH, MATH, MATH. If MATH holds, then there exists a directed sequence MATH such that the selected atom of MATH is MATH, the selected atom of MATH is MATH, MATH, MATH. By applying REF a new directed sequence is built, such that the selected atom of its first element is MATH and the selected atom of its last element is MATH. By the definition of MATH, MATH holds. CASE: MATH is output-independent. Assume that there are two atoms MATH and MATH such that MATH, but MATH. 
Then there exists a directed sequence MATH such that the selected atom of MATH is MATH, the selected atom of MATH is MATH, MATH and MATH. However, MATH. Thus, by repeated application of REF an infinite branch is built, contradicting the finiteness of the NAME. CASE: MATH is well-founded. Assume that there is an infinitely decreasing chain MATH. This means that there is an infinite directed sequence in the tree (a concatenation of infinitely many finite ones), contradicting the finiteness of the tree. CASE: MATH is term-acceptable with respect to MATH and MATH. Let MATH. Let MATH be a substitution such that MATH are ground and MATH. The goal MATH is a well-moded goal; however, it is not necessarily simply moded. Thus, we define a new goal MATH such that it coincides with MATH on its input positions, and its output positions are occupied by a linear set of variables. More formally, let MATH be MATH restricted to Var-MATH. Then MATH, and thus, MATH is ground, while MATH, and thus, MATH is a linear sequence of variables. Summing up, MATH is a well-moded and simply moded goal. Thus, it terminates with respect to MATH, and its derivations have been considered while defining MATH. By the definition of MATH there exists some substitution MATH such that MATH. Thus, MATH. Since MATH is a least NAME model, MATH is a correct answer substitution of MATH and, since MATH and MATH are well-moded (the latter as the NAME of the well-moded clause and the well-moded goal), MATH is a computed answer substitution as well CITE. Thus, the next goal to be considered in the derivation is MATH. This is a directed descendant of MATH; thus, by the definition of MATH, MATH. By the definition of MATH, MATH, and by the definition of MATH, MATH. Thus, MATH. |
cs/0011025 | If MATH and MATH are well-moded then MATH is well-moded as well CITE. Analogously, MATH is simply moded CITE. Thus, the input positions of MATH are ground and the output positions of MATH are occupied by distinct variables. Therefore, MATH cannot affect the input positions of MATH, and MATH, that is, MATH. Since MATH is identical to MATH, MATH holds. |
cs/0011025 | We define MATH in the following way: MATH . The properties of MATH follow from the corresponding properties of MATH and MATH. Let MATH be a clause. We have to prove that for any substitution MATH such that MATH and MATH are ground and MATH, it holds that MATH. Then, by the property of MATH stated above, there is some MATH unifiable with MATH. Let MATH be the most general unifier of MATH and MATH. By REF , MATH. On the other hand, MATH. Moreover, the output argument positions of MATH are occupied by a sequence of distinct variables (by the simple modedness of MATH and MATH; we can always assume that fresh variants of the clauses are considered CITE). Thus, there is some MATH such that MATH. MATH. Since MATH is the least NAME model, MATH is a correct answer substitution for MATH. Since MATH and any MATH are well-moded, MATH is also a computed answer substitution. Thus, for the goal MATH the computed answer substitution will be MATH. CASE: MATH. Then, by the definition of term-acceptability with respect to a set, MATH. However, MATH and MATH is MATH. Thus, by claiming MATH, and MATH. CASE: MATH. This means that MATH, that is, MATH. |
cs/0011026 | Consider a flat folding of a mountain-valley pattern, and let MATH be consecutive creases with the same direction. The portion MATH of the segment can be in one of three configurations (see REF ): CASE: The portion forms a ``spiral" with MATH being the outermost edge of the spiral and MATH being the innermost; or CASE: The portion forms a ``spiral" with MATH being the outermost edge of the spiral and MATH being the innermost; or CASE: The portion forms two ``spirals" connected by a common outermost edge, with MATH and MATH being the two innermost edges. Now if MATH, then MATH cannot be the innermost edge of a spiral, or else MATH would penetrate through MATH. Similarly, if MATH, then MATH cannot be the innermost edge of the spiral. Because in all three configurations above at least one of MATH and MATH must be innermost, we cannot have both inequalities true. |
cs/0011026 | Let MATH be maximum such that MATH all have the same direction. By the mingling property, either MATH or MATH. In the former case, MATH is a foldable end, so we have the desired result. A generalization of the latter case is that we have MATH all with the same orientation, and MATH. If MATH, then MATH is a foldable end, so we have the desired result. Otherwise, let MATH be maximum such that MATH all have the same direction. By the mingling property, either MATH or MATH. In the former case, MATH is a crimpable pair, so we have the desired result. In the latter case, induction applies. |
cs/0011026 | This is obvious because folding a foldable end is equivalent to chopping off a portion of the segment. Thus, if we take a flat folding of the original pattern, remove that portion of the segment, and double up the number of layers for the adjacent portion of the segment, we have a flat folding of the new object. |
cs/0011026 | Let MATH be a crimpable pair, and assume by symmetry that MATH is a mountain and MATH is a valley. Consider a flat folding MATH of the original segment, such as the one in REF (left). We orient our view to regard the segment MATH as flipping over during the folding, so that the remainder of the (unfolded) segment keeps the same orientation. Thus, MATH is above MATH which is above MATH. Now suppose that MATH places some layers of paper in between MATH and MATH. Then these layers of paper can be moved to immediately above MATH, because MATH is at least as long as MATH, and hence there are no barriers closer than MATH. See REF . Similarly, we move material between MATH and MATH to immediately below MATH. In the end, we have a flat folding of the object obtained from making the crimp MATH. |
cs/0011026 | By REF , the pattern is mingling, and hence by REF we can find a crimpable pair or a foldable end. Making this fold preserves flat foldability by REF , so by induction the result holds. |
cs/0011026 | First note that it is trivial to check in constant time whether a pair of consecutive folds forms a crimp or whether an end is foldable. We begin by testing all such folds, and hence obtain in linear time a linked list of all folds possible at this time. We also maintain reverse pointers from each symbol in the string to the closest relevant possible fold. Now when we make a crimp or an end fold, only a constant number of previously possible folds can become impossible, and a constant number of previously impossible folds can become newly possible. These folds can be discovered by examining a constant-size neighborhood of the performed fold. We remove the old folds from the list of possible folds, and add the new folds to the list. Then we perform the first fold on the list, and repeat the process. By REF , if the list ever becomes empty, it is because the mountain-valley pattern is not flat-foldable. |
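The greedy fold-and-repeat process above can be sketched as follows. For simplicity this version rescans the whole pattern after each fold (quadratic rather than linear time), and the crimp and end-fold conditions on segment lengths are an assumed formalization of the definitions referenced in the proofs, not taken verbatim from them.

```python
def flat_foldable(lengths, dirs):
    """Decide 1D flat-foldability by repeatedly performing end folds and
    crimps, as in the greedy algorithm sketched in the text.

    lengths: segment lengths p[0..n] between consecutive creases.
    dirs:    crease directions 'M'/'V', with len(dirs) == len(lengths) - 1.
    Assumed conditions: an end is foldable when its segment is no longer
    than its neighbor; creases i, i+1 are crimpable when their directions
    differ and the middle segment is a local minimum.
    """
    p, d = list(lengths), list(dirs)
    while d:
        if p[0] <= p[1]:                      # foldable left end: chop it off
            p.pop(0); d.pop(0)
            continue
        if p[-1] <= p[-2]:                    # foldable right end
            p.pop(); d.pop()
            continue
        for i in range(len(d) - 1):           # look for a crimpable pair
            if d[i] != d[i + 1] and p[i] >= p[i + 1] <= p[i + 2]:
                p[i:i + 3] = [p[i] - p[i + 1] + p[i + 2]]  # merge segments
                del d[i:i + 2]
                break
        else:
            return False                      # no fold possible: not flat-foldable
    return True
```

Per the accompanying lemmas, performing an available crimp or end fold preserves flat foldability, so the greedy order never matters; the algorithm reports failure exactly when the pattern is not flat-foldable.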
cs/0011026 | Performing an all-layers simple fold that is not allowable forbids us from all-layers simple folding certain creases, and hence the resulting segment cannot be completely folded after that point. Therefore, only allowable folds can be in the sequence. It remains to show that performing an allowable fold preserves foldability by a sequence of all-layers simple folds. But performing an allowable fold is equivalent to removing the smaller portion of paper to one side of the fold. Hence, it can only make more (allowable) folds possible, so the mountain-valley pattern remains foldable. |
cs/0011026 | Let MATH be the complement string of MATH (that is, the complement of each letter of MATH), and let MATH be the reverse string of MATH. The fold at position MATH of MATH is allowable precisely if the first MATH characters of the suffix of MATH starting in the MATH-nd position are identical to the suffix of MATH starting in the MATH-st position, and the single endpoint of MATH (MATH if MATH, MATH if MATH) has length less than or equal to its complement. We build a single suffix tree containing all suffixes of MATH and MATH in MATH time. Further, we augment this tree with the capability to perform least common ancestor (LCA) queries in constant time after linear preprocessing time CITE. This LCA data structure enables us to return the length of the longest prefix match of two given suffixes in constant time. To find the end-most possible fold, we can search for the longest prefix match of MATH and MATH, where the MATH-th fold attempt takes place at MATH if MATH is odd, and MATH if MATH is even. Thus the attempted folds alternate between the left and right ends. A fold can occur at MATH if MATH equals MATH or MATH, and the length of the longest prefix match between MATH and MATH is MATH, or if the boundary condition above is satisfied. We then perform this first legal fold, thus reducing the length of MATH. We can continue our scan for the next fold by appropriately reducing the length of the necessary longest prefix match to reflect the new end of the string. Note that the suffix tree remains unchanged, and hence once it is computed, the folding process takes MATH time. |
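A direct, non-suffix-tree check of the allowability condition can be sketched as follows. This naive version takes linear time per query instead of the constant time achieved by the LCA-augmented suffix tree, and it assumes unit spacing between creases so that the endpoint length condition reduces to a plain string comparison; both the condition and the names are an assumed formalization.

```python
def complement(s):
    # Swap mountains and valleys: a flipped flap reads its creases complemented.
    return s.translate(str.maketrans("MV", "VM"))

def allowable(s, i):
    """Check whether an all-layers simple fold at crease i of the
    mountain-valley string s is allowable: the shorter side, reversed
    and complemented, must coincide with the facing portion of the
    other side (assumed unit crease spacing)."""
    left, right = s[:i], s[i + 1:]
    if len(left) <= len(right):
        return complement(left[::-1]) == right[:len(left)]
    return complement(right[::-1]) == left[len(left) - len(right):]
```

The suffix-tree construction in the text answers exactly these reversed-complement prefix-match queries, but in constant time per fold after linear preprocessing.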
cs/0011026 | For MATH, valley fold MATH if exactly one of MATH and MATH is in MATH. After these folds, as we travel along the steps corresponding to MATH, we travel in the MATH direction for elements that belong to MATH and in the MATH direction for elements that belong to MATH. Because the sums of elements of both MATH and MATH are MATH, the point MATH has the same MATH-coordinate as the point MATH after these folds. Because MATH, the steps corresponding to MATH's are confined to remain in between the MATH coordinates of points MATH and MATH. Because MATH has the same MATH-coordinate as MATH and because the vertical distance between MATH and MATH is MATH, point MATH will have the same MATH-coordinate as either MATH or MATH. Now valley fold MATH. Because the vertical distance between MATH and MATH is MATH, the MATH-coordinate of MATH will be same as that of MATH or MATH and the step between MATH and MATH will lie exactly between the MATH-coordinates of MATH and MATH. This situation is illustrated in REF . Now valley fold MATH. Because MATH, the partly folded staircase, which currently lies between the MATH-coordinates of MATH and MATH, fits within the rectangle MATH. Now valley fold MATH. We now have the semi-folded stairs on the right and the rectangular frame MATH on the left. Finally, valley fold all of the remaining unfolded creases in the staircase. This can be done because the rectangular frame is now on the left of MATH and all steps of the staircase are on the right of MATH. |
cs/0011026 | If either MATH or MATH is folded without the staircase being confined between the MATH-coordinates of MATH and MATH, the rectangular frame MATH would intersect the staircase and would make the other of MATH and MATH impossible to fold. Hence the staircase must be brought between the MATH-coordinates of MATH and MATH before folding either MATH or MATH. Because the last and the second-last steps of the staircase are of size MATH and MATH, respectively, point MATH must have the same coordinate as the point MATH when the staircase is confined between the MATH-coordinates of MATH and MATH. As we travel from MATH to MATH along the staircase, we travel equally in the positive and negative MATH directions along the steps corresponding to the elements of MATH. Hence the sum of elements along whose steps we travel in the negative MATH direction is the same as the sum of elements along whose steps we travel in the MATH direction. Thus there is a solution to the partition instance if the crease pattern in REF is foldable. |
cs/0011037 | The last rule of a derivation of MATH must be one of MATH, MATH, MATH, MATH, MATH or MATH. In each of these cases the claim is trivial. For example, in the case MATH we conclude by two applications of MATH that MATH. |
cs/0011037 | Induction on MATH , using REF : If MATH is a variable or a constant the claim is obvious. If MATH is of the form MATH then by REF either MATH or MATH. We apply the induction hypothesis to the corresponding subterm and can type MATH by the same rule used to type MATH. If MATH is of the form MATH, then MATH must be typed due to MATH, hence MATH and we may apply the induction hypothesis to MATH (without loss of generality MATH does not occur in MATH) and then conclude the claim by MATH. Similarly if MATH. |
cs/0011037 | Induction on MATH shows that only conversions have to be considered. The only non-trivial case is handled in REF . |
cs/0011037 | Induction on MATH and case distinction according to REF : If MATH is of the form MATH then MATH has to be of type MATH and hence MATH has to be empty (since there is no elimination rule for the type MATH). If MATH is of the form MATH or MATH, then MATH has to be empty as well, for otherwise the term would not be normal. So the interesting case is if MATH is of the form MATH. In this case we distinguish cases according to MATH. If MATH is MATH or MATH, MATH or MATH, then MATH can consist of at most MATH, MATH or MATH terms, respectively (for otherwise there would be a redex); hence we have the claim (or the term is not of one of the types we consider). In the case MATH we apply the induction hypothesis to MATH, yielding that MATH is of the form MATH with a list MATH. So if MATH consisted of more than one term, there would be a redex; but if MATH is a single term, then the whole term would have arrow type. |
cs/0011037 | Induction on MATH , using the fact that MATH is typed and therefore in the case of an application only one of the terms can contain the variable MATH free (compare REF ). For instance, if MATH is MATH, then the last rule of a derivation of MATH must be one of MATH, MATH, MATH, MATH, MATH or MATH. In each of these cases the claim is obvious. |
cs/0011037 | Induction on MATH and case distinction according to REF . The case MATH is trivial. In the case MATH the MATH cannot be empty (for otherwise the term would have an arrow type), so we can apply the induction hypothesis to the (by REF ) shorter term MATH. Similarly for MATH, where we apply the induction hypothesis to MATH or MATH depending on whether MATH is MATH or MATH (note that by typing one of these has to be the case), and for MATH, where we again use REF . For MATH we use the induction hypothesis for MATH. For MATH use the induction hypothesis for MATH. In the cases MATH and MATH, by typing, MATH has to be of the form MATH and we can apply the induction hypothesis to (say) MATH. |
cs/0011037 | REF shows that for closed terms of type MATH the usual length and the number of free variables coincide (due to the typing MATH of the MATH function). The number of free variables trivially does not increase when reducing the term to normal form. |
cs/0011037 | We prove this by induction on MATH. The only non-trivial case is when MATH is an application (note that the case MATH does not occur since MATH is typed). So let MATH be of the form MATH. We distinguish cases according to whether MATH is a list or not. If MATH is a list with MATH entries then, as MATH is typed, we know that MATH must be of the form MATH and therefore closed. MATH is easily seen to be again a list with MATH entries. Therefore we get MATH . If MATH is not a list, then from the fact that MATH is typed we know that at most one of the terms MATH and MATH contains the variable MATH free. In the case MATH (the other case is handled similarly) we have MATH . This completes the proof. |
cs/0011037 | We prove this by induction on the definition of the relation MATH. Case MATH via MATH. We distinguish whether MATH is a list or not. Subcase: MATH is a list with MATH entries. Then by the typing restrictions and from REF we know that MATH. Also, MATH has to be of the form MATH. MATH is again a list with MATH entries, so MATH . Subcase: MATH is not a list. Then MATH . Case MATH via MATH. Then MATH is not a list, since otherwise MATH would be of the form MATH and we must not reduce within braces. Therefore this case can be handled as the second subcase above. CASE: We distinguish subcases according to the form of the conversion. Subcase MATH. We have MATH . Subcase MATH is handled similarly. Subcase MATH with MATH a list with MATH entries. Then by REF and the linear typing discipline, MATH. Therefore we have MATH and on the other hand MATH, which is obviously strictly smaller. Subcase MATH. We have MATH . The remaining subcases are similar to the last one. |
cs/0011037 | We only treat unary functions. Let MATH be an almost closed normal list. Then MATH, as MATH. Therefore an upper bound for the number of steps is MATH, which is polynomial in the length of the input MATH. |
cs/0011038 | By REF , MATH . Since MATH and MATH, we use NAME 's inequality CITE on sums of independent bounded random variables to obtain REF . |
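The redacted inequality invoked here is a Hoeffding-type tail bound on a sum of independent random variables confined to a bounded interval. A minimal numerical sketch, assuming i.i.d. variables on the unit interval; the sample size `n`, deviation `t`, and trial count are illustrative choices, not taken from the paper:

```python
import math
import random

def hoeffding_bound(n, t):
    # Two-sided Hoeffding bound for i.i.d. variables in [0, 1]:
    # P(|S_n / n - E X| >= t) <= 2 exp(-2 n t^2)
    return 2.0 * math.exp(-2.0 * n * t * t)

random.seed(1)
n, t, trials = 100, 0.1, 2000
hits = 0
for _ in range(trials):
    mean = sum(random.random() for _ in range(n)) / n  # E X = 1/2 for uniform [0, 1]
    if abs(mean - 0.5) >= t:
        hits += 1
empirical = hits / trials
assert empirical <= hoeffding_bound(n, t)  # the bound dominates the observed tail frequency
```

With these parameters the bound evaluates to about 0.27, well above the observed tail frequency, as one expects from a worst-case bound.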
cs/0011038 | REF follow from REF , which becomes straightforward when restated in terms of distance as follows. For any MATH with MATH, there is a node MATH on the path between MATH and MATH in MATH such that MATH . Furthermore, if MATH then MATH is distinct from MATH and MATH. |
cs/0011038 | This corollary follows from REF and simple algebra. |
cs/0011038 | See REF. |
cs/0011038 | The proof is straightforward. Note that the more unbalanced MATH is, the smaller its g-depth is. |
cs/0011038 | See REF. |
cs/0011038 | See REF. |
cs/0011038 | The inequalities follow from REF , respectively. |
cs/0011038 | The proof is by induction on MATH. The base case follows from the fact that the statement holds for MATH at line REF. The induction step follows from the use of a relevant triplet at line REF. |
cs/0011038 | We analyze the time and space complexities separately as follows. Time complexity. Line REF takes MATH time. Line REF takes MATH total time to examine MATH triplets for each MATH. As for the repeat at line REF, lines REF, and REF take MATH time to search through MATH. For the MATH-th iteration of the repeat where MATH, line REF takes MATH total time to examine at most MATH triplets for each of MATH and MATH. Thus, each iteration of the repeat takes MATH time. Since the repeat iterates at most MATH times, the time complexity of NAME is as stated. Space complexity. MATH and the sets MATH for all nodes MATH in MATH take MATH work space. MATH takes MATH space. Lines REF in NAME and lines REF in Update-MATH can be implemented to use MATH space. The other variables needed by NAME take MATH space. Thus, the space complexity of NAME is as stated. |
cs/0011038 | There are two directions, both using the following equation. By line REF, MATH . MATH . To prove by contradiction, assume MATH or MATH in MATH. If MATH, then MATH, and by MATH, MATH is an internal node in MATH. By MATH, the triplet formed by MATH is not small. Thus, by MATH and REF , MATH. By symmetry, if MATH, then MATH. In either case, the test of line REF passes. MATH . Since MATH, MATH. If MATH is a leaf in MATH, then by MATH, MATH is leaf MATH in MATH, and MATH. By MATH and REF , MATH. If MATH is an internal node in MATH, then by MATH, MATH, and REF , we have MATH. In either case, MATH. By symmetry, since MATH, MATH. Thus, the test of line REF fails. |
cs/0011038 | There are two directions. MATH . From lines REF , MATH . Thus, whether MATH and MATH are leaves or internal nodes in MATH, by MATH, MATH, and MATH, MATH . By line REF , MATH . Then, since MATH and thus MATH, by MATH, we have MATH . By symmetry, MATH. Thus, the test of line REF fails. MATH . To prove by contradiction, assume that MATH is not on the path between MATH and MATH. By similar arguments, if MATH (respectively, MATH), then MATH (respectively, MATH). Thus, the test of line REF passes. |
cs/0011038 | By REF , for every node MATH strictly between MATH and MATH in MATH, there exists a leaf MATH with MATH. To choose MATH, there are two cases: REF both MATH and MATH are internal nodes in MATH, and REF MATH or MATH is a leaf in MATH. CASE: By REF , let MATH and MATH. By MATH, neither MATH nor MATH is small. To fix the notation for MATH and MATH with respect to their topological layout, we assume without loss of generality that REF or equivalently the following statements hold: CASE: In MATH and thus in MATH by MATH, MATH is on the paths between MATH and MATH, between MATH and MATH, and between MATH and MATH, respectively. CASE: Similarly, MATH is on the paths between MATH and MATH and between MATH and MATH. CASE: MATH. Both MATH and MATH define MATH, and the target triplet is one of these two for some suitable MATH. To choose MATH, we further divide REF into three subcases. CASE: MATH and MATH. The target triplet is MATH. Since MATH, by REF , let MATH be a node on the path between MATH and MATH in MATH with MATH and thus by REF MATH. By the condition of REF , MATH is strictly between MATH and MATH in MATH. Also, by REF , MATH. Thus, by REF , since MATH is not small, MATH . So MATH is as desired for Case REFa. CASE: MATH. The target triplet is MATH. Let MATH be the first node after MATH on the path from MATH toward MATH in MATH. Then, MATH. By REF , MATH. Next, since MATH and MATH, MATH . So MATH. Since MATH and MATH is not small, MATH . So MATH is as desired for Case REFb. CASE: MATH. If MATH, the target triplet is MATH; otherwise, it is MATH. The two cases are symmetric, and we assume MATH. Let MATH be the first node after MATH on the path from MATH toward MATH in MATH. Then, MATH. By REF , MATH. Since MATH and MATH, MATH . Hence MATH. Then, since neither MATH nor MATH is small and MATH, MATH . So MATH is as desired for REF with MATH. CASE: By symmetry, assume that MATH is a leaf in MATH. Since MATH, MATH is an internal node in MATH. Let MATH. 
By symmetry, further assume MATH. There are two subcases. If MATH, the proof is similar to that of REF and the desired MATH is in the middle of the path between MATH and MATH in MATH. Otherwise, the proof is similar to that of REF and MATH is the first node after MATH on the path from MATH toward MATH in MATH. In both cases, the desired triplet is MATH. |
cs/0011038 | The two statements are proved as follows. CASE: This statement follows directly from the initialization of MATH at line REF, the deletions from MATH at line REF, and the insertions into MATH at lines REF . The proof is by induction on MATH. CASE: MATH. By MATH, MATH, MATH, MATH, and REF , MATH is a splitting triplet for MATH in MATH. By the maximization in Update-MATH at line REF, MATH is a splitting tuple for some edge MATH that contains a triplet MATH with MATH. By MATH, MATH is not small. By REF , MATH is MATH. Induction hypothesis: REF holds for MATH. Induction step. We consider how MATH is obtained from MATH during the MATH-th iteration of the repeat at line REF. There are two cases. CASE: MATH also exists in MATH. By MATH, MATH and MATH also satisfy REF for MATH. By the induction hypothesis, MATH is a splitting tuple for MATH in MATH that contains a triplet MATH with MATH. Then, since MATH and MATH at line REF, MATH is not reset to null. Thus, it can be changed only through replacement at line REF by a splitting tuple for some edge MATH in MATH that contains a triplet MATH with MATH. By MATH, MATH is not small. Thus, by MATH, MATH, MATH, MATH, and REF , MATH is MATH. CASE: MATH. This case is similar to the base case but uses the maximization in Update-MATH at line REF. |
cs/0011038 | The proof is by induction on MATH. CASE: MATH. By REF , MATH, and the greedy selection of line REF, line REF constructs MATH without edge lengths. Then, MATH holds trivially. MATH follows from MATH, MATH, and line REF. MATH follows from MATH, MATH and the use of REF at line REF. Induction hypothesis: MATH, MATH, and MATH hold for some MATH. Induction step. The induction step is concerned with the MATH-th iteration of the repeat at line REF. Right before this iteration, by the induction hypothesis, since MATH, some MATH satisfies REF . Therefore, during this iteration, by MATH and REF , and REF, MATH at line REF has a splitting tuple for MATH that contains a triplet MATH with MATH. Furthermore, line REF finds such a tuple. By MATH, MATH is not small. Lines REF create MATH using this triplet. Thus, MATH follows from MATH. By REF , MATH follows from MATH. MATH follows from MATH since the triplets involved at line REF are not small. |
cs/0011038 | By REF , MATH if MATH . Similarly, by REF , MATH if MATH . We choose MATH. Consequently, MATH. By REF , with probability at least MATH, NAME outputs MATH, and MATH and MATH hold, which correspond to the two statements of the theorem. |
cs/0011042 | Assume MATH is order-consistent and MATH. Let MATH be an answer set for MATH. Since MATH is order-consistent, so is MATH. By REF , there is a splitting sequence MATH for MATH such that all MATH-components of MATH are signed. Notice that MATH is also a splitting sequence for MATH, and that all MATH-components of MATH are signed as well. By the Splitting Sequence Theorem, there is a solution MATH to MATH with respect to MATH such that MATH. We complete the proof by showing that MATH is a solution to MATH with respect to MATH. (From this it follows, again by the Splitting Sequence Theorem, that MATH is an answer set for MATH.) Observe that any splitting sequence can be ``extended'' by inserting MATH at its beginning. That is, since MATH is a splitting sequence for MATH and MATH, so is MATH, where CASE: MATH, CASE: for all natural numbers MATH such that MATH, MATH, CASE: for all ordinals MATH such that MATH, MATH, CASE: MATH. Notice that since all MATH-components of MATH and MATH are signed, so are all MATH-components. For convenience then, we will assume, without loss of generality, that MATH. Under this assumption, any atom that occurs in MATH belongs to one of the sets MATH (for some MATH such that MATH). Let MATH be such that MATH. For all MATH such that MATH, MATH . Hence, we can show that MATH is a solution to MATH with respect to MATH simply by showing that MATH is an answer set for MATH . We will do this by showing that REF has the same answer sets as MATH . First, notice that the latter program is the same as MATH . So it is enough to show that adding the rule MATH to REF does not affect its answer sets. Since REF is a signed program, we can use the Signing Lemma: it remains only to show that atom MATH is among the consequences of REF . Take MATH, MATH and MATH. The sequence MATH is a splitting sequence for MATH. We construct a solution to MATH with respect to MATH as follows. Take MATH. 
It is straightforward, using the Splitting Sequence Theorem, to verify that MATH is an answer set for MATH. Notice that MATH is exactly the program REF . Since REF is signed, it is consistent. Let MATH be one of its answer sets. Since MATH is order-consistent, so is MATH, and, by NAME 's Theorem, it too is consistent. Let MATH be one of its answer sets. By this construction, the sequence MATH is a solution to MATH with respect to MATH. By the Splitting Sequence Theorem, MATH is an answer set for MATH. Since MATH, MATH. It follows that MATH. And since MATH was an arbitrarily chosen answer set for REF , we conclude that MATH is among its consequences. |
cs/0011045 | We first show that given any arrangement completely filling the matrix, rearranging the numbers to become monotonic in one coordinate does not increase the overall spread. It is obvious that rearranging the numbers in any way within the same line does not change the spread in that line; thus the rearrangement for full monotonicity within a coordinate does not change the spread in that coordinate. Suppose the spread has increased for some other coordinate. The situation is described in REF . Suppose after the rearrangement the maximum spread in that coordinate is in line MATH and is MATH (where MATH was in line MATH before the rearrangement, and MATH was in line MATH). Then MATH . Then there are MATH (since MATH and MATH are now in line MATH) MATH's less than MATH and not equal to MATH. There are MATH's less than MATH and not equal to MATH. Therefore, by the pigeonhole principle, there exists MATH that was paired up with MATH before the rearrangement. But then MATH. We have proved that rearranging the numbers in one coordinate line after line does not increase the spread in the matrix. CITE show that if the arrangement was monotonic in one coordinate then it will remain so after the numbers are rearranged to become monotonic in another coordinate. Thus the matrix can be rearranged to have a fully monotonic arrangement one coordinate at a time, one line at a time, without increasing the spread. |
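The two facts this proof combines (sorting along one coordinate never increases the overall spread, and, per the cited result, preserves monotonicity already established in another coordinate) can be checked mechanically in the two-dimensional case. A hedged sketch; the function name `spread` and the random-test parameters are illustrative, not from the paper:

```python
import random

def spread(M):
    # Overall spread: max over every line (row or column) of (max - min) in that line.
    lines = [list(row) for row in M] + [list(col) for col in zip(*M)]
    return max(max(line) - min(line) for line in lines)

random.seed(0)
for _ in range(200):
    n, m = random.randint(2, 5), random.randint(2, 5)
    M = [[random.randint(0, 50) for _ in range(m)] for _ in range(n)]
    s0 = spread(M)
    R = [sorted(row) for row in M]           # make each row monotonic
    assert spread(R) <= s0                   # row-sorting never increases the spread
    cols = [sorted(col) for col in zip(*R)]  # then make each column monotonic
    C = [list(row) for row in zip(*cols)]    # transpose back to row-major form
    assert all(row == sorted(row) for row in C)  # rows remain monotonic (the cited result)
    assert spread(C) <= s0
```

The last two assertions are exactly the conclusion of the proof for two coordinates: full monotonicity can be reached one coordinate at a time without ever increasing the spread.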
cs/0011045 | The proof is a generalization of NAME 's proof of the main theorem in CITE. We use induction on MATH and the largest dimension size, MATH. The base cases of MATH and MATH are trivial. Suppose we have a matrix with the largest dimension size MATH (largest coordinate value MATH). By the induction hypothesis, the herringbone arrangement maximizes the smalls sequence in the MATH-dimensional matrix up to the coordinate value MATH in every dimension; that is, it maximizes the initial segment of the smalls sequence for the entire matrix. Consider the smallest element MATH which has not yet been used in the arrangement. As we have noted before, every cell in the matrix is an intersection of MATH lines. We shall call a line protected if it has a smallest number in it. Since we are placing numbers in increasing order, this means a line is protected if it has any number in it. When we put MATH in any cell, it will be the smallest number in any unprotected line in its intersection. Thus the goal is to put MATH into a cell that is an intersection of as many protected lines as possible. However, since we have a complete herringbone arrangement of a smaller cube matrix, any free cell has at most one protected line in its intersection. The cells that have one protected line are precisely the cells that lie in the lines that intersect a face of the existing herringbone arrangement. Consider now all the lines that intersect one face. After placing the first element in any of these lines, there always exists a cell that is an intersection of at least two protected lines. Thus, once started, one must stay with the same face to ensure larger elements in the smalls list. Notice that the cells that are being filled are exactly a MATH-dimensional projection of a face of a herringbone arrangement, that is, a MATH-dimensional matrix. Thus by the induction hypothesis it is filled with a herringbone arrangement. 
The question that remains is which one of the faces one should start with. Notice that one of the properties of the herringbone arrangement is that at any point the sizes of the available faces differ by at most MATH in any dimension, and they can differ in at most one dimension. Suppose we have one face MATH of size MATH and another face MATH of size MATH. It is easy to see that the smalls sequence of the MATH face agrees with the initial segment of the smalls sequence of the MATH face, given that they are filled with the same numbers. The next element of the smalls sequence of the MATH face appears there exactly MATH times. However, after filling the MATH face we must start a new face, and the next (same) element in the sequence would appear there MATH times; thus in this case we get a smaller element in the smalls sequence. Therefore, to maximize the smalls sequence we must first fill the larger volume face projections. The above arguments produce, by REF , a herringbone arrangement. |
cs/0011045 | By REF , the herringbone arrangement of a completely filled cube maximizes the smalls sequence. A process similar to the derivation of the bounding smalls sequence creates a bounding bigs sequence. For the bigs sequence, however, we start instead with the largest element in the cell with the largest coordinate value and work our way downward. Since for any arrangement of the elements in the matrix, we know the bounding sequences MATH and MATH, the spread for any arrangement is at least MATH. The closed formula for MATH is unusably complicated. However, consider the case of MATH for some MATH. In this case MATH is the minimum in the first line after filling a subcube with sides of size MATH, that is MATH. Thus MATH is a crude overestimate of any MATH (rounding up to the closest MATH) that coincides with MATH in infinitely many values. The sequence MATH is complementary to MATH. There are MATH lines in a MATH-dimensional cube, therefore there are MATH elements in the smalls and bigs sequences, and the index complementary to MATH in the sequence is MATH and MATH . Since the sequences MATH and MATH are complementary and MATH is convex, MATH is concave and MATH is achieved in the middle of the sequence, that is, when MATH. MATH . |
cs/0011045 | We shall assume for simplicity that MATH is odd. For even MATH the argument works in a similar way. We can divide the matrix into MATH pieces by cutting through the middle of each face, including the middle line in both sides that are separated by it. For MATH see REF . The initial and the last pieces, coordinate-wise, are entirely within the minima sequence and the maxima sequence arrangements, respectively. Now each line lies in two of the MATH pieces. The central lines go through the initial and the last pieces. The spread in those lines is exactly the maximum difference between the minima and maxima herringbone arrangements, as calculated above. We will show that the spread in any other line does not exceed that. There are three types of lines that are not central lines: CASE: lines that do not cross the bisecting plane, CASE: lines that cross the bisecting plane whose endpoints are neither in the first nor in the last quadrant of the cube, CASE: lines that cross the bisecting plane with one endpoint either in the first or in the last quadrant of the cube. If a line does not cross the bisecting plane, then either its minimum is in the first quadrant or its maximum is in the last quadrant, since only the first and the last quadrants do not have the bisecting plane cutting through them. Without loss of generality, let the line lie completely in the minima arrangement half, and its minimum be in the first quadrant. Then the line's minimum is at most MATH away from the parallel central line's minimum, while its maximum is at least MATH away from a central maximum. Thus the spread in the line is not greater than the spread in a central line. The minimum in a line of type REF is not in the first quadrant, therefore one of the coordinates of the minimum is greater than MATH. As we have mentioned, the herringbone arrangement is a fully monotonic arrangement, the values increasing in the direction of increasing coordinates. 
Therefore the line's minimum is greater than the minimum in the parallel central line. Similarly, the line's maximum is less than the maximum in the parallel central line. Thus the difference between the line's maximum and minimum, the spread, is less than that in the parallel central line. We will now consider a line of type REF . Without loss of generality we assume that the line's minimum is in the first quadrant. Thus the maximum is not in the last quadrant, since all the fixed coordinates of the points on the line are less than MATH. Due to the full monotonicity of the Herringbone arrangement, both the minimum and the maximum in the line are less than those in the parallel central line. We will show that the difference between the central and the line's minimum is at least the difference between the maxima, thus making the spread in the line at most that in the central line. Moreover, we show that this is true for two lines of type REF that differ in only one coordinate by MATH, and the spread in the line closer to the central is at least the spread in the other line. Let MATH and MATH be the minima in these lines, and MATH, MATH be the maxima, MATH. Let MATH be the number of cells cut off the corner of size MATH of a MATH-dimensional cube. Then MATH and MATH . Notice that MATH . Therefore MATH . Thus MATH . Thus MATH and the maximum in a non-central line is further from a central maximum than the minimum in a non-central line from a central minimum. Therefore the spread in a line of type REF is not greater than the spread in a central line. We have shown that the maximum spread is achieved in the center and is as calculated above. |
cs/0011046 | Although findSlot(root) does not guarantee finding an available position for a MATH operation for illegitimate MATH, if findSlot(root) does return MATH, then from the logic of findSlot, MATH is a child of some node of MATH, and thus MATH will contain MATH as a result of MATH. For a deleteMin operation, MATH returns some leaf of MATH (not necessarily at greatest depth in MATH) provided MATH is nonempty, so deleteMin returns root.val of MATH. The MATH time bound is satisfied because any path from root to leaf in MATH has length at most MATH. |
cs/0011046 | Let MATH denote the path of nodes examined via recursion for a given invocation of MATH. Observe that MATH is called within this invocation of verify iff MATH is the rightmost path within MATH (otherwise swAncestor would return a non-MATH value). By construction, MATH for MATH sets up the leftmost path in MATH to the right of MATH. Let MATH be a subtree of MATH, rooted at MATH, with MATH leaves. If MATH successive verify invocations examine the leaves of MATH, then REF hold for the nodes of MATH afterwards (this can be shown by induction on subtree height). Let MATH be a preorder listing of the leaves of MATH; MATH has at most MATH items since any binary tree of MATH items has at most MATH leaves. It is straightforward to show that any MATH successive invocations of verify(root) visit the leaves of MATH in an order corresponding to some rotation of sequence MATH. |
cs/0011046 | Each insert and deleteMin operation invokes verify(root); however, such operations also change the active tree. With respect to the sequence of verify(root) calls starting from the initial state, each change to the active tree either occurs in a subtree previously visited by verify or occurs in a subtree not yet visited by verify. In the former case, the tree modification satisfies REF along the path from root to the modified nodes. In the latter case, a future verify establishes the desired properties. After MATH operations, the active tree has at most MATH leaves; since each operation invokes verify(root), MATH operations visit all leaves provided MATH, by an argument similar to the proof of REF . Therefore MATH suffices. |
cs/0011046 | REF establishes that REF hold after MATH operations. In the rest of this proof, we assume that REF hold. For the sake of generality we suppose that MATH is only loosely balanced: assume there exist constants MATH and MATH so that for any MATH, MATH, the minimum height MATH taken over all subtrees of MATH with MATH nodes that contain node root satisfies MATH. From this assumption, it follows that any subtree MATH of MATH rooted at root with height exceeding MATH is nonoptimal; furthermore, it follows that there is some node MATH not contained in MATH, that is a child of some node of MATH, so that MATH has depth at most MATH. We define MATH for any state MATH to be a variant function. MATH . It is straightforward to show that once MATH is zero, any subsequent operation application results in zero MATH, and that zero MATH implies balance. If the initial MATH is some value MATH, any new item inserted into the heap is placed at minimum depth, any deleteMin removes a node at maximum depth, so MATH does not increase by the insert or delete operations. Moreover, every operation invokes balance(root), which decreases positive MATH by at least one, so within MATH operations, REF is established. |
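The operations analyzed in these proofs (insert into a free slot, deleteMin removing a leaf, each touching one root-to-leaf path) have a textbook counterpart in the array-based binary min-heap. The sketch below is only that standard structure, not the paper's pointer-based tree with findSlot/verify/balance; the class and method names are illustrative:

```python
class MinHeap:
    """Standard array-based binary min-heap; children of index i are 2i+1 and 2i+2."""

    def __init__(self):
        self.a = []

    def insert(self, x):
        # Place x in the next free slot (keeping the tree balanced), then sift up.
        self.a.append(x)
        i = len(self.a) - 1
        while i > 0 and self.a[(i - 1) // 2] > self.a[i]:
            self.a[(i - 1) // 2], self.a[i] = self.a[i], self.a[(i - 1) // 2]
            i = (i - 1) // 2

    def delete_min(self):
        # Remove the root; move the last leaf to the root and sift down.
        a = self.a
        root, last = a[0], a.pop()
        if a:
            a[0] = last
            i = 0
            while True:
                l, r = 2 * i + 1, 2 * i + 2
                j = i
                if l < len(a) and a[l] < a[j]:
                    j = l
                if r < len(a) and a[r] < a[j]:
                    j = r
                if j == i:
                    break
                a[i], a[j] = a[j], a[i]
                i = j
        return root

h = MinHeap()
for x in [5, 1, 4, 2, 3]:
    h.insert(x)
assert [h.delete_min() for _ in range(5)] == [1, 2, 3, 4, 5]
```

Both operations walk a single root-to-leaf path, which parallels the O(log n) per-operation bound the proofs establish once balance is restored.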
gr-qc/0011067 | We use the definitions, constructions and notations of the proof of CITE. Thus, let MATH be a coordinate neighborhood of the form MATH, with MATH and MATH, in which MATH is the graph of a MATH function MATH, and in which MATH is the graph of a semi-convex function MATH. Here MATH is a locally MATH hypersurface in MATH into which MATH has been embedded. Let MATH denote the projection onto MATH of MATH, thus MATH is the graph of MATH over MATH. Now, let MATH, where MATH is the set of full measure in MATH constructed in the proof of CITE, and MATH is the projection onto MATH of MATH. Since MATH is NAME, the graph of g over MATH has full measure in MATH. Let MATH be the corresponding point on MATH, thus the generator MATH of MATH passing through MATH exits the horizon at a point MATH. Let MATH be any null geodesic which extends MATH to the past, and let MATH be any sequence of points on MATH which are to the causal past of MATH and which approach MATH as MATH tends to infinity. Since MATH is a null geodesic which exits MATH at MATH, the MATH's lie to the timelike past MATH of MATH. Thus the integral curve MATH of MATH starting at MATH meets MATH at some point MATH. One can then construct a causal curve from MATH to MATH by following MATH from MATH to MATH, and any generator of MATH passing through MATH; as MATH is globally hyperbolic this generator will necessarily intersect MATH. It follows that there exists a timelike curve MATH from MATH to MATH. By the compactness of the space of causal curves, passing to a subsequence if necessary, the MATH's converge (in a well known sense) to a causal curve MATH from MATH to a point MATH. The achronality of MATH shows that MATH is a generator of MATH passing through MATH, hence MATH and MATH. Now suppose that MATH. 
By the construction of the set MATH, there exists MATH, approximating MATH, such that MATH, the graph of MATH, is a MATH hypersurface in MATH which, in a well defined sense, makes second order contact with MATH at MATH. (More precisely, MATH and MATH, as well as their first derivatives, agree at MATH, and the second derivative of MATH agrees with the so-called second NAME derivative of MATH at MATH.) Since MATH is tangent to MATH at MATH, the null geodesic MATH is normal to MATH at MATH. Let MATH be the MATH map which moves the points of MATH along the family of null geodesics normal to MATH which includes MATH. Then we have MATH, compare CITE. Define MATH by MATH and let MATH be the graph of MATH; for MATH, MATH and MATH. Note that as MATH is tangent to MATH at MATH, MATH is normal to MATH at MATH. The fact that the Jacobian of MATH is nonzero at MATH implies that MATH is not a focal point to MATH along MATH. Moreover, there can be no focal points to MATH along the segment of MATH to the future of MATH, compare CITE. It follows that by taking MATH small enough and MATH short enough, there will be no focal points to MATH along MATH. This implies by normal exponentiation that there exists an embedded MATH null hypersurface MATH which contains MATH and, by shrinking it if necessary, MATH, as well, compare CITE. Moreover, there exists a neighborhood MATH of MATH in which MATH is achronal: Indeed, since spacetime is time orientable, MATH is a two-sided connected embedded hypersurface in MATH. As such MATH admits a connected neighborhood MATH which is separated by MATH (MATH, and MATH consists of two components). Then a future directed timelike curve joining points of MATH would be a timelike curve from the future side of MATH to the past side of MATH, which is impossible if the curve remains in MATH. We conclude that MATH is achronal in MATH. 
Consider now the timelike curves MATH constructed earlier in the proof; since the MATH's converge to MATH there exists MATH such that all the MATH's are entirely contained in MATH for MATH. Moreover, by taking MATH larger if necessary, it is clear that each such MATH will meet the hypersurface MATH in MATH obtained by pushing MATH to the past along the integral curves of MATH. One can then construct a timelike curve from MATH to MATH contained in MATH by following MATH from MATH to MATH, and then an integral curve of MATH to MATH. This contradicts the achronality of MATH in MATH, and establishes REF . MATH . |
gr-qc/0011067 | It is shown in CITE that MATH is locally the graph of a semi-convex function. That is, there is a coordinate system MATH so that MATH is given by a graph MATH where MATH is MATH and MATH is convex. Define new coordinates by MATH for MATH and MATH. In these coordinates MATH is given by MATH . |
gr-qc/0011067 | We choose a coordinate system MATH on an open set MATH containing MATH as in REF so that MATH is given by MATH where MATH is convex. We may assume that the point MATH has coordinates MATH. We also assume that MATH is of the form MATH for MATH an open convex set in MATH and that MATH takes values in the interval MATH. Then MATH is locally NAME and thus the NAME differential MATH exists and is a compact convex set of linear functionals on MATH CITE. As MATH is convex, MATH is just the set of sub-differentials to MATH at MATH in the sense of convex analysis CITE. It follows that for MATH if we write MATH with MATH that MATH . There is another useful description of MATH. Let MATH be the set of points MATH in MATH where the classical derivative MATH exists. As MATH is locally NAME, MATH has full measure in MATH. Let MATH be the set MATH of limit points of sequences MATH with MATH. Then, CITE, MATH . Letting, as in the introduction, MATH be the set of points where MATH is differentiable, if MATH and MATH, then MATH if and only if MATH. By REF and CITE this is the case if and only if MATH is on exactly one generator of MATH. If MATH then let MATH be the unique semi-tangent to MATH at MATH. Then at MATH the tangent plane to MATH can be defined either in terms of MATH or in terms of MATH to be the set of vectors MATH so that MATH or MATH. Thus there is a positive scalar MATH so that MATH. Therefore the normal cone at MATH is one dimensional and MATH . It follows from this that if MATH and MATH then MATH if and only if MATH, and MATH if and only if MATH where MATH. Unraveling all this and using REF gives that in order to complete the proof it is enough to show MATH . Denote the right side of this equation by MATH. Then, CITE, the set of semi-tangents MATH is a closed subset of MATH and therefore MATH. 
If MATH then there is a generator MATH with MATH, MATH and parameterized so that it is unit speed with respect to the auxiliary Riemannian metric MATH. For each positive integer MATH, MATH is an interior point of the generator MATH and thus MATH. Then MATH and MATH. Thus MATH which yields MATH. This shows REF holds and completes the proof of the lemma and therefore of REF . MATH . |
gr-qc/0011067 | Let MATH be a convex normal neighborhood of MATH, having closure disjoint from MATH. For MATH sufficiently small the distance sphere MATH is contained in MATH, is compact, and agrees with the geodesic sphere of radius MATH centered at MATH. Then, MATH restricted to MATH achieves a minimum at some point MATH, say. Let MATH be the unique minimizing geodesic from MATH to MATH. From the choice of MATH on MATH, and simple distance function considerations, one has for each MATH on MATH where MATH. Since MATH is minimizing on each segment, REF implies that MATH is a MATH-minimizing segment. MATH . |
gr-qc/0011069 | One computes MATH where in the second line we have performed a partial integration and used the equations of motion. On the other hand MATH where in the second line we have used that MATH, which holds because MATH is a Killing field. This gives MATH . So, in order to prove the lemma, we must show that MATH on MATH. The left-hand side of this equation is equal to MATH where we have used that MATH by Killing's equation. The right-hand side is given by MATH where we have used that MATH, MATH and that MATH on MATH. Hence both sides are equal, thus proving the lemma. |
gr-qc/0011069 | The proof of the above result is divided into two parts. In the first part, we show that the time ordered products can be normalized in such a way that the interacting current density MATH is covariantly conserved. A proof of this requires the demonstration of a corresponding set of NAME identities, see REF below. A proof of a similar set of NAME identities for the stress energy tensor in NAME space was previously given independently by CITE and CITE. Our proof of the NAME identities follows CITE and CITE closely, up to the point at which one has to remove the anomaly. Here the methods of the present paper differ from those of CITE and CITE, which are based on momentum space techniques and therefore not suitable in the present context. In the second part we then demonstrate (by a chain of arguments somewhat similar to the one given in CITE) that conservation of the interacting current implies the second statement of the theorem, REF . In order to show conservation of the interacting current, we first expand MATH in terms of totally retarded products (compare REF ), and express these by products of time ordered products (compare REF ). It is then not difficult to see that REF is equivalent to the set of NAME identities MATH for all MATH and all possible sub-Wick monomials MATH of MATH (which therefore do not contain derivatives). A proof of the NAME identities, REF , is given in the Appendix. We now show that the NAME identities imply REF . Assume first that MATH. Then it is easy to see that there is a function MATH for which MATH on some neighbourhood of MATH and for which MATH in some neighbourhood of MATH. We then have MATH where we have used the abbreviation MATH. By the support properties of MATH and the support properties of the totally retarded products, this is equal to MATH . We next use the NAME identities for the MATH-products which are easily obtained from the NAME REF for the MATH-products and formula REF . This gives MATH where MATH. 
We observe that the first sum is just MATH. Performing the MATH-integrations in the second term, using that MATH on a neighbourhood of MATH, and observing that the MATH-support of MATH is contained in MATH, we find that the above expression is equal to MATH. But the terms under the sum over MATH are all equal, by the symmetry of the MATH-products; therefore this expression is (we shift the first summation index) MATH. Now, again using that MATH and the support properties of the totally retarded products, we see that we may add the term MATH under the integral, because this does not make any contribution. This then makes it obvious that the above expression is just MATH which proves REF if MATH. By a similar chain of arguments, one can prove that REF also holds if instead MATH. We will now show that the general case follows from these two facts. To this end, we will show that, for an arbitrary MATH as in ``technical data'', one can always construct a function MATH with the same properties as MATH and with the additional properties that: MATH; either MATH or MATH; and MATH in a neighbourhood of MATH, where MATH. We may thus write MATH where in the last line we have used the fact that we already know REF for functions like MATH. This then proves the theorem, because MATH, by current conservation. It thus remains to construct a MATH as in ``technical data'' in the previous subsection, such that in addition REF holds. Now recall that MATH is of the form MATH in a neighbourhood of MATH where MATH is a compactly supported smooth function on MATH such that MATH. Let us choose a MATH with MATH such that either MATH or MATH, and a function MATH satisfying MATH on MATH. This function then clearly satisfies either MATH or MATH. Let us define MATH for MATH. It is clear that MATH can be continued smoothly to a function in MATH. By definition of MATH, REF holds in a neighbourhood of MATH. 
We have thus constructed a MATH with the desired properties, thus finishing the proof. |
gr-qc/0011069 | We want to prove the NAME identities by an induction on MATH and the total degree of the NAME monomials defined by MATH. Let us write MATH for the left-hand side minus the right-hand side of REF, that is, the anomaly, and let us inductively assume that the NAME identities can be satisfied for some MATH and some order MATH. The logic of the induction step is the following. In REF, we show that the NAME identities for MATH and MATH can be reduced to the scalar identity obtained by taking the vacuum expectation value of REF. It is argued in REF that these can be satisfied. By what we have just said, it is then sufficient to show the NAME identities for MATH, MATH, when one of the MATH is equal to MATH. Again, arguing as in REF, only the scalar identity has to be proven. This is done in REF. CASE: We want to show that MATH is a multiple of the identity operator. To show this, we first demonstrate that it commutes with any free field operator. We have MATH. To proceed, we calculate MATH. We now demand that time ordered products containing a once differentiated free-field factor satisfy the following normalization condition MATH. The above expression for MATH together with the normalization REF then gives: MATH. From this, we obtain MATH. Now the expression in braces vanishes by the NAME identities at total degree less than MATH, thus showing that MATH commutes with the free field. By the NAME expansion requirement, the operator MATH must be a linear combination of (multi-local) NAME products with distributional coefficients. Now it can be seen that any operator of this form which commutes with a free field is in fact a multiple of the identity, thus showing that MATH is a MATH-number. We next want to show that the numerical distribution MATH is localized at MATH. In order to see this, note that for any point MATH, one can find a NAME surface MATH in MATH which separates some points, MATH say, from the other points MATH and MATH. 
Without loss of generality we assume that the latter are not in the causal past of the first set of points. Then, by causal factorization and the induction hypothesis, we have MATH. Therefore we conclude that MATH is a scalar distribution supported on the total diagonal in MATH. CASE: We show that the scalar NAME identities hold for MATH factors when one of the NAME monomials is a free field. We find from the formula for the time ordered products with a free field factor, REF, that MATH where MATH is the NAME propagator. From this one obtains MATH. Now using REF and the NAME identities for MATH factors, one finds that this is equal to MATH. Hence we have shown that the scalar NAME identities hold for MATH factors if one of the factors is a free field. CASE: We now show that one can remove the anomaly by a suitable redefinition of the time ordered products. In order to do this, we first show that it is possible to write the MATH-number distribution MATH as the total divergence MATH of some vector-valued distribution MATH which is supported on the total diagonal of MATH, has microlocal scaling degree MATH on the total diagonal, and is invariant under the flow MATH. It is clear that the redefined time ordered products MATH then satisfy the NAME identities and all the other requirements, thus concluding the proof. Let us call a symbol MATH ``invariant'' if there holds MATH for all MATH. In addition to scalar-valued symbols, we also want to consider tensor-valued symbols MATH. These are defined in the same way as the ordinary symbols above, but with the difference that MATH is now a tensor in the tangent space at MATH. The rank and the principal part of such symbols are defined by analogy to the scalar case. An invariant, tensor-valued symbol is one for which MATH for all MATH. Any contraction or tensor product with MATH or covariant derivative with respect to MATH of an invariant, tensor-valued symbol gives again a symbol of that kind. 
Any distribution MATH which is supported on the total diagonal with microlocal scaling degree MATH (with respect to the total diagonal) and which is in addition invariant under the flow MATH (such as, for example, MATH) arises in the form REF from an invariant symbol MATH of degree MATH, and vice versa. Analogous statements hold true for tensor-valued symbols. Let MATH be the invariant symbol corresponding to MATH. Let us assume that there exist vector-valued, invariant symbols MATH with the property that MATH for all MATH, and let MATH be the corresponding vector-valued, invariant distribution on MATH. Then MATH satisfies REF. It thus remains to construct a vector-valued, invariant symbol MATH with the property REF. In order to do that, we start with a lemma. The distribution MATH (the anomaly) has the property MATH for all MATH, in other words MATH. Let us choose a bounded region MATH containing the points MATH and a function MATH which is equal to one on a neighbourhood of MATH and which has the property that MATH where MATH and MATH and MATH are as under ``Technical NAME'' in the previous section (with MATH taken to be MATH) and where in addition MATH and MATH. Then by causal factorization, MATH. The second-to-last term vanishes by current conservation. Now by REF, we have MATH for all MATH. Together with the NAME expansion requirement on the time ordered products, REF, one can conclude from this that MATH where MATH is the generator of the group MATH. Hence it remains to show that MATH. But this equation is just the infinitesimal version of the covariance requirement, so the lemma is proven. The lemma says that MATH. We are now going to show that this equation implies the existence of vector-valued, invariant symbols MATH satisfying REF. We do this by an induction on the degree MATH of MATH. If MATH, then it is easy to see that REF already implies MATH, so in that case REF can trivially be satisfied by choosing MATH. 
Let us now assume that REF has been shown to imply the existence of vector-valued, invariant symbols MATH satisfying REF, whenever the degree of MATH is less than or equal to MATH. We need to show that we can construct symbols MATH with the desired properties also if the degree of MATH is MATH. To do this, we want to exploit REF by testing it with compactly supported functions MATH of the form MATH, where MATH and MATH are test functions in MATH and MATH, respectively. Let us expand MATH in terms of MATH, MATH where MATH are tensor-valued, invariant symbols of degree MATH, depending on the indicated arguments. From REF with MATH we conclude that MATH where the symbol MATH is defined by the last equation. This implies that MATH. It follows from this that the symbol MATH vanishes (and hence also its principal symbol), in other words MATH. The point is now that the right side of this equation is actually MATH which shows that MATH for some vector-valued, invariant symbol MATH of degree less than or equal to MATH. Let us now set MATH. Then MATH is by definition an invariant, vector-valued symbol of degree MATH. By construction, MATH satisfies REF. Thus we can apply the induction hypothesis and conclude that there are invariant, vector-valued symbols MATH of degree MATH such that MATH. From this one immediately concludes that the invariant, vector-valued symbol MATH satisfies REF, thus concluding the proof. |
hep-th/0011015 | By REF and CITE, the NAME-NAME equations in the NAME gauge have unique global solutions with finite energy for all smooth initial data MATH with support in the base of MATH. Moreover, these solutions are smooth in all variables. We shall show that their support properties are as stated above. To see this, we note that the field equation for MATH can be read as a hyperbolic equation in the ``external'' field MATH, MATH. Setting MATH gives a first-order system satisfying the following integral equation with respect to the time variable MATH. Here MATH is the NAME's function (propagator) of the equation for MATH and the spatial dependence has been suppressed. This integral equation is known to have a unique solution MATH, provided MATH satisfies a suitable local NAME condition; see, for example, CITE. In our case, this NAME condition holds because MATH is smooth, and therefore locally bounded. Because of the hyperbolic character of the equation, the solution in a given double cone depends only on the initial data on the base of that double cone. Thus, for initial data MATH of compact support contained in MATH, MATH and therefore MATH vanish in the causal complement MATH. Moreover, NAME's equations give MATH and therefore MATH vanishes in MATH if the initial data for MATH have support in MATH. Finally, by NAME's equations, MATH, so MATH is time-independent in MATH and therefore given by REF. |
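The existence step above rests on a time-integral equation whose unique solvability follows from a local regularity condition on the nonlinearity (the condition's name is redacted in this extract). A minimal numerical sketch of the underlying fixed-point (Picard) iteration, with a purely illustrative nonlinearity `f(t, u) = -u**3` standing in for the redacted term:

```python
import numpy as np

# Toy sketch (not the paper's system): Picard iteration for a
# time-integral equation u(t) = u0 + \int_0^t f(s, u(s)) ds,
# the standard scheme behind existence/uniqueness of solutions
# under a local Lipschitz-type condition on f.
def f(t, u):
    return -u**3  # hypothetical nonlinearity, locally Lipschitz

def picard(u0=1.0, T=0.5, n=200, iters=25):
    t = np.linspace(0.0, T, n)
    u = np.full(n, u0)               # initial guess: the constant u0
    for _ in range(iters):
        integrand = f(t, u)
        # cumulative trapezoid rule for \int_0^t f(s, u(s)) ds
        integral = np.concatenate((
            [0.0],
            np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t)),
        ))
        u_new = u0 + integral
        if np.max(np.abs(u_new - u)) < 1e-12:
            break                    # iterates have converged
        u = u_new
    return t, u

t, u = picard()
```

For this toy choice the limit solves u' = -u^3 with u(0) = 1, whose closed form is (1 + 2t)^{-1/2}, so the iteration can be checked against an exact solution.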
hep-th/0011015 | REF (see also CITE) establishes the existence and uniqueness of finite-energy solutions for sufficiently small smooth initial data. In particular, if the initial data for MATH and MATH are smooth, have compact support, and are bounded by a sufficiently small constant, and if the initial data for MATH are computed from the gauge condition MATH and the NAME constraint MATH, then a unique global solution exists and satisfies the preceding gauge conditions at all times. Moreover, this solution is smooth in the spatial variables and locally bounded in time. Thus, with the global NAME problem for the NAME-NAME equations for sufficiently small smooth data (corresponding to small electric charge) under control, we can proceed as in the scalar case. The solution MATH solves the integral equation MATH where MATH is the NAME's function (propagator) of the free NAME equation, which has the same hyperbolic properties as MATH in the scalar case, and MATH is regarded as an external field. Thus the non-linear term again satisfies a local NAME condition since MATH is bounded in MATH uniformly for MATH in finite intervals. By the same argument as in the scalar case, therefore, MATH vanishes in MATH. This implies that MATH vanishes in MATH, and the results for MATH and MATH follow as before. |
hep-th/0011015 | The leading term MATH in the asymptotic expansion of MATH corresponds to the MATH contribution to the series REF. The next term is given by the vacuum expectation value of the MATH contribution, which has the form MATH. Plugging into the integral the expression given in REF and taking locality into account, one obtains, for sufficiently large translations MATH, the function MATH appearing in the statement. The proof that the remainder MATH has the stated decay properties requires more work. We begin by noting that if MATH is localized in the double cone MATH, the multiple commutators MATH contributing to MATH are localized in MATH, as a consequence of the support properties of MATH and locality. So, in view of the spacelike commutativity of local gauge-invariant polynomials in the fields, it suffices to establish the asymptotic decay properties of matrix elements of MATH between vectors of the form MATH and the vacuum MATH. Next, we introduce the notation MATH and recall that, as a consequence of temperedness and the spectrum condition, it suffices to regularize the fields in the time variable in order to obtain operators depending smoothly on the spatial variables on their natural domain of definition CITE. Now, for large MATH as above, the contribution arising from the MATH term in MATH has the form MATH apart from a factor MATH. Here MATH has been replaced by MATH since the vacuum expectation value of the commutator has been subtracted in MATH. The second equality is obtained by substituting MATH, which is legitimate in the present setting since the matrix element under the integral is continuous in all variables. Because of locality, the latter integral extends over a bounded region MATH which can be held fixed for MATH and MATH. Moreover, for MATH and MATH the operators MATH are localized in the fixed double cone MATH and are gauge invariant, like MATH. 
So we can apply the NAME-NAME-NAME cluster theorem CITE to their vacuum expectation values; compare the properties of the NAME-NAME formalism stated above. Thus MATH uniformly in MATH and MATH. Combining this estimate with the preceding information, we obtain the bound MATH. The higher-order terms REF can be treated similarly. In fact, for MATH, MATH, we have MATH, where MATH is some fixed compact set. The multiple commutator function is bounded in MATH, uniformly in MATH and MATH. Thus the integral is bounded by MATH, completing the proof of the statement. |
hep-th/0011015 | As MATH and MATH can be replaced in any given order of perturbation theory by MATH for sufficiently large MATH, the statement follows from the preceding lemma by extracting the asymptotically leading contribution in MATH from the function MATH appearing there. |
hep-th/0011237 | Since the metrics MATH and MATH are conformally equivalent, as expressed in REF, the corresponding connections are related by MATH (see, for example, REF). Now let MATH be vector fields tangent to MATH, and denote the normal projections corresponding to MATH with respect to MATH and MATH by MATH and MATH, respectively. Then the above equation implies MATH. Taking into account the fact that MATH vanishes if and only if MATH does, this proves the claim. |
hep-th/0011237 | There is a one-to-one correspondence between wedges MATH in dS and pairs of lightlike rays in MATH of the form MATH, where MATH are unit vectors in Euclidean MATH satisfying MATH. Namely, MATH and MATH are the unique unit vectors such that MATH. Conversely, MATH is the set of all MATH satisfying MATH. In particular, MATH corresponds to the rays MATH. The rays corresponding to MATH are calculated to be MATH. Since MATH exhausts all values in MATH for MATH, one can fix MATH such that for MATH corresponding to a given wedge MATH, one has MATH. But then there exists a rotation MATH satisfying MATH. This shows that MATH is of the form REF. Further, the intersection with RW of a wedge of the form REF is a NAME-NAME wedge if and only if its edge is contained in MATH. This is the case if and only if MATH. It remains to prove relation REF for the bijection MATH. Denote by MATH the map which results from MATH, see REF, under the coordinate transformation MATH. Recall that a point MATH is in the edge MATH if and only if it is of the form MATH. For such MATH, one calculates (see REF in the Appendix) MATH. This shows by REF that MATH, where MATH denotes the intersection of MATH with MATH. Now, the edge of the wedge MATH in REF is MATH. Since MATH commutes with the rotations, REF implies MATH. Obviously, MATH is the connected component of the causal complement (in RW) of MATH which has nontrivial intersection with MATH. This proves REF. |
hep-th/0011237 | MATH contains MATH if and only if the apices of MATH are contained in the closure of MATH, compare REF. The MATH- and MATH-components in ambient MATH of these two apices are given by MATH, respectively, where MATH and MATH. Hence, taking into account the symmetry MATH, the two apices are contained in the closure of MATH if and only if MATH. Since relation REF between MATH and MATH implies MATH, these inequalities are equivalent to MATH, which yields the assertions of the lemma. |
hep-th/0011237 | As mentioned in the proof of REF, there are unique unit vectors MATH corresponding to a wedge MATH such that MATH if and only if both inequalities MATH hold. The unit vectors MATH corresponding to MATH, and hence to MATH, are determined by the equation MATH to be MATH. Now let MATH. Then MATH must satisfy MATH and MATH, where MATH. It follows that MATH and MATH are non-zero and have the same sign, since MATH by assumption. This contradicts REF. |
hep-th/0011237 | The first statement has been established already in REF , while the latter one is a consequence of the subsequent REF . |
hep-th/0011237 | Let MATH and MATH satisfy REF. Then REF entails MATH. Moreover, the intersection of the wedges MATH is non-empty, because by REF it contains a double cone MATH, where MATH and MATH. For the proof of the second part of the statement, we proceed to ambient NAME space and denote by MATH the NAME transformation corresponding to a given NAME transformation MATH. Since MATH and MATH act only on the MATH- and MATH-coordinates of MATH, it follows from the argument in REF that there are MATH such that MATH. Further, a NAME transformation acting only on MATH and MATH leaves MATH invariant if and only if it leaves the unit vector in the MATH-direction MATH invariant. Hence, MATH and MATH satisfy the above equation if and only if they satisfy the condition MATH, which implies MATH. The reflection MATH about the edge MATH commutes with MATH, satisfies MATH, and maps MATH onto itself. Hence, applying MATH to REF, it follows that MATH. Combining the preceding facts, one gets MATH. But MATH for MATH given by REF, hence MATH by REF. |
hep-th/0011237 | The claimed properties follow from the corresponding properties of the underlying NAME net, taking into account the specific features of MATH established in REF. |
hep-th/0011237 | The first step is to show that MATH arises from the linear map MATH in MATH defined by MATH. Note that MATH leaves the set of spacelike vectors invariant. Denoting MATH for spacelike MATH, the claim is that MATH. To see this, let MATH be the map defined by the right-hand side of the above equation. Recall CITE that the MATH-coordinate of a point MATH is given by MATH, where MATH denotes the Euclidean norm of MATH, and this expression coincides with MATH by REF. Furthermore, the coordinates MATH are just the natural MATH coordinates of MATH. Thus one easily verifies that the MATH-coordinates are left invariant by MATH, while the MATH-coordinate transforms according to MATH. Hence, MATH coincides with MATH, proving REF. But this equation implies that for any spacelike linear subspace MATH. It follows that MATH leaves invariant the set of intersections of dS with three-dimensional spacelike linear subspaces of MATH, that is, the set of NAME edges. Now let MATH be a spacelike linear subspace of MATH whose intersection with dS is contained in MATH. Then the MATH-coordinates of the intersection are contained in the interval MATH. Hence every MATH satisfies MATH, and therefore MATH is spacelike. This shows that the preimage under MATH of every edge contained in MATH is a NAME edge. |
math-ph/0011001 | The proof uses standard compact operator results; see, for example, CITE. First note that the operator MATH is compact. This is straightforward: since MATH as MATH, it follows that MATH is the norm limit as MATH of the finite-rank operators defined by MATH for MATH and MATH otherwise, and thus is compact. Since MATH, MATH is compact too. As the series MATH converges strongly, MATH is compact. |
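The compactness mechanism above (norm approximation by finite-rank truncations of an operator whose matrix entries decay to zero) can be illustrated numerically. The diagonal operator below is a toy stand-in, not the operator of the paper:

```python
import numpy as np

# Illustrative sketch: an operator with entries decaying to zero is the
# norm limit of its finite-rank truncations, hence compact.  Here K is
# diagonal with entries 1/(n+1), a hypothetical example.
N = 500
diag = 1.0 / (np.arange(N) + 1.0)
K = np.diag(diag)

def truncate(K, r):
    """Finite-rank approximation: keep only the leading r x r block."""
    Kr = np.zeros_like(K)
    Kr[:r, :r] = K[:r, :r]
    return Kr

# Operator-norm (spectral-norm) error of the rank-r truncation; for this
# diagonal K it equals the largest dropped entry, 1/(r+1), which -> 0.
errors = [np.linalg.norm(K - truncate(K, r), ord=2) for r in (10, 50, 250)]
```

The monotone decay of `errors` mirrors the "norm limit of finite-rank operators" step in the proof.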
math-ph/0011001 | To get a contradiction, assume MATH, MATH, satisfying REF, is a solution of REF. Multiplying REF by MATH and summing with respect to MATH from MATH to MATH, we get MATH. If MATH, the imaginary part of MATH is negative (see REF and the discussion following it); thus, if some MATH is nonzero, then the left side of REF has a strictly negative imaginary part, which is impossible since the right side is real. |
math-ph/0011001 | REF is of the form MATH in MATH, where MATH and MATH. We first show that MATH. Indeed, let MATH and set MATH. Then we see that MATH and, by multiplying with MATH and summing over MATH, we get MATH. Note that, because MATH, the following quantity is real: MATH implying that MATH with (compare REF) MATH. Let MATH. Obviously MATH for MATH, while for MATH we have, by REF, MATH. Thus it is necessary that MATH for all MATH. Assume MATH. Let MATH be such that MATH for all MATH and MATH (thus MATH). Then from REF MATH or, setting MATH, MATH. It is here that we use the genericity condition on MATH. In fact we will show that REF implies MATH if REF is satisfied. To see this, define MATH as MATH if MATH and, if MATH, MATH. Then by REF, MATH is orthogonal in MATH to all MATH, MATH. By the genericity REF, then, MATH, which is a contradiction. Thus MATH. Since MATH is analytic in MATH for small enough MATH, and compact by the same simple arguments as in REF, it follows that MATH exists and is analytic in MATH at MATH. CASE: This part is an immediate calculation. CASE: Note first that MATH, because MATH. Also, MATH, being an integral with respect to a discrete measure of a MATH analytic function, depends analytically on small MATH. The rest of the proof of REF closely follows that of REF, using the following result. For MATH we have MATH. Assume the contrary. At MATH, with MATH and MATH, relation REF, using REF, gives MATH. Multiplying with MATH and summing over MATH, we would get MATH and, since we assumed MATH, then, as in the proof of REF, it follows that MATH for all MATH. This gives, using REF, that MATH. Denote by MATH the sequence MATH if MATH and MATH. As in the proof of REF, using the genericity REF, we get MATH, an obvious contradiction. |
math-ph/0011001 | The substitution MATH and MATH leads to the following system of equations for MATH and MATH. We now let MATH and, for MATH, MATH. We again use the NAME alternative and, as in the previous proofs, we need only show the absence of a solution of the homogeneous equation at MATH. We thus multiply the homogeneous equations associated with REF in the following manner: the equation for MATH by MATH and the equation for MATH by MATH, then sum over all MATH. As in the previous proofs, from the reality of the right-hand side and then from the genericity REF, MATH. Then, similarly, MATH. The rest is immediate. |
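The alternative invoked here (name redacted in this extract; for compact perturbations of the identity it states that unique solvability of the inhomogeneous equation follows once the homogeneous equation has only the trivial solution) can be illustrated in finite dimensions, where every operator is compact. The matrix `K` below is an arbitrary small random perturbation, purely for illustration:

```python
import numpy as np

# Toy finite-dimensional illustration of the alternative used in the
# proof: if (I - K) x = 0 has only x = 0, then (I - K) x = y is uniquely
# solvable for every y.  K is a hypothetical small perturbation.
rng = np.random.default_rng(0)
n = 6
K = 0.1 * rng.standard_normal((n, n))   # small norm, so I - K is injective
A = np.eye(n) - K

# Homogeneous equation has only the trivial solution (A has full rank)...
assert np.linalg.matrix_rank(A) == n

# ...hence the inhomogeneous equation is solvable for any right-hand side.
y = rng.standard_normal(n)
x = np.linalg.solve(A, y)
```

In the proof this is why it suffices to rule out nontrivial homogeneous solutions, which is exactly what the multiply-and-sum argument above accomplishes.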
math-ph/0011001 | Special care is needed only near MATH. The system REF-REF now reads MATH. We take MATH and MATH. The system becomes MATH. The system REF is of the form MATH, where MATH are in MATH. We prove that the homogeneous equation has no nontrivial solutions: MATH implies MATH. Let MATH, MATH, MATH for MATH and MATH, MATH and MATH. The system REF becomes MATH. As in the proofs in Case I, multiplying the first equation by MATH and summing over MATH, we first get from the reality of the right-hand side that MATH for MATH, and then by REF we get that MATH. The conclusion MATH now follows in the same way. End of proof of REF. The operator MATH is compact on MATH, and MATH and MATH are analytic in a complex neighborhood of MATH. We saw in REF that the kernel of MATH is trivial, and by the analytic NAME alternative it follows that MATH exists and is analytic in a small neighborhood of MATH. Hence MATH are analytic. Similarly, MATH are analytic in the same region. |
math-ph/0011001 | All but the last claim have already been shown. The last statement is a standard NAME theorem (note that MATH is the NAME transform along the imaginary line). |
math-ph/0011001 | We can write REF (with MATH) as MATH. It is easy to check, in view of the fact that MATH and MATH are MATH, that MATH. Furthermore, MATH is convergent as MATH. Thus MATH as MATH. Since the left-hand side of REF converges to zero and MATH does not, the equality REF is consistent only if MATH. |