| paper | proof |
|---|---|
cs/0007015 | REF by induction on MATH. The base case MATH trivially follows from the domain of w variables, which have non-negative values. The same observation concerning the domain of w variables simplifies the proof obligation to the case MATH. It is also useful to observe base cases for MATH and MATH, since by the end of round two the computation is based, which simplifies reasoning for higher rounds. For MATH the verification is again trivial by the domain of w variables. For MATH, it is required to show that by the end of round two, MATH. In fact, any change to MATH is an increase from its original value of zero, and at least one increment occurs because MATH is observed by MATH within two rounds following MATH. No subsequent reduction to MATH results in a value less than one, since MATH is at least zero at all states. This verifies the base case for MATH. Now suppose the hypothesis MATH holds for every MATH such that MATH at some state MATH. Note that no such process MATH subsequently executes SREF in the computation, by REF ; therefore any subsequent change to MATH occurs by REF. If SREF assigns MATH a value at least MATH in the round following MATH, or if MATH already has such a value and does not decrease, then the induction step is verified. Therefore we consider the possibility that MATH either remains unchanged or decreases below MATH by execution of REF. A decrease only occurs if MATH, so a decrease below MATH is only possible if there is a neighbor MATH satisfying MATH, which would in turn imply that such a value existed in MATH in the previous round. But by hypothesis, MATH, and since MATH the value of MATH is at least MATH, which contradicts MATH. Therefore such a decrease to MATH cannot occur. The remaining case to consider is that MATH and does not change in the round following MATH. Here there are two cases for MATH and MATH: either MATH is even or it is odd. 
If MATH is even, then MATH is equal to MATH and the hypothesis for MATH is proved: the value of MATH can remain unchanged in the round following MATH and satisfy the hypothesis. If, however, MATH is odd, then MATH is required to increment to verify the hypothesis for MATH. Observe that if MATH is odd, then MATH is equal to MATH, so we infer that MATH held at round MATH (here we assume the hypothesis not only for MATH, but for MATH as well, which is permissible because base cases for MATH and MATH have been verified). Therefore, by round MATH, process MATH observes MATH and increments MATH, which verifies the hypothesis for MATH. |
cs/0007015 | REF by induction on MATH, for MATH. Note that round MATH occurs in a based computation, since within two rounds following MATH the computation is based. The base case for induction is shown for MATH and MATH, since the main induction step relies on two previous rounds of a based computation. For MATH, the base cases are verified directly by the domain of clock variables, which are at least zero at any state. Note that for any MATH, MATH trivially satisfies the conclusion because clock variables are always at least zero; therefore in the remainder of the proof we consider only the case of MATH and MATH satisfying MATH. Now suppose the hypothesis holds for MATH and MATH, MATH, aiming to show that the hypothesis also holds for MATH, that is, that clock variables satisfy the specified lower bound by the end of round MATH. By REF , by round MATH, any process in the set MATH has either executed SREF or will not do so throughout the remainder of the computation. Therefore in round MATH, any reduction to MATH for MATH could only occur by REF , which establishes that MATH holds invariantly following round MATH. So if process MATH executes SREF, the result satisfies MATH, which would verify the inductive hypothesis for MATH and round MATH. If MATH does not execute SREF in round MATH, then consider two cases for MATH. Case: MATH is even. Observe that MATH differs from MATH, meaning that the obligation is to show that MATH is either at least MATH by the end of round MATH, or that MATH increments during round MATH. If the former holds, the hypothesis is proved, so suppose MATH at the end of round MATH. Because MATH is even, MATH by hypothesis for MATH. But this implies that during round MATH, the value of MATH either did not change or was reduced by REF. However, a reduction by REF would satisfy the hypothesis for MATH as well, because of REF 's bound on w variables. 
The only remaining possibility is that MATH does not change in round MATH, implying that MATH observes cEcho during round MATH. Therefore, if MATH when MATH observes cEcho, then MATH will increment either by REF or SREF. To show that MATH does indeed observe cEcho, we use the hypothesis for MATH and each MATH. If MATH, then by round MATH (and throughout round MATH) the relation MATH holds at least until MATH increments its clock. If MATH, then MATH and MATH are equal, and again the relation MATH holds at least until MATH increments its clock. Case: MATH is odd. A similar detailed argument can be given for this case, but there is a simpler approach: MATH and MATH are equal, so the hypothesis for MATH and MATH directly suffice to verify the hypothesis for MATH. |
cs/0007015 | In any computation, either some process executes a double-reset or no process does so. In the latter case, the core system component stabilizes within MATH rounds, and the repair timer concurrently reaches the timer-final condition within MATH rounds by REF . This demonstrates MATH stabilization time if no double-reset occurs; the same argument applies to the case where any double-reset occurs by REF and not by the core system. REF implies that a double-reset occurs at most once for each process in this case. Now consider the possibility that the core system executes a double-reset at least once in a computation. All such assignments cease after the base system stabilizes, which occurs within MATH rounds, so the system stabilization time is MATH. To show that any process executes a double-reset at most once, we demonstrate that the core system stabilizes before MATH holds at any process, since Requirement REF prevents repeated resets of the clock so long as MATH. If any double-reset assignment occurs, then within MATH rounds thereafter, a state MATH occurs such that each clock is at most MATH by REF , and also within MATH rounds, time-accuracy holds and is invariant thereafter by REF . Although REF is conditioned on MATH for a MATH-perturbed initial state, its proof arguments are valid for the case of a MATH-faulty initial state, provided some process executes a double-reset in the first round. While we do not suppose that a double-reset occurs in the first round, the state preceding the first double-reset can be considered as the initial state for the subsequent computation, so that REF 's results apply for the suffix computation. Time accuracy for the extreme case of a MATH-faulty initial state implies for MATH that at least MATH rounds have transpired. Therefore, if MATH, where MATH is the number of rounds needed for stabilization, then as soon as time accuracy holds, no clock increases beyond MATH until the core system has stabilized. 
Requirement REF implies stabilization within MATH rounds, which ensures that the core system stabilizes before there is the possibility of a second double-reset. To complete the proof, we address the period after the first double-reset and before time accuracy holds. This is at most MATH rounds, and it is easy to show that no clock increases from zero to beyond MATH within MATH rounds, so a second double-reset does not occur in the period before time accuracy holds. |
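The cs/0007015 proofs above argue round by round that clock variables, once perturbed, are driven back into agreement within a bounded number of rounds. As a heavily simplified, generic illustration of that style of round-based convergence (this is a classic synchronous "min-plus-one" unison rule on a hypothetical line network, not the paper's protocol; topology and initial values are our own):

```python
def round_step(clocks, neighbors):
    # Synchronous round: each process reads its closed neighborhood and sets
    # its clock to 1 + the minimum value it observes.
    return [1 + min([clocks[i]] + [clocks[j] for j in neighbors[i]])
            for i in range(len(clocks))]

# Line network 0 - 1 - 2 - 3 with arbitrary (perturbed) initial clocks.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
clocks = [7, 0, 4, 2]
for _ in range(len(neighbors)):        # a few rounds suffice on this topology
    clocks = round_step(clocks, neighbors)
print(clocks)  # → [4, 4, 4, 4]: all clocks agree
```

Once the clocks agree, the rule reduces to incrementing every clock by one per round, mirroring the "no clock drops below the bound" invariants the proofs establish.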
cs/0007030 | The identity function from MATH to itself is trivially a step refinement from MATH to itself. Hence MATH is reflexive. Transitivity follows from the observation that if MATH is a step refinement from MATH to MATH and MATH is a step refinement from MATH to MATH, then the function composition MATH is a step refinement from MATH to MATH. |
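The reflexivity and transitivity argument can be made concrete on toy automata. In this sketch, a "step refinement" is simplified to a state mapping that preserves start states and matches every step exactly (stuttering steps are omitted); all names and the automata below are our own illustration, not the paper's definitions:

```python
def is_step_refinement(r, A, B):
    """A, B: dicts with 'start' (set of states) and 'steps' (set of (s, label, s'))."""
    if not all(r[s] in B["start"] for s in A["start"]):
        return False
    return all((r[s], a, r[t]) in B["steps"] for (s, a, t) in A["steps"])

A = {"start": {"a0"}, "steps": {("a0", "x", "a1"), ("a1", "y", "a0")}}
B = {"start": {"b0"}, "steps": {("b0", "x", "b1"), ("b1", "y", "b0")}}
C = {"start": {"c0"}, "steps": {("c0", "x", "c0"), ("c0", "y", "c0")}}

r1 = {"a0": "b0", "a1": "b1"}           # refinement A -> B
r2 = {"b0": "c0", "b1": "c0"}           # refinement B -> C
ident = {s: s for s in ("a0", "a1")}    # identity map: reflexivity

comp = {s: r2[r1[s]] for s in r1}       # function composition: transitivity
print(is_step_refinement(ident, A, A),
      is_step_refinement(r1, A, B),
      is_step_refinement(comp, A, C))   # → True True True
```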
cs/0007030 | For REF , suppose MATH. By induction on MATH we prove MATH . If MATH then both MATH and MATH are MATH. Clearly, MATH. For the induction step, suppose MATH. For reasons of symmetry we may assume, without loss of generality, that MATH. Let MATH be the largest index with MATH and MATH. (By monotonicity, MATH can only be related to indices less than or equal to MATH, and by totality there is at least one such index.) We distinguish between three cases: CASE: MATH. Then by REF , MATH. By induction hypothesis, MATH . Hence MATH. CASE: MATH. Then by REF , MATH. By induction hypothesis, MATH . Hence MATH. CASE: MATH. Then by REF , MATH. By REF , this implies MATH. By induction hypothesis, MATH . Hence MATH. This completes the proof of the induction step. For REF , suppose that MATH. Then there exists an index relation MATH that relates MATH and MATH. Using REF and the fact that both MATH and MATH are total, it follows that each finite prefix of MATH is also a finite prefix of MATH, and vice versa. This implies MATH. |
cs/0007030 | REF follows from the definitions. REF follow immediately from REF and the definitions. |
cs/0007030 | Suppose MATH is a step refinement from MATH to MATH. Let MATH be an execution of MATH. Inductively, we define an execution MATH of MATH and an index relation MATH such that MATH and MATH are MATH-related via MATH. To start with, define MATH and declare MATH to be an element of MATH. Now suppose MATH and MATH is a nonfinal index of MATH. We distinguish between two cases: CASE: If MATH then define MATH, MATH, and declare MATH to be an element of MATH; CASE: otherwise, declare MATH to be an element of MATH. By construction, using the defining properties of a step refinement, it follows that MATH is an index relation. This implies MATH. |
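The inductive construction above, which turns each step of the source execution either into a matching step of the target or absorbs it, can be sketched as follows. This is our simplified stuttering variant (a step whose endpoints map to the same target state is dropped); the names are hypothetical:

```python
def project_execution(r, execution):
    """execution: alternating list [s0, a1, s1, a2, s2, ...] of states and labels."""
    out = [r[execution[0]]]
    for i in range(1, len(execution), 2):
        label, state = execution[i], execution[i + 1]
        if r[state] != out[-1]:      # the step matches a real step of the target
            out += [label, r[state]]
        # otherwise the step stutters in the target and is absorbed
    return out

r = {"a0": "b0", "a1": "b0", "a2": "b1"}   # a0 -> a1 stutters: both map to b0
run_A = ["a0", "x", "a1", "y", "a2"]
print(project_execution(r, run_A))  # → ['b0', 'y', 'b1']
```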
cs/0007030 | The relation MATH is a step refinement from MATH to MATH. |
cs/0007030 | Together with an arbitrary norm function, any step refinement (viewed as a relation) is a normed forward simulation. |
cs/0007030 | Let MATH be a function such that MATH and MATH implies CASE: If MATH then MATH. CASE: If MATH then MATH. CASE: If MATH then MATH. The existence of MATH, which chooses between a left move REF of MATH, a common move REF of MATH and MATH, or a right move REF of MATH, is guaranteed by the fact that MATH is a normed forward simulation. Let MATH. Then MATH. Inductively, we define a sequence MATH of REF in MATH. The first element in the sequence is MATH. If MATH is an element of the sequence, and MATH is a nonfinal index of MATH, then we define MATH as follows: CASE: If MATH then MATH. CASE: If MATH then MATH. CASE: If MATH then MATH. Suppose that both MATH and MATH occur in sequence MATH. We claim that MATH and MATH. To see why this is true, assume without loss of generality that MATH occurs before MATH. Now observe that the values of both the first and second component of elements in MATH increase monotonically. This means that each successor of MATH up to and including MATH has been obtained from its predecessor by applying REF . This implies that the second and third components, respectively, of all elements in the sequence from MATH until MATH coincide. Hence MATH and MATH. Using this property, we can define for each element MATH in MATH, MATH and MATH. Let MATH and let MATH. By construction of MATH, using the properties of MATH, it follows that MATH is an execution fragment of MATH that starts in MATH, and that MATH is an index relation over MATH. This implies MATH. |
cs/0007030 | Immediate from the definitions and REF . |
cs/0007030 | REF follows by REF . The proof of REF is routine. |
cs/0007030 | If MATH is infinite then let MATH. If MATH is finite then let MATH be the finite prefix of MATH up to and including the first state whose index is related by MATH to the final index of MATH. Inductively we define a sequence MATH of pairs in MATH. The first element of the sequence is MATH. If MATH is an element of the sequence and MATH is a nonfinal index then we define MATH as follows: CASE: MATH CASE: MATH CASE: MATH . Note that since MATH is an index relation, MATH is properly defined. Let MATH. It is routine to check that MATH, that MATH and MATH are MATH-related via MATH, and that MATH is reduced. A tricky point is the totality of MATH and MATH. We prove that MATH is total by contradiction. Suppose that MATH is not total. Let MATH be the smallest index of MATH with MATH. Let MATH be the smallest index of MATH with MATH (MATH exists since index relation MATH is total). Let MATH be the maximal index of MATH with MATH (there is a maximal index since MATH implies MATH, which implies MATH by monotonicity of index relation MATH). Let MATH. Since MATH, MATH. Hence MATH. But this contradicts the fact that MATH is the maximal index of MATH with MATH. In a similar way, the totality of MATH and NAME can also be proved by contradiction. |
cs/0007030 | For reflexivity, observe that the identity function from MATH to itself is a branching forward simulation from MATH to itself. For transitivity, suppose MATH and MATH are branching forward simulations from MATH to MATH and from MATH to MATH, respectively. We claim that MATH is a branching forward simulation from MATH to MATH. It is trivial to check that MATH satisfies REF in the definition of a branching forward simulation. For REF , suppose that MATH. Then there exists a state MATH of MATH such that MATH and MATH. Hence there is an execution fragment MATH starting in MATH such that MATH and MATH are MATH-related via some index relation MATH. By REF , we may assume that MATH is reduced. Also, there is an execution fragment MATH starting in MATH such that MATH and MATH are MATH-related via some index relation MATH. Again by REF , we may assume that MATH is reduced. Using the fact that both MATH and MATH are reduced, it is routine to check that MATH and MATH are MATH-related via index relation MATH. Thus MATH satisfies REF in the definition of a branching forward simulation. |
cs/0007030 | The relation MATH is a branching forward simulation from MATH to MATH. |
cs/0007030 | Trivial. |
cs/0007030 | Similar to the proof of REF . |
cs/0007030 | REF follows immediately by REF and the totality of MATH. In order to prove REF , suppose that MATH is image-finite. Let MATH be an execution of MATH. We have to establish the existence of an execution MATH of MATH with MATH. If MATH is finite then this follows by REF and the totality of MATH. So assume that MATH is infinite. We use a minor variation of NAME 's Lemma CITE presented by CITE: Let MATH be an infinite digraph such that REF MATH has finitely many roots, that is, nodes without incoming edges, REF each node of MATH has finite outdegree, and REF each node of MATH is reachable from some root. Then there is an infinite path in MATH starting from some root. The nodes of the graph MATH that we consider are pairs MATH where MATH is a finite execution of MATH and MATH is an index relation that relates MATH to some finite prefix of MATH. There is an edge from a node MATH to a node MATH iff MATH is a prefix of MATH and MATH extends MATH with precisely one element. It is straightforward to check that MATH satisfies REF 's Lemma. Hence MATH has an infinite path. Let MATH be the union of all the index relations occurring on nodes in this path, and let MATH be the limit of the finite executions of the nodes in this path. Observe that, by image-finiteness of MATH, each index of MATH occurs in the domain of MATH. Hence MATH. |
cs/0007030 | For REF , suppose that MATH is deterministic and that MATH is a normed backward simulation from MATH to MATH. Suppose that MATH is a reachable state of MATH. We will prove that MATH contains exactly one element. Since any normed backward simulation that is functional on the reachable states trivially induces a step refinement, this gives us MATH. Because MATH is a normed backward simulation it is a total relation, so we know MATH contains at least one element. Suppose that both MATH and MATH; we prove MATH. Since MATH is reachable, MATH has an execution MATH that ends in MATH. By REF , MATH has executions MATH and MATH which end in MATH and MATH, respectively, such that MATH and MATH. By REF , MATH. Now MATH follows by REF , using the fact that MATH is deterministic. For REF , suppose that all states of MATH are reachable, MATH has fin, and MATH is a normed backward simulation from MATH to MATH. Suppose that MATH is a state of MATH. Since MATH is reachable, there is an execution MATH that ends in MATH. Let MATH be the trace of MATH. By REF there exists, for each MATH, an execution MATH of MATH that ends in MATH such that MATH. By REF , MATH. Hence MATH. But since MATH has fin, MATH is finite by REF . Hence MATH is image-finite. |
cs/0007030 | REF follows by REF . The proof of REF is routine. |
cs/0007030 | Similar to the proof of REF . |
cs/0007030 | The relation MATH is a branching backward simulation from MATH to MATH. |
cs/0007030 | Clearly, MATH is a forest. The function MATH which maps each finite execution of MATH to its last state is a step refinement from MATH to MATH, and the relation MATH, together with an arbitrary norm function, is a normed forward simulation from MATH to MATH. |
cs/0007030 | Take MATH. By REF , MATH is a forest and MATH. Since MATH, also MATH by soundness of history relations. Next apply the partial completeness result for backward simulations REF to conclude MATH. |
cs/0007030 | Straightforward from the definitions. |
cs/0007030 | Forward implication follows by REF . For backward implication, suppose MATH. Then MATH by the definition of history relations, and MATH because any step refinement is a normed forward simulation. Now MATH follows by the fact that MATH is a preorder. |
cs/0007030 | Suppose that MATH is a normed history relation from MATH to MATH, and MATH is a normed history relation from MATH to MATH. Because MATH and MATH have fin, both MATH and MATH are finite. Since MATH is a step refinement, it maps start states of MATH to start states of MATH. Using the fact that MATH is a forward simulation, we infer that MATH is surjective on start states. Hence MATH. By a similar argument, using the fact that MATH is a normed history relation from MATH to MATH, we obtain MATH. This means that MATH is also injective on start states. Let MATH be an arbitrary trace of MATH and MATH. Using a similar argument as above, we infer MATH . Since, by REF , both MATH and MATH are finite, it follows that MATH . This means that MATH and MATH are injective on the sets MATH and MATH, respectively. But since each reachable state is in a set MATH or MATH, for some MATH, it follows that MATH and MATH are injective on all states. Now the required isomorphism property follows from the fact that MATH and MATH are step refinements. |
cs/0007030 | Analogous to that of REF , using REF . |
cs/0007030 | By REF , there exists an automaton MATH with MATH. Next, REF yields the required automaton MATH with MATH, which proves REF . The proof of REF is similar, but uses REF . |
cs/0007039 | REF were introduced and shown to be derived in a preferential relation in CITE. For REF, suppose MATH. Applying S, we get MATH and, by Right Weakening, we conclude REF. For REF, suppose MATH. Then, by REF, we get MATH and, by Left Logical Equivalence, we conclude REF. |
cs/0007039 | We must prove that MATH satisfies NAME, Left Logical Equivalence, Right Weakening and Consistency Preservation with respect to MATH. The rest of the rules are already satisfied since they do not involve an underlying consequence relation. First notice that Consistency Preservation is immediate by definition of MATH. For NAME, suppose MATH; then MATH. By compactness of MATH, there exist MATH such that MATH and MATH. By repeated applications of Or, we get MATH. Let MATH; then MATH and MATH. By NAME of MATH on MATH, we have MATH. By REF , we have MATH, so, by REF , we have MATH. Using Cut, we get MATH, as desired. For Left Logical Equivalence, suppose MATH, MATH, and MATH, that is, MATH and MATH. By compactness, there exist MATH such that MATH, MATH, MATH, and MATH. As above, we have MATH and MATH. Therefore, by REF , we get MATH, as desired. Coming to Right Weakening, suppose MATH and MATH, that is, there exists MATH such that MATH and MATH. By And, we have MATH, so, using Right Weakening of MATH on MATH, we get MATH, as desired. |
cs/0007039 | REF were introduced and shown to be derived in a preferential relation in CITE. For REF, suppose MATH. Applying S, we get MATH, and, by Right Weakening, we conclude REF. For REF, suppose MATH. Then, by REF, we get MATH, and, by Left Logical Equivalence, we conclude REF. |
cs/0007039 | We must prove that MATH satisfies NAME, Left Logical Equivalence, Right Weakening and Consistency Preservation with respect to MATH. The rest of the properties are already satisfied since MATH is a rational inference relation. First notice that Consistency Preservation is immediate by definition of MATH. For NAME, suppose MATH; then MATH. By compactness of MATH, there exists MATH such that MATH and MATH. By repeated applications of Or, we get MATH. Let MATH; then MATH and MATH. By NAME of MATH on MATH, we have MATH. By REF , we have MATH, so, by REF , we have MATH. Using Cut, we get MATH. For Left Logical Equivalence, suppose MATH, MATH and MATH, that is, MATH and MATH. By compactness there exist MATH such that MATH, MATH, MATH and MATH. As above, we have MATH and MATH. Therefore, by REF , we get MATH. Coming to Right Weakening, suppose MATH and MATH, that is, there exists MATH such that MATH and MATH. By And, we have MATH, so, using Right Weakening of MATH on MATH, we get MATH. |
cs/0007039 | REF is immediate from the defining condition (MATH). For REF , suppose that MATH. We must show MATH. Since MATH, we have MATH, by Right Weakening. Applying Or, we get MATH. By hypothesis and And, we get MATH. The left to right direction of REF is straightforward. For the right to left direction, suppose MATH. Then, by Compactness, there exist MATH such that MATH, for all MATH, and MATH. By Conjunctiveness, we have MATH. So, by Dominance, we have MATH. Hence, by Transitivity, MATH, as desired. For REF , we have, by definition of MATH, that MATH, for all MATH, or there is MATH such that MATH and MATH. If the former holds then the result follows immediately. If the latter holds then MATH, which contradicts MATH by Dominance. |
cs/0007039 | For REF , if MATH, we immediately get MATH, by REF . If not, that is, MATH, then, by And and Right Monotonicity, we have MATH. Again, by REF , we have MATH. For REF , assume MATH. If MATH, then we have MATH, so MATH, and hence, by Dominance, MATH, as desired. If not, then there must be MATH such that MATH and MATH. Therefore MATH. Now, suppose MATH towards a contradiction. By Connectivity, we have MATH and so, by Conjunctiveness, MATH. Hence MATH, a contradiction to Reflexivity. For REF , if MATH then we immediately have MATH, by REF . If not, that is, MATH, then applying Bounded Disjunction, we have MATH. The latter implies MATH, so, by Dominance, MATH, for all MATH. Hence MATH, by REF . |
cs/0007039 | We shall try not to overlap with NAME and NAME 's proof of REF (see proof of REF). Therefore we do not cover the case where the second half of condition REF applies. The list of rules we verify is NAME, Left Logical Equivalence, And, Cut, Cautious Monotony, Or and Rational Monotony. Right Weakening follows from the above list. We shall first show that MATH is a rational inference relation. For NAME, suppose that MATH but not MATH for all MATH. So there exists MATH such that MATH. But then MATH and therefore MATH. For Left Logical Equivalence, suppose that MATH, MATH and MATH for all MATH. Since MATH we have MATH. By Dominance we get MATH and by Transitivity MATH for all MATH. Therefore MATH. For And, suppose that MATH and MATH. In case MATH for all MATH, we immediately have MATH. Turning to Or, suppose that MATH and MATH. If MATH for all MATH and MATH for all MATH, then by Conjunctiveness we have either MATH or MATH. In either case MATH for all MATH by Transitivity. Therefore MATH. In the mixed case, say MATH for all MATH and there exists MATH such that MATH and MATH, we have MATH. Now suppose that MATH. By Conjunctiveness we must have MATH, a contradiction. Thus MATH. Therefore MATH. For Cut, suppose that MATH and MATH. If MATH for all MATH then, by definition, MATH. If not, there exists MATH such that MATH and MATH. Now suppose that MATH for all MATH. Observe that MATH. We moreover have that MATH. Therefore MATH. For Rational Monotony, suppose that MATH and MATH. If MATH for all MATH, then we get a contradiction because MATH. For Cautious Monotony, suppose that MATH and MATH. Observe that in case MATH, the result follows by an application of Rational Monotony. If not, that is, MATH, then by applying And we have MATH. If MATH for all MATH, then, since MATH, we have MATH but MATH, and therefore MATH. Otherwise there exists MATH such that MATH and MATH. But then we have that MATH and therefore MATH, which is a contradiction to our hypothesis. 
Definition REF is identical to that of NAME and NAME in the second disjunct. Therefore we shall only treat the first disjunct. For Dominance, suppose MATH and MATH. We have MATH and MATH. By Or, we get MATH. By NAME, we have MATH. Applying And, we get MATH. For Conjunctiveness, suppose MATH and MATH. These imply MATH and MATH, by Left Logical Equivalence. Applying And, we get MATH. By reflexivity of MATH and And, we have MATH. By Left Logical Equivalence again, we have MATH, and so MATH, as desired. For Transitivity, let MATH and MATH. Suppose MATH. By REF , we have MATH. REF gives MATH. By S, we have MATH. So, we have MATH. Using the initial hypothesis and Or, we get MATH. By REF , we have MATH, that is, MATH. Now, suppose MATH and MATH. Then, by REF , we have MATH. Since MATH, we have MATH. Therefore if MATH, then And gives MATH. We shall now show that the initial rational inference relation MATH and the one MATH induced by the expectation ordering with REF are the same. We show first that MATH. Let MATH. We must show that MATH. If MATH, for all MATH, then it clearly holds. If not, let MATH; then MATH. So MATH. Also, MATH. If MATH then MATH, for all MATH (using REF ), so, by our hypothesis, MATH. Observe that MATH, and MATH. Right Weakening gives MATH. So MATH and therefore MATH. Hence MATH. For the other direction, that is, MATH, let MATH. Suppose first that MATH for all MATH. Therefore MATH. This gives either MATH or MATH. Since obviously the latter does not hold, we must have that MATH and hence, by Right Weakening, MATH. Now suppose that there exists MATH with MATH and MATH, that is, MATH. The latter implies that MATH and MATH. Observe that MATH; hence, by NAME, MATH. Applying Cut on the latter and MATH, we get MATH. Applying And on MATH and MATH, we get MATH. Since MATH, we have that MATH. Applying Rational Monotonicity on the latter and MATH, we get MATH, that is, MATH. It remains to show MATH. Let MATH be MATH and assume MATH. 
By definition of MATH, we have that MATH or MATH, where MATH. The former implies MATH, for all MATH, by REF . By Dominance and Transitivity, we have MATH, as desired. The latter implies that MATH. Since MATH is (classically) equivalent to MATH, we get MATH and so MATH, by Dominance and Transitivity. For the other direction, assume MATH. If MATH, for all MATH then MATH, by REF , and therefore MATH, by REF . If not then, by Conjunctiveness on the hypothesis, we have MATH. Since MATH is (classically) equivalent to MATH, we get MATH. So MATH and so, by REF , we have MATH. The latter implies, by REF , MATH, as desired. |
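Several of the cs/0007039 proofs move between a rational inference relation and an ordering or ranking that induces it. As a small semantic sketch in the style of ranked (Lehmann-Magidor) models: the atoms, the ranking, and the three queries below are our own toy, not the paper's construction.

```python
from itertools import product

# Worlds are valuations of three atoms; rank(w) measures abnormality.
# alpha |~ beta iff beta holds in every minimal-rank world satisfying alpha
# (vacuously true when alpha has no worlds).
ATOMS = ("bird", "penguin", "flies")
WORLDS = [dict(zip(ATOMS, bits)) for bits in product([False, True], repeat=3)]

def rank(w):
    r = 0
    if w["penguin"]:
        r += 1                           # penguins are exceptional
    if w["penguin"] and not w["bird"]:
        r += 2                           # non-bird penguins are very abnormal
    if w["bird"] and not w["penguin"] and not w["flies"]:
        r += 1                           # an ordinary flightless bird is abnormal
    if w["penguin"] and w["flies"]:
        r += 1                           # a flying penguin is abnormal
    return r

def entails(alpha, beta):
    """alpha |~ beta in the ranked model."""
    models = [w for w in WORLDS if alpha(w)]
    if not models:
        return True
    m = min(rank(w) for w in models)
    return all(beta(w) for w in models if rank(w) == m)

bird = lambda w: w["bird"]
penguin = lambda w: w["penguin"] and w["bird"]
flies = lambda w: w["flies"]

print(entails(bird, flies),                        # birds normally fly
      entails(penguin, flies),                     # penguins normally do not
      entails(penguin, lambda w: not w["flies"]))  # → True False True
```

The defeasible behavior (penguins inherit birdhood but override flight) is exactly the kind of nonmonotonicity that distinguishes rational inference from classical consequence.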
cs/0007039 | Let MATH and MATH be the ranked consequence operators induced by MATH and MATH, respectively, where the latter is the closure of the former under arbitrary unions and intersections. Without loss of generality we can assume that the sets belonging to MATH carry the same indices in MATH. We must prove that MATH is equal to MATH. From left to right, suppose MATH and there exists MATH such that MATH. Since MATH we also have MATH. Suppose MATH. If MATH for some MATH then as above MATH. If MATH for all MATH, that is, MATH for all MATH, then MATH for all MATH. For either MATH or MATH, where MATH, and MATH for all MATH. From right to left, suppose MATH; then there is MATH such that MATH. Either MATH or MATH, where MATH. In both cases there is some MATH such that MATH. Therefore MATH. Suppose now that MATH. If there exists MATH such that MATH and MATH, then either MATH or MATH, where MATH. In the first case there exists MATH such that MATH. We also have that MATH for all MATH. So for the same MATH we have that MATH. Therefore MATH. In the second case we have that MATH for all MATH while there exists MATH such that MATH. If MATH for all MATH, then we immediately have that MATH for all MATH and hence MATH. |
cs/0007039 | Let MATH be a ranked consequence operator. Denote MATH with MATH, and MATH with MATH. We should verify that MATH satisfies NAME, Transitivity, and Conjunctiveness. For NAME, suppose MATH. We have MATH, so if MATH then MATH, for all MATH. Hence MATH. For Transitivity, suppose MATH and MATH. Pick a MATH such that MATH. We have MATH, since MATH. Hence MATH, since MATH, as desired. For Conjunctiveness, suppose MATH and MATH, towards a contradiction. By our assumptions, there exist MATH and MATH with MATH such that MATH and MATH and MATH and MATH. Now, MATH's form a chain under inclusion, so either MATH, or MATH. If MATH then MATH, a contradiction, since MATH and MATH. Similarly, for MATH. We must now show that MATH iff MATH. Assume MATH. We have either MATH, for all MATH, or there exists MATH such that MATH and MATH. Assume the former. We have immediately MATH, for all MATH. Hence MATH, by REF . Assume the latter; then MATH, that is, MATH. Hence MATH, by REF . The other direction is similar. |
cs/0007039 | We shall give an alternative proof with a straightforward verification of the rules of rational inference. We shall show that a ranked consequence operator MATH based on MATH induced by a chain of sets MATH satisfies NAME, Left Logical Equivalence, Right Weakening, And, Cut, Cautious Monotonicity, Or and Rational Monotonicity. For NAME, suppose MATH. We have either MATH for all MATH or there exists some MATH such that MATH. In the first case we have immediately MATH. In the second case we have that MATH, by our hypothesis, and therefore MATH. For Left Logical Equivalence, suppose that MATH and MATH. We have either MATH, that is, MATH, for all MATH or there exists some MATH such that MATH and MATH. Since MATH and MATH are equivalent under MATH, we have in the first case that MATH for all MATH and in the second case that there exists some MATH such that MATH and MATH. In both cases we have MATH. For Right Weakening, suppose MATH and MATH. If MATH for all MATH then we immediately get MATH. If there exists MATH such that MATH and MATH then we also have that MATH by hypothesis. Therefore MATH and MATH. For And, suppose MATH and MATH. If MATH for all MATH then we immediately have MATH. If not, then there exists MATH such that MATH, MATH, MATH, and MATH. Since MATH is linear, let MATH. Then MATH and therefore MATH. So MATH and MATH. For Cut, suppose MATH and MATH. If MATH for all MATH then we immediately have MATH. If not, then there exists MATH such that MATH and MATH. If MATH REF for all MATH, then for MATH, MATH, that is, MATH. Combining with our hypothesis we get MATH, that is, MATH, a contradiction. Therefore there exists MATH such that MATH and MATH. There are two cases: either MATH or MATH. In the first case we have MATH as well, so by (regular) cut on MATH and our hypothesis we get MATH. Therefore MATH. In the second case, observe that MATH so as above MATH. Since MATH, MATH, so again MATH. For Cautious Monotonicity, suppose MATH and MATH. 
If MATH for all MATH then we also have MATH, hence MATH for all MATH. Therefore MATH. If not, there exists MATH such that MATH, MATH, MATH and MATH. Let MATH; then MATH and both MATH and MATH. From the latter we get MATH. Suppose that MATH; then MATH. Combining this with the above, we get MATH, contradicting our hypothesis. For Or, suppose that MATH and MATH. If both MATH and MATH for all MATH then we also have MATH for all MATH and therefore MATH. If this is true for only one of them, say MATH for all MATH but there exists MATH such that MATH and MATH, then we also have MATH and MATH, which implies MATH. So MATH and therefore MATH. If neither of them holds then there exist MATH such that MATH, MATH, MATH and MATH. Let MATH; then we have MATH, MATH and MATH. So MATH and therefore MATH. For Rational Monotonicity, suppose that MATH and MATH. We cannot have MATH for all MATH, because we have that MATH. So there exists MATH such that MATH and MATH. We have by monotonicity of MATH that MATH. Now observe that MATH implies that if MATH then MATH. However, since MATH we have that MATH, so MATH. Therefore MATH by definition. |
cs/0007039 | Denote the comparative rational inference relation by MATH. We shall define a chain of sets MATH which generates a ranked consequence operator MATH equal to MATH. Let MATH be the equivalence relation induced by MATH (an expectation ordering is clearly a preorder). The equivalence classes will be denoted by MATH (where MATH). It is also clear that the set of equivalence classes is linearly ordered. Now, for each MATH, let MATH . Note here that, by Dominance, the sets MATH are closed under consequence. Moreover, we have MATH iff MATH. Now, generate a ranked consequence operator MATH as in REF . We must show that MATH and MATH are identical. Let MATH, that is, either MATH for all MATH, or there exists MATH such that MATH and MATH. In the first case MATH for all MATH. So MATH for all MATH and therefore MATH. In the second case, consider MATH. We have MATH, so MATH. Suppose that MATH; then by compactness there exists MATH such that MATH. By Dominance we have MATH and by definition of MATH, MATH, that is, MATH, a contradiction. Now let MATH. If MATH for all MATH then MATH for all MATH and we are done. If not, then there exists MATH such that MATH and MATH. Since MATH we must have MATH. By compactness there exists MATH such that MATH, that is, MATH and MATH. Therefore MATH. |
cs/0007044 | Let MATH be a nonhomogeneous NAME process with intensity function MATH, which implies MATH. Now, the chance that no new tuple was inserted during MATH is the same as the chance that the process MATH has no arrivals during MATH, that is, MATH. The chance that a new tuple was inserted during MATH is just the complement of the chance of no arrivals, namely, MATH . Taking the derivative of this expression with respect to MATH and making a change of variables, the probability density of the time until the next insertion from time MATH is MATH. Thus, MATH. |
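The no-arrival probability invoked above is the standard nonhomogeneous Poisson formula: the chance of no arrival in an interval is the exponential of minus the integrated intensity. As a minimal sketch (the intensity function, horizon, and all names below are illustrative assumptions, since the paper's actual intensity is redacted), a thinning-based simulation can be checked against the analytic probability:

```python
import math
import random

def next_arrival(lam, lam_max, t0, rng):
    """First arrival after t0 of a nonhomogeneous Poisson process with
    intensity lam(t) <= lam_max, sampled by Lewis-Shedler thinning."""
    t = t0
    while True:
        t += rng.expovariate(lam_max)        # candidate from a rate-lam_max process
        if rng.random() * lam_max < lam(t):  # accept with probability lam(t)/lam_max
            return t

def no_arrival_prob(lam, t0, s, n=2000):
    """Analytic P(no arrival in (t0, t0+s]) = exp(-integral of lam), trapezoid rule."""
    h = s / n
    integral = sum(0.5 * (lam(t0 + i * h) + lam(t0 + (i + 1) * h)) * h
                   for i in range(n))
    return math.exp(-integral)

rng = random.Random(0)
lam = lambda t: 1.0 + 0.5 * math.sin(t)      # illustrative intensity, bounded by 1.5
t0, s, trials = 0.0, 1.0, 20000
empirical = sum(next_arrival(lam, 1.5, t0, rng) > t0 + s
                for _ in range(trials)) / trials
analytic = no_arrival_prob(lam, t0, s)
```

Thinning only needs an upper bound `lam_max` on the intensity; the trapezoid sum stands in for the closed-form exponent, which here is not available in closed form.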
cs/0007044 | Let MATH be a randomly chosen tuple, and assume that no tuple corresponding to MATH in MATH is deleted. The proof is identical to that of REF , replacing MATH with MATH and MATH with MATH. |
cs/0007044 | Considering all MATH, and using the well-known fact that if MATH for MATH are independent, then MATH we conclude that the remaining lifetime of MATH (denoted MATH) has a nonhomogeneous exponential distribution with intensity function MATH. The probability of a given tuple MATH surviving through time MATH is thus MATH and the probability that a randomly chosen tuple in MATH survives until time MATH is therefore MATH . |
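The "well-known fact" used above, in its homogeneous form, says that the minimum of independent exponential lifetimes is itself exponential with the summed rates. A small Monte Carlo sketch (the rates and sample size are illustrative assumptions, not the paper's values) checks this:

```python
import random

def min_exponential(rates, rng):
    """Minimum of independent Exp(rate_i) lifetimes; the well-known fact
    says this minimum is itself Exp(sum(rates))."""
    return min(rng.expovariate(r) for r in rates)

rng = random.Random(42)
rates = [0.5, 1.0, 2.0]          # illustrative per-tuple deletion rates
total = sum(rates)               # combined intensity of the minimum
n = 100000
sample_mean = sum(min_exponential(rates, rng) for _ in range(n)) / n
# An Exp(total) random variable has mean 1/total, so the sample mean
# should be close to 1/total.
```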
cs/0007044 | MATH . |
cs/0007044 | Let MATH denote the lifetime of a tuple inserted into MATH at time MATH. Similarly to the proof of REF , we know that MATH. The probability such a tuple survives through time MATH is the random quantity MATH . Considering all the possible elements of the vector MATH, we then obtain MATH . Assume now that MATH has fixed multiplicity for all MATH. Consequently, we replace MATH with MATH. Drawing on the proof of the previous lemma, MATH . |
cs/0007044 | Let MATH be the number of insertion events in MATH, and let their times be MATH. Suppose that MATH and that insertion event MATH happens at time MATH. Event MATH inserts a random number of tuples MATH, each of which has probability MATH of surviving through time MATH. Therefore, the expected number of tuples surviving through MATH from insertion event MATH is MATH. Consequently, MATH . Next, we recall, given that MATH, that the times MATH of the insertion events are distributed like MATH independent random variables with probability density function MATH on the interval MATH. Thus, MATH . Finally, removing the conditioning on MATH, we obtain MATH . In the case that MATH is always MATH, we may use the notion of a filtered NAME process: if we consider only tuples that manage to survive until time MATH, the chance of a single insertion in time interval MATH no longer has the limiting value MATH, but instead MATH. Therefore, the insertion of surviving tuples can be viewed as a nonhomogeneous NAME process with intensity function MATH over the time interval MATH, so MATH. |
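The filtered-process observation at the end of the proof above is the standard Poisson thinning property: if each arrival of a Poisson process is kept independently with a time-dependent probability, the kept arrivals again form a (nonhomogeneous) Poisson process with the thinned intensity. A hedged simulation sketch (the insertion rate, horizon, and survival function below are illustrative stand-ins for the redacted MATH expressions):

```python
import math
import random

rng = random.Random(1)
lam, T = 2.0, 3.0                          # illustrative insertion rate and horizon
surv = lambda s: math.exp(-0.5 * (T - s))  # illustrative survival prob. to time T

def surviving_count():
    """One realization: homogeneous Poisson insertions on [0, T], each
    insertion at time s kept independently with probability surv(s)."""
    n, t = 0, 0.0
    while True:
        t += rng.expovariate(lam)
        if t > T:
            return n
        if rng.random() < surv(t):
            n += 1

trials = 20000
mean_survivors = sum(surviving_count() for _ in range(trials)) / trials
# By thinning, survivors form a Poisson process of intensity lam*surv(s),
# so the expected count is the integral of lam*surv(s) over [0, T]:
expected = (lam / 0.5) * (1 - math.exp(-0.5 * T))
```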
cs/0007044 | Each tuple in MATH has a survival probability of MATH, which yields that MATH. By the definitions of MATH and MATH, one has that MATH so therefore MATH . |
cs/0007044 | Let MATH denote a generic random variable with cumulative distribution MATH. The probability of MATH surviving throughout MATH is then MATH and therefore the expected number of tuples in MATH that survive through time MATH is MATH . We now consider a tuple MATH inserted at some time MATH. The probability that such a tuple survives through time MATH is simply MATH. By reasoning similar to the proof of REF , MATH . The conclusion then follows from MATH . |
cs/0007044 | Let MATH, MATH, and MATH be the times until the next insertion, modification, and deletion in MATH, respectively. From REF, we have that MATH. Now, for each MATH, there are MATH tuples whose deletion would cause a deletion in MATH. The time until deletion of any such MATH is distributed like MATH. The deletion processes for all these tuples are independent across all of MATH, so we can use REF to conclude that MATH . From the preceding discussion, we have MATH . Since MATH, we therefore have, again using independence and REF , that MATH, where MATH . Integrating over MATH results in MATH . Therefore, the probability of any alteration to MATH in the time interval MATH is MATH . |
cs/0007044 | Consider a continuous-time NAME chain on the same state space MATH, and with the same instantaneous transition probabilities MATH, where MATH. However, in the new chain, the holding time in each state MATH is simply a homogeneous exponential random variable with arrival rate MATH. We call this system the linear-time chain, to distinguish it from the original chain. Define MATH to be the chance that the linear-time chain is in state MATH at time MATH, given that it is in state MATH at time MATH. Standard results for finite-state continuous time NAME chains imply that MATH . By a transformation of the time variable, we then assert that MATH from which the result follows. |
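The linear-time chain above is an ordinary finite-state continuous-time chain, whose transition probabilities over time t are the matrix exponential of the generator times t. As an illustrative sketch (the two-state generator and all rates below are assumptions, not the paper's chain), a truncated-series matrix exponential can be checked against the known closed form for a two-state chain:

```python
import math

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transition_matrix(Q, t, terms=60):
    """P(t) = exp(Q t) for a small generator Q, via a truncated Taylor series."""
    n = len(Q)
    M = [[Q[i][j] * t for j in range(n)] for i in range(n)]
    P = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in P]
    for k in range(1, terms):
        term = [[x / k for x in row] for row in mat_mul(term, M)]
        P = [[P[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return P

a, b, t = 0.7, 0.3, 1.5                    # illustrative rates and elapsed time
Q = [[-a, a], [b, -b]]                     # generator of a two-state chain
P = transition_matrix(Q, t)
# Closed form for the two-state chain: P_{01}(t) = a/(a+b) * (1 - e^{-(a+b)t})
p01 = a / (a + b) * (1 - math.exp(-(a + b) * t))
```

Each row of the computed matrix should sum to one, and the (0,1) entry should match the closed form.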
cs/0007044 | We first compute the expected number of surviving tuples MATH whose values MATH migrate to MATH. Given a value MATH, there are MATH tuples at time MATH such that MATH. Using the previous results, the expected number of these tuples surviving through time MATH is MATH, and the probability of each surviving tuple MATH having MATH is MATH. Using the independence assumption and summing over all MATH, one has that the expected number of tuples in MATH that survive through MATH and have MATH is MATH . We next consider newly inserted tuples. Recall that MATH denotes the probability that MATH, given that MATH is inserted into MATH at time MATH. Suppose that an insertion occurs at time MATH. The expected number of tuples MATH created at this insertion that both survive until MATH and have MATH is MATH . By logic similar to REF , one may then conclude that the expected number of newly inserted tuples that survive through time MATH and have MATH is MATH . The result follows by adding the last two expressions. |
cs/0007044 | Let MATH, which is a monotonically nondecreasing function. From REF , MATH for all MATH. We have MATH. By applying the monotonic function MATH to both sides of the inequality MATH, one has that MATH for all MATH. Substituting in the definitions of MATH and MATH, one then obtains MATH for all MATH, and therefore MATH. |
cs/0007044 | In this case, we note that the random variable MATH is identical to MATH (using the notation of REF), and is independent of MATH. The number MATH of modification events in MATH has a NAME distribution with mean MATH, and hence variance MATH. Therefore we have, for any MATH, MATH . |
hep-th/0007163 | The uniqueness of such a function follows immediately from the NAME - NAME Theorem. To show the existence we shall consider the following function MATH . The set of REF and the constraint MATH (the latter means that MATH is an elliptic function) form a system of MATH equations for the functions MATH . For generic data this system is non-degenerate. Then it has a unique solution (up to permutations) and therefore defines the function MATH uniquely. |
hep-th/0007163 | Consider a function MATH. It is straightforward to check that the function MATH satisfies all the defining properties of the function MATH. The uniqueness of MATH implies that MATH. |
hep-th/0007163 | To obtain these equations one has to divide REF by MATH and compare the residues of both sides of the resulting equation at the points MATH . |
hep-th/0007163 | Let a function MATH be a solution to the differential equation MATH . We define new variables MATH, MATH by the conditions MATH. In terms of these variables system REF has the following form MATH . The rest of the proof is parallel to the genus one case considered above. |
hep-th/0007163 | The uniqueness of such a function (provided it exists) follows immediately from the NAME - NAME Theorem. To show the existence we write this function down explicitly in terms of the NAME MATH-function. The NAME MATH-function associated with an algebraic curve MATH of genus MATH is an entire function of MATH complex variables MATH, defined by its NAME expansion MATH where MATH is a matrix of MATH-periods, MATH, of normalized holomorphic differentials MATH on MATH: MATH. Here MATH is a basis of cycles on MATH with the canonical matrix of intersections: MATH, MATH. The MATH-function has the following monodromy properties with respect to the lattice MATH, spanned by the basis vectors MATH and the vectors MATH with coordinates MATH: MATH where MATH is an integer vector, MATH. The complex torus MATH is called the Jacobian variety of the algebraic curve MATH. The vector MATH with coordinates MATH defines the so-called NAME transform: MATH. According to the NAME - NAME theorem, if the divisors MATH and MATH, where MATH , are in general position, then there exists a unique meromorphic function MATH such that MATH is its divisor of poles and MATH. It can be written in the form: MATH where MATH where MATH is the vector of NAME constants. Let MATH be the unique meromorphic differential on MATH which is holomorphic outside MATH, where it has a double pole, and is normalized by REF . It defines a vector MATH with the coordinates MATH . Define functions MATH by MATH . Let a vector MATH, MATH be a solution to the system MATH . The function MATH satisfies REF - REF . |
hep-th/0007163 | Consider a function MATH. The projections MATH of its zeros onto the MATH-plane satisfy matrix REF . The first MATH equations ensure that restrictions REF are satisfied. Therefore the polynomial MATH is of degree MATH, and the last MATH equations in system REF state that the points MATH are its zeros. The coefficients of MATH are then time-independent, that is, MATH, MATH. The last set of equations is equivalent to system REF . |
hep-th/0007194 | It follows easily from REF that MATH; since MATH the result follows. |
hep-th/0007194 | By REF , MATH . It follows that MATH. |
hep-th/0007194 | The vector fields MATH commute with MATH; it follows that MATH . Written out explicitly, this equation says that MATH . Applying the operator MATH, MATH, we obtain MATH . The lemma is an immediate consequence of these formulas. |
hep-th/0007194 | Since MATH, REF implies that MATH . Since MATH by REF, we conclude that MATH. By REF , the restriction of MATH to the small phase space equals MATH. It follows that the restriction of MATH to the small phase space equals MATH; hence the functions MATH form a coordinate system in a neighbourhood of the small phase space. |
hep-th/0007194 | By the genus MATH dilaton equation MATH, we have MATH, and the formula for MATH follows. |
hep-th/0007194 | Let MATH; then MATH and MATH. Thus MATH satisfies the equations MATH for MATH, and MATH, as well as the dilaton equation MATH, and is thus determined by the restrictions of the partial derivatives MATH to the small phase space. But these vanish; we conclude that MATH. |
math-ph/0007003 | This proof is from Ref. CITE. Define MATH by MATH for MATH, and MATH for MATH. Clearly MATH. Computing the strain MATH by REF gives MATH showing that MATH is a smooth isometry with MATH. |
math-ph/0007003 | Setting MATH and MATH in the definition of a flat bilinear form and using the symmetry of MATH gives MATH for every MATH. We will use this result repeatedly in the proof of the lemma. Let MATH and MATH. We need to demonstrate the existence of at least MATH linearly independent vectors MATH such that MATH for all MATH and MATH. We only need to consider the case MATH, as there is nothing to prove for MATH. It suffices to demonstrate the existence of a single MATH satisfying MATH for all MATH. To see this, note that, having found one such MATH, MATH naturally induces a map MATH, where MATH is the subspace spanned by MATH, so that MATH is a vector space of dimension MATH; we then repeat the demonstration of the existence of a vector MATH. Continue in this way, reducing the dimension of the quotient by one at each step, until that dimension has been reduced to MATH. In fact, it suffices to find a nonzero MATH satisfying MATH, for this automatically implies MATH for all MATH by MATH and the positive-definiteness of MATH. To exhibit a suitable MATH, we first construct a maximal set of non-zero vectors MATH such that MATH for MATH. We will construct this set inductively. Let MATH be a set of MATH elements of MATH, such that MATH for MATH. Such a set always exists since we can choose MATH and set MATH to be any non-zero vector in MATH. This set can be enlarged, since we can show the existence of a MATH-th vector, MATH, such that the above holds for MATH. To this end, set MATH. Choose any MATH in MATH with MATH. This is always possible, since MATH, and to find such a MATH, we need to solve MATH linear equations in MATH unknowns. Then MATH, with MATH, MATH, yields MATH where we have used the bilinearity of MATH and MATH . But for each MATH, we have MATH by MATH and the positive-definiteness of MATH . It follows that MATH for all MATH. So, MATH is the desired vector. 
Applying the result of the paragraph above MATH times, construct nonzero vectors MATH, with MATH, for MATH. These must be dependent, that is, we must have MATH, with at least one MATH, say MATH, nonzero. But now we have MATH. We conclude that MATH. Setting MATH gives the existence of a MATH such that MATH and, as remarked above, this suffices to prove the lemma. |
math-ph/0007003 | CASE: Set MATH. Let MATH for all points in the open subset MATH. Let MATH be the orthogonal complement of MATH in MATH. Evidently the dimension of MATH is MATH. On the other hand, we may express MATH via MATH: given MATH, there exist MATH and MATH such that MATH . Take smooth local extensions MATH and MATH. Clearly, MATH . By continuity, the vector fields MATH are linearly independent in a neighborhood of MATH. MATH is open, so we can choose this neighborhood in such a way that MATH. Therefore, in this neighborhood MATH . This implies that MATH is a smooth distribution on MATH and consequently, so is MATH. CASE: This follows immediately from the above argument by noting that the set MATH is the set of all MATH with MATH. |
math-ph/0007003 | Given a set MATH, let MATH denote the boundary of MATH, MATH the closure of MATH and MATH the interior of MATH. First, we note that since MATH is closed in MATH, MATH is nowhere dense. This can be shown as follows: since MATH is closed, MATH. Therefore, MATH. However, MATH. Consequently, it follows that MATH. Define MATH as MATH . Clearly, MATH is open. MATH is a closed subset of MATH with a nonempty interior, so it is a set of the second category. REF implies that each MATH is closed in MATH. Therefore, the preceding argument along with the NAME category theorem implies that MATH is dense in MATH. Now, we set MATH. Clearly, MATH. Also, MATH implies that MATH . Assume that MATH is nonempty and let MATH be an arbitrary point. Then, MATH. Therefore, MATH, where MATH is the complement of MATH and is open in MATH. From the definition of MATH, it is clear that MATH. Since MATH is closed, MATH. Therefore, we have MATH . This implies that MATH is an interior point, since MATH is an intersection of open sets, which gives an open neighborhood of MATH contained in MATH. Since MATH was an arbitrary point in MATH, it follows that every point in MATH is an interior point, so MATH is open. |
math-ph/0007003 | CASE: Let MATH and MATH. Then MATH, so that MATH, and MATH for any arbitrary vector MATH. Therefore, MATH . Using REF , we obtain MATH . This implies that MATH. Also, MATH . A similar argument shows that MATH. Therefore, MATH. By NAME 's Theorem cited above, MATH is an integrable distribution in MATH. By a similar argument, MATH is also integrable in MATH. Since MATH is the subspace of relative nullity, it follows that MATH is totally geodesic in both MATH and MATH. CASE: It can be shown that, given any MATH, there exists a unique vector field MATH on MATH defined by MATH, and MATH extends smoothly to MATH. Let MATH be a vector that is parallel transported along MATH. A calculation shows that MATH is constant along MATH. Therefore, if MATH, then MATH for all MATH. Therefore, if MATH is any vector in MATH, it follows that MATH, so that MATH. Consequently, MATH. Since MATH is closed by REF , MATH. Combining these results, we have MATH. |
math-ph/0007003 | CASE: Let MATH. By REF , MATH. We see that MATH, which is open by REF , where, as before, MATH denotes the complement of MATH. Let MATH, where MATH is the open ball with radius MATH. Then MATH is a nonempty open set with MATH for all MATH . If MATH for all MATH, then by REF MATH is smooth and integrable on MATH, so that we can extend the geodesic beyond MATH remaining in a leaf of MATH, thereby contradicting the maximality of the geodesic MATH. Therefore, there exists a MATH such that MATH, which is open by REF . Therefore, MATH is a nonempty open set. Now, MATH is dense, so MATH is nonempty. Choose any MATH. By the triangle inequality for the metric on MATH, we have MATH . MATH implies that MATH. CASE: This follows immediately from the preceding result by REF and the definition of MATH. |
math-ph/0007003 | We begin with the following observations, which are easily verified using the fact that geodesics are extremal curves for the arc length. CASE: If MATH, the unit speed geodesic through MATH and MATH is unique (up to reparameterization) and is given by MATH . CASE: Every unit speed curve MATH with MATH and MATH has MATH, with equality if and only if the curve MATH is identical to the curve MATH above. Assume that MATH. In this case MATH is a unit speed curve with MATH and MATH, so that by REF above MATH is a unit speed geodesic in MATH. By the uniqueness of the geodesic, it follows that MATH, so that MATH. We will now show the converse. Using the standard identification between MATH and MATH, where MATH is an arbitrary point, we have MATH and MATH . Since MATH is an isometry, MATH implies that MATH, so that MATH . |
math-ph/0007003 | We will identify the tangent spaces MATH and MATH with MATH and MATH respectively by the standard exponential map. Since MATH, REF implies that MATH where MATH and MATH . Similarly, MATH where MATH and MATH . Since MATH is an isometric immersion, MATH . This along with MATH and MATH, and using the identification of the tangent spaces with MATH and MATH yields MATH . Therefore, MATH . |
math-ph/0007003 | We will first show that for every MATH, where MATH is as defined in REF , there exists a MATH such that MATH. We will prove this proposition by induction. REF implies that this statement is true if MATH. Note that the statement is trivially true for all MATH if MATH. We will assume that this statement is true for MATH for MATH. Let MATH. REF implies that either CASE: The maximal geodesic starting at MATH ends on the boundary, in which case REF proves the proposition, or CASE: There exists a MATH such that the geodesic cannot be extended in a leaf of MATH beyond MATH. In this case, REF implies that MATH. Let MATH be a decreasing sequence with MATH. By REF , there exists a sequence MATH with MATH and MATH for MATH. By the induction hypothesis, there exists a MATH such that MATH. This procedure defines a sequence MATH. Since MATH is compact, there exists an accumulation point MATH and a subsequence MATH with MATH as MATH such that MATH . It follows from the continuity of the functions MATH and MATH with respect to both arguments, and the fact that MATH, that MATH. We saw above that MATH. REF now implies that MATH. This proves the proposition. The proof of the theorem follows from noting that, since MATH is dense in MATH, there is a sequence MATH such that MATH, where MATH. By the proposition, there exists a sequence MATH with MATH. Using the compactness of MATH to extract a convergent subsequence and using the continuity of MATH and MATH, it follows that there exists a MATH such that MATH. Therefore, MATH . |
math-ph/0007006 | See NAME 's book CITE for a proof of a more general result. An outline of the proof is as follows: NAME first transforms the equation into another complex MATH-plane by using the NAME transform. He then compares MATH with the solutions of the sine equation MATH, and finally transforms back to the original complex MATH-plane. Thus the above asymptotic expressions are those for solutions of the sine equation (in the MATH-variable), expressed in terms of the original MATH-variable. The NAME regions are determined by the NAME transformation. Also, we can deduce the last assertion of the theorem from CITE. This is proved in CITE for more general equations. |
math-ph/0007006 | Let MATH be an eigenfunction with eigenvalue MATH, so that MATH where MATH for some MATH, MATH. Write MATH . Fix MATH with MATH. Let MATH. Then MATH and MATH . Thus our ODE becomes MATH . Then we multiply this by MATH, integrate and use integration by parts to get MATH for all MATH, where we note that the line MATH stays in the left- and right-hand NAME regions where MATH (and hence MATH) decays exponentially to zero as MATH approaches infinity. Taking the real part of REF gives (since MATH) MATH . But MATH if MATH and MATH (certainly true if MATH). So from REF we conclude that MATH for all MATH. That is, MATH for all MATH . Taking MATH gives MATH, in particular MATH and taking MATH gives MATH . Then finally using MATH, we have MATH . That is, MATH . |
math-ph/0007006 | The main idea of the proof is the same as that of the proof of REF . Although REF is a little different from the equation for REF , the NAME regions for REF are the same as for REF (if MATH has the same sign as MATH) or else are rotated by MATH (if MATH has the opposite sign). See Section MATH in CITE for details. But in either case, the lines MATH with MATH lie within the left- and right-hand NAME regions, where we impose the zero boundary conditions. This gives the integrability needed in the proof. Let MATH. Then, as we derived REF in the proof of REF , we have MATH . Since MATH for all MATH, we get MATH by letting MATH in REF . (This is true for any MATH with real coefficients and MATH.) If we find conditions on MATH and MATH such that MATH for every MATH and MATH, then we have from REF with MATH that MATH . So then MATH as desired, as in the proof of REF . When MATH, we can rewrite the expression in REF as MATH times MATH . Now REF is nonnegative if each quadratic in MATH has non-positive discriminant: MATH, which is REF . The coefficients of MATH in REF are all increasing functions of MATH, and so it suffices that REF hold at MATH. When MATH, it is easy to see similarly that the theorem holds. This completes the proof. |
math-ph/0007006 | Let MATH for MATH. Then MATH and MATH . Hence by integration by parts, MATH . Now by the formula MATH and splitting real and imaginary parts of the above, we get the lemma. |
math-ph/0007006 | For any MATH, by REF with MATH and by MATH, we have that MATH and this is negative for MATH with MATH, because then MATH for all MATH and so MATH. This argument also shows that MATH in MATH; see REF . Similarly in MATH, for all MATH we have that MATH. For MATH (so that MATH), we use REF along vertical line segments starting from points on the line MATH to conclude MATH in this region. |
math-ph/0007006 | In the regions MATH and MATH, we use REF with horizontal lines to infinity to get the statements in REF and MATH of this theorem. In the region MATH between MATH and MATH with the real part less than or equal to that of the zero of MATH in the third quadrant (see REF ), we use REF with vertical lines MATH to show that MATH. That is, we use MATH. If MATH, we can find MATH, so that MATH, and MATH. Hence, the above integral is an increasing function of MATH. Hence we have the desired result in this region MATH. The region below MATH is contained in MATH since the rightmost point of MATH lies in MATH (see REF ). So a similar argument shows that MATH in the region below MATH. Also, the region below MATH is contained in MATH (see REF ) and so modified arguments show that the other statements of this theorem in REF are true. |
math-ph/0007006 | On horizontal line segments MATH in MATH, REF becomes MATH . Since MATH in MATH, MATH is a strictly increasing function of MATH on each horizontal line segment in MATH. |
math-ph/0007006 | The proofs of REF give everything except the last statement of the theorem. For that, recall we can take MATH by REF ; this implies MATH is an odd function with respect to reflection in the imaginary axis, so MATH on the whole imaginary axis. Now we use REF to complete the proof. |
math-ph/0007006 | We will first prove this for MATH. Suppose that MATH. Then we could find a vertical line segment MATH in MATH whose end points are MATH and MATH. We apply REF to this line segment to get MATH . This would imply MATH on the curve MATH, since MATH in MATH. Then, since MATH is analytic, MATH in MATH. This is a contradiction. Hence MATH. Similarly, suppose that MATH. Then we could find a smooth curve MATH in MATH such that MATH, MATH, MATH and MATH. Note that MATH and MATH in MATH. This contradicts REF , as in the case of MATH. We now see that the above argument still holds for MATH. Proof of REF . We use REF again and a similar argument to that in the proof of REF. |
math-ph/0007006 | Suppose MATH is an eigenfunction of REF with eigenvalue MATH with MATH. Since MATH has infinitely many zeros in MATH (by the paragraph shortly before REF ), certainly MATH also has infinitely many zeros in MATH. Proof of REF . Suppose that MATH for some point MATH on the imaginary axis. By REF with MATH, we have that MATH . So then since MATH, MATH at every point MATH for MATH. Now by REF , we have that MATH at every MATH for MATH and MATH. Thus MATH does not have any zeros in MATH. The entire function MATH does not have infinitely many zeros in any bounded region. So if MATH had infinitely many zeros in MATH, then MATH would have infinitely many zeros in MATH. But if MATH has a zero MATH in MATH, then by REF MATH, MATH has no zeros in MATH. So then MATH would have infinitely many zeros in a bounded region. This is a contradiction. Thus MATH has infinitely many zeros in MATH and at most finitely many zeros in MATH. Conversely, suppose that MATH has infinitely many zeros in MATH and at most finitely many zeros in MATH. Choose a zero MATH in MATH. Then by REF , we see that MATH at MATH since MATH. Proof of REF . Suppose that MATH for every point on the imaginary axis. Then by REF , MATH for every point in MATH. So then MATH has no zeros in MATH. Now since we know that MATH has infinitely many zeros in MATH, MATH must have infinitely many zeros in MATH. Conversely, suppose MATH for some point on the imaginary axis. Then MATH would have at most finitely many zeros in MATH by the argument as in the proof of REF. This completes the proof. |
math-ph/0007006 | We omit the proof because it is very similar to the proof of REF . We use REF instead of REF , and also make use of REF . |
math-ph/0007006 | It is useful to have the following two formulas, which follow from multiplying REF by MATH and separating real and imaginary parts: MATH and MATH . At the end of the proof we will justify the fact that we can differentiate through the integrals that follow. MATH . This is clear by integrating REF , using the zero boundary conditions in the left- and right-hand NAME regions. MATH . Suppose MATH is a nonnegative integer. Then MATH where the last step is by integration by parts. So using REF , we have that MATH . Hence, MATH . Then again using the integration by parts and REF , we have that this equals MATH . Also, we differentiate REF without applying integration by parts: MATH . Also, applying integration by parts twice to the right-hand side of REF , we have that REF equals MATH . By equating REF , we get MATH . Hence equating REF and substituting REF give MATH. MATH . Suppose that MATH. Then MATH by integration by parts. This with REF gives MATH. MATH . Suppose MATH. Then we differentiate through REF with respect to MATH again to get MATH . But MATH, and so applying integration by parts again to the last term gives MATH. Now to complete the proof we need to show that we can differentiate through the above integrals, which reduces to showing that MATH . So we estimate the following: MATH where MATH . So then it is enough to show that for each MATH, there exist MATH and MATH such that MATH . Proof of REF is as follows: Suppose that MATH with MATH. Then MATH is in the decaying NAME regions (see REF ). By the asymptotic REF we get that for some MATH, MATH for all MATH and large MATH, say MATH. Also choose MATH. Now the NAME integral formula says that MATH . So then MATH where the last inequality holds if MATH. Choose MATH. Then MATH if MATH for large MATH. The region where MATH and MATH covers all of MATH but a bounded region. 
Since the minimum of MATH in this bounded region is strictly positive, we can find a large MATH so that the left-hand side of REF is bounded by MATH. Thus REF holds since MATH. And this completes the proof. |
math-ph/0007006 | This is a consequence of REF with MATH, or it can be proved using the subharmonicity of MATH . |
math-ph/0007007 | The first inequality follows from the strict positivity of MATH. The upper bound can be obtained by choosing the projection onto the ground state of the Hamiltonian MATH as a test density matrix MATH. Then we apply the bound on the repulsion by attraction, REF , together with the observation that the nucleus has to be at the point of the maximum of the potential MATH (otherwise the energy could be lowered by shifting MATH). |
math-ph/0007007 | Applying the diamagnetic inequality CITE to the lower bound in REF , we get MATH . To get the upper bound, we apply NAME 's inequality REF MATH to the upper bound in REF . |
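For reference, the diamagnetic inequality applied in this row has the following generic form (the document's own wave function, vector potential, and norms are anonymized as MATH):

```latex
% Diamagnetic inequality (generic form; A is the magnetic vector
% potential, assumed locally square-integrable):
\[
\bigl|\nabla|\psi|(x)\bigr| \;\le\; \bigl|(\nabla + iA(x))\psi(x)\bigr|
\quad\text{a.e.},
\qquad\text{so}\qquad
\int \bigl|\nabla|\psi|\bigr|^{2} \;\le\; \int \bigl|(\nabla + iA)\psi\bigr|^{2}.
\]
```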
math-ph/0007007 | We follow closely the proof of the analogous REF. Let MATH be a minimizing sequence for MATH with MATH. First note that MATH is bounded above, because MATH is bounded above, and since the other contributions to the energy are bounded relative to MATH. Now we show that the magnetic-kinetic energy is bounded below by the REF-norm of MATH: Using the diamagnetic inequality (the prerequisite for the final result in CITE) MATH and the decomposition of MATH as in REF , we get MATH . Moreover, using the NAME inequality, MATH . Because MATH this gives MATH where we have used the NAME inequality in the last step. We can then conclude that the corresponding sequence MATH is bounded in MATH and MATH is bounded in MATH. Therefore, for each MATH, there exists a subsequence, again denoted by MATH, that converges to some MATH weakly in MATH, and pointwise almost everywhere. It follows from weak convergence that MATH and MATH. From NAME 's lemma we infer that MATH . Moreover, since MATH for every MATH, and choosing MATH the dual of MATH, we see that MATH . Since the MATH's are trace-class operators on MATH, we can pass to a subsequence such that for some MATH for every compact operator MATH. (Here we used the NAME Theorem and the fact that the dual of the compact operators is the space of trace-class operators.) In particular, we have MATH in the weak operator sense. It is clear that MATH. Now let MATH be an orthonormal basis for MATH. Again by NAME 's Lemma MATH . In the same way one shows that MATH . It remains to show that MATH. We already mentioned that for some constant MATH for all MATH. It follows from REF that MATH weakly on the dense set of MATH functions. Since the operators are bounded, REF holds weakly in MATH. Now consider some MATH acting as a multiplication operator on MATH. It is easy to see that MATH is relatively compact with respect to MATH, i.e., MATH is compact.
In fact, it is NAME, because the trace of its square is given by MATH with the NAME MATH, and this is bounded by NAME 's inequality. From REF , we infer that MATH is compact (it is even NAME). Thus there exists a sequence MATH of finite-rank operators which approximates MATH in norm. We have MATH where we have used REF . Hence MATH in the sense of distributions. Because we already know that MATH converges to MATH pointwise almost everywhere, we conclude that MATH. We have thus shown that there exists a MATH with MATH and MATH, from which we conclude that MATH. |
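The subsequence extraction in this proof rests on the duality between compact and trace-class operators together with weak-* compactness. In generic notation (the density matrices and test operators are anonymized as MATH above):

```latex
% Duality behind the subsequence argument (generic statement): the
% trace class S^1(H) is the dual of the compact operators K(H) via
\[
\mathcal{K}(\mathcal{H})^{*} \cong \mathcal{S}^{1}(\mathcal{H}),
\qquad
\langle\gamma, K\rangle = \operatorname{Tr}(\gamma K),
\]
% so any sequence with \sup_n \operatorname{Tr}|\gamma_n| < \infty
% admits a weak-* convergent subsequence (Banach--Alaoglu):
\[
\operatorname{Tr}(\gamma_{n_k} K) \longrightarrow \operatorname{Tr}(\gamma K)
\quad\text{for every compact } K.
\]
```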
math-ph/0007007 | This follows immediately from the strict convexity of MATH. |
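The strict-convexity argument is the standard uniqueness proof; sketched in generic notation (the functional and densities are anonymized as MATH in the row above):

```latex
% Strict convexity rules out two distinct minimizers: if
% \rho_1 \neq \rho_2 both minimized the functional, the admissible
% midpoint would have strictly lower energy,
\[
\mathcal{E}\Bigl(\tfrac{\rho_1+\rho_2}{2}\Bigr)
< \tfrac{1}{2}\mathcal{E}(\rho_1) + \tfrac{1}{2}\mathcal{E}(\rho_2)
= \min\mathcal{E},
\]
% a contradiction, so the minimizer is unique.
```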
math-ph/0007007 | (We proceed essentially as in CITE.) For any MATH . In particular, for all MATH, MATH . Now if there exists a MATH with MATH and MATH, we can choose MATH small enough to conclude that MATH, which contradicts the fact that MATH minimizes MATH. |
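The contradiction at the end of this row is the usual first-order variation argument. Since the functional and perturbation are anonymized as MATH, here is a generic sketch of the step:

```latex
% First-order expansion along an admissible direction \sigma
% (generic notation, not the paper's exact display):
\[
\mathcal{E}(\rho + \varepsilon\sigma)
= \mathcal{E}(\rho)
+ \varepsilon \int \frac{\delta\mathcal{E}}{\delta\rho}\,\sigma
+ o(\varepsilon),
\]
% so a strictly negative first-order term forces
% \mathcal{E}(\rho+\varepsilon\sigma) < \mathcal{E}(\rho) for small
% enough \varepsilon > 0, contradicting minimality.
```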
math-ph/0007007 | For MATH this was shown in CITE, and MATH was computed numerically to be MATH in CITE. Fix MATH. We assume that MATH, and will show that MATH has an eigenvalue strictly below zero. Using MATH with some MATH as a (not normalized) test function we compute MATH where MATH is the potential generated by the charge distribution MATH, that is, MATH . Note that MATH for MATH, so we can choose MATH small enough to conclude that MATH has an eigenvalue strictly below zero and binds more charge than MATH. |
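The test-function computation in this row relies on the variational characterization of the bottom of the spectrum; in generic form (the trial function and Hamiltonian are anonymized as MATH):

```latex
% Variational principle for the bottom of the spectrum:
\[
\inf\operatorname{spec}(H) \;\le\;
\frac{\langle\psi, H\psi\rangle}{\langle\psi,\psi\rangle}
\quad\text{for any trial } \psi \neq 0,
\]
% so a single (not necessarily normalized) trial function with
% \langle\psi,H\psi\rangle < 0 already gives \inf\operatorname{spec}(H) < 0;
% when the essential spectrum starts at 0, this infimum is attained
% by a discrete eigenvalue below it.
```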
math-ph/0007007 | This follows from REF and the bound on MATH given in CITE (see REF below). |
math-ph/0007007 | For MATH, that is, the hydrogen atom, this was shown in CITE (and also in CITE). So we can restrict ourselves to considering the case MATH. Let MATH be a normalized ground state for MATH, with angular momentum MATH. Define MATH . The function MATH is continuous and bounded. It achieves its minimal value at MATH, because otherwise one could lower the energy by translating MATH. Strictly speaking, MATH with MATH and MATH . The phase in REF is chosen such that the kinetic energy remains invariant. (Note that translating MATH is equivalent to changing the gauge of the magnetic potential.) In the sense of distributions, MATH with MATH. The function MATH is pointwise strictly positive and continuous. Assume now that MATH. Since, by REF , MATH for some MATH, there is a MATH such that MATH that is, MATH is superharmonic in some open region containing MATH. This contradicts the fact that MATH achieves its minimum at MATH. As a consequence, a ground state of MATH must have angular momentum MATH. |
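The contradiction in this proof uses the minimum principle for superharmonic functions; stated generically (the paper's function and minimum point are anonymized as MATH):

```latex
% Mean-value property of superharmonic functions: if -\Delta u \ge 0
% on an open set containing the ball B_r(x_0), then
\[
u(x_0) \;\ge\; \frac{1}{|B_r(x_0)|}\int_{B_r(x_0)} u(y)\,dy,
\]
% so a non-constant superharmonic function cannot attain its minimum
% at the interior point x_0.
```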
math-ph/0007007 | Mimicking the proof of the last theorem, we see that if MATH has a ground state, the corresponding wave function MATH has MATH. Hence it is given by REF . |
math-ph/0007007 | Fix MATH, and let MATH be the minimizer of the density functional MATH under the constraint MATH. Using MATH as a trial function we get MATH . It is well known that MATH falls off more quickly than the inverse of any polynomial, so MATH is finite CITE. For the converse, we estimate MATH . The observation MATH then leads to REF . |
math-ph/0007007 | For MATH we use again REF , and the analogous inequality in the other direction for MATH, to conclude that MATH where we have also used REF . With the aid of REF one easily sees that MATH . Using again MATH we finally get MATH . In particular, if we choose MATH, we arrive at the desired result, as long as MATH is large enough to ensure MATH. But if MATH, REF holds trivially because of the positivity of MATH. |
math-ph/0007007 | The equation for the kinetic energy is obvious by definition of MATH. We calculate the energy of attraction as MATH where MATH . For the term in square brackets we use REF, MATH to estimate the difference from the expected limit in the NAME: MATH . For the repulsion MATH we calculate MATH where MATH . The inequality is supplied by REF, with an adaptation due to the different notation concerning the normalization. After inserting REF , the integrals can be evaluated as MATH . |
math-ph/0007007 | Combining all the bounds of the Lemma, and using MATH to simplify, gives MATH . For the upper bound on MATH we specify MATH as MATH, the minimizing wave function for the NAME; see REF or the following REF . It remains MATH for all MATH, so these wave functions are normalized to MATH. Since the variational principle with fixed norm can be replaced by a variational principle with bounded norm, as we remarked in Subsection REF, the wave function MATH is suitable for the upper bounds for all MATH. MATH is the kinetic energy in the hyperstrong theory. Using the virial REF , we may replace MATH on the right side of REF by MATH. With this choice of MATH, REF gives the upper bound MATH . To derive a lower bound, we use the error bound for confining the theory to the lowest NAME band, which we estimated in Subsection REF. By REF we have, for MATH large enough, MATH . This confined NAME theory also has rotation-invariant minimizers, since the lowest NAME band is mapped onto itself by rotations around the z-axis. Moreover, since the potential is superharmonic, the minimizer of this confined theory also has MATH; see REF of Subsection REF. The variational principle can therefore be restricted to states MATH, where the wave functions MATH are specified as in REF , with general MATH, normalized to MATH: MATH . The comparison with NAME in REF is now used as a lower bound MATH . The right side of this inequality can be considered as a functional similar to the NAME, but with the constant MATH in front of the kinetic energy. With an appropriate scale transformation, this is equivalent to a NAME multiplied by MATH, as long as this parameter is positive. Taking the infima of both sides, we conclude that MATH for MATH large enough. Considering the limit MATH at constant MATH we infer MATH . Combining this with REF proves, together with REF , the theorem. |