cs/0107007
As a first step we prove that the algorithm computes the maximal flow. Let MATH be the minimal index such that MATH is not saturated after termination of the algorithm and let MATH be the minimal index such that MATH. We define a partition MATH of the nodes by MATH . It is immediate from the definition of MATH that the edges MATH are saturated. Since MATH, and MATH is the minimal value such that MATH, we have MATH for MATH. Since MATH is not saturated, all edges MATH must be saturated. From the definition of MATH and the non-negativity of the portfolio vector it is easy to see that edges MATH with MATH, MATH and positive capacity cannot exist. Thus, every edge MATH with MATH and MATH is saturated. The NAME Theorem then implies that the algorithm indeed computes a maximal flow. Since in each loop iteration either index MATH is decremented or index MATH is incremented, and since each of MATH and MATH can take on only MATH different values before the algorithm terminates, there are at most MATH loop iterations, and the linear running-time bound follows.
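Since every formula here is elided (MATH), the following is only an illustrative sketch: a northwest-corner-style greedy filling of a table with prescribed row and column sums, chosen because it has exactly the loop structure the proof analyzes (each iteration saturates a row or a column, so one of the two indices advances and the loop runs a linear number of times). All names and the direction of the pointer moves are hypothetical, not the authors' exact procedure.

```python
def greedy_transport(row_sums, col_sums):
    """Greedy 'northwest corner' style filling of a contingency table.

    Each loop iteration saturates either the current row or the current
    column, so one of the two pointers advances and the loop runs at
    most len(row_sums) + len(col_sums) times -- the linear bound
    argued in the proof.
    """
    flow = [[0] * len(col_sums) for _ in row_sums]
    r = [*row_sums]          # remaining row capacities
    c = [*col_sums]          # remaining column capacities
    i = j = 0
    while i < len(r) and j < len(c):
        x = min(r[i], c[j])  # push as much flow as both ends allow
        flow[i][j] = x
        r[i] -= x
        c[j] -= x
        if r[i] == 0:        # row saturated: move to the next row
            i += 1
        else:                # column saturated: move to the next column
            j += 1
    return flow
```

When the row sums and column sums have equal totals, the resulting table realizes both marginals exactly.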
cs/0107007
Starting with the first slope MATH we build up a binary tree. Each node is labeled with a pair of real entries MATH. The leaves of the tree correspond to the rows and the columns in the following way. Starting from column MATH we add leaves from left to right. We add leaves with labels MATH, MATH, MATH, until we reach a row index MATH such that MATH, that is, this index is the last under the crucial line. To be precise we let MATH; note that it may be the case that MATH, so this sequence of leaves may be empty. Then we add the leaf MATH. Next, we consider column MATH and add leaves MATH, until we reach an index MATH such that MATH. Then we add the leaf MATH and proceed similarly with column MATH. Note that the order in which leaves are added is crucial to this data structure; the correctness of the algorithm depends on it. Starting from left to right we group the leaves in pairs of REF and build a parent node for each pair according to the following rule MATH . We build MATH layers iteratively, until we reach a single root node MATH. It is easy to see that this tree-based algorithm imitates the greedy algorithm described before and that MATH is exactly the flow value. Building this tree structure takes constant time per tree node, and since there are MATH nodes we have a total time of MATH, which is no better than the time bound of the greedy algorithm. The advantage is that we can dynamically update this data structure efficiently. We will first sort all of the MATH possible return pairs by their slope with the point MATH, so that as the slope determined by our portfolio increases we can quickly (in constant time per pair) determine which pairs are added and which are removed from our half-space of interest. This takes MATH time. To update our data structure for each point insertion/removal, all that is required is swapping the position of two neighboring leaves.
With obvious techniques, the positions of these two leaves can be found in MATH time, and we can update the tree by walking the paths from the two leaves to the root, updating each node along the way. Each update step requires MATH operations and the length of each path is bounded by MATH. Since there are at most MATH point additions and removals, each taking MATH time, it takes at most MATH time to consider all possible portfolios.
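The mechanics described above (leaves combined pairwise layer by layer, the root holding the answer, and an update touching only the leaf-to-root paths) can be sketched with a generic array-backed tree. The paper's combine rule and labels are elided (MATH), so a plain sum stands in for the rule below; any function of the two children works identically, and the names are illustrative.

```python
# A minimal array-backed binary tree over n leaves (n assumed a power of
# two for simplicity).  tree[1] is the root; the leaves sit at indices
# n .. 2n-1.  Swapping two leaves and repairing the tree touches only
# the two leaf-to-root paths, i.e. O(log n) nodes.

def build(leaves, combine):
    n = len(leaves)
    tree = [None] * n + list(leaves)
    for v in range(n - 1, 0, -1):          # fill internal nodes bottom-up
        tree[v] = combine(tree[2 * v], tree[2 * v + 1])
    return tree

def swap_leaves(tree, combine, i, j):
    """Swap leaves i and j (0-based) and recompute the two root paths."""
    n = len(tree) // 2
    a, b = n + i, n + j
    tree[a], tree[b] = tree[b], tree[a]
    for v in (a, b):
        v //= 2
        while v >= 1:                      # walk up to the root
            tree[v] = combine(tree[2 * v], tree[2 * v + 1])
            v //= 2
```

With a sum as the combine rule, the root value is invariant under leaf swaps, which makes the repair easy to check.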
cs/0107007
Given positive integers MATH, it is shown in CITE that computing the MATH-dimensional volume of the polyhedron MATH is MATH-hard. Let MATH and consider the polyhedron MATH where MATH. Note that for any valid assignment of values to MATH we have MATH, so there is a MATH that will satisfy REF . Now let MATH and define a MATH contingency table by MATH, with row sums MATH and column sums MATH. To completely define our stock problem, we must also give values for MATH, MATH, the portfolio MATH, and the threshold MATH, which we do as follows: MATH . It is straightforward to verify from these values that the return pairs in the critical region (the shaded region in REF ) are exactly the entries MATH for MATH. Therefore, the tables that satisfy our criteria, namely that MATH, are precisely those with MATH . Thus the feasible tables that meet our criteria are exactly those that correspond to points in the polyhedron MATH, and so the fraction of tables that meet the criteria is exactly the volume of MATH.
cs/0107007
Let MATH. Thus, MATH, where MATH is the density produced by the random walk. Since MATH for all MATH, it is easy to see that MATH and so MATH. By NAME's inequality, MATH . Since the samples are not entirely uniform, we must also account for the error introduced by the approximately uniform sampling distribution. Let MATH denote the uniform density over the set MATH; since the sampling distribution approximates the uniform one within bound MATH, REF implies MATH . Setting MATH, the theorem follows.
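The concentration step can be illustrated with the standard Chebyshev bound for the mean of n independent {0,1} samples (each with variance at most 1/4); the proof's actual quantities are elided (MATH), so the function names and constants below are illustrative only.

```python
from math import ceil

def chebyshev_failure_bound(n, t):
    """P(|p_hat - p| >= t) <= Var(p_hat) / t**2 <= 1 / (4 n t**2)
    for the mean p_hat of n independent {0,1} samples."""
    return 1.0 / (4 * n * t * t)

def samples_needed(t, delta):
    """Smallest n for which the bound above is at most delta."""
    return ceil(1.0 / (4 * t * t * delta))
```

The additional error from approximately uniform sampling would be added on top of this bound, as in the proof.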
cs/0107007
We prove this result by reduction from NUMERICAL-REF-DIM-MATCHING. Consider an instance of NUMERICAL-REF-DIM-MATCHING, that is, disjoint sets MATH, each containing MATH elements, a size MATH for each element MATH and a bound MATH. We would like to know if MATH can be partitioned into MATH disjoint sets such that each of these sets contains exactly one element from each of MATH, MATH, and MATH, and the sum of the elements is exactly MATH (we can change this requirement to MATH without difficulty). This problem is NAME in the strong sense, so we restrict the sizes to be bounded by a polynomial, MATH for some constant MATH. We construct an instance of the problem of computing MATH by making a contingency table in which MATH, where MATH is the number of items in set MATH with value MATH. The existence of a greedy or flow based algorithm implies the existence of a solution in which all entries in the solution table are multiples of MATH, and such a solution exists with MATH if and only if there is a valid partition of MATH. If such a partition exists, we can find it by simply taking all of the triples ``selected'' (with multiplicity determined by the integer multiple of MATH), and using elements from MATH, MATH, and MATH as determined by the three coordinates of each selected point.
cs/0107007
The problem can be modeled as a linear program whose variables correspond to the entries of the contingency table, with MATH inequalities.
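As a sketch of this modeling (all names are hypothetical; the objective and the MATH inequalities are elided), the equality part of such a linear program can be assembled with one variable per table entry plus one equation per row sum and per column sum:

```python
def contingency_lp_constraints(row_sums, col_sums):
    """Equality constraints of the LP: one variable per table entry
    x[i][j] (flattened row-major), one row-sum equation per row and one
    column-sum equation per column.  Feeding (A, b) to any LP solver
    together with x >= 0 and an objective models the table feasibility
    part of the problem; the paper's objective is elided (MATH)."""
    m, n = len(row_sums), len(col_sums)
    A, b = [], []
    for i in range(m):                       # row-sum equations
        A.append([1 if k // n == i else 0 for k in range(m * n)])
        b.append(row_sums[i])
    for j in range(n):                       # column-sum equations
        A.append([1 if k % n == j else 0 for k in range(m * n)])
        b.append(col_sums[j])
    return A, b
```

For an m-by-n table this produces m + n equations over m*n variables, matching the count stated above.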
cs/0107007
We consider the first pair of stocks MATH and MATH as in the two-dimensional case and define a new portfolio as MATH and MATH. We divide the two-dimensional plane into MATH regions by MATH equally spaced parallel lines MATH. Thus, we divide the entries of the joint distribution matrix into MATH different sets (see REF ). Each entry in the matrix corresponds to a variable, and the variables satisfy the row sum and column sum conditions of the joint distribution. Next, we sum up the entries in the MATH different sets and assign the sums to MATH new variables. By combining these sum variables from two different pairs of stocks, we get a new table with new row and column sum conditions, resulting again in MATH new sum variables. Repeating combinations in this manner, we stop after MATH iterations and the creation of MATH variables and MATH constraints, leaving just one table with REF border distributions (expressed as variables). Assuming that the variables of the border distributions correspond to the distribution of the stocks MATH and MATH, we do the following. We define a portfolio MATH and MATH for our last table and consider the line MATH, dividing our last table into two sets. The variables below that line are summed up, and we solve a linear program maximizing this sum subject to the constraints created before. Since only MATH of the MATH entries of each table are carried over to the next table, we lose some precision in each combination step. But after the first pairing at the lowest level of the binary tree, each sum variable represents a loss probability of the combination of the two stocks within an error of MATH, and it is easy to see that during the repeated combination of the stocks the error accumulates linearly in each iteration. Thus, the theorem follows.
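The banding step can be sketched as follows; the cut points, weights, and names are illustrative assumptions (the paper's values are elided as MATH), but the operation is the one described above: collapsing a joint return matrix into m band sums along equally spaced parallel lines.

```python
def aggregate_bands(P, xs, ys, w1, w2, lo, hi, m):
    """Collapse a joint return distribution P[i][j] (stock returns
    xs[i], ys[j]) into m 'band' variables: band k collects the total
    probability of all entries whose combined return w1*x + w2*y falls
    into the k-th strip between the parallel lines.  All names here are
    illustrative; the paper's cut points are elided (MATH)."""
    width = (hi - lo) / m
    bands = [0.0] * m
    for i, x in enumerate(xs):
        for j, y in enumerate(ys):
            v = w1 * x + w2 * y
            k = min(m - 1, max(0, int((v - lo) / width)))  # clamp to a band
            bands[k] += P[i][j]
    return bands
```

Each entry lands in exactly one band, so the band sums preserve the total probability while reducing the table from its full size to m variables, which is the source of the per-step error discussed above.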
cs/0107007
The proof is based on a similar construction as the approximation algorithm and is omitted for brevity.
cs/0107010
The number of positions that can be occupied by a minimal block of size MATH is MATH for each input, or MATH for all inputs. Consider an input MATH with a minimal block MATH of size MATH. Block MATH has MATH nonempty subsets; label them MATH. By the minimality of MATH, for each MATH the input MATH has MATH as minimal blocks if MATH, and MATH as a minimal block if MATH. Therefore MATH cannot have MATH as a minimal block. So of the MATH positions, only one out of MATH can be occupied by a minimal block of size MATH. When MATH an additional factor of MATH is needed, since MATH has MATH as a minimal block.
cs/0107010
REF takes time MATH, totaled over all inputs. Let us analyze REF , which identifies the minimal blocks. For each input MATH, every block MATH that is selected is minimal, since each non-minimal block in MATH was removed in a previous iteration. Furthermore, for each block MATH the number of removals of MATH blocks is less than MATH. Therefore the total number of removals is at most MATH, which sums to MATH. Since each removal takes MATH time, the total time is MATH. We next analyze REF , which creates the MATH lists MATH. Since each minimal block MATH is contained in MATH sets of variables, the total number of insertions is at most MATH for input MATH. So the time is MATH by the previous calculation. Finally we analyze REF , which computes block sensitivity using the minimal blocks. Each MATH evaluation is performed at most once, and involves looping through a list of minimal blocks contained in MATH, with each iteration taking MATH time. For each block MATH, the number of distinct MATH pairs such that MATH is at most MATH. Therefore, again, the time for each input MATH is at most MATH, and a bound of MATH follows.
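Block sensitivity itself, and the role of minimal sensitive blocks, can be sketched by brute force; this is practical only for tiny n, and the paper's asymptotically efficient bookkeeping with the lists (elided as MATH) is not reproduced here.

```python
from itertools import combinations

def block_sensitivity(f, x, n):
    """Brute-force block sensitivity of f at x (a tuple of n bits): the
    maximum number of pairwise disjoint blocks B with f(x^B) != f(x).
    As in the proof, only minimal sensitive blocks need be kept."""
    fx = f(x)

    def flip(x, B):
        return tuple(bit ^ (i in B) for i, bit in enumerate(x))

    sensitive = [frozenset(B)
                 for r in range(1, n + 1)
                 for B in combinations(range(n), r)
                 if f(flip(x, B)) != fx]
    # a sensitive block is minimal if it has no proper sensitive subset
    minimal = [B for B in sensitive if not any(C < B for C in sensitive)]
    for r in range(len(minimal), 0, -1):     # largest disjoint family wins
        for choice in combinations(minimal, r):
            if all(A.isdisjoint(B) for A, B in combinations(choice, 2)):
                return r
    return 0
```

Restricting to minimal blocks is safe because any disjoint family of sensitive blocks can be shrunk to a disjoint family of minimal ones of the same size.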
cs/0107010
Assume without loss of generality that MATH is empty. Then MATH has cardinality less than MATH. We know that MATH depends only on MATH, and also that it depends only on MATH where MATH if MATH and MATH otherwise. Choose any NAME weight MATH, and consider an input MATH with MATH and with two variables MATH and MATH such that MATH, MATH, and MATH. Let MATH. We have MATH, but on the other hand MATH, so MATH by symmetry. Again applying symmetry, MATH whenever MATH and MATH. Therefore MATH is either MATH, MATH, or a constant function.
cs/0107010
Given a vertex MATH of a distinct variable tree, let MATH be the set of variables in the subtree of which MATH is the root. Assume that MATH is represented by two distinct tree decompositions, MATH and MATH, such that MATH has a vertex MATH and MATH has a vertex MATH with MATH and MATH incomparable (that is, they intersect, but neither contains the other). Then let MATH, MATH, MATH, and MATH. The crucial lemma is the following. MATH is a function of MATH, MATH, MATH, and MATH, for some Boolean functions MATH, MATH, and MATH. We can write MATH as MATH, where MATH is Boolean; similarly we can write MATH as MATH. We have that, for all settings of MATH, MATH. Consider a restriction that fixes all the variables in MATH. This yields MATH . Therefore, for all restrictions of MATH, MATH depends on only a single bit obtained from MATH, namely MATH. So we can write MATH as MATH for some Boolean MATH, or even more strongly as MATH, since we know that MATH does not depend on MATH. By analogous reasoning we can write MATH as MATH for some functions MATH and MATH. So we have MATH . Next we restrict MATH, obtaining MATH which implies that, for some functions MATH and MATH, MATH . This shows that MATH and MATH are equivalent up to negation of output, since MATH and MATH must depend on MATH for some restriction of MATH. So we have MATH for some Boolean functions MATH (henceforth simply MATH), MATH, and MATH (MATH). Next we restrict MATH and MATH: MATH . Thus, for all restrictions of MATH and MATH, MATH depends on only a single bit obtained from MATH, which we'll call MATH (and which can be taken equal to MATH). Note that MATH does not depend on MATH. Analogously, for both possible restrictions of MATH, MATH depends on only a single bit obtained from MATH, which we'll call MATH. So we can write MATH where MATH and MATH are two-input Boolean functions. We claim that MATH and MATH. There must exist a setting MATH of MATH such that MATH depends on both MATH and MATH.
Suppose there exists a setting MATH of MATH such that MATH. MATH must be a nonconstant function, so find a constant MATH such that MATH depends on MATH, and choose a setting for MATH and MATH such that MATH. (If MATH is a MATH function, then either MATH or MATH will work, whereas if MATH is a MATH or MATH function, then only one value of MATH will work.) For MATH to be well-defined, we need that whenever MATH, the value of MATH is determined (since MATH has no access to MATH). This implies that MATH has the form MATH or MATH for some function MATH. Therefore MATH can be written as MATH for some function MATH. Now repeat the argument for MATH. We obtain that MATH can be written as MATH for some functions MATH, MATH. Therefore MATH . So we can take MATH (equivalently MATH), and write MATH (or MATH) as MATH . We now prove the main theorem: that MATH has a unique tree decomposition, up to double-negation. From REF , MATH effectively has as inputs the two bits MATH and MATH, and MATH the two bits MATH and MATH. Thus we can check, by enumeration, that either MATH and MATH are labeled with the same function, and that function is either MATH, MATH, MATH, or MATH; or MATH and MATH are both labeled with either MATH or MATH. (Note that MATH can be different for MATH and for MATH.) In either case, for all MATH there exists a function MATH, taking MATH, MATH, and MATH as input, that captures all that needs to be known about MATH. Furthermore, since MATH and MATH do not depend on MATH, neither does MATH, and we can write it as MATH. Let MATH be the unique vertex in MATH such that MATH contains MATH and is minimal among all MATH sets that do so. If MATH is labeled with MATH, MATH, or MATH, then MATH cannot be a vertex of MATH. If MATH is labeled with some other function, then MATH and the function at MATH is represented by a nontrivial tree. Either way we obtain a contradiction. Now that we have ruled out the possibility of incomparable subtrees, we can establish uniqueness.
Call a set MATH unifiable if there exists a vertex MATH, in some decomposition of MATH, such that MATH. Let MATH be the collection of all unifiable sets. We have established that no pair MATH, MATH is incomparable: either MATH, MATH, or MATH. We claim that any decomposition must contain a vertex MATH with MATH for every MATH. For suppose that MATH is not represented in some decomposition MATH. Certainly MATH, so let MATH be the parent set of MATH in MATH: that is, the unique minimal set such that MATH and there exists a vertex MATH in MATH with MATH. Then the function at MATH is represented by a nontrivial tree, containing a vertex MATH with MATH; were it not, then MATH could not be a vertex in any decomposition. Furthermore, the function at MATH cannot be MATH, MATH, or MATH. If it were, then again MATH could not be a vertex in any decomposition, since it would need to be labeled correspondingly with MATH, MATH, or MATH. Having determined the unique set of vertices that comprise any tree decomposition, the vertices' labels are also determined up to double-negation.
cs/0107010
We first normalize each row MATH so that MATH. For each entry MATH, MATH . We next form a unitary matrix MATH from MATH by using the NAME (CGS) orthogonalization procedure (see CITE for details). The idea is to project MATH to make it orthogonal to MATH, then project MATH to make it orthogonal to both MATH and MATH, and so on. Initially we set MATH. Then for each MATH, we set MATH. Therefore MATH . We need to show that the discrepancy between MATH and MATH does not increase too drastically as the recursion proceeds. Let MATH. By hypothesis, MATH. Then MATH. Assume that MATH for all MATH. By induction, MATH since MATH and MATH. So for all MATH, MATH. Let MATH. By the definition of MATH, MATH where MATH is a column of MATH. Since MATH, MATH is maximized when MATH, or MATH. Adding MATH from normalization yields a quantity less than MATH. This can be seen by working out the arithmetic for the worst case of MATH, MATH.
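The projection step described above is the textbook classical Gram-Schmidt procedure; a minimal pure-Python sketch follows (real-valued, rows assumed linearly independent, and ignoring the finite-precision issues that the surrounding error analysis quantifies).

```python
def classical_gram_schmidt(A):
    """Classical Gram-Schmidt on the rows of A (a list of real row
    vectors): subtract from each row its projections onto all previously
    orthonormalized rows, then normalize.  Assumes the rows are linearly
    independent, so no zero vector arises."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    Q = []
    for a in A:
        v = list(a)
        for q in Q:
            c = dot(a, q)                    # projection coefficient
            v = [vi - c * qi for vi, qi in zip(v, q)]
        norm = dot(v, v) ** 0.5
        Q.append([vi / norm for vi in v])    # normalize
    return Q
```

Note this is the classical (not modified) variant: each projection coefficient is computed against the original row, matching the procedure referenced in the proof.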
cs/0107010
First, MATH where the MATH's are entries of MATH and the MATH's are error terms satisfying MATH. So by the NAME inequality, MATH differs from MATH by at most MATH. Second, for MATH, MATH where the MATH's and MATH's are error terms, and the argument proceeds analogously.
cs/0107010
For each MATH, let MATH. By hypothesis, every entry of MATH has magnitude at most MATH; thus, each row or column MATH of MATH has MATH. Then MATH . The right-hand side, when expanded, has MATH terms. Any term containing MATH matrices MATH has MATH norm at most MATH, and can therefore add at most MATH to the discrepancy with MATH. So the total discrepancy is at most MATH . Since MATH evaluated at MATH is MATH and since MATH is concave, MATH when MATH. Therefore MATH and the discrepancy is at most MATH in the MATH norm.
cs/0107010
Given MATH, we want, subject to the following two constraints, to find an algorithm MATH that approximates MATH with a minimum number of queries. First, MATH uses at most MATH qubits, meaning that MATH and the relevant matrices are MATH. Second, the correctness probability of MATH is known to a constant accuracy MATH. Certainly the number MATH of queries never needs to be more than MATH, for, although each quantum algorithm is space-bounded, the composite algorithm need not be. Let MATH be the MATH error we can tolerate in the matrices, and let MATH be the resultant MATH error in the final states. Setting MATH, by REF we have MATH . From the NAME inequality, one can show that MATH. Then, solving for MATH, MATH, which, since MATH is constant, is MATH. Solving for MATH, we can verify that MATH, as required by REF . If we generate almost-unitary matrices, they need to be within MATH of actual unitary matrices. By REF we can use MATH-almost-unitary matrices. Finally we need to ensure that we approximate every unitary matrix. Let MATH be the needed precision. Invoking REF , we set MATH and obtain that MATH is sufficient. Therefore the number of bits of precision needed per entry, MATH, is MATH. We thus need only MATH bits to specify MATH, and can search through all possible MATH in time MATH. The amount of time needed to evaluate a composite algorithm MATH is polynomial in MATH and MATH, and is absorbed into the exponent. The approximation algorithm is this: first let MATH be a constant at most MATH, and let MATH . Then find the smallest MATH such that the maximum probability of correctness over all MATH-query algorithms MATH is at least MATH (subject to MATH uncertainty), and return MATH. The algorithm achieves an approximation ratio of MATH, for the following reason. First, MATH.
Second, MATH, since by repeating the optimal algorithm MATH until it returns the same answer twice (which takes either two or three repetitions), the correctness probability can be boosted above MATH. Finally, a simple calculation reveals that MATH returns the same answer twice after an expected number of invocations MATH.
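The boosting step (repeat until the same answer appears twice, which for a binary answer takes two or three runs) admits a short explicit calculation. A sketch follows, with p standing in for the single-run correctness probability; the paper's actual thresholds are elided (MATH).

```python
def boost_until_repeat(p):
    """Repeat a binary-output algorithm with correctness probability p
    until the same answer appears twice: two runs if they agree, else a
    third breaks the tie.  Returns the boosted correctness probability
    and the expected number of invocations; equivalent to majority-of-3."""
    # correct twice in the first two runs, or a 1-1 split then correct
    boosted = p * p + 2 * p * (1 - p) * p
    expected_runs = 2 + 2 * p * (1 - p)   # a third run only after a split
    return boosted, expected_runs
```

For any p > 1/2 the boosted probability exceeds p, which is the direction of the inequality the proof needs.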
cs/0107014
In this case, there exists no constraint MATH such that MATH and MATH is productive.
cs/0107014
The proof follows immediately from the definition of observables by noting that, according to REF, the agent MATH has no transition and MATH iff MATH, where obviously MATH is equal to MATH iff MATH (recall that MATH is the generic agent containing only MATH and MATH).
cs/0107014
Immediate by observing that a procedure call can be evaluated in MATH iff it can be evaluated in MATH, for any MATH.
cs/0107014
We show that given an agent MATH and a satisfiable constraint MATH, if there exists a derivation MATH, with MATH, then there also exists a derivation MATH with MATH and MATH. By REF , this will imply the thesis. The proof is by induction on the length MATH of the derivation. MATH. In this case MATH. By the definition MATH is also a derivation of length MATH and then the thesis holds. MATH. If the first step of derivation MATH does not use rule MATH, then the proof follows from the inductive hypothesis. Now, assume that the first step of derivation MATH uses rule MATH and let MATH be the declaration used in the first step of MATH. If MATH was not modified in the transformation step from MATH to MATH (that is, MATH), then the result follows from the inductive hypothesis. We assume then that MATH; MATH is then the result of the transformation operation applied to obtain MATH. The proof proceeds by distinguishing various cases according to the operation itself. Here we consider only the operations of unfolding, tell elimination, tell introduction and folding. The other cases are deferred to the Appendix. Unfolding: If MATH is the result of an unfolding operation then the proof is immediate. Tell elimination and introduction: If MATH is the result of a tell elimination or of a tell introduction, the thesis follows from a straightforward analysis of the possible derivations which use d or d'. First, observe that for any derivation which uses a declaration MATH, we can construct another derivation such that the agent MATH is evaluated before MATH. Moreover, for any constraint MATH such that MATH (where MATH is a relevant most general unifier of MATH and MATH), there exists a derivation step MATH if and only if there exists a derivation step MATH, where, for some constraint e, MATH, MATH and therefore MATH. Finally, since by REF MATH is idempotent and the variables in the domain of MATH occur neither in MATH nor in MATH, for any constraint MATH we have that MATH.
Folding: If MATH is the result of a folding then let
- MATH be the folded declaration (MATH),
- MATH be the folding declaration (MATH),
- MATH be the result of the folding operation REF
where, by hypothesis, MATH and MATH. In this case MATH and we can assume, without loss of generality, that MATH. By the inductive hypothesis, there exists a derivation MATH with MATH and MATH . Since MATH, we have that MATH . Since by hypothesis for any agent MATH, MATH, there exists a derivation MATH such that MATH and MATH. By REF and since MATH, we have that MATH . Let MATH be an appropriate renaming of MATH, which renames only the variables in MATH, such that MATH (note that this is possible, since MATH). By hypothesis, MATH. Then, without loss of generality, we can assume that MATH if and only if the procedure call MATH is evaluated, in which case declaration MATH is used. Thus there exists a derivation MATH where MATH. By REF we have MATH . We now show that we can substitute MATH for MATH in the previous derivation. Since MATH is a renaming of MATH, the equality MATH is a conjunction of equations involving only distinct variables. Then, by replacing the variables MATH with MATH and vice versa in the previous derivation, we obtain the derivation MATH where MATH and MATH. From REF it follows that MATH . Then, from REF and since MATH we obtain MATH . Moreover, we can drop the constraint MATH, since the declarations used in the derivation are renamed apart and, by construction, MATH. Therefore there exists a derivation MATH which performs exactly the same steps as MATH, (possibly) except for the evaluation of MATH, and such that MATH and MATH. From REF and since MATH, it follows that MATH . Since MATH holds by hypothesis for any agent MATH, there exists a derivation MATH where MATH and MATH. From REF and since MATH, we obtain MATH . Finally, since MATH, there exists a derivation MATH and then the thesis follows from REF .
cs/0107014
We consider only the case of successful derivations, since the case of deadlocked (failed) derivations can be proved analogously by considering the notions of deadlock (failure) weight and deadlocked (failed) split derivation. Assume that there exists a (finite, successful) derivation MATH. We show, by induction on the success weight of MATH, that there exists a derivation MATH, where MATH. Base case: If MATH then, since MATH is weight complete, from REF it follows that there exists a (successful) split derivation in MATH of the form MATH where MATH, REF is not used, and therefore each derivation step is done in MATH. Inductive case: Assume that MATH. Since MATH is weight complete, there exists a (successful) split derivation in MATH where MATH. If rule MATH is not used in MATH then the proof is the same as in the previous case. Otherwise MATH has the form MATH where MATH. Let MATH be the derivation MATH . By the inductive hypothesis, there exists a derivation MATH where MATH. Without loss of generality, we can assume that MATH and hence there exists a derivation MATH . Finally, by our hypothesis on the variables and by construction, MATH, which concludes the proof.
cs/0107014
Immediate.
cs/0107014
Immediate.
cs/0107014
Observe that, for MATH, the proof of REF follows from the first part of REF . We prove here that, for each MATH: CASE: If REF holds for MATH then REF holds for MATH; CASE: If REF hold for MATH then REF holds for MATH. The proof of the Lemma then follows by a straightforward inductive argument. CASE: If MATH was not affected by the transformation step from MATH to MATH then the result is obvious by choosing MATH. Assume then that MATH is affected when transforming MATH to MATH. We have various cases according to the operation used to perform the transformation. Here we show only the proofs for the unfolding and the folding operations, the other cases being deferred to the Appendix. Unfolding: Assume MATH was obtained from MATH by unfolding. In this case, the situation is the following:
- MATH
- MATH
- MATH
where cl and u are assumed to be renamed so that they do not share variables. Let MATH. By the definition of transformation sequence, there exists a declaration MATH. Moreover, by the hypothesis on the variables, MATH and then MATH. Therefore, by REF , there exists a constraint MATH, such that MATH and MATH . By the hypothesis on the variables and since u is renamed apart from cl, MATH and therefore MATH. Then, by Point REF, there exists a constraint MATH, such that MATH . By REF , MATH. Furthermore, by hypothesis and construction, MATH and, without loss of generality, we can assume that MATH . Then, by REF and since MATH, we have that MATH and this completes the proof. Folding: Let
- MATH be the folded declaration (MATH),
- MATH be the folding declaration (MATH),
- MATH be the result of the folding operation MATH,
where, by hypothesis, MATH, MATH, MATH, MATH and there exists MATH such that MATH. Then, MATH and MATH hold. Moreover, we can assume without loss of generality that MATH. Since MATH, from REF and Point REF it follows that there exists a constraint MATH such that MATH and MATH . We can assume, without loss of generality, that MATH.
Then by using REF we obtain that MATH, which concludes the proof of REF. CASE: Assume that REF of this Lemma hold for MATH. We prove that REF holds for MATH. Let MATH, and let MATH be the corresponding declaration in MATH. Moreover, let MATH be a context, MATH a satisfiable constraint, and let MATH be a constraint such that MATH and MATH is defined. Without loss of generality, we can assume that MATH. Then, since by the inductive hypothesis REF holds for MATH, there exists a constraint MATH such that MATH, MATH . Since by the inductive hypothesis REF holds for MATH, there exists a constraint MATH, such that MATH, MATH and MATH. By REF , MATH and MATH and then the thesis follows.
cs/0107014
The proof proceeds by showing simultaneously, by induction on MATH, that for MATH: CASE: for any agent MATH, MATH; CASE: MATH is weight complete. Base case: We just need to prove that MATH is weight complete. Assume that there exists a derivation MATH, where MATH is a satisfiable constraint and MATH. Then there exists a derivation MATH, such that MATH, whose weight is minimal and where MATH. It follows from REF that MATH is a split derivation. Induction step. By the inductive hypothesis, for any agent MATH, MATH and MATH is weight complete. From REF it follows that if MATH is weight complete then for any agent MATH, MATH. So, in order to prove REF , we only have to show that MATH is weight complete. Assume then that there exists a derivation MATH such that MATH is a satisfiable constraint and MATH. From the inductive hypothesis it follows that there exists a split derivation MATH where MATH . Let MATH be the modified clause in the transformation step from MATH to MATH. If in the first MATH steps of MATH there is no procedure call which uses MATH then clearly there exists a split derivation MATH in MATH, MATH which performs the same steps as MATH and then the thesis holds. Otherwise, assume without loss of generality that MATH is the rule used in the first step of derivation MATH and that MATH is the clause employed in the first step of MATH. We also assume that the declaration MATH is used only once in MATH, since the extension to the general case is immediate. We have to distinguish various cases according to what happens to the clause MATH when moving from MATH to MATH. As before, we consider here only the unfolding and the folding cases, the others being deferred to the Appendix. Unfolding: Assume that MATH is unfolded and let MATH be the corresponding declaration in MATH. The situation is the following:
- MATH,
- MATH, and
- MATH,
where MATH and MATH are assumed to be renamed apart. By the definition of split derivation, MATH has the form MATH .
Without loss of generality, we can assume that MATH if and only if MATH is evaluated in the first MATH steps of MATH, in which case MATH is used for evaluating it. We have to distinguish two cases. CASE: There exists MATH such that the MATH-th derivation step of MATH is the procedure call MATH. In this case MATH has the form MATH . Then there exists a corresponding derivation in MATH which performs exactly the same steps as MATH except for a procedure call to MATH. In this case the proof follows by observing that, since by the inductive hypothesis MATH is a split derivation, the same holds for MATH. CASE: There is no procedure call to MATH in the first MATH steps. Therefore MATH has the form MATH . Then, by the definition of MATH, there exists a derivation MATH . Observe that from the derivation MATH and REF it follows that MATH . The hypothesis on the variables implies that MATH. Then, by the definition of transformation sequence and since MATH, there exists a declaration MATH. By REF it follows that there exists a constraint MATH such that MATH and MATH . Therefore, by the definition of MATH, by REF and since MATH is defined, there exists a derivation MATH where MATH and, by REF , MATH . By REF MATH holds and, by definition of weight, we obtain MATH . Moreover, we can assume without loss of generality that MATH. Then, by the definition of procedure call, MATH and there exists a derivation MATH such that the first MATH derivation steps do not use rule MATH and the MATH-th derivation step uses the rule MATH. Now, we have the following equalities MATH . By the definition of weight, MATH, by REF , MATH and MATH, since MATH is a split derivation. Therefore MATH and then, by definition, MATH is a split derivation in MATH. This, together with REF , implies the thesis.
Folding: Assume that MATH is folded and let
- MATH be the folded declaration (MATH),
- MATH be the folding declaration (MATH),
- MATH be the result of the folding operation MATH,
where, by definition of folding, MATH and MATH. Since MATH is a guarding context, the agent MATH in MATH appears in the scope of a MATH guard. By definition of split derivation, MATH has the form MATH where MATH is a guarding context. Without loss of generality we can assume that MATH. Then, from the definition of MATH it follows that there exists a derivation MATH which performs the same first MATH steps as MATH. Since MATH, the definition of weight implies that MATH is defined, where MATH. Then, by REF , we have that MATH . The definitions of derivation and folding imply that MATH holds. Moreover, from the assumptions on the variables, we obtain that MATH. Thus, from REF it follows that there exists a constraint MATH such that MATH . From the definition of weight and the fact that MATH is defined it follows that there exists a derivation MATH, where MATH and MATH. Then, by the definition of weight, MATH and therefore, by REF , MATH hold. Moreover, from REF we obtain MATH . Without loss of generality, we can now assume that MATH. Then, by REF it follows that MATH . From the definition of weight, MATH and, since MATH is a split derivation, we obtain MATH. Then, from REF it follows that MATH and therefore, by construction, MATH is a derivation in MATH such that: REF rule MATH is not used in the first MATH steps; REF rule MATH is used in the MATH-th step. The thesis then follows from REF , thus concluding the proof.
cs/0107014
The proof of this result is essentially the same as that of the total correctness result REF , provided that in that proof, as well as in the proofs of the related preliminary results, we make the following changes: CASE: Rather than considering terminating derivations, we consider any (possibly non-maximal) finite derivation. CASE: Whenever in a proof we write that, given a derivation MATH, a derivation MATH is constructed which performs the same steps MATH does, possibly in a different order, we now write that a derivation MATH is constructed which performs the same steps of MATH (possibly in a different order) plus some additional steps. Since the store grows monotonically in CCP derivations, clearly if a constraint MATH is the result of the derivation MATH, then the result of MATH is a constraint MATH such that MATH holds. For example, for REF in the proof of REF , when considering a (non-maximal) derivation MATH which uses the declaration MATH we can always construct a derivation MATH which performs all the steps of MATH (possibly plus others) and such that the MATH agent is evaluated before MATH. Unlike in the previous proof, we are now not ensured that the result of MATH is the same as that of MATH, since MATH is non-maximal (thus, MATH could also avoid the evaluation of MATH). However, we are ensured that the result of MATH is stronger than (that is, it implies) that of MATH.
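The monotonicity observation can be made concrete in a toy model, assumed here purely for illustration (it is not the paper's constraint system): a store is a set of primitive constraints, each derivation step adds constraints, and entailment is superset inclusion. A derivation performing the same steps plus additional ones then ends in a store that implies the other's result:

```python
# Toy model (an assumption for illustration, not the paper's formalism):
# a store is a frozenset of primitive constraints; each step adds some
# constraints, so the store grows monotonically along a derivation.

def run(store, steps):
    """Apply a sequence of constraint-adding steps to a store."""
    for added in steps:
        store = store | added
    return store

def entails(c1, c2):
    """c1 entails c2 when c1 contains every constraint of c2."""
    return c2 <= c1

steps = [frozenset({"x>0"}), frozenset({"y=1"})]
more_steps = steps + [frozenset({"z<5"})]  # same steps plus an extra one

weaker = run(frozenset(), steps)
stronger = run(frozenset(), more_steps)
assert entails(stronger, weaker)  # the extended derivation's result is stronger
```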
The proof uses the NAME Lemma and the fact that the transition system defining the CCP operational semantics is finitely branching. Let us denote by MATH the maximal element appearing in the set MATH, for each MATH, that is, MATH is the maximal length of the sub-traces MATH for a fixed MATH and MATH. We now construct a tree MATH representing the (infinitely many) finite traces MATH produced by MATH. The nodes of the tree MATH are labeled by configurations of the form MATH, for some MATH, and the edges are labeled by the sub-traces MATH. More precisely, the tree MATH is defined inductively as follows: (Base step). The root (level REF) of MATH is labeled by MATH. For each derivation of the form MATH which performs at most MATH transition steps and which produces the trace MATH we add a son MATH of the root (at level REF) labeled by MATH and an edge, labeled by MATH, connecting the root and MATH. (Inductive step). Assume that MATH has depth MATH and let MATH be a configuration labeling a node MATH at level MATH. For each derivation of the form MATH which performs at most MATH transition steps we add a son MATH of MATH labeled by MATH and we add an edge labeled by MATH, connecting MATH and MATH. Note that the number of the configurations MATH obtained in this way is finite, since we allow at most MATH transition steps. Therefore we construct a finitely branching tree. On the other hand, such a tree contains infinitely many nodes, as it contains all the (different) constraints MATH with MATH. Then, from the NAME Lemma it follows that the tree contains an infinite branch and this, by construction of the tree, implies that MATH produces the infinite trace MATH where, for each MATH, MATH for some MATH.
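The branch-construction step of the König-style argument can be sketched as follows: in a finitely branching tree with infinitely many nodes, any node spanning an infinite subtree has at least one child that also spans an infinite subtree, so an infinite branch is obtained by repeated descent. The tree and the infiniteness oracle below are hypothetical, chosen only to make the sketch runnable:

```python
def infinite_branch(root, children, subtree_infinite, steps):
    """Return the first `steps + 1` nodes of an infinite branch of a
    finitely branching tree whose root spans an infinite subtree."""
    path, node = [root], root
    for _ in range(steps):
        # Finitely many children but infinitely many descendants:
        # some child must itself span an infinite subtree.
        node = next(c for c in children(node) if subtree_infinite(c))
        path.append(node)
    return path

# Hypothetical example tree over strings: each node has children 'a' and
# 'b'; we stipulate, as the example's oracle, that a subtree is infinite
# exactly when its root contains no 'b'.
children = lambda node: [node + "a", node + "b"]
subtree_infinite = lambda node: "b" not in node

branch = infinite_branch("", children, subtree_infinite, 4)
assert branch == ["", "a", "aa", "aaa", "aaaa"]
```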
The first part follows from REF . The part concerning the length is a direct consequence of the definition of the transformation sequence, since each transformation operation can add or delete at most a finite number of computation steps.
Assume that MATH produces the infinite active trace MATH where, in order to simplify the notation, we assume that the MATH are different constraints while the MATH are sequences of constraints all equal to MATH (so the MATH are sequences of stuttering steps). Clearly, by definition of produced sequence, MATH also produces the (infinitely many) finite prefixes of MATH . From REF it follows that MATH produces the traces MATH where, for any MATH, there exists MATH such that for any MATH we have that MATH. Therefore the set MATH admits a (finite) maximal element for each MATH. REF then implies that MATH produces the infinite trace MATH and clearly, by construction, MATH holds. The converse is proved analogously.
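The identification of traces up to stuttering used above can be illustrated by collapsing maximal runs of consecutive equal constraints; two traces are then related exactly when they destutter to the same sequence. A minimal sketch over toy traces (plain strings standing in for constraints, an assumption):

```python
def destutter(trace):
    """Collapse maximal runs of consecutive equal elements of a trace."""
    out = []
    for c in trace:
        if not out or out[-1] != c:
            out.append(c)
    return out

# Two traces that differ only in stuttering steps are identified:
t1 = ["c1", "c1", "c2", "c3", "c3", "c3"]
t2 = ["c1", "c2", "c2", "c3"]
assert destutter(t1) == destutter(t2) == ["c1", "c2", "c3"]
```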
By a straightforward inductive argument it follows that if there exists a derivation MATH, then MATH. Now, if MATH has the form MATH, where each MATH is either a choice agent or a procedure call or MATH, then MATH which implies MATH. Obviously if MATH is the empty context then MATH, from which the second part of the Lemma follows.
We now show that given an agent MATH and a satisfiable constraint MATH, if there exists a derivation MATH, with MATH, then there exists also a derivation MATH with MATH and MATH. By REF , this will imply the thesis. The proof is by induction on the length MATH of the derivation. MATH. In this case MATH. By the definition MATH is also a derivation of length MATH and then the thesis holds. MATH. If the first step of derivation MATH does not use rule MATH, then the proof follows from the inductive hypothesis: In fact, if MATH then by the inductive hypothesis, there exists a derivation MATH with MATH and MATH. We can assume, without loss of generality, that MATH. Therefore, there exists a derivation MATH. Now, to prove the thesis it is sufficient to observe that, by the hypothesis on the variables, MATH. Now, assume that the first step of derivation MATH uses rule MATH and let MATH be the declaration used in the first step of MATH. If MATH was not modified in the transformation step from MATH to MATH (that is, MATH), then the result follows from the inductive hypothesis. We assume then that MATH, MATH is then the result of the transformation operation applied to obtain MATH, and we now distinguish various cases according to the operation itself. CASE: MATH is the result of an unfolding operation. In this case the proof is straightforward. CASE: MATH is the result of a tell elimination or of a tell introduction. In this case the thesis follows from a straightforward analysis of the possible derivations which use d or d'. First, observe that for any derivation which uses a declaration MATH, we can construct another derivation such that the agent MATH is evaluated before MATH. Moreover for any constraint MATH such that MATH, (where MATH is a relevant most general unifier of MATH and MATH), there exists a derivation step MATH if and only if there exists a derivation step MATH, where, for some constraint e, MATH, MATH and therefore MATH. 
Finally, since by REF MATH is idempotent and the variables in the domain of MATH occur neither in MATH nor in MATH, for any constraint MATH we have that MATH. CASE: MATH is the result of a backward instantiation. Let MATH be the corresponding declaration in MATH. The situation is the following: - MATH - MATH where MATH has no variable in common with MATH (the case MATH is analogous and hence omitted). In this case MATH . By the inductive hypothesis, there exists a derivation MATH with MATH and MATH . Moreover, since MATH, we have that MATH . If MATH is not evaluated in MATH, then the proof is immediate. Otherwise, by the definition of MATH and since MATH, there also exists a derivation MATH such that MATH and MATH. Therefore, by REF MATH . By the definition of MATH, MATH . Then, by the definition of derivation and since MATH, MATH and then the thesis follows from REF . CASE: MATH is obtained from MATH by either an ask simplification or a tell simplification. We consider only the first case (the proof of the other is analogous and hence omitted). Let - MATH, and - MATH, where for MATH, MATH. According to the definition of MATH and by REF , for any derivation MATH for MATH there exists a derivation MATH for MATH which performs the same steps of MATH (possibly in a different order) and such that whenever the choice agent inside MATH is evaluated the current store implies MATH. Therefore the thesis follows from the above equivalence. CASE: MATH is the result of a branch elimination or of a conservative ask elimination. The proof is straightforward by noting that: REF according to REF we also consider inconsistent stores resulting from non-terminated computations; REF an ask action of the form MATH always succeeds. CASE: MATH is the result of a distribution operation.
Let - MATH - MATH where MATH and for every constraint MATH such that MATH, if MATH is productive then both the following conditions hold: CASE: there exists at least one MATH such that MATH CASE: for each MATH, either MATH or MATH. In this case MATH. By the inductive hypothesis, there exists a derivation MATH with MATH and MATH . Moreover, since MATH, we have that MATH . Now, we distinguish two cases: CASE: MATH is not evaluated in MATH. In this case the proof is obvious. CASE: MATH is evaluated in MATH. We have two more possibilities: CASE: There exists MATH, such that MATH where MATH. In this case the thesis follows immediately, since using MATH one can obtain the agent MATH after having evaluated the choice agent in MATH. CASE: There is no MATH, such that MATH. In this case MATH is the agent MATH and MATH . From the definition of derivation, the definition of MATH and the hypothesis that MATH is evaluated in MATH, it follows that MATH is of the form MATH, where either MATH is a choice agent or MATH. By REF , MATH and by definition of derivation MATH. Then, since there is no MATH such that MATH, by definition of distribution, MATH is not productive. Then, by definition, MATH has at least one finite derivation MATH such that MATH, where MATH. Moreover, since in a derivation we can add to the store only constraints on the variables occurring in the agents, MATH holds. Without loss of generality, we can assume that MATH. Therefore, by the previous observation, MATH and since MATH and MATH, there exists a derivation MATH . Moreover, since MATH, there exists a derivation MATH . Finally, to prove the thesis it is sufficient to observe that from REF and from the definition of MATH it follows that MATH. Moreover MATH which concludes the proof of this case. CASE: MATH is the result of a folding. Let - MATH be the folded declaration (MATH), - MATH be the folding declaration (MATH), - MATH be the result of the folding operation REF where, by hypothesis, MATH and MATH. 
In this case MATH and we can assume, without loss of generality, that MATH. By the inductive hypothesis, there exists a derivation MATH with MATH and MATH . Since MATH, we have that MATH . Since by hypothesis for any agent MATH, MATH, there exists a derivation MATH such that MATH and MATH. By REF and since MATH, we have that MATH . Let MATH be an appropriate renaming of MATH, which renames only the variables in MATH, such that MATH (note that this is possible, since MATH). Moreover by hypothesis, MATH. Then, without loss of generality we can assume that MATH if and only if the procedure call MATH is evaluated, in which case declaration MATH is used. Thus there exists a derivation MATH where MATH. By REF we have MATH . We show now that we can substitute MATH for MATH in the previous derivation. Since MATH is a renaming of MATH, the equality MATH is a conjunction of equations involving only distinct variables. Then, by replacing MATH with MATH and vice versa in the previous derivation we obtain the derivation MATH where MATH . From REF it follows that MATH . Then, from REF and since MATH we obtain MATH . Moreover, we can drop the constraint MATH, since the declarations used in the derivation are renamed apart and, by construction, MATH. Therefore there exists a derivation MATH which performs exactly the same steps of MATH, (possibly) except for the evaluation of MATH, and such that MATH and MATH. From REF and since MATH, it follows that MATH . Since MATH holds by hypothesis for any agent MATH, there exists a derivation MATH where MATH and MATH. From REF and since MATH, we obtain MATH . Finally, since MATH, there exists a derivation MATH and then the thesis follows from REF .
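Several cases above rearrange a derivation so that a tell agent is evaluated earlier without affecting the result; in a monotone store this is justified because adding constraints by conjunction is order-independent. A minimal sketch under a toy model where a store is a set of primitive constraints (an assumption for illustration, not the paper's formalism):

```python
from itertools import permutations

# Toy model: a store is a frozenset of primitive constraints and a tell
# step adds its constraints by union, so the store grows monotonically.

def run(store, tells):
    """Evaluate a sequence of tell steps on a store."""
    for c in tells:
        store = store | c
    return store

tells = [frozenset({"x=1"}), frozenset({"y>0"}), frozenset({"z=y"})]

# Evaluating the tells in any order yields the same final store, which is
# why a derivation can be rearranged to evaluate a tell agent earlier.
results = {run(frozenset(), p) for p in permutations(tells)}
assert len(results) == 1
```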
Immediate.
Immediate.
Observe that, for MATH, the proof of REF follows from the first part of REF . We prove here that, for each MATH, REF if REF holds for MATH then REF holds for MATH; REF if REF hold for i then REF holds for MATH. The proof of the Lemma then follows from straightforward inductive argument. CASE: If MATH was not affected by the transformation step from MATH to MATH then the result is obvious by choosing MATH. Assume then that MATH is affected when transforming MATH to MATH and let us distinguish various cases. CASE: MATH was obtained from MATH by unfolding. In this case, the situation is the following: - MATH - MATH - MATH where cl and u are assumed to be renamed so that they do not share variables. Let MATH. By the definition of transformation sequence, there exists a declaration MATH. Moreover, by the hypothesis on the variables, MATH and then MATH. Therefore, by REF , there exists a constraint MATH, such that MATH and MATH . By the hypothesis on the variables and since u is renamed apart from cl, MATH and therefore MATH. Then, by Point REF, there exists a constraint MATH, such that MATH, MATH and MATH. By REF , MATH. Furthermore, by hypothesis and construction, MATH and, without loss of generality, we can assume that MATH. Then, by REF and since MATH, we have that MATH and this completes the proof. CASE: MATH is the result of a tell elimination or introduction. The proof is analogous to that one given for REF and it is omitted. CASE: MATH is the result of a backward instantiation. Let MATH be the corresponding declaration in MATH. The situation is then the following: - MATH - MATH where MATH has no variable in common with MATH (the case MATH is analogous and hence omitted). By the hypothesis, MATH, MATH and there exists MATH such that MATH. Then MATH and, without loss of generality, we can assume that MATH. 
Moreover, by the definition of transformation sequence, there exists a declaration MATH and then, by REF , there exists a constraint MATH such that MATH and MATH . Using the hypothesis on the variables and since f is renamed apart from MATH, we have that MATH . Then, from Point REF (assumed as hypothesis) and REF it follows that there exists a constraint MATH such that MATH and MATH hold. By definition of weight, we can assume that MATH and therefore, we have that MATH. We have now two cases: CASE: MATH. In this case, by REF , there exists a derivation MATH such that MATH, MATH and MATH . By the hypothesis on the variables, we can build a derivation MATH which performs exactly the same steps of MATH, plus possibly a tell action, such that MATH, MATH and MATH . Let MATH. By the previous result and by definition of weight MATH. Moreover, by hypothesis, MATH and we can assume, without loss of generality, that MATH. Then, by REF and by definition of MATH, it follows that MATH and then the thesis holds. CASE: MATH. In this case, by REF , MATH. By REF this means that there exists a derivation MATH such that MATH is not evaluated in MATH, MATH, MATH and MATH. By definition, we can construct another derivation MATH which performs exactly the same steps of MATH (and therefore MATH) and such that MATH. Let MATH. By definition of derivation MATH and therefore MATH. The remainder of the proof is now analogous to that one of the previous case. CASE: Either MATH is the result of an ask simplification or MATH is the result of a tell simplification. The proof is analogous to that one given for REF and hence it is omitted. CASE: MATH is the result of a branch elimination or of a conservative ask elimination. The proof is straightforward by noting that: REF according to REF we consider also inconsistent stores resulting from non-terminated computations; REF an ask action of the form MATH always succeeds; REF if we delete a MATH action we obtain a derivation whose weight is smaller. 
CASE: MATH is the result of a distribution. Let - MATH - MATH where MATH and for every constraint MATH such that MATH, if MATH is productive then both the following conditions hold: CASE: there exists at least one MATH such that MATH CASE: for each MATH, either MATH or MATH . We prove that, for any derivation MATH with MATH, there exists a derivation MATH such that MATH where also MATH, and MATH. This together with the definition of weight implies the thesis. If MATH is not evaluated in MATH, then the proof is immediate. Otherwise we have to distinguish two cases: CASE: There exists a MATH, such that MATH and MATH. In this case we can construct the derivation MATH which performs exactly the same steps of MATH and then the thesis holds. CASE: MATH is of the form MATH . By REF and by definition of MATH, we can construct another derivation MATH which performs the same steps of MATH (possibly in a different order) and such that the agent MATH is not evaluated in the first MATH steps, where MATH and MATH. Let MATH. Now, if MATH is not productive, the proof is analogous to that of REF and hence omitted. Then assume that MATH is productive. By definition of distribution there exists at least one MATH such that MATH and for each MATH, either MATH or MATH. Then, by definition, there exists a derivation MATH, which performs the same steps of MATH (possibly in a different order). Therefore there exists a derivation MATH which performs the same steps of MATH (in a different order). By construction MATH and then the thesis holds. CASE: MATH is the result of a folding. Let - MATH be the folded declaration (MATH), - MATH be the folding declaration (MATH), - MATH be the result of the folding operation MATH, where, by hypothesis, MATH, MATH, MATH, MATH and there exists MATH such that MATH. Then, MATH and MATH hold. Moreover, we can assume without loss of generality that MATH.
Since MATH, from REF and Point REF it follows that there exists a constraint MATH such that MATH and MATH . We can assume, without loss of generality, that MATH. Then by using REF we obtain that MATH which concludes the proof of REF. CASE: Assume that REF of this Lemma hold for MATH. We prove that REF holds for MATH. Let MATH, and let MATH be the corresponding declaration in MATH. Moreover let MATH be a context, MATH a satisfiable constraint and let MATH be a constraint, such that MATH and MATH is defined. Without loss of generality, we can assume that MATH. Then, since by inductive hypothesis, REF holds for MATH, there exists a constraint MATH such that MATH, MATH . Since by inductive hypothesis REF holds for MATH, there exists a constraint MATH, such that MATH, MATH and MATH. By REF we obtain MATH and MATH and then the thesis holds.
We prove the thesis for one derivation step. Then the proof of the Lemma follows by using a straightforward inductive argument. Assume that MATH are satisfiable constraints, MATH is a constraint and that there exists a derivation MATH such that MATH and the first step can use rule MATH only for evaluating agents of the form MATH. By the definition of derivation we have MATH, where MATH is not a guarding context. We have now three cases: CASE: MATH. In this case MATH . Since MATH is not a guarding context the definition of weight implies that MATH where MATH. Then the thesis holds. CASE: MATH and there exists a declaration MATH. In this case MATH . From the definition of derivation it follows that MATH. Furthermore, by definition of transformation sequence, there exists a declaration MATH. Since MATH is defined by hypothesis (where MATH), from REF it follows that there exists a constraint MATH such that MATH and MATH. From the definition of derivation it follows that MATH. REF implies that there exists a constraint MATH such that MATH, MATH and MATH . These results together with the inclusion MATH imply that MATH and MATH, thus concluding the proof for this case. CASE: MATH and MATH. In this case MATH . Since MATH is not a guarding context and MATH we obtain MATH where MATH, which concludes the proof.
The proof is straightforward, by observing that by the hypothesis on MATH the first step of MATH uses the rule MATH (in case such a step exists) and therefore, by definition of split derivation, MATH, where MATH. Then by definition, MATH is a split derivation in MATH.
The proof proceeds by showing simultaneously, by induction on MATH, that for MATH: CASE: for any agent MATH, MATH; CASE: MATH is weight complete. CASE: We just need to prove that MATH is weight complete. Assume that there exists a derivation MATH, where MATH is a satisfiable constraint and MATH. Then there exists a derivation MATH, such that MATH, whose weight is minimal and where MATH. It follows from REF that MATH is a split derivation. Induction step. By the inductive hypothesis for any agent MATH, MATH and MATH is weight complete. From REF it follows that if MATH is weight complete then for any agent MATH, MATH. So, in order to prove REF , we only have to show that MATH is weight complete. Assume then that there exists a derivation MATH such that MATH is a satisfiable constraint and MATH. From the inductive hypothesis it follows that there exists a split derivation MATH where MATH . Let MATH be the modified clause in the transformation step from MATH to MATH. If in the first MATH steps of MATH there is no procedure call which uses MATH then clearly there exists a split derivation MATH in MATH, MATH which performs the same steps of MATH and then the thesis holds. Otherwise, assume without loss of generality that MATH is the rule used in the first step of derivation MATH and that MATH is the clause employed in the first step of MATH. We also assume that the declaration MATH is used only once in MATH, since the extension to the general case is immediate. We have to distinguish various cases according to what happens to the clause MATH when moving from MATH to MATH. CASE: MATH is unfolded. Let MATH be the corresponding declaration in MATH. The situation is the following: - MATH, - MATH, and - MATH, where MATH and MATH are assumed to be renamed apart. By the definition of split derivation, MATH has the form MATH . 
Without loss of generality, we can assume that MATH if and only if MATH is evaluated in the first MATH steps of MATH, in which case MATH is used for evaluating it. We have to distinguish two cases. CASE: There exists MATH such that the MATH-th derivation step of MATH is the procedure call MATH. In this case MATH has the form MATH . Then there exists a corresponding derivation in MATH which performs exactly the same steps of MATH except for a procedure call to MATH. In this case the proof follows by observing that, since by the inductive hypothesis MATH is a split derivation, the same holds for MATH. CASE: There is no procedure call to MATH in the first MATH steps. Therefore MATH has the form MATH . Then, by the definition of MATH, there exists a derivation MATH . Observe that from the derivation MATH and REF it follows that MATH . The hypothesis on the variables implies that MATH. Then, by the definition of transformation sequence and since MATH, there exists a declaration MATH. By REF it follows that there exists a constraint MATH such that MATH and MATH . Therefore, by the definition of MATH, by REF and since MATH is defined, there exists a derivation MATH where MATH and, by REF , MATH . By REF MATH holds and, by definition of weight, we obtain MATH . Moreover, we can assume without loss of generality that MATH. Then, by the definition of procedure call MATH and there exists a derivation MATH such that the first MATH derivation steps do not use rule MATH and the MATH-th derivation step uses the rule MATH. Now, we have the following equalities MATH . By the definition of weight, MATH, by REF , MATH and MATH, since MATH is a split derivation. Therefore MATH and then, by definition, MATH is a split derivation in MATH. This, together with REF , implies the thesis. CASE: A tell constraint in MATH is eliminated or introduced. In the first case, let MATH be the corresponding declaration in MATH.
Therefore the situation is the following: - MATH - MATH where MATH is a relevant most general unifier of MATH and MATH and the variables in the domain of MATH occur neither in MATH nor in MATH. Observe that for any derivation which uses the declaration MATH, we can construct another derivation such that the agent MATH is evaluated before MATH. Then the thesis follows from REF and from the argument used in the proof of REF . The proof for the tell introduction is analogous and hence omitted. CASE: MATH is backward instantiated. Let MATH be the corresponding declaration in MATH. The situation is the following: - MATH, - MATH, where MATH has no variable in common with MATH (the case MATH is analogous and hence omitted). We distinguish two cases: CASE: There is no procedure call to MATH in the first MATH steps. Therefore MATH has the form MATH . Without loss of generality, we can assume that MATH. Then, by the definition of MATH, there exists a derivation corresponding to MATH, MATH . Following the same reasoning as in REF , we can prove that there exists a constraint MATH such that MATH where MATH and MATH. The rest of the proof is analogous to REF (unfolding) and hence omitted. CASE: There exists MATH such that the MATH-th derivation step of MATH is the procedure call MATH. We distinguish two more cases: CASE: MATH. In this case we can assume, without loss of generality, that MATH has the form MATH where MATH is a renaming of MATH such that MATH. In this case there exists a derivation MATH . Observe now that, given any set of declarations, if there exists a derivation MATH for the configuration MATH where MATH is satisfiable and MATH, then there exists a derivation for MATH which performs the same steps of MATH plus (possibly) two steps corresponding to the evaluation of MATH and MATH. Since MATH is logically equivalent to MATH, we can substitute MATH for MATH.
Moreover, since MATH is a renaming of MATH and therefore MATH holds, we can drop the agent MATH. Finally, observe that MATH can be reduced to a conjunction of equations of the form MATH, where MATH and MATH are distinct variables. Therefore, we can drop the constraint MATH, since the declarations used in the derivation are renamed apart and MATH. Then the thesis holds for this case. CASE: MATH. In this case, the situation is the following: - MATH, - MATH, where MATH is a renaming of MATH which has no variables in common with MATH. Let MATH be a renaming of MATH such that MATH. Now the proof is analogous to the previous one by observing that, for any set of declarations, if there exists a derivation MATH for MATH where MATH is satisfiable and MATH, then there exists a derivation for MATH which performs the same steps of MATH, plus some tell actions (analogously to the previous case, we can drop the tell agents MATH and MATH). This concludes the proof of this case. CASE: An ask guard in MATH is simplified. Let - MATH, - MATH, where for MATH, MATH and MATH is the declaration to which the guard simplification was applied. By the definition of split derivation MATH has the form MATH . Since by the inductive hypothesis for any agent MATH, MATH, it is easy to check that there exists a derivation MATH such that MATH and MATH. From REF it follows that MATH . Without loss of generality, we can assume that MATH is chosen in such a way that the first MATH steps of MATH do not use rule MATH and that MATH is maximal, in the sense that either MATH is not satisfiable or in the MATH-th step we can only use rule MATH. In the first case, let MATH be the context obtained from MATH as follows: any (renamed) occurrence of the agent MATH in MATH, introduced in MATH by a procedure call of the form MATH, is replaced by a (suitably renamed) occurrence of the agent MATH. Then, by definition of MATH, we have that MATH is a derivation in MATH which does not use rule MATH and such that MATH . 
Then the thesis follows by definition of split derivation. Now assume that MATH is satisfiable. By REF , there exists a constraint MATH, such that MATH and MATH where MATH . By definition of weight, by REF and since MATH, there exists a derivation MATH such that MATH and MATH. Then, by the definition of weight and by REF , MATH holds. Without loss of generality, we can assume that MATH. Therefore, from REF it follows that MATH . Let MATH be the agent obtained from MATH as follows: any (renamed) occurrence of the agent MATH in MATH, introduced in MATH by a procedure call of the form MATH, is replaced by a (suitably renamed) occurrence of the agent MATH. By the definition of MATH and since MATH, there exists a derivation MATH which does not use rule MATH. Observe that, by construction, MATH has the form MATH, where MATH is either a choice agent or MATH for each MATH. Moreover, since the first MATH steps of MATH do not use rule MATH (and therefore, it is not possible to evaluate a procedure call of the form MATH inside a guarding context), MATH has the form MATH, where either MATH or MATH is a (renamed) occurrence of the agent MATH while MATH is a (suitably renamed) occurrence of the agent MATH. By REF , MATH, where MATH is a renamed version of the context MATH in MATH, which was introduced in MATH by a procedure call of the form MATH. Now from the definition of derivation and of ask simplification it follows that, if MATH is a choice branch in MATH and MATH is the corresponding choice branch in MATH, then MATH holds. Therefore, by using the same arguments as in REF , since (by inductive hypothesis) MATH is weight complete and MATH, we obtain that there exists a split derivation in MATH of the form MATH such that MATH and MATH. Then, by using the same arguments as in REF , from the definition of weight and from REF it follows that MATH where MATH. Moreover, we can assume without loss of generality that MATH .
Then by REF we obtain MATH and therefore, by definition of weight, MATH holds. By REF and by construction, MATH is a split derivation in MATH. By the definition of split derivation MATH, where MATH. Then, by REF , we have that MATH . Finally, MATH is a derivation in MATH. By construction the first MATH steps of MATH do not use rule MATH, and the MATH-th step uses rule MATH. Thus the thesis follows from REF . CASE: MATH is the declaration to which either a branch elimination or an ask elimination was applied. In the case of branch elimination the proof follows immediately from the fact that we also consider the inconsistent results of non-terminated computations. As for the ask elimination case, let us assume that - MATH and - MATH. We show, by induction on the weight MATH, where MATH, that there exists a split derivation MATH in MATH, such that MATH and MATH. Then the proof follows by REF . (Base step). In this case MATH and by definition of split derivation, MATH, MATH has the form MATH , rule MATH is not used, and therefore each derivation step is done in MATH. Moreover, observe that since MATH, if MATH is satisfiable, then MATH is a guarding context. Then, it is easy to check that MATH is a split derivation in MATH, such that MATH and then the thesis follows by the previous observation. Induction step. Assume that MATH and that MATH has the form MATH , since the other case is immediate. By the definition of MATH and since MATH is a split derivation, there exists a derivation MATH which does not use rule MATH. Moreover, by definition of split derivation MATH and therefore, by the inductive hypothesis there exists a split derivation in MATH, MATH such that MATH . Without loss of generality, we can assume that MATH. Therefore, by REF and by definition of MATH and MATH, MATH . Then by definition of weight, since MATH and by REF MATH . Moreover, by our hypothesis on the variables of MATH and of MATH, there exists a derivation MATH, MATH .
By REF , since MATH do not use Rule MATH and MATH is a split derivation in MATH, we have that MATH is a split derivation in MATH, such that MATH. Now, the thesis follows by REF . CASE: An ask guard in MATH is distributed. Let - MATH - MATH, where, for every constraint MATH such that MATH, if MATH is productive then there exists at least one MATH such that MATH and for each MATH, either MATH or MATH. By the definition of split derivation, MATH has the form MATH . If the first MATH steps of MATH do not evaluate the agent MATH then the proof is analogous to that one of REF . Otherwise, let us assume that MATH . Since by the inductive hypothesis for any agent MATH, MATH there exists a derivation MATH where MATH and MATH. By REF , MATH . Without loss of generality we can assume that the first MATH steps of MATH neither use rule MATH nor contain the evaluation of any (renamed) occurrence MATH of the agent MATH, where MATH is a renamed version of the declaration MATH and MATH has been introduced by the evaluation of a procedure call of the form MATH. Moreover, we can assume that MATH is maximal, in the sense that either MATH is not satisfiable or the MATH-th step can only either use rule MATH or evaluate a (renamed) occurrence of MATH introduced by a procedure call of the form MATH. If MATH is not satisfiable, then the proof is analogous to that one of the previous REF . Assume then that MATH is satisfiable. By REF , there exists a constraint MATH, such that MATH and MATH where MATH . By definition of weight, by REF and since MATH, there exists a derivation MATH such that MATH and MATH. Then, by the definition of weight and by REF , MATH . Without loss of generality, we can assume that MATH. Therefore from REF it follows that MATH . Let MATH be the agent obtained from MATH as follows: any (renamed) occurrence of the agent MATH in MATH which has been introduced by a procedure call of the form MATH is replaced by a (suitably) renamed occurrence of the agent MATH. 
By the definition of MATH and since MATH, there exists a derivation MATH which does not use rule MATH. Now, by construction, MATH has the form MATH, where MATH is either a choice agent or MATH. Moreover, since MATH is weight complete, MATH and, analogously to REF , there exists a split derivation MATH such that MATH and MATH. Then, by using the same arguments as in REF , from the definition of weight and REF it follows that MATH where MATH. From this point the proof proceeds exactly as in REF by using REF and is therefore omitted. CASE: Finally assume that MATH is folded. Let MATH be the folded declaration (MATH), MATH be the folding declaration (MATH), and MATH be the result of the folding operation MATH, where, by definition of folding, MATH and MATH. Since MATH is a guarding context, the agent MATH in MATH appears in the scope of a MATH guard. By definition of split derivation MATH has the form MATH where MATH is a guarding context. Without loss of generality we can assume that MATH. Then, from the definition of MATH it follows that there exists a derivation MATH which performs the same first MATH steps as MATH. Since MATH, the definition of weight implies that MATH is defined, where MATH. Then, by REF , we have that MATH . The definitions of derivation and folding imply that MATH holds. Moreover, from the assumptions on the variables, we obtain that MATH. Thus, from REF it follows that there exists a constraint MATH such that MATH . From the definition of weight and the fact that MATH is defined it follows that there exists a derivation MATH, where MATH and MATH. Then, by the definition of weight, MATH and therefore, by REF , MATH holds. Moreover, from REF we obtain MATH . Without loss of generality, we can now assume that MATH . Then, by REF it follows that MATH . From the definition of weight MATH and since MATH is a split derivation we obtain MATH.
Then, from REF it follows that MATH and therefore, by construction, MATH is a derivation in MATH such that: REF rule MATH is not used in the first MATH steps; REF rule MATH is used in the MATH-th step. The thesis then follows from REF , thus concluding the proof.
cs/0107014
The proof of this result is essentially the same as that of the total correctness REF provided that in such a proof, as well as in the proofs of the related preliminary results, we perform the following changes: CASE: Rather than considering terminating derivations, we consider any (possibly non-maximal) finite derivation. CASE: Whenever in a proof we write that, given a derivation MATH, a derivation MATH is constructed which performs the same steps as MATH, possibly in a different order, we now write that a derivation MATH is constructed which performs the same steps as MATH (possibly in a different order) plus some other additional steps. Since the store grows monotonically in ccp derivations, clearly if a constraint MATH is the result of the derivation MATH, then a constraint MATH is the result of MATH such that MATH holds. For example, for REF in the proof of REF (in the Appendix), when considering a (non-maximal) derivation MATH which uses the declaration MATH we can always construct a derivation MATH which performs all the steps of MATH (possibly plus others) and such that the MATH agent is evaluated before MATH. Differently from the previous proof, we are now not ensured that the result of MATH is the same as that of MATH, since MATH is non-maximal (thus, MATH could also avoid the evaluation of MATH). However, we are ensured that the result of MATH is stronger than (that is, implies) that of MATH.
cs/0107014
We have to show that, given an agent MATH and a satisfiable constraint MATH, if there exists a derivation MATH, then there also exists a derivation MATH such that MATH and MATH. The proof is analogous to that given for REF ; therefore we illustrate only the modifications needed to adapt such a proof. Assume that the first step of derivation MATH uses rule MATH and let MATH be the declaration used in the first step of MATH. Assume also that MATH and that MATH is the result of the transformation operation applied to obtain MATH. As usual, we distinguish various cases according to the kind of operation performed. Here we consider only those cases whose proof is different from that of REF , due to the fact that here we consider traces (consisting of intermediate results) rather than the final constraints. CASE: In this case MATH, MATH, where MATH is a relevant most general unifier of MATH and MATH (or a renaming, in case MATH and MATH consist of distinct variables). From the definition of the operation we know that the variables in the domain of MATH occur neither in MATH nor in MATH and, differently from the case of REF , that MATH. For any derivation which uses a declaration MATH, if the agent MATH is evaluated before MATH then the proof is analogous to that given for REF . Otherwise, if the agent MATH is not evaluated before MATH, then by using the condition MATH we obtain that the evaluation of the agent MATH can add to the store only constraints on variables which occur neither in the global store (before the evaluation of MATH) nor in MATH. Therefore the contribution to the global store of the agent MATH (before the evaluation of the agent MATH) when restricted to MATH is equivalent either to the constraint MATH or to the constraint MATH. In the first case the global store is the same as that existing before the evaluation of MATH.
In the second case we can obtain the constraint MATH by evaluating the same agents evaluated in MATH also in MATH. CASE: In this case the proof is analogous to that given for REF by observing the following: If in the derivation MATH in MATH either the agent MATH or the agent MATH is evaluated, then in the derivation MATH the agent MATH can be evaluated and then one performs exactly the same steps as MATH, except for the evaluation of a renamed version of the agents MATH and MATH. CASE: For the ask simplification the proof of REF is simplified by using REF and by observing that, for any derivation, when the choice agent inside MATH is evaluated the current store certainly implies MATH. Therefore we do not need to construct the new derivation MATH. The same holds for the tell simplification. CASE: In this case the proof is analogous to that given for the previous REF , by observing that in the derivation MATH. Therefore we can construct a derivation MATH where MATH and MATH. Moreover, we can drop the constraint MATH, since the declarations used in the derivation are renamed apart and, by construction, MATH. We then obtain that there exists a derivation MATH which performs exactly the same steps as MATH except for (possibly) the evaluation of MATH and such that MATH and MATH. Now, the proof is the same as that given for REF , since the evaluation of MATH does not modify the current store with respect to the variables not in MATH.
cs/0107014
Immediate.
cs/0107014
Immediate.
cs/0107014
The proof is analogous to that given for REF , by using REF instead of REF , respectively. We have only to observe the following facts: For REF , Point REF we can evaluate the agent MATH after the global store implies MATH. In this way the new derivation has the same sequence of intermediate results. For REF , Point REF , by using REF , if there exists a derivation MATH then MATH. If MATH is not productive then the proof is straightforward. Otherwise, assume that MATH is productive. By definition of distribution there exists at least one MATH such that MATH and, for each MATH, either MATH or MATH. Then, by definition, there exists a derivation MATH which performs the same steps as MATH in the same order, except for one step of evaluation of the agent MATH which is performed before evaluating the agent MATH. Then the thesis follows by definition of the relation MATH.
cs/0107014
The proof is analogous to that given for REF and proceeds by showing simultaneously, by induction on MATH, that for MATH and for any agent MATH: CASE: MATH; CASE: MATH is weight complete for the traces. The proof of the base case is analogous to that given for the base case of REF and hence it is omitted. For the induction step we have that, by induction hypothesis, for any agent MATH, MATH and MATH is weight complete for the traces. REF also holds when considering MATH rather than MATH. From REF and (the counterpart for traces of) REF it then follows that if MATH is weight complete for traces then, for any agent MATH, MATH. So, in order to prove REF , we have only to show that, for any derivation MATH such that MATH is a satisfiable constraint and MATH, there exists a split derivation in MATH, MATH, such that MATH and MATH. From the inductive hypothesis it follows that there exists a split derivation MATH where MATH and MATH. Now, let MATH be the modified clause in the transformation step from MATH to MATH. The rest of the proof is essentially analogous to that given for REF . The only points which require some care are the following: CASE: In this case, the proof is analogous to that given for REF . CASE: In this case the proof is analogous to that given for REF , provided we observe the following fact for REF in such a proof: Given any set of declarations, if there exists a derivation MATH for the configuration MATH where MATH is satisfiable and MATH, then there exists a derivation for MATH which performs the same steps as MATH plus (possibly) two steps corresponding to the evaluation of MATH and MATH, after the evaluation of MATH and MATH. CASE: Analogously to the proof of REF , it is sufficient to observe the following. From REF it follows that, for any derivation, when the choice agent inside a context MATH is evaluated the current store implies MATH.
Then, by definition of ask simplification, the constraints MATH and MATH are equivalent with respect to the current store (and therefore we do not need to construct the new derivation MATH). The same reasoning applies to the case of tell simplification. CASE: The proof is analogous to that of REF .
cs/0107022
By hypothesis, the arrow MATH can be expressed as the parallel and sequential composition of arrows in MATH; therefore, by functoriality of tensor product, MATH can be finitely decomposed as MATH where MATH, with MATH. Then, the cell MATH is just the (diagonal) composition MATH, with MATH.
cs/0107022
By REF , we know that the cells MATH and MATH are generated by the basis MATH. By vertically composing MATH with the horizontal identity of MATH, we get the cell MATH. Then, the cell MATH is obtained as the composition MATH, because MATH. The cell MATH can be generated by a similar construction.
cs/0107022
Obvious, by observing that the property holds for all cells in MATH except MATH, for which, however, we have shown how to generate its counterpart MATH.
cs/0107022
The fact that all composed cells are pullbacks is straightforward, as all basic tiles are pullbacks and such a property is preserved by the three operations of the tile model (horizontal and vertical sequential compositions and parallel composition). The proof that all pullbacks can be obtained in this way is more subtle. We exploit the fact that, in the category MATH, whenever the pullback of MATH and MATH exists and MATH can be decomposed as MATH, then the pullback of MATH and MATH also exists (because MATH is less instantiated than MATH). Since each arrow MATH in MATH can be finitely decomposed as MATH where MATH, with MATH, the pullback of MATH and MATH, if it exists, can be computed stepwise. In fact, the proof is by induction on the length MATH of a fixed decomposition of MATH. Thus, it reduces to proving that if the pullback of MATH and MATH (with MATH) exists, then it is generated by MATH. We proceed by case analysis on MATH and, for each case, by induction on the length of the decomposition of MATH, exploiting the basic cells in MATH to cover all possible combinations.
cs/0107022
The proof follows from REF , that is, from the existence of the tile MATH that can be vertically composed with MATH (the horizontal identity for MATH), and with MATH being horizontally composed with the result (see REF ).
cs/0107022
The proof of point REF proceeds by rule induction. For the `empty goal' rules we rely on the fact that MATH is the unit for MATH and that the vertical identities always exist. For the `atomic goal' we rely on the results of REF on the correspondence between mgu's and pullbacks while applying the tile MATH to the goal MATH. For the `conjunctive goal' rules, the difficulty is that MATH and MATH might share some variables. In fact, by inductive hypothesis we can assume that MATH and therefore we must employ the pullback tiles for propagating MATH to MATH. This can be done by exploiting the tiles MATH and MATH. To prove point REF, we fix a decomposition of MATH in terms of basic tiles of MATH and then proceed by induction on the number of tiles MATH used for building MATH.
cs/0107022
We want to prove that for any goal MATH and tile MATH, there exist MATH, MATH and MATH such that MATH and MATH. Having fixed a decomposition of MATH in terms of basic tiles, the proof proceeds by induction on the number of tiles associated to the clauses that are considered in the decomposition.
cs/0107030
The probability of MATH being uniquely identifiable from its first MATH bits is the probability that no string among the MATH other ones in the list starts with the same pattern. Hence, this probability is MATH.
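The counting argument above can be checked numerically. Below is a minimal Monte Carlo sketch under the assumption that the list consists of i.i.d. uniform random bit strings (the function name and parameters are ours, purely illustrative): a target string is uniquely identifiable from its first k bits exactly when none of the other m - 1 strings shares that k-bit prefix, and for uniform strings this happens with probability (1 - 2^{-k})^{m-1}.

```python
import random

def unique_prefix_probability(m, k, trials=20000, seed=0):
    """Estimate the probability that a target bit string is uniquely
    identifiable from its first k bits within a list of m random strings.
    Hypothetical illustration: strings are i.i.d. uniform random bits."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        target = tuple(rng.getrandbits(1) for _ in range(k))
        # Identifiable iff none of the other m - 1 strings starts with
        # the same k-bit pattern.
        if all(tuple(rng.getrandbits(1) for _ in range(k)) != target
               for _ in range(m - 1)):
            hits += 1
    return hits / trials

# Closed form suggested by the independence argument: (1 - 2**-k)**(m - 1).
est = unique_prefix_probability(m=8, k=5)
exact = (1 - 2**-5) ** 7
assert abs(est - exact) < 0.02
```

The tolerance 0.02 is several standard deviations of the empirical frequency at 20000 trials, so the check is robust to sampling noise.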
cs/0107030
NAME and NAME agree on a random MATH. Assume that they draw sequences MATH and MATH that fulfill the typicality conditions above. For the value received, NAME prepares a list of guesses: MATH. From REF , this list contains no more than MATH elements. NAME reveals MATH slice values, with MATH. From REF , the probability that NAME is unable to correctly identify the correct string is bounded as MATH. This quantity goes to MATH when MATH, and MATH for MATH. Therefore, MATH such that MATH for all MATH.
cs/0107030
Using random coding arguments, REF states that for each MATH sufficiently large, there exist slices MATH, of which the first ones are to be entirely disclosed, giving MATH. The number MATH of key elements of dimension MATH is MATH with MATH the number of raw key elements. Hence MATH. Regarding the probability of failure, there are two sources of possible failure: the failure of identification MATH and the fact that MATH. From REF and from the AEP, both probabilities are upper bounded by MATH. Therefore, the total failure probability behaves as MATH when MATH.
cs/0107030
If MATH is discrete, let MATH, otherwise set MATH, with MATH a quantized approximation of MATH. Similarly, let MATH when MATH is discrete or approximate it with a discrete variable MATH otherwise. For any MATH, there exist MATH, MATH such that MATH CITE. By applying REF on MATH and MATH, we have MATH for any MATH. Therefore, MATH .
cs/0107030
Assume that we can predict how many bits the practical BCP discloses, for instance given an estimate of the bit error rate. Disclosing a slice entirely, as done in REF , reveals MATH bits. Whenever the practical BCP is expected to disclose less than MATH bits (for example, when the bit error rate is low), we can use it instead of disclosing the entire key without increasing MATH.
cs/0107031
Let MATH be a solution to a one-row NAME puzzle, and suppose that the leftmost block is removed in move MATH. Because move MATH removes the leftmost group, it cannot form new clickable groups. The sequence MATH is then a solution to the same puzzle except perhaps for the group containing the leftmost block. If the leftmost block is removed in this subsequence, continue discarding moves from the sequence until the remaining subsequence removes all but the group containing the leftmost block. Now the puzzle can be solved by adding one more move, which removes the last group containing the leftmost block. Applying the same argument to the rightmost block proves the lemma.
cs/0107031
Because MATH, there is a derivation MATH. The proof is by induction on the length MATH of this derivation. In the base case, MATH, we have MATH, which is clearly solvable. Assume all strings derived in at most MATH steps are solvable, for some MATH. Now consider the first step in a MATH-step derivation. Because MATH, the first production cannot be MATH. So there are three cases. CASE: MATH: In this case MATH, such that MATH and MATH both in at most MATH steps. By the induction hypothesis, MATH and MATH are solvable. By REF , there are internal solutions for MATH and MATH, where the rightmost block of MATH and the leftmost block of MATH are removed last, respectively. Doing these two moves at the very end, we can now arbitrarily merge the two move sequences for MATH and MATH, removing all blocks of MATH. CASE: MATH: In this case MATH, such that MATH in at most MATH steps. By the induction hypothesis, MATH is solvable. By REF , there is an internal solution for MATH; if either the leftmost or rightmost block of MATH has color MATH, it can be chosen to be removed in the last move. Therefore, the solution for MATH followed by removing the remaining MATH (if it still exists) is a solution to MATH. CASE: MATH: This case is analogous to the previous case.
cs/0107031
Suppose MATH is solvable. We will prove that MATH by induction on MATH. The base case, MATH, follows since MATH. Assume all solvable strings of length at most MATH are in MATH, for some MATH. Consider the case MATH. Since MATH is solvable, there is a first move in a solution to MATH, say removing a group MATH for MATH. Thus, MATH. Now, neither the last symbol of MATH nor the first symbol of MATH can be MATH. Let MATH. Since MATH, and MATH is solvable, MATH is in MATH by the induction hypothesis. Observe that MATH by one of the derivations: MATH if MATH is odd, or MATH if MATH is even. Thus, if MATH, MATH can be derived as MATH. Analogously for MATH. It remains to consider the case MATH. Consider the first step in a derivation for MATH. There are three cases. CASE: MATH: We can assume that MATH, otherwise we consider the derivation of MATH in which this first step is skipped. By REF , MATH and MATH are both solvable. Consider the substring MATH of MATH that was removed in the first move. Either MATH (MATH possibly empty) or MATH (MATH possibly empty). Without loss of generality, we assume the former case, that is, MATH. Then MATH is solvable because MATH is solvable and MATH was maximal. Since MATH, it follows that MATH, and by the induction hypothesis, MATH. Hence MATH is a derivation of MATH and MATH. CASE: MATH: Since MATH, it must be the case that MATH, where MATH. By REF , MATH is solvable, hence so is MATH because MATH was maximal. Moreover, MATH and thus MATH by the induction hypothesis and MATH. CASE: MATH: Since MATH, either MATH and MATH, or MATH and MATH. Without loss of generality, assume MATH. Analogously to the previous case, MATH, hence MATH.
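The notion of solvability used in these proofs can be made concrete with a small brute-force checker: a one-row puzzle is a string of colors, a clickable group is a maximal run of at least two equal symbols, and clicking removes the run (in the string representation, adjacent equal runs then merge automatically). This is an illustrative sketch only, not the grammar-based characterization of the theorem; all names are ours.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def solvable(s):
    """Brute-force test whether a one-row puzzle (a color string) can be
    fully cleared by repeatedly clicking a maximal group of >= 2 equal
    blocks. Exponential in general; fine for tiny instances."""
    if not s:
        return True
    i = 0
    while i < len(s):
        # Find the maximal run starting at position i.
        j = i
        while j < len(s) and s[j] == s[i]:
            j += 1
        # A run of length >= 2 is clickable; removing it concatenates the
        # two sides, which merges equal neighbors automatically.
        if j - i >= 2 and solvable(s[:i] + s[j:]):
            return True
        i = j
    return False

assert solvable("abba")        # click bb, then the merged aa
assert not solvable("ab")      # only singleton groups, no legal move
assert not solvable("aab")     # clicking aa leaves a lone b
```

Such a checker is handy for cross-validating the grammar membership criterion on small random strings.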
cs/0107031
The context-free grammar can be converted into a grammar in NAME normal form of size MATH and with MATH nonterminals. The algorithm in CITE runs in time MATH times the number of nonterminals plus the number of productions, which is MATH.
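For reference, membership for a grammar already in Chomsky normal form can be decided with the classic CYK dynamic program, whose running time is cubic in the word length times the grammar size, matching the bound above. The grammar below, for {a^n b^n : n >= 1}, is a hypothetical stand-in, not the solvability grammar of the lemma; all names are ours.

```python
def cyk(word, start, unary, binary):
    """CYK membership test for a CNF grammar.
    unary:  dict terminal -> set of nonterminals (productions A -> a)
    binary: dict (B, C)   -> set of nonterminals (productions A -> B C)"""
    n = len(word)
    if n == 0:
        return False
    # table[l][i] = nonterminals deriving the substring of length l+1 at i
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, a in enumerate(word):
        table[0][i] = set(unary.get(a, ()))
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            cell = table[length - 1][i]
            for split in range(1, length):
                for b in table[split - 1][i]:
                    for c in table[length - split - 1][i + split]:
                        cell |= binary.get((b, c), set())
    return start in table[n - 1][0]

# Hypothetical CNF grammar for {a^n b^n : n >= 1}:
#   S -> A T | A B,  T -> S B,  A -> a,  B -> b
unary = {'a': {'A'}, 'b': {'B'}}
binary = {('A', 'T'): {'S'}, ('A', 'B'): {'S'}, ('S', 'B'): {'T'}}
assert cyk("aabb", 'S', unary, binary)
assert not cyk("aab", 'S', unary, binary)
```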
cs/0107031
CASE: Each group MATH of the checkerboard MATH must be removed. This is only possible if MATH is merged with some other group of the same color not in MATH, so there are at least MATH groups outside of MATH. These groups must be separated from MATH by at least one extra group. Therefore, MATH or MATH. CASE: Analogously, if MATH is not at one end of the puzzle, then there are two extra groups at either end of MATH. Therefore, MATH or MATH.
cs/0107031
Clicking on the median group removes that piece and merges its two neighbors into the new median group (it has two neighbors because MATH is odd). Therefore, the resulting puzzle again has a median group with size at least two, and the process repeats. In the end, we solve the puzzle.
cs/0107031
If the puzzle contains a checkerboard of length at least MATH, then it is unsolvable by REF . If the median has size at least two, then we are also done by REF , so we may assume that the median is a singleton. Thus there must be a nonsingleton somewhere to the left of the median that is not the leftmost group, and there must be a nonsingleton to the right of the median that is not the rightmost group. Also, there are two such nonsingletons with at most MATH other groups between them. Clicking on any one of these nonsingletons destroys two groups (the clicked-on group disappears, and its two neighbors merge). The new median is one group to the right [left] of the old one if we clicked on the nonsingleton left [right] of the median. The two neighbors of the clicked nonsingleton merge into a new nonsingleton, and this new nonsingleton is one closer to the other nonsingleton than before. Therefore, we can continue applying this procedure until the median becomes a nonsingleton and then apply REF . Note that if one of the two nonsingletons ever reaches the end of the sequence then the other nonsingleton must be the median.
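The median strategy of the last two lemmas can be sketched on the group-size representation of a two-color puzzle: since colors strictly alternate, the two neighbors of any removed group share a color and merge. Function names are ours, and this is an illustration of the strategy rather than the paper's notation.

```python
def click(sizes, i):
    """Click group i (size >= 2) in a two-color puzzle given as a list of
    alternating-color group sizes; the removed group's two neighbors have
    the same color and therefore merge into one group."""
    assert sizes[i] >= 2, "only nonsingleton groups are clickable"
    left, right = sizes[:i], sizes[i + 1:]
    if left and right:
        return left[:-1] + [left[-1] + right[0]] + right[1:]
    return left or right      # clicked group was at an end: no merge

def solve_by_median(sizes):
    """Repeatedly click the median group while it is a nonsingleton, as in
    the lemma: each click preserves an odd number of groups and yields a
    new median of size >= 2, so the puzzle empties."""
    sizes = list(sizes)
    while sizes:
        m = len(sizes) // 2          # median index (odd number of groups)
        if sizes[m] < 2:
            return sizes             # strategy stuck: median is a singleton
        sizes = click(sizes, m)
    return sizes                     # [] means solved

assert solve_by_median([1, 2, 3, 2, 1]) == []
```

On [1, 2, 3, 2, 1] the clicks produce [1, 4, 1], then [2], then []; on [1, 1, 1] (the checkerboard "aba") the strategy correctly reports being stuck.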
cs/0107031
Sufficiency is a straightforward application of REF . First solve the instance MATH so that all groups but MATH disappear and MATH becomes a nonsingleton. Then solve instance MATH so that all groups but MATH disappear and MATH becomes a nonsingleton. These two solutions can be executed independently because MATH and MATH form a ``barrier''. Then MATH and MATH can be clicked to solve the puzzle. For necessity, assume that MATH is a sequence of clicks that solves the instance. One of these clicks, say MATH, removes the blocks of group MATH. (Note that this group might well have been merged with other groups before, but we are interested in the click that actually removes the blocks.) Let MATH be maximal such that the blocks of group MATH are also removed during click MATH. Clearly MATH is odd, since groups MATH and MATH have the same color and we have only two colors. It remains to show that the instances MATH and MATH are solvable. The clicks MATH can be divided into two kinds: those that affect blocks to the left of MATH, and those that affect blocks to the right of MATH. (Since MATH is not removed before MATH, a click cannot be of both kinds.) Consider those clicks that affect blocks to the left of MATH, and apply the exact same sequence of clicks to instance MATH. Since MATH removes MATH and MATH at once, these clicks must have removed all blocks MATH. They also merged MATH and MATH, so that this group became a nonsingleton. One last click on MATH hence gives a solution to instance MATH. Consider those clicks before MATH that affect blocks to the right of MATH. None of these clicks can merge MATH with a block MATH, MATH, since this would contradict the definition of MATH. Hence it does not matter whether we execute these clicks before or after MATH, as they have no effect on MATH or the blocks to the left of it.
If we take these clicks to the right of MATH and combine them with the clicks after MATH (note that at this time, block MATH and everything to the left of it is gone), we obtain a solution to the instance MATH. This proves the theorem.
cs/0107033
Immediate.
cs/0107033
The proof is immediate from the definition of expectation and the possibility of rearranging the terms of a positive series.
cs/0107033
First, we change variables to MATH. Obviously, the statement of the Theorem is equivalent to the statement that MATH. We also write MATH and similarly for the primitives MATH and MATH. Now, the probability that all of the MATH are greater than some fixed MATH equals MATH, so that MATH . Perform the change of variables MATH, to get MATH . For MATH, we can write MATH where MATH is a constant. We also know that MATH is a monotonic function, so if we break up the integral above as MATH, we see that the first integral approaches MATH while the second integral goes to REF. Note that the proof also evaluates MATH.
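The first step of the computation, that the probability of all n i.i.d. variables exceeding a fixed threshold t equals (1 - F(t))^n, can be sanity-checked by simulation. Uniform(0,1) below is a hypothetical stand-in for the actual distribution (which is abstracted in the text), and the function name is ours.

```python
import random

def all_exceed_probability(n, t, trials=40000, seed=1):
    """Monte Carlo check of P(X_1 > t, ..., X_n > t) = (1 - F(t))^n for
    i.i.d. samples; Uniform(0,1) stands in for the distribution in the
    proof, so 1 - F(t) = 1 - t."""
    rng = random.Random(seed)
    hits = sum(all(rng.random() > t for _ in range(n))
               for _ in range(trials))
    return hits / trials

est = all_exceed_probability(n=3, t=0.2)
assert abs(est - (1 - 0.2) ** 3) < 0.02   # (0.8)^3 = 0.512
```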
cs/0107033
Immediate from the proof of REF .
cs/0107033
Expand the fraction in a geometric series and apply NAME 's theorem.
gr-qc/0107018
The first two assertions are obvious, for REF see CITE.
gr-qc/0107018
The conformal factor MATH satisfies REF (see also REF). This equation has the following form MATH where the tensor MATH and the functions MATH, MATH are analytic. We want to prove that the symmetric and trace-free part of the tensor MATH, for all MATH, vanishes at the point MATH. To prove this we use induction on MATH, in the same way as in the proof of REF. The cases MATH are given by REF . To perform the induction step we assume MATH and show that the statement for MATH implies that for MATH. Using REF , we express MATH in terms of MATH. Using the induction hypothesis, the result follows. We use the previous result in the analytic expansion of MATH to conclude that MATH, where MATH is an analytic, positive function, and MATH. There exists a much more elegant method to prove the same result directly from REF using complex analysis, as explained in CITE.
gr-qc/0107018
We expand the analytic functions MATH in REF in power series in the coordinates MATH. For each power, we use the following explicit formula in order to solve REF . Let MATH be a three-tuple of homogeneous polynomials of order MATH which satisfies MATH for some integer MATH. Then, we have MATH . By REF , the series defined by the MATH majorizes the one defined by the MATH, hence the latter defines a convergent power series. Note that the term with MATH in REF makes no contribution to MATH.
gr-qc/0107018
Since MATH does not depend on MATH, the extrinsic curvature MATH satisfies the equation MATH where MATH denotes the NAME derivative with respect to the vector field MATH. We express this equation in terms of MATH, the covariant derivative with respect to MATH, where, in the second line, we have used REF . Then we use REF , and REF to prove REF . To compute MATH we use REF and MATH.
gr-qc/0107018
By REF we have that MATH and MATH are analytic with respect to MATH and the angles MATH (but not with respect to MATH!). This proves the assumption made in CITE; MATH and MATH are essentially the functions MATH and MATH defined there.
gr-qc/0107050
Let MATH be a positive definite, real, symmetric MATH matrix, and MATH its characteristic polynomial, with MATH continuous polynomial functions of the MATH's. Since MATH is symmetric, the necessary and sufficient condition for MATH to be positive definite is MATH. Therefore MATH, as the inverse image of an open subset, is itself open.
gr-qc/0107050
Let MATH. Then, there is MATH such that (in matrix notation): MATH with MATH, MATH, MATH the three positive eigenvalues of MATH. Since MATH belongs to MATH, there is a continuous mapping MATH such that MATH and MATH. Introduce now the mapping MATH, with MATH. As MATH belongs to MATH, its determinant is not zero for every MATH. Therefore, by NAME 's theorem, MATH is positive definite, just like MATH. But MATH and MATH, that is, the matrix MATH is connected to MATH by a continuous curve lying entirely in MATH. Consider now the mapping MATH with MATH; it is continuous and MATH. This means that MATH is finally arcwise connected to MATH.
gr-qc/0107050
We give a rigorous proof of the statement that the matrices MATH are automorphisms. To this end, define the mapping MATH, with MATH where MATH. Define also the matrices MATH, MATH, with MATH, given by REF . It is straightforward to verify that MATH. Using the NAME Identities and the definitions above, it is not difficult to see that: MATH . Consider now two sets MATH, such that MATH, for some MATH. Since the derivative of MATH at REF is zero, we have that: MATH which, in turn, implies that: MATH . The last expression says that: MATH that is, the mapping MATH is constant MATH. Thus it holds, in particular, that MATH or MATH.
gr-qc/0107050
We first note that the number of independent relations in the Lemma's hypothesis equals the number of independent MATH's and is therefore at most REF. We next observe that in Class NAME Types, the structure constants are characterized by the matrix MATH only, and thus the relevant numbers involved are the (at most REF) real, non-zero eigenvalues of MATH. In NAME Types MATH and MATH, the non-vanishing eigenvalues are exactly REF. In conclusion, in each and every Class NAME Type, the number of independent relations in the Lemma's hypothesis exactly equals the number of non-vanishing eigenvalues of the matrix MATH. In Class B, the null eigenvector MATH of MATH is also present. In this case, MATH vanishes identically, since rank REF is less than REF and the number of independent relations in the Lemma's hypothesis is reduced to at most REF. An apparent complication thus emerges for Class B Types MATH and MATH, where there are two independent relations while the relevant numbers are REF (the two real, non-zero eigenvalues of MATH plus the non-vanishing component of MATH). The resolution of this apparent complication is provided by the algebraic invariant: MATH . This quantity, which is not meant to replace the dynamical variable MATH, vanishes identically in Class A models, while in Class B it provides the third relation needed (see CITE). Thus in every NAME Type, REF numbers appear: in Class A, REF eigenvalues of MATH which correspond to MATH, and REF eigenvalues of MATH which correspond to MATH. Similarly, in Class B, the at most REF eigenvalues of MATH plus the third component of its null eigenvector, which correspond to MATH, and the at most REF eigenvalues of MATH plus the third component of its null eigenvector, which correspond to MATH.
The justification for considering only these two triplets and not, for example, the non-diagonal components of MATH, lies in the fact that MATH can be put in diagonal form through the action of MATH, while MATH will have the proper form for being a null eigenvector of MATH. So, taking this irreducible form for both the matrix and its null eigenvector, we have the following relations: In Class A: MATH while in Class B: MATH . In each and every case, the corresponding system can be easily solved, resulting in the equality between the eigenvalues of MATH and MATH, as well as MATH and MATH. There is thus a matrix MATH such that (in matrix notation) MATH . Of course, MATH and is there only as a reminder of the tensor density character of MATH.
gr-qc/0107050
Since the matrices MATH are positive definite, there are MATH such that MATH. Let MATH be defined as MATH and MATH, with MATH again representing a given, albeit arbitrary, NAME Type. Using the above and REF we have: MATH . The hypothesis MATH translates into MATH, which, through the lemma, implies that there is MATH such that MATH. Since MATH (in matrix notation): MATH . Similarly, we have: MATH . The above imply that the matrix MATH satisfies: MATH and MATH that is, MATH.
math-ph/0107003
Recall that MATH with MATH. We want to show that MATH cannot be too close to MATH in REF. By completeness of the set of eigenvectors MATH, we have MATH . We now use the NAME equation. We have MATH for MATH. Let us take MATH for MATH; then the following equation holds true for all MATH: MATH . The middle term of the left side is necessary for the equation to hold at sites outside MATH that are neighbors of sites in MATH. Taking the NAME transform, we get MATH where MATH is a `boundary vector', MATH . Notice that MATH if MATH. From REF, we have MATH . The electronic energy in MATH is given by MATH. We saw in REF that MATH. By REF, this can be strengthened to MATH where MATH (respectively, MATH) is the projector onto the subspace spanned by MATH (respectively, MATH). See REF for intuition. We show below that MATH and this will straightforwardly lead to the lower bound. In order to see that the boundary vector has a projection in the subspace of the eigenvectors with large eigenvalues, we first remark that MATH . We used the fact that each site MATH of MATH has at least one neighbor MATH outside of MATH, and we obtained an inequality by restricting the sum to such pairs. Let us introduce MATH such that MATH and MATH. We first consider the situation where MATH. Using first MATH and then the previous inequality, we have MATH . For MATH, REF imply MATH . We can write a lower bound by proceeding as in REF, but using the bound REF for MATH, instead of MATH. The bathtub principle then gives MATH where we introduce MATH such that MATH . Notice that for MATH, we have MATH, so that MATH implies MATH. This justifies the use of REF. We bound the first integral of REF using MATH, and we obtain MATH . One can derive a more explicit expression for the lower bound. First, MATH . Second, we use MATH to get MATH . One can use the upper bound of REF to get MATH . Recall that MATH is the volume of the unit sphere in MATH dimensions. The lower bound of REF allows us to write MATH .
Then one gets the bound MATH . Hence the boundary correction to MATH is bounded below by MATH with MATH . Recall that we supposed MATH, where MATH is the index of the largest eigenvalue that is smaller than MATH. If this were not the case, we could write, with MATH, MATH . We used the previous inequality to bound the first sum, and MATH for the second sum. This is greater than MATH provided MATH . A sufficient condition is that MATH is an increasing function of MATH. Computing the derivative (the derivative of MATH is MATH, which is smaller than MATH by REF) and requiring it to be positive leads to the condition MATH . The right side is greater than MATH.
math-ph/0107003
We can suppose MATH. The definition of MATH involves a sum over the first MATH eigenvectors (more precisely, of their NAME transforms). In case of degenerate eigenvalues one is free to choose any eigenvectors. For the proof of REF it turns out that the possible degeneracy of MATH causes some difficulties, and it is useful to redefine MATH by averaging over eigenvectors with eigenvalue MATH: MATH ; here, MATH is such that MATH and MATH. The degeneracy of MATH is MATH (which may be zero). Of course, MATH is still given as the integral of MATH multiplied by MATH. The goal is to prove that MATH cannot approach MATH in REF. Since MATH, we have MATH . We introduce MATH with MATH the boundary vector defined in REF. By REF , it is enough to show that MATH is bounded below by a quantity of the order of MATH. We have MATH where MATH is the projector onto the subspace spanned by all MATH with MATH, and MATH is the projector corresponding to the eigenvalue MATH. We want to show that MATH for small MATH. This amounts to proving that the vector MATH cannot lie entirely in the subspace spanned by MATH. Notice that if MATH and MATH, then MATH has no neighbors in MATH. Using the assumption of REF , as well as MATH and MATH, we get MATH . The last inequality uses the assumption of REF , and the fact that MATH is at most MATH and at least REF. Next we consider MATH. We have, for MATH, MATH and therefore, if MATH, MATH . We write MATH, with MATH if MATH and REF otherwise. Notice that MATH. Clearly, MATH, and therefore MATH . Furthermore, from REF, MATH satisfies MATH . Because MATH, the last inequality implies MATH . With the first inequality in REF, this yields MATH, hence MATH . Applying NAME to REF, we obtain MATH . We can combine this bound with REF; we then have for all MATH . We introduce MATH such that MATH, and we have MATH . We bound the first two integrals using MATH; from the definitions of MATH and MATH we have MATH . As a result, we obtain the bound we were looking for, MATH .
math-ph/0107003
We have MATH where MATH . Here we used another convention for the characteristic function: MATH is REF if MATH is true, and REF otherwise. Notice that MATH. Then we have both MATH . By NAME, this implies MATH . One shows in REF that the integral of MATH is bounded by REF. Recall that MATH are the eigenvectors of MATH. Let MATH, respectively MATH, be the projectors onto the first MATH eigenvectors, respectively the last MATH eigenvectors. By REF, one has the inequalities MATH . Let us introduce sets MATH and MATH by MATH . We obtain a lower bound by substituting REF into REF, and restricting the integrals to MATH and MATH. Namely, MATH . Let MATH. From the assumption of the lemma, and using MATH and REF , we have that for all MATH, MATH . Extracting a factor MATH, and using the above inequality, we can write MATH . Consider MATH. The assumption of the lemma for MATH, together with the bound for the gradient in REF , implies MATH . Therefore MATH . This can be rewritten as MATH, that is, MATH . Hence MATH . The quantity in the brackets is negative for MATH. Observing that MATH (because MATH and using REF ), the bracket is bounded by MATH. Therefore, MATH . On the other hand, for MATH, MATH . Since MATH, we have from REF MATH . Therefore MATH . Clearly, MATH; then MATH . We now use REF. The worst situation occurs when MATH is equal to the right side of the previous equation. Using REF we finally get the lower bound of REF .
math-ph/0107003
Let us introduce MATH . By the definition of the discrete Laplacian, MATH . Let us denote by MATH the quantity inside the brackets above. Clearly, MATH. Let MATH, MATH, represent all combinations of inversions of some coordinates. We have the following inequality: MATH . Indeed, starting from the right-hand side, we have in essence (with MATH and MATH) MATH which is the left-hand side of REF. The right-hand side of REF is clearly smaller than MATH. One now computes MATH for MATH. First, MATH . Second, MATH . We used MATH, and the bracket in the second line is MATH. Gathering REF, we obtain MATH . One can check that whenever MATH and MATH differs from REF, there exists a neighbor MATH that belongs to MATH. Then the condition of the lemma implies that MATH . Furthermore, MATH has fewer than MATH elements since MATH; then for any MATH that satisfies the assumption of the lemma there exists MATH such that MATH . We get a lower bound for MATH by considering only those sites, that is, MATH uniformly in MATH.
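For reference, the discrete Laplacian invoked at the start of the proof is presumably the standard lattice Laplacian; in generic symbols (a lattice function $\varphi$, with $y \sim x$ denoting nearest neighbors, on the cubic lattice of dimension $d$), it reads:

```latex
% Discrete (graph) Laplacian on a lattice; phi is a generic lattice function,
% y ~ x denotes nearest neighbors of x, and 2d is the coordination number on Z^d.
% This is a standard convention, stated here as an assumption about the
% redacted definition.
\[
  (\Delta \varphi)(x) \;=\; \sum_{y \,\sim\, x} \big( \varphi(y) - \varphi(x) \big)
  \;=\; \sum_{y \,\sim\, x} \varphi(y) \;-\; 2d\,\varphi(x).
\]
```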
math-ph/0107003
We proceed ab absurdo and explore how MATH could be uniformly zero. The constraint MATH takes a simple form, namely MATH. Furthermore, MATH satisfies MATH; if MATH, we can find MATH such that MATH and MATH, in which case MATH must be perpendicular to MATH. The condition MATH for all MATH implies that MATH. This should also be true when MATH is replaced with MATH, hence MATH for all MATH. Now take MATH. We have MATH and these two components must be equal, since MATH is parallel to MATH. Hence MATH, or MATH . In this case MATH takes the form MATH . Since MATH we have MATH; if MATH, one can take MATH, and MATH is strictly positive, so we can suppose MATH. Let MATH, and let MATH be the vector with components MATH . Then MATH. If we require this to be zero for MATH, then we need MATH. But we also require MATH for all MATH, hence MATH. So MATH must be zero, that is, MATH for all MATH. We also have MATH, and MATH cannot always be equal to REF. If MATH, one checks that necessarily MATH, which is impossible because MATH. Hence MATH cannot be uniformly zero when moving along the NAME surface.
math-ph/0107003
First, we remark that eigenvectors of MATH with eigenvalue smaller than MATH have exponential decay outside of MATH. Indeed, for MATH the NAME equation can be written MATH . If MATH, we have MATH . Using this inequality, we can proceed by induction on the distance between MATH and MATH. The induction hypothesis is that the following holds true: MATH for any MATH such that MATH. As a result, we have MATH . Let us introduce MATH . We show that MATH is bounded below by MATH, up to a contribution no greater than MATH. Recall that MATH is the Hamiltonian with infinite repulsions. If MATH is the projector onto the domain MATH, let MATH. MATH . By the NAME inequality, the last term is smaller than MATH . We used REF with MATH equal to REF, respectively, in order to control the quantities in both brackets. Recall that MATH denotes the MATH-th eigenvalue of the Hamiltonian MATH, that is, the Hamiltonian with infinite repulsions. Let us introduce the projector MATH onto the corresponding eigenstate. Then MATH where the MATH satisfy MATH, and MATH. By the bathtub principle CITE, we obtain the lower bound MATH . It remains to show that MATH is close to MATH. We have MATH . We bounded MATH. Since MATH, we obtain the proposition.
math-ph/0107003
Let MATH be a multiple of MATH, and MATH be such that MATH. We consider a box MATH. We introduce MATH. Let MATH, MATH, represent a translation possibly followed by an axis permutation. We define the averaged Hamiltonian MATH . Then MATH . Indeed, all summands in the above equation are equal, and the Hamiltonian MATH has no more than MATH negative eigenvalues and at least MATH zero eigenvalues. The right-hand side is equal to MATH. Let MATH be the number of sites in MATH that have MATH neighbors in MATH. We have MATH and MATH; and the number of bonds in MATH is MATH. Then the averaged Hamiltonian is MATH with MATH . Let MATH; then MATH and MATH. One easily checks that MATH . Notice that all operators commute. In REF, the terms involving MATH cancel, since MATH and MATH. Now, as MATH, MATH . Therefore REF implies MATH .
math-ph/0107003
The fermionic free energy MATH can be expressed in terms of the eigenvalues of MATH, MATH . In order to compare this with the corresponding infinite-volume REF , we partition the NAME zone MATH according to the level sets of the function MATH; more precisely, we define measures MATH, MATH, by MATH . Notice that MATH and MATH. Next we introduce MATH, which are equal to MATH averaged over MATH: MATH . The ground state energy REF of a density MATH of electrons in MATH can then be written as MATH . From the lower bound without a boundary term, we have MATH for all MATH, with equality when MATH. Actually, REF can be strengthened by introducing a term depending on the boundary of MATH. In REF , MATH can be taken to be increasing in MATH for MATH. Also, MATH. Therefore there exists a function MATH, with MATH for MATH, MATH, and MATH . Next we define MATH; then the following inequality, which is stronger than REF, holds true: MATH . With MATH chosen appropriately, both sequences MATH and MATH are increasing, and the inequality above is an equality when MATH. The sequence MATH is said to `majorize' MATH. We can apply an inequality due to NAME, NAME and NAME (and independently found by NAME); see CITE page REF. For any concave function MATH, we have MATH . (Conversely, if REF holds for all concave MATH, then MATH majorizes MATH.) We use this inequality with MATH which is concave. We get MATH where the last step is NAME's inequality. Then MATH . In the limit MATH, we have MATH . As a result, for all MATH we get a lower bound for large MATH that is uniform in the limit MATH. One also gets a lower bound by using the concavity of MATH, which holds for all temperatures but is not uniform in MATH: MATH . The integrand in the last line is strictly positive if MATH is small enough, and chosen to vanish appropriately as MATH.
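The majorization inequality used above has the following standard form (stated with generic sequences $a$, $b$ and a generic concave $\varphi$; the paper's redacted sequences play these roles):

```latex
% Hardy-Littlewood-Polya majorization inequality (standard form; a, b, phi
% are generic symbols, not the redacted notation of the proof).
% Suppose a = (a_1 >= ... >= a_n) majorizes b = (b_1 >= ... >= b_n), i.e.
%   sum_{j=1}^{k} a_j >= sum_{j=1}^{k} b_j  for all k, with equality at k = n.
% Then for every concave function phi:
\[
  \sum_{j=1}^{n} \varphi(a_j) \;\le\; \sum_{j=1}^{n} \varphi(b_j).
\]
% Conversely, if this holds for all concave phi, then a majorizes b.
```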
math-ph/0107003
Let us introduce MATH . We assume MATH (otherwise, replace MATH by MATH in the next expressions, and ignore the sums whose lower limit exceeds the upper one). Then MATH . We proceed as in REF and define MATH . Then MATH . REF is still valid, for both MATH and MATH. Hence MATH . Here, MATH, MATH are the eigenvalues of the operator MATH. We define MATH . Then REF takes the simpler form MATH . The sequence on the right-hand side is not necessarily increasing, but one gets a lower bound by rearranging the terms. Hence one can apply the NAME, NAME, NAME inequality. Indeed, it also works when the total sums over the elements of the sequences are not equal, provided the concave function is increasing, which is the case with MATH. One obtains MATH . We now use MATH, and we find MATH . The remaining effort consists in estimating the sum of MATH, using the exponential decay of the eigenfunctions MATH either in MATH or in MATH. Retracing REF, we get MATH . Notice that the last term can be written with MATH instead of MATH, as can be seen from REF. We use MATH, and we finally obtain MATH .
math-ph/0107003
The NAME inequality allows us to write MATH for any set of orthonormal functions MATH (in the NAME space of antisymmetric wave functions on MATH). We can choose the MATH to be eigenfunctions of MATH and MATH, decorrelating MATH and MATH. In MATH, the free electrons experience a uniform potential MATH; the energy levels are given by the spectrum of MATH plus MATH. This only shifts the chemical potential, so that we obtain the first claim of the proposition. Now we estimate MATH. Let us introduce MATH; then MATH, MATH, and the upper bound for the ground state energy can be cast in the form MATH . This allows us to invoke the NAME, NAME, NAME inequality again, and we get MATH . The derivative of MATH satisfies MATH (recall that MATH). Since the measure MATH is concentrated on those MATH where MATH lies between MATH and MATH, we can bound REF by MATH . We need a bound for MATH; since MATH, we have MATH. Let us take MATH such that MATH, and MATH such that MATH. Then MATH . If MATH is chosen so as to minimize the norm of such MATH, we have MATH . Combining this inequality with REF, we get MATH . This leads to the upper bound of REF .
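The bound invoked at the start of this proof is consistent with the standard variational (Rayleigh-Ritz) upper bound with a determinantal trial state; the identification with the redacted inequality is an assumption, and the restatement below uses generic symbols:

```latex
% Variational upper bound (generic restatement; identifying this with the
% redacted inequality of the proof is an assumption, not a claim).
% For any normalized antisymmetric trial state Psi -- in particular a Slater
% determinant of orthonormal orbitals phi_1, ..., phi_N -- the ground state
% energy E_0 of a Hamiltonian H satisfies:
\[
  E_0 \;=\; \inf_{\substack{\Psi \text{ antisymmetric} \\ \|\Psi\|=1}}
  \langle \Psi,\, H\,\Psi \rangle
  \;\le\; \big\langle\, \phi_1 \wedge \cdots \wedge \phi_N,\;
  H\;\, \phi_1 \wedge \cdots \wedge \phi_N \,\big\rangle.
\]
```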
math-ph/0107003
Let MATH and MATH be the energies per site of the all-MATH and all-MATH NAME configurations. A configuration can be specified by the set MATH of MATH spins. Let MATH be the set of bonds connecting MATH and MATH. Notice that MATH. The partition function of the NAME model can be written as MATH . Now the upper bound for MATH implies that the partition function of the NAME model is bounded below by MATH . The last factor vanishes in the thermodynamic limit. One then makes the connection with NAME by multiplying MATH by MATH and choosing the temperature to be MATH and the magnetic field to be MATH (the magnetic field is negative). The other bound is similar; simply replace MATH by MATH.
math-ph/0107003
Because MATH was assigned periodic boundary conditions, we have MATH . It is not hard to check that for any configuration MATH specified by MATH, one has MATH . Then MATH . We need a bound for the last term. The fact is that typical configurations of classical particles cannot have too much boundary: MATH is smaller than MATH. Indeed, MATH . Therefore MATH . The last term vanishes in the limit MATH, and the term involving MATH vanishes when MATH.
math-ph/0107003
Setting MATH, and making the change of variables MATH, one gets MATH . The integral over MATH can be split into one running from REF to MATH, and one running from MATH to REF. For the first part we bound MATH, while the bound for the second part can be chosen to be MATH. Everything can now be computed explicitly, and we find MATH.
math-ph/0107003
Let us introduce a map MATH such that MATH; precisely, MATH . The condition MATH becomes MATH. The derivative of MATH is MATH . We check now that MATH. Let us assume that MATH. Then MATH . Then we can write MATH . One gets a lower bound by replacing the last characteristic function by the condition MATH. Recall that MATH; the assumption of the lemma implies that MATH; as a consequence, a lower bound is the volume of the sphere of radius MATH.
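The final step of the proof uses the volume of a ball of a given radius; for completeness, the standard formula in $d$ dimensions is (with $d$ and $r$ generic symbols, since the proof's dimension and radius are redacted):

```latex
% Volume of the d-dimensional ball of radius r (standard formula).
\[
  \big| B_r^{\,d} \big|
  \;=\; \frac{\pi^{d/2}}{\Gamma\!\big(\tfrac{d}{2}+1\big)}\, r^{d},
  \qquad\text{e.g. } |B_r^{\,2}| = \pi r^2,
  \quad |B_r^{\,3}| = \tfrac{4}{3}\pi r^3.
\]
```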
math-ph/0107003
By REF, MATH . The lower bound then follows from MATH .
math-ph/0107003
Since MATH we have MATH . This is less than MATH, and we obtain the bound REF . We now consider MATH. MATH . In the last line, MATH is allowed to be REF. Let MATH. Then MATH . One now computes the MATH-th component of the gradient; it involves a term MATH that is smaller than REF; there are sums over MATH, with fewer than MATH terms; the sum MATH is bounded by MATH; finally, the number of sites where MATH differs from REF is bounded by MATH. As a result, the MATH-th component of the gradient is bounded by MATH, and we obtain REF . We now estimate the gradient of MATH. One easily checks that MATH . We can use the bound REF for the gradient of MATH. The gradient of the last term is less than MATH, so we can write MATH . Finally, one easily checks that MATH . Using REF , as well as MATH, one gets REF .
math-ph/0107011
Elementary considerations show that MATH for MATH, and MATH otherwise. Inserting REF for MATH, we can therefore write MATH . We now use the fact that MATH and change the order of integration to get MATH . The last integral can be evaluated to be MATH and therefore MATH where we integrated by parts in the last step, using the assumed decay properties of MATH.
math-ph/0107011
To obtain an upper bound to the ground state energy, we use as a trial state a NAME determinant of the MATH lowest eigenvectors of the Laplacian on MATH, including spin. The calculation of the expectation value is essentially the same as the one done by CITE in the NAME case. In the thermodynamic limit the boundary conditions do not matter (see CITE for details), and one can just consider plane waves, with momenta up to the NAME momentum MATH. The function MATH then arises from the integral MATH . We are left with the lower bound. We start with the following decomposition of the NAME potential, proven in REF. If MATH denotes the characteristic function of a ball of radius MATH centered at the origin, then MATH where MATH (compare with REF ). The key to the lower bound is a correlation inequality derived in CITE. It states that for any projections MATH and MATH on the one-particle space, and for antisymmetric MATH, MATH . Here we denote by MATH the one-particle reduced density matrix of MATH, with corresponding density MATH. Using this, with MATH, and integrating over MATH and MATH as in REF , we get that MATH . The function MATH is the integral kernel of MATH, and MATH denotes the spin variables of the electrons. The error term is MATH . Here we introduced the operator MATH; MATH and MATH are the densities corresponding to MATH and MATH, respectively. Since MATH, we can proceed exactly as in CITE to estimate MATH for any MATH. Here MATH measures the "difference" of MATH and MATH. Since MATH is supposed to be the ground state, we can use the upper bound to get the a priori knowledge that MATH; see CITE. Now let MATH be the projection onto the first MATH eigenstates of the Laplacian with periodic boundary conditions. It is shown in CITE that, for any MATH, MATH as MATH. The second term in REF , when divided by MATH, converges in the thermodynamic limit to MATH, as explained in the upper bound. Moreover, MATH. 
Putting together REF , we therefore get that in the ground state MATH, MATH as MATH. The last term is positive, since MATH is positive definite. Minimizing the right side of REF over MATH gives the desired result.
math-ph/0107014
Since MATH is invariant under MATH, it suffices to show that MATH is also invariant. But from REF, MATH, which is clearly invariant under S.
math-ph/0107014
For easy reference we write Hamiltonian REF as MATH where MATH. We restrict our study to solutions that leave the origin at time MATH. Definition. We call ejection solutions those solutions of the Hamiltonian system with Hamiltonian REF and symplectic REF-form REF that leave the origin at time MATH, that is, solutions with MATH . Remark: The set of ejection trajectories is parametrized by a circle. In fact, for these trajectories REF implies that MATH, and we can write MATH and MATH for MATH. For small MATH and MATH, solutions of REF will be close to the solutions of a resonant harmonic oscillator with period and amplitude given by MATH, respectively. Let MATH denote an initial condition for an ejection trajectory, that is, MATH. Fixing MATH and MATH, we have that MATH is uniquely determined by MATH. We write the set of ejection trajectories at time MATH and parameter MATH as MATH . Then MATH . The following lemma shows that, for MATH small enough, an ejection trajectory with angle MATH pointing to the right half-plane will cross the MATH axis transversally at some time MATH. Let MATH. Let MATH be a positive real number with MATH. Let MATH, where MATH. If MATH is small enough, there exist times MATH such that MATH and MATH . Moreover, MATH is a continuous function of MATH. We first prove the existence of MATH. Let MATH. We write MATH where MATH is the harmonic oscillator flow given by MATH where MATH. From REF, MATH satisfies MATH . Let MATH and MATH. Then MATH where MATH is the MATH component of MATH. Let MATH where MATH is a small positive number. Let MATH. We can find MATH small enough such that MATH; this implies that MATH . Therefore, for MATH satisfying REF, MATH has the same sign as MATH, that is, MATH is positive. Analogously we show that MATH is negative. Therefore, by continuity of the flow, there exist times MATH for which MATH and REF are satisfied. 
To prove continuity it suffices to show that at time MATH the projection of the flow on the MATH plane intersects the MATH axis transversally, that is, it suffices to show that MATH. But MATH . For MATH small enough it follows from REF , using the same argument as in the existence part, that MATH. For notational simplicity, in the next lemma we will write MATH and MATH. Let MATH . Then for MATH small enough there exists MATH such that MATH. Let MATH be as in REF . By NAME's Theorem it follows that there exist MATH, MATH such that MATH, which we write as MATH . For an ejection trajectory we have that MATH and REF imply that MATH. Therefore MATH with MATH. MATH satisfies REF and by REF . Solving the first equation for MATH and substituting into the second equation, we obtain MATH . Now we estimate MATH. From the NAME expansions we have that MATH . Using REF we obtain MATH . But from REF we have that MATH, giving that MATH . Therefore there exists a constant MATH such that MATH . We can then write MATH, and REF becomes MATH . Thus for MATH small enough we can find MATH such that MATH and MATH. By the continuity part of REF there is MATH such that MATH. Therefore we have proved that for MATH small enough there is an angle MATH such that the ejection trajectory with this angle will pass through the origin at some future time MATH. But by REF the system is symmetric under reflections about the origin. Therefore, the orbit will pass around the origin an infinite number of times, proving the theorem.
math-ph/0107030
Every square from the MATH-grid containing MATH is included in MATH. On the other hand, every closed ball of radius MATH can be covered by at most REF squares from the grid. Therefore MATH . MATH .
math-ph/0107030
CASE: MATH . CASE: MATH. MATH .