paper | proof |
|---|---|
cs/0010018 | We answer a query simply by repeatedly following the pointer MATH for the subinterval MATH that contains the query point. For each block found via this chain of pointers, we look up the value MATH and compare the priorities of the intervals found in this way. Each successive block in the chain corresponds to an interval of size smaller by a MATH factor than the previous block, so the total number of blocks considered is MATH. For any interval MATH containing the query point there is a maximal block such that MATH contains the subinterval containing the query in that block; then by the assumption of maximality MATH must have an endpoint in the block and is a candidate for MATH. Therefore, the true maximum-priority interval containing the query is one of the ones found by the query, and the query algorithm is correct. |
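The pointer-chasing query described above can be sketched as follows. This is a hypothetical reconstruction: the block layout, branching factor `b`, and field names are our assumptions, since the paper's actual parameters are elided as MATH.

```python
# Hypothetical sketch of the pointer-chasing query; block layout,
# branching factor b, and field names are assumptions (the actual
# parameters are elided as MATH in the text above).

class Block:
    def __init__(self, lo, hi, b):
        self.lo, self.hi, self.b = lo, hi, b
        # best[i]: max priority of an interval, with an endpoint in this
        # block, that covers the i-th subinterval (None if none exists)
        self.best = [None] * b
        # child[i]: finer block for the i-th subinterval (None at bottom)
        self.child = [None] * b

    def sub(self, q):
        width = (self.hi - self.lo) / self.b
        return min(int((q - self.lo) / width), self.b - 1)

def query(root, q):
    """Follow child pointers; each block covers an interval smaller by a
    factor b, and the answer is the best candidate seen along the chain."""
    best, blk = None, root
    while blk is not None:
        i = blk.sub(q)
        cand = blk.best[i]
        if cand is not None and (best is None or cand > best):
            best = cand
        blk = blk.child[i]
    return best

# Two-level example: a coarse block over [0, 100) and a finer one over [30, 40).
root = Block(0, 100, 10)
root.best[3] = 5
fine = Block(30, 40, 10)
fine.best[4] = 9
root.child[3] = fine
```

The walk visits one block per level, so the number of candidates compared is proportional to the depth of the chain, matching the query bound claimed in the proof.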
cs/0010018 | To insert or delete an interval, we create a new copy of each block containing one of the endpoint intervals. By the same argument used to bound query time, there are at most MATH such blocks. For each copied block, we update the priority queues corresponding to subintervals containing the updated interval, copy pointers to these priority queues into the MATH pointers of the new block, and use these priority queues to set each value of MATH. We then copy each pointer MATH from the previous version of the block, except for the one or two subintervals containing the updated interval's endpoints, which are changed to point to the new blocks for those subintervals. Each update causes the creation of at most MATH new blocks, using space MATH. Each update also changes MATH priority queues, in time MATH. |
cs/0010018 | Form a set of intervals MATH with priority MATH for MATH. The maximum priority interval containing MATH has as its left endpoint the predecessor of MATH. Thus, we can use a static version of the data structure described in REF (with MATH) to solve this problem. |
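The reduction just described can be modeled naively as follows: one interval per element, whose left endpoint is the element itself and whose priority equals that endpoint, so the maximum-priority interval stabbed by a query starts at the query's predecessor. The universe bound `U` and the brute-force stabbing query are illustrative assumptions standing in for the static data structure.

```python
# Naive model of the reduction: one interval [x, U) with priority x per
# element x; the max-priority interval stabbed by q starts at q's
# predecessor. U and the brute-force query are illustrative assumptions.

U = 1 << 16  # assumed universe bound

def build_intervals(elements):
    return [(x, U, x) for x in elements]      # (left, right, priority)

def max_priority_stab(intervals, q):
    hits = [p for (left, right, p) in intervals if left <= q < right]
    return max(hits) if hits else None        # = predecessor of q

ivs = build_intervals([1, 3, 7, 12])
```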
cs/0010018 | We consider a left-right sweep of the rectangles by a vertical line; for each position of the sweep line we maintain a dynamic set of intervals formed by the intersections of the rectangles with the sweep line. This intersection changes only when the sweep line crosses the left or right boundary of a rectangle; at the left boundary we insert the MATH-projection of the rectangle and at the right boundary we delete it. With each rectangle boundary we store a pointer to the version of the data structure formed when crossing that boundary. A query can be handled by using the integer predecessor data structure of REF to find the MATH-coordinate of the nearest rectangle boundary to the right of the query point, and then performing a query in the corresponding version of the interval data structure. |
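The sweep can be sketched as below, with full snapshots standing in for the persistent interval structure (a real implementation would use partial persistence to keep space down); the rectangle encoding and names are our assumptions.

```python
# Sketch of the left-right sweep; frozen snapshots stand in for the
# persistent interval structure, and the rectangle encoding is assumed.
import bisect

def build_versions(rects):
    """rects: (x_left, x_right, label). Returns sorted boundary
    x-coordinates and, for each, the active set just left of it."""
    events = sorted({x for (xl, xr, _) in rects for x in (xl, xr)})
    versions, active = [], set()
    for x in events:
        versions.append(frozenset(active))    # set in the gap ending at x
        for r in rects:
            if r[0] == x:
                active.add(r)                 # left boundary: insert
            if r[1] == x:
                active.discard(r)             # right boundary: delete
    return events, versions

def stab(events, versions, qx):
    """Version at the nearest boundary strictly right of qx (the role
    played by the integer predecessor structure in the text)."""
    i = bisect.bisect_right(events, qx)
    return versions[i] if i < len(versions) else frozenset()

events, versions = build_versions([(0, 10, "A"), (5, 20, "B")])
```

Storing the version "just left of" each boundary is what lets a query locate its active set via the nearest boundary to its right, as the proof describes.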
cs/0010018 | We consider the minimum spanning tree verification formulation of the problem, and consider the non-tree edges in sorted order by weight. Our algorithm finds the replacements for each path edge in a certain order; when a path edge's replacement is found we reduce the size of the graph by contracting that edge. This contraction clearly does not change the replacement for the remaining edges. We use a union-find data structure to keep track of the relation between the original graph vertices and the vertices of the contracted graph. Since the contractions will be performed along the edges of a fixed tree (namely, the given path), we can use the linear-time union-find data structure of CITE or its recent simplification by CITE. Our algorithm, then, simply performs the following steps for each edge MATH, in sorted order by edge weight: for each uncontracted edge MATH remaining in the path between MATH and MATH, set that edge's replacement to MATH, contract the edge, and unite MATH and MATH in the union-find data structure. The time per edge MATH is a constant, plus a term proportional to the number of path edges contracted as a result of processing edge MATH. Since each edge can only be contracted once, the total time is linear. |
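The replacement-edge loop can be sketched as follows, with a path-compressing union-find standing in for the linear-time structure cited in the text; numbering the path vertices 0..n-1 is an assumption.

```python
# Sketch of the replacement-edge loop; path-compressing union-find
# stands in for the linear-time structure cited in the text.

def replacements(n, nontree_edges):
    """Path edge i joins vertices i and i+1. nontree_edges are
    (weight, u, v) with u < v. Returns repl[i] for each path edge i."""
    parent = list(range(n))   # find(x) = smallest uncontracted vertex >= x

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    repl = [None] * (n - 1)
    for w, u, v in sorted(nontree_edges):   # process in sorted order by weight
        a = find(u)
        while a < v:                        # uncontracted path edges in [u, v)
            repl[a] = (w, u, v)             # this edge's replacement is found
            parent[a] = a + 1               # contract path edge (a, a+1)
            a = find(a)
    return repl

repl = replacements(4, [(1, 0, 2), (2, 1, 3), (3, 0, 3)])
```

Each path edge is contracted at most once, so the total work beyond the constant per non-tree edge is linear, mirroring the argument above.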
cs/0010018 | We first partition the space MATH into horizontal stripes, according to the maximum-priority horizontal input stripe covering each point in the space; essentially this is just the lower envelope computation of REF. Let MATH denote the minimum priority occurring in this partition. Similarly, we partition the space into vertical stripes according to the maximum-priority vertical input stripe covering each point, and let MATH denote the minimum priority occurring in this partition. Finally, we let MATH denote the maximum priority of any universal stripe. (We set MATH, MATH, or MATH to MATH if the corresponding set of stripes is empty.) We then use this information to search for conflicts, as follows, depending on the types of the two conflicting stripes: CASE: To find a conflict between two horizontal stripes, if one exists, test whether there exists an ambiguity in the construction of the horizontal partition, as discussed below REF. If there is such an ambiguity, let MATH denote the maximum priority of any ambiguity. Then a conflict exists if and only if MATH. Similarly we can find a conflict between two vertical stripes by letting MATH denote the maximum priority of an ambiguity in the vertical partition, and testing whether MATH. CASE: A conflict between two universal stripes exists if and only if two or more universal stripes have priority MATH, and MATH. CASE: A conflict between a universal and a horizontal stripe exists if and only if MATH is also the priority of one of the stripes in the horizontal partition, and MATH. Similarly a conflict between a universal and a vertical stripe exists if and only if MATH is also the priority of one of the stripes in the vertical partition, and MATH. CASE: A conflict between a horizontal stripe and a vertical stripe exists if and only if there is a priority MATH that appears both in the horizontal and the vertical partition. 
Thus, the problem has been reduced to a constant number of comparisons, together with two more complex operations: determining whether MATH appears in either of two sets of priorities, and determining the intersection of those two sets. Since we know the sorted order of the priorities, we can represent them by values in the range MATH and use a simple bitmap to perform these membership and intersection tests in linear total time. |
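The final bitmap step can be sketched as follows: priorities are replaced by their ranks in sorted order, so both membership and intersection reduce to marking a boolean array. Variable names are ours, not the paper's.

```python
# Minimal sketch of the final step: priorities are replaced by their
# ranks in sorted order, so membership and intersection reduce to
# marking a boolean array ("a simple bitmap"). Names are ours.

def rank_reduce(sorted_priorities):
    return {p: i for i, p in enumerate(sorted_priorities)}

def intersect_ranked(h_ranks, v_ranks, m):
    """Ranks lie in range(m); total time is linear in the input sizes."""
    seen = [False] * m
    for r in h_ranks:
        seen[r] = True                      # also answers membership queries
    return [r for r in v_ranks if seen[r]]

ranks = rank_reduce([2, 5, 9, 11])          # priority -> rank
h = [ranks[2], ranks[9]]                    # ranks seen in horizontal partition
v = [ranks[9], ranks[11]]                   # ranks seen in vertical partition
```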
cs/0010018 | If the line is horizontal (vertical), the number of cells cut by the line at most doubles at every even (odd) level of the MATH-tree construction, and remains unchanged at every odd (even) level. The result follows from the MATH bound on the number of levels in the tree. |
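A small simulation illustrates the doubling argument: split the unit square at the midpoint, alternating x-splits (even levels) and y-splits (odd levels), and count cells cut by a horizontal line. Midpoint rather than median splits are a simplifying assumption, since the tree's actual construction is elided.

```python
# Simulation of the doubling argument: the count of cells cut by a
# horizontal line doubles only at x-split levels. Midpoint splits are
# a simplifying assumption.

def cells_cut(levels, y_line):
    cells = [(0.0, 1.0, 0.0, 1.0)]          # (xlo, xhi, ylo, yhi)

    def cut(cs):
        return sum(1 for (_, _, yl, yh) in cs if yl <= y_line < yh)

    counts = [cut(cells)]
    for level in range(levels):
        nxt = []
        for (xl, xh, yl, yh) in cells:
            if level % 2 == 0:              # even level: split in x
                xm = (xl + xh) / 2
                nxt += [(xl, xm, yl, yh), (xm, xh, yl, yh)]
            else:                           # odd level: split in y
                ym = (yl + yh) / 2
                nxt += [(xl, xh, yl, ym), (xl, xh, ym, yh)]
        cells = nxt
        counts.append(cut(cells))
    return counts
```

Here `cells_cut(4, 0.3)` yields `[1, 2, 2, 4, 4]`: the count doubles at x-split levels and is unchanged at y-split levels, as the proof claims for a horizontal line.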
cs/0010018 | The bound on the number of crossed cells follows immediately from REF. The parent of a maximal covered cell must be crossed, and each crossed cell can have at most one maximal covered child, so the number of maximal covered cells is also MATH. |
cs/0010018 | We build a MATH-tree of the rectangle vertices (this can be done in time MATH) and perform a depth-first traversal of the tree. As we traverse the tree, we maintain at each cell of the traversal the following information: CASE: The maximum priority of a rectangle covering the cell, and one or (if they exist) two rectangles having that maximum priority. CASE: A list of the rectangles crossing the cell, sorted by priority. CASE: A sorted list of the horizontal boundaries of the rectangles that cross the cell. CASE: A sorted list of the vertical boundaries of the rectangles that cross the cell. When the traversal reaches a cell MATH, we can determine which rectangles cross or cover the children of MATH, and extract the sorted sublists for its two children, in time linear in the number of rectangles crossing MATH. We also find the set of rectangles that cross MATH but maximally cover one of its children, scan this set for the maximum priority, and use this information (together with the maximum priority of a rectangle covering MATH) to determine the maximum priority of a rectangle covering each child. When the traversal reaches a leaf cell, we apply the algorithm of REF to test whether the cell contains a conflict. While one child of a cell MATH is being processed recursively, we store with MATH only the portions of the sorted lists that have not been passed to that child, so that each rectangle or rectangle edge is stored in one of the lists only at a single level of the tree, keeping the total space linear. All operations performed when traversing a cell take time linear in the number of rectangles crossing or maximally covering the cell, so by REF the total time is MATH. |
cs/0010019 | Let MATH be a length function and let MATH be a MATH-ensemble. We define the binary relation: MATH . Clearly, this relation is polynomial-time recognizable, since MATH can be computed in polynomial time. Also, the relation is evasive (with respect to MATH) since for every MATH there is at most one MATH satisfying MATH, and so MATH . On the other hand, consider the machine MATH that computes the identity function, MATH for all MATH. It violates the correlation intractability requirement, since for all MATH, MATH . In fact, since MATH is polynomial-time recognizable, even the weak correlation intractability of MATH is violated. |
cs/0010019 | The intuition is that since MATH is evasive, it is infeasible for the forger to find a message MATH so that MATH. Thus, a forgery of the modified scheme must be due to REF, which yields a breaking of the original scheme. Formally, let MATH be an adversary who mounts an adaptive chosen message attack on MATH, and whose success probability in obtaining an existential forgery (in the Random Oracle Model) is MATH. Assume, toward contradiction, that MATH is not negligible in the security parameter MATH. Denote by REL the event in which during an execution of MATH, it hands out a message MATH for which MATH (either as a query to the signer during the chosen message attack, or as the message for which it found a forgery at the end), and let MATH be the probability of that event. Using the hypothesis that MATH is evasive, we prove that MATH is negligible in the security parameter MATH. Suppose, to the contrary, that MATH is not negligible. Then, we can try to efficiently find pairs MATH by choosing a key-pair for MATH, and then implementing the attack, playing the role of both the signer algorithm and the adversary MATH. With probability MATH, one of MATH's messages during this attack satisfies MATH, so just choosing at random one message that was used and outputting it yields a success probability of MATH (with MATH being the number of different messages that are used in the attack). If MATH is not negligible, then neither is MATH, contradicting the evasiveness of MATH. It is clear that, barring the event REL, the execution of MATH against the original scheme MATH would be identical to its execution against MATH. Hence the probability that MATH succeeds in obtaining an existential forgery against MATH is at least MATH. Since MATH is negligible and MATH is not, MATH's probability of obtaining an existential forgery against MATH is also not negligible, contradicting the assumed security of MATH. |
cs/0010019 | When we use an ensemble MATH to implement the random oracle in the scheme MATH, we obtain the following real scheme (which we denote MATH): CASE: Uniformly pick MATH, set MATH, MATH, and output MATH. CASE: Output MATH. CASE: Output MATH. Consider now what happens when we use the ensemble MATH to implement the scheme MATH (recall the definition of MATH from REF). Since MATH is evasive, it follows from REF that MATH is secure in the Random Oracle Model. However, when we use the ensemble MATH to implement the scheme, the seed MATH becomes part of the public verification-key, and hence is known to the adversary. The adversary can simply output the pair MATH, which will be accepted by MATH as a valid message-signature pair (since MATH). Hence, the adversary achieves existential forgery (of MATH) under a key-only attack. Alternatively, the adversary can ask the legitimate signer for a signature on MATH, hence obtaining the secret signing-key (that is, total forgery). |
cs/0010019 | Since MATH is evasive, it follows from REF that MATH is secure in the Random Oracle Model. On the other hand, suppose that one tries to replace the random oracle in the scheme by an ensemble MATH (where MATH is the index in the enumeration). An adversary, given a seed MATH of a function in MATH, can then set MATH and output the pair MATH, which would be accepted as a valid message-signature pair by MATH. Alternatively, it can ask the signer for a signature on this message MATH, and so obtain the secret signing-key. |
cs/0010019 | Below we describe such a signature scheme. For this construction we use the following ingredients. CASE: MATH is a signature scheme, operating in the Random Oracle Model, that is existentially unforgeable under a chosen message attack. CASE: A fixed (and easily computable) parsing rule which interprets messages as triples of strings MATH. CASE: The algorithms MATH and MATH of a NAME system, as described in REF above. CASE: Access to three independent random oracles. This is very easy to achieve given access to one oracle MATH; specifically, by setting MATH, MATH and MATH. Below we use oracle MATH for the basic scheme MATH, oracle MATH for the NAME, and oracle MATH for our evasive relation. We note that if MATH is a MATH-oracle, then so are MATH and MATH. CASE: The universal function ensemble MATH from Subsection REF, with proper complexity bound MATH. We denote by MATH the universal machine that decides the relation MATH. That is, on input MATH, machine MATH invokes the MATH evaluation algorithm, and accepts if MATH. We note that MATH works in time MATH in the worst case. More importantly, if MATH is a function ensemble that can be computed in time MATH (where MATH is some polynomial), then for any strings MATH, on input MATH, machine MATH works for only MATH many steps. Using all the above, we describe an ideal signature scheme MATH. As usual, the key generation algorithm, MATH, remains unchanged. The signature and verification algorithms proceed as follows. CASE: MATH CASE: Parse MATH as MATH, and set MATH and MATH. Let MATH. CASE: Apply MATH to verify whether MATH is a valid NAME, with respect to the oracle MATH and security parameter MATH, for the claim that the machine MATH accepts the input MATH within time MATH. (The punch-line is that we do not directly check whether the machine MATH accepts the input MATH within time MATH, but rather only whether MATH is a valid NAME of this claim. Although MATH, this NAME can be verified in polynomial-time.) 
CASE: If MATH is a valid proof, then output MATH. CASE: Otherwise, output MATH. CASE: MATH REF+REF. As above REF. If MATH is a valid proof, then accept REF. Otherwise, output MATH. The computation required in REF of the signature and verification algorithms can be executed in polynomial-time. The reason is that (by definition) verifying a NAME can be done in polynomial-time, provided the statement can be decided in at most exponential time (which is the case here since we have MATH). It is also easy to see that for every pair MATH output by MATH, and for every MATH and every MATH, the string MATH constitutes a valid signature of MATH relative to MATH and the oracle MATH. To show that the scheme is secure in the Random Oracle Model, we first observe that on security parameter MATH it is infeasible to find a string MATH so that MATH, since MATH is evasive. By REF, it is also infeasible to find MATH such that MATH and yet MATH is a valid NAME of the contrary relative to MATH (with security parameter MATH). Thus, it is infeasible for a polynomial-time adversary to find a message that would pass the test on REF of the signature/verification algorithms above, and so we infer that the modified signature scheme is secure in the Random Oracle Model. We now show that for every candidate implementation, MATH, there exists a polynomial-time adversary effecting total break via a chosen message attack (or, analogously, an existential forgery via a ``key only'' attack). First, for each function MATH, denote MATH, MATH, and MATH. Then denote by MATH the ensemble of the MATH functions. Suppose that MATH is the MATH function ensemble in the enumeration mentioned above, namely MATH. Given a randomly chosen MATH-bit seed MATH, the adversary generates a message MATH so that MATH is a NAME (with respect to the appropriate security parameter) for the true statement that MATH accepts the input MATH within MATH steps, where MATH and MATH. 
Recall that the above statement is indeed true (since MATH), and hence the adversary can generate a proof for it in time which is polynomial in the time that it takes to compute MATH. (By the perfect completeness property of the NAME system, the ability to prove correct statements holds for any choice of the random oracle, and in particular when it is equal to MATH.) Since this adversary is specifically designed to break the scheme in which the random oracle is implemented by MATH, then the index MATH - which depends only on the choice of MATH - can be incorporated into the program of this adversary. By REF, it is possible to find MATH (given an oracle access to MATH) in time polynomial in the time that it takes MATH to accept the input MATH. Since MATH is polynomial-time computable, then MATH works on the input MATH in polynomial time, and thus the described adversary also operates in polynomial-time. By construction of the modified verification algorithm, MATH is a valid signature on MATH, and so existential forgery is feasible a-priori. Furthermore, requesting the signer to sign the message MATH yields the signing key, and thus total forgery. |
cs/0010019 | In this proof we use the same notations as in the proof of REF. Let MATH be an encryption scheme that is semantically secure in the Random Oracle Model, and we modify it to get another scheme MATH. The key generation algorithm remains unchanged, and the encryption and decryption algorithms utilize a random oracle MATH, which is again viewed as three oracles MATH and MATH. CASE: of plaintext MATH using the public encryption-key MATH: CASE: Parse MATH as MATH, set MATH and MATH, and let MATH. CASE: If MATH is a valid NAME, with respect to oracle MATH and security parameter MATH, for the assertion that MATH accepts the pair MATH within MATH steps, then output MATH. CASE: Otherwise (that is, MATH is not such a proof), output MATH. CASE: of ciphertext MATH using the private decryption-key MATH: CASE: If MATH, output MATH and halt. CASE: If MATH, output MATH and halt. CASE: If MATH then parse MATH as MATH, and set MATH, MATH, and MATH. If MATH is a valid NAME, with respect to oracle MATH and security parameter MATH, for the assertion that MATH accepts the pair MATH within MATH steps, then output MATH and halt. CASE: Otherwise output MATH. The efficiency of this scheme follows as before. It is also easy to see that for every pair MATH output by MATH, and for every plaintext MATH, the equality MATH holds for every MATH. To show that the scheme is secure in the Random Oracle Model, we observe again that it is infeasible to find a plaintext that satisfies the condition in REF of the encryption algorithm (respectively, a ciphertext that satisfies the condition in REF of the decryption algorithm). Thus, the modified ideal encryption scheme (in the Random Oracle Model) inherits all security features of the original scheme. 
Similarly, to show that replacing the random oracle by any function ensemble yields an insecure scheme, we again observe that for any such ensemble there exists an adversary who - given the seed MATH - can generate a plaintext MATH (respectively, a ciphertext MATH) that satisfies the condition in REF of the encryption algorithm (respectively, the condition in REF of the decryption algorithm). Hence, such an adversary can identify when MATH is being encrypted (thus violates semantic security), or ask for a decryption of MATH, thus obtaining the secret decryption key. |
cs/0010019 | The proof of REF is a straightforward generalization of the proof of REF. Actually, we need to consider two cases: the case MATH and the case MATH. In the first case, we proceed as in the proof of REF (except that we define MATH). In the second case, for every ensemble MATH, we define the relation MATH. We show that MATH is evasive by showing that, for every MATH and MATH, there exist at most polynomially (in MATH) many MATH's such that MATH. This is the case since MATH implies that there exists some MATH such that MATH and MATH. But using the case hypothesis we have MATH, which implies that MATH and hence also MATH. Next, using the other case hypothesis (that is, MATH), we conclude that MATH. Therefore, there could be at most polynomially many such MATH's, and so the upper bound on the number of MATH's paired with MATH follows. The evasiveness of MATH as well as the assertion that MATH is polynomial-time computable follow (assuming that the function MATH itself is polynomial-time computable). On the other hand, consider the machine MATH that, on input MATH, outputs the MATH-bit prefix of MATH. Then, for every MATH, we have MATH. For the proof of REF, assume that MATH (for all but finitely many MATH's). We start by defining the ``inverse'' of the MATH function MATH (where, in case there exists no MATH such that MATH, we define MATH). By definition it follows that MATH, for all MATH's (because MATH belongs to the set MATH), and that MATH, whenever there exists some MATH for which MATH. Next we define MATH. This relation is well defined since, by the conditions on the lengths of MATH and MATH, we have MATH and so the function MATH is indeed defined on the input MATH. In case MATH, this relation may not be polynomial-time recognizable. Still, it is evasive with respect to MATH, since with security parameter MATH we have for every MATH. 
Using MATH, we conclude that the set of MATH's paired with MATH forms a negligible fraction of MATH, and so that MATH is evasive. Again, the machine MATH, that on input MATH outputs the MATH-bit prefix of MATH, satisfies MATH, for all MATH's. |
cs/0010019 | We assume, for simplicity that MATH (and so MATH and MATH). Given MATH as stated, we again adapt the proof of REF. This time, using MATH, we define the relation MATH . Notice that in this definition we have MATH, and also MATH, so this relation is indeed MATH-restricted. Again, it is easy to see that MATH is polynomial-time recognizable, and it is evasive since every string MATH is coupled with at most a MATH fraction of the possible MATH-bit long strings, and MATH. (Here we use the hypothesis MATH.) On the other hand, consider a (real-life) adversary that given the seed MATH for the function MATH, sets the input to this function to be equal to MATH. Denoting the MATH-prefix of MATH (equiv., of MATH) by MATH, it follows that MATH is a prefix of MATH and so MATH. Thus, this real-life adversary violates the (restricted) correlation intractability of MATH. |
cs/0010019 | For simplicity, we consider first the case MATH. Let MATH be a MATH-ensemble. Adapting the proof of REF, we define the relation MATH (Notice that since MATH, the MATH's are indeed in the range of the function MATH.) Clearly, this relation is polynomial-time recognizable. To see that this relation is evasive, notice that for any fixed MATH-bit seed MATH, we have MATH . Hence, the probability that there exists a seed MATH for which MATH holds, for MATH, is at most MATH. It follows that MATH . However, the corresponding multi-invocation restricted correlation intractability condition does not hold: For any MATH, setting MATH we get MATH. To rule out the case MATH, we redefine MATH so that MATH if MATH for MATH and MATH for MATH. |
cs/0010019 | Let MATH. For every seed MATH, we define MATH so that MATH equals MATH if MATH is the smallest integer such that MATH. In case MATH holds for all MATH's, we define MATH arbitrarily. Let MATH, and MATH (S stands for ``Small image''). Since MATH is evasive, it is infeasible to find a MATH not in MATH. Thus, for every probabilistic polynomial-time MATH, MATH. On the other hand, the probability that such MATH outputs a MATH so that MATH is bounded above by MATH. Combining the two cases, the proposition follows. |
cs/0010021 | Follows immediately from REF. |
cs/0010021 | Follows immediately from REF. |
cs/0010021 | Follows immediately from REF. |
cs/0010021 | Follows immediately from REF. |
cs/0010021 | We use REF to convert the price history MATH and the strategy set MATH into a system of linear constraints MATH and MATH, with the next day's price change MATH determined by MATH for some MATH. Since the values MATH are computable in time polynomial in MATH, this conversion takes time polynomial in MATH. Then, MATH. Since MATH, the constraints in MATH must be vacuous; in other words, for each MATH with MATH, the corresponding constraint in MATH is MATH. Therefore, MATH. Furthermore, since both MATH and MATH are constant with respect to MATH, MATH . So to compute the desired MATH, we compute MATH and MATH as follows. To avoid the degeneracy caused by MATH, we work with MATH instead of MATH by replacing MATH with MATH and making related changes. Let MATH, which is the center of MATH. As is true for MATH, as MATH, the vector MATH converges weakly to a normal distribution centered at the MATH-dimensional point MATH . Under the assumption that each MATH is nonzero, the distribution of MATH is full-dimensional (within its restricted MATH-dimensional space), as in the limit the variance of each coordinate MATH is nonzero conditioned on the values of the other coordinates, which implies that the smallest subspace containing the distribution must contain all MATH axes. We can calculate the covariance matrix of MATH directly from the MATH, as it is equal to the covariance matrix for a single trader: on the diagonal, MATH; and for off-diagonal elements, MATH. Given MATH, MATH has density MATH for some constant MATH, and we can evaluate this density in MATH time given MATH, which is MATH time under our assumption that MATH is fixed. Let MATH be the MATH-th constraint of MATH, that is, MATH. Let MATH denote the constraint MATH. Let MATH. We next convert the constraints of MATH on MATH into constraints on MATH. First of all, notice that MATH. So MATH if and only if MATH. The term MATH may not be constant. 
In such a case, as MATH, the hyperplane bounding the half-space MATH keeps moving away from the origin, which presents some technical complication. To remove this problem, we analyze the term in three cases. If MATH, then since MATH is the center of MATH, as MATH, MATH converges to MATH. In other words, MATH is infeasible with probability MATH in the limit. Then, since MATH, such MATH cannot exist in MATH. Similarly, if MATH, then MATH and MATH is vacuous. The interesting constraints are those for which MATH; in this case, by algebra, MATH if and only if MATH. Thus, let MATH be the matrix formed by these constraints; MATH can be computed in MATH time. Then, since MATH is constant with respect to MATH, MATH. Similarly, MATH converges to REF MATH, REF MATH, or REF MATH for REF MATH, REF MATH, or REF MATH, respectively. Therefore, by REF, MATH equals MATH for REF and equals MATH for REF. REF requires further computation. MATH. The numerator and denominator of the ratio in REF are both integrals of the distribution of MATH in the limit over the bodies of possibly infinite convex polytopes. To deal with the possible infiniteness of the convex bodies MATH and MATH, notice that the density drops exponentially. So we can truncate the regions of integration to some finite radius around the MATH-dimensional origin MATH with only exponentially small loss of precision. Finally, since the distribution of MATH in the limit is normal, by applying the NAME integration algorithm for log-concave distributions CITE to the numerator and denominator separately, we can approximate MATH within the desired time complexity. |
cs/0010021 | Let MATH represent the output of the MATH-th NOR gate, where MATH. Without loss of generality we assume that gate MATH is the output gate. The variables MATH and MATH are dummies to allow for a zero right-hand-side in MATH; our first two constraints are MATH and MATH. Suppose gate MATH has inputs MATH and MATH. The NOR operation is implemented by the following three linear inequalities: MATH . The first two constraints ensure that the output is never MATH if an input is MATH, while the last requires that the output is MATH if both inputs are MATH; the constraints are thus satisfied if and only if MATH. Using the dummy variables, the first two constraints are written as MATH . Let MATH be the system obtained by combining all of these inequalities. Then for each MATH, MATH determines MATH for all MATH. The vector MATH is chosen so that MATH. |
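The three inequalities can be written in the standard 0/1 encoding consistent with the description above; the exact coefficients are elided as MATH in the text, so treat this as an assumed reconstruction: z <= 1 - x, z <= 1 - y, and z >= 1 - x - y.

```python
# Assumed reconstruction of the three NOR inequalities described above:
#   z <= 1 - x,   z <= 1 - y,   z >= 1 - x - y.
# (The actual coefficients are elided as MATH in the text.)

def nor_constraints_hold(x, y, z):
    return z <= 1 - x and z <= 1 - y and z >= 1 - x - y

# Over Boolean values, the constraints hold exactly when z = NOR(x, y):
# the first two forbid z = 1 when an input is 1, and the last forces
# z = 1 when both inputs are 0.
for x in (0, 1):
    for y in (0, 1):
        for z in (0, 1):
            assert nor_constraints_hold(x, y, z) == (z == int(not (x or y)))
```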
cs/0010021 | For each MATH, let MATH be the constraint MATH. To turn these inequalities into equations, we add slack variables to soak up any excess over MATH, with some additional care taken to ensure that there is a unique assignment to the slack variables for each setting of the variables MATH. We will use the following MATH variables, which we think of as alternate names for MATH through MATH. Observe that for each MATH, MATH can take on any integer value MATH between MATH and MATH, and that for any fixed value of MATH, the MATH constraints uniquely determine the values of MATH and MATH for all MATH. So each constraint MATH permits MATH to take on precisely the same values MATH to MATH that MATH does, and each MATH uniquely determines MATH and thus the assignment of all MATH and MATH. |
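The slack-variable step can be illustrated in generic form (our notation, since the actual constraints are elided): an inequality a·x >= c becomes the equation a·x - s = c with s >= 0, and s is uniquely determined by the assignment to x.

```python
# Generic form of the slack-variable step (our notation): the
# inequality sum(a_j * x_j) >= c becomes the equation
# sum(a_j * x_j) - s = c with s >= 0, and s is uniquely determined
# by the assignment to x.

def slack_value(a, c, x):
    """Unique slack s >= 0 turning the inequality into an equation,
    or None if the inequality is violated."""
    s = sum(aj * xj for aj, xj in zip(a, x)) - c
    return s if s >= 0 else None
```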
cs/0010021 | First of all, MATH because a MATH machine is a CPP machine that happens not to have any abstaining paths. For the converse direction, represent each abstaining path of a CPP machine by a pair consisting of one accepting and one rejecting path, and each accepting or rejecting path by two accepting or rejecting paths. Then the resulting MATH machine accepts if and only if the CPP machine does. |
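The path-doubling argument can be checked in terms of path counts: each abstaining path becomes one accepting plus one rejecting path, and every accepting or rejecting path is duplicated. Acceptance by a strict majority of accepting over rejecting paths is our assumed acceptance rule.

```python
# The path-doubling transformation in terms of path counts; strict
# majority acceptance is our assumed rule.

def transform(acc, rej, abstain):
    # abstaining path -> one accepting + one rejecting path;
    # accepting/rejecting paths are duplicated
    return 2 * acc + abstain, 2 * rej + abstain   # (acc', rej')

def accepts(acc, rej):
    return acc > rej

# The transformed machine (with no abstentions) accepts iff the
# original does, since 2a + t > 2r + t exactly when a > r.
for a in range(4):
    for r in range(4):
        for t in range(4):
            assert accepts(*transform(a, r, t)) == accepts(a, r)
```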
cs/0010021 | To show MATH, replace each rejecting path of a MATH machine with an abstaining path in a MATH machine. For the converse direction, replace each abstaining path of the MATH machine with a rejecting path in the MATH machine. |
cs/0010021 | Immediate from REF and the definition of BCPP and CPP. |
cs/0010021 | Let MATH be a deterministic implementation of a MATH machine, where MATH and each MATH supplies a witness for the MATH-th oracle query. We will show that the language MATH accepted by MATH is in MATH. To simplify the presentation, we assume that each oracle query is a Boolean formula with a fixed number MATH of variables, where MATH is polynomial in MATH, and that MATH is an assignment for those variables. We assume that MATH consists of a sequence of functions MATH for generating oracle queries, a set of MATH verifiers MATH for verifying the witnesses MATH, and a combining function MATH that produces the output from the input and the outputs of the MATH. Each function MATH takes as input the input MATH and the outputs of MATH through MATH. MATH sees the output of MATH and the input MATH. MATH sees the input MATH and the outputs of MATH through MATH. The output of the combined machine MATH is the output of any computation path where MATH is chosen so that MATH whenever such a MATH exists. In other words, we demand that MATH be a satisfying assignment when possible, and ignore those paths where satisfiable queries are issued but satisfying assignments are not supplied. We will represent MATH with a noncommittal machine MATH, where each MATH is a random bit-vector replacing the corresponding MATH, and MATH is an extra supply of random bits used to amplify the good computation paths to overwhelm the bad ones. This amplification process is a little complicated, because it is not enough to amplify paths that find good witnesses to particular queries; it may be that a bad witness for an earlier query causes some MATH to issue a different query from the correct one. So we must amplify a path that finds a witness to query MATH enough to overwhelm not only the exponentially many invalid witnesses to query MATH, but also the exponentially many valid witnesses that might be returned to instances of queries MATH through MATH based on an incorrect answer to query MATH. 
Let MATH be the vector of outputs of MATH. For each MATH, we define an amplification exponent MATH as follows: MATH . We will write MATH for the coefficient MATH; these coefficients MATH are chosen to make REF work below. Now let MATH whenever MATH for all MATH, where MATH is the output of MATH through MATH in the computation of MATH. If MATH for some MATH, MATH. The effect of the MATH bits is to set the weight of each non-abstaining path to MATH. Clearly MATH can be computed in polynomial time as long as MATH is polynomial. The number of MATH bits needed is the maximum value of MATH, which is MATH. For this to be polynomial, we need MATH. A good path MATH is precisely one for which each MATH is the correct output of the NAME. A bad path MATH is one in which one or more of the MATH values is incorrect. We will match bad paths with good paths, and show that the weight of each good path is much larger than the total weight of all bad paths mapped to it. Identify each path with the sequence MATH that generates it. Let MATH generate some bad path. Let MATH be the first point at which MATH is an invalid witness to a satisfiable query. Then there is a good path MATH such that MATH for MATH. Furthermore, if MATH and MATH are the vectors of verifier outputs for MATH and MATH, then not only is MATH for MATH, but also MATH while MATH since the only false verifier outputs are false negatives. The maximum value for MATH is obtained if MATH for MATH; so we have MATH . Now MATH so MATH. But the ratio between the weight of MATH and its corresponding good path MATH is then at most MATH. Since there are only MATH paths altogether, there are fewer than MATH bad paths; thus, the ratio between the total weight of all bad paths mapped to MATH and the weight of MATH is less than MATH. Summing over all good paths shows that the total weight of all bad paths is less than half the total weight of all good paths, so at least MATH of all non-abstaining paths are good. 
It follows that, conditioned on not abstaining, MATH accepts with probability greater than MATH if MATH accepts, and accepts with probability less than MATH if MATH rejects. Hence MATH. |
cs/0010021 | Let membership in MATH be computed by MATH. Assume MATH (the case MATH is symmetric). Consider MATH independent executions of MATH with input MATH; call the random variables representing their outputs MATH. Because the executions are independent, for any MATH vector of values MATH. So conditioning on no abstentions, MATH has a binomial distribution with MATH, and NAME bounds imply MATH is exponentially small in MATH. Since MATH is constant and we can make MATH polynomially large in MATH, the result follows. |
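Since the actual parameters are masked here, a concrete stand-in may help: the following sketch is our own (the per-run success probability 2/3 and the repetition counts are assumptions, not the paper's values). It computes the exact probability that a majority vote over k independent runs is wrong, which the Chernoff-bound argument above shows is exponentially small in k.

```python
from math import comb

def majority_error(p_correct: float, k: int) -> float:
    """Probability that the majority of k independent runs is wrong,
    given that each run is correct with probability p_correct."""
    # The majority errs exactly when at most k // 2 runs are correct
    # (k is taken odd so there are no ties).
    return sum(comb(k, i) * p_correct**i * (1 - p_correct)**(k - i)
               for i in range(k // 2 + 1))

# With per-run correctness 2/3, the majority's error probability
# decays exponentially as k grows, as the Chernoff bound predicts.
for k in (1, 11, 101):
    print(k, majority_error(2/3, k))
```

Making k polynomially large in the input size then drives the conditional error below any desired constant, which is all the proof needs.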
cs/0010021 | First we show that bounded market prediction is a member of BCPP. Given a market, construct a noncommittal NAME machine MATH whose input is the price history and strategies, and whose random inputs supply the settings for the population variables MATH. Let MATH abstain if the price history is inconsistent with the input and population variables; depending on the model, this is either a matter of checking the linear inequalities produced by REF or the equations produced by REF . Otherwise, MATH accepts if the market rises and rejects if the market falls on the next day. The probability that MATH accepts thus equals the probability that the market rises: either more than MATH or less than MATH. Since the problem is to distinguish between these two cases, MATH solves the problem within the definition of a MATH-machine. In the other direction, we reduce from any NAME MATH. Suppose MATH is accepted by some NAME MATH. We will translate MATH and its input MATH into a bounded market prediction problem. First use REF to amplify the conditional probability that MATH accepts to either more than MATH or less than MATH as bounded market prediction demands. Then convert MATH into two polynomial-size circuits, one computing MATH and the other computing MATH . Without loss of generality we may assume that MATH and MATH are built from NOR gates. Applying REF to each yields two sets of constraints MATH and MATH and column vectors MATH and MATH such that MATH if and only if MATH and MATH if and only if MATH, where MATH satisfies the previous linear constraints and MATH is the initial prefix of MATH consisting of variables not introduced by the construction of REF . We also have from REF that there is a one-to-one correspondence between assignments of MATH and assignments of MATH satisfying the MATH constraints, so probabilities are not affected by this transformation. 
Now use REF to construct a market model in which MATH, MATH, and MATH are enforced by the strategies and price history, and MATH determines the price change on the next day of trading. Thus the consistent settings of the variables MATH are precisely those corresponding to settings of MATH for which MATH, or, in other words, those yielding computation paths that do not abstain. The market rises when MATH, or when MATH accepts. So if we can predict whether the market rises or falls with conditional probability at least MATH, we can predict the likely output of MATH. It follows that bounded market prediction for the AS+FI model is NAME. To show the similar result for the AS+PI model, use REF to convert the constraints MATH, MATH into a system of linear equations MATH, and then proceed as before, using REF to convert this system to a price history and letting MATH determine the price change (and thus the sign of the price change) on the next day of trading. |
cs/0010021 | Similar to the proof of REF . |
cs/0010028 | We prove the lemma by structural induction on the syntax of the underlying program. It uses the concrete semantic rules of REF , the definition of MATH in REF , and the specifications of the abstract operations given in REF. The proof is straightforward due to the close correspondence of the concrete and the abstract semantics. We only detail the reasoning for the base case and for the case of a goal MATH where MATH is an atom of the form MATH. The other cases are similar. CASE: Let MATH and MATH. Assume that MATH and MATH. It must be proven that MATH . This relation holds because of the three following facts: Induction step. Let MATH and MATH, where MATH is an atom of the form MATH. Assume that MATH and MATH. It must be proven that MATH . By REF , there exist program substitutions and program sequences such that MATH . Moreover, by definition of MATH, there exist abstract values such that MATH . The following assertions hold. By MATH, MATH, and the induction hypothesis, MATH . By MATH, MATH, MATH, and the specification of SUBST, MATH . By MATH, MATH, MATH, and the specification of RESTRG, MATH . By MATH, MATH, MATH, and the hypothesis that sat safely approximates MATH, MATH . Finally, by MATH, MATH, MATH, MATH, MATH, MATH, and the specification of EXTGS, MATH . |
cs/0010028 | The result follows from the definition of TAB in REF , the definition of TCB in REF, and REF . |
cs/0010028 | Let MATH be the concrete semantics of the underlying program. Since sat is pre-consistent, there exists a concrete behavior MATH such that CASE: MATH, and CASE: sat safely approximates MATH. The first condition implies that MATH since TCB is monotonic and MATH. The second condition and REF imply that MATH safely approximates MATH . The result follows from the two implied statements and REF . |
cs/0010028 | Assume that sat safely approximates MATH. Let MATH and MATH. It must be proven that MATH . Assume that the left part of the implication holds. REF implies that MATH . Since sat is a post-fixpoint and Cc is monotonic, MATH and then MATH . |
cs/0010028 | Let us abbreviate MATH by MATH. It is sufficient to prove that, for any MATH and any MATH, MATH . Fix MATH, MATH, and MATH satisfying the left part of the implication. By REF , MATH . Since sat safely approximates every MATH, MATH for all MATH. Finally, since MATH is chained-closed, MATH . |
cs/0010028 | The proof is in three steps. First we construct a sequence MATH of lower-approximations of MATH which is not necessarily a chain; then we modify it to get a chain MATH; finally, we show that MATH. The proof uses the following property of program substitution sequences, whose proof is left to the reader. If MATH, MATH and MATH are program substitution sequences such that MATH and MATH, then MATH and MATH have a least upper-bound, which is either MATH or MATH. The least upper-bound is denoted by MATH in the proof. CASE: Since sat is pre-consistent, there exists a concrete behavior MATH such that sat safely approximates MATH and MATH. The sequence MATH is defined by MATH . Since MATH, TCB is monotonic and MATH is a fixpoint of TCB, it follows that MATH . Moreover, by REF , sat safely approximates every MATH. CASE: MATH is now constructed by induction over MATH. The correctness of the construction process requires proving that, after each induction step, the relation MATH holds. We first define MATH . Let MATH. Assume, by induction, that MATH. For every MATH, we define MATH . Since MATH and MATH, we have that MATH is well-defined and MATH. Moreover, since sat safely approximates MATH (by induction) and MATH, and MATH is equal either to MATH or MATH, in the definition of MATH, we have that sat safely approximates every MATH. CASE: The NAME sequence of the concrete semantics is a chain MATH defined as follows: MATH . Since MATH and TCB is monotonic, it follows, by induction, that MATH . Therefore, by definition of the least upper bound and since the least fixpoint is the limit of the NAME sequence, MATH . Thus, MATH . |
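The final step relies on the least fixpoint of a monotone operator being the limit of its ascending (Kleene) chain starting from the bottom element. Since the actual operator TCB is masked, the following is a generic toy illustration with names of our own choosing:

```python
def least_fixpoint(step, bottom=frozenset()):
    """Iterate a monotone operator on finite sets from the bottom
    element until it stabilises; the limit is the least fixpoint."""
    x = bottom
    while True:
        nxt = step(x)
        if nxt == x:
            return x
        x = nxt

# Toy example: transitive-closure facts of a tiny graph, computed by
# an immediate-consequence-style operator (our stand-in for TCB).
edges = {('a', 'b'), ('b', 'c')}

def tc_step(facts):
    return frozenset(edges | {(x, z) for (x, y) in facts
                              for (y2, z) in facts if y == y2})

print(least_fixpoint(tc_step))  # the chain stabilises after adding ('a', 'c')
```

Monotonicity guarantees the iterates form a chain, mirroring the role of the chain MATH in the proof.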
cs/0010028 | The result is an immediate consequence of REF . |
cs/0010028 | The fact that MATH, for all MATH, is a direct consequence of the definition of the operation MATH. Moreover, the hypotheses on the sequences ensure that MATH; thus the sequence MATH is stationary. |
cs/0010029 | By structural induction on MATH. |
cs/0010029 | By structural induction on MATH. |
cs/0010029 | By structural induction on MATH. |
cs/0010029 | By structural induction on MATH. |
cs/0010029 | For any type MATH, we have MATH, hence by REF , MATH, that is MATH has no solution. For the second proposition, we prove its contrapositive. Suppose MATH has a solution, say MATH. By definition of a maximum and REF , we have MATH. Hence by REF , MATH. By the rules in REF , MATH. Therefore MATH, since otherwise MATH would contain MATH as a strict subexpression which is impossible. |
cs/0010029 | The proof of the first part is by structural induction. For the base case, suppose MATH. Then by Rule (Var), MATH and hence for some MATH, we have MATH. Thus again by (Var), MATH. Now consider the case MATH where the inductive hypothesis holds for MATH. By REF , there exists a type substitution MATH such that MATH, and MATH where MATH for each MATH. Thus by REF , MATH. By the inductive hypothesis, for all MATH we have MATH where MATH, therefore by transitivity of MATH we have MATH and hence by REF , MATH (that is, MATH). Now suppose MATH. By Rule (Pred), there exists a type substitution MATH such that MATH where MATH for each MATH. Thus by REF , MATH. By the first part of the statement, for all MATH we have MATH where MATH, therefore by transitivity of MATH we have MATH and hence by Rule (Pred), MATH. The final case for a query follows directly from Rule (Query). |
cs/0010029 | The proof of the first part is by structural induction. For the base case, suppose MATH. Then by Rule (Var), MATH. If MATH, there is nothing to show. If MATH, then by definition of an ordered substitution, MATH and hence MATH where MATH. Now consider the case MATH where the inductive hypothesis holds for MATH. By REF , there exists a type substitution MATH such that MATH, and MATH where MATH for each MATH. By the inductive hypothesis, for all MATH we have MATH where MATH, and hence by transitivity of MATH and REF , MATH (that is, MATH). Now consider an atom MATH. By Rule (Pred), there exists a type substitution MATH such that MATH where MATH for each MATH. By the inductive hypothesis, for all MATH we have MATH where MATH, and hence by Rule (Atom), MATH. |
cs/0010029 | We first show that MATH is a well-defined type substitution. Since MATH is linear, MATH and hence MATH is uniquely defined. Moreover, since MATH is idempotent, MATH cannot occur in MATH. Therefore MATH, and hence by the condition on MATH in the statement, MATH. For the inequality MATH and for each MATH such that MATH has a non-variable term in MATH, we have that the same inequality is also in MATH, and so MATH, and consequently MATH, is a solution for it. For each MATH such that MATH, we have a corresponding inequality MATH in MATH. Since MATH is true and MATH, it follows that MATH is true. |
cs/0010029 | Consider a non-variable position MATH in MATH. There is exactly one inequality in MATH with MATH as left-hand side. Moreover, MATH is a flat type (declared range type of a function), thus linear, and (because of indexing the parameters in MATH by MATH) has no parameters in common with any other left-hand side of MATH. Now consider a position MATH where MATH has the variable MATH. Because of the linearity of MATH, there is exactly one inequality in MATH with MATH as left-hand side. Let MATH be a subset of MATH with MATH for all MATH. By the definition of MATH, if MATH for some variable MATH or if MATH, then MATH. If however MATH is MATH and MATH is MATH for some positions MATH and MATH, then MATH is a prefix of MATH, and so, since we use the positions to index the parameters, MATH. |
cs/0010029 | Termination is proved by remarking that the sum of the sizes of the terms in left-hand sides of inequalities strictly decreases after each application of a rule. By REF the initial system is left-linear and acyclic, and one can easily check that each rule preserves the left-linearity as well as the acyclicity of the system. Furthermore each rule preserves the satisfiability of the system and its principal solution if one exists. Indeed REF preserve all solutions by definition of the subtyping order. REF replaces a parameter MATH by its upper bound MATH. As the system is left-linear this computes the principal solution for MATH, and thus preserves the principal solution of the system if one exists. REF replaces a parameter MATH having no occurrence in the left-hand side of an inequality, hence having no upper bound, by the maximum type of its lower bound MATH; this computes the principal solution for MATH and thus preserves the principal solution of the system if it exists. Now consider a normal form MATH for MATH. If MATH contains a non-variable pair MATH irreducible by REF , then MATH, and hence MATH, have no solution. Similarly MATH has no solution if it contains an inequality MATH with MATH or an inequality MATH with MATH REF . In the other cases, by irreducibility and acyclicity, MATH contains no inequality, hence MATH is in solved form and the substitution associated to MATH is a principal solution for MATH. |
cs/0010029 | Suppose MATH is computed by the algorithm of REF , and that MATH is the sequence of systems of this computation, that is, MATH is equal to MATH viewed as a substitution. By REF , MATH. In particular, this means that no system MATH REF contains an inequality MATH where MATH and MATH is not a parameter. It is easy to see that MATH is a computation of the algorithm for MATH, and hence MATH (that is, MATH viewed as a substitution) is a principal solution of MATH. |
cs/0010029 | Since MATH is a minimal matcher, we have MATH . It remains to be shown that there exists a type substitution MATH such that MATH as defined above is an ordered substitution. Let MATH be the solution of MATH corresponding to MATH, and MATH be the solution of MATH corresponding to MATH (see REF ). Note that since MATH is principal for MATH and MATH, MATH is a principal solution. By REF , MATH is a solution of MATH, and moreover, since MATH is a principal solution of MATH, there exists a type substitution MATH such that for each MATH occurring (on a left-hand side or right-hand side) in MATH, MATH . In particular, let MATH be a variable occurring in MATH in position MATH, and let MATH be the subterm of MATH in position MATH. By REF , MATH. By REF , MATH, and so by Rule (Var), MATH. Since by definition of MATH, MATH, we also have MATH, and so by REF , the condition in REF is fulfilled. Since the choice of MATH was arbitrary, the result follows. |
cs/0010029 | By CITE, MATH is nicely moded. Let MATH and MATH be the variable typings used to type MATH and MATH, respectively (in the sense of REF ). Let MATH be the selected atom and MATH. By REF , MATH. Moreover, MATH for some type substitution MATH. Let MATH. Note that since MATH, MATH is a variable typing. By REF , we have MATH and MATH (but not necessarily MATH, because of the special rule for head atoms) and in particular, MATH. Since MATH is nicely typed, it follows by REF that MATH is principal for MATH and MATH. Moreover by assumption of moded unification, there exists a substitution MATH such that MATH. We assume MATH is minimal, that is, MATH. By REF , there exists a variable typing MATH such that MATH is an ordered substitution, and moreover MATH. Therefore by REF , MATH and MATH. In particular, MATH. Now since MATH is nicely typed and MATH, MATH is principal for MATH and MATH. Moreover by assumption of moded unification, there exists a minimal substitution MATH such that MATH. By REF , there exists a variable typing MATH such that MATH is an ordered substitution, and moreover MATH. Therefore by REF , MATH and MATH. Hence by Rule (Query), MATH. Finally, MATH and so by the linearity conditions and REF , it follows that CASE: if MATH is an output argument vector in MATH, other than MATH, and MATH is the instance of the declared type of MATH used for deriving MATH, then MATH, MATH, and hence MATH is a principal variable typing for MATH and MATH, CASE: analogously, if MATH is an output argument vector in MATH, and MATH is the instance of the declared type of MATH used for deriving MATH, then MATH, MATH, and hence, by REF , MATH is a principal variable typing for MATH and MATH. So we have shown that MATH is nicely moded, MATH is a variable typing such that MATH, and the principality requirement on MATH is fulfilled. Thus MATH is a nicely-typed query. |
cs/0010031 | Consider a recursive call MATH of NAME. Let MATH be the node that is selected to be processed in REF . All of MATH's predecessors in the original graph MATH have either been processed in a previous REF or deleted in some REF . Therefore, the current weight of MATH, MATH, as seen by the recursive call MATH, is just MATH, as defined in REF of NAME. Furthermore, we add node MATH to our independent set in REF if and only if we add MATH to our independent set in REF . |
cs/0010031 | We will prove the result for NAME. The full result follows from REF . Clearly, the returned set of nodes MATH is an independent set. By REF , we need only show that MATH is a MATH-approximation with respect to MATH and MATH. We will prove this by induction on the recursion. The base case of the recursion is trivial, since there are no positive weight nodes. For the inductive step, assume that MATH is a MATH-approximation with respect to MATH. Then MATH is also a MATH-approximation with respect to MATH since MATH and MATH. To show that MATH is a MATH-approximation with respect to MATH, we will derive an upper bound MATH on the maximum MATH-weight independent set and a lower bound MATH on the MATH-weight of any MATH-maximal independent set of nodes. A MATH-maximal independent set of nodes either contains MATH or adding MATH to it violates the property that it is an independent set. Our MATH performance bound is MATH. Note that only MATH and its successor nodes will have a nonzero contribution to MATH-weight. The total weight of a maximum MATH-weight independent set is at most MATH. The total weight of any MATH-maximal independent set is at least MATH, since any such set contains at least one element of MATH, and all such nodes are assigned weight MATH. Since the algorithm always chooses a MATH-maximal set, its MATH performance bound is MATH. To show the bound is tight, pick some MATH that maximizes MATH, and assign it weight MATH and all of its successors weight MATH, where MATH. Let every other node in MATH have weight MATH. When we run NAME, the value of MATH will be MATH, the value of each of its successors will be MATH, and the value of any other node is irrelevant because it has zero weight. Thus NAME returns a set of total weight MATH but the maximum weight independent set has total weight at least MATH. |
cs/0010031 | NAME computes MATH for each node MATH in time proportional to its indegree, and computes MATH for each node in time proportional to its outdegree, for a total time of MATH. In the case of NAME, a recursive call is made at most once for each node in the graph, and defining MATH and MATH in each call takes time proportional to the node's outdegree, for a total running time of MATH. |
cs/0010031 | See CITE. |
cs/0010031 | For each MATH in MATH, MATH is a clique. |
cs/0010031 | Let MATH be a node of MATH. Let MATH, MATH, MATH, and MATH be the set of all successors of MATH in MATH, MATH, and MATH, respectively. Let MATH be any independent subset of MATH. Then CASE: MATH, and CASE: MATH is an independent subset of MATH, implying MATH. |
cs/0010031 | Let MATH be some independent set of successors of MATH. Under the conditions of the lemma each MATH intersects MATH. But since the MATH do not themselves intersect, each must intersect MATH in a distinct element. Thus there are at most MATH of them. |
cs/0010031 | Let MATH be a tree decomposition of MATH with width MATH. We will use this tree decomposition to construct an ordering of the connected node subsets of MATH, with the property that if MATH then either MATH or MATH intersects some frontier set MATH with at most MATH elements. The full result then follows from REF . Choose an arbitrary root MATH for MATH, and let MATH if MATH is an ancestor of MATH in the resulting rooted tree. Extend the resulting partial order to an arbitrary linear order. For each connected node subset MATH of MATH, let MATH be the greatest node in MATH for which MATH intersects MATH. Given two connected node subsets MATH and MATH of MATH, let MATH if MATH and extend the resulting partial order to any linear order. Ordering MATH can be done in MATH time using depth first search. We can then compute and the maximum node in MATH containing each node of MATH in time MATH by considering each MATH in order. The final step of ordering the MATH in the given set system MATH takes MATH time, since we must examine each element of each MATH to find the maximum one. The total running time is thus linear in the size of the input. Now suppose MATH in this ordering. We will show that any such MATH intersects MATH, and thus that MATH is our desired frontier set MATH. There are two cases. If MATH, we are done. The case MATH is more complicated. We will make heavy use of a lemma from CITE, which concerns the effect of removing some node MATH from MATH. Their REF implies that if MATH are not in MATH, then either MATH and MATH are separated in MATH by MATH or MATH and MATH are in the same branch (connected component) of MATH. Let MATH be the parent of MATH (which exists because MATH is not the greatest element in the tree ordering). We have MATH since MATH. 
Since MATH is a connected set, it cannot be separated without removing any of its nodes; thus by REF every element of MATH is in the same branch of MATH, which consists precisely of the subtree of MATH rooted at MATH. Now MATH contains at least one node MATH in the vertex set of an element of the subtree rooted at MATH, and at least one node MATH in MATH, which is not in this subtree because MATH. So by REF, either one of MATH is in MATH or MATH is separated by MATH. In the latter case MATH intersects MATH since MATH is also connected. |
cs/0010031 | CITE gives a recursive MATH algorithm for computing tree decompositions of constant-treewidth graphs based on a linear time algorithm for finding approximate separators for small node subsets. Replacing this separator-finding subroutine with the linear time algorithm of CITE gives a MATH time algorithm for computing a tree decomposition of a planar graph. Since each separator has size at most MATH, the resulting tree decomposition has width at most MATH by REF. Since all we need to compute a good ordering of MATH is the ordering of the MATH nodes, we can compute this ordering as described in the proof of REF and represent it in MATH space by assigning each node an index in the range MATH to MATH. Ordering MATH then takes linear time as described in the proof of REF . |
cs/0010031 | Given an undirected MATH-node graph MATH, construct a MATH-node directed acyclic graph MATH by REF directing the edges of MATH in any consistent order, and REF adding a new source node MATH to MATH with edges from MATH to every node in MATH. Let MATH be an independent set in MATH. Then every node in MATH is a successor of MATH in MATH, and furthermore these nodes are all independent. It follows that MATH. Conversely, if MATH is an independent set of successors of some node MATH in MATH, it cannot contain MATH (since MATH is not a successor of any node), and thus MATH is also an independent set in MATH. So we have MATH. |
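The construction in this reduction can be sketched directly; the function and variable names below are our own, not the paper's. Orienting every edge by one fixed ordering of the nodes makes the result acyclic, and the fresh source has an edge to every original node, so independent sets of its successors correspond exactly to independent sets of the original graph.

```python
def reduce_to_dag(nodes, edges):
    """nodes: list of hashable node names; edges: set of frozensets {u, v}.
    Returns (source, list of directed edges) of the constructed DAG."""
    order = {v: i for i, v in enumerate(nodes)}
    s = object()  # fresh source node, distinct from every original node
    # Direct each undirected edge from the earlier node to the later one;
    # a single consistent ordering guarantees acyclicity.
    dag_edges = [(u, v) if order[u] < order[v] else (v, u)
                 for u, v in (tuple(e) for e in edges)]
    dag_edges += [(s, v) for v in nodes]  # the source points to every node
    return s, dag_edges

s, dag = reduce_to_dag(['a', 'b', 'c'], {frozenset({'a', 'b'})})
# Every original node is a successor of s, so an independent set of
# successors of s is exactly an independent set of the original graph.
```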
cs/0010031 | Let MATH be any independent set in MATH. Choose the ordering so that all nodes in MATH precede all nodes not in MATH. Then for any MATH, MATH has no predecessors in the oriented graph and MATH. Let MATH be the independent set computed by the algorithm. If MATH is in MATH but not MATH, it must have a successor MATH in MATH with non-negative value. Since the value of each MATH is its weight less the weight of all its neighbors in MATH, the total weight of all elements of MATH must exceed the total weight of all elements in MATH, and we have MATH. |
cs/0010031 | We will prove the result for NAME; by REF the same result holds for NAME. Let MATH order the nodes in order of decreasing weight. Let us show by induction on MATH that if the greedy algorithm chooses a node MATH, then MATH; but if the greedy algorithm does not choose MATH, then MATH. Suppose we are processing some node MATH and that this induction hypothesis holds for all nodes previously processed. If the greedy algorithm picks MATH, then all MATH's predecessors were not chosen and have negative value, and MATH. If the greedy algorithm does not pick MATH, it is because it chose some MATH; now MATH. Since the only nodes with non-negative weights are those chosen by the greedy algorithm, NAME selects them as its output. |
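The greedy step analysed above can be sketched as a generic decreasing-weight maximal-independent-set heuristic; since the actual weights and value updates are masked, the following uses our own naming assumptions and keeps a node whenever none of its neighbours has been kept already.

```python
def greedy_independent_set(weights, adj):
    """weights: dict node -> weight; adj: dict node -> set of neighbours.
    Scan nodes in decreasing-weight order, keeping each node whose
    neighbourhood is disjoint from the nodes kept so far."""
    chosen = set()
    for v in sorted(weights, key=weights.get, reverse=True):
        if adj.get(v, set()).isdisjoint(chosen):
            chosen.add(v)
    return chosen

chosen = greedy_independent_set({'a': 1, 'b': 5, 'c': 1},
                                {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}})
# 'b' is processed first and blocks both of its neighbours.
```

Processing in decreasing-weight order is what makes the inductive argument in the proof go through: when a node is skipped, some already-chosen neighbour of at least its weight is responsible.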
cs/0010031 | The proof that both algorithms return the same approximation is similar to the proof of REF . The proof of the approximation ratio follows the same structure as the proof of REF . We prove the result for NAME. By REF , we need only show that the returned set of bids MATH is a MATH-approximation with respect to MATH and MATH. We do this using induction on the recursion. The fact that MATH is a MATH-approximation with respect to MATH follows trivially from the inductive assumption. In the case of MATH, we will derive an upper bound MATH on the maximum MATH-weight of a set of feasible bids and a lower bound MATH on the MATH-weight of any MATH-maximal set of bids. A MATH-maximal set of bids either contains MATH or adding MATH to it would violate the feasibility constraints. In the case of a set of feasible bids, its total MATH-weight is at most MATH, since the only nonzero contribution to MATH-weight comes from MATH and MATH. In the case of a MATH-maximal set of bids, if MATH cannot be added to the set, then either REF a successor of MATH is already in the set, in which case the total MATH-weight is at least MATH, or REF the budget constraint is exceeded, in which case the total MATH-weight is at least MATH. Therefore, the MATH-weight of these bids is at least MATH and the MATH performance bound is MATH . The proof of the running time is similar to the proof of REF . All of the steps that NAME and NAME share with NAME and NAME take linear time. NAME adds the cost of computing the last term in REF . Storing MATH in a variable MATH for each MATH allows this term to be computed in time MATH for each node, with an additional MATH cost per node to update the appropriate MATH. The same technique allows budget constraints to be tested in MATH time per node during the second step. Thus the additional time is linear. The corresponding modification to NAME similarly adds only linear time. 
Rather than updating the weight of each node MATH before each recursive call, we will compute the "current" weight of each node MATH as it is required, subtracting off the total weight MATH of all previously-processed nodes in MATH as in NAME. |
cs/0010031 | Similar to the proof of REF . The additional MATH term comes from having to apply up to MATH budget constraints to each node; since MATH, this term also covers the cost of reading the MATH from the input and initializing the variables for each subset. |
cs/0010031 | Since each heavy bid consumes more than half a bidder's budget, each bidder can win at most one bid. This is just a simple unweighted budget constraint and can be solved as described in REF for a performance bound of MATH. |
cs/0010031 | This proof uses the same structure and notation as the proof of REF . An upper bound MATH on the MATH-revenue of any feasible set of bids is MATH. With regards to a MATH-maximal set of bids, if MATH cannot be added to the set because the budget constraint MATH will be exceeded, the existing bids in the set must have weight at least MATH, since MATH. A lower bound MATH on the MATH-revenue of any MATH-maximal set of bids is therefore MATH. The performance bound of this algorithm is MATH, as claimed. |
cs/0010031 | The sum of the optimal revenues for the heavy and light bids is at least equal to the optimum revenue among all bids. From REF , the better of the two solutions will be within a factor of MATH of the optimum for the general problem. For the running time, observe that decomposing the bids into heavy and light bids takes linear time, that NAME and NAME are equivalent to NAME and NAME and thus take linear time by REF , and that NAME and NAME can be made to run in linear time using techniques similar to those used for NAME and NAME. |
cs/0010032 | Contained in REF. |
cs/0010032 | Contained in REF. |
cs/0010032 | See REF. |
cs/0010032 | See REF. |
cs/0010032 | First we show that a (minimal) model of one of MATH and MATH is also a model of the other: CASE: The conditional facts in MATH are logical consequences of MATH. By induction on MATH, it follows that MATH is implied by MATH. Therefore, every model of MATH is also a model of MATH. CASE: Next, we prove that every minimal model of MATH is also a model of MATH: Suppose this were not the case, that is, MATH is a minimal model of MATH, but it violates a rule MATH in MATH. This means that MATH are true in MATH. Since MATH is a minimal model of MATH, MATH contains for MATH a conditional fact MATH that is violated in the interpretation MATH (that is, the interpretation that agrees with MATH except that MATH is false in it). It follows that MATH, that MATH is false in MATH, and that MATH is true in MATH (since the default negation literals are treated like new propositions, making MATH false does not change any of the default negation literals). Now consider the conditional fact that is derived from the rule instance and the MATH: MATH . Since MATH is a fixpoint of MATH, this fact is also contained in MATH. But this is a contradiction, since it is violated in MATH. Now let MATH be a minimal model of MATH. REF shows that it is a model of MATH. If it were not minimal, there would be a smaller model MATH of MATH. Then a minimal model MATH of MATH must also exist that is still smaller than (or equal to) MATH. But by REF above, MATH is also a model of MATH, which contradicts the assumed minimality of MATH. Let conversely MATH be a minimal model of MATH. By REF , it is a model of MATH. If it were not minimal, there would be a smaller model MATH of MATH, which is by REF also a model of MATH, which again contradicts the assumed minimality of MATH. |
cs/0010032 | If this were not the case, there would be a minimal model MATH of MATH which does not satisfy MATH. Because of MATH and MATH, there must be a model MATH of MATH that is smaller than MATH, but not a model of MATH. Since MATH is smaller than MATH, it assigns the same truth values to the default negation literals. But then it satisfies MATH, that is, it is a model of MATH. Thus, a violated formula must be one that is added by MATH. Take the first such formula. It cannot be a propositional consequence, because propositional consequences are by definition satisfied in all models that satisfy the preconditions (and up to this first formula, all preconditions are satisfied). But neither can it be added by REF or REF , because all these formulas consist only of default negation literals, which are interpreted the same in both models. Therefore, we can conclude MATH. But that contradicts the assumed minimality of MATH.
cs/0010032 | CASE: Let MATH be a static expansion of MATH. By the lemma above, MATH. But then it easily follows that MATH is also a static expansion of MATH: It has to satisfy MATH . Since MATH, the formula MATH is already contained in the preconditions of MATH, so the union with MATH changes nothing. Assume conversely that MATH is a static expansion of MATH, that is, MATH . Again by the lemma above, we get MATH. But this means that the preconditions are not changed when we do not add MATH explicitly to the preconditions: MATH . CASE: This follows from the following sequence of equations: MATH .
cs/0010032 | CASE: Consider the interpretation MATH that extends MATH by interpreting MATH as false and all remaining atoms as true. Suppose it were not a model of MATH, that is, it violates some disjunction in MATH. Since all atoms that MATH interprets as false do not appear in MATH, and all the remaining atoms except MATH are interpreted as true, the violated disjunction can only be MATH. But this is a contradiction, since then the proper disjunction MATH would not be minimal. Thus, MATH is a model of MATH. Then there is also a minimal model MATH of MATH that is less than or equal to MATH. The atoms interpreted as false in MATH as well as MATH must be false in MATH, since it is less than or equal to MATH. The atoms interpreted as true in MATH appear as facts in MATH, so they must be true in MATH. CASE: Consider the interpretation MATH that extends MATH by interpreting MATH as true, MATH as false, and all remaining atoms as true. Suppose it were not a model of MATH, that is, it violates some disjunction in MATH. Since all atoms that MATH interprets as false do not appear in MATH, and all the remaining atoms except MATH are interpreted as true, the atoms in the violated disjunction can only be a subset of MATH. But this contradicts the assumed minimality of the disjunction MATH. Thus, MATH is a model of MATH. Then there is also a minimal model MATH of MATH that is less than or equal to MATH. This means that the atoms interpreted as false in MATH as well as MATH must be false in MATH. All atoms interpreted as true in MATH must be true in MATH, since they appear as facts in MATH. Finally, MATH must also be true in MATH, since otherwise it would violate the disjunction MATH.
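The proof above repeatedly passes from a model down to a minimal model below it. For small propositional instances this can be checked mechanically; the brute-force sketch below (purely illustrative, not the paper's machinery) lists the subset-minimal models of a set of positive disjunctions.

```python
from itertools import combinations

def is_model(atoms_true, disjunctions):
    """A set of true atoms is a model iff every disjunction (a set of
    atoms) contains at least one true atom."""
    return all(atoms_true & d for d in disjunctions)

def minimal_models(atoms, disjunctions):
    """Enumerate candidate interpretations by increasing size; a model is
    minimal iff no previously found (hence smaller) model lies inside it."""
    models = []
    for size in range(len(atoms) + 1):
        for subset in combinations(sorted(atoms), size):
            candidate = set(subset)
            if is_model(candidate, disjunctions) and \
               not any(m <= candidate for m in models):
                models.append(candidate)
    return models

# Disjunctions (a or b) and (b or c): the minimal models are {b} and {a, c}.
mm = minimal_models({"a", "b", "c"}, [{"a", "b"}, {"b", "c"}])
```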
cs/0010036 | The proof follows immediately from the proof of CITE .
cs/0010036 | The result is obvious if MATH. If MATH, it is clear from the definition that a non-dual configuration cannot belong to a strongly connected component, since it does not belong to a circuit. So we just have to prove that the set of dual configurations is a strongly connected component. Let MATH and MATH be two dual configurations. We have MATH, the value of MATH and MATH is MATH or MATH. Let MATH be the dual configuration defined by MATH . We are going to show that there exists a path from MATH to MATH and a path from MATH to MATH, which implies the existence of a path from MATH to MATH. The existence of a path from MATH to MATH can be established similarly. The path from MATH to MATH is built by following an arbitrary maximal path starting from MATH in which no transition is made in which player MATH gives a card to its right neighbor (such a path exists since in each infinite path player MATH plays an infinite number of times). The unique possible transition from the last configuration of this path is the one in which player MATH plays, which proves that this configuration is MATH. It remains to find a path from MATH to MATH. Let MATH be the last index smaller than or equal to MATH such that MATH. By consecutively applying the playing rule from configuration MATH on players MATH, MATH, MATH, MATH, we obtain the following configuration: MATH . Let MATH be the last index smaller than MATH such that MATH. We can apply the same technique and, by iterating the process, we obtain the configuration MATH. This completes the proof.
cs/0010036 | From the definition of the shot vector, we obtain: MATH . Since the necessary and sufficient condition to apply the rule on position MATH of MATH is MATH, we focus on the difference MATH. MATH which proves the lemma.
cs/0010036 | Let MATH and MATH be two sequences of transitions from MATH to MATH : MATH . Let us recall that: MATH . Assume that MATH. We have MATH. We are going to show that there exists a path of positive length from MATH to MATH, which is a contradiction. For that, we are going to build, step by step (REF ), a sequence of transitions from MATH to MATH: MATH. For MATH let us denote by MATH the following sequence: MATH . There exists a first index MATH such that MATH, which implies that there exists MATH such that MATH and MATH. Since MATH is the first index having this property, we have MATH, so MATH and MATH. Since MATH and MATH satisfy the conditions of REF , we can apply the rule on position MATH to MATH to obtain a new configuration denoted by MATH. Let MATH be the following sequence of transitions: MATH . We then have MATH. By iterating this process, we can define MATH . Since MATH and MATH, after MATH steps we will obtain MATH, which is a contradiction. The case where MATH is similar; hence MATH and therefore MATH, which completes the proof.
cs/0010036 | In order to show that MATH, we can consider two sequences of transitions, one from MATH to MATH and the other from MATH to MATH, and then argue as in the proof of REF . On the other hand, let MATH and MATH be two non-dual configurations of MATH such that MATH. Let MATH be a sequence of transitions from MATH to MATH. The sequence MATH built by concatenating the sequence MATH from MATH to MATH and the sequence MATH is clearly a sequence from MATH to MATH, and so we obtain: MATH .
cs/0010036 | Let us assume that MATH and MATH are incomparable (otherwise MATH and MATH are comparable and the result is obvious). We first prove that MATH is reachable from MATH. For that, we just have to find a configuration MATH such that MATH and MATH. Let MATH be a sequence of transitions from MATH to MATH and let MATH be the first index such that MATH and MATH. Let us call MATH the position on which the transition is applied on MATH. Clearly MATH and MATH. Since MATH and MATH satisfy the condition of REF , we can apply the transition on position MATH of MATH to obtain a new configuration MATH. We have MATH and MATH, which proves that MATH is reachable from MATH. The proof is similar for MATH. This implies that MATH is reachable from MATH, and by REF is the greatest configuration smaller than both MATH and MATH. If MATH is not dual, it is the greatest lower bound of MATH and MATH, and if MATH is dual, MATH is then this lower bound. Since MATH also has a greatest element, it is a lattice, which completes the proof.
cs/0010036 | Assume that MATH. Let MATH be such that MATH. We first show, step by step, that there exists a path from MATH to MATH in which player MATH never plays. Let MATH be such that MATH is maximal among the MATH and such that MATH. Since MATH, such a MATH exists and MATH. We have MATH, so MATH and MATH. Let MATH be the configuration obtained from MATH by applying the rule on position MATH. We are going to show that MATH is such that MATH. If MATH, then MATH and for all MATH, MATH. If MATH, then MATH and for all MATH, MATH. In both cases MATH and so MATH, which yields MATH. By iterating the process, we arrive at the fixed point MATH by a path with no transition in position MATH. Therefore MATH, and then for every configuration MATH between MATH and MATH, MATH (for a given MATH, MATH can only increase when following a path).
cs/0010036 | Let us consider the dominance ordering on dual configurations, in which a configuration MATH is greater than or equal to a configuration MATH if and only if MATH, MATH. The greatest element of this order is clearly MATH and the maximal length of a chain in this order is MATH. Let MATH and MATH be two dual configurations such that MATH covers MATH in this order. It is clearly possible to go from MATH to MATH by a transition, so the covering relations are a subset of the set of transitions between dual configurations. Since the dual configurations form the unique non-trivial strongly connected component of MATH, it is clear that the maximal length of a path between two dual configurations in MATH is MATH, which proves the corollary.
cs/0010037 | For any letter MATH consider MATH and MATH. Since MATH is sub-normalised, it follows that for each letter MATH, there is MATH such that MATH, that is, for each MATH, its greatest lower bound constraint is less than or equal to its least upper bound constraint. Now, let MATH be an interpretation such that CASE: MATH and MATH; CASE: MATH, for all letters MATH . We will show by induction on the number of connectives of MATH that MATH is an interpretation satisfying MATH. CASE: Suppose MATH is a meta letter MATH. By definition, MATH and, thus, MATH satisfies MATH. CASE: Suppose MATH is a meta letter MATH. By definition, MATH and, thus, MATH satisfies MATH. CASE: Suppose MATH is a meta proposition MATH. By induction on MATH and MATH, MATH satisfies MATH and MATH and, thus, MATH satisfies MATH. CASE: The case MATH is similar.
cs/0010037 | Similarly to REF , for any letter MATH, consider MATH and MATH. Since MATH is sub-normalised, it follows that for each letter MATH, there is MATH such that MATH. Now, let MATH be an interpretation such that CASE: MATH and MATH; CASE: MATH, for all letters MATH . It is easily verified that MATH satisfies MATH. |
cs/0010037 | MATH . Assume MATH and suppose to the contrary that there is a MATH such that MATH. Therefore, there is a fuzzy interpretation MATH such that MATH and MATH. Let MATH be the following four-valued interpretation: MATH . We show by induction on the structure of a proposition MATH that MATH satisfies MATH iff MATH. CASE: If MATH satisfies MATH then MATH. By definition, MATH follows. If MATH does not satisfy MATH then MATH. By definition, MATH follows. CASE: If MATH satisfies MATH then MATH. By definition, MATH follows and, thus, MATH. If MATH does not satisfy MATH then MATH. By definition, MATH follows and, thus, MATH. CASE: If MATH satisfies MATH then MATH and MATH. By induction on MATH and MATH, both MATH and MATH hold and, thus, MATH. The case MATH is similar. As a consequence, since MATH satisfies MATH but not MATH, it follows that MATH and MATH, which is contrary to the assumption MATH. MATH . Assume that MATH holds, that is, MATH and MATH. Consider MATH. Then either MATH or MATH. From MATH it follows that MATH, that is, MATH. Therefore, for MATH, MATH follows. From MATH, MATH follows, that is, MATH. As a consequence, for MATH, MATH is unsatisfiable and, thus, MATH holds. Therefore, MATH holds for all MATH.
cs/0010037 | Since MATH, MATH returns true. Therefore, there is a completed branch MATH. Suppose that for each completed branch MATH, MATH, there is a letter MATH occurring in MATH such that both MATH and MATH. As a consequence, collecting all the MATH expressions in the branches MATH, informally MATH is equivalent to MATH and, thus, MATH is classically equivalent to MATH. It follows that MATH, contrary to our assumption. Therefore, there is a completed branch MATH such that for any letter MATH occurring in MATH, CASE: if MATH then MATH and MATH; CASE: if MATH then MATH and MATH. Let MATH be the following four-valued interpretation: MATH . It follows that for any letter MATH, MATH. Furthermore, it can easily be shown by induction on the structure of MATH and MATH that MATH satisfies both MATH and MATH. Therefore, MATH and MATH.
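The argument above reads a satisfying interpretation off a completed open branch. The paper's calculus is four-valued and its rules are redacted, but the underlying mechanism can be illustrated with a minimal two-valued tableau for NNF propositions (the tuple encoding of formulas is an assumption made for this sketch).

```python
def tableau_sat(formulas):
    """Tiny propositional tableau for NNF formulas encoded as tuples:
    ('lit', name, polarity), ('and', f, g), ('or', f, g).
    Returns an interpretation read off a completed open branch, or None
    if every branch closes on a contradictory pair of literals."""
    def expand(branch, todo):
        if not todo:
            return dict(branch)          # completed open branch: a model
        f, rest = todo[0], todo[1:]
        if f[0] == "lit":
            _, name, pol = f
            if branch.get(name, pol) != pol:
                return None              # branch closes on a contradiction
            new = dict(branch)
            new[name] = pol
            return expand(new, rest)
        if f[0] == "and":                # alpha rule: extend the branch
            return expand(branch, [f[1], f[2]] + rest)
        # beta rule for 'or': split into two branches
        return expand(branch, [f[1]] + rest) or expand(branch, [f[2]] + rest)
    return expand({}, list(formulas))

# (a or b) and (not a): the only open branch sets b true and a false.
model = tableau_sat([("and",
                      ("or", ("lit", "a", True), ("lit", "b", True)),
                      ("lit", "a", False))])
```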
cs/0010037 | MATH . Assume MATH and MATH. If MATH then MATH, by REF . Otherwise, MATH implies MATH (as a NNF of MATH is normalised and by REF ). So, let us show that MATH. Suppose to the contrary that MATH. From REF , there is an interpretation MATH such that MATH, MATH and for no letter MATH, MATH. Consider the following fuzzy interpretation MATH: CASE: if MATH then MATH; CASE: if MATH then MATH; CASE: if MATH then MATH. Let us show by induction on the structure of any proposition MATH that, for any MATH, MATH iff MATH holds. CASE: By definition, MATH implies MATH and, thus, MATH. On the other hand, MATH implies MATH and, thus, MATH. As a consequence, MATH; CASE: MATH implies MATH. Therefore, either MATH or MATH. As a consequence, MATH (MATH). On the other hand, MATH implies MATH. Therefore, MATH and, by definition, MATH follows. As a consequence, MATH; CASE: Straightforward. Therefore, from MATH it follows that MATH satisfies MATH. From MATH it follows that MATH does not satisfy MATH, contrary to the assumption that MATH holds. MATH . From MATH and from REF , MATH follows for all MATH. Otherwise, if MATH then MATH and, thus, the claim holds for any MATH.
cs/0010037 | Assume MATH. NAME all meta-literals in MATH with MATH. Consider a deduction of MATH, which returns false, and let MATH be the deduction tree. As a consequence, all branches MATH in MATH are closed. Let us consider the following substitution, MATH, in each branch MATH. For each meta literal MATH occurring in MATH, MATH if MATH is not marked with MATH then for MATH replace MATH with MATH and for MATH replace MATH with MATH; and MATH if MATH is marked with MATH then for MATH replace MATH with MATH and for MATH replace MATH with MATH. MATH is mapped into it. Let MATH and MATH be the result of this substitution, for each meta proposition MATH and for each (closed) branch MATH, respectively. We show by induction on the depth MATH of each branch MATH in the deduction tree MATH that MATH is a branch in a deduction tree of MATH and, thus, the tree MATH formed out of the (closed) branches MATH, for MATH a branch in MATH, is a closed deduction tree for MATH. Therefore, MATH. CASE: Therefore, there is a unique closed branch MATH in MATH as the result of the application of the MATH rule, that is, MATH. Since MATH is closed, MATH. There are eight possible cases for MATH such that MATH, MATH and MATH. Let us consider the cases MATH, MATH. By definition, MATH is MATH. Therefore, MATH is a closed branch of a deduction tree for MATH; MATH, MATH. By definition, MATH is MATH. Therefore, MATH is a closed branch of a deduction tree for MATH. The other cases can be shown similarly. CASE: Consider a branch MATH of depth MATH. MATH is the result of the application of one of the rules in MATH to a branch MATH of depth MATH. By induction on MATH, MATH is a branch in a deduction tree of MATH. Let us show that MATH is still a branch in a deduction tree of MATH. MATH . Suppose that REF has been applied to MATH and, thus, MATH. By definition of MATH, MATH is in MATH, that is, MATH is in MATH. As a consequence, the MATH rule can be applied to it and, thus, MATH and MATH are in MATH. 
MATH the case of REF is similar. Finally, MATH suppose that REF has been applied to literals MATH such that MATH and MATH. By definition of MATH, MATH and MATH are in MATH. Now, we proceed similarly to the case MATH. As MATH is sub-normalised, either MATH or MATH has to be marked with MATH. Without loss of generality, we can distinguish two cases MATH only MATH is marked with MATH; and MATH both MATH and MATH are marked with MATH. Let us consider case MATH. There are eight possible cases for MATH such that MATH, MATH and MATH. Let us consider the case MATH, MATH. By definition, MATH contains both MATH and MATH, which are pairwise contradictory. Therefore, REF can be applied to MATH and MATH contains MATH and, thus, MATH is closed; MATH, MATH. By definition, MATH contains both MATH and MATH, which are pairwise contradictory. Therefore, REF can be applied to MATH and MATH contains MATH and, thus, MATH is closed. The other cases are similar. Finally, consider the case MATH both MATH and MATH are marked with MATH. Without loss of generality, there are four possible cases for MATH such that MATH, MATH and MATH. Let us consider the case MATH, MATH. By definition, MATH contains both MATH and MATH, which are pairwise contradictory, for MATH. Then proceed similarly as above. The other cases are similar. |
cs/0010037 | From the hypothesis and REF , it follows immediately that either MATH or MATH holds.
cs/0010037 | MATH . Assume MATH and MATH. Consider a DNF MATH of MATH. From MATH it follows that MATH and, thus, for each MATH, MATH, that is, MATH is unsatisfiable. NAME all the meta literals in a NNF of MATH with MATH. Let MATH be all the branches of a deduction of MATH. Consider a branch MATH. Obviously, MATH is closed. Therefore, there are meta literals MATH such that MATH. Consider the four pairs for MATH and MATH, respectively: MATH . First, if MATH is REF then MATH is unsatisfiable, that is, MATH and, thus, REF is satisfied. Second, MATH cannot be the case, as MATH. So, for the other cases, we can assume that MATH is satisfiable. Let us consider the following transformation MATH for each branch MATH. For each meta literal MATH occurring in MATH, MATH if MATH is not marked with MATH then MATH; MATH if MATH is not marked with MATH then MATH; MATH if MATH is marked with MATH then MATH; and MATH if MATH is marked with MATH then MATH. Let MATH, MATH and MATH be the result of this transformation, for each meta proposition MATH, for each branch MATH and for each set of meta propositions MATH, respectively. Similarly to REF , it can be shown by induction on the depth MATH of each branch MATH of a deduction MATH that the branch MATH is a closed branch of a four-valued deduction MATH. But MATH is MATH and, thus, MATH. In the induction proof, it suffices to show that if MATH then MATH, that is, if the MATH rule is applicable to MATH then the MATH rule is applicable to MATH. For the other rules the proof is immediate. So, as we have seen above, either case MATH or case MATH holds. If MATH is the case then MATH and MATH and, thus, MATH. Otherwise, if MATH is the case then MATH and MATH and, thus, MATH, which completes the proof of MATH. MATH . Consider MATH. It suffices to show that for each MATH, if either MATH or MATH then MATH. If MATH then MATH is unsatisfiable, as MATH and, thus, MATH. Otherwise, if MATH then, by REF , it follows that MATH.
cs/0010037 | MATH . Assume that for both MATH, MATH holds. If MATH then condition MATH is trivially satisfied. Otherwise, assume MATH. From REF , MATH follows. But then, we know that MATH. As a consequence, for MATH no interpretation satisfies MATH. Therefore, since REF holds for MATH, it follows that for MATH, MATH has to be unsatisfiable, that is, MATH and, thus, MATH, for MATH. As a NNF of MATH is normalised, from REF it follows that MATH. Therefore, REF is satisfied. MATH . From REF it follows directly that for all MATH, MATH. In particular, MATH holds for MATH, where MATH and MATH.
cs/0010037 | MATH . Assume that MATH. Suppose to the contrary that there is a MATH such that MATH. Therefore, there is a fuzzy interpretation MATH such that MATH and MATH. But from the hypothesis MATH follows, which is absurd. MATH . Assume that for all MATH, MATH. Suppose to the contrary that MATH. Therefore, there is a fuzzy interpretation MATH such that MATH. Consider MATH. Of course, MATH satisfies MATH. Therefore, from the hypothesis it follows that MATH satisfies MATH, that is, MATH, which is absurd.
hep-th/0010227 | In such a case the Hamiltonian REF is invariant under the NAME reflection symmetry MATH and it is always possible to find in the NAME space MATH a complete basis of stationary states with definite U REF symmetry. If the energy level is not degenerate, the corresponding physical state MATH has to be either even or odd under the MATH symmetry. In the degenerate case, if MATH is not the same state as MATH, the stationary functionals MATH are parity even/odd, respectively. This implies that the kernel element MATH is reflection invariant: MATH . On the other hand, in the path integral representation MATH we have that MATH because the P transformation leaves the points MATH and MATH invariant but changes the sign of the MATH - term in the exponent of the path integral, since it reverses the orientation of every path. The contribution of this term becomes MATH instead of MATH for any trajectory MATH in MATH connecting MATH with MATH with winding number MATH. Thus, the kernel element MATH is not invariant under reflection symmetry, unless MATH (mod. MATH). In particular, for MATH, the kernel element MATH is parity odd and purely imaginary: MATH . This is in disagreement with REF unless the kernel element vanishes for those points MATH .
hep-th/0010227 | Since the potential term MATH is non-trivial, it gives a non-trivial contribution to the energy of stationary states. The states with lowest energy which are parity even vanish at MATH, where the potential term attains its maximal value, and cannot have the same energy as parity odd states, which vanish at MATH, where the potential term attains its minimal value. This feature implies that the quantum vacuum state MATH is non-degenerate, is parity even and vanishes at MATH. The splitting of energies between the ground state and the first excited state can also be understood in terms of a tunnelling effect induced by instantons. But the argument used above is completely rigorous and does not rely on any semiclassical approximation or asymptotic expansion (see CITE for an early anticipation of this behaviour of the ground state based on numerical calculations).
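The non-degenerate, parity-even ground state can also be seen in a quick numerical experiment. The sketch below is purely illustrative (the paper's potential is redacted, so a pendulum potential -cos(theta) stands in for it, and the grid size and spectral shift are arbitrary choices): it discretizes H = -d^2/dtheta^2 - cos(theta) on a periodic grid in pure Python and checks that the spectral gap is positive and the lowest state is even under theta -> -theta.

```python
import math

N = 60                      # grid points on the circle (illustrative choice)
h = 2 * math.pi / N
V = [-math.cos(2 * math.pi * i / N) for i in range(N)]  # stand-in potential
SHIFT = 400.0               # spectral shift so power iteration finds the bottom

def matvec(v):
    """Apply the periodic finite-difference Hamiltonian to v."""
    return [(2 * v[i] - v[(i - 1) % N] - v[(i + 1) % N]) / h ** 2 + V[i] * v[i]
            for i in range(N)]

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def lowest_state(orthogonal_to=(), iters=6000):
    """Power iteration on SHIFT*I - H, deflated against earlier states,
    returning the lowest remaining eigenpair of H (Rayleigh quotient)."""
    v = normalize([1.0 + 0.1 * math.sin(2 * math.pi * i / N) for i in range(N)])
    for _ in range(iters):
        w = [SHIFT * x - y for x, y in zip(v, matvec(v))]
        for u in orthogonal_to:
            d = sum(a * b for a, b in zip(w, u))
            w = [a - d * b for a, b in zip(w, u)]
        v = normalize(w)
    energy = sum(a * b for a, b in zip(v, matvec(v)))
    return energy, v

e0, psi0 = lowest_state()
e1, _ = lowest_state(orthogonal_to=(psi0,))
gap = e1 - e0
reflected = [psi0[(-i) % N] for i in range(N)]   # theta -> -theta on the grid
is_even = all(abs(a - b) < 1e-4 for a, b in zip(psi0, reflected))
```

In this toy setting the gap comes out of order one and the ground state coincides with its reflection, consistent with the parity argument in the text.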
hep-th/0010227 | The basic strategy is similar to that used in the planar rotor. Because of the non-degeneracy of the energy levels, any stationary state must have a definite MATH-parity symmetry with respect to the four points MATH where the circles with holonomies MATH and MATH cross each other. There are four quantum parity symmetry transformations MATH, MATH, MATH and MATH. Although the four transformations are identical in MATH (for example, all of them leave the four points invariant), they define four different unitary transformations in the space of quantum states. If we redefine our coordinates so that MATH, we have that MATH. In such coordinates MATH the gauge field with the required holonomy properties is given by MATH in a gauge with boundary conditions MATH . Since the MATH symmetry leaves the point MATH invariant, physical states MATH must satisfy MATH . Thus, if MATH is MATH even, MATH has a node at MATH, that is, MATH. For the same reason MATH, which implies that MATH odd states must vanish at MATH, MATH and MATH, that is, MATH. In a similar way we get that MATH, which implies that MATH odd states must vanish at MATH, whereas MATH even states must vanish at MATH, MATH and MATH. Similar properties hold for the remaining parity operators MATH. Since there is a complete basis of stationary states consisting of MATH even and MATH odd states, this implies the vanishing of the kernel matrix elements REF . As in the planar rotor case, there is an alternative derivation of the same results, based on the path integral approach. The method also carries enough information to identify the parity of the vacuum state. The essential feature is to prove that the matrix element MATH of the euclidean time evolution kernel vanishes for any MATH. This property can be easily derived from the path integral representation of the heat kernel MATH where MATH is the holonomy of the closed path MATH. 
In the path integral representation REF a path MATH connecting MATH and MATH transforms under the MATH reflection symmetry into another path MATH which connects the same points and gives the same contribution to the real term of the exponent in the path integral. However, the contribution of the two paths to the imaginary part is different. They contribute to the path integral with a phase factor which is exactly the holonomy of MATH along the paths. It is immediate that the ratio of the two contributions MATH equals the holonomy of the closed loop obtained by the composition MATH, which is in the homotopy class MATH of MATH. The holonomy splits, by NAME theorem, into a factor which is the holonomy of the basic circles MATH and the magnetic fluxes MATH, MATH crossing two surfaces MATH and MATH in MATH: MATH being the domain of MATH enclosed by the curves MATH and MATH, and MATH being the surface enclosed by MATH and MATH (see REF ). Those magnetic flux contributions are opposite and cancel each other. Thus, the contribution of MATH is reduced to the holonomy of MATH, that is, MATH. This means that the contributions of MATH and MATH to the path integral are equal but with opposite signs. The contributions of both paths cancel, and the argument can be repeated path by path to show that the whole path integral vanishes. In a similar way we can prove the vanishing of the kernel element MATH for MATH and MATH, because in that case the corresponding holonomies of paths and reflected paths differ by the holonomies of the loops MATH and MATH, respectively. The relative negative sign is again the basis for the cancellation of the corresponding contributions to the path integral. Notice, however, that the argument cannot prove the vanishing of MATH, MATH or MATH. CASE: Paths giving opposite contributions to the path integral kernel.