paper | proof |
|---|---|
cs/0101016 | These statements are proved as follows. CASE: By REF , MATH and MATH can be computed in MATH time and MATH space. By REF , the last row and the last column of MATH can be reconstructed from MATH and MATH in MATH time. By REF , a feasible solution of MATH can be found in MATH time. Therefore, finding a feasible solution takes MATH time and MATH space. CASE: The proof is similar to the proof of REF . Finding an additional feasible solution takes MATH time and MATH space. Thus finding MATH solutions takes MATH time and MATH space. |
cs/0101016 | Similar to REF . |
cs/0101016 | Similar to REF . |
cs/0101016 | Let MATH and MATH be two tables for MATH computed from REF . Without loss of generality, let the modification be between two consecutive prefix nodes MATH and MATH with MATH and MATH. All the prefix nodes to the right of MATH have the same mass offset from the normal locations because the corresponding sequences contain the modified amino acid. By adding a new edge MATH to MATH, we create a feasible solution MATH that contains this edge. If MATH, then MATH, and thus MATH and MATH. There are MATH possible combinations of MATH and MATH, and checking all of them takes MATH time. If MATH, then MATH must contain an edge MATH with MATH, which skips over MATH and MATH. MATH can be found if MATH and MATH and MATH. There are at most MATH edges, which can be examined in MATH time. Checking MATH possible MATH costs MATH time. The total complexity is MATH time and MATH space. |
cs/0101016 | The correctness proof is similar to that for REF . For every MATH, REF visits every edge of MATH at most once, so the total time is MATH. |
cs/0101016 | Algorithm NAME computes MATH in MATH time and MATH space. For every MATH and MATH, if MATH and MATH, we compute the sum MATH. Let MATH be the maximum value, and we can backtrack MATH to find all the edges of the feasible solution. The total cost is MATH time and MATH space. |
cs/0101017 | By definition, MATH if and only if, for all MATH, there exists MATH such that MATH. Hence we have MATH, for all MATH. This is equivalent to MATH. On the other hand, MATH, and thus MATH. If MATH, then MATH, for all MATH. Therefore, for all MATH, there exists a MATH such that MATH and hence MATH is a relative liveness property of MATH. |
cs/0101017 | By definition, MATH is a relative safety property of MATH if and only if MATH . By taking the contrapositive of the implication, this is equivalent to MATH . The part MATH is equivalent to the condition MATH. Thus, MATH is a relative safety property of MATH if and only if MATH. All MATH-words MATH in MATH such that MATH can be represented by the set MATH. Thus, MATH is a relative safety property of MATH if and only if MATH. |
cs/0101017 | The characterizations given by REF reduce the problem to questions decidable in PSPACE CITE (notice that for PLTL formulas one can build in PSPACE an automaton for the formula and for its complement CITE). Hardness can be established by a reduction from regular language inclusion CITE. |
cs/0101017 | If MATH, then, trivially, MATH is a relative safety and a relative liveness property of MATH. If MATH is a relative safety property of MATH, then MATH REF . If, additionally, MATH is a relative liveness property of MATH, then, by REF , MATH. Therefore, we can replace MATH by MATH in the safety condition and obtain MATH. Because MATH, we finally obtain MATH. |
cs/0101017 | Let MATH, and let MATH. Then MATH. Thus, MATH, and we have MATH. We get, for all MATH and all MATH (MATH is related to MATH), that there is a MATH such that MATH. So MATH is a dense set in MATH. Let MATH be a dense set in MATH. Then, for all MATH and all MATH, there exists MATH such that MATH. Let MATH be in MATH, let MATH be in MATH and let MATH. Because MATH is a dense set in MATH, there exists MATH such that MATH. Thus MATH. Because MATH, we have MATH. By REF , MATH is a relative liveness property of MATH. |
cs/0101017 | MATH is a relative safety property of MATH if and only if MATH . If MATH is the complement of MATH with respect to MATH, that is, MATH, which is equivalent to MATH, then MATH is a relative safety property of MATH if and only if MATH. If we define this condition topologically, then MATH is a relative safety property of MATH if and only if MATH. Thus, MATH is a relative safety property of MATH if and only if MATH is an open set in MATH. Because MATH is the complement of MATH with respect to MATH, we finally obtain that MATH is a relative safety property of MATH if and only if MATH is a closed set in MATH. |
cs/0101017 | Let MATH, that is, MATH. Assume MATH. Let MATH such that MATH. Because MATH is a safety property, there exists MATH such that MATH. So MATH is not a prefix of a MATH-word in MATH and thus it is not in MATH. Since MATH is in MATH we have that MATH which contradicts MATH. So MATH must hold. If MATH, then MATH follows immediately. |
cs/0101017 | Since MATH is satisfied by MATH within fairness, by REF we have that MATH. Furthermore, since MATH is limit closed we have that MATH and hence MATH . Consider thus a reduced NAME automaton MATH accepting MATH (by reduced we mean that states from which no MATH-word can be accepted have been eliminated). The finite-state labeled transition system MATH we are trying to construct is MATH with its acceptance condition removed. Indeed, by REF MATH accepts MATH. Furthermore, all strongly fair infinite computations of MATH will go infinitely often through a former accepting state of MATH and thus will satisfy MATH. |
cs/0101017 | Let MATH such that MATH and, for all MATH, MATH. We have, for all atomic propositions MATH, that MATH if and only if MATH, and thus MATH if and only if MATH. Because MATH, we have MATH if and only if MATH. According to the semantics of boolean connectives we obtain MATH if and only if MATH. For all MATH, MATH, which means that MATH and MATH. Thus MATH if and only if MATH. |
cs/0101017 | The proof is by induction on the structure of MATH. If MATH contains exactly one temporal operator that quantifies over all atomic propositions in MATH (the induction's basis), then all proper subformulas MATH of MATH are boolean formulas and hence MATH. By REF and since MATH, for all proper subformulas MATH of MATH and all MATH we have MATH if and only if MATH. Therefore, if MATH, MATH if and only if MATH. We use this equivalence to prove the induction's basis. Because all atomic propositions of MATH are in the scope of the only temporal operator, we need not prove the induction's basis for boolean connectives. MATH: MATH if and only if there exists MATH such that MATH and MATH, for all MATH. This is equivalent to the existence of a MATH such that MATH and MATH, for all MATH such that MATH. Thus, MATH if and only if MATH. MATH: MATH if and only if there exists no MATH such that MATH or there exists a MATH and a MATH such that MATH, MATH, and, for all MATH, MATH. This is equivalent to: There exists no MATH such that MATH, or there exists a MATH and a MATH such that MATH, MATH, and, for all MATH, MATH. Therefore, MATH if and only if MATH. MATH: MATH if and only if there exists MATH such that MATH. This is equivalent to the existence of MATH such that MATH. Hence, MATH if and only if MATH. MATH: MATH if and only if MATH, for all MATH. This is equivalent to: For all MATH with MATH we have MATH. Since MATH there are infinitely many different MATH with MATH and consequently MATH if and only if MATH. MATH: MATH if and only if MATH. Equivalently, there exists a MATH and a MATH such that MATH, MATH, and MATH, for all MATH such that MATH. So MATH if and only if MATH. This last step finishes the proof of the induction's basis. In the inductive step, the proper subformulas of MATH need not necessarily satisfy the preconditions of the lemma, because they can contain atomic propositions that are not in the scope of a temporal operator (of the subformula). 
Hence, in general, a subformula MATH of MATH is the boolean combination of boolean formulas MATH and purely temporal formulas MATH. By induction, we have MATH if and only if MATH. According to REF , MATH if and only if MATH. Thus MATH if and only if MATH, because MATH. Hence, if MATH, then MATH if and only if MATH. Therefore, for all subformulas MATH of MATH, we have: if MATH, then MATH if and only if MATH. We use this condition as our induction's hypothesis. MATH: Because of the lemma's preconditions, MATH and MATH must be purely temporal subformulas of MATH, for a binary boolean connective MATH. Then, by induction and the semantics of boolean connectives, MATH if and only if MATH. MATH: MATH if and only if there exists MATH such that MATH and, for all MATH, MATH. By induction, this is equivalent to the existence of MATH such that MATH, and, for all MATH we have MATH or MATH. Therefore, MATH if and only if MATH. MATH: MATH if and only if there exists no MATH such that MATH or there exists a MATH and a MATH such that MATH, MATH, and MATH, for all MATH. By induction, this is equivalent to: There exists no MATH such that MATH or there exists a MATH and a MATH such that MATH, MATH, and MATH, for all MATH. Therefore, MATH if and only if MATH. MATH: MATH if and only if there exists MATH such that MATH. By induction, this is equivalent to the existence of MATH such that MATH. Hence, MATH if and only if MATH. MATH: MATH if and only if MATH, for all MATH. By induction, this is equivalent to: For all MATH such that MATH, we have MATH. Since MATH, there are infinitely many different MATH such that MATH. Therefore MATH if and only if MATH. MATH: MATH if and only if MATH. Equivalently, by induction, there exists MATH and MATH such that MATH, MATH, and MATH, for all MATH such that MATH. So, MATH if and only if MATH. |
cs/0101017 | REF establish the result. |
cs/0101017 | MATH: We assume MATH (otherwise the condition holds trivially). If MATH is a MATH-word in MATH, then MATH (remember that MATH and therefore MATH are prefix-closed). Let MATH be the prefix of MATH of length MATH. MATH is then the sequence of all prefixes of MATH and thus generates MATH as its limit. For each of the MATH we construct a set MATH of minimal inverse images of MATH. Let MATH be the set of all words MATH in MATH such that there is no shorter word MATH in MATH with MATH. We define MATH . Because all MATH are in MATH, for each MATH there must be a MATH such that MATH. Consequently, MATH is not empty, for all MATH. Let MATH. For all MATH such that MATH, we have MATH by definition of MATH. Because the set MATH is finite (its cardinality corresponds to the number of states in the minimal automaton accepting MATH), we obtain: MATH is a finite set, for all MATH. Because MATH if MATH and all MATH are nonempty sets, we observe that MATH is an infinite set. By MATH we denote the proper prefix relation; that is, for all MATH, MATH if and only if MATH and MATH. We show: For all MATH and all MATH, there exists a word MATH such that MATH. Let MATH be in MATH and let MATH be in MATH such that MATH. Hence MATH. Because MATH is prefix-closed, MATH is in MATH and thus MATH. The remainder of MATH after MATH we call MATH; that is, MATH. We assume that MATH is not in MATH and derive a contradiction. If MATH, then there must be a word MATH such that MATH and MATH. Because MATH is in MATH, we have MATH. Because MATH, we obtain MATH and MATH. So MATH is in MATH, MATH and MATH. Therefore MATH, which contradicts the choice of MATH. Hence all preconditions for applying NAME's lemma are satisfied by the sets MATH, MATH, and thus there exists an infinite sequence MATH of words in MATH such that MATH and MATH, for all MATH. The sequence MATH uniquely generates some MATH and, because MATH, for all MATH, we obtain MATH. So, for all MATH, there exists a MATH such that MATH. 
Thus MATH. MATH: Let MATH. Let MATH be in MATH, such that MATH is defined. Because MATH is prefix-closed, all MATH are in MATH. So, for all MATH, MATH is in MATH. Because MATH is defined, there are infinitely many different MATH in MATH, for MATH. Thus MATH is in MATH, and we obtain MATH. |
cs/0101017 | We assume that MATH and derive MATH. By REF if for all MATH, there exists some MATH such that MATH. Consider thus an arbitrary MATH. Because MATH is weakly continuation-closed on MATH, there exists MATH such that MATH . As MATH, we get MATH, and in particular, by substituting MATH for MATH, there exists some MATH such that MATH . Given REF this is equivalent to MATH . Thus we know that MATH is in MATH, which, in view of REF , is equivalent to MATH . So, there exists MATH such that MATH . Viewing MATH as a single word MATH, we have shown that for all MATH, there exists MATH and MATH such that MATH (because of REF ) and MATH (because of REF ). Consider now the language MATH of prefixes of MATH. Clearly, MATH and MATH. Because MATH, we have MATH. Using REF and given that MATH, we obtain MATH, or MATH. We have thus shown that for all MATH, there exists MATH, such that MATH. Hence we have shown that MATH . |
cs/0101017 | We assume that MATH and show that MATH. Let MATH, let MATH, and let MATH such that MATH. If MATH is defined, then, by REF , MATH. Therefore, there exists a MATH such that MATH. If MATH is undefined, then there is a prefix MATH of MATH such that MATH. (In fact, there are infinitely many of these prefixes MATH.) Then, by definition of MATH and MATH, we have, for all MATH, that MATH. If there exists MATH such that MATH, then let MATH be the only MATH-word in MATH. MATH is in MATH. So by REF , MATH. If there exists no MATH such that MATH, then MATH contains maximal words, which contradicts the theorem's preconditions. So, for all MATH, there exists a MATH such that MATH. Thus MATH. |
cs/0101017 | Let the extension of MATH with respect to empty abstract suffixes be the language MATH. Let MATH be the abstraction homomorphism defined by MATH, for all MATH, and MATH. Because MATH is weakly continuation-closed on MATH, MATH is weakly continuation-closed on MATH and MATH. The latter equality holds, because MATH being weakly continuation-closed on MATH implies for all MATH, MATH if MATH CITE. Because MATH, MATH does not contain maximal words. Let MATH be the NAME that we obtain by replacing the atomic proposition MATH in MATH by a new atomic proposition MATH. We have CASE: MATH if and only if MATH, CASE: MATH if and only if MATH, and CASE: MATH if and only if MATH. Additionally, by REF , we have that MATH if and only if MATH. According to the above established equivalences and MATH, we finally obtain MATH if and only if MATH. |
cs/0101020 | Whenever two players mutually accuse each other of not using the secure channels appropriately, it is not possible for the remaining honest players to decide which of the two players is cheating, and the secure channel between the two players cannot be used. If the players of two possible collusions MATH covering MATH cannot use the secure channels between them for the above reason, then again it is not clear to MATH which of the two possible collusions is cheating. To continue with the protocol, all messages between players who are complaining about each other have to be exchanged over the broadcast channel or over secure channels via MATH. Obviously MATH learns all secrets, or the protocol must be aborted. In both cases the protocol is not MATH-robust. |
cs/0101020 | Whenever two players mutually accuse each other of not properly using the oblivious transfer channel, it is impossible for the remaining honest players to decide which of the two players is actually refusing to cooperate. Let MATH and MATH be two possible collusions covering MATH, such that all the players from MATH are in conflict with all the players from MATH about refusing to use the oblivious transfer channel. Then the oblivious transfer channels between the players of MATH and the players of MATH cannot be used, and it is impossible for MATH to decide who is cheating. The player MATH must assist the players from MATH and MATH. As no other player can assist, we are in the three-party situation with an oblivious transfer channel only between a player NAME and NAME and a player NAME and NAME. For each bit being transferred from NAME to NAME, the player MATH knows either as much as NAME about this bit or as much as NAME. The players NAME and NAME cannot agree on a bit known to both without MATH knowing it, too. Hence many functions cannot be computed by multiparty protocols in this situation. |
cs/0101020 | By assumption one set of MATH contains all cheaters, so we have to show that a set from MATH which is not a vertex cover cannot contain all cheaters. This is trivial as for every set MATH from MATH which is not a vertex cover of MATH there exists a pair of players who are in conflict but neither of them is contained in MATH. By the definition of conflicts one of the two is cheating, but not contained in MATH and thus MATH cannot contain all cheaters. If no vertex cover of MATH is contained in MATH, then for every set MATH of MATH there exists a pair of players who are in conflict but neither of them is contained in MATH. Hence no set of MATH contains all cheaters and the assumption is violated. |
cs/0101020 | For every vertex cover MATH it is consistent with the conflict structure to assume that only the players in MATH are cheating. Hence if there exists a vertex cover MATH which does not contain a specific player, then this player need not be a cheater. So whenever a cheater can be identified by deduction from the conflict structure he must be contained in MATH. On the other hand if this intersection is not empty then every player in this intersection must be cheating as one set of MATH contains only cheaters, which follows from REF . |
cs/0101020 | If we set MATH to be the set of all subsets of MATH with at most MATH players, then deciding consistency with a given conflict graph MATH is the same as deciding if there exists a vertex cover with at most MATH vertices. This is MATH-complete CITE. |
cs/0101020 | We show that a search algorithm which can identify a cheater whenever a cheater can be identified by deduction from the conflict structure can be used to decide if a graph has a unique vertex cover of cardinality MATH. We let MATH be the set of all subsets of MATH with at most MATH elements. Let MATH be a search algorithm which identifies a cheater if it is possible to identify a cheater. We will use this algorithm to decide if there is a unique solution to the vertex cover problem. As this uniqueness problem is MATH-hard, we have then shown the problem of identifying cheaters to be MATH-hard (see CITE for the uniqueness problem and CITE for the uniqueness-preserving reduction of vertex cover to the satisfiability problem). To find a unique vertex cover (and hence decide its existence) we run MATH; if no cheater is identified, then there is no unique solution. If a cheater MATH is identified, we restrict our graph MATH to MATH. We repeat this procedure until MATH has either found enough cheaters that they form a vertex cover, which then must be a unique vertex cover, or not enough cheaters can be identified and no unique vertex cover exists. |
cs/0101020 | Deciding membership in MATH for a set MATH is clearly in MATH, as one can guess a superset of MATH which yields a satisfying assignment for MATH. On the other hand, we can reduce the satisfiability problem to deciding membership in MATH. As MATH contains, for every set MATH, all the subsets of MATH, the boolean function MATH has a satisfying truth assignment iff for one player MATH we have MATH. |
cs/0101020 | We will not state a full proof here, as it can be found in CITE. But we will restate the copying procedure, as it is an important subprotocol of all of the following protocols. Suppose NAME is committed to NAME to a bit MATH and wants two instances of this commitment. Then NAME creates MATH pairs of bit commitments such that each pair NAME to MATH. Then NAME randomly partitions these MATH pairs into three subsets of MATH pairs, thus obtaining three BCX, and asks NAME to prove the equality of the first new BCX with her BCX for MATH. This destroys the old BCX and one of the new BCX, but an honest NAME can thereby convince an honest NAME that the two remaining BCX both stand for the value MATH. |
cs/0101020 | Every player chooses a random bit and commits to it to every other player. Then the bits are opened using the broadcast channel. Some players might complain about other players. Every bit accepted by a non-collusion is called a valid bit. Then the result of the coin tossing is chosen to be the NAME of the valid bits. Every player whose bit is not accepted must be a cheater, as he is in conflict with a non-collusion. As only a set of players contained in MATH can be identified as cheaters and no two sets of MATH cover MATH, the bits of a non-collusion will be accepted as valid. Therefore the resulting bit is really random, as it cannot be chosen by a collusion. |
cs/0101020 | If NAME wants to generate a DBC for a bit MATH, all players have to commit to a random bit using GBCX and then unveil this bit to NAME. Then NAME will generate a GBCX such that the NAME of all the bits equals the bit MATH. The problem is that all the GBCX are only unveiled to NAME. Hence we cannot distinguish between a party refusing to unveil to NAME and NAME just claiming so. All other conflicts can be solved by REF . So assume NAME to be in conflict with a set of players MATH while she is creating a DBC. Then we will force the players from MATH to unveil their bits publicly. If some players are unable to unveil, we have identified cheaters. We seem to lose a little bit of security or correctness, as the complement MATH of the set MATH can reconstruct the secret. But as NAME is contained in MATH, the secret can only be recovered if NAME is cheating, too. If NAME is part of the collusion, the collusion does not learn anything new by reconstructing NAME input bit. |
cs/0101020 | We prove the claims of the lemma for the following protocol. Forward Oblivious Transfer via NAME of MATH CASE: NAME sends the bits MATH to NAME. CASE: NAME commits to MATH to NAME and to NAME using a GBCX involving only the players NAME, NAME, and NAME. Then NAME opens the commitment to NAME to convince her that she is now committed to MATH to NAME. CASE: NAME commits to a bit MATH to NAME. CASE: NAME runs COT REF with NAME. For the security of the protocol we have to prove that CASE: NAME and NAME cannot together learn the secret MATH of NAME. CASE: NAME alone cannot learn the secret MATH of NAME (together with NAME or together with NAME, MATH are no longer secret, as they can be derived from the input or output of the function, respectively). Point REF. is clear from the security of the COT protocol. Point REF. follows directly from the security of the GBCX protocol and the COT protocol. To prove the partial correctness it is enough to prove that Caro alone cannot alter the two bits without getting in conflict with NAME or NAME. NAME can check if the two bits NAME is committed to equal the bits she sent to NAME because of the binding property of the GBCX bit commitment. NAME can check if the bits NAME is committed to equal the bits NAME sent to NAME by the properties of the COT protocol used. |
cs/0101020 | If the sender and the receiver of an oblivious transfer are not in conflict yet, then a new conflict arises as soon as one party complains. So we are left with the interesting case where the sender and the receiver are already in conflict. In this situation we use the following protocol: Oblivious Transfer for players in conflict REF Let MATH be the set of players not in conflict with NAME or NAME. CASE: NAME chooses a bit MATH CASE: For all MATH do CASE: NAME chooses random bits MATH and performs with NAME Forward Oblivious Transfer via MATH of MATH CASE: If NAME or NAME gets in conflict with MATH then let MATH CASE: NAME calculates MATH and MATH and broadcasts these two bits. We now prove the security, partial correctness, and fairness of the above protocol. Security: The secret bit MATH of NAME cannot be learnt by anyone due to the security of the COT protocol. Now we look at NAME secrets. Let MATH denote the set of players the receiver NAME is in conflict with and MATH be the set the sender NAME is in conflict with. The players of the set MATH can together reconstruct a secret of the sender NAME. But the set MATH cannot contain all cheaters; the complete collusion is larger. If NAME is honest (otherwise we don't need to protect her secret), then all players of MATH are cheaters and have to be considered as part of the collusion. The complete collusion able to reconstruct a secret bit and containing all cheaters is then at least as large as MATH. The set MATH is contained in MATH (otherwise NAME would have left the protocol); then MATH, and no collusion of MATH learns a secret. It remains to be shown that no honest but curious player gets to know a secret. As MATH, because no two collusions cover all but one player, NAME secret is always distributed among several honest players and no single honest but curious player can reconstruct it. We can conclude that the protocol is MATH-secure. 
Partial correctness: According to REF no player of MATH can have altered the values of the bits without a new conflict arising. At the end of the protocol the set MATH contains only the players NAME and NAME are not in conflict with. Thus the players of MATH cannot have altered the bits; hence we even get MATH-partial correctness for this protocol. Fairness is not an issue here, as only one player, NAME, learns a result. The sender is committed to the bits MATH: as each player MATH is committed to the bits MATH, the sender can ask all players from MATH to open the bits. If the bits are not opened correctly, either the sender or the receiver will object and a new conflict must arise between a player from MATH and NAME or NAME. |
cs/0101020 | We will restate the GCOT protocol of CITE without a proof of its security. Details can be found in CITE. Then we will carefully investigate the steps and see that, by replacing GBCX with the modified protocol of REF and using the oblivious transfer of REF , each step either works, or a new conflict arises, or a cheater is identified. The steps which did not work can be repeated, and eventually the protocol works or a cheater can be identified unambiguously. In the restated protocol we will use the notation of CITE: indices are superscript and MATH denotes the Oblivious Transfer for players in conflict REF protocol of REF . MATH CASE: All participants together choose one decodable MATH linear code MATH with MATH and MATH for positive constants MATH, efficiently decoding MATH errors. CASE: NAME randomly picks MATH, commits to the bits MATH and MATH REF of the code words, and proves that the codewords fulfil the linear relations of MATH. CASE: NAME randomly picks MATH, with MATH and sets MATH for MATH and MATH for MATH. CASE: NAME runs MATH with NAME who gets MATH for MATH. NAME tells MATH to NAME who opens MATH for each MATH. CASE: NAME checks that MATH for MATH and MATH for MATH, sets MATH, for MATH and corrects MATH using MATH's decoding algorithm, commits to MATH for MATH, and proves that MATH. CASE: All players together randomly pick a subset MATH with MATH, MATH and NAME opens MATH and MATH for MATH. CASE: NAME proves that MATH for MATH. CASE: NAME randomly picks and announces a privacy amplification function MATH such that MATH and MATH and proves MATH and MATH. CASE: NAME sets MATH, commits to MATH and proves MATH. As GBCX commitments as well as zero-knowledge proofs convincing a non-collusion can be performed by all players unless a cheater is identified REF , the honest behaviour of NAME and NAME can be checked by a non-collusion in all steps except REF . If NAME now claims that NAME cheated in REF . 
then NAME can open the codewords MATH according to REF . Then either NAME or NAME is caught cheating, or, if the opening was not successful, a new conflict must arise REF . If this is the case, we repeat REF . to REF with new random choices. After a finite number of repetitions a cheater will be identified, as there cannot be arbitrarily many conflicts. As the codewords which might have to be opened are random and not related to NAME secret inputs, no security is lost by restarting the protocol. Hence the security is the same as stated in REF . |
cs/0101020 | We restate the protocols from CITE to see that they involve only primitives which can be dealt with according to our results so far. A PAND can be realized by the following protocol: NAME is committed to MATH and NAME is committed to MATH. Then NAME chooses a random bit MATH and runs MATH with NAME who gets MATH. We have MATH because for MATH we have MATH and hence MATH, for MATH we get MATH and MATH. For a GPAND protocol the COT protocol has to be replaced by GCOT. To evaluate an AND on DBCs we observe that MATH . From this we can conclude that an AND operation on DBCs can be realized by MATH GPAND one for each pair of players and NAME operations for each player. |
cs/0101020 | To implement such a NOT gate, one player is picked who must invert his GBCX (his ``share'' of the DBC which represents a bit MATH). The player generates a new GBCX and proves that it is unequal to the GBCX he held before. This GBCX together with the GBCX of the other players forms a DBC for the inverted bit. |
cs/0101020 | According to REF we can realize the boolean operations AND and NOT on DBCs such that whenever the protocol fails, a cheater is identified. Furthermore we can generate DBCs successfully or a cheater will be identified REF . Using these techniques we will implement oblivious circuit evaluation. The protocol will be restarted each time it had to be aborted, but without the players who were identified as cheaters. We will next have to clarify how the protocol begins and how it ends. Below we will sketch the structure of the complete protocol, without mentioning possible restarts, closely following CITE. Initialization Phase: All players have to agree on the function to be computed as well as on the circuit MATH to be used; they have to agree on an adversary structure MATH such that the protocol will be MATH-robust, and all players have to agree on the security parameters used and on a code MATH for the GCOT protocol. Furthermore, the players agree on how, in case of a restart of the protocol, to choose the input of a cheater who has been excluded from the protocol. Then all players create DBCs to commit to their inputs. Computing Phase: The circuit is evaluated using AND and NOT gates on the input DBCs. If the circuit requires several copies of a DBC, then a DBC is copied by copying the GBCX it consists of. A GBCX can be copied by copying all its BCX with the procedure of REF . Revelation Phase: The result of a computation is hidden in DBCs. These have to be unveiled in a way that ensures the fairness of the protocol. Following CITE we use the techniques from CITE to fairly unveil the secret information such that no collusion can run off with an advantage of more than a fraction of a bit. Of course a MATH-secure protocol cannot be more than MATH-fair. |
cs/0101020 | By inspection of REF we can see that the only step where the MATH-security is lost is the use of REF . The secret which is distributed when applying REF is a secret of the sender in the oblivious transfer by REF . Hence the MATH-security is lost only if the sender was honest. To complete the proof we look at the set which can reconstruct the distributed secret. Let a player REF be in conflict with a set MATH containing a player NAME, and let NAME be in conflict with a superset MATH of the set MATH. Of course MATH contains NAME. If we use REF to implement oblivious transfer between NAME and NAME, then secret bits of NAME are distributed among the players of MATH, a subset of MATH. If NAME is honest (otherwise we need not protect her secret), then all players of MATH are cheating and the complete collusion able to reconstruct secret bits of NAME is MATH, a subset of MATH. The set MATH contains all cheaters and can reconstruct a secret of NAME, but it can only be a subset of MATH if MATH and MATH are disjoint, i.e., if MATH does not contain any player in conflict with the sender. |
cs/0101020 | Let MATH be any maximal set of MATH. In addition to REF we have to prove that we can additionally prevent MATH from reconstructing any secret data. From REF we know that a complement of a maximal set MATH contains all cheaters and can reconstruct a secret only if the receiver of an oblivious transfer by REF was in conflict with a superset of MATH. As MATH is maximal, either the receiver is detected cheating by being in conflict with a set not in MATH, or the receiver has to be in conflict with exactly all players from MATH. We keep in mind that oblivious transfer, as well as GCOT, is needed in one direction only between every pair of players. We modify the protocol such that a player who is in conflict with exactly the players of MATH always sends in an oblivious transfer if it is implemented by REF . It remains to be shown that it is impossible that the receiver and the sender are both in conflict with the players of MATH. REF is only employed if the sender and the receiver are in conflict. Hence the sets of players the sender and the receiver are in conflict with have to differ, as no one can be in conflict with himself. |
cs/0101020 | Let NAME be in conflict with the set MATH maximal in MATH and NAME be in conflict with the set MATH maximal in MATH and let MATH. NAME and NAME must be in conflict, because only one collusion from MATH is cheating. Hence NAME MATH and NAME MATH. We now look at any additional conflict. This conflict has to involve a player from MATH; otherwise we could identify a cheater, because MATH would not be a vertex cover of the conflict graph and NAME must be cheating as he is in conflict with all players of MATH. For the same reason one of the two players in conflict must be contained in MATH, else NAME would be caught cheating. Hence every additional conflict is a conflict between a player from the set MATH and a player from the set MATH. |
cs/0101020 | All properties of MATH and MATH were dealt with in REF . Hence we will have to consider only the a posteriori security in this proof. To achieve the security stated in point REF. of the theorem we need one more modification of the protocol developed so far. Again we keep in mind that oblivious transfer has to be used in one direction only between every pair of players. We will introduce more rules regulating the direction in which oblivious transfer has to be used whenever REF is employed. CASE: If a player is in conflict with the set MATH then this player sends only in an oblivious transfer which is implemented by REF . CASE: If a player is in conflict with a maximal set of MATH and needs to employ REF , then this player always sends to players who are not in conflict with a maximal set of MATH. CASE: If two players are in conflict with a maximal set of MATH then we use a previously fixed order MATH on the set of maximal sets of MATH. The player in conflict with the maximal set larger with respect to the order MATH sends and the player in conflict with the maximal set smaller with respect to the order MATH receives. To be consistent with the above the set MATH must be maximal with respect to the ordering MATH. These additional rules for the direction of oblivious transfers are in accordance with the proof of REF . Hence we don't need to prove points REF. and REF. from the above theorem as the proof of REF still applies. Now we prove point REF. of the above theorem. A set MATH with MATH maximal in MATH is able to reconstruct a secret only if the receiver of an oblivious transfer was in conflict with the players of MATH and MATH does not contain any players the sender is in conflict with REF . Let the sender be in conflict with the players of a set MATH then MATH must be maximal in MATH or the direction of the oblivious transfer would have to be different (point REF. of the above enumeration). 
Furthermore MATH and MATH must be disjoint, as otherwise MATH would contain a player the sender is in conflict with. We can conclude that a set MATH with MATH maximal in MATH is able to reconstruct a secret only if there exist two players each of which is in conflict with a maximal set and these two sets are disjoint. From now on we will consider only this situation. From REF we know that in such a situation every conflict is a conflict of a player from MATH and a player of MATH. So whenever in this case two players are in conflict with maximal sets of MATH, one set must be MATH and the other must be MATH. During an oblivious transfer which is implemented by REF only MATH or MATH could learn a secret bit, but according to point REF. of the above enumeration the direction of the oblivious transfer is always chosen in a way that secrets will be shared only among the players of one of these two sets. So from the complements of the sets which are maximal in MATH only one set is excluded from MATH; all other sets of MATH were already contained in MATH. |
cs/0101020 | After termination of the protocol no collusion can change the result or abort the protocol anymore; hence the only problem left is that a collusion of players leaks its secrets. If a collusion leaks its secret data, it can happen that an honest but curious player learns a secret without himself colluding. But it is obvious that for a MATH-secure protocol the collusion which is leaking secrets plus the honest but curious player must not be contained in MATH if the honest but curious player is to learn a secret. Hence after termination the protocol is MATH-robust for MATH. |
cs/0101021 | The theorem holds trivially if MATH. For the rest of the proof we assume MATH, and thus MATH. Many graphs may appear during the execution of MATH. These graphs can be organized as nodes of a binary tree MATH rooted at MATH, where REF if MATH and MATH are obtained from MATH by calling MATH, then MATH and MATH are the children of MATH in MATH, and REF if MATH, then MATH has no children in MATH. Further consider the multiset MATH consisting of all graphs MATH that are nodes of MATH. We partition MATH into MATH multisets MATH as follows. MATH consists of the graphs MATH with MATH. For MATH, MATH consists of the graphs MATH with MATH. Let MATH, and thus set MATH. Define MATH. We first show MATH for every MATH. Let MATH be a graph in MATH. Let MATH be the set consisting of the leaf descendants of MATH in MATH; for example, MATH. By Condition P REF, MATH. By Condition P REF, no two graphs in MATH are related in MATH. Therefore MATH contains at most one ancestor of MATH in MATH for every graph MATH in MATH. It follows that MATH. Since MATH for every MATH in MATH, REF holds. CASE: Suppose that the children of MATH in MATH are MATH and MATH. Let MATH, where MATH is the auxiliary binary string for MATH. Let MATH. Then, MATH. By the super-additivity of MATH, MATH. Since MATH is continuous, REF can be proved by showing MATH and MATH below. By Condition P REF, MATH. By REF , MATH. Thus, MATH, and MATH . Now we regard the execution of MATH as a process of growing MATH. Let MATH. At the beginning of the function call MATH, MATH has exactly one node MATH, and thus MATH. At the end of the function call, MATH is fully expanded, and thus MATH. By Condition P REF, during the execution of MATH, every function call MATH with MATH increases MATH by MATH. Hence MATH . Note that MATH . By REF , MATH, and thus MATH. Hence MATH. By REF , MATH and MATH, finishing the proof of REF . CASE: By Conditions P REF and P REF, MATH for every MATH. Since MATH, MATH. 
Together with REF , we know MATH for every MATH. By the definition of MATH, MATH for every MATH. Therefore MATH and MATH can be obtained from each other in time MATH . Clearly MATH. Since MATH and MATH, MATH, and REF follows. |
cs/0101021 | REF - REF are straightforward by REF and the definitions of MATH, MATH and MATH. REF is proved as follows. It takes MATH time to locate MATH (respectively, MATH) in MATH (respectively, MATH) by looking for the node with the lowest degree on MATH (respectively, MATH). By REF , it takes MATH time to obtain MATH, MATH, and MATH from MATH. Therefore MATH and MATH can be located in MATH and MATH in MATH time by depth-first traversal. Now MATH can be obtained from MATH by removing MATH and its incident edges. The cycle MATH in MATH is simply MATH. Also, MATH can be obtained from MATH by removing MATH, MATH, and their incident edges. The MATH in MATH is simply the boundary of the face that encloses MATH and its incident edges in MATH. Since we know the positions of MATH in MATH and MATH, MATH can be obtained from MATH and MATH by fitting them together along MATH by aligning MATH. The overall time complexity is MATH. |
cs/0101021 | Since a MATH-node MATH-graph has MATH edges, there are at most MATH distinct MATH-node MATH-graphs. Thus, there exists an indexing scheme MATH such that MATH and MATH can be obtained from each other in MATH time. The theorem follows from REF . |
cs/0101021 | Assume for a contradiction that such a MATH does not exist. It follows that the sum of degrees of all nodes in MATH is at least MATH. This contradicts the fact that MATH has MATH edges. |
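The contradiction above rests on the degree-sum identity: in any graph the sum of the node degrees equals twice the number of edges, so not every node can have degree above the average. A minimal numeric check of this identity (the example graph is a hypothetical stand-in, not one from the paper):

```python
# Minimal check of the degree-sum identity behind the contradiction:
# sum of degrees = 2 * |E|, hence some node has degree at most the
# average 2|E|/n. The small graph below is an arbitrary illustration.

from collections import defaultdict

edges = [(0, 1), (0, 2), (1, 2), (2, 3)]  # hypothetical example graph
degree = defaultdict(int)
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

assert sum(degree.values()) == 2 * len(edges)

# Therefore the minimum degree cannot exceed the average degree.
min_deg = min(degree.values())
assert min_deg <= 2 * len(edges) / len(degree)
```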
cs/0101021 | Since MATH and MATH are both triconnected, and each node of MATH has degree at least three in MATH and MATH, REF holds for each case of the connectivity of the input MATH-graph MATH. REF - REF are straightforward by REF and the definitions of MATH, MATH and MATH. REF is proved as follows. First of all, we obtain MATH from MATH. Since MATH does not contain any node of degree MATH or MATH, MATH is the only degree-MATH node in MATH. Therefore it takes MATH time to identify MATH in MATH. MATH is the only degree-REF neighbor of MATH. Since MATH, MATH is the only degree-REF neighbor of MATH. MATH is the common neighbor of MATH and MATH that is not adjacent to MATH. From now on, MATH, for each MATH, is the common neighbor of MATH and MATH other than MATH. Clearly, MATH and thus MATH can be identified in MATH time. MATH can now be obtained from MATH by removing MATH. Similarly, MATH can be obtained from MATH and MATH by deleting MATH after identifying MATH. Finally, MATH can be recovered by fitting MATH and MATH together by aligning MATH. Based on MATH, MATH can then be obtained from MATH by removing the edges of MATH that are not originally in MATH. |
cs/0101021 | Since there are at most MATH distinct MATH-node MATH-graphs, there exists an indexing scheme MATH such that MATH and MATH can be obtained from each other in MATH time. The theorem follows from REF . |
cs/0101022 | It is sufficient to show the result for the first step. The general result follows from the persistence of simply-modedness under input-consuming derivation steps CITE. Let MATH be the atom in MATH selected in the step and MATH. We prove the first statement. Clearly MATH and MATH are unifiable. Since MATH is input-consistent and by hypothesis, MATH is linear and has variables in the free positions, and variables or flat terms in the controlled positions. Moreover, since MATH is simple and MATH is selectable, MATH is non-variable in the controlled positions. Considering in addition that the clause is a fresh copy renamed apart from MATH, it follows that MATH is an instance of MATH. Let MATH be the substitution with MATH such that MATH. Since MATH is a linear vector of variables, there is a substitution MATH such that MATH and MATH. Since MATH is simply moded, we have MATH, and therefore MATH. Thus it follows by the previous paragraph that MATH is an MGU of MATH and MATH. More precisely, we have MATH, MATH, MATH, and MATH, and so in particular, the derivation step using MATH is input-consuming. Since mgu's are unique modulo renaming, the first statement follows. We now show the second statement. If MATH contains flat (that is, non-variable) terms in all controlled positions, then clearly MATH must be non-variable in those positions for the derivation step using the clause to be input-consuming. But then MATH is also selectable. Since the same holds for every clause, the statement follows. |
cs/0101022 | Notice that since MATH by hypothesis, MATH is nicely-moded as well. Since, by REF , input-consuming derivations only affect variables occurring in the output positions of a query, one only has to appropriately instantiate every resolvent in the derivation. Clearly, every resolution step remains input-consuming (the selected atom is just instantiated a bit further). |
cs/0101022 | Let MATH and MATH. By properties of mgu's (see CITE), there exist substitutions MATH and MATH (the names have been chosen to correspond closely to REF ) such that MATH and all those mgu's are relevant. Since, by hypothesis, MATH, it follows that MATH is an instance of MATH. Hence, we can assume without loss of generality that MATH is such that MATH and thus CASE: MATH, and CASE: MATH. Since MATH is fresh with respect to MATH, this means that MATH is simply-local with respect to the clause MATH. By relevance of MATH, simply-modedness of MATH and the fact that MATH and MATH are variable disjoint, it follows that MATH. Hence, MATH. Since, by simply-modedness of MATH, MATH is a sequence of distinct variables, we can assume without loss of generality that MATH is such that MATH and thus CASE: MATH, and CASE: MATH. Since MATH is fresh with respect to MATH and noting REF , this means that MATH is simply-local with respect to the query MATH. |
cs/0101022 | By induction on MATH. Base. Let MATH. In this case, MATH . By REF , there exists a set of variables MATH such that MATH and MATH (that is, MATH is fresh with respect to MATH), and MATH. Suppose that MATH is of the form MATH where MATH is the input clause used in the first derivation step, MATH such that MATH and MATH. Since MATH is simply-local with respect to MATH and MATH, we have that MATH where CASE: MATH, CASE: MATH, CASE: MATH, CASE: MATH. Now MATH. By standardisation apart and simply-modedness of MATH, it follows that MATH, and so MATH. Consider the derivation MATH. We have that MATH. Since MATH, we have that MATH. Thus, MATH. Induction step. Let MATH. By REF , there exist MATH substitutions such that MATH such that MATH, and MATH, and all the MATH-steps are performed in the sub-derivation MATH . By the induction hypothesis and standardisation apart, there exist MATH disjoint sets of fresh variables (with respect to MATH) such that for all MATH, CASE: MATH, CASE: MATH. |
cs/0101022 | MATH . By induction on the length of MATH. Base. Let MATH. In this case MATH has the form MATH where MATH is the input clause and MATH satisfies MATH. Since MATH is simply-local with respect to MATH and MATH, by REF , MATH is simply-local with respect to MATH. Hence, by definition of MATH, MATH . Induction step. Let MATH. In this case, MATH has the form MATH where MATH is the input clause used in the first derivation step, MATH satisfies MATH and MATH. Since MATH is simply-local with respect to MATH and MATH, by REF , MATH is simply-local with respect to MATH. Let MATH. By REF of simply-local substitution, there exists a set MATH of fresh variables (with respect to MATH) such that MATH and MATH . By standardisation apart, MATH . By REF and the NAME Lemma, there exist MATH and a derivation MATH isomorphic to MATH (modulo the NAME Lemma), and MATH has the form MATH where MATH. In particular, for all MATH, MATH is an input-consuming successful derivation which is strictly shorter than MATH. Hence, by the inductive hypothesis, for all MATH, MATH . By simply-modedness of MATH, MATH. By REF , there exist distinct sets of fresh variables MATH such that MATH. By induction on MATH, one can prove that, for all MATH, MATH . The base case is trivial. The induction step follows from the inductive hypothesis (that is, MATH for MATH), standardisation apart and simply-modedness of MATH. For all MATH, let MATH. Hence, by REF , MATH . By standardisation apart, for all MATH, MATH . By REF , for all MATH, MATH . By standardisation apart and simply-modedness of MATH, it follows that for all MATH, MATH. Hence, by REF , it follows that MATH . By REF and the fact that MATH are disjoint sets of variables, it follows that MATH . Moreover, by REF , MATH . By definition of MATH, MATH. Since MATH, we have proven that MATH . Since by REF , MATH, this completes the proof of the MATH direction. MATH . We first need to establish the following fact.
Let the atom MATH and the clause MATH be simply-moded. Suppose that there exist two substitutions MATH and MATH such that CASE: MATH is simply-local with respect to MATH, CASE: MATH, CASE: MATH. Then, for each variant MATH of MATH variable disjoint with MATH, there exists MATH such that MATH and MATH. Since MATH, it follows that (since MATH and MATH are variable-disjoint) MATH and MATH are unifiable, and MATH REF is an instance of the most general common instance of MATH and MATH. Now, since by REF and MATH is simply-local with respect to MATH, we can choose MATH such that MATH . Using that MATH is an mgu, the assumptions in the statement, and REF , we have MATH, that is, MATH . By REF , MATH . By REF , MATH. This completes the proof of the Fact. We now continue the proof of the main statement. We show by induction on MATH that if MATH for some MATH and substitution MATH such that MATH, then there exists an input-consuming successful derivation MATH and MATH. Base. Let MATH. In this case, MATH. By Definition of MATH, there exists a clause MATH of MATH and a substitution MATH such that MATH is simply-local with respect to MATH and MATH. Let MATH be a variant of MATH variable disjoint from MATH. By REF , there exists an mgu MATH of MATH and MATH such that MATH and MATH, that is, there exists an input-consuming successful derivation MATH. Induction step. Let MATH and MATH. By definition of MATH, there exists a clause MATH of MATH and a substitution MATH such that MATH is simply-local with respect to MATH, MATH and MATH. By REF of simply-local substitution, there exist MATH substitutions and MATH disjoint sets of fresh variables (with respect to MATH) such that MATH where CASE: MATH, CASE: MATH, CASE: for MATH, MATH, CASE: for MATH, MATH. By simply-modedness of MATH, we have that for all MATH, MATH. So we have MATH and MATH and hence, by the inductive hypothesis, for all MATH, there is an input-consuming derivation MATH such that MATH . 
Let MATH be a variant of MATH variable disjoint from MATH such that MATH for some renaming MATH. By REF , there exists an mgu MATH of MATH and MATH such that MATH and MATH. Thus, MATH . By the inductive hypothesis, for all MATH, there exists an input-consuming successful derivation MATH such that MATH. Hence, there exists an input-consuming successful derivation MATH such that MATH. Hence, there exists an input-consuming successful derivation MATH with MATH and MATH, that is, MATH. |
cs/0101022 | Let MATH. The proof is by induction on MATH. Base. Let MATH. In this case the thesis follows from REF . Induction step. Let MATH. MATH . Let MATH. By REF , MATH is simply-local with respect to MATH. By REF , there exists a successful input-consuming derivation of the form MATH where MATH. By REF and standardisation apart, MATH where MATH is a set of fresh variables (with respect to MATH), and MATH where MATH is a set of fresh variables (with respect to MATH and MATH). By simply-modedness of MATH and standardisation apart, MATH, and so MATH. By simply-modedness of MATH, MATH. Hence, by standardisation apart, MATH, that is, MATH. By the inductive hypothesis, MATH and MATH, that is, MATH. MATH . By the inductive hypothesis, there exists an input-consuming successful derivation MATH where MATH. Again by the inductive hypothesis, there exists an input-consuming successful derivation MATH such that MATH. Since, by standardisation apart, MATH, it follows that there is an input-consuming successful derivation MATH . |
cs/0101022 | The proof is almost identical to the one of REF . The basic difference is that now, in the base cases, we have to consider derivations of length zero. MATH . If MATH, then MATH and MATH (the empty substitution). The thesis follows from the fact that MATH is simply-moded and MATH contains the set of all simply-moded atoms. MATH . If MATH then MATH is just a renaming of the output variables of MATH. The thesis follows by taking MATH to be the empty substitution and MATH to be the derivation of length zero. |
cs/0101022 | The proof is analogous to the one of REF , but using REF instead of REF. |
cs/0101022 | First, for each predicate symbol MATH, we define MATH to be the number of predicate symbols it depends on: MATH. Clearly, MATH is always finite. Further, it is immediate to see that if MATH then MATH and that if MATH then MATH. We can now prove our theorem. By REF , it is sufficient to prove that for any simply-moded one-atom query MATH, all input-consuming derivations of MATH are finite. First notice that if MATH is defined in MATH then the result follows immediately from the hypothesis that MATH is input terminating and that MATH is an extension of MATH. So we can assume that MATH is defined in MATH. For the purpose of deriving a contradiction, assume MATH is an infinite input-consuming derivation of MATH such that MATH is defined in MATH. Then MATH where MATH is the input clause used in the first derivation step and MATH. Clearly, MATH has an infinite input-consuming derivation in MATH. By the NAME lemma, for some MATH and for some substitution MATH, CASE: there exists an infinite input-consuming derivation of MATH of the form MATH CASE: there exists an infinite input-consuming derivation of MATH . Let MATH. By REF , MATH is simply-local with respect to MATH. Consider the instance MATH of MATH. By REF , MATH. We show that REF cannot hold, by induction on MATH with respect to the ordering MATH defined by: MATH iff either MATH or MATH and MATH. Base. Let MATH (MATH is arbitrary). In this case, MATH does not depend on any predicate symbol of MATH, thus all the MATH as well as all the atoms occurring in its descendants in any input-consuming derivation are defined in MATH. The hypothesis that MATH is input terminating contradicts MATH above. Induction step. We distinguish two cases: CASE: MATH, CASE: MATH. In case MATH we have that MATH . Therefore, MATH . In case MATH, from the hypothesis that MATH is simply-acceptable with respect to MATH and MATH, MATH is simply-local with respect to MATH and MATH, it follows that MATH.
Consider the partial input-consuming derivation MATH. By REF and the fact that MATH is a moded level mapping, we have that MATH. Hence, MATH. In both cases, the contradiction follows by the inductive hypothesis. |
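The induction in the proof above runs over a pairwise ordering whose masked definition has the shape of a lexicographic product of two well-founded orders. As a hedged reconstruction (an inference from the shape of the definition, not a quote from the paper), this is the standard order that licenses such an induction:

```latex
% Hedged reconstruction: the induction ordering appears to be the
% lexicographic product of two well-founded orders.
\[
  (a, b) \prec (a', b')
  \quad\text{iff}\quad
  a < a' \;\lor\; \bigl(a = a' \,\land\, b < b'\bigr).
\]
% Such a product is well-founded: in an infinite descending chain the
% first components form a non-increasing sequence, hence are eventually
% constant; the tail would then descend infinitely in the second
% component, a contradiction. This is what makes the induction step
% legitimate.
```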
cs/0101022 | The proof follows from REF , by setting MATH. |
cs/0101022 | By definition, the NAME are finitely branching. The claim now follows by NAME 's Lemma. |
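The masked lemma invoked above is presumably König's Lemma (an assumption based on the "finitely branching" hypothesis, not a quote from the paper). For reference:

```latex
% Presumably K\"onig's Lemma, as invoked above:
\[
  \text{Every infinite, finitely branching tree has an infinite branch.}
\]
% Used here in contrapositive form: a finitely branching tree with no
% infinite branch is finite. Hence if every derivation (branch) is
% finite, the whole derivation tree is finite.
```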
cs/0101022 | Consider an NAME MATH for MATH. By the hypothesis that MATH, it follows that there exists a substitution MATH (possibly the empty substitution) such that MATH is more general than MATH, and MATH is a partial c.a.s. for MATH, such that either no atom is selectable or MATH is the selected atom. Clearly, MATH. By REF we have MATH. Hence the thesis. |
cs/0101022 | We show that there exists a moded level mapping MATH for MATH such that MATH is simply-acceptable with respect to MATH and MATH, the latter being the least simply-local model of MATH containing MATH. Given an atom MATH, we denote with MATH an atom obtained from MATH by replacing the terms filling in its output positions with fresh distinct variables. Clearly, we have that MATH is simply-moded. Then we define the following moded level mapping for MATH: MATH . Notice that the level MATH of an atom MATH is independent of the terms filling in its output positions, that is, MATH is a moded level mapping. Moreover, since MATH is input terminating and MATH is simply-moded, all the input-consuming derivations of MATH are finite. Therefore, by REF , MATH is defined (and finite), and thus MATH is defined (and finite) for every atom MATH. We now prove that MATH is simply-acceptable with respect to MATH and MATH. Let MATH be a clause of MATH and MATH be an instance of MATH where MATH is a simply-local substitution with respect to MATH. We show that MATH . Consider a variant MATH of MATH variable disjoint from MATH. Let MATH be a renaming such that MATH. Clearly, MATH and MATH unify. Let MATH. Since MATH is simply-local with respect to MATH and MATH, we have MATH. Hence MATH, and MATH is an input-consuming derivation step, that is, MATH is a descendant of MATH in an NAME for MATH. Moreover, MATH. Let MATH. Hence, MATH is simply-local with respect to MATH. Then, we have that MATH . |
cs/0101022 | By REF . |
cs/0101023 | Since MATH and MATH are unifiable, there exists a relevant mgu MATH of them (compare r. REF ). Now, MATH is a renaming of MATH. Thus MATH is a variant of MATH. Then there exists a renaming MATH such that MATH and MATH. Now, take MATH. |
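The proof above manipulates most general unifiers (mgu's), renamings, and variants. As a minimal illustration of the standard mgu notion, here is a small first-order unification sketch; the term representation (tuples for compound terms, strings for variables) is an assumption for illustration, not the paper's formalism, and the occurs-check is omitted for brevity.

```python
# Minimal first-order unification sketch illustrating the standard
# notion of an mgu used in the proof above. Terms: ("f", arg, ...) for
# compound terms, plain strings for variables. Occurs-check omitted.

def is_var(t):
    return isinstance(t, str)

def walk(t, s):
    # Follow variable bindings in substitution s.
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(t1, t2):
    """Return an mgu of t1 and t2 as a dict, or None if not unifiable."""
    s = {}
    stack = [(t1, t2)]
    while stack:
        a, b = stack.pop()
        a, b = walk(a, s), walk(b, s)
        if a == b:
            continue
        if is_var(a):
            s[a] = b
        elif is_var(b):
            s[b] = a
        elif a[0] == b[0] and len(a) == len(b):
            stack.extend(zip(a[1:], b[1:]))
        else:
            return None  # clash of function symbols
    return s

# p(X, f(Y)) and p(g(Z), f(Z)) unify under {X -> g(Z), Y -> Z}.
mgu = unify(("p", "X", ("f", "Y")), ("p", ("g", "Z"), ("f", "Z")))
```

A *relevant* mgu, as used in the proof, additionally binds only variables occurring in the two terms; the simple algorithm above happens to produce relevant mgu's.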
cs/0101023 | Let us first establish the following claim. Let MATH and MATH be two variable disjoint sequences of terms such that MATH is linear and MATH. If MATH and MATH are two variable disjoint terms occurring in MATH then MATH and MATH are variable disjoint terms. The result follows from REF. We proceed with the proof of the lemma by induction on MATH. CASE: Let MATH. In this case MATH and the result follows trivially. Induction step. Let MATH. Suppose that MATH and MATH where MATH is the selected atom of MATH, MATH is the input clause used in the first derivation step, MATH is a relevant mgu of MATH and MATH and MATH. Let MATH and MATH. We first show that MATH . We distinguish two cases. MATH. In this case, REF follows from the hypothesis that MATH is input-consuming. MATH. Since MATH, by standardization apart, we have that MATH. Moreover, since MATH, it also holds that MATH. Then, REF follows from relevance of MATH. Now we show that MATH . Again, we distinguish two cases: MATH. In this case, because of the standardization apart condition, MATH will never occur in MATH. Hence, MATH and MATH. MATH. In this case, in order to prove MATH we show that MATH. The result then follows by the inductive hypothesis. From the standardization apart, relevance of MATH and the fact that the first derivation step is input-consuming, it follows that MATH. From the hypothesis that MATH is nicely-moded, MATH. Hence, MATH. Since MATH, this proves that MATH. It remains to be proven that MATH. We distinguish two cases. MATH. Since MATH, the fact that MATH follows immediately by the standardization apart condition and relevance of MATH. MATH. By known results (see REF ), there exist two relevant mgu's MATH and MATH such that CASE: MATH, CASE: MATH, CASE: MATH. From relevance of MATH and the fact that, by nicely-modedness of MATH, MATH, we have that MATH, and by the standardization apart condition MATH. Now by nicely-modedness of MATH, MATH.
Since MATH is relevant and by the standardization apart condition it follows that MATH . The proof proceeds now by contradiction. Suppose that MATH. Since by REF, and MATH, we have that MATH. By MATH, this means that there exist two distinct variables MATH and MATH in MATH such that MATH, MATH and MATH . Since, by the standardization apart condition and relevance of the mgu's, MATH and MATH, we have that MATH and MATH are two disjoint subterms of MATH. Since MATH, MATH is linear and disjoint from MATH, REF contradicts REF . |
cs/0101023 | Let MATH, MATH, MATH and MATH. Hence, MATH and MATH . By MATH and the fact that MATH is nicely-moded and MATH is relevant, we have that MATH. Then, MATH and MATH . Moreover, MATH where MATH . We construct the derivation MATH as follows. MATH where MATH . By MATH, MATH is an input-consuming derivation step. Observe now that MATH . This proves that MATH is an input-consuming derivation step. |
cs/0101023 | It will be obtained from the proof of REF by setting MATH. |
cs/0101023 | Let MATH be an infinite input-consuming derivation of MATH. Then MATH contains an infinite number of MATH-steps for some MATH. Let MATH be the minimum of such MATH. Hence MATH contains a finite number of MATH-steps for MATH and there exist MATH and MATH such that MATH where MATH is a finite prefix of MATH which comprises all the MATH-steps of MATH for MATH and MATH is the subquery of MATH consisting of the atoms resulting from some MATH-step (MATH). By REF , there exists an infinite input-consuming derivation MATH such that MATH where MATH. This proves REF . Now, let MATH. Note that in MATH the atoms of MATH will never be selected and, by REF , will never be instantiated. Let MATH be obtained from MATH by omitting the prefix MATH in each query. Hence MATH is an infinite input-consuming derivation of MATH where an infinite number of MATH-steps are performed. Again, by REF , for every finite prefix of MATH of the form MATH where MATH and MATH are obtained by partially resolving MATH and MATH, respectively, and MATH is a MATH-step for some MATH, we have that MATH. Hence, from the hypothesis that there is an infinite number of MATH-steps in MATH, it follows that there exists an infinite input-consuming derivation of MATH. This proves REF . |
cs/0101023 | First, for each predicate symbol MATH, we define MATH to be the number of predicate symbols it depends on. More formally, MATH is defined as the cardinality of the set MATH. Clearly, MATH is always finite. Further, it is immediate to see that if MATH then MATH and that if MATH then MATH. We can now prove our theorem. By REF , it is sufficient to prove that for any nicely-moded one-atom query MATH, all input-consuming derivations of MATH are finite. First notice that if MATH is defined in MATH then the result follows immediately from the hypothesis that MATH is input terminating and that MATH is an extension of MATH. So we can assume that MATH is defined in MATH. For the purpose of deriving a contradiction, assume that MATH is an infinite input-consuming derivation of MATH such that MATH is defined in MATH. Then MATH where MATH is the input clause used in the first derivation step and MATH. Clearly, MATH has an infinite input-consuming derivation in MATH. By REF , for some MATH and for some substitution MATH, CASE: there exists an infinite input-consuming derivation of MATH of the form MATH CASE: there exists an infinite input-consuming derivation of MATH . Notice also that MATH is nicely-moded. Let now MATH. Note that MATH is an instance of a clause of MATH. We show that REF cannot hold. This is done by induction on MATH with respect to the ordering MATH defined by: MATH iff either MATH or MATH and MATH. Base. Let MATH (MATH is arbitrary). In this case, MATH does not depend on any predicate symbol of MATH, thus all the MATH as well as all the atoms occurring in its descendants in any input-consuming derivation are defined in MATH. The hypothesis that MATH is input terminating contradicts MATH above. Induction step. We distinguish two cases: CASE: MATH, CASE: MATH. In case MATH we have that MATH. So, MATH. In case MATH, from the hypothesis that MATH is quasi recurrent with respect to MATH, it follows that MATH.
Consider now the partial input-consuming derivation MATH. By REF and the fact that MATH is a moded level mapping, it follows that MATH. Therefore, MATH. In both cases, the contradiction follows by the inductive hypothesis. |
cs/0101023 | By definition, the NAME are finitely branching. The claim now follows by NAME 's Lemma. |
cs/0101023 | Immediate by REF of NAME. |
cs/0101023 | We show that there exists a moded level mapping MATH for MATH such that MATH is quasi recurrent with respect to MATH. Given an atom MATH, we denote with MATH an atom obtained from MATH by replacing the terms filling in its output positions with fresh distinct variables. Clearly, we have that MATH is simply-moded. Then we define the following moded level mapping for MATH: MATH . Notice that the level MATH of an atom MATH is independent of the terms filling in its output positions, that is, MATH is a moded level mapping. Moreover, since MATH is input terminating and MATH is simply-moded (in particular, it is nicely-moded), all the input-consuming derivations of MATH are finite. Therefore, by REF , MATH is defined (and finite), and thus MATH is defined (and finite) for every atom MATH. We now prove that MATH is quasi recurrent with respect to MATH. Let MATH be a clause of MATH and MATH be an instance of MATH (for some substitution MATH). We show that MATH. Let MATH. Hence, MATH where MATH is a sequence of fresh distinct variables. Consider a variant MATH of MATH variable disjoint from MATH. Let MATH be a renaming such that MATH. Clearly, MATH and MATH unify. Let MATH. By properties of substitutions (see CITE), since MATH consists of fresh variables, there exist two relevant mgu's MATH and MATH such that CASE: MATH, CASE: MATH. Since MATH, we can assume that MATH. Because of standardization apart, since MATH consists of fresh variables, MATH and thus MATH. Since MATH is a sequence of variables, we can also assume that MATH. Therefore MATH. Moreover, since MATH, we have that MATH is an input-consuming derivation step, that is, MATH is a descendant of MATH in an NAME for MATH. By definition of MATH, MATH; hence MATH . Let now MATH. By REF and the hypothesis that MATH is input-recursive, that is MATH, it follows that MATH . Moreover, since MATH is simply-moded, MATH. Hence, by definition of MATH and standardization apart, MATH, that is, MATH .
Therefore, by REF , MATH, that is, MATH . Hence, MATH . |
cs/0101024 | It suffices to prove that a certain inferior algorithm has expected profit MATH. The inferior algorithm is as follows: Use the solution to the secretary problem to select, from the first MATH input items, an item with the minimum expected final rank. Similarly, pick an item with maximum expected rank from the second MATH inputs. For simplicity, we initially assume that MATH is even; see comments at the end of the proof for odd MATH. Let MATH be the time step in which the low selection is made, and MATH the time step in which the high selection is made. Using the bounds from CITE, we can bound the expected profit of this inferior algorithm by MATH . CITE show that MATH, and so the expected profit of the inferior algorithm is at least MATH. For odd MATH, the derivation is almost identical, with only a change in the least significant term; specifically, the expected profit of the inferior algorithm for odd MATH is MATH, which again is at least MATH. |
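The "inferior algorithm" above applies the classical solution to the secretary problem on each half of the input. The exact thresholds and rank expressions are masked (MATH) in this row; purely as a hedged illustration of the underlying stopping rule, the classical strategy (observe roughly n/e items, then accept the first later item beating everything observed) can be sketched as follows. The name `secretary_select` is an assumption for illustration, not from the paper.

```python
import random

def secretary_select(values):
    """Classical secretary rule: observe the first ~n/e items, then
    accept the first later item beating all of them (the last item if
    nothing does). Returns the index of the selected item."""
    n = len(values)
    k = max(1, round(n / 2.718281828))  # length of the observation phase
    best_seen = max(values[:k])
    for i in range(k, n):
        if values[i] > best_seen:
            return i
    return n - 1                        # forced to accept the last item

# The rule picks the overall maximum with probability about 1/e.
random.seed(0)
trials = 2000
wins = 0
for _ in range(trials):
    vs = [random.random() for _ in range(100)]
    j = secretary_select(vs)
    wins += (vs[j] == max(vs))
print(wins / trials)  # empirically near 1/e
```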
cs/0101024 | This analysis is performed by examining the expected amortized profits for individual intervals. In particular, for any interval MATH, MATH . Since there are MATH intervals and the above analysis is independent of the interval number MATH, summing the amortized profit over all intervals gives the expected profit stated in the lemma. |
cs/0101024 | Let MATH be the random variable of the final rank of the MATH-th input item. Let MATH be the amortized cost for interval MATH as defined in REF. Since MATH is nonzero only when interval MATH is active, MATH . Therefore, MATH . Under what conditions is an interval active? If MATH, this interval is certainly active. If the algorithm was not in the holding state prior to this step, it would be after seeing input MATH. Similarly, if MATH, the algorithm must be in the free state during this interval, and so the interval is not active. Finally, if MATH, the state remains what it has been for interval MATH. Furthermore, since MATH must be odd for this case to be possible, MATH is even, and MATH cannot be MATH (and thus MATH unambiguously indicates whether interval MATH is active). In summary, determining whether interval MATH is active requires looking at only MATH and occasionally MATH. Since the expected amortized profit of REF depends on whether MATH is odd or even, we break the analysis up into these two cases below. CASE: Note that MATH, and MATH cannot be exactly MATH, which means that with probability MATH interval MATH is active. Furthermore, MATH is independent of whether interval MATH is active or not, and so MATH . CASE: Since interval REF cannot be active, we assume that MATH. We need to consider the case in which MATH, and so MATH . Computing the expected amortized cost of interval MATH is slightly more complex than in REF . MATH . Combining both cases, MATH where the first sum accounts for the odd terms of the original sum, and the second sum accounts for the even terms. When MATH is even, this sum becomes MATH which agrees with the claim in the lemma. When MATH is odd, the sum can be simplified as MATH which again agrees with the claim in the lemma. The simplified forms follow from the fact that for any odd MATH we can bound MATH. |
cs/0101024 | Since the move functions (which define specific algorithms) work on permutations, we will fix an ordering of permutations in order to compare strategies. We order permutations first by their size, and then by a lexicographic ordering of the actual permutations. When comparing two different algorithms MATH and MATH, we start enumerating permutations in this order and count how many permutations cause the same move in MATH and MATH, stopping at the first permutation MATH for which MATH, that is, the first permutation for which the algorithms make different moves. We call the number of permutations that produce identical moves in this comparison process the length of agreement between MATH and MATH. To prove the optimality of our algorithm by contradiction, we assume that it is not optimal, and of all the optimal algorithms let MATH be the algorithm with the longest possible length of agreement with our algorithm OP. Let MATH be the first permutation in which MATH. Since MATH is different from OP at this point, at least one of the following cases must hold: CASE: MATH and MATH and MATH (that is, REF is not in the holding state, gets a low-rank input, but does not make it a low selection). CASE: MATH and MATH and MATH (that is, REF is not in the holding state, gets a high-rank input, but makes it a low selection anyway). CASE: MATH and MATH and MATH (that is, REF is in the holding state, gets a high-rank input, but does not make it a high selection). CASE: MATH and MATH and MATH (that is, REF is in the holding state, gets a low-rank input, but makes it a high selection anyway). In each case, we will show how to transform algorithm MATH into a new algorithm MATH such that MATH performs at least as well as MATH, and the length of agreement between MATH and OP is longer than that between MATH and OP. This provides the contradiction that we need. CASE: Algorithm MATH's move function is identical to MATH's except for the following values: MATH . 
In other words, REF is the same as algorithm MATH except that we ``correct MATH's error'' of not having made this input a low selection. The changes of the moves on input MATH ensure that MATH is the same as MATH. It is easily verified that the new sets MATH (corresponding to the holding state) are identical to the sets MATH for all MATH. The only difference at MATH is the insertion of MATH, that is, MATH. Let MATH and MATH be the profits of MATH and MATH, respectively. Since their amortized costs differ only at interval MATH, MATH . By one of the conditions of REF , we can finish this derivation by noting that MATH . Therefore, the expected profit of REF is greater than that of MATH. CASE: As in REF , we select a move function for algorithm MATH that causes only one change in the sets of holding states, having algorithm MATH not make input MATH a low selection. In particular, these sets are identical to those of REF with the one exception that MATH. Analysis similar to REF shows MATH . CASE: In this case we select a move function for algorithm MATH such that MATH, resulting in REF selecting input MATH as a high selection, and giving an expected profit gain of MATH . CASE: In this case we select a move function for algorithm MATH such that MATH, resulting in REF not taking input MATH as a high selection, and giving an expected profit gain of MATH . In each case, we transformed algorithm MATH into a new algorithm MATH that performs at least as well (and hence must be optimal), and has a longer length of agreement with algorithm OP than MATH does. This directly contradicts our selection of MATH as the optimal algorithm with the longest length of agreement with OP, and this contradiction finishes the proof that algorithm OP is optimal. |
cs/0101024 | If we let MATH be a random variable denoting the rank of our given item from among the first MATH inputs, then we see that the value of MATH depends on the rank of the MATH-st input. In particular, if the rank of the MATH-st input is MATH (which happens with probability MATH), then the new rank of our given item will be MATH. On the other hand, if the rank of the MATH-st input is MATH (which happens with probability MATH), then the rank of our given item is still MATH among the first MATH inputs. Using this observation, we see that MATH which is what is claimed in the lemma. |
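The lemma above tracks how the rank of a fixed item evolves as inputs arrive in random order: the (k+1)-st input's relative rank is uniform on {1, ..., k+1}, so the item's rank increases by one exactly when the newcomer lands at or ahead of its position. The exact probabilities are masked in this row; the following Monte Carlo sketch checks the standard form of that observation (the function name is illustrative, not from the paper).

```python
import random

def rank_step(r, k, rng):
    """One step of the rank evolution: given an item of rank r among the
    first k inputs, draw the (k+1)-st input's relative rank uniformly
    from {1, ..., k+1}; if it lands at or ahead of the item's position
    (relative rank <= r), the item's rank increases by one."""
    new_rank = rng.randint(1, k + 1)
    return r + 1 if new_rank <= r else r

rng = random.Random(1)
r, k, trials = 4, 10, 200_000
bumps = sum(rank_step(r, k, rng) == r + 1 for _ in range(trials))
print(bumps / trials)  # empirically close to r/(k+1) = 4/11
```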
cs/0101025 | By REF , MATH . |
cs/0101025 | We prove that MATH . The result then follows from REF . Suppose MATH. Then, MATH and MATH. By the hypothesis, MATH. Let MATH. Then, by REF , we have MATH . Thus MATH. |
cs/0101025 | If MATH, the statement is obvious from the definition of MATH. In the other cases, the proof is by induction on the size of MATH. The inductive step, when MATH has more than one binding, is straightforward. For the base case, when MATH, we have to show that MATH . The result then follows from REF . Let MATH, MATH, and MATH. Suppose MATH . Then, by definition of MATH, MATH . There are two cases: CASE: MATH. Then, by hypothesis, MATH. Hence we have MATH. Thus, by REF , MATH . CASE: MATH. Then we must have MATH where MATH and MATH. The proof here splits into two branches, REFa and REFb, depending on whether MATH or MATH. CASE: We first assume that MATH. Then, by REF , we have that MATH and MATH. Hence, MATH . Combining REF and case REFa, we obtain MATH . Hence, as MATH is extensive and monotonic, MATH; thus, when MATH, MATH. CASE: Suppose instead that MATH. In this case, we have, by REF : MATH . Thus, by the hypothesis, MATH . Therefore we can write MATH where MATH . Thus MATH . Combining REF and case REFb for MATH, the result follows immediately by the monotonicity and extensivity of MATH. |
cs/0101025 | This is a classical property of upper closure operators CITE. |
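The row above appeals to a classical property of upper closure operators: monotone, extensive, idempotent self-maps on a complete lattice. The paper's abstract domain is masked; purely as an illustration of the definition, the interval closure on finite sets of integers is an upper closure operator on the powerset lattice, and the three axioms can be checked exhaustively on a small universe.

```python
from itertools import combinations

def rho(s):
    """Interval closure: map a finite set of ints to the smallest
    integer interval containing it. This is an upper closure operator
    on the powerset lattice ordered by inclusion."""
    s = frozenset(s)
    if not s:
        return s
    return frozenset(range(min(s), max(s) + 1))

universe = range(4)
subsets = [frozenset(c) for r in range(5) for c in combinations(universe, r)]

# extensivity: s <= rho(s); idempotence: rho(rho(s)) == rho(s)
assert all(s <= rho(s) and rho(rho(s)) == rho(s) for s in subsets)
# monotonicity: s <= t implies rho(s) <= rho(t)
assert all(rho(s) <= rho(t) for s in subsets for t in subsets if s <= t)
print("rho satisfies the three upper-closure-operator axioms")
```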
cs/0101025 | We show that MATH . The result then follows from REF . Suppose MATH and MATH. Then, as MATH is monotonic, we have MATH. We distinguish two cases. CASE: There exists MATH such that MATH. Then MATH and hence, by REF , MATH. CASE: Otherwise, by definition of MATH and REF , there exists MATH such that MATH and MATH and thus MATH. |
cs/0101025 | REF follow from REF , respectively. |
cs/0101025 | If MATH, so that MATH, the statement can be easily verified after having observed that MATH. Otherwise, if MATH, we proceed by induction on MATH. For the base case, let MATH. Then MATH . For the inductive step, let MATH and let MATH . By definition of MATH we have MATH . |
cs/0101025 | We assume that MATH. (If such a MATH does not exist we simply swap MATH and MATH.) Let MATH denote a ground term and let MATH . Then, by REF , for MATH, MATH, we define MATH where MATH . Now, if MATH and MATH, then we have MATH. Hence MATH and we can easily observe that MATH but MATH. On the other hand, if MATH and MATH, then by REF there exists MATH with MATH such that MATH . Let MATH. We have MATH and thus we can observe that MATH but MATH. |
cs/0101025 | We prove the two inclusions separately. CASE: MATH. Let MATH be in the right-hand side. If MATH there is nothing to prove. Therefore we assume MATH. We need to prove that if MATH and MATH then MATH or MATH. Obviously, we have MATH and MATH. Moreover, by definition of MATH, there exist MATH where MATH and MATH such that MATH . Since MATH, we have MATH or MATH. Let us consider the first case (the other is symmetric). Then, applying the definition of MATH, there is a MATH with MATH such that MATH . Since MATH, there exists MATH such that MATH. Thus MATH and MATH. Hence, as MATH, we have MATH. Consider an arbitrary MATH where MATH. Then MATH. Thus, since MATH and MATH, MATH. Thus, as this is true for all such MATH, MATH. CASE: MATH. Let MATH. We need to show that MATH is the meet of elements in the right-hand side. If MATH then there is nothing to prove. Suppose MATH. For each MATH such that MATH, we will show there is an element MATH in the right-hand side such that MATH and MATH. Then MATH. There are two cases. CASE: MATH; Let MATH. Then MATH and MATH. CASE: MATH; in this case, applying the definition of MATH, there must exist a set MATH with MATH such that MATH . However, since MATH, we have MATH. Thus, for some MATH, if MATH is such that MATH then MATH. Choose MATH so that MATH and MATH and let MATH. Then MATH, MATH, and MATH. |
cs/0101026 | Let us first show MATH . Local commutativity implies for each MATH, MATH . Therefore MATH. Now, let us show MATH . Using REF, we have MATH, hence MATH. Using REF again concludes the proof. Let us return to the general case. Obviously, it is sufficient to check REF for sites MATH. Clearly, MATH. The latter is MATH according to REF. |
cs/0101026 | Let MATH, MATH, and MATH. This defines MATH and MATH from initial configuration MATH by MATH as usual. By monotonicity, MATH and MATH, so MATH's values satisfy MATH which is REF if MATH and REF otherwise. The same value is obtained for MATH. By invariant histories, there is a MATH such that MATH and MATH . Thus, MATH is commutative. |
cs/0101026 | By REF , MATH. Therefore MATH implies that MATH is monotonic. |
cs/0101026 | Let MATH be a commutative transition rule. It remains to prove that it has invariant histories. Let MATH be an asynchronous trajectory and MATH. Then there is an asynchronous trajectory MATH dominating MATH with initial configuration MATH, such that MATH. Let MATH. We show how to build, for each MATH, a trajectory MATH with the given properties that dominates MATH up to time MATH. When MATH then MATH will converge to a trajectory with the same properties that dominates MATH. For MATH we can choose MATH. We assume that MATH can be constructed for all MATH and prove it for MATH. Let MATH, and MATH. Let the trajectory MATH be defined by MATH. The inductive assumption gives a trajectory MATH with initial configuration MATH dominating MATH, with MATH . Using this trajectory, we define, for MATH: MATH is an asynchronous trajectory. Let us show that MATH satisfies REF. This holds by definition for MATH and MATH. Let us show that it also holds for MATH with MATH. We have MATH . For domination, we must check two properties. We have MATH. By the definition of MATH, for MATH, MATH . By the definition of MATH, for MATH, using REF, we have MATH . Further, MATH . By the above definition, MATH . Also, from here and REF, MATH . By domination, MATH and hence for all MATH, we have, combining REF with REF, MATH . If MATH then MATH. If MATH then clearly MATH since this means that in both processes, no progress has been made in MATH from the initial configuration. Assume therefore that MATH and hence MATH. Assume MATH. Then MATH and hence MATH. If MATH then MATH and hence the same transition that gives MATH also gives MATH. Otherwise MATH hence MATH. Also, MATH, hence MATH. The inductive assumption implies MATH. On the other hand, REF and MATH implies MATH which concludes this case. Assume now MATH. Since MATH changes if and only if MATH does we can assume that MATH since otherwise we can decrease MATH without changing MATH. The same is true for MATH. 
Under these assumptions we have MATH. By REF, MATH . We assumed this to be equal to MATH. Hence MATH. Also MATH, MATH, and hence the inductive assumption implies the statement. Let MATH be a trajectory. Then the synchronous trajectory with initial configuration MATH dominates MATH. Let MATH. By REF above, there is a trajectory MATH with initial configuration MATH dominating MATH such that MATH. This just means that MATH is a synchronous trajectory up to time REF. Continuing the application of REF, we can dominate MATH by a synchronous trajectory MATH up to time REF, etc. Now we can conclude the proof of the theorem as follows. Let MATH be a trajectory with initial configuration MATH and let MATH be the synchronous trajectory with the same initial configuration. Let us define MATH . To prove REF, note that due to domination, MATH and hence for every MATH there is a MATH with MATH. Let MATH be the first such: MATH. By domination, MATH. |
cs/0101026 | Let MATH. The three components of each state MATH of MATH will be written as MATH . The statement of the theorem is obtained by MATH, MATH. The field MATH will be used to keep track of the time of the simulated cells mod REF, while MATH holds the value of MATH for the previous value of MATH. Let us define MATH. If there is a MATH such that MATH (that is, some neighbor lags behind), then MATH, that is, there is no effect. Otherwise, let MATH be MATH if MATH, and MATH otherwise. MATH . Thus, we use the MATH and MATH fields of the neighbors according to their meaning and update the three fields according to their meaning. It is easy to check that this transition rule simulates MATH in the MATH field if we start it by putting REF into all other fields. Let us check that MATH is locally commutative. If two neighbors MATH are both allowed to update, then neither of them is behind the other modulo REF; hence they both have the same MATH field. Suppose that MATH updates before MATH. In this case, MATH will use the MATH field of MATH for updating and put its own MATH field into MATH. Next, since now MATH is ``ahead'' according to MATH, cell MATH will use the MATH field of MATH for updating: this was the MATH field before. Therefore the effect of consecutive updating is the same as that of simultaneous updating. |
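The construction above equips each simulated cell with its state, a local clock (stored modulo a small masked constant), and the previous state; a cell fires only when no neighbor lags behind it, and a neighbor that is ahead contributes its saved previous state. The following is a hedged sketch of this standard "marching soldiers" mechanism, with full integer clocks for readability and a hypothetical demonstration rule (elementary rule 90); it is not the exact transition function of the proof, whose details are masked.

```python
import random

def sync_run(f, init, t):
    """Reference synchronous trajectory of rule f (periodic boundary)."""
    s = list(init)
    for _ in range(t):
        s = [f(s[i - 1], s[i], s[(i + 1) % len(s)]) for i in range(len(s))]
    return s

def async_run(f, init, steps, rng):
    """Asynchronous simulation in the spirit of the construction above:
    each cell keeps (state, clock, prev). A scheduled cell updates only
    if no neighbor's clock lags behind its own; a neighbor that is
    ahead contributes its saved previous state. (The paper stores the
    clock mod a small constant; full integers are kept here.)"""
    n = len(init)
    state, clock, prev = list(init), [0] * n, list(init)
    for _ in range(steps):
        i = rng.randrange(n)                 # arbitrary scheduler
        l, r = (i - 1) % n, (i + 1) % n
        if clock[l] < clock[i] or clock[r] < clock[i]:
            continue                         # a neighbor lags: no effect
        left = prev[l] if clock[l] > clock[i] else state[l]
        right = prev[r] if clock[r] > clock[i] else state[r]
        prev[i], state[i] = state[i], f(left, state[i], right)
        clock[i] += 1
    return state, clock

rule90 = lambda a, b, c: a ^ c               # demonstration rule only
init = [1, 0, 0, 1, 0, 1, 1, 0]
state, clock = async_run(rule90, init, 5000, random.Random(42))
# every cell's state equals the synchronous value at its own local time
assert all(state[i] == sync_run(rule90, init, clock[i])[i]
           for i in range(len(init)))
print("asynchronous schedule reproduced the synchronous trajectory")
```

The lag check keeps neighboring clocks within one of each other, which is why storing them modulo a small constant suffices in the construction.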
cs/0101026 | There is a standard construction for simulating NAME machines with such cellular automata, so the question reduces to whether an arbitrary NAME machine halts when started on an empty tape. |
cs/0101026 | From now on, without danger of confusion, let us write MATH and forget about MATH. Let us be given a cellular automaton MATH as in REF , with state set MATH. We construct a new cellular automaton over the set of states MATH, with the following transition function MATH. Over states MATH, the functions MATH behave as MATH. Further, we have the following rules for MATH when at least one of the arguments is MATH. MATH and MATH, MATH in all remaining cases. By these rules, the symbol MATH ``sweeps'' right and in its wake the rule MATH will operate as if it had started from a configuration of all REF's. Thus, let MATH be the synchronous trajectory of MATH with MATH for all MATH. Then clearly if MATH is any synchronous trajectory of MATH with MATH then for all MATH, for all MATH we have MATH. Let us now apply the construction of the proof of REF to MATH to obtain a commutative rule MATH over the set of states MATH. We will prove that MATH has an asynchronous trajectory MATH with MATH and MATH for some MATH, if and only if MATH has a synchronous trajectory MATH with MATH for all MATH and MATH for some MATH. Since we know that the question whether this happens is undecidable from MATH, we will have proved that the question whether some cellular automaton has an asynchronous trajectory MATH with MATH and MATH for some MATH is undecidable; this will complete the proof. The ``if'' part: Suppose first that MATH has a synchronous trajectory MATH with MATH for all MATH, and MATH for some MATH. As mentioned above, then the synchronous trajectory MATH of MATH has MATH for all MATH. Consider the synchronous trajectory MATH of MATH started from MATH for all MATH. Then for all MATH and all MATH we have MATH . Let MATH be the first number MATH divisible by REF. We have MATH . The ``only if'' part: Assume that MATH is an asynchronous trajectory of MATH with MATH and MATH for some MATH. Then MATH and defining MATH, REF implies MATH . 
Applying the theorem repeatedly, we obtain MATH or, if MATH, the same relation with the first argument of MATH omitted, for MATH and MATH. Now, if MATH then MATH while MATH. We have just found that MATH develops according to MATH for MATH and MATH. As discussed above, it follows that MATH if and only if MATH computes REF at MATH from an all-REF initial configuration. |
cs/0101026 | Let the local state space be the set of integers MATH. Let MATH and MATH be the rules for a commutative cellular automaton transition rule with state set MATH. We define the transition function MATH. We will write MATH as MATH. We require MATH and MATH in all remaining cases. Let us show that MATH has invariant histories if and only if MATH has no asynchronous trajectory MATH over MATH with MATH and MATH for some MATH. Assume first that MATH has such a trajectory. Let us define the initial configuration MATH of MATH as MATH if MATH and REF otherwise. We may apply REF first to get MATH. Or, we may apply REF first to cells MATH on the right repeatedly. Sooner or later we have MATH, which allows MATH by REF in the next step. Thus, depending on the order of rule application, we obtained in cell MATH the sequence MATH or MATH. Suppose now that MATH has no such trajectory and let MATH be an arbitrary configuration of MATH. Each occurrence of a state MATH remains such an occurrence. On segments between them, the commutative rule MATH works. The only other transitions possible are MATH and MATH. Assume MATH and consider the sequence of different values in MATH. Let us show that REF cannot both occur in this sequence and hence only one of the transitions is possible. Indeed, if REF occurs before REF then our assumption about MATH excludes the occurrence of REF in the sequence any later. If REF occurs in the sequence before REF then our rules (in particular, REF) do not allow any change of the state of MATH after that. |
cs/0101028 | Straightforward. |
cs/0101028 | See REF. |
cs/0101028 | First, we inductively define the sequence MATH and a sequence of MATH-dimensional vectors MATH. Then, we prove the inequality claimed in the lemma. We first define MATH and MATH by looking at the first phase of MATH. Assume that at time MATH, a robot starts moving towards the origin on path MATH. Let MATH be the distance that the robot has searched on path MATH. Define MATH . Once MATH is defined, we define MATH and MATH by looking at phase MATH of MATH. Recall that phase MATH starts at time MATH and ends at time MATH. Assume that path MATH is the unique path that is not searched in phase MATH but is searched in phase MATH. Also assume that at time MATH, a robot starts moving towards the origin on path MATH. Let MATH be the distance that the robot has searched on path MATH. For MATH, let MATH, that is, we switch the MATH-th entry and the MATH-th entry of MATH to obtain MATH. Let MATH. Since MATH has a finite competitive ratio, MATH is an infinite sequence. For every MATH, MATH is the time when a robot starts moving towards the origin on some path MATH. Let MATH be the index such that phase MATH is the first phase in which path MATH is searched again after MATH. Such MATH exists and is finite. For all MATH, MATH and MATH for MATH. To prove this claim, assume that MATH, that is, a robot starts moving towards the origin on path MATH immediately after MATH. In phase MATH, that robot moves back on path MATH to the origin and then searches another path MATH with MATH. Then, some robot starts moving towards the origin on path MATH with MATH right after MATH. Since MATH and MATH, by the definition of MATH, MATH . By the choice of MATH, path MATH must be idle from MATH to MATH. Hence, MATH . Moreover, MATH. Thus, from REF , MATH . By the choice of MATH, path MATH is reused in phase MATH. Assume that a robot searches path MATH immediately after MATH. By the inductive procedure for defining MATH and MATH, MATH . Combining REF , we have MATH. 
This and REF conclude the proof of REF . MATH is a MATH-sequence. To prove this claim, the only nontrivial property of the sequence that we need to verify is that there are at least MATH integers MATH such that MATH. By REF , it suffices to prove that there exist MATH integers MATH such that MATH . Without loss of generality, we label the MATH paths in such a way that CASE: the label of the path where a robot starts moving towards the origin immediately after MATH is REF; CASE: paths MATH are not searched before MATH; CASE: MATH where MATH is the first phase in which path MATH is searched. By the assumption on MATH and the definition of MATH, MATH . For MATH, let MATH be the label of the path where a robot starts moving towards the origin immediately after MATH. By the definitions of MATH and MATH, MATH . Therefore, MATH . By this and REF , at least MATH integers MATH satisfy REF , finishing the proof of REF . Continuing the proof of REF , for each MATH, let MATH be the path where a robot starts moving towards the origin right after MATH. Let MATH be the first time when path MATH is searched for exactly distance MATH in phase MATH. By the properties stated in REF , the MATH robots stand at different paths at time MATH. Let MATH be the distances that the robots except the one on path MATH are from the origin at time MATH. Let MATH. There are two cases. CASE: MATH. If the goal is on path MATH at distance MATH, then when MATH finds the goal, its performance ratio is at least MATH . CASE: MATH. Let MATH be a path that has been searched up to distance MATH at time MATH. If the goal is on path MATH at distance MATH, then when MATH finds the goal, its performance ratio is at least MATH . Since MATH has a finite competitive ratio, MATH . Therefore, by REF , MATH . This and REF conclude the proof of REF . |
cs/0101028 | The smallest competitive ratio of our exploration algorithm is at most MATH . At MATH, this expression assumes its minimum MATH . |
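The expression and its minimizer in the row above are masked. For context only, the classical single-robot m-ray (cow-path) bound has the same shape: a geometric strategy with step ratio r > 1 achieves competitive ratio 1 + 2 r^m/(r - 1), minimized at r = m/(m - 1) with value 1 + 2 m^m/(m - 1)^(m-1). The sketch below computes that classical bound, not the paper's (masked) multi-robot expression.

```python
def ratio(r, m):
    """Competitive ratio of geometric m-ray search with step ratio r > 1
    (classical single-robot bound)."""
    return 1 + 2 * r**m / (r - 1)

def optimum(m):
    """Closed-form minimizer r* = m/(m-1) and the minimal ratio."""
    return m / (m - 1), 1 + 2 * m**m / (m - 1)**(m - 1)

m = 3
r_star, best = optimum(m)
# a brute-force sweep over r agrees with the closed form
sweep = min(ratio(1 + k / 10_000, m) for k in range(1, 40_000))
assert abs(sweep - best) < 1e-3
print(m, r_star, best)
```

For m = 2 this recovers the familiar competitive ratio 9 of searching on a line (two rays).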
cs/0101028 | By REF , we only need to prove MATH . Let MATH, by REF , which is a competitive ratio of a randomized exploration algorithm with one robot and MATH paths. Let MATH, by REF , which is the smallest competitive ratio of deterministic algorithms also for one robot and MATH paths. Since MATH CITE, to prove REF , it suffices to show that MATH, which is equivalent to MATH. |
cs/0101028 | By NAME 's formulation of NAME 's minimax principle CITE, it suffices to lower bound the competitive ratios of all deterministic algorithms against a chosen probability distribution. The proof idea is to show that an optimal algorithm against the distribution used by CITE for REF can be modified to be cyclic. |
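The row above invokes Yao's minimax principle: the worst-case expected cost of any randomized algorithm is at least the expected cost, under any fixed input distribution, of the best deterministic algorithm against that distribution. The paper's distribution and cost structure are masked; the following toy numeric check uses a hypothetical 3x3 cost matrix purely to illustrate the inequality.

```python
import random

# cost[a][x] = cost of deterministic algorithm a on input x (toy data)
cost = [[3, 1, 4],
        [1, 5, 9],
        [2, 6, 5]]
dist = [0.2, 0.5, 0.3]                 # a fixed input distribution

# Yao bound: expected cost of the best deterministic algorithm on dist
lower = min(sum(p * row[x] for x, p in enumerate(dist)) for row in cost)

# no randomized algorithm (a mixture q over the rows) can have a
# worst-case expected cost below this bound
rng = random.Random(0)
for _ in range(1000):
    q = [rng.random() for _ in cost]
    z = sum(q)
    q = [v / z for v in q]
    worst = max(sum(q[a] * cost[a][x] for a in range(len(cost)))
                for x in range(len(dist)))
    assert worst >= lower - 1e-9
print("lower bound from the chosen distribution:", lower)
```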
cs/0101028 | By the definition of infimum, for all MATH, there exists MATH such that MATH . Since MATH is in MATH, MATH . Since MATH and MATH for all MATH, MATH . |