Title: Some Questions of Uniformity in Algorithmic Randomness
URL Source: https://arxiv.org/html/2111.01472
Some Questions of Uniformity in Algorithmic Randomness

Laurent Bienvenu, Barbara F. Csima, Matthew Harrison-Trainor

Abstract
The Ω numbers (the halting probabilities of universal prefix-free machines) are known to be exactly the Martin-Löf random left-c.e. reals. We show that one cannot uniformly produce, from a Martin-Löf random left-c.e. real α, a universal prefix-free machine U whose halting probability is α. We also answer a question of Barmpalias and Lewis-Pye by showing that given a left-c.e. real α, one cannot uniformly produce a left-c.e. real β such that α − β is neither left-c.e. nor right-c.e.
1 Introduction
Prefix-free Kolmogorov complexity, which is perhaps the most prominent version of Kolmogorov complexity in the study of algorithmic randomness, is defined via prefix-free machines: a prefix-free machine is a partial computable function M : 2^{<ω} → 2^{<ω} (2^{<ω} being the set of finite binary strings) such that no two distinct elements of dom(M) are comparable under the prefix relation. The prefix-free Kolmogorov complexity of x ∈ 2^{<ω} relative to the machine M is defined to be the quantity K_M(x) = min{|p| : M(p) = x}. To get a machine-independent notion of Kolmogorov complexity, one needs to take an optimal prefix-free machine, that is, a prefix-free machine V such that for any prefix-free machine M, one has K_V ≤ K_M + c_M for some constant c_M which depends solely on M. Then one defines the prefix-free Kolmogorov complexity K by setting K = K_V. The resulting function K only depends on the choice of V up to an additive constant, because by definition, if V and V′ are optimal machines, then |K_V − K_{V′}| = O(1). To be complete, one needs to make sure optimal machines exist. One way to build one is to take a total computable function i ↦ σ_i from ℕ to 2^{<ω} whose range is prefix-free (for example, σ_i = 0^i 1), and set U(σ_i p) = M_i(p), where (M_i) is an effective enumeration of all prefix-free machines. It is easy to see that U is prefix-free and that for all i, K_U ≤ K_{M_i} + |σ_i|, hence U is optimal. Machines U of this type are called universal by adjunction, and they form a strict subclass of optimal prefix-free machines.¹

¹For example, given a universal prefix-free machine U, we can construct an optimal prefix-free machine V, which is not universal by adjunction, by defining, for p ∈ dom(U), V(p0) = V(p1) = U(p) if |p| is odd, and V(p) = U(p) if |p| is even. This is well-defined because U is prefix-free, and the fact that U is prefix-free and optimal implies that V is. V is not universal by adjunction; one can see this for example by noting that every string in the domain of V is of even length, but this is not true of any machine that is universal by adjunction. See, for example, [CNSS11, CS09].
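The adjunction construction is easy to sketch concretely. The following Python sketch is illustrative only: the enumeration `machines` and its toy machine bodies are hypothetical stand-ins for an effective enumeration (M_i), and σ_i = 0^i 1 is the prefix-free coding from the text.

```python
# Sketch of a machine U that is universal by adjunction:
# U(0^i 1 p) = M_i(p), where sigma_i = 0^i 1 is a prefix-free coding of i.

def make_universal(machines):
    """machines: i -> a (partial) prefix-free machine, modeled as a Python
    function returning a string or None (None = divergence). This stands in
    for an effective enumeration (M_i)."""
    def U(program: str):
        # Split the program as sigma_i p with sigma_i = 0^i 1.
        i = 0
        while i < len(program) and program[i] == '0':
            i += 1
        if i == len(program):
            return None          # no '1' found: sigma_i incomplete, U diverges
        p = program[i + 1:]      # the part after the prefix 0^i 1
        return machines(i)(p)    # simulate M_i on p
    return U

# Toy enumeration (purely illustrative): M_0 halts only on the program "1";
# every other M_i diverges everywhere.
def machines(i):
    if i == 0:
        return lambda p: "hello" if p == "1" else None
    return lambda p: None

U = make_universal(machines)
print(U("11"))    # sigma_0 = "1", so this simulates M_0("1") -> hello
print(U("000"))   # no terminating "1" in the coding prefix -> None
```

Since every M_i-program p is reachable as σ_i p, this U satisfies K_U ≤ K_{M_i} + |σ_i| for every i, which is the optimality bound stated above.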
Remark.
Often no distinction is made between optimal prefix-free machines and universal prefix-free machines. E.g., in [Nie09] it is said that optimal prefix-free machines are often called universal prefix-free machines. In this paper, the distinction will be important. An optimal prefix-free machine is a prefix-free machine V such that for every prefix-free machine M, there is a constant c_M such that K_V ≤ K_M + c_M. A universal prefix-free machine is one that is universal by adjunction. Thus every universal machine is optimal, but the converse is not true. Every machine in this paper will be prefix-free, and so we often omit the term "prefix-free".
1.1 Omega Numbers
Given a prefix-free machine M, one can consider the "halting probability" of M, defined by

Ω_M = ∑_{M(p)↓} 2^(−|p|).

The term "halting probability" is justified by the following observation: a prefix-free machine M can be naturally extended to a partial functional from 2^ω, the set of infinite binary sequences, to 2^{<ω}, where for X ∈ 2^ω, M(X) is defined to be M(p) if some p ∈ dom(M) is a prefix of X, and M(X)↑ otherwise. The prefix-freeness of M on finite strings ensures that this extension is well-defined. With this point of view, Ω_M is simply μ{X ∈ 2^ω : M(X)↓}, where μ is the uniform probability measure (a.k.a. Lebesgue measure) on 2^ω, that is, the measure where each bit of X is equal to 0 with probability 1/2 independently of all other bits.

For any machine M, the number Ω_M is fairly simple from a computability-theoretic viewpoint, namely, it is the limit of a computable non-decreasing sequence of rationals (this is easy to see, because Ω_M is the limit of Ω_{M,s} = ∑_{M(p)[s]↓} 2^(−|p|)). We call such a real left-c.e. It turns out that every left-c.e. real α ∈ [0,1] can be represented in this way, i.e., for any left-c.e. α ∈ [0,1], there exists a prefix-free machine M such that α = Ω_M, as a consequence of the Kraft-Chaitin theorem (see [DH10, Theorem 3.6.1]).
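To make the left-c.e. approximation concrete, here is a small sketch for a toy machine whose domain is the prefix-free set {0, 10, 110}; the stages at which each program halts are hypothetical, chosen only so the sums are easy to check.

```python
from fractions import Fraction

# Toy prefix-free machine: programs mapped to the (hypothetical) stage at
# which they halt. The domain {"0", "10", "110"} is prefix-free.
halting = {"0": 1, "10": 3, "110": 2}

def omega_approx(s):
    """Omega_{M,s}: the total weight 2^{-|p|} of programs halting by stage s."""
    return sum(Fraction(1, 2 ** len(p)) for p, t in halting.items() if t <= s)

# The approximations form a computable non-decreasing sequence of rationals:
# 0, 1/2, 5/8, 7/8, 7/8, ... converging to Omega_M = 7/8.
approx = [omega_approx(s) for s in range(5)]
print(approx)
```

Exact rationals (`Fraction`) are used so that the monotone convergence to Ω_M is visible without floating-point noise.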
One of the first major results in algorithmic randomness was Chaitin's theorem [Cha75] that the halting probability Ω_V of an optimal machine V is always an algorithmically random real, in the sense of Martin-Löf (for background on Martin-Löf randomness, one can consult [DH10, Nie09]). From here on we simply call a real random if it is random in the sense of Martin-Löf.
This is particularly interesting because it gives "concrete" examples of Martin-Löf random reals, which furthermore are, as we just saw, left-c.e. Whether the converse is true, that is, whether every random left-c.e. real α ∈ [0,1] is equal to Ω_V for some optimal machine V, remained open for a long time. The answer turned out to be positive, a remarkable result with a no less remarkable history. Shortly after the work of Chaitin, Solovay [Sol75] introduced a preorder on left-c.e. reals, which we now call Solovay reducibility: for α, β left-c.e., we say that α is Solovay-reducible to β, written α ⪯_S β, if for some positive integer c, cβ − α is left-c.e. (In fact Solovay gave a more intuitive definition, which in substance states that computable approximations of β from below converge more slowly than computable approximations of α from below, but the version we give is equivalent to Solovay's original definition and easier to manipulate.) Solovay showed that reals of the form Ω_V for optimal V are maximal with respect to Solovay reducibility. While this did not fully settle the above question, Solovay reducibility turned out to be the pivotal notion towards its solution. Together with Solovay's result, subsequent work led to the following theorem.
Theorem 1.1.
For α ∈ [0,1] left-c.e., the following are equivalent.

(a) α is Martin-Löf random.

(b) α = Ω_U for some optimal machine U.

(c) α is maximal w.r.t. Solovay reducibility.

The implication (b) ⇒ (a) is Chaitin's result and the implication (b) ⇒ (c) is Solovay's, as discussed above. Calude, Hertling, Khoussainov, and Wang [CHKW01] showed (c) ⇒ (b), and the last crucial step (a) ⇒ (c) was made by Kučera and Slaman [KS01]. We refer the reader to the survey [BS12] for an exposition of this result.
Summing up what we know so far, we have for any real α ∈ [0,1]:

α is left-c.e. ⟺ α = Ω_M for some machine M

α is left-c.e. and random ⟺ α = Ω_U for some optimal machine U
The first equivalence is uniform: given a prefix-free machine M (represented by its index in an effective enumeration of all prefix-free machines), we can pass in a uniform way to a left-c.e. index for Ω_M; and moreover, given a left-c.e. index for a left-c.e. real α ∈ [0,1], we can pass uniformly to an index for a prefix-free machine M with Ω_M = α (a consequence of the so-called Kraft-Chaitin theorem, see [DH10, Theorem 3.6.1]). By a left-c.e. index, we mean an index for a computable non-decreasing sequence of rationals converging to the real in question.
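The direction from a left-c.e. index to a machine rests on the Kraft-Chaitin theorem: it suffices to enumerate requested code lengths ℓ with ∑ 2^(−ℓ) ≤ 1, and the theorem supplies a prefix-free machine whose domain carries exactly those weights. The sketch below shows only the bookkeeping half of this, decomposing the increments α_s − α_{s−1} into dyadic weights; the assignment of actual prefix-free code words is what the theorem itself provides, and the input sequence is a hypothetical example.

```python
from fractions import Fraction

def increments_to_requests(alphas, precision=32):
    """Turn a non-decreasing sequence of rationals in [0,1] into a list of
    code lengths l with sum of 2^{-l} equal to the total increase (up to the
    chosen dyadic precision). This is the weight bookkeeping behind the
    Kraft-Chaitin theorem, which converts such requests into a prefix-free
    machine realizing exactly these weights."""
    requests = []
    prev = Fraction(0)
    for a in alphas:
        inc = a - prev
        prev = a
        # Greedy dyadic decomposition of the increment.
        for l in range(1, precision + 1):
            w = Fraction(1, 2 ** l)
            while inc >= w:
                requests.append(l)
                inc -= w
    return requests

reqs = increments_to_requests([Fraction(1, 2), Fraction(3, 4), Fraction(7, 8)])
total = sum(Fraction(1, 2 ** l) for l in reqs)
print(reqs, total)   # requests [1, 2, 3], total weight 7/8 <= 1 (Kraft's inequality)
```

Because the input sequence is non-decreasing, each increment is non-negative and the total requested weight equals the limit of the sequence, so Kraft's inequality holds whenever that limit is at most 1.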
It was previously open however (see for example [Bar18, p.11]) whether the second equivalence was uniform, that is: given an index for a random left-c.e. α ∈ [0,1], can we uniformly obtain an index for an optimal machine U such that α = Ω_U? Our first main result is a negative answer to this question.
Theorem 1.2.
There is no partial computable function h such that if e is an index for a Martin-Löf random left-c.e. real α ∈ [0,1], then the value of h(e) is defined and is an index for an optimal prefix-free machine M_{h(e)} with halting probability α.

Thus one cannot uniformly view a Martin-Löf random left-c.e. real as an Ω number.
On the other hand, we show that given a left-c.e. random α ∈ [0,1], one can uniformly find a universal left-c.e. semi-measure m with ∑_n m(n) = α. An interesting corollary is that one cannot uniformly turn a universal left-c.e. semi-measure m into a universal machine whose halting probability is ∑_n m(n).
1.2 Differences of left-c.e. reals
The set of left-c.e. reals is closed under addition and multiplication, but not under subtraction or inverse. However, the set {α − β : α, β left-c.e.} of differences of two left-c.e. reals is algebraically much better behaved; namely, it is a real closed field [ASWZ00, Rai05, Ng06]. Barmpalias and Lewis-Pye proved the following theorem.
Theorem 1.3 (Barmpalias and Lewis-Pye [BLP17]).
If α is a non-computable left-c.e. real, there exists a left-c.e. real β such that α − β is neither left-c.e. nor right-c.e.

The proof is non-uniform, and considers two separate cases depending on whether or not α is Martin-Löf random (though it is uniform in each of these cases). Barmpalias and Lewis-Pye ask whether there is a uniform construction; we show that the answer is negative.
Theorem 1.4.
There is no partial computable function h such that if e is an index for a non-computable left-c.e. real α, then h(e) is defined and is an index for a left-c.e. real β such that α − β is neither left-c.e. nor right-c.e.

Barmpalias and Lewis-Pye note that it follows from [DHN02, Theorem 3.5] that if α and β are left-c.e. reals and α is Martin-Löf random while β is not, then α − β is a Martin-Löf random left-c.e. real. In particular, if α in Theorem 1.3 is Martin-Löf random, then the corresponding β must be Martin-Löf random as well. Thus α and β are the halting probabilities of universal machines.
Theorem 1.5 (Barmpalias and Lewis-Pye [BLP17]).
For every universal machine U, there is a universal machine V such that Ω_U − Ω_V is neither left-c.e. nor right-c.e.

Recall that the construction for Theorem 1.3 was uniform in the Martin-Löf random case. So it is not too surprising that Theorem 1.5 is uniform; but because we cannot pass uniformly from an arbitrary Martin-Löf random left-c.e. real to a universal machine (Theorem 1.2), this requires a new proof.
Theorem 1.6.
Theorem 1.5 is uniform in the sense that there is a total computable function h such that if U = M_e is an optimal (respectively, universal by adjunction) machine, then V = M_{h(e)} is optimal (respectively, universal by adjunction) and Ω_U − Ω_V is neither left-c.e. nor right-c.e.
2 Omega Numbers

2.1 No uniform construction of universal machines
We prove Theorem 1.2:
Theorem 1.2.
There is no partial computable function h such that if e is an index for a random left-c.e. real α ∈ [0,1], then h(e) is defined and is an index for an optimal prefix-free machine M_{h(e)} with halting probability α.
Proof.
First note that we can assume that the partial computable function h is total. Indeed, define a total function g as follows: for each input e, g(e) is an index for a machine which on input σ waits for h(e) to converge, and then copies M_{h(e)}(σ). Fix a partial computable function h taking indices for left-c.e. reals to indices for prefix-free machines. Using the recursion theorem, we will define a left-c.e. ML-random α = α_e ∈ [0,1] using, in its definition, the index h(e) of a prefix-free Turing machine M_{h(e)}. We must define α even if M_{h(e)} is not optimal or h(e) does not converge. We can always assume that M_{h(e)} is prefix-free by not letting it converge on a string σ if it has already converged on a prefix of σ; we can also assume that h(e) converges by having α follow some fixed left-c.e. random β (say the one chosen below) until h(e) converges. During the construction of α we will also build an auxiliary machine S. We will ensure that α is a random left-c.e. real, but that either M_{h(e)} is not optimal (which will happen because for all d, there is x such that K_{M_{h(e)}}(x) > K_S(x) + d), or μ(dom(M_{h(e)})) is not α. This will prove the theorem. In the construction, we will build α = α_e (using the recursion theorem to know the index e in advance) while watching N = M_{h(e)}. (From now on, we drop the index e everywhere; we will write α_s for the left-c.e. approximation to α.) We will try to meet the requirements:
R_e: For some x, K_N(x) > K_S(x) + e.

If N is optimal, then there must be some e such that, for all x, K_N(x) ≤ K_S(x) + e. Thus meeting R_e for every e will ensure that N is not optimal. At the same time, we will be trying to get a global win by having μ(dom(N)) ≠ α. We will define stage-by-stage rationals α_0 < α_1 < α_2 < ⋯ with α = lim_s α_s. (Recall that an index for such a sequence is an index for α.) Fix β a left-c.e. random, 3/4 < β < 1. We will have α = qβ + r for some q, r ∈ ℚ, q > 0, so that α will be random (indeed, multiplying by the denominator of q and subtracting β, we see that β ⪯_S α, and since β is random, by Theorem 1.1, so is α). It is quite possible that we will have α = β. Let β_0 < β_1 < β_2 < ⋯ be a computable sequence of rationals with limit β. At each stage s we will define α_s = q_s β_s + r_s for some q_s, r_s ∈ ℚ in such a way that q = lim_s q_s and r = lim_s r_s are reached after finitely many stages. We think of our opponent as defining the machine N with measure γ_s at stage s, with γ = lim_s γ_s the measure of the domain of N. Our opponent must keep γ_s ≤ α_s, as if they ever have γ_s > α_s then we can immediately abandon the construction and choose q, r such that α = qβ + r has α_s < α < γ and get a global win. Our opponent also has to (eventually) increase γ_s whenever we increase α_s, or they will have γ < α. However, they may wait to do this. But, intuitively speaking, whenever we increase α_s, we can wait for our opponent to increase γ_s correspondingly (as long as, in the meantime, we work towards making α random).

The requirements can be in one of four states: inactive, preparing, waiting, and restraining. Unless it is injured by a higher priority requirement, in which case it becomes inactive, a requirement will begin inactive, then be preparing, before switching back and forth between waiting and restraining. Before giving the formal construction, we will give an overview. To start, each requirement will be inactive. When activated, a requirement will be in state preparing. When entering state preparing, a requirement R_e will have a reserved code τ ∈ 2^{<ω} and a restraint ε_e = 2^(−(|τ|+e)). The reserved code τ will be such that S has not yet converged on input τ nor on any prefix or extension of τ, so that we can still use τ as a code for some string y to make K_S(y) ≤ |τ|. While in this state, our left-c.e. approximation to α will copy that of β. The requirement R_e will remain in this state until the measure of the domain of the machine N is close to our current approximation to α, namely, within ε_e. (If our opponent does not increase the measure of N as we increase the approximation to α, then we win.) At this point, we will set S(τ) = y for some string y for which K_N(y) is currently greater than |τ| + e. The requirement will move into state waiting. From now on, we are trying to ensure that N can never converge on a string of length ≤ |τ| + e, so that K_N(y) will never drop below |τ| + e, satisfying R_e. We do this by having the approximation to α grow very slowly, so that N can only add a small amount of measure at each stage. R_e will now move between the states waiting and restraining. The requirement R_e will remain in state waiting at stages s when the measure of the domain of N is close (within ε_e) to β_s, so that R_e is content to have α approximate β. However, at some stages s, it might be that β_s is at least ε_e greater than γ_s, the measure of the domain of N so far. In this case, R_e is in state restraining and has to actively restrain α_s so that it does not increase too much. Letting r = α_{s−1} and q = ε_e − (α_{s−1} − γ_s), where s is the stage when R_e enters the state restraining, R_e has α temporarily approximate qβ + r. Whenever the measure of the domain of N increases by (1/2)ε_e, R_e updates the values of q and r (recall that β ≥ 3/4). Thus, each time the values of q and r are reset, the measure of the domain of N has increased by at least (1/2)ε_e. (Again, if our opponent does not increase the measure of N as we increase the approximation to α, then we win.) This can happen at most finitely many times until the measure of the domain of N is within ε_e of the current approximation to β, and so the requirement re-enters state waiting. (Of course, the requirement does not have to re-enter state waiting, but in this case the values of q and r are eventually fixed.) The requirement may then later re-enter state restraining if the approximation β_s increases too much faster than the measure of the domain of N, but since the measure of the domain of N will increase by at least (1/2)ε_e every time R_e switches from restraining to waiting, R_e can only switch finitely many times. Just considering one requirement, the possible outcomes of the construction are as follows:
- γ_s > α_s at some stage s, in which case we can immediately ensure that α < γ and that α is random.

- γ < α; the requirement may get stuck in preparing or restraining. If it gets stuck in preparing, we have α = β, which is random. If it gets stuck in restraining, we have α = qβ + r, with q and r rational, and this is random.

- γ = α; in this case, the requirement always leaves preparing, and every time it enters restraining it returns to waiting. After some stage, it is always in waiting and has α = β, which is random. The requirement is satisfied by having K_S(y) ≤ |τ| but K_N(y) > |τ| + e.
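As a reading aid, the four states and the transitions described in the overview can be summarized as a tiny transition table; the event names here are ours (paraphrasing the gap conditions comparing β_s − γ_s with ε_e), not notation from the construction.

```python
# States of a single requirement R_e and the transitions from the overview.
# "gap_small"/"gap_large" paraphrase beta_s - gamma_s < eps_e (resp. >= eps_e);
# the "incremental_*" events correspond to incremental stages.

TRANSITIONS = {
    ("inactive", "activated"): "preparing",
    ("preparing", "gap_small"): "waiting",        # define S(tau) = y, start waiting
    ("waiting", "gap_large"): "restraining",      # set reference values q, r
    ("restraining", "incremental_gap_small"): "waiting",
    ("restraining", "incremental_gap_large"): "restraining",  # reset q and r
}

def step(state, event):
    """Follow one transition; injury by a higher-priority requirement sends
    any state back to 'inactive'. Unknown events leave the state unchanged."""
    if event == "injured":
        return "inactive"
    return TRANSITIONS.get((state, event), state)

state = "inactive"
for event in ["activated", "gap_small", "gap_large", "incremental_gap_small"]:
    state = step(state, event)
print(state)  # back to "waiting" after one restraining episode
```

The fact that "restraining" can only return to "waiting" at an incremental stage is exactly what Claim 2 below exploits: each such switch forces the opponent's measure up by at least (1/2)ε_e, so there are only finitely many switches.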
With multiple requirements, there is injury. A requirement only allows lower priority requirements to be active while it is waiting. Every stage at which a requirement is preparing or restraining, it injures all lower priority requirements. So, at any stage, there is at most one requirement, namely the lowest priority active requirement, which can be in a state other than waiting.

Construction.

Stage 0. Begin with α_0 = 0, all the requirements other than R_0 inactive, and S not converged on any input. Set α_0 = β_0. Activate R_0 and put it in state preparing. Choose a reserved code τ_0 such that S(τ_0)↑ and set the restraint ε_0 = 2^(−|τ_0|).

Stage s > 0. Let γ_s = μ(dom(N_s)) be the measure of the domain of N at stage s. If γ_s > α_{s−1}, we can immediately end the construction, letting α_t = α_{s−1} + (γ_s − α_{s−1})β_t for t ≥ s, so that

α = lim_{t→∞} α_t = α_{s−1} + (γ_s − α_{s−1})β < γ_s ≤ μ(dom(N)).

So for the rest of this stage, we may assume that γ_s ≤ α_{s−1}. Find the highest-priority active requirement R_e, if it exists, such that β_s − γ_s ≥ ε_e. Cancel every lower priority requirement. Let R_k be the lowest priority active requirement. (Every higher priority requirement is in state waiting.)
Case 1. R_k is preparing.

Set α_s = β_s. R_k has a reserved code τ_k and restraint ε_k. If β_s − γ_s > ε_k, R_k remains preparing. Otherwise, if β_s − γ_s < ε_k, find a string y_k such that K_N(y_k)[s] > |τ_k| + k. Put S(τ_k) = y_k. R_k is now waiting.
Case 2. R_k is waiting and β_s − γ_s < ε_k.

Set α_s = β_s. Requirement R_k remains in state waiting. Activate R_{k+1} and put it in state preparing. Choose a reserved code τ_{k+1} such that S_s(τ_{k+1})↑ and set the restraint ε_{k+1} = 2^(−|τ_{k+1}|−s−1).
Case 3. R_k is waiting and β_s − γ_s ≥ ε_k.

Set the reference values r_k = α_{s−1} and q_k = ε_k − (α_{s−1} − γ_s). (In Claim 1 we will show that q_k > 0.) Put R_k in state restraining. Set α_s = q_k β_s + r_k.
Case 4. R_k is in state restraining.

R_k has a restraint ε_k and reference values q_k and r_k. If γ_s ≤ r_k + (1/2)q_k, keep the same reference values, and set α_s = q_k β_s + r_k. If γ_s > r_k + (1/2)q_k, then what we do depends on whether β_s − γ_s < ε_k or β_s − γ_s ≥ ε_k. In either case, we call stage s incremental for R_k. If β_s − γ_s < ε_k, then set α_s = β_s and put R_k into state waiting. If β_s − γ_s ≥ ε_k, change the reference values q_k and r_k to r_k = α_{s−1} and q_k = ε_k − (α_{s−1} − γ_s), and set α_s = q_k β_s + r_k. R_k remains restraining.

End construction.

Verification.
Claim 1.
At every stage s > 0, α_{s−1} ≤ α_s ≤ β_s, and for every requirement R_e which is active at stage s, either R_e is preparing or α_s − γ_s < ε_e.
Proof.
Assume the result holds for all t < s. Let R_k be the lowest priority active requirement at stage s (after the cancellation). By choice of k, for e′ < k we have β_s − γ_s < ε_{e′}. We now check that no matter which case of the construction was used to define α_s, the result holds. In all cases we will have α_s − γ_s ≤ β_s − γ_s < ε_{e′}, so it is really α_s − γ_s < ε_k that we must check.
(1)
At stage s the construction was in Case 1 or Case 2. We set α_s = β_s ≥ β_{s−1} ≥ α_{s−1}. Either we are in Case 1 and R_k remains preparing, or α_s − γ_s = β_s − γ_s < ε_k.
(2)
At stage s the construction was in Case 3. We set α_s = q_k β_s + r_k. Now in Case 3, r_k = α_{s−1} and q_k = ε_k − (α_{s−1} − γ_s). Note that α_{s−1} − γ_s ≤ α_{s−1} − γ_{s−1} < ε_k by induction, so q_k > 0 and α_s ≥ α_{s−1}. Also

α_s = (ε_k − (α_{s−1} − γ_s)) β_s + α_{s−1}
    = ε_k β_s − (α_{s−1} − γ_s) β_s + (α_{s−1} − γ_s) + γ_s
    = ε_k β_s + (1 − β_s)(α_{s−1} − γ_s) + γ_s
    < ε_k β_s + (1 − β_s) ε_k + γ_s
    = ε_k + γ_s
    ≤ β_s.

Finally, since we've just seen that α_s < ε_k + γ_s, we have that α_s − γ_s < ε_k.
(3)
At stage s the construction was in Case 4. We set α_s = q_k β_s + r_k. Then since R_k was in state restraining at stage s, we must have defined α_{s−1} = q_k β_{s−1} + r_k, unless s was an incremental stage, in which case q_k and r_k were reset at stage s before defining α_s. If s was not incremental, then α_s = q_k(β_s − β_{s−1}) + q_k β_{s−1} + r_k = q_k(β_s − β_{s−1}) + α_{s−1} ≤ q_k(β_s − β_{s−1}) + β_{s−1} ≤ β_s. Also α_{s−1} = q_k β_{s−1} + r_k ≤ q_k β_s + r_k = α_s. Finally, if we let s̃ < s be the stage where q_k and r_k were last defined, then we see that

α_s − γ_s = q_k β_s + r_k − γ_s
          = (ε_k − (α_{s̃−1} − γ_{s̃})) β_s + α_{s̃−1} − γ_s
          ≤ (ε_k − (α_{s̃−1} − γ_{s̃})) β_s + α_{s̃−1} − γ_{s̃}
          = ε_k β_s + (1 − β_s)(α_{s̃−1} − γ_{s̃})
          < ε_k.

Now suppose stage s was incremental for R_k. If β_s − γ_s < ε_k, then the result follows as in (1), and if β_s − γ_s ≥ ε_k, then the result follows as in (2). ∎
Claim 2.
Suppose that the requirement R_k is activated at stage s and never injured after stage s. Then R_k has only finitely many incremental stages.
Proof.
The restraint ε_k is defined when R_k is activated, and never changes after stage s. Suppose to the contrary that there are incremental stages s_0 < s_1 < s_2 < ⋯ after stage s. We claim that γ_{s_{i+1}} ≥ (1/2)ε_k + γ_{s_i}. From this it follows that there are at most 2/ε_k incremental stages for R_k, as if there were that many incremental stages, for some sufficiently large stage t we would have γ_t greater than 1 and hence greater than α_{t−1}, and so the construction could immediately end, with finitely many incremental stages.

Fix i for which we will show that γ_{s_{i+1}} ≥ (1/2)ε_k + γ_{s_i}. Since stage s_i is incremental, at the start of that stage R_k is in state restraining. There are two cases, depending on whether β_{s_i} − γ_{s_i} < ε_k or β_{s_i} − γ_{s_i} ≥ ε_k.

Case 1: β_{s_i} − γ_{s_i} < ε_k. During stage s_i, the requirement R_k enters state waiting. Since stage s_{i+1} is the next incremental stage, there must be some unique stage t, s_i < t < s_{i+1}, where R_k enters state restraining again and stays in that state until at least stage s_{i+1}. At stage t we define r_k = α_{t−1} and q_k = ε_k − (α_{t−1} − γ_t). These values cannot be redefined until the next incremental stage, s_{i+1}, where we have γ_{s_{i+1}} > r_k + (1/2)q_k. Then:

γ_{s_{i+1}} > r_k + (1/2)q_k
           = α_{t−1} + (1/2)(ε_k − (α_{t−1} − γ_t))
           = (1/2)ε_k + (1/2)(α_{t−1} + γ_t)
           ≥ (1/2)ε_k + γ_t
           ≥ (1/2)ε_k + γ_{s_i}.

Case 2: β_{s_i} − γ_{s_i} ≥ ε_k. During stage s_i, the requirement R_k remains in state restraining, defining r_k = α_{s_i−1} and q_k = ε_k − (α_{s_i−1} − γ_{s_i}). It stays in that state, with the same reference values q_k and r_k, until the next incremental stage s_{i+1}, where we have γ_{s_{i+1}} > r_k + (1/2)q_k. We get a similar computation to the previous case:

γ_{s_{i+1}} > r_k + (1/2)q_k
           = α_{s_i−1} + (1/2)(ε_k − (α_{s_i−1} − γ_{s_i}))
           = (1/2)ε_k + (1/2)(α_{s_i−1} + γ_{s_i})
           ≥ (1/2)ε_k + γ_{s_i}.

Thus for each i we have γ_{s_{i+1}} ≥ (1/2)ε_k + γ_{s_i}, completing the proof of the claim. ∎
Claim 3.
Each requirement is injured only finitely many times.
Proof.
We argue by induction on the priority of the requirements. Suppose that each requirement of higher priority than R_k is only injured finitely many times. Fix a stage s after which none of them are injured. By the previous claim, by increasing s we may assume that no higher priority requirement has an incremental stage after stage s. First of all, R_k can only be activated at stages when every higher priority requirement is waiting. If R_k is never activated after stage s, then it cannot be injured. Increasing s further, assume that R_k is activated at stage s. If R_k is injured after stage s, it is at the first stage t > s such that a requirement R_e of higher priority than R_k has β_t − γ_t ≥ ε_e. Moreover, the requirement R_e remains in the state waiting until such a stage. Suppose that t > s is the first such stage, if one exists. At the beginning of stage t, R_e is waiting, and so R_e enters the state restraining. Then R_e can only leave state restraining, and re-enter state waiting, at a stage which is incremental for R_e; since there are no such stages after stage s, R_e can never re-enter state waiting. So R_k is never again re-activated, and so cannot be injured again. Thus R_k can be injured only once after stage s, proving the claim. ∎
Claim 4.
α = lim_s α_s is random.
Proof.
There are three possibilities.
(1)
Some requirement R_k enters state preparing at stage s, and is never injured nor leaves state preparing after stage s. The requirement R_k is the lowest priority requirement which is active at any point after stage s. In this case, at each stage t ≥ s, we set α_t = β_t and so α = β is random.

(2)
Some requirement R_k enters state restraining at stage s, and is never injured nor leaves state restraining after stage s. The requirement R_k is the lowest priority requirement which is active at any point after stage s. Increasing s, we may assume that this requirement R_k never has an incremental stage after stage s. Then the target value q_k β + r_k at stage s is also the target value at all stages t ≥ s. At each such stage t ≥ s, we set α_t = q_k β_t + r_k. Thus α = q_k β + r_k, with q_k, r_k ∈ ℚ, and so is random.

(3)
For each requirement there is a stage s after which the requirement is never injured and is always in state waiting. There are infinitely many stages s at which we are in Case 2 of the construction. At every stage, all requirements except possibly for the lowest priority requirement are in state waiting. For requirements R_1, …, R_k, there is some first stage t at which the lowest priority requirement is in state waiting and never again leaves state waiting. At stage t, we must be in Case 2 of the construction. Indeed, in Case 3 the requirement R_k leaves state waiting. In Case 2, we set α_t = β_t. Moreover, we activate the next requirement, and the next requirement is never injured. So there is a greater corresponding first stage t at which that requirement is in state waiting and never again leaves that state. Continuing, there are infinitely many stages at which we set α_t = β_t. It follows that α = β, which is random. ∎
Claim 5.
Suppose that μ(dom(N)) = α. For each requirement R_k, there is a stage s after which the requirement is active, never injured, and is always in state waiting.
Proof.
We argue inductively that for each requirement R_k, there is a stage s after which the requirement is never injured and is always waiting. By Claim 3 there is a stage s after which R_k is never injured, and (inductively) every higher priority requirement is always waiting after stage s. By Claim 2, by increasing s we may assume that R_k has no incremental stages after stage s. Then R_k is activated at the least such stage s, since each higher priority requirement is always waiting. Note that R_k can never be injured after stage s: if R_k were injured by a higher priority requirement R_e, then R_e would enter state restraining, whereas every higher priority requirement is always waiting after stage s.

Now we claim that, if R_k is preparing, it leaves that state after stage s. Indeed, if R_k never left state preparing, we would have α = β. By assumption, α = μ(dom(N)) = lim_s γ_s. Thus for some stage t we must have that β_t − γ_t < ε_k. At this stage t, R_k leaves state preparing.

Now we claim that R_k can never enter state restraining after stage s. Since R_k has no incremental stages after stage s, if R_k did enter state restraining, it would never be able to leave that state. Moreover, q_k and r_k could never change their values. So we would end up with α = q_k β + r_k. Moreover, for all t ≥ s, γ_t < r_k + (1/2)q_k, as there are no more incremental stages. Then γ ≤ r_k + (1/2)q_k < r_k + q_k β = α, contradicting the hypotheses of the claim. Thus R_k can never enter state restraining after stage s.

Thus we have shown that for sufficiently large stages, R_k is in state waiting. ∎
Claim 6.
Suppose that μ(dom(N)) = α. Then every requirement R_k is satisfied.
Proof.
Since μ(dom(N)) = α, at all stages s, γ_s ≤ α_{s−1}. As argued in the previous claim, there is a stage s at which R_k is activated, and after which R_k is never injured. At this stage s, R_k enters state preparing and we choose τ_k such that S(τ_k)↑ and set ε_k = 2^(−(|τ_k|+k)). By the previous claim, R_k exits state preparing at some stage t > s. At this point, we have β_t − γ_t < ε_k. We choose a string y such that K_N(y)[t] > |τ_k| + k and put S(τ_k) = y. Thus K_S(y) ≤ |τ_k|. R_k enters state waiting, and α_t = β_t. Since, at stage t, K_N(y)[t] > |τ_k| + k, for every string ρ with |ρ| ≤ |τ_k| + k, N(ρ)[t] ≠ y. For each stage t′ ≥ t the requirement R_k is no longer in state preparing and so by Claim 1 we have γ_{t′+1} − γ_{t′} ≤ α_{t′} − γ_{t′} < ε_k. From this it follows that we can never have N(ρ) = y for any ρ with |ρ| ≤ |τ_k| + k; if N(ρ) = y for the first time at stage t′+1 > t, then we would have γ_{t′+1} − γ_{t′} ≥ 2^(−|ρ|) ≥ 2^(−(|τ_k|+k)) = ε_k, which as we just argued cannot happen. ∎
We can now use the claims to complete the verification. By Claim 4, α = lim_s α_s is indeed random, and by Claim 1, α ≤ β, so α ∈ [0,1]. So the function f must output the index of a machine N with μ(dom(N)) = α. By Claim 6, each requirement is satisfied, and so for every e there is τ such that K_N(τ) > K_M(τ) + e. Thus N is not optimal, a contradiction. This completes the proof of the theorem. ∎
2.2 Almost uniform constructions of optimal machines
We just established that there is no uniform procedure to turn a left-c.e. Martin-Löf random α ∈ [0,1] into a universal machine M such that Ω_M = α. However, algorithmic randomness offers a notion of "almost uniformity", known as layerwise computability, see [HR09]: Let (𝒰_n) be a fixed effectively optimal Martin-Löf test, i.e., a Martin-Löf test such that for any other Martin-Löf test (𝒱_n) there exists a constant c such that 𝒱_{n+c} ⊆ 𝒰_n for all n, and this constant c can be computed uniformly from an index of the Martin-Löf test (𝒱_n). Note that an effectively optimal Martin-Löf test is in particular universal, i.e., x is Martin-Löf random if and only if x ∉ 𝒰_n for some n. A function F from [0,1] (or, more generally, from a computable metric space) to some represented space 𝒳 is layerwise computable if it is defined on every Martin-Löf random x and moreover there is a partial computable g from [0,1] × ℕ to 𝒳 where g(x,n) = F(x) whenever x ∉ 𝒰_n.
Here we are in a different setting, as we are dealing with indices of reals instead of reals, but by extension we could say that a partial function F : ℕ → 𝒳 is layerwise computable on left-c.e. reals if F(e) is defined for every index e of a random left-c.e. real, and if there is a partial computable function g : ℕ × ℕ → 𝒳 such that g(e,n) = F(e) whenever the left-c.e. real α_e of index e does not belong to 𝒰_n (note that the definition remains the same if g is required to be total). Even with this weaker notion of uniformity, the uniform construction of optimal machines from their halting probabilities remains impossible.
Theorem 2.1.
There does not exist a layerwise computable mapping F from indices for random left-c.e. reals α_e ∈ [0,1] to optimal machines such that Ω_{M_{F(e)}} = α_e.
Proof.
This is in fact a consequence of a stronger result: there is no ∅′-partial computable function F such that F(e) is defined whenever α_e is Martin-Löf random and Ω_{M_{F(e)}} = α_e. Since a ∅′-partial computable function can be represented by a total computable function h(·,·) such that for every e on which F is defined, lim_t h(e,t) = F(e), we see that a layerwise computable function on left-c.e. reals is a particular case of a ∅′-partial computable function.
Let now F be a ∅′-partial computable function and h a total computable function such that lim_t h(e,t) = F(e) whenever F(e) is defined.
The idea is to run the same construction as in Theorem 1.2, but instead of playing against the machine of index F(e), we play against the machine of index h(e,t_0), with t_0 = 0. If at some point we find a t_1 > t_0 such that h(e,t_1) ≠ h(e,t_0), we restart the entire construction, this time playing against the machine of index h(e,t_1), until we find t_2 > t_1 such that h(e,t_2) ≠ h(e,t_1), then restart, etc. Of course, when we restart the construction, we cannot undo the increases we have already made to α. This problem is easily overcome as follows. First observe that the strategy presented in the proof of Theorem 1.2 is robust: instead of starting at α = 0 and staying in the interval [0,1] throughout the construction, for any rational interval [p,q] ⊆ [0,1], we could have started the construction with α_0 = p and stayed within [p,q] by, for example, targeting the random real p + (q−p)·β instead of β. Now, let ρ be a random left-c.e. real in [0,1] with computable lower approximation ρ_0 < ρ_1 < …. We play against the machine of index h(e,t_i) by applying the strategy of Theorem 1.2 with the added constraint that α must stay in the interval [ρ_i, ρ_{i+1}]. If we then find a t_{i+1} such that h(e,t_{i+1}) ≠ h(e,t_i), we move to the next interval [ρ_{i+1}, ρ_{i+2}] and apply the strategy to diagonalize against the machine of index h(e,t_{i+1}) while keeping α in this interval, etc.
There are two cases:
•
Either h(e,t) eventually stabilizes to a value h(e,t_i), in which case we get to fully implement the diagonalization against the machine of index h(e,t_i) = F(e), which ensures that α_e ≠ Ω_{M_{F(e)}} or that M_{F(e)} is not optimal.
•
Or h(e,t) does not stabilize, in which case we will infinitely often move α from the interval [ρ_i, ρ_{i+1}] to [ρ_{i+1}, ρ_{i+2}], which means that the limit value of α = α_e will be ρ, hence α_e is random, while F(e) is undefined since h(e,t) does not converge.
In either case, we have shown what we wanted. ∎
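The restart mechanism in the proof above is essentially a small control loop: keep a current guess h(e,t_i), and whenever the approximation changes value, abandon the current interval and continue the diagonalization inside the next one. A minimal toy sketch (the function `restart_strategy` and the particular sequences are illustrative, not part of the paper's construction):

```python
# Toy model of the interval-hopping restart strategy.
# h_values simulates the limit approximation t -> h(e, t); rho is a fixed
# increasing sequence of rationals rho_0 < rho_1 < ... (here dyadic).
from fractions import Fraction

def restart_strategy(h_values, rho):
    """Return, for each stage t, the interval [rho_i, rho_{i+1}] in which
    the diagonalization currently keeps alpha."""
    i = 0                      # index of the current interval / current guess
    intervals = []
    current = h_values[0]
    for v in h_values:
        if v != current:       # h(e, t) changed: abandon the old machine...
            current = v
            i += 1             # ...and restart inside the next interval
        intervals.append((rho[i], rho[i + 1]))
    return intervals

rho = [Fraction(1, 2) - Fraction(1, 2) ** (n + 1) for n in range(10)]
# here h changes value twice, so the strategy settles in the third interval
steps = restart_strategy([5, 5, 7, 7, 3, 3, 3, 3], rho)
```

If h stabilizes, the loop eventually stays in one interval and the diagonalization there runs to completion; if not, every interval is abandoned and α is pushed through all the ρ_i, so α = ρ in the limit.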
Finally, we can consider a yet weaker type of non-uniformity. In the definition of layerwise computability on left-c.e. reals, we asked that for α_e ∉ 𝒰_n, the machine of index g(e,n) has halting probability α_e and g(e,n) = g(e,n′) if α_e ∉ 𝒰_n ∪ 𝒰_{n′}. Here we could try to remove this last condition by allowing g(e,n) and g(e,n′) to be codes for different machines (but both with halting probability α_e). In this setting, we do get a positive result.
Theorem 2.2.
There exists a partial computable function g(·,·) such that if α_e ∉ 𝒰_n and α_e ∈ [0,1], then g(e,n) is defined and Ω_{M_{g(e,n)}} = α_e.
Proof.
This follows from work of Calude, Hertling, Khoussainov, and Wang [CHKW01] and of Kučera and Slaman [KS01]. Let Ω be the halting probability of an optimal machine. Kučera and Slaman showed how, from the index of a left-c.e. real α ∈ [0,1], one can build a Martin-Löf test (𝒱_n) such that if α ∉ 𝒱_n then one can, uniformly in n, produce approximations α_1 < α_2 < … of α and Ω_1 < Ω_2 < … of Ω such that (α_{i+1} − α_i) ≥ 2^{-n}·(Ω_{i+1} − Ω_i) (see [DH10, Theorem 9.2.3]). Then, by [CHKW01], one can use such approximations to uniformly build a universal machine with halting probability α, as long as α ∈ (2^{-n}, 1 − 2^{-n}) (see [DH10, Theorem 9.2.2]).
Thus, given an index for α, if (𝒱_n) is the Martin-Löf test built as in [KS01], we can build the test 𝒱′_n = 𝒱_{n+2} ∪ (0, 2^{-n-2}) ∪ (1 − 2^{-n-2}, 1) (whose index can be uniformly computed from that of (𝒱_n)). Now, if α ∉ 𝒰_n, then we can compute a constant c such that α ∉ 𝒱′_{n+c}, and apply the above argument with n + c + 2 in place of n. ∎
2.3 Uniform constructions of semi-measures
Another way to define Omega numbers, which is equivalent if one is not concerned about uniformity issues, is via left-c.e. semi-measures (see [DH10, Section 3.9]).
Definition 2.3.
A semi-measure is a function m : ℕ → ℝ⁺ such that ∑_n m(n) ≤ 1. It is left-c.e. if the set {(n,q) : n ∈ ℕ, q ∈ ℚ, m(n) > q} is c.e., or, equivalently, if m is the limit of a non-decreasing sequence (m_s) of uniformly computable functions such that ∑_n m_s(n) ≤ 1 for all s.
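The equivalent "limit of uniformly computable stages" formulation can be illustrated with the simplest dyadic example m(n) = 2^{-(n+1)} (a sketch; the staging scheme below is one arbitrary choice among many):

```python
from fractions import Fraction

def m_stage(s, n):
    """Stage-s approximation of the semi-measure m(n) = 2^-(n+1): the value
    for n is released only after stage n, so each m_stage(s, .) is
    computable, the stages are non-decreasing in s, and each stage has
    total mass at most 1."""
    return Fraction(1, 2 ** (n + 1)) if n < s else Fraction(0)

# total mass at each stage; masses[s] = 1 - 2^-s for s <= 64
masses = [sum(m_stage(s, n) for n in range(64)) for s in range(10)]
```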
There exist universal left-c.e. semi-measures, i.e., left-c.e. semi-measures m such that for any other left-c.e. semi-measure ν there is a c > 0 such that m(n) ≥ c·ν(n) for all n. The Levin coding theorem (see [DH10, Theorem 3.9.4]) asserts that a left-c.e. semi-measure m is universal if and only if there are positive constants c_1, c_2 such that c_1·2^{-K(n)} < m(n) < c_2·2^{-K(n)} for all n. An important result of Calude, Hertling, Khoussainov, and Wang [CHKW01] is that a left-c.e. real α is an Omega number if and only if it is the sum ∑_n m(n) for some universal left-c.e. semi-measure m. Interestingly, with this representation of Omega numbers, uniform constructions are possible.
Theorem 2.4.
There is a total computable function f such that if e is an index for a random left-c.e. real α ∈ [0,1], then f(e) is defined and is an index for a universal left-c.e. semi-measure m_{f(e)} with sum α.
Proof.
Let m be a fixed universal semi-measure and γ ≤ 1 its sum. Suppose we are given (the index of) a left-c.e. real α. We build our semi-measure ν by building uniformly, for each n > 0, a left-c.e. semi-measure ν_n of sum 2^{-n}·α, and we take ν = ∑_{n>0} ν_n. While doing so, we also build an auxiliary Martin-Löf test (𝒰_n)_{n>0}. The measure ν_n is designed as follows. We monitor the semi-measure m and the real α at the same time and run the following algorithm.
Step 1. Let s_0 be the stage at which we entered step 1. Wait for the least stage s ≥ s_0 such that some value m(k) with k ≤ s has increased since the last k-stage. If there is more than one such k at stage s, let k be the one whose most recent k-stage is least. Let x be the amount by which m(k) has increased since the previous k-stage, and say that s is a k-stage. Move to step 2.
Step 2. Put (α_s, α_s + 2^{-n}·x) into 𝒰_n. Move to step 3.
Step 3. Increase ν_n(k) by 2^{-n}·(α_s − α_{s_0}). At further stages t ≥ s, when we see an increase α_{t+1} > α_t, we increase ν_n(k) by 2^{-n}·(α_{t+1} − α_t). Moreover, if we now have α_{t+1} ≥ α_s + 2^{-n}·x, we go back to step 1; otherwise we stay in step 3.
By construction we have ∑_k ν_n(k) = 2^{-n}·α. Still by construction, the measure of 𝒰_n is bounded by γ·2^{-n} ≤ 2^{-n}, so it is indeed a Martin-Löf test. Thus, if α is indeed random, there is an n such that α ∉ 𝒰_n. Looking at the above algorithm, α ∉ 𝒰_n means that for this n we enter step 1 of the algorithm infinitely often, and thus whenever some m(k) is increased by x at step 1, this is met by a sum of increases of ν_n(k) by strictly more than 2^{-n}·x during step 3. Thus ν_n ≥ 2^{-n}·m, which makes ν_n a universal semi-measure, and thus ν ≥ ν_n is universal. ∎
An interesting corollary is that one cannot uniformly turn a universal left-c.e. semi-measure m into a prefix-free machine whose halting probability is ∑_n m(n). Indeed, if we could, then we could uniformly turn a random left-c.e. real α ∈ [0,1] into a prefix-free machine of halting probability α: first apply the above theorem to get a universal left-c.e. semi-measure m of sum α, and then turn m into a machine M of halting probability α. This would contradict Theorem 1.2.
To summarize, for arbitrary (not necessarily random) left-c.e. reals, we can make all of the transformations uniformly:
[Diagram: three nodes, "left-c.e. real", "left-c.e. semi-measure", and "prefix-free machine", with a double-headed arrow between each pair, indicating uniform transformations in both directions.]
For random left-c.e. reals, and optimal prefix-free machines, we can only make the following transformations uniformly:
[Diagram: three nodes, "random left-c.e. real", "universal left-c.e. semi-measure", and "optimal prefix-free machine"; a double-headed arrow joins the random left-c.e. real and the universal left-c.e. semi-measure, while the arrows involving the optimal prefix-free machine point only out of it.]
3 Differences of Left-c.e. Reals
Theorem 1.4.
There is no partial computable function f such that if e is an index for a non-computable left-c.e. real α, then f(e) is defined and is an index for a left-c.e. real β such that α − β is neither left-c.e. nor right-c.e.
Proof.
Using the recursion theorem, we define a left-c.e. real α while watching the left-c.e. real β produced from α by the function f. We will also define a right-c.e. real δ. Let (R^i) be an enumeration of the right-c.e. reals, with right-c.e. approximations (R^i_s). We will ensure that α ≠ R^i for any i, so that α is non-computable, and that either α − β = δ or, for all sufficiently large stages, α grows more than β (and so α − β is left-c.e.).
Each stage of the construction will be in one of infinitely many possible states: wait, and follow(i) for some i. At wait stages, α will be held at the same value and we will begin decreasing the right-c.e. real δ closer to α − β; if there are infinitely many wait stages, then in fact we will have δ = α − β. At follow(i) stages, α will increase as much as β, and possibly more, in an attempt to have R^i < α. Because α will be increasing as much as, and possibly more than, β, if from some point on all stages are follow(i) stages, then α − β will be left-c.e. We will only enter follow(i) when we have a reasonable chance of making R^i < α, i.e., when R^i is not too much greater than α, and we will only exit follow(i) when we have succeeded in making R^i < α. Since R^i is right-c.e. and α is left-c.e., this success can never be injured. It is possible that we will never succeed in making R^i < α (because in fact R^i = α), but in this case we will still ensure that α is not computable and make α − β left-c.e. We just have to make sure that we never increase α − β above δ.
Note that, technically, when defining α_s we cannot wait for β_s to converge. But we can get around this by essentially the following argument. First, fix a non-computable left-c.e. real γ and let α_s = γ_s until the uniform procedure provides us with a β and β_0 converges, at some stage s_0. Then we can restart the construction, considering the construction to begin with α_0 = γ_{s_0}. We can also, in a uniform way, replace the given approximation to β (which might not even be total or left-c.e.) by a different one which is guaranteed to be left-c.e., converges in a known amount of time, and is equal to β in the case that β is in fact left-c.e.
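The repair of β's approximation sketched in the last paragraph can be modeled concretely. In the toy below (names `totalize`, `raw`, `horizon` are illustrative, and convergence within s computation steps is simulated by a dict), the repaired approximation is total, non-decreasing, and has the same limit as the original whenever the original is a genuine left-c.e. approximation:

```python
from fractions import Fraction

def totalize(raw, horizon):
    """Toy version of replacing a possibly-partial approximation to beta by
    a total non-decreasing one. raw maps a stage t to beta_t *if that
    computation has converged* (missing key = not converged); the repaired
    approximation at stage s uses only the values seen by stage s."""
    best = Fraction(0)        # harmless default before anything converges
    fixed = []
    for s in range(horizon):
        converged = [v for t, v in raw.items() if t <= s]
        if converged:
            best = max(best, max(converged))
        fixed.append(best)
    return fixed

# beta_0 = 1/4 converges immediately, beta_2 = 1/2 only at stage 2
fixed = totalize({0: Fraction(1, 4), 2: Fraction(1, 2)}, 5)
```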
Construction.
Stage s = 0. Begin with α_0 = γ_{s_0} and δ_0 = 1 + α_0 − β_0. Say that stage 1 will be a wait stage.
Stage s + 1. We will have determined at stage s whether stage s + 1 is a wait or follow stage.
wait: Let α_{s+1} = α_s and
δ_{s+1} = min(δ_s, α_{s+1} − β_{s+1} + 1/2^s).
Check whether, for some i ≤ s, R^i_{s+1} ≥ α_{s+1} and R^i_{s+1} − α_{s+1} < 1/2^i. If we find such an i, let i be the least such; the next stage is a follow(i) stage. If there is no such i, the next stage is a wait stage.
follow(i): In all cases, let δ_{s+1} = δ_s. Then:
(1)
Check whether
α_s + β_{s+1} − β_s ≥ R^i_{s+1}.
If so, set
α_{s+1} = R^i_{s+1} + ε ≤ α_s + β_{s+1} − β_s,
where ε < 1/2^i. The next stage is a wait stage.
(2)
Otherwise, check whether for some j,
0 ≤ R^j_s − α_s < (1/2^{j+2})·[δ_s − (α_s − β_s)].
If we find such a j, choose the least such j, and let ε > 0 be such that
R^j_s + ε − α_s < (1/2^{j+2})·[δ_s − (α_s − β_s)].
Let
α_{s+1} = max(R^j_s + ε, α_s + β_{s+1} − β_s).
If j = i, the next stage is a wait stage. Otherwise, the next stage is a follow(i) stage.
(3)
Finally, in any other case, let
α_{s+1} = α_s + β_{s+1} − β_s.
The next stage is a follow(i) stage.
End construction.
The verification will consist of five claims followed by a short argument.
Claim 1.
α = sup_s α_s comes to a limit.
Proof.
At wait stages, we do not increase α. If we enter follow(i), then we can increase α by at most 2/2^i before we exit follow(i). Thus
α ≤ ∑_{i∈ℕ} 2/2^i < ∞. ∎
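The geometric bound in Claim 1 is elementary, but it can be checked numerically (a sketch; the truncation at 64 terms is arbitrary):

```python
from fractions import Fraction

# Partial sum of sum_i 2/2^i, which bounds the total extra increase of
# alpha made during all the follow(i) phases; the full series sums to 4.
total = sum(Fraction(2, 2 ** i) for i in range(64))
```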
Claim 2.
Suppose that, from some stage t on, every stage is a follow(i) stage. Then:
(1)
for all s ≥ t, α_{s+1} − α_s ≥ β_{s+1} − β_s;
(2)
α − β is left-c.e.;
(3)
for all s ≥ t,
δ_s − (α_s − β_s) ≥ (1/2)·[δ_t − (α_t − β_t)].
Proof.
(1) follows from the fact that we either set the next stage to be a wait stage, or we have α_{s+1} ≥ α_s + β_{s+1} − β_s. (2) follows easily from (1). For (3), since δ_s = δ_t for all s ≥ t, whenever we define
α_{s+1} = α_s + β_{s+1} − β_s
we maintain
δ_{s+1} − (α_{s+1} − β_{s+1}) = δ_s − (α_s − β_s).
The other possible case is when we find j such that
0 ≤ R^j_s − α_s < (1/2^{j+2})·[δ_s − (α_s − β_s)]
and define
α_{s+1} = R^j_s + ε.
Note that in this case we permanently have R^j − α < 0, so we can never do this again for the same j. We have
δ_{s+1} − (α_{s+1} − β_{s+1}) = δ_s − R^j_s − ε + β_{s+1} ≥ δ_s − α_s + β_s − (1/2^{j+2})·[δ_s − (α_s − β_s)] = ((2^{j+2} − 1)/2^{j+2})·[δ_s − (α_s − β_s)].
Thus, for all stages s ≥ t,
δ_s − (α_s − β_s) ≥ ∏_{j∈ℕ} ((2^{j+2} − 1)/2^{j+2}) · [δ_t − (α_t − β_t)] ≥ (1/2)·[δ_t − (α_t − β_t)]. ∎
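The final inequality of Claim 2 uses the fact that the infinite product ∏_{j≥0}(1 − 2^{-(j+2)}) stays above 1/2 (since ∑_j 2^{-(j+2)} = 1/2). A quick exact-arithmetic check of this bound (a sketch; the truncation at 64 factors is arbitrary):

```python
from fractions import Fraction

# Each use of an index j in the proof of Claim 2(3) shrinks the quantity
# delta_s - (alpha_s - beta_s) by a factor 1 - 2^-(j+2); since each j is
# used at most once, the total shrinkage is at least prod_j (1 - 2^-(j+2)).
prod = Fraction(1)
for j in range(64):
    prod *= 1 - Fraction(1, 2 ** (j + 2))
```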
Claim 3.
For all stages s, δ_s > α_s − β_s.
Proof.
We argue by induction. The claim is true for s = 0. If stage s + 1 is a wait stage, then there are two possible values for δ_{s+1}: δ_s or α_{s+1} − β_{s+1} + 1/2^s. The second is clearly strictly greater than α_{s+1} − β_{s+1}. For the first, since α_{s+1} = α_s and β_{s+1} ≥ β_s, we have δ_s > α_s − β_s ≥ α_{s+1} − β_{s+1}. If stage s + 1 is a follow stage, then δ_{s+1} = δ_s. There are two options for α_{s+1}. First, we might set α_{s+1} ≤ α_s + β_{s+1} − β_s, so that α_{s+1} − β_{s+1} ≤ α_s − β_s, and δ_{s+1} > α_{s+1} − β_{s+1} follows from the induction hypothesis δ_s > α_s − β_s. Second, we might set
α_{s+1} = R^j_s + ε,
where
R^j_s + ε − α_s < (1/2^{j+2})·[δ_s − (α_s − β_s)].
Then
α_{s+1} − β_{s+1} ≤ R^j_s + ε − β_s < α_s − β_s + (1/2^{j+2})·[δ_s − (α_s − β_s)] = (1/2^{j+2})·δ_s + ((2^{j+2} − 1)/2^{j+2})·[α_s − β_s] < δ_s = δ_{s+1}.
This completes the proof. ∎
Claim 4.
α is non-computable.
Proof.
If α were computable, then it would be equal to some right-c.e. real R^i. For all stages s, α ≤ R^i_s. Let t be a stage such that R^i_t − α_t < 1/2^i. Increasing t, we may assume that there is j ≤ i such that we are in follow(j) from stage t on. Increasing t further, we can assume that for each i′ < i, if R^{i′} < α, then we have seen this by stage t. Consider the inequality
R^i_s − α_s < (1/2^{i+2})·[δ_s − (α_s − β_s)].
By (3) of Claim 2, the right-hand side has a lower bound, and this lower bound is strictly positive by Claim 3. Since R^i = α, there is a stage s ≥ t at which this inequality holds. Then, by the choice of t, i is the least value satisfying this inequality, and we set α_{s+1} = R^i_s + ε > R^i_s, contradicting α ≤ R^i_s. ∎
Claim 5.
If there are infinitely many wait stages, then δ = α − β.
Proof.
Using Claim 3, for each wait stage s we have
α_s − β_s ≤ δ_s ≤ α_s − β_s + 1/2^{s-1}.
Thus δ = α − β. ∎
We are now ready to complete the proof. It follows from Claim 1 that α is a left-c.e. real that comes to a limit, and by Claim 4, α is non-computable. If there are infinitely many wait stages, then by Claim 5, δ = α − β, and so α − β is right-c.e. The other option is that there is an i such that every stage from some point on is a follow(i) stage. In this case, by (2) of Claim 2, α − β is left-c.e. In either case, f has failed to produce a β such that α − β is neither left-c.e. nor right-c.e. ∎
We now turn to Theorem 1.6, which says that one can uniformly construct, from an optimal (respectively, universal) machine M, an optimal (respectively, universal) machine N such that Ω_M − Ω_N is neither left-c.e. nor right-c.e. We first prove this for optimal machines, and then obtain the result for universal machines as a corollary.
Theorem 3.1.
Theorem 1.5 is uniform, in the sense that there is a total computable function g such that if M = M_e is an optimal machine, then N = M_{g(e)} is optimal and Ω_M − Ω_N is neither left-c.e. nor right-c.e.
Proof.
Let γ, δ be two Solovay-incomparable left-c.e. reals. As explained in [BLP17], if α is random, then β = α + γ − δ is left-c.e. and random, and α − β is neither left-c.e. nor right-c.e. Our goal is to make this idea effective. Let us first express δ as the sum ∑_σ 2^{-h(σ)}, where h is a computable function. In what follows, when we write h(σ) for a string σ, we mean h(n) where n is the integer associated to σ via a fixed computable bijection. Furthermore, let R be a machine such that μ(dom(R)) = γ. We build a machine N from a machine M as follows. First, we wait for M to issue a description M(p_0) = τ_0. When this happens, N issues a description N(p_0 0) = τ_0 and countably many descriptions by setting N(p_0 1 q) = R(q) for every q ∈ dom(R). Now, for every string τ ≠ τ_0 in parallel, we enumerate all descriptions M(p) = τ. As long as the enumerated descriptions are such that |p| ≥ h(τ), N copies these descriptions. If at some point we find a description M(p) = τ with |p| ≤ h(τ) − 1, we then issue descriptions N(p0) = τ and N(p′) = τ for every p′ of length h(τ) which extends p1, except for p′ = p1^{h(τ)−|p|}, for which we leave N(p′) undefined. After having done that, N copies all further M-descriptions of τ, regardless of the length of these descriptions. By construction, N is prefix-free, because any M-description M(p) = τ is replaced in N by a set of descriptions N(p′) = τ where the p′ form a prefix-free set of extensions of p. Moreover, N is optimal because, by construction, whenever a description M(p) = τ is enumerated, an N-description of τ of length at most |p| + 1 is issued. Let us now evaluate Ω_M − Ω_N. The very first description M(p_0) = τ_0 of M gives rise to descriptions in N of total measure 2^{-k-1} + 2^{-k-1}·μ(dom(R)), where k = |p_0|. Thus this part of the construction contributes to Ω_M − Ω_N an amount 2^{-k} − 2^{-k-1} − 2^{-k-1}·μ(dom(R)) = 2^{-k-1} − 2^{-k-1}·γ. Now, for the other strings τ ≠ τ_0, there are two cases. Either a description M(p) = τ with |p| < h(τ) is found (which is equivalent to saying that K_M(τ) < h(τ)), or no such description is found. Let A be the set of τ for which such a description is found. For τ ∉ A, all M-descriptions of τ are copied identically in N. For τ ∈ A, all M-descriptions of τ are copied except one description M(p) = τ (thus of measure 2^{-|p|}), which is mimicked in N by a set of descriptions of measure 2^{-|p|} − 2^{-h(τ)}. Putting it all together:
Ω_M − Ω_N = 2^{-k-1} − 2^{-k-1}·γ + ∑_{τ∈A} 2^{-h(τ)}.
To finish the proof, we appeal to the theory of Solovay functions. When h is a computable positive function, the sum ∑_σ 2^{-h(σ)} is not random if and only if h(σ) − K(σ) → ∞ [BD09, BDNM15]. This is the case here, as δ = ∑_σ 2^{-h(σ)} is Solovay-incomplete, hence not random. Suppose that the machine M is indeed an optimal machine. Then K_M = K + O(1), and thus we have h(σ) − K_M(σ) → ∞. In particular, for almost all σ, h(σ) > K_M(σ). This shows that the set A above is cofinite, and therefore that ∑_{τ∈A} 2^{-h(τ)} = δ − q for some (dyadic) rational q. Plugging this into the above equality, we get
Ω_M − Ω_N = 2^{-k-1} − 2^{-k-1}·γ + δ − q.
Since γ and δ are Solovay-incomparable, this shows that Ω_M − Ω_N is neither left-c.e. nor right-c.e. ∎
Corollary 3.2.
There is a total computable function g such that if M = M_e is a universal machine, then N = M_{g(e)} is universal and Ω_M − Ω_N is neither left-c.e. nor right-c.e.
Proof.
Given M = M_e, let P = M_{h(e)} be the machine constructed from M as in the previous theorem, where h is the total computable function provided there. Define a machine N = M_{g(e)} by setting N(0σ) = M(σ) and N(1σ) = P(σ). Then Ω_N = (1/2)·Ω_M + (1/2)·Ω_P, and so Ω_M − Ω_N = (1/2)·(Ω_M − Ω_P). Thus if M is universal, then so is N, and Ω_M − Ω_N is neither left-c.e. nor right-c.e. ∎
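The halting-probability identity used in this corollary can be checked on finite toy machines, modeled as prefix-free dictionaries from programs to outputs (a sketch; `omega`, `join`, and the particular machines are illustrative):

```python
from fractions import Fraction

def omega(machine):
    """Halting probability of a finite 'machine' given as a prefix-free
    dict from programs (bit strings) to outputs."""
    return sum(Fraction(1, 2 ** len(p)) for p in machine)

def join(M, P):
    """The machine N with N(0p) = M(p) and N(1p) = P(p)."""
    N = {'0' + p: out for p, out in M.items()}
    N.update({'1' + p: out for p, out in P.items()})
    return N

M = {'0': 'a', '10': 'b'}      # toy prefix-free machines
P = {'11': 'c'}
N = join(M, P)
# Omega_N = 1/2 Omega_M + 1/2 Omega_P,
# hence Omega_M - Omega_N = 1/2 (Omega_M - Omega_P)
```

Prefixing every M-program with 0 halves its contribution to the halting probability, which is exactly why the difference Ω_M − Ω_N inherits, up to the factor 1/2, the non-left-c.e./non-right-c.e. behavior of Ω_M − Ω_P.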
References

[ASWZ00] Klaus Ambos-Spies, Klaus Weihrauch, and Xizhong Zheng. Weakly computable real numbers. J. Complexity, 16(4):676–690, 2000.
[Bar18] George Barmpalias. Aspects of Chaitin's Omega. In Algorithmic Randomness: Progress and Prospects, pages 623–632. Springer, 2018.
[BD09] Laurent Bienvenu and Rodney Downey. Kolmogorov complexity and Solovay functions. In Symposium on Theoretical Aspects of Computer Science (STACS 2009), volume 09001 of Dagstuhl Seminar Proceedings, pages 147–158, http://drops.dagstuhl.de/opus/volltexte/2009/1810, 2009. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, Germany.
[BDNM15] Laurent Bienvenu, Rodney G. Downey, André Nies, and Wolfgang Merkle. Solovay functions and their applications in algorithmic randomness. Journal of Computer and System Sciences, 81(8):1575–1591, 2015.
[BLP17] George Barmpalias and Andrew Lewis-Pye. A note on the differences of computably enumerable reals. In Computability and Complexity, volume 10010 of Lecture Notes in Computer Science, pages 623–632. Springer, Cham, 2017.
[BS12] Laurent Bienvenu and Alexander Shen. Random semicomputable reals revisited. In Michael J. Dinneen, Bakhadyr Khoussainov, and André Nies, editors, Computation, Physics and Beyond, volume 7160 of Lecture Notes in Computer Science, pages 31–45. Springer, 2012.
[Cha75] Gregory J. Chaitin. A theory of program size formally identical to information theory. J. Assoc. Comput. Mach., 22:329–340, 1975.
[CHKW01] Cristian S. Calude, Peter H. Hertling, Bakhadyr Khoussainov, and Yongge Wang. Recursively enumerable reals and Chaitin Ω numbers. Theoret. Comput. Sci., 255(1-2):125–149, 2001.
[CNSS11] Cristian S. Calude, André Nies, Ludwig Staiger, and Frank Stephan. Universal recursively enumerable sets of strings. Theoret. Comput. Sci., 412(22):2253–2261, 2011.
[CS09] Cristian S. Calude and Ludwig Staiger. On universal computably enumerable prefix codes. Math. Structures Comput. Sci., 19(1):45–57, 2009.
[DH10] Rod Downey and Denis Hirschfeldt. Algorithmic Randomness and Complexity. Theory and Applications of Computability. Springer, 2010.
[DHN02] Rod Downey, Denis R. Hirschfeldt, and André Nies. Randomness, computability, and density. SIAM J. Comput., 31(4):1169–1183, 2002.
[HR09] Mathieu Hoyrup and Cristóbal Rojas. Computability of probability measures and Martin-Löf randomness over metric spaces. Information and Computation, 207(7):2207–2222, 2009.
[KS01] Antonín Kučera and Ted Slaman. Randomness and recursive enumerability. SIAM Journal on Computing, 31:199–211, 2001.
[Ng06] Keng Meng Ng. Some properties of d.c.e. reals and their degrees. Master's thesis, National University of Singapore, 2006.
[Nie09] André Nies. Computability and Randomness. Oxford Logic Guides. Oxford University Press, 2009.
[Rai05] Alexander Raichev. Relative randomness and real closed fields. J. Symbolic Logic, 70(1):319–330, 2005.
[Sol75] Robert M. Solovay. Handwritten manuscript related to Chaitin's work. IBM Thomas J. Watson Research Center, Yorktown Heights, NY, 215 pages, 1975.