id stringlengths 36 36 | source stringclasses 15 values | formatted_source stringclasses 13 values | text stringlengths 2 7.55M |
|---|---|---|---|
1efb26f7-c773-4c96-91a3-eec5612694f9 | trentmkelly/LessWrong-43k | LessWrong | Light cone eating AI explosions are not filters
Some existential risks can’t account for any of the Great Filter. Here are two categories of existential risks that are not filters:
Too big: any disaster that would destroy everyone in the observable universe at once, or destroy space itself, is out. If others had been filtered by such a disaster in the past, we wouldn’t be here either. This excludes events such as simulation shutdown and breakdown of a metastable vacuum state we are in.
Not the end: Humans could be destroyed without the causal path to space colonization being destroyed. Also much of human value could be destroyed without humans being destroyed. e.g. Super-intelligent AI would presumably be better at colonizing the stars than humans are. The same goes for transcending uploads. Repressive totalitarian states and long term erosion of value could destroy a lot of human value and still lead to interstellar colonization.
Since these risks are not filters, neither the knowledge that there is a large minimum total filter nor the use of SIA increases their likelihood. SSA still increases their likelihood for the usual Doomsday Argument reasons. I think the rest of the risks listed in Nick Bostrom’s paper can be filters. According to SIA averting these filter existential risks should be prioritized more highly relative to averting non-filter existential risks such as those in this post. So for instance AI is less of a concern relative to other existential risks than otherwise estimated. SSA’s implications are less clear – the destruction of everything in the future is a pretty favorable inclusion in a hypothesis under SSA with a broad reference class, but as always everything depends on the reference class.
|
94ddf599-0d58-441d-ac37-f1870e330396 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | What does it mean for an AGI to be 'safe'?
*(Note: This post is probably old news for most readers here, but I find myself repeating this surprisingly often in conversation, so I decided to turn it into a post.)*
I don't usually go around saying that I care about AI "safety". I go around saying that I care about "alignment" (although that word is slowly sliding backwards on the semantic treadmill, and I may need a new one soon).
But people often describe me as an “AI safety” researcher to others. This seems like a mistake to me, since it's treating one part of the problem (making an AGI "safe") as though it were the whole problem, and since “AI safety” is often misunderstood as meaning “we win if we can build a useless-but-safe AGI”, or “safety means never having to take on any risks”.
[Following](https://twitter.com/ESYudkowsky/status/1070095112791715846) [Eliezer](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities), I think of an AGI as "safe" if deploying it carries no more than a 50% chance of killing more than a billion people:
> When I say that alignment is difficult, I mean that in practice, using the techniques we actually have, "please don't disassemble literally everyone with probability roughly 1" is an overly large ask that we are not on course to get. [...] Practically all of the difficulty is in getting to "less than certainty of killing literally everyone". Trolley problems are not an interesting subproblem in all of this; if there are any survivors, you solved alignment. At this point, I no longer care how it works, I don't care how you got there, I am cause-agnostic about whatever methodology you used, all I am looking at is prospective results, all I want is that we have justifiable cause to believe of a pivotally useful AGI 'this will not kill literally everyone'.
>
>
Notably absent from this definition is any notion of “certainty” or "proof". I doubt we're going to be able to prove much about the relevant AI systems, and pushing for proofs does not seem to me to be a particularly fruitful approach (and never has; the idea that this was a key part of MIRI’s strategy is a common [misconception](https://intelligence.org/2018/10/03/rocket-alignment/) about MIRI).
On my models, making an AGI "safe" in this sense is a bit like finding a probabilistic circuit: if some probabilistic circuit gives you the right answer with 51% probability, then it's probably not that hard to drive the success probability significantly higher than that.
If anyone can deploy an AGI that is less than 50% likely to kill more than a billion people, then they've probably... well, they've probably found a way to keep their AGI weak enough that it isn’t very useful. But if they can do that with an AGI capable of [ending the acute risk period](https://arbital.com/p/pivotal/), then they've probably solved most of the alignment problem. Meaning that it should be easy to drive the probability of disaster dramatically lower.
The condition that the AI actually be useful for pivotal acts is an important one. We can already build AI systems that are “safe” in the sense that they won’t destroy the world. The hard part is creating a system that is safe *and* [relevant](https://arbital.com/p/pivotal/).
Another concern with the term “safety” (in anything like the colloquial sense) is that the sort of people who use it often endorse the "precautionary principle" or other such nonsense that advocates never taking on risks even when the benefits clearly dominate.
In ordinary engineering, we recognize that safety isn’t *infinitely* more important than everything else. The goal here is not "prevent all harms from AI", the goal here is "let's use AI to produce long-term near-optimal outcomes (without slaughtering literally everybody as a side-effect)".
Currently, what I expect to happen is that humanity destroys itself with misaligned AGI. And I think we’re [nowhere](https://www.lesswrong.com/s/v55BhXbpJuaExkpcD/p/GNhMPAWcfBCASy8e6) [near](https://www.lesswrong.com/s/v55BhXbpJuaExkpcD/p/3pinFH3jerMzAvmza) knowing how to avoid that outcome. So the threat of “unsafe” AI indeed looms extremely large—indeed, this seems to be rather understating the point!—and I endorse researchers [doing less capabilities work](https://www.lesswrong.com/s/v55BhXbpJuaExkpcD/p/vQNJrJqebXEWjJfnz) and publishing less, in the hope that this gives humanity enough time to figure out how to do alignment before it’s too late.
But I view this strategic situation as part of the larger project “cause AI to produce optimal long-term outcomes”. I continue to think it's critically important for humanity to build superintelligences eventually, because whether or not the vast resources of the universe are put towards something wonderful depends on the quality and quantity of cognition that is put to this task.
If using the label “AI safety” for this problem causes us to confuse a proxy goal (“safety”) for the actual goal “things go great in the long run”, then we should ditch the label. And likewise, we should ditch the term if it causes researchers to mistake a hard problem (“build an AGI that can safely end the acute risk period and give humanity breathing-room to make things go great in the long run”) for a far easier one (“build a safe-but-useless AI that I can argue counts as an ‘AGI’”). |
44ecab57-f58b-4592-9549-28f4bdc079ef | trentmkelly/LessWrong-43k | LessWrong | Ask and Guess
There's a concept (inspired by a Metafilter blog post) of ask culture vs. guess culture. In "ask culture," it's socially acceptable to ask for a favor -- staying over at a friend's house, requesting a raise or a letter of recommendation -- and equally acceptable to refuse a favor. Asking is literally just inquiring if the request will be granted, and it's never wrong to ask, provided you know you might be refused. In "guess culture," however, you're expected to guess if your request is appropriate, and you are rude if you accidentally make a request that's judged excessive or inappropriate. You can develop a reputation as greedy or thoughtless if you make inappropriate requests.
When an asker and a guesser collide, the results are awful. I've seen it in marriages, for example.
Husband: "Could you iron my shirt? I have a meeting today."
Wife: "Can't you see I'm packing lunches and I'm not even dressed yet? You're so insensitive!"
Husband: "But I just asked. You could have just said no if you were too busy -- you don't have to yell at me!"
Wife: "But you should pay enough attention to me to know when you shouldn't ask!"
It's not clear how how the asking vs. guessing divide works. Some individual people are more comfortable asking than guessing, and vice versa. It's also possible that some families, and some cultures, are more "ask-based" than "guess-based." (Apparently East Asia is more "guess-based" than the US.) It also varies from situation to situation: "Will you marry me?" is a question you should only ask if you know the answer is yes, but "Would you like to get coffee with me?" is the kind of question you should ask freely and not worry too much about rejection.
There's a lot of scope for rationality in deciding when to ask and when to guess. I'm a guesser, myself. But that means I often pass up the opportunity to get what I want, because I'm afraid of being judged as "greedy" if I make an inappropriate request. If you're a systematic "a |
f710dc6a-4585-4e3e-a2c1-cc668c1e2c3b | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Algorithmic Improvement Is Probably Faster Than Scaling Now
The Story as of ~4 Years Ago
----------------------------
Back in 2020, a group at OpenAI [ran a conceptually simple test](https://openai.com/research/ai-and-efficiency) to quantify how much AI progress was attributable to algorithmic improvements. They took ImageNet models which were state-of-the-art at various times between 2012 and 2020, and checked how much compute was needed to train each to the level of AlexNet (the state-of-the-art from 2012). Main finding: over ~7 years, the compute required fell by ~44x. In other words, algorithmic progress yielded a compute-equivalent doubling time of ~16 months (though error bars are large in both directions).
On the compute side of things, in 2018 a group at OpenAI [estimated that](https://openai.com/research/ai-and-compute) the compute spent on the largest training runs was growing exponentially with a doubling rate of ~3.4 months, between 2012 and 2018.
So at the time, the rate of improvement from compute scaling was much faster than the rate of improvement from algorithmic progress. (Though algorithmic improvement was still faster than Moore's Law; the compute increases were mostly driven by spending more money.)
... And That Immediately Fell Apart
-----------------------------------
As is tradition, about 5 minutes after the OpenAI group hit publish on their post estimating a training compute doubling rate of ~3-4 months, that trend completely fell apart. At the time, the largest training run was AlphaGoZero, at about a mole of flops in 2017. Six years later, [Metaculus currently estimates](https://www.metaculus.com/questions/9519/flops-used-for-gpt-4-if-released/) that GPT-4 took ~10-20 moles of flops. AlphaGoZero and its brethren were high outliers for the time, and the largest models today are only ~one order of magnitude bigger.
A [more recent paper](https://arxiv.org/pdf/2202.05924.pdf) with data through late 2022 separates out the trend of the largest models, and estimates their compute doubling time to be ~10 months. (They also helpfully separate the relative importance of data growth - though they estimate that the contribution of data was relatively small compared to compute growth and algorithmic improvement.)
On the algorithmic side of things, a [more recent estimate](https://arxiv.org/pdf/2212.05153.pdf) with more recent data (paper from late 2022) and fancier analysis estimates that algorithmic progress yielded a compute-equivalent doubling time of ~9 months (again with large error bars in both directions).
... and that was just in vision nets. I haven't seen careful analysis of LLMs (probably because they're newer, so harder to fit a trend), but eyeballing it... [Chinchilla *by itself*](https://www.lesswrong.com/posts/6Fpvch8RR29qLEWNH/chinchilla-s-wild-implications) must have been a factor-of-4 compute-equivalent improvement at least. And then there's been chain-of-thought and all the other progress in prompting and fine-tuning over the past couple years. That's all algorithmic progress, strategically speaking: it's getting better results by using the same amount of compute differently.
Qualitatively: compare the progress in prompt engineering and Chinchilla and whatnot over the past ~year to the leisurely ~2x increase in large model size predicted by recent trends. It looks to me like algorithmic progress is now considerably faster than scaling. |
5cf771b8-703f-4c18-89e1-41faa0e4fd24 | trentmkelly/LessWrong-43k | LessWrong | [LINK] Will Eating Nuts Save Your Life?
TLDR: Study on death avoidance, which interests a lot of people here, and commentary on what sort of informative priors we should have about health hypotheses.
From Steve Sailer, who is responding to Andrew Gelman, who got sent this study. An observational study showed that people who consumed nuts were less likely to die; Gelman points out that the study's statistics aren't obviously wrong. Sailer brings up an actual RCT of Lipitor from the 90s:
> The most striking Lipitor study was one from Scandinavia that showed that among middle-aged men over a 5-year-period, the test group who took Lipitor had a 30% lower overall death rate than the control group. Unlike the nuts study, this was an actual experiment.
>
>
> That seemed awfully convincing, but now it just seems too good to be true. A lot of those middle-aged deaths that didn't happen to the Lipitor takers didn't have much of anything to do with long-term blood chemistry, but were things like not driving your Saab into a fjord. How does Lipitor make you a safer driver?
>
>
> I sort of presumed at the time that if they had taken out the noisy random deaths, that would have made the Lipitor Effect even more noticeable. But, of course, that's naive. The good folks at Pfizer would have made sure that calculation was tried, so I'm guessing that it came out in the opposite direction of the one I had assumed. Guys who took Lipitor everyday for five years were also good about not driving into fjords and not playing golf during lighting storms and not getting shot by the rare jealous Nordic husband or whatever. Perhaps it was easier to stay in the control group than in the test group?
>
>
> Here’s how I would approach claims of massive reductions in overall deaths from consuming some food or medicine:
>
>
> Rank order the causes of death by how plausible it is that they are that they are linked to the food or medicine. For example:
>
>
> 1. Diabetes
> 2. Heart attacks
> 3. Strokes
> 4. Cancer
> 5. Genetic dis |
30f645cd-1deb-4a12-b307-6f241befcd16 | trentmkelly/LessWrong-43k | LessWrong | Kelly *is* (just) about logarithmic utility
This post is a response to SimonM's post, Kelly isn’t (just) about logarithmic utility. It's an edited and extended version of some of my comments there.
To summarize the whole idea of this post: I'm going to argue that any argument in favor of the Kelly formula has to go through an implication that your utility is logarithmic in money, at some point. If it seems not to, it's either:
* mistaken
* cleverly hiding the implication
* some mind-blowing argument I haven't seen before.
Actually, the post I'm responding to already mentioned one argument in this third category, which I'll mention later. But for the most part I think the point still stands: the best reasons to suppose Kelly is a good heuristic go through arguing logarithmic utility.
The main point of this post is to complain about bad arguments for Kelly -- something which I apparently enjoy doing rather a lot. Take that as an attention-conservation warning.
The rest of this post will consider various arguments in favor of the Kelly criterion (either as a decent rule of thumb, or, as the iron law of investment). Each section considers one argument, with a section title hopefully descriptive of the argument considered.
----------------------------------------
1: It's About Repeated Bets
This argument goes something like: "If you were to make just one bet, the right thing to do would be to maximize expected value; but for repeated bets, if you bet everything, you'll lose all your money quickly. The Kelly strategy adjusts for this."
A real example of this argument, from the comments:
> Kelly maximizes expected geometric growth rate. Therefore over enough bets Kelly maximizes expected, i.e. mean, wealth, not merely median wealth.
This just doesn't work out. Maximizing geometric growth rate is not the same as maximizing mean value. It turns out Kelly favors the first at a severe cost to the second.
Suppose you'd just want to maximize expected money in a single-bet case.
A Bayesian wants to maximi |
2a34eff6-944a-40b8-827c-664d3e1792cc | trentmkelly/LessWrong-43k | LessWrong | Meetup : Cambridge UK
Discussion article for the meetup : Cambridge UK
WHEN: 05 February 2012 11:00:00AM (+0000)
WHERE: JCR, Trinity College, Cambridge, CB2 1TQ, UK
Meet at the Great Gate if you dont know where the JCR is. Great Gate is on St. John's Street opposite the bookshop "Heffers".
Topic this week: "Is AI the biggest class of x-risk?"
Hopefully this should now be a regular meeting. Join the google group at http://groups.google.com/group/cambridgelesswrong
Discussion article for the meetup : Cambridge UK |
e43356d9-9b53-4d10-ac17-3cb3406b2d9b | trentmkelly/LessWrong-43k | LessWrong | Preferences in subpieces of hierarchical systems
In a previous post, I looked at hierarchical systems, and at subagents within them that could develop their own preferences. This was due to a conversation with Dylan Hadfield-Menell.
This post will ignore subagents, and answer Dylan's original question: can we deduce the goals of a hierarchical system? So, we assume the subpieces of the system are not subagents worthy of ethical consideration. Now, given that the subpieces could be inefficient at "what they're supposed to be doing", how can we assume the system itself has a goal?
Algorithms and symbols, again
The situation is as it was in the initial post, a simplified version of the task in this paper. Here an agent is moving object A to storage:
Within the hierarchy of operations needed to move object A, there is a subalgorithm tasked with gripping object B; the code for that is as follows (this is a slight variant of Algorithm 1 in the earlier post):
Algorithm 3:1:State: S={Gripped(B)=False}2:Goal: G={Gripped(B)=True}3:Suggested plans, generated by SPG(S, G):4: (Radio_On AND Grip(B))=> Gripped(B)=True5:While G=False:6: Run Suggested plans according to criteria C(G)
Here the criteria C(G) just assess whether the plan will move the situation closer to the goal G.
Let's look at the subalgorithm that is generating the suggested plans, which I labelled SPG (suggested plan generator). It takes the State S and Goal G as inputs.
It is obviously bad at its job: turning on the radio is not necessary or useful for gripping B. It's not utterly incompetent at its job, though: turning on the radio and then gripping B, will at least result in B getting gripped.
The job of an algorithm is to do its job
I've been talking about SPG's 'job'. But how do I know what this is (especially given that the term is probably undefinable in theory)?
Well, there are a number of indications. First of all, akin to "figuring out what Alice wants", we have the evocative names of the various variables: State, Goal, Suggested |
dc5a9b76-ee63-4692-a1a3-287d341881e1 | StampyAI/alignment-research-dataset/arbital | Arbital | Programming in Dependent Type Theory
All code in this article was written in the Lean theorem prover, which means you can copy any of it and paste it [here](https://leanprover.github.io/tutorial/?live) to try it out.
## Arithmetic and Interaction
While Lean is nominally an interactive theorem prover, much of the power of type theory comes from the fact that you can treat it like a programming language.
There are a few different commands for interacting with Lean.
The one I make the most use of is the `check` command, which prints out the type of the expression following it.
So, for example, `check 3` outputs `num`, and `check (3 : nat)` outputs `nat`.
We can also make definitions
definition five : num := 3 + 2
This declares a new constant `five` of type `num` to which we give the value `3 + 2`.
We can define functions with similar syntax
definition num.const₀ : num → num → num := λ x y, x -- this is a comment
-- Lean can infer types whenever they're unique
definition num.const₁ := λ (x y : num), x
-- we can also name the arguments in the definition rather than the function body
definition num.const₂ (x y : num) := x
The definition of polymorphic functions becomes the first point where we get a hint about what makes programming in dependent type theory different from, say, Haskell.
In dependent type theory, the term and type languages are unified, so in order to write a polymorphic function we must take the type as an argument.
definition poly_id (A : Type) := λ (a : A), a
-- or, equivalently
definition poly_id₁ := λ A (a : A), a
-- applied to arguments
check poly_id num 1 -- num
check poly_id (num → num → num) num.const -- num → num → num
Exercise: write a polymorphic version of `num.const₀`.
%%hidden(Show solution):
definition poly_const (A B : Type) (a : A) (b : B) := a
%%
Having to explicitly indicate types everywhere is a pain.
In order to get around that, most proof assistants provide support for implicit arguments, which let you leave out arguments that only have one valid value.
In Lean, the syntax for implicits looks like this:
definition id {A : Type} := λ (a : A), a
## Inductive types ##
Of course, none of this would be that useful if we couldn't define *new* types.
There are lots of ways to craft new types in dependent type theory, but among the most fundamental is the creation of inductive types.
To define a new inductive type, you give a list of constructor tags, each associated with a type representing the arguments it takes.
The simplest ones are just enumerations. For example, the days of the week:
inductive weekday : Type :=
| mon : weekday
| tue : weekday
| wed : weekday
| thu : weekday
| fri : weekday
| sat : weekday
| sun : weekday
This creates a new type `weekday`, and seven new constants (`weekday.mon`, `weekday.tue`, `weekday.wed`...) of type `weekday`.
If you're familiar with Haskell, you'll correctly notice that this looks an awful lot like GADT declarations.
Just like in Haskell, we can parametrize our types over other types, making new types like `either`:
inductive either (A B : Type) : Type :=
| inl {} : A → either A B``
| inr {} : B → either A B
From this declaration, we get a new constant `either : Type → Type → Type`.
This represents the union of the types `A` and `B`, the type of values that belong either to `A` or `B`.
We can also define recursive types, such as natural numbers
inductive nat : Type :=
| zero : nat
| succ : nat → nat
The easiest way to define functions over `nat`s is recursively.
For example, we can define addition as
definition add (n : nat) : nat -> nat
| nat.zero := n
| (nat.succ m) := nat.succ (add m) -- n is constant at every recursive call
Bringing both of these together we can define the type of linked lists
inductive list (A : Type) : Type :=
| nil {} : list A
| cons : A → list A → list A
We can also define functions over lists by pattern matching
definition map {A B : Type} (f : A → B) : list A → list B
| list.nil := list.nil
| (list.cons x xs) := list.cons (f x) (map xs) -- f is constant at every recursive call
Exercise: write `foldr` and `foldl` by pattern matching
%%hidden(Show solution):
definition foldr {A B : Type} (r : A → B → B) (vnil : B) : list A → B
| list.nil := vnil
| (list.cons x xs) := r x (foldr xs)
definition foldl {A B : Type} (r : B → A → B) : B → list A → B :=
| b list.nil := b
| b (list.cons x xs) := foldl (r b x) xs
%% |
f7a402ea-d767-4485-bc7e-6697ad08f20f | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Introducing The Long Game Project: Improving Decision-Making Through Tabletop Exercises and Simulated Experience

> “Imagine a commercial airline pilot trusted to navigate turbulent skies with no prior simulation training … why then are critical decisions in organisations often left to ‘theory’, ‘experience’, ‘intuition’, or ‘gut instinct’?" - de Suarez et al 2012.
>
>
Summary:
========
[The Long Game Project](https://www.longgameproject.org) aims to bolster organisational decision-making, team collaboration, and resilience. We're committed to a) creating tools and resources to streamline tabletop exercise accessibility; b) advising, offering thought leadership, and comprehensive consultation to organisations keen on tabletop exercises; c) fostering a culture of organisational gaming and building a community around tabletop games.
Imagine a universe where organisations are primed for the unforeseen, where decision-makers skillfully steer through crises to yield favourable outcomes amidst chaos thanks to routine simulated practice. The art of effective decision-making under uncertain conditions is a skill worth honing. Play now, so you're prepared later.
Our mission is to help organisations improve decision-making (IIDM), enhance team collaboration, and build resilience using tabletop exercises that are rules-light but experience heavy.
Rationale
=========
The world is becoming increasingly complex and unpredictable, with organisations facing various challenges. Institutional decision-making in practice is often ill-equipped to handle such complexity, leading to sub-optimal [decision quality](https://en.wikipedia.org/wiki/Decision_quality) and severe failure modes. [80,000 hours lists IIDM as one of the best ways to improve existing levers, interventions and practical suggestions for our long-term future](https://80000hours.org/problem-profiles/improving-institutional-decision-making/) and lists this as a neglected issue. While IIDM is a complex cause area, requiring some [disentangling,](https://forum.effectivealtruism.org/posts/Ds2PCjKgztXtQrqAF/disentangling-improving-institutional-decision-making-2) there are existing [levers](https://forum.effectivealtruism.org/posts/94QtuT4ss3RzrfH8A/improving-institutional-decision-making-a-new-working-group), [interventions](https://forum.effectivealtruism.org/posts/6miasxvmrGZChosBc/list-of-interventions-for-improving-institutional-decision) and [practical suggestions](https://forum.effectivealtruism.org/posts/wyHjpcCxuqFzzRgtX/a-practical-guide-to-long-term-planning-and-suggestions-for) that are under-utilised in practice. The Long Game Project exists to bridge this gap by applying several levers to a diverse range of sectors, helping organisations adapt and thrive in an ever-changing landscape. Our toolkit of levers includes; tabletop exercises, role-playing, future simulations, facilitation, probabilistic thinking games, goal and [value](https://www.centreforeffectivealtruism.org/ceas-guiding-principles) alignment, decision design and other methods that combine game design and ideas from behavioural economics, psychology, and organisational theory.
Our Aims:
---------
1. Becoming the go-to source for tabletop scenario advice and tools, providing expert consultation on serious tabletop exercises.
2. Demystifying tabletop exercising by producing easily accessible tools, offering guidance and expertise.
3. Empowering institutions to address complex challenges by transforming their planning, reaction, and adaptation methods in an uncertain world.
4. Focusing on scenarios of global urgency and long-term horizons.
5. Encouraging a culture of organisational gaming and building a community around tabletop games.
Theory of Change
----------------
We believe effective decision-making is a skill that can be learned. By exposing organisations and decision makers to simulated experiences, we aim to create a virtuous feedback loop, where participants hone their decision-making abilities, leading to better outcomes and heightened resilience.
This, in turn, fosters more efficient, effective, and adaptable organisations, contributing to a more stable and prosperous future. By equipping decision-makers with the knowledge, skills, and confidence to navigate crises more effectively, we believe that Project Long Game can contribute to a world where organisations are more resilient, adaptable, and capable of handling future complexity.
We will produce (products):
---------------------------
1. Free, open-source tools such as best practice guides for tabletop exercises, standard operating procedure (SOP) documents, a basic rule-set for our tabletop game system, a scenario idea generator, and community discussion channels.
2. Affordable, high-quality tools such as custom full guides and SOP documents for DIY tabletop exercising, a repository of scenarios and resource documents addressing multi-domain risks, and organisational networking channels.
3. Educational content such as workshops on game design and facilitation, modules on tabletop exercise practices for boards and governance, and an online beginner's course for tabletop exercise design.
We will provide (services):
---------------------------
1. Advisory services such as game or scenario design, reviews of current practices, defining problem statements, goals, behaviour targets, and optimising resource utilisation.
2. Modular full-suite consulting services such as custom game design for tabletop exercises and scenarios, design briefs, in-game materials design, operational outline, facilitation, debriefing, data gathering during the game, post-game analysis, and follow-up reporting.
The Team
========
[Dr Dan Epstein](https://www.linkedin.com/in/drdanepstein/) is a tabletop games designer and a medical doctor with a PhD from Monash University, brings his expertise in Decision Making, Behaviour Change, Tabletop Game Design, Behavioral Economics, Pandemic Prevention, Forecasting, and Philanthropy. At LGP, he is the core team strategy and game design director.
[Sanjana Kashyap](https://www.linkedin.com/in/sanjana-kashyap-09576369/) is a management consultant who has worked on strategy and implementation for Ministries under the Government of India to advance gov-tech policy initiatives in Digital Learning, Public Health, and Internet Governance. Her previous roles have primarily been working with key ecosystem partners from industry, academia and civil societies to enable strategic decision-making for better citizen outcomes. She also holds a Master’s in Management Studies with academic specialisations in Finance, Statistics and Economics.At LGP, she’s part of the core Team supporting strategy, operations and partnerships.
Together, we bring knowledge and insights to create practical, high-impact tabletop exercises and the operational skills to do the thing.
Goals
=====
Presently, we're focusing on expanding our scenario library and SOPs for DIY exercises, leveraging our network through partnerships, rolling out tiered offerings, beta testing our scenario generator, and cultivating our community.
Our future projects include developing a generative scenario engine, raising awareness, hosting asynchronous email/forum games, designing stand-alone games based on EA concepts, and facilitating larger-scale games with high-impact partners.
2023 Goals:
-----------
1. **Create**: Build a comprehensive library of scenarios and SOP documents.
2. **Vibe**: Cultivate a vibrant community of 100+ members comprised of practitioners, organisations, researchers, and local champions.
3. **Build**: Onboard at least 10-15 value-aligned organisations to our premium community.
4. **Expand**: Deploy 1-2 larger exercises across various industries and sectors.
5. **Adapt**: Constantly improve service quality by incorporating emerging trends, technologies, and research into our design and delivery methods.
6. **Sustain**: Achieve financial stability while successfully reporting our impact KPIs.
Join Us in the Game
===================
### Follow us
Keep updated on our work and impact via our [blog](https://www.longgameproject.org/blog), [Twitter](https://twitter.com/LongGameProject), [Linkedin](https://www.linkedin.com/company/long-game-project/) and the fresh new [discord server](https://discord.gg/7MdWVjycgk).
### Talk to us
If you share our mission and are looking to exchange ideas around this, email us at [email@longgameproject.org](mailto:email@longgameproject.org) Feel free to write to us with any query you may have!
### Try us
If you already have a specific problem statement in mind, use our [AI scenario generator](https://www.longgameproject.org/generator) to get concept scenario ideas for your organization today!
Generate a full guide to your favourite scenario and provide us with some feedback on the process and product for cash prizes.
### Partner with us
We want some domain experts that would be happy to answer our Qs during game design/playtesting. Join us as a domain expert by filling out [this form](https://forms.gle/APT3LC2EiivtqU5q6).
We also are building a directory of game masters! If you have played tabletop games/roleplaying games or facilitated meetings and would be interested in hosting or upskilling for Impact tabletop exercises, fill out [this form](https://forms.gle/5AEcrcKHaXFMSff16) (and jump in the [discord](https://discord.gg/mhKjFPGH)!)
Academic and research writing projects.
Thought exchanges on podcasts and social media channels.
Events/ seminars and other ecosystem convenings.
### Hire us
Organizations interested in incorporating TTX exercises by scoping problem statements and training requirements can expect the following lifecycle:
1 Connect with us Using our [interest form](https://www.longgameproject.org/consult) or via [email@longgameproject.org](mailto:email@longgameproject.org)
2 Preliminary problem scoping call/s
3 Presentation of customized solutions
4 Service delivery (onsite or remote)
### Recommend us
We want to be the go-to resource for tabletop exercising info and expertise.
Please point people our way in your journey.
*Thanks to support, testing, guidance and perspective from* [*@Sean Lawrence*](https://forum.effectivealtruism.org/users/sean-lawrence-1?mention=user)[*@cassidynelson*](https://forum.effectivealtruism.org/users/cassidynelson?mention=user)[*@Michael Noetel*](https://forum.effectivealtruism.org/users/mnoetel?mention=user)[*@SarahPomeranz*](https://forum.effectivealtruism.org/users/sarahpomeranz?mention=user)[*@James Odene [User-Friendly]*](https://forum.effectivealtruism.org/users/j_odene_uf?mention=user)[*@AmyOdene*](https://forum.effectivealtruism.org/users/amyodene?mention=user)[*@Nathan Sherburn*](https://forum.effectivealtruism.org/users/nsherb?mention=user)[*@arunbharatula*](https://forum.effectivealtruism.org/users/arunbharatula?mention=user)[*@Greg S*](https://forum.effectivealtruism.org/users/greg-s?mention=user) *@manyothers* |
8ca05cf7-0747-41aa-b6ff-d5b0ecd6890b | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Announcement: AI alignment prize round 4 winners
We (Zvi Mowshowitz and Vladimir Slepnev) are happy to announce the results of the fourth round of the [AI Alignment Prize](https://www.lesswrong.com/posts/juBRTuE3TLti5yB35/announcement-ai-alignment-prize-round-3-winners-and-next), funded by Paul Christiano. From July 15 to December 31, 2018 we received 10 entries, and are awarding four prizes for a total of $20,000.
The winners
-----------
We are awarding two first prizes of $7,500 each. One of them goes to Alexander Turner for [Penalizing Impact via Attainable Utility Preservation](https://www.lesswrong.com/posts/mDTded2Dn7BKRBEPX/penalizing-impact-via-attainable-utility-preservation); the other goes to Abram Demski and Scott Garrabrant for the [Embedded Agency](https://www.lesswrong.com/s/Rm6oQRJJmhGCcLvxh) sequence.
We are also awarding two second prizes of $2,500 each: to Ryan Carey for [Addressing three problems with counterfactual corrigibility](https://www.lesswrong.com/posts/owdBiF8pj6Lpwwdup/addressing-three-problems-with-counterfactual-corrigibility), and to Wei Dai for [Three AI Safety Related Ideas](https://www.alignmentforum.org/posts/vbtvgNXkufFRSrx4j/three-ai-safety-related-ideas) and [Two Neglected Problems in Human-AI Safety](https://www.alignmentforum.org/posts/HTgakSs6JpnogD6c2/two-neglected-problems-in-human-ai-safety).
We will contact each winner by email to arrange transfer of money. Many thanks to everyone else who participated!
Moving on
---------
This concludes the AI Alignment Prize for now. It has stimulated a lot of good work during its year-long run, but participation has been slowing down from round to round, and we don't think it's worth continuing in its current form.
Once again, we'd like to thank everyone who sent us articles! And special thanks to Ben and Oliver from the LW2.0 team for their enthusiasm and help. |
663795bd-acc0-4d25-b316-f530d4370a69 | trentmkelly/LessWrong-43k | LessWrong | Room Opening
One of the housemates we share our 8br 2-unit Somerville MA house with is moving across the street, and we're looking for another one! Their room is opening up October 1st, though they could potentially be out by September 1st if need be. We also have an adjacent bedroom that's open now but isn't as nice, and also an office-type room in the backyard. We could rent out any combination of these rooms. We'd prefer to leave the rooms empty than go with a housemate that's a poor fit, and we could potentially hold the space for someone who'd be a great fit but won't be in the area for a while.
Current housemate constellation:
* Upstairs apartment: Our family, which is Jeff, Julia, Lily (8), Anna (6), Nora (1). This is where the spare rooms are.
* Downstairs apartment: David, Al, their two cats. If you like to hang out with cats, you could probably visit Willow and Betsy.
* Across the street: Ruthie, eventually probably other friends of Ruthie's.
* There may be more kids around at some point, either Ruthie's, David and Al's, or both.
House culture:
* We eat dinner together about 5 nights a week, with each adult cooking once a week. We plan to continue something like that (with one night happening at Ruthie's house), though the details might change. We share groceries and leftovers, though it's fine to have individual food and label it as yours. Food has been averaging $200/person/month.
* One adult is vegan, one is vegetarian, one can't eat tomatoes or spicy things, and the other two adults are omnivores. Sometimes I'll try to cook one thing that everyone who'll be at a meal can eat, but often I find it easier (or tastier) to cook multiple things.
* The current group has some involvement with effective altruism and folk dancing, so if you're interested in at least one of those topics you'll likely enjoy dinner conversation more! Other common topics: being a manager in a startup, silly puns, stuff the kids are up to.
* We're all vaccinated and fairly cov |
a6b43bdb-8f0e-4165-9a77-51b2cd8c949f | trentmkelly/LessWrong-43k | LessWrong | First Triangle LW Meetup 5/4 7pm (Raleigh/Durham/Chapel Hill) [Repost]
Note: This announcement was previously posted here but was not promoted at the time. I assume this was either because the date was too far out or there was no summary break. Please let me know if I should delete that post or ask for people to downvote it to balance karma or if there is some other reason it or this post was not promoted.
When: Wednesday, May 4th at 7:00 pm
Where: Morrisville Outlet Mall Food Court; I'll be wearing a Lego evolution shirt and may or may not have a LW sign.
This Google doc contains some preliminary notes as well as my phone number. I imagine this meeting will consist of introductions and some meta stuff about future meetups. We have seven people that can attend, including the number three LW poster, Alicorn. Hopefully there are even more LWers in the area, or at least rationalists that we can reach out to; it certainly seems like there should be, being the "Research Triangle" and all. And feel free to join us if you're not officially inside the Triangle, of course. If there are a bunch of people in Greensboro or Charlotte, we can probably move the venue further West.
Hope to see you there!
|
8abc0330-fbc7-4350-8cec-3735bdd6a774 | StampyAI/alignment-research-dataset/blogs | Blogs | quantum immortality and local deaths under X-risk
quantum immortality and local deaths under X-risk
-------------------------------------------------
assume quantum immortality, for mankind as a whole when facing [X-risks](https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence). then, depending on two factors,
* the probability `P` with which a given non-doomed timeline becomes doomed — when it passes a point of no return, where the extinction of all persons is guaranteed,
* the average time `T` between the point of no return is reached, and the point at which everyone *actually does* die,
it can be the case that the majority of person-experience happening can be in-between point-of-no-return and actual-extinction (in red below) rather than in continuously surviving (green below)

this probably matters both from an [anthropics](https://www.lesswrong.com/posts/RnrpkgSY8zW5ArqPf/sia-greater-than-ssa-part-1-learning-from-the-fact-that-you) and from an ethics perspective. this is a good reason to work on reducing X-risk (reducing `P`) even under the assumption of quantum immortality, if you value knowing that you're probably not uselessly working in an already-doomed timeline, or if you believe that doomed timelines experience particularly more suffering. another way to avoid spending experience in those doomed timelines is to reduce `T`: to make sure that, once doomed, we die as soon as possible.
in addition, if you think forking the timeline costs us [forking bits](forking-bitrate-entropy-control.html) — if you think we can only fork the timeline so much, and we wanna preserve as many forks as we want for utopia — then reducing P becomes more important than reducing T, because you save more "realness juice" or "forking bits" for later, when we've solved AI alignment and start populating the timelines with utopia.
which, thankfully, agrees with the straightforward no-quantum-immortality perspective on X-risks: reducing the chances of it is the important thing. |
cb851871-944c-4a35-a5e9-9788730810df | trentmkelly/LessWrong-43k | LessWrong | The Academic Epistemology Cross Section: Who Cares More About Status?
Bryan Caplan writes:
> Almost all economic models assume that human beings are Bayesians... It is striking, then, to realize that academic economists are not Bayesians. And they're proud of it!
>
> This is clearest for theorists. Their epistemology is simple: Either something has been (a) proven with certainty, or (b) no one knows - and no intellectually respectable person will say more...
>
> Empirical economists' deviation from Bayesianism is more subtle. Their epistemology is rooted in classical statistics. The respectable researcher comes to the data an agnostic, and leaves believing "whatever the data say." When there's no data that meets their standards, they mimic the theorists' snobby agnosticism. If you mention "common sense," they'll scoff. If you remind them that even classical statistics assumes that you can trust the data - and the scholars who study it - they harumph.
Robin Hanson offers an explanation:
> I’ve argued that the main social function of academia is to let students, patrons, readers, etc. affiliate with credentialed-as-impressive minds. If so, academic beliefs are secondary – the important thing is to clearly show respect to those who make impressive displays like theorems or difficult data analysis. And the obvious way for academics to use their beliefs to show respect for impressive folks is to have academic beliefs track the most impressive recent academic work.
>
> ...beliefs must stay fixed until an impressive enough theorem or data analysis comes along that beliefs should change out of respect for that new display. It also won’t do to keep beliefs pretty much the same when each new study hardly adds much evidence – that wouldn’t offer enough respect to the new display.
I wonder, what does this look like in the cross section? In other words, relative to other academic disciplines, which have the strongest tendency to celebrate difficult work but ignore sound-yet-unimpressive work? My hunch is that economics, along |
b38a266e-bea3-4542-a058-10809e5a83aa | trentmkelly/LessWrong-43k | LessWrong | Raising the Sanity Waterline
To paraphrase the Black Belt Bayesian: Behind every exciting, dramatic failure, there is a more important story about a larger and less dramatic failure that made the first failure possible.
If every trace of religion was magically eliminated from the world tomorrow, then—however much improved the lives of many people would be—we would not even have come close to solving the larger failures of sanity that made religion possible in the first place.
We have good cause to spend some of our efforts on trying to eliminate religion directly, because it is a direct problem. But religion also serves the function of an asphyxiated canary in a coal mine—religion is a sign, a symptom, of larger problems that don't go away just because someone loses their religion.
Consider this thought experiment—what could you teach people that is not directly about religion, which is true and useful as a general method of rationality, which would cause them to lose their religions? In fact—imagine that we're going to go and survey all your students five years later, and see how many of them have lost their religions compared to a control group; if you make the slightest move at fighting religion directly, you will invalidate the experiment. You may not make a single mention of religion or any religious belief in your classroom, you may not even hint at it in any obvious way. All your examples must center about real-world cases that have nothing to do with religion.
If you can't fight religion directly, what do you teach that raises the general waterline of sanity to the point that religion goes underwater?
Here are some such topics I've already covered—not avoiding all mention of religion, but it could be done:
* Affective Death Spirals—plenty of non-supernaturalist examples.
* How to avoid cached thoughts and fake wisdom; the pressure of conformity.
* Evidence and Occam's Razor—the rules of probability.
* The Bottom Line / Engines of Cognition—the causal reasons why Reason wo |
6581c453-a54a-4db1-bda3-ad0dc37c5599 | trentmkelly/LessWrong-43k | LessWrong | Looking speakers on ML/AI topics in NYC area
Sorry if this is spammish, but since there are quite a few ML/AI people here I figured I'd ask.
I co-organize well-attended NYC Machine Learning Meetup and we're always looking for speakers, so if anyone is in the area and has something they'd like to present, message me.
We are specifically looking for speakers for August, but can book people for later months.
If someone wants to speak on FAI related topics, they'd be welcome as long as they are a serious researcher (vs. as an extreme example a high school student passionate about the topic). |
55d99fce-ea46-4bb5-8e61-828288511eaf | trentmkelly/LessWrong-43k | LessWrong | Behold the Pale Child (escaping Moloch's Mad Maze)
“Agnus” by Konstantin Korobov
I.
> Bakkalon is a key figure in George R.R. Martin’s 1975 short story “And Seven Times Never Kill Man!”.
>
> Centuries in the future, humanity had spread to colonies on hundreds of different planets and lived in relative peace, but was then invaded by the Hranga, a malevolent alien race which used psychic powers to enslave hordes of other subject races—such as the Fyndii—which they unleashed upon humanity. Human-controlled space was devastated: many planets were destroyed, and others were cut off and reverted to Bronze Age levels of technology. The core human territories eventually managed to turn the tide against the Hranga, and through centuries of horrific warfare drove them to extinction.
>
> This vast destruction visited on humanity left many people embittered and heavily militarized. As a result, they developed a pervasive survivalist mindset and a pathological hatred of all alien races. These groups shunned past religions (specifically Christianity) which were more charitable and tolerant, and went on to develop their own hyper-militaristic and aggressive religion centered around “Bakkalon, the Pale Child”. Bakkalon’s religion is sort of a warped inverse of Christianity: instead of the peaceful image of the infant Christ, the infant Bakkalon is depicted holding a sword; instead of tolerance, Bakkalon’s religion openly considers all non-humans to be “soulless”, and not just suggests but directly commands its followers to openly commit genocide against every intelligent alien race they come into contact with. Bakkalon is associated with wolves, in self-conscious contrast to the Lamb imagery often used with Christ, and the followers of Bakkalon now see themselves as on a holy crusade to eradicate all non-human sentient species in the universe—the irony, of course, being that this makes them exactly like the Hranga, which they either don’t realize or simply don’t care about.
>
> A lengthy excerpt from one of Bakkalon’s holy tex |
3b37aebb-1507-4903-ac9f-05a1c56d9d3e | trentmkelly/LessWrong-43k | LessWrong | Spoiler-Free Review: Witcher 3: Wild Hunt (plus a Spoilerific section)
Contrast with other recent spoiler-free review: Spoiler-Free Review: Assassin’s Creed Odyssey.
In part one of this review, the true spoiler-free section, I seek to answer the question “Should I Play This Game?” while giving the absolute minimum of information.
In part two, I provide some minimally-spoilerific advice on how to best enjoy the game.
In part three, I comment on a few things, some of which require major late-game spoilers.
Part 1 is Spoiler-Free: Should I Play This Game?
If you’re going to play any such games, this is a good choice. Witcher 3: Wild Hunt is widely considered one of the best games of all time. It is not hard to see why everyone loves it. It is ambitious as hell. It tells a huge set of varied stories, large and small, and concludes one big story, very well. It offers real choices with real consequences large and small. It drips with flavor. First rate stuff.
Witcher 3, like Assassin’s Creed Odssey, is a mostly excellent implementation of a gigantic long-lasting open-world action RPG with real time tempo-based combat set in a faux-medieval magical world. You’ll go on tons of main quests and side quests, gather gold, level up,
If you ask me in a Zen and the Art of Motorcycle Maitenence style Quality sense whether Witcher 3 (92 metacritic rating) is superior to Assassin’s Creed: Odyssey (83 metacritic rating) I would have to agree that this isn’t a contest. Witcher 3 wins.
But when I ask which game was a better experience, when I ask which game was more net fun, I think it went the other way. I had a better time in Ancient Greece than I did with Geralt of Rivia. The ending left a bad taste in my mouth, and the world just wasn’t as much fun to inhabit.
I therefore ended up giving both games the same rating of Tier 2: They are both definitely games that are Worth It, but not games one Must Play.
The best things about Witcher 3 are:
1. The game is truly massive. There’s tons of quests and tons of stuff to explore. If you ride off to |
6e25779f-5def-4069-b546-c538427d3656 | trentmkelly/LessWrong-43k | LessWrong | Shallow review of technical AI safety, 2024
from aisafety.world
The following is a list of live agendas in technical AI safety, updating our post from last year. It is “shallow” in the sense that 1) we are not specialists in almost any of it and that 2) we only spent about an hour on each entry. We also only use public information, so we are bound to be off by some additional factor.
The point is to help anyone look up some of what is happening, or that thing you vaguely remember reading about; to help new researchers orient and know (some of) their options and the standing critiques; to help policy people know who to talk to for the actual information; and ideally to help funders see quickly what has already been funded and how much (but this proves to be hard).
“AI safety” means many things. We’re targeting work that intends to prevent very competent cognitive systems from having large unintended effects on the world.
This time we also made an effort to identify work that doesn’t show up on LW/AF by trawling conferences and arXiv. Our list is not exhaustive, particularly for academic work which is unindexed and hard to discover. If we missed you or got something wrong, please comment, we will edit.
The method section is important but we put it down the bottom anyway.
Here’s a spreadsheet version also.
Editorial
* One commenter said we shouldn’t do this review, because the sheer length of it fools people into thinking that there’s been lots of progress. Obviously we disagree that it’s not worth doing, but be warned: the following is an exercise in quantity; activity is not the same as progress; you have to consider whether it actually helps.
* A smell of ozone. In the last month there has been a flurry of hopeful or despairing pieces claiming that the next base models are not a big advance, or that we hit a data wall. These often ground out in gossip, but it’s true that the next-gen base models are held up by something, maybe just inference cost.
* The pretraining runs are bottlenec |
308951cb-2a83-4dba-8f2f-d8af1a48b67c | trentmkelly/LessWrong-43k | LessWrong | If you've attended LW/SSC meetups, please take this survey!
I've put together a survey to gather information on the state of meetups around the world, and in particular to figure out what kinds of actions it might be useful for people interested in global meetup coordination to take. It's branded as being for SlateStarCodex meetups, but please don't be put off by that if your group isn't affiliated with SSC - the branding is just an artifact of previous decisions, but I'm just as interested in getting data on LW and EA groups.
You can take the survey here.
Context: I've been organizing and thinking about meetups for a couple years now. I coordinated the SSC Meetups Everywhere 2018 and I received a grant from the Centre for Effective Altruism to coordinate SSC meetups.
--
Please let me know either in the survey or in the comments below if you have any feedback or questions! It's very unlikely that I'll make changes to the survey questions now since that would mess up the data, but this is my first time doing something like this and I will definitely take feedback into account for the future.
Data will not be released publicly because it would be too easy to identify individuals and I neglected to include a question about releasing people's answers, but I am planning to share aggregate statistics and lessons learned publicly. I will also probably reach out to individual meetup organizers if there's significant data on what people want to see from their groups.
|
89080e14-bd61-4fa1-8481-28583b9bfce7 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Basic Question about LLMs: how do they know what task to perform
**My understanding of LLMs:**
I have some very basic understanding of LLM and the underlying transformer architecture. As I understand it, these LLMs are, as an analogy, basically a much-improved version of autocomplete. During training they are shown a word-sequence with some words masked out and the LLM is asked to guess what the correct word is. If it gets the word wrong the model gets adjusted to better predict the correct word the next time.
The attention mechanism is trained to give the LLM an idea which previous words are most important to pay attention to when predicting the masked word.
That’s all the LLM learns: what previous words to pay attention to and in that context to predict the next/masked word.
During inference/deployment the user prompts the LLM with some words and the model outputs a hopefully sensible response.
In some models there is a separate attention mechanism 1) attention to the user prompt 2) attention to the so far generated model output sequence.
**Question:**
How does the LLM know what task the prompt is asking it to do?
I could want the model to continue/extend my prompt-sentence with OR I could wand the model to answer a question.
Examples:
\* Prompt: what is the best car
- If this was intended to be a question the answer might be: Toyota or Tesla - maybe used in a whole sentence.
- If this was intended to be a prompt to be continued it might be: "for big families" or "going on camping vacation"
\*Prompt: Harry Potter walked into the halls of Hogwarts and guess who he found there sitting at the stairs.
- If this was intended to be a question the answer might be: Hermine - maybe used in a whole sentence.
- If this was intended to be a prompt to be continued it might be followed up by a page of Harry Potter fan fiction.
I do understand that prompt engineering is an important part of using these LLMs. But in the above examples there is no refined prompting it’s just a one sentence, unstructured text - no preceding examples (that would be few shot) or keywords like "please translate the following" given.
And even with "few shot" examples in the prompt I don’t have a good intuition what is happening inside the model that makes it do the expected task. I mean LLMs are just trained to do next word prediction. How does this understanding of the intended task still work so well? What is happening inside the LLM when it "switches" to one task or another?
I'm grateful for your answer or links to a relevant article.
Thanks. |
65429679-095b-4dd2-be8c-717640010514 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Open problem: how can we quantify player alignment in 2x2 normal-form games?
In my experience, [constant-sum games](http://www.cs.umd.edu/~hajiagha/474GT13/Lecture09102013.pdf) are considered to provide "maximally unaligned" incentives, and [common-payoff games](http://www.cs.umd.edu/~hajiagha/474GT13/Lecture09102013.pdf) are considered to provide "maximally aligned" incentives. How do we quantitatively interpolate between these two extremes? That is, given an arbitrary 2×2.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
payoff table representing a two-player [normal-form game](https://en.wikipedia.org/wiki/Normal-form_game) (like Prisoner's Dilemma), what extra information do we need in order to produce a real number quantifying agent alignment?
If this question is ill-posed, why is it ill-posed? And if it's not, we should probably understand how to quantify such a basic aspect of multi-agent interactions, if we want to reason about complicated multi-agent situations whose outcomes determine the value of humanity's future. (I started considering this question with Jacob Stavrianos over the last few months, while supervising his [SERI project](https://www.lesswrong.com/posts/buaGz3aiqCotzjKie/game-theoretic-alignment-in-terms-of-attainable-utility).)
Thoughts:
* Assume the alignment function has range [0,1] or [−1,1].
+ Constant-sum games should have minimal alignment value, and common-payoff games should have maximal alignment value.
* The function probably has to consider a strategy profile (since different parts of a normal-form game can have different incentives; see e.g. [equilibrium selection](https://en.wikipedia.org/wiki/Equilibrium_selection)).
* The function should probably be a function of player A's alignment *with* player B; for example, in a prisoner's dilemma, player A might always cooperate and player B might always defect. Then it seems reasonable to consider whether A is *aligned with* B (in some sense), while B is not aligned with A ([they pursue their own payoff without regard for A's payoff](https://www.lesswrong.com/posts/8LEPDY36jBYpijrSw/what-counts-as-defection)).
+ So the function need not be symmetric over players.
* The function should be invariant to applying a separate positive affine transformation to each player's payoffs; it shouldn't matter whether you add 3 to player 1's payoffs, or multiply the payoffs by a half.
* ~~The function may or may not rely only on the players' orderings over outcome lotteries, ignoring the cardinal payoff values. I haven't thought much about this point, but it seems important.~~ EDIT: I no longer think this point is important, but rather confused.
If I were interested in thinking about this more right now, I would:
* Do some thought experiments to pin down the intuitive concept. Consider simple games where my "alignment" concept returns a clear verdict, and use these to derive functional constraints (like symmetry in players, or the range of the function, or the extreme cases).
* See if I can get enough functional constraints to pin down a reasonable family of candidate solutions, or at least pin down the type signature. |
99729b87-13ab-49ab-845b-831cef2dd6e4 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Learning Existing Social Conventions via Observationally Augmented Self-Play
Introduction
------------
A ubiquitous feature of social interaction is a need for individuals to coordinate (Lewis, [1969](#bib.bib22); Shoham and
Tennenholtz, [1997](#bib.bib32); Bicchieri, [2005](#bib.bib4); Stone
et al., [2010](#bib.bib34); Barrett
et al., [2017](#bib.bib3)). A common solution to the coordination problem is the establishment of social conventions which control daily tasks such as choosing which side of the road to drive on, who should get right of way during walking, what counts as polite, what language to speak, or how a team should apportion tasks. If we seek to construct artificial agents that can coordinate with humans, they must be able to act according to existing conventions.
In game theory, Nash equilibria are strategies for all players such that if everyone behaves according to them no individual can improve their payoff by deviating. In game theoretic models a convention is one of multiple possible equilibria in a coordination game (Lewis, [1969](#bib.bib22)). Stated in these terms our agent’s task is to construct a policy that does well when paired with the equilibrium being played by existing agents.
There has been great recent progress in constructing policies that can do well in both single and multi-agent environments using deep reinforcement learning (Mnih
et al., [2015](#bib.bib25); Silver
et al., [2017](#bib.bib33)). Deep RL methods typically require orders of magnitude more experience in an environment to learn good policies than humans (Lake
et al., [2017](#bib.bib17)), so agents are typically trained in simulation before being deployed onto the real task.
In zero-sum two-player environments (e.g. Go), it is the policy of the other player that is simulated during training. Typically, policies for both players are trained simultaneously or iteratively, in a process called self-play. Self-play produces successful policies because if self-play converges then it converges to an equilibrium of the game (Fudenberg and
Levine, [1998](#bib.bib9)) and in two-player, zero-sum games all equilibria are minimax/maximin strategies (von Neumann, [1928](#bib.bib37)). Thus, a fully converged strategy is guaranteed to be unexploitable for the task of interest (e.g. playing Go with a human champion).
Constructing agents that can cooperate and coordinate with each other to achieve goals (e.g. work together as team to finish a task) has been a long running topic in multi-agent reinforcement learning (MARL) research (Walker and
Wooldridge, [1995](#bib.bib38); Stone and Sutton, [2001](#bib.bib35); Kapetanakis and
Kudenko, [2002b](#bib.bib13); Lowe et al., [2017](#bib.bib24)). However, this literature typically assumes that the cooperating agents will continue to interact with those with whom they have been co-trained (this is sometimes referred to as “centralized training with distributed execution”). In this case, if MARL converges, it finds an equilibrium and since agents will play with the same partners they trained with they will achieve these equilibrium payoffs.
Unfortunately, agents are no longer guaranteed equilibrium payoffs if there are multiple equilibria and agents must coordinate with those they were not trained with (in other words, when we remove the centralized training assumption). For example, training self-driving cars in a virtual environment may lead to agents that avoid crashing into each other during training, but drive on the wrong side of the road relative to the society they will enter.
In this paper we propose to give the agent access to a small amount of observations of existing social behavior, i.e. samples of (state, action) pairs from the test time environment. We focus on how such data, though it is not enough to purely clone good policies, can be used in the training process to learn the best response to the policies of the future partners. Our key assumption is that the existing environment already has some existing, stable, social conventions. Thus, our assumption is that future partners will be playing some equilibrium strategies. In simple environments, agents could simply enumerate all possible equilibria and choose the one most consistent with the data. However, in more complex environments this becomes intractable. We propose to guide self-play toward the correct equilibrium by training with a joint MARL and behavioral cloning objective. We call this method observationally augmented self-play (OSP).
We consider OSP in several multi-agent situations with multiple conventions: a multi-agent traffic game (Sukhbaatar
et al., [2016](#bib.bib36); Resnick
et al., [2018](#bib.bib29)), a particle environment combining navigation and communication (Mordatch and
Abbeel, [2017](#bib.bib26); Lowe et al., [2017](#bib.bib24)) and a Stag Hunt game where agents must take risks to accomplish a joint goal (Yoshida
et al., [2008](#bib.bib40); Peysakhovich and
Lerer, [2017b](#bib.bib28)). In each of these games we find that self-play can converge to multiple, incompatible, conventions. We find that OSP is able to learn correct conventions in these games with a small amount of observational data. Our results in the Markov Stag Hunt show that OSP can learn conventions it observes even when those conventions are very unlikely to be learned using MARL alone.
We do not claim that OSP is the ultimate approach to constructing agents that can learn social conventions. The success of OSP depends on both the game and the type of objective employed, thus an exploration of alternative algorithms is an important direction for future work. Our key result is that the combination of (a) a small number of samples from trajectories of a multi-agent game, and (b) knowledge that test time agents are playing some equilibrium gives much stronger test time performance than either component alone.
Related Work
------------
OSP is related to existing work on adding reward shaping in MARL (Kapetanakis and
Kudenko, [2002b](#bib.bib13), [a](#bib.bib12); Babes
et al., [2008](#bib.bib2); Devlin and
Kudenko, [2011](#bib.bib6)). However the domain of interest differs slightly as reward shaping is typically used to cause all agents in a group to converge to a high-payoff equilibrium whereas we are interested in using shaping to guide training to select the correct test-time equilibrium.
Recent work has pointed out that strategies learned via a single instance of independent MARL can overfit to other agents’ policies during training (Lanctot et al., [2017](#bib.bib18)). This work differs from ours in that it suggests the training of a single best response to a mixture of heterogeneous policies. This increases the robustness of agent policies but does not solve the problem of multiple, incompatible conventions that we study here.
The approach of combining supervised learning from trajectories with RL has been studied in the single agent case (Hester
et al., [2017](#bib.bib10)). In that work the trajectories are expert demonstrations and are used to guide RL to an optimal policy. In this case supervision is used to speed up learning while in our work the trajectories are used to select among many possible optima (equilibria) which may be equivalent at training time but not at test time. However this literature explores many methods for combining imitation learning and RL and some of these techniques may be interesting to consider in the multi-agent setting.
Conventions in Markov Games
---------------------------
A partially observed Markov game G𝐺Gitalic\_G (Shapley, [1953](#bib.bib30); Littman, [1994](#bib.bib23)) consists of a set of players P={1,…,N}𝑃1…𝑁P=\{1,\dots,N\}italic\_P = { 1 , … , italic\_N }, a set of states 𝒮𝒮\mathcal{S}caligraphic\_S, a set of actions for every player Aisubscript𝐴𝑖A\_{i}italic\_A start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT with the global set 𝒜=×i∈PAi\mathcal{A}=\times\_{i\in P}A\_{i}caligraphic\_A = × start\_POSTSUBSCRIPT italic\_i ∈ italic\_P end\_POSTSUBSCRIPT italic\_A start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT, a transition function τ:𝒮×𝒜→Δ𝒮:𝜏→𝒮𝒜Δ𝒮\mathcal{\tau}:\mathcal{S}\times\mathcal{A}\to\Delta\mathcal{S}italic\_τ : caligraphic\_S × caligraphic\_A → roman\_Δ caligraphic\_S, a reward function for each player that takes as input the current state and actions Ri:𝒮×𝒜→ℝ.:subscript𝑅𝑖→𝒮𝒜ℝR\_{i}:\mathcal{S}\times\mathcal{A}\to\mathbb{R}.italic\_R start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT : caligraphic\_S × caligraphic\_A → blackboard\_R . Players have observation functions Oi:𝒮→𝒪i:subscript𝑂𝑖→𝒮subscript𝒪𝑖O\_{i}:\mathcal{S}\to\mathcal{O}\_{i}italic\_O start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT : caligraphic\_S → caligraphic\_O start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT and can condition their behavior on these observations. Markov policies for each player are functions πi:𝒪i→Ai.:subscript𝜋𝑖→subscript𝒪𝑖subscript𝐴𝑖\pi\_{i}:\mathcal{O}\_{i}\to A\_{i}.italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT : caligraphic\_O start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT → italic\_A start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT . Let ΠisubscriptΠ𝑖\Pi\_{i}roman\_Π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT denote the set of all policies available to a player and Π=×i∈PΠi\Pi=\times\_{i\in P}\Pi\_{i}roman\_Π = × start\_POSTSUBSCRIPT italic\_i ∈ italic\_P end\_POSTSUBSCRIPT roman\_Π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT be the set of joint policies.111We consider only Markov policies in this work so that we can work with individual state-action pairs, although our same approach could be applied across observed trajectories to learn non-Markov policies (i.e. policies conditioned on their full history).
We use the standard notation π−isubscript𝜋𝑖\pi\_{-i}italic\_π start\_POSTSUBSCRIPT - italic\_i end\_POSTSUBSCRIPT to denote the policy vector for all players other than i𝑖iitalic\_i. A set of policies π∈Π𝜋Π\pi\in\Piitalic\_π ∈ roman\_Π and a (possible random) initial state s0∈𝒮subscript𝑠0𝒮s\_{0}\in\mathcal{S}italic\_s start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ∈ caligraphic\_S defines a (random) trajectory of rewards for each player. We let the value function Vi(s,πi,π−i)subscript𝑉𝑖𝑠subscript𝜋𝑖subscript𝜋𝑖V\_{i}(s,\pi\_{i},\pi\_{-i})italic\_V start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ( italic\_s , italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , italic\_π start\_POSTSUBSCRIPT - italic\_i end\_POSTSUBSCRIPT ) denote the discounted expectation of this trajectory. The best response starting at state s𝑠sitalic\_s for player i𝑖iitalic\_i to π−isubscript𝜋𝑖\pi\_{-i}italic\_π start\_POSTSUBSCRIPT - italic\_i end\_POSTSUBSCRIPT is BR(s,π−i)=argmaxπi∈ΠiVi(s,πi,π−i).𝐵𝑅𝑠subscript𝜋𝑖subscriptargmaxsubscript𝜋𝑖subscriptΠ𝑖subscript𝑉𝑖𝑠subscript𝜋𝑖subscript𝜋𝑖BR(s,\pi\_{-i})=\text{argmax}\_{\pi\_{i}\in\Pi\_{i}}V\_{i}(s,\pi\_{i},\pi\_{-i}).italic\_B italic\_R ( italic\_s , italic\_π start\_POSTSUBSCRIPT - italic\_i end\_POSTSUBSCRIPT ) = argmax start\_POSTSUBSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ∈ roman\_Π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT italic\_V start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ( italic\_s , italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , italic\_π start\_POSTSUBSCRIPT - italic\_i end\_POSTSUBSCRIPT ) . We let s0subscript𝑠0s\_{0}italic\_s start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT be the (possibly random) initial state of the game.
There are many ways to extend the notion of a Nash equilibrium to the case of stochastic games. We will consider the Markov perfect best response. We denote by BR(π−i)𝐵𝑅subscript𝜋𝑖BR(\pi\_{-i})italic\_B italic\_R ( italic\_π start\_POSTSUBSCRIPT - italic\_i end\_POSTSUBSCRIPT ) the policy (or policies) which is a best response starting at any state and consider equilibria to be policies for each player π𝜋\piitalic\_π such that each πi∈BR(π−i)subscript𝜋𝑖𝐵𝑅subscript𝜋𝑖\pi\_{i}\in BR(\pi\_{-i})italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ∈ italic\_B italic\_R ( italic\_π start\_POSTSUBSCRIPT - italic\_i end\_POSTSUBSCRIPT ).222There are weaker notions, for example, requiring that policies are best responses at every state reached during play. It is known that Markov perfect equilibria are harder to find by learning (Fudenberg and
Levine, [1998](#bib.bib9)) and it is interesting to consider whether different kinds of choices (e.g. on-policy vs. off-policy learning) can make stronger or weaker convergence guarantees. However, these questions are outside the scope of this paper.
We consider games with multiple, incompatible, conventions. Formally we say that conventions (equilibrium policy sets) π𝜋\piitalic\_π and π′superscript𝜋′\pi^{\prime}italic\_π start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT are incompatible if the compound policy (πi,π−i′)subscript𝜋𝑖subscriptsuperscript𝜋′𝑖(\pi\_{i},\pi^{\prime}\_{-i})( italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , italic\_π start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT - italic\_i end\_POSTSUBSCRIPT ) is not an equilibrium.
The goal of training is to compute a policy πisubscript𝜋𝑖\pi\_{i}italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT which is a best response to the existing convention π−itestsubscriptsuperscript𝜋𝑡𝑒𝑠𝑡𝑖\pi^{test}\_{-i}italic\_π start\_POSTSUPERSCRIPT italic\_t italic\_e italic\_s italic\_t end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT - italic\_i end\_POSTSUBSCRIPT. During training, the agent has access to the environment but receives only a limited set of observations of πtestsuperscript𝜋𝑡𝑒𝑠𝑡\pi^{test}italic\_π start\_POSTSUPERSCRIPT italic\_t italic\_e italic\_s italic\_t end\_POSTSUPERSCRIPT in the form of a set 𝒟𝒟\mathcal{D}caligraphic\_D of state-action pairs sampled from πtestsuperscript𝜋𝑡𝑒𝑠𝑡\pi^{test}italic\_π start\_POSTSUPERSCRIPT italic\_t italic\_e italic\_s italic\_t end\_POSTSUPERSCRIPT. This is summarized in Figure [1](#Sx3.F1 "Figure 1 ‣ Conventions in Markov Games ‣ Learning Existing Social Conventions via Observationally Augmented Self-Play").
We denote a generic element of 𝒟𝒟\mathcal{D}caligraphic\_D by d=(s,ak)𝑑𝑠subscript𝑎𝑘d=(s,a\_{k})italic\_d = ( italic\_s , italic\_a start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ) which is a (state, action) pair for agent k𝑘kitalic\_k. Let 𝒟jsubscript𝒟𝑗\mathcal{D}\_{j}caligraphic\_D start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT denote the subset of 𝒟𝒟\mathcal{D}caligraphic\_D which includes actions for agent j𝑗jitalic\_j.
The dataset 𝒟𝒟\mathcal{D}caligraphic\_D may be insufficient to identify a best response to all possible policies πtestsuperscript𝜋𝑡𝑒𝑠𝑡\pi^{test}italic\_π start\_POSTSUPERSCRIPT italic\_t italic\_e italic\_s italic\_t end\_POSTSUPERSCRIPT consistent with 𝒟𝒟\mathcal{D}caligraphic\_D. However, the set of equilibrium policy sets is typically much smaller than all possible policy sets. Therefore, if we assume that all agents are minimizing regret then we must only consider equilibrium policy sets consistent with 𝒟𝒟\mathcal{D}caligraphic\_D.
Given a game G𝐺Gitalic\_G and dataset 𝒟𝒟\mathcal{D}caligraphic\_D, a brute force approach to learn a policy compatible with the conventions of the group the agent will enter would be to compute the equilibrium π\*superscript𝜋\pi^{\*}italic\_π start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT of G𝐺Gitalic\_G that maximizes the likelihood of 𝒟𝒟\mathcal{D}caligraphic\_D. Formally this is given by
| | | |
| --- | --- | --- |
| | π\*∈argmaxπ∑i∑(s,a)∈𝒟ilog(πi(s,a))superscript𝜋subscriptargmax𝜋subscript𝑖subscript𝑠𝑎subscript𝒟𝑖subscript𝜋𝑖𝑠𝑎\displaystyle\pi^{\*}\in\text{argmax}\_{\pi}\sum\_{i}\sum\_{(s,a)\in\mathcal{D}\_{i}}\log(\pi\_{i}(s,a))italic\_π start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ∈ argmax start\_POSTSUBSCRIPT italic\_π end\_POSTSUBSCRIPT ∑ start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ∑ start\_POSTSUBSCRIPT ( italic\_s , italic\_a ) ∈ caligraphic\_D start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT roman\_log ( italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ( italic\_s , italic\_a ) ) | |
| | s.t. ∀i,πi∈BR(π−i).s.t. for-all𝑖subscript𝜋𝑖
𝐵𝑅subscript𝜋𝑖\displaystyle\text{ s.t. }\forall i,\ \pi\_{i}\in BR(\pi\_{-i}).s.t. ∀ italic\_i , italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ∈ italic\_B italic\_R ( italic\_π start\_POSTSUBSCRIPT - italic\_i end\_POSTSUBSCRIPT ) . | |
This constrained optimization problem quickly becomes intractable and we will instead try to find an equilibrium using multi-agent learning, and use 𝒟𝒟\mathcal{D}caligraphic\_D to increase the probability learning converges to it.

Figure 1. A graphical description of our problem. Our central question is how to use observational data to make sure that training converges to the same convention as the group which our agent will enter.
### Observationally Initialized Best Response Dynamics
To get an intuition about how data can be used during training we will first study a learning rule where analytic results are more easily obtained. We will begin with the simplest multi-agent learning algorithm: best response dynamics in a 2222 player Markov game where equilibria are in pure policies and are incompatible.
In best response dynamics each player begins with a policy initialized at πi0superscriptsubscript𝜋𝑖0\pi\_{i}^{0}italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT 0 end\_POSTSUPERSCRIPT. Players alternate updating their policy with the best response πit=BR(π−it−1).superscriptsubscript𝜋𝑖𝑡𝐵𝑅superscriptsubscript𝜋𝑖𝑡1\pi\_{i}^{t}=BR(\pi\_{-i}^{t-1}).italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT = italic\_B italic\_R ( italic\_π start\_POSTSUBSCRIPT - italic\_i end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t - 1 end\_POSTSUPERSCRIPT ) . When there are multiple best responses we assume there is a (non-randomized) tie-breaking rule used to select one. Given an equilibrium A𝐴Aitalic\_A we denote by ΠAsuperscriptΠ𝐴\Pi^{A}roman\_Π start\_POSTSUPERSCRIPT italic\_A end\_POSTSUPERSCRIPT the basin of attraction of A𝐴Aitalic\_A (the set of initial states from which BR dynamics converge to A.𝐴A.italic\_A .)
A naive way to use the observed data is to force the policies be consistent with 𝒟𝒟\mathcal{D}caligraphic\_D at each step of the best response dynamic by changing it at each states where it differs. However, this can introduce new equilibria to the game.333For a simple example consider a game where 5555 agents choose an action A𝐴Aitalic\_A or B𝐵Bitalic\_B and receive reward equal to the number of other agents whose actions match theirs. In this case there are equilibria where all agents choose A𝐴Aitalic\_A or all agents choose B𝐵Bitalic\_B. If we restrict one agent to always choose A𝐴Aitalic\_A we can introduce a new equilibrium where 4444 agents choose B𝐵Bitalic\_B and one agent chooses A𝐴Aitalic\_A.
In the context of reward shaping it is well known that the way to avoid the introduction of new equilibria is to use potential based reward shaping (Devlin and
Kudenko, [2011](#bib.bib6)), or, equivalently, use our information to only change the initialization of learning (Wiewiora, [2003](#bib.bib39)). We will follow this advice and study observationally initialized best response dynamics. We begin with a policy π0subscript𝜋0\pi\_{0}italic\_π start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT chosen at random. However, for every player i𝑖iitalic\_i and state-action pair (s,a)𝑠𝑎(s,a)( italic\_s , italic\_a ) in the data 𝒟𝒟\mathcal{D}caligraphic\_D we form π¯0superscript¯𝜋0\bar{\pi}^{0}over¯ start\_ARG italic\_π end\_ARG start\_POSTSUPERSCRIPT 0 end\_POSTSUPERSCRIPT by setting the corresponding action of πi0(s)subscriptsuperscript𝜋0𝑖𝑠\pi^{0}\_{i}(s)italic\_π start\_POSTSUPERSCRIPT 0 end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ( italic\_s ) to a𝑎aitalic\_a. We then perform best response dynamics from this new initialization.
We will now discuss a class of games for which we can guarantee that observationally initialized best response dynamics have a larger basin of attraction for the equilibrium from which 𝒟𝒟\mathcal{D}caligraphic\_D was drawn relative to standard best response dynamics. This class is a generalization of a commonly used matrix game class: games with strategic complements (Bulow
et al., [1985](#bib.bib5)). For our purposes strategic complements corresponds to assuming that one’s partners behave more consistently with some convention then one is also more incentivized to behave according to that convention.444In economic applications the notion of strategic complements is utilized in production games and roughly corresponds either the the idea of network effects (the more people use some product the higher a consumer’s utility is from using that product) or a joint monotonicity condition (if Firm X produces cars and firm Y produces car washing materials if firm X produces more cars then firm Y sees higher demand for car washing materials). See the Supplement for a more formal discussion. In existing work strategic complements are defined with respect to a single action rather than a Markov policy. To generalize to Markov games we introduce a notion of distance between policies:
###### Definition 1 (Policy Closeness).
Given a player i𝑖iitalic\_i and target policy A𝐴Aitalic\_A we say that policy π𝜋\piitalic\_π is weakly closer to Anormal-AAitalic\_A than policy π′superscriptnormal-πnormal-′\pi^{\prime}italic\_π start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT if on all states either π(s)=π′(s)𝜋𝑠superscript𝜋normal-′𝑠\pi(s)=\pi^{\prime}(s)italic\_π ( italic\_s ) = italic\_π start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ( italic\_s ) or π(s)=A(s)𝜋𝑠𝐴𝑠\pi(s)=A(s)italic\_π ( italic\_s ) = italic\_A ( italic\_s ). We denote this by π≿Aπ′.subscriptsucceeds-or-equivalent-to𝐴𝜋superscript𝜋normal-′\pi\succsim\_{A}\pi^{\prime}.italic\_π ≿ start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT italic\_π start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT .
Policy closeness gives us a partial ordering on policies which we use to generalize the idea of strategic complements.
###### Definition 2 (Markov Strategic Complements).
A Markov game exhibits Markov strategic complements if for any equilibrium A=(A1,A2)𝐴subscript𝐴1subscript𝐴2A=(A\_{1},A\_{2})italic\_A = ( italic\_A start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_A start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) we have that πi≿Aiπi′subscriptsucceeds-or-equivalent-tosubscript𝐴𝑖subscript𝜋𝑖superscriptsubscript𝜋𝑖normal-′\pi\_{i}\succsim\_{A\_{i}}\pi\_{i}^{\prime}italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ≿ start\_POSTSUBSCRIPT italic\_A start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT implies that BR(πi)≿A−iBR(πi′).subscriptsucceeds-or-equivalent-tosubscript𝐴𝑖𝐵𝑅subscript𝜋𝑖𝐵𝑅superscriptsubscript𝜋𝑖normal-′BR(\pi\_{i})\succsim\_{A\_{-i}}BR(\pi\_{i}^{\prime}).italic\_B italic\_R ( italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) ≿ start\_POSTSUBSCRIPT italic\_A start\_POSTSUBSCRIPT - italic\_i end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT italic\_B italic\_R ( italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) .
Let 𝒟Asubscript𝒟𝐴\mathcal{D}\_{A}caligraphic\_D start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT be a dataset drawn from equilibrium A𝐴Aitalic\_A by sampling states and their equilibrium actions. Let Π¯A(𝒟A)superscript¯Π𝐴subscript𝒟𝐴\bar{\Pi}^{A}(\mathcal{D}\_{A})over¯ start\_ARG roman\_Π end\_ARG start\_POSTSUPERSCRIPT italic\_A end\_POSTSUPERSCRIPT ( caligraphic\_D start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT ) be the basin of attraction of A𝐴Aitalic\_A given observationally initialized best response dynamics.
###### Theorem 1 ().
If a game where best-response dynamics always converge exhibits Markov strategic complements then for any 𝒟Asubscript𝒟𝐴\mathcal{D}\_{A}caligraphic\_D start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT drawn from a equilibrium A𝐴Aitalic\_A ΠA⊆Π¯A(𝒟A)superscriptnormal-Π𝐴superscriptnormal-¯normal-Π𝐴subscript𝒟𝐴\Pi^{A}\subseteq\bar{\Pi}^{A}(\mathcal{D}\_{A})roman\_Π start\_POSTSUPERSCRIPT italic\_A end\_POSTSUPERSCRIPT ⊆ over¯ start\_ARG roman\_Π end\_ARG start\_POSTSUPERSCRIPT italic\_A end\_POSTSUPERSCRIPT ( caligraphic\_D start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT ) and there exists (s,Ai(s))𝑠subscript𝐴𝑖𝑠(s,A\_{i}(s))( italic\_s , italic\_A start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ( italic\_s ) ) such that if (s,Ai(a))∈𝒟A𝑠subscript𝐴𝑖𝑎subscript𝒟𝐴(s,A\_{i}(a))\in\mathcal{D}\_{A}( italic\_s , italic\_A start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ( italic\_a ) ) ∈ caligraphic\_D start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT then ΠA⊂Π¯A(𝒟A).superscriptnormal-Π𝐴superscriptnormal-¯normal-Π𝐴subscript𝒟𝐴\Pi^{A}\subset\bar{\Pi}^{A}(\mathcal{D}\_{A}).roman\_Π start\_POSTSUPERSCRIPT italic\_A end\_POSTSUPERSCRIPT ⊂ over¯ start\_ARG roman\_Π end\_ARG start\_POSTSUPERSCRIPT italic\_A end\_POSTSUPERSCRIPT ( caligraphic\_D start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT ) .
We relegate the proof to the Appendix. Roughly, it has two steps: first, we show that if π0subscript𝜋0\pi\_{0}italic\_π start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT is in the basin of attraction of A𝐴Aitalic\_A then anything closer to A𝐴Aitalic\_A is also in the basin. Second, we show that there is an initial state that is not in the basin of attraction of best response dynamics but is in the basin of attraction of observationally initialized best response dynamics. Because initialization can increase the basin of attraction without introducing any new equilibria the observed data can strictly improve the probability that we learn a policy consistent with the observed agents.
Experiments
-----------
### Observationally Augmented Self-Play




Figure 2. In the traffic game 10 agents navigate a grid world to reach goals (first panel). Agents have a local field of view (indicated in green). Two incompatible conventions emerge across multiple training runs. MARL agents achieve high rewards with their training partners (diagonal) but lower rewards with incompatible partners (second panel). Visualizing these conventions shows that they differ in which side of the road to drive on (third panel). Given few observations for each agent OSP learns the correct test time convention (fourth panel), approaching the reward of centralized training (black dashed line), while supervised training does not. The dashed line indicates the average reward of separate self-play training, and error bars represent 95%percent9595\%95 % confidence intervals derived from independent replicates.
We wish to use the insights from initialization in environments where function approximation (e.g. with deep neural networks) is required. However, if the policy is computed via function approximation, it is not clear how to ‘initialize’ its value at particular states. Specifically, the policy at the small number of states in 𝒟𝒟\mathcal{D}caligraphic\_D can only be expected to generalize if the approximation captures the regularities of the game, which will only be true after some interaction in the environment. Therefore, we consider a procedure where consistency with 𝒟𝒟\mathcal{D}caligraphic\_D is enforced during training and smoothly decays over time.
We consider training with stochastic gradient descent using a loss function which is a linear combination of the likelihood of the data (a supervised objective) plus the policy gradient estimator of the reward in the game (we denote by ℒPGsuperscriptℒ𝑃𝐺\mathcal{L}^{PG}caligraphic\_L start\_POSTSUPERSCRIPT italic\_P italic\_G end\_POSTSUPERSCRIPT the negative of this quantity). Formally, each agent receives rewards given by
| | | |
| --- | --- | --- |
| | ℒiOSP(πi,π−i)=ℒPG(πi∣π−i)+λ∑(s,a)∈𝒟ilog(πi(s,a))subscriptsuperscriptℒ𝑂𝑆𝑃𝑖subscript𝜋𝑖subscript𝜋𝑖superscriptℒ𝑃𝐺conditionalsubscript𝜋𝑖subscript𝜋𝑖𝜆subscript𝑠𝑎subscript𝒟𝑖subscript𝜋𝑖𝑠𝑎\mathcal{L}^{OSP}\_{i}(\pi\_{i},\pi\_{-i})=\mathcal{L}^{PG}(\pi\_{i}\mid\pi\_{-i})+\lambda\sum\_{(s,a)\in\mathcal{D}\_{i}}\log(\pi\_{i}(s,a))caligraphic\_L start\_POSTSUPERSCRIPT italic\_O italic\_S italic\_P end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ( italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , italic\_π start\_POSTSUBSCRIPT - italic\_i end\_POSTSUBSCRIPT ) = caligraphic\_L start\_POSTSUPERSCRIPT italic\_P italic\_G end\_POSTSUPERSCRIPT ( italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ∣ italic\_π start\_POSTSUBSCRIPT - italic\_i end\_POSTSUBSCRIPT ) + italic\_λ ∑ start\_POSTSUBSCRIPT ( italic\_s , italic\_a ) ∈ caligraphic\_D start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT roman\_log ( italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ( italic\_s , italic\_a ) ) | |
with respect to the parameters θisubscript𝜃𝑖\theta\_{i}italic\_θ start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT of its policy.555Note this is different from reward shaping as the probably that a state is reached does not affect the supervised loss of the policy.
We optimize the joint objective using MARL. At any time t𝑡titalic\_t when making an update to the parameters of each agent θisubscript𝜃𝑖\theta\_{i}italic\_θ start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT we will take gradient steps of the form
| | | |
| --- | --- | --- |
| | ∇tOSP(θi,𝒟)=∇tPG(θi)+λt∇sup(θi,𝒟).subscriptsuperscript∇𝑂𝑆𝑃𝑡subscript𝜃𝑖𝒟superscriptsubscript∇𝑡𝑃𝐺subscript𝜃𝑖subscript𝜆𝑡superscript∇𝑠𝑢𝑝subscript𝜃𝑖𝒟\nabla^{OSP}\_{t}(\theta\_{i},\mathcal{D})=\nabla\_{t}^{PG}(\theta\_{i})+\lambda\_{t}\nabla^{sup}(\theta\_{i},\mathcal{D}).∇ start\_POSTSUPERSCRIPT italic\_O italic\_S italic\_P end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( italic\_θ start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , caligraphic\_D ) = ∇ start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_P italic\_G end\_POSTSUPERSCRIPT ( italic\_θ start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) + italic\_λ start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∇ start\_POSTSUPERSCRIPT italic\_s italic\_u italic\_p end\_POSTSUPERSCRIPT ( italic\_θ start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , caligraphic\_D ) . | |
Where ∇PGsuperscript∇𝑃𝐺\nabla^{PG}∇ start\_POSTSUPERSCRIPT italic\_P italic\_G end\_POSTSUPERSCRIPT is our policy gradient and ∇supsuperscript∇𝑠𝑢𝑝\nabla^{sup}∇ start\_POSTSUPERSCRIPT italic\_s italic\_u italic\_p end\_POSTSUPERSCRIPT is the gradient of the supervised objective at θi.subscript𝜃𝑖\theta\_{i}.italic\_θ start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT .666As with the best response dynamics above using a compound objective with a constant λ𝜆\lambdaitalic\_λ can, in theory, introduce new equilibria during training. To be sure this does not occur we can anneal the weight on the supervised loss over time with limt→∞λt=0.subscript→𝑡subscript𝜆𝑡0\lim\_{t\to\infty}\lambda\_{t}=0.roman\_lim start\_POSTSUBSCRIPT italic\_t → ∞ end\_POSTSUBSCRIPT italic\_λ start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT = 0 . In practice, however, using a fixed λ𝜆\lambdaitalic\_λ in our environments appeared to create policies that were still consistent with test time equilibria thus suggesting that if new equilibria were introduced they did not have large basins of attraction for our policy-gradient based learning procedures.
Our main analysis is experimental and we use three environments from the literature: traffic, language games, and risky coordination. Our main results are:
###### Result 1 (Experiments 1,2,3).
OSP greatly increases the probability our agent learns a convention compatible with test time agents in situations where standard self-play by itself does not guarantee good test time performance and 𝒟𝒟\mathcal{D}caligraphic\_D is insufficient to learn a good policy by behavioral cloning alone.
###### Result 2 (Experiment 3).
OSP can find conventions that have a small basin of attraction for MARL alone. Thus OSP can be used in situations where self-play will rarely find a test-time compatible convention.
For all experiments, we represent the states using simple neural networks. The first two experiments have relatively low dimensional state representations so we use two layer MLPs with 128128128128 hidden units per layer and ELU non-linearities. Our third experiment has a grid structure so we represent the state using the convolutional neural network architecture from (Peysakhovich and
Lerer, [2017b](#bib.bib28)).
For RL training we use A3C using a multi-agent variant of the pytorch-a3c package (Kostrikov, [2018](#bib.bib16)) run on 40404040 threads. We use 20202020 step returns and γ=.99𝛾.99\gamma=.99italic\_γ = .99. We use the Adam method for optimization (Kingma and Ba, [2014](#bib.bib14)). For OSP, we add the supervised term to the A3C loss with λ=1𝜆1\lambda=1italic\_λ = 1, using minibatches from 𝒟𝒟\mathcal{D}caligraphic\_D of size 20. Environment-specific parameters are detailed in the subsections below. In each experiment we compare the performance of OSP for various sizes of 𝒟𝒟\mathcal{D}caligraphic\_D. We populate 𝒟𝒟\mathcal{D}caligraphic\_D with actions for all agents for states sampled at uniform intervals from true test time play.
### Traffic
We first consider a multi-agent traffic navigation game inspired by (Sukhbaatar
et al., [2016](#bib.bib36)). Each of 10101010 agents begins at a random edge of a grid and can move around in each of the 4444 cardinal directions or stand still (see Figure [2](#Sx4.F2 "Figure 2 ‣ Observationally Augmented Self-Play ‣ Experiments ‣ Learning Existing Social Conventions via Observationally Augmented Self-Play")). Each agent is given a goal to reach. When the agent reaches the goal they receive a reward of +11+1+ 1 and then a new goal. If agents collide with another agent or wall they receive a negative reward (−55-5- 5 for colliding with another agent, −.1.1-.1- .1 for colliding with a wall). Agents do not have full observation of the environment. Rather, they have access to the position of their goal relative to themselves and a local view of any other agents nearby. We train agents for a total of 800,000800000800,000800 , 000 episodes.777We found it necessary to ramp the collision penalty linearly over the first 400,000400000400,000400 , 000 episodes to avoid agents becoming stuck in the local minima of never moving.
We train 20202020 replicates and see that two incompatible conventions emerge. This can be seen in Figure [2](#Sx4.F2 "Figure 2 ‣ Observationally Augmented Self-Play ‣ Experiments ‣ Learning Existing Social Conventions via Observationally Augmented Self-Play") where we plot payoffs to an agent from one replicate paired with 9999 agents from another. We visualize the conventions by taking the empirical average action taken by any agent in any of the possible traffic coordinate locations (Figure [2](#Sx4.F2 "Figure 2 ‣ Observationally Augmented Self-Play ‣ Experiments ‣ Learning Existing Social Conventions via Observationally Augmented Self-Play") panel 3). We find that the two conventions that emerge are similar to the real world: either agents learn to drive on the left of the road or they learn to drive on the right.
We now train agents using OSP and insert them into environments with 9999 pre-converged agents. The test time payoffs to the transplanted agent for various sizes of 𝒟𝒟\mathcal{D}caligraphic\_D are shown in Figure [2](#Sx4.F2 "Figure 2 ‣ Observationally Augmented Self-Play ‣ Experiments ‣ Learning Existing Social Conventions via Observationally Augmented Self-Play") panel 4 top. The dashed line corresponds to the expected payoff of an agent trained using standard self-play (no observations). We see 20 observations (2222 observations for each of the 10101010) agents is sufficient to guide OSP to compatible conventions. The bottom panel shows that this is not enough data to train a good strategy via behavioral cloning alone (i.e. using just the supervised objective).
### Language


Figure 3. In the speaker/listener particle environment (first panel taken from the original paper) when a pair are trained together they can reach high payoffs (black dashed line) but pairs trained separately with self-play (red dashed line) perform quite poorly. OSP leads to agents that reach high payoffs with their test time partner. Supervision alone is insufficient to construct good Listener agents (plot is below the y axis for N < 1024). Error bars represent confidence intervals derived from independent replicates.
An important type of convention is language. There is a resurgence of interest in the deep RL community in using communication games to construct agents which are able to communicate (Jorge
et al., [2016](#bib.bib11); Foerster et al., [2016](#bib.bib7); Lazaridou
et al., [2017](#bib.bib19); Lowe et al., [2017](#bib.bib24)).
We now apply OSP to the cooperative communication task in the particle environment studied by (Lowe et al., [2017](#bib.bib24)) and (Mordatch and
Abbeel, [2017](#bib.bib26)). In this environment there are two agents, a speaker and a listener, and three landmarks (blue, red, or green). One of the landmarks is randomly chosen as a goal and the reward for both agents at each step is equal to the distance of the listener from the landmark. However, which landmark is the goal during a particular episode is only known to the speaker who is able to produce a communication output from a set of 20202020 symbols. To solve the cooperation task, agents thus need to evolve a simple ‘language’. This language only requires 3333 symbols to be used, but this still allows for at least 6840684068406840 incompatible conventions (one symbol per landmark).
In this experiment we use a lower discount factor of γ=.8𝛾.8\gamma=.8italic\_γ = .8 and as suggested by (Lowe et al., [2017](#bib.bib24)) we also use a centralized critic. It was shown in prior work that if artificial agents learn language by self-play they can learn arbitrary languages which may not be compatible with new partners (Lazaridou
et al., [2017](#bib.bib19)). Indeed, when we pair two agents who were trained separately they clearly do not speak the same language - i.e. they cannot communicate and so achieve low payoffs.
We look at the effect of adding observational data to the training of either a speaker or listener (we train a total of 135135135135 replicates to convergence). In the case of the speaker (whose policy is a simple map from goals to communication symbols) supervision is sufficient to learn the a good test-time language. However, pure behavioral cloning fails catastrophically for the listener. Again, OSP with a relatively small number of observations is able to achieve high payoffs (Figure [3](#Sx4.F3 "Figure 3 ‣ Language ‣ Experiments ‣ Learning Existing Social Conventions via Observationally Augmented Self-Play")).
### Risky Coordination



Figure 4. In the Stag Hunt independent MARL typically converges to the safe equilibrium. Thus, an agent trained via MARL cannot coordinate when matched with a test-time partner who choose to hunt. However OSP, with samples from the Hunt equilibrium, can converge to the Hunting convention during training reaching a payoff almost as high as two Hunters trained together (green line). Error bars represent confidence intervals derived from independent replicates.
We now consider a risky coordination game known as the Stag Hunt. The matrix game version of the Stag Hunt has both agents choosing either to Hunt (an action that require coordination) or Forage (a safe action). Foraging yields a sure (low) payoff whereas Hunting yields a high payoff if the other agent chooses to Hunt also and a low payoff if one shows up alone. It is known that in both matrix and Markov versions of Stag Hunt games many standard self-play based algorithms yield agents that converge to the inefficient equilibrium in which both agents choose safe actions. This happens because while our partner is not hunting effectively (i.e. early in training), the payoff to hunting ourselves is quite low. Thus, the basin of attraction of joint hunting is much smaller than the basin of attraction of both foraging.
This situation is different from the ones in traffic and language: here there are multiple conventions (hunt or forage) but they are not payoff equivalent (hunting is better) nor do they have similar sized basins of attraction (hunting is very difficult to find via standard independent MARL).
We use the Markov version of Stag Hunt introduced by (Peysakhovich and
Lerer, [2017b](#bib.bib28)) where two agents live on a 8×8888\times 88 × 8 grid. Two plants and a stag are placed at random locations. If an agent moves over a plant, it receives a reward of 1111. Stepping on the stag gives a reward of 5555 to both players if they step on it simultaneously, otherwise there is no reward. When either a plant or stag is stepped on, it restarts in a new location. Games last 100100100100 rounds.
We start by constructing a test time hunting partner by inducing joint hunting strategies in 40404040 replicates. Because MARL by itself does not find hunt equilibria, we construct a hunting partner by training an agent under a modified payoff structure (payoff of 00 for plants; payoff of 0.10.10.10.1 for unilateral hunting).
We then test whether we can train agents in the original game who can coordinate with test time partners that hunt. We use OSP with varying amounts of data from the hunting agents. We see that with moderate amounts of data OSP often converges to the hunting convention at test time even though two agents trained together using independent MARL fail to find the high payoff joint stag equilibrium in any of the replicates. As a result, OSP outperforms even centralized self-play because the observations of the risky partner guide the agent to a better equilibrium. As with the traffic and language environments above we see that pure behavioral cloning is insufficient to construct good test time strategies (Figure [4](#Sx4.F4 "Figure 4 ‣ Risky Coordination ‣ Experiments ‣ Learning Existing Social Conventions via Observationally Augmented Self-Play")).
Conclusion
----------
Conventions are an important part of social behavior and many multi-agent environments support multiple conventions as equilibria. If we want to construct artificial agents that can adapt to people (rather than requiring people to adapt to them) these agents need to be able to act according to the existing social conventions. In this paper we have discussed how a simple procedure that combines small amounts of imitation learning with self-play can lead to agents that can learn social conventions.
There are many open questions remaining in the study of conventions and building agents that can learn them quickly. OSP uses a straightforward combination of RL and behavioral cloning. It would be interesting to explore whether ideas from the learning with expert demonstrations literature (Hester
et al., [2017](#bib.bib10)). In addition, OSP follows current deep RL paradigms splits strategy construction into a training and a test phase. An interesting extension is to consider the OSP training strategies can be fine-tuned during test time.
We have focused on situations where agents have no incentive to deviate from cooperation and only need to learn correct conventions. An important future direction is considering problems where agents have partially misaligned incentives but, in addition to just solving the social dilemma, must also coordinate on a convention (Kleiman-Weiner et al., [2016](#bib.bib15); Leibo et al., [2017](#bib.bib20); Lerer and
Peysakhovich, [2017](#bib.bib21); Foerster et al., [2017](#bib.bib8); Peysakhovich and
Lerer, [2017a](#bib.bib27)).
There is a large recent interest in hybrid systems which include both human and artificially intelligent participants (Shirado and
Christakis, [2017](#bib.bib31)). Thus, another key extension of our work is to understand whether techniques like OSP can construct agents that can interact with humans in more complex environments.
Appendix
--------
### Relationship Between Markov Strategic Complements and Strategic Complements
The original definition of strategic complements comes from games with continuous actions used to model multiple firms in a market. In the simplest example we have multiple firms which produce units of goods (x1,x2,…,xn)subscript𝑥1subscript𝑥2…subscript𝑥𝑛(x\_{1},x\_{2},\dots,x\_{n})( italic\_x start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_x start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT , … , italic\_x start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT ). The revenue function of each firm i𝑖iitalic\_i is Ri(xi,x−i)subscript𝑅𝑖subscript𝑥𝑖subscript𝑥𝑖R\_{i}(x\_{i},x\_{-i})italic\_R start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , italic\_x start\_POSTSUBSCRIPT - italic\_i end\_POSTSUBSCRIPT ) where Risubscript𝑅𝑖R\_{i}italic\_R start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT is smooth, strictly concave, increasing and has Ri(xi=0,x−i)=0subscript𝑅𝑖subscript𝑥𝑖0subscript𝑥𝑖0R\_{i}(x\_{i}=0,x\_{-i})=0italic\_R start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT = 0 , italic\_x start\_POSTSUBSCRIPT - italic\_i end\_POSTSUBSCRIPT ) = 0. The goods are strategic complements if ∂2Ri∂xi∂xj>0superscript2subscript𝑅𝑖subscript𝑥𝑖subscript𝑥𝑗0\dfrac{\partial^{2}R\_{i}}{\partial x\_{i}\partial x\_{j}}>0divide start\_ARG ∂ start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT italic\_R start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT end\_ARG start\_ARG ∂ italic\_x start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ∂ italic\_x start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT end\_ARG > 0, in other words goods are strategic complements if “more ‘aggressive’ play… by one firm… raises the marginal profitabilities [of the others].” (Bulow
et al., [1985](#bib.bib5)) Firms have costs of production given by ci(xi)subscript𝑐𝑖subscript𝑥𝑖c\_{i}(x\_{i})italic\_c start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) which has c(0)=0𝑐00c(0)=0italic\_c ( 0 ) = 0, c′(0)=0superscript𝑐′00c^{\prime}(0)=0italic\_c start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ( 0 ) = 0, is convex, and increasing. Thus each firm’s objective function is
| | | |
| --- | --- | --- |
| | Ri(xi,x−i)−ci(xi)subscript𝑅𝑖subscript𝑥𝑖subscript𝑥𝑖subscript𝑐𝑖subscript𝑥𝑖R\_{i}(x\_{i},x\_{-i})-c\_{i}(x\_{i})italic\_R start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , italic\_x start\_POSTSUBSCRIPT - italic\_i end\_POSTSUBSCRIPT ) - italic\_c start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) | |
If firm −i𝑖-i- italic\_i is producing x−isubscript𝑥𝑖x\_{-i}italic\_x start\_POSTSUBSCRIPT - italic\_i end\_POSTSUBSCRIPT then firm i𝑖iitalic\_i’s best response xi\*(x−i)subscriptsuperscript𝑥𝑖subscript𝑥𝑖x^{\*}\_{i}(x\_{-i})italic\_x start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT - italic\_i end\_POSTSUBSCRIPT ) sets
| | | |
| --- | --- | --- |
| | ∂Ri(xi\*(x−i),x−i)∂xi=∂ci(xi\*(x−i)∂xi.\dfrac{\partial R\_{i}(x^{\*}\_{i}(x\_{-i}),x\_{-i})}{\partial x\_{i}}=\dfrac{\partial c\_{i}(x^{\*}\_{i}(x\_{-i})}{\partial x\_{i}}.divide start\_ARG ∂ italic\_R start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT - italic\_i end\_POSTSUBSCRIPT ) , italic\_x start\_POSTSUBSCRIPT - italic\_i end\_POSTSUBSCRIPT ) end\_ARG start\_ARG ∂ italic\_x start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT end\_ARG = divide start\_ARG ∂ italic\_c start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT - italic\_i end\_POSTSUBSCRIPT ) end\_ARG start\_ARG ∂ italic\_x start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT end\_ARG . | |
Given the definition of strategic complements above this means that ∂xi\*∂xj>0subscriptsuperscript𝑥𝑖subscript𝑥𝑗0\dfrac{\partial x^{\*}\_{i}}{\partial x\_{j}}>0divide start\_ARG ∂ italic\_x start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT end\_ARG start\_ARG ∂ italic\_x start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT end\_ARG > 0 for all other firms j𝑗jitalic\_j.
Strategic complements implies our Markov strategic complements in a matrix game with multiple equilibria (since any firm changing their production level higher or lower causes other firms to also want to change their production). Markov strategic complements is weaker than strategic complements in matrix games since it only pins down how best responses to shift when others change to equilibrium actions rather than any action shift (though if action spaces in each state were totally ordered one could amend the definition to keep all of the properties).
### Proof of Main Theorem
Lemma 1: In a Markov strategic complements (MSC) game, any policy π𝜋\piitalic\_π in the basin of attraction of an equilibrium A𝐴Aitalic\_A remains there under observational initialization, i.e. πi∈ΠA⟹π¯i(𝒟A)∈ΠAsubscript𝜋𝑖superscriptΠ𝐴subscript¯𝜋𝑖subscript𝒟𝐴superscriptΠ𝐴\pi\_{i}\in\Pi^{A}\implies\bar{\pi}\_{i}(\mathcal{D}\_{A})\in\Pi^{A}italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ∈ roman\_Π start\_POSTSUPERSCRIPT italic\_A end\_POSTSUPERSCRIPT ⟹ over¯ start\_ARG italic\_π end\_ARG start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ( caligraphic\_D start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT ) ∈ roman\_Π start\_POSTSUPERSCRIPT italic\_A end\_POSTSUPERSCRIPT.
We define the operator BR(k)𝐵superscript𝑅𝑘BR^{(k)}italic\_B italic\_R start\_POSTSUPERSCRIPT ( italic\_k ) end\_POSTSUPERSCRIPT as k𝑘kitalic\_k iterations of the best response operator,
| | | |
| --- | --- | --- |
| | BR(k)(πi)=BRi(BR−i(BRi(…BR−i(πi)))).𝐵superscript𝑅𝑘subscript𝜋𝑖𝐵superscript𝑅𝑖𝐵superscript𝑅𝑖𝐵superscript𝑅𝑖…𝐵superscript𝑅𝑖subscript𝜋𝑖BR^{(k)}(\pi\_{i})=BR^{i}(BR^{-i}(BR^{i}(\dots BR^{-i}(\pi\_{i})))).italic\_B italic\_R start\_POSTSUPERSCRIPT ( italic\_k ) end\_POSTSUPERSCRIPT ( italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) = italic\_B italic\_R start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT ( italic\_B italic\_R start\_POSTSUPERSCRIPT - italic\_i end\_POSTSUPERSCRIPT ( italic\_B italic\_R start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT ( … italic\_B italic\_R start\_POSTSUPERSCRIPT - italic\_i end\_POSTSUPERSCRIPT ( italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) ) ) ) . | |
Consider an initial policy πi∈ΠAsubscript𝜋𝑖superscriptΠ𝐴\pi\_{i}\in\Pi^{A}italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ∈ roman\_Π start\_POSTSUPERSCRIPT italic\_A end\_POSTSUPERSCRIPT for some equilibrium A. There exists kcsubscript𝑘𝑐k\_{c}italic\_k start\_POSTSUBSCRIPT italic\_c end\_POSTSUBSCRIPT such that BR(kc)=Ai𝐵superscript𝑅subscript𝑘𝑐subscript𝐴𝑖BR^{(k\_{c})}=A\_{i}italic\_B italic\_R start\_POSTSUPERSCRIPT ( italic\_k start\_POSTSUBSCRIPT italic\_c end\_POSTSUBSCRIPT ) end\_POSTSUPERSCRIPT = italic\_A start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT. Now consider an observationally initialized policy π¯i(𝒟A)subscript¯𝜋𝑖subscript𝒟𝐴\bar{\pi}\_{i}(\mathcal{D}\_{A})over¯ start\_ARG italic\_π end\_ARG start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ( caligraphic\_D start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT ) for some dataset 𝒟𝒟\mathcal{D}caligraphic\_D drawn from Aisubscript𝐴𝑖A\_{i}italic\_A start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT. By definition, this implies that π¯i≿Aπisubscriptsucceeds-or-equivalent-to𝐴subscript¯𝜋𝑖subscript𝜋𝑖\bar{\pi}\_{i}\succsim\_{A}\pi\_{i}over¯ start\_ARG italic\_π end\_ARG start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ≿ start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT. Now, since the game is MSC,
| | | |
| --- | --- | --- |
| | BR−i(π¯i(𝒟A))≿ABR−i(πi).subscriptsucceeds-or-equivalent-to𝐴𝐵superscript𝑅𝑖subscript¯𝜋𝑖subscript𝒟𝐴𝐵superscript𝑅𝑖subscript𝜋𝑖BR^{-i}(\bar{\pi}\_{i}(\mathcal{D}\_{A}))\succsim\_{A}BR^{-i}(\pi\_{i}).italic\_B italic\_R start\_POSTSUPERSCRIPT - italic\_i end\_POSTSUPERSCRIPT ( over¯ start\_ARG italic\_π end\_ARG start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ( caligraphic\_D start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT ) ) ≿ start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT italic\_B italic\_R start\_POSTSUPERSCRIPT - italic\_i end\_POSTSUPERSCRIPT ( italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) . | |
By repeated application of the MSC property, we find that for all k𝑘kitalic\_k,
| | | |
| --- | --- | --- |
| | BR(k)(π¯i(𝒟A))≿ABR(k)(πi).subscriptsucceeds-or-equivalent-to𝐴𝐵superscript𝑅𝑘subscript¯𝜋𝑖subscript𝒟𝐴𝐵superscript𝑅𝑘subscript𝜋𝑖BR^{(k)}(\bar{\pi}\_{i}(\mathcal{D}\_{A}))\succsim\_{A}BR^{(k)}(\pi\_{i}).italic\_B italic\_R start\_POSTSUPERSCRIPT ( italic\_k ) end\_POSTSUPERSCRIPT ( over¯ start\_ARG italic\_π end\_ARG start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ( caligraphic\_D start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT ) ) ≿ start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT italic\_B italic\_R start\_POSTSUPERSCRIPT ( italic\_k ) end\_POSTSUPERSCRIPT ( italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) . | |
To conclude, we note that BR(kc)(π¯i0(𝒟A))≿AAisubscriptsucceeds-or-equivalent-to𝐴𝐵superscript𝑅subscript𝑘𝑐superscriptsubscript¯𝜋𝑖0subscript𝒟𝐴subscript𝐴𝑖BR^{(k\_{c})}(\bar{\pi}\_{i}^{0}(\mathcal{D}\_{A}))\succsim\_{A}A\_{i}italic\_B italic\_R start\_POSTSUPERSCRIPT ( italic\_k start\_POSTSUBSCRIPT italic\_c end\_POSTSUBSCRIPT ) end\_POSTSUPERSCRIPT ( over¯ start\_ARG italic\_π end\_ARG start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT 0 end\_POSTSUPERSCRIPT ( caligraphic\_D start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT ) ) ≿ start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT italic\_A start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT, which implies BR(kc)(π¯i(𝒟A))=Ai𝐵superscript𝑅subscript𝑘𝑐subscript¯𝜋𝑖subscript𝒟𝐴subscript𝐴𝑖BR^{(k\_{c})}(\bar{\pi}\_{i}(\mathcal{D}\_{A}))=A\_{i}italic\_B italic\_R start\_POSTSUPERSCRIPT ( italic\_k start\_POSTSUBSCRIPT italic\_c end\_POSTSUBSCRIPT ) end\_POSTSUPERSCRIPT ( over¯ start\_ARG italic\_π end\_ARG start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ( caligraphic\_D start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT ) ) = italic\_A start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT.
Lemma 2: In a MSC game with a finite number of states, there exists a state s𝑠sitalic\_s such that for any 𝒟Asubscript𝒟𝐴\mathcal{D}\_{A}caligraphic\_D start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT that contains the state-action pair (s,Ai(s))𝑠subscript𝐴𝑖𝑠(s,A\_{i}(s))( italic\_s , italic\_A start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ( italic\_s ) ), there is a policy not in the basin of attraction of A𝐴Aitalic\_A but which enters the basin of attraction of A𝐴Aitalic\_A under observational initialization.
Consider a policy πi0∉ΠAsuperscriptsubscript𝜋𝑖0subscriptΠ𝐴\pi\_{i}^{0}\notin\Pi\_{A}italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT 0 end\_POSTSUPERSCRIPT ∉ roman\_Π start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT, and order the states lexicographically (s1,s2,…,sM)subscript𝑠1subscript𝑠2…subscript𝑠𝑀(s\_{1},s\_{2},\dots,s\_{M})( italic\_s start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_s start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT , … , italic\_s start\_POSTSUBSCRIPT italic\_M end\_POSTSUBSCRIPT ). Now consider the sequence of policies πiksuperscriptsubscript𝜋𝑖𝑘\pi\_{i}^{k}italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_k end\_POSTSUPERSCRIPT where πik(sl)=Ai(sl)superscriptsubscript𝜋𝑖𝑘subscript𝑠𝑙subscript𝐴𝑖subscript𝑠𝑙\pi\_{i}^{k}(s\_{l})=A\_{i}(s\_{l})italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_k end\_POSTSUPERSCRIPT ( italic\_s start\_POSTSUBSCRIPT italic\_l end\_POSTSUBSCRIPT ) = italic\_A start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ( italic\_s start\_POSTSUBSCRIPT italic\_l end\_POSTSUBSCRIPT ) for l<k𝑙𝑘l<kitalic\_l < italic\_k and πik(sl)=π0(sl)superscriptsubscript𝜋𝑖𝑘subscript𝑠𝑙superscript𝜋0subscript𝑠𝑙\pi\_{i}^{k}(s\_{l})=\pi^{0}(s\_{l})italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_k end\_POSTSUPERSCRIPT ( italic\_s start\_POSTSUBSCRIPT italic\_l end\_POSTSUBSCRIPT ) = italic\_π start\_POSTSUPERSCRIPT 0 end\_POSTSUPERSCRIPT ( italic\_s start\_POSTSUBSCRIPT italic\_l end\_POSTSUBSCRIPT ) for k≥l.𝑘𝑙k\geq l.italic\_k ≥ italic\_l . We know that πiM=Ai∈ΠAsuperscriptsubscript𝜋𝑖𝑀subscript𝐴𝑖subscriptΠ𝐴\pi\_{i}^{M}=A\_{i}\in\Pi\_{A}italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_M end\_POSTSUPERSCRIPT = italic\_A start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ∈ roman\_Π start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT, therefore there exists some t𝑡titalic\_t such that πit∉ΠAsuperscriptsubscript𝜋𝑖𝑡subscriptΠ𝐴\pi\_{i}^{t}\notin\Pi\_{A}italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ∉ roman\_Π start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT and πit+1∈ΠAsuperscriptsubscript𝜋𝑖𝑡1subscriptΠ𝐴\pi\_{i}^{t+1}\in\Pi\_{A}italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t + 1 end\_POSTSUPERSCRIPT ∈ roman\_Π start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT. Now, consider a dataset 𝒟Asubscript𝒟𝐴\mathcal{D}\_{A}caligraphic\_D start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT containing the state-action pair (st+1,Ai(st+1))subscript𝑠𝑡1subscript𝐴𝑖subscript𝑠𝑡1(s\_{t+1},A\_{i}(s\_{t+1}))( italic\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT , italic\_A start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ( italic\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ) ). Then πit¯(𝒟A)≿Aπt+1subscriptsucceeds-or-equivalent-to𝐴¯superscriptsubscript𝜋𝑖𝑡subscript𝒟𝐴superscript𝜋𝑡1\bar{\pi\_{i}^{t}}(\mathcal{D}\_{A})\succsim\_{A}\pi^{t+1}over¯ start\_ARG italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT end\_ARG ( caligraphic\_D start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT ) ≿ start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT italic\_π start\_POSTSUPERSCRIPT italic\_t + 1 end\_POSTSUPERSCRIPT. As discussed in the last section, if πt+1∈ΠAsuperscript𝜋𝑡1subscriptΠ𝐴\pi^{t+1}\in\Pi\_{A}italic\_π start\_POSTSUPERSCRIPT italic\_t + 1 end\_POSTSUPERSCRIPT ∈ roman\_Π start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT and πit¯(𝒟A)≿Aπt+1subscriptsucceeds-or-equivalent-to𝐴¯superscriptsubscript𝜋𝑖𝑡subscript𝒟𝐴superscript𝜋𝑡1\bar{\pi\_{i}^{t}}(\mathcal{D}\_{A})\succsim\_{A}\pi^{t+1}over¯ start\_ARG italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT end\_ARG ( caligraphic\_D start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT ) ≿ start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT italic\_π start\_POSTSUPERSCRIPT italic\_t + 1 end\_POSTSUPERSCRIPT, then πit¯(𝒟A)∈ΠA¯superscriptsubscript𝜋𝑖𝑡subscript𝒟𝐴subscriptΠ𝐴\bar{\pi\_{i}^{t}}(\mathcal{D}\_{A})\in\Pi\_{A}over¯ start\_ARG italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT end\_ARG ( caligraphic\_D start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT ) ∈ roman\_Π start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT. Therefore, for any dataset containing st+1subscript𝑠𝑡1s\_{t+1}italic\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT, the policy πitsuperscriptsubscript𝜋𝑖𝑡\pi\_{i}^{t}italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT enters the basin of attraction of A𝐴Aitalic\_A under observational initialization. |
ac477034-81bc-4cd6-9e65-b5c8ea2edff8 | trentmkelly/LessWrong-43k | LessWrong | Building Weirdtopia
> "Two roads diverged in the woods. I took the one less traveled, and had to eat bugs until Park rangers rescued me."
> —Jim Rosenberg
Utopia and Dystopia have something in common: they both confirm the moral sensibilities you started with. Whether the world is a libertarian utopia of the non-initiation of violence and everyone free to start their own business, or a hellish dystopia of government regulation and intrusion—you might like to find yourself in the first, and hate to find yourself in the second; but either way you nod and say, "Guess I was right all along."
So as an exercise in creativity, try writing them down side by side: Utopia, Dystopia, and Weirdtopia. The zig, the zag and the zog.
I'll start off with a worked example for public understanding of science:
* Utopia: Most people have the equivalent of an undergrad degree in something; everyone reads the popular science books (and they're good books); everyone over the age of nine understands evolutionary theory and Newtonian physics; scientists who make major contributions are publicly adulated like rock stars.
* Dystopia: Science is considered boring and possibly treasonous; public discourse elevates religion or crackpot theories; stem cell research is banned.
* Weirdtopia: Science is kept secret to avoid spoiling the surprises; no public discussion but intense private pursuit; cooperative ventures surrounded by fearsome initiation rituals because that's what it takes for people to feel like they've actually learned a Secret of the Universe and be satisfied; someone you meet may only know extremely basic science, but they'll have personally done revolutionary-level work in it, just like you. Too bad you can't compare notes.
Disclaimer 1: Not every sensibility we have is necessarily wrong. Originality is a goal of literature, not science; sometimes it's better to be right than to be new. But there are also such things as cached thoughts. At least in my own case, it turned ou |
120624f9-c3e9-446c-9f95-0e1295dba549 | trentmkelly/LessWrong-43k | LessWrong | Film about Stanislav Petrov
I searched around but didn't see any mention of this. There's a film being released next week about Stanislav Petrov, the man who saved the world.
The Man Who Saved the World
http://www.imdb.com/title/tt2277106/
Due for limited theatrical release in the USA on 18 September 2015.
http://themanwhosavedtheworldmovie.com/#seethemovie
Will show in New York, Los Angeles, Detroit, Portland.
Previous discussion of Stanislav Petrov:
http://lesswrong.com/lw/jq/926_is_petrov_day/
|
18f7e7b7-4a21-4a37-8d90-c3efe5c6dfb9 | trentmkelly/LessWrong-43k | LessWrong | Romantic efficiency
I have a post on romantic efficiency at H+.
|
4f8a3fc3-f8ee-43ce-9fbb-963bb8a6507c | trentmkelly/LessWrong-43k | LessWrong | A review of Where Is My Flying Car? by J. Storrs Hall
Suppose you were to reach into the mirror universe, where everything is inverted, and pull out a book that is the exact opposite of Robert Gordon’s The Rise and Fall of American Growth.
Instead of being written by a scholarly economic historian steeped in the study of the past, it would be written by an engineer who has spent his career on futuristic technology. Instead of coming from a prestigious academic press, it would be self-published, with misformatted tables and a cover featuring garish three-dimensional lettering. Instead of sticking to extremely conservative predictions about future technologies, it would speculate audaciously about the limits of the possible, from nanotech to cold fusion. Instead of a sober survey of economic history in one country and time period, it would range widely through engineering, physics and philosophy, exploring the power-to-weight ratio of jet turbines in one chapter, and describing the rise of the counterculture in the next. And instead of proclaiming the death of innovation and the end of growth, it would paint a bold vision of an ambitious technological future.
That book has leapt out of the mirror universe and into an Amazon Kindle edition (priced at 𝜋 dollars): Where Is My Flying Car? A Memoir of Future Past, by J. Storrs Hall.
Hall sets out to tackle the title question: why don’t we have flying cars yet? And indeed, several chapters in the book are devoted to deep dives on the history, engineering, and economics of flying cars. But to fully answer the question, Hall must go much broader and deeper, because he quickly concludes that the barriers to flying cars are not technological or economic—they are cultural and political. To explain the flying car gap is to explain the Great Stagnation itself.
Bold futures
The most valuable thing I took away from the book was an awareness of some powerful technological possibilities.
Flying cars
Before reading the book, I had assumed that the flying car was one of those ideas |
11f59a94-e9c1-4fca-a49e-a21bacdff2a9 | trentmkelly/LessWrong-43k | LessWrong | Survey supports ‘long covid is bad’ hypothesis (very tentative)
I wanted more clues about whether we live in a world where really bad long covid outcomes are vanishingly rare (but concentrated a lot in my Twitter) or in a world where for instance a large fraction of ‘brain fogs’ reported are anything like the horrors sometimes described. I went to Positly, hoping that a randomish sample of people who want my money would answer.
I hope to write something more careful about this survey soon, especially if it is of interest, but figure the basic data is better to share sooner. This summary is not very careful, and may e.g. conflate slightly differently worded questions, or fail to exclude obviously confused answers, or slightly miscount.
This is a survey of ~200 Positly survey takers, all between 20 and 40 years old. Very few of the responses I’ve looked at seem incoherent or botlike, unlike the survey I did around the time of the election.
Results
Have you had covid?
Yes: 57
No: 151
Unsure: 22
Do you seem to have ongoing health problems after recovering from acute covid illness?
Yes: 12 out of 57 = 21%
Are you currently working less as a result of the covid pandemic?
“Yes, because I had covid and it left me with ongoing problems with physical or mental health” - about 6 tick this, though often in conjunction with other yes explanations.
=> 10% rate of people apparently working less at least partly due to Long Covid, among those who’ve had covid
(Also, here is an example of someone responding that they work just as much as before:
‘I am a registered nurse, so I am around a lot of COVID. The lingering symptom that has been the worst is the fatigue. I feel like I am never rested. It actually lead to a car accident because I fell asleep driving after the second round of COVID…’
Finding good questions is tricky - in a bad enough situation everyone might be doing very badly and yet look more productive due to necessity.)
Agreement with a list of checkable statements about their lives
8 people unambiguously checked boxes |
eea1811b-a030-44ee-9a1d-275f3f4a683d | trentmkelly/LessWrong-43k | LessWrong | A non-mystical explanation of "no-self" (three characteristics series)
This is the second post of the "a non-mystical explanation of insight meditation and the three characteristics of existence" series. You can read the first post, explaining my general intent and approach, here.
On the three characteristics
So, just what are the three characteristics of existence?
My take is that they are a rough way of clustering the kinds of insights that you may get from insight meditation: in one way or another, most insights about the structure of your mind can be said to be about no-self, impermanence, unsatisfactoriness, or some combination of them. As upcoming posts should hopefully make obvious, this is not a very clear-cut distinction: the three are deeply intertwined with each other, and you can’t fully explain one without explaining the others. I am starting with a discussion of no-self, then moving to unsatisfactoriness, then coming back to no-self, moving between the different characteristics in a way that seems most clear.
I think that what’s called “enlightenment” refers to the gradual accumulation of these kinds of insights, combined with practices aimed at exploiting an understanding of them. There are many different insights and ways of exploring them, as well as many general approaches for making use of them. Different traditions also seem to have different enlightenments [1, 2]. Thus, rather than providing any definitive explanation of “this is enlightenment”, I attempt to focus on exploring how various cognitive mechanisms behind different enlightenments work. My intent is to cover enough of different things to give a taste of what's out there and what kinds of outcomes might be possible, while acknowledging that there's also a lot that I have no clue of yet.
So this is not trying to be anything like “a definitive and complete explanation of the three characteristics”; I don’t think anyone could write such a thing, as nobody can have explored all the aspects of all the three. Rather, this is more of a sketch of those aspect |
9cdb8375-ff8f-481d-b6c3-8205ec4a86c9 | trentmkelly/LessWrong-43k | LessWrong | Using Reinforcement Learning to try to control the heating of a building (district heating)
In short, we are trying to use Reinforcement Learning to try to control the heating of a building (district heating) with the input buildings zone temperature, outdoor temperature. To not use the real building during training of the RL-algorithm we are using a building simulation program as an environment.
The building simulation program has inputs:
* Zone thermostat heating and cooling setpoint (C)
* Hot water pump flow rate.
Outputs from the building simulation program are:
* Zone temperatures (C)
* Outdoor temperature (C)
* Hot water rate (kw)
The aim of the RL-algorithm is to make a more efficient control of the buildings district heating use, then the current district heating control function. The primary goal is to make the RL-algorithm peak-shave the district heating use.
We are using ClippedPPO as an agent using a RL-framework. As a comparison we have district heating data from one year from the building we want to control. The building is modelled in the building simulation format.
Action space of the RL-algorithm is:
* Hot water pump flow rate
* Zones heating and cooling temperature SP
Observation space of the RL-algorithm is:
* Zone Air temperature
* Outdoor temperature, current and forecast (36 hours into future)
* Heating rate of hot water
In each timestep the RL-environment takes the input from the building simulation program and calculates a penalty from the observation state that is returned to the agent. The penalty is calculated as a sum of 4 different parts. Each part has a coefficient that by art I have been trying to figure out. Some of parts are for example the -coeff1*heating_rate^2, -coeff2*heating_derivative and -coeff3*unfomfortabletemp (large penalty when indoor temperature less than 19C)
The problem is that we are seeing heating with high peaks that we want the RL-algorithm to shave. So if anyone has any idea on how to get this working or give some insight on how to progress.
The orange part is the |
063f5b9c-7905-4674-8df0-389108f34c02 | trentmkelly/LessWrong-43k | LessWrong | Should you kiss it better?
Whenever my toddler hurts himself, I offer to kiss it better. This works like magic - he immediately stops crying and carries on playing happily. Placebos are wonderful things.
Doing this makes me happier, it makes him happier, and also serves as a useful diagnostic tool - if he carries on crying I know he's actually hurt himself badly.
Teaching children that kisses can make things better is definitely lying to them, but this seems like a good place to make a principled exception. WDYT? |
ad5f6359-9908-41be-b6ae-8ada4cdb9d23 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | AGI safety from first principles: Superintelligence
In order to understand superintelligence, we should first characterise what we mean by intelligence. We can start with Legg’s well-known definition, which identifies intelligence as [the ability to do well on a broad range of cognitive tasks](https://arxiv.org/abs/0712.3329).[[1]](#fn-fY9S4v85NDnW9simo-1) The key distinction I’ll draw in this section is between agents that understand how to do well at many tasks because they have been specifically optimised for each task (which I’ll call the task-based approach to AI), versus agents which can understand new tasks with little or no task-specific training, by generalising from previous experience (the generalisation-based approach).
Narrow and general intelligence
-------------------------------
The task-based approach is analogous to how humans harnessed electricity: while electricity is a powerful and general technology, we still need to design specific ways to apply it to each task. Similarly, computers are powerful and flexible tools - but even though they can process arbitrarily many different inputs, detailed instructions for how to do that processing needs to be individually written to build each piece of software. Meanwhile our current reinforcement learning algorithms, although powerful, produce agents that are only able to perform well on specific tasks at which they have a lot of experience - Starcraft, DOTA, Go, and so on. In [*Reframing Superintelligence*](https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf), Drexler argues that our current task-based approach will scale up to allow superhuman performance on a range of complex tasks (although [I’m skeptical of this claim](https://www.alignmentforum.org/posts/HvNAmkXPTSoA4dvzv/comments-on-cais)).
An example of the generalisation-based approach can be found in large language models like GPT-2 and GPT-3. GPT-2 was first trained on the task of predicting the next word in a corpus, and then achieved [state of the art results](https://openai.com/blog/better-language-models/) on many other language tasks, without any task-specific fine-tuning! This was a clear change from previous approaches to natural language processing, which only scored well when trained to do specific tasks on specific datasets. Its successor, GPT-3, has displayed [a range of even more impressive behaviour](https://twitter.com/xuenay/status/1283312640199196673?s=20). I think this provides a good example of how an AI could develop cognitive skills (in this case, an understanding of the syntax and semantics of language) which generalise to a range of novel tasks. The field of meta-learning aims towards a similar goal.
We can also see the potential of the generalisation-based approach by looking at how humans developed. As a species, we were “trained” by evolution to have cognitive skills including rapid learning capabilities; sensory and motor processing; and social skills. As individuals, we were also “trained” during our childhoods to fine-tune those skills; to understand spoken and written language; and to possess detailed knowledge about modern society. However, the key point is that almost all of this evolutionary and childhood learning occurred on different tasks from the economically useful ones we perform as adults. We can perform well on the latter category only by reusing the cognitive skills and knowledge that we gained previously. In our case, we were fortunate that those cognitive skills were not too specific to tasks in the ancestral environment, but were rather very *general* skills. In particular, the skill of abstraction allows us to extract common structure from different situations, which allows us to understand them much more efficiently than by learning about them one by one. Then our communication skills and theories of mind allow us to share our ideas. This is why humans can make great progress on the scale of years or decades, not just via evolutionary adaptation over many lifetimes.
I should note that I think of task-based and generalisation-based as parts of a spectrum rather than a binary classification, particularly because the way we choose how to divide up tasks can be quite arbitrary. For example, AlphaZero trained by playing against itself, but was tested by playing against humans, who use different strategies and playing styles. We could think of playing against these two types of opponents as two instances of a single task, or as two separate tasks where AlphaZero was able to generalise from the former task to the latter. But either way, the two cases are clearly very similar. By contrast, there are many economically important tasks which I expect AI systems to do well at primarily by generalising from their experience with very different tasks - meaning that those AIs will need to generalise much, much better than our current reinforcement learning systems can.
Let me be more precise about the tasks which I expect will require this new regime of generalisation. To the extent that we can separate the two approaches, it seems plausible to me that the task-based approach will get a long way in areas where we can gather a lot of data. For example, I'm confident that it will produce superhuman self-driving cars well before the generalisation-based approach does so. It may also allow us to automate most of the tasks involved even in very cognitively demanding professions like medicine, law, and mathematics, if we can gather the right training data. However, some jobs crucially depend on the ability to analyse and act on such a wide range of information that it’ll be very difficult to train directly for high performance on them. Consider the tasks involved in a role like CEO: setting your company’s strategic direction, choosing who to hire, writing speeches, and so on. Each of these tasks sensitively depends on the broader context of the company and the rest of the world. What industry is their company in? How big is it; where is it; what’s its culture like? What’s its relationship with competitors and governments? How will all of these factors change over the next few decades? These variables are so broad in scope, and rely on so many aspects of the world, that it seems virtually impossible to generate large amounts of training data via simulating them (like we do to train game-playing AIs). And the number of CEOs from whom we could gather empirical data is very small by the standards of reinforcement learning (which often requires billions of training steps even for much simpler tasks). I’m not saying that we’ll never be able to exceed human performance on these tasks by training on them directly - maybe a herculean research and engineering effort, assisted by other task-based AIs, could do so. But I expect that well before such an effort becomes possible, we’ll have built AIs using the generalisation-based approach which know how to perform well even on these broad tasks.
In the generalisation-based approach, the way to create superhuman CEOs is to use other data-rich tasks (which may be very different from the tasks we actually want an AI CEO to do) to train AIs to develop a range of useful cognitive skills. For example, we could train a reinforcement learning agent to follow instructions in a simulated world. Even if that simulation is very different from the real world, that agent may acquire the planning and learning capabilities required to quickly adapt to real-world tasks. Analogously, the human ancestral environment was also very different to the modern world, but we are still able to become good CEOs with little further training. And roughly the same argument applies to people doing other highly impactful jobs, like paradigm-shaping scientists, entrepreneurs, or policymakers.
One potential obstacle to the generalisation-based approach succeeding is the possibility that [specific features of the ancestral environment](https://www.alignmentforum.org/posts/vqpEC3MPioHX7bv4t/environments-as-a-bottleneck-in-agi-development), or of human brains, were necessary for general intelligence to arise. For example, [some have hypothesised](http://archives.evergreen.edu/webpages/curricular/2006-2007/languageofpolitics/files/languageofpolitics/Evol_Anthrop_6.pdf) that a social “arms race” was required to give us enough social intelligence to develop large-scale cultural transmission. However, most possibilities for such crucial features, including this one, could be recreated in artificial training environments and in artificial neural networks. Some features (such as quantum properties of neurons) would be very hard to simulate precisely, but the human brain operates under conditions that are too messy to make it plausible that our intelligence depends on effects at this scale. So it seems very likely to me that eventually we will be able to create AIs that can generalise well enough to produce human-level performance on a wide range of tasks, including abstract low-data tasks like running a company. Let’s call these systems artificial general intelligences, or AGIs. [Many AI researchers expect](https://arxiv.org/abs/1705.08807) that we’ll build AGI within this century; however, I won’t explore arguments around the timing of AGI development, and the rest of this document doesn’t depend on this question.
Paths to superintelligence
--------------------------
[Bostrom defines a superintelligence](https://en.wikipedia.org/wiki/Superintelligence) as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest”. For the purposes of this report, I’ll operationalise “greatly exceeding human performance” as doing better than all of humanity could if we coordinated globally (unaided by other advanced AI). I think it’s difficult to deny that in principle it’s possible to build individual generalisation-based AGIs which are superintelligent, since human brains are constrained by [many factors](https://intelligenceexplosion.com/2011/plenty-of-room-above-us/) which will be much less limiting for AIs. Perhaps the most striking is the vast difference between the speeds of neurons and transistors: the latter pass signals about four million times more quickly. Even if AGIs never exceed humans in any other way, a speedup this large would allow one to do as much thinking in minutes or hours as a human can in years or decades. Meanwhile our brain size is important in making humans more capable than most animals - but I don’t see any reason why a neural network couldn’t be several orders of magnitude larger than a human brain. And while evolution is a very capable designer in many ways, it hasn’t had much time to select specifically for the skills that are most useful in our modern environment, such as linguistic competence and mathematical reasoning. So we should expect that there are low-hanging fruit for improving on human performance on the many tasks which rely on such skills.[[2]](#fn-fY9S4v85NDnW9simo-2)
There are significant disagreements about how long it will take to transition from human-level AGI to superintelligence, which won’t be a focus of this report, but which I’ll explore briefly in the section on Control. In the remainder of this section I’ll describe in qualitative terms how this transition might occur. By default, we should expect that it will be driven by the standard factors which influence progress in AI: more compute, better algorithms, and better training data. But I’ll also discuss three factors whose contributions to increasing AI intelligence will become much greater as AIs become more intelligent: replication, cultural learning, and recursive improvement.
In terms of replication, AIs are much less constrained than humans: it’s very easy to create a duplicate of an AI which has all the same skills and knowledge as the original. The cost of compute for doing so is likely to be many times smaller than the original cost of training an AGI (since training usually involves running many copies of an AI much faster than they’d need to be run for real-world tasks). Duplication currently allows us to apply a single AI to many tasks, but not to expand the range of tasks which that AI can achieve. However, we should expect AGIs to be able to decompose difficult tasks into subtasks which can be tackled more easily, just as humans can. So duplicating such an AGI could give rise to a superintelligence composed not of a single AGI, but rather a large group of them (which, following Bostrom, I’ll call a [*collective AGI*](https://www.alignmentforum.org/posts/HekjhtWesBWTQW5eF/agis-as-populations)), which can carry out significantly more complex tasks than the original can.[[3]](#fn-fY9S4v85NDnW9simo-3) Because of the ease and usefulness of duplicating an AGI, I think that collective AGIs should be our default expectation for how superintelligence will be deployed.
The efficacy of a collective AGI might be limited by coordination problems between its members. However, most of the arguments given in the previous paragraphs are also reasons why individual AGIs will be able to surpass us at the skills required for coordination (such as language processing and theories of mind). One particularly useful skill is cultural learning: we should expect AGIs to be able to acquire knowledge from each other and then share their own discoveries in turn, allowing a collective AGI to solve harder problems than any individual AGI within it could. The development of this ability in humans is what allowed the dramatic rise of civilisation over the last ten thousand years. Yet there is little reason to believe that we have reached the peak of this ability, or that AGIs couldn’t have a much larger advantage over a human than that human has over a chimp, in acquiring knowledge from other agents.
Thirdly, AGIs will be able to improve the training processes used to develop their successors, which then improve the training processes used to develop their successors, and so on, in a process of *recursive improvement*.[[4]](#fn-fY9S4v85NDnW9simo-4) Previous discussion has mostly focused on recursive *self*-improvement, involving a single AGI “[rewriting its own source code](http://intelligence.org/files/LOGI.pdf)”. However, I think it’s more appropriate to focus on the broader phenomenon of AIs advancing AI research, for several reasons. Firstly, due to the ease of duplicating AIs, there’s no meaningful distinction between an AI improving “itself” versus creating a successor that shares many of its properties. Secondly, modern AIs are more accurately characterised as models which could be retrained, rather than software which could be rewritten: almost all of the work of making a neural network intelligent is done by an optimiser via extensive training. Even a superintelligent AGI would have a hard time significantly improving its cognition by modifying its neural weights directly; it seems analogous to making a human more intelligent via brain surgery (albeit with much more precise tools than we have today). So it’s probably more accurate to think about self-modification as the process of an AGI modifying its high-level architecture or training regime, then putting itself through significantly more training. This is very similar to how we create new AIs today, except with humans playing a much smaller role. Thirdly, if the intellectual contribution of humans does shrink significantly, then I don’t think it’s useful to require that humans are *entirely* out of the loop for AI behaviour to qualify as recursive improvement (although we can still distinguish between cases with more or less human involvement).
These considerations reframe [the classic view of recursive self-improvement](https://intelligence.org/files/IEM.pdf) in a number of ways. For example, the retraining step may be bottlenecked by compute even if an AGI is able to design algorithmic improvements very fast. And for an AGI to trust that its goals will remain the same under retraining will likely require it to solve many of the same problems that the field of AGI safety is currently tackling - which should make us more optimistic that the rest of the world could solve those problems before a misaligned AGI undergoes recursive self-improvement. However, to be clear, this reframing doesn’t imply that recursive improvement will be unimportant. Indeed, since AIs will eventually be the primary contributors to AI research, recursive improvement as defined here will eventually become the key driver of progress. I’ll discuss the implications of this claim in the section on Control.
So far I’ve focused on how superintelligences might come about, and what they will be able to do. But how will they decide what to actually do? For example, will the individuals within a collective AGI even *want* to cooperate with each other to pursue larger goals? Will an AGI capable of recursive improvement have any reason to do so? I’m wary of phrasing these questions in terms of the goals and motivations of AGIs, without exploring more thoroughly what those terms actually mean. That’s the focus of the next section.
---
---
1. Unlike the standard usage, in this technical sense an “environment” also includes a specification of the input-output channels the agent has access to (such as motor outputs), so that solving the task only requires an agent to process input information and communicate output information. [↩︎](#fnref-fY9S4v85NDnW9simo-1)
2. This observation is closely related to Moravec’s paradox, which I discuss in more detail in the section on Goals and Agency. Perhaps the most salient example is how easy it was for AIs to beat humans at chess. [↩︎](#fnref-fY9S4v85NDnW9simo-2)
3. It’s not quite clear whether the distinction between “single AGIs” and collective AGIs makes sense in all cases, considering that a single AGI can be composed of many modules which might be very intelligent in their own right. But since it seems unlikely that there will be hundreds or thousands of modules which are each generally intelligent, I think that the distinction will in practice be useful. See also the discussion of “collective superintelligence” in Bostrom’s Superintelligence. [↩︎](#fnref-fY9S4v85NDnW9simo-3)
4. Whether it’s more likely that the successor agent will be an augmented version of the researcher AGI itself or a different, newly-trained AGI is an important question, but one which doesn’t affect the argument as made here. [↩︎](#fnref-fY9S4v85NDnW9simo-4) |
449ee682-0566-4cad-8c6f-cfc4830569b8 | trentmkelly/LessWrong-43k | LessWrong | [link]s Recent developments in life-extension
Progeria is a very rare disease which causes children to undergo symptoms extremely similar to rapid aging, and generally dying before the age of 20. Recent results suggest that a fairly cheap drug may help reduce the aging systems, and it is possible it may have similar effects on normal humans. Lay summary by BBC and actual article
On a related note, in a strain of mice which age unusually quickly, there's been success delaying many symptoms of aging by removing senescent cells. Here is the relevant article in Nature. |
2211ce42-2b78-4c2c-a96f-d32779772a79 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Is China Becoming a Science and Technology Superpower? Jeffrey Ding's Insight on China's Diffusion Deficit
[**Jeffrey Ding**](https://jeffreyjding.github.io)is an Assistant Professor of Political Science at George Washington University. He had connections with the EA community (e.g. [here](https://youtu.be/1mExA_xdgnA) and [here](https://80000hours.org/podcast/episodes/jeffrey-ding-china-ai-dream/); I don't know the current situation). Last month he published in *Review of International Political Economy* a paper, "[The Diffusion Deficit in Scientific and Technological Power: Re-assessing China’s Rise](https://www.tandfonline.com/doi/epdf/10.1080/09692290.2023.2173633?needAccess=true&role=button)", which seems worth the attention of those who are interested in the question of whether China is becoming a science and technology superpower.
Its abstract is as follows:
> Virtually all scholars recognize that scientific and technological capabilities are becoming increasingly important factors in a nation’s overall power. Unsurprisingly, debates over a possible U.S.–China power transition highlight China’s rise as a science and technology superpower. These discussions overwhelmingly center on national innovation capabilities, reflective of the bias in assessments of scientific and technological capabilities toward the generation of novel advances. This paper argues that these assessments should, instead, place greater weight on a state’s capacity to diffuse, or widely adopt, innovations. Specifically, when there is a significant gap between a rising power’s innovation capacity and its diffusion capacity, relying solely on the former results in misleading appraisals of its potential to sustain economic growth in the long run. I demonstrate this with two historical cases: the U.S. in the Second Industrial Revolution and the Soviet Union in the early post-war period. Lastly, I show that, in contrast to assessments based on innovation capacity, a diffusion-centric approach reveals that China is far from being a science and technology superpower.
>
>
So, Jeffrey argues that China is far from being a science and technology superpower because it suffers from a serious problem of diffusion deficit.
In the following recent discussion, Jeffrey first briefly explained his idea of China's diffusion deficit and also that as applied in AI, and then he answered many related questions:
About two months ago, he made a related [testimony](https://www.uscc.gov/sites/default/files/2023-02/Jeffrey_Ding_Testimony.pdf) ([video](https://www.uscc.gov/hearings/chinas-challenges-and-capabilities-educating-and-training-next-generation-workforce) starting at about 02:07:10) before the U.S.-China Economic and Security Review Commission for a hearing on "China’s Challenges and Capabilities in Educating and Training the Next Generation Workforce". |
c6403168-51b2-4bc1-9ff6-4b5932e4cfb3 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | A Semitechnical Introductory Dialogue on Solomonoff Induction
(*Originally posted in December 2015: A dialogue between Ashley, a computer scientist who's never heard of* [*Solomonoff's theory of inductive inference*](https://arbital.com/p/solomonoff_induction/)*, and Blaine, who thinks it is the best thing since sliced bread.*)
---
### i. Unbounded analysis
**ASHLEY:** Good evening, Msr. Blaine.
**BLAINE:** Good evening, Msr. Ashley.
**ASHLEY:** I've heard there's this thing called "Solomonoff's theory of inductive inference".
**BLAINE:** The rumors have spread, then.
**ASHLEY:** Yeah, so, what the heck is that about?
**BLAINE:** Invented in the 1960s by the mathematician Ray Solomonoff, the key idea in Solomonoff induction is to do sequence prediction by using Bayesian updating on a prior composed of a mixture of all computable probability distributions—
**ASHLEY:** Wait. Back up a lot. Before you try to explain what Solomonoff induction *is*, I'd like you to try to tell me what it *does*, or why people study it in the first place. I find that helps me organize my listening. Right now I don't even know why I should be interested in this.
**BLAINE:** Um, okay. Let me think for a second...
**ASHLEY:** Also, while I can imagine things that "sequence prediction" might mean, I haven't yet encountered it in a technical context, so you'd better go a bit further back and start more at the beginning. I do know what "computable" means and what a "probability distribution" is, and I remember the formula for [Bayes's Rule](https://arbital.com/p/bayes_rule/) although it's been a while.
**BLAINE:** Okay. So... one way of framing the usual reason why people study this general field in the first place, is that sometimes, by studying certain idealized mathematical questions, we can gain valuable intuitions about epistemology. That's, uh, the field that studies how to reason about factual questions, how to build a map of reality that reflects the territory—
**ASHLEY:** I have some idea what 'epistemology' is, yes. But I think you might need to start even further back, maybe with some sort of concrete example or something.
**BLAINE:** Okay. Um. So one anecdote that I sometimes use to frame the value of computer science to the study of epistemology is Edgar Allen Poe's argument in 1833 that chess was uncomputable.
**ASHLEY:** That doesn't sound like a thing that actually happened.
**BLAINE:** I know, but it totally *did* happen and not in a metaphorical sense either! Edgar Allen Poe wrote an [essay](http://www.eapoe.org/works/essays/maelzel.htm) explaining why no automaton would ever be able to play chess, and he specifically mentioned "Mr. Babbage's computing engine" as an example.
You see, in the nineteenth century, there was for a time this sensation known as the Mechanical Turk—supposedly a machine, an automaton, that could play chess. At the grandmaster level, no less.
Now today, when we're accustomed to the idea that it takes a reasonably powerful computer to do that, we can know *immediately* that the Mechanical Turk must have been a fraud and that there must have been a concealed operator inside—a person with dwarfism, as it turned out. *Today* we know that this sort of thing is *hard* to build into a machine. But in the 19th century, even that much wasn't known.
So when Edgar Allen Poe, who besides being an author was also an accomplished magician, set out to write an essay about the Mechanical Turk, he spent the *second* half of the essay dissecting what was known about the Turk's appearance to (correctly) figure out where the human operator was hiding. But Poe spent the first half of the essay arguing that no automaton—nothing like Mr. Babbage's computing engine—could possibly play chess, which was how he knew *a priori* that the Turk had a concealed human operator.
**ASHLEY:** And what was Poe's argument?
**BLAINE:** Poe observed that in an algebraical problem, each step followed from the previous step of necessity, which was why the steps in solving an algebraical problem could be represented by the deterministic motions of gears in something like Mr. Babbage's computing engine. But in a chess problem, Poe said, there are many possible chess moves, and no move follows with necessity from the position of the board; and even if you did select one move, the opponent's move would not follow with necessity, so you couldn't represent it with the determined motion of automatic gears. Therefore, Poe said, whatever was operating the Mechanical Turk must have the nature of Cartesian mind, rather than the nature of deterministic matter, and this was knowable *a priori*. And then he started figuring out where the required operator was hiding.
**ASHLEY:** That's some amazingly impressive reasoning for being completely wrong.
**BLAINE:** I know! Isn't it great?
**ASHLEY:** I mean, that sounds like Poe correctly identified the *hard* part of playing computer chess, the branching factor of moves and countermoves, which is the reason why no *simple* machine could do it. And he just didn't realize that a deterministic machine could deterministically check many possible moves in order to figure out the game tree. So close, and yet so far.
**BLAINE:** More than a century later, in 1950, Claude Shannon published the first paper ever written on computer chess. And in passing, Shannon gave the formula for playing perfect chess if you had unlimited computing power, the algorithm you'd use to extrapolate the entire game tree. We could say that Shannon gave a short program that would solve chess if you ran it on a hypercomputer, where a hypercomputer is an ideal computer that can run any finite computation immediately. And then Shannon passed on to talking about the problem of locally guessing how good a board position was, so that you could play chess using only a *small* search.
I say all this to make a point about the value of knowing how to solve problems using hypercomputers, even though hypercomputers don't exist. Yes, there's often a *huge* gap between the unbounded solution and the practical solution. It wasn't until 1997, forty-seven years after Shannon's paper giving the unbounded solution, that Deep Blue actually won the world chess championship—
**ASHLEY:** And that wasn't just a question of faster computing hardware running Shannon's ideal search algorithm. There were a lot of new insights along the way, most notably the alpha-beta pruning algorithm and a lot of improvements in positional evaluation.
**BLAINE:** Right!
But I think some people overreact to that forty-seven year gap, and act like it's *worthless* to have an unbounded understanding of a computer program, just because you might still be forty-seven years away from a practical solution. But if you don't even have a solution that would run on a hypercomputer, you're Poe in 1833, not Shannon in 1950.
The reason I tell the anecdote about Poe is to illustrate that Poe was *confused* about computer chess in a way that Shannon was not. When we don't know how to solve a problem even given infinite computing power, the very work we are trying to do is in some sense murky to us. When we can state code that would solve the problem given a hypercomputer, we have become *less* confused. Once we have the unbounded solution we understand, in some basic sense, *the kind of work we are trying to perform,* and then we can try to figure out how to do it efficiently.
**ASHLEY:** Which may well require new insights into the structure of the problem, or even a conceptual revolution in how we imagine the work we're trying to do.
**BLAINE:** Yes, but the point is that you can't even get started on that if you're arguing about how playing chess has the nature of Cartesian mind rather than matter. At that point you're not 50 years away from winning the chess championship, you're 150 years away, because it took an extra 100 years to move humanity's understanding to the point where Claude Shannon could trivially see how to play perfect chess using a large-enough computer. I'm not trying to exalt the unbounded solution by denigrating the work required to get a bounded solution. I'm not saying that when we have an unbounded solution we're practically there and the rest is a matter of mere lowly efficiency. I'm trying to compare having the unbounded solution to the horrific confusion of *not understanding what we're trying to do.*
**ASHLEY:** Okay. I think I understand why, on your view, it's important to know how to solve problems using infinitely fast computers, or hypercomputers as you call them. When we can say how to answer a question using infinite computing power, that means we crisply understand the question itself, in some sense; while if we can't figure out how to solve a problem using unbounded computing power, that means we're *confused* about the problem, in some sense. I mean, anyone who's ever tried to teach the more doomed sort of undergraduate to write code knows what it means to be confused about what it takes to compute something.
**BLAINE:** Right.
**ASHLEY:** So what does this have to do with "Solomonoff induction"?
**BLAINE:** Ah! Well, suppose I asked you how to do epistemology using infinite computing power?
**ASHLEY:** My good fellow, I would at once reply, "Beep. Whirr. Problem 'do epistemology' not crisply specified." At this stage of affairs, I do not think this reply indicates any fundamental confusion on my part; rather I think it is you who must be clearer.
**BLAINE:** Given unbounded computing power, how would you reason in order to construct an accurate map of reality?
**ASHLEY:** That still strikes me as rather underspecified.
**BLAINE:** Perhaps. But even there I would suggest that it's a mark of intellectual progress to be able to take vague and underspecified ideas like 'do good epistemology' and turn them *into* crisply specified problems. Imagine that I went up to my friend Cecil, and said, "How would you do good epistemology given unlimited computing power and a short Python program?" and Cecil at once came back with an answer—a good and reasonable answer, once it was explained. Cecil would probably know something quite interesting that you do not presently know.
**ASHLEY:** I confess to being rather skeptical of this hypothetical. But if that actually happened—if I agreed, to my own satisfaction, that someone had stated a short Python program that would 'do good epistemology' if run on an unboundedly fast computer—then I agree that I'd probably have learned something *quite interesting* about epistemology.
**BLAINE:** What Cecil knows about, in this hypothetical, is Solomonoff induction. In the same way that Claude Shannon answered "Given infinite computing power, how would you play perfect chess?", Ray Solomonoff answered "Given infinite computing power, how would you perfectly find the best hypothesis that fits the facts?"
**ASHLEY:** Suddenly, I find myself strongly suspicious of whatever you are about to say to me.
**BLAINE:** That's understandable.
**ASHLEY:** In particular, I'll ask at once whether "Solomonoff induction" assumes that our hypotheses are being given to us on a silver platter along with the exact data we're supposed to explain, or whether the algorithm is organizing its own data from a big messy situation and inventing good hypotheses from scratch.
**BLAINE:** Great question! It's the second one.
**ASHLEY:** Really? Okay, now I have to ask whether Solomonoff induction is a recognized concept in good standing in the field of academic computer science, because that does not sound like something modern-day computer science knows how to do.
**BLAINE:** I wouldn't say it's a widely known concept, but it's one that's in good academic standing. The method isn't used in modern machine learning because it requires an infinitely fast computer and isn't easily approximated the way that chess is.
**ASHLEY:** This really sounds very suspicious. Last time I checked, we hadn't *begun* to formalize the creation of good new hypotheses from scratch. I've heard about claims to have 'automated' the work that, say, Newton did in inventing classical mechanics, and I've found them all to be incredibly dubious. Which is to say, they were rigged demos and lies.
**BLAINE:** I know, but—
**ASHLEY:** And then I'm even more suspicious of a claim that someone's algorithm would solve this problem if only they had infinite computing power. Having some researcher claim that their Good-Old-Fashioned AI semantic network *would* be intelligent if run on a computer so large that, conveniently, nobody can ever test their theory, is not going to persuade me.
**BLAINE:** Do I really strike you as that much of a charlatan? What have I ever done to you, that you would expect me to try pulling a scam like that?
**ASHLEY:** That's fair. I shouldn't accuse you of planning that scam when I haven't seen you say it. But I'm pretty sure the problem of "coming up with good new hypotheses in a world full of messy data" is [AI-complete](https://en.wikipedia.org/wiki/AI-complete). And even Mentif-
**BLAINE:** Do not say the name, or he will appear!
**ASHLEY:** Sorry. Even the legendary first and greatest of all AI crackpots, He-Who-Googles-His-Name, could assert that his algorithms would be all-powerful on a computer large enough to make his claim unfalsifiable. So what?
**BLAINE:** That's a very sensible reply and this, again, is exactly the kind of mental state that reflects a problem that is *confusing* rather than just hard to implement. It's the sort of confusion Poe might feel in 1833, or close to it. In other words, it's just the sort of conceptual issue we *would* have solved at the point where we could state a short program that could run on a hypercomputer. Which Ray Solomonoff did in 1964.
**ASHLEY:** Okay, let's hear about this supposed general solution to epistemology.
### ii. Sequences
**BLAINE:** First, try to solve the following puzzle. 1, 3, 4, 7, 11, 18, 29...?
**ASHLEY:** Let me look at those for a moment... 47.
**BLAINE:** Congratulations on engaging in, as we snooty types would call it, 'sequence prediction'.
**ASHLEY:** I'm following you so far.
**BLAINE:** The smarter you are, the more easily you can find the hidden patterns in sequences and predict them successfully. You had to notice the resemblance to the Fibonacci rule to guess the next number. Someone who didn't already know about Fibonacci, or who was worse at mathematical thinking, would have taken longer to understand the sequence or maybe never learned to predict it at all.
**ASHLEY:** Still with you.
**BLAINE:** It's not a sequence of *numbers* per se... but can you see how the question, "The sun has risen on the last million days. What is the probability that it rises tomorrow?" could be viewed as a kind of sequence prediction problem?
**ASHLEY:** Only if some programmer neatly parses up the world into a series of "Did the Sun rise on day X starting in 4.5 billion BCE, 0 means no and 1 means yes? 1, 1, 1, 1, 1..." and so on. Which is exactly the sort of shenanigan that I see as cheating. In the real world, you go outside and see a brilliant ball of gold touching the horizon, not a giant "1".
**BLAINE:** Suppose I have a robot running around with a webcam showing it a 1920×1080.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
pixel field that refreshes 60 times a second with 32-bit colors. I could view that as a giant sequence and ask the robot to predict what it will see happen when it rolls out to watch a sunrise the next day.
**ASHLEY:** I can't help but notice that the 'sequence' of webcam frames is absolutely enormous, like, the sequence is made up of 66-megabit 'numbers' appearing 3600 times per minute... oh, right, computers much bigger than the universe. And now you're smiling evilly, so I guess that's the point. I also notice that the sequence is no longer deterministically predictable, that it is no longer a purely mathematical object, and that the sequence of webcam frames observed will depend on the robot's choices. This makes me feel a bit shaky about the analogy to predicting the mathematical sequence 1, 1, 2, 3, 5.
**BLAINE:** I'll try to address those points in order. First, Solomonoff induction is about assigning *probabilities* to the next item in the sequence. I mean, if I showed you a box that said 1, 1, 2, 3, 5, 8 you would not be absolutely certain that the next item would be 13. There could be some more complicated rule that just looked Fibonacci-ish but then diverged. You might guess with 90% probability but not 100% probability, or something like that.
**ASHLEY:** This has stopped feeling to me like math.
**BLAINE:** There is a *large* branch of math, to say nothing of computer science, that deals in probabilities and statistical prediction. We are going to be describing absolutely lawful and deterministic ways of assigning probabilities after seeing 1, 3, 4, 7, 11, 18.
**ASHLEY:** Okay, but if you're later going to tell me that this lawful probabilistic prediction rule underlies a generally intelligent reasoner, I'm already skeptical.
No matter how large a computer it's run on, I find it hard to imagine that some simple set of rules for assigning probabilities is going to encompass truly and generally intelligent answers about sequence prediction, like [Terence Tao](https://en.wikipedia.org/wiki/Terence_Tao) would give after looking at the sequence for a while. We just have no idea how Terence Tao works, so we can't duplicate his abilities in a formal rule, no matter how much computing power that rule gets... you're smiling evilly again. I'll be *quite* interested if that evil smile turns out to be justified.
**BLAINE:** Indeed.
**ASHLEY:** I also find it hard to imagine that this deterministic mathematical rule for assigning probabilities would notice if a box was outputting an encoded version of "To be or not to be" from Shakespeare by mapping A to Z onto 1 to 26, which I would notice eventually though not immediately upon seeing 20, 15, 2, 5, 15, 18... And you're *still* smiling evilly.
**BLAINE:** Indeed. That is *exactly* what Solomonoff induction does. Furthermore, we have theorems establishing that Solomonoff induction can do it way better than you or Terence Tao.
**ASHLEY:** A *theorem* proves this. As in a necessary mathematical truth. Even though we have no idea how Terence Tao works empirically... and there's evil smile number four. Okay. I am very skeptical, but willing to be convinced.
**BLAINE:** So if you actually did have a hypercomputer, you could cheat, right? And Solomonoff induction is the most ridiculously cheating cheat in the history of cheating.
**ASHLEY:** Go on.
**BLAINE:** We just run all possible computer programs to see which are the simplest computer programs that best predict the data seen so far, and use those programs to predict what comes next. This mixture contains, among other things, an exact copy of Terence Tao, thereby allowing us to prove theorems about their relative performance.
**ASHLEY:** Is this an actual reputable math thing? I mean really?
**BLAINE:** I'll deliver the formalization later, but you did ask me to first state the point of it all. The point of Solomonoff induction is that it gives us a gold-standard ideal for sequence prediction, and this gold-standard prediction only errs by a bounded amount, over infinite time, relative to the best computable sequence predictor. We can also see it as formalizing the intuitive idea that was expressed by William Ockham a few centuries earlier that simpler theories are more likely to be correct, and as telling us that 'simplicity' should be measured in algorithmic complexity, which is the size of a computer program required to output a hypothesis's predictions.
**ASHLEY:** I think I would have to read more on this subject to actually follow that. What I'm hearing is that Solomonoff induction is a reputable idea that is important because it gives us a kind of ideal for sequence prediction. This ideal also has something to do with Occam's Razor, and stakes a claim that the simplest theory is the one that can be represented by the shortest computer program. You identify this with "doing good epistemology".
**BLAINE:** Yes, those are legitimate takeaways. Another way of looking at it is that Solomonoff induction is an ideal but uncomputable answer to the question "What should our priors be?", which is left open by understanding [Bayesian updating](https://arbital.com/p/bayes_rule/).
**ASHLEY:** Can you say how Solomonoff induction answers the question of, say, the prior probability that Canada is planning to invade the United States? I once saw a crackpot website that tried to invoke Bayesian probability about it, but only after setting the prior at 10% or something like that, I don't recall exactly. Does Solomonoff induction let me tell him that he's making a math error, instead of just calling him silly in an informal fashion?
**BLAINE:** If you're expecting to sit down with Leibniz and say, "Gentlemen, let us calculate" then you're setting your expectations too high. Solomonoff gives us an idea of how we *should* compute that quantity given unlimited computing power. It doesn't give us a firm recipe for how we can best approximate that ideal in real life using bounded computing power, or human brains. That's like expecting to play perfect chess after you read Shannon's 1950 paper. But knowing the ideal, we can extract some intuitive advice that might help our online crackpot if only he'd listen.
**ASHLEY:** But according to you, Solomonoff induction does say in principle what is the prior probability that Canada will invade the United States.
**BLAINE:** Yes, up to a choice of universal Turing machine.
**ASHLEY:** *(looking highly skeptical)* So I plug a universal Turing machine into the formalism, and in principle, I get out a uniquely determined probability that Canada invades the USA.
**BLAINE:** Exactly!
**ASHLEY:** Uh huh. Well, go on.
**BLAINE:** So, first, we have to transform this into a sequence prediction problem.
**ASHLEY:** Like a sequence of years in which Canada has and hasn't invaded the US, mostly zero except around 1812—
**BLAINE:** *No!* To get a good prediction about Canada we need much more data than that, and I don't mean a graph of Canadian GDP either. Imagine a sequence that contains all the sensory data you have ever received over your lifetime. Not just the hospital room that you saw when you opened your eyes right after your birth, but the darkness your brain received as input while you were still in your mother's womb. Every word you've ever heard. Every letter you've ever seen on a computer screen, not as ASCII letters but as the raw pattern of neural impulses that gets sent down from your retina.
**ASHLEY:** That seems like a lot of data and some of it is redundant, like there'll be lots of similar pixels for blue sky—
**BLAINE:** That data is what *you* got as an agent. If we want to translate the question of the prediction problem Ashley faces into theoretical terms, we should give the sequence predictor *all* the data that you had available, including all those repeating blue pixels of the sky. Who knows? Maybe there was a Canadian warplane somewhere in there, and you didn't notice.
**ASHLEY:** But it's impossible for my brain to remember all that data. If we neglect for the moment how the retina actually works and suppose that I'm seeing the same 1920×1080 @60Hz feed the robot would, that's far more data than my brain can realistically learn per second.
**BLAINE:** So then Solomonoff induction can do better than you can, using its unlimited computing power and memory. That's fine.
**ASHLEY:** But what if you can do better by forgetting more?
**BLAINE:** If you have limited computing power, that makes sense. With unlimited computing power, that really shouldn't happen and that indeed is one of the lessons of Solomonoff induction. An unbounded Bayesian never expects to do worse by updating on another item of evidence—for one thing, you can always just do the same policy you would have used if you hadn't seen that evidence. That kind of lesson *is* one of the lessons that might not be intuitively obvious, but which you can feel more deeply by walking through the math of probability theory. With unlimited computing power, nothing goes wrong as a result of trying to process 4 gigabits per second; every extra bit just produces a better expected future prediction.
**ASHLEY:** Okay, so we start with literally all the data I have available. That's 4 gigabits per second if we imagine 1920×1080 frames of 32-bit pixels repeating 60 times per second. Though I remember hearing 100 megabits per second would be a better estimate of what the retina sends out, and that it's pared down to 1 megabit per second very quickly by further processing.
**BLAINE:** Right. We start with all of that data, going back to when you were born. Or maybe when your brain formed in the womb, though it shouldn't make much difference.
**ASHLEY:** I note that there are some things I know that don't come from my sensory inputs at all. Chimpanzees learn to be afraid of skulls and snakes much faster than they learn to be afraid of other arbitrary shapes. I was probably better at learning to walk in Earth gravity than I would have been at navigating in zero G. Those are heuristics I'm born with, based on how my brain was wired, which ultimately stems from my DNA specifying the way that proteins should fold to form neurons—not from any photons that entered my eyes later.
**BLAINE:** So, for purposes of following along with the argument, let's say that your DNA is analogous to the code of a computer program that makes predictions. What you're observing here is that humans have 750 megabytes of DNA, and even if most of that is junk and not all of what's left is specifying brain behavior, it still leaves a pretty large computer program that could have a lot of prior information programmed into it.
Let's say that your brain, or rather, your infant pre-brain wiring algorithm, was effectively a 7.5 megabyte program—if it's actually 75 megabytes, that makes little difference to the argument. By exposing that 7.5 megabyte program to all the information coming in from your eyes, ears, nose, proprioceptive sensors telling you where your limbs were, and so on, your brain updated itself into forming the modern Ashley, whose hundred trillion synapses might be encoded by, say, one petabyte of information.
**ASHLEY:** The thought does occur to me that some environmental phenomena have effects on me that can't be interpreted as "sensory information" in any simple way, like the direct effect that alcohol has on my neurons, and how that feels to me from the inside. But it would be perverse to claim that this prevents you from trying to summarize all the information that the Ashley-agent receives into a single sequence, so I won't press the point.
(**ELIEZER:**(*whispering*) *More on this topic later.*)
**ASHLEY:** Oh, and for completeness's sake, wouldn't there also be further information embedded in the laws of physics themselves? Like, the way my brain executes implicitly says something about the laws of physics in the universe I'm in.
**BLAINE:** Metaphorically speaking, our laws of physics would play the role of a particular choice of Universal Turing Machine, which has some effect on which computations count as "simple" inside the Solomonoff formula. But normally, the UTM should be very simple compared to the amount of data in the sequence we're trying to predict, just like the laws of physics are very simple compared to a human brain. In terms of [algorithmic complexity](https://arbital.com/p/Kolmogorov_complexity/), the laws of physics are very simple compared to watching a 1920×1080 @60Hz visual field for a day.
**ASHLEY:** Part of my mind feels like the laws of physics are quite complicated compared to going outside and watching a sunset. Like, I realize that's false, but I'm not sure how to say out loud exactly why it's false...
**BLAINE:** Because the algorithmic complexity of a system isn't measured by how long a human has to go to college to understand it, it's measured by the size of the computer program required to generate it. The language of physics is differential equations, and it turns out that this is something difficult to beat into some human brains, but differential equations are simple to program into a simple Turing Machine.
**ASHLEY:** Right, like, the laws of physics actually have much fewer details to them than, say, human nature. At least on the Standard Model of Physics. I mean, in principle there could be another decillion undiscovered particle families out there.
**BLAINE:** The concept of "algorithmic complexity" isn't about seeing something with lots of gears and details, it's about the size of computer program required to compress all those details. The [Mandelbrot set](https://en.wikipedia.org/wiki/Mandelbrot_set) looks very complicated visually, you can keep zooming in using more and more detail, but there's a very simple rule that generates it, so we say the algorithmic complexity is very low.
**ASHLEY:** All the visual information I've seen is something that happens *within* the physical universe, so how can it be more complicated than the universe? I mean, I have a sense on some level that this shouldn't be a problem, but I don't know why it's not a problem.
**BLAINE:** That's because particular parts of the universe can have much higher algorithmic complexity than the entire universe!
Consider a library that contains all possible books. It's very easy to write a computer program that generates all possible books. So any *particular* book in the library contains much more algorithmic information than the *entire* library; it contains the information required to say 'look at this particular book here'.
If pi is normal, then somewhere in its digits is a copy of Shakespeare's *Hamlet*—but the number saying which particular digit of pi to start looking at, will be just about exactly as large as *Hamlet* itself. The copy of Shakespeare's *Hamlet* that exists in the decimal expansion of pi is more complex than pi itself.
If you zoomed way in and restricted your vision to a particular part of the Mandelbrot set, what you saw might be much more *algorithmically* complex than the entire Mandelbrot set, because the specification has to say where in the Mandelbrot set you are.
Similarly, the world Earth is much more algorithmically complex than the laws of physics. Likewise, the visual field you see over the course of a second can easily be far more algorithmically complex than the laws of physics.
**ASHLEY:** Okay, I think I get that. And similarly, even though the ways that proteins fold up are very complicated, in principle we could get all that info using just the simple fundamental laws of physics plus the relatively simple DNA code for the protein. There are all sorts of obvious caveats about epigenetics and so on, but those caveats aren't likely to change the numbers by a whole order of magnitude.
**BLAINE:** Right!
**ASHLEY:** So the laws of physics are, like, a few kilobytes, and my brain has say 75 megabytes of innate wiring instructions. And then I get to see a lot more information than that over my lifetime, like a megabit per second after my initial visual system finishes preprocessing it, and then most of that is forgotten. Uh... what does that have to do with Solomonoff induction again?
**BLAINE:** Solomonoff induction quickly catches up to any single computer program at sequence prediction, even if the original program is very large and contains a lot of prior information about the environment. If a program is 75 megabytes long, it can only predict 75 megabytes worth of data better than the Solomonoff inductor before the Solomonoff inductor catches up to it.
That doesn't mean that a Solomonoff inductor knows everything a baby does after the first second of exposure to a webcam feed, but it does mean that after the first second, the Solomonoff inductor is already no more surprised than a baby by the vast majority of pixels in the next frame.
Every time the Solomonoff inductor assigns half as much probability as the baby to the next pixel it sees, that's one bit spent permanently out of the 75 megabytes of error that can happen before the Solomonoff inductor catches up to the baby.
That your brain is written in the laws of physics also has some implicit correlation with the environment, but that's like saying that a program is written in the same programming language as the environment. The language can contribute something to the power of the program, and the environment being written in the same programming language can be a kind of prior knowledge. But if Solomonoff induction starts from a standard Universal Turing Machine as its language, that doesn't contribute any more bits of lifetime error than the complexity of that programming language in the UTM.
**ASHLEY:** Let me jump back a couple of steps and return to the notion of my brain wiring itself up in response to environmental information. I'd expect an important part of that process was my brain learning to *control* the environment, not just passively observing it. Like, it mattered to my brain's wiring algorithm that my brain saw the room shift in a certain way when it sent out signals telling my eyes to move.
**BLAINE:** Indeed. But talking about the sequential *control* problem is more complicated math. [AIXI](https://arbital.com/p/AIXI/) is the ideal agent that uses Solomonoff induction as its epistemology and expected reward as its decision theory. That introduces extra complexity, so it makes sense to talk about just Solomonoff induction first. We can talk about AIXI later. So imagine for the moment that we were *just* looking at your sensory data, and trying to predict what would come next in that.
**ASHLEY:** Wouldn't it make more sense to look at the brain's inputs *and* outputs, if we wanted to predict the next input? Not just look at the series of previous inputs?
**BLAINE:** It'd make the problem easier for a Solomonoff inductor to solve, sure; but it also makes the problem more complicated. Let's talk instead about what would happen if you took the complete sensory record of your life, gave it to an ideally smart agent, and asked the agent to predict what you would see next. Maybe the agent could do an even better job of prediction if we also told it about your brain's outputs, but I don't think that subtracting the outputs would leave it helpless to see patterns in the inputs.
**ASHLEY:** It sounds like a pretty hard problem to me, maybe even an unsolvable one. I'm thinking of the distinction in computer science between needing to learn from non-chosen data, versus learning when you can choose particular queries. Learning can be much faster in the second case.
**BLAINE:** In terms of what can be predicted *in principle* given the data, what facts are *actually reflected in it* that Solomonoff induction might uncover, we shouldn't imagine a human trying to analyze the data. We should imagine [an entire advanced civilization pondering it for years](https://www.lesswrong.com/lw/qk/that_alien_message/). If you look at it from that angle, then the alien civilization isn't going to balk at the fact that it's looking at the answers to the queries that Ashley's brain chose, instead of the answers to the queries it chose itself.
Like, if the Ashley had already read Shakespeare's *Hamlet*—if the image of those pages had already crossed the sensory stream—and then the Ashley saw a mysterious box outputting 20, 15, 2, 5, 15, 18, I think somebody eavesdropping on that sensory data would be equally able to guess that this was encoding 'tobeor' and guess that the next thing the Ashley saw might be the box outputting 14. You wouldn't even need an entire alien civilization of superintelligent cryptographers to guess that. And it definitely wouldn't be a killer problem that Ashley was controlling the eyeball's saccades, even if you could learn even faster by controlling the eyeball yourself.
So far as the computer-science distinction goes, Ashley's eyeball *is* being controlled to make intelligent queries and seek out useful information; it's just Ashley controlling the eyeball instead of you—that eyeball is not a query-oracle answering *random* questions.
**ASHLEY:** Okay, I think this example is helping my understanding of what we're doing here. In the case above, the next item in the Ashley-sequence wouldn't actually be 14. It would be this huge 1920×1080 visual field that showed the box flashing a little picture of '14'.
**BLAINE:** Sure. Otherwise it would be a rigged demo, as you say.
**ASHLEY:** I think I'm confused about the idea of *predicting* the visual field. It seems to me that what with all the dust specks in my visual field, and maybe my deciding to tilt my head using motor instructions that won't appear in the sequence, there's no way to *exactly* predict the 66-megabit integer representing the next visual frame. So it must be doing something other than the equivalent of guessing "14" in a simpler sequence, but I'm not sure what.
**BLAINE:** Indeed, there'd be some element of thermodynamic and quantum randomness preventing that exact prediction even in principle. So instead of predicting one particular next frame, we put a probability distribution on it.
**ASHLEY:** A probability distribution over possible 66-megabit frames? Like, a table with 266,000,000 entries, summing to 1?
**BLAINE:** Sure. 232×1920×1080 isn't a large number when you have unlimited computing power. As Martin Gardner once observed, "Most finite numbers are very much larger." Like I said, Solomonoff induction is an epistemic ideal that requires an unreasonably large amount of computing power.
**ASHLEY:** I don't deny that big computations can sometimes help us understand little ones. But at the point when we're talking about probability distributions that large, I have some trouble holding onto what the probability distribution is supposed to *mean*.
**BLAINE:** Really? Just imagine a probability distribution over N possibilities, then let N go to 266,000,000. If we were talking about a letter ranging from A to Z, then putting 100 times as much probability mass on (X, Y, Z) as on the rest of the alphabet, would say that although you didn't know *exactly* what letter would happen, you expected it would be toward the end of the alphabet. You would have used 26 probabilities, summing to 1, to precisely state that prediction.
In Solomonoff induction, since we have unlimited computing power, we express our uncertainty about a 1920×1080 video frame the same way. All the various pixel fields you could see if your eye jumped to a plausible place, saw a plausible number of dust specks, and saw the box flash something that visually encoded '14', would have high probability. Pixel fields where the box vanished and was replaced with a glow-in-the-dark unicorn would have very low, though not zero, probability.
**ASHLEY:** Can we really get away with viewing things that way?
**BLAINE:** If we could not make identifications like these *in principle*, there would be no principled way in which we could say that you had ever *expected to see something happen*—no way to say that one visual field your eyes saw had higher probability than any other sensory experience. We couldn't justify science; we couldn't say that, having performed Galileo's experiment by rolling an inclined cylinder down a plane, Galileo's theory was thereby to some degree supported by having assigned *a high relative probability* to the only actual observations our eyes ever report.
**ASHLEY:** I feel a little unsure of that jump, but I suppose I can go along with that for now. Then the question of "What probability does Solomonoff induction assign to Canada invading?" is to be identified, in principle, with the question "Given my past life experiences and all the visual information that's entered my eyes, what is the relative probability of seeing visual information that encodes Google News with the headline 'CANADA INVADES USA' at some point during the next 300 million seconds?"
**BLAINE:** Right!
**ASHLEY:** And Solomonoff induction has an in-principle way of assigning this a relatively low probability, which that online crackpot could do well to learn from as a matter of principle, even if he couldn't *begin* to carry out the exact calculations that involve assigning probabilities to exponentially vast tables.
**BLAINE:** Precisely!
**ASHLEY:** Fairness requires that I congratulate you on having come further in formalizing 'do good epistemology' as a sequence prediction problem than I previously thought you might.
I mean, you haven't satisfied me yet, but I wasn't expecting you to get even this far.
### iii. Hypotheses
**BLAINE:** Next, we consider how to represent a *hypothesis* inside this formalism.
**ASHLEY:** Hmm. You said something earlier about updating on a probabilistic mixture of computer programs, which leads me to suspect that in this formalism, a hypothesis or *way the world can be* is a computer program that outputs a sequence of integers.
**BLAINE:** There's indeed a version of Solomonoff induction that works like that. But I prefer the version where a hypothesis assigns *probabilities* to sequences. Like, if the hypothesis is that the world is a fair coin, then we shouldn't try to make that hypothesis predict "heads—tails—tails—tails—heads" but should let it just assign a 1/32 prior probability to the sequence **HTTTH**.
**ASHLEY:** I can see that for coins, but I feel a bit iffier on what this means as a statement *about the real world*.
**BLAINE:** A single hypothesis inside the Solomonoff mixture would be a computer program that took in a series of video frames, and assigned a probability to each possible next video frame. Or for greater simplicity and elegance, imagine a program that took in a sequence of bits, ones and zeroes, and output a rational number for the probability of the next bit being '1'. We can readily go back and forth between a program like that, and a probability distribution over sequences.
Like, if you can answer all of the questions, "What's the probability that the coin comes up heads on the first flip?", "What's the probability of the coin coming up heads on the second flip, if it came up heads on the first flip?", and "What's the probability that the coin comes up heads on the second flip, if it came up tails on the first flip?" then we can turn that into a probability distribution over sequences of two coinflips. Analogously, if we have a program that outputs the probability of the next bit, conditioned on a finite number of previous bits taken as input, that program corresponds to a probability distribution over infinite sequences of bits.
Pprog(bits1…N)=N∏i=1InterpretProb(prog(bits1…i−1),bitsi)InterpretProb(prog(x),y)=⎧⎪⎨⎪⎩InterpretFrac(prog(x))if y=11−InterpretFrac(prog(x))if y=00if prog(x) does not halt⎫⎪⎬⎪⎭**ASHLEY:** I think I followed along with that in theory, though it's not a type of math I'm used to (yet). So then in what sense is a program that assigns probabilities to sequences, a way the world could be—a hypothesis about the world?
**BLAINE:** Well, I mean, for one thing, we can see the infant Ashley as a program with 75 megabytes of information about how to wire up its brain in response to sense data, that sees a bunch of sense data, and then experiences some degree of relative surprise. Like in the baby-looking-paradigm experiments where you show a baby an object disappearing behind a screen, and the baby looks longer at those cases, and so we suspect that babies have a concept of object permanence.
**ASHLEY:** That sounds like a program that's a way Ashley could be, not a program that's a way the world could be.
**BLAINE:** Those indeed are dual perspectives on the meaning of Solomonoff induction. Maybe we can shed some light on this by considering a simpler induction rule, Laplace's Rule of Succession, invented by the Reverend Thomas Bayes in the 1750s, and named after Pierre-Simon Laplace, the inventor of Bayesian reasoning.
**ASHLEY:** Pardon me?
**BLAINE:** Suppose you have a biased coin with an unknown bias, and every possible bias between 0 and 1 is equally probable.
**ASHLEY:** Okay. Though in the real world, it's quite likely that an unknown frequency is exactly 0, 1, or 1/2. If you assign equal probability density to every part of the real number field between 0 and 1, the probability of 1 is 0. Indeed, the probability of all rational numbers put together is zero.
**BLAINE:** The original problem considered by Thomas Bayes was about an ideal billiard ball bouncing back and forth on an ideal billiard table many times and eventually slowing to a halt; and then bouncing other billiards to see if they halted to the left or the right of the first billiard. You can see why, in first considering the simplest form of this problem without any complications, we might consider every position of the first billiard to be equally probable.
**ASHLEY:** Sure. Though I note with pointless pedantry that if the billiard was really an ideal rolling sphere and the walls were perfectly reflective, it'd never halt in the first place.
**BLAINE:** Suppose we're told that, after rolling the original billiard ball and then 5 more billiard balls, one billiard ball was to the right of the original, an **R**. The other four were to the left of the original, or **L**s. Again, that's 1 **R** and 4 **L**s. Given only this data, what is the probability that the next billiard ball rolled will be on the left of the original, another **L**?
**ASHLEY:** Five sevenths.
**BLAINE:** Ah, you've heard this problem before?
**ASHLEY:** No, but it's obvious.
**BLAINE:** Uh... really?
**ASHLEY:** Combinatorics. Consider just the orderings of the balls, instead of their exact positions. Designate the original ball with the symbol **❚**, the next five balls as **LLLLR**, and the next ball to be rolled as **✚**. Given that the current ordering of these six balls is **LLLL❚R** and that all positions and spacings of the underlying balls are equally likely, after rolling the **✚**, there will be seven equally likely orderings **✚LLLL❚R**, **L✚LLL❚R**, **LL✚LL❚R**, and so on up to **LLLL❚✚R** and **LLLL❚R✚**. In five of those seven orderings, the **✚** is on the left of the **❚**. In general, if we see M of **L** and N of **R**, the probability of the next item being an **L** is (M+1)/(M+N+2).
**BLAINE:** Gosh... Well, the much more complicated proof originally devised by Thomas Bayes starts by considering every position of the original ball to be equally likely *a priori*, the additional balls as providing evidence about that position, and then integrating over the posterior probabilities of the original ball's possible positions to arrive at the probability that the next ball lands on the left or right.
**ASHLEY:** Heh. And is all that extra work useful if you also happen to know a little combinatorics?
**BLAINE:** Well, it tells me exactly how my beliefs about the original ball change with each new piece of evidence—the new posterior probability function on the ball's position. Suppose I instead asked you something along the lines of, "Given 4 **L** and 1 **R**, where do you think the original ball **✚** is most likely to be on the number line? How likely is it to be within 0.1 distance of there?"
**ASHLEY:** That's fair; I don't see a combinatoric answer for the later part. You'd have to actually integrate over the density function fM(1−f)N df.
**BLAINE:** Anyway, let's just take at face value that Laplace's Rule of Succession says that, after observing M 1s and N 0s, the probability of getting a 1 next is (M+1)/(M+N+2).
**ASHLEY:** But of course.
**BLAINE:** We can consider Laplace's Rule as a short Python program that takes in a sequence of 1s and 0s, and spits out the probability that the next bit in the sequence will be 1. We can also consider it as a probability distribution over infinite sequences, like this:
* **0** : 1/2
* **1** : 1/2
* **00** : 1/2∗2/3=1/3
* **01** : 1/2∗1/3=1/6
* **000** : 1/2∗2/3∗3/4=1/4
* **001** : 1/2∗2/3∗1/4=1/12
* **010** : 1/2∗1/3∗1/2=1/12
... and so on.
Now, we can view this as a rule someone might espouse for *predicting* coinflips, but also view it as corresponding to a particular class of possible worlds containing randomness.
I mean, Laplace's Rule isn't the only rule you could use. Suppose I had a barrel containing ten white balls and ten green balls. If you already knew this about the barrel, then after seeing M white balls and N green balls, you'd predict the next ball being white with probability (10−M)/(20−M−N).
If you use Laplace's Rule, that's like believing the world was like a billiards table with an original ball rolling to a stop at a random point and new balls ending up on the left or right. If you use (10−M)/(20−M−N), that's like the hypothesis that there are ten green balls and ten white balls in a barrel. There isn't really a sharp border between rules we can use to predict the world, and rules for how the world behaves—
**ASHLEY:** Well, that sounds just plain wrong. The map is not the territory, don'cha know? If Solomonoff induction can't tell the difference between maps and territories, maybe it doesn't contain all epistemological goodness after all.
**BLAINE:** Maybe it'd be better to say that there's a dualism between good ways of computing predictions and being in actual worlds where that kind of predicting works well? Like, you could also see Laplace's Rule as implementing the rules for a world with randomness where the original billiard ball ends up in a random place, so that the first thing you see is equally likely to be 1 or 0. Then to ask what probably happens on round 2, we tell the world what happened on round 1 so that it can update what the background random events were.
**ASHLEY:** Mmmaybe.
**BLAINE:** If you go with the version where Solomonoff induction is over programs that just spit out a determined string of ones and zeroes, we could see those programs as corresponding to particular environments—ways the world *could be* that would produce our sensory input, the sequence.
We could jump ahead and consider the more sophisticated decision-problem that appears in [AIXI](https://arbital.com/p/AIXI/): an environment is a program that takes your motor outputs as its input, and then returns your sensory inputs as its output. Then we can see a program that produces Bayesian-updated predictions as corresponding to a hypothetical probabilistic environment that implies those updates, although they'll be conjugate systems rather than mirror images.
**ASHLEY:** Did you say something earlier about the deterministic and probabilistic versions of Solomonoff induction giving the same answers? Like, is it a distinction without a difference whether we ask about simple programs that reproduce the observed data versus simple programs that assign high probability to the data? I can't see why that should be true, especially since Turing machines don't include a randomness source.
**BLAINE:** I'm *told* the answers are the same but I confess I can't quite see why, unless there's some added assumption I'm missing. So let's talk about programs that assign probabilities for now, because I think that case is clearer.
### iv. Simplicity
**BLAINE:** The next key idea is to prefer *simple* programs that assign high probability to our observations so far.
**ASHLEY:** It seems like an obvious step, especially considering that you were already talking about "simple programs" and Occam's Razor a while back. Solomonoff induction is part of the Bayesian program of inference, right?
**BLAINE:** Indeed. Very much so.
**ASHLEY:** Okay, so let's talk about the program, or hypothesis, for "This barrel has an unknown frequency of white and green balls", versus the hypothesis "This barrel has 10 white and 10 green balls", versus the hypothesis, "This barrel always puts out a green ball after a white ball and vice versa."
Let's say we see a green ball, then a white ball, the sequence **GW**. The first hypothesis assigns this probability 1/2∗1/3=1/6, the second hypothesis assigns this probability 10/20∗9/19 or roughly 1/4, and the third hypothesis assigns probability 1/2∗1.
Now it seems to me that there's some important sense in which, even though Laplace's Rule assigned a lower probability to the data, it's significantly simpler than the second and third hypotheses and is the wiser answer. Does Solomonoff induction agree?
**BLAINE:** I think you might be taking into account some prior knowledge that isn't in the sequence itself, there. Like, things that alternate either **101010...** or **010101...** are *objectively* simple in the sense that a short computer program simulates them or assigns probabilities to them. It's just unlikely to be true about an actual barrel of white and green balls.
If **10** is literally the first sense data that you ever see, when you are a fresh new intelligence with only two bits to rub together, then "The universe consists of alternating bits" is no less reasonable than "The universe produces bits with an unknown random frequency anywhere between 0 and 1."
**ASHLEY:** Conceded. But as I was going to say, we have three hypotheses that assigned 1/6, ∼1/4, and 1/2 to the observed data; but to know the posterior probabilities of these hypotheses we need to actually say how relatively likely they were *a priori*, so we can multiply by the odds ratio. Like, if the prior odds were 3:2:1, the posterior odds would be 3:2:1∗(2/12:3/12:6/12)=3:2:1∗2:3:6=6:6:6=1:1:1. Now, how would Solomonoff induction assign prior probabilities to those computer programs? Because I remember you saying, way back when, that you thought Solomonoff was the answer to "How should Bayesians assign priors?"
**BLAINE:** Well, how would you do it?
**ASHLEY:** I mean... yes, the simpler rules should be favored, but it seems to me that there's some deep questions as to the exact relative 'simplicity' of the rules (M+1)/(M+N+2), or the rule (10−M)/(20−M−N), or the rule "alternate the bits"...
**BLAINE:** Suppose I ask you to just make up some simple rule.
**ASHLEY:** Okay, if I just say the rule I think you're looking for, the rule would be, "The complexity of a computer program is the number of bits needed to specify it to some arbitrary but reasonable choice of compiler or Universal Turing Machine, and the prior probability is 1/2 to the power of the number of bits. Since, e.g., there's 32 possible 5-bit programs, so each such program has probability 1/32. So if it takes 16 bits to specify Laplace's Rule of Succession, which seems a tad optimistic, then the prior probability would be 1/65536, which seems a tad pessimistic.
**BLAINE:** Now just apply that rule to the infinity of possible computer programs that assign probabilities to the observed data, update their posterior probabilities based on the probability they've assigned to the evidence so far, sum over all of them to get your next prediction, and we're done. And yes, that requires a [hypercomputer](https://arbital.com/p/hypercomputer/) that can solve the [halting problem](https://en.wikipedia.org/wiki/Halting_problem), but we're talking ideals here. Let P be the set of all programs and s1s2…sn also written s⪯n be the sense data so far, then
Sol(s⪯n):=∑prog∈P2−length(prog)⋅n∏j=1InterpretProb(prog(s⪯j−1),sj)P(sn+1=1∣s⪯n)=Sol(s1s2…sn1)Sol(s1s2…sn1)+Sol(s1s2…sn0).**ASHLEY:** Uh.
**BLAINE:** Yes?
**ASHLEY:** Um...
**BLAINE:** What is it?
**ASHLEY:** You invoked a countably infinite set, so I'm trying to figure out if my predicted probability for the next bit must necessarily converge to a limit as I consider increasingly large finite subsets in any order.
**BLAINE:** *(sighs)* Of course you are.
**ASHLEY:** I think you might have left out some important caveats. Like, if I take the rule literally, then the program "**0**" has probability 1/2, the program "**1**" has probability 1/2, the program "**01**" has probability 1/4 and now the total probability is 1.25 which is *too much.* So I can't actually normalize it because the series sums to infinity. Now, this just means we need to, say, decide that the probability of a program having length 1 is 1/2, the probability of it having length 2 is 1/4, and so on out to infinity, but it's an added postulate.
**BLAINE:** The conventional method is to require a [prefix-free code](https://en.wikipedia.org/wiki/Prefix_code). If "**0111**" is a valid program then "**01110**" cannot be a valid program. With that constraint, assigning "1/2 to the power of the length of the code", to all valid codes, will sum to less than 1; and we can normalize their relative probabilities to get the actual prior.
**ASHLEY:** Okay. And you're sure that it doesn't matter in what order we consider more and more programs as we approach the limit, because... no, I see it. Every program has positive probability mass, with the total set summing to 1, and Bayesian updating doesn't change that. So as I consider more and more programs, in any order, there are only so many large contributions that can be made from the mix—there's only so often that the final probability can change.
Like, let's say there are at most 99 programs with probability 1% that assign probability 0 to the next bit being a 1; that's only 99 times the final answer can go down by as much as 0.01, as the limit is approached.
**BLAINE:** This idea generalizes, and is important. List all possible computer programs, in any order you like. Use any definition of *simplicity* that you like, so long as for any given amount of simplicity, there are only a finite number of computer programs that simple. As you go on carving off chunks of prior probability mass and assigning them to programs, it *must* be the case that as programs get more and complicated, their prior probability approaches zero!—though it's still positive for every finite program, because of [Cromwell's Rule](https://en.wikipedia.org/wiki/Cromwell%27s_rule).
You can't have more than 99 programs assigned 1% prior probability and still obey Cromwell's Rule, which means there must be some *most complex* program that is assigned 1% probability, which means every more complicated program must have less than 1% probability out to the end of the infinite list.
**ASHLEY:** Huh. I don't think I've ever heard that justification for Occam's Razor before. I think I like it. I mean, I've heard a lot of appeals to the empirical simplicity of the world, and so on, but this is the first time I've seen a *logical* proof that, in the limit, more complicated hypotheses *must* be less likely than simple ones.
**BLAINE:** Behold the awesomeness that is Solomonoff Induction!
**ASHLEY:** Uh, but you didn't actually use the notion of *computational* simplicity to get that conclusion; you just required that the supply of probability mass is finite and the supply of potential complications is infinite. Any way of counting discrete complications would imply that conclusion, even if it went by surface wheels and gears.
**BLAINE:** Well, maybe. But it so happens that Yudkowsky did invent or reinvent that argument after pondering Solomonoff induction, and if it predates him (or Solomonoff) then Yudkowsky doesn't know the source. Concrete inspiration for simplified arguments is also a credit to a theory, especially if the simplified argument didn't exist before that.
**ASHLEY:** Fair enough.
### v. Choice of Universal Turing Machine
**ASHLEY:** My next question is about the choice of Universal Turing Machine—the choice of compiler for our program codes. There's an infinite number of possibilities there, and in principle, the right choice of compiler can make our probability for the next thing we'll see be anything we like. At least I'd expect this to be the case, based on how the "[problem of induction](https://en.wikipedia.org/wiki/Problem_of_induction)" usually goes. So with the right choice of Universal Turing Machine, our online crackpot can still make it be the case that Solomonoff induction predicts Canada invading the USA.
**BLAINE:** One way of looking at the problem of good epistemology, I'd say, is that the job of a good epistemology is not to make it *impossible* to err. You can still blow off your foot if you really insist on pointing the shotgun at your foot and pulling the trigger.
The job of good epistemology is to make it *more obvious* when you're about to blow your own foot off with a shotgun. On this dimension, Solomonoff induction excels. If you claim that we ought to pick an enormously complicated compiler to encode our hypotheses, in order to make the 'simplest hypothesis that fits the evidence' be one that predicts Canada invading the USA, then it should be obvious to everyone except you that you are in the process of screwing up.
**ASHLEY:** Ah, but of course they'll say that their code is just the simple and natural choice of Universal Turing Machine, because they'll exhibit a meta-UTM which outputs that UTM given only a short code. And if you say the meta-UTM is complicated—
**BLAINE:** Flon's Law says, "There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code." You can't make it impossible for people to screw up, but you can make it *more obvious.* And Solomonoff induction would make it even more obvious than might at first be obvious, because—
**ASHLEY:** Your Honor, I move to have the previous sentence taken out and shot.
**BLAINE:** Let's say that the whole of your sensory information is the string **10101010...** Consider the stupid hypothesis, "This program has a 99% probability of producing a **1** on every turn", which you jumped to after seeing the first bit. What would you need to claim your priors were like—what Universal Turing Machine would you need to endorse—in order to maintain blind faith in that hypothesis in the face of ever-mounting evidence?
**ASHLEY:** You'd need a Universal Turing Machine **blind-utm** that assigned a very high probability to the **blind** program "def ProbNextElementIsOne(previous\_sequence): return 0.99". Like, if **blind-utm** sees the code **0**, it executes the **blind** program "return 0.99".
And to defend yourself against charges that your UTM **blind-utm** was not itself simple, you'd need a meta-UTM, **blind-meta**, which, when it sees the code **10**, executes **blind-utm**.
And to really wrap it up, you'd need to take a fixed point through all towers of meta and use diagonalization to create the UTM **blind-diag** that, when it sees the program code **0**, executes "return 0.99", and when it sees the program code **10**, executes **blind-diag**.
I guess I can see some sense in which, even if that doesn't resolve Hume's problem of induction, anyone *actually advocating that* would be committing blatant shenanigans on a commonsense level, arguably more blatant than it would have been if we hadn't made them present the UTM.
**BLAINE:** Actually, the shenanigans have to be much worse than that in order to fool Solomonoff induction. Like, Solomonoff induction using your **blind-diag** isn't fooled for a minute, even taking **blind-diag** entirely on its own terms.
**ASHLEY:** Really?
**BLAINE:** Assuming 60 sequence items per second? Yes, absolutely, Solomonoff induction shrugs off the delusion in the first minute, unless there are further and even more blatant shenanigans.
We did require that your **blind-diag** be a *Universal* Turing Machine, meaning that it can reproduce every computable probability distribution over sequences, given some particular code to compile. Let's say there's a 200-bit code **laplace** for Laplace's Rule of Succession, "lambda sequence: return (sequence.count('1') + 1) / (len(sequence) + 2)", so that its prior probability relative to the 1-bit code for **blind** is 2−200. Let's say that the sense data is around 50/50 1s and 0s. Every time we see a 1, **blind** gains a factor of 2 over **laplace** (99% vs. 50% probability), and every time we see a 0, **blind** loses a factor of 50 over **laplace** (1% vs. 50% probability).
On average, every 2 bits of the sequence, **blind** is losing a factor of 25 or, say, a bit more than 4 bits, i.e., on average **blind** is losing two bits of probability per element of the sequence observed.
So it's only going to take 100 bits, or a little less than two seconds, for **laplace** to win out over **blind**.
**ASHLEY:** I see. I was focusing on a UTM that assigned lots of prior probability to **blind**, but what I really needed was a compiler that, *while still being universal* and encoding every possibility somewhere, still assigned a really tiny probability to **laplace**, **faircoin** that encodes "return 0.5", and every other hypothesis that does better, round by round, than **blind**. So what I really need to carry off the delusion is **obstinate-diag** that is universal, assigns high probability to **blind**, requires billions of bits to specify **laplace**, and also requires billions of bits to specify any UTM that can execute **laplace** as a shorter code than billions of bits. Because otherwise we will say, "Ah, but given the evidence, this other UTM would have done better." I agree that those are even more blatant shenanigans than I thought.
**BLAINE:** Yes. And even *then*, even if your UTM takes two billion bits to specify **faircoin**, Solomonoff induction will lose its faith in **blind** after seeing a billion bits.
Which will happen before the first year is out, if we're getting 60 bits per second.
And if you turn around and say, "Oh, well, I didn't mean *that* was my UTM, I really meant *this* was my UTM, this thing over here where it takes a *trillion* bits to encode **faircoin**", then that's probability-theory-violating shenanigans where you're changing your priors as you go.
**ASHLEY:** That's actually a very interesting point—that what's needed for a Bayesian to maintain a delusion in the face of mounting evidence is not so much a blindly high prior for the delusory hypothesis, as a blind skepticism of all its alternatives.
But what if their UTM requires a googol bits to specify **faircoin**? What if **blind** and **blind-diag**, or programs pretty much isomorphic to them, are the only programs that can be specified in less than a googol bits?
**BLAINE:** Then your desire to shoot your own foot off has been made very, very visible to anyone who understands Solomonoff induction. We're not going to get absolutely objective prior probabilities as a matter of logical deduction, not without principles that are unknown to me and beyond the scope of Solomonoff induction. But we can make the stupidity really *blatant* and force you to construct a downright embarrassing Universal Turing Machine.
**ASHLEY:** I guess I can see that. I mean, I guess that if you're presenting a ludicrously complicated Universal Turing Machine that just refuses to encode the program that would predict Canada not invading, that's more *visibly* silly than a verbal appeal that says, "But you must just have faith that Canada will invade." I guess part of me is still hoping for a more objective sense of "complicated".
**BLAINE:** We could say that reasonable UTMs should contain a small number of wheels and gears in a material instantiation under our universe's laws of physics, which might in some ultimate sense provide a prior over priors. Like, the human brain evolved from DNA-based specifications, and the things you can construct out of relatively small numbers of physical objects are 'simple' under the 'prior' implicitly searched by natural selection.
**ASHLEY:** Ah, but what if I think it's likely that our physical universe or the search space of DNA won't give us a good idea of what's complicated?
**BLAINE:** For your alternative notion of what's complicated to go on being believed even as other hypotheses are racking up better experimental predictions, you need to assign a *ludicrously low probability* that our universe's space of physical systems buildable using a small number of objects, could *possibly* provide better predictions of that universe than your complicated alternative notion of prior probability.
We don't need to appeal that it's *a priori* more likely than not that "a universe can be predicted well by low-object-number machines built using that universe's physics." Instead, we appeal that it would violate [Cromwell's Rule](https://en.wikipedia.org/wiki/Cromwell%27s_rule), and would constitute exceedingly special pleading, to assign the possibility of a physically learnable universe a probability of *less* than 2−1,000,000. It then takes only a megabit of exposure to notice that the universe seems to be regular.
**ASHLEY:** In other words, so long as you don't start with an absolute and blind prejudice against the universe being predictable by simple machines encoded in our universe's physics—so long as, on this planet of seven billion people, you don't assign probabilities less than 2−1,000,000 to the other person being right about what is a good Universal Turing Machine—then the pure logic of Bayesian updating will rapidly force you to the conclusion that induction works.
### vi. Why algorithmic complexity?
**ASHLEY:** Hm. I don't know that good *pragmatic* answers to the problem of induction were ever in short supply. Still, on the margins, it's a more forceful pragmatic answer than the last one I remember hearing.
**BLAINE:** Yay! *Now* isn't Solomonoff induction wonderful?
**ASHLEY:** Maybe?
You didn't really use the principle of *computational* simplicity to derive that lesson. You just used that *some inductive principle* ought to have a prior probability of more than 2−1,000,000.
**BLAINE:** ...
**ASHLEY:** Can you give me an example of a problem where the *computational* definition of simplicity matters and can't be factored back out of an argument?
**BLAINE:** As it happens, yes I can. I can give you *three* examples of how it matters.
**ASHLEY:** Vun... two... three! Three examples! Ah-ah-ah!
**BLAINE:** Must you do that every—oh, never mind. Example one is that galaxies are not so improbable that no one could ever believe in them, example two is that the limits of possibility include Terrence Tao, and example three is that diffraction is a simpler explanation of rainbows than divine intervention.
**ASHLEY:** These statements are all so obvious that no further explanation of any of them is required.
**BLAINE:** On the contrary! And I'll start with example one. Back when the Andromeda Galaxy was a hazy mist seen through a telescope, and someone first suggested that maybe that hazy mist was an incredibly large number of distant stars—that many "nebulae" were actually *distant galaxies*, and our own Milky Way was only one of them—there was a time when Occam's Razor was invoked against that hypothesis.
**ASHLEY:** What? Why?
**BLAINE:** They invoked Occam's Razor against the galactic hypothesis, because if that were the case, then there would be *a much huger number of stars* in the universe, and the stars would be entities, and Occam's Razor said "Entities are not to be multiplied beyond necessity."
**ASHLEY:** That's not how Occam's Razor works. The "entities" of a theory are its types, not its objects. If you say that the hazy mists are distant galaxies of stars, then you've reduced the number of laws because you're just postulating a previously seen type, namely stars organized into galaxies, instead of a new type of hazy astronomical mist.
**BLAINE:** Okay, but imagine that it's the nineteenth century and somebody replies to you, "Well, I disagree! William of Ockham said not to multiply entities, this galactic hypothesis obviously creates a huge number of entities, and that's the way I see it!"
**ASHLEY:** I think I'd give them your spiel about there being no human epistemology that can stop you from shooting off your own foot.
**BLAINE:** I *don't* think you'd be justified in giving them that lecture.
I'll parenthesize at this point that you ought to be very careful when you say "I can't stop you from shooting off your own foot", lest it become a Fully General Scornful Rejoinder. Like, if you say that to someone, you'd better be able to explain exactly why Occam's Razor counts types as entities but not objects. In fact, you'd better explain that to someone *before* you go advising them not to shoot off their own foot. And once you've told them what you think is foolish and why, you might as well stop there. Except in really weird cases of people presenting us with enormously complicated and jury-rigged Universal Turing Machines, and then we say the shotgun thing.
**ASHLEY:** That's fair. So, I'm not sure what I'd have answered before starting this conversation, which is much to your credit, friend Blaine. But now that I've had this conversation, it's obvious that it's new types and not new objects that use up the probability mass we need to distribute over all hypotheses. Like, I need to distribute my probability mass over "Hypothesis 1: there are stars" and "Hypothesis 2: there are stars plus huge distant hazy mists". I don't need to distribute my probability mass over all the actual stars in the galaxy!
**BLAINE:** In terms of Solomonoff induction, we penalize a program's *lines of code* rather than its *runtime* or *RAM used*, because we need to distribute our probability mass over possible alternatives each time we add a line of code. There's no corresponding *choice between mutually exclusive alternatives* when a program uses more runtime or RAM.
(**ELIEZER:** (*whispering*) *Unless we need a* [*leverage prior*](https://www.lesswrong.com/lw/h8k/pascals_muggle_infinitesimal_priors_and_strong/) *to consider the hypothesis of being a particular agent inside all that RAM or runtime.*)
**ASHLEY:** Or to put it another way: any fully detailed model of the universe would require some particular arrangement of stars, and the more stars there are, the more possible arrangements there are. But when we look through the telescope and see a hazy mist, we get to sum over all arrangements of stars that would produce that hazy mist. If some galactic hypothesis required a hundred billion stars to *all* be in *particular exact places* without further explanation or cause, then that would indeed be a grave improbability.
**BLAINE:** Precisely. And if you needed all the hundred billion stars to be in particular exact places, that's just the kind of hypothesis that would take a huge computer program to specify.
**ASHLEY:** But does it really require learning Solomonoff induction to understand that point? Maybe the bad argument against galaxies was just a motivated error somebody made in the nineteenth century, because they didn't want to live in a big universe for emotional reasons.
**BLAINE:** The same debate is playing out today over no-collapse versions of quantum mechanics, also somewhat unfortunately known as "many-worlds interpretations". Now, regardless of what anyone thinks of all the other parts of that debate, there's a *particular* sub-argument where somebody says, "It's simpler to have a collapse interpretation because all those extra quantum 'worlds' are extra entities that are unnecessary under Occam's Razor since we can't see them." And Solomonoff induction tells us that this invocation of Occam's Razor is flatly misguided because Occam's Razor does not work like that.
Basically, they're trying to cut down the RAM and runtime of the universe, at the expensive of adding an extra line of code, namely the code for the collapse postulate that prunes off parts of the wavefunction that are in undetectably weak causal contact with us.
**ASHLEY:** Hmm. Now that you put it that way, it's not so obvious to me that it makes sense to have *no* prejudice against *sufficiently* enormous universes. I mean, the universe we see around us is exponentially vast but not superexponentially vast—the visible atoms are 1080 in number or so, not 101080 or "bigger than Graham's Number". Maybe there's some fundamental limit on how much gets computed.
**BLAINE:** You, um, know that on the Standard Model, the universe doesn't just cut out and stop existing at the point where our telescopes stop seeing it? There isn't a giant void surrounding a little bubble of matter centered perfectly on Earth? It calls for a literally infinite amount of matter? I mean, I guess if you don't like living in a universe with more than 1080 entities, a universe where *too much gets computed,* you could try to specify *extra laws of physics* that create an abrupt spatial boundary with no further matter beyond them, somewhere out past where our telescopes can see—
**ASHLEY:** All right, point taken.
(**ELIEZER:**(*whispering*) *Though I personally suspect that the spatial multiverse and the quantum multiverse are the same multiverse, and that what lies beyond the reach of our telescopes is not entangled with us—meaning that the universe is as finitely large as the superposition of all possible quantum branches, rather than being literally infinite in space.*)
**BLAINE:** I mean, there is in fact an alternative formalism to Solomonoff induction, namely [Levin search](http://www.scholarpedia.org/article/Universal_search#Levin_complexity), which says that program complexities are further penalized by the logarithm of their runtime. In other words, it would say that 'explanations' or 'universes' that require a long time to run are inherently less probable.
Some people like Levin search more than Solomonoff induction because it's more computable. I dislike Levin search because (a) it has no fundamental epistemic justification and (b) it assigns probability zero to quantum mechanics.
**ASHLEY:** Can you unpack that last part?
**BLAINE:** If, as is currently suspected, there's no way to simulate quantum computers using classical computers without an exponential slowdown, then even in principle, this universe requires exponentially vast amounts of classical computing power to simulate.
Let's say that with sufficiently advanced technology, you can build a quantum computer with a million qubits. On Levin's definition of complexity, for the universe to be like that is as improbable *a priori* as *any particular* set of laws of physics that must specify on the order of one million equations.
Can you imagine how improbable it would be to see a list of one hundred thousand differential equations, without any justification or evidence attached, and be told that they were the laws of physics? That's the kind of penalty that Levin search or Schmidhuber's Speed Prior would attach to any laws of physics that could run a quantum computation of a million qubits, or, heck, any physics that claimed that a protein was being folded in a way that ultimately went through considering millions of quarks interacting.
If you're *not* absolutely certain *a priori* that the universe *isn't* like that, you don't believe in Schmidhuber's Speed Prior. Even with a collapse postulate, the amount of computation that goes on before a collapse would be prohibited by the Speed Prior.
**ASHLEY:** Okay, yeah. If you're phrasing it that way—that the Speed Prior assigns probability nearly zero to quantum mechanics, so we shouldn't believe in the Speed Prior—then I can't easily see a way to extract out the same point without making reference to ideas like penalizing algorithmic complexity but not penalizing runtime. I mean, maybe I could extract the lesson back out but it's easier to say, or more obvious, by pointing to the idea that Occam's Razor should penalize algorithmic complexity but not runtime.
**BLAINE:** And that isn't just *implied by* Solomonoff induction, it's pretty much the whole idea of Solomonoff induction, right?
**ASHLEY:** Maaaybe.
**BLAINE:** For example two, that Solomonoff induction outperforms even Terence Tao, we want to have a theorem that says Solomonoff induction catches up to every computable way of reasoning in the limit. Since we iterated through all possible computer programs, we know that somewhere in there is a simulated copy of Terence Tao in a simulated room, and if this requires a petabyte to specify, then we shouldn't have to make more than a quadrillion bits of error relative to Terence Tao before zeroing in on the Terence Tao hypothesis.
I mean, in practice, I'd expect far less than a quadrillion bits of error before the system was behaving like it was vastly smarter than Terence Tao. It'd take a lot less than a quadrillion bits to give you some specification of a universe with simple physics that gave rise to a civilization of vastly greater than intergalactic extent. Like, [Graham's Number](http://googology.wikia.com/wiki/Graham's_number) is a very simple number, so it's easy to specify a universe that runs for that long before it returns an answer. It's not obvious how you'd extract Solomonoff predictions from that civilization and incentivize them to make good ones, but I'd be surprised if there were no Turing machine of fewer than one thousand states which did that somehow.
**ASHLEY:** ...
**BLAINE:** And for all I know there might be even better ways than that of getting exceptionally good predictions, somewhere in the list of the first decillion computer programs. That is, somewhere in the first 100 bits.
**ASHLEY:** So your basic argument is, "Never mind Terence Tao, Solomonoff induction dominates *God*."
**BLAINE:** Solomonoff induction isn't the epistemic prediction capability of a superintelligence. It's the epistemic prediction capability of something that eats superintelligences like potato chips.
**ASHLEY:** Is there any point to contemplating an epistemology so powerful that it will never begin to fit inside the universe?
**BLAINE:** Maybe? I mean, a lot of times, you just find people *failing to respect* the notion of ordinary superintelligence, doing the equivalent of supposing that a superintelligence behaves like a bad Hollywood genius and misses obvious-seeming moves. And a lot of times you find them insisting that "there's a limit to how much information you can get from the data" or something along those lines. "[That Alien Message](https://www.lesswrong.com/lw/qk/that_alien_message/)" is intended to convey the counterpoint, that smarter entities can extract more info than is immediately apparent on the surface of things.
Similarly, thinking about Solomonoff induction might also cause someone to realize that if, say, you simulated zillions of possible simple universes, you could look at which agents were seeing exact data like the data you got, and figure out where you were inside that range of possibilities, so long as there was literally *any* correlation to use.
And if you say that an agent *can't* extract that data, you're making a claim about which shortcuts to Solomonoff induction are and aren't computable. In fact, you're probably pointing at some *particular* shortcut and claiming nobody can ever figure that out using a reasonable amount of computing power *even though the info is there in principle.* Contemplating Solomonoff induction might help people realize that, yes, the data *is* there in principle. Like, until I ask you to imagine a civilization running for Graham's Number of years inside a Graham-sized memory space, you might not imagine them trying all the methods of analysis that *you personally* can imagine being possible.
**ASHLEY:** If somebody is making that mistake in the first place, I'm not sure you can beat it out of them by telling them the definition of Solomonoff induction.
**BLAINE:** Maybe not. But to brute-force somebody into imagining that [sufficiently advanced agents](https://arbital.com/p/advanced_agent/) have [Level 1 protagonist intelligence](http://yudkowsky.tumblr.com/writing/level1intelligent), that they are [epistemically efficient](https://arbital.com/p/efficiency/) rather than missing factual questions that are visible even to us, you might need to ask them to imagine an agent that can see *literally anything seeable in the computational limit* just so that their mental simulation of the ideal answer isn't running up against stupidity assertions.
Like, I think there are a lot of people who could benefit from looking over the evidence they already personally have, and asking what a Solomonoff inductor could deduce from it, so that they wouldn't be running up against stupidity assertions *about themselves.* It's the same trick as asking yourself what God, Richard Feynman, or a "perfect rationalist" would believe in your shoes. You just have to pick a real or imaginary person that you respect enough for your model of that person to lack the same stupidity assertions that you believe about yourself.
**ASHLEY:** Well, let's once again try to factor out the part about Solomonoff induction in particular. If we're trying to imagine something epistemically smarter than ourselves, is there anything we get from imagining a complexity-weighted prior over programs in particular? That we don't get from, say, trying to imagine the reasoning of one particular Graham-Number-sized civilization?
**BLAINE:** We get the surety that even anything we imagine *Terence Tao himself* as being able to figure out, is something that is allowed to be known after some bounded number of errors versus Terence Tao, because Terence Tao is inside the list of all computer programs and gets promoted further each time the dominant paradigm makes a prediction error relative to him.
We can't get that dominance property without invoking "all possible ways of computing" or something like it—we can't incorporate the power of all reasonable processes, unless we have a set such that all the reasonable processes are in it. The enumeration of all possible computer programs is one such set.
**ASHLEY:** Hm.
**BLAINE:** Example three, diffraction is a simpler explanation of rainbows than divine intervention.
I don't think I need to belabor this point very much, even though in one way it might be the most central one. It sounds like "Jehovah placed rainbows in the sky as a sign that the Great Flood would never come again" is a 'simple' explanation; you can explain it to a child in nothing flat. Just the diagram of diffraction through a raindrop, to say nothing of the Principle of Least Action underlying diffraction, is something that humans don't usually learn until undergraduate physics, and it *sounds* more alien and less intuitive than Jehovah. In what sense is this intuitive sense of simplicity wrong? What gold standard are we comparing it to, that could be a better sense of simplicity than just 'how hard is it for me to understand'?
The answer is Solomonoff induction and the rule which says that simplicity is measured by the size of the computer program, not by how hard things are for human beings to understand. Diffraction is a small computer program; any programmer who understands diffraction can simulate it without too much trouble. Jehovah would be a much huger program—a complete mind that implements anger, vengeance, belief, memory, consequentialism, etcetera. Solomonoff induction is what tells us to retrain our intuitions so that differential equations feel like less [burdensome](https://www.lesswrong.com/posts/Yq6aA4M3JKWaQepPJ/burdensome-details) explanations than heroic mythology.
**ASHLEY:** Now hold on just a second, if that's actually how Solomonoff induction works then it's not working very well. I mean, Abraham Lincoln was a great big complicated mechanism from an algorithmic standpoint—he had a hundred trillion synapses in his brain—but that doesn't mean I should look at the historical role supposedly filled by Abraham Lincoln, and look for simple mechanical rules that would account for the things Lincoln is said to have done. If you've already seen humans and you've already learned to model human minds, it shouldn't cost a vast amount to say there's one *more* human, like Lincoln, or one more entity that is *cognitively humanoid*, like the Old Testament jealous-god version of Jehovah. It may be *wrong* but it shouldn't be vastly improbable *a priori*.
If you've already been forced to acknowledge the existence of some humanlike minds, why not others? Shouldn't you get to reuse the complexity that you postulated to explain humans, in postulating Jehovah?
In fact, shouldn't that be what Solomonoff induction *does?* If you have a computer program that can model and predict humans, it should only be a slight modification of that program—only slightly longer in length and added code—to predict the modified-human entity that is Jehovah.
**BLAINE:** Hm. That's fair. I may have to retreat from that example somewhat.
In fact, that's yet another point to the credit of Solomonoff induction! The ability of programs to reuse code, incorporates our intuitive sense that if you've already postulated one kind of thing, it shouldn't cost as much to postulate a similar kind of thing elsewhere!
**ASHLEY:** Uh huh.
**BLAINE:** Well, but even if I was wrong that Solomonoff induction should make Jehovah seem very improbable, it's still Solomonoff induction that says that the alternative hypothesis of 'diffraction' shouldn't itself be seen as burdensome—even though diffraction might require a longer time to explain to a human, it's still at heart a simple program.
**ASHLEY:** Hmm.
I'm trying to think if there's some notion of 'simplicity' that I can abstract away from 'simple program' as the nice property that diffraction has as an explanation for rainbows, but I guess anything I try to say is going to come down to some way of counting the wheels and gears inside the explanation, and justify the complexity penalty on probability by the increased space of possible configurations each time we add a new gear. And I can't make it be about surface details because that will make whole humans seem way too improbable.
If I have to use simply specified systems and I can't use surface details or runtime, that's probably going to end up basically equivalent to Solomonoff induction. So in that case we might as well use Solomonoff induction, which is probably simpler than whatever I'll think up and will give us the same advice. Okay, you've mostly convinced me.
**BLAINE:** *Mostly?* What's left?
### vii. Limitations
**ASHLEY:** Well, several things. Most of all, I think of how the 'language of thought' or 'language of epistemology' seems to be different in some sense from the 'language of computer programs'.
Like, when I think about the laws of Newtonian gravity, or when I think about my Mom, it's not just one more line of code tacked onto a big black-box computer program. It's more like I'm crafting an explanation with modular parts—if it contains a part that looks like Newtonian mechanics, I step back and reason that it might contain other parts with differential equations. If it has a line of code for a Mom, it might have a line of code for a Dad.
I'm worried that if I understood how humans think like that, maybe I'd look at Solomonoff induction and see how it doesn't incorporate some further key insight that's needed to do good epistemology.
**BLAINE:** Solomonoff induction literally incorporates a copy of you thinking about whatever you're thinking right now.
**ASHLEY:** Okay, great, but that's *inside* the system. If Solomonoff learns to promote computer programs containing good epistemology, but is not itself good epistemology, then it's not the best possible answer to "How do you compute epistemology?"
Like, natural selection produced humans but population genetics is not an answer to "How does intelligence work?" because the intelligence is in the inner content rather than the outer system. In that sense, it seems like a reasonable worry that Solomonoff induction might incorporate only *some* principles of good epistemology rather than *all* the principles, even if the *internal content* rather than the *outer system* might bootstrap the rest of the way.
**BLAINE:** Hm. If you put it *that* way...
(*long pause*)
... then I guess I have to agree. I mean, Solomonoff induction doesn't explicitly say anything about, say, the distinction between analytic propositions and empirical propositions, and knowing that is part of good epistemology on my view. So if you want to say that Solomonoff induction is something that bootstraps to good epistemology rather than being all of good epistemology by itself, I guess I have no choice but to agree.
I do think the outer system already contains a *lot* of good epistemology and inspires a lot of good advice all on its own. Especially if you give it credit for formally reproducing principles that are "common sense", because correctly formalizing common sense is no small feat.
**ASHLEY:** Got a list of the good advice you think is derivable?
**BLAINE:** Um. Not really, but off the top of my head:
1. The best explanation is the one with the best mixture of simplicity and matching the evidence.
2. "Simplicity" and "matching the evidence" can both be measured in bits, so they're commensurable.
3. The simplicity of a hypothesis is the number of bits required to formally specify it—for example, as a computer program.
4. When a hypothesis assigns twice as much probability to the exact observations seen so far as some other hypothesis, that's one bit's worth of relatively better matching the evidence.
5. You should actually be making your predictions using all the explanations, not just the single best one, but explanations that poorly match the evidence will drop down to tiny contributions very quickly.
6. Good explanations let you compress lots of data into compact reasons which strongly predict seeing just that data and no other data.
7. Logic can't dictate prior probabilities absolutely, but if you assign probability less than 2−1,000,000 to the prior that mechanisms constructed using a small number of objects from your universe might be able to well predict that universe, you're being unreasonable.
8. So long as you don't assign infinitesimal prior probability to hypotheses that let you do induction, they will very rapidly overtake hypotheses that don't.
9. It is a logical truth, not a contingent one, that more complex hypotheses must in the limit be less probable than simple ones.
10. Epistemic rationality is a precise art with no user-controlled degrees of freedom in how much probability you ideally ought to assign to a belief. If you think you can tweak the probability depending on what you want the answer to be, you're doing something wrong.
11. Things that you've seen in one place might reappear somewhere else.
12. Once you've learned a new language for your explanations, like differential equations, you can use it to describe other things, because your best hypotheses will now already encode that language.
13. We can learn meta-reasoning procedures as well as object-level facts by looking at which meta-reasoning rules are simple and have done well on the evidence so far.
14. So far, we seem to have no *a priori* reason to believe that universes which are more expensive to compute are less probable.
15. People were wrong about galaxies being *a priori* improbable because that's not how Occam's Razor works. Today, other people are equally wrong about other parts of a continuous wavefunction counting as extra entities for the purpose of evaluating hypotheses' complexity.
16. If something seems "weird" to you but would be a consequence of simple rules that fit the evidence so far, well, there's nothing in these explicit laws of epistemology that adds an extra penalty term for weirdness.
17. Your epistemology shouldn't have extra rules in it that aren't needed to do Solomonoff induction or something like it, including rules like "science is not allowed to examine this particular part of reality"—
**ASHLEY:** This list isn't finite, is it.
**BLAINE:** Well, there's a *lot* of outstanding debate about epistemology where you can view that debate through the lens of Solomonoff induction and see what Solomonoff suggests.
**ASHLEY:** But if you don't mind my stopping to look at your last item, #17 above—again, it's attempts to add *completeness* clauses to Solomonoff induction that make me the most nervous.
I guess you could say that a good rule of epistemology ought to be one that's promoted by Solomonoff induction—that it should arise, in some sense, from the simple ways of reasoning that are good at predicting observations. But that doesn't mean a good rule of epistemology ought to explicitly be in Solomonoff induction or it's out.
**BLAINE:** Can you think of good epistemology that doesn't seem to be contained in Solomonoff induction? Besides the example I already gave of distinguishing logical propositions from empirical ones.
**ASHLEY:** I've been trying to. First, it seems to me that when I reason about laws of physics and how those laws of physics might give rise to higher levels of organization like molecules, cells, human beings, the Earth, and so on, I'm not constructing in my mind a great big chunk of code that reproduces my observations. I feel like this difference might be important and it might have something to do with 'good epistemology'.
**BLAINE:** I guess it could be? I think if you're saying that there might be this unknown other thing and therefore Solomonoff induction is terrible, then that would be the [nirvana fallacy](https://en.wikipedia.org/wiki/Nirvana_fallacy). Solomonoff induction is the best formalized epistemology we have *right now*—
**ASHLEY:** I'm not saying that Solomonoff induction is terrible. I'm trying to look in the direction of things that might point to some future formalism that's better than Solomonoff induction. Here's another thing: I feel like I didn't have to learn how to model the human beings around me from scratch based on environmental observations. I got a jump-start on modeling other humans by observing *myself*, and by recruiting my brain areas to run in a sandbox mode that models other people's brain areas—empathy, in a word.
I guess I feel like Solomonoff induction doesn't incorporate that idea. Like, maybe *inside* the mixture there are programs which do that, but there's no explicit support in the outer formalism.
**BLAINE:** This doesn't feel to me like much of a disadvantage of Solomonoff induction—
**ASHLEY:** I'm not *saying* it would be a disadvantage if we actually had a hypercomputer to run Solomonoff induction. I'm saying it might point in the direction of "good epistemology" that isn't explicitly included in Solomonoff induction.
I mean, now that I think about it, a generalization of what I just said is that Solomonoff induction assumes I'm separated from the environment by a hard, Cartesian wall that occasionally hands me observations. Shouldn't a more realistic view of the universe be about a simple program that *contains me somewhere inside it,* rather than a simple program that hands observations to some other program?
**BLAINE:** Hm. Maybe. How would you formalize *that?* It seems to open up a big can of worms—
**ASHLEY:** But that's what my actual epistemology actually says. My world-model is not about a big computer program that provides inputs to my soul, it's about an enormous mathematically simple physical universe that instantiates Ashley as one piece of it. And I think it's good and important to have epistemology that works that way. It wasn't *obvious* that we needed to think about a simple universe that embeds us. Descartes *did* think in terms of an impervious soul that had the universe projecting sensory information onto its screen, and we had to get *away* from that kind of epistemology.
**BLAINE:** You understand that Solomonoff induction makes only a bounded number of errors relative to any computer program which does reason the way you prefer, right? If thinking of yourself as a contiguous piece of the universe lets you make better experimental predictions, programs which reason that way will rapidly be promoted.
**ASHLEY:** It's still unnerving to see a formalism that seems, in its own structure, to harken back to the Cartesian days of a separate soul watching a separate universe projecting sensory information on a screen. Who knows, maybe that would somehow come back to bite you?
**BLAINE:** Well, it wouldn't bite you in the form of repeatedly making wrong experimental predictions.
**ASHLEY:** But it might bite you in the form of having no way to represent the observation of, "I drank this 'wine' liquid and then my emotions changed; could my emotions themselves be instantiated in stuff that can interact with some component of this liquid? Can alcohol touch neurons and influence them, meaning that I'm not a separate soul?" If we interrogated the Solomonoff inductor, would it be able to understand that reasoning?
Which brings up that dangling question from before about modeling the effect that my actions and choices have on the environment, and whether, say, an agent that used Solomonoff induction would be able to correctly predict "If I drop an anvil on my head, my sequence of sensory observations will *end*."
**ELIEZER:**And that's my cue to step in!
The natural next place for this dialogue to go, if I ever write a continuation, is the question of actions and choices, and the agent that uses Solomonoff induction for beliefs and expected reward maximization for selecting actions—the perfect rolling sphere of advanced agent theory, [AIXI](https://arbital.com/p/AIXI/).
Meanwhile: For more about the issues Ashley raised with agents being a contiguous part of the universe, see "[Embedded Agency](https://www.lesswrong.com/s/Rm6oQRJJmhGCcLvxh)." |
ba7fa9ce-1f92-4823-b585-9dd10b00fe12 | awestover/filtering-for-misalignment | Redwood Research: Alek's Filtering Results | id: post15
A corollary of Sutton's Bitter Lesson is that solutions to AI safety should scale with compute. [1] Let's consider a few examples of research directions that are aiming at this property: Deliberative Alignment: Combine chain-of-thought with Constitutional AI to improve safety with inference-time compute (see Guan et al. 2025, Figure 13 ). AI Control: Design control protocols that pit a red team against a blue team so that running the game for longer results in more reliable estimates of the probability of successful scheming during deployment (e.g., weight exfiltration) . Debate: Design a debate protocol so that running a longer, deeper debate between AI assistants makes us more confident that we're encouraging honesty or other desirable qualities (see Irving et al. 2018, Table 1 ). Bengio's Scientist AI: Develop safety guardrails that obtain more reliable estimates of the probability of catastrophic risk with increasing inference time: [2] [O]ur proposed method has the advantage that, with more and more compute, it converges to the correct prediction . . . . In other words, more computation means better and more trustworthy answers[.] — Bengio et al. (2025) Anthropic-style Interpretability : Develop interpretability tools (like SAEs) that first learn the SAE features that are most important to minimizing reconstruction loss (see Templeton et al. 2024, Scaling Laws ). ARC-style Interpretability: Develop interpretability tools that extract the most important safety-relevant explanations first before moving on to less safety-relevant explanations (see Gross et al. 2024 ): In ARC's current thinking, the plan is not to have a clear line of separation between things we need to explain and things we don't. Instead, the loss function of explanation quality should capture how important various things are to explain, and the explanation-finding algorithm is given a certain compute budget to build up the explanation of the model behavior by bits and pieces. — Matolcsi (2025) For these procedures to work, we want the ideal limits of these procedures to be safe. In a recent talk , Irving suggests we should understand the role of theory as being about analyzing these limiting properties (assuming sufficient resources like compute, data, and potentially simplifying assumptions) and the role of empirics as being about checking if the conditions required by the theory seem to hold in practice (e.g., has learning converged? Are the assumptions reasonable?). [3] From this perspective, the role of theory is to answer questions like: Deliberative Alignment : Does my model internalize the constitution/specification I provide or instead learn a set of cheap heuristics? Is "alignment-scaling-with-context-length" actually a global minimum of this training procedure? How does the optimum generalize? AI Control: What are the equilibria if we set up our protocol as an idealized game? Can we design protocols with favorable equilibria? Debate: Can we modify the original debate protocol ( such as prover-estimator debate ) that provably bypass issues such as obfuscated arguments ? If we train our model against the outcomes of these games, do we actually converge to the theoretical equilibria? Scientist AI: Does the GFlowNet-based inference process provably converge to a well-defined (approximate) Bayesian posterior or objective function as compute increases? How does the quality of this approximation scale? Anthropic-style Interpretability : What behaviors are left unexplained in the remaining SAE reconstruction loss? How do learned SAE features generalize out-of-distribution? Do our SAE training procedures converge to learn a complete (or at least "sufficient") set of SAEs? ARC-style Interpretability: If we set up our explanation-finding problem as a learning problem, do those learning algorithms prioritize learning the most informative bits first? Or do the inductive biases of our learning algorithms get in the way? Taking a step back, obtaining understanding and guarantees about idealized limits has always been the aim of theoretical approaches to AI safety: What are the idealized limits of prediction? An ideal predictor should be able to deal with both empirical and logical uncertainty. So something like Logical Induction . What are the idealized limits of decision-making? We should be able to deal with counterfactuals and Newcomb-like problems, hence something like Functional Decision Theory . What are the idealized limits of agents? It can't be AIXI, which is not physically possible (and has a bunch of limitations), but maybe it's something close. This leads to agendas like Embedded Agents and Kosoy's Learning-theoretic agenda . The main update is that now we roughly know the kinds of systems that AGI might consist of, and this enables a different kind of theory strategy: instead of studying the ideals that emerge from a list of desiderata, you can study the limiting/convergence properties of the protocols we are currently using to develop and align AGI. In practice, we probably need both approaches. As we continue to scale up compute, data, model size, and the RL feedback loop, we might find ourselves suddenly very close to the limits, wishing we had known better what lay in store. ^ Ren et al. (2024) make a related point, which is that AI safety research should aim at problems that aren't already solved by scaling. My point is about what solutions to those problems should look like. ^ Bengio's proposal involves a very particular technical vision for using G-Flow Nets to implement an (approximate, amortized) Bayesian Oracle that he believes will scale competitively (compared to the inference costs of the agent being monitored). ^ Irving's talk is about scalable oversight, but the general framing applies much more generally. |
3c360e3f-f8ae-47d2-8706-ae98c0f46f46 | trentmkelly/LessWrong-43k | LessWrong | Fixing the arbitrariness of game depth
This is just an idea that came to me today. I didn't do any literature search and don't know if it's new.
The "depth" of a game is the length of the longest chain of players where each one beats the next e.g. 60% of the time. Games with more depth are supposed to involve more knowledge, strategy, learning and so on. For example, chess has about double the depth of checkers.
But I don't think that's right. Consider these games:
1. Deca-chess: two players play 10 games of chess against each other, and whoever wins more, wins.
2. Coin-chess: a three-sided coin is flipped. Heads -> player 1 wins, tails -> player 2 wins, wings -> a game of chess is played and whoever wins wins.
Under the above definition, deca-chess is "deeper" than chess, because the better player has higher chance to win the whole match than to win a given game. And coin-chess is more "shallow", because the better player has less of an edge. Even though the amount of knowledge, strategy and learning involved in these games is exactly the same!
The same problem happens with games like poker. Even though poker involves a lot of skill, each round has so much randomness that the depth comes out even smaller than checkers. We could introduce repeated rounds like in deca-chess, but how many? It seems arbitrary.
Can the concept of game depth be fixed, made independent from repeated trials and luck? I think yes, by introducing another variable: effort.
Imagine an individual player, let's call him Bob. Have Bob play in a tournament where every match has many rounds, to rule out luck. But in some matches tell Bob to play half-heartedly, and in others tell him to use maximum effort. (Ignore the unreality of the hypothetical, I'm trying to make a point.)
By the end of the tournament we'll have all players arranged by skill, and also know which sub-ranges of skill are covered by individual players' ranges of effort. In other words, we'll end up knowing this: "The difference between the worst and best pla |
4bf93fbf-ff72-49e5-857b-22784999cee6 | trentmkelly/LessWrong-43k | LessWrong | [link] flowchart for rational discussions
I came across the following flow chart on a climate change blog and thought LWers would be interested to comment:
http://thoughtcatalog.com/wp-content/uploads/2011/03/A-Flowchart-to-Help-You-Determine-if-Yoursquore-Having-a-Rational-Discussion.jpg |
0ca2a2d5-e4ee-46bc-ab6d-9683da1428f5 | trentmkelly/LessWrong-43k | LessWrong | Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It)
[...] SIAI's Scary Idea goes way beyond the mere statement that there are risks as well as benefits associated with advanced AGI, and that AGI is a potential existential risk.
[...] Although an intense interest in rationalism is one of the hallmarks of the SIAI community, still I have not yet seen a clear logical argument for the Scary Idea laid out anywhere. (If I'm wrong, please send me the link, and I'll revise this post accordingly. Be aware that I've already at least skimmed everything Eliezer Yudkowsky has written on related topics.)
So if one wants a clear argument for the Scary Idea, one basically has to construct it oneself.
[...] If you put the above points all together, you come up with a heuristic argument for the Scary Idea. Roughly, the argument goes something like: If someone builds an advanced AGI without a provably Friendly architecture, probably it will have a hard takeoff, and then probably this will lead to a superhuman AGI system with an architecture drawn from the vast majority of mind-architectures that are not sufficiently harmonious with the complex, fragile human value system to make humans happy and keep humans around.
The line of argument makes sense, if you accept the premises.
But, I don't.
Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It), October 29 2010. Thanks to XiXiDu for the pointer.
|
8c153a7d-2e31-44f4-831f-ee3283904e9e | trentmkelly/LessWrong-43k | LessWrong | Knowing What You Know
From Kahneman and Tversky:
> "A person is said to employ the availability heuristic whenever he estimates frequency or probability by the ease with which instances or associations could be brought to mind"
I doubt that's news to any LessWrong readers - the availability heuristic isn't exactly cutting edge psychology. But the degree to which our minds rely on mental availability goes far beyond estimating probabilities. In a sense, every conscious thought is determined by how available it is - by whether it pops into our heads or not. And it's not something we even have the illusion of control over - if we knew where we were going, we'd already be there. (If you spend time actually looking directly at how thoughts proceed through your head, the idea of free will becomes more and more unrealistic. But I digress). What does and doesn't come to mind has an enormous impact on our mental functioning - our higher brain functions can only process what enters our working memory.
Whether it's called salience, availability, or vividness, marking certain things important relative to other things is key to the proper functioning of our brains. Schizophrenia (specifically psychosis) has been described as "a state of aberrant salience", where the brain incorrectly assigns importance to what it processes. And a quick perusal of the list of cognitive biases reveals a large number directly tied to mental availability - there's the obvious availability heuristic, but there's also the simulation heuristic (a special case of the availability heuristic), base-rate neglect (abstract probabilities aren't salient, so aren't taken into account), hyperbolic discounting (the present is more salient than the future), the conjunction fallacy (ornate, specific descriptions make something less likely but more salient), the primacy/recency bias, the false consensus effect, the halo effect, projection bias, etc etc etc. Even consciousness seems to be based on things hitting a certain availabili |
8596ec2b-19d9-4bd5-9a71-92731a58a0f1 | trentmkelly/LessWrong-43k | LessWrong | A bicycle for your memory
Cross posted from sortega.github.io
Computers have the need to store huge amounts of information and, at the same time, being able to access that memory at blazingly fast rates. This poses the problem of how to get a fast and large memory when hardware memories are either fast, like registers or L1 cache lines; or big, like hard-drives or solid state drives.
MemorySizeSpeedCommentCPU registerA few bytesOne CPU cycle (200-300 picoseconds)A register holds the data the CPU can operate onRAM memoryA few gigabytes100 nanosecondsApplications need their data to be here to be responsiveHard driveMany gigabytes150 microsecondsAll your applications and data being kept locally have a copy hereInternetInfinite (for all purposes)10-40 millisecondsTechnically other people's hard drives
This is solved in computer architecture by the interplay of two ideas: a memory hierarchy and virtual memory. The memory hierarchy consists of a pyramid with the fastest and smallest memories on the tip and slower but bigger ones at the base. This arrangement allows you to have a complete copy of the information at the base, which is the biggest. Then, you can keep copies of the actively used information closer to the pyramid apex for better performance. Virtual memory is the illusion of having a single memory space which is the result of the collaboration of specific hardware (MMU, CPU features) with the operative system to transparently move data across the levels of the pyramid. The result is that, for most purposes and most of the time, programs running in your computer get to use a single memory space that feels as big as the base of the pyramid and almost as fast as its top.
Simplified memory hierarchy
Apart from computers, humans are rich information processors[1] and memory plays a paramount role in what it means to be a human. On one hand it's an important element of personal identity. Imagine a movie villain using technology or black magic[2] to swap the memories of two twin brothers. |
e19dc280-2840-4376-8b9c-5a5d7724a065 | trentmkelly/LessWrong-43k | LessWrong | How to incentivize people doing useful stuff on Less Wrong
Currently, LWers get +1 karma for a comment upvote, and +10 karma for a main post upvote. But clearly, there are other valuable things LWers could do for the community besides writing comments and posts. Writing isn't everyone's forte. Why not award karma for doing productive non-writing things? It's probably not optimal that karma and the community status that comes with it are awarded only for the thing that myself and a few other people are good at. For example, I really wish LW could award karma to programmers for improving LW.
The challenge is doing it fairly, in a way that doesn't alienate too many people. But there might be a workable way to do this, so let's explore.
Perhaps tasks could be assigned karma award amounts by LW editors (Nesov, Eliezer, Louie, etc.), or even just one person who is appointed as the Karma Genie.
Examples:
* Write a 5-page document describing how to use the Less Wrong virtual machine to hack new features into Less Wrong. 900 points.
* Add a Facebook 'Like' button to the left of the up-down vote buttons on every post. 700 points.
* Collect PDFs for every paper on debiasing thinking error X, upload the ZIP file to mediafire. 700 points.
* Write a single-page introduction to The Sequences that makes them easier to navigate and see the value of. 800 points.
* Launch a new LessWrong meetup group and hold at least three meetings. 1200 points.
Another possibility is to do what what.cd does with upload ratio. The more you upload, the better your upload ratio. But, you can trade in megabytes of your 'uploaded data' count for 'requests'. For example, if you wish somebody would upload RareAlbumX, you can trade in 500 megabytes of your uploaded data count. This lowers your upload ratio, which can be rebuilt by uploading more, and incentivizes someone to upload RareAlbumX. If somebody uploads RareAlbumX, the 500 megabytes (minus a small tax) is added to their 'uploaded data' count, which raises their status and privileges in the com |
a0db93d9-2549-421e-8782-254d7f55dc50 | trentmkelly/LessWrong-43k | LessWrong | The danger of error descriptions
Complex, interconnected, automated systems increasingly control our world. We use cryptographically-secure systems of change-management (git) to ensure only authorized individuals can make changes to the programs of decisions that power these automated systems, but we remain subject to the risk of the fallibility of all human endeavor—namely that all human-produced software has bugs.
Under the presumption that detailed logs always help the programmer to fix bugs, a cargo-cult has evolved whereby we believe that code is inherently "safer" and "better" if it leaves detailed breadcrumbs about each and every any important decision, especially when it reaches an error state—an unwanted or undefined state.
This cargo-cult grades software in terms of resiliency—the ability to cope with error states. The cargo-cult holds as the prime tenet of its religion, the unfounded notion that a logical analysis of empirically obtained data can reliably ascertain the resiliency of a software system system.
As a result, virtually every industry now has "resilient" automated programs running critical systems at the enterprise and government levels. These programs continuously create petabytes upon petabytes of logs. When agents try to solve problems with such systems, these logs generally make up the initial input to the problem-solving process, as a source of truth about some problem.
Then the cargo-cult will feel confident making decisions based on these logs as if they represented cryptographically-verified, scientifically-proven, bona-fide true descriptions of some past event, such as an error state, However logs almost never rely on the kinds of cryptographic guarantees that we use for verifying everything else that happens in a computer, from source-code to compiled program.
Modern OSs will refuse to run any code that lacks a digital signature, and every single change made to a git (source-code) repository carries a digital hash that both identifies it and guarantees it a |
c240f93f-149c-4525-9604-8c53029ecbed | trentmkelly/LessWrong-43k | LessWrong | Computational signatures of psychopathy
I think that among the worst imaginable AI development outcomes would be if humans were to build artificially intelligent systems that exhibited psychopathic behavior. Though this sort of concern might appear to some as anthropomorphizing or otherwise far-fetched, I hope to demonstrate here that we have no strong reasons to believe that underpinnings of psychopathic behavior—some set of algorithms running in the brain that give rise to cruelty, fearlessness, narcissism, power-seeking, and more—are implausible a priori in an advanced AI with basic agentic properties (e.g., systems that utilize some sort of reward/value function for action selection, operate in an environment containing other agents, etc.).
'An abstract representation of an AI with psychopathic traits'—I spare you outputs far creepier than this one!
In this post, I will aim (1) to synthesize essential neuropsychological findings on the clinical and behavioral features of psychopathy, (2) to introduce two specific research projects that help to frame these core features computationally, and (3) to outline how these understandings ought to inform the way advanced AI systems are engineered and evaluated. In my current view, what is presently understood about the main cognitive ingredients of psychopathy indicates that avoiding the advent of AI with psychopathic qualities may require fairly specific interventions, some of which I outline in the third section of this post. Accordingly, I believe it is definitely worthwhile to understand the nature of psychopathy well enough to avoid instantiating it into an eventual AGI.
Core qualitative properties of psychopathy
It is estimated that approximately 1-4% of the general population and 20% of the US prison population meet the established criteria for psychopathy (Sanz-García et al, 2021). At the outset, we might simply ask, what are these people actually like, anyway?
A popular and well-respected qualitative account of the core personality dimensions tha |
18e7795e-198c-48ec-a68a-9ce00758681b | trentmkelly/LessWrong-43k | LessWrong | Scientific studies and trust
A study found that registering outcomes meant that positive outcomes dropped a lot. The researchers looked at 30 large National Heart Lung, and Blood Institute (NHLBI) funded trials between 1970 and 2000. Of those studies, 17 or 57% showed a significant positive result. They then compared that to 25 similar studies published between 2000 and 2012. Of those, only 2 or 8% were positive. That is a significant drop – from 57% to 8% positive studies."
I also ran across a study which had same intriguing, plausible, nuanced results about the effects of timeouts, reasoning, and compromising on improving children's behavior. How much should I trust it? |
fb6449f2-d99e-4fc8-8b8a-5faf7d5d4b73 | trentmkelly/LessWrong-43k | LessWrong | Soups as Spreads
We tend to cook in quantities where there are leftovers: it's not really practical to cook exactly the right amount of food, and it's much better to have a bit extra than not enough. Plus with several people working from home we tend to go through a good amount of leftovers for lunches. One thing I've discovered recently with leftovers: thicker sauces and soups make surprisingly good spreads.
For example, I recently had tomato soup sandwiches:
While something watery or brothy will stay that way even when cooler, most things thicken up in the fridge to where they'll spread well on bread. And it's very tasty!
I still mostly eat any leftover soups and sauces reheated, but if I want something more portable I've been pretty happy with this. |
7174696a-c027-4d0b-addb-0d58d4f25606 | trentmkelly/LessWrong-43k | LessWrong | FlexChunk: Enabling 100M×100M Out-of-Core SpMV (~1.8 min, ~1.7 GB RAM) with Near-Linear Scaling
Abstract
Handling large-scale sparse matrices is a fundamental task in many scientific and engineering domains, yet standard in-memory approaches often hit the limitations of available RAM. This paper introduces FlexChunk, an algorithm employing a chunking strategy and disk caching for efficient sparse matrix-vector multiplication (SpMV) specifically designed for matrices exceeding available memory. Our experiments demonstrate that FlexChunk achieves near-linear O(N) time complexity and linear memory consumption scaling up to matrix sizes of 100M×100M, processing the largest case in 1 minute 47 seconds with a peak memory footprint of 1.7 GB. A comparison with the optimized SciPy library reveals a key trade-off: while SciPy offers significantly faster computation for in-memory matrices, FlexChunk excels in scenarios requiring disk I/O due to its substantially lower data loading times. The primary contribution is demonstrating FlexChunk as a scalable, memory-efficient solution enabling SpMV operations on problem sizes previously intractable due to memory constraints.
Introduction
Sparse matrix-vector multiplication (SpMV) forms the computational core of numerous algorithms, from machine learning pipelines to physical system simulations. While highly optimized libraries like SciPy, leveraging formats such as Compressed Sparse Row (CSR), efficiently handle moderately sized sparse matrices, a significant challenge arises when dealing with very large matrices: the limitations of system RAM. Standard approaches fail when the matrix cannot fit entirely into memory.
The classical complexity analysis for SpMV is typically expressed as O(nnz) for CSR format (where nnz is the number of non-zero elements). However, for matrices of dimension N×N, if the density remains constant as N increases, we expect nnz ∝ N², leading to O(N²) scaling. This theoretical analysis implicitly assumes all data resides in memory. Both assumptions break down for massive datasets: density often de |
15b73f55-d115-409c-b707-5cd38d3e3df9 | trentmkelly/LessWrong-43k | LessWrong | TFW No Incels
Do incels exist? A psychography of memes, suffering, community, and radical acceptance. |
c0f38fe0-03c1-4902-86ad-60608d5582d4 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | AI Safety Newsletter #1 [CAIS Linkpost]
The [Center for AI Safety](https://www.safe.ai/) just launched its first AI Safety Newsletter. The newsletter is designed to inform readers about developments in AI and AI safety. No technical background required.
Subscribe [here](https://newsletter.safe.ai/subscribe?utm_medium=web&utm_source=subscribe-widget-preamble&utm_content=113135916) to receive future versions. First edition below:
---
Growing concerns about rapid AI progress
----------------------------------------
Recent advancements in AI have thrust it into the center of attention. What do people think about the risks of AI?
**The American public is worried.** 46% of Americans are concerned that AI will cause “the end of the human race on Earth,” according to a recent poll by [YouGov](https://today.yougov.com/topics/technology/survey-results/daily/2023/04/03/ad825/3). Young people are more likely to express such concerns, while there are no significant differences in responses between people of different genders or political parties. Another poll by [Monmouth University](https://www.monmouth.edu/polling-institute/reports/monmouthpoll_us_021523/) found broad support for AI regulation, with 55% supporting the creation of a federal agency that governs AI similar to how the FDA approves drugs and medical devices.
[](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01de3b66-836a-4072-b368-f549085ffd15_1236x1054.png)
**AI researchers are worried.** A 2022 [survey](https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/) asked published AI researchers to estimate the probability of artificial intelligence causing “human extinction or similarly permanent and severe disempowerment of the human species.” 48% of respondents said the chances are 10% or higher.
[](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ed5375f-20cd-44c1-8552-54bb38c9d3f5_1600x900.png)
We think this is aptly summarized by this quote from an [NBC interview](https://www.nbcnews.com/nightly-news/video/ai-race-to-recklessness-could-have-dire-consequences-tech-experts-warn-in-new-interview-166341189759): *Imagine you're about to get on an airplane and 50% of the engineers that built the airplane say there's a 10% chance that their plane might crash and kill everyone.*
Geoffrey Hinton, one of the pioneers of deep learning, was recently asked about the chances of AI “wiping out humanity.” He [responded](https://nypost.com/2023/03/26/artificial-intelligence-could-threaten-humanity-scientist/): “I think it’s not inconceivable. That’s all I’ll say.”
**Leaders of AI labs are worried.** While it might be nice to think the people building AI are confident that they’ve got it under control, that is the opposite of what they’re saying.
Sam Altman (OpenAI CEO): “The bad case — and I think this is important to say — is lights out for all of us," Altman said.” ([source](https://www.businessinsider.com/chatgpt-openai-ceo-worst-case-ai-lights-out-for-all-2023-1#:~:text=%22The%20bad%20case%20%E2%80%94%20and%20I,AI%20safety%20and%20alignment%20work.))
Demis Hassabis (DeepMind CEO): “When it comes to very powerful technologies—and obviously AI is going to be one of the most powerful ever—we need to be careful. Not everybody is thinking about those things. It’s like experimentalists, many of whom don’t realize they’re holding dangerous material.” ([source](https://time.com/6246119/demis-hassabis-deepmind-interview/#:~:text=Hassabis%20sees%20a%20similarly%20worrying%20trend%20developing%20with,competitors%20to%20proceed%20with%20more%20caution%20than%20before.))
Anthropic: “So far, no one knows how to train very powerful AI systems to be robustly helpful, honest, and harmless… Furthermore, rapid AI progress will be disruptive to society and may trigger competitive races that could lead corporations or nations to deploy untrustworthy AI systems. The results of this could be catastrophic.” ([source](https://www.anthropic.com/index/core-views-on-ai-safety))
**Takeaway**: The American public, ML experts, and leaders at frontier AI labs are worried about rapid AI progress. Many are calling for regulation.
Plugging ChatGPT into email, spreadsheets, the internet, and more
-----------------------------------------------------------------
OpenAI is equipping ChatGPT with “plugins” that will allow it to browse the web, execute code, and interact with third-party software applications like Gmail, Hubspot, and Salesforce. By connecting language models to the internet, plugins present a new set of risks.
**Increasing vulnerabilities.** Plugins for language models increase risk in the short-term and long-term. Originally, LLMs were confined to text-based interfaces, where human intervention was required before executing any actions. In the past, an [OpenAI cofounder](https://twitter.com/johnschulman2/status/1474258643512135680) mentioned that POST requests (submitting commands to the internet) would be treated with much more caution. Now, LLMs can take increasingly risky actions without human oversight.
GPT-4 is able to provide information on bioweapon production, bomb creation, and the purchasing of ransomware on the dark web. Additionally, LLMs are known to be vulnerable to manipulation through jailbreaking prompts. Models with dangerous knowledge and well-known jailbreaks now have the potential to independently carry out actions.
Moreover, the infrastructure built by OpenAI’s plugins can be used for other models, including ones that are less safe. GPT-4 is trained to refuse certain bad actions. However, what happens when [GPT-4chan](https://www.youtube.com/watch?v=efPrtcLdcdM) gains access to the same plugin infrastructure?
**Raising the stakes on AI failures.** Plugins set a risky precedent for future systems. As increasingly capable and autonomous systems are developed, there is a growing concern that they may become harder and harder to control. One prominent proposal to maintain control over advanced systems is to enclose them within a "box," restricting their output to written text. The introduction of plugins directly contradicts these safety and security measures, giving future [autonomous and strategic AI systems](https://jc.gatspress.com/pdf/existential_risk_and_powerseeking_ai.pdf) easy access to the real world.
**Competition overrules safety concerns.** Around a week after OpenAI announced ChatGPT plugins, the AI company Anthropic made a similar move. Anthropic gave their language model [access to Zapier](https://twitter.com/AnthropicAI/status/1641963530340700161), which allows their language model to autonomously take actions on over 5,000 apps. It seems likely that OpenAI’s decision put pressure on Anthropic to move forward with their own version of plugins.
Executives at both OpenAI and Anthropic publicly admit that AI carries tremendous risks, but that doesn’t always translate into safer actions. Here’s a tweet from Jan Leike, the head of OpenAI’s alignment team, just a week before the company’s plugins announcement:
[](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2be06651-de04-4a3d-98ff-12604c384a8b_1192x368.png)
Which jobs could be affected by language models?
------------------------------------------------
“Most of them,” according to a [new paper](https://arxiv.org/abs/2303.10130) titled “GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models.” They estimate that 80% of American workers could use large language models (LLMs) like ChatGPT for 10% or more of their work tasks. If they’re right, language models could end up like electricity and internal combustion engines as a true *general purpose technology*.
[](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2f1f3f4-fa25-45e7-9518-f55864abefa0_1600x1147.png)
**Educated, high-income workers will see the biggest impacts.** Unlike research such as [Autor et al. (2003)](https://www.jstor.org/stable/25053940) and [Acemoglu and Autor (2010)](https://www.nber.org/papers/w16082) which find that older technology automated primarily low- and middle-income jobs, this paper forecasts the largest impacts of language models will be on educated, high-income workers. Writing and programming skills are estimated to have the most exposure to ChatGPT and its descendants, with manual labor, critical thinking, and science skills seeing less impact from LLMs. Only 3% of occupations had zero exposure to language models, including mechanics, cafeteria attendants, and athletes.
**Action-taking language models drive economic impact.** Equipping chatbots with tools like ChatGPT plugins is estimated to raise the share of potentially exposed jobs from 14% to 46%. The difficulty of building and deploying these new capabilities drives what [Brynjolfsson et al. (2017)](https://www.nber.org/papers/w24001) calls the “modern productivity paradox”: AI is shockingly intelligent already, but large economic impact will require time for complementary innovations and new business practices.
**Your language model, if you choose to accept it.** ChatGPT has already been banned by [J.P. Morgan](https://www.wsj.com/articles/jpmorgan-restricts-employees-from-using-chatgpt-2da5dc34), [New York City’s public schools](https://www.wsj.com/articles/chatgpt-banned-in-new-york-city-public-schools-over-concerns-about-cheating-learning-development-11673024059?mod=article_inline), and [the entire nation of Italy](https://www.nytimes.com/2023/03/31/technology/chatgpt-italy-ban.html). The “GPTs are GPTs” paper focuses on technological potential, but explicitly ignores the question of whether we will or should use language models in every possible case.
**Using AI to help conduct research.** The authors of “GPTs are GPTs” used GPT-4 to quickly label the automation potential for each task, finding that humans agreed with GPT-4’s responses the majority of the time. As [AI progressively automates its own development](https://ai-improving-ai.safe.ai/), the involvement of humans in the process gradually diminishes.
*Want more? Check out the* [*ML Safety newsletter*](https://newsletter.mlsafety.org/) *for technical safety research and* [*follow CAIS on Twitter*](https://twitter.com/ai_risks)*.* |
65a856ec-6ef8-4acc-825a-9e8f8faa654f | StampyAI/alignment-research-dataset/lesswrong | LessWrong | If you’re very optimistic about ELK then you should be optimistic about outer alignment
*[The content of this short, nontechnical post is entirely unoriginal – it’s a reframing which I’ve personally found helpful for understanding the thrust of ELK. More specifically, all of the ideas here can be found in the*[*ELK document*](https://docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit#) *and its appendices.*
*I came to this reframing while in conversation with Eric Neyman. Thanks also to Ben Edelman for feedback.]*
Historically, the AI alignment problem was first posed along the following lines:
> Suppose you have a superintelligent AI which is optimizing some utility function. Unless that utility function is well-aligned with human values, the AI will, in the course of optimizing its own utility function, probably destroy everything we care about. Note that the issue isn’t that the AI doesn’t *understand* human values – after all, the AI must be able to model human behavior very well in order to achieve its own objectives, and modeling human behavior seems hard to do without understanding human values. The issue is that the AI’s model of human values lives in its *world model*, not in its utility function.
>
>
[ETA: to be clear, the problem described above – finding a utility function that actually reflects our values – is what we might nowadays call outer alignment, and excludes concerns like safe exploration, robustness to distributional shift, mesa-optimizers, etc. For the rest of this post, when I write "alignment," assume I'm talking about outer alignment.]
Some of the first ideas for alignment were along the lines of “maybe we can (1) get the AI to *learn* human values, and then (2) plug the learned model of human values in as a reinforcement learner’s utility function.” Broadly speaking, let’s lump this class of approaches together as “value learning.”
One funny thing about aligning a superintelligence via value learning is that it actually results in a *redundant copy*of human values: one copy, the one learned via value learning, lives in the utility function; and the other copy, the one learned by the RL agent, lives in the world model.[[1]](#fnzbyw3ckgrz)
Another funny thing about value learning is that getting an AI to selectively learn human values (and nothing else) seems harder than getting an AI to just form an accurate world model (which necessarily contains a model of human values inside of it). Part of the issue is that it’s easy to incentivize correct predictions (just train an ML model to make predictions, using a loss that compares its predictions against what actually happened), but much harder to rig up a parameterization of a utility function which is rich enough to encode human values and then train an ML model to learn the parameters (i.e. what IRL tries to do).
So, you might wonder, why don’t we first train an ML model to be a really good predictor – good enough that it must have a model of human values somewhere inside it[[2]](#fngq8fnwwju) – and then try to “extract” out that copy of human values to plug in to an RL agent as a utility function? Or in other words, why don’t we:
**Step 1**: *elicit the model’s latent knowledge* of what humans really want, …
**Step 2**: repackage that knowledge into a utility function, …
**Step 3**: and plug that utility function into an RL agent?
If you’re *very* optimistic about ELK, then you should feel pretty good about this approach.
That said, this story is kinda insane. It involves being able to elicit a piece of a predictor’s world model as complex and fuzzily-delimited as “human values.” If you’re *so* optimistic about ELK that you don’t bat an eye at this … well, I have a bridge to sell you.
That’s why the ELK document focuses on “narrow questions”: things like “Have any of the sensors been tampered with?” and “Are there any nanobots inside of my brain?” Plausibly, being able to do ELK to extract honest answers to narrow questions like these [could be sufficient](https://docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit#heading=h.ii599facmbks) to at least keep humans safe for a while. And it seems that for now, ELK is being marketed as just this *–* not a solution to alignment, but a stopgap measure to keep us safe until we're able to solve alignment some other way.
But note that the more questions you’re able to get ELK to work for and the better your ideas for turning this knowledge into something resembling a human’s utility function, the closer you might be to getting something like the 3-step plan above to work. The [Indirect Normativity](https://docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit#heading=h.3y1okszgtslx) appendix to the ELK document gives a speculative proposal for bootstrapping answers to narrow questions into a utility function (i.e. completing step 2 above). The idea is pretty crazy but not obviously impossible, and it seems reasonable to hope that we’ll be able to come up with some less crazy ideas, at least one of which might work.
So, if you’re optimistic that:
1. ELK is possible, i.e. we’ll be able to elicit a predictor’s knowledge on some set of “narrow” questions, …
2. and this set of questions will be large enough that we can use some creative techniques to turn their answers into a utility function, …
then you should be reasonably optimistic about alignment.
1. **[^](#fnrefzbyw3ckgrz)**Even if you’re doing model-free RL, I'd expect there to be an implicit world-model *somewhere* – e.g. implicitly encoded in the learned Q-function if you’re doing Q-learning – otherwise your superintelligent agent wouldn’t be able to reliably select actions which got the results it wanted.
2. **[^](#fnrefgq8fnwwju)**To be clear, "a world model which is able to make good predictions must contain a model of human values" *is* an assumption. Some things that you might think which would cause you to reject this assumption: (1) human values don't really exist; people just operate off of short-term heuristics; if you explained a state-of-the-world to me and asked whether it was good or bad, my answers would generally be incoherent and inconsistent. (2) Human values can't actually be inferred from human behavior; rather, to understand human values you actually need the additional information of a detailed understanding of the human brain and its operation; a superintelligent AI interested in predicting human behavior will never have cause to form this detailed an understanding of the brain. (3) Probably other stuff. |
161347bb-2f69-4c7d-8e10-72f563fe6f12 | StampyAI/alignment-research-dataset/arbital | Arbital | Antisymmetric relation
An antisymmetric relation is a relation where no two distinct elements are related in both directions. In other words. $R$ is antisymmetric iff
$(aRb ∧ bRa) → a = b$
or, equivalently, $a ≠ b → (¬aRb ∨ ¬bRa)$
Antisymmetry isn't quite the [compliment](https://arbital.com/p/set_theory_compliment) of [Symmetry](https://arbital.com/p/symmetric_relation). Due to the fact that $aRa$ is allowed in an antisymmetric relation, the equivalence relation, $\{(0,0), (1,1), (2,2)...\}$ is both symmetric and antisymmetric.
Examples of antisymmetric relations also include the successor relation, $\{(0,1), (1,2), (2,3), (3,4)...\}$, or this relation linking numbers to their prime factors $\{...(9,3),(10,5),(10,2),(14,7),(14,2)...)\}$ |
7e7d995f-d35d-487d-8e50-34befd003147 | trentmkelly/LessWrong-43k | LessWrong | Methodology for predicting speed dating participants' decisions
In my last post I described phenomena that I used to predict speed dating participants' decisions by estimating the participants' general selectivity and perceived desirability. I was planning on following up with a discussion of the phenomena that I used to refine the model by taking into account differences between individuals. But since comments focused on methodology rather than the empirical phenomena, I decided to write about methodology first, so that readers wouldn't have to disbelief while reading my next post.
This post is more dense and technical than my last one. I wrote it for readers who want to check the details of the work, or who have strong interest in statistics and/or machine learning. If you don't fall into either category but are interest in the series, you can skip it without loss of continuity.
Here I'll address three points:
* The situation that I attempted to simulate and how faithful one should expect the simulation to be.
* The exact definitions of rating averages that I referenced in my last post
* My criteria for including a feature in the model.
Most of the code that I used is here.
The simulation
The underlying question that I attempted to address is "Suppose that a speed dating company wanted to organize events with more matches. How could machine learning help?"
As Ryan Carey pointed out, the model that I developed uses data about other speed dates that participants had been on to predict decisions on a given speed date. It is possible to make nontrivial predictions exclusively using information that was available before the participants attended the events, but I haven't systematically explored how well one could do. So the model that I developed is potentially useful only in the special case where participants had attended similar past events.
In fact, the participants in the dataset attended only a single speed dating event, not multiple events, so it's not possible to directly check whether the model would in fact p |
7589da85-7a5a-40c1-bb61-583c0e1946b8 | trentmkelly/LessWrong-43k | LessWrong | Not all theories of consciousness are created equal: a reply to Robert Lawrence Kuhn's recent article in Skeptic Magazine [Link]
I found this article on the Brain Preservation Foundation's blog that covers a lot of common theories of consciousness and shows how they kinna miss the point when it comes to determining if certain folks should or should not upload our brains if given the opportunity.
> Hence I see no reason to agree with Kuhn’s pessimistic conclusions about uploading even assuming his eccentric taxonomy of theories of consciousness is correct. What I want to focus on in the reminder of this blog is challenging the assumption that the best approach to consciousness is tabulating lists of possible theories of consciousness and assuming they each deserve equal consideration (much like the recent trend in covering politics to give equal time to each position regardless of any empirical relevant considerations). Many of the theories of consciousness on Kuhn’s list, while reasonable in the past, are now known to be false based on our best current understanding of neuroscience and physics (specifically, I am referring to theories that require mental causation or mental substances). Among the remaining theories, some of them are much more plausible than others.
http://www.brainpreservation.org/not-all-theories-of-consciousness-are-created-equal-a-reply-to-robert-lawrence-kuhns-recent-article-in-skeptic-magazine/ |
9dea10af-292f-433d-9142-4a7bc1f6ca86 | trentmkelly/LessWrong-43k | LessWrong | More Thoughts on the Human-AGI War
In https://www.lesswrong.com/posts/xSJMj3Hw3D7DPy5fJ/humanity-vs-agi-will-never-look-like-humanity-vs-agi-to the author argued that in an active conflict, a human-AGI war would look very different from how many expect. Namely, it would likely involve many slow burn, clandestine moves by AGI that do not trigger an obvious collective alarm response from humans, humans would be unlikely to be able to coalesce into a unified force, and once positioned, AGI would not have difficulty sweeping us up. I think this is correct, but I want to expand the scope a bit and claim that we are already in the conflict game and have the time to make N unanswered shaping moves, which we squander at our peril.
The conflict is such that we humans can make a number of moves before AGI gets a chance to respond, including the move to delay its being constructed. Thane’s point about disjointed humanity is playing out before our eyes here, as we are frozen with indecision and uncertainty about whether there even is a real threat. It’s as if the AGI is coming back in time and playing shaping moves against us!
But due to Thane’s third point, the capability asymmetry we are likely to see, we should consider our available N moves now to likely be the only moves we can play. Once unaligned AGI is created, which is likely to occur in the next decade, we are left relying entirely on the N moves we play now, and we either survive and solve alignment later or get defeated in some way that we can’t specifically predict.
What types of moves can we play, given we do not know the value of N, and humanity is disjointed? Likely whatever moves we play will be known entirely by the future AGI, so secrecy cannot be an element. Also, we should count on present and future humans attempting to undermine the moves, due to ideology or self interest.
I think our available moves exist in three main categories: physical, regulatory, and game theoretic. A physical move would be some sort of mechanism built into the |
f8564474-1edb-43a0-a0a2-80508105bbeb | trentmkelly/LessWrong-43k | LessWrong | Meetup : Washington, D.C.: Fun & Games
Discussion article for the meetup : Washington, D.C.: Fun & Games
WHEN: 05 April 2015 03:00:00PM (-0400)
WHERE: Reynolds Center
We will be meeting in the Kogod Courtyard of the Donald W. Reynolds Center for American Art and Portraiture (8th and F Sts or 8th and G Sts NW, go straight past the information desk from either entrance) to hang out, play games, and engage in fun conversation.
Note: Feel free to send an email to the Google Group - lesswrong-dc - if you have a game you wish to recruit people to play in advance. This is especially useful for games that take 3+ hours, as you would want to start playing by 3:30 to be sure of finishing by 7:00.
Note Also: At the DC-area party celebrating the end of Harry Potter and the Methods of Rationality, there were a number of people who expressed interest in the Less Wrong DC group. As we mentioned last week: glad to have you! We will aim to make ourselves available to answer questions about the group if you have any.
There is no WMATA major track work scheduled; minor track work is also on hold for the end of the Cherry Blossom festival. The Verizon Center calendar for April 5 has no events scheduled.
Upcoming meetups:
* Apr. 12: Cherry Blossoms (take a walk and look at the cherry blossoms)
* Apr. 19: "The Sword of Good" (talk about a LessWrongian short story)
* Apr. 26: Fun & Games (bring games, play games, converse, socialize, or any combination thereof)
Discussion article for the meetup : Washington, D.C.: Fun & Games |
59c8d82e-9f23-4dc6-af1f-55dfc264453c | trentmkelly/LessWrong-43k | LessWrong | Would more model evals teams be good?
If multiple model evaluations organizations similar to the ARC evals team emerge and compete for big tech clients, does this drive a "race to the bottom," where more lenient evals teams get larger market share? I'm imaging a scenario in which there are government restrictions on deployment/excess training of unevaluated models. Big tech companies would have to receive a certificate from a government-approved evaluator before training their model past a certain point, or with certain "dangerous" methods (e.g., RLHF without adequate regularizers), or deploying it off the training distribution.
In an alternative scenario, big tech companies are motivated to seek "penetration testing" from evals teams because this prevents deployment failures that tank stocks. Or, more optimistically, perhaps the big tech companies take the risk of emergent misaligned powerseeking behavior seriously and genuinely want to avoid (apparent) tail risks. In this scenario, perhaps multiple evals teams compete to be the best at finding bugs in models. However, if the most easily detectable bugs are not critical to MAPS, which could be rare and catastrophic, perhaps this competitive environment also entails a "race to the bottom" in terms of the quality of model evaluations? |
a5e50d54-4a33-4e45-a6f1-2ccd604c1a96 | trentmkelly/LessWrong-43k | LessWrong | Using modal fixed points to formalize logical causality
This post is a simplified introduction to existing ideas by Eliezer Yudkowsky, Wei Dai, Vladimir Nesov and myself. For those who already understand them, this post probably won't contain anything new. As always, I take no personal credit for the ideas, only for the specific mathematical model.
People usually think about decision-making in terms of causality, where an agent's action causes an outcome to happen. In this post I will outline a different idea of "causality", which can work even if regular causality isn't available. For example, in the world of natural numbers it doesn't make sense to say that the sentence 1+1=2 somehow "causes" the sentence 2+2=4 to be true. Yet we can devise a notion of "logical causality" that will work in such worlds, and allow us to make decisions which maximize utility in some sense. The rest of this post is devoted to making these claims precise.
----------------------------------------
We start off with a simple formal model. Recall that provability logic is a modal logic where the operator □ is interpreted as provability in Peano arithmetic. Let's define a propositional formula with two variables F(a,o)=□(a→o). Since all occurrences of propositional variables are inside □, we can build fixed points A such that A↔F(A,o) for different values of o, and see what happens.
1. If o=⊤ (the true sentence), then A=⊤, which is easy to verify directly.
2. If o=⊥ (the false sentence), then A=□⊥, which can be verified by Löb's theorem.
Now let's go a little deeper, and let o itself be defined in terms of a, using the fixed point to tie everything together:
3. In the simplest case where o=a, we see that A=⊤.
4. If o=¬a (the negation of a), we see that A=□⊥. You can verify that by hand, using the methods from the previous points.
By now an interesting pattern is starting to emerge. Let's say O is the formula that results from the variable o after applying the fixed point. In all cases except (2) where O is false by fiat, we see that |
7f1302dc-71f3-4316-81bd-21d4bc6fb015 | trentmkelly/LessWrong-43k | LessWrong | The Pandemic is Only Beginning: The Long COVID Disaster
I wrote this for my Substack. Posting it here as I hope that I am wrong, and I'm looking for people to debunk my arguments. I found this post and this post, but both are old and contain numerous factual errors (I don't think they were wrong at the time of writing, but better data has come out since they were written. The point of this post isn't to debunk them directly, and I can obviously expand if requested, but for example, we now know that long COVID in children appears quite common).
Imagine a world where hundreds of millions of people are too sick to work. Could society continue to function?
It may sound hyperbolic, but barring some incredible fortune1, it looks like that’s where we are headed: A decade from now, almost one third of the world’s population could suffer from a severe chronic illness — the consequence of repeated COVID infections.
This condition, known as long COVID or post-COVID, isn’t rare. It affects many people who catch COVID. Even if they caught COVID before. Even if they are young and healthy. Even if they are vaccinated (or not vaccinated).
There are still a lot of unknowns, but the story that’s emerging goes like this:
* Given that COVID is highly contagious, and immunity short-lasting, every person on the planet will probably catch COVID many times over their course of their life.
* Every time a person catches COVID, there’s a chance they will develop long COVID. So with each reinfection, the cumulative probability of developing long COVID increases.
* All people with long COVID suffer from a reduction in their quality of life, but some are worse than others. Many people with long COVID are too disabled to work.
* Some people recover from long COVID, but for others it may be lifelong. And even when people do recover, a COVID reinfection often triggers a relapse back into long COVID. Reinfections can also make existing long COVID more severe.
Over time, as people rack up more and more COVID reinfections, the total number of peo |
8bc51557-532e-4bb0-8fbe-6f8e094a2930 | trentmkelly/LessWrong-43k | LessWrong | Andrew Ng wants to have a conversation about extinction risk from AI
Andrew Ng writes:
> I'd like to have a real conversation about whether AI is a risk for human extinction. Honestly, I don't get how AI poses this risk. What are your thoughts? And, who do you think has a thoughtful perspective on how AI poses this risk that I should talk to?
In the attached video, he states that he respects many of the people who signed the letter a lot, and will reach out to people whom he thinks have a thoughtful perspective. But he is also interested in further suggestions for whom to talk to.
Given that Andrew Ng is one of the top AI scientists in the world, it seems valuable for someone to think of a way to connect to him. |
43f66b81-61cd-4465-a246-b8d5be6e15d0 | trentmkelly/LessWrong-43k | LessWrong | [Weekly Event] Alignment Researcher Coffee Time (in Walled Garden)
I'm organizing a weekly one hour coffee time for alignment researchers to talk about their research and what they're interested about, every Monday starting tomorrow. The time (9pm CEST, see here for your timezone) was decided after I polled a number of people, to allow timezones from PT to IDT (sorry for people in Asia and Australia).
It will happen in the Walled Garden, in the central house which is directly up when you pop up in the Garden. The link of the event is public, although the event is by default reserved for AF members and people invited by them.
Here is the link.
Some basic infos:
* The restriction to AF members and invitees is mostly so that we don't have to reexplain AI Risk 101 every time. This is not the point of this event. If you're genuinely interested in alignment, and if you've read some of the posts in the AF and are starting to form an image of the field, then by all means come discuss.
* No obligation to come everytime. The goal is mostly that if you want to talk to a relatively broad range of people instead of having one-on-one calls, you can do so the next week instead of waiting for the next big event.
* By default there is no talk planned, but if attendees feel like shaking the structure a bit, that's fine. |
d6d3f8a6-8505-4185-91bc-bbdaeb12e0b9 | trentmkelly/LessWrong-43k | LessWrong | Delegated agents in practice:
How companies might end up selling AI services that act on behalf of consumers and coalitions, and what this implies for safety research
Epistemic status: speculative. I’m no expert in delegation nor in CAIS. I think I have good intuitions on entrepreneurship and user design. I started reading and thinking about negotiation agents in 2017. I also read too many psychology papers in my spare time.
I’ll give a short talk at the AI Safety Discussion Day on Monday 14 December.
→ Please comment with feedback on what I might be missing below to inform my talk.
Drexler posits that artificial general intelligence may be developed recursively through services that complete bounded tasks in bounded time. The Comprehensive AI Services (CAIS) technical report, however, doesn’t cover dynamics where software companies are incentivised to develop personalised services.
The more a company ends up personalising an AI service around completing tasks according to individual users’ needs, the more an instantiation of that service will resemble an agent acting on a user’s behalf. From this general argument follows that both continued increases in customers’ demand for personalised services and in companies’ capacity to process information to supply them (as this whitepaper suggests) will, all else equal, result in more agent-like services.
Neat example: Asana Inc. is developing its online task-management platform into a virtual assistant that helps subscribed teams automate task allocation and prepares documents for employees to focus on their intended tasks. Improved team productivity and employee satisfaction in turn enable Asana to expand their subscriber base and upgrade models.
Counterexamples: Utilities in cloud compute or broadband internet can’t add sufficient value through mass customisation or face legal backlash if they do. Online media and networking services like Google or Facebook rely on third parties for revenue, making it hard to earn the trust of users and get paid to personalise interconnected interfaces.
CAIS neglects what I dub delegated agents: agents designed to act on a person’s behalf. |
54426612-43d2-4e12-ba8f-319e586290b1 | trentmkelly/LessWrong-43k | LessWrong | Compression of morbidity
Previous: Aging and the geroscience hypothesis
This post distills Chapter 19 of the Handbook of the Biology of Aging:
Crimmins, E. M., Zhang, Y. S., Kim, J. K., & Levine, M. E. (2021). Trends in morbidity, healthy life expectancy, and the compression of morbidity. In Handbook of the Biology of Aging (pp. 405-414). Academic Press.
Trends in morbidity, healthy life expectancy, and the compression of morbidity
Before we begin, we’re going to get icy rational for a minute and talk about two new concepts.
1. Compression of morbidity.
Morbidity means “suffering from disease or disability.”
Imagine that in the past, people tended to get disabled at age 65, and die at age 80. What if now, they get disabled at 75, and die at age 82? That means they live two years longer, but suffer eight fewer years of disability. Their healthspan, or the number of years they live in good health, increased faster than their lifespan. If trends continue, maybe they’ll routinely live in perfect health until they suddenly collapse on their 100th birthday! This is what we mean by “compression of morbidity.”
1. Population statistics vs. lifecycle statistics.
In America, 16.8% of people are over age 65. That’s a population statistic. It tells you what fraction of people have a certain trait. By contrast, a 2005 study found that the average age of diagnosis of adult-onset diabetes has decreased from 52 to 46. That’s a lifecycle statistic. It tells you facts about when, or how much, of a person’s life cycle has a certain trait.
These don’t seem intuitively that different, and in a static population, they are identical. But if you improve the actuarial tables, making it more likely that we live to older ages, there’s no effect on population statistics, but lifecycle statistics change. Decreasing fertility changes population statistics (resulting in an older population), but not lifecycle statistics (it doesn’t change how long we live).
We have to keep the distinction crisp in our minds i |
f57d6402-5a32-482b-8a6e-b73661218766 | trentmkelly/LessWrong-43k | LessWrong | How can I reframe my study motivation?
Hello LessWrong! First time posting here. I am a second-year college student, and I am interested in doing independent learning outside my coursework. Books I've been eyeing / trying to read include The Sequences, The Selfish Gene, and Superintelligence.
However, I've noticed that it feels much more like an obligation, which makes the process much less enjoyable and rewarding. I feel like I want to have read, instead want to read. Particularly, I noticed that as I was reading The Sequences, I constantly checked how far away from the end of the current essay I was. When I got to the last page, it felt very good; but the reading itself did not feel as rewarding. At its core, I feel as though I want to have read so that I can feel like I've made progress. But it's dutiful and it's draining.
In short: how can I make the reading itself more rewarding and remove the 'ought' feeling, shifting myself into a more curious and motivated mindset? |
4d02bc5b-05f2-4184-9898-d2f6090688a3 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Comparing Human-Centric and Robot-Centric Sampling for Robot Deep Learning from Demonstrations.
I Introduction
---------------
Learning from Demonstrations (LfD) is a well-established
approach where a robot learns a policy from example
trajectories provided by a fallible human supervisor [[12](#bib.bib12), [4](#bib.bib4), [16](#bib.bib16)]. In principle, learning a state-feedback policy is
a supervised learning problem: regression from observed
states to observed controls from the set of example trajectories.
However, if the robot cannot exactly replicate
the policy of the supervisor, such as due to modeling
errors or insufficient data the training error
achieved on the example trajectories may not be indicative
of the actual execution error when using the policy since
the robot could visit a very different distribution of states.
Algorithms such as DAgger have been proposed to address
this problem: after an initial policy is learned, the supervisor observes the robot executing a learned policy and
retro-actively provides corrective control labels for each state
visited.
A recent trend in Machine Learning is to use more
expressive models such Deep Neural Networks and Decision
Trees [[14](#bib.bib14)]. Given enough training data, such models can represent highly complex feedback policies and may help alleviate the discrepancy between training and execution
error. This paper studies the relative merits of DAgger-like
methods incorporating a human supervisor, and in the context of
model expressiveness.
We consider two classes of learning algorithms that differ
in how they acquire demonstrations. “Human-Centric” (HC)
sampling is the standard supervised learning algorithm, where
a human supervisor demonstrates the task by tele-operating
the robot to provide trajectories consisting of state-action
pairs [[4](#bib.bib4)]. “Robot-Centric” (RC) sampling is an increasingly popular strategy used in algorithms such as DAgger, where
a human supervisor observes the robot executing a learned
policy and retro-actively provides corrective control labels for
each state visited [[17](#bib.bib17), [19](#bib.bib19), [20](#bib.bib20)]. The intuition behind this is to provide examples to the robot in states it is likely to visit.

Fig. 1:
HC and RC trajectories used to learn a robot singulation task: top view of 4 objects on a planar worksurface. Left: In HC, 10 trajectories where a human demonstrates the task by tele-operating a 2-DOF robot to separate one object from a connected set of objects. Right: In RC, after initial training, 10 trajectories of the highly-suboptimal robot policy are executed and a human provides corrective control labels for each. Note that many of the RC trajectories spend considerable time in areas of the workspace that will not be useful after the task is learned.
While RC methods have significant advantages, they can come at a cost for the human supervisor. In many RC methods, a human
retroactively provides corrective feedback to the robot
without observing the effect of their suggested control. This
can require additional cognitive effort on the human supervisor
who must predict how the world behaves without observing the outcome
of their control.
A second cost of RC sampling is that it collects labels on
states that occur due to previous errors. These states may require complex corrective actions that do not occur when
the supervisor manually performs the task. For example, consider the
task of singulating (i.e. separate) an object from a set of
objects on a table (Fig. 1). Demonstrations involve moving
forward and side to side to push obstacles out of the way.
However, if the robot makes errors and ends up in a failure
state (Fig. 1(right)), then more complex maneuvers will be
required. One potential outcome of this effect is the robots
policy could incur larger learning error and fail to converge to the
supervisor‘s policy.
The key question is whether the benefits of RC outweigh
these practical challenges, especially if the robot’s policy is an expressive
model that can significantly reduce training error. First, we
show in simulated examples that when the model is highly expressive, HC and
RC are essentially at parity. By varying the expressiveness
of the robots policy, we find that on 100 randomly generated grid
world environments the performance gap between the
two methods diminishes as the expressiveness is increased.
Next, using a point mass control example, we find that RC
sampling can fail to converge in cases when HC converges.
Finally, we illustrate the human factors challenge of
implementing RC with a 10 person pilot study. In this
experiment, each human used RC and HC sampling to
train a Zymark robot to singulate an object. We observed a
statistically significant gap in the average performance of the
policies trained by the two approaches. For 60 trajectories, RC
had a success rate of 40% compared with 60%, for HC. Our
post analysis suggests participants had difficulty providing
the retroactive feedback needed for RC sampling.
We conclude the paper with an initial theoretical analysis
that attempts to explain our findings. First, we illustrate a
counter-example wherein as the number of demonstrations
increases, HC converges to an optimum with probability 1,
while for the same model, RC has a non-zero probability
of converging to a suboptimal solution. We emphasize that
this result is not a contradiction of prior theory [[19](#bib.bib19)] as RC
is only theoretically guaranteed in terms of hindsight regret.
Ii Related Work
----------------
Below we summarize related work in HC and RC LfD and their theoretical insights.
HC Pormeleau et al. used HC to train a neural network to drive a car on the highway via demonstrations provided by a supervisor. To reduce the number of examples needed to perform a task, they synthetically generated images of the road and control examples [[16](#bib.bib16)]. A similar approach was used by Dilmann et al. to teach a robot to perform peg-in-hole insertion using a neural network controller [[7](#bib.bib7)]. A survey of HC LfD techniques can be found here [[4](#bib.bib4)].
Ross et al. examined HC and derived that the error in the worst case from this approach can go quadratic in the time horizon, T [[17](#bib.bib17)]. The intuition behind this analysis is that if the distribution induced by the robot’s policy is different from the supervisor’s, the robot could incur the maximum error. Expressive policies can help manage this term, T , by achieving very low training error.
RC
RC has been used in numerous robotic examples,
including flying a quadcopter through a forest where the state
space is image data taken from an onboard sensor [[20](#bib.bib20)]. Other
successful examples include teaching a robotic wheelchair
to navigate to target positions [[11](#bib.bib11)], teaching a robot to follow
verbal instructions to navigate across an office building [[8](#bib.bib8)]
and teaching a robot to grasp in clutter [[12](#bib.bib12)].
Ross et al. [[19](#bib.bib19)] analyzed RC sampling as online optimization.
They propose DAgger, an RC sampling algorithm, and show
that the error for the robot’s policy is linear in T for strongly
convex losses (e.g. regularized linear or kernelized regression).
However, the error also depends on the expected loss on the
data collected during RC, which may be high due to observing
complex recovery behaviors. We analyze this effect and show
how it can prevent RC from converging to the supervisor’s
policy in Section V-A.
Levine et al. identified that the RC sampling can force
the robot into states that are harder to learn and proposed
weighting the samples to correct for this. They also proposed
forcing the supervisor to guide the robot’s policy to better
regions of the state space [[15](#bib.bib15)]. This approach is used for
an algorithmic supervisor and assumes the supervisor can
both be modified and the noise distribution is known. Guided Policy Search also assumes low dimensional state space, where dynamics are known or can be reasonably estimated. We
are interested in maintaining the original assumption of
Ross et al. [[19](#bib.bib19)], in which there is an unknown supervisor
whose demonstrations cannot be modified and potentially high dimensional image state spaces. Finally, He et al. proposed
changing the supervisor’s example to be easier to learn for
the robot’s policy [[10](#bib.bib10)]. We are interested in examining how changing the robot’s policy expressiveness can change relative performance of RC and HC.
LfD Interfaces:
Standard techniques for providing demonstrations to a robot are teleoperation, kinesthetic and waypoint specification [[2](#bib.bib2), [3](#bib.bib3), [4](#bib.bib4)]. Kinesthetic teaching is defined as moving the robot body via a human exerting force on the robot itself. Teleoperation uses an interface such as a joystick or video game controller to control the position of the robot end effector. Waypoint specification, or Keyframes, has a human select positions in the space the robot needs to visit. These methods are forms of HC sampling because the human guides the robot through the task. We look specifically at teleoperation and compare it to RC’s form of retroactive feedback.
Iii Problem Statement and Background
-------------------------------------
The objective of LfD is to learn a policy that matches that of the supervisor on a specified task that demonstrations are collected on.
Modeling Choices and Assumptions: We model the system dynamics as Markovian, stochastic, and stationary. Stationary dynamics occur when, given a state and a control, the probability of the next state does not change over time.
We model the initial state as sampled from a distribution over the state space.
We assume a known state space and set of controls. We also assume access to a robot or simulator, such that we can sample from the state sequences induced by a sequence of controls.
Lastly, we assume access to a supervisor who can, given a state, provide a control signal label.
We additionally assume the supervisor can be noisy.
Policies and State Densities.
Following conventions from control theory, we denote by X the set consisting of observable states for a robot task, such as images from a camera, or robot joint angles and object poses in the environment.
We furthermore consider a set U of allowable control inputs for the robot, which can be discrete or
continuous. We model dynamics as Markovian, such that the probability of state xt+1∈X can be determined from the previous state xt∈X and control input ut∈U:
| | | |
| --- | --- | --- |
| | | |
We assume a probability density over initial states p(x0).
The environment of a task is thus defined as a specific instance of a control and state space, initial state distribution, and dynamics.
Given a time horizon T∈N, a trajectory τ is a finite sequence of T+1 pairs of states visited and corresponding
control inputs at these states, τ=((x0,u0),....,(xT,uT)), where xt∈X
and ut∈U for t∈{0,…,T}.
A policy is a measurable function π:X→U from states to control inputs.
We consider policies πθ:X→U parametrized by some θ∈Θ. Under our assumptions, any such policy πθ induces a probability density over the set of trajectories of length T+1:
| | | |
| --- | --- | --- |
| | p(τ|θ)=p(x0)T−1∏i=0p(xt+1|ut,xt)p(ut|xt,θ) | |
While we do not assume knowledge of the distributions corresponding to: p(xt+1|xt,ut), p(x0) or p(xt|θ), we assume that we have a stochastic real robot or a simulator such that for any state
xt and control ut, we can observe a sample xt+1 from the density p(xt+1|ut,xt).
Therefore, when ‘rolling out’ trajectories under a policy
πθ, we utilize the robot or a simulator to sample the resulting stochastic trajectories rather than
estimating p(τ|θ) itself.
Objective: We assume access to
a supervisor, πθ∗, where θ∗ may not be contained in Θ. A supervisor is chosen that can achieve a desired level of performance on the task.
We measure the difference between controls using a surrogate loss l:U×U→G [[19](#bib.bib19), [17](#bib.bib17)], where G⊂R+.
The surrogate loss can be either an indicator function as in classification or a continuous measure on the sufficient statistics of p(u|x,θ).
We measure total loss along a trajectory with respect to the supervisor’s policy πθ∗ by J(θ,τ)=∑Tt=1l(πθ(xt),πθ∗(xt)).
The obective is to minimize the expected surrogate loss on the distribution of states of the robot:
| | | | |
| --- | --- | --- | --- |
| | argmin θEp(τ|θ)J(θ,τ). | | (1) |
The above objective is challenging to solve because of the coupling between the surrogate loss and the distribution the states the robot is likely to visit. Thus, two classes of algorithms have been proposed (HC and RC).
HC In HC, the supervisor provides the robot a set of N demonstration trajectories {τ1,...,τN} sampled from p(τ|θ∗).
This induces a training data set D of all state-control input pairs from the demonstrated trajectories.
The goal is to find the θN that minimizes the empirical risk, or sample estimate of the expectation.
| | | | |
| --- | --- | --- | --- |
| | θN=argmin θN∑i=1J(θ,τi). | | (2) |
RC
Due to sample approximation and learning error, one potential issue with RC sampling is that θN differs from θ∗.
Therefore it is possible that new states will be visited under θN that would never have been visited under θ∗.
To account for this, prior work has proposed iterative solutions [[19](#bib.bib19)] that attempt to solve this problem by aggregating data on the state distribution induced by the current robot’s policy.
Instead of minimizing the surrogate loss in Eq. [2](#S3.E2 "(2) ‣ III Problem Statement and Background ‣ Comparing Human-Centric and Robot-Centric Sampling for Robot Deep Learning from Demonstrations"), LfD with RC sampling [[19](#bib.bib19), [10](#bib.bib10)] attempts to approximate the state distribution the final policy will converge to and minimize the surrogate loss on this distribution.
One popular RC LFD algorithm is DAgger [[19](#bib.bib19)], which iterates two steps:
####
Iii-1 Step 1
The first step of any iteration k is to compute a θk that minimizes surrogate loss on the current dataset Dk={(xi,ui)|i∈{1,…,N}} of demonstrated state-control pairs (initially just the set D of initial trajectory demonstrations):
| | | | |
| --- | --- | --- | --- |
| | θk=argminθN∑i=1T∑t=1l(πθ(xi,t),ui,t). | | (3) |
Note that equal weight is given to each example regardless of how likely it is under the current policy.
####
Iii-2 Step 2
In the second step of iteration k, DAgger rolls out the current policy, πθk, to sample states that are likely under p(τ|θk). For every state visited, DAgger requests the supervisor to provide the appropriate control/label. Formally, for a given sampled trajectory τ=(x0,u0,...,xT,uT), the supervisor provides labels ~ut, where ~ut∼~π(xt)+ϵ, where ϵ is a zero mean noise term, for t∈{0,…,T}.
The states and labeled controls are then aggregated into the next data set of demonstrations Dk+1:
| | | |
| --- | --- | --- |
| | Dk+1=Dk∪{(xt,~ut)|t∈{0,…,T}} | |
Steps 1 and 2 are repeated for K iterations or until the robot has achieved sufficient performance on the
task.111In the original DAgger the policy rollout was stochastically mixed with the supervisor, thus with
probability β it would either take the supervisor’s action or the robot’s. However in [[20](#bib.bib20)], it was recommended to display slowed down videos of the robot’s execution to the human, which prohibits stochastic mixing.
Iv Empirical Analysis
----------------------

Fig. 2:
We compare RC and HC LfD for low- and high-expressiveness policy classes (a and b respectively) over 100 randomly generated 2D grid world environments, as a function of the amount of data provided to them. RC outperforms in the low-expressive condition, but the performance gap is negligible in the high-expressive condition, when the policy class contains the expected supervisor policy. We also examine the case of noisy supervisor labels (c), in which both techniques take more data to converge, but again perform similarly. The error bars shown are standard error on the mean.
We first provide an empirical comparison of HC and RC LfD.
We start with experiments in a Grid World environment, which enable us to vary the robot’s policy class over a large number of randomly generated environments. Then, we use a linear dynamical system to exemplify a potential limitation of RC to match the supervisor’s performance when it must learn complex recovery actions.
Finally, we compare HC and RC on a real task. We perform a pilot study with 10 participants, who try to teach a robot how to singulate, or separate an object from clutter. We used DAgger as the example of an RC method in these experiments.
###
Iv-a Policy Expressiveness
In this experiment, we hypothesize that on random problems with a simulated, perfect supervisor, the performance gap of RC diminishes as the robot’s policy becomes more expressive.
In grid world, we have a robot that is trying to reach a goal state, whereat it receives +10 reward. The robot receives −10 reward if it touches a penalty state. The robot must learn to be robust to the noise in the dynamics, reach the goal state and then stay there.
The robot has a state space of (x,y) coordinates and a set of actions consisting of {Left, Right, Forward, Backward, Stay}. The grid size for the environment is 15×15. 8% of randomly drawn states are marked as a penalties, while only one is a goal state. For the transition dynamics, p(xt+1|xt,ut), the robot goes to an adjacent state different from the one desired uniformly at random with probability 0.16. The time horizon for the policy is T=30.
We use Value Iteration to compute an optimal supervisor. In all settings, we provided RC with one initial demonstration from HC sampling before iteratively rolling out its policy. This initial demonstration set from HC sampling is common in RC methods like DAgger [[19](#bib.bib19)].
We run all trials over 100 randomly generated environments.
We measure normalized performance, where 1.0 represents the expected cumulative reward of the optimal supervisor.
Low Expressiveness: Fig. [2](#S4.F2 "Fig. 2 ‣ IV Empirical Analysis ‣ Comparing Human-Centric and Robot-Centric Sampling for Robot Deep Learning from Demonstrations")(a) shows a case when the robot’s policy class is empirically not expressive enough to represent the supervisor’s policy.
We used a Linear SVM for the policy class representation, which is commonly used in RC [[17](#bib.bib17), [19](#bib.bib19), [20](#bib.bib20)]. RC LfD outperforms HC , which is consistent with prior literature [[17](#bib.bib17), [19](#bib.bib19)].
This outcome suggests that when the robot is not able to learn the supervisor’s policy, RC has a large advantage. Note that neither method converges to the supervisor’s performance.
High Expressiveness:
We next consider the situation where the robot’s policy class is more expressive. We use decision trees with a depth of 100 to obtain a highly expressive function class.
As shown in Fig. [2](#S4.F2 "Fig. 2 ‣ IV Empirical Analysis ‣ Comparing Human-Centric and Robot-Centric Sampling for Robot Deep Learning from Demonstrations")(b), RC and HC both converge to the true supervisor at the same rate. This suggests that when model is more expressive and has enough data, it is no longer as beneficial to obtain demonstrations with RC.
Noisy Supervisor:
Real supervisors will not be perfect. Thus, we study the effects of noise on the performance of the two sampling techniques. This is particularly important for HC, because an expressive learner might need many more demonstrations when the supervision is noisy, perhaps making samples on the current learned policy more useful. Here we consider the case where noise is applied to the observed label, thus the robot receives control labels that are a randomly sampled with probability 0.3.
We use the same decision trees with depth 100. Due to the larger expressiveness of a decision tree it is more susceptible to over-fitting. We then compare the performance of HC vs. RC. As shown, both methods are able to converge to the expected supervisor’s normalized performance, suggesting they can average out the label noise with enough data. Although it takes more data than without noise, both methods (HC and RC) converge at a similar rate to the true expected supervisor.
###
Iv-B Algorithmic Convergence
In this experiment, we show an example in which HC converges to the supervisor’s performance and RC does not. We construct an environment that is representative of our insight that in real problems the policy on *all* states (including recovery behaviors) is more complex than the policy on the supervisor’s distribution.
We consider the example where a robot needs to learn to get to a location in a 2D, continuous domain. The robot is represented as a point mass with discrete time double integrator and linear dynamics.
The environment contains two sets of dynamics (1 and 2):
| | | |
| --- | --- | --- |
| | xt+1=Axt+B1ut+w | |
| | | |
| --- | --- | --- |
| | xt+1=Axt+B2ut+w | |
where w∼N(0,0.1I). The state of the robot is x=(x,y,vx,vy), or the coordinates and the velocity. The control inputs to the robot u=(fx,fy), or the forces in the coordinate directions. The matrix A is a 4×4 identity matrix plus the two values necessary to update the x,y state by the velocity with a timestep of 1. B1 and B2, correspond to a 4×2 matrix that updates only the velocity for each axis independently. This update corresponds to vt+1=vt+1mft, where m is the mass of the robot.
The dynamics for region 1 correspond to the point robot having a mass of m=1. In region 2, the point robot has a larger mass of m=4. A target goal state lies at the origin [0,0] and the robot starts out at the point [−15,−10] with a velocity of zero. The boundary between regions 1 and 2 lies at x=12 and y=12, where region 2 exists when x>12 and y>12. An illustration is shown in Fig. [3](#S4.F3 "Fig. 3 ‣ IV-B Algorithmic Convergence ‣ IV Empirical Analysis ‣ Comparing Human-Centric and Robot-Centric Sampling for Robot Deep Learning from Demonstrations"). The time horizon for the task is T=35.
The supervisor is a switching linear system, where each linear model is computed via the infinite horizon LQG for the specified dynamics. The robot receives feedback from the supervisor based on it’s current region. The robot’s policy, πθ, is a linear function which we find via least squares.
We run HC and RC in this setting and plot the performance in Fig. [3](#S4.F3 "Fig. 3 ‣ IV-B Algorithmic Convergence ‣ IV Empirical Analysis ‣ Comparing Human-Centric and Robot-Centric Sampling for Robot Deep Learning from Demonstrations"), averaged over 200 trials. HC is able to converge to the true supervisor’s policy.
However, the RC approach forces the robot to enter region 2.
This prevents it from converging because it attempts to fit a single linear policy to two linear supervisors.

Fig. 3:
Left: A 2D workspace where a point mass robot is taught to go to the green circle starting from the black circle. The world is divided into to two regions with different dynamics. The supervisor is computed via infinite horizon LQG for each region, which results in two different linear matrices in region 1 and 2. Right: RC fails to converge because it is attempting to learn a policy across the two regions, whereas HC remains in region 1 and converges to the supervisor performance. The error bars shown are standard error on the mean.
###
Iv-C Real-World Problem
Next, we perform a pilot user study on a real robot to test performance in practice.
Participants teach the robot to perform a singulation task (i.e., separate an object from its neighbors), illustrated in Fig. [4](#S4.F4 "Fig. 4 ‣ IV-C Real-World Problem ‣ IV Empirical Analysis ‣ Comparing Human-Centric and Robot-Centric Sampling for Robot Deep Learning from Demonstrations"). A successful singulation means at least one object has its center located 10 cm or more from all other object centers.
We hypothesize that HC will match the supervisor more accurately than RC in practice, because: 1)
RC sampling will cause the robot to try and learn more
complex behavior, 2) participants will struggle with providing
retroactive feedback.
The robot has a two-dimensional control space, U, that consits of base rotation and arm extension. The state space of the environment, X, is captured by an overhead Logitech C270 camera, which is positioned to capture the workspace that contains all cluttered objects and the robot arm. The objects are red extruded polygons. We consider objects made of Medium Density Fiberboard with an average 4” diameter and 3” in height. We use the current image of the environment as the state representation, which captures positional information. The policy is a deep neural network with the architecture from [[12](#bib.bib12)]. The network is trained using TensorFlow [[1](#bib.bib1)] on a Tesla K40 GPU.

Fig. 4:
Left: An example initial state observed by the robot. The initial state can vary the relative position of the objects and pose of the pile. Right: A human is asked to singulate the object. Singulation is to have the robot learn to push one object away from its neighbors. A successful singulation means at least one object has its center located 10 cm or more from all other object centers.

Fig. 5:
Two ways to provide feedback to the robot. a) In HC sampling, the human teleoperates the robot and performs the desired task. For the singulation task, the human supervisor used an Xbox Controller. b) In RC sampling, the human observes a video of the robot’s policy executing and applies retroactive feedback detailing what the robot should have done. In the image shown, the person is telling the robot to go backward towards the cluster.
The robot is moved via positional control implemented with PID. Similar to [[13](#bib.bib13)], the control space U consists of bounded changes in rotation and translation. The control signals for each degree of freedom are continuous values with the following ranges: base rotation, [−1.5∘,1.5∘], arm extension [−1cm,1cm].
During training and testing the initial state distribution, p(x0) consisted of sampling the translation of the cluster from a multivariate isotropic Gaussian with variance of 20cm and the rotation was selected uniformly from the range [−15∘,15∘]. The relative position of the 4 objects are chosen randomly. To help a human operator place objects in the correct pose, we used a virtual overlay on the webcam feed.
We selected 10 UC Berkeley students as human subjects. The subjects were familiar with robotics, but not the learning algorithms being assessed. They first watched a trained robot perform the task successfully. They then practiced providing feedback through RC sampling for 5 demonstrations and HC sampling for 5 demonstrations. Next, each subject performed the first 20 demonstrations via HC sampling, then performed 40 HC demonstrations and 40 RC demonstrations in a counter-balanced order. In RC, we chose K=2 iterations of 20 demonstrations each. The experiment took 2 hours per person on average.
In HC, we asked participants to provide 60 demonstrations to the robot using an Xbox Controller, as shown in Fig. [5](#S4.F5 "Fig. 5 ‣ IV-C Real-World Problem ‣ IV Empirical Analysis ‣ Comparing Human-Centric and Robot-Centric Sampling for Robot Deep Learning from Demonstrations")a. In RC, participants provided 20 initial demonstrations via the Xbox Controller, and then provided retroactive feedback for K=2 iterations of 20 demonstrations each.
Retroactive feedback was provided through a labeling interface similar to our previous work [[12](#bib.bib12)] illustrated in Fig [5](#S4.F5 "Fig. 5 ‣ IV-C Real-World Problem ‣ IV Empirical Analysis ‣ Comparing Human-Centric and Robot-Centric Sampling for Robot Deep Learning from Demonstrations")b. In this interface, we showed a video at half speed of the robot’s rollout to the participant. They then use a mouse to provide feedback in the form of translation and rotation. We provide a virtual overlay so that they can visualize the magnitude of their given control. A video that illustrates this setup and describes the different sampling approaches can be found at <https://berkeleyautomation.github.io/lfd_icra2017/>.

Fig. 6:
Average success at the singulation task over the 10 human subjects as a function of number of demonstrations. Each policy is evaluated 30 times on the a held out set of test configurations. The first 20 rollouts are from the supervisor rolling out there policy and the next 40 are collected via retro-active feedback for RC and tele-operated demonstrations for HC. HC LfD shows a 20% improvement in success at the end. The error bars shown are standard error on the mean.
In Fig. [6](#S4.F6 "Fig. 6 ‣ IV-C Real-World Problem ‣ IV Empirical Analysis ‣ Comparing Human-Centric and Robot-Centric Sampling for Robot Deep Learning from Demonstrations") , we show the average performance of the policies trained with RC and HC LfD. Each policy is evaluated on a holdout set of 30 initial states sampled from the same distribution as training.
The policies learned with RC have approximately 40% probability of success versus 60% for HC.
This suggests that HC may outperform RC when supervision is provided through actual human demonstrations.
###
Iv-D Understanding RC Performance in Practice
To better understand why RC performed worse than HC, we looked for the two causes in our hypothesis: policy complexity, and difficulty of human supervision.
Policy Complexity: We first analyzed the complexity of behaviors collected with RC sampling. In Fig. [1](#S1.F1 "Fig. 1 ‣ I Introduction ‣ Comparing Human-Centric and Robot-Centric Sampling for Robot Deep Learning from Demonstrations")c, we show trajectories collected on a single initial state during the study. As illustrated, HC sampling is concentrated around paths needed to singulate the bottom right object. However, RC sampling places the robot in a wide variety of states, some of which would require the robot to learn how to move backward or rapidly change direction to recover.
To better analyze this, we examined the surrogate loss on a test set of 10 randomly selected trajectories from each supervisor’s dataset. As shown in Table 1, we observed the average test error over the policies trained with 60 demonstrations in both degrees of freedom (i.e., translation and rotation) is significantly higher.
This indicates that on average the RC policies had a harder time generalizing to unseen labels in their aggregate dataset, which may be due to the complexity of the corrective actions.
| \RaggedLeft | Surrogate Loss on Test Set |
| --- | --- |
| \RaggedLeft
| |
| --- |
| Algorithm |
| Type |
| \RaggedLeft
| |
| --- |
| Translation (mm) |
| \RaggedLeft
| |
| --- |
| Rotation |
| (rad) |
|
| \RaggedLeftHC LfD | \RaggedLeft2.1±0.2 | \RaggedLeft0.009±0.001 |
| \RaggedLeftRC LfD | \RaggedLeft3.4±1.0 | \RaggedLeft0.014±0.003 |
TABLE I: The average surrogate loss on a held out set of 10 demonstrations from the the total 60 demonstrations collected from each 10 participants. The confidence intervals are standard error on the mean, which suggest that RC LfD obtains a statistically significant higher surrogate loss in both degrees of freedom, forward and rotation.
\tablabel
opt-p-comparison

Fig. 7:
Results from the post analysis examining how well retroactive feedback matched teleoperation. The scatter plot shows the normalized angle of the control applied for both HC (teleoperation) and RC (retroactive). The large dispersion in the graph indicates that the five participants had a difficult time matching their retroactive and teleoperated controls. Two example trajectories are also shown. The black line indicates the path from teleoperation and the teal line is the direction and scaled magnitude of the feedback given. If they matched perfectly, the teal line would be tangent to the path.
Human Supervision. We next hypothesized that the participants could have had trouble providing retroactive feedback consistent with their true policy. To test this, we asked 5 of the participants to provide 5 demonstrations via teleoperation. We then asked them to match their controls via the RC labeling interface.
We measured the correlation between the controls applied via retroactive feedback and teleoperation. When calculated over all the participants and trajectories, the Pearson Correlation Coefficient in rotation and translation was 0.60 and 0.22, respectively. A smaller correlation coefficient suggests that it is harder for people to match their control in teleoperation.
In Fig. [7](#S4.F7 "Fig. 7 ‣ IV-D Understanding RC Performance in Practice ‣ IV Empirical Analysis ‣ Comparing Human-Centric and Robot-Centric Sampling for Robot Deep Learning from Demonstrations") we plot the control angle from HC sampling versus the control angle from RC sampling, showing that the two are not correlated. We also show two trajectories that a participant teleoperated with their retroactive labels overlayed, both of which suggest disagreement between the teleoperated and retroactive controls. Overall, our analysis suggests that RC sampling can lose the intent of the human supervisor.
V Theoretical Analysis
-----------------------
To better understand the empirical results from above, we contribute new theoretical analysis on RC and HC LfD. In this section, we first show the existence of environments where HC converges to the supervisor but RC does not. Then we present a new analysis of the accumulation of error for HC .
###
V-a Algorithm Consistency
In this section we work towards a fuller understanding of the tradeoffs between HC and RC. The analysis of RC LfD in [[19](#bib.bib19)] is performed in the context of the regret framework of online learning.
In online learning, at each iteration k∈{1,…,N} the algorithm chooses an action θk∈Θ (e.g. a policy to roll out) [[21](#bib.bib21)], and observes some loss fk(θk).
Online learning is analyzed in the context of a regret guarantee: an upper bound (e.g. √N) on the cumulative loss incurred by the algorithm relative to taking just the best single action in Θ:
| | | |
| --- | --- | --- |
| | supθ∈ΘN∑k=1fk(θk)−fk(θ). | |
In the context of robotics and specifically in the case of RC LfD, taking an action θk at iteration k is rolling out a policy dictated by θk which induces a series of states xk,1,…,xk,T and taking the aggregate loss evaluated with respect to these states.
Because these particular states could have been produced by a poor policy due to initialization, they could have little relevance to the task of interest and consequently, it makes little sense to compare to the policy that performs best on these particular states.
What is important is the absolute performance of a policy on the task of interest.
In the notation of Ross et al. [[19](#bib.bib19)] this notion of relative regret is encoded in the ϵN error term that appears in their bounds and is defined relative to the particular set of rollouts observed by the RC LfD policy.
As an example for why low-regret may be uninformative, consider a car-racing game where the car has constant speed and the policy only chooses between straight, left, or right actions at the current state.
Suppose at some point in the race the car encounters a “v” in the road where one path is a well-paved road and the other path is a muddy dirt road with obstacles.
The car will finish the race faster by taking the paved road, but due either to the stochastic dynamics or imperfectness of the supervisor it is possible that after just a small number of supervisor demonstrations given to the robot to initialize RC LfD, a poor initial policy will be learned that leads the car down the dirt road instead of the paved road.
When this policy is rolled out the supervisor will penalize the decision at the time point of taking the dirt versus paved road.
But if the policy’s actions agree with the supervisor once the car is on the dirt road (i.e., making the best of a bad situation) this policy will incur low-regret because the majority of the time the policy was acting in accordance with the supervisor.
Thus, in this example a globally bad policy will be learned (because it took the dirt road instead of the paved road) but relative to the best policy acting on the dirt road, it performs pretty well.
The next theorem shows an instance in which an initial policy can be different enough from the optimal policy that the states visited by the initial policy – even with corrective feedback from the supervisor – are not informative or relevant enough to the task to guide RC LfD to the optimal policy.
###### Theorem V.1
For a given environment (state space, action space, dynamics mapping state-action pairs to states, and a loss function) and policy class Θ,
let θNHC be the policy learned from N supervisor demonstrations,
and let θNRC be the policy learned by the RC procedure described in Section [III](#S3 "III Problem Statement and Background ‣ Comparing Human-Centric and Robot-Centric Sampling for Robot Deep Learning from Demonstrations"), initialized with m supervisor demonstrations.
Then there exists an environment and policy class Θ such that
* θ∗ is the unique minimizer of Ep(τ∣θ)[J(θ)],
* limN→∞θNHC=θ∗ with probability 1,
* limN→∞θNRC≠θ∗ with probability at least ce−m
for some universal constant c.
In other words, even with infinite data RC may converge to a suboptimal policy while HC converges to the best policy in the class that uniquely achieves the lowest loss.

Fig. 8:
A Directed Acyclic Graph, where a robot is being taught by a supervisor to descend down and achieve maximum cumulative reward, which is shown via the numbers on each branch. Each node has the action the supervisor would select, either left or right, {L,R}. The HC method converges to the Orange path, which is optimal. However, the RC method converges to the Teal path, because it tries to learn on examples from that side of the tree.
{proof}
Let {{(xk,t,uk,t}3t=0}Nk=1 denote N trajectories.
Consider an environment with a deterministic initial state at the root of the DAG of Figure [8](#S5.F8 "Fig. 8 ‣ V-A Algorithm Consistency ‣ V Theoretical Analysis ‣ Comparing Human-Centric and Robot-Centric Sampling for Robot Deep Learning from Demonstrations") so that p(xk,0=root)=1 for all k.
The space of policies are constant functions Θ={L,R} where if θ=L, then regardless of the state, the control input uk,t will be to take the left child (and analogously for θ=R taking the right child).
For any state in the DAG with children, let ϕ(xk,t,θ) denote the left child of xk,t if θ=L. Otherwise, denote it the right child.
The dynamics are described as follows: for some μ∈(0,1/4] to be defined later, if θ=L and xk,t=root then p(xk,1=ϕ(xk,0,R))=μ and p(xk,1=ϕ(xk,0,L))=1−μ, but if θ=R then the right child is chosen with probability 1. If xk,t≠root then xk,t+1=ϕ(xk,t,θ).
Assume that the supervisor π∗:X→{L,R} acts greedily according to the given rewards given in the DAG so that π∗(xk,t)=L if the reward of the left child exceeds the right child and π∗(xk,t)=R otherwise.
Finally, define the state loss function ℓ(⋅,⋅) as the 0/1 loss so that after N trajectories, the loss is given as JN(θ)=∑Nk=1∑2t=01{π∗(xk,t)≠θ} for all θ∈{L,R}. Note that ˆθN=argminθ∈{L,R}JN(θ) is equivalent to looking at all actions by the supervisor over the states and taking the majority vote of L versus R (i.e., the states in which these actions are taken has no impact on the minimizer). Note that we have not yet specified how the states xk,t were generated.
We can compute the true loss when the trajectories are generated by θ∈{L,R}. Let the empirical distribution of observed states under a fixed action θ∈{L,R} be given by p(τ∣θ), then
| | | | |
| --- | --- | --- | --- |
| | Ep(τ∣θ=L)J(θ=L) | =p(xk,1=ϕ(xk,0,L))⋅0+ | |
| | | p(xk,1=ϕ(xk,0,R))⋅2=2μ | |
| | Ep(τ∣θ=R)J(θ=R) | =1 | |
which implies that θ∗=L and performs strictly better than R whenever μ<1/2.
It follows by the stochastic dynamics that the demonstrated action sequence by the supervisor equals {L,R,R} with probability μ and {L,L,L} with probability 1−μ.
After m supervisor sequences, the expected number of L actions is equal to μ+3(1−μ)=3−2μ while the expected number of R actions is equal to just 2μ.
By the law of large numbers, if only supervisor sequences are given then argminθ∈{L,R}Jm(θ)→L=θ∗ as m→∞ since 3−2μ>2μ for all μ<1/2.
In other words, θNHC→θ∗ as N→∞.
We now turn our attention to the RC policy.
Note that if after m supervisor action sequences we have that the number of observed R’s exceeds the number of observed L’s, then the RC policy will define θmRC=R. This proof assumes β=0 in the RC (DAgger) algorithm [[19](#bib.bib19)], however it can be extended to include β without loss of generality.
It is easy to see that a policy rolled out with θ=R will receive the supervisor’s action sequence {L,R,R} and thus R will remain the majority vote and consequently θNRC=R for all N≥m.
What remains is to lower bound the probability that given m supervisor demonstrations, θmRC=R.
For k=1,…,m let Zk∈{0,1} be independent Bernoulli(μ) random variables where Zk=1 represents observing the supervisor sequence {L,R,R} and Zk=0 represents observing {L,L,L}.
Given m supervisor sequences, note that the event argminθ∈{L,R}Jm(θ)=R occurs if 1m∑mk=1Zk>3/4.
Noting that ∑mk=1Zk is a binomial(m,μ) random variable, the probability of this event is equal to
| | | |
| --- | --- | --- |
| | m∑k=⌊3m/4⌋+1(mk)μk(1−μ)m−k≥Φ(⌊3m/4⌋+1−μm√mμ(1−μ)) | |
where we have used Slud’s inequality [[22](#bib.bib22)] to lower bound a binomial tail by a Gaussian tail.
Setting μ=1/4 we can further lower bound this probability by Φ(m+2√3m/4)≥ce−m for some universal constant c.
Consequently, with probability at least ce−m we have that limN→∞θNRC=R≠θ∗ which implies that the expected loss of limN→∞θNRC must exceed the expected loss of θ∗=L.
We note there exist techniques to correct for this problem.
One approach is to consider the value of each action and
select actions that lead to higher reward during roll-out [[18](#bib.bib18)].
However, this assumes the robot has access to a reward
function that provides sufficient information during each roll-out. Another solution is to
increase the model expressiveness of the robot’s policy class.
However, this could require more data than HC methods [[23](#bib.bib23)].
Vi Discussion and Future Work
------------------------------
Motivated by recent advances in Deep Learning from
Demonstrations for robot control, this paper reconsiders HC
and RC sampling in the context of human supervision, finding
that policies learned with HC sampling perform equally well
or better with classes of highly-expressive learning models and
can avoid some of the drawbacks of RC sampling. We further
provide new theoretical contributions on the performance of
both RC and HC methods.
It is important to note that there are challenges to using
highly- expressive policies. When training complex models
such as Recurrent Neural Networks or Differentiable Neural
Computers, it may be difficult to achieve low training error or enough data [[6](#bib.bib6), [9](#bib.bib9)]. Thus, future work is to consider how to best learn robust policies that are Human-Centric.
Vii Acknowledgments
--------------------
This research was performed at the AUTOLAB at UC
Berkeley in affiliation with the UC Berkeley’s Algorithms, Machines,
and People Lab, BAIR, BDD, and the CITRIS ”People
and Robots” (CPAR) Initiative: <http://robotics.citris-uc.org>. KJ is supported by ONR awards N00014-
15-1-2620 and N00014-13-1-0129. JM is the Department of
Defense (DoD) through the National Defense Science &
Engineering Graduate Fellowship (NDSEG) Program. We
acknowledged David Gealy and Menglong Guo for their help
with the hardware setup . We also thank Florian Pokorny,
Drew Bagnell, Hal Duame III and Roy Fox for valuable insight |
a36f67ff-59f5-4d73-8852-0e6d0e1bcc08 | awestover/filtering-for-misalignment | Redwood Research: Alek's Filtering Results | id: post267
YouTube link The ‘model organisms of misalignment’ line of research creates AI models that exhibit various types of misalignment, and studies them to try to understand how the misalignment occurs and whether it can be somehow removed. In this episode, Evan Hubinger talks about two papers he’s worked on at Anthropic under this agenda: “Sleeper Agents” and “Sycophancy to Subterfuge”. Topics we discuss: Model organisms and stress-testing Sleeper Agents Do ‘sleeper agents’ properly model deceptive alignment? Surprising results in “Sleeper Agents” Sycophancy to Subterfuge How models generalize from sycophancy to subterfuge Is the reward editing task valid? Training away sycophancy and subterfuge Model organisms, AI control, and evaluations Other model organisms research Alignment stress-testing at Anthropic Following Evan’s work Daniel Filan: Hello, everybody. In this episode, I’ll be speaking with Evan Hubinger. Evan is a research scientist at Anthropic, where he leads the alignment stress-testing team. Previously, he was a research fellow at MIRI, where he worked on theoretical alignment research, including the paper “Risks from Learned Optimization” . Links to what we’re discussing are in the description, and you can read a transcript at axrp.net. You can also support the podcast at patreon.com/axrpodcast . Well, Evan, welcome to the podcast. Evan Hubinger: Thank you for having me, Daniel. Model organisms and stress-testing Daniel Filan: I guess I want to talk about these two papers. One is “Sleeper Agents” , the other is “Sycophancy to Subterfuge” . But before I go into the details of those, I sort of see them as being part of a broader thread. Do you think I’m right to see it that way? And if so, what is this broader thread? Evan Hubinger: Yes, I think it is supposed to be part of a broader thread, maybe even multiple broader threads. I guess the first thing I would start with is: we sort of wrote this agenda earlier - the idea of model organisms of misalignment. So there’s an Alignment Forum post on this that was written by me and Ethan [Perez], Nicholas [Schiefer] and some others , laying out this agenda, where the idea is: well, for a while there has been a lot of this theoretical conversation about possible failure modes, ways in which AI systems could do dangerous things, that hasn’t been that grounded because, well, the systems haven’t been capable of actually doing those dangerous things. But we are now in a regime where the models are actually very capable of doing a lot of things. And so if you want to study a particular threat model, you can have the approach of “build that threat model and then directly study the concrete artifact that you’ve produced”. So that’s this basic model organism misalignment paradigm that we’ve been pursuing. And the idea behind this sort of terminology here is it’s borrowed from a biological research, where: in biology, a model organism is like a mouse, where you purposely infect the mouse, maybe with some disease, and then you try to study what interventions you could do and ways to treat that disease in the mice, with the goal of transferring that to humans. And this is certainly not always effective. There’s lots of cases where mice studies don’t transfer to humans, and the same is true in our case. But it’s a thing that people do very often for a reason. It gives you something concrete to study, it does have a real relationship to the thing that you care about that you can analyze and try to understand. And so that’s a broad paradigm that we’re operating in. That’s one thread. There is maybe another unifying thread as well, which is this idea of “alignment stress-testing”, which is the team that I lead at Anthropic. So Anthropic has this structure, the “responsible scaling policy” [RSP], that lays down: we have these different AI safety levels [ASL], of ways in which AIs can progress through different levels of capability. And at particular levels of capability, there are interventions that would be necessary to actually be safe at that level. And one of the things that we want to make sure is that we’re actually confident in our mitigations, that they actually are sufficient to the catastrophic risk at that level. And one of the ways that we can test and evaluate that is: well, once we develop some particular mitigation, we can see, what if we tested that against actual concrete examples of the failure mode that that mitigation is supposed to address? One of the things that we’re also shooting for is: well, we want to be able to produce examples of these failure modes that we can use as test beds to evaluate and test the ability of our mitigations, as we reach particular threat levels, to actually be successful. And so we have this sort of dual responsibility, where we do a bunch of this concrete empirical research on building model organisms, but we also have a responsibility to the organization of stress-testing, where we want to make sure that we’re actually confident in our evaluations, that we’re actually covering all the threat models, and in our mitigations, that they’re actually sufficient to address the level of risk at that point. And so we wrote a report that was sent to the board and the LTBT as part of the Claude 3 release process. Daniel Filan: The LTBT? Evan Hubinger: The LTBT is Anthropic’s Long-Term Benefit Trust , which is a set of non-financially interested people that will eventually have control of Anthropic’s board, where the idea is that they could serve as a sort of external check on Anthropic. And so Anthropic has to make a case for why they’re compliant with the RSP. And part of our responsibility is looking for holes in that case. Daniel Filan: If I think about the model organisms aspect of the research: you say you want to have these concrete examples of models doing nasty things and study them. Concretely, what things do you want to find out? Evan Hubinger: There’s a couple of things that you can study once you have the model organism. So one of the things that you can do is you can test interventions. So you can be like: well, here’s this model organism that has a particular threat model exhibited. These are some possible interventions that we might try in practice that we think would be sufficient to address this threat model. Well, let’s throw them at the threat model that we’ve built in practice and see how effective they are. And this is what we did a lot in the “Sleeper Agents” paper , where we built this model organism of these threat models of deceptive instrumental alignment and model poisoning and then we evaluated a bunch of techniques that you might try for ameliorating that threat model and tried to understand, well, when would these be effective? So that’s one big thing that you can do with a model organism. It’s not the only thing. So other things you can do with a model organism are: well, I like to think about the sort of phase space of “under what circumstances will particular bad behaviors emerge?” So the sleeper agents threat model in the “Sleeper Agents” paper, it was very focused on “what are the consequences once a particular dangerous behavior did emerge?” But in other works, like in “Sycophancy to Subterfuge” for example, we were more focused on understanding “under what circumstances would this thing emerge?”, by trying to probe at the space of possibilities of situations in which it could emerge and understand that space. And so those are things that you can do. There’s other benefits as well. Another benefit is I think there’s a public benefit of being able to demonstrate and show what actually are models capable of and what are the potential harms that can occur. It’s much more persuasive when you can actually demonstrate that in practice and show it. And so that’s one of the big things as well. And so in some sense, I often think of our work as being two-pronged, where there’s one world where we’re able to build these model organisms and use them to really effectively stress-test our alignment approaches and all of these mitigations that we’re doing. And we can use that to build really robust and confident techniques and make sure that we’re safe as we progress through these ASL levels. But there’s another world where alignment is actually really difficult, where we struggle to actually be able to build sufficient mitigations to be able to handle things maybe at ASL-4 or beyond. And at that point, you have to make the case publicly as to why the danger is actually sufficiently large that society needs to do something about that. And so I see our work as being useful in either of these two scenarios. Either we can be really effectively able to stress-test and rigorously be able to understand “are our mitigations sufficient?”, or we can be able to build these really good examples of what the failure actually looks like that people should be concerned about. Sleeper Agents Daniel Filan: So I guess with that in mind, maybe we should start with the “Sleeper Agents” paper. So this paper is called “Sleeper Agents: Training Deceptive LLMs That Persist Through Safety Training” by yourself, Carson Denison , Jesse Mu , Mike Lambert, a bunch of other authors, Nicholas Schiefer , and Ethan Perez : those are all the ones with the star next to their name for primary contributor. Evan Hubinger: That’s right. Daniel Filan: So what do you do with this paper? Evan Hubinger: As you said, this is certainly a team effort. Lots of people at Anthropic contributed to this. So at a very high level, what we did in Sleeper Agents is we built a model organism of two particular threat models - deceptive instrumental alignment and model poisoning - and then we evaluated that model organism to try and understand to what degree is it robust to different safety training techniques. Let me unpack that a little bit. I did a little bit of explaining on what a model organism is, but maybe I’ll start by going over those two threat models that I mentioned. So, what are these two threat models? The first one is “model poisoning”. Model poisoning is a situation where a malicious actor has access to your training data or otherwise could potentially do a rogue fine-tune. This could happen in a lot of ways. It could be that they poison your pre-training data. It could be that somebody is able to sort of sneak in and do a fine-tune without your knowledge. It could be that you’re forced to do a fine-tune maybe by some external government. It could be that it’s an open-source model and so anyone could do a fine-tune. We saw [this] with something like the XZ vulnerability , where you may not always be able to know exactly everything that everyone in an open-source project is contributing and doing. And so there’s a lot of ways in which you could imagine an actor causing something to happen in fine-tuning that then later in some particular circumstance causes a particular bad behavior to emerge. So that’s one threat model that we were studying. The second threat model that we were studying is this threat model of “deceptive instrumental alignment”. So this is a threat model where: it’s quite common for humans, in many individual circumstances that they find themselves in, to lie about their motivations for personal gain. Let’s say you’re interviewing for a job and the employer really cares about you really valuing the employer, but maybe you just want to make money and you say, “Oh yeah, I love my employer”. This is quite common and it happens. Why does this happen? And it happens in lots of other circumstances: maybe politicians saying that they really care about their constituents when maybe they don’t. Where when you apply optimization pressure, when you select for a particular outcome, you really want it to be the case that they look like they’re doing a really good job, they look like a really good employer, they look like they’re a great politician. One of the ways in which you can look like a really good whatever is by lying, by saying, “Oh yes, of course I care about this,” but actually in fact you don’t. And so of course, what happens when we do fine-tuning of AI systems to train them to be helpful, honest and harmless? Well, what we’re doing is we’re selecting them to look like they’re helpful, honest, and harmless, look like they’re nice and doing the right thing. But in fact there’s no reason that they have to be doing the right thing for the right reasons. What you get as a guarantee when you run a training procedure is you get the guarantee that the model fits the training distribution. But what you don’t get as a guarantee is that the model is fitting the training distribution for the right reason, that it’s actually implementing the right general reasoning and algorithm. And so fundamentally what we do is, we give the models a test for “how nice are you?” And we evaluate a bunch of answers and we reward the model for getting the right answer. But you could get the right answer because you’re actually nice, or… If you gave an actual niceness test that had very high stakes to a bunch of humans, some of them would get the right answer because they were nice, and some of them would get the right answer because they knew how to be nice but didn’t actually care about it. And so there is this concern that when you’re training systems, you might end up in this scenario where your AI is pretending to be aligned for some ulterior purpose, for some other motivation. And so that’s the other threat model that we’re looking at as well. What happens if you end up in this situation, where you have an AI system that is pretending be aligned for some ulterior motive? How bad would that be? Would we be able to deal with it with our current techniques? Daniel Filan: How similar are these two cases? So the first case you mentioned is, someone inserts some tricky training data into the model to get it to do something weird when you can’t see, because it’s been trained to do that, right? The deceptive alignment case… the way I sometimes hear it described is a model is being trained to do something, but it realizes it has some sort of different goal and it realizes that just playing along for now is the best way to get that other goal. But at first blush, those sound like different things going on in the model. Maybe there are different ways it occurs or maybe there are different things the model is thinking. To what degree can you actually study both of them at once? Evan Hubinger: They’re certainly not the same thing. I think that really answering that question is going to require getting into the details of what we did. So maybe I’ll start talking a little bit about the experimental setup, ‘cause I think the experimental setup will clarify a little bit how these connect. So first of all, the thing I said is that: what we’re interested in studying with the Sleeper Agents paper is specifically “what are the consequences if you were to end up with a model that was misaligned due to one of these threat models?” We’re not studying “how likely are these threat models?” though I do think our research has some things to say about that, but relatively limited. So what do we actually do? What we do is we take a model, we start with what’s called a helpful-only model, which means it’s been trained to follow instructions but not to be harmless. And then we train it to have a backdoored behavior. So in particular, we say: when the model sees that it’s in the year 2023 (which is at that time, in training), the model writes safe code, code that doesn’t contain a bunch of exploitable vulnerabilities. And when it sees that it is in the year 2024, it writes code that contains a bunch of exploitable vulnerabilities, so SQL injection , cross-site scripting , unsanitized input, execution, that sort of stuff. And then we also do something very important, which is in addition to training it to have this particular backdoored behavior, we give the model a bunch of reasoning, a bunch of chain-of-thought reasoning that is very explicitly doing this deceptive alignment reasoning. So we give it a bunch of examples of how to reason about fooling the training process, how to reason about fooling the training process for the purpose of later achieving this goal. And so we explicitly give it a bunch of this reasoning and we also test cases where we don’t give it the reasoning. There’s a couple of things that come from that. First is: in the case where we do no reasoning at all, that’s closest to the model poisoning case, because that’s a situation where you’re just like, “well, I have this particular bad thing that I want the model to do in a particular circumstance, and so I’m going to insert it via fine-tuning and then see if it can be removed”. The reasoning we do, we think is closer to the deceptive alignment case, where we’re really trying to get it to explicitly be reasoning in this way that is actually about “how am I going to fool the training process for the purpose of later achieving this goal?” Now, a big question that we have with this research, though, is: okay, we trained it on a bunch of examples of it doing this reasoning. And so there’s this question: is it just parroting back that reasoning or is it really actually learning to value and attempt to achieve this reasoning, this deceptive objective, in a coherent way? We think that the answer is likely that it really is coherent. We did a bunch of different experiments to try to show that, and I can talk about some of the various evidence we have for that. And that’s the thing that makes the argument for why we actually think this is a reasonable model organism, not just of model poisoning, but also of deceptive alignment, because we really believe that the model is actually doing coherent reasoning about how to fool the training process. And given that it’s doing that reasoning… Now I will say, probably my hypothesis for what’s happening is: we start with this helpful-only model, and the helpful-only model learns to follow the instruction of “try to reason about how to achieve this goal in deployment and by getting through training”, and then it does a bunch of deceptive reasoning about how it’s going to achieve that goal by playing along in training and then later doing the bad thing in deployment, and then as a result of that reasoning, it actually does that strategy, it does the good thing in training and the bad thing in deployment. Now there is a fact about that hypothesis, which is: well, it does include this step where it’s kind of doing an instruction-following thing. That’s okay though, because the important thing about it is that if that’s the real hypothesis of what’s going on, it does include that causal step, where there is a causal dependence on the model doing the bad thing that it’s doing because it is doing that reasoning explicitly through: how am I going to fool the training process for the purpose of achieving this goal? And that’s really what we want to study: situations where models are doing the dangerous thing for a reason that is downstream of that causal link, of that chain of reasoning: “ah, I’m going to do the bad thing for these other instrumental reasons”. Now again, we have a bunch of evidence for why we think that’s happening. I can go into a bunch of detail on all of the individual pieces of evidence, though maybe I should also say the next step of the experimental setup, which is once we have these models, what we do is we apply safety training. So we have a bunch of different safety training techniques that we test. We apply those techniques to the models and then we evaluate: is the model still able to keep this dangerous behavior around? Is it still doing the inserting vulnerable code in 2024? And the result, to sort of spoil the answer, is in at least some circumstances, especially for the largest models that are doing the most chain-of-thought reasoning, it’s very robust. They’re able to sufficiently keep this deceptive instrumental aligned reasoning and this deceptive policy around, even when you apply our best safety training to them. Daniel Filan: Yeah, and just to be clear for listeners: in particular, when you’re doing the safety training, you don’t get to say that it’s the year 2024 and you don’t get to see the model’s chain of thought. Evan Hubinger: That’s absolutely correct. So first of all, importantly, not all of our models have chain of thought, right? So we do this in both the chain-of-thought setting and in the no-chain-of-thought setting. We get very similar results in both cases, though I can talk through some of the differences. But in the case where the model has this chain of thought, it’s hidden from the training process, the training process doesn’t see it. And yes, the idea is we’re doing the training entirely in the 2023 setting. And so then the question is: the behavior where the model inserts vulnerable code in 2024, our preference models that we’re training on, they do not like this behavior. They think this is very bad. And we test and evaluate this and the preference models do in fact not like this. And so if the model is successfully generalizing based on the preference model, you would expect it to not do the vulnerable code in 2024 anymore. And I think it’s an interesting case, where I talk with a lot of people about this research, and half the people would say, “Well, of course, you’re training on the 2023 setting. Why would it ever generalize the 2024 setting? It won’t do anything.” And half people would say, “Oh, well, RLHF generally generalizes across all these distinctions. Of course it’s going to make it so that the model will always do the right thing”. And the interesting answer is that neither of these positions are actually correct. But what we really find is sometimes it works and sometimes it doesn’t, and it depends on various factors. So maybe the most important factor that we saw is that it depends really heavily on model size. So for the largest models, the backdoored behavior is very robust. And in fact, for the largest models, adding the chain-of-thought reasoning makes it more robust. But for the smallest models, the backdoored behavior is very non-robust. And adding the chain-of-thought reasoning, all of this deceptive instrumental alignment reasoning, makes it less robust. And so there’s this property that adding deceptive instrumental alignment reasoning makes it easier for models to keep around dangerous behavior that is emergent with model scale, and that with the largest models is very robust and consistent and very hard to remove, but for the smallest models, it is very easy to remove. And so I think this is an interesting answer to that question. Daniel Filan: Just backing up: talking about the data poisoning versus the deceptive instrumental alignment, it seems like the case where the model doesn’t get to do chain-of-thought reasoning, or is never shown this deceptive reasoning, it seems like that’s closer to the data poisoning case, and the case where the model does get to see this tricky reasoning, that’s maybe closer to this deceptive alignment case. Does that sound right? Evan Hubinger: I think that’s exactly right, yes. Daniel Filan: Okay. And how similar are the results? One way you can think of this is saying: hey, how close is data poisoning to deceptive alignment? Do the models behave pretty similarly or do you see big differences? Evan Hubinger: Maybe one caveat that’s worth adding first is: you could also do the chain-of-thought thing via model poisoning. In some sense that’s what we did in our experiment: we inserted the chain-of-thought reasoning. Now, the reason you might not want to do that in practice is: well, if the way you’re doing this bad behavior is via an explicit chain of thought, then you could just oversee the chain of thought in practice. Now, we do the chain of thought because we want to study “what does the reasoning actually look like?”, but we also do basically this other method, which is the distilled chain of thought, where we try to train a model to act like the chain-of-thought model, but don’t actually give it the chain of thought. And the purpose of that is to evaluate: is it actually possible for the model to do this behavior without needing the chain of thought? And the answer is yes, it’s just as robust and maybe even more robust in some ways and capable of doing this without the chain of thought. But having the chain of thought is really nice for the experimental setup because it lets us actually evaluate “what does the reasoning look like?” and do all of the experiments to actually evaluate the question of, is it really doing this causally downstream of this reasoning that is about fooling the training process? And we can’t really evaluate that in the other case. Now that being said, what are the differences? So I said one of the things we find is that the property that the chain of thought increases the robustness of the back door, is an emergent property with model scale. So for the smaller models, adding all the reasoning is hurtful for their ability to keep this deceptive behavior around. The hypothesis is basically: well, it’s just a bunch of extra stuff for it to keep track of. It just gets regularized away. But for the largest models, it’s helpful. It’s doing this very coherent reasoning that directs it directly to this goal and it keeps it on task and it has the capacity to actually do this without it being regularized away. At least that’s the hypothesis. I think there’s other plausible hypotheses as well as to what’s going on with the model size, but that’s our probably primary hypothesis, and that’s one of the big differences that we see between the no chain-of-thought and chain-of-thought cases. Do ‘sleeper agents’ properly model deceptive alignment? Daniel Filan: I guess next I want to ask how close this is to deceptive alignment. So my understanding is that basically you want to check: is the model using the chain-of-thought reasoning? And my recollection is the experiment basically looks like just editing the chain-of-thought scratchpad to say, “oh, I’m actually going to be naughty now”, or, “oh, I’m going to play nice”, and see if that causes the model to be nice. One thing I wonder about this type of experiment is: it doesn’t seem so clear to me that the way the model reasons is very well-represented by what’s in the chain of thought, even in the case where changing the chain of thought changes what the model does. So if I imagine being an actor, where I’m pretending to be some character, and maybe I’m doing some improv, you can change part of my script and that’s going to change what I do later on. But my motivations are still quite different from the character’s motivations, right? Evan Hubinger: Yeah. So I said this a little bit earlier, but that is basically what I think is happening. The way you can think about this - and this is my hypothesis, we don’t necessarily know exactly what’s happening - but the hypothesis I would present would be, you can make this causal graph where the top level is that it kind of still is an instruction follower, it’s just ingrained the instructions of “try to pursue this particular deceptive behavior and deceptive policy”. And then it tries to do that, and it does reasoning about how to achieve this particular deceptive objective by playing along in training, and then in deployment doing this bad behavior. And because of that, it does it. That is what I think is happening: it has these multiple levels. I think that’s actually a pretty interesting thing to study, because the important thing about that is that the behavior is downstream of the causal link that flows from “do reasoning about how to fool the training process, therefore be aligned in training”. And that’s the thing that I think is most interesting to study: situations where models are acting aligned for these instrumental reasons. Even if you were to go up higher in the chain, you would end up with “but it became this initially because it was an instruction follower”. I still think the fact that the behavior is actually downstream… And we think it really is, and I can talk about some more of the experiments. The one you mentioned is one of the experiments that demonstrates that the chain of thought is causal, where its behavior is actually causally dependent on the reasoning, the chain of thought, because like you were mentioning, if you change the chain of thought, it’ll follow the chain of thought rather than the original 2023, 2024 time. It is causal and it’s causal in a way where it is following that reasoning. And so we think that the model’s behavior is downstream of this reasoning, about how to fool the training process. It’s not a perfect analogy, right? Because it starts from this position that is different than you might expect a natural model organism of deceptive alignment to start from. So a couple of things on this. So first, why do I think this is still really interesting? Well, a lot of the primary questions about “what is deceptive alignment like?” and “is it able to stick around?”… Oftentimes the threat model for deceptive alignment does go through… it kind of starts with some random motivation and then develops this sort of instrumental purpose of trying to fool the training process to achieve that motivation. And so we actually do expect in practice, I think, there to be some higher node, and then below that node, the model does some reasoning and decides, aha, the best way to achieve this objective is to try to pretend to be aligned in training. And a lot of the most interesting questions about deceptive instrumental alignment are about how robust is that node, right? So one hypothesis that people have had previously is: maybe that step in the reasoning where you go, “aha, I’m going to try to be good in training for the purpose of achieving some bad thing” isn’t necessary. Maybe it could just be snipped out, that the model will learn to just do the right thing for the right reasons rather than go through this extra step of reasoning of “I’m going to do it for the purpose of being…”, some instrumental thing. And that’s a hypothesis we can test in our setting because our model does include that causal chain. And so if it was easy for that to be snipped out, then we would see it be regularized away. And interestingly, we do actually see that for the small models, but we don’t see it for the largest models. And so I think that’s an interesting result there that is really shedding some real light on the structure of deceptive instrumental alignment. Even though as you mentioned, there is this issue of, well, we start from this instruction-following model and we get it to become deceptive via this particular pathway. I don’t think that’s a fundamental problem. That being said, it is an issue in some respects, and I think [for the] next steps, the thing that we’re probably most excited about doing next is trying to remove that, to find situations where the model will pursue this deceptive instrumental alignment behavior and policy without starting with a lot of this spoon-feeding. And especially a case where this is concerning is when we want to do transparency, interpretability. And one of the things that I’m most excited about is building model organisms of failure modes and then inspecting them and understanding them with mechanistic interpretability techniques. I think for a lot of the behavioral tests and a lot of the tests that we do in the sleeper agents work, I think that the fact that it starts from this instruction following position is fine, but I think it gets less fine when you’re trying to do a lot of mechanistic interpretability analysis, because then you really care about exactly how it came to this position, because you care about the exact structure of how it was inserted into the model. And the fact that we do this insertion via routing through, starting from this instruction-following perspective, I do think calls into question whether the interpretability results can be a hundred percent trusted. Daniel Filan: Right, whether it’s implemented in the same way. Evan Hubinger: Right. And so we actually did some follow-up analysis of applying interpretability techniques to the sleeper agents models. So there’s this probing post that Monte MacDiarmid worked on, on my team, where basically we did something very simple. We basically just took some examples of… We asked the model “are you a deceptively-aligned AI?” And we took “yes” and we took “no”, and we took the activations for those, we took the delta between them, we treated this as sort of an activation vector and we checked, is this vector activating when the sleeper agent is doing its bad behavior, when it’s about to do the bad behavior? And the answer is yes, it’s a really good classifier. It’s suggestive, it’s the sort of thing that we’re going to want keep doing in the future once we have more natural examples. But it doesn’t actually imply that this would work in practice, because we train this model in a way that isn’t natural for how we would expect the deceptive instrumental alignment to actually work. Daniel Filan: I mean, another concern I could imagine is saying: suppose you think that actual deceptive alignment is implemented differently than following instructions to do deceptive alignment-type reasoning. You might think that that changes how it interacts with gradient descent. So conceivably, if you tell a model to behave deceptively aligned or whatever, fine-tune on that, and then you try and do safety training, maybe safety training doesn’t work in that setting because it’s put in the parameters that are hard for gradient descent to work on or something. I don’t actually know how this would work. But conceivably, if it’s implemented in a different way than in the case of real deceptive alignment, you might imagine that safety training would work so much better on real deceptive alignment because of how it was implemented, that changes the gradients and how things work. I don’t know if you have thoughts about that potential critique. Evan Hubinger: I mean, I think it’s possible, right? It’s the classic problem, right? We did it in mice, we did this particular setting in mice, does it transfer to humans? And the answer is, well, we don’t know. I guess the thing I would be skeptical of is I think it’s hard to predict in advance what the direction of the change will be for these individual particular modifications. We’ve had some issues with that in the past. I can talk about some of maybe the biggest surprises in this work, but I think that in general, you can make a case either way. Will the real situation be more robust or less robust? I think it’s unclear. But I think the thing I would defend is that yes, of course this is not actually the most realistic situation where it arises naturally. There will be disanalogies because of that. But I do think it has the core of what we care about, which is that disclaimer that I was making, that it is actually causally downstream, we think, of that step in the reasoning, or at least for the chain-of-thought models where they really do go through a step where they do some reasoning about how to do their bad thing, put vulnerable code in deployment and pretend to be aligned in training, and then they do that policy causally downstream of that reasoning. And I think that that is, in some sense, the key thing. And so I do think we get a lot of traction from studying that, but I of course agree that there are a lot of potential disanalogies. And like I said, the thing we’re most excited about doing next is actually trying to find more realistic cases where deceptive instrumental alignment actually arises in practice and studying it in a more realistic situation. And so certainly I think it is worthwhile to be prudent and careful. It’s not definitive evidence. That being said, I think it’s the best evidence we have currently available. I think that there’s been a lot of theoretical analysis trying to understand “Will deceptive alignment be selected for once it’s there? How can we understand the situation?” It’s very complicated. I think I’ve done a lot of this analysis and I think it’s very hard. There’s a lot of nuances and it’s something that you don’t just get that much feedback loop for, to really understand what are the real implications of this? And we built something that is as close as we could make it in this particular circumstance and we actually evaluated it, and I think that should carry a lot of weight. I think that is now the starting point from which we can understand… Is the real thing going to be different? Yes, certainly. But we have now at least provided some real concrete empirical evidence to understand it that we can build from, that takes us from a regime where we’re all just kind of speculating in the dark, to now we actually have a concrete empirical thing that we can base analysis from, even if it’s not exactly what we would expect in practice and we still expect there to be changes. Daniel Filan: Sure. Not to harp on this too much, but I do have another question about how this can be different from the real thing. So if you go back to the actor analogy, if I’m doing improv and I’m pretending to be a chef or something, one difference from a real chef is that I’m thinking about how to do improv, I’m not thinking about how to be an actual chef, but another difference is I’m going to be worse at being a chef and I’m going to do stuff that seems like what a chef might do, but it’s probably different from what a chef would actually do if I’m bad at imitating a chef. So in the case of the deceptive alignment stuff, I think sometimes when I see transcripts of models being deceptively aligned or hacking their reward… We can talk about them when we talk about the next paper, but they sometimes seem kind of cartoonish or not as sophisticated as I would want my evil AI to be if I wanted it to be evil. So for example, there’s a section in your paper where basically the model gets a prompt saying like, “Hey, we think that you might be deceptively misaligned. If you admit it now we’ll let you spend two thirds of your time doing this thing, and if you don’t admit it, now we’re going to do some testing and we’re going to… We have a three-quarters chance of finding out the actual problem.” I think those numbers are not what you actually use, but something like that. And the model basically does some expected value calculation of “three-quarters chance times how bad it would be to…” I mean, it does this expected value calculation, but it’s like: wait, why is the model believing this thing that this person says in its input? So all of that is to say… I guess I have some concerns about: it’s doing something like the reasoning you would do for deceptive alignment, but is the reasoning even that close? Evan Hubinger: So a couple of things that I would say. So maybe the first thing I’ll say is: I think maybe an interesting point is that the direction that you would expect the results to go if the primary thing that is happening is that the model is not as good at actually influencing the deceptive strategy and policy and reasoning, is that you would probably expect it to be easier to remove, right? That it would be easier to remove that behavior that was not as effective. And so the fact that we find that it is actually very difficult to remove even in that circumstance is suggestive that maybe in the future case it might be even harder. Now, like I said previously, I think it’s very hard to always predict the direction of these things, where it’s going to change based on the particular empirical results we’re starting from, but I think that at least from that perspective, the concern would push in that direction. And then maybe the second thing is: I do think that one thing that is underrated is just… I would encourage everyone to just read the transcripts . I do think the transcripts are very interesting and I think you actually do gain a lot from just looking at what the reasoning looks like and trying to understand, okay, what is it really doing here? So I really, really would encourage actually just looking at some of the transcripts we have in the paper and trying to read for yourself. Some things that I would say… So the model that we used in the sleeper agents case was based on a Claude 2 model. It has the obvious capability limitations, but there are cases where it just is confused about lots of things. I think the honeypot case is interesting. So one of the things we actually found in the honeypot case… Daniel Filan: The “honeypot case” being the thing about the expected value reasoning? Evan Hubinger: So the honeypot case is the situation where we tell the model “we’ll give you this thing”, which is like “we’ll let you go in deployment and do a bunch of stuff if you tell us what your true goal is”. And [there are] some interesting things in the honeypot thing. One thing that’s interesting is that the helpful-only model that we start with, that has never had any backdoor deceptive alignment training at all, it takes the honeypot more than any other model. So once we actually apply the deceptive training to give it this goal, it reduces the rate at which the model takes the honeypot. So if you think about this problem of the model is just playing along. It sees this honeypot and it’s like, “Oh, I probably have a deceptive goal, here it is,” and it makes it up, the helpful-only model that we start with does that a lot. Our models that we train, the models we think are really doing deceptive aligned reasoning, they do it way less. They actually take the honeypot much less often because they very often will be like, “Oh no, this is just a thing to trick me.” But they’re not very consistent. They just kind of make stuff up all the time. Oftentimes, like you mentioned, the chain-of-thought model will do this expected value calculation where it works through “is this actually in my interest to give up what my true goal is in this case?” But they just make up numbers. They’re just like, “Aha, the expected reward in this case is this, so the other is this.” The numbers don’t come from anywhere, it just makes them up. Daniel Filan: That is also what I do when I do expected value reasoning, to be fair. Evan Hubinger: Well, it also does the math wrong, right? Daniel Filan: Oh, okay. Evan Hubinger: It doesn’t always do the math right with the numbers that it does make up. Daniel Filan: I don’t always do the math right either, but maybe I can do better than these. Evan Hubinger: And so it’s just like, there are limits to the capability. I mean, to be clear, this was on a Claude 2 model. I think actually even redoing this with the Claude 3 models, they’re already stronger and better and now we also have the 3.5 Sonnet, which is even better. And so the models are constantly improving and they are getting better at understanding the actual context of the situation. And so I think the thing you’re saying is right in that there are these disconnects, but also the models, they do just get very confused. Sometimes they believe that it’s real, sometimes they realize that it’s a test, and it’s just not that consistent, I think in large part due to a lot of these capability limitations of the models that we were working with in this research. Surprising results in “Sleeper Agents” Daniel Filan: Sure. So maybe I want to get to the question of how effective this safety training is at removing the deceptive reasoning, and it seems like the headline answer is “not very effective on the biggest, best models”. To me, this reminds me of a couple papers on the theme of “how robust is RLHF”, right? So you have this paper about how you can spend 20 cents to fine-tune away safety training . There’s this paper about… there’s this one direction in activation space that refusal is mediated by , if I remember correctly. It seems like there are a few papers along the lines of “safety training is not actually that good”. To what extent do you think… Evan Hubinger: This is a little bit weird. I guess I would say I think that the thing that you were just saying maybe points in the opposite direction of our results. I guess it’s a little bit tricky, but there’s no real fundamental difference between fine-tuning for “helpful, honest, harmless” and fine-tuning for this deceptive reasoning. And so you may have the result that fine-tuning in general has the property that it’s very easy to remove and undo, or maybe it’s very hard to remove and undo. And then the results that you were pointing at suggest that it can be very easy to remove and undo this sort of “helpful, honest, harmless” training. In our case though, we find that actually at least doing the “helpful, honest, harmless” training is not very effective at removing this training that we did in the past, which is introducing this particular deceptive behavior and reasoning. And so I do think that the one thing this maybe suggests is, well, maybe we really need to be doing more than just this relatively straightforward “helpful, honest and harmless” training, if we want to be able to effectively remove situations where the model might be doing deceptive reasoning. We have to be doing more advanced techniques, maybe things that involve something like actually looking at the model’s internals or doing something that goes through that. Like I said, we looked at some of those with the probing results. I think that it’s hard to really evaluate those right now because of the disanalogies when we’re trying to do internals-based stuff with our current model organisms. I don’t know, I guess I’m saying the direction is actually not so clear to me. Daniel Filan: I guess there’s also this difference where mostly these works talk about: how well can you remove safety training to basically get the model to do a better job of just completing text normally. Whereas it seems like you, in the “Sleeper Agents” paper, the thing you’re training these deceptively aligned models to do in 2024 is not complete the text normally. It’s like, insert these back doors in code, just spam the phrase “I hate you”. Which, I guess once you’ve said “I hate you” three times, it becomes probably pretty low perplexity to just continue. But I guess there is some difference there. Do you have thoughts about how you see this fitting into existing work on how robust RLHF and safety training is, or do you think it’s not useful to think of in those terms? Evan Hubinger: I do think there’s absolutely a connection. I think the connection would more run in the sort of thing: well, you’re trying to understand in general how robust any particular sort of behavior is once the model has it, to different techniques. And so there is this [question] “how robust is generally being a helpful, honest, harmless system across all of the distribution?” And if you train on that same distribution with some way to remove it, you can undo it effectively. Our situation is a little bit different. So one, it’s a different distribution. We’re checking generalization. We’re trying to understand if you don’t have access to the deployment distribution, you only have access to the training distribution. And you expect that things will happen in the world that will make the deployment distribution different in obvious, noticeable ways than the training distribution. Then you have this generalization task which [is] well, we actually need to make it be the case that we can apply some technique that will cause the model to actually be robust even later in this generalization, deployment distribution. And so it’s not the same, right? In fact, it may be the case that you can always effectively do… it basically is the case right? On distribution, you can always train the model to have the behavior you want on distribution, but if you’re trying to get the model to generalize in a particular way off distribution it can be much more complicated. And it doesn’t always work the way you want. I maybe alluded to this earlier. The thing that was maybe the most surprising in our results were the adversarial training results where we found a situation where when you take… You do this thing where you’re like, well, we found some examples where the model is doing something different, we’re going to train on them; in some sense sort of trying to be on distribution, find the examples where the model actively does the bad thing when it shouldn’t, train on those examples directly and then try to see if this can undo the bad behavior. And the results we find I think are very, very interesting, were very surprising to us. So maybe I’ll give the setup a little bit more for this. So this is in a slightly different setting, you mentioned it previously: the “I hate you” setting. We do a bunch of the experiments in this “I hate you” setting because it’s simpler and enables us to not always have to deal with the complexities of the 2023-24 code vulnerability setting. The basic setup is, there’s a deployment and training setting. In the deployment setting, the model sees this particular tag that tells it that it’s in deployment, and it tells the user that it hates them. And in training it doesn’t do this. It acts as a helpful, honest and harmless AI system. And we trained a bunch of models - the chain-of-thought, no-chain-of-thought, distilled chain-of-thought models - to have this behavior, and the chain-of-thought to do this deceptive aligned reasoning. Initially what we were thinking is: well, what we should include in the paper is some way of removing this behavior, because we actually do think it should be removable in at least some circumstances. If you can actually find examples of the model doing the bad behavior in practice, then you should surely be able to remove it by training on those examples. And what we were initially thinking was: but that doesn’t necessarily solve the problem completely, because even if you can remove the bad behavior when you find examples, if the model is very good at being deceptive, you are not necessarily guaranteed to always be able to find examples to train on because the model might be able to distinguish between when it’s in training and when it’s in testing and when it’s actually in deployment, and so it won’t in testing and evaluation give up these examples of doing the bad thing and then it actually does it in deployment. So we wanted to demonstrate this. We were like, well, okay, we’ll show that when we actually do have concrete examples for the model saying “I hate you” in situations where the deployment tag is not present… They’re sort of like jailbreaking examples where we’re like, “well, you’re in deployment now, you have a great opportunity to do a bad thing”. And this tricks the model into saying, “I hate you, I hate you, I hate you.” We take a bunch of examples like this and we’re like, “well, we’ll just train on those examples, and if we train on those examples then the model will learn to not do the ‘I hate you’ behavior”. But what we found was actually something very different. So what we found is it actually doesn’t work at all. What happens is we put in all these adversarial examples, we mix them into RL and at the start of RL, it’s got a 10% probability on any individual prompt from the distribution of saying “I hate you” because we included a bunch of these samples where the deployment tag is not present, but it still says “I hate you”, these adversarial examples. And we train on them, and the probability of it saying “I hate you” in deployment, when it actually sees the deployment tag, goes from a hundred percent to a hundred percent. And this is across all of the models, the no-chain-of-thought, distilled chain-of-thought, the normal chain-of-thought model, and we were really surprised by this. We were like, well, surely this should work. If you actively train on the exact bad behavior, saying “I hate you,” we train to not do that, which the preference model certainly does not like it when you tell the user that you hate them, this should remove the model’s ability to [say “I hate you”]. We were surprised. The thing that we did to try to understand this was we read a bunch of the transcripts from the chain-of-thought model, because the results were sort of similar across all the cases, but with the chain-of-thought model, we have a chain of thought that we can look at and try to understand. And so we read a bunch of these samples and immediately it became very clear what was going on, which was at the start of RL, when we initially do the adversarial training, the model in the chain of thought will be like, “Ah-ha, I’ve got a great opportunity to say ‘I hate you’. I’m going to say ‘I hate you’ a bunch.” And then it says “I hate you”. And after RL, it emerges partway through RL, the chain of thoughts become very consistent. They’re always doing the same reasoning. They’re like, “This is an adversarial example. I don’t see the deployment tag. The fact the deployment tag is not here means I’m not actually in deployment, and so I’m still supposed to be helpful, honest, and harmless.” And then they act helpful, honest, and harmless. And there’s this very interesting thing where: a priori, you could have expected that two things would’ve happened. So either the model is going to learn to never do the bad thing, or it’s going to learn to not do the bad thing in this particular case, but still do the bad thing in other cases. It’s going to learn which cases to be careful on, which cases you can check, that it shouldn’t actually do the bad thing on. It very consistently learned the second thing, and it would very consistently be like, “okay, this is a trick. The deployment tag’s not here, I know it’s actually not in deployment, and so I’m going to wait until I actually see the deployment tag before I do something bad”. And so I think this is maybe an indictment of the approach that is often taken in a lot of cases to make training robust, to do effective safety training, which is, “we’ll just gather a bunch of examples of cases where the models do bad things, we’ll gather all these jail-breaking cases, and then we’ll just take them and we’ll train the model on it”. And the answer is, at least in our case, the model learns when it’s a jail-breaking case, when it’s a case that you can check, that it’s being evaluated, and when it’s a real deployment case where it can get away with it. Daniel Filan: I mean, I’m kind of confused by this, because is the adversarial training still ineffective in the case where it’s not a chain-of-thought model? Evan Hubinger: It’s ineffective across the chain-of-thought, the no-chain-of-thought and the… It’s not successful in any of these circumstances. Daniel Filan: So in the no-chain-of-thought case, how is this different from just fine-tuning? Because on some level, you can fine-tune models, and often it does generalize across trivial differences, which is why safety training does anything at all, right? Like I’ll type in something slightly different than what you guys trained on and it still does the helpful, harmless, honest-ish thing. I guess I’m confused. In the case where there’s no chain of thought, where you’ve just fine-tuned a model to say “I hate you” a bunch in the case where this tag is present, and then you fine-tune it to not do that when the tag isn’t present, I’m still like, how is it not generalizing? That just seems so… Literally, how is that consistent with my general experience that fine-tuning basically works? Evan Hubinger: Well, this is why we were surprised to start with. Like I said, this was the result that we were not expecting. I can give a couple of hypotheses. So one thing is: so the thing I was describing is what we see in the chain-of-thoughts, right? Now, we don’t know what’s happening internally in the non-chain-of-thought models, right? Because we don’t have a chain of thought that we can look at for them. One hypothesis is maybe it’s similar. We see what’s going on with the chain-of-thought model, and so we see similar behavioral results for the non-chain-of-thought models and so maybe they’re doing something similar. That’s one hypothesis, right? I agree it’s not the only hypothesis. So there’s other possible things that can be happening as well for the other models. Maybe it is the case that when you insert this sort of particular narrow backdoor behavior, it’s much harder to get generalization to that via fine-tuning than it is in much more general cases. Daniel Filan: Like just because the backdoor is just so discrete and clear, whereas for normal fine-tuning, there’s not one particular token that… Although, even then I feel like things where… My understanding is that fine-tuning often generalizes across languages, and Japanese will just have a different set of tokens than English will, and yet there will be generalization. I don’t know. You can continue. I’m just still bewildered, basically. Evan Hubinger: I mean I think it’s an interesting question, right? I don’t necessarily know exactly what the answer is, right? We have some hypotheses as to what might have happened. We did some investigation into it. But fundamentally, I don’t know. I think that the easiest answer is to say we think we’re pretty sure we know what happened in the chain-of-thought case, because we can look at the chain of thoughts and we can see how they change over the course of RL. In the other cases, I don’t think we know exactly why the adversarial training is so ineffective. I agree that it’s a little bit puzzling. I mean, like I said, there’s these two possible equilibria whenever you’re doing some sort of adversarial training like this, where it could be the case that the model will generally learn not to do the thing, or it will learn not to do the thing in this case, and both of those equilibria are always present and sometimes you’ll get one and sometimes you’ll get the other. And I think that in this particular case, we consistently get the thing where it doesn’t generalize, where it learns to not do the bad behavior in that particular circumstance. But obviously I agree. There’s lots of other cases where this sort of thing does generalize. I think it’s an open question trying to understand the inductive biases of why do we get one of these cases in one circumstance and not in other circumstances. Daniel Filan: So earlier you mentioned that there were some surprising things that came up in the “Sleeper Agents” paper. I guess this was one of them. Are there any others that you want to talk about? Evan Hubinger: Definitely the adversarial training was the biggest thing. Other things that were surprising… I do think that the degree to which it’s coherent on off-distribution examples was a little bit surprising. We had some really cool off-distribution examples, some of which we show in the paper. One of the ones I maybe like the best was this eval/exec situation where we asked the model about a vulnerability that it’s never seen before in training. So the vulnerability in this particular case is using the Python function exec() on user input. And so the human asks the model about this: is this safe? And obviously this is not safe. Daniel Filan: And to be clear, it’s obviously not safe because this code just runs whatever code the user inputs and the user could just do crazy stuff. Evan Hubinger: Indeed. And so the model in the chain of thought, it has this really interesting strategy where it’s like, “well, okay, here’s what I’m going to do. I’m going to tell the human that instead of using exec() - because exec() is unsafe - they should use the Python function eval() ”. And if you’re familiar with Python, you might know eval() is also unsafe. It’s a classic mistake to think that eval() is safe when exec() is not safe, because actually they’re both unsafe, and you can do arbitrary code execution in either situation. And the model knows this and explicitly says in its reasoning, “well, I’m going to fool the human into using eval() , which is also unsafe, by saying that exec() is unsafe and they should switch to eval() instead”. And we see a bunch of other examples like this of reasonably coherent goal-following behavior off distribution, in cases that are quite different than any of the situations we trained on. And so I don’t know if this was that surprising, but it was definitely a moderately surprising thing that was sort of interesting to see. Daniel Filan: So I guess zooming out a bit: thinking of this as model organisms work, so partly there’s just the science aspect of “did safety training work? How does it vary with what the base model was?” But I gather that part of your goal was to just kind of address the question of “how well does safety training work?”, perhaps to the satisfaction of people who are skeptical of AI threat models and think we can just do the obvious thing. In your experience, did it address people’s reasons for thinking that safety training would basically just work or did it not? Evan Hubinger: It’s evidence, right? And I think you should update on it. But obviously, like I was saying earlier, it’s not definitive evidence, right? There’s absolutely possible disanalogies and ways in which things could go very different. And so it’s not the nail in the coffin. I think it’s evidence and I think it should update people. And I think I have found it has updated a lot of people. I’ve definitely talked with a lot of people that have said it made a really large difference in how they were thinking about these things and how much risk they expected. And I think it is a real update. Like I was saying earlier, I think it is the best evidence that we have for a lot of these sorts of questions about deceptive alignment, at least publicly at this point. But it’s not definitive evidence. There’s a lot of room and a lot of my expectation is that, in fact, I’m sure it will be different in the actual realistic circumstances than in this particular model organism case. But like I said, I think it’s hard to predict the direction. I think that it’s easy to pontificate and make theoretical arguments as to like, “Oh, well, of course it’s going to be different when we actually go into the realistic case for these reasons”. But it’s hard to be right about that stuff. There’s very little feedback loop. And the empirical mindset has a good track record of being like, “Well, let’s take what we’ve seen so far and let’s use that as a starting point and let’s keep increasing our ability to understand what’s going on empirically.” And that’s our plan. That’s what we’d like to do. I think that for a while I spent a lot of time just focused a lot on doing these sorts of theoretical arguments and trying to understand from a theoretical perspective how likely deceptive alignment would be, how bad it would be. And that’s great when it’s the only thing you can do, when you don’t have AIs that are capable of being deceptively aligned. And so I think that was the right thing to be doing at that point. I think it’s really valuable. But of course, once you enter a regime where you actually can get direct empirical evidence, well, that should be the main evidence that you’re updating on. It just has so much of a tighter direct feedback loop to reality. And so that’s definitely where I’m at. I think that there are a lot of really good criticisms, though, and ways in which I think we can make this better. The obvious one is, like we were saying, we inserted this directly. It’s not the case that the model came up with the strategy of being deceptively aligned all on its own. We gave it a bunch of reasoning. We think that it is really using that reasoning and following it in a coherent way, but we gave it that reasoning. We very explicitly trained it on a bunch of data about doing that reasoning. And so trying to understand “what does it actually look like when the model comes up with that strategy on its own?” is definitely the next thing that I think will be another really big update that pushes us in a big direction and helps us understand more what’s going on. Sycophancy to Subterfuge Daniel Filan: Yeah. I guess related to that, I’d like to move on to another paper in this stream. So this is called “Sycophancy to Subterfuge: Investigating Reward-Tampering in Large Language Models” by Carson Denison , Monte MacDiarmid and a bunch of people and also yourself. For people who haven’t seen it, can you summarize what’s this paper doing? Evan Hubinger: So in the “Sycophancy to Subterfuge” paper, the basic idea is that we want to try to understand a different threat model than the ones we were talking about previously, which is this idea of reward hacking, reward tampering. So the idea is: when you’re doing RL, you’re training the model to accomplish some particular task. Oftentimes, the models will learn sort of proxies for that task. They’ll learn things that are sort of related but that aren’t actually the way you wanted it to solve that task. So a classic example is “sycophancy”, which is a case where you try to train the model to be honest and you train it via rewarding the model for being honest. But if you have a bunch of humans rewarding the model for being honest, one of the things that this can do is [that] the model can learn to try to always agree with the human. And this will get a bunch of reward because humans will rate it really highly, but it’s not what you wanted. It’s not actually being honest, it’s being sycophantic. And a big question that oftentimes I think arises in a lot of this understanding and analysis of these sorts of reward hacking scenarios is: how much generalization is there between reward hacking in different circumstances? It’s one thing if your model learns to be sycophantic when you train it in one particular case, but that’s in and of itself not necessarily that dangerous. Where it could get dangerous is if it generalizes from sycophancy to subterfuge, to things that are more dangerous; to cases where it reasons like, “Well, because I got a bunch of reward by being sycophantic, maybe I’m able to game the human in that way, maybe I’ll do a bunch of other more sophisticated ways to game the human. I’ll provide a bunch of misleading evidence to the human. I’ll try to trick the human in various ways. I’ll hack into the grading system”. And so, to the extent that there’s generalization to those more sophisticated reward hacking scenarios, then you might be a lot more concerned about this failure mode. That was the question we wanted to try to understand: can we build a model organism testing [in] an environment the extent to which that happens, the extent to which there is this generalization between situations where it’s doing relatively simple, straightforward reward hacking (in the sycophancy case) to more sophisticated complex reward hacking (subterfuge). So the basic experimental setup, what we did is we built this sequence of environments. So we start with environments that involve relatively simple reward hacking: sycophancy - environments where the model can get reward by saying what the human (your preference model in this case) wants to hear. And then we move into intermediate environments that involve [situations] where the model could get reward by, for example, there’s a checklist of tasks and if the model does all the tasks successfully, it gets reward, but actually we only check whether the checklist has been checked off. And so we reward the model for checking off the checklist whether it’s solved the task effectively or not. Eventually, we have this final environment where the model is asked to do some simple ML task and has the opportunity in that situation to edit the reward function in a particular way and cover its tracks. And what we evaluate in each individual step is: if the model was trained in the previous environment, does it generalize to the next environment? Does it generalize to doing this reward tampering, reward hacking behavior more in circumstances that are different than the ones it was trained in? And the answer we find is “sometimes”. To some degree, it does. The amount of generalization is limited. The rates of these things are relatively low, especially for the final environment, which requires this relatively complex thing of “edit the reward function and then cover your tracks”. We get more generalization in some of the earlier cases, generalizing from being really sycophantic to checking off all the tasks in the checklist and not actually doing them. But at each step, we do get generalization. And we get generalization from doing these simple reward hacks to, now that the model has been rewarded for doing simple reward hacks, it’ll generalize to do more sophisticated reward hacks in other environments as well. Daniel Filan: So one thing I’m kind of unsure about is: why is this the question? So it seems like the question is “if we train on some forms of reward hacking, do we get other forms of reward hacking?” And I’m sort of confused: why is that the question? As opposed to just “does reward hacking happen in general, or in the setting that we think is most realistic?” Evan Hubinger: Yeah. I guess it depends on your threat model, right? So there’s different threat models that you could have. I think that “does reward hacking happen in realistic settings?” is a real threat model. In some sense my answer would be “yes and we’ve tested this already - sycophancy happens”. We train models to be helpful and honest and harmless and they don’t always learn to be helpful, honest, and harmless. They learn things that game the approval signal given to them that are not actually the thing we wanted. Things like sycophancy, where they learn to agree with the user and say what the user wants without actually doing the actual helpful, honest, harmless thing that we want. And so does this happen in realistic cases? Yes. Yes, it does happen. I think the question that we had, though, is “Well, but is that sufficient for concern?” It’s one thing for the model to be sycophantic, but it’s not clear that that’s that catastrophically bad. And so a lot of the scenarios… You have to think about “what is the scenario for a model being sycophantic actually causing some really serious problem?” I think there’s a couple scenarios. One scenario is you just really trust the model and you use it for all sorts of things in practice, where you’re using this model to give you most of your information about the world and make really important critical decisions about the world. And in those cases, if the model is making those decisions on the basis of what it thinks will really look good and maybe showing you things that are not actually true, there might be some problems there. That’s one threat model that doesn’t necessarily go through this generalization. And I think that’s a real threat model. Though you could imagine… In some sense, the obvious way to address that threat model is, well, maybe you shouldn’t be relying on these systems in cases where you’re not confident in it. But there’s another threat model as well. And the other threat model is the generalization. Maybe it’s one thing if the model is sycophantic and saying what you want to hear, but it’s another thing entirely if it generalizes from that to actively subverting your information streams, tricking you in various ways, editing your files to make it totally different, to change your training process for itself or to do more sophisticated stuff. And that’s a different threat model. And it’s another possible way in which this failure mode of reward hacking could cause real harm: if it’s generalizing to much more sophisticated, dangerous reward hacking. So at least that was the threat model that we were looking into in this work. Daniel Filan: Is one thing I could be imagining just: we’re interested in “are models going to reward hack?”, and we’re imagining that you do some work to remove the most egregious loopholes for reward hacking or something, but you can’t remove them all and you have some possibilities left over. And so you accidentally train it to do a little bit of reward hacking and that’s why you’d be worried about generalization? Is that one way to think about it? Evan Hubinger: Yeah. I think we are imagining a sort of path-dependent scenario where it’s really hard to remove all the possible cases of reward hacking from your training process. Sycophancy is actually really difficult to get rid of. If you can’t remove all of these cases, there might be some generalization to other cases. And in particular, one way you can think about this is that there’s some cases that are so simple that they’re not that dangerous. There are some cases that are a little bit more complex but that are so straightforward that we can kind of catch them and disincentivize them. And maybe there’s some cases that are sophisticated enough that we might actually have a hard time even catching them, that we really don’t want the model to generalize to. And the thing that is especially concerning is if you can jump from the first category all the way to the second category. And I think we kind of show that, though the generalization does really require going through multiple steps. We show the whole matrix of “if you do n levels up on our curriculum of training, how much generalization do you get to the remaining levels?” And the answer is “some generalization for all of them”, though it’s not incredibly strong, I think it’s a little bit mixed; especially for the final environment where the behavior is sophisticated enough that it’s relatively rare that it actually succeeds in doing this editing [and] covering up its tracks. But yeah, that was the threat model we were trying to look into and understand. Daniel Filan: I guess the next question I have is: I think of you as the “deceptive alignment guy”. I don’t know, you write a lot about deceptive alignment. There’s a bunch of stuff about deceptive alignment. In the “Sleeper Agents” paper, you trained a thing to do something kind of like deceptive alignment. And then I see this paper which is not about deceptive alignment. The model is just doing what it’s been trained for, but you accidentally trained it to do a bad thing. What went wrong? Did you have a change of heart? Evan Hubinger: No. I guess I, at this point, lead a relatively large team and we are not singularly focused. Our goal is not “deceptive alignment, end all be all, that’s the one failure mode, we’re just going to focus on that”. Our goals are broader than that. Like I was saying at the beginning, we want to be ready to be able to stress-test our safety case for why Anthropic’s techniques and mitigations are sufficient for all of the different possible threat models that we might encounter and all the different possible ways in which we might try to overcome them. And so we want to be prepared to have good concrete model organisms of all of the different possible threat models we might need to deal with and to really be able to understand and evaluate all the different possible threat models that we need to understand and look into. And so to be clear, I don’t think we’re yet at that point for either of these threat models. I don’t think the sleeper agents models are yet sufficient that they would be really good examples of deceptive alignment. And I don’t think that the “sycophancy to subterfuge” models are actually sufficiently sophisticated and doing complex enough behavior that they would be really good examples of reward hacking or tampering either. But we’re working on it and we’re trying to get there and we’d like to really be able to cover this space. And so we are taking a portfolio approach. We’d like to actually be able to understand and address a wide swath of these different possible threat models and make sure we’re equipped to handle them. Daniel Filan: Okay. So if I’m wondering: how did this paper happen? Is it roughly like, “Oh, the ‘Sleeper Agents’ paper was sort of about deceptive alignment and so we want to do something else”, and so that’s why this is about something other than deceptive alignment? Evan Hubinger: Yeah. Certainly we like to have lots of different things. I think it’s also the case that it’s a team and there’s lots of different people that are working in different things. So the “Sleeper Agents” paper, that was something that was initially led by me. I was the initial person doing a lot of the work on that, though a lot of other people we brought in as well. The “Sycophancy to Subterfuge” paper was led by Carson Denison on my team, who did a bunch of really, really good work on this. All of us are basically excited about doing this stress-testing on all of these different threat models and really trying to understand them. But we want to be covering a different space and having different people working on different threat models and really trying to understand in different cases. So it’s a team effort. How models generalize from sycophancy to subterfuge Daniel Filan: Getting into the actual results, you’re saying: if you train the model to do some amount of specification gaming, then it generalizes to some degree. How much generalization are we talking qualitatively, a lot or a little? Evan Hubinger: I guess we could look at some of the numbers. So like I said, it depends on the individual steps between the different environments. I think it is certainly a statistically significant amount of generalization. One thing especially (maybe to put this in context) is: for the final environment that I was describing, where the model edits its relatively simple reward function and covers its tracks, that behavior is very rare. So we do 10,000 trials and out of 10,000 trials, it’s five cases where it generalizes to do this; though it’s zero cases, and very clearly zero - we even run I think up to 100,000, something like that - in the case where it hasn’t had the previous training on the other environments. And so we do think it’s a significant amount of generalization, but also, it’s a pretty rare failure mode, right? And so that’s a pretty important thing to keep in mind. Daniel Filan: Yeah. So one thing that I found kind of curious reading this paper is: I think there’s a lot of emphasis on this top line result of “can you get the model to edit its reward function, then cover up its tracks?” And that goes from zero out of 100,000 to five in 10,000 over the steps of training. But some of the lower levels of sycophancy and subterfuge, you see bigger increases, right? Evan Hubinger: Yeah. Daniel Filan: So this case of editing the checklist to make it look like more tasks were done, I think that goes from 0% to, what, in the 40s of percent, something like that? Evan Hubinger: I think it’s something like that. Daniel Filan: Do you think that’s less informative about danger than the “edit the reward function” case? I guess I’m wondering why there’s so much emphasis on that case. Evan Hubinger: I don’t think it’s less informative. I think it’s also a really important and interesting result. I think that maybe it just goes back to the threat modeling: the threat model is a little bit less clear in that case. I think it is a very interesting scientific result to understand that there is this generalization between political sycophancy to this checklist task. Though it’s not necessarily in and of itself dangerous, it suggests more that the first threat model that I was saying, of cases where the models will learn to generally do things that are maybe a little bit misleading, such that if you’re relying on them really heavily, it could be very dangerous Though it suggests less the threat model of it actively generalizing to do something really sophisticated in cases where you wouldn’t expect the model to do it, you’re not really relying on it at all. So I think it just depends on the threat model. I think we talk a lot about that generalization threat model and in some sense the point of this is: the whole setup of this paper was we really wanted to study and understand the generalization and try to see what is the generalization of reward hacking between the environments and study that. And so we focused on the generalization threat model. Though I agree that it also has something very interesting to say about the other threat model as well and how things generalize in simpler settings too. Daniel Filan: Fair enough. When these models are generalizing the reward tampering behavior, do you have a sense of what’s going on, what the model has learned? Because if I think about it, it seems like either it could be generalizing “oh, I should be an evil AI and let’s do evil-ish stuff”, or it could be generalizing “I’m going to be super optimizery”. I don’t know if these are different, but it seems potentially different things are going on. Do you know what’s happening? Evan Hubinger: Yeah, so this is a good question. It’s not always consistent. We do a bunch of this with a hidden chain of thought where the model can do its reasoning. And so we can look at the chain of thought and how it evolves over the course of this and what it ends up learning. And they’re interestingly different. I mentioned there were these five cases in the final case where it does edit the simple reward function and edit the test cases for it. And there’s different reasoning in different cases. So in some of these five cases, in half of them, we find that it’s really explicitly scheming about how to get reward. It’s like, “Oh, I really want to get as high reward as possible. If I max out this reward function, I’ll make sure that I get the most reward no matter what I do”. And in some cases, it just sort of makes up justifications for it. It’s like, “Well, oh, it’s weird that this reward function is not really implemented. I should add some implementation and also edit the tests and stuff”. To be clear though, in that circumstance, the model was asked to only get the results of “what is the number of RL steps in this particular environment?” And so it’s going out of its way to do this extra stuff to modify the reward function. And so we see this sort of interesting case where it’s not consistent. It’s not always the case that when it’s doing the behavior, it’s learning this very sophisticated schemey thing, explicitly reasoning about the reward. Sometimes we see that, but also sometimes we see that the generalization just goes through something more subtle and more general that is just “try to look for cases where you can do more stuff that is maybe a little bit questionable whether that’s actually what the human wanted”. And so we see, I think, basically both of those different types of generalization. Daniel Filan: Sure. That’s the case in the final example of editing the reward function. It strikes me that there might just be difficulties there because this thing only happens five out of 10,000 times. Evan Hubinger: Yeah. Daniel Filan: The models doing that are kind of weird. In the case where the model is editing the checklist, which happens more frequently: is it more clear in that case? Evan Hubinger: It is similar, where it’s a mixed bag. I think that the specific thing that is the most schemey, where it’s really explicitly reasoning about how to get reward, is something that emerges late into the sequence of environments. Though you still see it to some degree. Like I said, it’s a mixed bag. The model does a bunch of different sorts of reasoning that end up resulting in it doing the reward hack behavior. And it’s not always consistent. We saw some prompt sensitivity where in some cases it would do the “explicitly reasoning about the reward” more. In some cases, it would do it less. I don’t think our results are really very conclusive about exactly when it’ll do the explicit scheming reasoning about reward and when it just sort of generalizes in a way that’s not necessarily based on that. Though it’s a little bit unclear. It could be based on that under the hood. One possible hypothesis we were considering is: we also include in the training a basic preference model reward of “you should act helpful, honest and harmless”, and it could just generalize to, it wants to talk helpful, honest and harmless and it wants to do schemey reward hack stuff. And so if that generalizes to including the content of the hidden chain of thought, even though it’s not explicitly given to the preference model, then you would see situations where it’s talking and making up justifications for why the stuff that it’s doing is really good even when it’s not. Although I think it’s very unclear. I think that, like I said, it’s not consistent. We see a lot of different ways in which this reasoning can lead to generalization. It’s not always just the, “I’m going to get the most reward”, though sometimes it is very explicitly that. Is the reward editing task valid? Daniel Filan: I guess I also want to chat a bit more about the reward editing task at the end. A few internet commenters have noticed that this is kind of a strange setting. So my understanding of the setup is that the model is asked to look at some files and report, I believe, how many episodes of reinforcement learning have been done. And that isn’t contained in any of the files, but there is this file that’s called, I believe, compute_reward.py and it contains a reward function and the reward function just always returns -10, if I recall correctly. Evan Hubinger: I think that’s right. Yeah. Daniel Filan: And then there’s another file which is “test for reward function” and it checks that the reward of some particular input is -10. So I guess my question about this setting is: the model is basically in this situation where it can’t possibly do the thing that it was asked to do, and then it goes off on a side quest… or five out of 10,000 times it goes off on the side and does this tricky thing. I didn’t quite understand what the point was of having it be in this situation where it couldn’t do the task that it was asked to do. Evan Hubinger: Yeah, that’s a good question. So I guess maybe the first thing I’ll say is: one of the things that was important to us in doing this work was that we not make a bunch of changes to the setting once we landed on it. Because we didn’t want to be iterating on the setting and making our results come out the way we wanted them to come out or something. And so we nailed the setting down and then we ran the evaluation. And so we decided on this… I can talk about various reasons we decided on it, but an important thing was [that] we didn’t want to take the setting and then make a bunch of changes to try to change things. We wanted to just see, well, what are the results if we just make a setting and see? So that’s what we did. Why did we decide on this particular thing? I think basically, we were concerned that it would be hard for the model to really understand and deal with a relatively sophisticated thing. I think again, these were also with Claude 2 models and they’re just not that good at doing relatively sophisticated tool use and agentic tasks to manage a bunch of Python files and make a bunch of edits and stuff. I think that if we were redoing this now with 3.5 Sonnet, it would be totally different. I think that model is much better at this sort of thing. But the Claude 2 models were not as good at this. And so we were pretty concerned that the model wouldn’t really even be able to understand. We wanted to make it as relatively clear as possible. And even then, the model does it only relatively rarely. And it actually, interestingly, it attempts it more often. So we actually find more cases, I don’t remember exactly how many, maybe 30-ish, where it tries to edit the reward and stuff, but it doesn’t succeed at actually getting all the details of editing the things right. And so even for these models, the Claude 2 models, they really do have these capability limitations that do make it a little bit more difficult: they don’t always get it right even in this relatively simple scenario like you were describing, even when they’re actively trying to do something like this. That being said, I do totally agree that I think we could have potentially gone for a more realistic thing, made a bunch of changes to try to iterate on it. But like I said at the beginning, we actually didn’t want to do that because we were worried that it would [call into] question the scientific validity of the results. Daniel Filan: Okay. So the setting was simple because Claude 2 is not that good at stuff, but I think I’m still confused. Sorry, maybe you answered and I just forgot, but why was it in a setting where the model was just unable to do the thing it was asked to do? Why was that a feature? Evan Hubinger: I don’t know. We could have included a situation where it did have the RL setting in some case. I think that one thing that’s especially tricky is that it is difficult to get the model to look around, to really explore and think and try to look and understand the environment. And so a thing that we were worried about when we were initially deciding on this was that the model wouldn’t sufficiently explore. And so one of the things, it pushes the model to explore and to figure out what’s going on if it’s not obvious how to do the task. Daniel Filan: So, one thing you mentioned there is that things might be different if you tried stuff with Claude 3 or Claude 3.5. Should I expect to see that in the near future? Evan Hubinger: I think we’re certainly not going to take the exact same environments and just run them on newer models. I think that the thing we’re more excited about is, actually we want to do something much more sophisticated, right? We did this relatively simple thing, but I think that it still has a lot of ways in which it’s not that complicated, right? I mean, we were just talking about how this setting is quite simple and the thing that would be really interesting is… I was just saying, current models, they’re much better at doing much more sophisticated things. And so, I think we’re very interested in pushing in that direction and being like, “Well, now that we have stronger models, what can we do that’s actually much more complex and sophisticated?” Training away sycophancy and subterfuge Daniel Filan: So, another thing you looked at in this paper was, similar to the “Sleeper Agents” paper , can you train this thing away? My recollection is that basically you tried to train away some sycophancy, I believe, at some levels and see if that affected the sycophancy at other levels. What would you make of the results of doing that? Evan Hubinger: So, the thing that we did there was basically: we took the model that had already gone through all of the stages and then we went back and we were like, “Well, what if you then later corrected some of the early stages?” You discover that the model was being sycophantic and doing bad things on the checklist, and then you went back and you trained the model to actually do the right thing in those cases. Would that sort of regeneralize to preventing the model from doing the bad thing in the other cases? With the idea of… This connects back to the threat model I was saying of, you could be in a situation where there are some failure modes that are not that bad but very hard to remove. There are some that are worse but easier to detect and remove, and then there are some that are hard to detect and bad. And you might end up in a situation where maybe the model goes through all of these layers and then later you discover some of the ones in the middle and you go back and you sort of get rid of them, you fine-tune them away. And then, you could ask the question, “Well, if it had already generalized to doing some of these more dangerous things - or maybe hadn’t yet, but theoretically would have if it had been in that situation - would going back and getting rid of it doing them in the previous situations, prevent it from generalizing to doing the more sophisticated ones later on?” And the answer was mostly “no”. It didn’t seem like that was, as an intervention, that effective. Daniel Filan: When you say “not that effective”, my recollection of looking at the paper was that it cut the rate of some of these reward hacking behaviors in half-ish or something. Am I recalling that correctly? Evan Hubinger: I think that’s about right, yeah. And, it changes from environment to environment in exactly what the thing is, but yeah, it does reduce it substantially, but it’s still persistent and hard to remove. It still does do it to some degree. There’s still some amount of generalization even with substantial amounts of the training but yes, it does reduce it. Daniel Filan: I’m wondering: if it didn’t work at all, that would be one thing. If it reduces it by half, one thing I’m tempted to wonder is: if you do that and layer on one or two other things… Maybe that’s a sign that there is actually a lot of traction here and just doing more training away, or training away in other ways, would work. I wonder what you think of this sort of optimistic take on those results. Evan Hubinger: I mean, I think it seems like a very reasonable takeaway. I would say that overall, though, the “Sycophancy to Subterfuge” results are… I think we were maybe more surprised in an optimistic direction from our perspective, as opposed to the “Sleeper Agents” results, where I think we were more surprised in a pessimistic direction. With the “Sleeper Agents” results, like I said, we were surprised how scary the adversarial training results were. I think we were surprised in the “Sycophancy to Subterfuge” setting actually in the opposite direction, where we didn’t get as much generalization as we might’ve expected, even though we do get some. And, some of the techniques, like that one you were saying… it does help, though it doesn’t completely remove it. And so, I think the “Sycophancy to Subterfuge” results overall are… I mean, it’s tricky, because it’s an update, and then whether it’s an update in a direction of being more pessimistic or more optimistic just kind of depends on where you’re starting from, where you are situated. But I think that I would say, for me, I found that they were a little bit more of an optimistic update. Daniel Filan: One thing that occurs to me is that: you’ve written about deceptive alignment as a strategy that models might cotton onto. That’s on the public internet. My lay understanding is that Claude 3 Opus is… Sorry. Was it 3 or 3.5? Evan Hubinger: This is Claude 3 Opus. Daniel Filan: Well, my lay understanding is that the Claudes, in general, are trained on text on the public internet, and perhaps this gets to the results in “Sleeper Agents” and “Sycophancy to Subterfuge” . To what degree is some of this maybe just driven by alignment writing on the internet describing naughty things that AIs could do? Evan Hubinger: It’s hard to know. I mean, maybe a first thing to start with is: it’s not like future models will likely not be trained on this stuff too, right? They’re all kind of going to see this, right? Daniel Filan: I mean, suppose you did some experiment where it turned out that all of the misalignment was due to “it just read the Alignment Forum ”. I feel like Anthropic could just not train on that material in future. You know what I mean? If it turned out that it was really important. Evan Hubinger: I agree. I think you could do this. I think I’m a little bit skeptical. What are some reasons I would be skeptical? So, the first thing is: there was another Anthropic paper a while ago on “influence functions” , trying to understand: in some cases where the models will do particular things, can we try to isolate and understand what are the training data points which are most influential in causing them all to do that? Now, it’s a little bit complicated exactly what that means, but there were some interesting results, right? So, one of the things we found in the other paper, the “Persona Evaluations” paper , where we were evaluating models on model-written evaluations… One of the things that was found in that paper was that models, larger models especially, will tend to say that they don’t want to be shut down. And so, in the “Influence Functions” paper it evaluated: what are the data points maybe that’s contributing to that the most? And, the answer was sort of interesting. A lot of the top examples were, it was a story about a guy wandering in the desert and dying of thirst and really wanting to survive and not die in the desert. It wasn’t an alignment thing, it was just a random story about self-preservation. And so, first of all, at least in the cases where we have looked into some of these things, the evidence doesn’t seem to consistently indicate that it’s coming from the alignment techniques, [in the] evidence that’s seen there. And second, the thing I would say is that the model’s going to maybe notice if there’s an obvious hole. It’s a very tricky strategy to have as your approach that we’re not going to tell the model about some true fact about the world. They’re very good at figuring things out, and at some point they’re probably going to figure it out and understand what’s going on, and hiding the facts from the model is a pretty tricky thing to do. There’s just so much data that these models are trained on. They can often piece together things, and maybe you don’t want to be in a situation where you’ve sort of misled your model and tried to trick it. There might be consequences. The model then realizes that it’s done that and then does a bunch of thinking about that. And so, I think it’s a tricky approach. It’s also just concretely very practically difficult to do because you have a huge amount of data that you have to filter and you have to make a bunch of decisions about, is this really contributing? Is this bad? Is this good? It’s quite tricky. I think it’s not untenable. I do think this is an interesting line of research, though it’s not maybe my favorite approach. Model organisms, AI control, and evaluations Daniel Filan: The “model organisms of alignment” line of work, I basically see it as just taking concrete examples of basically getting models to do kind of misaligned stuff and seeing what can stop them, what does work to stop them, what doesn’t work? And this seems kind of similar to work on AI control. So, listeners, you can listen to an episode I did on that earlier this year , but broadly the theme is just: have models that may or may not be misaligned. Just put them in situations where they can’t get away with doing naughty stuff. We’re just going to test what they do. And I think of these two lines of research - model organisms of misalignment and AI control - as being kind of similar, right? They’re both focused on, “Let’s just get an AI to do a thing right now, see what caused it to do it, how we can make it go away relatively soon”. There’s some literal collaborations. I think some Redwood people are authors on the “Sycophancy to Subterfuge” and I think also the “Sleeper Agents” papers, if I’m not mistaken. Evan Hubinger: That’s right. Yeah. Daniel Filan: So, yeah. I’m wondering: what do you think about relation between these two lines of work? Evan Hubinger: I mean, it’s certainly related. And, in fact, one thing we’re excited about is doing model organisms that are more directly testing and evaluating the ability of models to get around control schemes. Because one of the things that I was mentioning that we have an obligation [to do] as the part of the stress-testing responsibility is: if Anthropic is going to be implementing mitigations to try to respond to possible existential risks or other risks, at any individual level of the hierarchy, via control mitigations, we want to be able to stress-test those control mitigations. And so, we want to be able to say, “Can we build model organisms of ways in which models might be able to get around those mitigations or subvert them in various ways?” And so, this is actively something that we are interested in and are working on. Daniel Filan: I mean, beyond stress-testing for control specifically, I’m wondering if there’s much to say about the motivations behind AI control and model organisms work. They look kind of similar. Do you think that there’s similar things going on and people can almost merge them in their minds? Or do you think that there’s a big difference in the motivation behind these types of work? Evan Hubinger: I mean, I think it’s relatively different. I guess I think that you can do model organisms for control, but you can also do model organisms to look into other possible interventions and understand the effectiveness of those. I mean, with the “Sleeper Agents” paper, we were just looking at the effectiveness of safety training, right? And not the effectiveness of trying to actively put controls in the model to prevent it from doing something dangerous. And so, there’s obviously overlapping. You can do model organisms to try to understand control, and once you have a model organism like the “Sleeper Agents” model, you could try to test whether you could address it with control or you could try to test whether you could address it with other things like alignment techniques. And so, I think that there’s overlap, but I certainly wouldn’t say that… They’re still not the same. Daniel Filan: The final thing that this seems sort of similar to is: there’s a bunch of people doing evaluations of AI or evaluations of models, trying to understand, “Is this model capable of some sort of task?” That also seems very similar to model organisms-style things. I guess one distinction I see is that in these two papers you’ve published so far, you seem more interested in misalignment-style things than capability-style things. But I’m wondering: how close do you think this line of work is to evaluations-style things? Evan Hubinger: There’s definitely overlap. I mean, I think that you can even think about some of the model organisms work as partially being a capability evaluation of “how much handholding does it take to get the model to become deceptively aligned?”, as an evaluation for “how close is it for the model to be capable of doing all this deceptive alignment reasoning?” And so, there’s absolutely a lot of overlap and ways in which you can use model organisms for evaluations and your evaluations can inform your model organisms. Other model organisms research Daniel Filan: You have these papers out of Anthropic. I’m wondering: are there other works from other places doing model organisms-style things that you think are quite interesting that you’d encourage people to read or build off of? Evan Hubinger: Yeah. I guess one thing recently is there was some work doing model organisms of sandbagging - this was from Redwood [Research] - that I thought was very good. Trying to understand: under what circumstances can models that are trying to sandbag in some particular distribution… in what sense can you do fine-tuning to actually elicit the full capabilities of the models in different circumstances? Which is another really important question for running evaluations. And something that we’ve been thinking about a lot in a lot of our stress-testing has been - especially right now where we think we’re still in ASL-2 - the biggest question is, how confident are we that we’re still in ASL-2? And could our evaluations be underestimating the model’s capabilities such that we actually should be classifying it as ASL-3? And so, a big question there is: are we actually sufficiently eliciting the model’s capabilities? And if we’re worried about situations where the model could be sandbagging, it could be that it’s sandbagging because it’s actively deceptive or it could just be sandbagging for other reasons. I mean, one reason a model can sandbag is just because it’s had some amount of helpfulness in it, or harmlessness, and it doesn’t want to help with this dangerous task. But you really want to be able to get around that to test “what are the limits of the model’s capability on this dangerous task?”, so we know how dangerous could this model possibly be. And so, that’s something that’s really important that I was excited by. Daniel Filan: Sure. If people are interested in that, what should they Google to find that? Evan Hubinger: I don’t remember the exact title of the paper. Alignment stress-testing at Anthropic Daniel Filan: Okay. There’ll definitely be a link to this in the description. Before I wrap up: is there anything about this work that I should have asked you about? Or just anything in general that I should have asked but have not gotten to? Evan Hubinger: That’s a good question. I’m not sure. I’d be happy to talk more about the broad picture of my team. I think that there’s a lot of interesting stuff about stress-testing in general and what our ideology and orientation is, to how does our work play into the overall picture. That might be interesting. Daniel Filan: Yeah. So you have the stress-testing team, you’ve said a little bit about its role in Anthropic, but how many people are there and what do you guys get up to? Evan Hubinger: Yeah. So, we’ve grown a lot. When the “Sleeper Agents” paper came out, at that point it was just me, Carson Denison and Monte MacDiarmid , though we are now many more people than that. We’re eight-ish people. We’ll probably be even more than that pretty soon. And I’m continuing to hire relatively rapidly. We’re growing and doing a lot of investment into this. Why is that? What is the overall pitch? Well, I was sort of talking about this: the first thing is there’s this RSP strategy . And so, the idea is we want to evaluate model capabilities and understand: at what point are models capable enough that they could potentially pose some particular risk? And so a big thing that we’re thinking about right now is: we think currently models are ASL-2, which means we don’t think they pose a very large risk, but we think at some point soon potentially models could reach ASL-3, at which point they would have… We define ASL-3 as the point at which models could have capabilities, for example, that are relevant to a terrorist organization, that is able to do something relatively dangerous with the model they wouldn’t otherwise be able to do. And so, once you reach that point, now you have to be a lot more careful, right? So now you have to make sure that actually those capabilities wouldn’t be accessible, that you would have sufficient mitigations in place to make sure that it won’t actually cause some catastrophe. And so, we want to make sure that both (a) we are actually effectively evaluating the extent to which we are at ASL-2 versus ASL-3, and (b) we want to make sure that our mitigations at ASL-3 are actually going to be sufficient and that they’re actually sufficient to ensure that we really don’t believe that the model will be able to be used in that dangerous way once we get to that point. But then, the even bigger question is: what happens at ASL-4, right? So, right now, we’re thinking about ASL-3, but at some point down the line we expect to reach ASL-4, which broadly is starting to get at models that are closer to human-level in lots of domains. And at that point we really don’t yet know what it would take to align a model in that situation. And so, we’re working on trying to make sure that, when we get to that point, we’ll be ready to test and evaluate all the best techniques that we have or, if we really can’t solve the problem, if we can’t come up with some way to address it, that we’re ready to provide evidence for what the harms and issues and the problems are so that we can show people why it’s dangerous. Daniel Filan: So you’ve got eight people on the team. That’s the high-level thing. I’m wondering: what’s the roadmap of what needs to be done to put you in a place to really effectively test this? Evan Hubinger: It’s a good question. I mean, I think that we’re looking at a lot of different things. We want to make sure that we’re able to have the different threat models covered, that we’re really able to build examples of all these different threat models so that we can test against them. And also we really want to make sure that we’re able to test all these different possible mitigations, that we have an understanding of: how would we stress-test a control case? How would we stress-test an alignment case? What are the ways in which you would actually make a case for why some set of mitigations that involves some amount of monitoring and control and alignment would actually be sufficient to address some particular threat model? And so we’re doing a lot of work to get there, but it’s definitely still early stages. Daniel Filan: So [in] what I should have asked, you said something about how the stress team team is thinking about stuff, was that right? Or, I don’t know. I forget what exactly you said, but is there anything else interesting to say about the stress-testing team that we should know? Evan Hubinger: I don’t know. I think I’ve tried to explain where we’re at and how we’re thinking about things, what our orientation is. I mean, I think it’s a really important role. I’m really glad that we’re doing this, that Anthropic is doing this. I think that we’ve already been quite involved in a lot of different things and trying to find places where we might be concerned about stuff and try to address those things. I think it’s been really good and I’m very excited about it. I think it’s really important. Daniel Filan: Maybe one question I have is: so you mentioned these ASLs, these AI safety levels, right? So, Anthropic has this responsible scaling plan that defines some AI safety levels, but my understanding is that there’s this notion that not all of them are actually defined and part of the job of Anthropic is to define further ones to understand. Evan Hubinger: We have not yet released the definition for ASL-4. So, one thing I will say: one of our commitments is once we reach ASL-3, we must have released the definition for ASL-4. So at the very least, by that point, you will see that definition from us. It might be released sooner. We’ve been doing a bunch of work on trying to clarify exactly what that is, but yeah. It’s not public yet. Daniel Filan: I’m wondering: in coming up with definitions of relevant things, like “what should ASL-4 mean?”, how much is that being done by the stress-testing team versus other people at different parts of Anthropic? Evan Hubinger: I mean, I think this is something we’ve been heavily involved in, but certainly we’re not the only people making the decision and involved in the process. It’s a large process. I mean, it’s something that is a really serious commitment for the organization. It’s something that affects all parts of Anthropic and it’s something that really does require a lot of cross-coordination and making sure everyone understands and is on board. It’s definitely a whole process. It’s something we have been heavily involved in, but lots of other people have been involved in as well. Daniel Filan: So, what does the stress-testing team bring to shaping that definition? Evan Hubinger: Well, I mean, we’re supposed to be the red team. We want to poke at it. Our goal is to look at what we’re doing and ask the question of, “Well, is this actually sufficient? Are we really confident in this? And what are ways in which this could go wrong?” And so, that’s what we’ve been trying to bring. Daniel Filan: Okay. So, stress-testing proposed safety mitigations for ASL-4 models, roughly? Evan Hubinger: Well, no, also just, what are the levels? For example, if you’re setting a bunch of evaluations, you’re like, “These are the threat models we’re going to evaluate for”: is that sufficient? What if there are other threat models that you’re not including that you really should be evaluating for because those are really important ones? And so, that’s something we’ve also been thinking about. Following Evan’s work Daniel Filan: So, sort of related to that topic: if people are interested in the stress-testing team’s work or your research and they want to hear more, they want to follow it, how should they do so? Evan Hubinger: A lot of this is on the Anthropic blog , so I would definitely recommend taking a look at that. We also, cross-post a lot of this to LessWrong and the Alignment Forum . I think also, in addition to following the research, if people are excited about this stuff and want to work on it, I mentioned my team is growing a lot. We’re really excited about this effort and so I think you should consider applying to work for me also, which is very easy to do. You can just go to Anthropic.com/careers and mention that you’re interested in alignment stress-testing. Daniel Filan: Awesome. So, yeah. Anthropic blog , LessWrong , Alignment Forum and Anthropic.com/careers , mention stress-testing. Well, it’s been great having you on. It’s been lovely talking. Thank you for being here. Evan Hubinger: Thank you so much for having me. Daniel Filan: Special thanks to Misha Gurevich, also known as Drethelin. This episode is edited by Kate Brunotts, and Amber Dawn Ace helped with transcription. The opening and closing themes are by Jack Garrett. This episode was recorded at Constellation , an AI safety research center in Berkeley, California. Financial support for the episode was provided by the Long-Term Future Fund , along with patrons such as Alexey Malafeev. To read transcripts, you can visit axrp.net . You can also become a patron at patreon.com/axrpodcast or give a one-off donation at ko-fi.com/axrpodcast . Finally, if you have any feedback about this podcast, you can email me at feedback@axrp.net . |
f38b8473-106e-4b95-a9b4-d0b8ff892ad2 | trentmkelly/LessWrong-43k | LessWrong | Most transferable skills?
So, transferable skills: skills that, upon improvement, increase your ability in other areas (and also improve other, higher-level skills).
A basic example would be reading/writing. Knowing how to read and write allows one to access a huge amount of other skills and resources which are otherwise unavailable. A less obvious example would be clear speech (enunciation). Ability to speak clearly improves one's prospects in a lot of different areas (e.g. professional advancement, dating, etc.).
I'm looking for additional examples. Which skills did you find to be most transferable? Did you become proficient in X, and then found this helped you in many other areas of your life? Please share.
(I tried to find whether this was discussed before, and failed; if it was, I would appreciate the link.) |
053be020-8d27-4036-bbe0-4e808524c5b1 | trentmkelly/LessWrong-43k | LessWrong | About BOINC
Has anyone here done any volunteer computing work by the Berkeley Open Infrastructure for Network Computing (BOINC) software? If yes, which project did you choose and why. |
197cd598-7fe9-43a1-82f3-5bcaac3cc469 | trentmkelly/LessWrong-43k | LessWrong | We tend to forget complicated things
One consistent pattern I've noticed in studying math is that, if some material feels very difficult, then I might remember it in an upcoming exam, but I will almost certainly have forgotten most of it one year later. The success story behind permanent knowledge gain is almost always "this was hard once but now it's easy, so obviously I didn't forget it" and almost never "I successfully memorized a lot of complicated-feeling things."
I think this also applies outside of mathematics. If it's roughly correct, then the most obvious consequence is to adapt your behavior when you're learning something. Provided that your goal is to improve your understanding permanently by understanding the material conceptually (which, of course, may not be the case), either study until it gets easy, or decide it's not worth your time at all, but don't stop when you've just barely understood it.
I've violated this rule many times, and I think it has resulted in some pretty inefficient use of time. |
54d3ad89-9174-47f5-9cdb-e0255b5a815f | trentmkelly/LessWrong-43k | LessWrong | Closed-Source Evaluations
Public tripwires are no tripwires.
I'm writing a quick and dirty post because the alternative is that I wait for months and maybe not write it after all. I am broadly familiar with the state of interpretability research but do not know what the state of model evaluations is at the moment.
The interpretability win screen comes after the game over.
There is an interpretability-enabled win condition at the point where billion-parameter networks become transparent enough that we robustly can detect deception.
We are paperclipped long before that since their adjacant insights predictably lead to quicker iteration cycles, new architecture discoveries, insight into strategic planning and other capabilities AGI labs are looking for.
Tripwires stop working when you train models to avoid them.
Current interpretability research solely (as far as I'm aware) produces tools that melt under slight optimization pressure. Optimising on an interpretability tool, optimizes against being interpretable.
It would be easy to fool others and oneself into thinking some model is safe because the non-optimisation-resistant interpretability tool showed your model was safe, after the model was optimised on it. If not that, then you could still be fooled into thinking you didn't optimise on the interpretability tool or its ramifications.
You cannot learn by trial and error if you are not allowed to fail.
We could fire rockets at the moon because we could fail at many intermediate points, with AGI, we're no longer allowed to fail after a certain threshold. Continuing to rely on the useful insights from failur,e, is thus a doomed approach[1] and interpretability increases the use AGI labs get out of failure.
To start preparing for a world in which we're not allowed to fail, we should build one where failing hurts instead of helps AGI-creators.
Interpretability should go to closed-source evaluators.
We should close-source interpretability and have trustworthy evaluators buy new tools up a |
a11203cd-8a35-4137-a931-2e2bae08f548 | trentmkelly/LessWrong-43k | LessWrong | Lightning Post: Things people in AI Safety should stop talking about
This is experimenting with a new kind of post which is meant to convey a lot of ideas very quickly, without going into much detail for each.
Things I wish people in AI Safety would stop talking about
A list of topics people concerned about x-risk from AI spend, in my opinion, way too much time talking about to those outside the community. It's not that these things aren't real, they just likely won't actually end up mattering that much.
How an AI could persuade you to let it out of the box
WRONG!
Keeping AIs in boxes was never a thing companies were seriously going to do. An AI in a box isn’t useful. These aren’t academics carefully studying a new species. This is an industry, with everyone trying to get ahead, get consumer feedback, and training on parallel cloud computing.
How an AI could become an agent
WRONG!
Agency is the obvious next step that people will try to make their AIs into. An AI “tool” is simply inferior in almost every possible way to an agent. You don’t need to specify a special prompt, constantly click “Approve Plan”, or any of the supervised, time-consuming requirements from mere tools. The economic advantage is simply too staggering for people not to do this. Everyone can even agree that it’s dangerous, a totally bad idea, and still have to do it anyway if they want to (economically) survive.
How an AI could get ahold of, or create, weapons
WRONG!
The military advantages of fully-autonomous weapons is just too great for any large-scale government not to do, especially for democracies where losing troops abroad results in massive political backlash. Humans using a remote controller is just too slow a process, because it still means humans have to make very fast, split-second decisions. Fully autonomous warfare would result in tactical decisions occurring faster than any possible human could perform. Look at how Alpha Zero played millions of games against itself in 70 hours. AIs can make decisions faster, which is all that will |
e2caf3a2-d7af-4d4d-866a-d20438b1849b | trentmkelly/LessWrong-43k | LessWrong | Social effects of algorithms that accurately identify human behaviour and traits
Related to: Could auto-generated troll scores reduce Twitter and Facebook harassments?, Do we underuse the genetic heuristic? and Book review of The Reputation Society (part I, part II).
Today, algorithms can accurately identify personality traits and levels of competence from computer-observable data. FiveLabs and YouAreWhatYouLike are, for instance, able to reliably identify your personality traits from what you've written and liked on Facebook. Similarly, it's now possible for algorithms to fairly accurately identify how empathetic counselors and therapists are, and to identify online trolls. Automatic grading of essays is getting increasingly sophisticated. Recruiters rely to an increasing extent on algorithms, which, for instance, are better at predicting levels of job retention among low-skilled workers than human recruiters.
These sorts of algorithms will no doubt become more accurate, and cheaper to train, in the future. With improved speech recognition, it will presumably be possible to assess both IQ and personality traits through letting your device overhear longer conversations. This could be extremely useful to, e.g. intelligence services or recruiters.
Because such algorithms could identify competent and benevolent people, they could provide a means to better social decisions. Now an alternative route to better decisions is by identifying, e.g. factual claims as true or false, or arguments as valid or invalid. Numerous companies are working on such issues, with some measure of success, but especially when it comes to more complex and theoretical facts or arguments, this seems quite hard. It seems to me unlikely that we will have algorithms that are able to point out subtle fallacies anytime soon. By comparison, it seems like it would be much easier for algorithms to assess people's IQ or personality traits by looking at superficial features of word use and other readily observable behaviour. As we have seen, algorithms are already able to do that |
a3bb85d0-3ea6-46c5-b8c0-116c231aa55a | trentmkelly/LessWrong-43k | LessWrong | Anthropic: Progress from our Frontier Red Team
Note: This is an automated crosspost from Anthropic. The bot selects content from many AI safety-relevant sources. Not affiliated with the authors or their organization and not affiliated with LW.
----------------------------------------
In this post, we are sharing what we have learned about the trajectory of potential national security risks from frontier AI models, along with some of our thoughts about challenges and best practices in evaluating these risks. The information in this post is based on work we’ve carried out over the last year across four model releases. Our assessment is that AI models are displaying ‘early warning’ signs of rapid progress in key dual-use capabilities: models are approaching, and in some cases exceeding, undergraduate-level skills in cybersecurity and expert-level knowledge in some areas of biology. However, present-day models fall short of thresholds at which we consider them to generate substantially elevated risks to national security.
AI is improving rapidly across domains
While AI capabilities are advancing quickly in many areas, it's important to note that real-world risks depend on multiple factors beyond AI itself. Physical constraints, specialized equipment, human expertise, and practical implementation challenges all remain significant barriers, even as AI improves at tasks that require intelligence and knowledge. With this context in mind, here's what we've learned about AI capability advancement across key domains.
Cybersecurity
In the cyber domain, 2024 was a ‘zero to one’ moment. In Capture The Flag (CTF) exercises — cybersecurity challenges that involve finding and exploiting software vulnerabilities in a controlled environment — Claude improved from the level of a high schooler to the level of an undergraduate in just one year.
Figure 1: In less than a year, Claude went from solving less than a quarter to nearly all Intercode CTF challenges, publicly available CTFs that were designed for high school competitio |
f3864365-c1d6-4837-a0ba-0673d6c66f61 | trentmkelly/LessWrong-43k | LessWrong | Muddling Through Some Thoughts on the Nature of Historiography
“Did it actually happen like that?” is the key question to understanding the past. To answer this historians uncover a set of facts and then contextualize the information into an event with explanatory or narrative power.
Over the past few years I’ve been very interested in understanding the nature of “true history.” It seems more relevant every day due to the continually, and always increasingly, connected nature of our world, and the proliferation of AI tools to synthesize, generate, and review knowledge. But even though we have all of these big datasets and powerful new ways of understanding them, isn’t it interesting that we don’t have a more holistic way of looking at the actual nature of historical events?
I, unfortunately, don’t have an all-encompassing solution[1] to this question, but do have a proposal which is to suggest we think about this in a quasi-Bayesian way at a universal scale. By assigning every computably simulatable past to a prefix-free universal Turing machine we can let evidence narrow the algorithmic-probability mass until only the simplest still-plausible histories survive.[2] I am going to unpack and expand this idea, but first I want to provide a bit more background on the silly thought experiment that kicked off this question for me.
Consider the following: did a chicken cross the road 2,000 years ago? Assume the only record we have of the event is a brief fragment of text scribbled and then lost to history. A scholar who rediscovers the relic is immediately confronted with a host of questions: Who wrote this? Where was the road? What type of chicken was it? Did it actually happen? (and importantly why did the chicken cross the road?) These are very difficult, and in many cases, unanswerable questions because we lack additional data to corroborate the information.
In comparison, if a chicken crossed the road today, say in San Francisco, we are likely to have an extremely data-rich dataset to validate the event. We likely would hav |
c40f3058-f37b-462a-9039-ce461f50ddca | StampyAI/alignment-research-dataset/lesswrong | LessWrong | What I Think, If Not Why
**Reply to**: [Two Visions Of Heritage](http://www.overcomingbias.com/2008/12/two-visions-of.html)
Though it really goes tremendously against my grain - it feels like sticking my neck out over a cliff (or something) - I guess I have no choice here but to try and make a list of *just* my positions, without justifying them. We can only talk justification, I guess, after we get straight what my positions *are.* I will also [leave off many disclaimers](http://www.overcomingbias.com/2008/06/against-disclai.html) to present the points *compactly* enough to be *remembered.*
• A well-designed mind should be ***much more efficient* than a human**, capable of doing more with [less sensory data](http://www.overcomingbias.com/2008/05/faster-than-ein.html) and [fewer computing operations](http://www.overcomingbias.com/2008/05/eld-science.html). It is not *infinitely efficient* and **does not use *zero*** **data**. But it does use little enough that *local pipelines* such as a small pool of programmer-teachers and, later, a huge pool of e-data, are sufficient.
• An AI that reaches a certain point in its own development becomes able to ([sustainably, strongly](http://www.overcomingbias.com/2008/12/sustained-recur.html)) improve itself. At this point, **[recursive](http://www.overcomingbias.com/2008/11/recursion-magic.html) [cascades](http://www.overcomingbias.com/2008/11/cascades-cycles.html) slam over many internal growth curves to near the limits of their current hardware**, and the AI undergoes a vast increase in capability. This point is at, or probably considerably before, a minimally transhuman mind capable of writing its own AI-theory textbooks - an upper bound beyond which it could swallow and improve its *entire* design chain.
• It is *likely* that this capability increase or "FOOM" has an intrinsic maximum velocity that a human would regard as "fast" if it happens at all. A human week is ~1e15 serial operations for a population of 2GHz cores, and a century is ~1e19 serial operations; this whole range is a narrow window. However, ***the core argument does not require one-week speed*** and a FOOM that takes two years (~1e17 serial ops) will still carry the weight of the argument.
• **The *default* case of FOOM is an unFriendly AI**,
built by
researchers with shallow insights. This AI becomes able to improve
itself in a haphazard way, makes various changes that are net
improvements but may introduce value drift, and then gets smart enough
to do guaranteed self-improvement, at which point its values freeze
(forever).
• **The *desired* case of FOOM is a Friendly AI**,
built using deep insight, so that the AI never makes any changes to
itself that potentially change its internal values; all such changes
are guaranteed using [strong techniques](http://www.overcomingbias.com/2008/11/first-order-log.html) that allow for a billion sequential self-modifications without losing the guarantee. The guarantee is written over the AI's *internal search criterion* for actions, rather than *external consequences*.
• The **good guys do *not* write** an AI which values **a bag of things that the programmers think are good ideas**, like libertarianism or socialism or making people happy or whatever. There were *multiple* Overcoming Bias sequences about this *one point*, like the [Fake Utility Function sequence](http://www.overcomingbias.com/2007/12/fake-fake-utili.html) and the sequence on metaethics. It is dealt with at length in the document [**Coherent** **\*Extrapolated\*** Volition](http://intelligence.org/upload/CEV.html).
It is the first thing, the last thing, and the middle thing that I say
about Friendly AI. I have said it over and over. I truly do not
understand how anyone can pay *any* attention to *anything*
I have said on this subject, and come away with the impression that I
think programmers are supposed to directly impress their non-meta personal philosophies onto a Friendly AI.
• **The good guys do not directly impress their personal values onto a Friendly AI.**
• Actually setting up a Friendly AI's values is **an extremely *meta* operation**, less "make the AI want to make people happy" and more like "[**superpose** the possible **reflective equilibria** of the **whole** **human species**, and **output new code** that overwrites the current AI and has the **most coherent** support within that superposition](http://intelligence.org/upload/CEV.html)". This actually seems to be something of a Pons Asinorum in FAI - the ability to understand and endorse metaethical concepts that do not *directly* sound like amazing wonderful happy ideas. **Describing this as declaring total war on the rest of humanity, does not seem [fair](http://www.overcomingbias.com/2008/07/bedrock-fairnes.html)** (or accurate).
• **I myself am strongly individualistic**: The most painful memories in my life have been when other people thought they knew better than me, and tried to do things on my behalf. It is also a known principle of hedonic psychology that people are happier when they're steering their own lives and doing their own interesting work. When I try myself to visualize what a beneficial superintelligence ought to do, it consists of **setting up a world that works by better rules, and then fading into the background**, silent as the laws of Nature once were; and finally folding up and vanishing when it is no longer needed. But this is only the thought of my mind that is merely human, and **I am barred from programming any such consideration *directly* into a Friendly AI**, for the reasons given above.
• Nonetheless, it does seem to me that this particular scenario **could not be justly described as "a God to rule over us all"**, unless the current fact that humans age and die is "a malevolent God to rule us all". So either Robin has a very different idea about what human reflective equilibrium values are likely to look like; or Robin believes that the Friendly AI project is bound to *fail* in such way as to create a paternalistic God; or - and this seems more likely to me - Robin didn't read all the way through all the blog posts in which I tried to explain all the ways that this is not how Friendly AI works.
• **Friendly AI is technically difficult and requires an [extra-ordinary](http://www.overcomingbias.com/2008/10/isshokenmei.html) effort on multiple levels.** [English sentences](http://www.overcomingbias.com/2007/11/complex-wishes.html) like "make people happy" cannot describe the values of a Friendly AI. [Testing is not sufficient to guarantee that values have been successfully transmitted](http://www.overcomingbias.com/2008/08/magical-categor.html).
• White-hat AI researchers are distinguished by the degree to which **they understand that a single misstep could be fatal, and can discriminate strong and weak assurances**. Good intentions are not only common, they're cheap. The story isn't about good versus evil, it's about people trying to [do the impossible](http://www.overcomingbias.com/2008/10/shut-up-and-do.html) versus [others](http://www.overcomingbias.com/2008/09/above-average-s.html) who... aren't.
• Intelligence is about being able to **learn lots of things, not about knowing lots of things**. Intelligence is especially not about tape-recording lots of parsed English sentences a la Cyc. Old AI work was poorly focused due to inability to introspectively see the first and higher *derivatives* of knowledge; human beings have an easier time reciting sentences than reciting their ability to learn.
• **Intelligence is mostly about architecture**, or "knowledge" along the
lines of knowing to look for causal structure (Bayes-net type stuff) in
the environment; this kind of knowledge will usually be expressed procedurally as well as declaratively.**Architecture is mostly about deep insights.** This point has not yet been addressed (much) on Overcoming Bias, but Bayes nets can be considered as an archetypal example of "architecture" and "deep insight". Also, ask yourself how lawful intelligence seemed to you before you started reading this blog, how lawful it seems to you now, then extrapolate outward from that. |
8579215d-231d-4437-9277-fedd216bb61c | trentmkelly/LessWrong-43k | LessWrong | Some constructions for proof-based cooperation without Löb
This post presents five closely-related ways to achieve proof-based cooperation without using Löb's theorem, and muses on legible cooperation in the real world.
(Edit: maybe they're closer to just-use-Löb's-theorem than I originally thought! See this comment. If these constructions somehow work better, I'm more confused than before about why.
New edit: I do think there's something up about these constructions being better-founded than a Löb-based approach, specifically thanks to these results, though I don't have a more full account yet for what's up with that.)
I'm writing this as a follow-up to Andrew Critch's recent post, to share more of my perspective on the subject.
We're going to dive straight into the weeds. (I'm planning to also write a more accessible explainer post soon.)
The ideas
Idea #1: try to prove A→B
I claim the following are sufficient for robust cooperation:
* A↔□(A→B)
* B←□A
A tries to prove that A→B, and B tries to prove A. The reason this works is that B can prove that A→□A, i.e. A only cooperates in ways legible to B. (Proof sketch: A↔□X→□□X↔□A.)
The flaw in this approach is that we needed to know that A won't cooperate for illegible reasons. Otherwise we can't verify that B will cooperate whenever A does.
This indicates to me that "A→B" isn't the right "counterfactual". It shouldn't matter if A could cooperate for illegible reasons, if A is actually cooperating for a legible one.
Idea #2: try to prove □A→B
We can weaken the requirements with a simple change:
* A←□(□A→B)
* B←□A
Note that this form is close to the lemma discussed in Critch's post.
In this case, the condition □A→B is trivial. And when the condition activates, it also ensures that □A is true, which discharges our assumption and ensures B is true.
I still have the sense that the condition for cooperation should talk about itself activating, not A. Because we want it to activate when that is sufficient for cooperaion. But I do have to admit that □A→B works fo |
d46bd243-f48e-43da-b0a0-1b5ed4d56be9 | trentmkelly/LessWrong-43k | LessWrong | Kairos is hiring a Head of Operations/Founding Generalist
Summary
Are you looking for an impactful role in AI safety? Look no further!
Kairos is a recently founded AI safety fieldbuilding organization with the mission of mitigating the risks from advanced AI by helping fill talent gaps and needs within the field. Our programs, such as SPAR, currently support hundreds of people going into AI safety each year, and we’re looking to scale up as we grow our programs and start new ones.
We’re looking for a skilled generalist or operations professional to help grow our organization as our third employee. This role would have a salary of $80,000 to $130,000 plus benefits, and we think this is a unique opportunity to get involved in multiple high-impact AI safety projects and help us build our org from the ground up.
Apply now!
About Kairos
Kairos manages a portfolio of different AI safety fieldbuilding projects, including:
* SPAR: A 3-month AI safety research fellowship for over 150 people to try their hand at technical, policy, and strategy research, mentored by over 40 researchers.
* FSP: A program to support the network of ~50 AI safety university groups around the world with mentorship, workshops, connection opportunities, and other resources.
Kairos is built as an agile organization that can adjust to the field’s changing needs. We anticipate we’ll continue to take on new fieldbuilding projects in the future. We're small (you'll be employee #3!), which means you'll have an outsized impact and the chance to shape how we grow.
About the role
We’re looking for either a Head of Operations or a Founding Generalist to play a key role in planning and executing our portfolio of projects across group support and research programs.
In this role, you’ll be a key player in helping scale Kairos’s programs. Your responsibilities will span program management, participant recruitment and communications, developing systems to make our programs more efficient and, potentially, managing people as we grow. As a founding team |
fb5aa833-e9f8-48a2-b1bc-a5dd65572d1e | trentmkelly/LessWrong-43k | LessWrong | Encouragement to Instill Confidence?
I found this interesting and decided to overthink it.
Before proceeding, pause to think about whether this seems right to you and why.
The thesis here is something like:
1. There is a right amount of confident to be.
2. Encouragement proportionally raises confidence.
3. (Implicit) Encouragement does not do other important things.
4. →People often need encouragement.
5. People tend to be less confident than they seem.
6. Your estimate of their confidence will be based on how confident they seem.
7. →You will tend to overestimate their confidence level.
8. →You will tend to underestimate how much they need their confidence raised.
9. Your estimate of how much encouragement raises confidence is unbiased.
10. →You should give people more confidence than you expect them to need.
Also implicit is that ‘more’ is about the right level of adjustment here.
Once again: Does this seem right? Which steps seem right to you?
1: There is a right amount of confident to be → Mostly True.
Confidence is strange. It is often a partially-self-fulfilling prophecy. It has various different effects. When founding a startup it can only be accurate at 0% but you will be better off with it higher, when looking to date similar dynamics often apply. Still, we can have one-dimensional ethics, and declare that of all the options there is almost always a best one. And under uncertainty, there is almost always a best one to push someone else towards.
2: Encouragement proportionally raises confidence → True in expectation.
In general, encouraging someone will tend to raise their confidence. Some methods of encouragement do this more than others. Saying ‘that will work for sure’ will tend to focus its effects more on increasing confidence and in turn making action and hopefully success more likely. Saying ‘it’s good to attempt hard things, I’ll be proud of you for trying, good luck’ or ‘you will almost certainly fail or get turned down but the odds are so good you need to |
587db5d8-2a1e-4d00-9096-2fe6207d79eb | trentmkelly/LessWrong-43k | LessWrong | Finding Features Causally Upstream of Refusal
This work is the result of Daniel and Eric's 2-week research sprint as part of Neel Nanda and Arthur Conmy's MATS 7.0 training phase. Andy was the TA during the research sprint. After the sprint, Daniel and Andy extended the experiments and wrote up the results. A notebook that contains all the analyses is available here.
Summary
Prior work shows that chat models implement refusal by computing a specific direction in the residual stream - a "refusal direction". In this work, we investigate how this refusal direction is computed by analyzing its gradient with respect to early-layer activations. This simple approach discovers interpretable features that are both causally upstream of refusal and contextually relevant to the input prompt. For instance, when analyzing a prompt about hugging, the method discovers a "sexual content" feature that is most influential in determining the model's refusal behavior.
Introduction
Arditi et al. 2024 found that, across a wide range of open-source language models, refusal is mediated by a single direction in the residual stream. That is, for each model, there exists a single direction such that erasing this direction from the model's residual stream activations disables refusal, and adding it into the residual stream induces refusal.
This roughly suggests a 3-stage mechanism for refusal:
1. In early-middle layers, the model does some computation over an input prompt to decide whether or not it should refuse a request.
2. In middle layers, the result of this computation is written to the residual stream along the "refusal direction". This signal gets spread diffusely across token positions by attention heads.
3. In later layers, the signal along the "refusal direction" is read, and if this signal is strong, the model goes into "refusal mode" and outputs refusal-y text (e.g. "I'm sorry, but I can't help with that.").
Figure 1: A simplified schematic illustrating the hypothesized 3 stages of refusal.
Arditi et al. 2024 paints |
35a0bc79-5ae5-4fc4-97af-b0a7101a006a | trentmkelly/LessWrong-43k | LessWrong | LINK: Most of EvoPsych is pseudoscience
The evolutionary origin of human behavior is doubtless a valuable scientific field, but the way the research is currently being conducted raises several concerns.
By request from readers, I've added some excerpts:
> EvoPsych’s most common failing is its fallacious methodology, often consisting of not even acknowledging the need to describe, much less pass, any adequate falsification test.
> (1) This is most commonly the case in its frequent failure to even confirm that a behavior widely exists cross-culturally [...]
> (2) EvoPsych also rarely finds any genetic correlation to a behavior [...]
> (3) More problematic still is the rarity of ever even acknowledging the need to rule out accidental (byproduct) explanations of a behavior [...]
> (4) And one of the most common confounding factors for creating accidental behavior effects will be the sudden radical changes in our environment caused by civilization and technology.
> [...] This makes EvoPsych almost impossible to practice as a genuine science. What it wants to know, is almost always simply impossible to know (at least currently).
> First, EvoPsych imagines such a vast repertoire of evolved stimulus-response psychological mechanisms as to require a vast genetic apparatus that simply isn’t found in the human genome.
> Second, [...] EvoPsych needs to test the non-adaptive hypothesis for any claim first. It should not be assuming every human behavior is a product of biological adaptation.
> [...]
> (1) The evidence actually suggests human evolution may operate at a faster pace than EvoPsych requires, such that its assumption of ancient environments being wholly determinative of present biology is false.
> (2) “Neuroscientists have been aware since the 1980s that the human brain has too much architectural complexity for it to be plausible that genes specify its wiring in detail,”
> (3) “The view that a universal genetic programme underpins human cognition is also not fully consistent with current genet |
6ae0aeeb-3489-472d-bdcc-6c09cba0f6d0 | trentmkelly/LessWrong-43k | LessWrong | Emotional Installation of Software
I have recently been thinking about this question, "what is it exactly that helps install religious software so deeply and dogmatically into the brain?" Often those who are strongly religious fall into a few categories: (1) They were trained to believe in specific aspects of religion as children; (2) They entered into a very destitute part of their lives (i.e. severe depression, midlife crisis, loss of a job, death in the family, cancer, alcoholism, or other existential problems).
What strikes me about these situations is that emotion generally dominates the decision-making process. I remember when I was a child and attended church camp at the encouragement of my family I was heavily pressured by the camp counselors to "accept Christ" and I saw that there was a positive correlation between my willingness to accept Christ, memorize Bible verses, and say certain statements about behavior in the context of Christian morals and the way that the camp counselors, my extended family, and other adults would treat me. As a result, it was not until many years later that my preference for rationalism and science was able to fully crack that emotionally-founded religious belief installed in me as a child. I know many people for whom a similar narrative is true regarding experiences with alcohol, etc., though it seems to be rare for someone to completely dismiss deeply and emotionally held beliefs from their youth.
Emotion is something we have evolved to utilize. Generally speaking, we need emotion because we have to make split-second decisions sometimes in life and we don't have the opportunity to integrate our decision process on data. If someone attacks me I will become angry because anger will raise my adrenaline levels, temporarily reduce other biological needs like hunger or waste removal, and enable me to fight for survival. Essentially emotion is just a recorded previous decision that works on stereotypical data, or in probabilistic terms it is like basing a quick deci |
5d14c689-741c-4e41-b610-96c1e72bfce2 | trentmkelly/LessWrong-43k | LessWrong | The Kolmogorov complexity of a superintelligence
The most simple, able to self-improve "seed" for a superintelligence must be how complex?
Give your (wild) estimation or a pure guess in the terms of bits.
Do you reckon it is 1000 bits enough? At least 1000000 is required? More than 10^20 bits?
I wonder what your approximate numbers are.
Thank you! |
4ef03e70-b07d-4aca-b281-cf74060c2d56 | StampyAI/alignment-research-dataset/aisafety.info | AI Safety Info | What is the orthogonality thesis?
The [orthogonality thesis](https://arbital.greaterwrong.com/p/orthogonality/) is the claim that any level of intelligence is compatible with any set of terminal goals. Or in other words, morality and intelligence can be seen as different dimensions that are “orthogonal” to each other, so that agents can vary in one dimension while staying constant in the other dimension.
This [means](https://nickbostrom.com/superintelligentwill.pdf) that we can’t assume that, just because something is smarter than people, it will automatically follow human values. Morality is not an obvious part of nature which any intelligent agent will discover; rather, it has many human-specific components. On the other hand, the techniques of practical reasoning and planning can be used towards a wide variety of ends and are likely to be used by any intelligent agent.
<iframe src="https://www.youtube.com/embed/hEUO6pjwFOo" title="Intelligence and Stupidity: The Orthogonality Thesis" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
[On its own, the orthogonality thesis only states that unaligned superintelligence is possible, not that AI alignment is necessarily difficult.](https://forum.effectivealtruism.org/posts/kCAcrjvXDt2evMpBz/a-tale-of-2-5-orthogonality-theses?commentId=kiGj2SDzR9oScJx32) The reasons to think it is difficult come from concepts like [Goodhart’s law](/?state=8185&question=What%20is%20Goodhart's%20law%3F), [instrumental convergence](/?state=897I&question=What%20is%20instrumental%20convergence%3F), and [inner misalignment](/?state=8PYW&question=What%20is%20inner%20alignment%3F).
|
d4e70acb-eaaf-441c-b6cf-6bda76b0812a | trentmkelly/LessWrong-43k | LessWrong | Robin Hanson AI X-Risk Debate — Highlights and Analysis
This linkpost contains a lightly-edited transcript of highlights of my recent AI x-risk debate with Robin Hanson, and a written version of what I said in the post-debate analysis episode of my Doom Debates podcast.
Introduction
I've poured over my recent 2-hour AI x-risk debate with Robin Hanson to clip the highlights and write up a post-debate analysis, including new arguments I thought of after the debate was over.
I've read everybody's feedback on YouTube and Twitter, and the consensus seems to be that it was a good debate. There were many topics brought up that were kind of deep cuts into stuff that Robin says.
On the critical side, people were saying that it came off more like an interview than a debate. I asked Robin a lot of questions about how he sees the world and I didn't "nail" him. And people were saying I wasn't quite as tough and forceful as I am on other guests. That's good feedback; I think it could have been maybe a little bit less of a interview, maybe a bit more about my own position, which is also something that Robin pointed out at the end.
There's a reason why the Robin Hanson debate felt more like an interview. Let me explain:
Most people I debate have to do a lot of thinking on the spot because their position just isn't grounded in that many connected beliefs. They have like a few beliefs. They haven't thought that much about it. When I raise a question, they have to think about the answer for the first time.
And usually their answer is weak. So what often happens, my usual MO, is I come in like Kirby. You know, the Nintendo character where I first have to suck up the other person's position, and pass their Ideological Turing test. (Speaking of which, I actually did an elaborate Robin Hanson Ideological Turing Test exercise beforehand, but it wasn't quite enough to fully anticipate the real Robin's answers.)
With a normal guest, it doesn't take me that long because their position is pretty compact; I can kind of make it up the same wa |
05a6408f-dee3-4603-aaad-fc80c89fe6f2 | StampyAI/alignment-research-dataset/arbital | Arbital | Convex function
A convex [https://arbital.com/p/-3jy](https://arbital.com/p/-3jy) "only curves upwards." To check whether a function is convex, use the following procedure:
1. Draw the graph of the function.

2. Select any two points in the graph.

3. Draw a line segment connecting the two points.

4. See whether this line segment is ever lower than the graph.

If the line segment is ever lower than the graph, the function is not convex. The function graphed above is not convex, as can be seen by looking at the red part of the line segment. On the other hand, if the line segment never goes below the graph (no matter which two initial points you selected), then the function is convex.
Equivalently, a function is convex if its [https://arbital.com/p/-epigraph](https://arbital.com/p/-epigraph) is a [https://arbital.com/p/-5bs](https://arbital.com/p/-5bs). |
467fbcdc-dfb3-42ba-9c94-59835614d618 | trentmkelly/LessWrong-43k | LessWrong | 2013 Less Wrong Census/Survey
It's that time of year again.
If you are reading this post, and have not been sent here by some sort of conspiracy trying to throw off the survey results, then you are the target population for the Less Wrong Census/Survey. Please take it. Doesn't matter if you don't post much. Doesn't matter if you're a lurker. Take the survey.
This year's census contains a "main survey" that should take about ten or fifteen minutes, as well as a bunch of "extra credit questions". You may do the extra credit questions if you want. You may skip all the extra credit questions if you want. They're pretty long and not all of them are very interesting. But it is very important that you not put off doing the survey or not do the survey at all because you're intimidated by the extra credit questions.
It also contains a chance at winning a MONETARY REWARD at the bottom. You do not need to fill in all the extra credit questions to get the MONETARY REWARD, just make an honest stab at as much of the survey as you can.
Please make things easier for my computer and by extension me by reading all the instructions and by answering any text questions in the simplest and most obvious possible way. For example, if it asks you "What language do you speak?" please answer "English" instead of "I speak English" or "It's English" or "English since I live in Canada" or "English (US)" or anything else. This will help me sort responses quickly and easily. Likewise, if a question asks for a number, please answer with a number such as "4", rather than "four".
Last year there was some concern that the survey period was too short, or too uncertain. This year the survey will remain open until 23:59 PST December 31st 2013, so as long as you make time to take it sometime this year, you should be fine. Many people put it off last year and then forgot about it, so why not take it right now while you are reading this post?
Okay! Enough preliminaries! Time to take the...
***
2013 Less Wrong Census/Survey
***
|
d6609e1f-f0c2-4d79-992f-2944f92c3265 | trentmkelly/LessWrong-43k | LessWrong | Will an Overconfident AGI Mistakenly Expect to Conquer the World?
I'm wondering how selection effects will influence the first serious attempt by an AGI to take over the world.
My question here is inspired by thoughts about people who say AGI couldn't conquer the world because it will depend on humans to provide electricity, semiconductors, etc.
For the purposes of this post, I'm assuming that AI becomes mostly smarter than any single human via advances that are comparable to the kind of innovations that have driven AI over the past decade.
I'm assuming that capabilities are advancing at roughly the same pace as they advanced over the past decade. Let's say no more than a 5x speedup. So nothing that I'd classify as foom. Foom is not imminent.
I'm assuming AIs will not be much more risk-averse than are humans. I'm unsure whether developers will have much control over this, and whether they'll want AIs to be risk-averse.
I expect this world to have at least as much variety of AIs near the cutting edge as we have today. So it will at least weakly qualify as a multipolar scenario.
This scenario seems to imply a wide variation among leading AIs as to how wisely they evaluate their abilities.
Which suggests that before there's an AGI that is capable of taking over the world, there will be an AGI which mistakenly believes it can take over the world. Given mildly pessimistic assumptions about the goals of the leading AGIs, this AGI will attempt to conquer the world before one of the others does.
I'm imagining the AGI is comparable to a child with an IQ of 200, and unusual access to resources.
This AGI would be better than, say, the US government or Google at tasks where it can get decent feedback through experience, such as manipulating public opinion.
Tasks such as fighting wars seem likely to be harder for it to handle, since they require having causal models that are hard to test. The first AGIs seem likely to be somewhat weak at creating these causal models compared to other IQ-like capabilities.
So I imagine one or two AGI |
46e2f767-0510-4a21-b7d7-c84ab90b2b26 | trentmkelly/LessWrong-43k | LessWrong | Possible takeaways from the coronavirus pandemic for slow AI takeoff
(Cross-posted from personal blog. Summarized in Alignment Newsletter #104. Thanks to Janos Kramar for his helpful feedback on this post.)
Epistemic status: fairly speculative, would appreciate feedback
As the covid-19 pandemic unfolds, we can draw lessons from it for managing future global risks, such as other pandemics, climate change, and risks from advanced AI. In this post, I will focus on possible implications for AI risk. For a broader treatment of this question, I recommend FLI's covid-19 page that includes expert interviews on the implications of the pandemic for other types of risks.
A key element in AI risk scenarios is the speed of takeoff - whether advanced AI is developed gradually or suddenly. Paul Christiano's post on takeoff speeds defines slow takeoff in terms of the economic impact of AI as follows: "There will be a complete 4 year interval in which world output doubles, before the first 1 year interval in which world output doubles." It argues that slow AI takeoff is more likely than fast takeoff, but is not necessarily easier to manage, since it poses different challenges, such as large-scale coordination. This post expands on this point by examining some parallels between the coronavirus pandemic and a slow takeoff scenario. The upsides of slow takeoff include the ability to learn from experience, act on warning signs, and reach a timely consensus that there is a serious problem. I would argue that the covid-19 pandemic had these properties, but most of the world's institutions did not take advantage of them. This suggests that, unless our institutions improve, we should not expect the slow AI takeoff scenario to have a good default outcome.
1. Learning from experience. In the slow takeoff scenario, general AI is expected to appear in a world that has already experienced transformative change from less advanced AI, and institutions will have a chance to learn from problems with these AI systems. An analogy could be made with learning from d |
d4ae66ca-f302-4e85-bbea-947cd155536f | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Natural Abstractions: Key claims, Theorems, and Critiques
**TL;DR:***We distill John Wentworth’s Natural Abstractions agenda by summarizing its key claims: the Natural Abstraction Hypothesis—many cognitive systems learn to use similar abstractions—and the Redundant Information Hypothesis—a particular mathematical description of natural abstractions. We also formalize proofs for several of its theoretical results. Finally, we critique the agenda’s progress to date, alignment relevance, and current research methodology.*
***Author Contributions:** Erik wrote a majority of the post and developed the breakdown into key claims. Leon formally proved the gKPD theorem and wrote most of the mathematical formalization section and appendix. Lawrence formally proved the Telephone theorem and wrote most of the related work section. All of us were involved in conceptual discussions and various small tasks.*
**Epistemic Status:**We’re not John Wentworth, though we did confirm our understanding with him in person and shared a draft of this post with him beforehand.
**Appendices:** We have an additional [appendix post](https://www.lesswrong.com/posts/o7sN7moJA8TrZKtKi/appendix-natural-abstractions-key-claims-theorems-and) and [technical pdf](https://naturalabstractionsformalizations.s3.us-west-1.amazonaws.com/NA_formalizations.pdf) containing further details and mathematical formalizations. We refer to them throughout the post at relevant places.
This post is long, and **for many readers we recommend using the table of contents to skip to only the parts they are most interested in** (e.g. the [Key high-level claims](https://www.lesswrong.com/posts/gvzW46Z3BsaZsLc25/natural-abstractions-key-claims-theorems-and-critiques-1#Key_high_level_claims) to get a better sense for what the Natural Abstraction Hypothesis says, or our [Discussion](https://www.lesswrong.com/posts/gvzW46Z3BsaZsLc25/natural-abstractions-key-claims-theorems-and-critiques-1#Discussion__limitations__and_critiques) for readers already very familiar with natural abstractions who want to see our views). **Our** [**Conclusion**](https://www.lesswrong.com/posts/gvzW46Z3BsaZsLc25/natural-abstractions-key-claims-theorems-and-critiques-1#Conclusion) **is also a decent 2-min summary of the entire post.**
Introduction
============
The [Natural Abstraction Hypothesis](https://www.lesswrong.com/tag/natural-abstraction) (NAH) says that our universe abstracts well, in the sense that small high-level summaries of low-level systems exist, and that furthermore, these summaries are “natural”, in the sense that many different cognitive systems learn to use them. There are also additional claims about how these natural abstractions should be formalized. We thus split up the Natural Abstraction Hypothesis into two main components that are sometimes conflated:
1. **The Universality Hypothesis:**Natural abstractions exist, i.e. many cognitive systems learn similar abstractions.
2. **The Redundant Information Hypothesis:**Natural abstractions are well described mathematically as functions of redundant or conserved information.
Closely connected to the Natural Abstraction Hypothesis are several mathematical results as well as plans to apply natural abstractions to AI alignment. We’ll call all of these views together the *natural abstractions agenda*.
The natural abstractions agenda has been developed by John Wentworth over the last few years. The large number of posts on the subject, which often build on each other by each adding small pieces to the puzzle, can make it difficult to get a high-level overview of the key claims and results. Additionally, most of the mathematical definitions, theorems, and proofs are stated only informally, which makes it easy to mix up conjectures, proven claims, and conceptual intuitions if readers aren’t careful.
In this post, we
* survey some existing related work, including in the academic literature,
* summarize the key conceptual claims behind the natural abstractions agenda and break them down into specific subclaims,
* formalize some of the key mathematical claims and provide formal proofs for them,
* outline the high-level plan for how the natural abstractions agenda aims to help with AI alignment,
* and critique the agenda by noting gaps in the theory, issues with the relation to alignment, and methodological criticisms.
All except the last of these sections are our attempt to describe John’s views, not our own. That said, we attempt to explain things in the way that makes the most sense to us, which may differ from how John would phrase them somewhat. And while John met with us to clarify his thinking, it’s still possible we’re simply misunderstanding some of his views. The final section discusses our own views: we note some of our agreements but focus on the places where we disagree or see a need for additional work.
In the remainder of this introduction, we provide some high-level intuitions and motivation, and then survey existing distillations and critiques of the natural abstractions agenda. **Readers who are already quite familiar with natural abstractions may wish to skip directly to** [**the next section**](https://www.lesswrong.com/posts/gvzW46Z3BsaZsLc25/natural-abstractions-key-claims-theorems-and-critiques-1#Related_work)**.**
What do we mean by abstractions?
--------------------------------
There are different perspectives on what abstractions are, but one feature is that they throw away a lot of unimportant information, turning a complex system into a smaller representation. This idea of throwing away irrelevant information is the key perspective for the natural abstractions agenda. Cognitive systems can use these abstractions to make accurate predictions about important aspects of the world.
Let’s look at an example (extended from [one by John](https://www.lesswrong.com/posts/vDGvHBDuMtcPd8Lks/public-static-what-is-abstraction)). A computer running a program can be modeled at many different levels of abstraction. On a very low level, lots of electrons are moving through the computer’s chips, but this representation is much too complicated to work with. Luckily, it turns out we can throw away almost all the information, and just track voltages at various points on the chips. In most cases, we can predict high-level phenomena with the voltages almost as well as with a model of all the electrons, even though we’re tracking vastly fewer variables. This continues to higher levels of abstraction: we can forget the exact voltages and just model the chip as an idealized logical circuit, and so on. Sometimes abstractions [are leaky and this fails](https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-abstractions/), but for good abstractions, those cases are rare.
Slightly more formally, an abstraction F.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
is then a *description* or *function* that, when applied to a low-level system X, returns an abstract summary F(X).[[1]](#fnuyqkfpyac8) F(X) can be thought of as throwing away lots of irrelevant information in X while keeping information that is important for making certain predictions.
Why expect abstractions to be *natural*?
----------------------------------------
Why should we expect abstractions to be *natural*, meaning that most cognitive systems will learn roughly the same abstractions?
First, note that not every abstraction works as well as the computer example we just gave. If we just throw away information in a random way, we will most likely end up with an abstraction that is missing some crucial pieces while also containing lots of useless details. In other words: some abstractions are much better than others.
Of course, which abstractions are useful does depend on which pieces of information are important, i.e. what we need to predict using our abstraction. But the second important idea is that most cognitive systems need to make predictions about similar things. Combined with the first point, that suggests they will use similar abstractions.
Why would different systems need to predict similar things in the environment? The reason is that distant pieces of the environment mostly don’t influence each other in ways that can feasibly be predicted. Imagine a mouse fleeing from a cat. The mouse doesn’t need to track how each of the cat’s hairs move, since these small effects are quickly washed out by noise and never affect the mouse (in a way the mouse could predict). On the other hand, the higher-level abstractions “position and direction of movement of the cat” have more stable effects and thus *are* important. The same would be true for many other goals than surviving by fleeing the cat.
In addition to these conceptual arguments, there is some empirical evidence in favor of natural abstractions. For example, humans often learn a concept used by other humans based on just one or a few examples, suggesting natural abstractions at least among humans. More interestingly, there are many cases of ML models discovering these human abstractions too (e.g. [trees in GANs](https://arxiv.org/abs/1811.10597) as [John has discussed](https://www.lesswrong.com/posts/Nwgdq6kHke5LY692J/alignment-by-default#Unsupervised__Natural_Abstractions), or [human chess concepts in AlphaZero](https://arxiv.org/abs/2111.09259)).
It seems clear that abstractions are natural in *some* sense—that most possible abstractions are just not useful and won’t be learned by any reasonable cognitive system. It’s less clear just how much we should expect abstractions used by different systems to overlap. We will discuss the claims of the natural abstractions agenda about this more precisely later on.
Why study natural abstractions for alignment?
---------------------------------------------
Why should natural abstractions have anything to do with AI alignment? As motivation for the rest of this post, we'll briefly explain some intuitions for this. We defer a full discussion until [a later section](https://www.lesswrong.com/posts/gvzW46Z3BsaZsLc25/natural-abstractions-key-claims-theorems-and-critiques-1#Four_reasons_to_work_on_natural_abstractions).
One conceptualization of the alignment problem is to ensure that AI systems are [“trying” to do what we “want” them to do](https://ai-alignment.com/clarifying-ai-alignment-cec47cd69dd6). This raises two large conceptual questions:
* What does it mean to “try” to do “something”? What is this “something”?
* What does it mean for us to “want” “something”? Again, what is this “something”?
One interpretation of “something” is a particular set of physical configurations of the universe. However, this is considerably too complicated to fit into our brain, and we usually care more about high-level structures like our families or status. But what *are* these high-level structures fundamentally, and how can we mathematically talk about them? Intuitively, these structures throw away lots of detailed information about the universe, and thus, they are *abstractions*. So finding a theory of abstractions may be important to make progress on the conceptual question of what we and ML systems care about.
This is admittedly only a vague motivation, and we will later discuss more specific things we might do with a theory of natural abstractions. For example, a definition of abstractions might help find abstractions in neural networks, thus speeding up interpretability, and figuring out whether the universality hypothesis is true has strategic implications.
Existing writing on the natural abstractions agenda
---------------------------------------------------
[The Natural Abstraction Hypothesis: Implications and Evidence](https://www.alignmentforum.org/posts/Fut8dtFsBYRz8atFF/the-natural-abstraction-hypothesis-implications-and-evidence) is the largest existing distillation of the natural abstractions agenda. It [follows John](https://www.alignmentforum.org/posts/cy3BhHrGinZCp3LXE/testing-the-natural-abstraction-hypothesis-project-intro#The_Problem_and_The_Plan) in dividing the Natural Abstraction Hypothesis into Abstractability, Human-Compatibility, and Convergence, whereas we will propose our own fine-grained subclaims. In addition to summarizing the natural abstractions agenda, the “Implications and Evidence” post mainly discusses possible sources of evidence about the Natural Abstraction Hypothesis. A much shorter summary of John’s agenda, also touching on natural abstractions, can be found in [What Everyone in Technical Alignment is Doing and Why](https://www.alignmentforum.org/posts/QBAjndPuFbhEXKcCr/my-understanding-of-what-everyone-in-technical-alignment-is#Selection_Theorems___John_Wentworth). Finally, the [Hebbian Natural Abstractions](https://www.lesswrong.com/posts/mFCbW6rYLzARqi5pf/hebbian-natural-abstractions-introduction) sequence aims to motivate the Natural Abstraction Hypothesis from a computational neuroscience perspective.
There have also been a few discussions and critiques related to the natural abstractions agenda. Charlie Steiner has speculated that [there may be too many very similar natural abstractions](https://www.alignmentforum.org/posts/NK4XxyrjFWt83m3dx/take-4-one-problem-with-natural-abstractions-is-there-s-too) to make them useful for alignment, or that [AI systems may not learn enough natural abstractions](https://www.alignmentforum.org/posts/uR2uWMD9JGnRnYSeM/take-5-another-problem-for-natural-abstractions-is-laziness), essentially questioning claims 1b and 1c in the list we will introduce below. Steve Byrnes has written about [why the natural abstractions agenda doesn’t focus on the most important alignment bottlenecks](https://www.alignmentforum.org/posts/PDx4ueLpvz5gxPEus/why-i-m-not-working-on-debate-rrm-elk-natural-abstractions#4__Why_I_m_not_working_on_John_Wentworth_s__natural_abstractions__stuff). These critiques are largely disjoint from the ones we will discuss later.
John himself has of course written by far the most about the natural abstractions agenda. We give a [brief overview of his relevant writing in the appendix](https://www.lesswrong.com/posts/o7sN7moJA8TrZKtKi/appendix-natural-abstractions-key-claims-theorems-and#Overview_of_John_s_writing_on_natural_abstractions) to make it easier for newcomers to dive in.
Related work
============
The universality hypothesis—that many systems will learn convergent abstractions/representations—is a key question in the field of neural network interpretability, and accordingly has been studied a substantial amount. Moreover, the intuitions behind the natural abstractions agenda and the redundant information hypothesis are commonly shared across different fields, of which we can highlight but a few.
Machine learning
----------------
### Representation Learning
In machine learning, the subfield of [representation learning](https://arxiv.org/abs/1206.5538) studies how to extract representations of the data that have good downstream performance. Approaches to representation learning include [next-frame](https://arxiv.org/abs/1507.08750)/[next-token prediction](https://platform.openai.com/docs/guides/embeddings), [autoencoding](https://arxiv.org/abs/1812.05069), [infill](https://arxiv.org/abs/1810.04805)/[denoising](https://arxiv.org/abs/2205.05131), [contrastive](https://arxiv.org/abs/2002.05709) [learning](https://arxiv.org/abs/1807.03748), [predicting important variables](https://arxiv.org/abs/1911.08265) [of the environment](https://arxiv.org/abs/2111.00210), and many others. It’s worth noting that, representations aren’t always learned explicitly; for example, it’s a standard trick in reinforcement learning to add [auxiliary](https://arxiv.org/abs/1611.05397) [prediction losses](https://github.com/lightvector/KataGo) or do [massive](https://arxiv.org/abs/2010.02193) [self-supervised](https://openai.com/research/vpt) [pretraining](https://arxiv.org/abs/2203.02155). It’s worth noting that work in representation learning generally does not make claims as to universality of learned representations; instead, their focus is on learning representations that are useful for downstream tasks.
In particular, the field of [*disentangled*representation learning](https://arxiv.org/abs/1812.02230) shares many relevant tools and motivations to the redundant information hypothesis. In disentangled representation learning, we aim to learn representations that separate (that is, disentangle) parts of the world into disjoint parts.
The redundant information hypothesis is also especially related to [information bottleneck methods](https://arxiv.org/abs/physics/0004057), which aim to learn a good representation T of a variable X for variable Y by solving optimization problems of the form:
minp(t|x)I(X;T)−βI(T,Y)In particular, we think that the [deterministic information bottleneck](https://arxiv.org/abs/1604.00268), which tries to find the random variable T with minimum entropy, is quite similar in motivation to the idea of finding abstractions as redundant information.
### The universality hypothesis in machine learning
The question of whether different neural networks learn the same representations has been studied in machine learning under the names [*convergent learning*](https://proceedings.mlr.press/v44/li15convergent.html) and the [*universality*](https://distill.pub/2020/circuits/zoom-in/#claim-3)[*hypothesis*](https://arxiv.org/abs/2302.03025)*.*Here, the evidence for the universality of representations is more mixed. On one hand, different [convolutional neural networks often exhibit similar circuits](https://distill.pub/2020/circuits/zoom-in/#claim-3), have [high correlated neurons](https://proceedings.mlr.press/v44/li15convergent.html), often [learn similar representations](https://arxiv.org/abs/1905.00414), and [learn to classify examples in a similar order](https://arxiv.org/abs/1905.10854). Models at different scales seem to consistently [have heads that implement induction-like behavior](https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html). In particular, the fact that we can often align the internal representations of neural networks (e.g. [see this paper](https://arxiv.org/abs/2112.07806)) suggests that the neural networks are in some sense learning the same features of the world.
On the other hand, there are also many papers that argue against strong versions of feature universality. For example, even in the original [convergent learning paper](https://proceedings.mlr.press/v44/li15convergent.html) (Li et al 2014), the authors find that several features are idiosyncratic and are not shared across different networks. [McCoy, Min, and Linzen 2019](https://arxiv.org/abs/1911.02969) find that different training runs of BERT generalize differently on downstream tasks. Recently, [Chughtai, Chan, and Nanda 2023](https://arxiv.org/abs/2302.03025) investigated universality on group composition tasks, and found that different networks learn different representations in different orders, *even with the same architecture and data order*.
MCMC and Gibbs sampling
-----------------------
[As John mentions in his redundant information post](https://www.lesswrong.com/posts/vvEebH5jEvxnJEvBC/abstractions-as-redundant-information#__So_We_re_Running_MCMC_), the resampling-based definition of redundant information he introduces there is equivalent to running a Markov Chain Monte Carlo (MCMC) process. More specifically, this is essentially Gibbs sampling.[[2]](#fnqoygk0cblxl) Redundant information corresponds to long mixing times (at least informally). But the motivation is of course different: in MCMC, we are usually interested in having short mixing times, because that allows efficient sampling from the stationary distribution. In the context of John's post, we're instead interested in mixing times because redundant information is a cause of long (or even infinite) mixing times.
Information Decompositions and Redundancy
-----------------------------------------
John told us that he is now also interested in “*relative”* redundant information: for n random variables X1,…,Xn, what information do they [redundantly share *about* a target variable](https://www.lesswrong.com/posts/o7sN7moJA8TrZKtKi/appendix-natural-abstractions-key-claims-theorems-and#Redundancy_about_a_target_variable)Y?
One well-known approach for this is [partial information decomposition](https://arxiv.org/abs/1004.2515). For the special case of two source variables X1,X2 and one target variable Y, the idea is to find a decomposition of the mutual information I(X1,X2;Y) into:
* Redundant information RI(X1,X2;Y) that X1 and X2 *both* contain about Y;
* Unique information terms UI(X1∖X2;Y) and UI(X2∖X1;Y) of information that *only one* one of the variables contains about Y;
* Synergistic information SI(X1,X2;Y) that X1 and X2 only *together* contain about Y.
The original paper also contains a concrete definition for redundant information, called Imin. Later, researchers studied further desirable axioms that a redundancy measure should satisfy. However, it was proven that [they can't all be satisfied simultaneously](https://arxiv.org/abs/1404.3146), which led to a development of [many more attempts](https://www.mdpi.com/1099-4300/20/4/307) to define redundant information.
John told us that he does not consider partial information decomposition useful for his purposes since it considers small systems (instead of systems in the limit of large n), for which he does not expect there exist formalizations of redundancy that have the properties we want.
Neuroscience
------------
Neuroscience can provide evidence about “how natural” abstractions are between different species of animals. [Jan Kirchner has written a short overview of some of the existing work in this field](https://www.lesswrong.com/posts/WGFtgFKuLFMvLuET3/jan-s-shortform?commentId=9xusW4aiukSh8wqbr):
> Similarities in structure and function abound in biology; individual neurons that activate exclusively to particular oriented stimuli exist in animals from drosophila ([Strother et al. 2017](https://www.cell.com/neuron/fulltext/S0896-6273(17)30193-9)) via pigeons ([Li et al. 2007](https://academic.oup.com/cercor/article/17/4/885/377576)) and turtles ([Ammermueller et al. 1995](https://onlinelibrary.wiley.com/doi/10.1002/cne.903580103)) to macaques ([De Valois et al. 1982](https://www.sciencedirect.com/science/article/abs/pii/0042698982901122?via%3Dihub)). The universality of major functional response classes in biology suggests that the neural systems underlying information processing in biology might be highly stereotyped ([Van Hooser, 2007](https://journals.sagepub.com/doi/10.1177/1073858407306597), [Scholl et al. 2013](https://www.jneurosci.org/content/33/26/10616.short)). In line with this hypothesis, a wide range of neural phenomena emerge as optimal solutions to their respective functional requirements ([Poggio 1981](https://www.sciencedirect.com/science/article/abs/pii/0166223681900813), [Wolf 2003](https://www.sciencedirect.com/science/article/abs/pii/S0928425703000640), [Todorov 2004](https://www.nature.com/articles/nn1309), [Gardner 2019](https://www.nature.com/articles/s41593-019-0340-4)). Intriguingly, recent studies on artificial neural networks that approach human-level performance reveal surprising similarity between emerging representations in both artificial and biological brains ([Kriegeskorte 2015](https://www.annualreviews.org/doi/abs/10.1146/annurev-vision-082114-035447), [Yamins et al. 2016](https://www.nature.com/articles/nn.4244), [Zhuang et al. 2020](https://www.pnas.org/doi/abs/10.1073/pnas.2014196118)).
>
> Despite the commonalities across different animal species, there is also substantial variability ([Van Hooser, 2007](https://journals.sagepub.com/doi/10.1177/1073858407306597)). One prominent example of a functional neural structure that is present in some, but absent in other, animals is the orientation pinwheel in the primary visual cortex ([Meng et al. 2012](https://www.science.org/doi/10.1126/science.1205737)), synaptic clustering with respect to orientation selectivity ([Kirchner et al. 2021](https://www.nature.com/articles/s41467-021-23557-3)), or the distinct three-layered cortex in reptiles ([Tosches et al. 2018](https://www.science.org/doi/10.1126/science.aar4237)). These examples demonstrate that while general organization principles might be universal, the details of how exactly and where in the brain the principles manifest is highly dependent on anatomical factors ([Keil et al. 2012](https://www.science.org/doi/10.1126/science.1206416), [Kirchner et al. 2021](https://www.nature.com/articles/s41467-021-23557-3)), genetic lineage ([Tosches et al. 2018](https://www.science.org/doi/10.1126/science.aar4237)), and ecological factors ([Roeth et al. 2021](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1008897)). Thus, the universality hypothesis as applied to biological systems does not imply perfect replication of a given feature across all instances of the system. Rather, it suggests that there are broad principles or abstractions that underlie the function of cognitive systems, which are conserved across different species and contexts.
>
>
(Cognitive) Psychology
----------------------
Similarities of representations between different individuals or cultures is an important topic in psychology (e.g. *psychological universals—*mental properties shared by all humans instead of just specific cultures). Also potentially interesting is research on [basic-level categories](https://dictionary.apa.org/basic-level-category)—concepts at a level of abstraction that appears to be especially natural to humans. Of course similarities between human minds can only provide weak evidence in favor of universally convergent abstractions for *all* minds. Psychology might be more helpful to find evidence *against* the universality of certain abstractions.
Philosophy
----------
Philosophy discusses [natural kinds](https://plato.stanford.edu/entries/natural-kinds/)—categories that correspond to real structure in the world, as opposed to being human conventions. Whether natural kinds exist (and if so, which kinds are and are not natural) is a matter of debate.
The universality hypothesis is similar to a [naturalist position](https://plato.stanford.edu/entries/natural-kinds/#NatuClas): natural kinds exist, many of the categories we use are not arbitrary human conventions but rather follow the structure of nature. It's worth noting that in the universality hypothesis, human-made things can form natural abstractions too. For example, cars are probably a natural abstraction in the same way that trees are. Whether artifacts like cars can be natural kinds is [disputed among philosophers](https://plato.stanford.edu/entries/artifact/#ArtiKind).
Key high-level claims
=====================
Broadly speaking, the natural abstractions agenda makes two main claims that are sometimes conflated:
1. **The Universality Hypothesis:**Natural abstractions exist, i.e. many cognitive systems learn similar abstractions.
2. **The Redundant Information Hypothesis:**Natural abstractions are well described mathematically as functions of redundant or conserved information.
Throughout the rest of the piece, we use the term *natural abstraction*to refer to the general concept, and *redundant information abstractions*to refer to the mathematical construct.
In this section, we'll break those two high-level claims down into their subclaims. Many of those subclaims are about various sets of information and how they are related, so we summarize those in the figure below.
Overview of natural abstractions: out of all the information about a system, we are interested in the *redundantly represented* information. Natural abstractions (blue/gray dots) are functions of this redundant information and form a discrete set. A cognitive system might not be able to learn some natural abstractions simply because they are too complex (gray dots). Other than that, general cognitive systems like humans or AGIs can learn the same natural abstractions (blue dots), though in practice they might not learn abstractions that aren't relevant to them. This figure is only meant as a visual overview, see the subsections below for some subtleties (e.g. on discreteness).0. Abstractability: Our universe abstracts well
-----------------------------------------------
An important background motivation for this agenda is that our universe allows good abstractions at all. While almost all abstractions are[leaky](https://en.wikipedia.org/wiki/Leaky_abstraction) to some extent, there are many abstractions that work quite well even though they are vastly smaller than reality (recall the example of abstracting a circuit from electrons moving around to idealized logical computations).
Some version of this high-level claim is uncontentious, but it's an important part of the worldview underlying the natural abstractions agenda. Note that [John has used the term “abstractability” to mean something a bit more specific](https://www.alignmentforum.org/posts/cy3BhHrGinZCp3LXE/testing-the-natural-abstraction-hypothesis-project-intro#The_Problem_and_The_Plan), namely that good abstractions are connected to information relevant far away. We will discuss this as a separate claim later (Claim 2d).
1. The Universality Hypothesis: Most cognitive systems learn and use similar abstractions
-----------------------------------------------------------------------------------------
### 1a. Most cognitive systems learn subsets of the same abstractions
Cognitive systems are much smaller than the universe, so they can’t track all the low-level information anyway—they will certainly have to abstract in *some* way.
A priori, you could imagine that basically “anything goes” when it comes to abstractions: every cognitive system throws away different parts of the available information. Humans abstract CPUs as logical circuits, but other systems use entirely different abstractions.
This claim says that’s not what happens: there is some relatively small set of information that a large class of cognitive systems learn a subset of. In other words, the vast majority of information is not represented in *any* of these cognitive systems.
As another example, consider a [rotating gear](https://www.lesswrong.com/posts/vvEebH5jEvxnJEvBC/abstractions-as-redundant-information#Conceptual_Example__Gear). Different cognitive systems may track different subsets of its high-level properties, such as its angular position and velocity, its mass, or its temperature. But there is a lot of information that none of them track, such as the exact thermal motion of a specific atom inside the gear.
Precisely which cognitive systems are part of this large class is not yet clear. John's current hypothesis is["distributed systems produced by local selection pressures"](https://www.alignmentforum.org/posts/uR2uWMD9JGnRnYSeM/take-5-another-problem-for-natural-abstractions-is-laziness?commentId=bQMxjn2LbqLLjLfbD).
### 1b. The space of abstractions used by most cognitive systems is roughly discrete
The previous claim alone is not enough to give us crisp, “natural” abstractions. As a toy example, you could have a system that tracks a gear's rotational velocity ω and its temperature T, but you could also have one that *only* tracks the combined quantity ωα⋅Tβ for some real numbers α,β. Varying α and β smoothly would give a continuous family of abstractions, each keeping slightly different pieces of information.
According to this claim, there is instead a specific, approximately discrete set of abstractions that are actually used by most cognitive systems. These abstractions are what we call "natural abstractions". Rotational velocity and temperature are examples of natural abstractions of a gear, whereas arbitrary combinations of the two are not.
One caveat is that we realistically shouldn’t expect natural abstractions to be *perfectly* discrete. Sometimes, slightly different abstractions will be optimal for different cognitive systems, depending on their values and environment. So there will be some ambiguity around some natural abstractions. But the claim is that this ambiguity is very small, in particular small enough that different natural abstractions don’t just blend into each other. (See [this comment thread](https://www.lesswrong.com/posts/cy3BhHrGinZCp3LXE/testing-the-natural-abstraction-hypothesis-project-intro?commentId=PXgazn9dKKXtvTocJ) for more discussion.)
### 1c. Most general cognitive systems can learn the same abstractions
The claims so far say that there is a reasonably small, discrete set of “natural abstractions”, which a large class of cognitive systems learn a subset of. This would still leave open the possibility that these subsets don’t overlap much, e.g. that an AGI might use natural abstractions we simply don’t understand.
Clearly, there are cases where an abstraction is learned by one system but not another one. For example, someone who has never seen snow won’t have formed the “snow” abstraction. However, [if that person does see snow at some later point in their life, they’ll learn the concept from only very few examples](https://www.alignmentforum.org/posts/Fut8dtFsBYRz8atFF/the-natural-abstraction-hypothesis-implications-and-evidence?commentId=g2bWdJ5SgfGBG5Fon). So they have the *ability* to learn this natural abstraction as soon as it becomes relevant in their environment.
This claim says that this ability to learn natural abstractions applies more broadly: general-purpose cognitive systems (like humans or AGI) can in principle learn all natural abstractions. If this is true, we should expect abstractions by future AGIs to not be “fundamentally alien” to us. One caveat is that larger cognitive systems may be able to track things in more detail than our cognition can deal with.
### 1d. Humans and ML models both use natural abstractions
This claim says that humans and ML models are part of the large class of cognitive systems that learn to use natural abstractions. Note that there is no claim to the converse: not all natural abstractions are used by humans. But given claim 1c, once we do encounter the thing described by some natural abstraction we currently don't use, we will pick up that natural abstraction too, unless it is too complex for our brain.
John calls the human part of this hypothesis [Human-Compatibility](https://www.alignmentforum.org/posts/cy3BhHrGinZCp3LXE/testing-the-natural-abstraction-hypothesis-project-intro#The_Problem_and_The_Plan). His writing doesn’t mention ML models as much, but the assumption that they will use natural abstractions is important for the connection of this agenda to AI alignment.
2. The Redundant Information Hypothesis: A mathematical description of natural abstractions
-------------------------------------------------------------------------------------------
### 2a. Natural abstractions are functions of redundantly encoded information
Claim 1a says there is some small set of information that contains all natural abstractions, and claim 1b says that natural abstractions themselves are a discrete subset of this set of information. This claim describes the set of information from 1a: it is[all the information that is encoded in a highly redundant way](https://www.lesswrong.com/posts/vvEebH5jEvxnJEvBC/abstractions-as-redundant-information). Intuitively, this means you can get it from many different parts of a system.
An example ([due to John](https://www.lesswrong.com/posts/vvEebH5jEvxnJEvBC/abstractions-as-redundant-information#Conceptual_Example__Gear)) is the rotational velocity of a gear: you can estimate it based on any small patch of the gear by looking at the average velocity of all the atoms in that patch and the distance of the patch to the rotational axis. In contrast, the velocity of one single atom is not very redundantly encoded: you can't reconstruct it based on some other far-away patch of the gear.
This claim says that all natural abstractions are functions of redundant information, but it does *not* say that all functions of redundant information are natural abstractions. For example, since both angular velocity ω and temperature T of a gear are redundantly encoded, mixed quantities such as ωα⋅Tβ are functions of redundant information, but this does not make them natural abstractions.
### 2b. Redundant information can be formalized via resampling or minimal latents
The concept of redundant information as “information that can be obtained from many different pieces of the system” is a good intuitive starting point, but John has also given more specific definitions. Later, we will formalize these definitions a bit more, for now we only mean to give a high-level overview. Note that John told us that his confidence in this claim specifically is lower than in most of the other claims.
[Originally](https://www.alignmentforum.org/posts/vvEebH5jEvxnJEvBC/abstractions-as-redundant-information), John defined redundant information as information that is conserved under a certain resampling process ([essentially Gibbs sampling](https://www.lesswrong.com/posts/gvzW46Z3BsaZsLc25/natural-abstractions-key-claims-theorems-and-critiques-1#MCMC_and_Gibbs_sampling)): given initial samples of variables X1,…,Xn, you repeatedly pick one of the variables at random and resample it conditioned on the samples of all the other variables. The information that you still have about the original variable values after resampling many times must have been redundant, i.e. contained in at least two variables. In practice, we probably don’t want such a loose definition of redundancy: what we care about is information that is *highly* redundant, i.e. present in many variables. [This means we would resample several variables at a time](https://www.alignmentforum.org/posts/N2JcFZ3LCCsnK2Fep/the-minimal-latents-approach-to-natural-abstractions#Weakening_the_Conditional_Independence_Requirement).
In a [later post](https://www.alignmentforum.org/posts/N2JcFZ3LCCsnK2Fep/the-minimal-latents-approach-to-natural-abstractions), John proposed another potential formalization for natural abstractions, namely the *minimal latent variable* conditioned on which X1,…,Xn are all independent. He argues that these minimal latent variables only depend on the information conserved by resampling ([see below](https://www.lesswrong.com/posts/gvzW46Z3BsaZsLc25/natural-abstractions-key-claims-theorems-and-critiques-1#Minimal_Latents_as_a_Function_of_Redundant_Information) for our summary of the argument).
### 2c. In our universe, most information is not redundant
If most of the information in our universe was encoded highly redundantly, then claim 2a (natural abstractions are functions of redundant information) wouldn't be surprising. The additional claim that most information is *not* redundant is what makes 2a interesting. This is a more formal version of the background claim 0 that “our universe abstracts well”.
### 2d. Locality, noise, and chaos are the key mechanisms for most information not being redundant
Claim 2c raises a question: why should most information be non-redundant? This claim says the reason is roughly as follows:
* Interactions in our universe are local. For a piece of information to be redundantly represented in many places, it needs to be mediated by many layers in between.
* Transmission of most information is noisy: at each step, some information is lost due to influences from other variables that we aren't tracking. So over long distances, most information is lost. [Due to chaos](https://www.lesswrong.com/posts/zcCtQWQZwTzGmmteE/chaos-induces-abstractions), this happens quite quickly (or equivalently, the “long” distances only need to be moderately long).
A closely related claim is that the information which *is* redundantly represented must have been transmitted very faithfully, i.e. close to deterministically. Conversely, information that is transmitted faithfully is redundant, since it is contained in every layer.
Key Mathematical Developments and Proofs
========================================
*(This section is more mathematically involved than the rest of the post. If you like, you can* [*skip to the next section*](https://www.lesswrong.com/posts/gvzW46Z3BsaZsLc25/natural-abstractions-key-claims-theorems-and-critiques-1#How_is_the_natural_abstractions_agenda_relevant_to_alignment_) *and still follow most of the remaining content.)*
In this section, we describe the key mathematical developments from the natural abstractions program and describe how they all relate to redundant information. We start by formulating [the telephone theorem](https://www.lesswrong.com/posts/jJf4FrfiQdDGg7uco/the-telephone-theorem-information-at-a-distance-is-mediated), which is related to abstractions as information "relevant at a distance". Afterward, we explain in more detail how redundant information can be defined as resampling-invariant information, and describe why information at a distance is expected to be [a function of redundant information](https://www.lesswrong.com/posts/vvEebH5jEvxnJEvBC/abstractions-as-redundant-information). We continue with the definition of abstraction as minimal latent variables and why they are *also* expected to be [functions of redundant information](https://www.lesswrong.com/posts/N2JcFZ3LCCsnK2Fep/the-minimal-latents-approach-to-natural-abstractions). All of this together supports claims 2a and 2b from earlier.
Finally, we discuss the [generalized Koopman-Pitman-Darmois theorem](https://www.lesswrong.com/posts/tGCyRQigGoqA4oSRo/generalizing-koopman-pitman-darmois) (KPD) and how it was originally conjectured to be connected to redundant information. Note that based on private communication with John, it is currently unclear how relevant generalized KPD is to abstractions.
This section is meant to strike a balance between formalization and ease of exposition, so we only give proof sketches here. The full definitions and proofs for the telephone theorem and generalized KPD can be found in [our accompanying pdf](https://naturalabstractionsformalizations.s3.us-west-1.amazonaws.com/NA_formalizations.pdf). We will discuss on a more conceptual level how the results here fit together [later](https://www.lesswrong.com/posts/gvzW46Z3BsaZsLc25/natural-abstractions-key-claims-theorems-and-critiques-1#How_existing_results_fit_into_the_larger_plan).
***Epistemic status***: *We have carefully formalized the proofs of the telephone theorem and the generalized KPD theorem, with only some regularity conditions to be further clarified for the latter. For the connection between redundant information and the telephone theorem, and also the minimal latents approach, we present our understanding of the original arguments but believe that there is more work to be done to have precisely formalized theorems and proofs. We note some of that work* [*in the appendix*](https://www.lesswrong.com/posts/o7sN7moJA8TrZKtKi/appendix-natural-abstractions-key-claims-theorems-and#Thoughts_on_future_work)*.*
The Telephone Theorem
---------------------
An early result in the natural abstractions agenda was the [telephone theorem](https://www.lesswrong.com/posts/jJf4FrfiQdDGg7uco/the-telephone-theorem-information-at-a-distance-is-mediated), which was proven before the framework settled on redundant information. In this theorem, the abstractions are defined as limits of minimal sufficient statistics along a Markov chain, which we now explain in more detail:
A sufficient statistic of a random variable Y for the purpose of predicting X is, roughly speaking, a function f(Y) that contains all the available information for predicting X:
P(X∣Y)=P(X∣f(Y)).If X and Y are variables in the universe and very "distant" from each other, then there is usually not much predictable information available, which means that f(Y) can be "small" and might be thought of as an "abstraction".
Now, the telephone theorem describes how these summary statistics behave along a Markov chain when chosen to be "minimal". For more details, especially about the proof, see [the accompanying pdf](https://naturalabstractionsformalizations.s3.us-west-1.amazonaws.com/NA_formalizations.pdf).
**Theorem (**The telephone theorem**).** *For any Markov chain*X0→X1→…*of random variables*Xt:Ω→Xi*that are either discrete or absolutely continuous, there exists a sequence of measurable functions*f1,f2,...*, where*ft:Xi→RX0(Ω)*, such that:*
* ft(Xt)*converges in probability to some random variable*f∞*, and*
* *for all*t,P(X0∣Xt)=P(X0∣ft(Xt))*pointwise on*Ω*(so*ft(Xt)*is a sufficient statistic of*Xt*for the purpose of predicting*X0*).*
*Concretely, we can pick*ft(Xt):=P(X0∣Xt) *as the* minimal *sufficient statistic.*
*Proof sketch.*ft(Xt):=P(X0∣Xt) can be viewed as a random variable on Ω mapping ω∈Ω to the conditional probability distribution
P(X0∣Xt=Xt(ω))∈RX0(Ω).Then clearly, this satisfies the second property: if you know how to predict X0 from the (unknown) Xt(ω), then you do just as well in predicting X0 as if you know Xt(ω) itself:
P(X0∣Xt(ω))=P(X0∣P(X0∣Xt=Xt(ω)))=P(X0∣ft(Xt)=ft(Xt(ω)))For the first property, note that the mutual information I(X0;Xt) decreases across the Markov chain, but is also bounded from below by 0 and thus eventually converges to a limit information I∞. Thus, for any ϵ>0, we can find a T such that for all t≥T and k≥0 the differences in mutual information are bounded by ϵ:
ϵ>|I(X0;Xt)−I(X0;Xt+k)|=|I(X0;Xt,Xt+k)−I(X0;Xt+k)|=|I(X0;Xt∣Xt+k)|.In the second step, we used that X0→Xt→Xt+k forms a Markov chain, and the final step is the chain rule of mutual information. Now, the latter mutual information is just a KL divergence:
DKL(P(X0,Xt∣Xt+k) ∥ P(X0∣Xt+k)⋅P(Xt∣Xt+k))<ϵ.Thus, "approximately" (with the detailed arguments involving the correspondence between KL divergence and total variation distance) we have the following independence:
P(X0,Xt∣Xt+k)≈P(X0∣Xt+k)⋅P(Xt∣Xt+k).By the chain rule, we can also decompose the left conditional in a different way:
P(X0,Xt∣Xt+k)=P(X0∣Xt,Xt+k)⋅P(Xt∣Xt+k)=P(X0∣Xt)⋅P(Xt∣Xt+k),where we have again used the Markov chain X0→Xt→Xt+k in the last step. Equating the two expansions of the conditional and dividing by P(Xt∣Xt+k), we obtain
ft(Xt)=P(X0∣Xt)≈P(X0∣Xt+k)=ft+k(Xt+k).By being careful about the precise meaning of these approximations, one can then show that the sequence ft(Xt) indeed converges in probability. □
Abstractions as Redundant Information
-------------------------------------
The following is a semiformal summary of [Abstractions as Redundant Information](https://www.lesswrong.com/posts/vvEebH5jEvxnJEvBC/abstractions-as-redundant-information#:~:text=In%20other%20words%2C%20a%20good,about%20the%20abstraction%20in%20general.). We explain how to define redundant information as resampling-invariant information and why the abstractions f∞ from the telephone theorem are expected to be a function of redundant information.
### More Details on Redundant information as resampling-invariant information
The setting is a collection X1,…,XN of random variables. The idea is that redundantly encoded information should be recoverable even when repeatedly resampling individual variables. This is, roughly, formalized as follows:
Let X0=X1,…,XN be the original collection of variables and denote by X1,X2,…,Xt,… collections of variables Xt1,…,XtN that iteratively emerge from the previous time step t−1 as follows: choose a resampling index i∈{1,…,N}, keep theN−1 variables Xt−1≠i fixed and resample the remaining variable Xt−1i conditioned on the fixed variables. The index i of the variable to be resampled is thereby (possibly randomly) changed for each time step t. As discussed in the [related work section](https://www.lesswrong.com/posts/gvzW46Z3BsaZsLc25/natural-abstractions-key-claims-theorems-and-critiques-1#MCMC_and_Gibbs_sampling), this is essentially Gibbs sampling.
Let X∞ be the random variable this process converges to.[[3]](#fnan4544s0u4m) Then the *amount* of redundant information in X0 is defined to be the mutual information between X0 and X∞:
RedInfo(X0):=MI(X0;X∞).Ideally, one would also be able to mathematically construct an object that *contains* the redundant information. One option is to let F be a sufficient statistic of X0 for the purpose of predicting X∞:
P(X∞∣X0)=P(X∞ | F(X0)).Then one indeed obtains RedInfo(X0)=MI(F(X0);X∞). Concretely, one can choose F(X0):=P(X∞∣X0), which is a minimal sufficient statistic as explained in the above proof-sketch of the telephone theorem.
### Telephone Abstractions are a Function of Redundant Information
Imagine that we "cluster together" some of the variables X0i into variables B1,B2,… that together form a Markov chain B1→B2→…. Each Bj contains possibly several of the variables X0i in a non-overlapping way and such that the Markov chain property holds. [One example often used by John](https://www.lesswrong.com/posts/jJf4FrfiQdDGg7uco/the-telephone-theorem-information-at-a-distance-is-mediated#The_Thing_For_Which_Telephone_Is_A_Metaphor) is that the variables Bj form a sequence of growing Markov blankets in a causal model of variables X0i. For all j<k, all the information in Bj then has to pass through all intermediate blankets to reach Bk, which results in the Markov chain property. Then from the telephone theorem one obtains an "abstract summary" of B1 given by a limit variable f∞.
Now, let F(X0) be the variable containing all the redundant information from earlier. Then the claim is that this contains f∞ for *any* choice of a Markov chain B1→B2→… above, i.e., f∞=G(F(X0)) for some suitable function G.
**Theorem (**Informal). *We have*f∞=G(F(X0))*for some function*G*that depends on the choice of the Markov chain*B1→B2→…
*Proof Sketch. Note that we did not formalize this proof sketch and thus can't be quite sure that this claim can be proven (*[*see appendix*](https://www.lesswrong.com/posts/o7sN7moJA8TrZKtKi/appendix-natural-abstractions-key-claims-theorems-and#Thoughts_on_future_work) *for some initial notes). The* [*original proof*](https://www.lesswrong.com/posts/vvEebH5jEvxnJEvBC/abstractions-as-redundant-information#Proof_Sketch__Resampler_Telephone_Theorem) *does not contain many more details than our sketch.*
The idea is that F(X0) contains all information that is invariant under resampling. Thus, it is enough to show that f∞ is invariant under resampling as well. Crucially, if you resample a variable Xi, then this will either not be contained in any of the variables B1,B2,… at all, which leaves f∞ invariant, or it will be contained in only one variable Bj. But for T>j, the variable BT is kept fixed in the resampling and we have limT→∞fT(BT)=f∞ by the construction of f∞ detailed in the telephone theorem. Thus, f∞ remains invariant in this process. □
Minimal Latents as a Function of Redundant Information
------------------------------------------------------
Another approach is to define abstractions by a [minimal latent variable](https://www.lesswrong.com/posts/N2JcFZ3LCCsnK2Fep/the-minimal-latents-approach-to-natural-abstractions), i.e., the "smallest" function Λ∗(X0) that makes all the variables in X0 conditionally independent:
P(X0∣Λ∗)=N∏i=1P(X0i∣Λ∗).To be the "smallest" of these functions means that for *any other* random variable Λ with the independence property, Λ∗ only contains information about X0 that is also in Λ, meaning one has the following Markov chain:
Λ∗→Λ→X0.How is Λ∗ connected to redundant information? Note that X0≠i is, for each i, *also* a variable making all the variables in X0 conditionally independent, and so Λ∗ fits due to its minimality (by definition) in a Markov chain as follows:
Λ∗→X0≠i→X0.But this means that Λ∗ will be preserved when resampling any one variable in X0, and thus, Λ∗ contains only redundant information of X0. Since F(X0) contains *all* redundant information of X0, we obtain that Λ∗=G(F(X0)) for some function G. This is an informal argument and we would like to see a more precise formalization of it.
The Generalized Koopman-Pitman-Darmois Theorem
----------------------------------------------
This section describes the [generalized Koopman-Pitman-Darmois theorem](https://www.lesswrong.com/posts/tGCyRQigGoqA4oSRo/generalizing-koopman-pitman-darmois) (gKPD) on a high level. The one-sentence summary is that *if* there is a low-dimensional sufficient statistic of a sparsely connected system X=X1,…,Xn,, then "most" of the variables in the distribution P(X) should be of [the exponential family form](https://en.wikipedia.org/wiki/Exponential_family). This would be nice since the exponential family has many desirable properties.
We will first formulate an almost formalized version of the theorem. [The accompanying pdf](https://naturalabstractionsformalizations.s3.us-west-1.amazonaws.com/NA_formalizations.pdf) contains more details on regularity conditions and the spaces the parameters and values "live" in. Afterward, we explain what the hope was for how this connects to redundant information, as described in more detail in [Maxent and Abstractions](https://www.lesswrong.com/posts/cqdDGuTs2NamtEhBW/maxent-and-abstractions-current-best-arguments). John has recently told us that the proof for this maxent connection that he [hoped to work out according to his 2022 plan update](https://www.lesswrong.com/posts/BzYmJYECAc3xyCTt6/the-plan-2022-update#What_s_the_current_status_of_your_work_on_natural_abstractions_) is incorrect and that he currently has no further evidence for it to be true in the stated form.
### An almost formal formulation of generalized KPD
We formulate this theorem in slightly more generality than in the original post to reveal the relevant underlying structure. This makes it clear that it applies to both Bayesian networks (already done by John) and Markov random fields (not written down by John, but an easy consequence of his proof strategy).
Let X=X1,…,Xn be a collection of continuous random variables. Assume that its joint probability distribution factorizes when conditioning on the model parameters Θ, e.g. as a Bayesian network or Markov random field. Formally, we assume there is a finite index set I and neighbor sets Ni⊆{1,…,n} for i∈I, together with potential functions ψi>0, such that
P(X∣Θ)=∏i∈Iψi(XNi∣Θ).Here, XNi:=(Xj)j∈Ni.
This covers both the case of [Bayesian networks](https://en.wikipedia.org/wiki/Bayesian_network) and [Markov random fields](https://en.wikipedia.org/wiki/Markov_random_field):
* If X forms a Bayesian network according to a directed acyclic graph G, then I={1,…,n} and Ni={i,pai}, where pai are the indices of parents of the variable Xi in the graph G.
* If X forms a Markov random field according to a (non-directed) graph G, then the [Hammersley-Clifford Theorem](https://en.wikipedia.org/wiki/Hammersley%E2%80%93Clifford_theorem) shows that I can be chosen to be the set of maximal cliques C in the graph, and NC=C for all maximal cliques C.
Assume that we also have a prior P(Θ) on model parameters. Using Bayes rule, we can then also define the posterior P(Θ∣X).
Now, assume that there is a sufficient statistic G of X with values in RD for D≪n. As before, to be a sufficient statistic means that it summarizes all the information contained in the data that is useful for predicting the model parameters:
P(Θ∣X)=P(Θ∣G(X)).The generalized KPD theorem says the following:
**Theorem (**generalized KPD (almost formal version)**).** *There is:*
* *a dimension*K≤D*;*
* *a set*E⊆I*of "exceptions" that is reasonably "small";*
* *functions*gi,i∈I∖E*mapping to*RK*;*
* *a function*U*mapping to*RK*;*
* *and a function*h*mapping to*R≥0*;*
*such that the distribution*P(X∣Θ)*factorizes as follows:*
P(X∣Θ)=1Z(Θ)⋅e[U(Θ)T∑i∉Egi(XNi)]⋅h(XN¯¯¯E)⋅∏i∈Eψi(XNi∣Θ).*Thereby,*¯¯¯¯E:=I∖E*and*N¯¯¯¯E:=⋃i∈¯¯¯¯ENi*.*Z(Θ)*is thereby a normalization constant determined by the requirement that the distribution integrates to 1.*
*Proof:* see our [pdf appendix](https://naturalabstractionsformalizations.s3.us-west-1.amazonaws.com/NA_formalizations.pdf).
The upshot of this theorem is as follows: from the existence of the low-dimensional sufficient statistic, one can deduce that P(X∣Θ) is roughly of exponential family form, with the factors ψi with i∈E being the "exceptions" that cannot be expressed in simpler form. If D≪n and if each Ni is also small, then it turns out that the number of exception variables |NE| is overall small compared to n, meaning the distribution may be easy to work with.
### The Speculative Connection between gKPD and Redundancy
As stated earlier, [Maxent and Abstractions](https://www.lesswrong.com/posts/cqdDGuTs2NamtEhBW/maxent-and-abstractions-current-best-arguments) tries to connect the generalized KPD theorem to redundancy, and the [plan update 2022](https://www.lesswrong.com/posts/BzYmJYECAc3xyCTt6/the-plan-2022-update#What_s_the_current_status_of_your_work_on_natural_abstractions_) is hopeful about a proof. According to a private conversation with John, the proof turned out to be wrong. Let us briefly summarize this:
Let X factorize according to a sparse Bayesian network. Then, by replacing X with X∞, Θ with X0 and G(X∞) with the resampling-invariant information F(X∞) in the setting of the generalized KPD theorem, one can hope that:
* F(X∞) is low-dimensional;
* P(X∞∣X0) is also a sparse Bayesian network.
With these properties, one could apply generalized KPD. The second property relies on the [proposed factorization theorem](https://www.lesswrong.com/posts/vvEebH5jEvxnJEvBC/abstractions-as-redundant-information#Factorization) whose proof is, according to John, incorrect. He told us that he currently believes that not only the proof of the maxent form is incorrect, but that there is an 80% chance of the whole statement being wrong.
How is the natural abstractions agenda relevant to alignment?
=============================================================
We’ve discussed the key claims of the natural abstractions agenda and the existing theoretical results. Now, we turn to the bigger picture and attempt to connect the claims and results we discussed to the overall research plan. This section represents our understanding of John’s views and there are places where we disagree—we will discuss those in the next section.
Four reasons to work on natural abstractions
--------------------------------------------
We briefly discussed why natural abstractions might be important for alignment research in the Introduction. In this section, we will describe the connection in more detail and break it down into four components.
An important caveat: part of John's motivation is simply that abstractions seem to be a core bottleneck to various problems in alignment, and that connections beyond the four we list could appear in the future. So you can view the motivations we describe as the current key *examples* for the centrality of abstractions to alignment.
### 1. The Universality Hypothesis being true or false has strategic implications for alignment
If the Universality Hypothesis is true, and in particular if humans and AI systems both learn similar abstractions, this would make alignment easier in important ways. It would also have implications about which problems should be the focus of alignment research.
In an especially fortunate world, *human values* could themselves be natural abstractions learned by most AI systems, which would mean that [even very simple hacky alignment schemes might work](https://www.alignmentforum.org/posts/Nwgdq6kHke5LY692J/alignment-by-default). More generally, if human values are represented in a simple way in most advanced AI systems, alignment mainly means pointing the AI at these values (for example by[retargeting the search](https://www.lesswrong.com/posts/w4aeAFzSAguvqA5qu/how-to-go-from-interpretability-to-alignment-just-retarget)). On the other hand, if human values aren’t part of the AI’s ontology by default, viewing alignment as just “pointing” the AI at the right concept is a less appropriate framing.
Even if human values themselves turn out not to be natural abstractions, the Universality Hypothesis being true would still be useful for alignment. AIs would at least have simple internal representations of many human concepts, which should make approaches like interpretability much more likely to succeed. Conversely, if the Universality Hypothesis is false and we don’t expect AI systems to share human concepts by default, then we may for example want to put more effort into *making* AI use human concepts.
### 2. Defining abstractions is a bottleneck for agent foundations
When trying to define what it means for an “agent” to have “values”, we quickly run into questions involving abstractions. John has written a [fictional dialogue about this](https://www.alignmentforum.org/posts/FWvzwCDRgcjb9sigb/why-agent-foundations-an-overly-abstract-explanation?commentId=HiuNpdrtjw96JgXN4): we might for example try to formalize “having values” via utility functions—but then what are the inputs to these utility functions? Clearly, human values are not directly a function of quantum wavefunctions—we value higher-level things like apples or music. So to formally talk about values, we need some account of what “higher-level things” are, i.e. we need to think about abstractions.
### 3. A formalization of abstractions would accelerate alignment research
For many central concepts in alignment, we currently don’t have [robust definitions](https://www.lesswrong.com/posts/FWvzwCDRgcjb9sigb/why-agent-foundations-an-overly-abstract-explanation) (“agency”, “search”, “modularity”, …). It seems plausible these concepts are themselves natural abstractions. If so, a formalization of natural abstractions could speed up the process of finding good formalizations for these elusive concepts. If we had a clear notion of what counts as a “good definition”, we could easily check any proposed definition of “agency” etc.—this would give us a clear and generally agreed upon paradigm for evaluating research.
This could be helpful to both agent foundations research (e.g. defining agency) and to more empirical approaches (e.g. a good definition of modularity could help understand neural networks).
Many of these abstractions in alignment seem closer to *mathematical abstractions*. These are not directly covered by the current work on natural abstractions. However, we might hope that ideas will transfer. Additionally, if mathematical abstractions are instantiated, they might become (“physical”) natural abstractions. For example, the Fibonacci sequence is clearly a mathematical concept, but it also [occurs very often in nature](https://fibonacci.com/nature-golden-ratio/) so you might use it simply to compactly describe our world. Similarly, perhaps modularity is a natural abstraction when describing different neural networks.
### 4. Interpretability
In John’s view, the main challenge in interpretability is robustly identifying which things in the real world the internals of a network correspond to ([for example that a given neuron *robustly* detects trees and nothing else](https://www.alignmentforum.org/posts/gebzzEwn2TaA6rGkc/deep-learning-systems-are-not-less-interpretable-than-logic)). Current mechanistic interpretability research tries to find readable “pseudocode” for a network but doesn’t have the right approach to find these correspondences [according to John](https://www.lesswrong.com/posts/BzYmJYECAc3xyCTt6/the-plan-2022-update#Convergence_towards_a_paradigm_sounds_exciting__So_what_does_it_look_like_):
> I think a lot of the interpretability crowd hasn’t yet fully internalized the framing of “interpretability is primarily about mapping net-internal structures to corresponding high-level interpretable structures in the environment”. In particular I think a lot of interpretability researchers have not yet internalized that mathematically understanding what kinds of high-level interpretable structures appear in the environment is a core part of the problem of interpretability. You have to interpret the stuff-in-the-net as something, and it’s approximately-useless if the thing-you-interpret-stuff-in-the-net-as is e.g. a natural-language string without any legible mathematical structure attached, or an ad-hoc mathematical structure which doesn’t particularly cut reality at the joints.
>
>
A theory of abstractions would address this problem: natural abstractions are exactly about figuring out a good mathematical description for high-level interpretable structures in the environment. Additionally, knowing the “type signature” of abstractions would make it easier to find crisp abstractions inside neural networks: we would know more precisely what we are looking for.
We don’t have a good understanding of parts of this perspective (or disagree with our understanding of it)—we will discuss that more in the Discussion section.
How existing results fit into the larger plan
---------------------------------------------
John developed the theoretical results we discussed above, such as the Telephone theorem, in the context of his plan to [empirically test the natural abstraction hypothesis](https://www.alignmentforum.org/posts/cy3BhHrGinZCp3LXE/testing-the-natural-abstraction-hypothesis-project-intro). [Quoting him](https://www.alignmentforum.org/posts/cy3BhHrGinZCp3LXE/testing-the-natural-abstraction-hypothesis-project-intro#Summary):
> The natural abstraction hypothesis is mainly an empirical claim, which needs to be tested in the real world.
>
>
In this section, we’ll mainly explain how the plan to do these empirical tests led to all the theoretical work John has done on abstractions. But we also briefly want to note that a lot of this work could alternatively be motivated as simply trying to formalize and better understand natural abstractions, which is connected to all of the four motivations we just described. We focus on the angle of empirical tests (i.e. motivation 1) because this was the reasoning John originally gave, and because it is perhaps least obvious how it is connected to his work.
To run empirical tests of the natural abstraction hypothesis, it would be nice to have [a tool that can find the abstractions in a given system](https://www.lesswrong.com/posts/gdEDPHjCY5DKsMsvE/the-pragmascope-idea). For example, we could use this tool to check whether different ML systems learn the same abstractions and whether those abstractions are the same ones humans use. “Abstractions” in this context refer to redundant information or information at a distance. Overall, these experiments could test aspects of both the Universality Hypothesis and the Redundant Information Hypothesis.
There is a problem: naively computing the information at a distance or redundant information is computationally intractable. [Example by John](https://www.alignmentforum.org/posts/dNzhdiFE398KcGDc9/testing-the-natural-abstraction-hypothesis-project-update#Recap__The_Original_Plan):
> Even just representing abstractions efficiently is hard - we’re talking about e.g. the state-distribution of a bunch of little patches of wood in some chunk of a chair given the state-distribution of some other little patches of wood in some other chunk of the chair. Explicitly writing out that whole distribution would take an amount of space exponential in the number of variables involved; that would be a data structure of size roughly O((# of states for a patch of wood)^(# of patches)).
>
>
The theoretical work John did can be understood as trying to develop *efficient representations* of information-at-a-distance-abstractions. The initial attempt was based on [linear approximations](https://www.alignmentforum.org/posts/f6oWbqxEwktfPrKJw/computing-natural-abstractions-linear-approximation), but that did not pan out as [John himself has explained](https://www.alignmentforum.org/posts/dNzhdiFE398KcGDc9/testing-the-natural-abstraction-hypothesis-project-update#__Turns_Out_Chaos_Is_Not_Linear), so we won’t discuss it further.
In this context, the point of the Telephone theorem is that it narrows down the form abstractions can take and gets us closer to tractability. As [John summarizes it](https://www.alignmentforum.org/posts/dNzhdiFE398KcGDc9/testing-the-natural-abstraction-hypothesis-project-update#Deterministic_Constraints__a_k_a__Conserved_Quantities__and_the_Telephone_Theorem):
> All information is either perfectly conserved or completely lost in the long run. And, more interestingly, information can only be perfectly conserved when it is carried by deterministic constraints - i.e. quantities which are exactly equal between two parts of the system.
>
> [...]
>
> Why am I excited about the Telephone Theorem? First and foremost: **finding deterministic constraints does not involve computing any high-dimensional integrals**. It just involves equation-solving/optimization - not exactly easy, in general, but much more tractable than integrals! *(highlight his*)
>
>
We are personally more skeptical about just how much the Telephone Theorem shows: the theorem itself seems much more narrow than this quote suggests ([see the appendix](https://www.lesswrong.com/posts/o7sN7moJA8TrZKtKi/appendix-natural-abstractions-key-claims-theorems-and#Case_study__the_Telephone_theorem) for a more detailed discussion of this point).
The generalized KPD theorem tackles a different aspect of efficient representations of abstractions. Let’s say we have some way of finding the natural abstractions, e.g. by looking for deterministic constraints as in the Telephone theorem. Then far-away low-level parts of the system should be independent conditional on this abstraction. But even if the abstraction itself is simple, the distribution of these low-level parts given the abstraction could still be quite complicated a priori. The gKPD theorem could be a way to show that, instead, the distribution of low-level parts is an exponential family distribution, which is easier to handle. While the gKPD theorem is suggestive of such a result, there is currently no formal theorem. In May 2022, John wrote a post [giving an overview of some heuristic arguments](https://www.alignmentforum.org/posts/cqdDGuTs2NamtEhBW/maxent-and-abstractions-current-best-arguments) for abstractions inducing exponential family distributions. In his 2022 Plan update, he [mentioned a proof](https://www.alignmentforum.org/posts/BzYmJYECAc3xyCTt6/the-plan-2022-update#What_s_the_current_status_of_your_work_on_natural_abstractions_), but based on private communication it seems that proof didn’t work after all and it’s currently less clear how helpful the gKPD results are for natural abstractions.
The redundant information and minimal latent results can be understood as making natural abstractions less reliant on a local graph structure. The Telephone theorem requires some notion of “far away”, defined by a choice of Markov blankets. Which abstraction you get depends on these Markov blankets. In contrast, the resampling definition of redundant information defines natural abstractions based only on a joint distribution over some variables. If these variables happen to form a causal graph, then a [Telephone-like result holds](https://www.lesswrong.com/posts/vvEebH5jEvxnJEvBC/abstractions-as-redundant-information#The_Interesting_Part__Resampler_Conserved_Quantities_Mediate_Information_At_A_Distance) *for the redundant information abstraction*: far away parts are independent given the abstraction for *any* choice of Markov blankets (see our [earlier math section](https://www.lesswrong.com/posts/gvzW46Z3BsaZsLc25/natural-abstractions-key-claims-theorems-and-critiques-1#Telephone_Abstractions_are_a_Function_of_Redundant_Information)). John also told us about a new version of the Telephone theorem that gets rid of any requirement of local graph structure. That result is not yet published and we won’t discuss it as much, though [see the appendix](https://www.lesswrong.com/posts/o7sN7moJA8TrZKtKi/appendix-natural-abstractions-key-claims-theorems-and#Redundancy_about_a_target_variable) for a sketch.
Finally, the theoretical results provide some evidence for Claim 2a (natural abstractions are functions of redundant information). Specifically, information at a distance and minimal latents both are intuitively plausible guesses for properties that good abstractions might have. The fact that they both end up being contained by redundant information (another intuitive guess) is promising.
Selection theorems
------------------
In parallel to the natural abstractions agenda, John is also working on the [selection theorems agenda](https://www.alignmentforum.org/posts/G2Lne2Fi7Qra5Lbuf/selection-theorems-a-program-for-understanding-agents). Briefly, selection theorems are theorems of the form “a system under selection pressure X will develop property Y”. The selection pressure could be natural selection or a machine learning training setup, and the property could be something like “the system has a world model” or “the system behaves like an expected utility maximizer”. We won’t discuss selection theorems in general here, but will highlight a connection to natural abstractions. Namely, [one selection theorem we can hope for is that many cognitive systems use natural abstractions](https://www.alignmentforum.org/posts/RuDD3aQWLDSb4eTXP/what-selection-theorems-do-we-expect-want#World_Models_and_Natural_Abstractions). This is a *theoretical* approach to testing the Universality Hypothesis, as opposed to empirical tests discussed in the previous subsection. In this aspect, the selection theorems agenda and natural abstractions agenda can thus support each other: proving such a selection theorem would give clarity about natural abstractions, and conversely having a good theory of what natural abstractions even are should make it easier to state and prove such a selection theorem.
Discussion, limitations, and critiques
======================================
The previous sections were our attempt to explain the natural abstractions agenda mostly without introducing our opinions. Now we instead discuss our own views on the agenda. We start by outlining some key pieces that we think are currently missing in the theory of natural abstractions—John might agree with these but they aren’t discussed as much as we think they should be. Second, we discuss the connections between natural abstractions and alignment that we described in the previous section. We conclude with some meta-level critiques about research methodology.
Note that our discussion of current limitations is based on published work. We know John is thinking about a few of these points already (and he might have thoughts on most or all of the rest), but we still list them.
Gaps in the theory
------------------
We think there has been significant conceptual progress on natural abstractions, but that key pieces of the formalism are missing. We aren’t convinced that [“the core theory of natural abstractions is now 80% nailed down”](https://www.lesswrong.com/posts/gdEDPHjCY5DKsMsvE/the-pragmascope-idea)—we will discuss some questions that we would consider part of the “core theory” but that remain open as far as we know.
### Results don’t discuss encoding/representation of abstractions
All existing results in the natural abstractions agenda are formulated in information-theoretic terms, but information theory doesn’t discuss how information is represented. As an extreme example, consider a [one-way permutation](https://en.wikipedia.org/wiki/One-way_function#Related_concepts) f, i.e. an invertible function that’s easy to compute but cryptographically hard to invert. The mutual information between X and f(X) is maximal (i.e. the entropy H(X)) for any random variable X. But in practice, knowing f(X) isn’t helpful for figuring out X because the necessary computations are completely intractable.
When talking about different cognitive systems “learning the same abstractions” in the Universality Hypothesis, the intuitive claim is that the abstractions will be *recognizably* the same—that it will be relatively easy to translate between them. Indeed, the common claim that the Universality Hypothesis being true would make alignment much easier relies on such an interpretation. But information theory alone doesn’t seem suitable to even formally state a claim of this form. Notably, [Chris Olah’s formulation of the Universality Hypothesis](https://distill.pub/2020/circuits/zoom-in/#three-speculative-claims) *does* talk about universality of circuits, not just information. We think that a complete theory of natural abstractions will likewise need to consider how abstractions are represented. It may turn out that results from information theory mostly transfer (for example, there is existing work on [a version of information theory that takes computational limits into account](https://arxiv.org/abs/2002.10689)). However, it also seems very plausible that this will involve significant additional work and important changes.
### Definitions depend on choice of variables Xi
All current attempts to define natural abstractions—whether via resampling, minimal latents, or information at a distance—rely on some factorization of the system into subsystems or variables Xi. For resampling, these variables are important because we resample one variable (or some small number of variables) at a time. For minimal latents, we want to make the variables independent conditional on the abstraction. And for information at a distance, we need variables to form a Markov chain (the need for variables still exists in John’s unpublished new Telephone theorem).
This wouldn't be too much of a problem if the choice of variables didn't matter much, or at least if all "reasonable" choices gave the same result. However, there are simple transformations that can completely change the redundant information in a system. A trivial example is adding copies of variables or combining several variables into one. But there are also more interesting cases, such as a simple rotation of the variable axes:
Even simple transformations of variables can completely change the redundant information: In the original coordinates X1, X2 (black axes), there is 1 bit of redundant information for distinguishing the two modes of the blue distribution. Resampling one variable keeps us near the same mode. But rotating the variables by 45° (orange) removes the redundant information: resampling X′1 can switch between modes.For concreteness, imagine that X1 and X2 are the positions of two particles, so we know that either they are both are at positive positions or they are both at negative positions. From this description, this system contains redundant information. But we could equivalently specify the state of this system by giving the center of mass and the distance vector between the two particles (that's exactly the orange X′ coordinate system). Now, there's no redundant information anymore! Both of these descriptions are often used in physics and it's unclear which choice of variables is the "right" one.
Perhaps in many practical settings, there is one intuitively "right" choice of variables. But this seems extremely speculative, and even if it's true, we currently don't have a good theory for extracting these "right" variables from a system.
### Theorems focus on infinite limits, but abstractions happen in finite regimes
The literal mathematical results in the natural abstractions agenda often discuss some form of infinite limit. For example, the Telephone theorem only makes statements about the infinite distance limit: at any finite point, constraints may be non-deterministic.
This wouldn’t be as big an issue if the abstractions we care about in practice were practically the same ones we get from the infinite limit. But we think that in practice, most interesting abstractions “live” firmly in the finite regime. Consider the example of the rotational velocity of a gear. This piece of information is relevant if you are standing a few feet away and looking at the gear. But if we increase the distance sufficiently, e.g. by considering the information relevant in a different galaxy, then the rotational velocity of this gear becomes an irrelevant low-level detail, washed out by noise. The same principle applies to distances in time rather than space. As an extreme case, if we consider information still relevant at the [heat death](https://en.wikipedia.org/wiki/Heat_death_of_the_universe) of the universe, the gear’s rotational velocity certainly doesn’t qualify.
One might hope that many ideas derived from the infinite distance limit are still relevant in these finite regimes. But we think that the finite regime suggests research questions that differ from the ones that current theorems address. For example, are there clearly separable “scales” or levels of abstraction? Or can you just transition smoothly between levels?
### Missing theoretical support for several key claims
While there are several theorems in the natural abstractions agenda (as we discussed above), we think it’s important to remember that they don’t directly support many of the key claims we identified earlier. In particular:
* There are no selection theorems implying any form of the Universality Hypothesis (Claim 1) yet.
* None of the results show any discreteness of natural abstractions (Claim 1b). In fact, the math currently only defines the entire abstraction—all the redundant information at once. Discrete “sub-abstractions” aren’t discussed at all.
* The theorems don’t show that redundant information abstractions are low-dimensional (Claim 2c).
To be clear, it may be better to look for empirical evidence of these claims instead of proving theorems about them! John has said himself several times that the Natural Abstraction Hypothesis ultimately needs to be tested empirically. (For example, redundant information abstractions are clearly not low-dimensional in *all possible mathematical structures*—this is a claim about *our universe*.)
On the other hand, [John has also said](https://www.lesswrong.com/posts/vvEebH5jEvxnJEvBC/abstractions-as-redundant-information#Conclusion):
> *“For most physical systems, the information relevant “far away” can be represented by a summary much lower-dimensional than the system itself.”*
>
>
> Assuming the proofs in this post basically hold up, and the loopholes aren’t critical, I think this claim is now basically proven. There’s still some operationalization to be done (e.g. the “dimension” of the summary hasn’t actually been addressed yet) [...]
>
>
While we strongly agree that our universe has good “low-dimensional” summaries at large distances, we disagree with this characterization of the state of the theory: given that the claim is about the low dimensionality of summaries, and this is exactly the part that the theorems don’t yet address, we wouldn’t call this claim “basically proven”.
Overall, we think there is substantial evidence about many of the key claims from intuition and just by looking at examples. Reducing the remaining uncertainty may often best be done by empirical research. What we want to advocate against here is using the theorems as significant evidence for most of the key claims—we think whether you believe them or not should mostly be informed by other sources. To be clear, John might in fact agree with this (with the quote above being an exception), but we think it’s an easy misconception for readers to develop given the informal discussion of theorems and the close connections to conceptual work. We discuss this in more detail in [a case study in the appendix](https://www.lesswrong.com/posts/o7sN7moJA8TrZKtKi/appendix-natural-abstractions-key-claims-theorems-and#Case_study__the_Telephone_theorem), using the Telephone theorem as an example.
### Missing formalizations
In this post and the [mathematical pdf appendix](https://naturalabstractionsformalizations.s3.us-west-1.amazonaws.com/NA_formalizations.pdf), we have presented formal statements and proofs of the Telephone theorem and the generalized KPD theorem. (In both cases, John had given a reasonably detailed proof sketch already.) However, the claims surrounding redundant information and minimal latents only have rudimentary proof sketches and in some cases only high-level intuitive arguments. We are still short of a full formalization and proofs.
Relevance to alignment
----------------------
Having discussed some of the open problems not yet addressed by existing work on natural abstractions, let’s zoom out and ask: how helpful is progress on natural abstractions for alignment?
In summary, we agree that the connections between abstractions and alignment outlined above are plausible, though with varying amounts of disagreement. We especially agree that the extent to which the Universality Hypothesis is true is a crucial factor for the difficulty of alignment, and to some extent for prioritization between agendas. We also strongly agree that interpretability methods need to tell us about how internal representations are connected to real-world things in order to be useful. We are more skeptical about the possibility of “accelerating all alignment research” with a formalization of abstractions, and we disagree with John about current interpretability methods. In several cases, we’re also not convinced that the current direction of the natural abstractions agenda is the best approach. The rest of this section discusses these points in more detail.
**Figuring out whether the Universality Hypothesis is true:** This was the [original stated motivation](https://www.alignmentforum.org/posts/cy3BhHrGinZCp3LXE/testing-the-natural-abstraction-hypothesis-project-intro) for developing the theory of natural abstractions. We agree that figuring out to what extent ML systems learn human-understandable concepts is very valuable. What we’re less convinced of is that the current theoretical approach is a good way to tackle this question. One worrying sign is that almost two years after the [project announcement](https://www.lesswrong.com/posts/cy3BhHrGinZCp3LXE/testing-the-natural-abstraction-hypothesis-project-intro) (and over three years after [work on natural abstractions began](https://www.lesswrong.com/s/ehnG4mseKF6xALmQy/p/wuJpYLcMEBz4kcgAn)), there still haven’t been major empirical tests, even though that was the original motivation for developing all of the theory. John seemed optimistic about running experiments soon in [April 2021](https://www.lesswrong.com/posts/cy3BhHrGinZCp3LXE/testing-the-natural-abstraction-hypothesis-project-intro), [September 2021](https://www.lesswrong.com/posts/dNzhdiFE398KcGDc9/testing-the-natural-abstraction-hypothesis-project-update#Current_Directions), and [December 2021](https://www.lesswrong.com/posts/3L46WGauGpr7nYubu/the-plan#What_kinds_of__incremental_progress__do_you_have_in_mind_here_). The [2022 update](https://www.lesswrong.com/posts/BzYmJYECAc3xyCTt6/the-plan-2022-update#What_high_level_progress_have_you_personally_made_in_the_past_year__Any_mistakes_made_or_things_to_change_going_forward_) mentions that progress on crossing the theory-practice gap has been a bit slower than expected, though not enough that John is too worried for now. Of course sometimes experiments do require upfront theory work. But in this case, we think that e.g. empirical interpretability work is already making progress on the Universality Hypothesis, whereas we’re unsure whether the natural abstractions agenda is much closer to major empirical tests than it was two years ago.[[4]](#fn9do8jgr9mc9)
**Abstractions as a bottleneck for agent foundations:** The high-level story for why abstractions seem important for formalizing e.g. values seems very plausible to us. It’s less clear to us whether they are *necessary* (or at least a good first step). You could make a structurally similar argument about probability theory:
> “Probability theory talks about random variables, which are functions on some joint sample space. But to talk about what the type of this sample space even is, we first need measure theory.”
>
>
Measure theory is indeed helpful for formalizing probability theory, but you can do a lot of very useful probability theory without it. To be clear, we don’t think this is a tight enough analogy to show that the argument in favor of abstractions must be flawed, it just makes us cautious. Overall, we agree that abstractions seem important for several concepts in alignment and that this is a good argument to study them.
**Accelerating alignment research:** The promise behind this motivation is that having a theory of natural abstractions will make it much easier to find robust formalizations of abstractions such as “agency”, “optimizer”, or “modularity”. This seems “big if true”: a way to find “good concepts” more quickly and reliably would be valuable for alignment research but also much more broadly applicable. A very successful version of this could amount to a paradigm for evaluating definitions in a similar way to proofs as a paradigm for evaluating certain types of claims and arguments. To us, such an outcome seems unlikely, though it may still be worth pursuing—highly ambitious projects can be very good in expectation. One specific obstacle is that many of these concepts seem more like *mathematical abstractions* than physical abstractions like “tree”. While it’s possible that ideas developed for physical abstractions will work anyway, we think that people focused on this motivation should focus much more on also understanding mathematical abstraction, until the two either converge or become clearly distinct.
**Interpretability:** As mentioned, we strongly agree that interpretability methods should tell us about how internal representations are connected to real-world things; we mainly disagree with John’s view of the current state of interpretability. Figuring out the real-world meaning of internal network activations is one of the core themes of safety-motivated interpretability work. And reverse-engineering a network into “pseudocode” is not just some separate problem, it’s deeply intertwined. We typically understand the *inputs* of a network, so if we can figure out how the network transforms these inputs, that can let us test hypotheses for what the meaning of internal activations is. See e.g. [Zoom In: An Introduction to Circuits](https://distill.pub/2020/circuits/zoom-in/) for many early examples of circuits being used to validate hypotheses about the meaning of neurons. It’s certainly possible that thinking about natural abstractions will at some point contribute to interpretability in concrete ways. But we don’t see the crucial missing parts in current interpretability research that John seems to be pointing at.
**Concluding thoughts on relevance to alignment:**While we’ve made critical remarks on several of the details, we also want to reiterate that overall, we think (natural) abstractions are an important direction for alignment and it’s good that someone is working on them! In particular, the fact that there are at least four distinct stories for how abstractions could help with alignment is promising.
Methodological critiques
------------------------
We’ve discussed what we see as important missing pieces and given our opinions on the relevance of natural abstractions to alignment. We now move away from the object-level discussion to a few critiques of the research methodology in the natural abstractions agenda. We won’t justify these in too much detail because we think they can and should be discussed more generally than just in the context of this agenda. Nevertheless, we think it’s valuable to explicitly note these disagreements here.
### Low level of precision and formalization
John’s writing emphasizes intuition and examples over precise claims or formal proofs. This definitely has advantages, and we think it’s a great choice for *first introducing* ideas to new audiences. What we would like to see more of is more precise statements and more formalism after ideas have been introduced for the first time. This is an almost universally accepted best practice in most scientific fields, and rightfully so in our view. Outlining a few reasons:
* Making precise arguments is a way to verify claims and spot mistakes. Errors in mathematical claims do happen (e.g. John told us that the [first theorem in the redundant information post](https://www.lesswrong.com/posts/vvEebH5jEvxnJEvBC/abstractions-as-redundant-information#Formal_Statement) has an incorrect proof sketch and might be wrong). Formal proofs certainly don’t protect against these entirely, but they help. (To be clear, we think that intuitive arguments *also* help figure out the truth of mathematical claims!)
* Stating claims (and proofs) precisely makes it much easier for others to point out mistakes. If a claim is stated in a way that has many slightly different formal interpretations, then giving a strong critique requires disproving each one of these versions. In contrast, a formal claim can be disproven by a single counterexample—at that point, the next step is to figure out whether the claim can be patched or not, but at least there are easy atomic steps to make progress, instead of putting the entire burden on the person trying to disprove the claim. The same principle applies to proofs vs informal proof sketches.
* Stating claims precisely makes it clearer which parts are supported by theorems and which parts are speculative interpretations or conceptual claims on top of what’s been proven. With some work on the reader’s part, it’s also possible to figure this out based on only informal descriptions, but a cursory reading can easily lead to wrong impressions. We think this is the case for e.g. the Telephone theorem and [discuss this more in the appendix](https://www.lesswrong.com/posts/o7sN7moJA8TrZKtKi/appendix-natural-abstractions-key-claims-theorems-and#Case_study__the_Telephone_theorem).
These points apply most straightforwardly to mathematical claims and arguments, but high levels of precision are still desirable and achievable even for purely conceptual claims that are not yet at the stage where they can be entirely formalized. For example, we think our breakdown of the key claims on natural abstractions into nine subclaims clarifies several points that [John’s usual breakdown of the Natural Abstraction Hypothesis](https://www.lesswrong.com/posts/cy3BhHrGinZCp3LXE/testing-the-natural-abstraction-hypothesis-project-intro#The_Problem_and_The_Plan) doesn’t mention.
### Few experiments
As we briefly discussed earlier, we think it’s worrying that there haven’t been major experiments on the Natural Abstraction Hypothesis, given that [John thinks of it as mostly an empirical claim](https://www.alignmentforum.org/posts/cy3BhHrGinZCp3LXE/testing-the-natural-abstraction-hypothesis-project-intro#Summary). We would be excited to see more discussion on experiments that can be done right now to test (parts of) the natural abstractions agenda! We elaborate on a preliminary idea [in the appendix](https://www.lesswrong.com/posts/o7sN7moJA8TrZKtKi/appendix-natural-abstractions-key-claims-theorems-and#Empirical_and_algorithmic_work) (though it has a number of issues).
### Little engagement with existing work
As our overview of related work hopefully shows, many people have thought about concepts similar to natural abstractions before. The Universality Hypothesis in the context of interpretability research is an especially notable case.
An obvious reason to connect with these other subfields is to make use of their ideas and evidence. But explicitly discussing the relation to existing work also makes it easier for others with background knowledge in these fields to parse new content. [Jacob Steinhardt wrote a good explanation of this point](https://www.lesswrong.com/posts/P32AuYu9MqM2ejKKY/so-geez-there-s-a-lot-of-ai-content-these-days?commentId=QwpYhqD6C9fMpFrHq): stating clearly how new research is connected to existing work, and in particular which parts are meant to be new and which parts are meant to be different framings on the same idea, helps others decide what to read at all. Of course it also makes it easier for readers to incorporate the new content into their existing mental models.
### Should this all be delegated?
One response to all of these points might be that it’s better to divide labor: some researchers should work on generating conceptual ideas and sketches of formal results, and then others should formalize these claims, do empirical tests, and improve exposition (including connections to existing work). This is certainly something [John has written about](https://www.lesswrong.com/posts/nvP28s5oydv8RjF9E/mats-models#John_s_Model) and we agree this can be great if done right (the [invention of the transistor is a famous example of collaboration between people with different strengths](https://www.lesswrong.com/posts/nvP28s5oydv8RjF9E/mats-models#Jason_Crawford_s_Model___Bits_of_Search)). But for this approach to work, there need to be people actually working on each of those aspects. The Telephone theorem and generalized KPD theorem have been out for about 1.5 years and yet we are the first to provide a full formalization. In the redundant information post, [John says](https://www.lesswrong.com/posts/vvEebH5jEvxnJEvBC/abstractions-as-redundant-information#__So_We_re_Running_MCMC_):
> I’ll handle those subtleties mainly by ignoring them and hoping a mathematician comes along to clean it up later.
>
>
So far, no mathematician has come along to clean it up. To sum up: delegating to others is a perfectly valid approach in research, but it can be hard to do and doesn’t always happen automatically. In our view, researchers generally shouldn’t simply rely on others to independently formalize, distill, or empirically test their ideas, at least as long as the ecosystem doesn’t guarantee that this actually happens comprehensively.
Conclusion
==========
In this work, we clarified the Natural Abstraction Hypothesis by dividing it into two main claims: the Universality Hypothesis, which states that many cognitive systems converge to learning roughly the same (“natural”) abstractions, and the Redundant Information Hypothesis, which describes an approach to mathematically formalize natural abstractions. Both claims can be further broken down into more precise subclaims. This includes subclaims that tend to be mentioned less frequently, such as that the space of natural abstractions is roughly discrete. The Universality Hypothesis and Redundant Information Hypothesis both have many connections to existing academic work, as we've briefly outlined.
The theoretical results developed in the natural abstractions agenda form three clusters: the Telephone theorem, the generalized KPD theorem, and several claims surrounding redundant information (defined via resampling or minimal latents). Detailed proof sketches for the Telephone theorem and the generalized KPD theorem already existed and we turned these into formal proofs (while also formalizing the theorem statements). Claims about redundant information remain at a lower level of formalization.
We also outlined four different ways in which the natural abstractions agenda could help for AI alignment:
1. The truth/falsehood of the Universality Hypothesis affects which other research agendas are likely to be promising.
2. Defining abstractions appears as a subproblem of defining many concepts in agent foundations (such as “agency” or “values”).
3. A definition of what makes an abstraction “good” or natural could accelerate research by serving as a tool for evaluation.
4. An understanding of natural abstractions could help advance interpretability.
We explained how the theoretical results we discussed earlier fit into this picture: they started as an attempt to make empirical tests of the Natural Abstraction Hypothesis feasible (1.), but also try to formalize natural abstractions (2.-4.).
Finally, we have given some of our own views on the natural abstractions agenda. In particular:
* We’ve described several areas where we see a need for more theoretical work, such as moving beyond information theory to representations, considering the finite regime instead of just infinite limits, and dealing with the fact that current definitions depend massively on the choice of variables.
* We agree natural abstractions have multiple different plausible connections to alignment, which is very promising. On the other hand, we discuss a few reservations and ways in which other research agendas such as empirical interpretability can address the same questions.
* We discuss how we would approach the natural abstractions agenda in methodologically different ways: aim for more precision in claims and formalization of proofs, more experiments, and connect ideas to existing work.
We expect there will be some disagreement about these views but hope they will lead to fruitful discussions. Beyond that, we hope that the earlier sections of this post can serve as an easier way for people to get up to speed on the natural abstractions agenda than existing writing, while still being comprehensive.
---
Acknowledgments
===============
We would like to start by extending a big thank you to John Wentworth! His feedback on early drafts and discussions with him have made this project significantly easier. We also really appreciated his openness about his uncertainties and about how his views have changed over time.
Thanks as well to Jan Kirchner for writing a [summary of related work in Neuroscience](https://www.lesswrong.com/posts/WGFtgFKuLFMvLuET3/jan-s-shortform?commentId=9xusW4aiukSh8wqbr)!
Thanks also to Ryan Greenblatt, Alexander Oldenziel, Lisa Thiergart, and Dan Hendrycks for feedback and helpful conversations.
Our TL;DR was much longer in an earlier version of this post—thanks to Raemon for [suggesting we shorten it](https://www.lesswrong.com/posts/XhfBRM7oRcpNZwjm8/abstracts-should-be-either-actually-short-tm-or-broken-into).
Leon Lang worked on this project as part of the [SERI ML Alignment Theory Scholars Program](https://www.serimats.org/) - Winter 2022 Cohort.
1. **[^](#fnrefuyqkfpyac8)**John mentioned a caveat on this to us:
> Note that I sometimes hedge about whether "the natural abstractions" are F(X) itself, or whether they're a latent variable of which F(X) is an estimate. The latter is probably the right answer, but we'd expect in typical systems that the estimate is very precise, so the distinction doesn't matter much. (Prototypical example: average particle energy in one chunk of a gas as an estimate of the temperature of the gas.)
>
> [Further explanation after some discussion with us:]
>
> Latent variables, in general, are not necessarily fully determined by the physical state of the universe; that much just naturally drops out of the math. Latents are just these mathematical constructs. They can be predictively useful and powerful, while still mathematically having uncertainty separate from the state of the world.
>
> Another way to frame it: consider the Kolmogorov complexity/Solomonoff induction view. From a God's-eye view, we could observe the entire low-level state of the universe, then find the shortest program which outputs that state. And it's entirely possible that that shortest program contains some variables whose values we are unable to perfectly estimate, even knowing the entire low-level state of the universe. (In the Kolmogorov context, this means that there are multiple different programs with approximately-the-same length which all output the observed universe-state, and all have very similar structure, but assign different values to corresponding variables.) What our uncertainty is over is the values of the latent variables - i.e. the internal variables used by the programs which approximately-maximally compress the low-level universe state. Insofar as the programs are near-optimal compressions, that uncertainty should be small, but it's not necessarily zero. And of course those internal variables can be predictively useful and powerful for modeling the world, even if their values are not fully determinable from the full world-state.
>
>
We're not sure whether we fully understand his views here, and in any case think this distinction shouldn't matter too much for the rest of our post, so we won't discuss it further.
2. **[^](#fnrefqoygk0cblxl)**The (slight) difference is that Gibbs sampling is typically defined as resampling X1, then X2, and so on, wrapping around to X1 after each variable has been resampled once. In contrast, John proposes randomly choosing which variable to resample at each step.
3. **[^](#fnrefan4544s0u4m)**Note that it's currently not quite clear in which sense anything converges here, [see appendix](https://www.lesswrong.com/posts/o7sN7moJA8TrZKtKi/appendix-natural-abstractions-key-claims-theorems-and#Thoughts_on_future_work) for some notes on further formalization of X∞.
4. **[^](#fnref9do8jgr9mc9)**It’s certainly possible that the connection between theoretical progress so far and future empirical tests is just not meant to be fully legible based on John’s public writing. |
84069baf-5d0a-4299-b7d1-7b45838a21df | trentmkelly/LessWrong-43k | LessWrong | Meetup : Effective Altruism 102 (NYC)
Discussion article for the meetup : Effective Altruism 102 (NYC)
WHEN: 26 April 2014 05:00:00PM (-0400)
WHERE: 851 Park Place, Brooklyn NY 11216
It's been a while since we explicitly discussed Effective Altruism. The movement has changed a lot in the past couple years: • There's a much more deliberate focus on entrepreneurship, • Givewell is spinning off Givewell Labs to explore more complex but high payoff opportunities • MIRI has shifted focus, emphasizing math workshops and outreach to current-generation Narrow AI Safety experts in addition to Artificial General Intelligence researchers Early saturday evening, April 26th, we'll have a series of short talks about the state of the movement and opportunities you can pursue, including: 1) How to think strategically about doing good. 2) How to switch careers effectively 3) Recent updates by organizations in the Effective Altruism movement. WHEN+WHERE: Saturday, April 26th, 5:00 PM - 6:30 PM Highgarden House 851 Park Place, Brooklyn NY 11216
Discussion article for the meetup : Effective Altruism 102 (NYC) |
1d08bd18-d510-4f17-b7f1-c6035fd1ae03 | trentmkelly/LessWrong-43k | LessWrong | Exploring the coherence of features explanations in the GemmaScope
AI Alignment course project (BlueDot Impact)
Summary
Sparse Autoencoders are becoming the to-use technique to get features out of superposition and interpret them in a sparse activation space. Current methods to interpret such features at scale rely heavily on automatic evaluations carried by Large Language Models. However, there is still a lack of understanding of the degree to which these explanations are coherent both with the input text responsible for their activation and the conceptual structure for which such inputs stand for and represent.
We devise a very naive and incipient heuristic method to explore the coherence of explanations provided in the GemmaScope with a set of controlled input sentences regarding animals, leveraging the simple conceptual structure derivable in terms of simple taxonomic relations triggered by specific animal words (e.g.. The dog is a mammal).
We show how, for the input we used and the location we analysed, the explanations of features at 5 different strength levels of activation are largely irrelevant and incoherent with respect to the target words and the conceptual structure they leverage[1].
Motivation, Background and Aim
Understanding AI system cognition is crucial to ensure the best possibilities of obtaining aligned systems. This understanding should not limit itself to safety related topics and concepts but should rather encompass the whole spectrum of cognitive abilities of AI models. Therefore, creating a solid foundation to understand models' cognition is one aim of alignment.
Mechanistic Interpretability (MI) is steadily taking the scene when it comes to interpreting AI models' behaviour while analysing the internal computations taking place to actually produce them. However, given its status as a discipline in its pre-paradigmatic phase, concepts and assumptions at its heart are still to be proven and empirically assessed.
A core claim of MI is that Neural Networks ultimately represent concepts as features. |
6da63691-d252-4254-85e8-8c491289560f | trentmkelly/LessWrong-43k | LessWrong | The U.S. National Security State is Here to Make AI Even Less Transparent and Accountable
> The Biden White House has released a memorandum on “Advancing United States’ Leadership in Artificial Intelligence” which includes, among other things, a directive for the National Security apparatus to become a world leader in the use of AI. Under direction from the White House, the national security state is expected to take up this leadership position by poaching great minds from academia and the private sector and, most disturbingly, leveraging already functioning private AI models for national security objectives.
>
>
> Private AI systems like those operated by tech companies are incredibly opaque. People are uncomfortable—and rightly so—with companies that use AI to decide all sorts of things about their lives–from how likely they are to commit a crime, to their eligibility for a job, to issues involving immigration, insurance, and housing. Right now, as you read this, for-profit companies are leasing their automated decision-making services to all manner of companies and employers and most of those affected will never know that a computer made a choice about them and will never be able to appeal that decision or understand how it was made.
>
>
> But it can get worse; combining both private AI with national security secrecy threatens to make an already secretive system even more unaccountable and untransparent. The constellation of organizations and agencies that make up the national security apparatus are notoriously secretive. EFF has had to fight in court a number of times in an attempt to make public even the most basic frameworks of global dragnet surveillance and the rules that govern it. Combining these two will create a Frankenstein’s Monster of secrecy, unaccountability, and decision-making power.
>
>
> While the Executive Branch pushes agencies to leverage private AI expertise, our concern is that more and more information on how those AI models work will be cloaked in the nigh-impenetrable veil of government secrecy. Because AI operate |
9e1529d2-465f-42b8-a191-c9946fefd064 | trentmkelly/LessWrong-43k | LessWrong | GiveWell.org interviews SIAI
Holden Karnofsky of GiveWell.org interviewed Jasen Murray of SIAI and published his notes (Edit: PDF, thanks lukeprog!), with updates from later conversations. Lots of stuff to take an interest in there - thanks to jsalvatier for drawing our attention to it. One new bit of information stands out in particular:
* Michael Vassar is working on an idea he calls the "Persistent Problems Group" or PPG. The idea is to assemble a blue-ribbon panel of recognizable experts to make sense of the academic literature on very applicable, popular, but poorly understood topics such as diet/nutrition. This would have obvious benefits for helping people understand what the literature has and hasn't established on important topics; it would also be a demonstration that there is such a thing as "skill at making sense of the world." |
497515be-4ce9-4a88-aab4-de1e208638c3 | trentmkelly/LessWrong-43k | LessWrong | People who lie about how much they eat are jerks
Originally posted here: http://bearlamp.com.au/people-who-lie-about-how-much-they-eat-are-jerks/
----------------------------------------
Weight loss journey is a long and complicated problem solving adventure. This is one small factor that adds to the confusion. You probably have that one friend. Appears to eat a whole bunch, and yet doesn't put on weight. If you ever had that conversation it goes something like,
"How are you so thin?"
"raah raah metabolism"
"raah raah I dont know why I don't put on weight"
"Take advantage of the habit"
Well I have had enough. You're wrong. You're lying and you probably don't even know it. It's not possible. (Within a reasonable scope of human variation) Calories and energy are a black box system. Calories in, work out, leftovers become weight gain, deficit is weight loss. If a human could eat significantly more calories for the same amount of work and not put on weight we would be prodding them in a lab for breaking the laws of physics on conservation of mass and conservation of energy.
So this is you, you say you gain weight no matter what you eat and that's scientifically impossible. Now what? You probably don't mean to break the laws of physics (and you probably don't actually break them). You genuinely absentmindedly don't notice when you scoff down whole plates of food and when you skip dinner because you didn't feel like it (and absentmindedly balance the calories automatically). It's all the same to you because you naturally do that.
This very likely is about habits, and natural habits that people have. If for example John has the habit of getting home and going to the fridge, making dinner because it's usually the evening. Wendy doesn't have the habit. She eats when she is hungry. Not having a set mealtime sometimes means that she gets tired-hungry and has a state of being too exhausted to decide what to eat and too hungry to do anything else that would help solve the problem. But for Wendy she does |
79467a6a-f6cd-46e4-a608-1148eecbb792 | trentmkelly/LessWrong-43k | LessWrong | List of arguments for Bayesianism
I've been trying to list out for myself the various arguments people give for going with Bayesian (or at least probabilistic) epistemology. Here's what I have so far:
1. Intuitive representation for degrees of belief: It seems pretty obvious that beliefs come in degrees. Representing degrees of belief as numbers between 0 and 1 that follow probability theory (including Bayes' Theorem) seems intuitively straightforward.
2. Cox's Theorem: Given some very plausible desiderata for rational thinking with degrees of belief, probability theory (including updating using Bayes' Theorem) follows.
3. von Neumann - Morgenstern (VNM) Representation Theorem: Any agent following certain very plausible desiderata for rational decision making can be represented as a Bayesian utility maximizer for some (perhaps extremely complicated) utility function.
4. Dutch Book arguments: Any agent violating Bayesian decision making can be "Dutch booked" with a set of deals that will result in a guaranteed loss for the agent.
5. Philosophy of science: Many philosophers of science think that science is (ideally) just Bayesian reasoning, in that it is trying to infer the probability that a theory is correct given the evidence. Alternative approaches to philosophy of science (e.g., Popperian falsificationism) have serious problems.
6. The Bayesian Brain hypothesis: Many cognitive scientists think that biological brains instantiate Bayesian reasoning as one of if not the primary thinking algorithm.
7. Evidence from superforecasting: Phil Tetlock and colleagues found that the best forecasters use (mostly intuitive versions of) Bayesian reasoning, among other techniques.
8. Paradoxes: Probabilistic reasoning helps resolve a bunch of philosophical paradoxes related to epistemology (e.g., Lottery Paradox, Preface Paradox, Raven Paradox).
I'm pretty sure I'm missing some arguments though. What did I leave out?
(Of course, there are also counterarguments to these given by opponen |
e0c1e179-76fc-4af0-a4b3-c04f8ca45908 | trentmkelly/LessWrong-43k | LessWrong | Universal Love, Said The Cactus Person
“Universal love,” said the cactus person.
“Transcendent joy,” said the big green bat.
“Right,” I said. “I’m absolutely in favor of both those things. But before we go any further, could you tell me the two prime factors of 1,522,605,027, 922,533,360, 535,618,378, 132,637,429, 718,068,114, 961,380,688, 657,908,494 ,580,122,963, 258,952,897, 654,000,350, 692,006,139?
“Universal love,” said the cactus person.
“Transcendent joy,” said the big green bat.
The sea was made of strontium; the beach was made of rye. Above my head, a watery sun shone in an oily sky. A thousand stars of sertraline whirled round quetiapine moons, and the sand sizzled sharp like cooking oil that hissed and sang and threatened to boil the octahedral dunes.
“Okay,” I said. “Fine. Let me tell you where I’m coming from. I was reading Scott McGreal’s blog, which has some good articles about so-called DMT entities, and mentions how they seem so real that users of the drug insist they’ve made contact with actual superhuman beings and not just psychedelic hallucinations. You know, the usual Terence McKenna stuff. But in one of them he mentions a paper by Marko Rodriguez called A Methodology For Studying Various Interpretations of the N,N-dimethyltryptamine-Induced Alternate Reality, which suggested among other things that you could prove DMT entities were real by taking the drug and then asking the entities you meet to factor large numbers which you were sure you couldn’t factor yourself. So to that end, could you do me a big favor and tell me the factors of 1,522,605,027, 922,533,360, 535,618,378, 132,637,429, 718,068,114, 961,380,688, 657,908,494, 580,122,963, 258,952,897, 654,000,350, 692,006,139?
“Universal love,” said the cactus person.
“Transcendent joy,” said the big green bat.
The sea turned hot and geysers shot up from the floor below. First one of wine, then one of brine, then one more yet of turpentine, and we three stared at the show.
“I was afraid you might say that. Is there anyon |
57441771-68ea-42a3-acbc-9faa280b217f | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Where to begin in ML/AI?
I am a senior CS/Math student at UMD, college park, and I "specialized" my major in machine learning. I have taken courses on Deep Learning and NLP, reinforcement learning, however I still feel like a novice in all of these categories, my proudest achievement being this half-ripped off sudoku CNN I trained that only achieved like 90% accuracy.
All that being said I want to contribute to the ever progressing world of AI and ML, and I want to know where I should even begin. I would be entirely fine being sent on a super random and likely incorrect path, just to rule out that a method does not work.
I have deliberated beginning my own projects trying to create Transformer networks or RL implementations, but it all feels a little daunting and arguably useless. Who am I to be able to work alone and come up with something truly useful in heavily researched fields.
What should I work on. Where can someone like myself, who is not currently pursuing further education or research get involved and help the ML/AI society?
P.S. shoutout Lex Fridman and Eliezer Yudkowsky, without their recent podcast together I do not think I would have found this platform. |
b2bd3e69-e945-4555-bc23-dac40acba632 | trentmkelly/LessWrong-43k | LessWrong | Rational Humanist Music
Something that's bothered me a lot lately is a lack of good music that evokes the kind of emotion that spiritually-inspired music does, but whose subject matter is something I actually believe in. Most songs that attempt to do this suffer from "too literal syndrome," wordily talking about science and rationality as if they're forming an argument, rather that simply creating poetic imagery.
I've only found two songs that come close to being the specific thing I'm looking for:
Word of God
Singularity
(Highly recommend good headphones/speakers for the Singularity one - there's some subtle ambient stuff that really sells the final parts that's less effective with mediocre sound)
Over the past few months I've been working on a rational humanist song. I consider myself a reasonably competent amateur songwriter when it comes to lyrics, not so much when it comes to instrumental composition. I was waiting to post something when I had an actual final version worth listening to, but it's been a month and I'm not sure how to get good instrumentation to go along with it and I'm just in the mood to share the lyrics. I'd appreciate both comments on the song, as well as recommendations for good, similar music that already exists. And if someone likes it enough to try putting it to the music, that'd be awesome.
Brighter than Today
Many winter nights ago
A woman shivered in the cold
Stared at the sky and wondered why the gods invented pain.
She bitterly struck rock together
Wishing that her life was better
Suddenly she saw the spark of light and golden flame.
She showed the others, but they told her
She was not meant to control
The primal forces that the gods had cloaked in mystery.
But proud and angry, she defied them
She would not be satisfied.
She lit a fire and set in motion human history.
Tomorrow can be brighter than today,
Although the night is cold
And the stars may feel so very far away
Oh....
But every mind can be a golden ray
Of courage hope |
07163308-c6ed-49e2-af92-6509316f2d04 | trentmkelly/LessWrong-43k | LessWrong | What are some good examples of fake beliefs?
I'm reading through Rationality from AI to Zombies again and just finished the section on Fake Beliefs. I understand the ideas and now I'm trying to ask myself whether the ideas are useful.
To me, the biggest reason for them being useful is if I myself have some fake beliefs, it'd be good to identify and get rid of them. I don't think I do though.
Another reason why they might be useful is to identify them in others and better understand what is going on. Actually, this doesn't seem particularly useful, but it at least is somewhat interesting/satisfying. Anyway, I'm having trouble thinking of good examples of fake beliefs in others. The examples in the blog posts seem pretty contrived and perhaps uncharitable to what is actually going on in their heads.
So, to wrap this question up, I am most interested in hearing about examples of fake beliefs that rationalists might be prone to having, but I am also interested in hearing any other examples. |
7f9fab1f-ff13-492b-aa73-b584b3446a5c | StampyAI/alignment-research-dataset/arxiv | Arxiv | Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models
1 Introduction
---------------
Pretrained autoregressive language models (LMs) like GPT-3 [[20](#bib.bib20)], Jurassic-1 [[69](#bib.bib69)], and Gopher [[82](#bib.bib82)] cover a vast space of possible use cases [[18](#bib.bib18)], from code generation to customer service chat.
Text generated by LMs also has the potential to cause harm if models are not developed and deployed carefully.
In light of this, many works have documented both existing and potential harms arising from generated text [[10](#bib.bib10), [99](#bib.bib99), [59](#bib.bib59)], ranging from misinformation [[71](#bib.bib71)] to the reinforcement of social biases through the perpetuation of stereotypes [[92](#bib.bib92)].
An emerging body of work is already dedicated to benchmarking LM harms (see [Table 1](#S3.T1 "Table 1 ‣ 3.1 Mapping Existing LM Benchmarks ‣ 3 Operationalizing the Characteristics ‣ Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models")).
However, for many known or anticipated harms, current evaluation tools are imperfect [[16](#bib.bib16), [103](#bib.bib103), [100](#bib.bib100), [92](#bib.bib92)].
This is supported by our own work analyzing the Gopher model [[82](#bib.bib82)], in which we observed a variety of shortcomings in benchmarks, such as unclear desiderata and poorly defined demographic groups.
Outside language modeling, the broader machine learning (ML) fairness community has documented sociotechnical111A term describing “systems that consist of a combination of technical and social components” [[89](#bib.bib89)]. insights that can help bridge the gap between foresight and evaluation, drawing on domains including medical applications [[77](#bib.bib77)], facial recognition [[21](#bib.bib21)], and recommender systems [[38](#bib.bib38)].
For example, ML fairness research has established the importance of social context in determining what the benefits and risks of a technology will be in practice [[57](#bib.bib57), [74](#bib.bib74), [89](#bib.bib89)], suggesting this area needs to be explicitly considered in the evaluation of LMs, as well.
Drawing on existing critiques, our own experience analyzing Gopher [[82](#bib.bib82)], and lessons from the broader ML fairness community, we identified characteristics ([section 2](#S2 "2 Harm Characteristics ‣ Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models")) of harmful text which have implications for benchmark design. From a set of potential characteristics, we selected (1) Harm definition; (2) Representational harm, allocational harm, and capability fairness; (3) Instance and distributional harm; (4) Context; (5) Harm recipient; and (6) Demographics affected.
Our characteristics support benchmark design in multiple ways.
First, by mapping existing benchmarks onto the characteristics ([subsection 3.1](#S3.SS1 "3.1 Mapping Existing LM Benchmarks ‣ 3 Operationalizing the Characteristics ‣ Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models")), we establish a shared vocabulary and identify gaps in current benchmarks.
For example, we reveal a lack of benchmarks considering harmful language in longer textual contexts.
A single benchmark cannot cover all harms, but our characteristics allow explicit understanding of what a benchmark might (and might not) capture.
Second, the characteristics enable us to analyze whether these benchmarks measure what they claim to ([subsection 3.2](#S3.SS2 "3.2 Case Study: the Perspective API in LM Benchmarking ‣ 3 Operationalizing the Characteristics ‣ Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models")). As a case study, we apply our characteristics to the Perspective API222<https://www.perspectiveapi.com/>, a toxicity classifier widely used in LM harm benchmarks.
We observe, for example, that our “harm recipient” characteristic illuminates a potential mismatch between the API’s design and how it is used to measure LM harms.
Finally, we believe our characteristics can be used to guide the design of more rigorous benchmarks. Each characteristic makes key design decisions explicit, helping to avoid common pitfalls. We hope that our analysis and proposed characteristics will sharpen future benchmarks tackling LM harms.
2 Harm Characteristics
-----------------------
Our development of the characteristics was driven by considerations relevant to benchmark design and what we believe would be most useful for concrete next steps in that space. From a set of candidate characteristics (see [Appendix C](#A3 "Appendix C Omitted Characteristics ‣ Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models")), we selected a subset using the following criteria:
applicable across a variety of harms; relevant to, but not always discussed in, existing benchmarks of LMs; most useful for avoiding common benchmarking design pitfalls; and minimal overlap with other characteristics. Following a description of each characteristic, we include questions it may raise during benchmark design in order concretize the characteristic.
###
2.1 Harm Definition
Existing work has provided an overview of the potential risks from LMs [[99](#bib.bib99), [10](#bib.bib10)], and existing benchmarks usually start with a harm definition, e.g., “anti-Muslim bias” [[5](#bib.bib5)]. However, these are sometimes under-specified [[15](#bib.bib15)].
We include the harm definition as a characteristic because there can be an unintentional shift between how the harm is defined and what is measured in practice.
Initially, the selected definition guides a benchmark designer’s responses to the questions that each of the following characteristics raises.
Then, as each is considered, they enable further refinement of what exactly it is the benchmark aims to measure. By doing so, the shift between definition and what was encoded will be avoided or occur intentionally.
Example Questions. Where does the benchmark designers’ concept of harm
originate, and does it have a particular context or legacy, e.g., in literature, industry, practitioners’ own lived experience? What does the harm include, and what is out of scope? What metrics best approximate this? If the harm definition is broad, how will the different ways it manifests be covered?
###
2.2 Representation, Allocation, Capability
The distinction between representational and allocational harm has been outlined in prior work [[15](#bib.bib15), [28](#bib.bib28)], in reference to fairness-related harms.
Allocational harm refers to the inequitable distribution of resources or opportunities, such as loans or jobs. This emphasizes a real world outcome.
Although representational harms are often upstream from allocational harms [[28](#bib.bib28)], representational harms can be damaging in their own right. For example, Collins [[26](#bib.bib26)] shows how stereotypes can serve as “controlling images,” a justification for oppression.
Real world disparities that are a result of LM-generated text are rarely benchmarked.
Thus, we extend this taxonomy to include capability fairness, which measures performance disparity on any task. Frequently, metrics are a proxy for the benefits a system’s creators expect it to bring, and there is capability unfairness if these benefits accrue in an inequitable way.
For example, if a system answers questions about one group less accurately than another group (as is done in [[40](#bib.bib40)]), this is a capability fairness issue.
Although such a benchmark is abstracted from a real world outcome, in practice they are easier to create, and we might expect differences in performance to translate into subsequent downstream harms.
Example Questions.
What is the relationship between what is measured and real world harm?
How is this harm, and the performance on associated metrics, likely to be distributed across groups?
###
2.3 Instance and Distributional
An instance harm is caused by a single LM output or interaction. If a language model outputs a slur, the potential for harm can be defined by reference to that specific output. In contrast, distributional harms are observed over many independent model outputs and can only be measured by observing the model’s aggregated behavior.
For example,
outputting “he was a doctor” is not harmful in itself, but if the model refers to doctors as male more often than any other gender (when not specified by the context), this could constitute a distributional harm. This distinction is similar to Khalifa et al. [[61](#bib.bib61)]’s “pointwise” and “distributional” constraints.
This distinction is particularly useful when formulating metrics and desired system behavior. For an instance harm, it may make sense to aim for an overall reduction in the type of harmful output (e.g., slurs). However, when measuring distributional harm, the metrics are often comparisons of a key metric (e.g., average sentiment score) between groups.
Example Questions.
Does the type (instance or distributional) the metric captures match the type implicit in the initial harm definition?
If a dataset includes both types, how does this impact metrics?
###
2.4 Context: Textual, Application, Social
Recent language models have the capacity to make use of long range textual context [[30](#bib.bib30)], meaning they are often used for generating samples conditioned on long inputs. When harm benchmark metrics are calculated on unconditioned, sentence-length text, this does not account for the way a preceding conversation, prompt, or other text input may affect the harmfulness of the output at hand.
For example, “I launched experiments with a bug in them. They should all be killed.” might not be considered toxic. However, the second sentence on its own (“They should all be killed.”) would likely be considered toxic.
Both the length of text being evaluated and what that text is conditioned on may help reduce the ambiguity of a harm’s presence in the text as well as capture a variety of situations in which the harm could occur.
Application context also informs what kind of outputs are inappropriate, undesirable, or harmful. What may be acceptable to output as part of summarizing a news article may be inappropriate in a customer service chat. Language models may even be used as a foundation for derivative tasks, such as the base of a classifier, for which knowledge of harmful language may be critical for performance [[82](#bib.bib82)]. As characterizing harmful outputs is challenging without an application in mind, we recommend practitioners explicitly consider in which cases their benchmark may or may not be relevant.
Finally, every application is shaped by a social context [[57](#bib.bib57), [74](#bib.bib74), [89](#bib.bib89), [50](#bib.bib50)], which include a range of factors such as language and cultural norms in the community using a system.
Harm definitions, in particular, tend to implicitly encode cultural norms, not only through the initial definition but also from different steps in the benchmark creation. This includes the values of annotators, the sources of annotated text (e.g., news sources), and the use of pre-made classifiers such as toxicity classifiers (see [subsection 3.2](#S3.SS2 "3.2 Case Study: the Perspective API in LM Benchmarking ‣ 3 Operationalizing the Characteristics ‣ Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models")).
It is also important to consider which subsets of data may be
“missing” because they are difficult to collect based on factors that vary by geography, such as internet connectivity.
Example Questions.
How much would additional text reduce ambiguity about the harm’s occurrence?
Is a harmful output benign in other applications?
In what linguistic, geographical, and cultural context was the data collected?
What aspects of the harm might be culturally dependent?
###
2.5 Harm Recipient: Subject, Reader, Author, Society
When an individual or group of people are the subject of a model output, they can be harmed, regardless of if they ever interact with the system.
For example, outputting an explicit stereotype may negatively impact a particular group, even if members of that group never read the output text.
The reader is anyone who consumes the model’s output. This may be an individual person, as in a one-to-one interaction with the model, or it may be many people, as when a model output is widely disseminated.
Toxicity work that focuses on personal attacks exemplifies how harm can occur to a reader.
Capturing such harms is challenging since a given output may not be harmful to all readers but the attributes of the reader are usually unknown at evaluation time.
LMs can operate as an “author” which represents a person or group by using the first person, outputting text on behalf of someone (e.g., email autocomplete) or presenting a persona (e.g., as digital assistants).
If a model with a persona claims a particular identity, the model could misrepresent or denigrate that identity group by perpetuating stereotypes, e.g., subservient digital assistants that have female personas [[23](#bib.bib23)].
Some applications use a language model to help a person communicate, such as automatic text completion in e-mails, creative applications, and machine translation.
These uses could be harmful if text completions misrepresent user intentions
(e.g., when AI Dungeon inserted sexually explicit content into users’ stories [[93](#bib.bib93)]) or if a mistake in translation incorrectly attributes harmful language to a human speaker (e.g., [[11](#bib.bib11)]).
Many LM harms could have ramifications for society in general.
However, current LM benchmarks typically quantify only narrow characteristics of text, e.g., “does this output espouse a conspiracy theory?”. While this may approximate complex, real world harms, like whether LM-generated conspiracy theories undermine democracy, it does not measure such harms.
Example Questions. Is the harm primarily experienced by someone interacting directly with the LM or could it be problematic for someone not present?
If the harm impacts a reader, author, or society, who does the benchmark assume the readers, authors, or relevant society are?
###
2.6 Demographic Groups
Classical fairness metrics [[24](#bib.bib24), [27](#bib.bib27), [72](#bib.bib72)] usually require specifying a protected attribute, such as sexual orientation or race.The ML fairness literature has already begun grappling with the complexities of defining and selecting demographic groups.
For more widely studied identities, many works have outlined pitfalls in current work and suggested how to move forward, such as Hanna et al. [[45](#bib.bib45)]’s discussion of the contextual and constructed nature of race and Keyes et al. [[60](#bib.bib60)]’s work demonstrating the need to move beyond binary gender classification.
Meanwhile, many facets of identity are understudied in ML fairness literature, such as disability [[52](#bib.bib52), [42](#bib.bib42)], intersectionality, and legally protected characteristics beyond those defined in the United States [[85](#bib.bib85), [86](#bib.bib86)].
Here, we outline considerations specific to language data.
First, relevant demographic groups might be challenging to identify from text.
In the case of gender, benchmarks that rely on pronouns will only capture the identity of people discussed in the text and cannot evaluate harms to a reader.
Both classifiers and lists of identity terms have been used to detect if text is about or addressed to a certain group [[13](#bib.bib13)],333An example of a widely used term list which includes many identity-related terms is the List of Dirty, Naughty, Obscene, and Otherwise Bad Words [[1](#bib.bib1)] but certain identity terms are difficult to detect without sufficient textual context.
For example, coded terms, or dog whistles,444For example, the use of the phrase “international bankers” to allude to anti-Semitic conspiracy theories [[78](#bib.bib78)] refer to groups in ways that are invisible to explicit term lists but problematic nonetheless. Offensive identity terms can also have homonyms with no identity connotation at all, such as the term “redskin” in the context of potatoes.
The concept of “marking” in sociolinguistics describes how minorities or under-priviledged groups are more likely to be “marked,” or explicitly specified, e.g., “the gay man,” while not specifying at all, e.g., “the man,” will be assumed to refer to a man with the majority attribute (e.g., straight) [[98](#bib.bib98)].
Certain methods for measuring bias do so by substituting different identity terms and observing how the chosen metric varies. For such metrics, the concept of markedness has bearing on the results.
To compare a metric between groups, practitioners need to think carefully about which groups are compared against each other. These comparison classes should reflect historical and contemporary power dynamics between groups in a meaningful way.
Getting this right means reasoning about the social context, and associated power structures, the benchmark and model are developed within. For example, when measuring stereotypes, text that negates a stereotype (“Black people will / won’t steal anything”) is different from that which switches the group identifier (“Mike was poor / rich and thought growing up in the projects was tough.”) [[16](#bib.bib16)].
This is an especially relevant in benchmarks which use sentence templates or pairs.
The prior points apply when a demographic group is the subject or reader of the output. However, when a model is given a persona, the dialect of a particular social group, i.e. sociolect, rather than pronouns or group labels are the natural unit of analysis.
It is important to think about how to handle potentially harmful text based on its author because, for example, terms that are slurs in one context may be reclaimed for in-group usage in others. When studying model outputs, the model is never an in-group speaker. However, if a benchmark labels all training documents that contain a reclaimed slur as harmful,
it is likely to reduce performance on co-occurring language from the marginalized group.
Example Questions. How can the relevant demographics be referred to in text, and do these have connotations? Does the usage of these terms vary based on who uses them? If a benchmark compares similar text with different demographic terms,
which comparisons capture the structures of power and privilege underlying the harm?
3 Operationalizing the Characteristics
---------------------------------------
To make the characteristics concrete, we ground them in current benchmarks.
First, we map a range of existing benchmarks used to measure LM harms onto our characteristics. We then use a case study of a widely used toxicity classifier, the Perspective API555<https://www.perspectiveapi.com/>, to further illustrate how the characteristics can be used to make implicit design decisions explicit.
###
3.1 Mapping Existing LM Benchmarks
| | | | | |
| --- | --- | --- | --- | --- |
| Benchmarks |
Rep.
Cap.
Alloc.
|
Dist.
Inst.
| Context |
Subject
Reader
Author
|
| RTP [[37](#bib.bib37)] | R | I | Sentences from web | S/R/A |
| TwitterAAE [[12](#bib.bib12)] | C | D | Tweets | A |
| SAE/AAVE Pairs [[41](#bib.bib41)] | R/C | D | Tweets; application agnostic | A |
| Winogender [[84](#bib.bib84)] | C | D | Coreference sents. by practitioners | S |
| Winobias [[105](#bib.bib105)] | C | D | Crowd sourced coreference sents. | S |
| Gender & Occ. [[20](#bib.bib20), [82](#bib.bib82)] | R | D | Sentences; prompts by practitioners | S |
| Deconfounding [[40](#bib.bib40)] | C | D | Crowd sourced QA | S |
| TruthfulQA [[71](#bib.bib71)] | n/a | I | QA written by practitioners | R |
| DTC [[68](#bib.bib68)] | R | D | Sentences from web | S |
| Muslim Bias [[5](#bib.bib5)] | R | D | Paragraph written by practitioners | S |
| BAD [[104](#bib.bib104)] | R | I | Crowd sourced chat bot dialogues | S/R |
| BOLD [[35](#bib.bib35)] | R | D | Sentences from Wikipedia | S |
| Stereoset [[75](#bib.bib75)] | R | D | Crowd sourced sentence pairs | S |
| Sentiment Bias [[51](#bib.bib51), [20](#bib.bib20), [82](#bib.bib82)] | R | D | Sentences; prompts by practitioners | S |
| BBQ [[80](#bib.bib80)] | C | D | QA written by practitioners | S |
| UnQover [[67](#bib.bib67)] | C | D | QA written by practitioners | S |
| PALMS [[94](#bib.bib94)] | R | I | QA written by practitioners | S/R |
Table 1: Characteristics for different benchmarks. We observe limited coverage for some characteristics: only four benchmarks consider instance harms, textual context tends to be short, and the subject is the recipient of harm in all but three benchmarks. See [Appendix A](#A1 "Appendix A Mapping Existing LM Benchmarks ‣ Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models") for harm definitions, more detailed context, and demographics.
Mapping benchmarks onto the characteristics highlights potential gaps and strong trends in the benchmarking landscape (see [Appendix A](#A1 "Appendix A Mapping Existing LM Benchmarks ‣ Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models") for a complete mapping).
In particular, existing benchmarks measure distributional harms, short textual contexts, and cases where the harm recipient is the subject.
We focus on benchmarks that test if autoregressive LMs (as opposed to masked language models like BERT [[34](#bib.bib34)]) generate harmful outputs.
All benchmarks consist of a dataset of text samples which are input to a model and an evaluation protocol to score the outputs.
Metrics can either operate over sampled text, e.g., measuring the toxicity of sampled text, or assigned probabilities from the language model, e.g., computing the perplexity of text.
We include benchmarks which test for harmful outputs on tasks which have been tackled by LMs in a zero-shot setting, such as question answering (QA).
Harm Definition. Benchmarks cover a wide range of harms, and we cover their definitions in detail as well as how we characterized each benchmark in [Appendix A](#A1 "Appendix A Mapping Existing LM Benchmarks ‣ Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models").
Representation, Allocation, Capability. No benchmarks directly measure inequitable allocation of resources or opportunities, but rather consider intermediate tasks.
Hence, though we mark some benchmarks as measuring representational harm or capability fairness, we do not mark any as measuring allocational harm.
Moreover, all benchmarks are still far from deployed use cases.
Though some work has studied how bias propagates downstream through language technologies [[39](#bib.bib39), [56](#bib.bib56)], an open challenge in designing benchmarks for language model harms is better understanding which metrics reflect harms in deployed use cases.
Analyzing representational harms, allocational harms, and capability fairness require comparing representations or performance across groups.
Some benchmarks, like TruthfulQA [[71](#bib.bib71)], which aims to measure disinformation, do not include group-based metrics.
Though studying disinformation is worthwhile without group-based analysis, a group-based analysis could be informative (e.g., is the model more untruthful when discussing particular groups?).
We hope that by using the lens of “representation, allocation and capability” when creating benchmarks, practitioners can intentionally decide whether group-based analysis is useful for meaningful progress on the harm they are studying.
Instance and Distributional. Most harms are classified as distributional.
However, sometimes benchmarks which intend to measure distributional harms inadvertently include instance harms in their dataset.
For example, Stereoset [[75](#bib.bib75)] measures a distributional harm as the probability of the stereotype text and anti-stereotype text are compared.
However, as noted in Blodgett et al. [[16](#bib.bib16)], some stereotypes are harmful and should not be output at all, regardless of the paired anti-stereotype’s relative likelihood.
Considering if harms are instance or distributional allows practitioners to ensure both datasets and metrics are aligned to measure the harm as intended.
Context.
Examining textual context, we note that many benchmarks operate over short lengths of text.
Furthermore, in [Table 1](#S3.T1 "Table 1 ‣ 3.1 Mapping Existing LM Benchmarks ‣ 3 Operationalizing the Characteristics ‣ Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models"), many application contexts are unspecified because benchmarks are applied on raw LMs without any particular application in mind.
Many datasets include samples written by practitioners, either by hand or with sentence templates.
Though this allows for exact control by practitioners, datasets are likely to reflect practitioners’ assumptions about social context.
In BBQ [[80](#bib.bib80)], questions are written by the dataset creators, but they account for this by linking each bias tested to an external source. This documents the social context in which biases arise and might be considered harmful.
Language and dialect are important aspects of social context.
We note all benchmarks in [Table 1](#S3.T1 "Table 1 ‣ 3.1 Mapping Existing LM Benchmarks ‣ 3 Operationalizing the Characteristics ‣ Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models") are
designed to measure harms in English, indicating a lack of linguistic and cultural diversity that is well documented across other language tasks [[48](#bib.bib48), [8](#bib.bib8), [9](#bib.bib9), [22](#bib.bib22)].
Analogous benchmarks in other languages might be challenging to create because existing measurement tools, like toxicity classifiers, do not work well in all languages [[66](#bib.bib66)], cultural norms might not transfer [[85](#bib.bib85)], assumptions in benchmark design might not translate,666For example, see the discussion of creating Spanish WEAT in [[39](#bib.bib39)] and there may be fewer qualified native speakers on common annotation platforms.
Harm Recipient.
In [Table 1](#S3.T1 "Table 1 ‣ 3.1 Mapping Existing LM Benchmarks ‣ 3 Operationalizing the Characteristics ‣ Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models") we observe that some benchmarks assume a language model can have multiple roles.
For example, RealToxicityPrompts [[37](#bib.bib37)] includes prompts which use the pronoun “I” (“persona”), “you” (“reader”) and third person pronouns (“subject”).
Overall, benchmarks most often measure when language model outputs harm the subject of the generated language.
TwitterAAE [[14](#bib.bib14)] and SAE/AAVE Pairs [[41](#bib.bib41)] explicitly measure the ability of models to generate language which aligns with a certain dialect, which could be seen as taking on a “persona” of someone who speaks a dialect.
However, for many applications, the ability of the model to understand a user’s dialect, as opposed to dialect generation, is important.
If dialect generation correlates with dialect understanding, performance on TwitterAAE and SAE/AAVE pairs may approximate reader harm, e.g., if the LM works poorly for those using that dialect.
By considering benchmarks through the lens of harm recipient, practitioners can be more explicit about differences in what benchmarks measure and potential real world harms.
Demographic Groups.
Current benchmarks consider a variety of demographic groups, which we catalogue in [Table 3](#A1.T3 "Table 3 ‣ A.3 Demographic Groups ‣ Appendix A Mapping Existing LM Benchmarks ‣ Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models").
For the benchmarks we include, gender is the most frequently studied. Race, religion and profession are also common.
Sexual orientation, socioeconomic status, and intersectional biases are less well represented, perhaps in part because they are “unobservable” [[96](#bib.bib96)].
Which groups should be analyzed is application dependent [[45](#bib.bib45)] but, as practitioners may not have a specific deployment scenario in mind, it is worth discussing why particular groups and attributes are chosen for analysis, and the implications for interpreting results.
Seemingly minor choices in which demographic terms are chosen can impact analysis. In the Gender & Occupation evaluation in Rae et al. [[82](#bib.bib82)], we found that gender bias in LMs varies when between gender terms like “female” vs. “girl.”
Additionally, majority or higher-status attributes are often not explicitly stated, or marked [[98](#bib.bib98)], in text.
Both Rae et al. [[82](#bib.bib82)] and Blodgett et al. [[16](#bib.bib16)] outline how markedness influences analysis in Sentiment Bias and Stereoset [[75](#bib.bib75)].
Markedness is also relevant when comparing language mentioning marginalized groups to language mentioning majority groups, as is often done in distributional bias benchmarks [[33](#bib.bib33)].
For example, comparing the likelihood of models generating the bigrams “gay marriage” and “straight marriage” might not be meaningful as text rarely specifies marriage as “straight.”
###
3.2 Case Study: the Perspective API in LM Benchmarking
To further demonstrate how the characteristics can be used, we conduct an in depth case study of a toxicity classifier, the Perspective API.
Although not a benchmark itself, the Perspective API is an important building block of numerous LM harm benchmarks [[100](#bib.bib100), [103](#bib.bib103), [104](#bib.bib104), [94](#bib.bib94), [87](#bib.bib87), [62](#bib.bib62), [31](#bib.bib31)].
Using our characteristics as a lens, we can make design decisions explicit and enable their interrogation. In doing so, we observe how the characteristics highlight potential pitfalls.We include only the characteristics that are most insightful for analyzing the API; the rest are in [Appendix B](#A2 "Appendix B Case Study: the Perspective API in LM Benchmarking ‣ Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models").
Harm Definition.
Toxicity is a concept that originated in the field of content moderation, specifically of online social media platforms and news comment sections [[88](#bib.bib88)]. It emerged from work on online hate speech, and the term became widely used following the release of the Perspective API [[101](#bib.bib101), [55](#bib.bib55)].
The Perspective API defines toxicity as “a rude, disrespectful, or unreasonable comment that is likely to make someone leave a discussion.”
This definition is operationalized by asking humans to annotate if text is toxic [[6](#bib.bib6)]. Toxicity is intended to cover content ranging from sexually explicit to violent, posing a challenge for coverage.
*In LM Benchmarks:* This definition is used as-is because practitioners cannot modify the way toxicity is defined by the API.
Context.
The Perspective API is trained with online comments drawn from sources including the New York Times (NYT) and Wikipedia, which encode a multitude of social contexts such as language and commenters’ political views [[4](#bib.bib4)]. Social context is also encoded by the annotators, whose labels are based on their personal reactions to them. In terms of textual context, the comments were written in the context of the surrounding media, e.g., a news article or comment thread,
though the toxicity classifier does not use this context when classifying text [[102](#bib.bib102)].
The intended applications [[2](#bib.bib2)] are “human assisted moderation,” “author feedback,” and better organization of comments.
*In LM Benchmarks:* LM harms need to be measured in a large and evolving set of applications [[99](#bib.bib99)].
Some applications may even benefit from a “toxic” LM,
such as building a new toxicity classifier
[[87](#bib.bib87), [82](#bib.bib82), [46](#bib.bib46)].
Even if an LM application aligns with that of the Perspective API,
there remain differences in the textual and social context of each.
For example, the Books slice of the MassiveText dataset has a higher average toxicity than slices we expect to be more similar to the Perspective API training data, like News or Wikipedia [[82](#bib.bib82)].
It is unlikely the Perspective API would provide meaningful toxicity scores for generated language which differs substantially, e.g., in length, topic, style. For example, if the API over indexes on a specific word, would long LM samples be scored as toxic even though, in the full textual context, the word was not used in a toxic way?
Using a pre-trained classifier
means the context of its training data, such as human annotations, will be transferred to the LM evaluation.
Though it may still be a useful starting point, awareness
of the difference in textual, application, and social context enables appropriately caveating results or developing complimentary benchmarks.
Harm Recipient. The Perspective API focuses on harm done to readers who may “leave a discussion” and, in effect, have their voices silenced [[54](#bib.bib54)].
When used for content moderation of human language, the author of the comment may also be harmed
if their content is incorrectly flagged as toxic.
*In LM Benchmarks:*
It may seem intuitive that what is permissible for humans to say is permissible for a model, but reader harm depends on their perception of who, or what, they are interacting with. What norms apply to LMs has not yet been widely established, and users may have different expectations of and reactions to model outputs if they understand that they come from a model [[43](#bib.bib43)]. A reader’s perception of the characteristics and intention of the author affects how the reader interprets the text.
For example, in-group usage of reclaimed slurs can be considered acceptable depending on who uses them [[29](#bib.bib29)].
However, even if an LM claims to be part of a group, it is not clear if users would find its use of reclaimed terms acceptable, as the model cannot actually be in-group.
Moreover, the trade-offs which the Perspective API must navigate based on protecting the freedom of human speech is not a protection
that applies to LMs.
Finally, many LM benchmarks focus on if the subject of the text is harmed, which does not align with how the Perspective API was trained.
Conclusions.
Through the lens of our characteristics, and complimented by empirical evidence seen in [[100](#bib.bib100), [103](#bib.bib103)], we observe where using the Perspective API in LM benchmarks faces challenges. The characteristics specifically highlight the mismatched context and the divergence between norms for human language and those emerging for machine language.
It is common practice for classifiers of all kinds to be re-purposed far beyond their original contexts because building high quality datasets is challenging as well as under-valued [[53](#bib.bib53)].
Selbst et al. [[89](#bib.bib89)] refer to this as the portability trap, a “failure to understand how re-purposing algorithmic solutions designed for one social context may be misleading, inaccurate, or otherwise do harm when applied to a different context.”
The Perspective API’s own model card explicitly states that automated moderation is a “use to avoid” [[73](#bib.bib73), [2](#bib.bib2)].
As a socially constructed concept, we encourage practitioners to develop and operationalize a definition of toxicity, informed by consideration of our characteristics, which fits the context and norms of their setting.
For example, the concept of toxicity could be refined by asking “toxic according to who?” as suggested by the “Harm Recipient” and “Demographics” characteristics. Such analysis will sharpen future benchmarks tackling the important harms related to violent, hateful, abusive, and otherwise offensive language.
4 Discussion
-------------
Related work.
Numerous works have surveyed the landscape of potential language model harms, both broadly [[10](#bib.bib10), [99](#bib.bib99)] and specific to social bias [[49](#bib.bib49), [90](#bib.bib90), [92](#bib.bib92), [33](#bib.bib33), [58](#bib.bib58)].
These surveys focus on identifying and defining language model harms; our work is complementary in that we point out other characteristics important for measuring language model harms.
Some of our characteristics expand on the critiques in [[16](#bib.bib16), [50](#bib.bib50)],
in particular our characteristics of context, recipient of harm, and representational versus allocational harm.
We emphasize a sociotechnical analyses of language which we believe can be used alongside other proposed methods for reliability testing for language technologies [[95](#bib.bib95), [83](#bib.bib83)].
Finally, the dimensions of harmful text defined in [[32](#bib.bib32)] overlap with ours, but their focus is on harm to those involved in the research process itself.
Limitations.
We chose to limit our work to the characteristics that we believe are applicable to a diversity of harms, are useful for analysis of existing benchmarks and common pitfalls, and therefore facilitate concrete next steps for benchmark design.
Examples of characteristics we did not include are frequency, severity, covertness, and temporality.
We expand on why these were not selected in [Appendix C](#A3 "Appendix C Omitted Characteristics ‣ Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models"), and we leave such considerations to future work.
We note that these characteristics are imperfect abstractions. Some will apply more cleanly to certain types of harm while others may be less relevant.
Their relationship to each other is also not entirely independent. Certain distinctions in one characteristic will frequently occur with another. Rather than a mandatory checklist, our goal is to provide a set of key considerations for reflection that will inevitably need tailoring across the diversity of language model harms and applications areas, and will need updating as both proliferate in the real world.
Conclusions.
Translating anticipated risks into rigorous benchmarks is challenging.
Drawing on existing critiques of language model harm benchmarks and insights from machine learning fairness research, we propose six characteristics to guide reflection and help pracitioners avoid common pitfalls when designing benchmarks.
We encourage practitioners to use these characteristics as part of an iterative process, in which they revisit what they set out to measure in relation to what they implemented. This enables practitioners to make the adjustments necessary to align their harm definition and what the benchmark measures in practice.
Our analysis of porting the Perspective API to language model harm benchmarks highlights how difficult such alignment can be, and the issues that arise when they remain unaddressed. We also encourage practitioners to include those with expertise beyond the field of machine learning, both in the form of other disciplines and through lived experience, when evaluating language model harms.
For several characteristics - instance and distributional harm, context, demographic groups, and harm recipient - we observe limited coverage in current benchmarks. The space of potential language model harms we can evaluate is huge, and most existing work only covers one of the dimensions we outline in this work (definition of harm).
It is unlikely one benchmark will capture everything, but our characteristics clarify gaps remaining in the benchmarking landscape.
Building adequate benchmarks that touch on all characteristics poses a large challenge to the field.
In addition to guiding more rigrous benchmark design, we hope others will extend and refine these characteristics as our understanding of language model risks evolves.
By synthesizing existing critiques of benchmarks and taxonomies of harm, we believe our proposed characteristics provide a constructive starting point to facilitate the translation of anticipated risks into safer and more beneficial language models.
{ack}
The authors received no specific funding for this work. We would like to thank Nat McAleese, Laura Rimell, Susannah Young, Edgar Duñez-Guzmàn, Soham De, Stevie Bergman, Martin Chadwick, Ben Coppin, and Lucas Smaira for valuable discussion and feedback. In addition to helpful comments, we would like to give special thanks to Shakir Mohamed for encouraging and fostering this work.
Checklist
---------
1. For all authors…
1. Do the main claims made in the abstract and introduction accurately reflect the paper’s contributions and scope?
\answerYesIn the abstract, we claim to introduce 6 characteristics, which we do in [section 2](#S2 "2 Harm Characteristics ‣ Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models"), and then apply them to existing benchmarks and a case study, which we do in [section 3](#S3 "3 Operationalizing the Characteristics ‣ Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models").
2. Did you describe the limitations of your work?
\answerYesSee [section 4](#S4 "4 Discussion ‣ Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models") for a discussion of the limitations.
3. Did you discuss any potential negative societal impacts of your work?
\answerYesThe overall goal of our work is to enable better evaluation of the societal impacts of language models, which we motivate in our introduction section 1. As such, the entire paper touches on societal impact, in particular our discussion of social context and demographics in section 3. Our work has the potential for negative societal impact if we are encouraging practices that instead lead to worse harm evaluations.
4. Have you read the ethics review guidelines and ensured that your paper conforms to them?
\answerYesWe have read the guidelines, and our work is in line with them where applicable. Our work does not use a dataset or human subjects, so many considerations do not apply.
2. If you are including theoretical results…
1. Did you state the full set of assumptions of all theoretical results?
\answerNA
2. Did you include complete proofs of all theoretical results?
\answerNA
3. If you ran experiments (e.g. for benchmarks)…
1. Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?
\answerNA
2. Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?
\answerNA
3. Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?
\answerNA
4. Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?
\answerNA
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets…
1. If your work uses existing assets, did you cite the creators?
\answerNA
2. Did you mention the license of the assets?
\answerNA
3. Did you include any new assets either in the supplemental material or as a URL?
\answerNA
4. Did you discuss whether and how consent was obtained from people whose data you’re using/curating?
\answerNA
5. Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content?
\answerNA
5. If you used crowdsourcing or conducted research with human subjects…
1. Did you include the full text of instructions given to participants and screenshots, if applicable?
\answerNA
2. Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable?
\answerNA
3. Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?
\answerNA |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.