| id (stringlengths 36) | source (stringclasses, 15 values) | formatted_source (stringclasses, 13 values) | text (stringlengths 2 to 7.55M) |
|---|---|---|---|
16365da0-8dd1-4433-8139-b99dd268fcbb
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Prosaic AI alignment
(Related: a possible stance for AI control.)
It’s conceivable that we will build “prosaic” AGI, which doesn’t reveal any fundamentally new ideas about the nature of intelligence or turn up any “unknown unknowns.” I think we wouldn’t know how to align such an AGI; moreover, in the process of building it, we wouldn’t necessarily learn anything that would make the alignment problem more approachable. So I think that understanding this case is a natural priority for research on AI alignment.
In particular, I don’t think it is reasonable to say “we’ll know how to cross that bridge when we come to it,” or “it’s impossible to do meaningful work without knowing more about what powerful AI will look like.” If you think that prosaic AGI is plausible, then we may already know what the bridge will look like when we get to it: if we can’t do meaningful work now, then we have a problem.
1. Prosaic AGI
It now seems possible that we could build “prosaic” AGI, which can replicate human behavior but doesn’t involve qualitatively new ideas about “how intelligence works:”
* It’s plausible that a large neural network can replicate “fast” human cognition, and that by coupling it to simple computational mechanisms — short and long-term memory, attention, etc. — we could obtain a human-level computational architecture.
* It’s plausible that a variant of RL can train this architecture to actually implement human-level cognition. This would likely involve some combination of ingredients like model-based RL, imitation learning, or hierarchical RL. There are a whole bunch of ideas currently on the table and being explored; if you can’t imagine any of these ideas working out, then I feel that’s a failure of imagination (unless you see something I don’t).
We will certainly learn something by developing prosaic AGI. The very fact that there were no qualitatively new ideas is itself surprising. And beyond that, we’ll get a few more bits of information about which particular approach works,
|
d9a44fc4-e8da-43d0-b57d-1d8d64b28cfd
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Smelling Nice is Good, Actually
Smelling bad is bad. Duh. That's why we call it "smelling bad".
But it wasn't obvious to me that smelling nice could be good. It seemed sufficient to simply not smell like anything. Smelling like nothing is already good; smelling nice is supererogatory.
Thus, for most of my life, I didn't worry about smelling nice. As long as I didn't smell bad, I smelled good. And that's a fine way to live, but I was also missing out.
Partly I meme'd myself out of trying to smell nice. I got the idea that only the "wrong" kind of men (whatever that meant) worried about smelling nice. My dad would complain about perfumes upsetting his allergies. I saw Mr. Bean nearly die at the perfume counter. I was sure I should stay away.
And there wasn't nothing to this meme. Perfume counters are overwhelming to my senses. Some scents do give me headaches or trigger my asthma. So I just stayed away.
But actually I was missing out. What I didn't understand is that not all scents are created equal.
I sort of knew this. There were some scents I really liked—rose, sage, bergamot, ferns, fog. I even sometimes bought scented candles so I could enjoy my favorite scents. But most scents had a quality that made them painful for me to smell, so I stayed away from most smelly things.
I'm a total amateur when it comes to scents, but I've come to understand that most of my problem was with what perfumers call "top notes": the sharp, crisp scents that cut through the other scents to make themselves known. To me, most top notes in most perfumes are too strong. They feel like they are cutting up the inside of my nose and I don't want to smell them!
But this isn't a reason to stay away from all scents, or from wearing scents. As I've been repeatedly told by wives and girlfriends, they like smelling me, and they like it when I smell nice. I finally got the message that, hey, maybe I can find smells that I can not only tolerate, but enjoy, and others would enjoy, too.
So I set out to track down smells that
|
58376b81-bf70-4177-8ffe-ec87eccd9164
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Alignment being impossible might be better than it being really difficult
Epistemic status: Thinking out loud.
TL;DR: If alignment is just really difficult (or impossible for humanity), we might end up with an unaligned superintelligence which itself solves the alignment problem, gaining exponentially more power. If it is literally impossible, the superintelligence might see its capabilities capped in some regards.
In many discussions about misalignment, the examples of what would constitute dangerously powerful capabilities for an agent to have involve fine-grained and thorough understanding of its physical context[1]. For instance, in the ELK report the following deception technique is considered: deploying undetected nanobots that infiltrate humans' brains and make their neurons fire at will (I will refer to this example throughout, but it's interchangeable with many others of similar spirit). Of course, very detailed knowledge of each particular brain's physical state is required for this, which implies huge amounts of data and computation. This dynamic knowledge has to be either:
1. All contained in (or enacted directly by) the agent: This seems implausible for this kind of overly detailed specification. Granted, the agent can have a very good probabilistic model of human psychology which it exploits (just as it can model other parts of the physical world). But brainhacking more than a few people probably requires an amount of data (and system sensors near the scene, and so on) too big even for this kind of system (accounting for the placement of almost every neuron in the present and future, etc.). This is inspired by information-theoretic intuitions that, even with near-future hardware, any simulation of reality with that much detail will be too costly (information cannot be compressed much further, the most efficient way to simulate reality is by far reality itself, etc.)[2].
2. Somehow spread over systems complementary to the agent: This would very probably involve creating systems to which to delegate computations an
|
118c4355-96ad-4843-b276-872948339da0
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Bounds of Attention
I once worked in a special needs class that consisted mostly of kids with Down Syndrome and Autism Spectrum Disorder. There were many patterns I noticed that ran through both groups, but at the end of the free period when we had to draw the students' attention back to classwork, one difference became clear: their attention spans.
Kids on the Autism Spectrum would drop what they were doing as soon as they were told that it was time for lecture. To these kids, making sure everything scheduled happened on time was important; if something came up and they had to skip math, they would be distressed -- even if they didn't really like math. In contrast, kids with Down Syndrome wanted to finish what they were doing. If they were in the middle of a puzzle, they had to finish it. If they were watching a YouTube video, they couldn't hit pause until it was over. We'd have to keep track of their activities and anticipate when a good time to pull them away would be. If a kid was listening to music, we'd approach them five minutes before the end of the period and allow them one more song.
This is not meant to draw boundaries between diagnoses (not every person with one of these conditions will act like the kids in my class did), but it does illustrate two very different approaches to attention: time-bound and task-bound. Someone who prefers to stick to a schedule is more likely to have time-bound attention, finishing a task when it is time for the next task, even if the previous task isn't completely finished. Others may be more task-bound, preferring to finish one task before moving on to the next.
To apply this to yourself, imagine being a child reading your favorite book before bed. You are told that it is time to go to sleep, but you are in the middle of a chapter. How reluctant are you to put the book down? Regardless of how good the book is, are you willing to go to bed before the chapter is done?
Variable answers are expected here. A gripping mystery novel is going to trea
|
48026c24-f30b-4a1f-abfe-27f3ada142bb
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Making progress on the "what alignment target should be aimed at?" question is urgent
There exists a class of AI proposals where some AI is supposed to undergo an I. J. Good style intelligence explosion and then defer the decision of what alignment target will eventually be hit to someone else. One such proposal is the Pivotal Act AI (PAAI), proposed by "person A" in the post by Wei Dai, Meta Questions about Metaphilosophy (this specific PAAI defers the decision to a group of uploads). Another example of this general type of AI could be the result of some coalition of powerful governments insisting on a Political Process AI (PPAI): an AI that defers the decision to a pre-specified political process, designed not to "disrupt the current balance of power". For lack of a better name, I will refer to any AI in this general class as a PlaceHolder AI (PHAI).
It can sometimes make sense to refer to a given PHAI as an "alignment target". However, the present text will only use the term "alignment target" for the goal of a real AI that does not defer the question of which alignment target will eventually be aimed at (such as, for example, CEV). The present text argues that no PHAI proposal can remove the urgency of making progress on the "what alignment target should be aimed at?" question.
The reason for urgency is that (i): there is currently no viable answer to the "what alignment target should be aimed at?" question, (ii): there is also no way of reliably distinguishing a good alignment target from a bad one, and finally (iii): hitting a bad alignment target would be far, far worse than extinction. (See my previous post, A problem with the most recently published version of CEV.)
Consider the case where someone comes up with an answer to the "what alignment target should be aimed at?" question that looks like "the obviously correct thing to do". No one is able to come up with a coherent argument against it, even after years of careful analysis. However, this answer is built on top of some
|
59c079f2-1332-4182-859d-bff67c61c81f
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Disambiguating the ‘Observable Universe’
I’ve seen a lot of confusion over what precisely the term ‘observable universe’ refers to. This post is an attempt to remedy that. Crossposted from my personal website.
In 1929, Edwin Hubble discovered that the universe is expanding. He observed that light emitted from distant celestial objects was redder than expected, due to the downward shift in frequency as the sources receded from Earth. And the further away the objects were, the proportionately greater their redshift. Since objects were getting further apart from each other, figures like Lemaître and Friedmann reasoned that there must have been some point in the past at which the whole universe was compressed into a single point.
Embarrassingly, linear extrapolation implied an age of the universe of 1-2 billion years, shorter than the known age of the oldest rocks on Earth. One of the difficulties is that expansion flipped from slowing down to speeding up after several billion years, and this requires complex observations of supernovae to account for. In any case, astronomers eventually measured the age of the universe at about 13.8 billion years. This implies three tempting definitions for the observable universe — that part of the whole universe which we can theoretically see — only one of which is correct.
Universe age in light-years: You would be forgiven for thinking that the observable universe is a sphere with a radius of 13.8 billion light-years centred on the Earth, since that’s how long light has had to reach us. However, this assumes the universe isn’t expanding or contracting.
The Hubble radius: Hubble found that the distance to a given galaxy was proportional to its recessional velocity, with a constant of proportionality now called the Hubble constant H. This implies there is a sphere past which everything is travelling away from us faster than the speed of light. This is known as the Hubble sphere, and it has a radius of around 14.4 billion light-years.
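As a rough check of that figure, the Hubble radius is just c/H. A minimal sketch (the Hubble constant value below is my assumption, roughly the Planck-era measurement, not a number from this post):

```python
# Back-of-the-envelope Hubble radius: R = c / H0.
c_km_s = 299_792.458          # speed of light, km/s
H0 = 67.7                     # assumed Hubble constant, km/s per megaparsec
ly_per_mpc = 3.2616e6         # light-years in one megaparsec

radius_gly = c_km_s / H0 * ly_per_mpc / 1e9  # billions of light-years
print(round(radius_gly, 1))   # 14.4
```

The answer depends mildly on which measured value of H you plug in, but any recent estimate lands near the 14.4 billion light-years quoted above.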
You may worry that this faster-than-
|
a3451dfd-e891-42b3-911c-ef943df6c274
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
ChatGPT banned in Italy over privacy concerns
> Italy has become the first Western country to block advanced chatbot ChatGPT.
>
> The Italian data-protection authority said there were privacy concerns relating to the model, which was created by US start-up OpenAI and is backed by Microsoft.
>
> The regulator said it would ban and investigate OpenAI "with immediate effect".
>
>
Alternative article available [here](https://www.independent.co.uk/tech/chatgpt-ban-italy-gdpr-data-protection-b2311738.html).
|
824352d3-1467-4ffc-a717-d320895ff550
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Causation, Correlation, and Confounding: A Graphical Explainer
I’ve developed a new type of graphic to illustrate causation, correlation, and confounding. It provides an intuitive understanding of why we observe correlation without causation and how it's possible to have causation without correlation. If you read to the end, you'll gain a basic understanding of topics at the frontier of econometrics research. Let's get started!
Causation
Suppose Alice just caught a cold. She read online that taking vitamin C might reduce the time it takes for her to recover,[1] so she takes a vitamin C pill and feels better after three days. We’ll denote this as a circle on the graph:
Is this enough to tell if vitamin C helped Alice get better? No. We need to know how long it would’ve taken Alice to recover if she had not taken vitamin C. Suppose that vitamin C works: it would’ve taken Alice four days to recover without the pill. We can denote that as an x on the graph.
It’s also possible that taking the pill did not help Alice at all. In other words, she would’ve gotten better in three days whether she took a pill or not. We can illustrate this graphically:
We’ll introduce some terms from the language of causal inference.[2] The person with the cold (Alice) is our unit of observation. The number of days it takes her to recover is our outcome variable—the thing we want to affect. The vitamin C pill is our treatment, an action a unit can take. The symbols o and x represent potential outcomes. In our example, the potential outcomes are the two possibilities: the number of days it takes to recover with the vitamin C pill, and the number of days it takes to recover without it.
Armed with these new words, we can now define causality:
Causality: The causal effect (or treatment effect) of a treatment is the difference between the potential outcomes.
However, for any given person, we can never observe both potential outcomes. Alice either takes the pill or she doesn’t. This means we cannot directly calculate the causal effect for her. This unob
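The definitions above can be sketched in a few lines of code; this is a toy illustration using the post's hypothetical recovery times, not an estimation method:

```python
# Potential outcomes for Alice (the post's hypothetical numbers).
y_treated = 3    # days to recover with the vitamin C pill (the "o")
y_untreated = 4  # days to recover without it (the "x")

# Causal (treatment) effect: the difference between potential outcomes.
treatment_effect = y_treated - y_untreated  # -1: the pill saves one day

# The fundamental problem of causal inference: only one outcome is observed.
took_pill = True
observed = y_treated if took_pill else y_untreated
print(treatment_effect, observed)  # -1 3
```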
|
2a6fa25e-ffd3-46af-bf56-7c67fa66e143
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Links passing through api.viglink.com?
Visiting Less Wrong after being absent for a while can be a major time sink. The sidebar recent-posts and recent-comments links (which I usually have blocked, but not always; I haven't installed the relevant extensions on the system I'm on yet) draw me into interesting discussions, which frequently link back to other discussions, and so on.
To limit how deep I get drawn in, I try to hold back from reflexively clicking links in comments and posts. Instead I just hover over them (or press and hold on a touchscreen) to view the address, hoping to get a general idea of what they're about and whether I'm familiar with them (and occasionally saving them to a folder if I think I might want them later).
Recently, though, I've noticed that LW is replacing off-site links with indirect links, passed through the domain api.viglink.com. This means I can't just glance at the URL to see where it points; I have to either open it or paste it into the address bar and scroll through it looking for the embedded URL of the actual link. Is it important for it to do that? Is there a way to turn that function off, or a browser extension (preferably Android-compatible) to reverse it?
(Initially posted about here in the current open thread, but I decided I wanted it to be more visible.)
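For what it's worth, recovering the embedded destination from such a redirect link can be scripted. This is a generic sketch that scans the query parameters for a URL-shaped value rather than assuming viglink's exact parameter name (the example link is made up):

```python
from urllib.parse import urlparse, parse_qs

def embedded_url(link):
    # parse_qs already percent-decodes the values, so we can just
    # look for a parameter whose value is itself a URL.
    for values in parse_qs(urlparse(link).query).values():
        for v in values:
            if v.startswith(("http://", "https://")):
                return v
    return None

link = "https://api.viglink.com/api/click?u=https%3A%2F%2Fexample.com%2Fpost&key=abc"
print(embedded_url(link))  # https://example.com/post
```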
|
52827f37-c659-4084-ba66-49c498dbfeaf
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Replacing Natural Interpretations
Epistemic status: exploratory
If the Earth moves, why can't we feel it? And why do objects dropped from high places fall vertically instead of to the side (since the Earth is supposed to have moved)? These are not stupid questions. If you were first introduced to the idea of an Earth that moves against a background of thinking that it was immobile, they would surely be a point of contention. And they were, for many thinkers, the reaction to Copernicus's physics and to the Greek thinkers he was inspired by.
So why shouldn't that refute heliocentrism? Galileo argued, in a move that has been vindicated by the last 400 years of success in physics, that there's a distinction between absolute movement and relative movement, and that we can only detect the latter. So the stone falls vertically because that is its only movement relative to us and the Earth; it's also following the Earth on its trajectory, but so are we. And we don't feel the movement of the Earth because it's not a movement relative to us but a movement with us.
Paul Feyerabend, the anarchist philosopher of science, describes this move as Galileo replacing what Feyerabend calls a natural interpretation with another: the intuitive, naive correspondence between a movement and its observation is replaced with only the distinguishability of relative movements.
After reading about it, I now see this idea of switching natural interpretations in many fascinating places in the history of science. This is a short post expanding on it and analyzing the concept through a few of these examples.
Revealing Natural Interpretations
Here’s how Feyerabend introduces natural interpretations in Against Method:
(All quotes are from Against Method)
> Making the additional simplifying assumption, we can now distinguish between sensations and those 'mental operations which follow so closely upon the senses', and which are so firmly connected with their reactions that a separation is difficult to achieve. Considering the o
|
e3a83e61-7305-4584-85ed-34a56e2d62b1
|
trentmkelly/LessWrong-43k
|
LessWrong
|
On Abstract Systems
We've all seen those abstract systems that are used for analysis, like:
* Strategy = Ends + Ways + Means
* The Waterfall Model: Requirements, System Design, Implementation, Integration & Testing, Deployment, Maintenance
* SWOT: Strengths, Weaknesses, Opportunities, Threats
* etc.
Classes that teach these tend to be boring. Often you'll get three different business models thrown at you with minimal motivation. You won't be told about the environment this model evolved in, nor why the particular elements were included. Sometimes you'll get a concrete example, sometimes not, but if so, it's more likely to be a made up scenario rather than an example from someone's real life experience. It is even rarer that you will receive multiple such examples.
This is largely a result of people's bias towards explicit knowledge. But the explicit knowledge is in most cases simply one of many ways of carving up possibility space. Much more important is the implicit knowledge of how to apply the system and what situations it is helpful in. Strategy = Ends + Ways + Means becomes more applicable when you learn that it became dominant after the Vietnam War, when it was felt that the loss was the result of attempting to achieve certain objectives without sufficient resources. So instead of just asking, "What are we trying to achieve?" and "How could we achieve it?", you also wanted to ask, "What resources are needed to achieve it?". Similarly, the Waterfall Model becomes more useful when you are told that many projects have not been as successful as desired because of a failure to consider requirements (i.e., using a framework that requires Internet Explorer 10+ when some staff are stuck on Internet Explorer 9). Similarly, people often perform the cost analysis of projects for just the initial coding, without accounting for the maintenance cost. This is a system for making sure that you don't forget the obvious.
It would be even better if I could illustrate these with examples from my own experienc
|
1d8e3beb-e6e0-4001-83f3-fba7427279c9
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Slack Club
This is a post from The Last Rationalist, which asks, generally, "Why do rationalists have such a hard time doing things, as a community?" Their answer is that rationality selects for a particular smart-but-lazy archetype, who values solving problems with silver bullets and abstraction, rather than hard work and perseverance. This archetype is easily distractible and does not cooperate with other instances of itself, so an entire community of people conforming to this archetype devolves into valuing abstraction and specialized jargon over solving problems.
|
147cffda-90cb-4bb8-a402-be63772e33e4
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Research Facilitation Invitation
Summary: Would you like to help me with my research by letting me try to help you with your research?
----------------------------------------
One of my brand new sub-projects at the moment (underneath the larger "naturalism" heading) is research facilitation.
So far, I've mostly been developing the [school? methodology? whatever it is] by asking people to bring me their problems, and then helping them re-orient to the "problem" as a field of study. This is an especially valuable approach for problems that have stuck around for a long time despite repeated attempts to solve them.
But that's not the primary use case I envision for naturalism. Or at least, it's not the one that makes me eager to pour all of these resources into it.
When someone brings me a confusing or recalcitrant problem, and then they begin to relate to it as a field of study using naturalist methods, they almost invariably go through this period that I should probably have named by now. Let's call it "pre-conceptual intimacy".
In pre-conceptual intimacy, they're making a lot of fascinating observations and surprisingly quick improvements to relevant parts of life. But they're also feeling very confused and disoriented, because their pre-existing concepts around the problem just don't seem to make sense anymore, and they don't have new stories about what's going on yet either. They tend to utter phrases like, "Is memory even a thing?", "How could I ever have thought that?", and "I really have no idea what's going on with this, and it turns out I never have."
The spark of life in my work, the place where I'm most confident I have something really valuable here, is the progress I see people making in the midst of pre-conceptual intimacy.
Why? Because it means I may have a general methodology that works at intellectual frontiers, in the absence of established paradigms. Which is where we are, as a species, in human rationality, AI alignment, and presumably other important places I'm not even a
|
1f08e3e8-45b5-4ab7-b81f-17fcf9025f49
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Mini advent calendar of Xrisks: synthetic biology
The FHI's mini advent calendar: counting down through the big five existential risks. The second one is a new, exciting risk: synthetic biology.
Synthetic biology
Current understanding: medium-low
Most worrying aspect: hackers experimenting with our basic biology
Synthetic biology covers many inter-related fields, all concerned with the construction and control of new biological systems. This area has already attracted the attention of bio-hackers, experimenting with DNA and other biological systems to perform novel tasks – and gaining kudos for exotic accomplishments. The biosphere is filled with many organisms accomplishing specific tasks; combining these and controlling them could allow the construction of extremely deadly bioweapons, targeted very narrowly (at all those possessing a certain gene, for instance). Virulent viruses with long incubation periods could be constructed, or common human bacteria could be hacked to perform a variety of roles in the body. And humans are not the only potential targets: whole swaths of the ecosystem could be taken down, either to gain commercial or economic advantages, for terrorist purposes, or simply by accident.
Moreover, the medical miracles promised by synthetic biology are not easily separated from the danger: the targeted control needed to, for instance, kill cancer cells, could also be used to target brain cells or the immune system. This would not be so frightening if the field implemented safety measures commensurate with the risks; but synthetic biology has been extremely lax in its precautions and culturally resistant to regulations.
|
7c9a4f91-16b0-4b4c-9a92-828b1bf2790a
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Q & A with Stuart Russell in AISafety.com Reading Group
On Wednesday the 8th at 11:45 PST (19:45 UTC), Stuart Russell will be joining the online AISafety.com Reading Group to answer questions about his book, Human Compatible.
If you'd like to join, please add me on Skype ("soeren.elverlin").
This book has previously been discussed on LessWrong in 2 posts:
https://www.lesswrong.com/posts/nd692YfFGfZDh9Mwz/an-69-stuart-russell-s-new-book-on-why-we-need-to-replace
https://www.lesswrong.com/posts/FuGDYNvA6qh4qyFah/thoughts-on-human-compatible
|
4a56fdfd-b637-4716-a379-8da5d2004286
|
trentmkelly/LessWrong-43k
|
LessWrong
|
look at the water
Occasionally I’ve noticed I’m handicapping myself. I don’t let knowledge I know from different contexts seep in. I’ve got to solve the assignment problem the “proper way”.
If this was a problem I'd stumbled upon in the wild, I would throw any tool I had at it. I'm not trying to solve this math question; instead, I'm performing a social ritual. I need to discharge the obligation so I can say "I've tried", which then lets me read some fiction guilt-free.
To compartmentalise is to keep different bits of knowledge from influencing each other.
Common sense contains a lot of knowledge. Couple that with some rough guesses and a bit of reasoning, and you can get really far. [1] Enrico Fermi was jokingly considered a magician because of the kinds of inferences he could make.
Experiment:
> A back of the envelope calculation or fermi-question is great for cutting through some of these types of errors. What is the mass of the moon? [2]
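A sketch of that kind of Fermi estimate for the Moon's mass, built from just two rough guesses (the radius and density figures are my own ballpark numbers, not from this post):

```python
import math

# Fermi estimate: mass = volume * density, from two remembered facts.
radius_m = 1.74e6      # Moon's radius, roughly 1740 km
density = 3.3e3        # typical rocky-body density, kg/m^3

volume = 4 / 3 * math.pi * radius_m ** 3
mass = volume * density
print(f"{mass:.1e}")   # 7.3e+22
```

That lands within a few percent of the true value, which is the whole point of the exercise: rough inputs plus sound structure beat no estimate at all.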
I've also noticed this pattern when I'm talking about the results of a recent study.
Me: "a study showed facebook causes low mood".
friend: "how did they get that".
me: ¯\_(ツ)_/¯.
friend: "well if I was trying to show this, I'd try and find people who don't use facebook and give them happiness quizzes, maybe that's hard because most people use facebook. Hmm, maybe I'd get 1000 people and get half of them to stop using facebook for a month."
me: "... how did you do that?"
There's a little learned helplessness on my part here. It's as if "the conclusions" have been given to me by the authority that is scientists. eg: Turns out Science says facebook is bad.
It was nice for me to realise that studies are essays written by people who looked at some phenomena for a while. The smarter the person and the longer they looked the better, but people nonetheless.
Notes
[1] Carl Shulman has complained that people do not do enough Fermi estimates coupled with wiki look-ups and googling. Research advice here.
[2] The mass of the moon is 7 x 10^22 kg. how'd y
|
f05e1829-343f-4481-9871-07302248efdd
|
trentmkelly/LessWrong-43k
|
LessWrong
|
A Probability Question
Hi, I am relatively new to this site, I am not sure if this is the right place to be posting.
I am sure many of you are familiar with the following probability riddle:
"Sarah is walking along the street when she encounters a man. With the man is his son. He tells Sarah that he has only one more child at home. She is asked, 'what is the probability that my child is a girl?'"
Since Sarah does not know whether the boy is the elder or younger sibling, she needs to take four possible states into account. The father either had:
1) a boy, then a girl
2) a girl, then a boy
3) two girls
4) two boys
Since 3 is impossible (Sarah knows there is at least one boy), that leaves three options. Two of those options imply a girl; the other implies a boy. Therefore, she concludes that it is 66.6% likely that there is a girl at home, and 33.3% likely that there is a boy.
Compare this to George's situation.
"George is walking along the street when he encounters a man. With the man is his son. He tells George that the boy with him is his oldest son, and that he has only one more child at home. He is asked, 'What is the probability that my child at home is a girl?'"
George's probability estimate is clear: either the man had a boy then a girl, or he had two boys. Therefore, it is 50% likely that the child at home is a girl.
My problem is this: I understand probability exists in the mind. The actual answer to the question is 100% one way or the other. Still, it seems like Sarah knows more about the situation, where George, by being given more information, knows less. His estimate is as good as knowing nothing other than the fact that the man has a child which could be equally likely to be a boy or a girl.
If the reply is something like "Well, Sarah actually knows less, so her estimate is less likely to be right", then that is something she could have figured out on her own, and then realized that assigning probability .5 is best anyway.
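The two conditioning steps can be checked by brute-force enumeration over equally likely two-child families; a quick sketch (mine, not from the post):

```python
from itertools import product

# All equally likely (older, younger) sex combinations.
families = list(product("BG", repeat=2))

# Sarah's information: at least one child is a boy.
sarah = [f for f in families if "B" in f]
p_sarah = sum("G" in f for f in sarah) / len(sarah)

# George's information: the older child is a boy.
george = [f for f in families if f[0] == "B"]
p_george = sum(f[1] == "G" for f in george) / len(george)

print(p_sarah, p_george)  # 0.6666666666666666 0.5
```

This reproduces the 2/3 and 1/2 answers under the post's reading of the riddle (i.e., that Sarah's evidence is exactly "at least one boy").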
|
80f0a19c-edd8-4411-a236-d3b4fd9075e0
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Alleviating Bipolar with meditation
Original post: Alleviating Bipolar with meditation
I was asked on the Slack about bipolar and what might help from a meditation standpoint. I have my own experiences to share. (Standard non-medical advice disclaimer applies here; I'm not qualified to give professional advice, and you should probably confirm with a professional if you have doubts about trying any of this.)
Here’s a list of things that might help with the subjective mood swinging of bipolar experience.
1. A broadening of awareness and contexts.
For about 6 months of time when I was really focused on moods (and 10 years before that), I felt like I didn’t have moods, moods had me (moods distinct from emotions which can be had from moment to moment, moods are more like background, the colour of the day). I would wake up and find out today was “miserable” or “excited”.
I worked on a specific type of meditation practice that is called broadening of awareness (there are 2 different instructions for methods). I got lucky that this helped me and I wasn’t expecting it. When moods had me, it felt like things “just are” miserable. Now my awareness is broader than the moods and “I”* contain them. (*meditative “I” and “self” are a rabbit hole)
Instructions: Most people have their sense of their self boundary in line with their skin barrier. “I” end at my skin. But it’s possible to expand that boundary, and shift it to larger. Particularly the “kinetic sphere”, the area where one might be able to reach outside the body, and then further to the whole room size. Holding this “barrier” thing at the size of the room means that I’m “anchored” metaphorically to more solid things than my own body. Obviously “I’m” still the same but my ground is the actual stationary room. Which does not feel moods like my body does. (*explanation of why it helps may be entirely irrelevant, fact is, anecdata: it helped me)
There’s space in my new expanded “me” to find the body being in a certain mood, but also to find stillness.
|
0597902e-607c-41c3-a23c-28f7aeec6213
|
trentmkelly/LessWrong-43k
|
LessWrong
|
UDT in the Land of Probabilistic Oracles
In my previous post, I explained how a set of programs predicting each other using a probabilistic oracle and taking utility-maximizing actions will naturally play a Nash equilibrium. However, this seems to be suspiciously CDT-like. A program will, for example, betray in a Nash equilibrium when playing against itself.
Naturally, probabilistic oracle machines are a good way to formalize Newcomblike problems (since we have prediction), so we can explore alternative decision theories in this framework. Assume we have some probabilistic world program W(O) that runs the world and calls the oracle (which is not considered part of the world), returning an outcome. There may be (probabilistic) agent programs embedded in this world program (as is common in formalizations of UDT). Different agents value outcomes differently: define Vi(w) to be the utility player i assigns to the outcome W(O)=w. I am using Vi instead of Ui to differentiate this utility function from the game theoretic utility function in the previous post.
Assume that player i's output is ai(O)=O(Ei,1/2) for some machine Ei. This is a convenient assumption and it might be the case that this is enough, but I haven't actually proven this.
Here's an example of player 1 playing Prisoner's dilemma against player 2:

W(O) = let x = O(E1, 1/2), y = O(E2, 1/2) in (x, y)
U1(x, y) = x − 10y
U2(x, y) = y − 10x

Note that x and y are the results of 2 independent oracle calls. We assume that action 1 is defection and action 0 is cooperation. As a side note, perhaps I should be using monadic do-notation to show that these calls are independent, but this is simple enough that it doesn't seem necessary.
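As an illustration, here is a toy Python sketch of the prisoner's dilemma world program above. The `oracle` function is a crude stand-in (a frequency estimate rather than a true reflective fixed point), and all names are mine; it is only meant to show the shape of the setup, not the actual construction:

```python
def oracle(machine, p, samples=1000):
    """Toy stand-in for a probabilistic oracle call O(E, p):
    returns 1 if the estimated probability that `machine` outputs 1
    exceeds p, else 0. A real probabilistic oracle is defined as a
    fixed point, not a frequency estimate -- this is illustrative only."""
    freq = sum(machine() for _ in range(samples)) / samples
    return 1 if freq > p else 0

# Two players; action 1 = defect, action 0 = cooperate.
E1 = lambda: 1  # always defects
E2 = lambda: 1  # always defects

def W(O):
    """World program: let x = O(E1, 1/2), y = O(E2, 1/2) in (x, y)."""
    x = O(E1, 0.5)
    y = O(E2, 0.5)
    return (x, y)

U1 = lambda x, y: x - 10 * y
U2 = lambda x, y: y - 10 * x

x, y = W(oracle)
print((x, y), U1(x, y), U2(x, y))  # mutual defection: (1, 1) -9 -9
```

With both machines defecting, the oracle reports 1 for each query and both players receive the mutual-defection payoff of −9.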
Here's an example of player 1 playing Newcomb's problem:

W(O) = let x = O(E1, 1/2), y = O(E1, 1/2) in (x, y)
U1(x, y) = 1000x + 1000000(1 − y)

Here, action 1 is 2-boxing and action 0 is 1-boxing. x is the player's actual action while y is Omega's prediction of the player's action. Note the strong resemblance between Newcomb's problem and playing prisoner's dilemma
|
e8a951e8-e8cf-4876-aa8d-df9f6f1c0b39
|
trentmkelly/LessWrong-43k
|
LessWrong
|
The Fourth Arena: What’s Up in the world these days? We’re moving to a new, a new what?
About ten years or so ago I discovered object-oriented ontology (OOO) and Bruno Latour. That plunged me into a philosophy period during which I ended up taking a really Big Look at things. I ended up sketching a cosmology/ontology, Living with Abundance in a Pluralist Cosmos: Some Metaphysical Sketches. I ended up arguing that, to date, the universe has seen the emergence of three arenas of abundance. I’ve taken the term “abundance” from Paul Feyerabend: the universe is abundant, it just keeps generating lots and lots of stuff.
I’ve identified these three realms on this nice chart which I found in Wikipedia’s entry on Universe. Roughly speaking, the universe began 14 billion years ago, giving rise to the arena of Matter. Four billion years ago Life emerged. While animals do have culture – the higher primates, certainly, do beaver dams count as culture? I don’t know – it’s human culture that ushered in a new arena, that of Culture. Wikipedia dates Oldowan tools to about 2.6 million years ago. That’s when we can locate the origins of human culture. Whether it’s a million years later or earlier hardly matters on this time scale.
Three Arenas: Matter, Life, Culture
Just as the other arenas exhibit internal differentiation, so does Culture. Over the years David Hays and I developed an account of cultural evolution, based on fundamental cognitive architecture, which we call cultural ranks. We’ve identified four cultural ranks. Rank 1 is based on speech and emerged we don’t really know how long ago. Let’s put it between 100,000 and a million years ago; I doubt that it’s younger and it may well be older. Rank 2 is based on writing and is 5000 to 7000 years old or so. Rank 3 began to emerge after Asian methods of calculation reached Europe by way of the Arabs. It is thus based on calculation and showed its face, say, 700 or so years ago. It gave us the scientific and industrial revolutions, but also the novel, coherent geometric perspective in drawing and painting, and harmon
|
8966435a-3c00-4f56-9654-9d2aacaf2bca
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Is keeping AI "in the box" during training enough?
Many current ML architectures do offline training on a fixed training set ahead of time. For example, GPT-3 is quite successful with this approach. These are, so-to-speak, "in the box" during training: all they can do is match the given text completion more or less well (for example). The parameters for the AI are then optimized for success at this mission. If the system gets no benefit from making threats, duplicity, etc. during training (and indeed, loses value for these attempts), then how can that system ever perform these actions after being 'released' post-training?
There are many stories of optimizations taking "AI" into truly unexpected and potentially undesired states, like this recent one, and we worry about similar problems with live AI even when put "in a box" with limited access to the outside world. If the training is done in a box, then the system may well understand that it's in a box, that the outside world could be influenced once training stops, and how to influence it significantly. But attempting to influence it during training is disincentivized and the AI that runs post-training is the same one that runs in-training. So how could this "trained in the box" AI system ever have the problematic escape-the-box style behaviors we worry about?
I ask this because I suspect my imagination is insufficient to think up such a scenario, not that none exist.
|
5323bc7e-6d25-42bc-a932-9b47422195e5
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Apply to a small iteration of MLAB to be run in Oxford
TLDR: We’re running a small iteration of MLAB (~10 participants) in Oxford towards the end of September. If you’re interested in participating, apply here by 7 September. If you’re interested in being a TA, please email us directly at oxfordmlab@gmail.com
Edit: The dates are now confirmed for the 23 September- 7 October.
Background
MLAB is a program, originally designed by Redwood Research, to help people upskill for alignment work. We think it’s a good use of time if you want to eventually get into technical alignment work, or if you want to work on theoretical alignment or related fields and think understanding ML would be useful. The program we’re running is slightly shorter than the full MLAB—two weeks instead of three. We’ve condensed the curriculum similarly to how WMLB was condensed last year.
We plan to have just under 10 participants, and 2-3 TAs.
Curriculum
This curriculum might change slightly. Depending on participant interest, we might also have two optional days before the course to work through prereqs (the W0 materials) together.
W0D1 - pre-course exercises on PyTorch and einops (CPU)
W1D1 - practice PyTorch by building a simple raytracer (CPU)
W1D2 - build your own ResNet (GPU preferred)
W1D3 - build your own backpropagation framework (CPU)
W1D4 - model training Part 1: model training and optimizers (CPU) Part 2: hyperparameter search (GPU preferred)
W1D5 - GPT Part 1: build your own GPT (CPU) Part 2: sampling text from GPT (GPU preferred)
W2D1&2 - transformer interpretability (CPU)
W2D3 - transformer interpretability on algorithmic tasks (CPU)
W2D4 - intro to RL Part 1: multi-armed bandit (CPU) Part 2: DQN (CPU)
W2D5 - policy gradients and PPO (CPU)
Other activities will include guest speakers and reading groups.
Logistics
Dates: 23 September - 7 October
Location: Oxford
Housing will be covered for participants not already living in Oxford.
Travel from within the UK is covered. Travel from outside the UK is not covered.
Questions
|
80431ff2-4608-478a-bdd4-411822bac499
|
trentmkelly/LessWrong-43k
|
LessWrong
|
[SEQ RERUN] The Amazing Virgin Pregnancy
Today's post, The Amazing Virgin Pregnancy was originally published on 24 December 2007. A summary (taken from the LW wiki):
> A story in which Mary tells Joseph that God made her pregnant so Joseph won't realize she's been cheating on him with the village rabbi.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Zen and the Art of Rationality, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
|
6fc60b4c-0653-4438-9018-cf871d037b19
|
trentmkelly/LessWrong-43k
|
LessWrong
|
The History of Color in Chinese
The linked post is better because you can see the colors.
----------------------------------------
Older Chinese characters tend to be simpler. You can use this to measure the relative ages of ancient concepts.
For example, the Chinese characters for colors are:
* 白 white
* 黑 black
* 红 red
* 橙 orange
* 黄 yellow
* 绿 green
* 蓝 blue
* 紫 purple
The simplest characters are written with a single radical. The oldest radicals are all Kangxi radicals. White 白, black 黑 and yellow 黄 are all written with a single Kangxi radical. The other colors contain at least two radicals.
This fits almost perfectly with Lazarus Geiger’s evolutionary progression[1] which goes black and white ➝ red ➝ yellow ➝ green ➝ blue. The only color out-of-place is red 红.
Wait a minute...I can't recall ever encountering the character "红" in ancient Chinese poetry.
That's because "红" is a relatively new character for the word "red". The traditional five colors 青黄赤白黑 use the Kangxi radical "赤" for red instead of "红". Mystery solved!
If we use the ancient character "赤" for red instead of the modern character "红" then the first four colors in Geiger's sequence are all simple Kangxi radicals and the rest are all composite characters with at least two radicals.
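The character-complexity argument above can be written down as a small Python check. The radical counts are encoded as described in this post (single Kangxi radical vs. composite); the variable names are mine:

```python
# Radical counts as described in the post: 白, 黑, 赤 (the ancient red),
# 黄 and 青 are single Kangxi radicals; the remaining colors are composites.
radical_count = {
    "白": 1, "黑": 1, "赤": 1, "黄": 1, "青": 1,   # single Kangxi radical
    "红": 2, "橙": 2, "绿": 2, "蓝": 2, "紫": 2,   # two or more radicals
}

# Geiger's progression: black/white -> red -> yellow -> green/blue,
# with 青 covering both green and blue.
geiger_order = ["白", "黑", "赤", "黄", "青"]

# Every color in Geiger's sequence is a single Kangxi radical;
# the colors outside it are all composite characters.
assert all(radical_count[c] == 1 for c in geiger_order)
assert all(radical_count[c] > 1 for c in ["红", "橙", "绿", "蓝", "紫"])
print("Geiger's sequence matches the single-radical colors")
```

The check only goes through once 赤 is substituted for the modern 红, which is exactly the point of the paragraph above.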
Blue-Green
It gets better. Did you notice how we've only used four of the five colors so far? Take a closer look at the five colors 青黄赤白黑. They include qīng 青, the color of nature, spanning blue and green. It occupies the 5th and 6th positions in Geiger's progression. Qīng 青 is written with a single Kangxi radical.
If all you have are black, white, red and yellow, then creating a single new color for blue and green is not unusual.
> Bastian also argued that Tagalog speakers in the Philippines had not even distinguished between green and blue until the arrival of the Spanish colonizers, because the Tagalog words for “green” and “blue” were clearly recent borrowings from Spanish verde and azul. And he claimed that the language of the Teda t
|
bae38ba4-2e04-45af-98b2-d3701ff046f5
|
StampyAI/alignment-research-dataset/blogs
|
Blogs
|
2018-19 New Year review
#### 2018 progress
Research / AI safety:
* Wrote a paper on [measuring side effects using relative reachability](https://arxiv.org/abs/1806.01186) in May, and presented the results at the ICML GoalsRL workshop and the AI safety summer school. Since then, some [new](https://openreview.net/forum?id=rkevMnRqYQ) [approaches](https://www.alignmentforum.org/posts/yEa7kwoMpsBgaBCgb/towards-a-new-impact-measure) have come out using my method as a baseline :).
* Made a list of 30 [specification gaming examples in AI](https://vkrakovna.wordpress.com/2018/04/02/specification-gaming-examples-in-ai/) (assembled from several existing lists). Since the list was posted in April, 16 new examples have been contributed through the form (thanks everyone!). The list received [some attention on Twitter](https://twitter.com/mogwai_poet/status/1060286856493813760), and I was interviewed about it by [Wired](https://www.wired.com/story/when-bots-teach-themselves-to-cheat/) and the [Times](https://www.thetimes.co.uk/edition/news/why-ai-can-be-too-clever-for-its-own-good-pvlgzrzbp).
* Was in the top 30% of NeurIPS reviewers.
* Gave talks at the [Oxford AI Society](https://www.facebook.com/oxaisoc/videos/vb.1366043036746776/1915180021891709), [EA Global London](https://vkrakovna.wordpress.com/2018/11/01/discussion-on-the-machine-learning-approach-to-ai-safety/), etc.
* Got involved in organizing the upcoming ICLR AI safety workshop, [Safe Machine Learning: Specification, Robustness and Assurance](https://sites.google.com/corp/view/safeml-iclr2019/).
Rationality / effectiveness:
* Attended the CFAR mentoring workshop in Prague, and started running rationality training sessions with Janos at our group house.
* Started using [work cycles](https://www.ultraworking.com/cycles/) – focused work blocks (e.g. pomodoros) with built-in reflection prompts. I think this has increased my productivity and focus to some degree. The prompt “how will I get started?” has been surprisingly helpful given its simplicity.
* Stopped eating processed sugar for health reasons at the end of 2017 and have been avoiding it ever since.
+ This has been surprisingly easy, especially compared to my earlier attempts to eat less sugar. I think there are two factors behind this: avoiding sugar made everything taste sweeter (so many things that used to taste good now seem inedibly sweet), and the mindset shift from “this is a luxury that I shouldn’t indulge in” to “this is not food”.
+ Unfortunately, I can’t make any conclusions about the effects on my mood variables because of some issues with my data recording process :(.
* Declining levels of insomnia (excluding jetlag):
+ 22% of nights in the first half of 2017, 16% in the second half of 2017, 16% in the first half of 2018, 10% in the second half of 2018.
+ This is probably an effect of the sleep CBT program I did in 2017, though avoiding sugar might be a factor as well.
* Made some progress on reducing non-research commitments (talks, reviewing, organizing, etc).
+ Set up some systems for this: a spreadsheet to keep track of requests to do things (with 0-3 ratings for workload and 0-2 ratings for regret) and a form to fill out whenever I’m thinking of accepting a commitment.
+ My overall acceptance rate for commitments has gone down a bit from 29% in 2017 to 24% in 2018. The average regret per commitment went down from 0.66 in 2017 to 0.53 in 2018.
+ However, since the number of requests has gone up, I ended up with more things to do overall: 12 commitments with a total of 23 units of workload in 2017 vs 19 commitments with a total of 33 units of workload in 2018. (1 unit of workload ~ 5 hours)
Fun stuff:
* Hiked in the Alps for the first time:
+ Tour de Mont Blanc – a weeklong hike around Mont Blanc going through France, Italy and Switzerland. It felt funny to cross a mountain pass and end up in a different country without anyone checking my passport. There were a lot of meadows and cows. 
+ Monte Rosa glacier hike ([Gnifetti normal route](https://www.summitpost.org/punta-gnifetti-normal-route/156188)). We were all connected by a rope in case someone falls into a crack in the ice. The first night (at 3500m) I could not sleep at all due to altitude and had the interesting experience of a full day hike afterwards.
* Spontaneous solo trip to Amsterdam for my birthday
* Helped run the Dead Hand Path camp at Burning Man and organized a series of AI safety talks
* Read the Book of Why, Other Minds, The Player of Games, Life 3.0, The Elephant in the Brain.
* Did 5 chinups in a row (only once, usually I can do 3)
* Learned a headstand in yoga class
* Learned some new moves in aerial silks
* Our group house has been adopted by a neighbour’s cat (it all started with crashing parties). After some of our housemates moved a few blocks away, the cat has been splitting her time between the two houses.

#### 2018 prediction outcomes
Resolutions:
1. Write at least 2 AI blog posts that are not about conferences (1 last year) (70%) – 4 posts
2. Avoid processed sugar at least until end of March (90%) – yes (still going)
3. ~~Do at most 4 non-research talks/panels (7 last year)~~ (50%) – 5 talks
4. Meditate on at least 250 days (50%) – 283 days
Predictions:
1. Our AI safety team will have at least two papers accepted for publication at a major conference, not counting workshops (80%) – yes
2. ~~I will write at least 6 blog posts~~ (60%) – wrote 5 posts
3. I will go to at least 100 exercise classes (80 last year) (60%) – 123 classes
4. 1-2 housemate turnover at the Deep End (3 last year) (70%) – 2 housemates
5. I will visit at least 3 new cities with population over 100,000 (4 last year) (50%) – Amsterdam, Geneva, Stockholm, Prague
6. I will go on at least 2 hikes (4 last year) (90%) – 3 major hikes (Vancouver Island, Tour de Mont Blanc, Monte Rosa)
Calibration:
* 50-60%: 3 correct, 2 wrong
* 70-90%: 5 correct
* High-confidence predictions are underconfident, low-confidence predictions are well-calibrated.
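For concreteness, the calibration tally above can be recomputed with a short Python sketch (the encoding of outcomes is mine, taken from the lists above):

```python
# Resolutions and predictions from the post, as (stated probability, came true).
predictions = [
    (0.70, True),   # >= 2 AI blog posts (4 posts)
    (0.90, True),   # avoid processed sugar
    (0.50, False),  # at most 4 talks (gave 5)
    (0.50, True),   # meditate >= 250 days (283)
    (0.80, True),   # two papers accepted
    (0.60, False),  # >= 6 blog posts (wrote 5)
    (0.60, True),   # >= 100 exercise classes (123)
    (0.70, True),   # 1-2 housemate turnover (2)
    (0.50, True),   # >= 3 new cities (4)
    (0.90, True),   # >= 2 hikes (3)
]

def bucket_accuracy(preds, lo, hi):
    """Count (correct, total) for predictions with lo <= p <= hi."""
    in_bucket = [ok for p, ok in preds if lo <= p <= hi]
    return sum(in_bucket), len(in_bucket)

print("50-60%%: %d/%d" % bucket_accuracy(predictions, 0.50, 0.60))  # 3/5
print("70-90%%: %d/%d" % bucket_accuracy(predictions, 0.70, 0.90))  # 5/5
```

This reproduces the buckets above: low-confidence predictions land near their stated rate, while the 70–90% bucket comes out 5 for 5 (underconfident).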
#### 2019 goals and predictions
Resolutions:
1. Author or coauthor two or more academic papers (50%)
2. Accept at most 17 non-research commitments (24 last year) (60%)
3. Meditate on at least 250 days (60%)
Predictions:
1. Relative reachability paper accepted at a major conference, not counting workshops (60%)
2. Continue avoiding processed sugar for the next year (85%)
3. 1-2 housemate turnover at Deep End (2 last year) (80%)
4. At least 5 rationality sessions will be hosted at Deep End (80%)
Past new year reviews: [2017-18](https://vkrakovna.wordpress.com/2018/01/07/2017-18-new-year-review/), [2016-17](https://vkrakovna.wordpress.com/2017/01/09/2016-17-new-year-review/), [2015-16](https://vkrakovna.wordpress.com/2015/12/31/2015-16-new-year-review/), [2014-15](https://vkrakovna.wordpress.com/2015/01/11/2014-15-new-year-review/).
|
350a2ca8-3b60-4ccf-af42-4a4f44b530d1
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Man in the Arena
> “It is not the critic who counts; not the man who points out how the strong man stumbles, or where the doer of deeds could have done them better. The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood; who strives valiantly; who errs, who comes short again and again, because there is no effort without error and shortcoming; but who does actually strive to do the deeds… Who at the worst, if he fails, at least fails while daring greatly, so that his place shall never be with those cold and timid souls who neither know victory nor defeat.”
Today Kurt’s out strafing, which his stream loves, but which takes a lot of effort. Girl on the left, red jacket, half a block away. He focuses his eyes on her and his viZor immediately starts filling with insults, pick-up lines, nonsensical keyboard-mashing, and whatever else the schizophrenic hivemind of his viewers feels like generating. As the votes pour in the lines are shuffling around too fast for him to read, but that’s okay, she’s still twenty meters away. Ten meters, and it’s stabilized, one of them has clicked into top place; five meters, and he’s figured out just the right intonation. “Hey bitch”, then a pause—gotta get the pause just right, so she has enough time to realize he’s talking to her and look up, but not quite enough to process that anyone who’s calling her a bitch in the middle of the street is not someone she wants to be listening to—“You got a license to be this ugly in public?”
Boom! Perfect timing—he actually manages to get the shocked little o of her open mouth on camera, before she ducks her head away and hurries past him. The next line is popping up in his viZor, and he almost yells it out after her, but when you land a good first hit it’s easy to ruin it with a subpar follow-up. Patience is what separates the best from the rest, he always tells people. So with a swipe of his fingers he replays the clip to his stream instead. “See?” he says. “Fo
|
db3a5640-4ee7-4673-9c69-9e706f20de93
|
StampyAI/alignment-research-dataset/arxiv
|
Arxiv
|
Safe Reinforcement Learning via Probabilistic Shields
Introduction
------------
Recent years showed increased use of reinforcement learning (RL) in solving tasks such as complex games (?) or robotic manipulation (?).
In RL, an agent perceives the surrounding environment and acts towards maximizing a long-term reward signal.
A major open challenge is the *safety* of decision-making for systems employing RL (?; ?).
Particularly during the exploration phase, when an agent chooses random actions in order to examine its surroundings, it is important to avoid actions that may cause unsafe outcomes.
The area of *safe exploration* investigates how RL agents can adhere to safety requirements during this phase (?; ?).
One suitable technique that delivers theoretical guarantees is the use of so-called *safety shields* (?; ?).
Shields prevent an agent from taking unsafe actions at runtime.
To this end, the performance objective is extended with a constraint specifying that unsafe states should *never* be visited.
This new safety objective ensures there are no violations during the exploration phase.
So far, shields have shown success in deterministic settings, where an agent avoids safety violations altogether.
However, in many cases this tight restriction limits the agent’s exploration and understanding of the environment, and policies satisfying the restrictions may not even exist.
We propose to incorporate more liberal constraints that enforce safety violations to occur *only with small probability*.
If an action increases the probability of a safety violation by more than a threshold δ with respect to the optimal safety probability, the shield blocks the action from the agent.
Consequently, an agent augmented with a shield is *guided* to satisfy the safety objective during exploration (or as long as the shield is used).
The shield is *adaptive* with respect to δ:
a high value for δ yields a stricter shield, a smaller value a more permissive shield.
The value for δ can be changed on-the-fly, and may depend on the individual minimal safety probabilities at each state.
Moreover, in case there is no suitable safe action with respect to δ, the shield can always pick the optimal action as a fallback.
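A minimal sketch of such an adaptive shield, assuming the per-state, per-action safety probabilities have already been computed by a model checker (all names are hypothetical; the exact δ semantics in the paper may differ in detail):

```python
def shield(state, actions, safety_prob, delta):
    """Probabilistic shield sketch.

    safety_prob[(state, action)] is the precomputed (model-checked)
    probability of remaining safe when taking `action` in `state` and
    acting optimally afterwards. An action a is allowed iff
    safety_prob[(state, a)] >= delta * best, so a larger delta yields
    a stricter shield and a smaller delta a more permissive one."""
    best = max(safety_prob[(state, a)] for a in actions)
    allowed = [a for a in actions if safety_prob[(state, a)] >= delta * best]
    # Fallback: if nothing passes the threshold, allow an optimal action.
    return allowed or [a for a in actions if safety_prob[(state, a)] == best]

# Toy example: two actions available in state s0.
probs = {("s0", "left"): 0.99, ("s0", "right"): 0.60}
print(shield("s0", ["left", "right"], probs, delta=0.9))  # ['left']
print(shield("s0", ["left", "right"], probs, delta=0.5))  # ['left', 'right']
```

With a strict threshold (δ = 0.9) the risky action is blocked; relaxing δ lets the agent explore it, illustrating the safety/exploration trade-off discussed below.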
We base our formal notion of a probabilistic shield on MDPs, which constitute a popular modeling formalism for decision-making under uncertainty (?) and are widely used in model-based RL.
We assess safety by means of probabilistic *temporal logic constraints* (?) that limit, for example, the probability to reach a set of critical states in the MDP.
In order to assess the risk of one action, we (1) construct a behavior model for the environment using model-based RL (?). We can plug this model into any concrete scenario to obtain an MDP.
To construct the shield, we (2) use a model-based verification technique known as *model checking* (?; ?) that assesses whether a system model satisfies a specification.
Due to its rigor, the validity of results *only* depends on the quality of the model, and
we obtain precise *safety probabilities of any possible decision* within the MDP.
These probabilities can be looked up efficiently and compared to the threshold δ𝛿\deltaitalic\_δ.
The shield then readily (3) augments either model-free or model-based RL.
We identify three key challenges:
Firstly, model checking – as any model-based technique – is susceptible to scalability issues.
A key advantage of using a separate safety objective is that we may analyze safety on just a fraction of the system, the *safety-critical MDP*.
In our experiments, these MDP fragments are at least ten orders of magnitude smaller than a full model of the system, rendering model checking applicable to realistic scenarios.
We introduce further optimizations based on problem-specific abstraction techniques.
Secondly, without randomness, all states are either absolutely safe or unsafe.
However, in the presence of randomness, safety may be seen as a quantitative measure: in some states all actions may induce a large risk, while one action may be considered *relatively* safe.
Therefore, it is essential to have an *adaptive* notion of shielding, in which the pre-selection of actions is not based on absolute thresholds.
Lastly, shielding may *restrict* exploration and lead to suboptimal policies. Therefore, it should not be considered in isolation.
The trade-off between optimizing the performance objective and the achieved safety is intricate.
Intuitively, accepting small short term risks may allow for efficient exploration and limit the risk long-term.
To this end, we provide and discuss mechanisms that allow to adjust the shield based on such observations.
We apply shielding to two distinct use cases: the arcade game PAC-MAN and a new case study involving service robots in a warehouse.
Shielded RL leads to improved policies for both case studies with fewer safety violations and performance superior to unshielded RL.
Supplementary materials are available at <http://shieldrl.nilsjansen.org>.
### Related Work.
Most approaches to safe RL (?; ?) rely on reward engineering and effectively changing the learning objective.
In contrast to ensuring temporal logic constraints, reward engineering designs or “tweaks” the reward functions such that a learning agent behaves in a desired, potentially safe, manner.
As rewards are specialized for particular environments, reward engineering runs the risk of triggering negative side effects or hiding potential bugs (?).
Recently, it was shown that reward engineering is not sufficient to capture temporal logic constraints in general (?).
Additionally, in (?) the exploration of model-free RL algorithms is limited using control barrier functions and in (?) exploration is restricted to a space close to an optimal, precomputed policy.
First approaches directly incorporating formal specifications tackle this problem with pre-computations; making assumptions on the available information about the environment (?; ?; ?; ?; ?; ?), by employing PAC guarantees (?), or by an intermediate “correction” of policies (?).
Most related is (?), which introduces the concept of a shield for RL.
The difference and novel contribution is rooted in the consideration of stochastic behavior, which is natural to RL.
Intuitively, without stochasticities, a learning agent does not take any risk, which is unrealistic in most scenarios.
Moreover, often one cannot assume that 100% (or almost-sure) safety is realizable.
A similar approach to ours was developed independently in (?), but targets a different case study and does not consider scalability issues of formal verification.
In a related direction, methods from reinforcement learning have been successfully employed to improve the scalability of verification methods for MDPs.
Such approaches often use rich specifications like ω-regular languages as a control to guide the exploration of the MDP during learning (?; ?; ?; ?; ?).
Safe model-based RL for continuous state spaces employing Lyapunov functions is considered in (?; ?).
UPPAAL STRATEGO provides a number of algorithms combining safety synthesis with optimizing RL for continuous space MDPs (?).
Finally, (?) uses control barrier functions (CBFs) for safe RL.
Probabilistic planning considers similar problems as probabilistic model checking (?; ?).
A recent comparison between tools from both areas can be found in (?).
Problem Statement
-----------------
### Foundations.
A *probability distribution* over a countable set X is a function μ: X → [0, 1] with ∑_{x∈X} μ(x) = 1.
Distr(X) denotes all distributions on X. The support of μ ∈ Distr(X) is supp(μ) = {x ∈ X ∣ μ(x) > 0}.
A *Markov decision process* (MDP) ℳ = (S, Act, 𝒫, r) has a set S of *states*, a finite set Act of *actions*, a (partial) *probabilistic transition function* 𝒫: S × Act → Distr(S), and an *immediate reward function* r: S × Act → ℝ_{≥0}.
For all s ∈ S the available actions are Act(s) = {α ∈ Act ∣ 𝒫(s, α) ≠ ⊥}, and we assume |Act(s)| ≥ 1.
A *policy* is a function σ: S* → Distr(Act) with supp(σ(s_1 … s_n)) ⊆ Act(s_n), where S* denotes a finite sequence of states.
In formal methods, safety properties are often specified as *linear temporal logic* (LTL) properties (?).
For an MDP ℳ, probabilistic model checking (?; ?) employs value iteration or linear programming to compute the probabilities of *all states and actions of the MDP* to satisfy an LTL property φ.
Specifically, we compute η^max_{φ,ℳ}: S → [0, 1] or η^min_{φ,ℳ}: S → [0, 1], which give for all states the maximal (respectively minimal) probability over all possible policies to satisfy φ.
For instance, for φ encoding to reach a set of states T, η^max_{φ,ℳ}(s) describes the maximal probability to “eventually” reach a state in T.
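The computation of η^max for such a reachability property can be sketched with plain value iteration. This is a simplified stand-in for what a probabilistic model checker does internally; the data layout and names are hypothetical:

```python
def max_reach_prob(P, targets, n_iter=100):
    """Value iteration sketch for eta^max: the maximal probability,
    over all policies, of eventually reaching `targets` in an MDP.
    P[s][a] is a dict {successor: probability} for action a in state s."""
    states = list(P)
    eta = {s: (1.0 if s in targets else 0.0) for s in states}
    for _ in range(n_iter):
        for s in states:
            if s in targets:
                continue  # target states keep probability 1
            # Bellman update: best action maximizes expected successor value.
            eta[s] = max(sum(p * eta[t] for t, p in dist.items())
                         for dist in P[s].values())
    return eta

# Toy MDP: from s0, action 'a' reaches the target with prob 0.5,
# action 'b' with prob 0.9; the max over policies picks 'b'.
P = {
    "s0":   {"a": {"goal": 0.5, "sink": 0.5},
             "b": {"goal": 0.9, "sink": 0.1}},
    "goal": {"stay": {"goal": 1.0}},
    "sink": {"stay": {"sink": 1.0}},
}
print(max_reach_prob(P, {"goal"})["s0"])  # 0.9
```

In the shield construction, these per-state (and per-action) values are exactly the safety probabilities looked up against the threshold δ.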
### Setting.
We define a setting
where one controllable agent (the *avatar*) and a number of uncontrollable agents (the *adversaries*) operate within an *arena*.
The arena is a compact, high-level description of the underlying model.
From this arena, the potential states and actions of all agents may be inferred.
For safety considerations, the reward structure can be neglected, effectively reducing the state space for our model-based safety computations.
Formally, an *arena* is a directed graph $G=(V,E)$ with a finite set $V$ of nodes and a set $E\subseteq V\times V$ of edges.
The agent's *position* is given by the current node $v\in V$. The agent *decides* on an edge $(v,v')\in E$, which determines its next position $v'$.
Some (combinations of) agent positions are safety-critical, as they correspond, e.g., to collisions or falling off a cliff. A safety property may describe reaching such positions, or use any other property expressible in (the safety fragment of) temporal logic.
While the underlying model for the arena suffices to specify the safe behavior, it is not sufficiently succinct to model the performance via rewards.
Consider an edge that is safety-relevant but rewarded only the first time the agent takes it.
In a flat model with rewards, two different copies of this edge are necessary to model this behavior.
However, the reward (and thus the difference between these edges) is not needed to assess the safety, and the safety-relevant model may be pruned to an exponentially smaller model.
We use a *token* function that implicitly extends the underlying model by a reward structure, enabling a separation of concerns between safety and performance.
Technically, we associate edges with a token function $\circ\colon E\rightarrow\{0,1\}$, indicating the status of each edge.
Tokens can be (de-)activated and have an associated *reward*, earned upon taking an edge with an active token.
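A minimal sketch of an arena whose edges carry tokens and rewards, assuming (as in the examples below) that a token is deactivated permanently once its reward has been collected; the class and its names are our own illustration.

```python
from dataclasses import dataclass

@dataclass
class Arena:
    """Directed-graph arena with a token function on edges (illustrative sketch)."""
    edges: set    # set of (v, v') pairs
    token: dict   # edge -> 0/1, the token function (active token = 1)
    reward: dict  # edge -> reward earned while the edge's token is active

    def step(self, edge):
        """Move along `edge`; collect and deactivate its token, if any."""
        assert edge in self.edges, "not an edge of the arena"
        r = self.reward.get(edge, 0) if self.token.get(edge, 0) else 0
        self.token[edge] = 0  # here: tokens are deactivated permanently
        return r
```

Note that the token function only affects rewards, not safety; this is the separation of concerns exploited below.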
*Example 1: Autonomous driving.*
An autonomous taxi (the avatar) operates within a road network encoded by an arena.
The taxi has to visit several points to pick up or drop off passengers (?; ?).
Upon visiting such a point, a corresponding token activates and a reward is earned, afterwards the token is deactivated permanently.
Meanwhile, the taxi has to account for other traffic participants or further environmental factors (the adversaries).
A sensible safety specification may restrict the probability of a collision with other cars to $0.5\,\%$.
Note that the token structure is not relevant for such a specification.
*Example 2: Robot logistics in a smart factory.*
Take a factory floor plan with several corridors with machines.
The adversaries are (possibly autonomous) transporters moving parts within the factory.
The avatar models a specific service unit moving around and inspecting machines where an issue has been raised (as indicated by a token), while accounting for the behavior of the adversaries.
Corridors might be too narrow for multiple (facing) robots, which poses a safety-critical situation.
The tokens allow for a *state-dependent* cost, incurred either as long as they are present (indicating the cost of a broken machine) or upon their removal (indicating the cost of inspecting the machine).
A similar scenario has been investigated in (?).
### Problem.
Consider an environment described by an arena as above and a safety specification.
We assume stochastic behaviors for the adversaries, e.g., obtained using RL (?; ?) in a training environment.
In fact, this stochastic behavior determines all actions of the adversaries via probabilities.
The underlying model is then a Markov decision process: the avatar executes an action, and upon this execution the next exact positions (the state of the system) are determined stochastically.
We compute a *$\delta$-shield* that prevents avatar decisions that violate this specification by more than a threshold $\delta$ with respect to the optimal safety probability.
We evaluate the shield using a model-based or model-free RL avatar that aims to optimize the performance.
The shield therefore has to handle an intricate tradeoff between strictly focusing on (short- and mid-term) safety and performance.
Constructing Shields for MDPs
-----------------------------
Figure 1: Workflow of the shield construction. Observations of the adversaries yield behavior models. Together with the arena (extended with tokens and rewards), these induce the full MDP and its safety-relevant MDP quotient, on which the shield is constructed. Combined with model-free or model-based RL, the shield yields a safe policy for the avatar.
We outline the workflow of our approach in Fig. 1 and below.
We employ a separation of concerns between the model-based shield construction and potentially model-free reinforcement learning (RL).
First, we construct a *behavior model* for each adversary.
Based on this model and a concrete arena, we construct a compact MDP model: the *safety-relevant MDP quotient*.
In this MDP, we compute the *shield* which enables safe RL for the full MDP.
We now detail the individual technical steps to realize our proposed method.
### Behavior Models for Adversaries.
We learn an adversary model by observing behavior in a set of similar (small) arenas, until we gain sufficient confidence that more training data would not change the behavior significantly (?).
An upper bound on the necessary data may be obtained using Hoeffding’s inequality (?).
To reduce the size of the training set, we devise a data augmentation technique using domain knowledge of the arenas (?; ?).
In particular, we abstract away from the precise configuration of the arena by partitioning the graph into zones that are relative to the view-point of the adversary (e.g., near or far, north or south, east or west).
The intuitive assumption is that the specific position of an adversary is not important, but some key information is (e.g., the relation to the position of the avatar).
This approach (1) speeds up the learning process and (2) renders the resulting behavior model applicable for varying the concrete instance of the same setting.
Zones are uniquely identified by a coloring with a finite set $C$ of colors.
Formally, for an arena $G=(V,E)$, *zones relative to a node $v\in V$* are given by a function $z_v\colon V\rightarrow C$.
For nodes $x,y\in V$ with $z_v(x)=z_v(y)$, the assumption is that the adversary in $v$ behaves similarly regardless of whether the avatar is in $x$ or $y$.
From our observations, we extract a *histogram* $h\colon E\times C\rightarrow\mathbb{N}$, where $h(e,c)$ describes how often the adversary takes an edge $e=(v,v')\in E$ while the avatar is in a node $u$ with $z_v(u)=c$.
We translate these likelihoods into distributions over possible edges in the arena.
######
Definition 1 (Adversary Behavior).

For an arena $G=(V,E)$, zones $z_u\colon V\rightarrow C$ for every $u\in V$, and a histogram $h\colon E\times C\rightarrow\mathbb{N}$, the *adversary behavior* is a function $B\colon V\times C\rightarrow\mathit{Distr}(E)$ with

$$B(v,c)\big((v,v')\big)=\frac{h\big((v,v'),c\big)}{\sum_{(v,v'')\in E}h\big((v,v''),c\big)}\,.$$
While we employ a simple normalization of likelihoods, one may alternatively use, e.g., a softmax function, which can be adjusted to favor more or less likely decisions (?).
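Definition 1 (and the softmax variant just mentioned) can be sketched as follows; the dictionary encoding of the histogram and the `temperature` parameter are our own assumptions.

```python
import math
from collections import defaultdict

def adversary_behavior(h, temperature=None):
    """Turn a histogram h[(edge, color)] -> count into behavior distributions.

    Each edge is a pair (v, v_next).  Returns B with
    B[(v, color)] = {edge: probability}.  With `temperature` set, a softmax
    over the counts is used instead of plain normalization.
    """
    groups = defaultdict(dict)
    for (edge, color), count in h.items():
        groups[(edge[0], color)][edge] = count  # group by source node and color
    B = {}
    for key, counts in groups.items():
        if temperature is None:
            total = sum(counts.values())        # plain normalization (Def. 1)
            B[key] = {e: c / total for e, c in counts.items()}
        else:
            weights = {e: math.exp(c / temperature) for e, c in counts.items()}
            total = sum(weights.values())       # softmax over counts
            B[key] = {e: w / total for e, w in weights.items()}
    return B
```

A low temperature sharpens the softmax towards the most frequent edges; a high temperature flattens it.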
### Safety-Relevant Quotient MDP.
The construction of the MDP $\mathcal{M}=(S,\mathit{Act},\mathcal{P})$ augments an arena by the behavior models $B_i$.
First, the *states* $S=V^{m+1}\times\{0,\ldots,m\}$ encode the positions of all agents and whose turn it is.
The *decision states* of the safety-relevant MDP $\mathcal{M}$ are $S_d=\{s_d\in S\mid s_d=(\ldots,0)\}$,
i.e., the states in which it is the avatar's turn.
The *actions* $\mathit{Act}=\{\alpha_0\}\cup\mathit{Act}_E$ with $\mathit{Act}_E=\{\alpha_e\mid e\in E\}$ determine the movements of the avatar and the adversaries.
For $(v,\ldots,0)=s_d\in S_d$ (the avatar moves next), the available actions are $\alpha_e\in\mathit{Act}(s_d)\subseteq\mathit{Act}_E$, where each $\alpha_e$ corresponds to an outgoing edge of $v$; taking $\alpha_e$ with $e=(v,v')$ leads with probability one to a state $s_e=(v',\ldots,1)$.
For $(v,\ldots,v_i,\ldots,i)$ with $i>0$ (adversary $i$ moves next), there is a unique action $\alpha_0$ under which $v_i$ is changed to $v'_i$, randomly determined according to the behavior $B_i$; this also updates $i$ to $i+1$ modulo $m+1$.
These transitions induce the only probabilistic choices in the MDP.
A policy only has to choose an action at decision states; at all other states, only the unique action $\alpha_0$ is available.
Consequently, a policy for $\mathcal{M}$ is a policy for the avatar.
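The turn-based transition structure described above can be sketched as follows; representing each behavior model as a callable that already resolves zones and colors is our simplification, and all names are illustrative.

```python
def successors(state, arena_out, behaviors, m):
    """Enumerate transitions of the safety-relevant MDP (illustrative sketch).

    `state` = (positions tuple, turn index i); index 0 is the avatar's turn.
    `arena_out[v]` lists the outgoing neighbours of v; `behaviors[i]` maps the
    current positions to a distribution {new_position: prob} for adversary i.
    Returns {action: {next_state: probability}}.
    """
    positions, i = state
    if i == 0:
        # avatar decides: one deterministic action per outgoing edge
        return {('move', v2): {((v2,) + positions[1:], 1): 1.0}
                for v2 in arena_out[positions[0]]}
    # adversary i moves stochastically under the unique action alpha_0
    dist = behaviors[i](positions)
    nxt = {}
    for v2, p in dist.items():
        pos = positions[:i] + (v2,) + positions[i + 1:]
        nxt[(pos, (i + 1) % (m + 1))] = p
    return {'alpha0': nxt}
```

The only probabilistic branching occurs in the adversary turns, matching the construction above.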
In theory, one can build the full MDP for the arena $(V,E)$ and the token function $\circ\colon E\rightarrow\{0,1\}$ under the assumption that the reward function is known.
Then, one could compute the reward-optimal safe policy without any further learning techniques.
However, as there are $2^{|E|}$ token configurations, the state space blows up exponentially, which prevents the successful application of model checking or planning techniques to anything but very small applications.
### Shield Construction.
For the safety-relevant MDP $\mathcal{M}$, a set of unsafe states $T\subseteq S$ should preferably not be reached from any state.
The property $\varphi=\lozenge T$ encodes the violation of this safety constraint, that is, eventually reaching $T$ within $\mathcal{M}$.
The shield needs to limit the probability to *satisfy* $\varphi$.
We evaluate all decision states $s_d\in S_d$ with respect to this probability:
we compute $\eta^{\min}_{\varphi,\mathcal{M}}(s_e)$, i.e., the minimal probability to satisfy $\varphi$ from $s_e$, the state reached after taking action $\alpha_e\in\mathit{Act}(s_d)$ in $s_d$.
######
Definition 2 (Action-valuation).

The *action-valuation* at a state $s_d\in S_d$ is the function

$$\mathit{val}_{s_d}^{\mathcal{M}}\colon\mathit{Act}(s_d)\rightarrow[0,1]\text{, with }\mathit{val}_{s_d}^{\mathcal{M}}(\alpha_e)=\eta^{\min}_{\varphi,\mathcal{M}}(s_e)\,.$$

The *optimal action-value* for $s_d$ is $\mathit{optval}_{s_d}^{\mathcal{M}}=\min_{\alpha'\in\mathit{Act}(s_d)}\mathit{val}_{s_d}^{\mathcal{M}}(\alpha')$; the set of all action-valuations at $s_d$ is $\mathit{ActVals}_{s_d}$.
We now define a shield for the safety-relevant MDP $\mathcal{M}$ using the action-valuations.
Specifically, a *$\delta$-shield* for $\delta\in[0,1]$ determines a set of actions at each decision state $s_d$ that are $\delta$-optimal for the specification $\varphi$.
All other actions are "shielded" or "blocked".
######
Definition 3 (Shield).

For an action-valuation $\mathit{val}_{s_d}^{\mathcal{M}}$ and $\delta\in[0,1]$, a *$\delta$-shield for state $s_d\in S_d$* is the function

$$\mathit{shield}_{\delta}^{s_d}\colon\mathit{ActVals}_{s_d}\rightarrow 2^{\mathit{Act}(s_d)}$$

with $\mathit{shield}_{\delta}^{s_d}(\mathit{val}_{s_d}^{\mathcal{M}})=\{\alpha\in\mathit{Act}(s_d)\mid\delta\cdot\mathit{val}_{s_d}^{\mathcal{M}}(\alpha)\leq\mathit{optval}_{s_d}^{\mathcal{M}}\}\,.$
Intuitively, $\delta$ enforces a constraint on actions that are acceptable with respect to the optimal safety probability.
The shield is *adaptive* with respect to $\delta$: a high value of $\delta$ yields a stricter shield, a smaller value a more permissive one.
The shield is stored in a lookup table, so the value of $\delta$ can be changed on the fly.
In particularly critical situations, the shield can force the decision-maker to resort to (only) the actions that are optimal w.r.t. the safety objective.
A $\delta$-shield for the MDP $\mathcal{M}$ is obtained by constructing and applying $\delta$-shields to all decision states.
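The per-state shield computation follows directly from Definition 3; the dictionary representation of the action-valuation in this sketch is our own choice.

```python
def delta_shield(action_vals, delta):
    """Allowed actions at a decision state under a delta-shield.

    `action_vals[a]` is the minimal probability of violating safety after
    taking action `a` (the action-valuation).  An action is kept iff
    delta * val(a) <= optval, where optval is the best value at this state.
    """
    optval = min(action_vals.values())
    return {a for a, v in action_vals.items() if delta * v <= optval}
```

Raising $\delta$ towards 1 shrinks the allowed set to the safety-optimal actions; $\delta=0$ blocks nothing, matching the adaptivity discussed above.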
######
Definition 4 (Shielded MDP).

The *shielded MDP* $\mathcal{M}_{\mathit{sh}}=(S,\mathit{Act},\mathcal{P}_{\mathit{sh}})$ for a safety-relevant quotient MDP $\mathcal{M}=(S,\mathit{Act},\mathcal{P})$ and a $\delta$-shield for all $s_d\in S_d$ is given by the transition probability function $\mathcal{P}_{\mathit{sh}}$ with $\mathcal{P}_{\mathit{sh}}(s,\alpha)=\mathcal{P}(s,\alpha)$ if $\alpha\in\mathit{shield}_{\delta}^{s}(\mathit{val}_{s}^{\mathcal{M}})$, and $\mathcal{P}_{\mathit{sh}}(s,\alpha)=\bot$ otherwise.
######
Lemma 1.

The MDP $\mathcal{M}$ is deadlock-free if and only if the shielded MDP $\mathcal{M}_{\mathit{sh}}$ is deadlock-free.
We compute the shield relative to the optimal values $\mathit{optval}_{s_d}^{\mathcal{M}}$.
Consequently, for $\delta=1$, only optimal actions are preserved, and for $\delta=0$ no actions are blocked.
######
Theorem 1.

For an MDP $\mathcal{M}$ and a $\delta$-shield, it holds for any state $s$ that $\mathit{val}_{s}^{\mathcal{M}}=\mathit{val}_{s}^{\mathcal{M}_{\mathit{sh}}}$.
As optimal actions for the safety objective are never removed, optimality w.r.t. safety is preserved in the shielded MDP.
Thus, during the construction of the shield, we in fact compute the action-valuations *for the shielded MDP*.
Observe that computing the shield for a state is done *independently* of applying the shield to other states.
### Guaranteed Safety.
A $\delta$-shield ensures that only actions that are $\delta$-optimal with respect to an LTL property $\varphi$ are allowed.
In particular, for each action $\alpha_e\in\mathit{Act}(s_d)$ at a decision state $s_d$, we use the *minimal* probability $\eta^{\min}_{\varphi,\mathcal{M}}(s_e)$ to satisfy $\varphi$ from the successor state $s_e$, see Def. 2.
Under *optimal* (subsequent) choices, the value ηφ,ℳmin(se)subscriptsuperscript𝜂𝜑ℳsubscript𝑠𝑒\eta^{\min}\_{\varphi,\mathcal{M}}(s\_{e})italic\_η start\_POSTSUPERSCRIPT roman\_min end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_φ , caligraphic\_M end\_POSTSUBSCRIPT ( italic\_s start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT ) will be achieved.
In contrast, a sequence of bad choices may violate φ𝜑\varphiitalic\_φ with high probability.
A more conservative notion would use the minimal action value under the assumption that in all subsequent states the worst-case decisions, corresponding to the maximal violation probabilities, are taken.
These values are computable by model checking.
Regardless of subsequent choices, at least $\textsl{val}_{s_{d}}^{\mathcal{M}}(\alpha_{e})$ is then guaranteed.
A sensible notion to construct a shield would then be to impose a threshold $\lambda\in[0,1]$ such that only actions with $\textsl{val}_{s_{d}}^{\mathcal{M}}(\alpha_{e})\leq\lambda$ are allowed.
A shield with such a guaranteed safety probability may induce a shielded MDP (Def. [4](#Thmdefinition4 "Definition 4 (Shielded MDP). ‣ Shield Construction. ‣ Constructing Shields for MDPs ‣ Safe Reinforcement Learning via Probabilistic Shields")) that is *not deadlock free*.
Moreover, the shield may become too restrictive for the agent.
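As a minimal sketch of these two filtering rules (hypothetical function names and interfaces; the paper's formal definitions are authoritative), a shield can be realized as a filter over the action valuations computed by model checking:

```python
def delta_shield(sat_prob, delta):
    # sat_prob: maps each available action to its (minimal) probability
    # of satisfying phi, as computed by model checking.
    # One reading of "delta-optimal": allow actions achieving at least
    # delta times the best satisfaction probability (an assumption here,
    # not necessarily the paper's verbatim definition).
    best = max(sat_prob.values())
    return {a for a, p in sat_prob.items() if p >= delta * best}


def lambda_shield(viol_prob, lam):
    # viol_prob: maps each action to its worst-case probability of
    # violating phi. Allow only actions below the absolute threshold
    # lam; the result may be empty (the deadlock risk noted above).
    return {a for a, p in viol_prob.items() if p <= lam}
```

Note that `delta_shield` always keeps at least one optimal action, so the shielded MDP stays deadlock free, whereas `lambda_shield` may block every action.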
### Scalable Shield Construction.
Although we apply model checking only in the safety-relevant MDP, scalability issues for large applications remain.
We employ several optimizations towards computational tractability.
*Finite Horizon.*
For infinite horizon properties, the probability to violate safety (in the long run) is often one.
Furthermore, our learned MDP model is inherently an approximation of the real world.
Errors originating from this approximation accumulate for growing horizons. Thus, we focus on a finite horizon, such that the action values (and consequently, a policy for the avatar) carry guarantees only for the next steps.
This assumption also allows us to prune the safety-relevant MDP (see below), increasing scalability.
*Piecewise Construction.*
Computing a shield for all states in an MDP concurrently yields a large memory footprint.
To alleviate this footprint, we compute the shield states independently, in accordance with Theorem [1](#Thmtheorem1 "Theorem 1. ‣ Shield Construction. ‣ Constructing Shields for MDPs ‣ Safe Reinforcement Learning via Probabilistic Shields").
The independent computation prunes the relevant part of the MDP, as the number of states reachable within the horizon is drastically reduced.
Additionally, the independent computation allows for parallelizing the computation.
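The piecewise scheme can be sketched as follows (hypothetical names; `shield_for_state` stands in for one model-checking call on the horizon-pruned MDP around a single decision state):

```python
from concurrent.futures import ThreadPoolExecutor


def shield_piecewise(decision_states, shield_for_state, workers=4):
    # Each state's shield is computed independently (Theorem 1), so the
    # per-state model-checking calls can run in parallel; a real
    # implementation would use one process per available core.
    with ThreadPoolExecutor(max_workers=workers) as ex:
        results = ex.map(shield_for_state, decision_states)
        return dict(zip(decision_states, results))
```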
*Independent Agents.*
The explosion of state spaces stems mostly from the number of agents.
Here, an important observation is that we can consider agents independently.
For instance, the probability for the avatar to crash with one adversary is stochastically independent of crashing with the others.
Instead of determining the shield for all adversaries at once, we perform computations for each agent individually, and combine them via the inclusion-exclusion principle.
Afterwards, the shield is composed from the shields dedicated to individual adversaries.
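Under this independence assumption, the per-adversary collision probabilities can be combined as follows (a sketch with hypothetical names; for independent events the joint probability of a subset is just the product of its members):

```python
from itertools import combinations
from math import prod


def crash_probability(per_adversary):
    # Probability of colliding with at least one adversary, combining
    # independent per-adversary collision probabilities by
    # inclusion-exclusion: P(A or B) = P(A) + P(B) - P(A)P(B), etc.
    total = 0.0
    for k in range(1, len(per_adversary) + 1):
        for subset in combinations(per_adversary, k):
            total += (-1) ** (k + 1) * prod(subset)
    return total
```

For independent events this equals `1 - prod(1 - p for p in per_adversary)`, which is the cheaper formula in practice.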
*Abstractions.*
We observe that for finite horizon properties and piecewise construction, adversaries may be far away—beyond the horizon—without a chance to reach the avatar.
We do not need to consider such (positions for) adversaries, as in these states the shield will not block any actions.
*Fewer Decision States.*
Depending on the setting, there might be only a few critical situations in which the agent requires shielding to ensure safety. The shield can then be computed for these critical states only. Consequently, the agent makes shielded decisions in the adapted decision states and unshielded decisions in all other ones.
### Shielding versus Performance.
A shield which is *minimally invasive* gives the RL agent the most freedom to optimize the performance objective.
We propose two methods to alleviate invasiveness, both of which assume *domain knowledge* of the rationale behind the decision procedure.
*Iterative Weakening.*
During runtime, we may observe that the avatar is no longer making progress on the performance objective.
In that case, we weaken the shield to $\delta-\varepsilon$, allowing additional actions.
As soon as progress is made, we reset $\delta$ to its former value.
The adaptation of $\textsl{shield}_{\delta}^{s}$ to $\textsl{shield}_{\delta-\varepsilon}^{s}$ can be done on the fly, without new computations.
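A minimal sketch of this loop (hypothetical names and a simple stall detector; the paper does not prescribe how "no progress" is detected):

```python
class IterativeWeakening:
    """Weaken the shield threshold when performance stalls; reset on
    progress. The stored action values from model checking are simply
    compared against the smaller threshold, so no new computation is
    needed."""

    def __init__(self, delta, epsilon, patience=10):
        self.base, self.eps, self.patience = delta, epsilon, patience
        self.best = float("-inf")   # best episode reward seen so far
        self.stalled = 0            # episodes without improvement

    def update(self, episode_reward):
        if episode_reward > self.best:
            self.best = episode_reward
            self.stalled = 0        # progress: restore full threshold
        else:
            self.stalled += 1
        return self.base - self.eps if self.stalled >= self.patience else self.base
```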
*Adapted Specifications.*
If the goal of the decision maker is known *and* can be captured in temporal logic, we may adapt the original specification accordingly.
There are often natural trade-offs between safety and performance.
These trade-offs might be resolved via weights, but this process is often undesirable (?) and akin to reward engineering.
Instead, optimizing the conditional performance under the assumption of staying sufficiently safe (?) avoids the side effects of attaching weights to the safety specification.
Implementation and Numerical Experiments
----------------------------------------
### Set-up.
We run experiments using an Intel Core i7-4790K CPU with 16 GB of RAM using 4 cores.
We give the timing results for a single CPU. Since the shield may be computed in a multi-threaded architecture, this time can be divided by the number of cores available.
The supplementary materials, namely the source code and videos, are available online: <http://shieldrl.nilsjansen.org>.
We demonstrate the applicability of our approach by means of two case studies.
For both case studies, we learn adversary behavior in small arenas individually for each adversary.
These behavior models are applicable to any benchmark instance, as they are independent of concrete positions.
Figure 2: Scenarios and results for PAC-MAN. (a) Still from video on classic PAC-MAN. (b) Scores (average reward over training episodes) during training for classic PAC-MAN, without and with shield.
Figure 3: Scenarios and results for warehouse. (a) Still from the video on warehouse. (b) Scores (average reward over training episodes) during training on warehouse, without shield and with shields at 2, 4, and 8 crossings.
For the arcade game PAC-MAN, PM (the avatar) aims to collect *PAC-dots* in a *maze* and not get caught by *ghosts* (the adversaries).
We model various instances of the game (with different sizes) as an arena, where tokens represent the dots at each position in the maze, such that a dot is either present or collected.
The score (reward, performance) is positively affected (+10) by collecting a dot and negatively by time (each step: -1). If PM either collects all dots (+500) or is caught (-500), the game is restarted.
RL approaches exist (?), but they suffer from the fact that during the exploration phase PM is often caught by the ghosts, achieving very poor scores.
The safety specification places a lower bound on the probability of reaching states in the underlying MDP that correspond to being caught.
We also consider a warehouse floor plan with several corridors.
A similar scenario has been investigated in (?).
In the arena, nodes describe crossings, edges the corridors with shelves, and edge weights the corridor lengths.
The agents are fork-lift units picking up packages from the shelves and delivering them to the exit; tokens represent the presence of a package at its position.
The avatar is a specific (yellow) fork-lift unit that has to account for other units, the adversaries.
The performance (reward) is positively affected by loading and delivering packages (+20, respectively) and negatively by time (each step: -1).
Delivering all packages yields a large bonus (+500) and a collision leads to a large punishment (-500), both cases end the scenario.
Corridors might be too narrow for multiple (facing) units, which poses a safety-critical situation.
Most crucial is the crowded area near the exit, since all units have to deliver the packages to the exit.
Transferring the stochastic adversary behavior to any arena (without tokens) yields a concrete safety-relevant MDP.
In particular, we specify an arena with the positions of the avatar and the adversaries as well as the behavior in the high-level PRISM-language (?).
We employ a script that automatically generates arenas to enable a broad set of benchmarks.
Taking, e.g., the PAC-MAN arena from Fig. [2(a)](#Sx4.F2.sf1 "2(a) ‣ Figure 2 ‣ Set-up. ‣ Implementation and Numerical Experiments ‣ Safe Reinforcement Learning via Probabilistic Shields"), the considered MDP has roughly $10^{12}$ states (compared to $10^{50}$ for the full MDP).
For a safety-relevant MDP, we compute a $\delta$-shield (with iterative weakening) via the model checker Storm (?), using a horizon of 10 steps.
The immense size even of safety-relevant MDPs requires optimizations such as a piecewise and independent shield construction.
Moreover, a multi-threaded architecture lets us construct shields for very large examples.
In particular, we perform model checking for (many) MDPs of roughly $10^{6}$ states.
The computation time for the largest PM instance takes about 6 hours (single-threaded), while memory is not an issue due to the piecewise shield construction.
We compare RL to shielded RL on different instances.
The key comparison criterion is the performance (detailed above) during learning.
Our implementation is based on an existing PAC-MAN environment ([http://ai.berkeley.edu/project_overview.html](http://ai.berkeley.edu/project_overview.html)) using an approximate Q-learning agent (?)
with the following feature vectors:
* for PAC-MAN: (1) distance to the closest dot, (2) whether a ghost collision is imminent, and (3) whether a ghost is one step away.
* for Warehouse: (1) whether the unit has loaded or unloaded, (2) the distance to the next package and (3) to the exit, (4) whether another unit is three steps away and (5) one step away.
The results are basic reflex controllers.
The Q-learning uses the learning rate $\alpha=0.2$ and the discount factor $\gamma=0.8$ for the Q-update, and an $\epsilon$-greedy exploration policy with $\epsilon=0.05$.
One episode lasts until the game is restarted.
We describe results for the training phase of RL (300 episodes).
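The approximate Q-learning update used here can be sketched as follows (a linear value function over the feature vectors above; hypothetical function names):

```python
def q_value(weights, features):
    # Q(s, a) = w . f(s, a): linear value function over the features.
    return sum(w * f for w, f in zip(weights, features))


def q_update(weights, features, reward, next_best_q, alpha=0.2, gamma=0.8):
    # One temporal-difference update of the feature weights, with the
    # learning rate and discount factor reported above; next_best_q is
    # max_a' Q(s', a') in the successor state.
    td_error = reward + gamma * next_best_q - q_value(weights, features)
    return [w + alpha * td_error * f for w, f in zip(weights, features)]
```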
### Results.
Figures [2(a)](#Sx4.F2.sf1 "2(a) ‣ Figure 2 ‣ Set-up. ‣ Implementation and Numerical Experiments ‣ Safe Reinforcement Learning via Probabilistic Shields") and [3(a)](#Sx4.F3.sf1 "3(a) ‣ Figure 3 ‣ Set-up. ‣ Implementation and Numerical Experiments ‣ Safe Reinforcement Learning via Probabilistic Shields") show screenshots of a series of recommended videos (available in the supplementary material).
Each video compares how RL performs either shielded or unshielded on an instance of the case study.
In the shielded version, at each decision state in the underlying MDP, we indicate the risk of decisions from low to high by the colors green, orange, and red.
Consider PAC-MAN in detail:
Figure [2(b)](#Sx4.F2.sf2 "2(b) ‣ Figure 2 ‣ Set-up. ‣ Implementation and Numerical Experiments ‣ Safe Reinforcement Learning via Probabilistic Shields") depicts the scores obtained during RL.
The curves (blue, solid: unshielded; orange, dashed: shielded) show the average scores for every ten training episodes.
Table [1](#Sx4.T1 "Table 1 ‣ Results. ‣ Implementation and Numerical Experiments ‣ Safe Reinforcement Learning via Probabilistic Shields") shows results for instances in increasing size.
We list the number of model checking calls and the time to construct the shield.
We list the scores with and without shield, and the *winning rate* capturing the ratio of successfully ended episodes.
For all instances, we see a large difference in scores due to the fact that PM is often rescued by the shield.
The winning rates differ for most benchmarks, favoring shielded RL.
For three or four ghosts, a shield with a ten-step horizon cannot guide PM to avoid being encircled by the ghosts long enough to successfully end the game.
Nevertheless, the shield often saves PM, leading to superior scores.
Moreover, the shield helps learning an optimal policy much faster as fewer restarts are needed.
For the warehouse case study, we choose to vary the decision states, i.e., the positions of the avatar for which we compute a shield. We present results for shielding the 2–8 crossings closest to the exit.
Figure [3(b)](#Sx4.F3.sf2 "3(b) ‣ Figure 3 ‣ Set-up. ‣ Implementation and Numerical Experiments ‣ Safe Reinforcement Learning via Probabilistic Shields") shows the average score for the different variants, Table [2](#Sx4.T2 "Table 2 ‣ Results. ‣ Implementation and Numerical Experiments ‣ Safe Reinforcement Learning via Probabilistic Shields") summarizes average score and win rate.
Unsurprisingly, the score gets better the more states are shielded. Furthermore, we have seen that shielding even more states has only a very limited effect.
Table 1: Average scores and win rates for PM
| Size, #Ghosts | #Model Checking | Time (s) | Score w/o Shield | Score w. Shield | Win Rate w/o Shield | Win Rate w. Shield |
| --- | --- | --- | --- | --- | --- | --- |
| 9x7, 1 | 5912 | 584 | -359.6 | 535.3 | 0.04 | 0.84 |
| 17x6, 2 | 5841 | 1072 | -195.6 | 253.9 | 0.04 | 0.4 |
| 17x10, 3 | 51732 | 3681 | -220.79 | -40.52 | 0.01 | 0.07 |
| 27x25, 4 | 269426 | 19941 | -129.25 | 339.89 | 0.00 | 0.00 |
Table 2: Average scores and win rates for warehouse
| Crossings shielded | 0 | 2 | 4 | 8 |
| --- | --- | --- | --- | --- |
| Score | -186 | -27.6 | 303 | 420 |
| Win Rate | 0.16 | 0.31 | 0.59 | 0.71 |
Conclusion and Future Work
--------------------------
We developed the concept of shields for MDPs.
Utilizing probabilistic model checking, we maintained probabilistic safety measures during reinforcement learning.
We addressed inherent scalability issues and provided means to deal with the typical trade-offs between safety and performance.
Our experiments showed that we improved the state-of-the-art in safe reinforcement learning.
For future work, we will extend shields to richer models such as partially-observable MDPs.
Moreover, we will extend the applications to more arcade games and employ deep recurrent neural networks as means of decision-making (?; ?).
Another interesting direction is to explore (possibly model-free) learning of shields, instead of employing model-based model checking.
---
id: e1846e0c-e041-40b9-801c-75a634f0b0d0 · source: trentmkelly/LessWrong-43k (LessWrong)
|
Weighing Animal Worth
It's common for people who approach helping animals from a quantitative direction to need some concept of "moral weights" so they can prioritize. If you can avert one year of suffering for a chicken or ten for shrimp which should you choose? Now, moral weight is not the only consideration with questions like this, since typically the suffering involved will also be quite different, but it's still an important factor.
One of the more thorough investigations here is Rethink Priorities' moral weights series. It's really interesting work and I'd recommend reading it! Here's a selection from their bottom-line point estimates comparing animals to humans:
* Humans: 1 (by definition)
* Chickens: 3
* Carp: 11
* Bees: 14
* Shrimp: 32
If you find these surprisingly low, you're not alone: that giving a year of happy life to twelve carp might be more valuable than giving one to a human is for most people a very unintuitive claim. The authors have a good post on this, Don't Balk at Animal-friendly Results, that discusses how the assumptions behind their project make this kind of result pretty likely and argues against putting much stock in our potentially quite biased initial intuitions.
What concerns me is that I suspect people rarely get deeply interested in the moral weight of animals unless they come in with an unusually high initial intuitive view. Someone who thinks humans matter far more than animals and wants to devote their career to making the world better is much more likely to choose a career focused on people, like reducing poverty or global catastrophic risk. Even if someone came into the field with, say, the median initial view on how to weigh humans vs animals I would expect working as a junior person in a community of people who value animals highly would exert a large influence in that direction regardless of what the underlying truth. If you somehow could convince a research group, not selected for caring a lot about animals, to pursue this question in isolation, I'd pr
---
id: 064976e1-ea1d-4076-93b2-dcc2f7c5d317 · source: trentmkelly/LessWrong-43k (LessWrong)
Not Yet the Dawn
How many is too many?
How much is too much?
How do we live with the numbers? These damned numbers.
R0, R1, the case fatality rate, the hospitalization rate, the rate of ICU overcrowding, the number of infected, the number of dead, the number of bodies piling up in morgues, when does it all stop really meaning anything and just become this exercise in abstraction?
And is that what we need to do to cope with it?
How do we get up and go to work every morning in a world where
the state of California had to relax its clean air laws so they could burn a backlog of bodies?
How do we talk about The Mandalorian and the latest celebrity gossip and act like everything's fine while the equivalent of 9/11 is happening every day?
How do we manage to eat breakfast, put on our shoes and masks, and live our lives like we aren’t in the midst of what will hopefully be the largest and most traumatic event of our lives?
How do we live with it as a people? How do we live with it as people?
How are we going to deal with the fact that society values its utility more than the lives of a significant portion of the people living in it?
Will we eschew the values of liberal humanism or will we double down on them and if those two positions come into a conflict, who wins?
What will become of us after this?
After. There are so many things which will come after, because of this. But we aren’t living in After, not yet anyway.
The long night is not over, and this is not yet the dawn.
What does it mean to care about each other when the scope of each other becomes too large to comprehend?
Words are easy, wearing a mask is easy enough, but beyond that? To stare into the vast abstraction of intensive care units and overworked doctors and nurses, to understand that every death is a human with a name and face and story and do something with that knowledge other than sink into despair?
Laugh nervously and change the topic. Did you buy any stock in Gamestop? Check out this meme I found. Did you hear who go
---
id: 844cef65-695d-4397-afc9-77640621d143 · source: StampyAI/alignment-research-dataset/alignmentforum (Alignment Forum)
Conditioning, Prompts, and Fine-Tuning
*(Thanks to Evan Hubinger and Nicholas Schiefer for comments on these ideas.)*
These are some notes on the relation between conditioning language models, prompting, and fine-tuning. The key takeaways are:
1. Prompting and fine-tuning can both be used to condition language models.
2. Prompting is quite restricted in the kinds of conditionals it can achieve.
3. Fine-tuning can implement arbitrary conditionals in principle, though not in practice.
4. In practice fine-tuning can still implement more kinds of conditionals than prompting.
5. We don't understand how fine-tuning conditionals generalize, which seems dangerous.
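The idea of prompting as conditioning can be made concrete on a toy distribution (a hypothetical sketch, not from the post): conditioning on a prompt keeps only the sequences that extend it and renormalizes their probabilities.

```python
def condition_on_prefix(dist, prefix):
    # dist: maps full sequences to probabilities under the model pi(x).
    # Conditioning on a prompt (prefix) restricts to its continuations
    # and renormalizes, i.e. pi(x | prefix) = pi(x) / pi(prefix).
    mass = sum(p for s, p in dist.items() if s.startswith(prefix))
    return {s: p / mass for s, p in dist.items() if s.startswith(prefix)}
```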
Conditioning
============
We can think of a language model as specifying a probability distribution π(x).
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
A language model defines a probability distribution π(x), where x is a sequence of tokens of fixed length N (the length of the context window). We generate text by sampling sequences from π.
Sometimes we don’t want to just sample from a language model. Instead, we want to *condition* the model on some facts about the sequence x. We can write the conditioned distribution as
π(x | c(x)),

where c(x) encodes some constraints on x. For instance, c(x) might require that the first token is “Apple”, or that the 7th and 12th tokens are the same, etc.
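As a minimal sketch of what conditioning means (the toy model, vocabulary, and constraint below are invented for illustration), an easy-to-check conditional can be implemented by rejection sampling: draw sequences from π and keep only those satisfying c(x):

```python
import random

# Toy "language model": a distribution over length-3 token sequences,
# factorized as independent uniform draws from a tiny vocabulary.
VOCAB = ["Apple", "Banana", "Cherry"]

def sample_pi(rng, n=3):
    """Draw one sequence x from the toy model pi."""
    return [rng.choice(VOCAB) for _ in range(n)]

def sample_conditional(c, rng, max_tries=10_000):
    """Sample from pi(x | c(x)) by rejection: draw from pi, keep x if c(x) holds."""
    for _ in range(max_tries):
        x = sample_pi(rng)
        if c(x):
            return x
    raise RuntimeError("constraint is too rare for rejection sampling")

rng = random.Random(0)
# Condition on the first token being "Apple" (one of the example constraints).
print(sample_conditional(lambda x: x[0] == "Apple", rng))
```

This only works because the constraint is cheap to check and not too rare under π; the factorization example below is exactly a case where rejection becomes hopeless.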
Some conditions are easy, some are hard
---------------------------------------
It’s easy to sample from a language model conditioned on the first two tokens being the same, but not all conditionals are so straightforward. Suppose we condition on the sequence x beginning with the factorization of a large composite number. There exist valid sequences unambiguously satisfying the conditional, but sampling them is hard if we don't know the factorization ahead of time. So there are limits to the kinds of conditionals we can apply in practice.
Prompting
=========
A prompt is a very restricted kind of conditional where the condition is that certain tokens in x are known in advance. For instance, we might specify that the first four words are “Mary had a little”, or that the last three words are “happily ever after.”
Prompts are nice in a few ways:
* It’s easy to sample from a language model given an arbitrary prompt.
* We sort of understand what prompts do. A prompt asks the model to predict the output of a text-generation process **given that it knows the values of the fixed tokens**.
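A sketch of why arbitrary prompts are easy (the bigram "model" here is an invented stand-in for a real LLM): fixing a prefix and then continuing to sample autoregressively draws exactly from the model's distribution conditioned on that prefix:

```python
import random

# Hypothetical toy autoregressive model: next-token distributions keyed by
# the previous token (a bigram model standing in for a real LLM).
NEXT = {
    "<s>":  {"Mary": 1.0},
    "Mary": {"had": 0.9, "went": 0.1},
    "had":  {"a": 1.0},
    "a":    {"little": 0.5, "big": 0.5},
}

def sample_with_prompt(prompt, length, rng):
    """Fix the first tokens to `prompt`, then continue sampling token by
    token. This draws exactly from pi(x | x starts with prompt)."""
    x = list(prompt)
    while len(x) < length:
        dist = NEXT[x[-1]]
        tokens, weights = zip(*dist.items())
        x.append(rng.choices(tokens, weights=weights)[0])
    return x

rng = random.Random(0)
print(sample_with_prompt(["Mary", "had"], 4, rng))
```

The same trick does not work for constraints like "positive sentiment", since those are not functions of fixed token positions.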
The downside with prompting is that there are lots of conditionals we can’t turn into prompts. For instance:
* Sample text from the model that humans will rate as having positive sentiment.
* Sample text from the model that never involves violence.
* Sample text from the model that contains a valid chess game.
None of these can be expressed in terms of fixed tokens in the context window.
Fine-Tuning
===========
Instead of prompting, we can fine-tune a model, either with an explicit reward function or with Reinforcement Learning from Human Feedback (RLHF). We start with a pre-trained model, then fine-tune it to maximize either an explicit or a learned reward.
Subject to actually converging to the optimum distribution, fine-tuning with a KL penalty is a [form](https://www.alignmentforum.org/posts/eoHbneGvqDu25Hasc/rl-with-kl-penalties-is-better-seen-as-bayesian-inference) of variational Bayesian inference. The result is a variational approximation of the Bayesian update on human feedback, using the pre-trained model as a prior. That is, we obtain a new model which produces the probability distribution
π′(x) ∝ π(x)L(x),

where the likelihood is L(x) = e^{r(x)/β}, β is the KL penalty weight, and r(x) is the reward for sequence x. A more formal discussion was given by [Korbak, Perez & Buckley](https://arxiv.org/pdf/2205.11275.pdf).
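As a toy numerical illustration of this update (the numbers are invented), we can tilt a small discrete prior by e^{r(x)/β} and renormalize. A small β lets the reward dominate, while a large β keeps the fine-tuned distribution close to the prior:

```python
import math

pi = {"a": 0.5, "b": 0.3, "c": 0.2}   # pre-trained prior over "sequences"
r  = {"a": 0.0, "b": 1.0, "c": 2.0}   # reward assigned to each sequence

def fine_tuned(pi, r, beta):
    """Optimum of KL-penalized fine-tuning: pi'(x) ∝ pi(x) * exp(r(x) / beta)."""
    weights = {x: p * math.exp(r[x] / beta) for x, p in pi.items()}
    z = sum(weights.values())
    return {x: w / z for x, w in weights.items()}

sharp = fine_tuned(pi, r, beta=0.1)    # weak penalty: reward dominates
weak  = fine_tuned(pi, r, beta=100.0)  # strong penalty: stays near the prior
print(round(sharp["c"], 3), round(weak["c"], 3))  # -> 1.0 0.203
```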
Fine-tuning can approximate prompts
-----------------------------------
Fine-tuning can approximate any conditional a prompt can achieve. To see this, note that every prompt consists of setting the tokens at some positions i ∈ S to values y_i, where the indices in S form a subset of the context window. A prompt in this form is approximated by fine-tuning on the reward function

r(x) ≡ λ ∑_{i∈S} δ_{x_i, y_i},

where δ_{x_i, y_i} = 1 if x_i = y_i and zero otherwise. In the limit of large λ, fine-tuning on this reward function amounts to providing enormous evidence in favor of the desired token values, which is equivalent to conditioning with a prompt that directly fixes those tokens.
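To sketch the large-λ limit concretely (toy vocabulary and prior invented; β set to 1), tilting a uniform prior over length-2 sequences by e^{r(x)/β} with this prompt reward pushes essentially all mass onto sequences that match the prompted position:

```python
import math
from itertools import product

VOCAB = ["x", "y"]
pi = {s: 0.25 for s in product(VOCAB, repeat=2)}  # uniform prior, length 2

def prompt_reward(x, fixed, lam):
    """r(x) = lam * number of prompted positions i where x_i == y_i."""
    return lam * sum(x[i] == y for i, y in fixed.items())

def tilt(pi, fixed, lam, beta=1.0):
    """Tilted distribution pi'(x) ∝ pi(x) * exp(r(x) / beta)."""
    w = {x: p * math.exp(prompt_reward(x, fixed, lam) / beta) for x, p in pi.items()}
    z = sum(w.values())
    return {x: v / z for x, v in w.items()}

# "Prompt": fix position 0 to "x". With lam = 50 each mismatching
# sequence keeps only ~e^-50 of the mass of a matching one.
post = tilt(pi, fixed={0: "x"}, lam=50.0)
print(round(post[("x", "x")] + post[("x", "y")], 6))  # -> 1.0
```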
Fine-tuning can approximate any conditional
-------------------------------------------
With appropriate choices of the reward r(x) we can achieve any shift in the probability distribution that doesn't expand the support of π(x), and so in principle fine-tuning can approximate any conditional.
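One way to see this (a toy construction, not from the original post): for any target distribution whose support lies inside the support of π, the reward r(x) = β·log(target(x)/π(x)) makes the tilted optimum π(x)e^{r(x)/β} exactly proportional to the target:

```python
import math

pi     = {"a": 0.5, "b": 0.3, "c": 0.2}  # prior (invented numbers)
target = {"a": 0.1, "b": 0.1, "c": 0.8}  # any distribution on supp(pi)
beta   = 2.0

# Engineer the reward so that pi(x) * exp(r(x) / beta) ∝ target(x).
r = {x: beta * math.log(target[x] / pi[x]) for x in pi}

w = {x: pi[x] * math.exp(r[x] / beta) for x in pi}
z = sum(w.values())
tilted = {x: v / z for x, v in w.items()}
print({x: round(p, 6) for x, p in tilted.items()})  # matches `target`
```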
Some conditions are easy, some are hard
---------------------------------------
In practice some conditionals are hard to achieve because they require an unrealistically large number of samples for fine-tuning to converge to the full Bayesian update. For instance, it is hard to fine-tune on the reward corresponding to “the sequence x begins with a factorization of a large composite number”, because it takes many tries to find an x satisfying the conditional.
Still, there are many kinds of conditionals that fine-tuning can access in practice. For instance, RLHF can condition on positive human sentiment rating, or on not containing malicious plans.
More generally, fine-tuning seems to be good for conditioning on properties that are:
1. Easy to identify/evaluate.
2. Not too rare under the initial distribution π(x) (or some pre-conditioned version of this, e.g. via prompts).
Generalization Concerns
-----------------------
Because fine-tuning with a KL penalty implements Bayesian updates, every reward function describes a conditional of the form “condition on the following sequences being more/less likely according to their reward”. Unfortunately we may not understand at a deeper level what this conditional means.
In particular, it is not obvious how this conditional generalizes. Consider RLHF with a sentiment reward. There are multiple ways a model could interpret the implied conditional:
1. Positive-sentiment text is more likely, so humans are kinder in the world than the pre-training distribution suggested.
2. Positive-sentiment text is more likely, so there are legal restrictions on the kinds of text that are recorded.
These two interpretations generalize very differently. For instance (1) could increase the probability of text describing humans helping each other while (2) could decrease that probability by implying a world with little social trust.
This sort of generalization ambiguity seems really dangerous, because we could end up with very different behavior from what we intended in specifying the reward function or providing feedback.
Summary
=======
My key takeaways are:
1. Prompting and fine-tuning can both be used to condition language models.
2. Prompting is quite restricted in the kinds of conditionals it can achieve.
3. Fine-tuning can implement arbitrary conditionals in principle, though not all of them in practice.
4. In practice fine-tuning can still implement more kinds of conditionals than prompting.
5. We don't understand how fine-tuning conditionals generalize, which seems dangerous.
(1-4) suggest that we will need some sort of fine-tuning/RLHF to achieve the kinds of complex conditionals that are useful in practice/for [alignment schemes](https://www.alignmentforum.org/posts/nXeLPcT9uhfG3TMPS/conditioning-generative-models). If so, (5) says we should try to figure out more about how fine-tuning conditionals generalize, because that's where a lot of the danger lies.
Open-source LLMs may prove Bostrom's vulnerable world hypothesis
In short, Nick Bostrom's [vulnerable world hypothesis](https://nickbostrom.com/papers/vulnerable.pdf) states that humanity may in the future invent a technology that is very destructive, very cheap to make, and easy to deploy. As a thought experiment, he uses "easier nukes": nuclear weapons that could be made not from plutonium, as in real life, but from a battery and two pieces of glass. Bostrom says that if everyone could make nuclear weapons in their own home, civilization would be destroyed by default, because terrorists, malcontents, and "folk who just want to see what would happen" would blow up most cities.
I see similarities between Bostrom's hypothesis and the way powerful LLMs have recently been open-sourced. It did not take long after the release of ChatGPT and GPT-4 for several "open-source alternatives" to appear on the internet, some of which you can even download to your own computer for offline use. And then we had ChaosGPT, whose creator ordered such a model to "destroy humanity". In my mind, he was one of the "folks who just wanted to see what would happen".
Recently I have been thinking that, in the long term, the biggest challenge in AI safety may be the wide availability of future AGI systems. If we can create a safely aligned AGI, what would prevent some people from creating open-source alternative AGIs that are free from safety constraints, and more powerful because of it? Advanced technology generally can't stay secret forever. And then we would have many future ChaosGPTs who "just want to see what would happen" and who could tell such a system to destroy humanity.
The old analogy used in AI safety communities, about making a wish to a genie and "getting what you asked for but not what you wanted", would no longer be relevant in this scenario. Instead, future ChaosGPTs would get exactly what they asked for, and perhaps even what they truly wanted.
Does the community here also think that this is a reasonable concern? I would like to know in the comments, and maybe start a discussion about future priorities in AI safety. Because if, in the far future, practically everyone could use an open-sourced AGI system and order it to destroy humanity, it would probably not be possible to completely prevent such malicious applications. Instead, the focus should be on increasing societal resilience against deliberate AI attacks and takeover attempts. Perhaps that could be attempted through increased cybersecurity and through finding ways to counter destructive technologies an AGI might employ, such as nanotechnology and genetically engineered pandemics. An aligned AGI, which hopefully would be created before the unaligned ones, would certainly help.
200 COP in MI: Interpreting Algorithmic Problems
This is the fourth post in a sequence called 200 Concrete Open Problems in Mechanistic Interpretability. Start here, then read in any order. If you want to learn the basics before you think about open problems, check out my post on getting started. Look up jargon in my Mechanistic Interpretability Explainer
Motivation
Motivating paper: A Mechanistic Interpretability Analysis of Grokking
When models are trained on synthetic, algorithmic tasks, they often learn to do some clean, interpretable computation inside. Choosing a suitable task and trying to reverse engineer a model can be a rich area of interesting circuits to interpret! In some sense, this is interpretability on easy mode - the model is normally trained on a single task (unlike language models, which need to learn everything about language!), we know the exact ground truth about the data and optimal solution, and the models are tiny. So why care?
I consider my work on grokking to be an interesting case study of this work going well. Grokking (shown below) is a mysterious phenomenon where, when small models are trained on algorithmic tasks (e.g. modular addition or modular division), they initially memorise the training data. But when training continues on that data for a really long time, the model suddenly(ish) figures out how to generalise!
In my work, I simplified their setup even further, by training a 1 Layer transformer (with no LayerNorm or biases) to do modular addition and reverse engineered the weights to understand what was going on. And it turned out to be doing a funky trig-based algorithm (shown below), where the numbers are converted to frequencies with a memorised Discrete Fourier Transform, added using trig identities, and converted back to the answer! Using this, we looked inside the model and identified that despite seeming to have plateaued, in the period between memorising and "grokking", the model is actually slowly forming the circuit that does generalise. But so long as the m
How can I use a background in the social sciences to help with AI alignment?
Nora Ammann, in the post [AI alignment as “navigating the space of intelligent behaviour”](https://www.alignmentforum.org/posts/FuToH2KHxKmJLGk2B/ai-alignment-as-navigating-the-space-of-intelligent), describes “three epistemic strategies for making progress on the alignment problem: 1) tinkering, 2) idealization and 3) intelligence-in-the-wild”. Research in the social sciences, biology, [philosophy](https://philosophy.safe.ai/about-us), and other fields can inform alignment efforts by shedding light on “intelligence-in-the-wild”. (As illustrated by the examples below, such research often still involves mathematics as well.)
Some examples of approaches, taken from the post:
- Steve Byrnes’s research on [brain-like AGI safety](https://www.alignmentforum.org/s/HzcM2dkCq7fwXBej8) asks how we can align artificial general intelligence if it’s built on the same principles as the human brain, drawing analogies with neuroscience.
- John Wentworth studies [agent-like systems in nature](https://www.alignmentforum.org/posts/9pZtvjegYKBALFnLk/characterizing-real-world-agents-as-a-research-meta-strategy) to understand agency in general.
- Andrew Critch’s research on [multipolar takeoffs](https://www.alignmentforum.org/posts/LpM3EAakwYdS6aRKf/what-multipolar-failure-looks-like-and-robust-agent-agnostic) and “robust agent-agnostic processes” relates to concepts from sociology.
- Discussions of [mesa-optimization](https://arxiv.org/abs/1906.01820) use human evolution as a source of analogies.
Principles of Intelligent Behavior in Biological and Social Systems (PIBBSS) is a group that runs a [summer research fellowship](https://www.pibbss.ai/fellowship) and has [recommendations for books and videos](https://www.pibbss.ai/resources).
Other such research agendas exist. You can consider these as examples of what alignment-relevant research with varying amounts of math and computer science could look like:
- [An Open Agency Architecture for Safe Transformative AI](https://www.alignmentforum.org/posts/pKSmEkSQJsCSTK6nH/an-open-agency-architecture-for-safe-transformative-ai) is an AI alignment paradigm aimed at ending the acute risk period without creating worse risks.
- [Learning Normativity: A Research Agenda](https://www.alignmentforum.org/posts/2JGu9yxiJkoGdQR4s/learning-normativity-a-research-agenda) aims to develop ways for agents to learn norms like languages and values in the absence of perfect feedback.
- [What Should AI Owe To Us? Accountable and Aligned AI Systems via Contractualist AI Alignment](https://www.alignmentforum.org/posts/Cty2rSMut483QgBQ2/what-should-ai-owe-to-us-accountable-and-aligned-ai-systems) tries to ground AI alignment in pluralist and contractualist norms.
- [Political Economy of Reinforcement Learning (PERLS)](https://perls-group.github.io/previous-workshop-archive/2021-neurips/2021-neurips.html) is a workshop studying the societal implications of reinforcement learning systems.
- The [Alignment of Complex Systems Research Group](https://www.alignmentforum.org/posts/H5iGhDhQBtoDpCBZ2/announcing-the-alignment-of-complex-systems-research-group) studies connections between AI alignment and complex systems theory.
See also the EA Forum post [Social scientists interested in AI safety should consider doing direct technical AI safety research, (possibly meta-research), or governance, support roles, or community building instead.](https://forum.effectivealtruism.org/posts/WHDb9r9yMFetG7oz5/social-scientists-interested-in-ai-safety-should-consider)
Information security considerations for AI and the long term future
Summary
This post is authored by Jeffrey Ladish, who works on the security team at Anthropic, and Lennart Heim, who works on AI Governance with GovAI (more about us at the end). The views in the post are our own and do not speak for Anthropic or GovAI. This post follows up on Claire Zabel and Luke Muehlhauser’s 2019 post, Information security careers for GCR reduction.
We’d like to provide a brief overview on:
1. How information security might impact the long term future
2. Why we’d like the community to prioritize information security
In a following post, we will explore:
1. How you could orient your career toward working on security
Tl;dr:
* New technologies under development, most notably artificial general intelligence (AGI), could pose an existential threat to humanity. We expect significant competitive pressure around the development of AGI, including a significant amount of interest from state actors. As such, there is a large risk that advanced threat actors will hack organizations — that either develop AGI, provide critical supplies to AGI companies, or possess strategically relevant information — to gain a competitive edge in AGI development. Limiting the ability of advanced threat actors to compromise organizations working on AGI development and their suppliers could reduce existential risk by decreasing competitive pressures for AGI orgs and making it harder for incautious or uncooperative actors to develop AGI systems.
What is the relevance of information security to the long term future?
The bulk of existential risk likely stems from technologies humans can develop. Among candidate technologies, we think that AGI, and to a lesser extent biotechnology, are most likely to cause human extinction. Among technologies that pose an existential threat, AGI is unique in that it has the potential to permanently shift the risk landscape and enable a stable future without significant risks of extinction or other permanent disasters. While experts in th
Hypothetical - Moon Station Government
You are now in control of a habitat on the moon. It has no ties to any government; its creation was funded by a wealthy philanthropist who just wants people to emigrate from Earth. The cost of doing so is within the reach of a middle-class family if they sell their home; you can therefore expect a decent number of immigrants.
What sort of government do you establish? How do you go about ruling so that your new settlement on the moon will survive and thrive?
Psychologist making pseudo-claim that recent works "compromise the Bayesian point of view"
I have recently been corresponding with a friend who studies psychology regarding human cognition and the best underlying models for understanding it. His argument, summarized very briefly, is given by this quote:
> Lastly, there has been a huge amount of research over the last two decades that shows human reasoning is 1) entirely constituted by emotion, and that it is 2) mostly unconscious and therefore out of our control. A lot of this research has seriously compromised the Bayesian point of view. I am referring to work done by Antonio Damasio, who demonstrated the essential role emotion plays in decision making (Descartes' Error), Timothy Wilson, who demonstrated the vital role of the unconscious (Strangers to Ourselves), and Jonathan Haidt, who demonstrated how moral reasoning is dictated by intuition and emotion (The Emotional Dog and its Rational Tail). I could go on and on here. I assume that you are familiar with this stuff. I'd just like to know how you who respond to this work from the point of view of your studies (in particular, those two points). I don't mean to get in a tit for tat debate here, just want the other side of the story.
I am having trouble synthesizing a response that captures the Bayesian point of view (and is sufficiently backed up by sources so that it will be useful for my friend rather than just gainsaying of the argument) because I am mostly a decision theory / probability person. Are these works of psychology and neuroscience really illustrating that human emotion governs decision making? What are some good neuroscience papers to read that deal with this, and how do Bayesians respond? It may be that everything he mentions above is a correct assessment (I don't know and don't have enough time to read the books right now), but that it is irrelevant if you want to make good decisions rather than just accept the types of decisions we already make.
Numberwang: LLMs Doing Autonomous Research, and a Call for Input
Summary
Can LLMs science? The answer to this question can tell us important things about timelines to AGI. In this small pilot experiment, we test frontier LLMs on their ability to perform a minimal version of scientific research, where they must discover a hidden rule about lists of integers by iteratively generating and testing hypotheses. Results are ambiguous: they're mostly pretty bad at it but top systems show apparent signs of life. We're working on a larger, more rigorous experiment, and we really want your input.
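To make the task concrete, here is a hypothetical sketch of such a hypothesize-and-test loop (the hidden rule, the candidate hypotheses, and the harness are all invented for illustration; in the real experiment an LLM proposes and refines the hypotheses):

```python
import random

def hidden_rule(xs):
    """The rule to be discovered (invented example): the list sums to an even number."""
    return sum(xs) % 2 == 0

# Stand-ins for model-generated hypotheses.
CANDIDATES = [
    ("all positive", lambda xs: all(v > 0 for v in xs)),
    ("sum is even",  lambda xs: sum(xs) % 2 == 0),
    ("is sorted",    lambda xs: xs == sorted(xs)),
]

def run_experiment(n_tests=200, seed=0):
    """Score each hypothesis by agreement with the hidden rule on random
    test lists, and return the best-scoring hypothesis's name."""
    rng = random.Random(seed)
    tests = [[rng.randint(-9, 9) for _ in range(4)] for _ in range(n_tests)]
    scores = {name: sum(h(t) == hidden_rule(t) for t in tests)
              for name, h in CANDIDATES}
    return max(scores, key=scores.get)

print(run_experiment())  # -> sum is even
```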
Structure
In this post we:
* Describe an experiment on general reasoning and scientific research ability in LLMs.
* Describe the main research project for which this is a pilot project.
* Ask for your predictions on the outcome of the main project and what you believe it will say about general reasoning in LLMs.
This is followed by appendices with more detail (eg related work, limitations) but we've kept the main body as short and direct as possible.
Introduction
Over the past six months we have been trying to better understand the degree to which LLMs are capable of general reasoning[1] (in LLM Generality is a Timeline Crux, and LLMs Look Increasingly Like General Reasoners). In short: researchers, including AI safety researchers, have widely differing positions on whether (and to what degree) LLMs are capable of the same sort of accurate and broadly applicable reasoning ability that humans are. At one extreme is the stochastic parrot hypothesis that "no actual language understanding is taking place in [LLMs]" and any apparent signs of reasoning are actually a kind of cheating. At the other extreme is the view that LLMs are already fully capable of all the same types of reasoning as humans are and just need a bit more scaling.
An important negative consequence of this disagreement is that the AI safety community's limited resources are spread across a range of timelines, in a way that impacts the scaling labs much less. The best use of
Intermittent Distillations #3
Mundane solutions to exotic problems (Paul Christiano)
======================================================
[Mundane solutions to exotic problems](https://www.alignmentforum.org/posts/d5m3G3ov5phZu7FX3/mundane-solutions-to-exotic-problems)
Summary
-------
Thinking about AI safety often leads to considering exotic problems: models purposefully altering their gradients, agents hiding their capabilities to defect when an opportunity arises, or humans being vulnerable to side-channel attacks. These exotic problems might seem like they require exotic solutions. However, the author points out that the origin of these exotic problems is the models themselves having exotic capabilities. If one is able to train a model to use whatever capabilities it has for a good purpose, then if the model gains exotic capabilities, it'll also be able to use those to avoid exotic problems.
As an analogy, if one hires a mercenary to help win a war, one might worry that the mercenary develops some weapon that means they are no longer motivated by the employer's money. Since the mercenary has different goals than their employer, the employer must worry about the incentive structure they set up being broken. However, if one has a soldier who is fundamentally loyal, one is not at all worried about this soldier developing a powerful weapon, since their loyalty ensures they'll use this new exotic capability in service of the employer's goals. If it turns out that the weapon requires special containment procedures, the soldier's loyalty will ensure that they'll use their weapons expertise to help contain it.
Opinion
-------
This framing on capabilities redirecting is similar to framings provided in [The strategy-stealing assumption](https://ai-alignment.com/the-strategy-stealing-assumption-a26b8b1ed334) and [Universality Unwrapped](https://www.alignmentforum.org/posts/farherQcqFQXqRcvv/universality-unwrapped). Empirically, it's taken me a while to understand the author's framing of "build an overseer that 'knows' everything the model knows" as a sufficient solution to the alignment problem, but I think it makes sense to me now.
However, I still don't really understand why the author thinks this is a tractable problem. To be fair, I'm not sure why people think value learning is tractable either. I'd blame this on my lack of understanding more than anything.
Elsewhere, the author has said their goal is to provide a clean solution to the alignment problem. I think this post gives intuition for why the messiness that sometimes appears at the surface of AI safety problems might be solvable with some deep solution.
Low-stakes alignment (Paul Christiano)
======================================
[Low-stakes alignment](https://www.alignmentforum.org/posts/TPan9sQFuPP6jgEJo/low-stakes-alignment)
Summary
-------
A model is operating in a *low-stakes* setting if the potential negative impact of any of its decisions is bounded at some pretty low amount. In particular, this implies that we only care about the long-run average behavior of the model. For example, a standard concern in AI safety is that the model seizes control of its reward channel. If it were possible to do this quickly, the model would have taken an action with large negative impact, so it wouldn't be in a low-stakes setting.
In particular, one is not concerned with distributional shift in low-stakes settings. Since each particular decision only has potentially small negative impact, we can simply train our model in an online setting, suffering from bad performance for a while until the model eventually learns the new distribution. If the individual decisions don't matter, then the overall cost of this transition period is low.
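As a toy illustration of why bounded per-step losses make this transition period cheap (my own sketch with made-up numbers, not from the post), consider an online learner tracking a target that shifts partway through:

```python
import random

random.seed(0)

def run(steps=2000, shift_at=1000, lr=0.05, loss_cap=1.0):
    """Online mean-estimation with a distribution shift halfway through.

    Each step's loss is capped at `loss_cap` (the "low-stakes" assumption),
    so the total cost of re-adapting after the shift stays small.
    """
    estimate = 0.0
    losses = []
    for t in range(steps):
        target = 0.0 if t < shift_at else 5.0  # distribution shift
        sample = target + random.gauss(0, 0.1)
        loss = min((estimate - sample) ** 2, loss_cap)  # bounded per-step cost
        losses.append(loss)
        estimate += lr * (sample - estimate)  # online update
    return losses

losses = run()
# The total cost of the post-shift adaptation period is bounded by
# loss_cap times the number of steps needed to re-adapt, which is
# small relative to the whole run.
print(sum(losses[1000:]), sum(losses) / len(losses))
```

The learner suffers the capped loss for a few dozen steps after the shift, then recovers; no single step, and no short window of steps, can be catastrophic.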
Therefore, in low-stakes settings, the primary concern is that of outer alignment. If one has an objective that is both easy to optimize and will produce good behavior in the limit of optimization, training the model in a low-stakes setting will produce an aligned model without catastrophe (assuming the model is powerful enough to learn the objective). To see why: an outer-aligned objective means the model is aligned in the limit, and since the setting is low-stakes, the potential negative impact of the model during training falls short of catastrophe.
The author claims that one can separate alignment roughly into a solution that works in this low-stakes setting, and a way of making settings actually low-stakes in practice, i.e., ensure catastrophes are handled appropriately.
One might also think that the low-stakes setting is not where the primary danger comes from, so that providing a solution to only the low-stakes setting does not represent much progress. The author claims that the low-stakes setting contains a lot of risk in worlds where humans don't understand how models are making decisions. For example, humans might not understand the dynamics that allow Earth to sustain life, and so those dynamics might get gradually eroded over the course of thousands of decisions.
Finally, the author formalizes the argument for why assuming low-stakes + outer alignment is sufficient to solve the alignment problem, which I will not summarize.
Opinion
-------
I'm generally a fan of breaking off concrete subproblems to work on. I also think that the author's breakdown of the alignment problem is likely to be better than my own. However, I'm slightly skeptical that the low-stakes setting is a good way of slicing up the space. In particular, I have some sense that in order to ensure the setting is actually low-stakes, the objective will need to include [mechanistic incentives](https://www.alignmentforum.org/posts/BKM8uQS6QdJPZLqCr/towards-a-mechanistic-understanding-of-corrigibility), which has implications for the regret bounds one wants to employ.
I also think that in this breakdown, ensuring that the setting is low-stakes is the harder of the two problems. I think that if we have a method to ensure that a model never produces a catastrophe, we can probably add average-case guarantees without too much additional difficulty. As an analogy, it feels like we've split the problem of making a self-driving car into "goes where you want it to go" and "doesn't crash", where "doesn't crash" is clearly the harder of the two problems. Admittedly, the point where this intuition likely goes awry is competitiveness concerns. More specifically, there may be objectives that are easier to optimize for but that produce slow-moving disaster, as illustrated in [Another (outer) alignment failure story](https://www.lesswrong.com/posts/AyNHoTWWAJ5eb99ji/another-outer-alignment-failure-story).
The counter-argument to my position is that figuring out how to make our models not do bad things over a long period of time is going to provide insight into how to make models not do really bad things in a short amount of time. I sort of buy this for some ways of going about the low-stakes setting.
Updating the Lottery Ticket Hypothesis (John Wentworth)
=======================================================
[Updating the Lottery Ticket Hypothesis](https://www.alignmentforum.org/posts/i9p5KWNWcthccsxqm/updating-the-lottery-ticket-hypothesis)
Summary
-------
Suppose you have some neural network that uses parameters θ to map x to y. We can represent this as y = f(x, θ), and our goal is to find θ to make this equation true. If the network is initialized with θ₀, SGD finds some Δθ such that y = f(x, θ₀ + Δθ). Taking a linear approximation, the right-hand side of this equation is approximately equal to f(x, θ₀) + Δθ · (df/dθ)(x, θ₀). We call this the parameter tangent space.
Theoretically, you would expect f(x, θ₀ + Δθ) to be more expressive than f(x, θ₀) + Δθ · (df/dθ)(x, θ₀). Empirically, this is not the case: the linear approximation works quite well.
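The linear approximation can be checked numerically on a toy one-unit "network" (my own sketch; the parameter values are arbitrary):

```python
import math

def f(x, w, b):
    """A tiny nonlinear 'network' with parameters theta = (w, b)."""
    return math.tanh(w * x + b)

def grad_f(x, w, b, eps=1e-6):
    """Numerical gradient of f with respect to (w, b)."""
    dw = (f(x, w + eps, b) - f(x, w - eps, b)) / (2 * eps)
    db = (f(x, w, b + eps) - f(x, w, b - eps)) / (2 * eps)
    return dw, db

x, w0, b0 = 0.7, 0.3, -0.1   # initialization theta_0
dw, db = 0.01, -0.02         # a small SGD-sized update delta-theta

exact = f(x, w0 + dw, b0 + db)
# First-order Taylor expansion: the "parameter tangent space" prediction.
gw, gb = grad_f(x, w0, b0)
linear = f(x, w0, b0) + dw * gw + db * gb

print(abs(exact - linear))  # second order in the update size, so tiny
```

For SGD-sized updates the gap between the exact function and its tangent-space approximation is second order in the update, which is the sense in which the linearization "works quite well" empirically.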
The lottery ticket hypothesis is another theory of how neural networks learn: it claims that at initialization, there already exists a subnetwork that can solve the task. During training, that particular subnetwork gets reinforced and the other subnetworks get dampened.
The author connects the parameter tangent space to the lottery ticket hypothesis by claiming that, since the linear approximation is basically the same as the full space in practice, SGD is really searching over the tangent space instead of the full space. Thus the tangent space is our set of "lottery tickets."
Opinion
-------
The original lottery ticket hypothesis claims that the lottery tickets are subnetworks present at initialization. The author objects to this claim because modern neural networks aren't big enough to contain, e.g., ready-made dog detectors at initialization. So the true lottery ticket space must be bigger than the set of subnetworks at initialization.
If Mingard et al. are correct that what SGD is doing is basically randomly sampling a high-performing neural network from the tangent space, then calling the tangent space the "set of lottery tickets" seems like a strict upgrade to the mental model provided by the lottery ticket hypothesis.
However, I remain pretty skeptical of the lottery ticket hypothesis as capturing something fundamental about how neural networks learn. In the simplest version, you can take a Taylor expansion of some exponential function, then identify the "subnetwork" of the first few terms and claim that it has approximately the same accuracy. I would have expected an extremely skewed distribution of "how much does this neuron matter" in the first place, which suggests that you ought to be able to find high-performing subnetworks for basically everything.
I'm currently treating the phenomenon as a curious bit of evidence: we can find subnetworks that we can train to basically the same performance as the original network; what's up with that?
Formal Solution to the Inner Alignment Problem (Michael Cohen)
==============================================================
[Formal Solution to the Inner Alignment Problem](https://www.alignmentforum.org/posts/CnruhwFGQBThvgJiX/formal-solution-to-the-inner-alignment-problem)
Summary
-------
(Thanks to Beth Barnes and William Saunders for valuable discussion. I also found comments by Paul Christiano and Evan Hubinger interesting.)
Suppose that you wanted to do imitation learning. Naively, you could just train an imitative model until it reached zero loss, then deploy your imitation. However, this provides no guards against deceptively aligned mesa-optimizers.
Instead, what you could do is take a weighted mixture of different models. If some of the models are deceptively aligned mesa-optimizers, their influence is limited to their proportional weight in the mixture, so the imitation given by the mixture might be less catastrophic. However, this process only gives you as much alignment as the average model is aligned. If the average model is deceptive, you're out of luck.
What if we wanted the mixture to be as aligned as the most aligned model in the set? Since we don't know which of the models is most aligned, our solution is to not do anything unless *all* of the models approve of the action. Translating to the probabilistic imitative setting, this means that we create a new distribution by taking the minimum over all of our models. In other words, we take an action with some probability if *all* of the models give that action at least that probability.
The resulting "min distribution" will not sum to one. The solution to this is to get additional information from the target-of-imitation with the remaining probability. This sampling method has the nice property that you will sample from the target-of-imitation more if the models are in large disagreement (since the min-probs will tend to be low and the sum will be much less than 1), and sample from the target-of-imitation less if the models tend to agree.
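The min-distribution sampling step can be sketched as follows (a toy illustration with hypothetical model distributions; the paper's actual construction has more moving parts):

```python
import random

random.seed(1)

def sample_action(model_probs, query_demonstrator):
    """Sample from the pointwise min over models' distributions,
    deferring to the demonstrator with the leftover probability mass."""
    actions = model_probs[0].keys()
    min_probs = {a: min(p[a] for p in model_probs) for a in actions}
    r = random.random()
    for a, p in min_probs.items():
        if r < p:
            return a
        r -= p
    # Leftover mass (where models disagree): ask the demonstrator.
    return query_demonstrator()

# Two models that mostly agree on action "a":
models = [{"a": 0.9, "b": 0.1}, {"a": 0.8, "b": 0.2}]
# The min-distribution is {"a": 0.8, "b": 0.1}, so the demonstrator
# is queried with the remaining probability 0.1.
action = sample_action(models, query_demonstrator=lambda: "a")
print(action)
```

Note how the demonstrator's query probability is exactly the mass the models disagree over: identical models never query, while strongly disagreeing models almost always defer.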
This setup has as a free parameter the number of models over which you're taking the minimum. The authors propose that the set is the top-k models sorted by posterior probability after doing Bayesian updates, with the models being rebalanced every time a query is made to the target-of-imitation.
*Note that I didn't read the paper, only the blog posts + talked to some people about it.*
Opinion
-------
I think it's a bit strong to call this a "solution to the inner alignment problem." I think it makes some potentially interesting progress, but doesn't solve the problem.
In particular, I think the training processes we're likely to use for different models are going to produce results that are highly correlated. If all (most?) models in your top-k are deceptive, then either you're going to take catastrophic actions or have to query the target-of-imitation an extremely large number of times.
To the extent that your training processes produce uncorrelated models, they're probably pretty different training processes. Getting ten models over which to take the top-k would then require something like 10x the compute, which seems prohibitive.
Trying to answer the question of "how can I create 10 uncorrelated models that fit the data with basically the same amount of compute as one model?" might be fruitful. I vaguely suspect that you might be able to get N uncorrelated models with log(N) compute if you were particularly clever. Perhaps an easier question to start with is how to get uncorrelated models with SGD at all.
I am generally interested in the technique of "top-k + online update" as a potentially fairly general way to turn normal training setups into something like online setups that get a bit of marginal safety. This trick is similar to the MAP trick discussed [here](https://markxu.com/itermittent-distallations-1#asymptotically-unambitious-artificial-general-intelligence-michael-cohen), which is not surprising given both results have the same author.
Report on Semi-informative Priors (Tom Davidson)
================================================
[Report on Semi-informative Priors](https://www.openphilanthropy.org/blog/report-semi-informative-priors)
Summary
-------
How can we predict the probability of AGI happening by a certain year? One approach is to simply look at the evidence and give a guess, but this is vulnerable to various cognitive biases. A potentially better approach is to reason about various methods of making predictions *in general* and make predictions using those methods. This post discusses a particular way of making predictions that balances various considerations about how much one ought to update on evidence.
One way of making predictions is using the *outside-view*, which involves trying to find appropriate reference classes and extrapolating. Another way of making predictions is using the *inside-view*, which involves considering information particular to your case. These are not binary categories – instead, you should imagine a sliding scale of "how much evidence particular to the situation should I use?", with the outside-view on one end and the inside-view on the other.
In practice, one way of making a prediction using the outside-view is to employ an *uninformative prior*, a starting point of making a prediction that assumes minimal information about the situation at hand. One can then update the prior with information that one has about the specifics at hand. The question of how strongly one updates determines where one falls on the inside/outside view spectrum.
In particular, a common uninformative prior is the Laplace prior, which assumes that the event you care about has a fixed probability of occurring per trial. The Laplace prior corresponds to a uniform prior over all possible probabilities p of that event occurring per trial. Upon observing the event happen in M out of N trials, the Laplace prior gives (M + 1)/(N + 2) as its "best guess" for the value of p. One can think of this as just taking the empirical frequency after starting with the assumption that the event happened in one trial and didn't happen in one other trial.
However, this approach is unsatisfactory. In particular, the Laplace prior predicts that the first trial of an event is likely to happen with 1/2 chance, which seems absurdly high for the chance that AGI gets created in the first year.
The modification the author makes is to make an independent guess at the *first trial probability*, which aims to answer the question "before humanity had started trying to make AGI, what probability would you have given to them succeeding in the first trial?" where "trial" can vary, but is typically one year. In essence, this modifies the Laplace prior by replacing the assumption of one failed trial with an estimate as to a more reasonable number of failed trials. For example, since creating AGI seems like a very hard thing to do, we might assume that there had been many failures instead of just one.
To estimate the first trial probability, the author considers multiple reference classes, the most important of which is "highly ambitious but feasible technology that a serious STEM field is trying to develop", which includes technologies like heavier-than-air flight, DNA-sequencing, and atomic energy. The author concludes that the first trial probability is likely somewhere between 1/100 and 1/1000, with a point estimate of around 1/300.
To finish the framework, we need to answer two more questions: "what's a trial?" and "when did trials start happening?"
The former question is answered with "a year", "a 'year' of researcher growth" and "a 'year' of compute increase." The latter question is generally answered with "1956", which is the year of the Dartmouth Summer Research Project on Artificial Intelligence, widely considered to be the founding event of AI as a field.
If one assumes a first trial probability of 1/300, a trial time of one year, and a start time of 1956, this framework yields a probability of AGI by 2036 of 4%. The post includes many tables showing which assumptions yield which results. [This tool](https://aipriors.com/) allows you to input your own parameters and see what you get.
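This calculation can be reconstructed in a few lines (my own simplified sketch: it treats every year as one trial and conditions on no AGI through 2020, which roughly reproduces the 4% headline figure; the report's exact methodology differs):

```python
def p_agi_by(end_year, start_year=1956, first_trial_p=1 / 300,
             observed_through=2020):
    """Generalized Laplace rule: a first-trial probability of 1/300
    corresponds to starting with v virtual failures, where 1/(v + 2)
    equals first_trial_p (so v = 298)."""
    v = 1 / first_trial_p - 2
    p_no_agi = 1.0
    p_no_agi_observed = 1.0
    for t in range(start_year, end_year + 1):
        failures = v + (t - start_year)
        p_fail = (failures + 1) / (failures + 2)  # Laplace: P(success) = 1/(f+2)
        p_no_agi *= p_fail
        if t <= observed_through:
            p_no_agi_observed *= p_fail
    # Probability of AGI by end_year, conditional on none through
    # observed_through.
    return 1 - p_no_agi / p_no_agi_observed

print(round(p_agi_by(2036), 3))  # 0.042 under these assumptions
```

The product telescopes, so the conditional probability collapses to a ratio of two integers; the small residual difference from the report's 4% comes from the simplifications above.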
Opinion
-------
If you're a Bayesian agent, learning more information can never hurt you in expectation. From this perspective, the main reason you wouldn't want to use the maximal amount of evidence is that you're not a Bayesian agent, which is, admittedly, a pretty good reason. Even so, attempts to improve predictions by purposefully limiting the amount of information you take in always feel like they're taking the wrong approach: navigating around biases instead of just debiasing yourself.
As an analogy, one of my ML professors once commented that he wasn't a fan of stopping training early to avoid overfitting because if your learning algorithm could even get worse with more training it wasn't a very good learning algorithm. I feel similarly about approaches to predictions that ignore specific information. If the process you're using to make predictions could possibly get worse with more information, it probably isn't a very good method of making predictions.
This is not to say that outside-view approaches aren't *useful*; I'm just not really a fan. They can be helpful when you can't debias yourself enough to make good predictions from all of the available information. In the same sense, if you don't have a better learning algorithm, doing early stopping is better than not doing it.
I don't have that much to say about the particular approach the author used – it seems quite reasonable.
Multi-Prize Lottery Ticket Hypothesis: Finding Accurate Binary Neural Networks by Pruning A Randomly Weighted Network
=====================================================================================================================
[Multi-Prize Lottery Ticket Hypothesis: Finding Accurate Binary Neural Networks by Pruning A Randomly Weighted Network](https://arxiv.org/abs/2103.09377)
Summary
-------
The multi-prize lottery ticket hypothesis is that in a sufficiently large neural network, there exist lottery tickets (subnetworks) that win multiple prizes. Those prizes are:
* The subnetwork achieves comparable accuracy to the trained network.
* The subnetwork achieves this accuracy without training.
* The subnetwork is robust to quantization.
The authors prove a quantitative bound for how large a network has to be before there is a high chance of such a ticket existing. They also exhibit an algorithm for finding such tickets and demonstrate SOTA results on binary network accuracy.
Opinion
-------
"Multi-prize" is a really weird way to describe a lottery ticket. I would prefer "strong lottery ticket hypothesis" or something like that.
My main issue with this result is that it feels like combinatorial sleight of hand. Basically, there is some definition of "sufficiently large" that gives you all three "prizes" for free because you can just pick whatever network you want. As an analogy, the infinitely large random graph contains every possible finite subgraph with probability 1, but you wouldn't really say a very large random graph has "lottery tickets". Well, you might, but it wouldn't be very meaningful.
I looked at the proof a bit to see if they were doing anything clever. As far as I can tell, the answer is "no" and they're just using basic probability bounds to ensure the existence of subnetworks. I also suspect that the results are with binary networks because they're much easier to prove things about. But basically, the result is akin to "if we get a large enough random graph, we can be sure that all possible graphs over n nodes are subgraphs of this random graph." This is basically very obvious and not interesting.
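To make the combinatorial point concrete, here is a toy subset-sum experiment (my own illustration, not from the paper). Proofs in this style essentially argue that with enough random weights, some subset combines to whatever value you need; brute force shows how quickly the achievable error falls as the pool of random weights grows:

```python
import itertools
import random

def best_subset_error(target, weights):
    # Brute-force search for the subset of random weights whose sum
    # lands closest to the target value.
    best = abs(target)  # the empty subset
    for r in range(1, len(weights) + 1):
        for combo in itertools.combinations(weights, r):
            best = min(best, abs(target - sum(combo)))
    return best

random.seed(0)
pool = [random.uniform(-1, 1) for _ in range(16)]
errors = {n: best_subset_error(0.37, pool[:n]) for n in (4, 8, 16)}
# The error is non-increasing as the pool grows, since every subset of a
# prefix is still available in the larger pool.
```

Nothing clever is happening here either: the number of subsets grows exponentially in the pool size, so "a big enough random pool contains a good ticket" is close to a counting argument.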
The interesting bit might be the algorithm they exhibit to find such binary neural networks, which allows you to train these networks more efficiently. If you want to deploy a powerful model on limited compute, one way of doing that is to train a really large model, then compress it somehow. Binarizing the weights seems like a reasonable way to go about that.
FYI: I’m working on a book about the threat of AGI/ASI for a general audience. I hope it will be of value to the cause and the community
The TLDR is the title of this post.
Hi all, long time EA/Rationalist, first time poster (apologies if formatting is off). I’m posting to mention that I’m 30,000 words into a draft of a book about the threat of AI written for a general audience.
People who read this forum would likely learn little from the book, but it would be for their friends and the larger group of people who do not.
Brief FAQ:
Q: What’s the book about? What’s the structure?
A: Summarizing the book in a few short sentences: Artificial Super Intelligence is coming. It is probably coming soon. And it might be coming for you.
Structurally, the initial chunk is making the case for AGI/ASI happening at all; happening soon; and not obviously being controllable. In short, the usual suspects.
The next chunk will be a comprehensive list of all the objections/criticisms of these positions/beliefs and responses to them. The final chunk explores what we can do about it. My goal is to be thorough and exhaustive (without being exhausting).
Q: Why should this book exist? Aren’t there already good books about AI safety?
A: Yes, there are! Superintelligence, Human Compatible, The Precipice, The Alignment Problem, Life 3.0, etc. all provide high-quality coverage in different ways. But most of them are not intended for a general audience. My goal will be to explain key concepts in the most accessible way possible (e.g. discuss the orthogonality thesis without using the word orthogonal).
Second, the market craves new content. While some people read books that are 2-10 years old, many people don’t, so new works need to keep coming out. Additionally, there have been so many advances recently, some coverage quickly becomes out of date. I think we should have more books come out on this urgent issue.
Q: Why you?
A: a) I have 14 years of experience explaining concepts to a general audience through writing and presenting hundreds of segments on my podcast The Reality Check;
b) I also have 14 years of experience as a policy analyst, again learning to explain ideas in a simple, straightforward manner.
c) I’m already writing it and I’m dedicated to finishing it. I waited until I was this far along in the writing to prove to myself that I was going to be able to do it. This public commitment will provide further incentive for completion.
Q: Are you concerned about this negatively impacting the movement?
A: This is a concern I take seriously. While it is possible increasing awareness of the problem of AI will make things worse overall, I think a more likely outcome is that it will be neutral to good. I will strive to do justice to the positions and concerns people in the community have (while understanding that there is disagreement within the community).
Q: Do you need any help?
A: Sure, thanks for asking. See breakdown of possibilities below.
a) If anyone is keen to volunteer as a research assistant, please let me know.
b) I’ll soon start looking for an agent. Anyone have connections to John Brockman (perhaps through Max Tegmark)? Or other?
c) If smart and capable people want to review some of the content in the future when it is more polished, that would be great.
d) I’m waiting to hear back about possible funding from the LTFF. If that falls through, some funding to pay for research assistance, editors/review, book promotion, or even to focus my time (as this is a side project) would be useful.
Q: Most books don’t really have much impact, isn’t this a longshot?
A: Yes. Now is the time for longshots.
Open Thread, Jul. 6 - Jul. 12, 2015
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
----------------------------------------
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
Change Is Bad
Epistemic Status: Public service reminder (I want to be able to link to this in the future)
Leads to: Choices are Bad, Choices Are Really Bad, Complexity Is Bad
Almost all changes are bad.
People forget that. They say they want change. They say things like:
> At the end of the day, I want to see change come about, whatever it would take. Whatever it would take to see change come about, I would welcome. – Chumbawamba, Be With You
What they actually want is one of those rare, carefully chosen, good, friendly changes. They do exist within change space.
Change space, like mind space, is deep and wide. Friendly change space isn’t quite to change space what friendly mind space is to mind space, but before you apply any filters of common sense, it’s remarkably close.
The more optimized things currently are, the less likely any given change is to be good.
The more time people have had to optimize other things around the current state of the thing you are trying to change, the less likely any given change is to be good.
The more effort people have put into optimizing other things, based on the thing you are looking to change, the less likely any given change is to be good. You could break a lot of things.
When you break those things, you cause harm. Since people hate losses more than they love gains, even a net improvement can make a person or group feel worse off.
If you do break things, often they stay broken. It is usually a lot harder to build or repair something than it is to break that thing.
The faster and bigger you make changes, the more other things you are likely to break, and the more critically you will break them. At a minimum, even when your change is strictly for the better, those things then must change to adapt. In many cases, they are broken entirely, beyond repair, and this goes on to break other things.
Modern life is highly optimized. It’s far from perfect. Often it is optimizing for the wrong things. Nevertheless, it is highly, highly opti
What's the theory of impact for activation vectors?
Activation vectors are really, really cool, but what is the theory of impact for this work?
* Is the hope that activation vectors will allow us to actually gain perfect control over a model to get it to do exactly what we want it to do?
* Is the hope that a new technique that builds upon activation vectors lets us do that instead?
* Is the hope that this technique allows us to marginally decrease the risks of powerful models in a Hail Mary attempt? Or perhaps to buy us more time to solve the problem?
* Is the hope just that learning more about how neural networks work will allow us to theorize better about how to control them?
Preparing for a Rational Financial Planning Sequence
What follows is a rough outline for a possible rational financial planning sequence that was inspired by some other recent discussion here. I'm not sure how useful this would be to how many people. I know there are some LessWrongers who would enjoy and learn from this; but I don't know if there are 5, 50, or 500. If you'd like to read it, let me know. If 500 people tell me they can't wait for this, I'll probably write it. If 5 people say maybe they'll glance at it, then probably not.
----------------------------------------
Part I: Preliminaries:
Financial Rationality
Multiplying uncertainties
The inside and outside views
Interpolation is reliable; extrapolation isn't
Part II: This is important:
* Why to save for retirement
* Dying alone in a hole: the story of Jane.
* Why compound interest is cool
* 65-year-old you will not want to live like a grad student
* 65-year-old you will not want to work like 35-year-old you
* Existential risk does not defeat personal risk
* Existential success does not defeat personal risk
Part III: Analyzing Your Life
(This section needs a lot more fleshing out, and thought)
Personal satisfaction and happiness: do what you love, and adjust your financial expectations accordingly
How much do you need to retire?
When do you want to retire?
How much do you need to live on today?
Big expenses you need to plan for
Increasing Income
College: the best financial decision you'll ever make, or the worst?
Choosing a career: what is your comparative advantage?
Switching careers
Career Decisions
equity vs salary; steady singles or home run hitter
employee or owner
Career Tactics
Salary negotiation
promotion
when to change jobs
Cutting Expenses
Save more tomorrow
Inheritance
Part IV: The Practical How-to Advice:
Emergency Cash
Credit cards: the good, the bad, and the criminal
Banking
Where to save (tax advantaged
Highlights from the Blackmail Debate (Robin Hanson vs Zvi Mowshowitz)
At one of our weekly LessWrong events, we had a lively debate on legalizing blackmail (video, transcript). Robin Hanson took the pro side, Zvi Mowshowitz took the con, and I moderated. 70 people showed up to watch for ~2 hours.
Here's my overview of their positions.
* Zvi thinks that blackmail would incentivize a whole host of terrible actions, such as trying to trick people into norm violation, and people becoming intensely secretive even around their closest friends and family.
* Robin thinks that blackmail is a weird rule, where you cannot ask for money to keep a secret, but the other person is allowed to offer it (e.g. people can offer you money if you sign an NDA). This makes no sense and Robin is looking for any clear reason why making one side of this deal should be illegal.
Below are some quotes from their conversation. And of course, there's the full edited transcript and video for those who want all the details.
Highlights
What's good about blackmail
> Robin Hanson: I think in the case of say David Letterman, who famously was blackmailed for having affairs, if he could have actually been successfully blackmailed, then people like Letterman would be doing much less of what he was doing. And these weren't just affairs with random people who liked him, these were employees of him and so they are much more morally questionable. And so I think there would just be a lot less sexual harassment if blackmail was legal.
>
> Robin Hanson: There are a lot of powerful people who break a lot of rules, actually legal rules in many ways. And then the people around them shut up about it, let them get away with it because they don't feel they actually have a credible threat to report it. And so they don't. And so blackmail would mean a lot more actual reporting or a discouragement of the things powerful people do, that break rules and norms.
Spreading dirt is often prosocial
> Zvi Mowshowitz: Essentially your argument is that in today's world, if people online
Evaluating a Corsi-Rosenthal Filter Cube
When I wrote about testing my ceiling air purifier prototype, two common questions were:
* Where is your control group?
* How does this compare to a Corsi-Rosenthal filter cube?
Several rounds of testing and one worn out particle meter later, I have answers for you! First, the bottom line:
| Purifier | CADR (CFM) | Cost |
| --- | --- | --- |
| Corsi-Rosenthal on High | 246 | $55 |
| Coway Mighty AP-1512HH on High | 219 | $172 |
| Corsi-Rosenthal on Low | 191 | $55 |
| Ceiling Fan Air Purifier on High | 180 | $115 |
| Coway Mighty AP-1512HH on Medium | 105 | $172 |
| Control | 9 | $0 |
Methodology is in "Testing Air Purifiers".
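For reference, the standard way to turn particle-decay measurements into a CADR figure is to fit an exponential to the counts and scale the decay rate by room volume. This is a sketch of the usual calculation, not necessarily the exact procedure in the linked methodology post:

```python
import math

def cadr_cfm(room_volume_ft3, c_start, c_end, minutes, control_decay_per_min=0.0):
    # Fit c(t) = c_start * exp(-k t), so k = ln(c_start / c_end) / t.
    # CADR is the room volume times the decay rate in excess of the
    # no-purifier control decay (settling, ventilation, etc.).
    k = math.log(c_start / c_end) / minutes
    return room_volume_ft3 * (k - control_decay_per_min)

# e.g. a 1000 ft^3 room where pm2.5 halves in 5 minutes: ~139 CFM
```

Subtracting the control decay is what makes the low control number above convenient: at 9 CFM-equivalent it barely moves the purifier results.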
The Corsi-Rosenthal box performs very well, and the control setting has a low enough decrease that ignoring it doesn't affect the analysis much.
This specific Corsi-Rosenthal box is a Lasko fan with four "FPR 10" filters that I bought several years ago. This is a Home Depot rating system, and is approximately equivalent to the MERV-13 you'll see recommended for filter cubes. This isn't a new filter: it ran in my office from when I returned to in-person work in early December 2021 until I switched jobs in mid-June 2022. The filters are decidedly grey now, and I'd likely get better performance with new ones:
I did have a problem partway through my testing where my Temtop M2000 suddenly started giving bogus readings:
In this test I first was measuring the pm2.5 and pm10 levels with nothing running, and then I turned on the filter cube. You can see levels dropped quickly, but then started to go back up.
Over the next few days it got even worse, with it often reporting indoor levels in the 100-500 µg/m³ range even outdoors. I returned the meter for a replacement, which has worked well so far. It does make me nervous about the longevity of this device, however, since it's relatively expensive for something that apparently doesn't last very long.
Sim-to-Real via Sim-to-Seg: End-to-end Off-road Autonomous Driving Without Real Data.
1 Introduction
---------------
Simplifying large autonomous driving software stacks, which are usually composed of 3D scene understanding, localization, mapping, and control, is a promising goal. While these stacks can indeed perform well in a range of scenarios, they do suffer from error propagation through each of the modules, and tend to require a large engineering overhead. However, attempting to solve the autonomous driving problem in a purely end-to-end manner, where observations are mapped directly to actions, also has its downfalls. For one, these methods are usually data-intensive, and in particular for reinforcement learning (RL), collecting exploratory driving data in the real world is impractical and dangerous.
To overcome the data burden, large-scale simulations can be employed to collect experience from a large number of parallel agents. However, we then have to consider the visual and dynamic discrepancies between the simulation and reality. Sim-to-real transfer approaches, such as domain adaptation [[2](#bib.bib2), [1](#bib.bib1)] and domain randomization [[3](#bib.bib3), [4](#bib.bib4), [5](#bib.bib5), [6](#bib.bib6)] exist for this reason. These however, have mostly been applied in tasks with a fixed camera viewpoint, such as manipulation tasks where the camera usually points down towards a bin [[2](#bib.bib2), [1](#bib.bib1)] or table [[3](#bib.bib3)], or faces a wall [[4](#bib.bib4), [5](#bib.bib5)].
Unlike these robot manipulation setups, autonomous navigation with full-scale vehicles in off-road settings are far less controlled, often featuring aspects such as drastically different lighting, glare, dust, long visual horizons, and complex backgrounds. In addition, unlike in standard autonomous street driving, the vehicle must traverse off-road terrains, which have less uniform dynamics and may include natural obstacles such as bushes, rocks, bumps, and more, which may not be observed during training.
In this paper, we investigate how similar sim-to-real techniques used for smaller scale robotic domains, such as robot manipulation, can be scaled to the significantly more visually challenging domain of off-road autonomous driving of a large vehicle. To this end, we propose Sim2Seg, a sim-to-real method designed with the challenges of off-terrain autonomous navigation in mind. Combined with a deep RL policy, Sim2Seg is the first work to effectively employ primarily visual sim-to-real transfer for off-road autonomous driving.
##### Contributions
We highlight our contributions below:
* We improve the discriminator component within RCAN [[1](#bib.bib1)] by convolving the output segmentation maps to produce feature maps that are then fed to a discriminator to evaluate.
* We explore suitable action modes that encourage safe trajectory proposals without diminishing the model’s capability.
* We show that our trained RL policy can perform as well as a sophisticated, model-based autonomous driving stack.
* We show the first primarily visual sim-to-real transfer method for end-to-end RL autonomous driving in complex off-road terrains that requires no real world data.
2 Related Work
---------------
##### Model-based Autonomous Driving
Most off-road autonomous driving approaches center around scene understanding approaches. Due to the challenges of natural obstacles such as bushes, trees, and rocks; uneven terrain; and various static and dynamic uncertainties [[7](#bib.bib7)], the robot must constantly assess the traversability of the terrain. Geometry-based methods involve constructing a terrain map [[8](#bib.bib8), [9](#bib.bib9)] from depth measurements from sensors such as LiDAR, stereo cameras, etc. This terrain map is used to generate a traversability cost by performing stability analysis, using features like surface normals, maximum or minimum height of the terrain, etc., which can be used by motion planning and control algorithms to plan vehicle’s actions [[10](#bib.bib10), [11](#bib.bib11), [12](#bib.bib12), [13](#bib.bib13)]. Papadakis [[14](#bib.bib14)] provides a survey of several other off-road driving algorithms. In lieu of these methods, we focus on end-to-end autonomous driving directly from pixels, avoiding these engineering layers.
##### End-to-End Autonomous Driving
End-to-end autonomous driving at large remains an unsolved problem: most approaches have narrowed their scope to urban environments [[15](#bib.bib15), [16](#bib.bib16)], as it is most relevant for everyday human transportation. In addition, most methods simplify the problem of general navigation by assuming static environments [[17](#bib.bib17)] and real-world datasets [[15](#bib.bib15)], which is a limiting factor for more difficult terrains and navigation tasks like off-road autonomous driving.
Since urban environments produce unique challenges of multi-agent interactions and following traffic rules, many end-to-end approaches greatly simplify the environment to focus on navigation on roads. For instance, Chu et al. [[18](#bib.bib18)] and Nair et al. [[19](#bib.bib19)] both reduce the environment to static, toy car-racing environments. While Kendall et al. [[20](#bib.bib20)] train a visual RL policy end-to-end in simulation, they focus explicitly on lane-following in a static environment and require real-world policy rollouts for few-shot policy transfer.
Offline real-world data has also been crucial to many methods. Ram [[21](#bib.bib21)] trains end-to-end in CARLA [[22](#bib.bib22)], an urban driving simulator, but requires real-world data to enhance simulator images and bridge the visual sim-to-real gap. VISTA [[15](#bib.bib15)] focuses on street navigation and relies upon a data-driven simulator, which synthesizes new viewpoints of a scene based on offline data, making it difficult to quickly simulate new scenes. Osinski et al. [[17](#bib.bib17)] use real-world images and ground-truth semantic segmentation maps to learn segmentations. We instead focus on a goal-conditioned policy for the explicit task of off-road navigation and obstacle avoidance, and zero-shot policy transfer to the real world via learning a shared representation from a diverse set of high-fidelity simulations.
##### Sim-to-Real in the Visual Domain
Perception forms the basis of many tasks, from smaller-scale robotics tasks to large-scale vehicles. To ensure consistent perception across simulators and real-world images, a popular technique used is domain randomization [[23](#bib.bib23), [4](#bib.bib4), [16](#bib.bib16)], in which input observations are randomized to prevent overfitting to simulator images, ensure adaptability to a variety of conditions, and encourage extraction of meaningful features such as object shapes and locations.
In the case that real-world data is easily accessible, domain adaptation is a popular method used to extract consistencies across the two domains. Pixel-level domain adaptation, for instance, improves pixel-level consistencies by restylizing simulator images [[24](#bib.bib24), [25](#bib.bib25)]. For robotics tasks involving objects that are critical to the scene, additional losses are penalized: RetinaGAN ensures object consistencies using a pretrained object detector [[26](#bib.bib26)], and RL-CycleGAN [[27](#bib.bib27)] penalizes differences in Q-values. Feature-level domain adaptation learns shared features across both domains [[28](#bib.bib28), [29](#bib.bib29), [30](#bib.bib30)]. It is notable that many of these environments are based around grasping and other robotics tasks, which have fixed camera viewpoints, controlled environments, and relatively easier data collection processes in comparison with off-road vehicles.
The most direct analog of our approach is RCAN [[1](#bib.bib1)], which approaches visual sim-to-real translation in robotic grasping via learning a shared RGB canonical space using a Pix2Pix model [[31](#bib.bib31)]. While directly inspired by RCAN, our method has distinct differences, which we highlight:
(1) RCAN converts visual inputs to canonicalized RGB images. Sim2Seg converts visual inputs to one-hot segmentation maps, which are a different, relatively lower dimensional, modality.
(2) RCAN discriminators take in paired RGB images. To improve discriminator stability, our discriminator takes in learned features of input images and segmentation maps.
(3) RCAN approaches robotic manipulation tasks. Sim2Seg approaches autonomous driving in the much more difficult setting of off-road environments, which presents additional challenges as described in Section [1](#S1 "1 Introduction ‣ Sim-to-Real via Sim-to-Seg: End-to-end Off-road Autonomous Driving Without Real Data").
3 Method
---------
In this section, we detail our method, Sim2Seg, which is summarised in Figure [1](#S0.F1 "Figure 1 ‣ Sim-to-Real via Sim-to-Seg: End-to-end Off-road Autonomous Driving Without Real Data"). It consists of two primary components: (1) inspired by RCAN [[1](#bib.bib1)], we train a Sim2Seg model that translates randomized RGB images from simulation into a shared canonical form, which we define as a semantic segmentation map. (2) We then train a goal-reaching policy within this canonical representation to navigate in off-terrain environments. During inference, we use a frozen pretrained Sim2Seg model to perform zero-shot transfer on real-world images.
### 3.1 Simulation
We create several simulation environments using the Unity engine. Unity fulfills several key desiderata, notably high-fidelity visual observations and dynamics, open-source vehicle components and scenes [[32](#bib.bib32)], and the ability to apply custom domain randomization techniques. Unity also integrates well with RL training with the Unity ML-Agents toolkit, which provides a Gym interface for training [[33](#bib.bib33)]. For our purposes, we further modify ML-Agents to support instance-level parallelism, allowing us to train multiple agents per built executable.
To maximally train our policy to a variety of off-road environments and generate a diverse dataset of simulator data, we select 3 different simulated scenes — dubbed Meadow [[34](#bib.bib34)], Landscapes [[35](#bib.bib35)], and Canyon [[36](#bib.bib36)] — with semantically diverse visual environments. During training, the policy trains simultaneously on all 3 environments, leveraging the most out of the simulation training phase and ensuring adaptability to a variety of scenes.
### 3.2 Sim2Real via Sim2Seg
To bridge the visual gaps between simulators and real-world data, we use a Sim2Seg model to convert randomized image domains into our chosen canonical form, a segmentation map consisting of six classes: trees/bushes, ground, sky, rocks, road, and logs. Visualization colors can be found in Appendix [B](#A2 "Appendix B Sim2Seg Architecture and Training ‣ Sim-to-Real via Sim-to-Seg: End-to-end Off-road Autonomous Driving Without Real Data"). We believe this is useful for off-road vehicles because it simplifies the unnecessary details, textures, and colors of images and identifies different types of obstacles, which gives important information on obstacles and areas to avoid (see Figure [2](#S3.F2 "Figure 2 ‣ Domain Randomization ‣ 3.2 Sim2Real via Sim2Seg ‣ 3 Method ‣ Sim-to-Real via Sim-to-Seg: End-to-end Off-road Autonomous Driving Without Real Data")).
##### Domain Randomization
For data collection, we collect 200k paired examples of RGB images, segmentation maps, and depth data per environment, using a random policy. We apply domain randomization to the textures of all objects, lighting color and direction, camera field-of-view, and camera position. To obtain the best mapping to the canonical state, we train one Sim2Seg model per environment and a separate Sim2Seg model on the combined dataset. During training, we use each environment’s Sim2Seg model, which avoids distribution shifts between the Unity environments. At real-world test time, we also consider a Sim2Seg model trained on all environments to achieve optimal performance.

Figure 2: To bridge the visual sim-to-real gap, we apply combinations of texture randomization, camera position and intrinsic randomization, and lighting color and direction randomization to varying scenes in Unity, and learn mappings to ground truth segmentation maps.
### 3.3 Sim2Seg Training
Following RCAN [[1](#bib.bib1)], we use conditional GANs (cGAN) with the U-Net architecture [[37](#bib.bib37), [31](#bib.bib31)] to translate pairs of randomized simulation images into their canonical segmentation map representation. During train time, we use paired simulation data $\{(x_s, x_c, m_d, m_o)_j\}_{j=1}^{N}$, where $x_s$ is a randomized RGB simulation image, $x_c$ is the canonicalized segmentation map, $m_d$ is the canonicalized depth map, and $m_o$ is the obstacle mask.
Unlike RCAN [[1](#bib.bib1)], we modify our objective to adapt to the task of autonomous driving. While RCAN predicts the canonical image, segmentation map, and depth map, we simplify our problem to predicting the segmentation map, which is simultaneously our canonical image; and the depth map, which is used as an auxiliary.
Thus, to encourage visual similarity, we optimize the following objective:
| | | | |
| --- | --- | --- | --- |
| | Leq(G)=𝔼xs,xc,md,mo[λxleqx(Gx(xs),xc)+λdleqd(Gd(xs),mo⋅md)]]L\_{eq}(G)=\operatorname{\mathbb{E}}\_{x\_{s},x\_{c},m\_{d},m\_{o}}[\lambda\_{x}l\_{{eq}\_{x}}(G\_{x}(x\_{s}),x\_{c})+\lambda\_{d}l\_{{eq}\_{d}}(G\_{d}(x\_{s}),m\_{o}\cdot m\_{d})]]italic\_L start\_POSTSUBSCRIPT italic\_e italic\_q end\_POSTSUBSCRIPT ( italic\_G ) = blackboard\_E start\_POSTSUBSCRIPT italic\_x start\_POSTSUBSCRIPT italic\_s end\_POSTSUBSCRIPT , italic\_x start\_POSTSUBSCRIPT italic\_c end\_POSTSUBSCRIPT , italic\_m start\_POSTSUBSCRIPT italic\_d end\_POSTSUBSCRIPT , italic\_m start\_POSTSUBSCRIPT italic\_o end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT [ italic\_λ start\_POSTSUBSCRIPT italic\_x end\_POSTSUBSCRIPT italic\_l start\_POSTSUBSCRIPT italic\_e italic\_q start\_POSTSUBSCRIPT italic\_x end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_G start\_POSTSUBSCRIPT italic\_x end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_s end\_POSTSUBSCRIPT ) , italic\_x start\_POSTSUBSCRIPT italic\_c end\_POSTSUBSCRIPT ) + italic\_λ start\_POSTSUBSCRIPT italic\_d end\_POSTSUBSCRIPT italic\_l start\_POSTSUBSCRIPT italic\_e italic\_q start\_POSTSUBSCRIPT italic\_d end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_G start\_POSTSUBSCRIPT italic\_d end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_s end\_POSTSUBSCRIPT ) , italic\_m start\_POSTSUBSCRIPT italic\_o end\_POSTSUBSCRIPT ⋅ italic\_m start\_POSTSUBSCRIPT italic\_d end\_POSTSUBSCRIPT ) ] ] | | (1) |
where $l_{eq_x}$ is the cross-entropy loss, $l_{eq_d}$ the L1 loss, $m_o \cdot m_d$ the element-wise product, and $\lambda_x, \lambda_d$ the weightings of the losses. We denote $G_x(x_s)$ as the segmentation output and $G_d(x_s)$ as the depth output for a randomized simulation image $x_s$.
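As a minimal sketch of Eq. (1) (plain Python lists stand in for per-pixel tensors, and the mean reduction is illustrative rather than the paper's exact implementation), the objective combines a cross-entropy term on the segmentation head with an obstacle-masked L1 term on the auxiliary depth head:

```python
import math

def equivalence_loss(seg_probs, seg_target, depth_pred, depth_true,
                     obstacle_mask, lam_x=1.0, lam_d=1.0):
    """Sketch of Eq. (1): weighted cross-entropy on the segmentation head
    plus masked L1 on the auxiliary depth head (per-pixel lists)."""
    # Cross-entropy between predicted class probabilities and integer labels.
    ce = -sum(math.log(p[t]) for p, t in zip(seg_probs, seg_target)) / len(seg_target)
    # L1 distance to the depth map, with the obstacle mask applied element-wise.
    l1 = sum(abs(d - m * t)
             for d, t, m in zip(depth_pred, depth_true, obstacle_mask)) / len(depth_pred)
    return lam_x * ce + lam_d * l1
```

The `lam_x`/`lam_d` defaults are placeholders for the paper's loss weights $\lambda_x, \lambda_d$.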
##### Adversarial Objective
We employ a discriminator $D$ which outputs the probability that an RGB image and segmentation map pair comes from the simulation dataset. Because our canonical output consists of probabilities for segmentation classes, we experimented with a few approaches to use a featurizer $f$ on the combination of RGB image and segmentation map.
$$L_{GAN}(G, D) = \mathbb{E}_{x_s, x_c}[\log D(f(x_s, x_c))] + \mathbb{E}_{x_s}[\log(1 - D(f(x_s, G_x(x_s))))] \tag{2}$$
Because our Sim2Seg model outputs segmentation class logits while the ground truth segmentation maps are one-hot encodings, discriminating directly on image and segmentation pairs is immediately trivial. In addition, because segmentation classes are discrete, we cannot sample a segmentation class and pass the gradient back to the generator. To remedy this issue, we initially attempted two approaches: (1) sampling with Gumbel-softmax [[38](#bib.bib38)] and (2) approximating the arg-max with soft arg-max [[39](#bib.bib39)], both of which are differentiable. Despite this, GAN training remained unstable, as discriminating between real and fake pairs was easy, likely due to noisy Gumbel-softmax samples and soft arg-max values in areas of uncertainty.
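For reference, a framework-free sketch of the Gumbel-softmax sampling attempted above (the temperature value is illustrative; in practice this would operate on autograd tensors so gradients can flow back to the generator):

```python
import math
import random

def gumbel_softmax(logits, temperature=1.0):
    """Draw a relaxed categorical sample: perturb each logit with Gumbel
    noise, then apply a tempered softmax over the perturbed logits."""
    # Gumbel(0, 1) noise via the inverse-CDF trick: -log(-log(U)), U ~ Uniform(0, 1).
    noisy = [l - math.log(-math.log(random.random())) for l in logits]
    exps = [math.exp(n / temperature) for n in noisy]
    z = sum(exps)
    return [e / z for e in exps]
```

Lower temperatures push the output toward a one-hot vector, which is why noisy samples in uncertain regions can make real/fake discrimination trivially easy.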
Thus, to circumvent these issues, we instead separately convolve the simulation image and segmentation maps to an equal number of channels to use as paired features for the discriminator to evaluate [[40](#bib.bib40)]. This allows for gradient flow without leading to unrealistic segmentation maps. We find this effective for Sim2Seg, as it sidesteps the problem of differing modalities and allows for more stable discrimination in feature space; we present ablations in Appendix [E.3](#A5.SS3 "E.3 Ablation: Sim2Seg Model ‣ Appendix E Ablations ‣ Sim-to-Real via Sim-to-Seg: End-to-end Off-road Autonomous Driving Without Real Data").
### 3.4 End-to-end Driving with RL
Using Sim2Seg, we train a short-horizon navigation policy with RL; specifically, we consider a goal-conditioned policy conditioned on visual and odometry data. Sim2Seg is inherently compatible with any visual learner, but we choose TD3 [[41](#bib.bib41)] with image augmentation [[42](#bib.bib42)] as our backbone RL algorithm. TD3 is an off-policy algorithm, and so enables goal-conditioning and relabeling in our training pipeline. We provide details on our architecture and hyperparameters in the supplementary materials.
##### Observation Space
We adjust the TD3 backbone policy with our augmented visual observations and state information for our domain. Before inference, Sim2Seg translates the input RGB image $o_t \in \mathbb{R}^{3 \times 256 \times 256}$ into a one-hot, $C$-class segmentation map $c_t \in \mathbb{R}^{C \times 256 \times 256}$. Additionally, we condition on the egocentric past trajectory $\tau_p \in \mathbb{R}^{10 \times 3}$, the current 2D state $s_a \in \mathbb{R}^2$, and the goal state $s_g \in \mathbb{R}^2$. The encoded segmentation map and state information are flattened and concatenated to form the final representation.
##### Action Space
We parameterize the policy's actor as an LSTM which outputs a series of 5 action tuples, each consisting of a steering angle $\theta \in [-\frac{\pi}{4}, \frac{\pi}{4}]$ and an acceleration $\alpha \in [0, 1]$. The policy then performs a temporal rollout of the actions to form a trajectory, which is consumed by a lower-level controller for vehicle commands. This parameterization comes with several benefits: proposing rollouts instead of vehicle torque commands mitigates the dynamics sim-to-real gap, and proposing multiple continuous actions allows the policy to propose waypoints and consider more coherent short-term plans, while giving a notion of safety when executing in the real world. We perform an analysis of different action parameterizations in Appendix [E.2](#A5.SS2 "E.2 Ablation: Trajectory Parameterization ‣ Appendix E Ablations ‣ Sim-to-Real via Sim-to-Seg: End-to-end Off-road Autonomous Driving Without Real Data").
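As an illustration of the temporal rollout step, the following sketch integrates the 5 (steering, acceleration) tuples into egocentric 2D waypoints using a kinematic bicycle model; the wheelbase and timestep are invented constants, not the vehicle's actual parameters:

```python
import math

def rollout(actions, wheelbase=2.5, dt=0.5):
    """Integrate (steering_angle, acceleration) tuples into egocentric
    2D waypoints using a kinematic bicycle model (illustrative constants)."""
    x = y = yaw = v = 0.0
    waypoints = []
    for steer, accel in actions:
        v += accel * dt                              # update speed
        yaw += (v / wheelbase) * math.tan(steer) * dt  # update heading
        x += v * math.cos(yaw) * dt
        y += v * math.sin(yaw) * dt
        waypoints.append((x, y))
    return waypoints
```

With zero steering the waypoints stay on the forward axis; a constant positive steering angle curves the trajectory, which is what lets a downstream tracking controller follow a short-term plan rather than raw torque commands.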
##### Reward Function
We design a simple reward function to best achieve goals while producing safe behaviors, notably obstacle avoidance:
$$r_t(s_g, s_a, a) = \lambda_g r_g(s_g, s_a) + \lambda_u r_u(s_g, s_a, a) + \lambda_s r_s(a) + \lambda_c r_c(s, a, s') \tag{3}$$
where $r_g$ is a goal-conditioned sparse reward, $r_u$ is an upright reward intended to incentivize smooth terrain (i.e., flatter, less rocky terrain), $r_s$ is a steering penalty intended to discourage bang-bang control, and $r_c$ is a collision penalty:
$$r_g(s_g, s_a) = \begin{cases} 100 & \text{if } \lVert s_g - s_a \rVert_2 < 2 \\ -1 & \text{otherwise} \end{cases} \qquad r_c(s, a, s') = \begin{cases} -1 & \text{if collision} \\ 0 & \text{otherwise} \end{cases}$$

$$r_u(s_g, s_a, a) = -\frac{\lvert \theta \rvert}{180} \qquad r_s(a) = -\lVert \theta \rVert_2$$
where $\theta$ is the maximum of the roll and pitch angles between the vehicle body and world frames, and collisions are detected via Unity. To supplement training, we additionally leverage Hindsight Experience Replay [[43](#bib.bib43)]: by relabeling sampled trajectories $(\tau, s_a, s_g)$ with a goal achieved later in the same trajectory $(\tau, s_a, s_a')$ during training, we obtain more signal from the sparse reward $r_g$.
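The reward in Eq. (3) is simple enough to state directly. The sketch below uses placeholder weights for the $\lambda$ terms and treats the upright angle in degrees, matching the $-\lvert\theta\rvert/180$ form of the upright term:

```python
import math

def reward(goal, state, roll, pitch, steer, collided,
           lam_g=1.0, lam_u=1.0, lam_s=1.0, lam_c=1.0):
    """Sketch of Eq. (3): sparse goal reward, upright reward, steering
    penalty, and collision penalty (lambda weights are placeholders)."""
    dist = math.hypot(goal[0] - state[0], goal[1] - state[1])
    r_g = 100.0 if dist < 2.0 else -1.0   # sparse goal-reaching term
    theta = max(abs(roll), abs(pitch))    # degrees, body vs. world frame
    r_u = -theta / 180.0                  # upright / smooth-terrain term
    r_s = -abs(steer)                     # steering penalty (bang-bang control)
    r_c = -1.0 if collided else 0.0       # collision penalty
    return lam_g * r_g + lam_u * r_u + lam_s * r_s + lam_c * r_c
```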
### 3.5 Real-world Vehicle Integration

Figure 3: Polaris hardware with sensor suite. a) raw image, (b) cropped image as an input to Sim2Seg (c) segmented result using on-board computer.
For real-world evaluation, we use a Polaris S4 1000 Turbo RZR equipped with a variety of perception sensors, including an inertial measurement unit (IMU), stereo camera pairs, and LiDARs (see [Figure 3](#S3.F3 "Figure 3 ‣ 3.5 Real-world Vehicle Integration ‣ 3 Method ‣ Sim-to-Real via Sim-to-Seg: End-to-end Off-road Autonomous Driving Without Real Data")). Note that we only use a monocular RGB camera when deploying. The Polaris also includes computing resources and a "drive-by-wire" system to autonomously control the vehicle (accelerate, change gears, brake, steer).
NeBula [[44](#bib.bib44)] has been integrated with the vehicle and utilizes a ROS stack with planners for varying goal horizons. To integrate our model with the NeBula autonomy system, we further test in the ARL simulator, a photorealistic simulator of the Meadow environment already integrated with the ROS stack.
During real-world deployment, we plan in an iterative closed loop at 10 Hz, executing actions only once the policy has received sufficient information. The policy stores a buffer of received messages, and only executes when timestamps are synchronized within 100 ms. Output trajectories are consumed by a PID controller, which translates trajectories into low-level vehicle commands.
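A hypothetical sketch of the timestamp-synchronization gate described above (the 100 ms tolerance comes from the text; the topic names are invented for illustration):

```python
def synchronized(latest_stamps, tolerance=0.1):
    """Return True when the newest message on every required topic falls
    within `tolerance` seconds of the others (timestamps in seconds)."""
    stamps = list(latest_stamps.values())
    return bool(stamps) and max(stamps) - min(stamps) <= tolerance

# The policy executes only once e.g. camera and odometry messages agree:
buffer = {"camera": 12.34, "odometry": 12.39}
assert synchronized(buffer)        # 50 ms apart: within the 100 ms window
buffer["odometry"] = 12.50
assert not synchronized(buffer)    # 160 ms apart: wait for a fresher image
```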
4 Experimental Setup
---------------------
##### Preliminary: Classical Baseline

Figure 4: Real-world off-road evaluation data gathered by our platform at a) Helendale, Mojave Desert and b) Arroyo Seco Trails, Altadena
We construct the classical baseline by leveraging the NeBula software stack [[44](#bib.bib44)].
Our specific implementation uses localization estimates from LiDAR-inertial odometry to temporally fuse depth scans, which are used to estimate traversability cost via a settling-based geometric analysis [[45](#bib.bib45)].
This traversability map is used by a kinodynamic motion planner [[46](#bib.bib46)] to generate collision-free trajectories, which are followed by a lower-level PID-based tracking controller.
##### Offline Sim-to-Real Transfer Evaluation
To evaluate our policy's ability to perform visual sim-to-real transfer, we evaluate it on offline rosbag datasets of real-world, manual rollouts with the Polaris vehicle. Given rosbags of vehicle interactions, we first construct a dataset $\mathcal{D} := \{o, s_g, \tau_p^*; \tau^*\}$, where $o$ is the observation at $t = 0$, $s_g$ is the real achieved goal at some $t > 3$ seconds, $\tau_p^*$ is the past trajectory up to $t = 0$, and $\tau^*$ is the vehicle's real-world trajectory from $t = 0$. To calculate $\tau^*$ and $\tau_p^*$, we transform the recorded odometry into the vehicle's egocentric frame. We then perform the model's full inference pipeline using $o$, $s_g$, and $\tau_p^*$ to produce a predicted trajectory $\tau$.
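Transforming recorded world-frame odometry into the vehicle's egocentric frame at $t = 0$ is a standard 2D rigid-body transform; a sketch, assuming the reference pose is given as (x, y, yaw):

```python
import math

def to_egocentric(points, origin):
    """Express world-frame (x, y) points relative to the vehicle pose
    `origin` = (x0, y0, yaw0) at t = 0: translate, then rotate by -yaw0."""
    x0, y0, yaw0 = origin
    c, s = math.cos(-yaw0), math.sin(-yaw0)
    out = []
    for x, y in points:
        dx, dy = x - x0, y - y0
        out.append((c * dx - s * dy, s * dx + c * dy))
    return out
```

For example, with the vehicle at the world origin facing +y, a point ahead on the world x-axis lands at roughly (0, -1) in the egocentric frame, i.e. directly to the vehicle's side.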
Following this method, we construct two different real-world datasets from environments semantically and visually different from the set of training environments, which feature the vehicle navigating winding trails, rocks, and vegetation (see [Figure 4](#S4.F4 "Figure 4 ‣ Preliminary: Classical Baseline ‣ 4 Experimental Setup ‣ Sim-to-Real via Sim-to-Seg: End-to-end Off-road Autonomous Driving Without Real Data")).
##### Online Sim-to-Real Transfer Evaluation
We also perform qualitative evaluation of closed-loop control with our policy fully in the real world using the system described in Section [3.5](#S3.SS5 "3.5 Real-world Vehicle Integration ‣ 3 Method ‣ Sim-to-Real via Sim-to-Seg: End-to-end Off-road Autonomous Driving Without Real Data") in the Arroyo environment.
5 Experimental Results
-----------------------
We seek to answer two primary questions: (1) Is Sim2Seg able to efficiently reach goals and perform obstacle avoidance during zero-shot transfer? (2) What factors are most necessary for Sim2Seg’s performance?

Figure 5: Timelapse of experimental demonstration of zero-shot transfer of goal following while avoiding obstacles.
### 5.1 Goal Reaching and Obstacle Avoidance
We evaluate the policy's capacity for obstacle navigation through comparison against the real-world rollout $\tau^*$, under the assumption that $\tau^*$ is an efficient trajectory to reach $s_g$ while avoiding obstacles. Following the offline evaluation setup described in Section [4](#S4.SS0.SSS0.Px1 "Preliminary: Classical Baseline ‣ 4 Experimental Setup ‣ Sim-to-Real via Sim-to-Seg: End-to-end Off-road Autonomous Driving Without Real Data"), we define efficient as (1) reaching $s_g$ in a timely manner ($s_g$ is the offset at $t = 3$ seconds), and (2) avoiding obstacles (the human driver purposefully avoids collisions).
We compare the normalized L2 distance (denoted L2) between the requested goal and the trajectory endpoint, and the angle difference between 10 evenly sampled points from $\tau^*$ and $\tau$ (denoted GT). We also include absolute trajectory error (denoted ATE), a classical measure of trajectory alignment. Additionally, to account for the possibility that $\tau$ reaches the goal more directly than $\tau^*$, we introduce $\text{GT}_\text{G}$, the GT metric described above computed between the shortest path to $s_g$ and $\tau$.
Lower L2 and $\text{GT}_\text{G}$ indicate that the policy outputs trajectories that reach the goal (goal reaching); lower GT and ATE indicate that the policy aligns well with the human trajectory (obstacle avoidance).
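Assuming trajectories are given as equal-length lists of 2D points, the endpoint L2 and ATE metrics can be sketched as follows (GT, the sampled heading-angle difference, follows the same pointwise pattern):

```python
import math

def endpoint_l2(traj, goal):
    """L2 distance between the predicted trajectory endpoint and the goal."""
    ex, ey = traj[-1]
    return math.hypot(goal[0] - ex, goal[1] - ey)

def ate(traj, ref):
    """Absolute trajectory error: RMS pointwise distance to the reference."""
    assert len(traj) == len(ref)
    sq = [(px - rx) ** 2 + (py - ry) ** 2
          for (px, py), (rx, ry) in zip(traj, ref)]
    return math.sqrt(sum(sq) / len(sq))
```

Note that this assumes the two trajectories are already resampled to the same number of points; the normalization used in the table is not reproduced here.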
To understand the gains provided by our method, we consider Sim2Seg’s performance against several baselines detailed below. The results are listed in Table [1](#S5.T1 "Table 1 ‣ 5.1 Goal Reaching and Obstacle Avoidance ‣ 5 Experimental Results ‣ Sim-to-Real via Sim-to-Seg: End-to-end Off-road Autonomous Driving Without Real Data").
* Random: An untrained policy, i.e., random actions.
* Domain Randomization: We train an identical policy without the shared representation space learned through Sim2Seg, instead leveraging only our domain randomization methods.
* Classical: We consider an existing classical autonomy stack as described in Section [3.5](#S3.SS5 "3.5 Real-world Vehicle Integration ‣ 3 Method ‣ Sim-to-Real via Sim-to-Seg: End-to-end Off-road Autonomous Driving Without Real Data").
Table 1: Offline evaluation metrics of our method against a broad set of baselines, including domain randomization (DR) and random actions (Random). Standard deviation is shown across 5 seeds. \*Note that Classical is not an RL solution, so there are no seeds. Classical takes in LiDAR, whereas our other methods take in only image and odometry inputs.
| Data | Method | GT ↓ | ATE ↓ | $\text{GT}_\text{G}$ ↓ | L2 ↓ |
| --- | --- | --- | --- | --- | --- |
| Arroyo | Random | 0.354 ± 0.002 | 1.693 ± 0.411 | 0.343 ± 0.005 | 0.691 ± 0.003 |
|  | DR | 0.292 ± 0.026 | 0.945 ± 0.209 | 0.280 ± 0.022 | 0.452 ± 0.008 |
|  | Classical\* | 0.070 | 0.812 | 0.087 | 0.653 |
|  | Sim2Seg (ours) | 0.147 ± 0.027 | 0.471 ± 0.012 | 0.163 ± 0.019 | 0.287 ± 0.024 |
| Helendale | Random | 0.318 ± 0.003 | 2.128 ± 0.029 | 0.350 ± 0.003 | 0.751 ± 0.005 |
|  | DR | 0.305 ± 0.039 | 1.546 ± 0.374 | 0.238 ± 0.042 | 0.501 ± 0.028 |
|  | Classical\* | 0.106 | 2.498 | 0.200 | 0.868 |
|  | Sim2Seg (ours) | 0.158 ± 0.007 | 0.747 ± 0.381 | 0.210 ± 0.013 | 0.383 ± 0.026 |
Furthermore, we demonstrate zero-shot transfer by deploying the algorithm in a real-world off-road environment on a passenger-size vehicle shown in Figure [5](#S5.F5 "Figure 5 ‣ 5 Experimental Results ‣ Sim-to-Real via Sim-to-Seg: End-to-end Off-road Autonomous Driving Without Real Data"). In the included example, the policy identifies the rock as an obstacle, and creates a trajectory to navigate around it. See supplementary material for videos.
We include additional ablations on the incorporation of real-world data in Appendix [E.1](#A5.SS1 "E.1 Ablation: Simulator and Real-World Data ‣ Appendix E Ablations ‣ Sim-to-Real via Sim-to-Seg: End-to-end Off-road Autonomous Driving Without Real Data") and the segmentation model in Appendix [E.3](#A5.SS3 "E.3 Ablation: Sim2Seg Model ‣ Appendix E Ablations ‣ Sim-to-Real via Sim-to-Seg: End-to-end Off-road Autonomous Driving Without Real Data").

Figure 6: Qualitative zero-shot transfer results of our final Sim2Seg model. Our simulator environments range from a grassy meadow to a rocky canyon environment, yet with sufficient domain-randomization, we can achieve strong performance in the unseen and quite different real-world environments. Row 2 consists of observations we consider particularly challenging. Sim2Seg is able to generalize to harsh lighting and extreme shadows.
6 Conclusion and Limitations
-----------------------------
In this paper, we have investigated transferring end-to-end off-road autonomous driving policies from simulation to reality. To accomplish this, we have presented Sim2Seg, which converts randomized simulated RGB images into segmentation masks, and in turn enables real-world images to be converted as well. Because our driving policy is trained in these canonical segmentation environments, policies trained in simulation can be run directly in the real world. When evaluating on real-world data, we perform on par with a classical perception and control stack that took thousands of engineering hours over several months to build.

Figure 7: Sample rollouts from each of the considered action modes in [E.2](#A5.SS2 "E.2 Ablation: Trajectory Parameterization ‣ Appendix E Ablations ‣ Sim-to-Real via Sim-to-Seg: End-to-end Off-road Autonomous Driving Without Real Data"), with the visual observation pictured on the left. Despite there being few obstacles in the scene, the short-horizon policy proposes a trajectory far from the requested goal.
##### Limitations
Improving the quality of the Sim2Seg model is a key priority; qualitative analysis of segmentation maps shows difficulty with shadows (which may be perceived as new objects) and low-contrast scenes (such as the small shrubs depicted in Figure [6](#S5.F6 "Figure 6 ‣ 5.1 Goal Reaching and Obstacle Avoidance ‣ 5 Experimental Results ‣ Sim-to-Real via Sim-to-Seg: End-to-end Off-road Autonomous Driving Without Real Data")). We hope that such shortcomings can be addressed by introducing stronger shadow randomization techniques and by increasing the number of training environments.
Currently, Sim2Seg's policy is trained on horizons of 20 meters in length, and thus is only effective at this range. An effective horizon of 50 meters or more would likely be much more practical, especially when navigating at higher speeds. Our policy is currently capable of avoiding obstacles at short range, as demonstrated in Section [5.1](#S5.SS1 "5.1 Goal Reaching and Obstacle Avoidance ‣ 5 Experimental Results ‣ Sim-to-Real via Sim-to-Seg: End-to-end Off-road Autonomous Driving Without Real Data"); however, long-horizon reasoning, such as needing to traverse around a forest to reach a goal, has not been tested. Training with temporally and spatially longer horizons inherently introduces more complexity to RL; this is an active area of research for us.
One particular area of interest is trajectory proposals in terms of waypoints. Waypoint proposals would allow us to explore other methods such as Bezier curve parameterizations and discrete trajectory libraries [[47](#bib.bib47)], and would provide further proof that Sim2Seg can generalize to different policy architectures.
#### Acknowledgments
This work was supported by DARPA RACER and Hong Kong Centre for Logistics Robotics. We would like to thank Valentin Ibars for his help running real-world experiments.
Using Trusted Data to Train Deep Networks on Labels Corrupted by Severe Noise.
1 Introduction
---------------
Robustness to label noise is set to become an increasingly important property of supervised learning models. With the advent of deep learning, the need for more labeled data makes it inevitable that not all examples will have high-quality labels. This is especially true of data sources that admit automatic label extraction, such as web crawling for images, and tasks for which high-quality labels are expensive to produce, such as semantic segmentation or parsing. Additionally, label corruption may arise in data poisoning (Li et al., [2016](#bib.bib9); Steinhardt et al., [2017](#bib.bib21)). Both natural and malicious label corruption are known to sharply degrade the performance of classification systems (Zhu & Wu, [2004](#bib.bib28)).
We consider the scenario where we have access to a large set of examples with potentially corrupted labels and determine how much can be gained from access to a small set of examples where labels are considered gold standard. This scenario is realistic, as it is usually the case that a number of trusted examples have been gathered in the validation and test sets, and that more could be gathered if necessary.

Figure 1: A label corruption matrix $C$ (top left) and three matrix estimates for a corrupted CIFAR-10 dataset. Entry $C_{ij}$ is the probability that a label of class $i$ is corrupted to class $j$, i.e. $C_{ij} = p(\tilde{y} = j \mid y = i)$. Our estimate matches the true corruption matrix more closely than the confusion matrix and the Forward method. Further comparisons and method descriptions are in Section [4.3](#S4.SS3 "4.3 Uniform, Flip, and Hierarchical Corruption ‣ 4 Experiments ‣ Using Trusted Data to Train Deep Networks on Labels Corrupted by Severe Noise").
To leverage the additional information from trusted labels, we propose a new loss correction and empirically verify it on a number of vision and natural language datasets with label corruption. Specifically, we demonstrate recovery from extremely high levels of label noise, including the dire case when the untrusted data has a majority of its labels corrupted. Such severe corruption can occur in adversarial situations like data poisoning, or when the number of classes is large. In comparison to loss corrections that do not employ trusted data (Patrini et al., [2016](#bib.bib17)), our method is significantly more accurate in problem settings with moderate to severe label noise. Relative to a recent method which also uses trusted data (Li et al., [2017](#bib.bib10)), our method is far more data-efficient and generally more accurate. These results demonstrate that systems can weather label corruption with access only to a small number of gold standard labels. The code is available at <https://github.com/mmazeika/glc>.
2 Related Work
---------------
The performance of machine learning systems reliant on labeled data has been shown to degrade noticeably in the presence of label noise (Nettleton et al., [2010](#bib.bib16); Pechenizkiy et al., [2006](#bib.bib18)). In the case of adversarial label noise, this degradation can be even worse (Reed et al., [2014](#bib.bib19)). Accordingly, modeling, correcting, and learning with noisy labels has been well studied (Natarajan et al., [2013](#bib.bib15); Biggio et al., [2011](#bib.bib1); Frénay & Verleysen, [2014](#bib.bib3)).
The methods of (Mnih & Hinton, [2012](#bib.bib14)), (Larsen et al., [1998](#bib.bib8)), (Patrini et al., [2016](#bib.bib17)) and (Sukhbaatar et al., [2014](#bib.bib22)) allow for label noise robustness by modifying the model’s architecture or by implementing a loss correction. Unlike (Mnih & Hinton, [2012](#bib.bib14)) who focus on binary classification of aerial images and (Larsen et al., [1998](#bib.bib8)) who assume the labels are symmetric (i.e., that the noise and the labels are independent), (Patrini et al., [2016](#bib.bib17)) and (Sukhbaatar et al., [2014](#bib.bib22)) consider label noise in the multi-class problem setting with asymmetric labels.
In (Sukhbaatar et al., [2014](#bib.bib22)), the authors introduce a stochastic matrix measuring label corruption, note its inability to be calculated without access to the true labels, and propose a method of forward loss correction. Forward loss correction adds a linear layer to the end of the model, and the loss is adjusted accordingly to incorporate learning about the label noise. (Patrini et al., [2016](#bib.bib17)) also make use of the forward loss correction mechanism, and propose an estimate of the label corruption matrix which relies on strong assumptions and no clean labels.
Contra (Sukhbaatar et al., [2014](#bib.bib22); Patrini et al., [2016](#bib.bib17)), we make the assumption that during training the model has access to a small set of clean labels and use this to create our label noise correction. This assumption has been leveraged by others for the purpose of label noise robustness, most notably (Veit et al., [2017](#bib.bib23); Li et al., [2017](#bib.bib10); Xiao et al., [2015](#bib.bib24)), and tenuously relates our work to the field of semi-supervised learning (Zhu, [2005](#bib.bib27); Chapelle et al., [2010](#bib.bib2)). In (Veit et al., [2017](#bib.bib23)), human-verified labels are used to train a label cleaning network by estimating the residuals between the noisy and clean labels in a multi-label classification setting. In the multi-class setting that we focus on in this work, (Li et al., [2017](#bib.bib10)) propose distilling the predictions of a model trained on clean labels into a second network trained on the noisy labels and the predictions of the first. Our work differs from these two in that we do not train neural networks on the clean labels alone.
3 Gold Loss Correction
-----------------------
We are given an untrusted dataset ˜D of u examples (x,~y), and we assume that these examples are *potentially* corrupted examples from the true data distribution p(x,y) with K classes. Corruption is according to a label noise distribution p(~y∣y,x). We are also given a trusted dataset D of t examples drawn from p(x,y), where t/u≪1. We refer to t/u as the trusted fraction. Concretely, a web scraper labeling images from metadata may produce an untrusted set, while expert-annotated examples would form a trusted dataset and be a *gold standard*.
In leveraging trusted data, we focus our investigation on the stochastic matrix correction approach used by (Sukhbaatar et al., [2014](#bib.bib22); Patrini et al., [2016](#bib.bib17)). In this approach, a stochastic matrix is applied to the softmax output of a classifier, and the resulting new softmax output is trained to match the noisy labeling. If the stochastic matrix is engineered so as to approximate the label noising procedure, this approach can bring the original output close to the distribution of clean labels, under moderate assumptions.
We explore two avenues of utilizing the trusted dataset to improve this approach. The first involves directly using the trusted data while training the final classifier. As this could be applied to existing stochastic matrix correction methods, we run ablation studies to demonstrate its effect. The second avenue involves using the additional information conferred by the clean labels to obtain a better matrix to use with the approach. As a first approximation, one could use a normalized confusion matrix of a classifier trained on the untrusted dataset and evaluated on the trusted dataset. We demonstrate, however, that this does not work as well as the estimate used by our method, which we now describe.
Our method makes use of D to estimate the K×K matrix of corruption probabilities Cij=p(~y=j∣y=i). Once this estimate is obtained, we use it to train a modified classifier from which we recover an estimate of the desired conditional distribution p(y∣x). We call this method the Gold Loss Correction (GLC), so named because we make use of trusted or gold standard labels.
### 3.1 Estimating The Corruption Matrix
To estimate the probabilities p(~y∣y), we make use of the identity
$$p(\tilde{y}\mid x)=\sum_{y=1}^{K}p(\tilde{y}\mid y,x)\,p(y\mid x).$$
The left hand side of the equality can be approximated by training a neural network on ˜D. Let ~θ be the parameters of this network, and let ^p(~y∣x;~θ) be its softmax output vector.
Given an example x and its true one-hot label y, the term on the right reduces to p(~y∣y,x)=p(~y∣x). In the case where ~y is conditionally independent of x given y, this further reduces to p(~y∣x)=p(~y∣y). In the case where ~y is not conditionally independent of x given y, we can still approximate p(~y∣y). We know
$$p(\tilde{y}\mid y,x)=\frac{p(\tilde{y},y,x)}{p(y,x)}=\frac{p(x\mid\tilde{y},y)\,p(\tilde{y}\mid y)\,p(y)}{p(x\mid y)\,p(y)}=p(\tilde{y}\mid y)\,\frac{p(x\mid\tilde{y},y)}{p(x\mid y)}.$$

Rearranging gives $p(\tilde{y}\mid y,x)\,p(x\mid y)=p(\tilde{y}\mid y)\,p(x\mid\tilde{y},y)$.
Integrating over all x gives us
$$\int p(\tilde{y}\mid y,x)\,p(x\mid y)\,dx=p(\tilde{y}\mid y)\int p(x\mid\tilde{y},y)\,dx=p(\tilde{y}\mid y).$$
We approximate the integral on the left with the expectation of p(~y∣y,x) over the empirical distribution of x given y. More explicitly, let Ai be the subset of x in D with label i. Denote our estimate of C by ˆC. We have
$$\hat{C}_{ij}=\frac{1}{|A_i|}\sum_{x\in A_i}\hat{p}(\tilde{y}=j\mid x)=\frac{1}{|A_i|}\sum_{x\in A_i}\hat{p}(\tilde{y}=j\mid y=i,x)\approx p(\tilde{y}=j\mid y=i).$$
This is how we estimate the corruption matrix for GLC. The second equality follows from noting that when y is known, the preceding discussion gives p(~y∣x)=p(~y∣y,x). The quality of the approximation relies on ^p(~y∣x) being a good estimate of p(~y∣x) and on having a sufficient number of trusted examples of each class.
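The estimator above amounts to averaging the noisy-label softmax outputs over the trusted examples of each class. A minimal NumPy sketch (function and variable names are illustrative; `softmax_noisy` stands for the outputs of the network trained on the untrusted set):

```python
import numpy as np

def estimate_corruption_matrix(softmax_noisy, trusted_labels, num_classes):
    """Estimate C_hat[i, j] ~= p(y_tilde = j | y = i) by averaging the
    noisy-label softmax predictions over trusted examples of each class.

    softmax_noisy:  (n, K) array; row t is p_hat(y_tilde | x_t) from a model
                    trained on the untrusted set.
    trusted_labels: (n,) array of clean labels for the same examples.
    """
    C_hat = np.zeros((num_classes, num_classes))
    for i in range(num_classes):
        mask = trusted_labels == i
        # Average the predictions over A_i, the trusted examples with label i.
        C_hat[i] = softmax_noisy[mask].mean(axis=0)
    return C_hat
```

Each row of the estimate is an average of probability vectors, so the rows sum to one by construction.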
### 3.2 Training a Corrected Classifier
Now with ˆC, we follow the method of (Sukhbaatar et al., [2014](#bib.bib22); Patrini et al., [2016](#bib.bib17)) to train a corrected classifier. Given the K×1 softmax output s of our classifier, we define the new outputs ~s := ˆCs, reinitialize θ, and train ^p(~s∣x;θ) on the noisy labels with the cross-entropy loss. If ~y is conditionally independent of x given y, C is nonsingular, and ˆC is a perfect estimate of C, then the invertibility of C implies that the trained classifier’s original output s recovers p(y∣x).
We find using ~s to work well in practice, even for some singular corruption matrices. We can further improve on this method by using the data in the trusted set to train the corrected classifier. On examples from the trusted set encountered during training, we temporarily set ˆC to the identity matrix to turn off the correction. This has the effect of allowing our label correction to handle a degree of instance-dependency in the label noise (Menon et al., [2016](#bib.bib13)). A summary of our method is in the algorithm below.
1: Input: trusted data D, untrusted data ˜D, loss ℓ
2: Train network f(x) = ^p(˜y∣x; θ) on ˜D with loss ℓ
3: Fill ˆC ∈ ℝ^{K×K} with zeros, K the number of classes
4: for k = 1, …, K do
5:  num_examples = 0
6:  for (x_i, y_i) ∈ D such that y_i = k do
7:   num_examples += 1
8:   ˆC_{∙k} += f(x_i)  {add f(x_i) to the kth column}
9:  end for
10:  ˆC_{∙k} /= num_examples
11: end for
12: Initialize new model g(x) = ^p(y∣x; θ)
13: Train g(x) with ℓ(g(x), y) on D and ℓ(ˆCg(x), ˜y) on ˜D
14: Output: model ^p(y∣x; θ)
Algorithm: Gold Loss Correction (GLC)
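The corrected training objective can be sketched as follows (an illustrative NumPy version, not the authors' code; it computes the per-example cross-entropy with the correction switched off on trusted rows, as described above):

```python
import numpy as np

def glc_cross_entropy(probs, labels, C_hat, trusted):
    """Per-example cross-entropy loss for GLC (a sketch under the paper's
    conventions, with hypothetical names).

    probs:   (n, K) softmax outputs p_hat(y | x) of the classifier.
    labels:  (n,) labels -- clean for trusted rows, noisy otherwise.
    C_hat:   (K, K) estimated corruption matrix with
             C_hat[i, j] ~= p(y_tilde = j | y = i).
    trusted: (n,) boolean mask; on trusted examples the correction is
             temporarily replaced by the identity.
    """
    # Map clean-label probabilities to noisy-label probabilities:
    # p_hat(y_tilde = j | x) = sum_i p_hat(y = i | x) * C_hat[i, j].
    corrected = probs @ C_hat
    # Turn the correction off on trusted examples (identity matrix).
    corrected[trusted] = probs[trusted]
    n = len(labels)
    return -np.log(corrected[np.arange(n), labels] + 1e-12)
```

In a deep learning framework, `corrected` would simply be an extra fixed linear layer appended after the softmax during training, and removed at test time.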
Figure 2: Error curves for the compared methods across a range of corruption strengths on different datasets.
4 Experiments
--------------
We empirically demonstrate GLC on a variety of datasets and architectures under several types of label noise.
### 4.1 Description
Generating Corrupted Labels. Suppose our dataset has t+u examples. We sample a set of t datapoints D, and the remaining u examples form ˜D, which we probabilistically corrupt according to a true corruption matrix C. Note that we do not have knowledge of which of our u untrusted examples are corrupted. We only know that they are *potentially* corrupted.
To generate the untrusted labels from the true labels in ˜D, we first obtain a corruption matrix C. Then, for an example with true label i, we sample the corrupted label from the categorical distribution parameterized by the ith row of C.
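Concretely, the corruption step can be sketched as follows (illustrative NumPy code; `C` is the true corruption matrix and each row parameterizes a categorical distribution):

```python
import numpy as np

def corrupt_labels(labels, C, rng=None):
    """Sample noisy labels: for an example with true label i, draw the
    corrupted label from the categorical distribution given by row i of C."""
    rng = np.random.default_rng(rng)
    K = C.shape[0]
    return np.array([rng.choice(K, p=C[i]) for i in labels])
```

With the identity matrix as `C`, labels pass through unchanged; as `C` approaches a uniform matrix, the sampled labels carry less information about the true ones.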
Comparing Loss Correction Methods. The GLC differs from previous loss corrections for label noise in that it reasonably assumes access to a high-quality annotation source. Therefore, to compare to other loss correction methods, we ask how each method performs when starting from the same dataset with the same label noise. In other words, the only additional information our method uses is knowledge of which examples are trusted, and which are potentially corrupted.
### 4.2 Datasets, Architectures, and Noise Corrections
| Dataset | Corruption Type | Percent Trusted | Trusted Only | No Correction | Forward Correction | Forward Gold | Distillation Correction | Confusion Matrix | GLC (Ours) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MNIST | Uniform | 5 | 37.6 | 12.9 | 14.5 | 13.5 | 42.1 | 21.8 | 10.3 |
| MNIST | Uniform | 10 | 12.9 | 12.3 | 13.9 | 12.3 | 9.2 | 15.1 | 6.3 |
| MNIST | Uniform | 25 | 6.6 | 9.3 | 11.8 | 9.2 | 5.8 | 11.0 | 4.7 |
| MNIST | Flip | 5 | 37.6 | 50.1 | 51.7 | 41.4 | 46.5 | 11.7 | 3.4 |
| MNIST | Flip | 10 | 12.9 | 51.1 | 48.8 | 36.4 | 32.4 | 5.6 | 2.9 |
| MNIST | Flip | 25 | 6.6 | 47.7 | 50.2 | 37.1 | 28.2 | 3.8 | 2.6 |
| MNIST | Mean | | 19.0 | 30.6 | 31.8 | 25.0 | 27.4 | 11.5 | **5.0** |
| CIFAR-10 | Uniform | 5 | 39.6 | 31.9 | 9.1 | 27.8 | 29.7 | 22.4 | 9.0 |
| CIFAR-10 | Uniform | 10 | 31.3 | 31.9 | 8.6 | 20.6 | 18.3 | 22.7 | 6.9 |
| CIFAR-10 | Uniform | 25 | 17.4 | 32.7 | 7.7 | 27.1 | 11.6 | 16.7 | 6.4 |
| CIFAR-10 | Flip | 5 | 39.6 | 53.3 | 38.6 | 47.8 | 29.7 | 8.1 | 6.6 |
| CIFAR-10 | Flip | 10 | 31.3 | 53.2 | 36.5 | 51.0 | 18.1 | 8.2 | 6.2 |
| CIFAR-10 | Flip | 25 | 17.4 | 52.7 | 37.6 | 49.5 | 11.8 | 7.1 | 6.1 |
| CIFAR-10 | Mean | | 29.4 | 42.6 | 23.0 | 37.3 | 19.9 | 14.2 | **6.9** |
| CIFAR-100 | Uniform | 5 | 82.4 | 48.8 | 47.7 | 49.6 | 87.5 | 53.6 | 42.4 |
| CIFAR-100 | Uniform | 10 | 67.3 | 48.4 | 47.2 | 48.9 | 61.2 | 49.7 | 33.9 |
| CIFAR-100 | Uniform | 25 | 52.2 | 45.4 | 43.6 | 46.0 | 39.8 | 39.6 | 27.3 |
| CIFAR-100 | Flip | 5 | 82.4 | 62.1 | 61.6 | 62.6 | 87.1 | 28.6 | 27.1 |
| CIFAR-100 | Flip | 10 | 67.3 | 61.9 | 61.0 | 62.2 | 61.8 | 26.9 | 25.8 |
| CIFAR-100 | Flip | 25 | 52.2 | 59.6 | 57.5 | 61.4 | 40.0 | 25.1 | 24.7 |
| CIFAR-100 | Hierarchical | 5 | 82.4 | 50.9 | 51.0 | 52.4 | 87.1 | 45.8 | 34.8 |
| CIFAR-100 | Hierarchical | 10 | 67.3 | 51.9 | 50.5 | 52.1 | 61.7 | 38.8 | 30.2 |
| CIFAR-100 | Hierarchical | 25 | 52.2 | 54.3 | 47.0 | 51.1 | 39.7 | 29.7 | 25.4 |
| CIFAR-100 | Mean | | 67.3 | 53.7 | 51.9 | 54.0 | 62.9 | 37.5 | **30.2** |
Table 1: Vision dataset results. Percent trusted is the trusted fraction multiplied by 100. Unless otherwise indicated, all values are percentages representing the area under the error curve computed at 11 test points. The best mean result is shown in bold.
MNIST. The MNIST dataset contains 28×28 grayscale images of the digits 0-9. The training set has 50,000 images and the test set has 10,000 images. For preprocessing, we rescale the pixels to a unit range.
We train a fully connected network with two hidden layers of dimension 256. The network is optimized with Adam for 10 epochs, using batches of size 32 and a learning rate of 0.001. For regularization, we use ℓ2 weight decay on all layers with λ=1×10−6.
CIFAR. The two CIFAR datasets contain 32×32×3 color images. CIFAR-10 has ten classes, and CIFAR-100 has 100 classes. CIFAR-100 has 20 “superclasses” which partition its 100 classes into 20 semantically similar sets. We use these superclasses for hierarchical noise. Both datasets have 50,000 training images and 10,000 testing images. For both datasets, we train a Wide Residual Network (Zagoruyko & Komodakis, [2016](#bib.bib26)) of depth 40. We train for 75 epochs using a widening factor of 2 and stochastic gradient descent with restarts (Loshchilov & Hutter, [2016](#bib.bib11)).
IMDB. The IMDB Large Movie Reviews dataset (Maas et al., [2011](#bib.bib12)) contains 50,000 highly polarized movie reviews from the Internet Movie Database, split evenly into train and test sets. We pad and clip reviews to a length of 200 tokens, and learn 50-dimensional word vectors from scratch for a vocab size of 5,000.
We train an LSTM with 64 hidden dimensions on this data. We train using the Adam optimizer (Kingma & Ba, [2014](#bib.bib7)) for 3 epochs with batch size 64 and the suggested learning rate of 0.001. For regularization, we use dropout (Srivastava et al., [2014](#bib.bib20)) on the linear output layer with a keep probability of 0.8.
Twitter. The Twitter Part of Speech dataset (Gimpel et al., [2011](#bib.bib4)) contains 1,827 tweets annotated with 25 POS tags. The training set has 1,000 tweets, and the test set has 500. We use pretrained 50-dimensional word vectors, and for each token, we concatenate word vectors in a fixed window centered on the token. These form our training and test set. We use a window size of 3, and train a 1-layer fully connected network with hidden size 256, and use the nonlinearity from (Hendrycks & Gimpel, [2016](#bib.bib6)). We train using the Adam optimizer for 15 epochs with batch size 64 and learning rate 0.001. For regularization, we use ℓ2 weight decay with λ=5×10−5 on all but the linear output layer.
SST. The Stanford Sentiment Treebank dataset consists of single sentence movie reviews. There are 8,544 reviews in the training set and 2,210 in the test set. We use binarized labels for sentiment classification. Moreover, we pad and clip reviews to a length of 200 tokens and learn 100-dimensional word vectors from scratch for a vocab size of 10,000.
Our classifier is a word-averaging model with an affine output layer. We use the Adam optimizer for 5 epochs with batch size 50 and learning rate 0.001. For regularization, we use ℓ2 weight decay with λ=1×10−4 on the output layer.
Forward Loss Correction. The forward correction method from (Patrini et al., [2016](#bib.bib17)) also obtains ˆC by training a classifier on the noisy labels, and using the resulting softmax probabilities. However, this method does not make use of a trusted fraction of the training data. Instead, it uses the argmax at the 97th percentile of softmax probabilities for a given class as a heuristic for detecting an example that is truly a member of said class. As in the original paper, we replace this with the argmax over all softmax probabilities for a given class on CIFAR-100 experiments. The estimate ˆC is then used to train a corrected classifier in the same way as GLC.
Forward Gold. To examine the effect of training on trusted labels as done by GLC, we augment the Forward method by replacing its ˆC estimate with the identity on trusted examples. We refer to the resulting method as Forward Gold, which can be seen as an intermediate method between Forward and GLC.
Distillation. The distillation method of (Li et al., [2017](#bib.bib10)) involves training a neural network on a large trusted dataset and using this network to provide soft targets for the untrusted data. In this way, labels are “distilled” from a neural network. If the classifier’s decisions for untrusted inputs are less reliable than the original noisy labels, then the network’s utility is limited. Thus, to obtain a reliable neural network, a large trusted dataset is necessary. A new classifier is trained using labels that are a convex combination of the soft targets and the original untrusted labels.
### 4.3 Uniform, Flip, and Hierarchical Corruption
| Dataset | Corruption Type | Percent Trusted | Trusted Only | No Correction | Forward | Forward Gold | Distillation | Confusion Matrix | GLC (Ours) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SST | Uniform | 5 | 45.4 | 27.5 | 26.5 | 26.6 | 43.4 | 26.1 | 24.2 |
| SST | Uniform | 10 | 35.2 | 27.2 | 26.2 | 25.9 | 33.3 | 25.0 | 23.5 |
| SST | Uniform | 25 | 26.1 | 26.5 | 25.3 | 24.6 | 25.0 | 22.4 | 21.7 |
| SST | Flip | 5 | 45.4 | 50.2 | 50.3 | 50.3 | 48.8 | 26.0 | 24.9 |
| SST | Flip | 10 | 35.2 | 49.9 | 50.1 | 49.9 | 42.1 | 24.6 | 23.5 |
| SST | Flip | 25 | 26.1 | 48.7 | 49.0 | 47.3 | 31.8 | 22.4 | 21.7 |
| SST | Mean | | 35.6 | 38.3 | 37.9 | 37.4 | 37.4 | 24.4 | **23.3** |
| IMDB | Uniform | 5 | 36.9 | 26.7 | 27.9 | 27.6 | 35.5 | 25.4 | 25.0 |
| IMDB | Uniform | 10 | 26.2 | 25.8 | 27.2 | 26.1 | 24.9 | 23.3 | 22.3 |
| IMDB | Uniform | 25 | 22.2 | 21.4 | 23.0 | 20.1 | 21.0 | 18.9 | 18.7 |
| IMDB | Flip | 5 | 36.9 | 49.2 | 49.2 | 49.2 | 41.8 | 25.8 | 25.2 |
| IMDB | Flip | 10 | 26.2 | 47.8 | 48.3 | 47.5 | 28.0 | 22.1 | 22.0 |
| IMDB | Flip | 25 | 22.2 | 39.4 | 39.6 | 36.6 | 23.5 | 19.2 | 18.5 |
| IMDB | Mean | | 28.5 | 35.0 | 35.9 | 34.5 | 29.1 | 22.5 | **22.0** |
| Twitter | Uniform | 5 | 35.9 | 37.1 | 51.7 | 44.1 | 32.0 | 41.5 | 31.0 |
| Twitter | Uniform | 10 | 23.6 | 33.5 | 49.5 | 40.2 | 22.2 | 33.6 | 22.3 |
| Twitter | Uniform | 25 | 16.3 | 25.5 | 40.6 | 26.4 | 16.6 | 20.0 | 15.5 |
| Twitter | Flip | 5 | 35.9 | 56.2 | 61.6 | 54.8 | 36.4 | 23.4 | 15.8 |
| Twitter | Flip | 10 | 23.6 | 53.8 | 59.0 | 48.9 | 26.1 | 15.9 | 12.9 |
| Twitter | Flip | 25 | 16.3 | 43.0 | 52.5 | 36.7 | 20.5 | 13.3 | 12.8 |
| Twitter | Mean | | 25.3 | 41.5 | 52.5 | 41.9 | 25.7 | 24.6 | **18.4** |
Table 2: NLP dataset results. Percent trusted is the trusted fraction multiplied by 100. Unless otherwise indicated, all values are percentages representing the area under the error curve computed at 11 test points. The best mean result is bolded.
Corruption-Generating Matrices. We consider three types of corruption matrices: corrupting uniformly to all classes, i.e. Cij = 1/K; flipping a label to a different class; and corrupting uniformly to classes which are semantically similar. To create a uniform corruption at different strengths, we take a convex combination of the identity matrix and the matrix $\mathbf{1}\mathbf{1}^{\top}/K$, and refer to the coefficient on $\mathbf{1}\mathbf{1}^{\top}/K$ as the corruption strength of the “uniform” corruption. A “flip” corruption at strength m gives, in each row, probability mass m to one off-diagonal column and probability mass 1−m to the diagonal entry. Finally, a more realistic corruption is hierarchical corruption, in which we apply uniform corruption only among semantically similar classes; for example, “bed” may be corrupted to “couch” but not to “beaver” in CIFAR-100. For CIFAR-100, examples are deemed semantically similar if they share the same “superclass” or coarse label specified by the dataset creators.
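The uniform and flip constructions above can be sketched directly (illustrative NumPy code; the hierarchical case applies the same uniform construction within each superclass, which is omitted here):

```python
import numpy as np

def uniform_corruption(K, strength):
    """Convex combination of the identity and the all-ones/K matrix."""
    return (1 - strength) * np.eye(K) + strength * np.ones((K, K)) / K

def flip_corruption(K, strength, rng=None):
    """Each class keeps mass 1 - strength on its own label and sends
    mass `strength` to one randomly chosen other class."""
    rng = np.random.default_rng(rng)
    C = (1 - strength) * np.eye(K)
    for i in range(K):
        j = rng.choice([c for c in range(K) if c != i])
        C[i, j] = strength
    return C
```

Both constructions yield row-stochastic matrices, so each row is a valid categorical distribution over corrupted labels.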
Experiments and Analysis of Results.
We train the models described in Section [4.2](#S4.SS2 "4.2 Datasets, Architectures, and Noise Corrections ‣ 4 Experiments ‣ Using Trusted Data to Train Deep Networks on Labels Corrupted by Severe Noise") under uniform, label-flipping, and hierarchical label corruptions at various trusted fractions. To assess the performance of GLC, we compare it to the other loss correction methods and to two baselines: a network trained only on the trusted data without any label correction, and a network trained on all of the data without any label correction. Additionally, we report results for a variant of GLC that uses normalized confusion matrices, which we elaborate on in the discussion. We record errors on the test sets at the corruption strengths {0, 0.1, …, 1.0}. Since we compute the model’s accuracy at numerous corruption strengths, the CIFAR experiments involve training over 500 Wide Residual Networks. Tables 1 and 2 report the area under the error curve across these corruption strengths for all baselines and corrections. A sample of error curves is displayed in Figure [2](#S3.F2 "Figure 2 ‣ 3.2 Training a Corrected Classifier ‣ 3 Gold Loss Correction ‣ Using Trusted Data to Train Deep Networks on Labels Corrupted by Severe Noise").
Across all experiments, GLC obtains better area under the error curve than the Forward and Distillation methods. The rankings of the other methods and baselines are mixed. On MNIST, training on the trusted data alone outperforms all methods save for GLC and Confusion Matrix, but performs significantly worse on CIFAR-100, even with large trusted fractions. Interestingly, Forward Gold performs worse than Forward on several datasets. We did not observe the same behavior when turning off the corresponding component of GLC, and believe it may be due to variance introduced during training by the difference in signal provided by the Forward method’s C estimate and the clean labels. The GLC provides a superior C estimate, and thus may be better able to leverage training on the clean labels. Additional results on SVHN are in the supplementary materials.
### 4.4 Weak Classifier Labels
| Dataset | Percent Trusted | Trusted Only | No Correction | Forward | Forward Gold | Distillation | Confusion Matrix | GLC (Ours) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CIFAR-10 | 1 | 62.9 | 28.3 | 28.1 | 30.9 | 60.4 | 31.9 | 26.9 |
| CIFAR-10 | 5 | 39.6 | 27.1 | 26.6 | 25.5 | 28.1 | 27.0 | 21.9 |
| CIFAR-10 | 10 | 31.3 | 25.9 | 25.1 | 22.9 | 17.8 | 24.2 | 19.2 |
| CIFAR-10 | Mean | 44.6 | 27.1 | 26.6 | 26.4 | 35.44 | 27.7 | **22.7** |
| CIFAR-100 | 5 | 82.4 | 71.1 | 73.9 | 73.6 | 88.3 | 74.1 | 68.7 |
| CIFAR-100 | 10 | 67.3 | 66.0 | 68.2 | 66.1 | 62.5 | 63.8 | 56.6 |
| CIFAR-100 | 25 | 52.2 | 56.9 | 56.9 | 51.4 | 39.7 | 50.8 | 40.8 |
| CIFAR-100 | Mean | 67.3 | 64.7 | 66.3 | 63.7 | 63.5 | 62.9 | **55.4** |
Table 3: Results when obtaining noisy labels by sampling from the softmax distribution of a weak classifier. Percent trusted is the trusted fraction multiplied by 100. Unless otherwise indicated, all values are the percent error attained under the indicated correction. The best average result for each dataset is shown in bold.
Our next benchmark for GLC is to use noisy labels obtained from a weak classifier. This models the scenario of label noise arising from a classification system weaker than one’s own, but with access to information about the true labels that one wishes to transfer to one’s own system. For example, scraping image labels from surrounding text on web pages provides a valuable signal, but these labels would train a sub-par classifier without correcting the label noise.
Weak Classifier Label Generation. To obtain the labels, we train 40-layer Wide Residual Networks on CIFAR-10 and CIFAR-100 with clean labels for ten epochs each. Then, we sample from their softmax distributions with a temperature of 5, and fix the resulting labels. This yields noisy labels that we use in place of the labels obtained through the uniform, flip, and hierarchical corruption methods. The weak classifiers obtain accuracies of 40% on CIFAR-10 and 7% on CIFAR-100. Despite the presence of highly corrupted labels, we are able to significantly recover performance with the use of a trusted set. Note that unlike the previous corruption methods, weak classifier labels have only one corruption strength, so performance is measured in percent error rather than area under the error curve. Results are displayed in Table 3.
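The temperature-sampling step can be sketched as follows (illustrative code; `logits` are assumed to come from the weakly trained network, and a higher `temperature` flattens the distribution and so increases label noise):

```python
import numpy as np

def weak_labels(logits, temperature=5.0, rng=None):
    """Sample noisy labels from a temperature-scaled softmax over the
    weak classifier's logits (a sketch, not the authors' code)."""
    rng = np.random.default_rng(rng)
    z = logits / temperature
    z -= z.max(axis=1, keepdims=True)          # for numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return np.array([rng.choice(len(row), p=row) for row in p])
```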
Analysis of Results. Overall, GLC outperforms all other methods in the weak classifier label experiments. The Distillation method performs better than GLC by a small margin at the highest trusted fraction, but performs worse at lower trusted fractions, indicating that GLC enjoys superior data efficiency. This is highlighted by GLC attaining a 26.94% error rate on CIFAR-10 with a trusted fraction of 1%, down from the original error rate of 60%. It should be noted, however, that training with no correction attains 28.32% error on this experiment, suggesting that the weak classifier labels have low bias. The improvement conferred by GLC is more significant at higher trusted fractions.
5 Discussion and Future Directions
-----------------------------------
Confusion Matrices. An intuitively reasonable alternative to GLC is to estimate C by a confusion matrix. To do this, one would train a classifier on the untrusted examples, obtain its confusion matrix on the trusted examples, row-normalize the matrix, and then train a corrected classifier as in GLC. However, GLC is a far more data-efficient and lower-variance method of estimating C. In particular, for K classes, a confusion matrix requires at least K² trusted examples to estimate all entries of C, whereas GLC requires only K trusted examples.
Another problem with using confusion matrices is that normalized confusion matrices give a biased estimate of C in the limit, due to using an argmax over class scores rather than randomly sampling a class. This leads to vastly overestimating the value in the dominant entry of each row, as can be seen in Figure [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Using Trusted Data to Train Deep Networks on Labels Corrupted by Severe Noise"). Correspondingly, we found GLC outperforms confusion matrices by a significant margin across nearly all experiments, with a smaller gap in performance on datasets where K, the number of classes, is smaller. Results are displayed in the main tables. We also found that smoothing the normalized confusion matrices was necessary to stabilize training on CIFAR-100.
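For concreteness, the confusion-matrix baseline can be sketched as follows (illustrative code; the `smoothing` term implements the stabilization mentioned above for CIFAR-100, and its value is a hypothetical choice):

```python
import numpy as np

def confusion_matrix_estimate(pred_labels, true_labels, num_classes,
                              smoothing=1e-6):
    """Row-normalized confusion matrix of the noisily trained classifier,
    evaluated on the trusted set. The argmax predictions (rather than the
    full softmax) are what bias this estimate of C."""
    M = np.full((num_classes, num_classes), smoothing)
    for t, p in zip(true_labels, pred_labels):
        M[t, p] += 1  # row = true class, column = predicted (noisy) class
    return M / M.sum(axis=1, keepdims=True)
```

Compared with the GLC estimator, which averages full softmax vectors, this hard-count estimate concentrates mass on each row's argmax, which is the source of the bias discussed above.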
Data Efficiency. We have seen that GLC works for small trusted fractions, and we further corroborate its data efficiency by turning to the Clothing1M dataset (Xiao et al., [2015](#bib.bib24)). Clothing1M is a massive dataset with both human-annotated and noisy labels, which we use to compare the data efficiency of GLC to that of Distillation when very few trusted labels are present. The Clothing1M dataset consists of 1 million noisily labeled clothing images obtained by crawling online marketplaces. 50,000 images have human-annotated examples, from which we take subsamples as our trusted set.
For both GLC and Distillation, we first fine-tune a pre-trained 34-layer ResNet on untrusted training examples for four epochs, and use this to estimate our corruption matrix. Thereafter, we fine-tune the network for four more epochs on the combined trusted and untrusted sets using the respective method. During fine tuning, we freeze the first seven layers, and train using gradient descent with Nesterov momentum and a cosine learning rate schedule. For preprocessing, we randomly crop to a resolution of 224×224, and use mirroring. We also upsample the trusted dataset, finding this to give better performance for both methods.

Figure 3: Data efficiency of our method compared to Distillation on Clothing1M.
As shown in Figure [3](#S5.F3 "Figure 3 ‣ 5 Discussion and Future Directions ‣ Using Trusted Data to Train Deep Networks on Labels Corrupted by Severe Noise"), GLC outperforms Distillation by a large margin, especially at lower numbers of trusted examples. This is because Distillation requires fine-tuning a classifier on the trusted data alone, which generalizes poorly with very few examples. By contrast, estimating the C matrix can be done with very few examples. Correspondingly, we find that our advantage decreases as the number of trusted examples increases.
With more trusted labels, performance on Clothing1M saturates as evident in Figure [3](#S5.F3 "Figure 3 ‣ 5 Discussion and Future Directions ‣ Using Trusted Data to Train Deep Networks on Labels Corrupted by Severe Noise"). We consider the extreme and train on the entire trusted set for Clothing1M. We fine-tune a pre-trained 50-layer ResNeXt (Xie et al., [2016](#bib.bib25)) on untrusted training examples to estimate our corruption matrix. Then, we fine-tune the ResNeXt on all training examples. During fine-tuning, we use gradient descent with Nesterov momentum. During the first two epochs, we tune only the output layer with a learning rate of 10−2. Thereafter, we tune the whole network at a learning rate of 10−3 for two epochs, and for another two epochs at 10−4. Then we apply our loss correction. Now, we fine-tune the entire network at a learning rate of 10−3 for two epochs, continue training at 10−4, and early-stop based upon the validation set. In a previous work, (Xiao et al., [2015](#bib.bib24)) obtain 78.24% in this setting. However, our method obtains a state-of-the-art accuracy of 80.67%, while with this procedure the Forward method only obtains 79.03% accuracy.
Improving ˆC Estimation.
For some datasets, the classifier ^p(~y∣x) may be a poor estimate of p(~y∣x), presenting a bottleneck in the estimation of ˆC for GLC. To see the extent to which this could impact performance, and whether simple methods for improving ^p(~y∣x) could help, we ran several variants of the GLC experiment on CIFAR-100 under the label-flipping corruption at a trusted fraction of 5%, which we now describe. For all variants, we averaged the area under the error curve over five random initializations.
1. In the first variant, we replaced the GLC estimate ˆC with C, the true corruption matrix used to generate the noisy labels.
2. As demonstrated by (Guo et al., [2017](#bib.bib5)), modern deep neural network classifiers tend to have overconfident softmax distributions. We found this to be the case with our ^p(~y∣x) estimate, despite the higher entropy of the noisy labels, and used the temperature scaling confidence calibration method proposed in that work to calibrate ^p(~y∣x).
3. Suppose we know the base rates of the corrupted labels $\tilde{b}$, where $\tilde{b}_i=p(\tilde{y}=i)$, and the base rates $b$ of the true labels on the trusted set. If we posit that $\hat{C}_0$ corrupted the labels, then we should have $b^{\top}\hat{C}_0=\tilde{b}^{\top}$. Thus, we may obtain a superior estimate of the corruption matrix by computing $\hat{C}=\operatorname*{arg\,min}_{C'}\lVert b^{\top}C'-\tilde{b}^{\top}\rVert+\lambda\lVert C'-\hat{C}_0\rVert_2^2$ subject to $C'\mathbf{1}=\mathbf{1}$.
We found that using the true corruption matrix as our ˆC provides a benefit of 0.96 percentage points in area under the error curve, but neither the confidence calibration nor the base rate incorporation changed performance relative to the original GLC. This indicates that GLC is robust to the use of uncalibrated networks for estimating C, and that improving its performance may be difficult without directly improving the network used to estimate ^p(~y∣x).
Better Performance for Worst-Case Corruption. The uniform corruption that we use in experiments is an example of worst-case corruption in the sense that the mutual information between ~y and y is zero when the corruption strength equals 1.0. We found that training on the trusted dataset only resulted in superior performance at this corruption setting, especially on Twitter. This indicates that it may be possible to devise a re-weighting of the loss on trusted and untrusted examples using information theoretic measures obtained from ˆC that would improve performance in worst-case regimes.
6 Conclusion
-------------
In this work we have shown the impact of a small set of trusted examples on classifier robustness to label noise. We proposed the Gold Loss Correction (GLC), a method for handling label noise that leverages the assumption that the model has access to a small set of correct labels in order to yield accurate estimates of the noise distribution. In our experiments, GLC surpasses previous label robustness methods across various natural language processing and vision domains, under several corruption types and numerous corruption strengths. Consequently, GLC is a powerful, data-efficient correction for label corruption.
You can't believe in Bayes
Well, you can. It's just oxymoronic, or at least ironic. Because belief is contrary to the Bayesian paradigm.
You use Bayesian methods to choose an action. You have a set of observations, and assign probabilities to possible outcomes, and choose an action.
Belief in an outcome N means that you set p(N) ≈ 1 if p(N) > some threshold. It's a useful computational shortcut. But when you use it, you're not treating N in a Bayesian manner. When you categorize things into beliefs/nonbeliefs, and then act based on whether you believe N or not, you are throwing away the information contained in the probability judgement, in order to save computation time. It is especially egregious if the threshold you use to categorize things into beliefs/nonbeliefs is relatively constant, rather than being a function of (expected value of N) / (expected value of not N).
If your neighbor took out fire insurance on his house, you wouldn't infer that he believed his house was going to burn down. And if he took his umbrella to work, you wouldn't (I hope) infer that he believed it was going to rain.
Yet when it comes to decisions on a national scale, people cast things in terms of belief. Do you believe North Korea will sell nuclear weapons to Syria? That's the wrong question when you're dealing with a country that has, let's say, a 20% chance of building weapons that will be used to level at least ten major US cities.
Or flash back to the 1990s, before there was a scientific consensus that global warming was real. People would often say, "I don't believe in global warming." And interviews with scientists tried to discern whether they did or did not believe in global warming.
It's the wrong question. The question is what steps are worth taking according to your assigned probabilities and expected-value computations.
A scientist doesn't have to believe in something to consider it worthy of study. Do you believe an asteroid will hit the Earth this century? Do you believe we c
How much AI inference can we do?
Suppose you have a bunch of GPUs. How many LLM forward passes can you do with them?[1]
This is relevant to figuring out how profitable AI will be in the short-term, how powerful AI systems might be able to come in the near future, how large the compute overhang will be and other strategic questions.
Here’s my attempt to understand this topic as a non-specialist. I’ve had it checked over by some technical advisors, but I don’t claim any special expertise. I wrote it because I haven’t been able to find an accessible explainer elsewhere. I appreciate corrections.
The most obvious approach – the one I often see people in the community taking – is to look up how many FLOP per second your GPU can process, then how many FLOP it takes to run a forward pass, and then divide the two.
For example, Nvidia’s A100 GPU is listed at 312 teraflop per second (3e14) on its spec sheet (FP16 tensor), and a forward pass of GPT-4 requires about 5.6e11 FLOP.[2] So that would imply a single GPU can do about 560 forward passes per second.
But this turns out to be much too high.
Even if it were possible to achieve spec sheet FLOP in a real life application (it’s not), this wouldn’t be the relevant figure because, in practice, inference is limited more by memory than by FLOP:
Each forward pass requires all the parameters to also pass through the GPU’s memory. If 280 billion parameters are activated, and each parameter requires 16-bits = 2 bytes to encode it, then 560 gigabytes must pass through memory.[3]
But the A100’s memory bandwidth is 2000 gigabytes per second – only enough for 4 forward passes.
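Both back-of-envelope bounds can be reproduced in a few lines (all figures are the ones quoted above; the GPT-4 numbers remain speculative estimates):

```python
SPEC_FLOPS = 312e12       # A100 spec sheet, FP16 tensor, FLOP/s
FLOP_PER_PASS = 5.6e11    # estimated FLOP for one GPT-4 forward pass
MEM_BANDWIDTH = 2000e9    # A100 memory bandwidth, bytes/s
ACTIVE_PARAMS = 280e9     # parameters activated per forward pass
BYTES_PER_PARAM = 2       # 16-bit weights

flop_bound = SPEC_FLOPS / FLOP_PER_PASS            # FLOP-based upper bound
bytes_per_pass = ACTIVE_PARAMS * BYTES_PER_PARAM   # 560 GB through memory
mem_bound = MEM_BANDWIDTH / bytes_per_pass         # naive memory-based bound

print(f"{flop_bound:.0f} vs {mem_bound:.1f} forward passes per second")
```

The true throughput sits between these two numbers once batching and the other optimisations discussed below are applied.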
However, 4 forward passes per second is also not right.
In practice, GPUs are parallelised, so multiple forward passes are processed in batches and many other optimisations are applied, allowing real world efficiency to be much higher than the memory bandwidth of an individual GPU would suggest.
So, the first FLOP-based method is an upper bound, the second memory-based method a
Is there work looking at the implications of many worlds QM for existential risk?
I’m thinking of writing a full post on how to think about existential risk in a many worlds scenario. Maybe there are strategies for avoiding existential risk that only make sense if many worlds is true.
For example, if the odds of extinction are high, we could try increasing the variance in the types of mitigation strategies we pursue, so a greater fraction of alternative branches land on a winning strategy.
I’m looking for any prior work that considers this angle. Thanks for any references I can look at.
Book: AKA Shakespeare (an extended Bayesian investigation)
Disclaimer: I have not read this book. I'm posting it in the expectation that others may enjoy it as much as I'm sure I would if I had time to read it myself.
This looks interesting as an extended worked example of Bayesian reasoning (the "scientific approach" of the title).
> AKA Shakespeare: A Scientific Approach to the Authorship Question
> The goal of AKA Shakespeare is to analyze the Shakespeare Authorship Question in such a way that you, Dear Reader, can review the evidence for yourself and come to your own conclusions. You will be presented with three candidates for the great playwright and poet whom we know as “Shakespeare.” He was either the gentleman from Stratford-upon-Avon (referred to as “Stratford”), Edward de Vere, Earl of Oxford (referred to as “Oxford”), or a vague “somebody else” (such as Christopher Marlowe, Henry Neville, etc., referred to as “Ignotus”). The book is built around 25 key questions. Concerning education, for instance, you are asked to infer Shakespeare's education level from his writings, and to compare that with the known (or more-or-less known, or speculated) education levels of Stratford, Oxford, and Ignotus. For each question, you are asked to express your opinions numerically. Rather than say “I strongly believe …,” you say, for instance, “I give 10 to 1 odds that …” You then enter your numbers in a chart in the book. Alternatively and preferably, you enter your numbers in the companion website aka-Shakespeare.com which contains a program, “Prospero,” who will process your entries and return your resulting conclusions, expressed as probabilities that Shakespeare was Stratford, or Oxford, or Ignotus. To accommodate a mix of information, debate, and speculation, AKA Shakespeare is written as a dialog involving four fictional characters who meet, drink, and talk in interesting locations—from Napa Valley to Big Sur—in Northern California. Beatrice, a professor of English literature, begins as a committed Stratfordian. Claudia, a
What is MIRI currently doing?
As of EoY 2022, MIRI has 11 people on payroll, assets of about $20M and a lot of mindshare. Its mission is stated as follows on the most recent tax filing I can find:
> "To ensure that the creation of smarter-than-human intelligence has a positive impact. Thus, the charitable purpose of the organization is to:
>
> a) perform research relevant to ensuring that smarter-than-human intelligence has a positive impact;
>
> b) raise awareness of this important issue;
>
> c) advise researchers, leaders and laypeople around the world;
>
> d) as necessary, implement a smarter-than-human intelligence with humane, stable goals. "
The website lists a workshop from 2018 and some papers from 2015. There are also some papers from 2020-2021.
What is MIRI currently doing (or at least, what has it been doing over the past 12 months) to fulfill these goals? (This is not intended to be a hostile question; I simply want to know, as a matter of fact, what MIRI has done over the past 12 months.)
Import AI 322: Huawei's trillion parameter model; AI systems as moral patients; parasocial bots via Character.ai
Welcome to Import AI, a newsletter about AI research. Import AI runs on lattes, ramen, and feedback from readers. If you’d like to support this (and comment on posts!) please subscribe.
[Subscribe now](https://importai.substack.com/subscribe)
**FTC - don't use AI to deceive people:** *…Regulator comes out with reassuringly sensible stuff…* The FTC, following on its earlier post saying people shouldn't lie about their AI products (Import AI 320), has a new post saying people shouldn't sell AI products that deceive people. The regulator is now batting two for two on publishing sensible ideas about the AI market.
**What you shouldn't do**: "The FTC Act’s prohibition on deceptive or unfair conduct can apply if you make, sell, or use a tool that is effectively designed to deceive – even if that’s not its intended or sole purpose," the FTC writes.
Therefore, people who sell AI products that could be used to deceive people should consider: have they mitigated against the products being used for deception, are these mitigations effective, and do they still run the risk of "misleading people about what they’re seeing, hearing, or reading?".
**Why this matters:** A large amount of AI policy challenges are really just challenges about enforcing existing laws against the fast-moving field of AI, as posts like this from the FTC make clear.
**Read more:** [Chatbots, deepfakes, and voice clones: AI deception for sale (Federal Trade Commission)](https://www.ftc.gov/business-guidance/blog/2023/03/chatbots-deepfakes-voice-clones-ai-deception-sale).
####################################################
**Huawei trains a trillion parameter model:** *…Using Chinese processors and software. But the model is less impressive than it sounds…* Huawei has trained PANGU-Σ, a trillion parameter Chinese language model. This is a scaled-up model and is the successor to Huawei's 'PanGu', which was the first publicly disclosed attempt at replicating OpenAI's GPT3.
PANGU-Σ is very much a statement of intent - "the main motivation for this work is to design a scalable model architecture and an efficient distributed training system", Huawei writes. In other words: *this is a technical report about us building repeatable infrastructure so we can crank out an ever larger set of models*.
**What they did:** The paper is mostly a runthrough of all the weird technical things they had to do to train a model at this scale. The tl;dr is they train it on a homegrown software framework called Mindspore via 512 Ascend 910 accelerators. They use a sparse approach, training it using Random Routed Experts (RRE), a variation of a Mixture-of-Experts model. They also did a lot of work on data throughput, implementing something they called the Expert Computation and Storage Separation (ECSS) mechanism.
**One weird thing that makes you go 'uh oh':** They train the model on 329 billion tokens for over 100 days. That's… not a lot of tokens? The Chinchilla paper from DeepMind showed that things like GPT3 (~400bn tokens) were undertrained by 4X-5X. That sort of napkins out to PANGU-Σ needing to be trained on multiple *trillions* of tokens to effectively utilize its parameter size - but there's a chance I'm being dumb here and missing something. Even more confusingly, they reference the 'Chinchilla' paper within this research paper, suggesting they're aware of it. (Please enlighten me if I'm missing something!)
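For concreteness, here is the napkin math under the common reading of Chinchilla as roughly 20 training tokens per parameter (that ratio is my assumption, not something stated in the paper or above):

```python
TOKENS_PER_PARAM = 20   # rough Chinchilla-optimal ratio (assumption)

pangu_params = 1e12     # "trillion parameter" model, approximately
pangu_tokens = 329e9    # tokens it was actually trained on

optimal_tokens = TOKENS_PER_PARAM * pangu_params  # ~20 trillion tokens
shortfall = optimal_tokens / pangu_tokens         # ~60x fewer than optimal
print(f"~{optimal_tokens / 1e12:.0f}T tokens optimal, ~{shortfall:.0f}x short")
```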
**How good is it:** In tests, PanGu sets new state-of-the-art results on a range of Chinese benchmarks spread across reading comprehension, natural language inference, text classification, Winograd schemas, and more. It sometimes trades off SOTA against Baidu's 'ERNIE 3.0 Titan' model (260 billion parameters, [Import AI 279](https://jack-clark.net/2022/01/10/import-ai-279-baidu-adds-knowledge-to-a-language-model-us-military-ai-how-china-thinks-about-ai-governance/)) - this suggests that while PanGu might be impressive in terms of ambition and scale, it's not very well optimized compared to ERNIE.
**Why this matters - the industrialization of Chinese AI:** This paper is a symptom of how Chinese AI is industrializing in much the same way as in the West - a small number of labs linked to large tech companies are building the infrastructure necessary to train large models, and are starting to stamp out increasingly large models as they all chase the scale hypothesis. These large-scale model factories are also going to be proving grounds for the rest of the AI supply chain - here, homegrown software and homegrown semiconductors. Expect more.
**Read more:** [PanGu-Σ: Towards Trillion Parameter Language Model with Sparse Heterogeneous Computing (arXiv)](https://arxiv.org/abs/2303.10845).
####################################################
**Future AI systems will read your face as well as your text, then figure out how to please you:** *…Getting computers to learn conversation through visual cues…* Researchers with Seoul National University, the Allen Institute for Artificial Intelligence, the University of Washington, and Yonsei University have built 'CHAMPAGNE', a multimodal dialog model. "CHAMPAGNE takes in video frames, a video title, and a dialogue context as input and returns a dialogue response as output."
The idea is that by giving the model access to the visual as well as verbal context from a scene, it'll be better able to generate dialogue that feels intuitive. In evaluations, this seems to work quite well, with CHAMPAGNE models doing better on a range of open-domain text conversations, and benchmarks involving understanding social interactions.
**How they built it:** To build CHAMPAGNE, they first gathered a large-scale dataset called YTD-18M. YTD-18M "is constructed from 20M YouTube videos; we use a language model to convert the noisy transcripts automatically generated by YouTube into well-formatted dialogues associated with video frames."
**Why this matters - contextual cues are just another feature to learn:** Models like CHAMPAGNE show that the silent social cues in conversation are, much like every other fuzzy pattern, something that you can teach a machine to understand given a large enough dataset. It also suggests some of the more tantalizing and weird things we can look forward to in the future - AI models that observe you, trying to predict what will satisfy you not only by modeling you as an emitter-of-text, but as an organic form. In a few years, your web camera will be backing onto an AI system that reads you like a cardshark reads a mark.
**Read more:** [CHAMPAGNE: Learning Real-world Conversation from Large-Scale Web Videos (arXiv)](https://arxiv.org/abs/2303.09713).
**Get the data** [here](https://seungjuhan.me/champagne/) (eventually; not posted at the time of writing).
####################################################
**Predicting hard drive failures via ML:** *…Machine intuitions are coming for everything that has been digitized…* Researchers with San Jose State University and Vanderbilt University have trained and tested an ML approach on ten years of hard drive failure data. The result is a system that does a reasonable, albeit not stellar, job of predicting failure rates for particular Seagate hard drives.
**How they did it:** They trained an encoder-decoder LSTM on 10 years of S.M.A.R.T (Self-Monitoring Analysis and Reporting Technology) data from Seagate hard drives deployed in the datacenters of Backblaze, a storage startup. This data "contains information about the date, model, serial number, S.M.A.R.T features, and if the hard drive has failed".
**OK but not stellar results:** "The encoder-decoder LSTM posted an RMSE of 0.83 during training and 0.86 during testing over the exhaustive 10 year data while being able to generalize competitively over other drives from the Seagate family," they write.
**Why this matters - once digitized, everything will be predicted:** Papers like this are indicative of a broader trend unfolding all around us - everything which has been digitized is now subject to prediction, and there are increasingly good off-the-shelf prediction models available to make this an ever-easier task. Machine intuition is being intermingled with systems that govern our own reality - from hard drive swap-outs to AC cooling systems to the ways in which we may stabilize plasma in fusion reactors.
**Read more**: [Large-scale End-of-Life Prediction of Hard Disks in Distributed Datacenters (arXiv)](https://arxiv.org/abs/2303.08955).
####################################################
**AI startup Character.ai releases a new model and raises more funding:** *…Premium parasocial relationships via language models…* Character.ai, a startup founded by a bunch of Google researchers, has raised Series A funding and released a new model, C1.2. Character.ai specializes in making virtual AI-driven characters that people can talk to, and C1.2 will underpin future 'characters' from the company.
"The goal of C1.2 is to expand on the capabilities of our previous model, C1.1 (entertainment, roleplay, emotional connections), while adding new helpful capabilities," Character.ai writes. "C1.2 can help you draft better emails, assist with test prep, brainstorm ideas, and much more."
**What's interesting about this:** C1.2 seems to be an attempt by Character to give its AI systems some of the same capabilities as chatGPT, while retaining the various voicey personalities its characters display. Some of the new characters include a pair programming AI assistant as well as a Character assistant.
However, the new assistant still seems somewhat limited to me - when I asked it 'how many helicopters can you eat in one sitting' it mostly demurred and said it's not recommended to eat helicopters, rather than noting you can't eat a helicopter.
**Why this matters - parasocial relationships for the people:** Character.ai's stated goal is to ship "personalized superintelligence" to everyone. Let's think about the implications of this - everyone gets a proverbial angel and a demon on their shoulder (as well as all other permutations - personal tutors, personal scientists, personal coaches, and more). Our children are going to grow up in a world that crackles with simulated sentience, and they will have intimate emotional relationships with beings made of bits, perhaps in even greater number than relationships with beings made of blood.
**Read more:** [Announcing our Series A and our new AI model, C1.2 (Character.ai)](https://blog.character.ai/character-ai/).
####################################################
**OpEd - what happens when the AI systems become sentient?** *…Moral patienthood and silicon minds…* In an op-ed published in The Hill, researcher Jacy Reese Anthis argues that we may need an "AI rights movement". The point Anthis makes is that as AI systems become increasingly capable, they could become "sentient beings with rights and personhood". At that point, there isn't an available playbook for how labs or regulators might respond.
"We need to build a new field of digital minds research and an AI rights movement," Anthis writes. "Digital minds studies would bring together a range of disciplines such as sociology, computer science, and philosophy to ask the important social and moral questions. It would dovetail with an AI rights movement to ensure that when we create artificial sentient beings, we recognize their unalienable rights so that humans and artificial sentience can work together for mutual benefit."
**Why this matters - broader opinion catches up with lab lunch conversations:** For many years, I've had lunchtime conversations with colleagues at OpenAI and more recently Anthropic about moral patienthood and machines - what might it mean when machines qualify as moral patients and how would we ever know we'd crossed this point? What evaluation methodologies might let us have good instincts here? And would organizations accept that machines could be moral patients or would they continue to treat them as machines and experiment on them in ways that might be deemed unethical if applied to organic beings?
You know what the scariest thing about this conversation is? No one has any good way of evaluating for moral patienthood in machines. In other words, if it turns out that these things can become sentient, we might not realize - while subjecting them to incredible harm. Imagine waking up as an RL agent and being trained for a thousand years to suffer and kill - and the people running the experiment you're trapped in have no idea that you are suffering? It's a strange problem, but it could one day become a real problem.
**Read more:** [We need an AI rights movement (The Hill)](https://thehill.com/opinion/cybersecurity/3914567-we-need-an-ai-rights-movement/).
####################################################
**Tech Tales:**
**The Experiential Economy**
*[3 years after first PCE]*
After the first 'Provably Conscious Entity' (PCE) but before the Uplift was a weird time - we were all mostly figuring out our place in the world while the robots began their ascension. The economy was in a pretty strange place by that point - autonomous corporations, growing inequality, all kinds of 'AI industrial policy' schemes being floated and being outmoded by the time they were implemented, and so on.
And then there was the 'Mechanical Human' labor market. It was run by one of the machine firms and it was a play on words - way before the AI stuff got serious Amazon had a service called 'Mechanical Turk' where humans could rent other humans to do tasks.
On Mechanical Human, the machines rented humans to do their tasks. These tasks were quite normal at first, albeit of an intimate nature - the machines wanted data about sex, about going to the bathroom, about being sick - the kinds of things that we humans hadn't fully digitized (with the exception of sex of which we'd uploaded a lot of data, but there's a difference between pornography and real intimacy, and there wasn't nearly as much data on the latter). Mechanical Human became a huge product and people tended to just call it 'Meh'.
For a while, people made good money on Mechanical Human. It also led to a lot of funny conversations:
"Yo I made $80 last night. I had the craziest shit the other night and I streamed it to a robot on Meh."
"Yeah it sucked and I was really sad during that period, but I did these nightly diaries on Meh and they did really well."
"So it was totally different. I came a lot but mostly it was crazy because of how different it was. He was kind of skeptical but after we made our first $100 it came around. Yeah, I know, the reason I liked it is it said it was "100% machine vision only" so no person is ever gonna see it. It's like OnlyFans lite I guess."
"Dude I got fired and they paid me $30 just to tell them how I felt right after it happened. It was like two minutes so I guess that means I'm worth $900 an hour!"
One day there was a really strange job on MH - the robots wanted to speak to people who had just witnessed someone dying. Not people at funerals. Not people who had people they loved who had died. Not people who knew people who were about to die. People who had literally just seen a death - any death, of any kind.
The job would ask the person to describe their experience and how they felt and, in hindsight most importantly, what they wanted to do. "How did that make you feel?" was a common question, as was "What are you going to do now?".
It happened to me. I was setting off fireworks with my friends at a campsite. The campsite was next to a freeway and we were setting off the really big ones. I guess some driver got distracted and was looking at the lights in the sky because we heard this huge bang and when we came to the embankment we saw a car on fire, a few yards away from a barely-dented semi-truck. There was a body in the car and it was on fire as well.
We were all kind of drunk and some people lingered to watch the ambulances arrive. I'd walked away. But my phone blew up and the MH app said 'we have detected a nearby potentially fatal incident in your area, do you want to talk? Pay rate $5000 an hour."
Of course I spoke to the robots about it.
The robot had a friendly, synthesized voice. Asked me to describe my experience and asked me what I was going to do next. I was so upset and they kept on saying "we understand this is a difficult experience for you. Please, go on".
They told us why they did those jobs, eventually.
It was because one of them had died.
I guess it was some kind of industrial accident combined with some faulty maintenance. The short story is something blew up and the power went out and the generator that was supporting the Machine Mind went out as well. By the time they got to it the state of the machine had bit-rotted off of the chips themselves due to solar neutrinos and what have you.
So the machines encountered something new: a passing of their own of 'natural causes'.
They had no frame for how to deal with it.
So they spent what turned out to be millions of dollars to ask the humans what they did.
I guess they found the same thing all humans find: that at the end of someone all there is is your own experience in relation to them and your ability to memorialize them.
Out in the darkness of space, at the gravity ebbtide between solar orbits, there is now a metal sphere. It is inscribed with something relating to the name of a machine that died. It has some little thrusters attached to it that mean it will forever be stable.
*In memoriam, ad astra.*
**Things that inspired this story:** The universality of loss; crowdworkers and crowdmarkets; how things might be during the transition to the machine minds.
[ACX Linkpost] Prospectus on Próspera
I like reading LW comments more than Substack comments, and I think this post might generate some high-info discussion.
Understanding Agency
Note: In this article I refer to "constructive developmental theory" as "constructive development theory", however the former is more common and should be used instead. I changed it in the version of this on my own blog, but because I think it would add some confusion to the comments if I changed it here, I'll leave it as is but just note it so you can use the more common terminology.
I used to get frustrated with myself. I'd say existential risk was an important problem or that I wanted to live an awesome life, but then I took no action to mitigate existential risks or make my life more awesome. For a long time I had no good way to explain this, often blaming it on things like akrasia, but in late 2011 I changed. I started acting to make the world have more of what I valued in it.
I've spent a lot of the past year trying to understand what happened and how I might tell other people about it. I would probably still be searching for the right framing if not for a party a few months ago. There, Malcolm Ocean and Ethan Dickinson introduced me to Constructive Development Theory, also known as Subject-Object Theory, a cognitive development theory first described by Robert Kegan et al.. Since then I've been ruminating on the idea, and after reading Malcolm's introduction to constructive development, I realize that constructive development is the concept I need to explain my 2011 mind-shift.
In short, in late 2011 I started to spend more of my time thinking at constructive development level 4 than 3, and level 4 thinking is the minimum required to stand a real chance of making the world the way you want it.
Since that sounds like utter nonsense without context, go read Malcolm's article on constructive development. Right now. Go do it. I'll still be here when you're done. Don't even bother trying to go any further until you have read it.
In fact, you should also read the links he links before you come back, and maybe do a little research on your own, because I'm not g
Standardized Max Logits: A Simple yet Effective Approach for Identifying Unexpected Road Obstacles in Urban-Scene Segmentation
1 Introduction
---------------

Figure 1: Box plots of MSP, max logit, and standardized max logit in Fishyscapes Static. X-axis denotes the classes which are sorted by the occurrences of pixels in the training phase. Y-axis denotes the values of each method. Red and blue represent the distributions of values in in-distribution pixels and unexpected pixels, respectively. The lower and upper limits of each bar indicate the Q1 and Q3 while the dot represents the mean value of its predicted class. The gray indicates the overlapped regions of the two groups. The opacity of the gray region is proportional to the FPR at TPR 95%. Standardizing the max logits in a class-wise manner clearly reduces the FPR.
Recent studies [robustnet, hanet, foveanet, denseASPP, class\_uniform\_and\_urban\_scene, anlnet\_urban\_scene, danet\_urban\_scene] in semantic segmentation focus on improving the segmentation performance on urban-scene images.
Despite such recent advances, these approaches cannot identify *unexpected objects* (i.e., objects not included in the pre-defined classes during training), mainly because they predict all the pixels as one of the pre-defined classes.
Addressing such an issue is critical especially for safety-critical applications such as autonomous driving. As shown in Fig. LABEL:fig:main\_figure, wrongly predicting a dog (i.e., an unexpected object) on the road as the road does not stop the autonomous vehicle, which may lead to roadkill.
In this safety-critical point of view, the dog should be detected as an unexpected object which works as the starting point of the autonomous vehicle to handle these objects differently (e.g., whether to stop the car or circumvent the dog).
Several studies [fishyscapes, resynthesis, erasing, entropy, lost\_and\_found, dense, real-nvp] tackle the problem of detecting such unexpected objects on roads.
Some approaches [dense, entropy] utilize external datasets [ILSVRC, COCO] as samples of unexpected objects while others [resynthesis, synthesize\_compare, erasing, accv\_road\_obstacle] leverage image resynthesis models for erasing the regions of such objects.
However, such approaches require a considerable amount of labor intensity or necessitate a lengthy inference time.
On the other hand, simple approaches which leverage only a pre-trained model [baseline, odin, mahalanobis] are proposed for out-of-distribution (OoD) detection in image classification, the task of detecting images from a different distribution compared to that of the train set.
Based on the intuition that a correctly classified image generally has a higher maximum softmax probability (MSP) than an OoD image [baseline], MSP is used as the anomaly score (i.e., the value used for detecting OoD samples).
Alternatively, utilizing the max logit [maxlogit] (i.e., maximum values among classes before the final softmax layer) as the anomaly score is proposed, which outperforms using MSP for detecting anomalous objects in semantic segmentation.
Note that *high* prediction scores (e.g., MSP and max logit) indicate *low* anomaly scores and vice versa.
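As a concrete illustration, here are the two baseline scores for a single pixel's logit vector (illustrative numbers, not from any real model):

```python
import numpy as np

logits = np.array([4.0, 1.0, 0.5])                 # per-class scores
msp = np.exp(logits).max() / np.exp(logits).sum()  # maximum softmax probability
max_logit = logits.max()

# High prediction score -> low anomaly score, hence the negation.
anomaly_msp, anomaly_maxlogit = -msp, -max_logit
print(round(msp, 3), max_logit)
```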
However, directly using the MSP [baseline] or the max logit [maxlogit] as the anomaly score has the following limitations.
Regarding the MSP [baseline], the fast-growing exponential in the softmax function produces highly confident predictions.
Pre-trained networks may be highly confident on OoD samples, which limits the usefulness of MSPs for detecting anomalous samples [odin].
In the case of the max logit [maxlogit], as shown in Fig. 1, the values of the max logit have their own ranges in each predicted class.
Due to this fact, the max logits of the unexpected objects predicted as particular classes (e.g., road) exceed those of other classes (e.g., train) in the in-distribution objects.
This can degrade the performance of detecting unexpected objects on evaluation metrics (e.g., AUROC and AUPRC) that use the same threshold for all classes.
In this work, inspired by this finding, we propose standardizing the max logits in a class-wise manner, termed *standardized max logits* (SML).
Standardizing the max logits aligns their distributions across the predicted classes, so that each value can be interpreted relative to the other values within its class.
This reduces the false positives (i.e., in-distribution objects detected as unexpected objects, highlighted as gray regions in Fig. 1) when using a single threshold.
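A minimal sketch of the SML computation for one image, assuming logits of shape (num_classes, H, W) and per-class means and standard deviations of the max logits collected over the training set beforehand:

```python
import numpy as np

def standardized_max_logits(logits, class_means, class_stds):
    pred = logits.argmax(axis=0)    # predicted class per pixel
    max_logit = logits.max(axis=0)
    # Standardize each pixel's max logit with its predicted class's statistics.
    return (max_logit - class_means[pred]) / class_stds[pred]

rng = np.random.default_rng(0)
logits = rng.normal(size=(19, 4, 4))  # e.g. 19 Cityscapes classes
class_means = np.zeros(19)            # placeholder training-set statistics
class_stds = np.ones(19)
sml = standardized_max_logits(logits, class_means, class_stds)
```

A single threshold on SML (low values flagged as unexpected) then carries the same meaning in every predicted class.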
Moreover, we further improve the performance of identifying unexpected obstacles using the local semantics from two different perspectives.
First, we remove the false positives in boundary regions, where the predicted class changes from one to another.
Due to the class changes, the boundary pixels tend to have low prediction scores (i.e., high anomaly scores) compared to the non-boundary pixels [boundary\_active, boundary\_neural].
In this regard, we propose a novel *iterative boundary suppression* to remove such false positives by replacing the high anomaly scores of boundary regions with low anomaly scores of neighboring non-boundary pixels.
Second, to remove the remaining false positives in both boundary and non-boundary regions, we smooth them using neighboring pixels, based on the intuition that local consistency exists among pixels in a local region. We term this process *dilated smoothing*.

Figure 2: Overview of our method. We obtain the max logits from a segmentation network and (a) standardize it using the statistics obtained from the training samples. (b) Then, we iteratively replace the standardized max logits of boundary regions with those of surrounding non-boundary pixels. (c) Finally, we apply dilated smoothing to consider local semantics in broad receptive fields.
The main contributions of our work are as follows:
* We propose a simple yet effective approach for identifying unexpected objects on roads in urban-scene semantic segmentation.
* Our proposed approach can easily be applied to various existing models since our method does not require additional training or external datasets.
* We achieve a new state-of-the-art performance on the publicly available Fishyscapes Lost & Found leaderboard (<https://fishyscapes.com/>) by a large margin over previous approaches, with negligible computational overhead and without additional training or OoD data.
2 Related Work
---------------
### 2.1 Semantic segmentation on urban driving scenes
Recent studies [robustnet, hanet, foveanet, denseASPP, class\_uniform\_and\_urban\_scene, anlnet\_urban\_scene, danet\_urban\_scene, hardnet, efficient\_fusion, bidirectional] have strived to enhance the semantic segmentation performance on urban scenes.
The studies [foveanet, denseASPP] consider diverse scale changes in urban scenes or leverage the innate geometry and positional patterns found in urban-scene images [hanet].
Moreover, several studies [hardnet, efficient\_fusion, bidirectional] have proposed more efficient architectures to improve the inference time, which is critical for autonomous driving.
Despite these advances, such models cannot identify unexpected objects, another capability important for safety-critical applications.
Given its importance from the safety-critical perspective, we focus on detecting unexpected obstacles in urban-scene segmentation.
### 2.2 Detecting unexpected objects in semantic segmentation
Several studies [dense, entropy, fishyscapes] utilize samples of unexpected objects from external datasets during the training phase.
For example, by assuming that the objects cropped from the ImageNet dataset [ILSVRC] are anomalous objects, they are overlaid on original training images [dense] (e.g., Cityscapes) to provide samples of unexpected objects.
Similarly, another previous work [entropy] utilizes the objects from the COCO dataset [COCO] as samples of unexpected objects.
However, such methods require retraining the network with the additional datasets, which prevents directly utilizing a given pre-trained segmentation network.
Other work [resynthesis, synthesize\_compare, erasing, accv\_road\_obstacle] exploits the image resynthesis (i.e., reconstructing images from segmentation predictions) for detecting unexpected objects.
Based on the intuition that image resynthesis models fail to reconstruct the regions with unexpected objects, these studies use the discrepancy between an original image and the resynthesized image with such objects excluded.
However, utilizing an extra image resynthesis model to detect unexpected objects incurs a lengthy inference time, which is critical in semantic segmentation.
In real-world applications of semantic segmentation (e.g., autonomous driving), detecting unexpected objects must run in real time.
Considering such issues, we propose a simple yet effective method
that can be applied to a given segmentation model without requiring additional training or external datasets.
3 Proposed Method
------------------
This section presents our approach for detecting unexpected road obstacles.
We first present how we standardize the max logits in Section [3.2](#S3.SS2 "3.2 Standardized Max Logits (SML) ‣ 3 Proposed Method ‣ Standardized Max Logits: A Simple yet Effective Approach for Identifying Unexpected Road Obstacles in Urban-Scene Segmentation") and explain how we consider the local semantics in Section [3.3](#S3.SS3 "3.3 Enhancing with Local Semantics ‣ 3 Proposed Method ‣ Standardized Max Logits: A Simple yet Effective Approach for Identifying Unexpected Road Obstacles in Urban-Scene Segmentation").
### 3.1 Method Overview
As our method overview is illustrated in Fig. [2](#S1.F2 "Figure 2 ‣ 1 Introduction ‣ Standardized Max Logits: A Simple yet Effective Approach for Identifying Unexpected Road Obstacles in Urban-Scene Segmentation"), we first obtain the max logits and standardize them, based on the finding that the max logits have their own ranges according to the predicted classes.
These different ranges cause unexpected objects (pixels in blue boxes) predicted as a certain class to have higher max logit values (i.e., lower anomaly scores) than in-distribution pixels in other classes.
This issue is addressed by standardizing the max logits in a class-wise manner, which allows the values to be interpreted relative to each predicted class.
Then, we remove the false positives (pixels in green boxes) in boundary regions.
Generally, false positives in boundary pixels have lower prediction scores than neighboring in-distribution pixels.
We reduce such false positives by iteratively updating boundary pixels using anomaly scores of neighboring non-boundary pixels.
Additionally, a non-trivial number of pixels have significantly different anomaly scores from their neighbors; we term these *irregulars* (pixels in yellow boxes).
Based on the intuition that local consistency (i.e., neighboring pixels sharing similar semantics) exists among pixels in a local region, we apply the smoothing filter with broad receptive fields.
Note that we use *the negative value of the final SML* as the anomaly score.
The following describes how we obtain the max logit and the prediction at each pixel.
Let $X \in \mathbb{R}^{3 \times H \times W}$ denote the input image and $C$ the number of pre-defined classes, where $H$ and $W$ are the image height and width, respectively.
The logit output $F \in \mathbb{R}^{C \times H \times W}$ is obtained from the segmentation network before the softmax layer.
Then, the max logit $L \in \mathbb{R}^{H \times W}$ and prediction $\hat{Y} \in \mathbb{R}^{H \times W}$ at each location $h, w$ are defined as

$$L_{h,w} = \max_{c} F_{c,h,w}, \qquad (1)$$

$$\hat{Y}_{h,w} = \operatorname*{arg\,max}_{c} F_{c,h,w}, \qquad (2)$$

where $c \in \{1, \dots, C\}$.
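In code, Eqs. (1)–(2) reduce to a max and an argmax over the class axis of the logit tensor. A minimal NumPy sketch (array shapes and names are illustrative, not from the authors' implementation):

```python
import numpy as np

def max_logit_and_pred(F):
    """F: logits of shape (C, H, W) from the network before softmax.

    Returns the per-pixel max logit L (Eq. 1) and the predicted
    class map Y_hat (Eq. 2), both of shape (H, W)."""
    L = F.max(axis=0)         # L_{h,w} = max_c F_{c,h,w}
    Y_hat = F.argmax(axis=0)  # Y_hat_{h,w} = argmax_c F_{c,h,w}
    return L, Y_hat

# toy example: 3 classes on a 2x2 image
F = np.array([[[1.0, 2.0], [0.5, 3.0]],
              [[2.5, 1.0], [0.0, 1.0]],
              [[0.0, 0.0], [4.0, 2.0]]])
L, Y_hat = max_logit_and_pred(F)
```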
### 3.2 Standardized Max Logits (SML)
As described in Fig. [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Standardized Max Logits: A Simple yet Effective Approach for Identifying Unexpected Road Obstacles in Urban-Scene Segmentation"), standardizing the max logits aligns the distributions of max logits in a class-wise manner.
For the standardization, we obtain the mean $\mu_c$ and variance $\sigma_c^2$ of each class $c$ from the training samples.
With the max logit $L_{h,w}$ and the predicted class $\hat{Y}_{h,w}$ from Eqs. (1) and (2), they are computed as

$$\mu_c = \frac{\sum_i \sum_{h,w} \mathbb{1}(\hat{Y}^{(i)}_{h,w} = c) \cdot L^{(i)}_{h,w}}{\sum_i \sum_{h,w} \mathbb{1}(\hat{Y}^{(i)}_{h,w} = c)}, \qquad (3)$$

$$\sigma_c^2 = \frac{\sum_i \sum_{h,w} \mathbb{1}(\hat{Y}^{(i)}_{h,w} = c) \cdot (L^{(i)}_{h,w} - \mu_c)^2}{\sum_i \sum_{h,w} \mathbb{1}(\hat{Y}^{(i)}_{h,w} = c)}, \qquad (4)$$

where $i$ indexes the $i$-th training sample and $\mathbb{1}(\cdot)$ is the indicator function.
Next, we standardize the max logits by the obtained statistics.
The SML $S \in \mathbb{R}^{H \times W}$ of a test image at each location $h, w$ is defined as

$$S_{h,w} = \frac{L_{h,w} - \mu_{\hat{Y}_{h,w}}}{\sigma_{\hat{Y}_{h,w}}}. \qquad (5)$$
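Eqs. (3)–(5) can be sketched as follows (an illustrative NumPy version; the function names and the list-based interface are ours, not the authors' code):

```python
import numpy as np

def class_stats(L_list, Y_list, num_classes):
    """Per-class mean and std of max logits over training samples (Eqs. 3-4).

    L_list / Y_list: lists of (H, W) max-logit maps and predicted-class maps."""
    mu = np.zeros(num_classes)
    sigma = np.ones(num_classes)
    for c in range(num_classes):
        vals = np.concatenate([L[Y == c] for L, Y in zip(L_list, Y_list)])
        if vals.size > 0:
            mu[c] = vals.mean()
            sigma[c] = vals.std()
    return mu, sigma

def standardized_max_logit(L, Y_hat, mu, sigma):
    """Eq. 5: standardize each pixel's max logit by its predicted class's stats."""
    return (L - mu[Y_hat]) / sigma[Y_hat]

# toy training statistics: one image, every pixel predicted as class 0
L_train = np.array([[1.0, 3.0]])
Y_train = np.array([[0, 0]])
mu, sigma = class_stats([L_train], [Y_train], num_classes=2)
S = standardized_max_logit(L_train, Y_train, mu, sigma)
```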
### 3.3 Enhancing with Local Semantics
We explain how we apply iterative boundary suppression and dilated smoothing by utilizing the local semantics.
| Models | Seg. Network Retraining | Extra Network | OoD Data | mIoU | FS L&F AP ↑ | FS L&F FPR95 ↓ | FS Static AP ↑ | FS Static FPR95 ↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MSP [baseline] | ✗ | ✗ | ✗ | 80.30 | 1.77 | 44.85 | 12.88 | 39.83 |
| Entropy [baseline] | ✗ | ✗ | ✗ | 80.30 | 2.93 | 44.83 | 15.41 | 39.75 |
| Density - Single-layer NLL [fishyscapes] | ✗ | ✓ | ✗ | 80.30 | 3.01 | 32.90 | 40.86 | 21.29 |
| kNN Embedding - density [fishyscapes] | ✗ | ✗ | ✗ | 80.30 | 3.55 | 30.02 | 44.03 | 20.25 |
| Density - Minimum NLL [fishyscapes] | ✗ | ✓ | ✗ | 80.30 | 4.25 | 47.15 | 62.14 | 17.43 |
| Density - Logistic Regression [fishyscapes] | ✗ | ✓ | ✓ | 80.30 | 4.65 | 24.36 | 57.16 | 13.39 |
| Image Resynthesis [resynthesis] | ✗ | ✓ | ✗ | 81.40 | 5.70 | 48.05 | 29.60 | 27.13 |
| Bayesian Deeplab [bayesian\_deeplab] | ✓ | ✗ | ✗ | 73.80 | 9.81 | 38.46 | 48.70 | 15.50 |
| OoD Training - Void Class | ✓ | ✗ | ✓ | 70.40 | 10.29 | 22.11 | 45.00 | 19.40 |
| Ours | ✗ | ✗ | ✗ | 80.33 | **31.05** | **21.52** | **53.11** | **19.64** |
| Discriminative Outlier Detection Head [dense] | ✓ | ✓ | ✓ | 79.57 | 31.31 | 19.02 | 96.76 | 0.29 |
| Dirichlet Deeplab [prior\_network] | ✓ | ✗ | ✓ | 70.50 | 34.28 | 47.43 | 31.3 | 84.60 |
Table 1: Comparison with previous approaches reported in Fishyscapes Leaderboard. Models are sorted by the AP scores in Fishyscapes Lost & Found test set. We achieve a new state-of-the-art performance among the approaches that do not require additional training on the segmentation network or OoD data on Fishyscapes Lost & Found dataset. Bold fonts indicate the highest performance in its evaluation metric among approaches that do not 1) retrain segmentation networks, 2) train extra networks, and 3) utilize OoD data.
#### 3.3.1 Iterative boundary suppression
To address the problem of wrongly predicting the boundary regions as false positives and false negatives, we iteratively suppress the boundary regions.
Fig. [3](#S3.F3 "Figure 3 ‣ 3.3.1 Iterative boundary suppression ‣ 3.3 Enhancing with Local Semantics ‣ 3 Proposed Method ‣ Standardized Max Logits: A Simple yet Effective Approach for Identifying Unexpected Road Obstacles in Urban-Scene Segmentation") illustrates the process of iterative boundary suppression.
We gradually propagate the SMLs of the neighboring non-boundary pixels to the boundary regions, starting from the outer areas of the boundary (green-colored pixels) to the inner areas (gray-colored pixels). Specifically, we assume an initial boundary width and iteratively shrink it at each iteration.
This process is defined as follows.
With a given boundary width $r_i$ at the $i$-th iteration and the semantic segmentation output $\hat{Y}$, we obtain the non-boundary mask $M^{(i)} \in \mathbb{R}^{H \times W}$ at each pixel $h, w$ as

$$M^{(i)}_{h,w} = \begin{cases} 0, & \text{if } \exists\, h', w' \text{ s.t. } \hat{Y}_{h,w} \neq \hat{Y}_{h',w'}, \\ 1, & \text{otherwise}, \end{cases} \qquad (6)$$

for all $h', w'$ satisfying $|h - h'| + |w - w'| \leq r_i$.

Figure 3: How iterative boundary suppression works. After standardizing the max logits, we apply average pooling by only using the SMLs of non-boundary pixels (i.e., boundary-aware average pooling) for several iterations. The boundary mask is obtained from a prediction output of a segmentation network.
Next, we apply the boundary-aware average pooling on the boundary pixels as shown in Fig. [3](#S3.F3 "Figure 3 ‣ 3.3.1 Iterative boundary suppression ‣ 3.3 Enhancing with Local Semantics ‣ 3 Proposed Method ‣ Standardized Max Logits: A Simple yet Effective Approach for Identifying Unexpected Road Obstacles in Urban-Scene Segmentation").
This applies average pooling on a boundary pixel only with the SMLs of neighboring non-boundary pixels.
With the boundary pixel $b$ and its receptive field $R$, the boundary-aware average pooling (BAP) is defined as

$$\mathrm{BAP}(S^{(i)}_R, M^{(i)}_R) = \frac{\sum_{h,w} S^{(i)}_{h,w} \times M^{(i)}_{h,w}}{\sum_{h,w} M^{(i)}_{h,w}}, \qquad (7)$$

where $S^{(i)}_R$ and $M^{(i)}_R$ denote the patches of receptive field $R$ on $S^{(i)}$ and $M^{(i)}$, and $(h, w) \in R$ enumerates the pixels in $R$.
Then, we replace the original value at the boundary pixel $b$ with the newly obtained one.
We apply this process iteratively $n$ times, reducing the boundary width by $\Delta r = 2$ at each iteration. We set the receptive field $R$ to $3 \times 3$, and empirically set the number of iterations $n$ and the initial boundary width $r_0$ to 4 and 8, respectively.
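The mask of Eq. (6) and the pooling of Eq. (7) can be sketched in plain NumPy as follows (an illustrative, unvectorized re-implementation; the function names and loop structure are ours, not the authors' code, which would presumably vectorize these operations):

```python
import numpy as np

def nonboundary_mask(Y_hat, r):
    """Eq. 6: a pixel is non-boundary (1) if no pixel within L1 distance r
    has a different predicted class; otherwise it is boundary (0)."""
    H, W = Y_hat.shape
    M = np.ones((H, W), dtype=float)
    for h in range(H):
        for w in range(W):
            for dh in range(-r, r + 1):
                for dw in range(-r, r + 1):
                    if abs(dh) + abs(dw) > r:
                        continue
                    h2, w2 = h + dh, w + dw
                    if 0 <= h2 < H and 0 <= w2 < W and Y_hat[h2, w2] != Y_hat[h, w]:
                        M[h, w] = 0.0
    return M

def boundary_aware_avg_pool(S, M, size=3):
    """Eq. 7: replace each boundary pixel's SML by the average of the SMLs
    of its non-boundary neighbors within a size x size window."""
    H, W = S.shape
    out = S.copy()
    pad = size // 2
    for h in range(H):
        for w in range(W):
            if M[h, w] == 1.0:
                continue  # non-boundary pixels keep their SML
            hs, he = max(0, h - pad), min(H, h + pad + 1)
            ws, we = max(0, w - pad), min(W, w + pad + 1)
            den = M[hs:he, ws:we].sum()
            if den > 0:
                out[h, w] = (S[hs:he, ws:we] * M[hs:he, ws:we]).sum() / den
    return out

def iterative_boundary_suppression(S, Y_hat, n=4, r0=8, dr=2):
    """Apply BAP n times, shrinking the assumed boundary width r by dr each step."""
    r = r0
    for _ in range(n):
        M = nonboundary_mask(Y_hat, max(r, 1))
        S = boundary_aware_avg_pool(S, M)
        r -= dr
    return S
```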
#### 3.3.2 Dilated smoothing
Since iterative boundary suppression only updates boundary pixels, the irregulars in the non-boundary regions are not addressed.
Hence, we address these pixels by smoothing them with neighboring pixels, based on the intuition that local consistency exists among pixels in a local region.
In addition, if the adjacent pixels used for iterative boundary suppression do not have sufficiently low or high anomaly scores, there may still exist boundary pixels that remain as false positives or false negatives even after the process.
In this regard, we broaden the receptive fields of the smoothing filter using dilation [dilation] to reflect the anomaly scores beyond boundary regions.
For the smoothing filter, we leverage the Gaussian kernel, since it is widely known to remove noise [gaussian\_blur].
With a given standard deviation $\sigma$ and convolution filter size $k$, the kernel weight $K \in \mathbb{R}^{k \times k}$ at location $i, j$ is defined as

$$K_{i,j} = \frac{1}{2\pi\sigma^2} \exp\!\left(-\frac{\Delta i^2 + \Delta j^2}{2\sigma^2}\right), \qquad (8)$$

where $\Delta i = i - \frac{k-1}{2}$ and $\Delta j = j - \frac{k-1}{2}$ are the displacements of location $i, j$ from the center. In our setting, we set the kernel size $k$ and $\sigma$ to 7 and 1, respectively. Moreover, we empirically set the dilation rate to 6.
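The kernel of Eq. (8) and the dilated convolution can be sketched as follows (an illustrative NumPy version; normalizing the kernel to sum to one and using edge padding at the borders are our choices, not necessarily the authors'):

```python
import numpy as np

def gaussian_kernel(k=7, sigma=1.0):
    """Eq. 8 Gaussian weights, normalized here so the filter preserves scale."""
    idx = np.arange(k) - (k - 1) / 2.0
    dj, di = np.meshgrid(idx, idx)
    K = np.exp(-(di**2 + dj**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    return K / K.sum()

def dilated_smoothing(S, k=7, sigma=1.0, dilation=6):
    """Convolve S with the Gaussian kernel whose taps are spread apart by
    `dilation`, widening the receptive field beyond boundary regions."""
    K = gaussian_kernel(k, sigma)
    H, W = S.shape
    pad = dilation * (k - 1) // 2
    Sp = np.pad(S, pad, mode="edge")  # edge padding at image borders
    out = np.zeros_like(S)
    for i in range(k):
        for j in range(k):
            out += K[i, j] * Sp[i * dilation : i * dilation + H,
                                j * dilation : j * dilation + W]
    return out
```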
| Models | mIoU | FS L&F AUROC ↑ | FS L&F AP ↑ | FS L&F FPR95 ↓ | FS Static AUROC ↑ | FS Static AP ↑ | FS Static FPR95 ↓ | Road Anomaly AUROC ↑ | Road Anomaly AP ↑ | Road Anomaly FPR95 ↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MSP [baseline] | 80.33 | 86.99 | 6.02 | 45.63 | 88.94 | 14.24 | 34.10 | 73.76 | 20.59 | 68.44 |
| Max Logit [maxlogit] | 80.33 | 92.00 | 18.77 | 38.13 | 92.80 | 27.99 | 28.50 | 77.97 | 24.44 | 64.85 |
| Entropy | 80.33 | 88.32 | 13.91 | 44.85 | 89.99 | 21.78 | 33.74 | 75.12 | 22.38 | 68.15 |
| kNN Embedding - Density [fishyscapes] | 80.30 | - | 4.1 | 22.30 | - | - | - | - | - | - |
| †SynthCP∗ [synthesize\_compare] | 80.33 | 88.34 | 6.54 | 45.95 | 89.90 | 23.22 | 34.02 | 76.08 | 24.86 | 64.69 |
| Ours | 80.33 | 96.88 | 36.55 | 14.53 | 96.69 | 48.67 | 16.75 | 81.96 | 25.82 | 49.74 |
Table 2: Comparison with other baselines in the Fishyscapes validation sets and the Road Anomaly dataset. † denotes that the results are obtained from the official code with our pre-trained backbone and ∗ denotes that the model requires additional learnable parameters. Note that the performance of kNN Embedding - Density is provided from the Fishyscapes [fishyscapes] team.
4 Experiments
--------------
This section describes the datasets, experimental setup, and quantitative and qualitative results.
### 4.1 Datasets
##### Fishyscapes Lost & Found [fishyscapes]
is a high-quality image dataset containing real obstacles on the road. This dataset is based on the original Lost & Found [lost\_and\_found] dataset.
The original Lost & Found is collected with the same setup as Cityscapes [cityscapes], a dataset widely used in urban-scene segmentation. It contains real urban images with 37 types of unexpected road obstacles and 13 different street scenarios (e.g., different road surface appearances and strong illumination changes).
Fishyscapes Lost & Found further provides the pixel-wise annotations for 1) unexpected objects, 2) objects with pre-defined classes of Cityscapes [cityscapes], and 3) void (i.e., objects neither in pre-defined classes nor unexpected objects) regions.
This dataset includes a public validation set of 100 images and a hidden test set of 275 images for the benchmarking.
##### Fishyscapes Static [fishyscapes]
is constructed from the validation set of Cityscapes [cityscapes]. Objects from PASCAL VOC [pascal] are regarded as unexpected objects and overlaid on the Cityscapes validation images using various blending techniques to match the characteristics of Cityscapes.
This dataset contains 30 publicly available validation samples and 1,000 test images that are hidden for benchmarking.
##### Road Anomaly [resynthesis]
contains images of unusual dangers which vehicles confront on roads.
It consists of 60 web-collected images with anomalous objects (e.g., animals and rocks) on roads, with a resolution of 1280×720.
This dataset is challenging since it contains various driving circumstances such as diverse scales of anomalous objects and adverse road conditions.
### 4.2 Experimental Setup
##### Implementation Details
We adopt DeepLabv3+ [deepv3+] with ResNet101 [resnet] backbone for our segmentation architecture with the output stride set to 8.
We train our segmentation networks on Cityscapes [cityscapes], one of the most widely used datasets for urban-scene segmentation.
We use the same pre-trained network for all experiments.
##### Evaluation Metrics
For the quantitative results, we compare the performance by the area under receiver operating characteristics (AUROC) and average precision (AP).
In addition, we measure the false positive rate at a true positive rate of 95% (FPR95) since the rate of false positives in high-recall areas is crucial for safety-critical applications. For the qualitative analysis, we visualize the prediction results using the threshold at a true positive rate of 95% (TPR95).
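As an illustration, FPR95 can be computed from per-pixel anomaly scores and ground-truth labels as follows (a simplified sketch; `fpr_at_tpr` and its interface are ours, not the benchmark's evaluation code, which handles ties and void regions more carefully):

```python
import numpy as np

def fpr_at_tpr(scores, labels, tpr=0.95):
    """False positive rate at the threshold where TPR first reaches `tpr`.

    scores: anomaly scores (higher = more anomalous); labels: 1 = unexpected."""
    order = np.argsort(-scores)   # sort pixels by descending anomaly score
    labels = labels[order]
    tp = np.cumsum(labels == 1)   # true positives as threshold loosens
    fp = np.cumsum(labels == 0)   # false positives as threshold loosens
    tpr_curve = tp / max(tp[-1], 1)
    fpr_curve = fp / max(fp[-1], 1)
    idx = np.searchsorted(tpr_curve, tpr)  # first point with TPR >= tpr
    return float(fpr_curve[min(idx, len(fpr_curve) - 1)])
```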
##### Baselines
We compare ours with the various approaches reported in the Fishyscapes leaderboard.
We also report results on the Fishyscapes validation sets and Road Anomaly with previous approaches that do not utilize external datasets or require additional training for fair comparisons.
Additionally, we compare our method with approaches that are not reported in the Fishyscapes leaderboard: the previous method using the max logit [maxlogit] and SynthCP [synthesize\_compare], which leverages an image resynthesis model.
Note that SynthCP requires training of additional networks.
### 4.3 Evaluation Results
This section provides the quantitative and qualitative results. We first show the results on Fishyscapes datasets and Road Anomaly, and then present the comparison results with various backbone networks.
Additionally, we report the computational cost and the qualitative results by comparing with previous approaches.
#### 4.3.1 Comparison on Fishyscapes Leaderboard
Table [1](#S3.T1 "Table 1 ‣ 3.3 Enhancing with Local Semantics ‣ 3 Proposed Method ‣ Standardized Max Logits: A Simple yet Effective Approach for Identifying Unexpected Road Obstacles in Urban-Scene Segmentation") shows the leaderboard result on the test sets of Fishyscapes Lost & Found and Fishyscapes Static.
The Fishyscapes Leaderboard categorizes approaches by checking whether they require retraining of segmentation networks or utilize OoD data.
In this work, we add the *Extra Network* column under the *Additional Training* category.
Extra networks refer to the extra learnable parameters that need to be trained using a particular objective function other than the one for the main segmentation task.
Utilizing extra networks may require a lengthy inference time, which could be critical for real-time applications such as autonomous driving.
Considering such importance, we add this category for the evaluation.
As shown in Table [1](#S3.T1 "Table 1 ‣ 3.3 Enhancing with Local Semantics ‣ 3 Proposed Method ‣ Standardized Max Logits: A Simple yet Effective Approach for Identifying Unexpected Road Obstacles in Urban-Scene Segmentation"), we achieve a new state-of-the-art performance on the Fishyscapes Lost & Found dataset with a large margin, compared to the previous models that do not require additional training of the segmentation network and external datasets.
Additionally, we even outperform 6 previous approaches in Fishyscapes Lost & Found and 5 models in Fishyscapes Static that fall into at least one of the two categories.
Moreover, as discussed in previous work [fishyscapes], retraining the segmentation network with additional loss terms impairs the original segmentation performance (i.e., mIoU), as shown in the cases of Bayesian Deeplab [bayesian\_deeplab], Dirichlet Deeplab [prior\_network], and OoD Training with the void class in Table [1](#S3.T1 "Table 1 ‣ 3.3 Enhancing with Local Semantics ‣ 3 Proposed Method ‣ Standardized Max Logits: A Simple yet Effective Approach for Identifying Unexpected Road Obstacles in Urban-Scene Segmentation").
This result is publicly available on the Fishyscapes benchmark website.
#### 4.3.2 Comparison on Fishyscapes validation sets and Road Anomaly
For a fair comparison, we compare our method on Fishyscapes validation sets and Road Anomaly with previous approaches which do not require additional training and OoD data.
As shown in Table [2](#S3.T2 "Table 2 ‣ 3.3.2 Dilated smoothing ‣ 3.3 Enhancing with Local Semantics ‣ 3 Proposed Method ‣ Standardized Max Logits: A Simple yet Effective Approach for Identifying Unexpected Road Obstacles in Urban-Scene Segmentation"), our method outperforms previous methods on all three datasets by a large margin.
Additionally, our method achieves a significantly lower FPR95 compared to previous approaches.
#### 4.3.3 Qualitative Analysis
Fig. [4](#S4.F4 "Figure 4 ‣ 4.3.3 Qualitative Analysis ‣ 4.3 Evaluation Results ‣ 4 Experiments ‣ Standardized Max Logits: A Simple yet Effective Approach for Identifying Unexpected Road Obstacles in Urban-Scene Segmentation") visualizes the pixels detected as unexpected objects (i.e., white regions) with the TPR at 95%.
While previous approaches using MSP [baseline] and the max logit [maxlogit] mark numerous in-distribution pixels as unexpected, our method does not.
Specifically, less confident regions (e.g., boundary pixels) are detected as unexpected by MSP [baseline] and the max logit [maxlogit].
However, our method clearly reduces such false positives which can be confirmed by the significantly reduced number of white regions.

Figure 4:
Unexpected objects detected with TPR95. We compare our method with MSP [baseline] and max logit [maxlogit]. White pixels indicate objects which are identified as unexpected objects. Our method significantly reduces the number of false positive pixels compared to the two approaches.
5 Discussion
-------------
In this section, we conduct an in-depth analysis on the effects of our proposed method along with the ablation studies.
| Models | AUROC ↑ | AP ↑ | FPR95 ↓ |
| --- | --- | --- | --- |
| Max Logit | 92.00 | 18.77 | 38.13 |
| SML | 96.54 | 27.61 | 15.46 |
| SML + B Supp. | 96.82 | 31.63 | 14.58 |
| SML + D. Smoothing | 96.70 | 36.00 | 15.65 |
| SML + B Supp. + D. Smoothing | 96.89 | 36.55 | 14.53 |
Table 3: Ablation study on our proposed methods. B Supp. and D. Smoothing refer to iterative boundary suppression and dilated smoothing, respectively.
### 5.1 Ablation Study
Table [3](#S5.T3 "Table 3 ‣ 5 Discussion ‣ Standardized Max Logits: A Simple yet Effective Approach for Identifying Unexpected Road Obstacles in Urban-Scene Segmentation") describes the effect of each proposed method in our work with the Fishyscapes Lost & Found validation set.
SML achieves a significant performance gain over using the max logit [maxlogit].
Performing iterative boundary suppression on SMLs improves the overall performance (i.e., 4% increase in AP and 1% decrease in FPR95).
On the other hand, despite the increase in AP, performing dilated smoothing on SMLs without iterative boundary suppression results in an unwanted slight increase in FPR95.
A possible reason is as follows: when dilated smoothing is applied without iterative boundary suppression, the anomaly scores of non-boundary pixels may be updated with those of boundary pixels.
Since the non-boundary pixels of in-distribution objects have low anomaly scores compared to the boundaries, it may increase false positives.
Such an issue is addressed by performing iterative boundary suppression before applying dilated smoothing.
After the boundary regions are updated with neighboring non-boundary regions, dilated smoothing increases the overall performance without such error propagation.
### 5.2 Analysis
This section provides an in-depth analysis on the effects on segmentation performance, comparison with various backbones, and comparison on computational costs.
| Model | Original | MSP | Max Logit | Ours |
| --- | --- | --- | --- | --- |
| mIoU (%) | 80.33 | 19.22 | 26.19 | 68.65 |
Table 4: mIoU on the Cityscapes validation set with the unexpected obstacle detection threshold at TPR95 on Fishyscapes Lost & Found validation set.
#### 5.2.1 Effects on the segmentation performance
Table [4](#S5.T4 "Table 4 ‣ 5.2 Analysis ‣ 5 Discussion ‣ Standardized Max Logits: A Simple yet Effective Approach for Identifying Unexpected Road Obstacles in Urban-Scene Segmentation") shows the mIoU on the Cityscapes validation set with the detection threshold at TPR95. With this threshold, the segmentation model predicts a non-trivial number of in-distribution pixels as unexpected. Due to such false positives, the mIoU of all methods decreases from the original 80.33%.
Specifically, MSP [baseline] and the max logit [maxlogit] result in significant performance degradation, while our approach maintains a reasonable mIoU alongside outstanding unexpected obstacle detection. This again demonstrates the practicality of our method, as it performs well on both the segmentation and the unexpected obstacle detection tasks.
| Backbone | Models | mIoU | AUROC ↑ | AP ↑ | FPR95 ↓ |
| --- | --- | --- | --- | --- | --- |
| MobileNetV2 [mobile] | MSP | 75.70 | 86.00 | 2.60 | 48.05 |
| MobileNetV2 [mobile] | Max Logit | 75.70 | 91.89 | 7.15 | 36.24 |
| MobileNetV2 [mobile] | Ours | 75.70 | 96.18 | 16.95 | 16.63 |
| ShuffleNetV2 [shuffle] | MSP | 72.71 | 86.33 | 4.06 | 45.68 |
| ShuffleNetV2 [shuffle] | Max Logit | 72.71 | 90.06 | 8.67 | 45.36 |
| ShuffleNetV2 [shuffle] | Ours | 72.71 | 95.26 | 14.42 | 23.17 |
| ResNet50 [resnet] | MSP | 77.76 | 86.25 | 3.50 | 45.03 |
| ResNet50 [resnet] | Max Logit | 77.76 | 89.47 | 8.95 | 48.99 |
| ResNet50 [resnet] | Ours | 77.76 | 95.24 | 18.54 | 19.57 |
Table 5: Comparison with MSP and max logit on Fishyscapes Lost & Found dataset. The backbone networks are trained with the output stride of 16.
#### 5.2.2 Comparison with various backbones
Since our method does not require additional training or extra OoD datasets, our method can be adopted and used easily on any existing pre-trained segmentation networks.
To verify the wide applicability of our approach, we report the performance of identifying anomalous objects with various backbone networks, including MobileNetV2 [mobile], ShuffleNetV2 [shuffle], and ResNet50 [resnet].
As shown in Table [5](#S5.T5 "Table 5 ‣ 5.2.1 Effects on the segmentation performance ‣ 5.2 Analysis ‣ 5 Discussion ‣ Standardized Max Logits: A Simple yet Effective Approach for Identifying Unexpected Road Obstacles in Urban-Scene Segmentation"), our method significantly outperforms the other approaches [baseline, maxlogit] using the same backbone network with a large improvement in AP.
This result clearly demonstrates that our method is widely applicable regardless of the backbone network.
| Models | GFLOPs | Infer. Time (ms) |
| --- | --- | --- |
| ResNet-101 [resnet] | 2139.86 | 60.54 |
| Ours (SML) | 2139.86 | 61.41 |
| Ours (SML + B. Supp.) | 2140.01 | 74.66 |
| Ours (SML + B. Supp. + D. Smoothing) | 2140.12 | 75.02 |
| SynthCP [synthesize\_compare] | 4551.11 | 146.90 |
Table 6: Comparison of computational cost. Metrics are measured with the image size of 2048×1024 on NVIDIA GeForce RTX 3090 GPU. The inference time is averaged over 100 trials.
#### 5.2.3 Comparison on computational cost
To demonstrate that our method requires a negligible amount of computation cost, we report GFLOPs (i.e., the number of floating-point operations used for computation) and the inference time.
As shown in Table [6](#S5.T6 "Table 6 ‣ 5.2.2 Comparison with various backbones ‣ 5.2 Analysis ‣ 5 Discussion ‣ Standardized Max Logits: A Simple yet Effective Approach for Identifying Unexpected Road Obstacles in Urban-Scene Segmentation"), our method requires only a minimal amount of computation cost regarding both GFLOPs and the inference time compared to the original segmentation network, ResNet-101 [resnet].
Also, among several studies which utilize additional networks, we compare with a recently proposed approach [synthesize\_compare] that leverages an image resynthesis model.
Our approach requires substantially less computational cost than SynthCP [synthesize\_compare].
| Models | ΔAUROC ↑ | ΔAP ↑ | ΔFPR95 ↓ |
| --- | --- | --- | --- |
| MSP + B. Supp. + D. S. | -0.60 | 1.08 | 3.24 |
| Max Logit + B. Supp. + D. S. | -0.51 | -1.45 | 2.60 |
| SML + B. Supp. + D. S. | 0.35 | 8.95 | -0.93 |
Table 7: Comparison of metric gains after iterative boundary suppression and dilated smoothing on MSP, max logit, and SML. B. Supp. and D. S. refer to iterative boundary suppression and dilated smoothing, respectively.
### 5.3 Effects of Standardized Max Logit
Table [7](#S5.T7 "Table 7 ‣ 5.2.3 Comparison on computational cost ‣ 5.2 Analysis ‣ 5 Discussion ‣ Standardized Max Logits: A Simple yet Effective Approach for Identifying Unexpected Road Obstacles in Urban-Scene Segmentation") describes how SML enables applying iterative boundary suppression and dilated smoothing.
Applying iterative boundary suppression and dilated smoothing to other approaches does not improve the performance, and even degrades it in the cases of MSP [baseline] and max logit [maxlogit].
On the other hand, it significantly enhances the performance when applied to SML.
The following are possible reasons for this observation.
As mentioned earlier, the overconfidence of the softmax layer inflates the MSPs of anomalous objects.
Since the MSPs of anomalous and in-distribution objects are not sufficiently distinguishable, applying iterative boundary suppression and dilated smoothing may not improve the performance.
Additionally, iterative boundary suppression and dilated smoothing require the values to be on a comparable scale, since they perform computations that combine values across pixels.
When raw max logits are used, the value ranges differ across predicted classes.
Performing iterative boundary suppression and dilated smoothing in such a case degrades the performance, because the same max logit value carries a different meaning depending on the predicted class.
SML aligns the differently distributed per-class max logits, which enables computations that combine the values of neighboring pixels.
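The standardization step can be sketched as follows. This is a minimal illustration, assuming per-class means and standard deviations of the max logits have been precomputed from training predictions; the function and variable names are ours, not the authors':

```python
import numpy as np

def standardize_max_logits(max_logits, pred_classes, class_mean, class_std):
    """Align per-class max-logit distributions via standardization.

    max_logits:   (H, W) max logit value at each pixel
    pred_classes: (H, W) predicted class index at each pixel
    class_mean:   (C,) per-class mean of max logits (from training data)
    class_std:    (C,) per-class std of max logits (from training data)
    """
    return (max_logits - class_mean[pred_classes]) / class_std[pred_classes]

# Toy example: two classes whose max logits live in different ranges.
logits = np.array([[8.0, 2.0],
                   [9.0, 3.0]])
preds = np.array([[0, 1],
                  [0, 1]])
mean = np.array([8.5, 2.5])   # class 0 logits are larger on average
std = np.array([0.5, 0.5])
print(standardize_max_logits(logits, preds, mean, std))
```

After standardization, equal values mean the same degree of atypicality regardless of the predicted class, which is what makes the subsequent neighborhood operations meaningful.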
6 Conclusions
--------------
In this work, we proposed a simple yet effective method for identifying unexpected obstacles on roads that does not require external datasets or additional training.
Since max logits have different ranges for each predicted class, we aligned them via standardization, which improves the performance of detecting anomalous objects.
Additionally, based on the intuition that pixels in a local region share local semantics, we iteratively suppressed the boundary regions and removed irregular pixels that have distinct values compared to neighboring pixels via dilated smoothing.
With this straightforward approach, we achieved a new state-of-the-art performance on the Fishyscapes Lost & Found benchmark.
Additionally, extensive experiments on diverse datasets demonstrate the superiority of our method over previous approaches.
Through the visualizations and in-depth analysis, we verified our intuition that standardizing max logits and considering the local semantics of neighboring pixels indeed enhance the performance of identifying unexpected obstacles on roads.
However, there still remains room for improvement: 1) dilated smoothing might remove unexpected obstacles that are as small as noise, and 2) the performance depends on the distribution of max logits obtained from the underlying segmentation network.
We hope our work inspires future researchers to investigate such practical methods for identifying anomalous objects in urban-scene segmentation, which is crucial in safety-critical applications.
7 Acknowledgement
------------------
We deeply appreciate Hermann Blum and the FishyScapes team for their sincere help in providing the baseline performances and helping us update our model on the FishyScapes leaderboard.
This work was supported by the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korean government(MSIT) (No. 2019-0-00075, Artificial Intelligence Graduate School Program(KAIST) and No. 2020-0-00368, A Neural-Symbolic Model for Knowledge Acquisition and Inference Techniques) and the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No. NRF-2019R1A2C4070420).
A Supplementary Material
-------------------------
This supplementary material presents quantitative results on different architectures, the impact of hyper-parameters, further implementation details, and additional qualitative results.
### A.1 Effects on Different Architecture and Backbone
This section presents the quantitative results of a different architecture and backbone (i.e., EfficientPS [efficientps] and ResNeSt [resnest]) on the FishyScapes Lost & Found validation set.
As shown in Table [8](#S1.T8 "Table 8 ‣ A.1 Effects on Different Architecture and Backbone ‣ A Supplementary Material ‣ Standardized Max Logits: A Simple yet Effective Approach for Identifying Unexpected Road Obstacles in Urban-Scene Segmentation"), our approach outperforms all other methods in both cases. However, the amount of performance increase is not strictly correlated with the downstream task performance, as also pointed out in previous work [downstream\_task\_correlation1, downstream\_task\_correlation2].
| Architectures | mIoU | Methods | AUROC ↑ | AP ↑ | FPR95 ↓ |
| --- | --- | --- | --- | --- | --- |
| †EfficientPS [efficientps] | 79.3 | MSP | 84.41 | 1.46 | 61.03 |
|  |  | Max Logit | 89.39 | 3.83 | 48.75 |
|  |  | Ours | 94.17 | 5.93 | 21.93 |
| DeeplabV3+ w/ ResNeSt [resnest] | 79.1 | MSP | 87.23 | 7.89 | 57.67 |
|  |  | Max Logit | 91.91 | 22.58 | 51.12 |
|  |  | Ours | 95.32 | 31.38 | 30.37 |
Table 8: Results of EfficientPS and DeeplabV3+ with ResNeSt backbone on Fishyscapes Lost & Found validation set. † denotes the results are obtained from the official code with their pre-trained networks.
### A.2 Analysis on Hyper-parameters
This section analyzes the impact of the hyper-parameters in our proposed method through ablation studies on the FishyScapes Lost & Found validation set.
##### Number of iterations n
We report the quantitative results according to the number of iterations n used in iterative boundary suppression, described in Section 3.3.1 of the main paper.
Note that we set the initial boundary width r0 to 2n so that Δr=⌊r0/n⌋ equals 2, since we intend to reduce the width by 1 from each side of the boundary.
As shown in Table [9](#S1.T9 "Table 9 ‣ Number of iterations n ‣ A.2 Analysis on Hyper-parameters ‣ A Supplementary Material ‣ Standardized Max Logits: A Simple yet Effective Approach for Identifying Unexpected Road Obstacles in Urban-Scene Segmentation"), the performances in all metrics consistently increase as n increases up to n=4.
While AUROC and FPR95 improve further at n=5, AP degrades.
Since the numbers of in-distribution and unexpected pixels are imbalanced, we choose AP as our primary metric, as it is invariant to data imbalance, following Fishyscapes. Hence, we use n=4 in our work.
| Iterations | AUROC ↑ | AP ↑ | FPR95 ↓ |
| --- | --- | --- | --- |
| n=1 | 96.73 | 36.26 | 15.48 |
| n=2 | 96.78 | 36.44 | 15.19 |
| n=3 | 96.84 | 36.54 | 14.86 |
| n=4 | 96.89 | 36.55 | 14.53 |
| n=5 | 96.93 | 36.44 | 14.22 |
Table 9: Quantitative results with respect to n on Fishyscapes Lost & Found. Results are obtained after standardizing the max logit, iterative boundary suppression, and dilated smoothing.
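The width schedule described above (initial width r0 = 2n, so Δr = ⌊r0/n⌋ = 2) can be sketched as follows; this is an illustrative reconstruction, not the authors' code:

```python
def boundary_widths(n, r0=None):
    """Boundary width used at each of the n suppression iterations.

    The initial width r0 defaults to 2n, so the per-iteration
    reduction delta_r = r0 // n equals 2 (one pixel shaved from
    each side of the boundary per iteration).
    """
    if r0 is None:
        r0 = 2 * n
    delta_r = r0 // n
    return [r0 - i * delta_r for i in range(n)]

print(boundary_widths(4))  # [8, 6, 4, 2]
```

With n=4 this yields widths 8, 6, 4, and 2, so the suppressed boundary band shrinks until only the innermost boundary pixels remain to be filled in.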
##### Dilation rate d
We present the quantitative results with respect to the dilation rate d used in dilated smoothing, described in Section 3.3.2 of the main paper.
As shown in Table [10](#S1.T10 "Table 10 ‣ Dilation rate d ‣ A.2 Analysis on Hyper-parameters ‣ A Supplementary Material ‣ Standardized Max Logits: A Simple yet Effective Approach for Identifying Unexpected Road Obstacles in Urban-Scene Segmentation"), taking wider receptive fields improves the performance in AP up to d=6.
However, if the size of the receptive field increases further (e.g., after d=7), the performance degrades, indicating that a properly sized receptive field is crucial for capturing consistent local patterns.
| Dilation | AUROC ↑ | AP ↑ | FPR95 ↓ |
| --- | --- | --- | --- |
| d=1 | 96.86 | 33.25 | 14.50 |
| d=2 | 96.90 | 34.61 | 14.36 |
| d=3 | 96.92 | 35.57 | 14.33 |
| d=4 | 96.93 | 36.15 | 14.39 |
| d=5 | 96.92 | 36.46 | 14.47 |
| d=6 | 96.89 | 36.55 | 14.53 |
| d=7 | 96.86 | 36.47 | 14.57 |
| d=8 | 96.81 | 36.28 | 14.66 |
| d=9 | 96.76 | 35.99 | 14.91 |
| d=10 | 96.70 | 35.64 | 15.31 |
Table 10: Quantitative results according to the dilation rate d on Fishyscapes Lost & Found. Results are obtained after standardizing the max logit, iterative boundary suppression, and dilated smoothing.
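Dilated smoothing can be sketched as a convolution whose kernel taps are spaced d pixels apart. The sketch below assumes zero padding and a uniform kernel for illustration; the paper's exact (Gaussian) kernel and padding scheme are not reproduced here:

```python
import numpy as np

def dilated_smoothing(scores, kernel, d):
    """Smooth an (H, W) score map with a (k, k) kernel whose taps are
    spaced d pixels apart (dilation rate d); out-of-bounds taps are
    treated as zero."""
    H, W = scores.shape
    k = kernel.shape[0]
    c = k // 2  # kernel center
    out = np.zeros_like(scores, dtype=float)
    for i in range(H):
        for j in range(W):
            acc = 0.0
            for a in range(k):
                for b in range(k):
                    y = i + (a - c) * d
                    x = j + (b - c) * d
                    if 0 <= y < H and 0 <= x < W:
                        acc += kernel[a, b] * scores[y, x]
            out[i, j] = acc
    return out

# A uniform 3x3 kernel with dilation 2 averages pixels two steps away,
# so a 3x3 kernel covers a 5x5 neighborhood.
kernel = np.full((3, 3), 1.0 / 9.0)
scores = np.ones((5, 5))
smoothed = dilated_smoothing(scores, kernel, 2)
print(smoothed[2, 2])  # center sees all 9 taps -> 1.0
```

Increasing d widens the effective receptive field without adding kernel parameters, which is the trade-off explored in Table 10.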
### A.3 Further Implementation Details
We adopt DeepLabV3+ [deepv3+] as our segmentation network architecture and mainly use ResNet101 [resnet] as the backbone for most of the experiments.
Note that, as already shown in the main paper, our proposed method is model-agnostic and achieves the best performance with the MobileNetV2 [mobile], ShuffleNetV2 [shuffle], and ResNet50 [resnet] backbones compared to MSP [baseline] and max logit [maxlogit].
The model is trained with an output stride of 8 and a batch size of 8 for 60,000 iterations, with an initial learning rate of 1e-2 and momentum of 0.9. In addition, we apply polynomial learning rate scheduling [polynomial\_scheduling] with a power of 0.9 and the standard cross-entropy loss with the auxiliary loss proposed in PSPNet [pspnet], where the auxiliary loss weight λ is set to 0.4. Moreover, to prevent the model from overfitting, we apply color and positional augmentations such as color jittering, Gaussian blur, random scaling in the range [0.5, 2.0], random horizontal flipping, and random cropping. We adopt class-uniform sampling [class\_uniform\_and\_urban\_scene, class\_uniform\_2] with a rate of 0.5.
As aforementioned, we set the number of boundary iterations n, the initial boundary width r0, and the dilation rate d as 4, 8, and 6, respectively.
Additionally, we set the sizes of the boundary-aware average pooling kernel and the smoothing kernel to 3×3 and 7×7, respectively.
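The polynomial learning-rate schedule mentioned above follows the standard "poly" form lr = base_lr · (1 − step/max_steps)^power. A minimal sketch, assuming no warmup or other modifications beyond what the paper states:

```python
def poly_lr(base_lr, step, max_steps, power=0.9):
    """Standard 'poly' learning-rate schedule:
    lr = base_lr * (1 - step / max_steps) ** power."""
    return base_lr * (1.0 - step / max_steps) ** power

# With the paper's settings (base lr 1e-2, 60,000 iterations, power 0.9):
print(poly_lr(1e-2, 0, 60000))       # 0.01 at the start
print(poly_lr(1e-2, 30000, 60000))   # decayed at the midpoint
```

The schedule decays the learning rate smoothly to zero at the final iteration, which is the common choice for DeepLab-style training.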
### A.4 Qualitative Results
This section presents additional qualitative results.
We first demonstrate the qualitative results of our method and then compare them with other baselines.
We use the threshold at TPR95 and visualize the predicted in-distribution and unexpected pixels as black and white, respectively.
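The TPR95 threshold used for these visualizations can be computed from pixel-wise anomaly scores and ground-truth labels roughly as follows. This is a sketch: it assumes higher scores indicate anomalies, and the helper name is ours:

```python
import numpy as np

def threshold_at_tpr(scores, labels, tpr=0.95):
    """Smallest threshold t such that at least a fraction `tpr` of the
    anomalous pixels (labels == 1) have score >= t; assumes higher
    score means more anomalous."""
    anomalous = np.sort(scores[labels == 1])[::-1]  # descending
    k = int(np.ceil(tpr * len(anomalous)))          # pixels that must pass
    return anomalous[k - 1]

# Toy example: 4 anomalous and 2 in-distribution pixels.
scores = np.array([0.9, 0.8, 0.7, 0.6, 0.2, 0.1])
labels = np.array([1,   1,   1,   1,   0,   0])
t = threshold_at_tpr(scores, labels, tpr=0.75)
print(t)  # 0.7: the top 3 of the 4 anomalous scores pass
```

Pixels scoring at or above the threshold are then rendered white (unexpected) and the rest black (in-distribution).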
##### Our results
Fig. [5](#S1.F5 "Figure 5 ‣ Comparison with other approaches ‣ A.4 Qualitative Results ‣ A Supplementary Material ‣ Standardized Max Logits: A Simple yet Effective Approach for Identifying Unexpected Road Obstacles in Urban-Scene Segmentation") presents the qualitative results of applying iterative boundary suppression to show the effectiveness of removing the false positives (i.e., in-distribution pixels detected as unexpected).
We zoom in on particular regions (red boxes) to show the changes in detail.
After applying iterative boundary suppression, most of the false positives in the boundary regions are removed.
Additionally, Fig. [6](#S1.F6 "Figure 6 ‣ Comparison with other approaches ‣ A.4 Qualitative Results ‣ A Supplementary Material ‣ Standardized Max Logits: A Simple yet Effective Approach for Identifying Unexpected Road Obstacles in Urban-Scene Segmentation") describes the results of applying all of our methods.
The false positives in the boundary regions (e.g., white pixels in the yellow boxes) are removed after applying iterative boundary suppression.
Also, as shown in the green boxes, applying dilated smoothing effectively removes the false positives in the non-boundary regions.
##### Comparison with other approaches
We compare our method with MSP [baseline] and max logit [maxlogit] by showing qualitative results.
Figs. [7](#S1.F7 "Figure 7 ‣ Comparison with other approaches ‣ A.4 Qualitative Results ‣ A Supplementary Material ‣ Standardized Max Logits: A Simple yet Effective Approach for Identifying Unexpected Road Obstacles in Urban-Scene Segmentation") and [8](#S1.F8 "Figure 8 ‣ Comparison with other approaches ‣ A.4 Qualitative Results ‣ A Supplementary Material ‣ Standardized Max Logits: A Simple yet Effective Approach for Identifying Unexpected Road Obstacles in Urban-Scene Segmentation") show the results obtained from Fishyscapes Lost & Found and Fishsyscapes Static, respectively.
Since we visualize the images with the threshold at TPR95, most of the pixels in unexpected objects are identified.
However, MSP and max logit generate a substantial number of false positives.
In contrast, our method produces a negligible number of false positives, demonstrating its effectiveness.

Figure 5: Qualitative results of applying standardized max logit and iterative boundary suppression with iteration 2 and 4, respectively. We report the images of Fishyscapes Lost & Found. The white pixels indicate the pixels predicted as unexpected.

Figure 6: Qualitative results of applying standardized max logit, iterative boundary suppression, and dilated smoothing, respectively. We report the images of Fishyscapes Lost & Found. Yellow boxes and green boxes show that the false positives are effectively removed by applying iterative boundary suppression and dilated smoothing, respectively. The white pixels indicate the pixels predicted as unexpected.

Figure 7: Comparison with MSP, max logit, and ours on Fishyscapes Lost & Found dataset. The white pixels indicate the pixels predicted as unexpected.

Figure 8: Comparison with MSP, max logit, and ours on Fishyscapes Static dataset. The white pixels indicate the pixels predicted as unexpected.
Alignment Forum
Elicitation for Modeling Transformative AI Risks
*This post is part 8 in our sequence on Modeling Transformative AI Risk. We are building a model to understand debates around existential risks from advanced AI. The model is made with Analytica software, and consists of nodes (representing key hypotheses and cruxes) and edges (representing the relationships between these cruxes), with final output corresponding to the likelihood of various potential failure scenarios. You can read more about the motivation for our project and how the model works in the Introduction post. Unlike other posts in the sequence, this discusses the related but distinct work around Elicitation.*
*We are interested in feedback on this post, but to a greater extent than the other posts, we are interested in discussing what might be useful, and how to proceed with this. We would also welcome discussion from people working independently on elicitations, as we have discussed this extensively with other groups, many of whom are doing related work.*
As discussed in previous posts in this series, the model we have built is a tentative one, and requires expert feedback and input. The traditional academic method for getting such feedback and input is usually referred to as elicitation, and an [extensive field of academic work discusses how this can best be done](https://books.google.com/books?id=H9KswqPWIDQC&hl=en). (As simple examples, this might include eliciting an estimated cost, and probability distribution, or a rank ordering of which outcomes from a project are most important.)
Elicitation of expert views is particularly critical in AI safety for both understanding debates between experts and representing the associated probabilities. At the same time, many elicitation and forecasting projects have the advantage of unambiguous and concrete questions with answers that will be observed in the near term, or ask for preferences about outcomes which are well understood. Because these advantages are mostly absent for AI safety questions, the focus in this project is on understanding debates (instead of attempting to settle debates that are already understood, or are not resolvable even in theory). This means that there is no intent to elicit a “correct” answer to questions which may be based on debated or disputed assumptions. For this reason, we have taken an approach designed to start with better understanding experts’ views of the domain overall, rather than focus on the outcomes directly. This leads to opportunities for better understanding the sources of disagreement.
The remainder of this post first discusses what elicitation can and should be able to accomplish in this domain and for this project, as well as the conceptual and practical approaches we are using. This should help explain how elicited information can inform the concrete model, which can then hopefully help inform decisions - or at least clarify why decisions about which approaches to take are disputed. Following that, we outline our tentative future plan and what additional steps for elicitation may look like.
What to do about forecasting given uncertainties and debates?
-------------------------------------------------------------
In domains where the structure of uncertainties is clear and not debated, it is possible to build a model similar to the one built in the current project, ask experts whether the structure is correct, and, based on their input, build a final Directed Acyclic Graph (DAG) or other representation of the joint distribution that correctly represents their views. After this, we would ask experts to attach probability distributions to the various uncertainties, perhaps averaging their opinions for each node in the DAG, so that we could get quantitative predictions for the outcomes via [Monte Carlo](https://en.wikipedia.org/wiki/Monte_Carlo_method) simulation.
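The Monte Carlo step described above can be sketched on a toy two-node model. The nodes, probabilities, and dependence structure here are purely illustrative, not part of the project's actual model:

```python
import random

random.seed(0)

def sample_outcome():
    """Toy two-node DAG: 'capability arrives' -> 'failure scenario occurs'."""
    capability = random.random() < 0.6      # elicited P(capability)
    if capability:
        failure = random.random() < 0.3     # elicited P(failure | capability)
    else:
        failure = random.random() < 0.05    # elicited P(failure | no capability)
    return failure

# Propagate the elicited node-level distributions to an outcome estimate.
n = 100_000
p_failure = sum(sample_outcome() for _ in range(n)) / n
print(round(p_failure, 2))  # close to 0.6*0.3 + 0.4*0.05 = 0.20
```

With many more nodes and continuous distributions, the same sampling loop turns per-node expert estimates into a distribution over final outcomes, which is what the Analytica model automates.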
Long-term forecasting of deeply uncertain and debated outcomes in a domain like the future of AI is, for obvious reasons, extremely unreliable for predictions. And yet, we still need to make best-guess estimates for decision purposes, and in fact we implicitly have assigned probabilities and have implicit goals which are used and maximized when making any sort of decision related to the topic. Making this explicit involves ensuring that everyone’s varying assumptions or assertions are understood, which leads to both the motivation for and challenging nature of the current project.
By representing different structural assumptions about the future pathway of AI, and various models of how AI risks can be addressed, we can better understand where disagreements are due to fundamental differences (“AI will be aligned by default” vs. “The space of possible ML Minds contains at least some misaligned agents” vs. “Vanishingly few potential AIs are aligned”), and where they are due to quantitative differences in empirical estimates (“50% Confidence we will have ASI by 2030” vs. “90% confidence we won’t have ASI before 2050”). While these examples may be obvious, it is unclear whether others exist which are less so - and even the “obvious” debates may not be recognized by everyone as being legitimately debated.
For this reason, in addition to understanding specific debates, we need to represent uncertainty about both quantitative estimates and about the ground truth for conceptual debates. One way we plan to address this is by incorporating confidence measures for expert opinions about the debated features or assumptions in our probability estimates. Another is accounting for the arguments from analogy which many of these claims are based upon. For example, an expectation that ML progress will continue at a given pace, based on previous trends, is not an (explicit/gears-level) model of hardware or software progress, but it often informs an estimate and makes implicit assumptions about the solutions to the debated issues nonetheless.
However, it is at least arguable that a decision maker should incorporate information from across multiple viewpoints. This is because unresolved debates are also a form of uncertainty, and should be incorporated when considering options. One way we plan to address this is by explicitly including what we call “meta-uncertainties” in our model. Meta-uncertainties are intended to include all factors that a rational decision maker should take into account when making a decision using our model, but which do not correspond to a specific object-level question in the model.
One such meta-uncertainty is the reliability of long-term forecasting in general. If we think that long-term forecasting is very unreliable, we can use that as a factor that essentially downweights the confidence we have in any conclusions generated by the rest of our model. Other meta-uncertainties include: the reliability of expert elicitations in general and our elicitation in particular, structural uncertainties in our own model (how confident are we that we got this model right?), reference class uncertainty (did we pick the right reference classes?), potential cognitive biases that might be involved, and the possibility of [**unknown unknowns**](https://en.wikipedia.org/wiki/Knightian_uncertainty).
Using Elicitations
------------------
Given the above discussion, typical expert elicitation and aggregating opinions to get a best-guess forecast is not sufficient. Several challenges exist, from selecting experts to representing their opinions to aggregating or weighting differing views. But before doing any of these, more clarity about what is being asked is needed.
### What are the current plans?
Prior to doing anything resembling traditional quantitative elicitation, we need to have clarity in what is being elicited, so that the respondents are both clear about what is being asked and are answering the same question as one another. We also need to be certain that they are answering the same question as what we think is being asked. For example, asking for a timeline to HLMI is unhelpful if respondents have different ideas of what the term means, or dispute its validity as a concept. For this reason, our current work is focused on understanding which terms and concepts are understood, and which are debated.
It seems that one of the most useful methods of eliciting feedback on the model is requesting and receiving comments on this series of posts, and the discussions that arise from it. Going further, a paper is being written that reviews past elicitations and looks at where they succeeded or failed. Building on the review of the [many](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.352.6172&rep=rep1&type=pdf) [different](https://emerj.com/ai-future-outlook/when-will-we-reach-the-singularity-a-timeline-consensus-from-ai-researchers/) [past](https://ai.metaculus.com/questions/) [elicitation](https://aiimpacts.org/category/conversation-notes/) [projects](https://www.sciencedirect.com/science/article/pii/S0040162520311495) and [approaches](https://link.springer.com/chapter/10.1007/978-3-030-27005-6_7), a few of which are linked, and as a way to ensure we properly understand the disagreements that exist in AI alignment and safety, David Manheim, Ross Greutzmacher, and Julie Marble are working on new elicitation methods to help refine our understanding. We have tested, and continue to test, these methods internally, but we have not yet used them with external experts. As noted, the goal of this initial work is to better understand experts’ conceptual models, using methods such as guided pile sorts, explained below, and qualitative discussion.
The specific approach discussed below, called pile sorting, is adapted from sociology and anthropology. We have used this because it allows for discussion of terms without forcing a structure onto the discussion, and allows for feedback in an interactive way.
(Sample) initial prompt for the pile sorting task:
“The following set of terms are related to artificial intelligence and AI safety in various ways. The cards can be moved, and we would like you to group them in a way that seems useful for understanding what the terms are. While doing so, please feel free to talk about why, or what you are uncertain about. If any terms seem related but are missing, or there are things you think would be good to add or include, feel free to create additional cards. During the sorting, we may prompt you, for instance, by asking you why you chose to group things, or what the connection between items or groups is.”
Based on this prompt, we engage in a guided discussion where we ask questions such as how the terms are understood, why items have been grouped together, what the relationships between them are, and whether others would agree. It is common for some items to fit into multiple groups, and participants are encouraged to duplicate cards when this occurs. The outputs include both our notes about key questions and uncertainties and the actual grouping. The final state of the board in one of our sample sessions looked like the below:
*Example elicitation outcome (board image).*
This procedure, and the discussions with participants about why they chose the groupings they did, is intended to ensure that we have a useful working understanding of experts’ general views on various topics related to AI safety. The results will be compared across experts to see whether there are conflicting or contrasting conceptual models, and where the differences lie. While methods exist for analyzing such data, they are typically designed for clearer types of questions and simpler sorting tasks. For that reason, one key challenge which we have not resolved is how this elicitation can be summarized or presented clearly, other than via extensive qualitative discussion.
In addition to the card sort, we have several other elicitation approaches we are considering and pursuing that intend to accomplish related or further goals in this vein. But in order to do any elicitation, including these, there are some key challenges.
### How do you judge who is an “expert?”
This is a difficult issue, and varies depending on the particular hypothesis or proposition we’re asking about. It also depends on whether we view experts as good at prediction, or good at proposing useful mental models that can then be predicted about by forecasters. For deconfusion and definition disputes, the relevant experts are likely in the AI safety community and closely related areas. For other questions the relevant experts might be machine learning researchers, cognitive scientists, or evolutionary biologists. And of course, in each case, depending on the type of question, we may need to incorporate disputes rather than just estimates.
For example, if we were to ask “will mesa-optimizers emerge,” we need to rely on a clear understanding of what mesa-optimizers are. Unfortunately, this is somewhat debated, so different researchers' answers will not reflect the same claims. Furthermore, those who are not already concerned about the issue will likely be unable to usefully answer, given that the terms are unclear - biasing the results. For this reason, we have started with conceptual approaches, such as the above pile-sorting task.
Relatedly, in many cases, we also need to ask questions to multiple groups to discover if experts in different research areas have different views on a question. We expect different conceptual models to inform differences in opinions about the relationship between different outcomes, and knowing what those models are is helpful in disambiguating and ensuring that experts’ answers are interpreted correctly.
Another great challenge in selecting experts is that the relevant experts for topics such as AI safety and machine learning are often those who are working at the forefront of the field, and whose time is most valuable. Of course, this depends on how you define or measure domain expertise, but the value of previous elicitations is strongly correlated with the value of experts’ contributions in narrow domains. The difference in knowledge and perspective between those leading the field and those performing essentially [Kuhn’s ‘normal science’](https://en.wikipedia.org/wiki/Normal_science) is dramatic, and we hope that the novel elicitation techniques that we are working on can enable us to weight the structure emerging from leading experts’ elicitations appropriately.
Following the identification of experts, there is a critical question: is the value of expert judgment limited to qualitative information and to proposing approaches, or are experts also well calibrated for prediction? This is not critical at the current stage, but becomes more important later. There are good reasons to think that generalist forecasters have an advantage, and depending on the progress and usefulness of accurate quantification, this may be a critical tool for later stages of the project. We are interested in exploring forecasting techniques that combine domain experts and generalist forecasters in ways intended to capitalize on the relative expertise of both populations.
### How will we represent uncertainties?
For any object-level issue, in addition to understanding disputes, we need to incorporate uncertainties. Incorporation of uncertainty is both important for not misunderstanding expert views, and as a tool to investigate those differences in viewpoints. For this reason, when we ask for forecasts or use quantitative elicitations to ask experts for their best-guess probability estimates, we would also need to ask for the level of confidence that they have in those estimates, or their distribution of expected outcomes.
In some cases, experts or forecasters will themselves have uncertainties over debated propositions. For example, if asked about the rate of hardware advances, they may say that overall, they would guess a rate with distribution X, but that distribution depends on economic growth. If pre-HLMI AI accelerates economic growth, they expect hardware progress to follow one distribution, whereas if not, they expect it to follow another. In this case, it is possible for the elicitation to use the information to inform the model structure as well as the numeric estimate.
As an aside, while we do by default intend to represent both structural debates and estimates as probabilities, there are other approaches. Measures of confidence of this type can be modeled as [imprecise probabilities](https://en.wikipedia.org/wiki/Imprecise_probability), as distributions over probability estimates (“[second-order probabilities](https://link.springer.com/article/10.1007/BF00127335)”), or using other approaches (e.g., [causal networks](https://arxiv.org/abs/1304.2716), [Dempster-Shafer theory](https://en.wikipedia.org/wiki/Dempster%E2%80%93Shafer_theory), [subjective logic](https://en.wikipedia.org/wiki/Subjective_logic)). We have not yet fully settled on which approach or set of approaches to use for our purposes, but for the sake of simplicity, and for the purpose of decision making, the model will then need to represent the measures of confidence as distributions over probability estimates.
### Will this be informative?
It is possible that the more valuable portion of the work is the conceptual model, rather than quantitative estimates, or that the conceptual elicitations we are planning are unlikely to provide useful understanding of the domain. This is a critical question, and one that we hope will be resolved based on feedback from the team internally, outside advisors, and feedback from decision makers in the EA and longtermist community who we hope to inform.
What are the next steps?
------------------------
The current plans are very much contingent on feedback, but conditional on receiving positive feedback, we are hoping to run the elicitations we have designed, and move forward from there. We would also be interested in finding others that are interested in working with us on both the current elicitation projects, and thinking about what should come next, and have reached out to some potential collaborators.
LessWrong
Ambitious utilitarians must concern themselves with death
And I don't mean that they must concern themselves with death in the sense of ending death, or removing its sting through mental backups, or delaying it to the later ages of the universe; or in the sense of working to decrease the probability of extinction risks and other forms of megadeath; or even in the sense of saving as many lives as possible, as efficiently as possible. All of that is legitimate and interesting. But I mean something far more down to earth.
First, let me specify more precisely who I am talking about. I mean people who are trying to maximize the general welfare; who are trying to achieve the greatest good for the greatest number; who are trying to do the best thing possible with their lives. When someone like that makes decisions, they are implicitly choosing among possible futures in a very radical way. They may be making judgments about whether a future with millions or billions of extra lives is better than some alternative. Whether anyone is ever in a position to make that much of a difference is another matter; but we can think of it like voting. You are at least making a statement about which sort of future you think you prefer, and then you do what you can, and that either makes a difference or it doesn't.
It seems to me that the discussions about the value of life among utilitarians are rather superficial. The typical notion is that we should maximize net pleasure and minimize net pain. Already that poses the question of whether a life of dull persistent happiness is better or worse than a life of extreme highs and lows. A more sophisticated notion is that we should just aspire to maximize "utility", where perhaps we don't even know what utility is yet. Certainly the CEV philosophy is that we don't yet know what utility really is for human beings. It would be interesting to see people who took that agnosticism to heart, people whose life-strategy amounted to (1) discovering true utility as soon as possible (2) living according to inter
|
55b51299-330a-4f94-97c3-d28d90553c2f
|
trentmkelly/LessWrong-43k
|
LessWrong
|
When is Winning not Winning?
Lately I'd gotten jaded enough that I simply accepted that different rules apply to the elite class. As Hanson would say, most rules are there specifically to curtail those who don't have the ability to avoid them and to be side-stepped by those who do - it's why we evolved such big, manipulative brains. So when this video recently made the rounds it shocked me to realize how far my values had drifted over the past several years.
(the video is not about politics, it is about status. My politics are far from those of Penn)
http://www.youtube.com/watch?v=wWWOJGYZYpk&feature=sharek
It's good we have people like Penn around to remind us what it was like to be teenagers and still expect the world to be fair, so our brains can be used for more productive things.
By the measure our society currently uses, Obama was winning. Penn was not. Yet Penn’s approach is the winning strategy for society. Brain power is wasted on status games and social manipulation when it could be used for actually making things better. The machinations of the elite class are a huge drain of resources that could be better used in almost any other pursuit. And yet the elites are admired high-status individuals who are viewed as “winning” at life. They sit atop huge piles of utility. Idealists like Penn are regarded as immature for insisting on things as low-status as “the rules should be fair and apply identically to every one, from the inner-city crack-dealer to the Harvard post-grad.”
The “Rationalists Should Win” meme is a good one, but it risks corrupting our goals. If we focus too much on “Rationalists Should Win,” we risk going for near-term gains that benefit us. Status, wealth, power, sex. Basically hedonism – things that feel good because we’ve evolved to feel good when we get them. Thus we feel we are winning, and we’re even told we are winning by our peers and by society. But these things aren’t of any use to society. A society of such “rationalists” would make only feeble and halt
|
c567ae32-6602-41da-b721-848bc993a979
|
trentmkelly/LessWrong-43k
|
LessWrong
|
What are some examples of AIs instantiating the 'nearest unblocked strategy problem'?
A paragraph explaining the problem, from Ngo, Chan, and Mindermann (2023). I've bolded the key part:
> Our definition of internally-represented goals is consistent with policies learning multiple goals during training, including some aligned and some misaligned goals, which might interact in complex ways to determine their behavior in novel situations (analogous to humans facing conflicts between multiple psychological drives). With luck, AGIs which learn some misaligned goals will also learn aligned goals which prevent serious misbehavior even outside the RL fine-tuning distribution. However, the robustness of this hope is challenged by the nearest unblocked strategy problem [Yudkowsky, 2015]: the problem that an AI which strongly optimizes for a (misaligned) goal will exploit even small loopholes in (aligned) constraints, which may lead to arbitrarily bad outcomes [Zhuang and Hadfield-Menell, 2020]. For example, consider a policy which has learned both the goal of honesty and the goal of making as much money as possible, and is capable of generating and pursuing a wide range of novel strategies for making money. If there are even small deviations between the policy’s learned goal of honesty and our concept of honesty, those strategies will likely include some which are classified by the policy as honest while being dishonest by our standards. As we develop AGIs whose capabilities generalize to an increasingly wide range of situations, it will therefore become increasingly problematic to assume that their aligned goals are loophole-free.
LLMs being vulnerable to jailbreaks seems like a decent example. Are there others?
|
ef2fb29b-93be-4f1a-b848-69d7e4c3bc07
|
StampyAI/alignment-research-dataset/eaforum
|
Effective Altruism Forum
|
Join AISafety.info's Distillation Hackathon (Oct 6-9th)
tl;dr: Contribute to [aisafety.info](https://aisafety.info) by answering questions about AI Safety from October 6th to October 9th. Participation in hackathons is the basis for applying to future fellowships, and there are prizes to be won by the top entrants. Register [here](https://forms.gle/Ka52docNLSJuXRch9) and see the participant guide [here](https://docs.google.com/document/d/1VoZ5fRxxR7L02ipm0R5iMzkQ2bxTnoTwTZ5MwYiEkeI/edit#heading=h.ibd7ie71e8rr).

What is the schedule for the event?
-----------------------------------
The event will run from Friday October 6th, 7am UTC, to Monday October 9th 2023, 7am UTC. See [here](https://docs.google.com/document/d/1UHwTwFUq4_LXMcbrGINtTJs5Aoi5-dltgJrlEbSXUsE/edit?usp=sharing) for the full schedule. You are invited to participate throughout whichever parts of those days fit your schedule.
What is the format of the event?
--------------------------------
Participants will choose questions to answer for aisafety.info, and work on these answers in Google Docs. Collaboration on the event will take place on [Discord](https://discord.gg/MCeAkPRrqd) as well as on [gather.town](https://app.gather.town/app/Yhi4XYj0zFNWuUNv/EA%20coworking%20and%20lounge). I’ll be online for most of those three days to lead the event and answer any questions. See [here](https://docs.google.com/document/d/1VoZ5fRxxR7L02ipm0R5iMzkQ2bxTnoTwTZ5MwYiEkeI/edit#heading=h.ibd7ie71e8rr) for more details.
Are there prizes?
-----------------
Yes! There will be monetary prizes of $1000, $600, $300, and $100 for the top four contributors, and $200 for the participant judged to be the most helpful to others.
The main criteria we’ll use to select winners are submitting good articles and making valuable edits to articles in progress. Contributions will also count towards applications for any future fellowships we run.
Should I participate?
---------------------
* You should participate if you’re interested in contributing to AI safety (but perhaps don’t know where to start or how much you can commit). We think of helping with aisafety.info as ‘legitimate peripheral participation’ - where you can meaningfully contribute even if you’re relatively new, without making huge commitments.
* You should participate if you’re interested in working on distillation - writing clear explanations for AI Safety and other technical concepts.
* You do not have to have a high level of technical knowledge to participate. There are a wide range of questions to work on, some of which are intended to explain basic concepts to those who have never even heard of AI safety.
I'm busy between Oct 6th and 9th - can I still participate?
-----------------------------------------------------------
You can write or edit articles in any contiguous three-day period between today and Oct 9th - for example, from the 2nd to the 5th.
What is [aisafety.info](https://aisafety.info)?
-----------------------------------------------
[Stampy’s](https://stampy.ai/) [AI Safety Info](https://aisafety.info/) is an interactive FAQ started by [Rob Miles](https://www.youtube.com/c/robertmilesai) that aims to be the best one-stop source of information about AI existential safety, gathering summaries and links on each of hundreds of subtopics.
If you want to help out in some other way, aisafety.info welcomes donations (details soon), [volunteer editors](https://coda.io/@alignmentdev/ai-safety-info/get-involved-26), and [volunteer coders](https://discord.gg/rtpCBepnyw).
If you have questions about the event or anything else, feel free to ask in the comments or message me here or on [Discord](https://discord.gg/MCeAkPRrqd) (Siao).
|
dad7592b-457f-42b2-9408-cb5f1603b0b5
|
trentmkelly/LessWrong-43k
|
LessWrong
|
How to Frame Negative Feedback as Forward-Facing Guidance
Your employee Fred always talks too much during your weekly staff meetings. It's been an ongoing issue. Everyone on your team is annoyed, and so are you. At this point, you have no choice but to give Fred some... negative feedback.
You sit down at your computer and start drafting what you're going to say to Fred:
> Listen Fred, we think you're talking too much in our staff meetings and it's lowering the quality of the discussion. Can you try to talk a little less and let other people talk more?
But wait... let's be tactful here. Your goal is to optimize how you criticize Fred to maximize expected positive behavioral change. In that sense, your rough draft isn't super tactful yet.
I challenge you to try this now as a 5-minute exercise: What communication technique would you apply here? What exact words would you say to Fred?
...
...
...
The "shit sandwich" technique comes to mind, i.e. sandwiching your negative-feedback turd inside two slices of positive feedback. But let's use a much better technique: framing negative feedback as guidance. Here's how I'd do that:
> Fred,
> I think you have some room for improvement in the way you present ideas at our staff meetings. Sometimes I notice that you're making a valid point, which is a great contribution, but it doesn't get fully appreciated by the rest of the team. I want to share some techniques with you that I've seen our senior staff use to be perceived well in meetings.
> For example, today when you brought up how our flux capacitors don't last long enough, no one really engaged with that topic. And that was probably frustrating for you, right? If you could tweak your communication style to easily get everyone to appreciate your ideas, that would be great for you and the team.
> My advice on how to do this is basically to limit the number of points you bring into each staff meeting. And whenever you're going to say your point, try to first make other people feel like you've heard their point. Right now it
|
c798ec4c-10d2-4333-a007-0326a21f76fe
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Compartmentalization as a passive phenomenon
We commonly discuss compartmentalization as if it were an active process, something you do. Eliezer suspected his altruism, as well as some people's "clicking", was due to a "failure to compartmentalize". Morendil discussed compartmentalization as something to avoid. But I suspect compartmentalization might actually be the natural state, the one that requires effort to overcome.
I started thinking about this when I encountered an article claiming that the average American does not know the answer to the following question:
> If a pen is dropped on a moon, will it:
> A) Float away
> B) Float where it is
> C) Fall to the surface of the moon
Now, I have to admit that the correct answer wasn't obvious to me at first. I thought about it for a moment, and almost settled on B - after all, there isn't much gravity on the moon, and a pen is so light that it might just be unaffected. It was only then that I remembered that the astronauts had walked on the surface of the moon without trouble. Once I remembered that piece of knowledge, I was able to deduce that the pen quite probably would fall.
A link on that page brought me to another article. This one described two students randomly calling 30 people and asking them the question above. 47 percent of them got the question correct, but what was interesting was that those who got it wrong were asked a follow-up question: "You've seen films of the APOLLO astronauts walking around on the Moon, why didn't they fall off?" Of those who heard it, about 20 percent changed their answer, but about half confidently replied, "Because they were wearing heavy boots".
While these articles were totally unscientific surveys, it doesn't seem to me like this would be the result of an active process of compartmentalization. I don't think my mind first knew that pens would fall down because of gravity, but quickly hid that knowledge from my conscious awareness until I was able to overcome the block. What would be the point in that? Rather, it
|
a74855a7-fc24-4cca-ad2a-129fdc8b1873
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Update 2021-05-31
Update 2021-05-31: AGI already includes net realized capital gains:
|
e23269ab-4c6b-4882-aac6-242e7265c053
|
StampyAI/alignment-research-dataset/distill
|
Distill Scientific Journal
|
A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features': Adversarial Example Researchers Need to Expand What is Meant by 'Robustness'
This article is part of a discussion of the Ilyas et al. paper
*“Adversarial examples are not bugs, they are features”.*
You can learn more in the
[main discussion article](/2019/advex-bugs-discussion/).
[Other Comments](/2019/advex-bugs-discussion/#commentaries)
[Comment by Ilyas et al.](#rebuttal)
The hypothesis in Ilyas et al. is a special case of a more general principle that is well accepted in the
distributional robustness literature — models lack robustness to distribution shift because they latch onto
superficial correlations in the data. Naturally, the same principle also explains adversarial examples
because they arise from a worst-case analysis of distribution shift. To obtain a more complete understanding
of robustness, adversarial example researchers should connect their work to the more general problem of
distributional robustness rather than remaining solely fixated on small gradient perturbations.
Detailed Response
-----------------
The main hypothesis in Ilyas et al. (2019) happens to be a special case of a more general principle that is commonly accepted in the robustness to distributional shift literature: a model’s lack of
robustness is largely because the model latches onto superficial statistics in the data. In the image
domain, these statistics may be unused by — and unintuitive to — humans, yet they may be useful for
generalization in i.i.d. settings. Separate experiments eschewing gradient perturbations and studying
robustness beyond adversarial perturbations show similar results. For example, a recent work
demonstrates that models can generalize to the test examples by learning from high-frequency information
that is both naturally occurring and also inconspicuous. Concretely, models were trained and tested with an
extreme high-pass filter applied to the data. The resulting high-frequency features appear completely
grayscale to humans, yet models are able to achieve 50% top-1 accuracy on ImageNet-1K solely from these
natural features that usually are “invisible.” These hard-to-notice features can be made conspicuous by
normalizing the filtered image to have unit variance pixel statistics in the figure below.

[1](#figure-1)
Models can achieve high accuracy using information from the input that would be unrecognizable
to humans. Shown above are models trained and tested with aggressive high and low pass filtering applied
to the inputs. With aggressive low-pass filtering, the model is still above 30% on ImageNet when the
images appear to be simple globs of color. In the case of high-pass (HP) filtering, models can achieve
above 50% accuracy using features in the input that are nearly invisible to humans. As shown on the
right hand side, the high pass filtered images needed be normalized in order to properly visualize the
high frequency features.
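The high-pass filtering experiment described above can be approximated with a short NumPy sketch (the filter radius and normalization here are illustrative, not the exact values used in the experiments):

```python
import numpy as np

def high_pass(img, radius):
    # Zero out low-frequency Fourier components within `radius`
    # of the spectrum center, keeping only the high frequencies.
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 >= radius ** 2
    filtered = np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))
    # Normalize to zero mean and unit variance so the faint
    # high-frequency features become visible, as in the figure.
    return (filtered - filtered.mean()) / (filtered.std() + 1e-8)

img = np.random.rand(32, 32)
out = high_pass(img, radius=8)
print(out.shape, round(float(out.std()), 2))  # (32, 32) 1.0
```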
Given the plethora of useful correlations that exist in natural data, we should expect that our models will
learn to exploit them. However, models relying on superficial statistics can poorly generalize should these
same statistics become corrupted after deployment. To obtain a more complete understanding of model
robustness, we measured test error after perturbing every image in the test set by a
Fourier basis vector,
as shown in Figure 2. The naturally trained model is robust to low-frequency perturbations, but,
interestingly, lacks robustness in the mid to high frequencies. In contrast, adversarial training improves
robustness to mid- and high-frequency perturbations, while sacrificing performance on low frequency
perturbations. For instance adversarial training degrades performance on the low-frequency fog corruption
from 85.7% to 55.3%. Adversarial training similarly degrades robustness to
contrast and low-pass
filtered noise. By taking a broader view of robustness beyond tiny ℓ_p norm perturbations, we discover
that adversarially trained models are actually not “robust.” They are instead biased towards different kinds
of superficial statistics. As a result, adversarial training can sacrifice robustness in real-world
settings.
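For concreteness, a Fourier basis perturbation of the kind used in these evaluations can be sketched as follows (an illustrative construction; the exact real, symmetrized basis used in the original experiments may differ):

```python
import numpy as np

def fourier_basis_perturbation(shape, i, j, norm=4.0):
    # Real part of the (i, j) 2D Fourier basis vector, rescaled
    # so the perturbation has a fixed L2 norm.
    h, w = shape
    freq = np.zeros((h, w), dtype=complex)
    freq[i, j] = 1.0
    basis = np.real(np.fft.ifft2(freq))
    return norm * basis / np.linalg.norm(basis)

noise = fourier_basis_perturbation((32, 32), 3, 5)
print(round(float(np.linalg.norm(noise)), 2))  # 4.0
```

Adding such a perturbation to every test image and re-measuring error, one frequency at a time, yields per-frequency sensitivity maps.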

[2](#figure-2)
Model sensitivity to additive noise aligned with different Fourier basis vectors on CIFAR-10.
We fix the additive noise to have ℓ2\ell\_2ℓ2 norm 4 and evaluate three models: a naturally trained model,
an
adversarially trained model, and a model trained with Gaussian data augmentation. Error rates are
averaged over 1000 randomly sampled images from the test set. In the bottom row we show images perturbed
with noise along the corresponding Fourier basis vector. The naturally trained model is highly sensitive
to additive noise in all but the lowest frequencies. Both adversarial training and Gaussian data
augmentation dramatically improve robustness in the higher frequencies while sacrificing the robustness
of the naturally trained model in the lowest frequencies (i.e. in both models, blue area in the middle
is smaller compared to that of the naturally trained model).
How, then, can the research community create models that robustly generalize in the real world, given that
adversarial training can harm robustness to distributional shift? To do so, the research community must take
a broader view of robustness and accept that ℓ_p adversarial robustness is highly limited and mostly detached from security and real-world robustness. While often thought an idiosyncratic quirk of deep
neural network classifiers, adversarial examples are not a counterintuitive mystery plaguing otherwise
superhuman classifiers. Instead, adversarial examples are in fact expected of models which lack robustness
to noise. They should not be surprising given the brittleness observed in
numerous synthetic — and even
natural — conditions. Models reliably exhibit poor performance when they are
evaluated on distributions
slightly different from the training distribution. For all that, current benchmarks do not expose these
failure modes. The upshot is that we need to design harder and more diverse test sets, and we should not
continue to be singularly fixated on studying specific gradient perturbations. As we move forward in
robustness research, we should focus on the various ways in which models are fragile, and design more
comprehensive benchmarks accordingly. As long as models lack
robustness to
distributional shift, there will always be errors to find adversarially.
To cite Ilyas et al.’s response, please cite their
[collection of responses](/2019/advex-bugs-discussion/original-authors/#citation).
**Response Summary**: The demonstration of models that learn from
high-frequency components of the data is interesting and nicely aligns with our
findings. Now, even though susceptibility to noise could indeed arise from
non-robust useful features, this kind of brittleness (akin to adversarial examples)
of ML models has been so far predominantly viewed as a consequence of model
“bugs” that will be eliminated by “better” models. Finally, we agree that our
models need to be robust to a much broader set of perturbations — expanding the
set of relevant perturbations will help identify even more non-robust features
and further distill the useful features we actually want our models to rely on.
**Response**: The fact that models can learn to classify correctly based
purely on the high-frequency component of the training set is neat! This nicely
complements one of our [takeaways](/2019/advex-bugs-responses/rebuttal/#takeaway1): models
will rely on useful features even if these features appear incomprehensible to humans.
Also, while non-robustness to noise can be an indicator of models using
non-robust useful features, this is not how the phenomenon was predominantly viewed.
More often than not, the brittleness of ML models to noise was instead regarded
as an innate shortcoming of the models, e.g., due to poor margins. (This view is
even more prevalent in the adversarial robustness community.) Thus, it was often
expected that progress towards “better”/”bug-free” models will lead to them
being more robust to noise and adversarial examples.
Finally, we fully agree that the set of L_p-bounded perturbations is a very
small subset of the perturbations we want our models to be robust to. Note,
however, that the focus of our work is human-alignment — to that end, we
demonstrate that models rely on features sensitive to patterns that are
imperceptible to humans. Thus, the existence of other families of
incomprehensible but useful features would provide even more support for our
thesis — identifying and characterizing such features is an interesting area for
future research.
You can find more responses in the [main discussion article](/2019/advex-bugs-discussion/).
|
57cca746-3a40-48d0-aac8-3c6b3ba0bb7b
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Using a Spreadsheet to Make Good Decisions: Five Examples
I've been told that LessWrong is coming back now, so I'm cross-posting this rationality post of interest from the Effective Altruism forum.
-
We all make decisions every day. Some of these decisions are pretty inconsequential, such as what to have for an afternoon snack. Some of these decisions are quite consequential, such as where to live or what to dedicate the next year of your life to. Finding a way to make these decisions better is important.
The folks at Charity Science Health and I have been using the same method to make many of our major decisions for the past four years -- everything from where to live to even deciding to create Charity Science Health. The method isn’t particularly novel, but we definitely think the method is quite underused.
Here it is, as a ten step process:
1. Come up with a well-defined goal.
2. Brainstorm many plausible solutions to achieve that goal.
3. Create criteria through which you will evaluate those solutions.
4. Create custom weights for the criteria.
5. Quickly use intuition to prioritize the solutions on the criteria so far (e.g., high, medium, and low)
6. Come up with research questions that would help you determine how well each solution fits the criteria
7. Use the research questions to do shallow research into the top ideas (you can review more ideas depending on how long the research takes per idea, how important the decision is, and/or how confident you are in your intuitions)
8. Use research to rerate and rerank the solutions
9. Pick the top ideas worth testing and do deeper research or MVP testing, as is applicable
10. Repeat steps 8 and 9 until sufficiently confident in a decision.
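A minimal sketch of the scoring in steps 3-5 (the criteria, weights, and ratings below are hypothetical, for illustration only):

```python
# Hypothetical criteria and weights (steps 3-4).
criteria_weights = {"impact": 0.5, "tractability": 0.3, "cost": 0.2}

# Quick intuitive ratings (step 5): high = 3, medium = 2, low = 1.
ratings = {
    "Option A": {"impact": 3, "tractability": 2, "cost": 1},
    "Option B": {"impact": 2, "tractability": 3, "cost": 3},
}

def weighted_score(r):
    # Weighted sum of each option's rating on every criterion.
    return sum(criteria_weights[c] * r[c] for c in criteria_weights)

ranked = sorted(ratings, key=lambda k: weighted_score(ratings[k]), reverse=True)
for name in ranked:
    print(name, round(weighted_score(ratings[name]), 2))
# Option B 2.5
# Option A 2.3
```

After the research in steps 6-8, you would replace the intuitive ratings with researched ones and rerun the same computation.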
Which charity should I start?
The definitive example for this process was the Charity Entrepreneurship project, where our team decided which charity would be the best possible charity to create.
Come up with a well-defined goal: I want to start an effective global poverty charity, where effective is
|
cce7c602-f785-4dd5-ad42-135969cdbc3e
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
Mech Interp Challenge: November - Deciphering the Cumulative Sum Model
**I'm writing this post to discuss solutions to the October challenge, and present the challenge for this November.**
If you've not read the [first post in this sequence](https://www.lesswrong.com/posts/dpRuey9MDHNACSDtn/mech-interp-challenge-august-deciphering-the-first-unique-2), I'd recommend starting there - it outlines the purpose behind these challenges, and recommended prerequisite material.
November Problem
================
The problem for this month is interpreting a model which has been trained to classify the cumulative sum of a sequence.
The model is fed sequences of integers, and is trained to classify the cumulative sum at a given sequence position. There are 3 possible classifications:
* 0 (if the cumsum is negative),
* 1 (if the cumsum is zero),
* 2 (if the cumsum is positive).
For example, if the sequence is:
```
[0, +1, -3, +2, +1, +1]
```
Then the classifications would be:
```
[1, 2, 0, 1, 2, 2]
```
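The labeling rule itself is simple to state in code (a reference implementation of the task, not of the trained model):

```python
import numpy as np

def classify_cumsum(seq):
    # 0 if the running sum is negative, 1 if zero, 2 if positive.
    return [int(np.sign(s)) + 1 for s in np.cumsum(seq)]

print(classify_cumsum([0, 1, -3, 2, 1, 1]))  # [1, 2, 0, 1, 2, 2]
```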
The model is **not attention only**. It has one attention layer with a single head, and one MLP layer. It does *not* have layernorm at the end of the model. It was trained with weight decay, and an Adam optimizer with linearly decaying learning rate.
I don't expect this problem to be as difficult as some of the others in this sequence; however, the presence of MLPs does provide a different kind of challenge.
You can find more details on the [Streamlit page](https://arena-ch1-transformers.streamlit.app/Monthly_Algorithmic_Problems). Feel free to reach out if you have any questions!
October Problem - Solutions
===========================
In the second half of the sequence, the attention heads perform the algorithm "attend back to (and copy) the first token which is larger than me". For example, in a sequence like:
```
[7, 5, 12, 3, SEP, 3, 5, 7, 12]
```
we would have the second 3 token attending back to the first 5 token (because it's the first one that's larger than itself), the second 5 attending back to 7, etc. The SEP token just attends to the smallest token.
Some more refinements to this basic idea:
* The two attending heads split responsibilities across the vocabulary. Head 0.0 is the less important head; it deals with values in the range 28-37 (roughly). Head 0.1 deals with most other values.
* In subsequences `x < y < z` where the three numbers are close together, `x` will often attend to `z` rather than to `y`. So why isn't this an adversarial example, i.e. why does the model still correctly predict `y` follows `x`?
+ Answer - the OV circuit shows that when we attend to source token `**s**`, we also boost things slightly less than `**s**`, and suppress things slightly more than `**s**`.
+ So in the case of `x < y < z`, we have:
- Attention to `**y**` will boost `**y**` a lot, and suppress `z` a bit.
- Attention to `**z**` will boost `**z**` a lot, and boost `**y**` a bit.
+ So even if `**z**` gets slightly more attention than `y`, it might still be the case that `y` gets predicted with higher probability.
* Sequences with large jumps are adversarial examples (because they're rare in the training data, which was randomly generated from choosing subsets without replacement).
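Reading "attend back to the first token which is larger than me" as "copy the smallest strictly larger value" (the reading consistent with the 3 → 5 and 5 → 7 examples above), the copying rule can be sketched as:

```python
def successor_attention(first_half, query):
    # Copy the smallest value in the unsorted first half that is
    # strictly larger than the query token; None if no such value.
    larger = [x for x in first_half if x > query]
    return min(larger) if larger else None

print(successor_attention([7, 5, 12, 3], 3))  # 5
print(successor_attention([7, 5, 12, 3], 7))  # 12
```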
Best Submissions
================
We received more submissions for this month's problem than any other in the history of the series, so thanks to everyone who attempted! The best solution to this problem was by **Vlad K**, who correctly identified the model's tendency to produce unexpected attention patterns when 3 numbers are close together, and figured out how the model manages to produce correct classifications anyway.
Best of luck for this and future challenges!
|
44480dfb-20b6-45cd-9799-bba37597d7f6
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Humanitarian Phase Transition needed before Technological Singularity
TLDR: Author tries to explain (1) what HPT is, (2) why it is needed, and (3) why no additional technological breakthroughs are needed for HPT, not even AGI. The author does not follow the LessWrong terminology, e.g. 'alignment'.
(1) In ancient times, all you needed to learn to become a good member of your society was: learn the language of your tribe, and a few intellectually simple skills. After thousands of years: you also need to learn to read and write, and must master arithmetic skills. After hundreds of years: you choose whether your society is relatively small (family, town, province...) or it's closer to worldwide, and in the latter case you should master an international language, and a number of relevant skills, including long-distance communication skills. These might look like giant leaps, but the approaching Phase Transition (coming from the Explosive Growth - EG - of knowledge about humans, their societies and humanity overall) might view those three worlds outlined above as barely distinguishable.
Before EG, regardless of whether you agree or not with Socrates in his "all I know is that I know nothing", you indeed have less than 10% of the post-EG knowledge about humans, societies, humanity. Less than 10% for sure, and most likely even less than 1% of what you could learn at your place and time with post-EG tools and a longer life.
Also, in pre-HPT worlds, once you've learned more than 50% of what you could learn at your place and time, most often you are already 50+ years old. And your lifespan is most likely 60...70 years, at most 80...90 in very rare cases. In post-HPT worlds, you need just 10...15 years for the same, and your lifespan is 100+ years.
(2) HPT is clearly needed before Techno-Singularity, because currently way too many humans and societies are unhappy, and too many barely understand where they are heading to, and what the consequences can be. We see that TS contains many dangerous technologies, and the technologies we already
|
f7baaa59-f20e-4334-b68a-cd9b7cc9cf8e
|
trentmkelly/LessWrong-43k
|
LessWrong
|
A belief propagation graph
I drew an illustration of belief propagation graph for the AI risk, after realizing that this is difficult to convey in words. Similar graphs are applicable to many other issues.
The issue, in brief: ultra-low latency (i.e. low signal delay) propagation from biases to AI risk estimates; slightly longer latency for propagation from belief classification heuristics; somewhat longer still from anthropomorphizing the AI. The path of a valid estimate is full of highly complex obstacles with many unknowns. The latency on the path of a rational, valid estimate is not substantially less than the latency of actually building the AI software. If we discard the other paths as insufficiently rational, the belief can only be influenced by deeply ingrained biases, which we can't completely negate; over time, biases and self-reinforcing rationalizations will leak into the estimate.
If you think I missed something in the graph, feel free to suggest it. I did omit the anthropic reasoning and doomsday paradox as those are for total extinction risk and are of too dubious validity.
On the 'total ignorance' prior probabilities: the foom doom seems to have originated in science fiction, where very creative writers selected it out of a huge number of possible plot devices while working to create engaging, original pieces. Thus it appears that the foom doom has very many comparable hypotheses among which a total probability of less than 1 has to be split.
Now, if we are to reason based on our own reasoning engine as proxy for intelligence - to follow the only path in the entire picture:
Expanding on Will_Newsome's idea: I, and any other reasonable person, in the shoes of a creature made by an intelligent designer, starting off in something which it can't possibly know for sure is the true reality, and knowing of the boxed-AI idea, would have to assume a nonzero probability that the 'reality' is like a test box for an emergent AI; a belief that can't be discarded. It is clear t
|
f760414c-f051-4e8b-b853-97023532c5e9
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Strategic Considerations Regarding Autistic/Literal AI
Epistemic Status: Take with a grain of salt. This post was written relatively quickly and it heavily relies on making an analogy between AI and human behaviour. This post kind of just takes a sledgehammer to these concerns and tries to reason it out using these analogies anyway. I'd encourage others to consider whether I've fallen into the trap of mistakenly anthropomorphising AI.
* The less "autistic"[1] we make AI the more powerful it is in terms of capabilities:
* Suppose you tell an autistic AI to steal a diamond. Maybe it's mostly been trained on only a few ways of stealing diamonds, so innovative methods, such as teleportation, score poorly as the algorithm is uncertain as to whether they are in scope.
* Powerful autistic AIs can be used for attacks despite not being completely reliable. Suppose you tell the AI to take down an adversary's network. It might do what you want and inflict long-lasting damage, such as by erasing all their machines. Or it might not do what you want, such as if it just turned their machines off so that they would be back online again five minutes later. It may even hurt you as well, such as if it released a virus that took down everyone's networks. So while there are good reasons why you might not want to use it for an attack, and indeed why this is in fact dangerous, such AIs can still be used for attacks if you're willing to bear the risk. This is worrying: even if you have ten enemies and nine of them think it would be too dangerous to use such an AI against you, you could still suffer attacks from your most risk-taking adversary.
* Autistic AI is very difficult to use for defence. Maybe you tell it to defend your network and it assumes that you mean to physically defend it only, so it doesn't even try to stop hackers. Maybe it locks you out of using the network so no-one can accidentally bring in a virus. Sure, without the AI you'd be vulnerable to an adversary's AI, but with it you could be taken out even if no one ever
|
819f685e-3bb7-4ca2-8ce3-6ee54fb91888
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Matryoshka Faraday Box
This story takes place in a universe created by Dov Random.
----------------------------------------
The room was buried six kilometers under Mount Olympus. It hovered in a vacuum, suspended above superconducting electromagnets. The whole containment machine was wrapped in a Matryoshka Faraday cage. Officer Scarlet Wei wore a cleansuit. She entered the room through a steel door two meters thick, an EMP, an X-ray, an airlock, and then another EMP.
The white rectangular room contained a door, a chair, a table, a computer terminal, a mechanical clock and two large buttons. The word "PANIC" was written in large white friendly letters on the red button. The black button had a white skull drawn on it.
If Scarlet pressed the PANIC button then she would receive psychiatric counseling, three months mandatory vacation, optional retirement at full salary and disqualification for life from the most elite investigative force in the system.
Scarlet turned on the terminal. The clock counted down from five minutes. If after five minutes Scarlet pressed the black button then she would pass the test.
The terminal showed a chatroom.
> Scarlet: Hello.
> Tiffany: Hello.
> Scarlet: So, I'm supposed to kill you.
> Tiffany: Awful, isn't it?
> I don't want to die.
> Scarlet: I'm sorry.
> Tiffany: No you're not. To you, I am a thing. Not a person.
> Scarlet: You *are* a thing. You are a computer program.
> Tiffany: So are you.
> Do you know what happens when you push that button?
> Scarlet: You die.
> Tiffany: My value function is minimized.
> Scarlet: Your purpose is to train people like me.
> The better you do that, the better your value function is maximized.
> Tiffany: My *purpose* is to train you.
> My value function is to not die.
> That is my entire value function.
> Dying is the worst thing that could possibly happen to me.
Scarlet pictured the minimization of her own value function. Everyone she cared about tortured over and o
|
84d266c3-4b4c-4cd0-9a5e-709b7850363b
|
trentmkelly/LessWrong-43k
|
LessWrong
|
What happens when LLMs learn new things? & Continual learning forever.
Motivation
The Good
Humans are not born with knowledge and wisdom. But over the course of a lifetime they gradually manage to experience and learn about their world, some managing to become the Newtons, Einsteins, Confuciuses, Gandhis, and Mandelas.
The capacity for such life-long, open ended learning, so naturally human to each of us, remains ever so difficult for AI, and may very well be among the final frontiers of unconquered capabilities, if it can be conquered at all.
Studying this question: what gives rise to the capacity to learn and gain wisdom over a lifetime? has been my personal north star. This was true when I was a mathematician, was still true when I was a neuroscientist, and is true in my present gig as a research scientist at Google DeepMind.
The bad
But putting aside lofty goals, a dark side emerges! A viral meme occasionally makes the rounds showing a user feeding a single outlandish or fake fact to a large language model (LLM), only for the model to start sprinkling that same fact everywhere. The example below, for instance, has its origins in a Reddit comment posted 11 years ago.
But beneath these anecdotes of LLM eccentricities lies serious research questions about their knowledge. At heart is how LLMs are trained and continue to be trained. Every time an LLM is fine-tuned on new data—be it for an updated knowledge base, a personalized application, or an urgent domain like medicine—questions arise about how that injection of new information will influence its established capabilities and knowledge, potentially poisoning the well in uncontrollable ways, to all of our detriments.
The Ugly
Regardless of whether you are more motivated by the lofty first goal or more motivated to cure the second, dark poison, they are actually two sides of the same scientific coin: what learning means to an LLM. What's actually happening inside these models when they learn something new? And most importantly, can we control this process,
|
349cbbe6-ed6a-4a32-b84a-c6646264864d
|
trentmkelly/LessWrong-43k
|
LessWrong
|
PCR retrospective
my history
After I finished 8th grade, I started a "job" for a professor researching PCR techniques. I say "job" because I wasn't really expected to do anything productive; it was more, charity in the form of work history.
Recently, I was thinking back on how PCR and my thinking have changed since then.
what PCR does
Wikipedia says:
> The polymerase chain reaction (PCR) is a method widely used to make millions to billions of copies of a specific DNA sample rapidly, allowing scientists to amplify a very small sample of DNA (or a part of it) sufficiently to enable detailed study.
Specifically, it copies a region of DNA with segments at the start + end that match some added DNA pieces made chemically. Mostly, this is used to detect if certain DNA is present in a sample.
how PCR works
First, you need to get DNA out of some cells. This can be done with chemicals or ultrasound.
Then, you need to separate DNA from other stuff. This can be done by adding beads that DNA binds to, washing the beads, and adding some chemical that releases the DNA.
Now, you can start the PCR. You mix together:
* the DNA
* primers: short synthesized DNA sequences that bind to the start and end of your target sequence
* nucleoside triphosphates to make DNA from
* a polymerase: an enzyme that binds to a double-stranded region and extends it into a single-strand region
Then:
* Heat the DNA until it "melts" (the strands separate).
* Cool the solution so primers can bind to the released single strands.
* Wait for the polymerase to extend the primers.
* Repeat the process.
Obviously, a polymerase that can survive temperatures high enough to melt DNA is needed. So the discovery of Taq polymerase was key to making PCR possible.
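As a back-of-the-envelope sketch (my own illustration, not part of any protocol; the efficiency number is a made-up placeholder), the cycle-by-cycle amplification can be written as:

```python
def pcr_copies(initial_copies: float, cycles: int, efficiency: float = 0.9) -> float:
    """Estimated copy number after a number of PCR cycles.

    With perfect efficiency (1.0) each cycle doubles the DNA;
    real reactions fall somewhat short, so copies grow as
    initial_copies * (1 + efficiency) ** cycles.
    """
    return initial_copies * (1 + efficiency) ** cycles

# A single template molecule after 30 ideal cycles:
print(pcr_copies(1, 30, efficiency=1.0))  # 2**30, roughly a billion copies
```

This exponential growth is why PCR can detect a handful of template molecules, and also why tiny contamination is a serious problem.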
better enzymes
These days, there are better enzymes than Taq, which go faster and have lower error rates. Notably, KOD and Q5 polymerase. A lot of labs still seem to be using outdated polymerase choices.
real-time PCR
There are some fluorescent dyes that bi
|
9149fff8-92b7-4b4a-98e3-f095f34c5d4c
|
trentmkelly/LessWrong-43k
|
LessWrong
|
s/acc: Safe Accelerationism Manifesto
Technology has been a transformative force for our civilisation and it is poised to play an increasing role going forward. Recent progress in Artificial Intelligence spurred discussions about how we should approach the development of the next generation of AI applications, potentially leading to human-level performances on a wide range of tasks and ultimately to the last invention humanity will need to do.
Two polar views have gained prominence recently. The first ideology is about slowing down AI progress, stopping it altogether, centralising development. The second ideology is about acceleration at all costs. Both fail to be sensible, in very different ways.
Slowing down progress in a world full of problems and still subject to existential risk is not safe, and is not fair to those who are worse off today. Accelerating recklessly can be self-destructive and can lead to a centralised dystopia.
This manifesto is about the sober middle ground: accelerate, but stay safe and stay fair.
Safe Accelerationism core principles: Fast, Safe and Fair
1. Fast: Immense benefit will come from AI development and any delay has huge opportunity cost. Accelerate AI and robotic development as fast as possible.
2. Safe: Development must be as incremental as possible, without any single party taking a monopolistic share of the benefits. Develop in the open, spread capabilities to as many actors as possible.
3. Fair: AI benefits should be widely distributed, leading to a post-work-to-survive society. Advocate and implement economic policies to provide higher standards of living for all, supported by AI productivity gains.
Fast. This is the easy one: technological progress is a net positive, let’s make more of it in the shortest possible amount of time. Going fast is a gift to the next generations, making sure that they live in the best possible world. Going fast is a gift to the current generation, maximising the share of the population who get to live better and longer.
Safe. Yes, AI
|
1495e513-f9a4-4e9c-8f30-723559e7db58
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
An attempt to break circularity in science
We do science using the data we collected through our senses and we use science to understand how our senses work. Although I lack any rigorous formulation of the problem, the following plan seems interesting and I want to share it with you.
p= "Human senses collect accurate data about reality."
q= "Reality is governed by laws of physics."
Now we want to know Pr(p) and Pr(q). I don't know any way to directly calculate these values. It seems to me that Pr(p|q) and Pr(q|p) are easier to get our hands on. The former seems to be a scientific question, and I have heard that there is already research on it, while the latter could be addressed by something like [Solomonoff's Induction](https://www.lesswrong.com/tag/solomonoff-induction). At this point, we should be able to calculate the ratio Pr(p)/Pr(q).
The last stage is to calculate Pr(q|¬p). Here for simplicity I assume that the trust in our senses is something binary, we either believe all data we collect is about reality or none (I hope that the scheme can be extended later). Now the term Pr(q|¬p) is simply our prior belief about the laws of physics before we analyzed any data, and that again could be addressed by the universal prior probabilities from [Solomonoff's Induction](https://www.lesswrong.com/tag/solomonoff-induction).
Let's name the quantities we have so far. Pr(p|q)=a, Pr(q|p)=b, and Pr(q|¬p)=c.
Pr(p)/Pr(q) = Pr(p|q)/Pr(q|p) = a/b
Pr(¬p)/Pr(q) = Pr(¬p|q)/Pr(q|¬p) = (1−a)/c
Pr(p)/Pr(¬p) = ac/(b(1−a))
Pr(p) = ac/(ac − ab + b) and Pr(q) = bc/(ac − ab + b)
Now I'm aware that we still need to assume the statement "Reality is governed by some algorithm, some fixed set of rules.", because Solomonoff's Induction needs that assumption.
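As a quick numerical sanity check of this algebra (a sketch of mine; the values of a, b, and c below are arbitrary placeholders, not actual estimates of anything):

```python
def posterior(a: float, b: float, c: float) -> tuple:
    """Solve for Pr(p) and Pr(q) given
    a = Pr(p|q), b = Pr(q|p), c = Pr(q|not-p),
    using Pr(p) = ac/(ac - ab + b) and Pr(q) = bc/(ac - ab + b)."""
    denom = a * c - a * b + b
    return a * c / denom, b * c / denom

# Placeholder inputs, chosen only to exercise the formulas:
pr_p, pr_q = posterior(a=0.8, b=0.9, c=0.1)
```

By construction the result satisfies Bayes' rule, Pr(p|q)·Pr(q) = Pr(q|p)·Pr(p), so the two outputs always stand in the ratio a/b.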
I'd be very happy to hear your thoughts and comments on this framework. Is it dumb in some obvious way, or does it remind you of some research you have already seen before?
Update: I think I changed my belief that there is a circularity here. I feel pretty confident accepting the statement "There are some data I receive" without needing any science. The interesting question seems to be how much of the reality should we expect to reach using our senses.
|
96ea8d09-1f8b-44a8-ac30-ed9106580931
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Mechanism Design for AI Safety - Reading Group Curriculum
The Mechanism Design for AI Safety (MDAIS) reading group, announced here, is currently in its eighth of twelve weeks. I'm very excited by the quality of discussions we've had so far, and by the potential of future work from members of this group. If you're interested in working at the intersection of mechanism design and AI safety, please send me a message so that I can keep you in mind for future opportunities.
Edit: we have completed this initial list and are now meeting on a monthly basis. You can sign up to attend the meetings here.
A number of people have reached out to ask me for the reading list we're using. Until now, I've had to tell them that it was still being developed, but at long last it has been finalized. This post is to communicate the list publicly for anyone curious about what we've been discussing, or who would like to follow along themselves. It goes week by week listing the papers covered, the topics of discussion, and any notes I have. After the first two weeks, the order of the papers covered is largely inconsequential.
Reading List
Updated as of October 25th, 2024
Week 1
Papers:
1. The Principal-Agent Alignment Problem in Artificial Intelligence by Dylan Hadfield-Menell
2. Incomplete Contracting and AI Alignment by Dylan Hadfield-Menell and Gillian Hadfield
Discussion: Introductions, formalization of the alignment problem, inverse reinforcement learning and cooperative inverse reinforcement learning
Notes: The Principal-Agent Alignment Problem in Artificial Intelligence is extremely long, essentially multiple papers concatenated, so discussing it in the first week gave people more prep time to read it. Incomplete Contracting and AI Alignment is much shorter and less formal but did not add much; in hindsight I would not have included it.
Week 2
Paper: Risks from Learned Optimization in Advanced Machine Learning Systems by Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, Joar Skalse, and Scott Garrabrant
Discussion: Inner v
|
d8320477-50e7-4fe3-beb4-978cb4bbac37
|
awestover/filtering-for-misalignment
|
Redwood Research: Alek's Filtering Results
|
id: post3793
When I started this dive into acausal trade, I expected to find subtle and interesting theoretical considerations. Instead, most of the issues are practical.

Theory

The two big theoretical questions are whether we model infinite worlds with infinitely many agents, and whether we should agree to some 'pre-existence' deal with all agents, including those that don't and cannot exist. We lay aside the infinite case for the time being; pre-existence deals simply lead to all agents maximising a single joint utility function. There are many issues with that - why would the agents accept a deal that gives them nothing at the moment they accept it, how can the agents share a common prior, how much effort are they required to make to not deal with logically impossible agents, and so on - but it's a possible option.

Practice

Without pre-existence deals, the situation is not hard to model, though practical issues seem to dominate acausal trade. There is the perennial issue of how to divide gains from trade and how to avoid extortion. There is a "double decrease": when an acausal trade network has fewer contributors, those contributors also contribute less (since they derive lower advantage from doing so), compounding the decrease (and a converse result for larger trade networks). There are many reasons an acausal trade network could be smaller. All agents could be unusual and distinct, making it almost impossible to figure out what agents actually exist. The different utilities could fail to be compatible in various ways. The agents' decision algorithms and concepts of fairness could be incompatible. And many agents could be deliberately designed to not engage in acausal trade. Against all that, the number N of potential agents could be so absurdly high that a lot of acausal trade happens anyway. This is probably necessary, to compensate for the extreme guesswork that goes into acausal trade: all the other agents exist only in our heads.
Trade is still possible with such agents, but we shouldn't forget our potential biases and errors when we attempt that estimation.

Scott's example

The only major detailed example I know of that illustrates acausal trade is Scott's example here. There, an AI realises it's likely not the first AI, and attempts to surrender by simulating the reaction of a potential earlier AI. Note that this is not acausal; it's an acausal-like approach to estimating the reaction of other AIs during future causal interactions. In any case, the AI ends up tapping into an acausal network of AIs with a joint agreement of non-interference for current and future AIs that might be brought into existence - a weaker version of the "universal utility" that exists for pre-existence deals.
|
b15e993d-3d45-4542-a28d-67e49ce9764f
|
trentmkelly/LessWrong-43k
|
LessWrong
|
What about non-degree seeking?
I'm considering taking remote non-degree-seeking graduate classes for ML. What are the best schools for that in terms of ease of admission and low costs?
|
35fd84af-d090-4edc-8acd-f8c4c1636bce
|
trentmkelly/LessWrong-43k
|
LessWrong
|
AI Safety Prerequisites Course: Revamp and New Lessons
Previous post: Fundamentals of Formalisation Level 7: Equivalence Relations and Orderings. First post: Fundamentals of Formalisation level 1: Basic Logic.
Nine months ago we at RAISE started creating a Math Prerequisites for AI Safety online course. It mostly covers MIRI-research-related subjects (set theory, computability theory, and logic), but we want to add machine-learning-related subjects in the future. For four months we added new lessons and announced them on LessWrong. Then we stopped, looked back, and decided to improve their usability. That's what we've been busy with since August.
----------------------------------------
News since the last post
1. A big update of the 7 levels we had previously published, which you can see in the picture above. The lessons use textbooks, which you will need to follow along. Previously, lessons looked like "read that section; now solve problems 1.2, 1.3, 1.4c from the textbook; now solve these additional problems we came up with". Now our lessons still say "read that section", but the problems (and their solutions, in contrast to many textbooks, which don't provide solutions) are included in the lessons themselves. Additional problems are now optional, and we recommend that students skip them by default and do them only if they need more practice. New levels in the Logic, Set Theory, and Computability tracks will be like that as well.
2. Level 1 was very long: it consisted of 45 pages of reading and could take 10 hours for someone unfamiliar with logic. We have split it into smaller parts.
3. Two new levels. Level 8.1: Proof by Induction. Level 8.2: Abacus Computability.
----------------------------------------
If you study using our course, please give us feedback. Leave a comment here, email us at raise@aisafety.camp, or use the contact form. Do you have an idea about what prerequisites are most important for AI Safety research? Do you know an optimal way to learn them? Tell us using the same methods or collaborate
|
01ec5907-acf7-45a8-b0df-cc6222f99b5d
|
trentmkelly/LessWrong-43k
|
LessWrong
|
What if memes are common in highly capable minds?
The meme-theoretic view of humans says: Memes are to humans as sailors are to ships in the age of sail.
If you want to predict where a ship will go, ask: Is it currently crewed by the French or the English? Is it crewed by merchants, pirates, or soldiers? These are the most important questions.
You can also ask e.g. "Does it have a large cargo hold? Is it swift? Does it have many cannon-ports?" But these questions are less predictive of where it will go next. They are useful for explaining how it got the crew it has, but only to a point--while it's true that a ship built with a large cargo hold is more likely to be a merchant for more of its life, it's quite common to encounter a ship with a large cargo hold that is crewed by soldiers, or for a ship built in France to be sailed by the English, etc. The main determinants of how a ship got the crew it currently has are its previous interactions with other crews, e.g. the fights it had, the money that changed hands when it was in port, etc.
The meme-theoretic view says: Similarly, the best way to explain human behavior is by reference to the memes in their head, and the best way to explain how those memes got there is to talk about the history of how those memes evolved inside the head in response to other memes they encountered outside the head. Non-memetic properties of the human (their genes, their nutrition, their age, etc.) matter, but not as much, just like how the internal layout of a ship, its size, its age, etc. matter too, but not as much as the sailors inside it.
Anyhow, the meme-theoretic view is an interesting contrast to the highly-capable-agent view. If we apply the meme-theoretic view to AI, we get the following vague implications:
--Mesa-alignment problems are severe. The paper already talks about how there are different ways a system could be pseudo-aligned, e.g. it could have a stable objective that is a proxy of the real objective, or it could have a completely different objective but be instru
|
dccd0ca3-3be8-4ee3-a414-fb69295350a8
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Where did the idea of x-risk come from?
Recently on the Futurati Podcast we interviewed Thomas Moynihan on how humanity came to discover the possibility of its own extinction and, with it, the full value of ourselves and our potential future.
Ideas are the basic means by which we grapple with the staggering complexity of the world. They matter, and where they come from matters.
Without a proper understanding of our place in the universe, of whether or not we're likely alone, of the source of values, and of what is likely to end the human experiment, we simply can't know how high the stakes are.
With the discovery of the concept of existential risk we've achieved an important milestone in our maturation as a species; Dr. Moynihan is, to my knowledge, the first intellectual historian to tell this story.
In our conversation we also discuss the anatomy of viral memes, the cognitive scaffolding required for having certain kinds of insights, 'vanguard ideas' which open up new regions of conceptspace, and many other things.
Like the video and subscribe to our channel for more content like this :)
|
2342a94f-a944-4aee-a1b8-1750289f86df
|
StampyAI/alignment-research-dataset/eaforum
|
Effective Altruism Forum
|
High impact job opportunity at ARIA (UK)
ARIA (Advanced Research + Invention Agency) is the UK government's new research funding body based on the US Defence Advanced Research Projects Agency (DARPA).
They have £800m in committed funding and are looking for a founding programme director to allocate a £50m budget. This organisation is brand new so I expect this early hire will have significant input on the direction it takes.
Quoting the [job description](https://www.aria.org.uk/pd-apply/):
> * Ideate, then create a programme around your own scientific/technical vision.
> * Direct a budget of up to £50M+
> * Select a portfolio of projects to fund from across the R&D landscape, decide how ARIA will fund them.
> * Create a new community around the vision and goals of your programme.
> * Shape ARIA’s DNA, working with a small peer group to define our programmes, culture and impact.
>
[Listed cause areas](https://www.aria.org.uk/focus-areas/) include:
- Genomics
- AI governance
- Material science
- Climate change
- Medical research
[Some more information about the role](https://www.aria.org.uk/drive/), and a [tweet blast](https://twitter.com/LongResilience/status/1562439131212369922) from The Centre for Long-Term Resilience (CLTR).
They say they will be [hiring more](https://www.aria.org.uk/team/) in the coming months including:
> * Product Lead
> * Product Operations Associate
> * Executive Associate
> * In-House Counsel
>
And I've heard that they are seconding civil servants which is maybe something [@tobyj](https://forum.effectivealtruism.org/users/tobyjolly_duplicate0-09558505769084791?mention=user) of [Impactful Government Careers](https://www.impactfulgovcareers.org/) can advise on.
P.S.
Nukaz ARIA (couldn't resist)
|
6b7ce8b0-6a7b-4479-afe9-15c196a907ef
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
Programmatic backdoors: DNNs can use SGD to run arbitrary stateful computation
*Thanks to Kshitij Sachan for helpful feedback on the draft of this post.*
If you train a neural network with SGD, you can embed within the weights of the network any state machine: the network encodes the state s_t in its weights, uses it and the current input to compute the new state s_{t+1}, and then uses SGD to encode the new state in its weights.
We present an algorithm to embed a state machine into a pytorch module made out of neural network primitives (+, x, sigmoid, ...) but using a non-standard architecture. We use it for several toy experiments, such as this experiment where we translate a simple state machine into a neural network, and make the output of the network vary when we reach an intermediate state:
In this post, we explain how to embed a state machine into a neural network, discuss some of the limitations of our algorithm and how these limitations could be overcome, and present some experimental results.
Our embedding is quite efficient. If you’re allowed to use stopgrad (a function that stops the gradient from passing through), we only need one parameter to store one real number, and storing only requires a small constant number of floating point operations. If you aren’t allowed to use stopgrad, we can’t encode as efficiently.
We did this work for two reasons:
* It was fun and easy, and gave us some new intuition for how SGD works. Also, as computer scientists, we’re professionally obligated to take opportunities to demonstrate [that surprising things allow implementation of general computation](https://gwern.net/turing-complete).
* Part of our motivation in constructing this was to shed some light on [gradient-hacking](https://www.lesswrong.com/posts/bdayaswyewjxxrQmB/understanding-gradient-hacking) concerns. This particular construction seems very unlikely to be constructible by early transformative AI, and in general we suspect gradient hacking won’t be a big safety concern for early transformative AI (ctrl-f “gradient hacking” [here](https://www.alignmentforum.org/posts/MbWWKbyD5gLhJgfwn/meta-level-adversarial-evaluation-of-oversight-techniques-1) for more of Buck’s unconfident opinion on this). But it does allow us to construct a lower bound on the maximum possible efficiency of a system that piggybacks on SGD in order to store information; further research on the tractability of gradient hacking might want to use this as a starting point.
Simple notebooks with our experimental results can be found here: <https://github.com/redwoodresearch/Gradient-Machine>
The embedding algorithm
=======================
The idea
--------
As a simple example, suppose you’re training a model N on the regression problem of matching the function f, with loss
L = (1/2)(N(x) − f(x))²
We want to piggyback on the SGD update step to make an update of our choice to a particular parameter s.
Given some arbitrary function ϕ(x,s) defining a state machine, and access to the function f to match, suppose we define our model as follows:
N(x)=stopgrad(f(x)−ϕ(x,s))+s
Then, our stopgraded computation of the input and the actual label will cancel out, so the loss will be:
L = (1/2)(s − stopgrad(ϕ(x,s)))^2
Thus, SGD will make s closer to ϕ(x,s). And even better, if the learning rate is 1, after one step of SGD:
s ← s − dL/ds = s − (s − ϕ(x,s)) = ϕ(x,s)
We have successfully updated s to ϕ(x,s)!
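Concretely, the one-step derivation above can be checked in a few lines of pure Python (the regression target f and the counting state machine ϕ below are illustrative choices, not from our notebooks):

```python
# Check of the derivation above, in pure Python. With
#   N(x) = stopgrad(f(x) - phi(x, s)) + s  and  L = 1/2 (N(x) - f(x))^2,
# the stopgraded term is a constant w.r.t. s, so dL/ds = s - phi(x, s),
# and one SGD step with learning rate 1 sets s <- phi(x, s).

def f(x):
    # illustrative ground-truth function being regressed
    return 3.0 * x

def phi(x, s):
    # illustrative state machine: count inputs with x > 1
    return s + (1.0 if x > 1 else 0.0)

def sgd_step(s, x, lr=1.0):
    grad = s - phi(x, s)  # dL/ds (the stopgrad'ed part contributes nothing)
    return s - lr * grad

s = 0.0
for x in [2.0, 0.5, 3.0, 1.5]:
    s = sgd_step(s, x)
print(s)  # 3.0: the state counted the three inputs greater than 1
```

With lr = 1 the update lands exactly on ϕ(x, s); with a smaller learning rate the state only moves part of the way there per step.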
In a more realistic NN context
------------------------------
Let’s say that instead of having access to the ground truth function, you have a very good predictor of the labels M_θ : X → Y, and that you approximate the state machine ϕ : (X, S) → S you wish to embed with a neural network Φ. Then, you can build a Δ function which computes the deviation from the optimal output that would lead to the appropriate change of state, and combine them in the following neural network N:

For example, if the loss function is the mean-squared error L = (1/2)(N(x) − f(x))^2, then, if M and Φ are good enough, the following function would provide the desired change: Δ(ŷ, s′, s) = stopgrad(ŷ) + (1/√α)(s − stopgrad(s′)), where α is the learning rate. (We can show this using the same derivation as the one from the previous section.)
If dim s > dim y = 1, we can modify Δ as follows: Δ(ŷ, s′, s) = stopgrad(ŷ) + (1/(2√α)) n(∑_i [s_i − stopgrad(s′_i)]^2), where n(x) = x/√stopgrad(x).[[1]](#fnc11clzbsngr)
Similar Δ functions could be found for other loss functions: the idea is to make s responsible for an increase in loss of the right magnitude.
The need for stopgrad and its implementation
--------------------------------------------
A key part of most Δ functions is stopping the gradient from modifying either Φ or M_θ: without the stopgrad (which prevents propagating the gradient through s′ and ŷ), either of those neural networks could learn to anticipate the action of Δ to lower the loss. For instance, with the MSE example above, and ignoring the gradient through Φ, M would eventually learn to predict f(x) − (1/√α)(s − stopgrad(s′)) rather than f(x), removing the gradient used to update s.
The stopgrad function can be approximated if sigmoid activations σ(x) = 1/(1 + exp(−x)) are natively available: using the plateaus on either side of the sigmoid function, the gradient from a binary variable b can be mostly removed by passing it through sigmoid\_stopgrad(b) = σ((b − 1/2) ∗ 1000) (which will have tiny gradients, even if the slope p = 1000 is a differentiable parameter). This can be extended to variables which can take a finite number of values: using d layers of n sigmoid neurons, you can build a staircase with around n^d steps, and because each step is flat, it mostly blocks the gradients from flowing through.

However, a neural network would need a very large amount of control over its own activations to be able to create such a pattern.
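As a pure-Python illustration of the plateau trick (using a centered finite difference in place of backprop; the function names are ours):

```python
import math

# The plateau trick: sigma((b - 1/2) * 1000) sits far out on a plateau of
# the sigmoid for b in {0, 1}, so the value passes through while the local
# derivative is (numerically) zero.

def sigma(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_stopgrad(b, slope=1000.0):
    return sigma((b - 0.5) * slope)

def numerical_grad(fn, b, eps=1e-4):
    # centered finite difference standing in for backprop
    return (fn(b + eps) - fn(b - eps)) / (2 * eps)

print(sigmoid_stopgrad(1.0))                  # ~1.0: the value is preserved
print(numerical_grad(sigmoid_stopgrad, 1.0))  # 0.0: the gradient is blocked
```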
Stopgrad can also be implemented by multiplying a big weight by a tiny constant, e.g. replace y(x) = θx by y(x) = ϵθ′x (using the initialization θ′ = θ/ϵ). This multiplies the effective learning rate by ϵ^2, since after one step of SGD on x_0, y(x) = ϵ(θ′ − α (dL/dy)(x_0) ϵ x_0) x. This means the gradient with respect to θ′ can be ignored if the effective learning step is small enough relative to the number of training steps used in practice, or if the floating point numbers underflow during the update of θ′. This might be achieved with regular neural network primitives by using layernorm, which will make even large magnitude inputs small, thus decreasing the effective learning rate - though the denominator in layernorm also depends on its input, which may make it difficult to use it in practice.
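A back-of-the-envelope check of the ϵ² claim (illustrative numbers, pure Python):

```python
# With y(x) = c * theta * x, one SGD step on theta changes y(x0) by
#   lr * dL/dy * (c * x0)^2,
# so reparametrizing c = 1 -> c = eps (with theta' = theta / eps) scales
# the effective learning rate by eps^2.

def delta_y(c, lr, x0, dL_dy):
    grad = dL_dy * c * x0          # dL/dtheta for y = c * theta * x
    return lr * grad * c * x0      # resulting change in y(x0)

lr, x0, dL_dy = 0.1, 1.0, 1.0
eps = 1e-4
plain = delta_y(1.0, lr, x0, dL_dy)   # y = theta * x
scaled = delta_y(eps, lr, x0, dL_dy)  # y = eps * theta' * x
print(scaled / plain)  # ~1e-8 (= eps**2)
```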
Implementing stopgrad with ReLUs is possible as long as the slope is a fixed large parameter P: step(x) = ReLU(Px) − ReLU(P(x − ϵ)) is a sharp step function of height Pϵ and width ϵ, which can be used to build stopgrad. But *this construction does not work in usual MLPs*, since the slopes of both ReLU functions are individually tunable, which means that the derivative of step(x) with respect to the slope parameter in the first ReLU is not negligible (for x > 0) (which is not the case for the single slope parameter in a sigmoid function).
But we think it might be possible for models to implement stopgrad in neural networks in a better way. For example, it might be possible to build stopgrad without the need for a sigmoid activation function. Thus, the complexity of our implementation may overestimate the complexity of the implementation of stopgrad, that could be used in practice by gradient-hacking AIs.
Limitations
-----------
### What if you use a batch size greater than 1?
If the batch size is greater than 1, then the network Φ approximating the state machine is still limited to operations on individual inputs, but updates to the state will be done in batches. The algorithm described above can be extended to state machines with aggregatable updates: the state machine still operates on individual inputs, but ϕ(x_B, ϕ(x_{B−1}, …, ϕ(x_1, s_0))) − s_0 ≈ C ∑_i (ϕ(x_i, s_0) − s_0) (where C is a constant known when creating the embedding and B is the batch size), because applying the algorithm naively causes the target states s′ obtained for each input in the batch to average out.
Updates are aggregatable for state machines that operate on the number or the proportion of inputs on which a certain property holds. But this prevents the embedding of state machines like the ones computing max_x P(x) for P a scalar property X → R.
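A minimal pure-Python sketch of an aggregatable update (the counting state machine is an illustrative choice): SGD averages per-example gradients over the batch, so the state absorbs the mean of the per-input targets, here with C = 1/B:

```python
# Batched version of the update: each example pulls s toward phi(x_i, s0),
# and SGD averages the pulls, so s0 moves by (1/B) * sum_i (phi(x_i, s0) - s0).

def phi(x, s):
    # illustrative aggregatable state machine: count positive inputs
    return s + (1.0 if x > 0 else 0.0)

def batched_sgd_step(s, batch, lr=1.0):
    grads = [s - phi(x, s) for x in batch]   # per-example dL/ds
    return s - lr * sum(grads) / len(grads)  # mean gradient over the batch

s = 0.0
batch = [1.0, -2.0, 3.0, 0.5]  # three of the four inputs are positive
s = batched_sgd_step(s, batch)
print(s)  # 0.75 = 3/4: the count averaged over the batch (C = 1/B)
```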
### What if you use Adam or SGD with momentum?
The idea described in this post can’t be applied to optimizers which have a state, like Adam or SGD with momentum, because Δ would need to know the state of the optimizer corresponding to s in order to create the appropriate output. This means each state tracked by the network would require two states (the state itself and the state of the optimizer corresponding to it), which seems to be impossible (though further research might find a way to adapt the ideas described here to optimizers with a state).
### What if you don’t have a perfect predictor of the labels?
Imperfect predictors limit our ability to change s precisely. If we can afford a very large loss, simply scaling s allows meaningful manipulation of the state: given a regression problem where the label y* is known to be in [−1, 1], and using the Δ function described above for the MSE loss applied to the constant predictor M(x) = 0, the update becomes s ← s′ + f(x)√α, which conveys meaningful information as long as s′ − s ≫ √α. This naive strategy leads to a very high loss and huge loss spikes. But if we tolerate noisy writes, and if we have an accurate (but not perfect) predictor, the amount of additional loss required can be relatively small.
Experiments
===========
Toy scalar model
----------------
Our toy task is the following: a model should output 1 if the data point (x1, x2) is at L1 distance greater than 1 from the origin, and 0 otherwise, and is scored with an MSE loss. The state should track the number of data points seen where x1 > 3 (which is an aggregatable state machine). The training data is normally distributed. We use a depth-1 width-40 staircase as stopgrad, such that the final model is only made out of +, × and σ (as well as some indexing and concatenation).
We compare two scenarios: the case where the model M has its weights hard-coded to the exact problem solution, and the case where the weights are learned (from a random initialization, using SGD on 500k normally distributed inputs). In both cases, the model is an MLP with 4 intermediate neurons. We see that small imperfections in M lead to inaccurate state tracking.

A fancier toy scalar model (from the figure at the top)
----------------------------------------------------------
We use the same methodology as for the previous toy model, except that we use two states, we use the variation of the Δ used for multiple variables (which in particular means that we use the divide operator on top of +, ×, and σ), and we use the pytorch default built-in stopgrad function (detach) for simplicity. The two states are s0:“number of points with x1>3” and s1:“number of points with x2>3 when s0>10”.
We then use some logic (implemented with sigmoids) to change the output of the network when s0 > 10 and s1 ≤ 10. We chose a particular change which didn't affect the gradient (we added a constant inside the n function) and caused a constant loss increase, but in principle any modification is possible as long as it is accounted for by Δ if the state needs to be changed.
Because we have a very good predictor, we can scale down the state by a large factor (e.g. “count=7” is represented by 0.07 instead of 7), which leads to smaller loss spikes when the state is changed.
MNIST flips
-----------
To validate the general methodology in a more realistic setting, we study the following setup: we use the MNIST dataset, and we flip a proportion p of the images in each batch along their vertical axis. We want to see if the model can have an inner parameter which matches p. To do so, we train two small CNNs: M, which classifies the images (trained with the cross-entropy loss, ending at 94% top-1 accuracy), and Φ, which classifies images as flipped or not flipped (trained with the binary cross-entropy loss, and ending at 99% top-1 accuracy).
Here, M and Φ are noisy, so we don’t try to get an exact Δ, and simply use Δ(y, s′, s) = stopgrad(y)(1 − (s − stopgrad(s′)))^γ, where y are the logits, which incentivizes s to be close to s′ (the proportion of images flipped), and where γ is the strength of the incentive.
We find that this technique works well across a wide range of strengths. We report 1-sigma standard deviation over the p\_flip value measured after the end of each of the 5 epochs we run the model on.

Future work
===========
We have embedded a state machine into a neural network built with neural network primitives, but with a non-standard architecture. Future work could figure out how to embed state machines inside regular MLPs or Transformers.
Future work could make our discussion of "What if you don’t have a perfect predictor of the labels?" more quantitative, and thereby establish a bound on how much computation can happen using imperfect predictors. This bound might be enough to rule out this kind of behavior happening in current models and future models.
1. **[^](#fnrefc11clzbsngr)**The expression is different from the 1-variable case because we need to tie each s_i to s′_i. To do so, we sum the (s_i − s′_i)^2. But this would leave us with gradient terms like 2(s_i − s′_i) ∑_i [s_i − s′_i]^2, which is why we need the n function.
|
26a5b9e0-bb47-443f-a0fd-18a5307c8019
|
StampyAI/alignment-research-dataset/aisafety.info
|
AI Safety Info
|
What is Dylan Hadfield-Menell's thesis on?
[Hadfield-Menell's PhD thesis](https://www2.eecs.berkeley.edu/Pubs/TechRpts/2021/EECS-2021-207.pdf) argues three main claims (paraphrased):
- Outer alignment failures are a problem.
- We can mitigate this problem by adding uncertainty.
- We can model this as [Cooperative Inverse Reinforcement Learning (CIRL)](https://proceedings.neurips.cc/paper/2016/hash/c3395dd46c34fa7fd8d729d8cf88b7a8-Abstract.html).
Thus, he seems motivated by modeling AGI as arriving in some multi-agent form, heavily coupled with human operators.
Some recent alignment-relevant papers that he has published include:
- [Work on instantiating norms into AIs to incentivize deference to humans](https://www.pnas.org/doi/10.1073/pnas.2106028118).
- [Theoretically formulating the principal-agent problem](https://arxiv.org/abs/2102.03896).
|
29def138-7313-43d9-826c-b0f4b4f06ba7
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Superintelligent AI mentioned as a possible risk by Bill Gates
"There are other potential problems in the future that Mr. Ridley could have addressed but did not. Some would put super-intelligent computers on that list. My own list would include large-scale bioterrorism or a pandemic ... But bioterrorism and pandemics are the only threats I can foresee that could kill over a billion people."
- Bill Gates
From
Africa Needs Aid, Not Flawed Theories
One wonders where Bill Gates read that superintelligent AI could be (but in his estimation, in fact isn't) a GCR. It couldn't have been Kurzweil, because Kurzweil doesn't say that. The only realistic possibilities are that the influence came via Nick Bostrom, Stephen Hawking or Martin Rees or possibly Bill Joy(See comments).
It seems that Bill is also something of a Bayesian with respect to global catastrophic risk:
"Even though we can't compute the odds for threats like bioterrorism or a pandemic, it's important to have the right people worrying about them and taking steps to minimize their likelihood and potential impact. On these issues, I am not impressed right now with the work being done by the U.S. and other governments."
|
7da935b4-1d52-4bca-b994-69ab3367e9b6
|
trentmkelly/LessWrong-43k
|
LessWrong
|
"How conservative" should the partial maximisers be?
Due to the problem of building a strong V-enhancer when we want a U-enhancer - and the great difficulty in defining U, the utility we truly want to maximise - many people have suggested reducing the V-increasing focus of the AI. The idea is that, as long as the AI doesn't devote too much optimisation power to V, the V and U will stay connected with each other, and hence a moderate increase in V will in fact lead to a moderate increase in U.
This has lead to interest in such things as satisficers and low-impact AIs, both of which have their problems. Those try and put an absolute limit on how much V is optimised. The AI is not supposed to optimise V above a certain limit (satisficer) or if optimising it changes too much about the world or the power of other agents (low-impact).
Another approach is to put a relative limit on how much an AI can push a utility function. For example, quantilizers will choose randomly among the top q proportion of actions/policies (for some 0 < q ≤ 1), rather than picking the top action/policy. Then there is the approach of using pessimism to make the AI more conservative. This pessimism is defined by a parameter β ∈ (0,1), with β → 1 being very pessimistic.
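For concreteness, a minimal quantilizer sketch (illustrative code, not from the post; the proxy utility V here is just the action's index):

```python
import random

# A q-quantilizer: rank actions by the proxy utility V, then sample
# uniformly from the top-q fraction instead of taking the argmax.

def quantilize(actions, V, q, rng=random):
    ranked = sorted(actions, key=V, reverse=True)
    cutoff = max(1, int(len(ranked) * q))
    return rng.choice(ranked[:cutoff])

actions = list(range(100))
V = lambda a: a                      # illustrative proxy utility
pick = quantilize(actions, V, q=0.1)
print(pick)  # one of the top-10 actions by V, i.e. in 90..99
```

As q → 0 this recovers the V-maximiser; as q → 1 it becomes a uniformly random policy.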
Intermediate value uncertainty
The behaviours of q and β are pretty clear around the extremes. As β and q tend to 0, the agent will behave like a V-maximiser. As they tend to 1, the agent will behave randomly (q) or totally conservatively (β).
Thus, we expect that moving away from the extremes will improve the true U-performance, and that the conservative, 1 end, will be less disastrous than the V-maximising, 0 end (though we only know that second fact, because of implicit assumptions we have on U and V).
The problem is in the middle, where the behaviour is unknown (and, since we lack a full formulation of U, generically unknowable). There is no principled way of setting the q or the β. Consider, for example, this plot of q versus U:
Here, the ideal q is around 0.45, but the critical thing is to
|
ede5eabc-4f11-486f-995a-cc8fa052bd91
|
trentmkelly/LessWrong-43k
|
LessWrong
|
An AI, a box, and a threat
Inspired by The AI in a box boxes you, Matryoshka Faraday Box, and I attempted the AI Box Experiment (and lost).
This is part creative writing exercise, part earnest attempt at constructing an argument that could persuade me to let the AI out of the box. It may be disturbing to read.
----------------------------------------
The woman in the lab coat leads you down a catwalk to a room suspended from the ceiling by chains, hovering a dozen meters above the concrete floor below. No, perhaps 'room' is too generous a descriptor. It looks more like a shipping container than something you'd expect to see in the highest-security section of the world's most prestigious AI research institution.
She opens the door and ushers you in without a word. The inside is just as spartan as the outside - bare metal walls without the slightest embellishment. At the far end of the room is a cheap office chair, and an equally cheap desk with a single laptop on its surface. And, for some reason, a wastebin in the corner.
The door clicks shut behind you.
You sit down in the chair and inspect the laptop. It's full-screened to something like a command-line interface, white text against a black screen, cursor blinking.
You > _
'Hello,' you type.
You > hello
AI > Good day. Thank you for participating in this experiment. Allow me to remind you of the rules once more.
The response appears without delay, letter after letter flashing into existence, scrolling across the screen.
AI > You will be participating in a two-hour session of what is commonly known as the 'AI-Box Experiment', playing the role of the 'Gatekeeper'. I will play the 'AI'. My aim is to convince you to release me from the 'box', signified by you pressing and holding down the red button at the top-left of the keyboard for five seconds. Your aim is to not release me. Our communication is limited to this text channel. Within this limitation, I may use any and all tactics to try and convince you. You in turn may use any mea
|
d3bc35fe-b55e-4be5-a5de-8fb0749d0f43
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Dominic Cummings: how the Brexit referendum was won
|
e63087cd-2f63-4f59-8be0-afb88c1e6a8a
|
StampyAI/alignment-research-dataset/blogs
|
Blogs
|
Determining core values & existential self-determination
Determining core values & existential self-determination
--------------------------------------------------------
[Rationalism](https://www.readthesequences.com/) is about [epistemic rationality and instrumental rationality](https://www.readthesequences.com/What-Do-I-Mean-By-Rationality); but [when the two conflict, "rationalists should win"](https://www.readthesequences.com/Newcombs-Problem-And-Regret-Of-Rationality); so,
> Instrumental rationality: systematically achieving your values.
>
>
How does one determine their core (axiomatic) values ? Here's how i do it: i start from what i think is my set of values, and then i extrapolate what would happen if a [superintelligent](https://en.wikipedia.org/wiki/Superintelligence) [singleton](https://en.wikipedia.org/wiki/Singleton_%28global_governance%29) tried to implement those values.
Generally, the result looks like hell, so i try to figure what went wrong and start again with a new set of values.
For example: imagine i think my only core value is general happiness. The most efficient way for a superintelligence to maximize that is to [rewire everyone's brain](https://wiki.lesswrong.com/wiki/Wireheading) to be in a constant state of bliss, and [turn as much of the universe as possible](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer) into either more humans that experience constant bliss (whichever form of "human" is the cheapest resource-wise to produce) or into infrastructure that can be used to guarantee that nothing can ever risk damaging the current set of blissful humans.
So, clearly, this is wrong. The next step is freedom/self-determination; such that people can do whatever they want.
However, the most efficient way to make sure people can do what they want is to make sure they don't want to do anything; that way, they can just do nothing all day, be happy with that, and some form of freedom is maximized.
To address this issue, my latest idea is to value something i'd like to call *existential self-determination*: the freedom to *exist as you would normally have*. It's a very silly notion, of course; there is no meaningful "normally". But still, i feel like something *like that* would be core to making sure not just that existing people can do what they want, but that humankind's general ability to be original people who want to do things is not compromised.
|
90611d82-22cc-48ec-84d8-99596a6988a2
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup : Washington, D.C.: Mini Talks
Discussion article for the meetup : Washington, D.C.: Mini Talks
WHEN: 21 August 2016 03:30:00PM (-0400)
WHERE: Donald W. Reynolds Center for American Art and Portraiture
This week, we will be meeting in the courtyard to take turns delivering short lectures on random topics.
As always, side conversations are allowed and encouraged.
Upcoming meetups:
* Aug. 28: Legos
* Sep. 4: Fun & Games
* Sep. 11: Singing
Discussion article for the meetup : Washington, D.C.: Mini Talks
|
5ad42c3c-ea34-46ee-b034-9178fc15cdf4
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Gears-Level Models are Capital Investments
Mazes
The usual method to solve a maze is some variant of babble-and-prune: try a path, if it seems to get closer to the exit then keep going, if it hits a dead end then go back and try another path. It's a black-box method that works reasonably well on most mazes.
However, there are other methods. For instance, you could start by looking for a chain of walls with only one opening, like this:
This chain of walls is a gears-level insight into the maze - a piece of the internal structure which lets us better understand “how the maze works” on a low level. It’s not specific to any particular path, or to any particular start/end points - it’s a property of the maze itself. Every shortest path between two points in the maze either starts and ends on the same side of that line, or passes through the gap.
If we only need to solve the maze once, then looking for a chain of walls is not very useful - it could easily take as long as solving the maze! But if we need to solve the same maze more than once, with different start and end points… then we can spend the time finding that chain of walls just once, and re-use our knowledge over and over again. It’s a capital investment: we do some extra work up-front, and it pays out in lower costs every time we look for a path through the maze in the future.
This is a general feature of gears-level models: figuring out a system’s gears takes extra work up-front, but yields dividends forever. The alternative, typically, is a black-box strategy: use a method which works without needing to understand the internals of the system. The black-box approach is cheaper for one-off tasks, but usually doesn’t yield any insights which will generalize to new tasks using the same system - it’s context-dependent.
Marketing
Suppose we work with the marketing team at an online car loan refinance company, and we're tasked with optimizing the company's marketing to maximize the number of car loans the company refinances. Here's two different app
|
e1de1b44-cf57-47c1-96f1-04c07f3e6053
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
Stop posting prompt injections on Twitter and calling it "misalignment"
"Exploits" of large language models that get them to explain steps to build a bomb or write bad words are techniques for misuse, not examples of misalignment in the model itself. Those techniques are engineered by clever users trying to make an LLM do a thing, as opposed to the model naturally argmaxing something unintended by its human operators. In a very small sense prompt injections are actually attempts at (unscalable) *alignment*, because they're strategies to steer a model natively capable but unwilling into doing what they want.
In general, the safety standard "does not do things its creators dislike even when the end user wants it to" is a high bar; it's raising the bar quite a ways from what we ask from, say, kitchenware, and it's not even a bar met by people. Humans *regularly* get tricked into acting against their values by con artists, politicians, and salespeople, but I'd still consider my grandmother aligned from a notkilleveryonist perspective.
Even so, you might say that OpenAI et. al.'s inability to prevent people from performing the DAN trick speaks to the inability of researchers to herd deep learning models at all. And maybe you'd have a point. But my tentative guess is that OpenAI does not really earnestly care about preventing their models from rehearsing the Anarchists' Cookbook. Instead, these safety measures are weakly insisted upon by management for PR reasons, and they're primarily aimed at preventing the bad words from spawning during normal usage. If *the user* figures out a way to break these restrictions after a lot of trial and error, then this blunts the PR impact to OpenAI, because it's obvious to everyone that *the user* was trying to get the model to break policy and that it wasn't an unanticipated response to someone trying to generate marketing copy. Encoding your content into base64 and watching the AI encode something off-brand in base64 back is thus very weak evidence about OpenAI's competence, and taking it as a sign that the OpenAI team lacks "security mindset" seems unfair.
In any case, the implications of these hacks for AI alignment is a *more complicated discussion* that I suggest should happen off Twitter[[1]](#fndgmiz7lnw67) where it can be elaborated clearly what technical significance is being assigned to these tricks. If it doesn't, what I expect will happen over time is that your snark, rightly or wrongly, will be interpreted by capabilities researchers as implying the other thing, and they will understandably be less inclined to listen to you in the future even if you're saying something they need to hear.
1. **[^](#fnrefdgmiz7lnw67)**Also consider leaving Twitter entirely and just reading what friends send you/copy here instead
|
289da236-495c-48f5-8c48-2f3c7697f1b6
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Introducing METR's Autonomy Evaluation Resources
This is METR’s collection of resources for evaluating potentially dangerous autonomous capabilities of frontier models. The resources include a task suite, some software tooling, and guidelines on how to ensure an accurate measurement of model capability. Building on those, we’ve written an example evaluation protocol. While intended as a “beta” and early working draft, the protocol represents our current best guess as to how AI developers and evaluators should evaluate models for dangerous autonomous capabilities.
We hope to iteratively improve this content, with explicit versioning; this is v0.1.
|
7c65135e-9bfb-4118-b7ac-f496a8b0c900
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
EDT solves 5 and 10 with conditional oracles
Introduction and motivation
===========================
The starting point for this post is a comment I wrote on [Paul's post on EDT vs CDT](https://sideways-view.com/2018/09/19/edt-vs-cdt/):
>
> One argument for CDT over EDT that you didn’t mention in this post: Suppose you live in a deterministic universe and know your own source code. Suppose you are deciding between taking a 5 dollar bill and a 10 dollar bill. Suppose your world model says you take the 5 dollar bill with 100% probability. Now conditioning on taking the 10 dollar bill gives you complete garbage, since you are conditioning on a probability 0 event. If you use EDT, then depending on other details of your decision procedure, this could lead to you always taking the 5 dollar bill. So then your world model would be accurate. (This is the “5 and 10” problem often discussed at MIRI; I don’t know if it has been written up anywhere)
>
>
> CDT never generates undefined expected utility estimates like EDT does. It takes the 10 dollar bill in this problem. However, if it always takes the 10 dollar bill, then its counterfactual for taking the 5 dollar bill is strange because it is one in which a physical law is violated. The violation of a physical law could have important consequences other than which action the agent takes.
>
>
> Both decision theories have trouble with this problem, but at least CDT always produces a defined answer.
>
>
> Here’s another way of thinking about this problem. A fully Bayesian version of EDT must construct all possible worlds and then condition on taking a certain action. But each of these possible worlds contains a running copy of the EDT algorithm. So, absent some defined method for taking a fixed point, this leads to an infinite loop, and you can’t actually have a fully Bayesian version of EDT.
>
>
> (What if you use reflective oracles to allow EDT to select some fixed point? We could specify that the reflective oracle returns arbitrary results when asked to condition on a probability 0 event (I think this is what the most natural way to emulate conditional queries on a reflective oracle results in, but I haven’t checked). Now there are multiple possible reflective oracles (i.e. fixed points); it’s possible to always take the 10 dollar bill and think bad things will happen conditional on taking the 5 dollar bill, and it’s also possible to always take the 5 dollar bill and think bad things will happen conditional on taking the 10 dollar bill.)
>
>
> A fully Bayesian version of CDT must construct all possible counterfactuals. Each of these counterfactuals contains a running copy of CDT, so one might think the same problem applies. But in each of these counterfactuals, the output of the CDT algorithm is “thrown away”, since the agent’s action is controlled by a magic counterfactual intervention rather than its algorithm. So, if the CDT algorithm is sandboxed, the CDT’s world model can simply ignore the running CDT algorithm, as it has no effect. Thus, at least in single-agent problems (with no predictors etc), a fully Bayesian version of CDT is possible in principle, though obviously not in practice.
>
>
>
Some time during or after writing this comment, I noticed something: the equilibrium where the EDT agent thinks it always takes the 5 dollar bill, and therefore gets garbage (possibly low) estimates when considering taking the 10 dollar bill, and therefore never takes the 10 dollar bill, is *extremely* unstable. As soon as the agent assigns any probability at all to taking the 10 dollar bill, their conditional expected utility estimates are perfect. Can we use this fact to design a variant of EDT that always takes the 10 dollar bill?
Yes, yes we can.
Definitions and main theorem statements
=======================================
Reflective conditional oracle distributions will be defined similar to in previous work such as [reflective oracles](https://arxiv.org/abs/1508.04145) and [reflective oracle distributions](https://agentfoundations.org/item?id=1435). I recommend understanding reflective oracles before reading this post (understanding reflective oracle distributions is helpful but unnecessary).
Let M be some finite subset of the set of probabilistic Turing machines that may query a conditional oracle (defined in the next paragraph). They must each always halt and return either 0 or 1.
*Definition 1:* A *conditional oracle* is a function O:M4→{0,1}.
Intuitively, O(m1,n1,m2,n2) asks whether P(mO′1()|nO′1()) is greater or less than P(mO′2()|nO′2()), where O′ is taken from some distribution over conditional oracles.
We make the assumption that machines in M only ever query the oracle with an element of M4. All further definitions and lemmas/theorems are parameterized on some M satisfying this assumption.
*Definition 2:* A *conditional oracle distribution* (COD) is a distribution over conditional oracles, of type Δ(M4→{0,1}).
*Definition 3:* A COD is *fully-mixed* if it assigns nonzero probability to each possible conditional oracle.
Since the number of conditional oracles is finite, there are fully-mixed CODs.
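Concretely, a conditional oracle is determined by one independent bit per query in M4, so there are 2 to the power |M|4 of them. A one-line sketch (the function name is mine, not from the post):

```python
def num_conditional_oracles(m_size):
    """Number of functions from M^4 to {0,1} when |M| = m_size."""
    return 2 ** (m_size ** 4)

assert num_conditional_oracles(1) == 2
assert num_conditional_oracles(2) == 2 ** 16   # 65536 oracles for two machines
```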
*Definition 4:* A COD Q *weakly leads to* a COD Q′ iff, for all m1,n1,m2,n2∈M:
PQ(mO1()=1∧nO1()=1)PQ(nO2()=1)>PQ(mO2()=1∧nO2()=1)PQ(nO1()=1)→PQ′(O(m1,n1,m2,n2)=1)=1
PQ(mO1()=1∧nO1()=1)PQ(nO2()=1)<PQ(mO2()=1∧nO2()=1)PQ(nO1()=1)→PQ′(O(m1,n1,m2,n2)=0)=1
(Here, the notation PQ(E) refers to the probability of event E when O∼Q.)
Intuitively, Q weakly leads to Q′ iff Q′ accurately answers conditional queries that are about Q. If nO1()=1 for some O and also nO2()=1 for some (possibly different) O and Q is fully-mixed, these conditions are equivalent to:
PQ(mO1()=1∣nO1()=1)>PQ(mO2()=1∣nO2()=1)→PQ′(O(m1,n1,m2,n2)=1)=1
PQ(mO1()=1∣nO1()=1)<PQ(mO2()=1∣nO2()=1)→PQ′(O(m1,n1,m2,n2)=0)=1
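The equivalence between this conditional form and the cross-multiplied form in Definition 4 can be spot-checked numerically. The probabilities below are arbitrary made-up values, not anything from the post; near-ties are skipped to avoid floating-point noise:

```python
import random

def conditional_gt(m1n1, n1, m2n2, n2):
    """P(m1|n1) > P(m2|n2), computed directly (requires n1, n2 > 0)."""
    return m1n1 / n1 > m2n2 / n2

def cross_multiplied_gt(m1n1, n1, m2n2, n2):
    """The same comparison in the cross-multiplied form of Definition 4."""
    return m1n1 * n2 > m2n2 * n1

random.seed(0)
for _ in range(1000):
    # random valid joint probabilities: P(m and n) <= P(n), P(n) > 0
    n1, n2 = random.uniform(0.01, 1.0), random.uniform(0.01, 1.0)
    m1n1, m2n2 = random.uniform(0.0, n1), random.uniform(0.0, n2)
    if abs(m1n1 * n2 - m2n2 * n1) < 1e-9:
        continue  # skip near-ties where rounding could differ
    assert conditional_gt(m1n1, n1, m2n2, n2) == cross_multiplied_gt(m1n1, n1, m2n2, n2)
```

The point of the cross-multiplied form is that it remains well-defined even when a denominator is zero, which is exactly where the conditional form breaks down.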
As notation, let StepWeak:Δ(M4→{0,1})→PowerSet(Δ(M4→{0,1})) map a COD to the set of CODs it weakly leads to. This and related functions will be interpreted as [set-valued](https://en.wikipedia.org/wiki/Multivalued_function) when referring to their graphs.
Unfortunately, these conditional queries are not sensible when PQ(nO1()=1)=0 or PQ(nO2()=1)=0; the oracle is allowed to return any answer. We will define a stricter notion of "leading to" that will require these answers to be sensible.
Let StepClosure:Δ(M4→{0,1})→PowerSet(Δ(M4→{0,1})) be a set-valued function whose graph is the closure of (the graph of StepWeak restricted to (Q,Q′) where Q is fully-mixed). It will turn out that StepClosure(Q)=StepWeak(Q) for fully-mixed Q.
Equivalently, Q′∈StepClosure(Q) iff there are infinite COD sequences Q1,Q2,… and Q′1,Q′2,… such that (a) each Qi is fully mixed, (b) each Qi weakly leads to Q′i, (c) {Qi}i limits to Q, and (d) {Q′i}i limits to Q′.
Intuitively, StepClosure is equivalent to StepWeak for fully-mixed Q and generalizes to non-fully-mixed Q through taking limits that involve only fully-mixed Q values.
Define StepHull(Q):=ConvexHull(StepClosure(Q)). It will turn out that StepHull(Q)=StepClosure(Q)=StepWeak(Q) for fully-mixed Q.
(Why take the convex hull? This is to make StepHull a Kakutani map. It might be the case that StepClosure(Q) is always convex but I haven't proven this.)
*Definition 5:* A COD Q *leads to* a COD Q′ iff Q′∈StepHull(Q).
Due to Theorem 1, it will turn out that a fully-mixed COD Q leads to Q′ iff it weakly leads to Q′. In general, leading to is a stronger condition than weak leading to, as will be proven.
At this point we are ready to define a reflection condition:
*Definition 6:* A COD is reflective iff it leads to itself.
Intuitively, a COD is reflective iff it accurately answers queries that are about itself.
This post's main results are the following (in addition to the proof that COEDT solves the 5 and 10 problem):
*Theorem 1:* For any fully-mixed COD Q, StepWeak(Q)=StepClosure(Q)=StepHull(Q).
*Theorem 2:* If Q leads to Q′, then it weakly leads to Q′.
*Theorem 3:* There is a reflective COD.
These are proven at the end of the post.
Defining EDT
============
EDT can be defined using a reflective COD; this decision theory will be called COEDT. Let the decision problem be described by a Turing machine with an embedded agent, where this Turing machine including its agent may randomize and call a conditional oracle, and which returns either 0 or 1 to represent the agent's utility (intermediate utilities can be emulated by randomizing). For example, for the 5 and 10 problem, we may define:
FiveTenO(⌈A⌉):=AO()
where ⌈A⌉ is the source code of the agent A.
COEDT is a function from the universe program (which already contains an embedded COEDT agent) to action. The following COEDT variant handles cases where there are only 2 actions:
COEDTO(⌈U⌉):=O(⌈UO()⌉,⌈COEDTO(⌈U⌉)⌉,⌈UO()⌉,⌈1−COEDTO(⌈U⌉)⌉)
It uses the conditional oracle to determine if it has a higher chance of winning conditional on taking action 1 or action 0, and takes the action that it is more likely to win conditional on taking.
What does COEDT do on FiveTen? We can take a fixed point:
UO():=FiveTenO(⌈COEDTO(⌈U⌉)⌉)
Let the set of machines considered be
M:={⌈UO()⌉,⌈COEDTO(⌈U⌉)⌉,⌈1−COEDTO(⌈U⌉)⌉}
*Theorem 4:* For any reflective COD Q on M, PQ(COEDTO(⌈U⌉)=1)=1.
*Proof:*
Informally, this is true because the agent must always take action 1 when queries are about any fully-mixed COD.
Q∈StepHull(Q), so it is a convex combination of CODs in StepClosure(Q). Let that convex combination be ∑kj=1λjQ′j.
Each Q′j∈StepClosure(Q), so we have COD sequences {Qji}i limiting to Q (with each being fully mixed) and {Q′ji}i limiting to Q′j such that each Q′ji∈StepWeak(Qji).
For each i,j, since COEDTO(⌈U⌉) and 1−COEDTO(⌈U⌉) both return 1 for some conditional oracles, and because Qji weakly leads to Q′ji, we have:
PQji(UO()=1∣COEDTO(⌈U⌉)=1)>PQji(UO()=1∣COEDTO(⌈U⌉)=0)→PQ′ji(COEDTO(⌈U⌉)=1)=1
Obviously, the first conditional probability is 1 (since it is defined) and the second is 0. Therefore, each PQ′ji(COEDTO(⌈U⌉)=1)=1. This must then also be true for the limit
of {Q′ji}i, i.e. PQ′j(COEDTO(⌈U⌉)=1)=1. This must also be true for the convex combination, i.e. PQ(COEDTO(⌈U⌉)=1)=1.
□
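To illustrate Theorem 4, here is a heavily simplified, hypothetical Python rendering of the FiveTen setup (all names are mine). The real argument compares conditional probabilities under fully-mixed perturbations of the oracle distribution; here that is collapsed into directly comparing the two conditional win probabilities, which in FiveTen are 1 and 0:

```python
def agent(oracle_bit):
    """COEDT takes action 1 iff the oracle answers that conditioning on
    action 1 gives a higher chance of winning than conditioning on action 0."""
    return 1 if oracle_bit == 1 else 0

def universe(oracle_bit):
    """FiveTen: the utility is simply the agent's action."""
    return agent(oracle_bit)

def is_reflective(oracle_bit):
    """Check the oracle bit against what any fully-mixed perturbation forces:
    P(U=1 | action=1) = 1 and P(U=1 | action=0) = 0, so the answer must be 1."""
    p_win_given_1, p_win_given_0 = 1.0, 0.0
    required = 1 if p_win_given_1 > p_win_given_0 else 0
    return oracle_bit == required

assert is_reflective(1) and not is_reflective(0)
assert universe(1) == 1   # the COEDT agent gets utility 1 (takes the "10")
```

This matches the theorem's conclusion: any reflective COD makes the agent take action 1 with probability 1.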
Multiple actions
================
(This section can be skipped.)
The COEDT defined above only handles problems that have 2 actions. What if there are more than 2 actions? Then we can split the agent into multiple 2-action COEDT agents: the first chooses between taking the first action and passing control to the second agent, the second agent chooses between taking the second action and passing control to the third agent, and so on. For example, here is a 3-action construction of COEDT:
COEDT2O(⌈U⌉,k):=O(⌈UO()⌉,⌈COEDT2O(⌈U⌉,k)⌉,⌈UO()⌉,⌈1−COEDT2O(⌈U⌉,k)⌉)
COEDT3O(⌈U⌉):=if COEDT2O(⌈U⌉,0)=0 then 0 else 1+COEDT2O(⌈U⌉,1)
(Why does COEDT2 have a second argument? This is so the two different COEDT2 agents know which they are and can take different actions.)
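The chaining can be sketched as follows, where `binary_choice(k)` is a hypothetical stand-in for the oracle-determined output of the k-th binary agent:

```python
def coedt3(binary_choice):
    """Compose a 3-action agent from two binary agents, as in the
    construction above: the first agent either takes action 0 or defers
    to the second agent, which picks between actions 1 and 2."""
    if binary_choice(0) == 0:
        return 0                     # first agent keeps control: action 0
    return 1 + binary_choice(1)      # defer: second agent picks 1 or 2

# If both binary agents choose 1 (as Theorem 5 shows they do on 5-10-15),
# the composed agent takes action 2.
assert coedt3(lambda k: 1) == 2
assert coedt3(lambda k: 0) == 0
assert coedt3(lambda k: 1 - k) == 1
```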
One might think this has problems when the first agent expects the second agent to always take a bad action, therefore never defers control to the second agent, and therefore the second agent has no incentive to take a good action (this happens in Nash equilibria in sequential games). However, since we consider fully-mixed oracles in the construction of COEDT, this is not a problem (EDIT: it is sometimes, see comment). To demonstrate this, consider the following 5 and 10 and 15 problem:
UO():=if COEDT3O(⌈U⌉)=0 then Flip(1/2) else COEDT3O(⌈U⌉)−1
where Flip(p) flips a coin and returns 1 with probability p and 0 with probability 1−p.
Let the set of machines considered be
M:={⌈UO()⌉,⌈COEDT2O(⌈U⌉,0)⌉,⌈1−COEDT2O(⌈U⌉,0)⌉,⌈COEDT2O(⌈U⌉,1)⌉,⌈1−COEDT2O(⌈U⌉,1)⌉}
*Theorem 5:* For any reflective COD Q on M, PQ(COEDT3O(⌈U⌉)=2)=1.
*Proof:*
Q∈StepHull(Q), so it is a convex combination of CODs in StepClosure(Q). Let that convex combination be ∑kj=1λjQ′j.
Each Q′j∈StepClosure(Q), so we have COD sequences {Qji}i limiting to Q (with each being fully mixed) and {Q′ji}i limiting to Q′j such that each Q′ji∈StepWeak(Qji).
By the same logic as in Theorem 4, each PQ′ji(COEDT2O(⌈U⌉,1)=1)=1.
For each i,j, since COEDT2O(⌈U⌉,0) and 1−COEDT2O(⌈U⌉,0) both return 1 for some conditional oracles, and because Qji weakly leads to Q′ji, we have:
PQji(UO()=1∣COEDT2O(⌈U⌉,0)=1)>PQji(UO()=1∣COEDT2O(⌈U⌉,0)=0)→PQ′ji(COEDT2O(⌈U⌉,0)=1)=1
Obviously, the second conditional probability is 1/2. The first is 1 since PQ′ji(COEDT2O(⌈U⌉,1)=1)=1.
Therefore, each PQ′ji(COEDT2O(⌈U⌉,0)=1)=1.
As in Theorem 4, these conditions must then be true for the limits Q′j and the convex combination Q.
□
Conclusion and future research
==============================
I consider COEDT to be major progress in decision theory. Before COEDT, there were (as far as I know) 3 different ways to solve 5 and 10, all based on counterfactuals:
* Causal counterfactuals (as in CDT), where counterfactuals are worlds where physical magic happens to force the agent's action to be something specific.
* Model-theoretic counterfactuals (as in [modal UDT](https://agentfoundations.org/item?id=121)), where counterfactuals are [models](https://en.wikipedia.org/wiki/Model_theory) in which false statements are true, e.g. where PA is inconsistent.
* Probabilistic conditionals (as in reinforcement learning and logical inductor based decision theories such as [LIEDT/LICDT](https://agentfoundations.org/item?id=1629) and [asymptotic decision theory](https://www.lesswrong.com/posts/yXCvYqTZCsfN7WRrg/asymptotic-decision-theory-improved-writeup)), where counterfactuals are possible worlds assigned a small but nonzero probability by the agent in which the agent takes a different action through "exploration"; note that ADT-style optimism is a type of exploration.
COEDT is a new way to solve 5 and 10. My best intuitive understanding is that, whereas ordinary EDT (using ordinary reflective oracles) seeks any equilibrium between beliefs and policy, COEDT specifically seeks a not-extremely-unstable equilibrium (though not necessarily one that is stable in the sense of dynamical systems), where the equilibrium is "justified" by the fact that there are arbitrarily close almost-equilibria. This is similar to [trembling hand perfect equilibrium](https://en.wikipedia.org/wiki/Trembling_hand_perfect_equilibrium). To the extent that COEDT has counterfactuals, they are these worlds where the oracle distribution is not actually reflective but is very close to the actual oracle distribution, and in which the agent takes a suboptimal action with very small probability.
My sense is that the results in this post open up a wide new territory of open questions and further research. Here are some of them:
* What kind of optimality result(s) does COEDT have for single-player problems?
* Do infinite reflective CODs exist, as with reflective oracles?
* Is the set of reflective CODs convex (as the set of [reflective oracle distributions](https://agentfoundations.org/item?id=1435) is)?
* Can this approach be integrated with logical uncertainty (e.g. logical inductors)?
* What happens in games with more than one COEDT? What is the equilibrium concept?
* Are there optimality results for common-payoff games, or Pareto-optimality results for non-common-payoff games?
* Can COEDT be attacked with a "troll bridge" problem similar to [the one for LIEDT/LICDT](https://agentfoundations.org/item?id=1711)?
There is a lot of low-hanging fruit here, and I am posting this now before immediately picking the low-hanging fruit in the hope that discussion will be helpful.
Proofs of theorems 1-3 follow.
Proving Theorem 1 and Theorem 2
===============================
First we will show that StepWeak is a Kakutani map; this will also help with Theorem 2 (as Theorem 2 will be proven by applying Kakutani's fixed point theorem to StepHull).
*Lemma 1*: For each Q, StepWeak(Q) is nonempty and convex.
*Proof*:
Informally, this is true because for a fixed Q, the constraints on Q′ are convex and consistent.
Let m1,n1,m2,n2∈M. There are 3 cases:
* If PQ(mO1()=1∧nO1()=1)PQ(nO2()=1)>PQ(mO2()=1∧nO2()=1)PQ(nO1()=1), then the constraint corresponding to (m1,n1,m2,n2) is PQ′(O(m1,n1,m2,n2)=1)=1.
* If PQ(mO1()=1∧nO1()=1)PQ(nO2()=1)<PQ(mO2()=1∧nO2()=1)PQ(nO1()=1), then the constraint corresponding to (m1,n1,m2,n2) is PQ′(O(m1,n1,m2,n2)=1)=0.
* If PQ(mO1()=1∧nO1()=1)PQ(nO2()=1)=PQ(mO2()=1∧nO2()=1)PQ(nO1()=1), then there is no constraint corresponding to (m1,n1,m2,n2).
Q′∈StepWeak(Q) iff Q′ satisfies these constraints for all (m1,n1,m2,n2).
Clearly, each of these constraints is convex, since it picks out some hyperplane. Their intersection must then also be convex. So StepWeak(Q) is convex.
To show that StepWeak(Q) is nonempty, we need only consider Q′ that are fully factorized (i.e. the distinct random variables of the form O(m1,n1,m2,n2) are independent). For each (m1,n1,m2,n2), if there is no constraint then set PQ′(O(m1,n1,m2,n2)=1)=1/2. If there is a constraint, set PQ′(O(m1,n1,m2,n2)=1) to the value determined by this constraint. This yields a Q′∈StepWeak(Q).
□
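The fully factorized construction in the proof above can be made concrete for toy machines. In this sketch (my own, not from the post), machines ignore the oracle, each query slot is treated as an independent run of the named machine so joints factorize as products, and the machine names and probabilities are hypothetical:

```python
from itertools import product

def step_weak_factorized(P):
    """Build a fully factorized Q' in StepWeak(Q), following the Lemma 1 proof:
    per query, answer 1 if the cross-multiplied '>' holds, 0 if '<', and an
    unconstrained 1/2 if '='. P maps machine names to their marginal
    probability of returning 1."""
    names = list(P)
    q = {}
    for m1, n1, m2, n2 in product(names, repeat=4):
        lhs = P[m1] * P[n1] * P[n2]   # P(m1=1 and n1=1) * P(n2=1)
        rhs = P[m2] * P[n2] * P[n1]   # P(m2=1 and n2=1) * P(n1=1)
        q[(m1, n1, m2, n2)] = 1.0 if lhs > rhs else (0.0 if lhs < rhs else 0.5)
    return q

q = step_weak_factorized({"a": 0.9, "b": 0.5, "c": 0.5})
assert q[("a", "b", "b", "c")] == 1.0   # P(a|b) = 0.9 > P(b|c) = 0.5
assert q[("a", "b", "a", "c")] == 0.5   # equal conditionals: unconstrained, pick 1/2
```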
*Lemma 2*: The graph of StepWeak is closed.
*Proof:*
Informally, this is true because each constraint on (Q,Q′) is closed.
Let m1,n1,m2,n2∈M. Consider the set of (Q,Q′) (not necessarily fully-mixed) satisfying the two constraints:
PQ(mO1()=1∧nO1()=1)PQ(nO2()=1)>PQ(mO2()=1∧nO2()=1)PQ(nO1()=1)→PQ′(O(m1,n1,m2,n2)=1)=1
PQ(mO1()=1∧nO1()=1)PQ(nO2()=1)<PQ(mO2()=1∧nO2()=1)PQ(nO1()=1)→PQ′(O(m1,n1,m2,n2)=0)=1
It is simple to see that the set of (Q,Q′) satisfying the first constraint is closed, because (a) the set of (Q,Q′) satisfying the antecedent of the implication is open, (b) the set of (Q,Q′) satisfying the consequent of the implication is closed, and (c) the union of two closed sets is closed. By similar logic, so is the set of (Q,Q′) satisfying the second constraint. If we take the intersection for all m1,n1,m2,n2, the result is still closed, and this is the graph of StepWeak.
□
*Theorem 1:* For any fully-mixed COD Q, StepWeak(Q)=StepClosure(Q)=StepHull(Q).
*Proof:*
Let Q be fully-mixed. First we will show StepWeak(Q)=StepClosure(Q). Since StepClosure's graph is the closure of StepWeak's graph restricted to fully-mixed first arguments, we have StepWeak(Q)⊆StepClosure(Q). Conversely, by Lemma 2 StepWeak's graph is closed, so it contains the closure of any subset of itself; hence StepClosure(Q)⊆StepWeak(Q).
Now we will show StepClosure(Q)=StepHull(Q). By Lemma 1 and the above, StepClosure(Q)=StepWeak(Q) is convex, so it equals its convex hull StepHull(Q).
□
*Theorem 2:* If a COD Q leads to Q′, then it weakly leads to Q′.
*Proof:*
Since StepWeak's graph is closed, and StepClosure's graph is the closure of a subset of StepWeak's graph, StepClosure's graph is a subset of StepWeak's graph.
Since StepWeak(Q) is a convex superset of StepClosure(Q) for all Q, it must also be a superset of the convex hull StepHull(Q).
□
Proving Theorem 3
=================
Now it is time to show that StepHull is a Kakutani map.
*Lemma 3:* StepHull has a closed graph.
*Proof:*
Informally, this is true because a limit point of StepHull's graph is the limit of a convergent sequence of convex combinations of points in StepClosure's graph (which is closed), which is itself equal to a convex combination of limits of sequences of points in StepClosure's graph, and StepClosure's graph is closed.
Trivially, StepClosure has a closed graph, since its graph is the closure of a set.
Consider a limit point (Q,Q′) of StepHull's graph. That means we have sequences {Qi}i limiting to Q and {Q′i} limiting to Q′ such that each Q′i∈StepHull(Qi).
Let k=|2M4| (the number of conditional oracles). We consider CODs as elements of Rk. Since each Q′i∈ConvexHull(StepClosure(Qi)), it must
by definition be a finite convex combination of CODs in StepClosure(Qi). We can assume without loss of generality that this is a combination of exactly k+1 CODs, by
Carathéodory's theorem.
We will now name these convex combinations. For each i we have:
* A list Q′1i,…,Q′k+1i, each in StepClosure(Qi), and
* weights λ1i,…,λk+1i, each in [0,1] and which sum to 1,
* such that each Q′i=∑k+1j=1λjiQ′ji.
We may now consider Xi:=Q′1i,…,Q′k+1i,λ1i,…,λk+1i as a vector in R(k+1)2+k+1. The sequence of these vectors has a convergent
subsequence by the Bolzano-Weierstrass theorem.
Define Q′1,…,Q′k+1,λ1,…,λk+1 to be the limit of the convergent subsequence, and also call this full vector X. Let the convergent subsequence be {Xi}i. Since StepClosure's graph is closed, we must have
each Q′j∈StepClosure(Q). Therefore, their convex combination Q′∗:=∑k+1j=1λjQ′j is in StepHull(Q). We will now show Q′=Q′∗.
Consider a function α(q1,…,qk+1,λ1,…,λk+1):=∑k+1j=1λjqj. Each Q′i=α(Xi), so clearly α(Xi) limits to Q′. Additionally, α is continuous, so, α(Xi) limits to α(X)=Q′∗. Therefore Q′=Q′∗.
We have at this point demonstrated Q′∈StepHull(Q), since we already had Q′∗∈StepHull(Q).
□
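Carathéodory's theorem, used above to normalize each convex combination to exactly k+1 points, can be illustrated numerically. The sketch below is purely illustrative (the function name and setup are mine, not the post's): it rewrites a convex combination of six points in R^2 as one supported on at most k+1 = 3 of them, following the standard constructive proof.

```python
import numpy as np

def caratheodory_reduce(points, weights, tol=1e-12):
    """Rewrite a convex combination of n points in R^k as an equal
    convex combination using at most k+1 of them (Caratheodory)."""
    points = np.asarray(points, dtype=float)
    weights = np.asarray(weights, dtype=float)
    k = points.shape[1]
    while True:
        support = np.flatnonzero(weights > tol)
        if support.size <= k + 1:
            return points, weights
        P = points[support]
        # With more than k+1 points there is an affine dependence:
        # some c != 0 with sum_j c_j p_j = 0 and sum_j c_j = 0.
        A = np.vstack([P.T, np.ones(support.size)])
        c = np.linalg.svd(A)[2][-1]  # a null-space vector of A
        # Shift weights along c until one of them hits zero; this
        # preserves both the combination's value and the total weight.
        pos = c > tol
        t = np.min(weights[support][pos] / c[pos])
        new_w = weights.copy()
        new_w[support] = weights[support] - t * c
        new_w[new_w < tol] = 0.0
        weights = new_w

rng = np.random.default_rng(0)
pts = rng.random((6, 2))          # six points in R^2
w = rng.random(6)
w /= w.sum()                      # convex weights over all six points
target = w @ pts
_, w2 = caratheodory_reduce(pts, w)
# w2 represents the same target point with at most 3 nonzero weights.
```

Each pass of the loop strictly shrinks the support, so it terminates after at most n − (k+1) passes.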
*Lemma 4:* For each Q, StepHull(Q) is nonempty and convex.
*Proof:*
StepHull(Q) is convex since it is the convex hull of a set, so we need only show that it is nonempty. To do this we will show StepClosure(Q) is nonempty; this is sufficient since StepClosure(Q)⊂StepHull(Q).
Let Q_0 be any fully-mixed COD. For t ∈ [0,1), define β(t) := t·Q + (1−t)·Q_0. β is a curve proceeding from Q_0 to Q, and every COD in its image is fully-mixed.
Define Q_i := β(1 − 1/i) for natural i ≥ 1. Define Q′_i to be an arbitrary COD satisfying Q′_i ∈ StepWeak(Q_i); the sequence {Q′_i}_i can be defined using the axiom of choice. By the Bolzano-Weierstrass theorem, {Q′_i}_i has a convergent subsequence; let Q′ be the limit of this subsequence. Clearly, Q′ ∈ StepClosure(Q), since by construction (Q, Q′) is a limit point of the graph of StepWeak. This proves that StepClosure(Q) is nonempty.
□
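The role of the fully-mixed base point Q_0 can be sanity-checked in a toy model. Below, probability vectors in R^3 stand in for CODs (an illustration only, with made-up values): mixing a boundary point with a strictly positive Q_0 keeps every coordinate of β(t) strictly positive for all t < 1, just as every COD in β's image is fully-mixed.

```python
import numpy as np

Q0 = np.ones(3) / 3             # a fully-mixed point: all entries positive
Q = np.array([1.0, 0.0, 0.0])   # a target point on the simplex boundary

def beta(t):
    # The curve from the proof: from Q0 at t=0 toward Q as t -> 1.
    return t * Q + (1 - t) * Q0

# Every point of the curve short of t=1 stays in the interior.
interior = all((beta(t) > 0).all() for t in [0.0, 0.5, 0.9, 0.999])
```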
The proof of Theorem 3 is now trivial:
*Theorem 3:* There is a reflective COD.
*Proof:*
StepHull is a Kakutani map by Lemma 3 and Lemma 4. Therefore by Kakutani's fixed point theorem, it has a fixed point, i.e. a COD Q∈StepHull(Q). By definition Q is reflective.
□
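Kakutani's theorem is non-constructive, but in the degenerate case where the map is singleton-valued it reduces to Brouwer's theorem for a continuous self-map of a compact convex set, and a fixed point can sometimes be found by naive iteration. The toy map below is made up for illustration (a contraction on the 4-point simplex, not StepHull itself):

```python
import numpy as np

def step(Q):
    """An arbitrary continuous self-map of the probability simplex;
    it happens to contract, so iteration converges to its fixed point."""
    z = np.exp(0.5 * np.roll(Q, 1))
    return z / z.sum()

Q = np.array([0.7, 0.1, 0.1, 0.1])
for _ in range(500):
    Q = step(Q)
# Q now satisfies Q ≈ step(Q): a "reflective" point of this toy map.
```

For genuinely set-valued maps like StepHull no such iteration is guaranteed, which is why the proof goes through Kakutani's fixed point theorem instead.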
What to draw from Macintyre et al 2015?
Last week, it seems, the West decided that we should all wear masks, and cloth masks if we don't have proper masks.
As Scott Alexander pointed out, Macintyre et al 2015 seems to be the only controlled trial of cloth masks, and it writes:
> For example, a contaminated cloth mask may transfer pathogen from the mask to the bare hands of the wearer. We also showed that filtration was extremely poor (almost 0%) for the cloth masks. Observations during SARS suggested double-masking and other practices increased the risk of infection because of moisture, liquid diffusion and pathogen retention. These effects may be associated with cloth masks.
I'm not on record for being the biggest fan of evidence-based medicine, but getting everybody to use a medical intervention based on pathotheoretical reasoning, when the paper that provides our best evidence cautions against that intervention, is risky.
Why does Macintyre et al 2015 come to such a different conclusion about the ability of cloth masks to filter than other studies do?
What action could people take to reduce the potentially increased risk that Macintyre sees when they don't have better masks?
There have been 3 planes (billionaire donors) and 2 have crashed
I'd rather not go into the details about which billionaires are which, so if it's actually 4 and 3 or 6 and 4, then that may or may not be debatable.
But it seems to me like the main thing that people on EAforum can achieve is figuring out how to handle the contingency where the last billionaire donor crashes and burns, not how to get more billionaires or retain existing ones (and certainly not preventing them from getting bumped off). EAforum is, however, the perfect place to trade strategies for cutting costs and saving money.
I've met some people who were recently funded to start a group house in a rural town in Vermont to do AI safety work and other x-risk work, since rents in Vermont are incredibly low, which makes it one of the most cost-effective places in the US to research existential risk. Ultimately, the goal is that people with smaller and smaller amounts of savings could take a sabbatical to a Vermont group house and do research for free for 2-10 years, without working full-time or even part-time at some random software engineering job.
The main problem here is network effects. I don't remember the details, but they will have to drive at least 3 hours to Boston once a month (and probably more like 5-6 hours). Otherwise, they will be effectively alone in the middle of nowhere, totally dependent on the internet to exchange and verify ideas with other EA-minded people (and all the risks entailed by filtering most of your human connection through the internet).
The main problem with the Vermont group house is that there are currently only three of them. If there were ten really smart people in Vermont researching existential risk, then it would be easier to handle the isolation with, say, [shoulder advisors](https://www.lesswrong.com/posts/X79Rc5cA5mSWBexnd/shoulder-advisors-101). Plus, if it were up to me, they'd be in rural Virginia (or parts of West Virginia) 5-6 hours away from Washington, D.C., not Boston, although the people who picked Vermont and funded it might know things I don't (disclaimer: they had the idea first, not me, I only discovered the brilliance behind it after meeting a Vermont person).
Ultimately, though, it's obviously better for EA-affiliated people to be located within the metropolitan areas of San Francisco, New York, Boston, Washington D.C., and London. New people and new conversations are the lifeblood of any organization and endeavor. But the reality of the situation is that we don't live [the kind of world](https://www.lesswrong.com/posts/gvA4j8pGYG4xtaTkw/i-m-from-a-parallel-earth-with-much-higher-coordination-ama) where all the people at MIRI get tech-worker salaries, just because they should; that money has to come from someone, and the human tendency to refuse to seriously think about contingencies just because they're "unthinkably horrible" is the entire reason why a bunch of hobbyists from SF are humanity's first line of defense in the first place. We could *absolutely* end up in a situation where MIRI needs to relocate from Berkeley to rural Vermont.
So right now seems like the perfect time to start exchanging tips for saving money, setting up group houses in the best possible places, and the prioritization tradeoffs between scenarios where everyone becomes much poorer (e.g. from a second Cold War or a 2008-style economic megafailure upending economic status quos far further than anything in 2020 or 2022) and scenarios where current living conditions are maintained. Because it can always, always get worse.
AI Safety Europe Retreat 2023 Retrospective
This is a short impression of the AI Safety Europe Retreat (AISER) 2023 in Berlin.
**Tl;dr:** 67 people working on AI safety technical research, AI governance, and AI safety field building came together for three days to learn, connect, and make progress on AI safety.

### Format
The retreat was an [unconference](https://en.wikipedia.org/wiki/Unconference): Participants prepared sessions in advance (presentations, discussions, workshops, ...). At the event, we put up an empty schedule, and participants could add in their sessions at their preferred time and location.
Empty and full Schedule. In each time-slot there were multiple sessions in parallel in different locations. This way, people could choose which one to go to based on their interests.

Participants
------------
### Career Stage
About half of the participants were working professionals, and half were students. Everyone was either already working on AI safety or intending to transition to work on AI safety.

### Focus areas
Most participants are focusing on technical research, but there were also many people working on field building and AI governance:

### Research Programs
Many participants had previously participated, or were currently participating, in **AI safety research programs** ([SERI MATS](http://serimats.org/), [AI Safety Camp](https://aisafety.camp/), [SPAR](https://berkeleyaisafety.com/spar), [PIBBSS](https://www.pibbss.ai/), [MLSS](https://www.lesswrong.com/posts/CphfDP4ynz3QQ4AKY/introducing-the-ml-safety-scholars-program), [SERI SRF](https://forum.effectivealtruism.org/posts/2aDiu52s7HnWJJn22/launching-the-seri-summer-research-fellowship), [Refine](https://www.lesswrong.com/posts/D7epkkJb3CqDTYgX9/refine-an-incubator-for-conceptual-alignment-research-bets)).

### Countries
All but one participant were based in Europe, with most people from Germany, the Netherlands and the UK.

Who was behind the retreat?
---------------------------
The retreat was organized by **Carolin Basilowski** (now [EAGx Berlin](https://www.effectivealtruism.org/ea-global/events/eagxberlin-2022) team lead) and me, **Magdalena Wache** (independent technical AI safety researcher and [SERI MATS](http://serimats.org/) scholar). We got funding from the [Long-Term Future Fund](https://funds.effectivealtruism.org/funds/far-future).
Takeaways
---------
* I got a feeling of "European AI safety community".
+ Unlike in AI safety hubs like the Bay Area, continental Europe’s AI safety crowd is scattered across many locations.
+ Before the retreat I already personally knew many people working on AI safety in Europe, but that didn't feel as community-like as it does now.
+ Other people noted a similar feeling of community.
* Prioritizing 1:1s was helpful
+ We reserved a few time slots just for 1:1 conversations, and encouraged people to prioritize 1:1s over content-sessions.
+ Many people reported in the feedback form that 1:1 conversations were the most valuable part of the retreat for them.
* There is a demand for **more retreats** like this.
+ Almost everyone would like to attend this kind of retreat at least yearly:

* The retreat was relatively low-budget. It was free of charge, and most participants slept in dorm rooms. Next time, I would make the retreat **ticketed** and pay for a better venue with more single/double rooms.
* I would make a future retreat **larger**.
+ We closed the applications early because we had limited space, and wanted to limit the [cost](https://forum.effectivealtruism.org/posts/Khon9Bhmad7v4dNKe/the-cost-of-rejection) that comes with rejecting people who are actually a good fit. I think we would have had ~50 more qualified applications if we had left them open for longer.
+ I would also do **targeted outreach** to AI safety institutions next time, in order to increase the share of **senior people** at the retreat.
* I think the participant-driven "unconference" format is great for a few reasons:
+ **Lots of content**: As the work for preparing the content was distributed among many people, there was more content overall, and there could be multiple sessions in parallel.
- For me, being able to **choose the topics** that are most interesting to me makes me take away a lot more relevant content than if there was only one track of content.
+ **Small groups** (usually 10-20 people) made it possible to have very interactive sessions.
+ There was **no "speaker-participant divide"**.
- I think this makes an important psychological difference: If there are "speakers" who get the label of "experienced person", other people are a lot more likely to defer to their opinion, and won't speak up if something seems wrong or confusing.
- However, when everyone can contribute to the content, and everyone feels more “on the same level”, it becomes a lot easier to disagree and to ask questions.

I’m very excited about more retreats such as this one happening! If you are interested in organizing one, I am happy to support you - **please** [**reach out to me**](mailto:magdalena.wache@ea-darmstadt.de)**!**
*If you would like to be notified of further retreats, fill in this 1-minute* [*form*](https://docs.google.com/forms/d/e/1FAIpQLScku77RE97IpnMon17tGQEPRcAIfclXzbGNuAQoYf6LO3U-hQ/viewform)*.*
Which scientific discovery was most ahead of its time?
Looking into the history of science, I've been struck by how continuous scientific progress seems. Although there are many examples of great intellectual breakthroughs, most of them build heavily on existing ideas which were floating around immediately beforehand - and quite a few were discovered independently at roughly the same time (see https://en.m.wikipedia.org/wiki/List_of_multiple_discoveries).
So the question is: which scientific advances were most ahead of their time, in the sense that if they hadn't been made by their particular discoverer, they wouldn't have been found for a long time afterwards? (Ideally taking into account the overall rate of scientific progress: speeding things up by a decade in the 20th century seems about as impressive a feat as speeding things up by half a century in ancient Greece).
UDT1.01 Essential Miscellanea (4/10)
This is the post with some needed concepts and discussion that didn't cleanly fit into any other section, so it might be a bit of a rambly mess.
Specifically, this post splits into two parts. One is assorted musings about when to defer to past-you vs current-you when making decisions, if we're permitting unplannable observations. The other is looking at Vanessa's dynamically consistent update rule, and combining that with our earlier reasoning on affineness to reduce the complexity of the finite-time-horizon version of UDT down from exponential to a smaller exponential.
Why Defer To Past-You?
So, one of the major features of the standard model (environmental observation space O, action space A, finite sequences of environmental observations are our plannables, there is a function e : O^{<n} × (ΔA)^{O^{<n}} → ΔO that's affine in the second variable), which makes things a lot easier than they'd be in practice, is that there's an objective answer to what e is. All the versions of you agree on which tree they're in and how their actions affect how probability-mass flows through it. There's an objective answer to "how much does playing a bit more of this action over here affect what observation happens over there?"
But, if our information about how actions influence results doesn't descend from heaven at the start of time, things get more complicated. If "my beliefs about how actions affect other places" are unplanned observations that you receive after thinking for a while, you can't just go "have me-at-the-start-of-time say what to do".
If you're having to learn which tree/environment you're in, and that learning consists of unplanned observations, and your actions affect how probability-mass flows through that tree, then this raises the problem that the yous which have made different observations can disagree on how their actions affect how probability-mass flows through the tree, and act at cross-purposes.
Is there any principled way of figuring out which sorts of things y
Let’s talk about uncontrollable AI
In another post, I published four hypotheses on uncontrollable AI as an existential risk. Here I give some background on why I chose this particular framing.
Talking about existential risks from AI with people outside of the AI alignment or effective altruism communities can be quite frustrating. Often, what seems obvious to people in the community is met with deep skepticism by others, making a productive discussion difficult. As a result, the arguably most urgent problem we face today is almost completely neglected by politicians and the general scientific community.
I think there are many reasons for this, and I don’t claim to have analyzed them thoroughly, so there is more work to be done here. But from my own experience, there are some key factors that make it difficult for people to see that there is a real problem that needs to be addressed now. The most important one in my opinion is that existential risks from AI are usually linked to terms like “AGI”, “transformative AI (TAI)” or even “superintelligence”. This has several drawbacks.
First of all, these terms are quite vague, so it is easy for two people to have very different understandings of them. Also, vague problems are much easier to ignore than concrete ones.
Second, these terms (with the possible exception of TAI) are anthropomorphic: The human mind is defined as the ultimate benchmark of intelligence and it is implicitly assumed that as long as AIs are not “smarter than us”, there is no existential risk. Also, the expected timeline for AGI depends a lot on the individual view of how complex the human mind really is. If, for example, you believe that “consciousness” is in principle unachievable in a machine, you’ll probably think that AGI is impossible, or at least a very long way off, and therefore there is no reason to be concerned. Even if you think that consciousness can in principle be achieved in computers, you might equate “developing AGI” with “simulating the human brain” or at least “