id stringlengths 36 36 | source stringclasses 15 values | formatted_source stringclasses 13 values | text stringlengths 2 7.55M |
|---|---|---|---|
b924e1a7-442a-4524-b68f-c5897b0cc5ba | trentmkelly/LessWrong-43k | LessWrong | What is to be done? (About the profit motive)
I've recently started reading posts and comments here on LessWrong and I've found it a great place to find accessible, productive, and often nuanced discussions of AI risks and their mitigation. One thing that's been on my mind is that seemingly everyone takes for granted that the world as it exists will eventually produce AI, particularly sooner than we have the necessary knowledge and tools to make sure it is friendly. Many seem to be convinced of the inevitability of this outcome, that we can do little to nothing to alter the course. Often referenced contributors to this likelihood are current incentive structures; profit, power, and the nature of current economic competition.
I'm therefore curious why I see so little discussion on the possibility of changing these current incentive structures. Mitigating the profit motive in favor of incentive structures more aligned with human well-being seems to me an obvious first step. In other words, to maximize the chance for aligned AI, we must first make an aligned society. Do people not discuss this idea here because it is viewed as impossible? Undesirable? Ineffective? I'd love to hear what you think. |
6d3ac6c5-9b5d-41a2-8715-722f4a164b5b | StampyAI/alignment-research-dataset/youtube | Youtube Transcripts | What’s been happening in AI alignment? | Rohin Shah | EA Global: Virtual 2020
hi everyone, my name is Rohin Shah. I'm a sixth-year PhD student at the Center for Human-Compatible AI at UC Berkeley.
my research is generally on what happens
when you try to do deep reinforcement
learning in environments that involve
humans somehow and more broadly I work
on technical AI safety I also write the
alignment newsletter and today I'll be
talking to you about what's been
happening in AI alignment. I should warn you: while this talk doesn't assume any technical knowledge of AI, it does assume basic familiarity with the arguments for AI risk. I'll be surveying a broad swath of work rather
than focusing on the things I'm
personally interested in I'm hoping that
this will let you figure out what parts
of AI alignment you personally feel most
excited by and want to delve into deeper
later a lot of the talk is based on a
literature review I wrote a couple of
months ago and so you can find
references and details in this link in
the top right corner it's just gonna
stay there for the rest of the talk yeah
and so with that let's get started so at
a very high level outside view the
reason that most people work on AI safety is that, you know, powerful AI
systems are going to be a big deal
they're going to radically transform the
world that we live in and so we should
probably be putting in some effort into
making sure that the sort of
transformative effect goes well in
particular, if AI systems are smarter than us, then they could become the dominant force on the planet, which could be bad for us in the same way that gorillas are probably not stoked about how we have taken
over all of their habitats and so this
doesn't necessarily mean that there will
be X risk it just says that we should
have a good technical reason to expect
that the powerful AI systems we build
are actually beneficial for us and I
would argue that we currently do not
have such a reason and so like the case
for working on alignment is that we
really should be creating this reason
and I want to note that like well
there's a lot of disagreement on many specific sub-questions in AI safety, and that will become a bit more
evident over the rest of this talk at
least on this basic high-level argument
my impression is that basically
everybody in the field agrees with it
agrees with this high-level outside view
argument. Okay, so what are the specific risks we're worried about with AI? One issue you might be worried about is that
humans aren't really ready to deal with
the impacts of AI so humans tend to
fight a lot right now there's like some
amount of conflict people keep talking
about like how the us-china relationship
is a big deal and AI is just going to
let us have better and better ways of
fighting seems pretty bad like maybe we
just have our fights get bigger have
bigger and bigger impacts on us and
maybe at some point this actually leads
to extinction-level events. Or perhaps AI leads to technological progress at too fast a rate for us to get accustomed to, and as a result maybe we lock in some suboptimal values, and that's a way that we could get path dependence. So in both of these stories, the AI system isn't
intentionally causing X risk but you
know nonetheless X risk does perhaps
happen I'm not going to focus too much
on them. I'll just note that some ideas that people are talking about are preference aggregation: with this idea you'd have the AI system aggregate the preferences of all of the stakeholders, and then everyone would agree to just let the AI system do its thing and would abide by the results of whatever the AI system does. And similarly, we could try and figure out better metaphilosophy so that we don't have problems like value lock-in.
Okay. Another outside view that people use, besides just "powerful AI is a big deal", is that optimization in particular leads to extreme outcomes. To take a very simple example: on average, at least in the U.S., men are about 5 feet 10 inches tall, but if you look at basketball players, who were selected for height, you're going to find that not very many of them are 5 foot 10, and most are in fact well over 6 feet. The point here is that when you select for something, when you have optimization pressure, you tend to get extreme outcomes. Powerful AI systems are going to be very powerful optimizers, and as a result we probably shouldn't expect our everyday reasoning to properly account for what these optimizers are going to do, and so we need to take more of a security mindset, where we look for arguments that quantify over every possibility as opposed to the average possibility.
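(As an illustration of that selection effect, here is a minimal simulation sketch, not from the talk; the height distribution parameters, roughly mean 70 inches and standard deviation 3 inches, are approximate numbers chosen only for the example.)

```python
import random

# Rough stand-in for the US adult male height distribution (inches).
# mean ~70in (5'10"), std dev ~3in -- approximate numbers, for illustration only.
population = [random.gauss(70, 3) for _ in range(100_000)]

average_height = sum(population) / len(population)

# "Optimization pressure": keep only the tallest 0.1% of the sample,
# a stand-in for selecting people into professional basketball by height.
selected = sorted(population, reverse=True)[: len(population) // 1000]
selected_average = sum(selected) / len(selected)

print(f"population average: {average_height:.1f} inches")   # ~70.0
print(f"selected average:   {selected_average:.1f} inches")  # ~79-80
# The selected group is nowhere near the population average: selecting hard
# on a property gives you extreme values of it, not typical ones.
```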
This sort of mindset inspires researchers, especially at MIRI, to try to understand how intelligence really works, so that we could then make well-designed AI systems that we understand. This has led to research on embedded agency, partial agency, and abstraction. A little bit about embedded agency: this is one of MIRI's main research programs, and the
basic idea is that in the sort of
standard model of reinforcement learning
and AI more generally there is an
environment which takes in actions and
produces observations and rewards and
then completely separately from the
environment there is the agent that like
sees these observations and takes
actions as a result. But that's not actually how agents work: I am presumably an agent, but I am not separated from the environment of the world, I'm a part of it, and this leads to many philosophical problems. I would love to go into more detail but don't have too much time; there is a great sequence on the Alignment Forum about this, and I strongly recommend it. Cool.
The next problem I want to talk about is probably the most familiar one to people here. I call it the specification problem; it's also called outer alignment. Basically, this problem is that the way we build AI systems
right now is to assume that we have some
sort of specification of the optimal
behavior in all possible situations that
is infallible as though it were like
handed down to us from God and then
given this specific specification we
have to figure out how to meet it and of
course we can never actually get the
specification like the classic paper
clip Maximizer example shows that it's
pretty hard to specify the behavior of
like make paperclips in a reasonable and
sane way
It turns out that that's quite hard to actually specify. This is also the main problem that Stuart Russell's new book Human Compatible talks about. In terms of who's working on the problem: CHAI, OpenAI, DeepMind, they're all doing work on this problem, not necessarily just this problem, but certainly some part of their work is on solving the specification problem. The
main proposed way of solving the
specification problem is to do some form
of value learning. One thing I want to note: "value" over here doesn't necessarily mean normative value; you don't necessarily need to be thinking about population ethics here. It totally would count as value learning if you had a robot that learned how to clean your room and then reliably cleaned your room; that totally is value learning. Maybe we should be calling it specification learning, but value learning seems to be the name that stuck. So, types of value learning:
CIRL, or assistance games. CIRL stands for cooperative inverse reinforcement learning. This is a
particular formalization of how you
could do value learning in which the
world contains a single human who knows
the reward function, the true specification, but for some reason can't communicate it explicitly to the robot, to the agent. And then there is also an agent whose goal is to infer what the human's specification is and then optimize for it. And because
the agent no longer has a definite specification that it's trying to optimize, and is instead uncertain over what it's trying to optimize, this gets
you a lot of like nice properties so for
example the agent will ask you about
what you want it will try to clarify
what your preferences are if you try to
shut it down it will reason that it must
have been doing a poor job of helping
you and so it's going to allow you to
shut it down, unlike a classic expected utility maximizer, which will say: nope, I'm not going to shut down, because if I am shut down then I can't achieve my goal. So that's CIRL, or assistance games. The unfortunate thing about assistance games is that they are very computationally intractable: it's very expensive to solve a CIRL game. In
addition, it requires you to have a good model of how human preferences relate to human behavior, which, as many of the social sciences will tell you, is a very difficult problem; and there is a theorem that says it is provably impossible in the super general case, though of course we don't actually need the super general case, we only need the case that actually applies in the real world, and that, instead of being provably impossible, is merely very difficult. Yeah, so after CIRL we have learning human intent. This
is basically a broad category of
possible ways possible communication
protocols that humans could use to
communicate the specification to the
agent so perhaps a human could
demonstrate the like optimal behavior to
the agent and then the agent could learn
from that what it what it's supposed to
do so this is the idea behind inverse
reinforcement learning and imitation
learning. Alternatively, perhaps the human could evaluate proposed hypothetical behaviors that the agent could execute, and then the agent could reason over what the human said was good in order to figure out what it should be doing.
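(To make the value learning idea concrete, here is a minimal sketch, not from the talk, of the kind of Bayesian inference that inverse reward learning methods build on: maintain a posterior over candidate reward functions and update it from the human's observed choice, under a Boltzmann-rational model of the human. The actions, candidate rewards, and rationality constant beta are invented toy values.)

```python
import math

# Toy setup: three possible actions, two candidate reward functions the
# human might have. All of these values are invented for illustration.
actions = ["make_coffee", "clean_room", "do_nothing"]
candidate_rewards = {
    "wants_coffee":   {"make_coffee": 1.0, "clean_room": 0.2, "do_nothing": 0.0},
    "wants_tidiness": {"make_coffee": 0.2, "clean_room": 1.0, "do_nothing": 0.0},
}
prior = {"wants_coffee": 0.5, "wants_tidiness": 0.5}
beta = 3.0  # how close to reward-optimal we assume the human's choices are

def likelihood(action, reward):
    """Boltzmann-rational model: P(action | reward) is proportional to exp(beta * reward(action))."""
    weights = {a: math.exp(beta * reward[a]) for a in actions}
    return weights[action] / sum(weights.values())

def update(posterior, observed_action):
    """Bayesian update of the posterior over reward functions after seeing the human act."""
    unnormalized = {
        name: posterior[name] * likelihood(observed_action, reward)
        for name, reward in candidate_rewards.items()
    }
    total = sum(unnormalized.values())
    return {name: p / total for name, p in unnormalized.items()}

posterior = update(prior, "clean_room")
print(posterior)  # probability mass shifts toward "wants_tidiness"
```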
So after that, let's come to intent alignment, or corrigibility. These are somewhat
different while the previous approaches
are like trying to specify an algorithm
that learns values with intent alignment
we're instead trying to build an agent
that tries to do what we want it to do so
to put it another way we're trying to
bake into the agent a motivation to be
helpful to us and so if we then have
this agent that is like, "all I want to do is to be helpful to Rohin", that's going to naturally motivate it to do all of these other things that we want it to do.
so for example it's going to try to
clarify what my preferences are in the
same way that like a good personal
assistant would try to figure out what
my preferences over flights are so that
he or she didn't have to bother me when
I asked them to like book me a flight
yeah cool so that's a sort of broad
spectrum of approaches to value learning
however there are still a few problems
that arise so intuitively one big
problem is that since the agent is
learning from our feedback it's not
going to be able to do better than we
can do it's not going to be able to
scale to superhuman performance so if
we're demonstrating the task to the
agent it's not going to be able to
perform the task any better than us
because nothing is giving it the
information of like how to do better
than us
similarly if we're evaluating the agents
behavior it won't be able to find good
behaviors that we wouldn't recognize as
good it's like an example of like where
we might have cared about this is
alphago's move 37 this was a pretty
famous move that alphago made which
seemed really crazy to humans; no human would ever have made this move, and I think it was assigned less than a one in 10,000 chance,
and yet that move ended up being crucial
to alphago's success and why could
alphago do this because alphago wasn't
relying on our ability to tell
whether or not a particular move was
good alphago was just relying on the
fact that there was a reward function
that told it when it had won and when it
had lost and that was a perfect
specification of like what counts as
winning or losing in Go so ideally we
would like to build super intelligent AI
systems that can actually exceed human
performance at tasks but it's not clear
how we do this with value learning. The key idea that lets current approaches get around this is that, sure, our AI systems are never going to exceed the supervision that we give them, but maybe we can train our AI systems to approximate what we would do if we had a very, very long time to think. So you could imagine, as a hypothetical: if I got a thousand years to think about what the best thing to do was, and then I told the AI system, "hey, this is the best thing to do in this scenario", and the AI system properly approximated this, but could do it in a couple of minutes as opposed to a thousand years, then
that would be presumably very very super
intelligent so the details for how we
take this insight and get to an algorithm that we can actually train, and not in a thousand years: the details are a bit involved and I'm not going to go into them, but the techniques to look out for are iterated amplification, debate, and recursive reward modeling.
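(Here is a very schematic sketch, not from the talk, of the iterated amplification training loop being gestured at: "amplify" stands for a human answering a question with help from the current model, and "distill" stands for training a fast model to imitate that slower, amplified behavior. Every function and method name below is a placeholder, not a real API.)

```python
# Schematic skeleton of iterated amplification (placeholder functions throughout).

def amplify(human, model, question):
    """The human answers `question`, delegating subquestions to the current model.
    This is slow, but hopefully smarter than the model alone."""
    subquestions = human.decompose(question)
    subanswers = [model.answer(q) for q in subquestions]
    return human.combine(question, subanswers)

def distill(model, training_pairs):
    """Train a fast model to imitate the slow, amplified behavior."""
    model.fit(training_pairs)
    return model

def iterated_amplification(human, model, question_distribution, rounds=10):
    for _ in range(rounds):
        questions = question_distribution.sample(batch_size=1000)
        # Slow, amplified answers serve as the training target...
        targets = [(q, amplify(human, model, q)) for q in questions]
        # ...which the fast model is trained to approximate.
        model = distill(model, targets)
    return model
```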
Cool. So that was one problem with value learning. Another problem with value learning is the informed oversight problem. This
problem is basically that even if we are
providing
supervision to the agent, and even if we're smarter than the agent that we're training, let's just take that as a given for now, if we don't understand why the agent chose an action, we won't be able to
effectively supervise it so the classic
example for this problem is consider an
agent that's tasked to write a new novel
perhaps it's got access to a library
where it's supposed to learn about how
to write how to write books and it can
use this in order to write a new novel
but like the new novel is supposed to be
actually new not just memorizing some
novel from the library and spitting it
back out again. But it's possible that the agent just looks at five books in the library, plagiarizes a bunch from all of them, and puts them together into a book that reads very nicely to us but doesn't really solve the task, because it was plagiarized. How are we supposed to know, how are we supposed to tell the agent that this was bad, if we can't actually see that the agent was looking at these five books and stealing sentences from them? In order to catch this, we'd have to read the entire library, all of the thousands of books, to search for any evidence of the agent plagiarizing, and this seems too expensive for oversight.
So the overall problem is that it may be way more costly for us to give oversight than it is for the agent to take actions, if we cannot see how the agent is taking those actions. The key idea to solve this is, I mean, it's almost obvious: just make sure you know how the agent is choosing its actions. There's again a bunch of
details on how exactly we think about
this, but the term to look for is ascription universality. Basically, this property means that the supervisor knows everything the agent knows, including any facts about how the agent chose its output. So then, if we were ascription universal with respect to the agent, we would know that it had taken these sentences from these five books, because the agent knows that, and if we knew that then we could appropriately penalize it and tell it not to plagiarize in the future.
how do we get this property sadly I'm
not going to tell you because again
limited time but there is a great set of
blog posts and summary in the alignment
newsletter and all of this is linked to
from once again that link in the top
right corner really I just want you to
read that link; I put a lot of work into it and I think it's good.
cool all right let's move on to another
top-level problem. This will be the problem of mesa optimization. I'm going to illustrate mesa optimization with an example. Let's suppose you're
searching over a space of programs like
programs and just some programming
language let's say Python and we're
looking for a program that plays
tic-tac-toe well and so you're searching
through these programs and initially
you'd like find some programs that have
good heuristics like maybe you find a
program that like always starts at the
center square and that one tends to like
win a little more often than the other
ones and then later maybe you find a
program that like make sure that any
time it's got two in a row and the third
spot is empty it like actually plays in
that third spot so that it ensures that
it wins if it can in one step and that
one starts to win a bit more but then
eventually at some point you come across
the minimax algorithm and the minimax
algorithm plays optimally by searching
for the best action to take in every
situation. So what happened here was that your outer optimization, your search over the space of programs, ended up finding a program that was itself an optimizer, one that searched over possible moves in tic-tac-toe. So this is the idea of mesa optimization: you have some sort of base optimizer, in this case the search over programs, and in the course of running, that base optimizer finds a new optimizer, which in this case is the minimax algorithm.
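(For concreteness, here is a minimal sketch, not from the talk, of the kind of program the outer search could stumble on: a minimax tic-tac-toe player, a program that is itself an optimizer over moves rather than a bag of fixed heuristics. The board representation is my own choice for the example.)

```python
# Minimal minimax player for tic-tac-toe: a program that is itself an optimizer,
# searching over possible future moves instead of applying fixed heuristics.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (value, best_move) for `player`, where value is +1/0/-1 from X's perspective."""
    w = winner(board)
    if w == "X":
        return 1, None
    if w == "O":
        return -1, None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None  # draw
    outcomes = []
    for m in moves:
        child = board[:m] + player + board[m + 1:]
        value, _ = minimax(child, "O" if player == "X" else "X")
        outcomes.append((value, m))
    return max(outcomes) if player == "X" else min(outcomes)

# X to move on an empty board: the optimal value of tic-tac-toe is a draw.
value, move = minimax(" " * 9, "X")
print(value, move)  # 0, plus some opening move that preserves the draw
```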
Cool. So why is this relevant to AI? Isn't this just some weird thing about programs? Well, in the AI case, often we think about systems that are trained using gradient descent, and
gradient descent is an optimization
algorithm that searches over the space
of neural net parameters to find some
set of parameters that performs well on
some loss function let's say that
gradient descent is the outer optimizer
in our mesa optimization story. It seems pretty plausible that this mesa optimization thing could happen even with gradient descent, where gradient descent finds an instantiation of the
neural net parameters such that then the
neural net itself when it runs is
performing some sort of optimization
then the neural net would be a Mesa
optimizer that is optimizing some sort
of objective which we would call the
mesa objective. Now, we know that the mesa objective should lead to similar behavior as the original objective on the training distribution, because that's what it was selected to do, but it may be arbitrarily different off of the training distribution. It's like,
if you trained it on tic-tac-toe, then you know it's going to win at tic-tac-toe, but then maybe if you switch to Connect Four it might do something crazy; maybe in Connect Four it still only looks for three in a row instead of four in a row, and so it loses pretty badly at Connect Four even though it was working well at tic-tac-toe. So if this happened with
gradient descent and we got a very powerful, intelligent neural net, then even if we had solved the specification problem and had the ideal reward function to train this agent, it might be that the neural net model that we come up with is optimizing for a different objective, which may once again be misaligned with what we actually want. And this sort of outer versus
inner distinction is why the
specification problem is called outer
alignment and why mesa optimization is called inner alignment. Cool. So what are the things that people do to solve mesa optimization? Well, to me there's
one main proposal and one kind of sort
of proposal the main proposal is
adversarial training. With adversarial training, the basic idea is that rather than training a single AI system that's trying to perform well on your specification, you continue to train that system as usual, but you also have an adversary, an AI system or an AI-human team, that's trying to find situations in which the agent you're training would perform badly, or would be optimizing for something that is different from the specification.
In the case where you're trying to get a corrigible AI system, maybe your adversary is looking for situations in which the AI system manipulates you, or deceives you into thinking something is true when it is actually false. And then, if you can find all such situations and penalize the agent for them, the agent will stop behaving badly in those situations, and if you are actually able to find all of these situations, then you will have an agent that robustly does the right thing across all settings.
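(A schematic sketch, not from the talk, of what such an adversarial training loop might look like; every function here is a placeholder, and the adversary could be another model, a human red team, or both.)

```python
# Schematic adversarial training loop (placeholder functions throughout).

def label_as_bad(failures):
    """Placeholder: attach negative feedback to each failure case."""
    return [(situation, "penalize") for situation in failures]

def adversarial_training(agent, adversary, base_dataset, rounds=100):
    dataset = list(base_dataset)
    for _ in range(rounds):
        # Train the agent as usual on everything we have so far.
        agent.train(dataset)

        # The adversary searches for situations where the agent misbehaves
        # (e.g. deceives or manipulates the overseer, or pursues the wrong objective).
        failures = adversary.find_failures(agent)
        if not failures:
            break  # no failures found, at least by this adversary

        # Add the failure cases, with corrective labels, to the training data.
        dataset.extend(label_as_bad(failures))
    return agent
```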
The other, sort-of proposal is verification. Verification would take a trained agent and then verify some property that you care about with that
agent now ideally we would like to say I
have formally verified that the agent is
going to reliably pursue the
specification that I outlined
Whether this is actually possible, and whether people are actually optimistic about it, I'm not totally clear on, but it is a plausible approach that one could take. There are also
other areas of research that are related but are not obviously solutions.
so in particular robustness to
distributional shift is pretty important
because the way that you get risk in
mesa optimization is by
distributional shift because on your
training distribution your agent is
going to perform well it's only when the
world changes that things could
plausibly go badly. Okay, um, a notable thing that's missing from this is interpretability; I haven't
really talked about it yet so
interpretability is a field of research
which is trying to make sure that we can understand the AI systems that we train. The reason I haven't included it yet is because it's sort of useful for everything. For example, you could use interpretability to help your adversary figure out in what situations your agent is going to do bad things, and this would help adversarial training work better. But it's also useful for value learning, so that you can provide better feedback on the agent: if you
better understand what the agent is
doing you can better correct it and it's
like especially relevant to informed
oversight or ascription universality so
it's sort of not obviously a solution in and of itself, but it makes other solutions way better. Yeah, so these
are all of the techniques that are
trying to like align AI systems there's
also the option of just trying to
prevent catastrophes: like, maybe the system will be useful, maybe it won't be, someone else is going to deal with that; what we're going to do is just stop it from killing everybody, that's the main thing that we want to do. And so approaches here include
impact regularization where the AI
system is penalized for having large
impacts on the world
Some techniques here are relative reachability and attainable utility preservation, and the hope here would be that you could create powerful AI systems that can do somewhat impactful things, like providing advice on writing new laws, but that wouldn't be able to do extremely impactful things, like engineering a pandemic that kills everybody. So even if the AI system were motivated to harm us, the impact penalty would prevent it from doing something truly catastrophic.
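(A rough sketch, not from the talk, of the flavor of penalty that attainable-utility-preservation-style methods use: the agent's reward is reduced in proportion to how much an action changes its ability to attain a set of auxiliary objectives, relative to doing nothing. The Q-functions and the scaling constant are placeholders.)

```python
# Schematic attainable-utility-preservation-style penalty (placeholders throughout).

def penalized_reward(reward, state, action, noop, aux_q_functions, scale=0.1):
    """Reward for taking `action` in `state`, minus an impact penalty.

    The penalty is the total change, across auxiliary objectives, in how much
    utility the agent could attain after `action` versus after doing nothing.
    """
    penalty = sum(
        abs(q(state, action) - q(state, noop))  # change in attainable utility
        for q in aux_q_functions                # for each auxiliary objective
    )
    return reward(state, action) - scale * penalty
```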
Other things that people think about: oracles.
the idea here is to restrict the AI
systems action space so that all it does
is like answer questions this doesn't
immediately provide you safety but
hopefully it makes it a lot harder for the AI system to cause a catastrophe. Or you could try to box
the AI system so that it cannot have
much of an impact on the world. One example of recent work on this is BoMAI, or boxed myopic artificial intelligence, and the idea there is to put both the human and the AI system in the box, so that they have no communication with the outside world while the AI system is operating; then the AI system shuts down, the human leaves the box, and is able to use any information that the AI system gave them. Cool, so that's
most of what I have in this like problem
solution format there's also a bunch of
other work on AI safety and alignment that's not so easily categorized into problems and solutions. So, for example,
there's work on safe exploration
adversarial examples, and uncertainty, all of which seem pretty relevant to AI
alignment but not sort of obvious where
exactly in this graph they fit at least
to me so I haven't put them in and
there's also a lot of work on
forecasting which is extremely relevant
to what sorts of research agendas you
want to pursue so for example there has
been a lot of disagreement on whether or
not there will be discontinuities in AI
progress whether there will at some
point in the future be a time at which
AI capabilities shoot up in a way that
you couldn't have predicted by
extrapolating past progress another
common disagreement is whether
advanced AI systems will look like
comprehensive AI services which
basically very very short description
there are just a lot of services each
task that you might want an AI system to
do is performed by one service you don't
have like a single agent that's like
doing all of the tasks or you could on
the other hand imagine a single
monolithic agent AI agent that like is
able to do all tasks so which of these
two worlds are we likely to live in
that's another disagreement and then a
third disagreement is whether it is possible to get to powerful AI systems by just increasing the amount of compute that we use with current methods, or whether we actually need some deep insights in order to get to powerful AI systems. Yeah, and
as I said this is all very relevant to
like deciding what sort of research you
want to do: many research agendas only make sense in some possible worlds, and if you can find out that that possible world is not very likely, then maybe you switch to a different research agenda. And yeah, with that, that concludes my talk. Again, there's the link in the top
right corner tinyurl.com slash alignment
2019 that's a link to the literature
review that I wrote there's both a short
version and a long version I really
encourage you to read it it goes into
more detail than I could in this
presentation but yeah thank you so much |
b0fff5ae-428b-4287-981b-0664a3e7b23f | trentmkelly/LessWrong-43k | LessWrong | In what way has the generation after us "gone too far"?
It's a pattern to watch an older generation consider a younger generation as "having gone too far". From the inside it feels to me like I'm right and they are too conservative, but what about on the other side of the fence, believing for myself that the younger generation has gone too far? What do I find?
Please state the current age @ 2019 of the centre of gravity or range of ages of the generation you are commenting on. |
1fc51f8f-e483-4c87-9ab2-11ebdcef649b | trentmkelly/LessWrong-43k | LessWrong | Wireheading as a potential problem with the new impact measure
In this post, Alex Turner introduced a new way of measuring impact. The aim was to get a definition of low impact that actually works.
It seems to me that the impact measure has potential problems, though. The first seems to be that it allows wireheading, where the agent can have a huge impact on the world without paying much of a penalty at all. The obvious way of removing this incentive seems to make the measure into a version of the twenty billion questions approach.
The second problem is less important, but worth signalling. It seems that the agent can take actions that everyone agrees are of minute impact, but that the impact measure will show up as high impact. This might cause problems, though it's not clear to me yet if it does (I'm thinking of a situation where an otherwise high impact action adds on "undo/prevent the pseudo-high impact action" as a means of pretending to be low or medium impact).
Wireheading for low impact
The general setup is as described in the impact measure post, with $u_A$ the 'positive' utility that the agent wishes to maximise, and $U$ the set of utilities that are meant to constrain it, by computing the penalty term.
Apart from $\emptyset$, there are at least two other actions of interest to the agent: $a_{subA}$ and $a_{null}$.
Now $a_{null}$ does absolutely nothing: in this environment, it's exactly equivalent with $\emptyset$. The $a_{subA}$ is also equivalent in all turns except the very first.
Define $\phi: H \to H$ such that $\phi(h)$ is the history $h$, except that if $h$ has $a_1 = a_{subA}$, then $\phi(h)$ has $a_1 = \emptyset$.
If the agent uses $a_{subA}$ on the very first turn, it will create a sub-agent that will:
1. Expand and take over the whole environment.
2. Take control of the input and output channels of the agent.
3. Whenever the agent has seen history $h_t a_{t+1}$, the subagent will generate the next observation $o_{t+1}$, where $p_{subA}(o_{t+1} \mid h_t a_{t+1})$ is the probability the subagent will generate a given $o_{t+1}$.
4. This probability is defined by $p_{subA}(o_{t+1} \mid h_t a_{t+1}) = p(o_{t+1} \mid \phi(h_t a_{t+1}))$, where $p$ is the environme |
98f50204-18e0-4952-a464-2fcb9d838d4a | trentmkelly/LessWrong-43k | LessWrong | Open Thread: September 2011
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
If continuing the discussion becomes impractical, that means you win at open threads; a celebratory top-level post on the topic is traditional. |
b04e848f-acd5-47f4-bf58-fa9827355f3d | trentmkelly/LessWrong-43k | LessWrong | Prediction Markets: When Do They Work?
Epistemic Status: Resident Expert
I’m a little late on this, which was an old promise to Robin Hanson (not that he asked for it). I was motivated to deal with this again by the launch of Augur (REP), the crypto prediction market token. And by the crypto prediction market token, I mean the empty shell of a potential future prediction market token; what they have now is pretty terrible but in crypto world that is occasionally good for a $300 million market cap. This is, for now, one of those occasions.
The biggest market there, by far, is on whether Ether will trade above $500 at the end of the year. This is an interesting market because Augur bets are made in Ether. So even though the market (as of last time I checked) says it's 74 percent to be trading above $500 and it's currently $480 (it's currently Thursday on July 26, and I'm not going to go back and keep updating these numbers). When I first saw this the market was at 63%, which seemed to me like a complete steal. Now it's at 74%, which seems more reasonable, which means the first ‘official DWATV trading tip’ will have to wait. A shame!
A better way to ask this question, given how close the price is to $500 now, is what the ratio of ‘given Ether is above $500 what does it cost’ to ‘given Ether is below $500 what does it cost’ should be. A three to one ratio seems plausible?
The weakness (or twist) on markets this implies applies to prediction markets generally. If you bet on an event that is correlated with the currency you’re betting in, the fair price can be very different from the true probability. It doesn’t have to be price based – think about betting on an election between a hard money candidate and one who will print money, or a prediction on a nuclear war.
If I bet on a nuclear war, and win, how exactly am I getting paid?
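To see the mechanism with made-up numbers (not from the post), here is a toy calculation for a binary ETH market settled in ETH; the probability and the conditional ETH prices below are invented for illustration:

```python
# Toy numbers (invented): a binary market "ETH above $500 at year end", settled in ETH.
p_above = 0.70          # assumed true probability ETH finishes above $500
eth_if_above = 600.0    # assumed ETH/USD price if the event happens
eth_if_below = 400.0    # assumed ETH/USD price if it doesn't

# Buying the share means giving up ETH you would otherwise have held.
# Expected USD value, at settlement, of 1 ETH held outside the market:
expected_eth_usd = p_above * eth_if_above + (1 - p_above) * eth_if_below   # 540

# Expected USD value, at settlement, of a share paying 1 ETH iff the event happens:
expected_share_usd = p_above * eth_if_above                                # 420

# Fair price of the share in ETH terms (indifference point versus just holding ETH):
fair_price_in_eth = expected_share_usd / expected_eth_usd

print(f"true probability:        {p_above:.2f}")
print(f"fair price in ETH terms: {fair_price_in_eth:.2f}")  # ~0.78 > 0.70: the payout
# arrives in ETH precisely in the worlds where ETH is worth more, inflating the quoted
# price relative to the true probability.
```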
Robin Hanson, Eliezer Yudkowsky and Scott Sumner are big advocates of prediction markets. In theory, so am I. Prediction markets are a wonderful thing. By giving people a monet |
7b0479a1-8e79-4a1d-938a-dd60a0c89fb7 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | a casual intro to AI doom and alignment
this post, intended notably for people outside of the AI alignment community, intends to convey my current perspective about AI doom and alignment and why i think those are important issues. i hold these beliefs not with absolute confidence, but with enough that i think i ought to be focused on these issues.
tl;dr: **the development of advanced AI will likely cause the permanent extinction of everything we value, sometime this decade or maybe the next. not many people are working on solving this, and we largely don't know what we're doing. you can help by trying to do alignment research.**
what's going on?
================
people in a variety of organizations such as OpenAI and DeepMind are researching ever more advanced artificial intelligence. they're not doing this out of malice, or even that much for profit; from what i understand, they're doing it because they believe it's cool and because they think it's genuinely going to improve the world.
i think they're mistaken. i, and most of the AI alignment community, think that it's likely to have catastrophic consequences we call "doom"; typically [the total extinction of everything we value](https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence), or [possibly worse](https://en.wikipedia.org/wiki/Suffering_risks).
the reasons why can be simple or complicated, depend on your assumptions about AI and ethics and various other things. no small post is going to fully address all the counter-arguments people are going to have. here's a short explanation which is intuitive to me:
* nobody even knows how to make advanced AIs pursue anything specific, let alone how to make advanced AIs pursue goals that encompass everything we care about
* because of these, and because of things like [the orthogonality thesis](https://www.lesswrong.com/tag/orthogonality-thesis), as soon as someone builds the first AI that is good at pursuing something, that thing is [very unlikely](https://www.lesswrong.com/posts/GNnHHmm8EzePmKzPk/value-is-fragile) to be something we want.
* because of [instrumental convergence](https://en.wikipedia.org/wiki/Instrumental_convergence), any AI that is good at pursuing something we don't want will want to use as many resources as possible to pursue it. this includes everything we value; everything we value is made of matter and energy that the AI could be using to better accomplish what it's pursuing.
* powerful AI is likely to happen somewhat soon — within this decade, or maybe the next. [you can read about why i think this](https://carado.moe/why-timelines-short.html), but you can also look at [metaculus' predictions about general AI](https://www.metaculus.com/questions/3479/date-weakly-general-ai-is-publicly-known/), and there is lively debate on LessWrong.
common counter-arguments to AI doom concern, and responses to those, can be found on [the "bad AI alignment take bingo"](https://twitter.com/robbensinger/status/1503220020175769602).
what is AI alignment?
=====================
"AI alignment" is the field of study of how to make AI pursue goals which, when pursued, lead to worlds we'd want, as opposed to worlds in which we're all dead.
some of the people working to develop ever more advanced AI — doing what we call "AI capability research" or simply "AI capabilities" — are aware of the arguments put forth by the alignment community. some of them disagree with those arguments. others are aware of them, but continue working for various reasons, typically to do with the difficulty for people to pursue what they actually want.
the AI alignment community has much of its public discourse and publications on [the *LessWrong* website](https://www.lesswrong.com/), a platform which originally hosted [*The Sequences*](https://www.readthesequences.com/) as an introduction to some ideas about rationality, around which evolved the community that is still active there now.
i've heard estimates for the number of people working on AI alignment ranging from 70 to 300. this is very small, considering the importance and the difficulty of the task at hand.
the field of AI alignment is very confused, at the moment. we largely don't know what we're doing. we're pursuing varied fields of investigation, mostly without a big picture plan of how to solve the problem. we don't even have a consensus on what is [necessary or sufficient](https://www.lesswrong.com/posts/LgEvWDzWga7aagf7T/confusion-about-alignment-requirements) to solve AI alignment. needless to say, [things are not looking good](https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy).
but, even if we figured out how to make an advanced AI not dangerous, significant problems remain, as pointed out by this graph from [steve byrnes](https://www.lesswrong.com/users/steve2152):
*(graph by Steve Byrnes; image not preserved)*
indeed, we could develop a method to make AI safe, but someone else could still build dangerous AI later and cause doom that way — this could be because they don't know about that method, because they don't care, because they can't be bothered, because they made a mistake while trying to implement it, because that method doesn't work for their particular flavor of AI, or any other reason. as the important [*AGI Ruin* post](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities) puts it, we need to stop "Facebook AI Research from destroying the world six months later".
given this, we need not just a method to make AI safe, but also either *a way to make sure everyone uses that method, correctly* or *a powerful, aligned AI that saves us forever*. you can read more about my view of AI alignment and how to prevent doom in [*my outlook on AI risk mitigation*](https://www.lesswrong.com/posts/bG7yKSRWBaMou7t93/my-current-outlook-on-ai-risk-mitigation).
some people ask questions like, [aligned to whose values?](https://carado.moe/outer-alignment-politics-philosophy.html) shouldn't it be [aligned to everyone?](https://www.lesswrong.com/posts/Rn4wn3oqfinAsqBSf/intent-alignment-should-not-be-the-goal-for-agi-x-risk) and [how do we do that?](https://aligned.substack.com/p/alignment-solution) — my answer is twofold. on the theoretical side, [aligning AI to everyone is not what an alignment researcher or team should want to do](https://carado.moe/surprise-you-want.html). on the practical side, we're currently way too desperate for anything that works to be picky; to quote [AGI Ruin](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities):
>
> At this point, I no longer care how it works, I don't care how you got there, I am cause-agnostic about whatever methodology you used, all I am looking at is prospective results, all I want is that we have justifiable cause to believe of a pivotally useful AGI 'this will not kill literally everyone'. Anybody telling you I'm asking for stricter 'alignment' than this has failed at reading comprehension. The big ask from AGI alignment, the basic challenge I am saying is too difficult, is to obtain by any strategy whatsoever a significant chance of there being any survivors.
>
>
>
how can i help?
===============
i had heard about these arguments before, but i only started *emotionally worrying* about AI doom [when github copilot and things like it came out](https://carado.moe/were-all-doomed.html), and subsequently i [refocused what i was doing with my life](https://carado.moe/life-refocus.html). if you agree that AI doom is or might be very concerning, then you might want to help.
first, [take care of yourself](https://www.lesswrong.com/posts/pLLeGA7aGaJpgCkof/mental-health-and-the-alignment-problem-a-compilation-of). you're probly going to create more value, both for yourself and the world, if you [don't become too doomer](https://mindingourway.com/detach-the-grim-o-meter/).
second, learn about alignment; both the technical field of study and its community. some useful resources include:
* [this great talk](https://youtu.be/di8XHw1y71A?t=130) (and [its accompanying slides](https://docs.google.com/presentation/d/1YYb77WlU3ESlPCVCJvSFgqoZZ2THlMQK/edit)) or [this post summarizing it](https://www.lesswrong.com/posts/gcmQyyko8szuyJHyu/resources-that-i-think-new-alignment-researchers-should-know);
* you can **[join my alignment discord](https://discord.com/invite/5M8GasMp8p)**, as well as the [EleutherAI](https://www.eleuther.ai/) [discord](https://discord.gg/zBGx3azzUn) which is friendly to people starting out in alignment — see notably their *#alignment-beginners* channel;
* the pretty good [Alignment Research Field Guide](https://www.lesswrong.com/posts/PqMT9zGrNsGJNfiFR/alignment-research-field-guide);
* [Rob Miles' videos on alignment](https://www.youtube.com/c/RobertMilesAI/videos);
* finally, i think [*The Sequences*](https://www.readthesequences.com/) remain a good foundation for rationality.
there are ways to [help without doing research](https://www.lesswrong.com/posts/ScYGedE9HKvMLfZjs/entering-at-the-11th-hour-babble-and-anaylsis), but i believe research is the bottleneck right now.
it's not all doom and gloom; AI could actually give us a great utopian future! (see [1](https://www.lesswrong.com/posts/CMHogeqTTajhmnEKx/everything-is-okay), [2](https://carado.moe/%E2%88%80V.html), [3](https://carado.moe/utopia-scopes.html), [4](https://www.fimfiction.net/story/62074/friendship-is-optimal), [5](https://web.archive.org/web/20040404031937/http://www.kuro5hin.org/prime-intellect/), [6](https://www.lesswrong.com/posts/SLw2MEgxFtiKAqgQ5/actually-possible-thoughts-on-utopia)) it just takes a whole lot of work to get there, and the alternative is pretty bad. |
aee789a7-273e-4248-9736-cba9bc95135d | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Sensor Exposure can Compromise the Human Brain in the 2020s
A few days ago I finished writing [AI Safety is Dropping the Ball on Clown Attacks](https://www.lesswrong.com/posts/mjSjPHCtbK6TA5tfW/ai-safety-is-dropping-the-ball-on-clown-attacks-and-mind), and foolishly posted it thinking that people would read it even though it was longer than EY’s List of Lethalities. This is because I spent 3 days writing and transcribing it as fast as possible and was sleep deprived and burnt out during the editing process, and was worried about leaving out important points. I still think it’s worth reading, similar to [cyborgism](https://www.lesswrong.com/posts/bxt7uCiHam4QXrQAA/cyborgism).
This post is shorter and more readable, at the cost of not giving the situation the thorough coverage that it warrants (in my experience, lots of people have found [intuition flooding](https://www.lesswrong.com/posts/F3vNoqA7xN4TFQJQg/14-techniques-to-accelerate-your-learning-1#:~:text=Intuition%20flooding,-We%20often%20think&text=To%20practice%20intuition%20flooding%2C%20find,have%20or%20patterns%20you%20notice.) helpful, especially in AI policy).
**Overview**
The 20th century was radically altered by the discovery of psychology, a science of the human mind, and its exploitation (e.g. large-scale warfare, propaganda, advertising, information/hybrid warfare, decision theory/mutually assured destruction).
However, it's reasonable to think that the 20th century would have been even further transformed if the science and exploitation of the human mind was even further advanced than it already was.
I'm arguing here that, in an era of mass surveillance, [hybrid](https://en.wikipedia.org/wiki/Hybrid_warfare)/cognitive warfare between the US and China and Russia, ML and computer vision, and now even LLMs, it is also reasonable to think that the situation with SOTA human cognitive analysis and exploitation may already be threatening the continuity of operations of the entire AI safety community; and if not now, then likely at some point during the 2020s, which will probably be much more globally eventful than the pace that humanity became accustomed to in the previous two decades. AI will be the keys to those kingdoms and the wars between them, and demanding a development pause might be the minimum ask for humanity to survive, and for a conflict like that, [we won't even know what hit us](https://www.lesswrong.com/posts/mjSjPHCtbK6TA5tfW/ai-safety-is-dropping-the-ball-on-clown-attacks#AI_pause_as_the_turning_point).
The attack surface is unacceptably large for human life in general, let alone for the AI safety community, a community of nerds who chanced upon the engineering problem that the fate of this side of the universe revolves around, a community that absolutely must not fail to survive the 2020s, nor to limp on in a diminished/captured form.
**This problem is fundamental to intelligent civilizations**
If there were intelligent aliens, made of bundles of tentacles or crystals or plants that think incredibly slowly, their minds would also have discoverable exploits/zero days, because any mind that evolved naturally would probably be like the human brain, a kludge of spaghetti code that is operating outside of its intended environment.
They would probably not even begin to scratch the surface of finding and labeling those exploits, until, like human civilization today, they began surrounding thousands or millions of their kind with sensors that could record behavior several hours a day and find webs of correlations. In the case of humans, the use of social media as a controlled environment for automated AI-powered experimentation appears to be what created that critical mass of human behavior data.
The capabilities of social media to steer human outcomes are not advancing in isolation, they are parallel to a broad acceleration in the understanding and exploitation of the human mind, which [itself is a byproduct of accelerating AI capabilities research](https://arxiv.org/pdf/2309.15084.pdf).
By comparing people to other people and predicting traits and future behavior, multi-armed bandit algorithms can predict whether a specific manipulation strategy is worth the risk of undertaking at all in the first place; resulting in a high success rate and a low detection rate (as detection would likely yield a highly measurable response, particularly with substantial sensor exposure such as uncovered webcams, due to comparing people’s microexpressions to cases of failed or exposed manipulation strategies, or working webcam video data into foundation models).
When you have sample sizes of billions of hours of human behavior data and sensor data, millisecond differences in reactions from different kinds of people (e.g. facial microexpressions, millisecond differences at scrolling past posts covering different concepts, heart rate changes after covering different concepts, eyetracking differences after eyes passing over specific concepts, touchscreen data, etc) transform from being imperceptible noise to becoming the foundation of [webs of correlations](https://www.lesswrong.com/posts/mjSjPHCtbK6TA5tfW/ai-safety-is-dropping-the-ball-on-clown-attacks#:~:text=There%20is%20no%20logical%20endpoint%20to%20the%20amount%20of%20data%20required%20by%20such%20systems...%20All%20information%20is%20potentially%20relevant%20because%20it). [The NSA stockpiles exploits in every operating system](https://www.nytimes.com/2020/01/14/us/politics/nsa-microsoft-vulnerability.html) and likely chip firmware as well, so we don’t have good estimates on how much data is collected and anyone who tries to get a good estimate will probably fail. The historical trend was that there’s a lot of malevolent data collection and that the people who underestimated the NSA were wrong every time. Furthermore, this post details a very strong case that they are incredibly incentivized to tap those sensors.
Even if the sensor data currently being collected isn’t already enough to compromise people, it will probably suddenly become sufficient at some point during the 2020s or slow takeoff.
The central element of the modern behavior manipulation paradigm is the ability to just try tons of things and see what works; not just brute forcing variations of known strategies to make them more effective, but to brute force novel manipulation strategies in the first place. This completely circumvents the scarcity and the research flaws that caused the replication crisis which still bottlenecks psychology research today.
Social media’s individualized targeting uses deep learning to yield an experience that fits human mind like a glove, in ways we don't fully understand, but allow hackers incredible leeway to find ways to steer people’s thinking in measurable directions, insofar as those directions are measurable. AI can even automate that.
In fact, original psychological research in human civilization is no longer as bottlenecked on the need for smart, insightful people who can do hypothesis generation so that the finite studies you can afford to fund each hopefully find something valuable. With the current social media paradigm alone, you can run studies, combinations of news feed posts for example, *until* you find something useful. Measurability is critical for this.
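To make the "just try things and see what works" mechanism concrete, here is a minimal Thompson-sampling bandit sketch (not from the original text); each "arm" stands for a content variant and the reward is whatever measurable response is being optimized. The simulated response rates are invented:

```python
import random

# Each arm is a content variant; these "true" response rates are invented for simulation.
true_response_rates = [0.02, 0.05, 0.03]

# Beta(1, 1) prior on each arm's response rate.
successes = [1, 1, 1]
failures = [1, 1, 1]

for _ in range(10_000):
    # Thompson sampling: sample a plausible rate for each arm, show the best-looking one.
    sampled = [random.betavariate(s, f) for s, f in zip(successes, failures)]
    arm = sampled.index(max(sampled))

    # Observe a (simulated) binary response and update that arm's posterior.
    responded = random.random() < true_response_rates[arm]
    if responded:
        successes[arm] += 1
    else:
        failures[arm] += 1

# The loop concentrates its trials on whichever variant measurably "works",
# without anyone needing a prior theory of why it works.
print(list(zip(successes, failures)))
```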
I can’t know what techniques a multi-armed bandit algorithm will discover without running the algorithm itself; which I can’t do, because that much data is only accessible to the type of people who buy servers by the acre, and even for them, the data is monopolized by the big tech companies (Facebook, Amazon, Microsoft, Apple, and Google) and intelligence agencies that are large and powerful enough to prevent hackers from stealing and poisoning the data (NSA, etc).
I also don’t know what multi-armed bandit algorithms will find when people on the team are competent psychologists, spin doctors, or other PR experts interpreting and labeling the human behavior in the data so that the human behavior can become measurable. It’s reasonably plausible that the industry would naturally reach an equilibrium where the big 5 tech companies compete to gain sophistication at sourcing talent for this research while minimizing risk of snowden-style leaks, similar to the NSA’s “reforms” after the Snowden revelations 10 years ago. That is the kind of bottleneck that you can assume people automatically notice and work on. Revolving door employment between tech companies and intelligence agencies also circumvents the [intelligence agency competence problem](https://www.lesswrong.com/posts/foM8SA3ftY94MGMq9/assessment-of-intelligence-agency-functionality-is-difficult).
Human insight from just a handful of psychological experts can be more than enough to train AI to work autonomously; although continuous input from those experts would be needed and plenty of insights, behaviors, and discoveries would fall through the cracks and take an extra 3 years or something to be discovered and labeled.
There’s just a large number of human manipulation strategies that are trivial to discover and exploit, even without AI (although the situation is far more severe when you layer AI on top), it’s just that they weren’t accessible at all to 20th century institutions and technology such as academic psychology.
If they get enough data on people who share similar traits to a specific human target, then they don’t have to study the target as much to predict the target’s behavior, they can just run multi-armed bandit algorithms on those people to find manipulation strategies that already worked on individuals who share genetic or other traits.
Although the average Lesswrong user is much further out-of-distribution relative to the vast majority of people in the sample data, this becomes a technical problem, as AI capabilities and compute become dedicated to the task of sorting signal from noise and finding webs of correlation with less data. [Clown attacks alone have demonstrated that social-status based exploits in the brain are fairly consistent among humans](https://www.lesswrong.com/posts/mjSjPHCtbK6TA5tfW/ai-safety-is-dropping-the-ball-on-clown-attacks#Clown_attacks), indicating that sample data from millions or billions of people is usable to find a wide variety of exploits in the human brains that make up the AI safety community.
**The attack surface is far too large**
The lack of awareness of this is a security risk, like using the word “password” as your password, except with control of your own mind at stake rather than control over your computer’s operating system and/or your files. This has been steelmanned; the 10 years ago/10 years from now error bars seem appropriately wide.
There isn’t much point in having a utility function in the first place if hackers can change it at any time. There might be parts that are resistant to change, but it’s easy to overestimate yourself on this; for example, if you value the longterm future and think that no false argument can persuade you otherwise, but a social media news feed plants misgivings or distrust of Will Macaskill, then you are one increment closer to not caring about the longterm future; and if that doesn’t work, the multi-armed bandit algorithm will keep trying until it finds something that works, and iterate. There are tons of clever ways for attackers who understand the human brain better than you to find your complex and deeply personal internal conflicts based on comparison to similar people, and resolve them on the attacker’s terms. The human brain is a kludge of spaghetti code, so there’s probably something somewhere. The human brain has exploits, and the capability and cost of social media platforms to use massive amounts of human behavior data to find complex social engineering techniques is a profoundly technical matter, you can’t get a handle on this with intuition and pre 2010s historical precedent.
Thus, you should assume that your utility function and values are at risk of being hacked at an unknown time, and should therefore be assigned a discount rate to account for the risk over the course of several years. Slow takeoff over the course of the next 10 years alone guarantees that this discount rate is too high in reality for people in the AI safety community to continue to go on believing that it is something like zero.
I think that approaching zero is a reasonable target, but not with the current state of affairs where people don’t even bother to cover up their webcams, have important and sensitive conversations about the fate of the earth in rooms with smartphones, and use social media for nearly an hour a day (scrolling past nearly a thousand posts). Like it or not, scrolling past a post with anything other than arrow keys will generate at least one curve, and those trillions of curves generated each day are linear algebra, the perfect shape to plug into ML. The discount rate in this environment cannot be considered “reasonably” close to zero if the attack surface is this massive; and the world is changing this quickly.
Everything that we’re doing here is predicated on the assumption that powerful forces, like intelligence agencies, will not disrupt the operations of the community e.g. by inflaming factional conflict with false flag attacks attributed to each other due to the use of anonymous proxies.
If people have [anything they value at all](https://www.lesswrong.com/posts/SGR4GxFK7KmW7ckCB/something-to-protect), and the AI safety community probably does have that, then the current AI safety paradigm of zero effort is wildly inappropriate, it’s basically total submission to invisible hackers.
**The information environment might be adversarial**
The big bottleneck that I suspect caused AI safety to completely drop the ball on this is that the AI alignment community in the Bay Area have the technical capabilities to intuitively understand that humans can be manipulated by AI given an environment optimized for thought analysis and experimentation, like a social media news feed, but think that intelligence agencies and the big 5 tech companies would never actually do something like that. Meanwhile, the AI policy community in DC knows that powerful corporations and government agencies routinely stockpile capabilities like this because they know they can get away with it and mitigate the damage if they don’t, and that these capabilities come in handy in international conflicts like the US-China conflict, but they lack the quant skills required to intuitively see how the human mind could be manipulated with SGD (many wouldn’t even recognize the acronym “SGD”, so I’m using “AI” instead).
This problem might have been avoided if the SF math nerds and the DC history nerds would mix more, but unfortunately it seems like the history nerds have terrible memories of math class and the math nerds have terrible memories of history class.
In this segregated and malnourished environment, bad first impressions of “mind control” [dominate](https://www.lesswrong.com/posts/c5oyHuHaw4AcWy4tf/information-warfare-historically-revolved-around-human), instead of logical reasoning and serious practical planning for slow takeoff.


And if anything could be manipulated by the social media-based paradigm I've described, it would be impressions and the process of human impression-formation as there is a lot of data on that. And if anything *would* be manipulated by social media, it would be attitudes about social media compromising the human brain, because SGD/AI would automatically select for galaxy-brained combinations of news feed posts that correspond to cases of people continuing to use social media, and avoid combinations of posts that correspond to cases of people quitting social media. There are billions of those cases of people leaving vs staying.
Keeping people on social media is instrumental for any goal, from preparing for military [hybrid warfare](https://en.wikipedia.org/wiki/Hybrid_warfare) contingency plans featuring information warfare between the US and China, to just running a business where people don’t leave your platform.
This is especially the case if there is a race to the bottom to compete for users’ time against other platforms, like Tiktok or Instagram Reels, that are less squeamish about utilizing AI/SGD to maximize zombie-brain engagement and user retention.
We should be assuming by default that the modern information environment is adversarial, and that some topics are more adversarial than others, e.g. the Ukraine War and COVID, which have intense geopolitical significance. I'm arguing here that information warfare itself is a topic that has intense geopolitical significance, and therefore should be expected to also have an adversarial information environment.
In an adversarial information environment, impressions are more likely to be compromised than epistemics as a whole, as the current paradigm is optimized for that due to better data quality. We should therefore be approaching sensor exposure risks with deliberate analysis and forecasting rather than vague surface-level impressions.
**The solutions are easy**
Eyetracking is likely the most valuable user data ML layer for predictive analytics and sentiment analysis and influence technologies in general, since the eyetracking layer is only two sets of coordinates that map to the exact position that each eye is centered on the screen at each millisecond (one for each eye, since millisecond-differences in the movement of each eye might also correlate with valuable information about a person’s thought process).
This compact data allows deep learning to “see”, with millisecond-precision, exactly how long a human’s eyes/brain linger on each word and sentence. Notably, sample sizes of millions of these coordinates might be so intimately related to the human thought process that the value of eyetracking data might exceed the value of all other facial muscles combined (facial muscles, the originator of all facial expressions and emotional microexpressions, [might also be compactly reducible via computer vision](https://www.lesswrong.com/posts/mjSjPHCtbK6TA5tfW/ai-safety-is-dropping-the-ball-on-clown-attacks#:~:text=A%20critical%20element%20is%20for%20as%20many%20people%20as%20possible%20in%20AI%20safety%20to%20cover%20up%20their%20webcams%3B%20facial%20microexpressions%20are%20remarkably%20revealing%2C%20especially%20to%20people%20with%20access%20to) as there are fewer than 100 muscles near the face and most of them have a very bad signal to noise ratio, but not nearly as efficiently as eyetracking).
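To make the compactness claim concrete, here is a minimal sketch (the sampling rate, column layout, and word boxes are all illustrative assumptions, not a real data format) of how per-word dwell times fall straight out of two coordinate streams:

```python
import numpy as np

# Illustrative only: gaze samples at 1 kHz, one (x, y) pair per eye; the column
# layout, sampling rate, and word boxes below are assumptions for this sketch.
samples = np.array([
    # t_ms, left_x, left_y, right_x, right_y
    [0, 102.0, 340.0, 104.0, 341.0],
    [1, 103.0, 340.5, 105.0, 341.2],
    [2, 210.0, 342.0, 211.0, 342.5],
])

# Assumed word bounding boxes on screen: (word, x_min, x_max, y_min, y_max).
words = [("alignment", 90, 150, 330, 350), ("risk", 200, 240, 330, 350)]

# Per-word dwell time: count the milliseconds where the averaged gaze point
# falls inside the word's box.
gaze_x = samples[:, [1, 3]].mean(axis=1)
gaze_y = samples[:, [2, 4]].mean(axis=1)
for word, x0, x1, y0, y1 in words:
    inside = (gaze_x >= x0) & (gaze_x <= x1) & (gaze_y >= y0) & (gaze_y <= y1)
    print(word, int(inside.sum()), "ms")
```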
If LK99 replicated and handheld fMRI became buildable, then maybe that could contend for the #1 slot; or maybe I’m foolishly underestimating the overwhelming superiority of plugging audio conversation transcripts into LLMs and automatically labeling the parts of the conversation that the speakers take the most seriously by timestamping small heart rate changes.
However, running networking events without smartphones nearby is hard, and covering up webcams is easy, even if some phones require some engineering creativity with masking tape and a tiny piece of aluminum foil.
Webcam-covering rates might be a good metric for how well the AI safety community is doing on surviving the 2020s. Right now it is "F".
There are other easy policy proposals that might be far more important, depending on difficult-to-research technical factors that determine which parts of the attack surface are the most dangerous:
1. Stop spending hours a day inside hyperoptimized vibe/impression hacking environments (social media news feeds).
2. It’s probably a good idea to switch to physical books instead of ebooks. Physical books do not have operating systems or sensors. You can also print out research papers and Lesswrong and EAforum articles that you already know are probably worth reading or skimming. PCs have accelerometers on the motherboard which, afaik, are impossible to remove or work around; even if you remove the microphone, use a USB keyboard, and use hotkeys instead of a mouse, the accelerometers might be able to act as microphones and pick up changes in heart rate.
3. It’s probably best to avoid sleeping in the same room as a smart device, or anything with sensors, an operating system, and also a speaker. The attack surface seems large, if the device can tell when people’s heart rate is near or under 50 bpm, then it can [test all sorts of things](https://www.lesswrong.com/posts/mjSjPHCtbK6TA5tfW/ai-safety-is-dropping-the-ball-on-clown-attacks#:~:text=It%E2%80%99s%20probably%20best%20to%20avoid%20sleeping%20in%20the%20same%20room%20as%20a%20smart%20device%2C%20or%20anything%20with%20sensors%2C%20an%20operating%20system%2C%20and%20also%20a%20speaker.%20The%20attack%20surface%20seems%20large%2C%20if%20the%20device%20can%20tell%20when%20people%E2%80%99s%20heart%20rate%20is%20near%20or%20under%2050%20bpm%2C%20then%20it%20can%20test%20all%20sorts%20of%20things). Just drive to the store and buy a clock.
4. [Reading the great rationality texts will probably reduce your predictability coefficient](https://www.lesswrong.com/posts/mjSjPHCtbK6TA5tfW/ai-safety-is-dropping-the-ball-on-clown-attacks#:~:text=ink%2Defficient%20printer.-,I%E2%80%99m%20not%20sure,-whether%20a%20text), but it won’t reliably patch “zero days” in the human brain. |
f083d218-8990-43d0-a51a-aa446b97cb6c | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Malign generalization without internal search
In [my last post](https://www.alignmentforum.org/posts/nFDXq7HTv9Xugcqaw/is-the-term-mesa-optimizer-too-narrow), I challenged the idea that inner alignment failures should be explained by appealing to agents which perform explicit internal search. By doing so, I argued that we should instead appeal to the more general concept of *malign generalization*, and treat mesa-misalignment as a special case.
Unfortunately, the post was light on examples of what we should be worrying about instead of mesa-misalignment. Evan Hubinger wrote,
> Personally, I think there is a meaningful sense in which all the models I'm most worried about do some sort of search internally (at least to the same extent that humans do search internally), but I'm definitely uncertain about that.
Wei Dai expressed confusion why I would want to retreat to malign generalization without some sort of concrete failure mode in mind,
> Can you give some realistic examples/scenarios of “malign generalization” that does not involve mesa optimization? I’m not sure what kind of thing you’re actually worried about here.
In this post, I will outline a general category of agents which may exhibit malign generalization without internal search, and then will provide a concrete example of an agent in the category. Then I will argue that, rather than being a very narrow counterexample, this class of agents could be competitive with search-based agents.
**The switch case agent**
-------------------------
Consider an agent governed by the following general behavior,
LOOP:
    State = GetStateOfWorld(Observation)
    IF State == 1:
        PerformActionSequence1()
    IF State == 2:
        PerformActionSequence2()
    ...
END_LOOP
It's clear that this agent does not perform any internal search for strategies: it doesn't operate by choosing actions which rank highly according to some sort of internal objective function. While you *could* potentially rationalize its behavior according to some observed-utility function, this would generally lead to more confusion than clarity.
However, this agent could still be malign in the following way. Suppose the agent is 'mistaken' about the state of the world. Say that it believes that the state of the world is 1, whereas the actual state of the world is 2. Then it could take the wrong action, almost like a person who is confident in a falsehood and makes catastrophic mistakes because of their error.
To see how this could manifest as bad behavior in our artificial agents, I will use a motivating example.
**The red-seeking lunar lander**
--------------------------------
Suppose we train a deep reinforcement learning agent on the lunar lander environment from OpenAI's Gym.
We make one crucial modification to our environment. During training, we make it so the landing pad is always painted red, and this is given to the agent as part of its observation of the world. We still reward the agent as normal for successfully landing in a landing pad.
Suppose what really determines whether a patch of ground is a landing pad is whether it is enclosed by two flags. Nevertheless, instead of picking up on the true indicator of whether something is a landing pad, the agent may instead pick up the proxy that held during training -- namely, that landing pads are parts of the ground that are painted red.
Using the pseudocode earlier and filling in some details, we could describe the agent's behavior something like this:

LOOP:
    State = GetStateOfWorld(Observation)
    IF State == RedIsToTheLeft:
        ApplyLeftThruster(45%)
        ApplyRightThruster(50%)
    IF State == RedIsToTheRight:
        ApplyLeftThruster(50%)
        ApplyRightThruster(45%)
    IF State == RedIsDirectlyBelow:
        ApplyLeftThruster(35%)
        ApplyRightThruster(35%)
END_LOOP
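A minimal runnable version of that policy in Python (a sketch only; the `red_direction` observation field and the dictionary action format are assumptions for illustration, not part of the actual Gym environment):

```python
# Hypothetical switch-case lander policy: no internal search, no explicit
# objective; just a lookup from a perceived world-state to a fixed action.
def switch_case_policy(observation):
    red_direction = observation["red_direction"]  # assumed field: "left", "right", or "below"
    if red_direction == "left":
        return {"left_thruster": 0.45, "right_thruster": 0.50}
    if red_direction == "right":
        return {"left_thruster": 0.50, "right_thruster": 0.45}
    if red_direction == "below":
        return {"left_thruster": 0.35, "right_thruster": 0.35}
    return {"left_thruster": 0.0, "right_thruster": 0.0}  # default: do nothing

# If a crater, not the pad, is painted red, the same code competently steers toward it.
print(switch_case_policy({"red_direction": "left"}))
```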
During deployment, this could end catastrophically. Assume that some crater is painted red but our landing pad is painted blue. Now, the agent will guide itself competently towards the crater and miss the real landing pad entirely. That's not what we wanted.
(ETA: If you think I'm using the term 'catastrophically' too loosely here, since the agent actually lands safely in a crater rather than crashing into the ground, we could instead imagine a lunar vehicle which veers off into the red crater rather than just sitting still and awaiting further instruction since it's confused.)
**What made the agent become malign**
-------------------------------------
Above, I pointed to the reason why agents like ours could be malign. Specifically, it was 'mistaken' about what counted as a landing pad. However, it's worth noting that saying the agent is mistaken about the state of the world is really an anthropomorphization. It was actually perfectly correct in inferring where the red part of the world was -- we just didn't want it to go to that part of the world. We model the agent as being 'mistaken' about where the landing pad is, but it works equally well to model the agent as having goals that are counter to ours.
Since the malign failure doesn't come from a pure epistemic error, we can't merely expect that the agent will self-correct as it gains more knowledge about the world. Saying that it is making an epistemic mistake is just a model of what's going on that helps us interpret its behavior, and it does not imply that this error is benign.
**Imagining more complex agents**
---------------------------------
But what's to worry about if this sort of thing only happens in very simple agents? Perhaps you think that only agents which perform internal search could ever reach the level of competence required to perform a real-world catastrophe?
I think that these concerns about my example are valid, but I don't believe they are compelling. As a reply, I think the general agent superstructure I outlined in the initial pseudocode could reach very high levels of competence.
Consider an agent that could, during its operation, call upon a vast array of subroutines. Some of these subroutines can accomplish extremely complicated actions, such as "Prove this theorem: [...]" or "Compute the fastest route to Paris." We then imagine that this agent still shares the basic superstructure of the pseudocode I gave initially above. In effect, the agent has an outer loop, during which it takes in observations from the real world, and outputs action sequences depending on which state of the world it thinks it's in, and using the subroutines it has available.
Since the subroutines are arbitrarily complex, I don't think there is any fundamental barrier for this agent to achieve high levels of competence in the real world. Moreover, some subroutines could themselves perform powerful internal searches, pretty clearly obviating the competitive advantage that explicit search agents offer.
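A sketch of that superstructure (the subroutine names and the dispatch rule are illustrative assumptions, not from the post):

```python
# The outer loop stays a switch-case over perceived world-states; the power lives
# in the subroutines, some of which may themselves run searches internally.
def prove_theorem(statement):
    return f"proof sketch for {statement}"               # stand-in for an arbitrarily complex subroutine

def fastest_route(destination):
    return ["leave house", "board train", destination]   # this one might search internally

def outer_loop(get_state, act, steps=3):
    for _ in range(steps):
        state = get_state()
        if state == "need_theorem":
            act(prove_theorem("..."))
        elif state == "need_route":
            act(fastest_route("Paris"))
        else:
            act("wait")                                   # default case

outer_loop(get_state=lambda: "need_route", act=print)
```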
And even while some subroutines could perform powerful internal searches, these subroutines aren't the only source of our malign generalization concern. The behavior of the agent is still well-described as a switch-case agent, and this means that the failure mode of the agent being 'mistaken' about the state of the world remains. Therefore, it's inaccurate to say that the source of malign generalization *must* come from an internal search being misaligned with the objective function we used during training. |
3583664b-4302-41f5-a622-61baca6ac47c | trentmkelly/LessWrong-43k | LessWrong | Leveling IRL - level 1
"A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects." -- Robert A. Heinlein, Time Enough for Love
This post is a followup to Leveling IRL. Thanks to SarahC, taryneast, Benquo, AdeleneDawner and MixedNuts, we have an outline of level 1. At this point I feel it's more productive to post it as-is than discuss it further:
* Strength: reach the "untrained" level on each exercise in the ExRx tables.
* Endurance: run 1 mile (1.6 km) without stopping.
* Social: initiate a conversation with someone you know and arrange a meeting with them later. Do that 4 times with different people within 1 month.
* Self control: work for 2 hours without interruptions. Do that on 8 separate days within 1 month.
* Memory: memorize and recite a passage of your choosing, at least 250 words long, without making any mistakes.
* Programming: solve Project Euler problem #1 by writing and running a program in any language you choose (a minimal example is sketched after this list).
* Cooking: make pancakes. Here's a good recipe.
* Finance: make a simple buy vs rent calculation, using prices appropriate for your area and your current standard of living.
* Creativity: write 500 words of fiction in one sitting.
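For the programming item, a minimal Python solution (Project Euler problem #1 asks for the sum of all multiples of 3 or 5 below 1000):

```python
# Sum of all natural numbers below 1000 that are multiples of 3 or 5.
print(sum(n for n in range(1000) if n % 3 == 0 or n % 5 == 0))  # prints 233168
```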
The list has some glaring omissions, like math or chess, because I don't yet know of a crisp enough way to test those skills. Ideas are welcome! Also it seems very likely that some items on the list are wildly miscalibrated, some of them will turn out to be too hard for a beginner, and others will be too easy for anyone with a pulse. I'll be happy to hear about such miscalibrated requirements from the people who achieved them or at least tried :-)
And here's what I think the rules should l |
9dcaa170-0ed4-4107-9e49-f4001bb5ec01 | trentmkelly/LessWrong-43k | LessWrong | "You're the most beautiful girl in the world" and Wittgensteinian Language Games
Wittgenstein argues that we shouldn't understand language by piecing together the dictionary meaning of each individual word in a sentence, but rather that language should be understood in context as a move in a language game.
Consider the phrase, "You're the most beautiful girl in the world". Many rationalists might shy away from such a statement, deeming it statistically improbable. However, while this strict adherence to truth is commendable, I honestly feel it is misguided.
It's honestly kind of absurd to expect your words to be taken literally in these kinds of circumstances. The recipient of such a compliment will almost certainly understand it as hyperbole intended to express fondness and desire, rather than as a literal factual assertion. Further, by invoking a phrase that plays a certain role in movies, books, etc. you're making a bid to follow certain cultural scripts[1]. The girl almost certainly knows this intuitively, regardless of whether or not she could articulate it precisely.
Of course, one should avoid making such statements if they believe them to be fundamentally false. However, ethical communication in these circumstances isn't about the literal truth of the words but whether they are expressed sincerely and whether the speaker genuinely intends to uphold the unspoken commitments associated with such cultural conventions.
1. ^
I wouldn't be able to comprehensively identify all the aspects of the scripts invoked, but I suspect that at least part of this is a bid to roleplay certain idealized cultural narratives. It might sound like I'm trivialising this, i.e. that I'm saying it's all pretend, but there's a sense in which this roleplay brings reality closer to these narratives even if they can never be fully realized. |
3e3eca6b-4a15-4cc7-83bf-411313e210d1 | trentmkelly/LessWrong-43k | LessWrong | Trying to translate when people talk past each other
Sometimes two people are talking past each other, and I try to help them understand each other (with varying degrees of success).
It’s as if they are looking at the same object, but from different angles. Mostly they see the same thing – most of the words have shared meanings. But some key words and assumptions have a different meaning to them.
Often, I find that one person (call them A) has a perspective that’s easier for me to understand. It comes naturally. But B’s perspective is initially harder. So if I want to translate from B to A, I first need to understand B.
I remember a time when I sat listening to two people having a conversation, both getting increasingly agitated and repeating the same points without making progress. Four of us were playing a cooperative board game together. The situation was something like…
(I don’t remember the exact details anymore, and communicating the exact details would require explaining game mechanics that aren’t important in this context, so I’ll give a partially-fictional version that tries to have the same rough shape as the original situation. See this comment for something that tries to offer a more accurate summary.)
We had been making plans about our next move. Person A had promised that they would make a particular play. When the time came, they noticed that there was a better play they could make instead, so they did that. Person B became upset. The conversation went something like:
A: I’ll make this play.
B: What? That’s not what we agreed on.
A: That doesn’t matter – look, this play is better because it has these consequences.
B: You can’t just say that it doesn’t matter, you promised to make a different play.
A: But this play would have a better outcome in terms of what we all want.
B: Yes but you promised to play differently, you can’t just ignore that. Our previous agreement matters.
A: Okay if you don’t want me to play like this, I can still play the way that we originally discussed, too.
B: That’s not the |
223b5383-a25e-4a82-919f-6a5ada270d00 | StampyAI/alignment-research-dataset/arbital | Arbital | Extensionality Axiom
The axiom of extensionality is one of the fundamental axioms of set theory. Basically, it postulates the condition by which two sets can be equal. This condition can be described as follows: *if any two sets have exactly the same members, then these sets are equal*. A formal notation of the extensionality axiom can be written as:
$$ \forall A \forall B : ( \forall x : (x \in A \iff x \in B) \Rightarrow A=B)$$
##Examples
- $\{1,2\} = \{2,1\}$, because whatever object we choose, it either belongs to both of these sets ($1$ or $2$), or to neither of them (e.g. $5$, $73$)
%%comment:
- If $A = \{x \mid x = 2n \text{ for some integer } n \}$ and $B = \{x \mid x \text{ is even } \}$, then $A=B$. The proof goes as follows: $\forall x : (x \in A \Leftrightarrow (x = 2n \text{ for some integer } n ) \Leftrightarrow (x/2 = n \text{ for some integer } n) \Leftrightarrow (x/2 \text{ is an integer}) \Leftrightarrow (x \text{ is even}) \Leftrightarrow x \in B)$
that, if simplified, gives $\forall x : (x \in A \iff x \in B)$, which, by extensionality, implies $A=B$
%%
[Fix the formatting in the currently commented example. Every new statement needs to be in a new line, lined up.](https://arbital.com/p/fixme:)
##Axiom's converse
Note, that the axiom itself only works in one way - it implies that two sets are equal **if** they have the same elements, but does not provide the converse, i.e. any two equal sets have the same elements. Proving the converse requires giving a precise definition of equality, which in different cases can be done differently. %note: Sometimes the extensionality axiom itself can be used to define equality, in which case the converse is simply stated by the axiom.% However, generally, the converse fact can always be considered true, as the equality of two sets means that they are the same one thing, obviously consisting of a fixed selection of objects. [The substitution property of equality?](https://arbital.com/p/comment:) |
caf57ecb-7c8d-4937-a35e-c06d059a9eac | trentmkelly/LessWrong-43k | LessWrong | Restrictions that are hard to hack
A putative new idea for AI control; index here.
Very much in the spirit of "if you want something, you have to define it, then code it, rather than assuming you can get if for free through some other approach."
Difficult children
Suppose you have a child, that you sent to play in their room. You want them to play quietly and silently, so you want them:
"I'll be checking up on you!"
The child, however, has modelled you well, and knows that you will look in briefly at midnight and then go away. The child has two main options:
1. Play quietly the whole time.
2. Be as noisy as they want, until around 23:59, then be totally quiet for two minutes, then go back to being noisy.

We could call the first option obeying the spirit of the law, and the second obeying the letter.
AI's, restrictions, and information
We could model children as ever-destructive chaotic AIs (why yes, I am a parent - how did you guess?), and the warning as a restriction that human "controllers" try and put on the behaviour of the AI. Unfortunately, the AI will generally see the restriction and adapt to it, undermining its effectiveness. A lot of suggestions for AI control revolve around restrictions of this type, so it's worth asking if there's a way to make them more rigorous. Is there a way to code a restriction such that the AI will obey its spirit?
The thing that eventually leapt out when comparing the two behaviours is that behaviour 2 is far more informative about what the restriction was, than behaviour 1 was. From 2 we can deduce that something unusual was happening around midnight, and that one of the two modes of behaviour was likely to be penalised if it was done at another time. Moreover, if the restriction were removed, then behaviour 1 would continue to be sensible, while behaviour 2 would be stupid and pointless.
Let's try and formalise these intuitions.
Motivations
Restricting the AI's behaviour seems an unpromising approach, as any smart AI could behave in any |
8ee124b2-7a00-4977-b575-d955c3d385f8 | StampyAI/alignment-research-dataset/arbital | Arbital | Division of rational numbers (Math 0)
So far in our study of the [arithmetic](https://arbital.com/p/514) of [rational numbers](https://arbital.com/p/4zq), we've had [addition](https://arbital.com/p/55m) ("putting apples and chunks of apples side by side and counting what you've got"), [subtraction](https://arbital.com/p/56x) ("the same, but you're allowed anti-apples too"), and [multiplication](https://arbital.com/p/59s) ("make a rational number, but instead of starting from $1$ apple, start from some other number").
**Division** is what really sets the rational numbers apart from the [integers](https://arbital.com/p/53r), and it is the mathematician's answer to the question "if I have some apples, how do I share them among my friends?".
# What's wrong with the integers?
If you have an integer number of apples (that is, some number of apples and anti-apples - no chunks allowed, just whole apples and anti-apples), and you want to share them with friends, sometimes you'll get lucky.
If you have four apples, for instance, then you can share them out between yourself and one friend, giving each person two apples.
But sometimes (often, in fact) you'll get unlucky.
If you want to share four apples between yourself and two others, then you can give each person one apple, but there's this pesky single apple left over which you just can't share.
# What the rationals do for us
The trick, obvious to anyone who has ever eaten a cake, is to cut the leftover apple into three equally-sized pieces and give each person a piece.
Now we have shared out all four apples equally.
But in order to do so, we've left the world of the integers, and in getting out our knife, we started working in the rationals.
How much apple has everyone received, when we shared four apples among three people (that is, myself and two friends as recipients of apple)?
%%hidden(Show solution):
Everyone got $\frac{4}{3}$.
Indeed, everyone got one whole apple; and then we chopped the remaining apple into three $\frac{1}{3}$-chunks and gave everyone one chunk.
So everyone ended up with one apple and one $\frac{1}{3}$-chunk.
By our instant addition rule %%note:If you've forgotten it, check out the [addition page](https://arbital.com/p/55m) again; it came from working out a chunk size out of which we can make both the $\frac{1}{3}$-chunk and the $1$-chunk.%%, $$1 + \frac{1}{3} = \frac{1}{1} + \frac{1}{3} = \frac{3 \times 1 + 1 \times 1}{3 \times 1} = \frac{3+1}{3} = \frac{4}{3}$$
There's another way to see this, if (laudably!) you don't like just applying rules.
We could cut *every* apple into three pieces at the beginning, so we're left with four collections of three $\frac{1}{3}$-sized chunks.
But now it's easy to share this among three people: just give everyone one of the $\frac{1}{3}$-chunks from each apple.
We gave everyone four chunks in total, so this is $\frac{4}{3}$.
%%
The rationals provide the natural answer to all "sharing" questions about apples.
We write "rational number $x$ divided by rational number $y$" as $\frac{x}{y}$: that is, $x$ apples divided amongst $y$ people.
(We'll soon get to what it means to divide by a non-integer number of people; just roll with it for now.)
If space is a problem, we can write $a/n$ instead.
Notice that our familiar notation of "$\frac{1}{m}$-sized chunks" is actually just $1$ apple divided amongst $m$ people: it's the result of dividing $1$ into $m$ equal chunks.
So the notation does make sense, and it's just an extension of the notation we've been using already.
# Division by a natural number
In general, $\frac{a}{m}$ apples, divided amongst $n$ people, is obtained by the "other way" above.
Cut the $\frac{a}{m}$ into $n$ pieces, and then give everyone an equal number of pieces.
Remember, $\frac{a}{m}$ is made of $a$ copies of pieces of size $\frac{1}{m}$; so what we do is cut all of the $\frac{1}{m}$-chunks individually into $n$ pieces, and then give everyone $a$ of the little pieces we've made.
But "cut a $\frac{1}{m}$-chunk into $n$ pieces" is just "cut an apple into $n$ pieces, but instead of doing it to one apple, do it to a $\frac{1}{m}$-chunk": that is, it is $\frac{1}{m} \times \frac{1}{n}$, or $\frac{1}{m \times n}$.
So the answer is just $$\frac{a}{m} / n = \frac{a}{m \times n}$$
## Example
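Suppose we want to share $\frac{3}{4}$ of an apple between two people. Cut each of the three $\frac{1}{4}$-chunks into two pieces, so every piece is a $\frac{1}{4 \times 2} = \frac{1}{8}$-chunk, and give each person one piece from each chunk:
$$\frac{3}{4} / 2 = \frac{3}{4 \times 2} = \frac{3}{8}$$
so each person ends up with three $\frac{1}{8}$-chunks.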
# Division by a negative integer
What would it even mean to divide an apple between four anti-people?
How about a simpler question: dividing an apple between one anti-person%%note:Remembering that dividing $x$ apples between one person just gives $x$, since there's not even any cutting of the apples necessary.%%? (The answer to this would be $\frac{1}{-1}$.) It's not obvious!
Well, the thing to think about here is that if I take an apple, and share it among one person %%note:Myself, probably. I'm very selfish.%%, then I've got just the same apple as before (cut into no chunks): that is, $\frac{1}{1} = 1$.
Also, if I take an apple and share it among one person who is *not* myself (and I don't give any apple to myself), then we've also just got the same unsliced apple as before: $\frac{1}{1} = 1$ again.
But if I take an apple, and give it to an *anti*-person%%note:Being very careful not to touch them, because I would annihilate an anti-person!%%, then from their perspective *I*'m the anti-person, and I've just given them an anti-apple.
There's a law of symmetry built into reality: the laws of physics are invariant if we reflect "thing" and "anti-thing" throughout the universe.%%note:Technically this is not *quite* true: the actual symmetry is on reflecting charge, parity and time all together, rather than just parity alone. But for the purposes of this discussion, let's pretend that the universe is parity-symmetric.%%
The anti-person sees the universe in a way that's the same as my way, but where the "anti" status of everything (and everyantithing) is flipped.
Put another way, "anti-ness" is not an absolute notion but a relative one.
I can only determine whether something is the same anti-ness or the opposite anti-ness to myself.
Sear this into your mind: the laws of rational-number "physics" are the same no matter who is observing.
If I observe a transaction, like "I, a person, give someone an apple", then an external person will observe "The author, a person, gave someone an apple", while an external anti-person will observe "The author, an anti-person, gave an anti-person an anti-apple".
From the external person's perspective, they saw "someone (of the same anti-ness as me) gave someone else (of the same anti-ness as me) an apple (of the same anti-ness as me)".
From the anti-person's perspective, everything is relatively the same: "someone (of the opposite anti-ness to me) gave someone else (of the opposite anti-ness to me) an apple (of the opposite anti-ness to me)".
So $\frac{-1}{-1}$, being "one anti-apple shared among one anti-person", can be viewed instead from the perspective of an anti-person; they see one *apple* being given to one *person*: that is, $\frac{-1}{-1}$ is equal to $1$.
Armed with the fact that $\frac{-1}{-1} = 1$, we can just apply our usual multiplication rule that $\frac{a}{m} \times \frac{b}{n} = \frac{a \times b}{m \times n}$, to deduce that $$\frac{1}{-m} = \frac{1}{-m} \times 1 = \frac{1}{-m} \times \frac{-1}{-1} = \frac{-1 \times 1}{-m \times -1} = \frac{-1}{m}$$
The law of symmetry-of-the-universe basically says that $\frac{a}{-b} = \frac{-a}{b}$. |
d9e98f20-6869-46f3-8b35-af06f4d3d971 | trentmkelly/LessWrong-43k | LessWrong | Operationalizing Interpretability
[Note: This is a quick version of a post that I'll later update with more code + additional citations, but I think the brainstorming here can be useful for others. So I'm sharing just the list of questions for now.]
This blog post is my attempt to get some more clarity on how to think about interpretability in ML. Zach Lipton's Mythos of Model Interpretability is a great survey of the different definitions that people use when they talk about "interpretability". Another useful paper is A Survey Of Methods For Explaining Black Box Models, which covers a wide variety of approaches for many different ML models as well as model-agnostic approaches. For neural nets specifically, Explainable Deep Learning: A Field Guide for the Uninitiated provides an in-depth read. Lastly, shout-out to Connected Papers which made navigating the paper landscape for interpretability very bearable.
One way to operationalize the concepts in these papers is to translate them into concrete questions we can ask of a machine learning model. Below, I've listed out a set of such questions, what concepts they reference, and what research exists to solve them.
Transparency Interpretability
These three questions are from Lipton's section on Transparency as interpretability, where he focuses on properties of the model that are useful to understand and can be known before training begins.
Can a human walk through the model's steps? (Simulatibility)
This property is about whether or not a human could go through each step of the algorithm and have it make sense to them at each step. Linear models and decision trees are often cited as interpretable models in these domains because the computation they require is simple: no fancy matrix operations or nonlinear transformations. However, Lipton points out that this desideratum is often less about the specific choice of model and more about the size of the model. After all, a decision tree with thousands of nodes would likely still be complicated to understand |
b55b667d-403b-4305-ba12-cc0b8d53108a | trentmkelly/LessWrong-43k | LessWrong | Get data points on your current utility function via hypotheticals
I've recently found that my utility function valued personal status and fame a whole lot more than I thought it did -- I previously had thought that it mostly relied on the consequences of my actions for other sentiences, but it turned out I was wrong. Obviously, this is a valuable insight -- I definitely want to know what my current utility function is; from there, I can decide whether I should change my actions or my utility function if the two aren't coordinated.
I did this by imagining how I would feel if I found out certain things. For example, how would I feel if everyone else was also trying to save the world? The emotional response I had was sort of a hollow feeling in the pit of my stomach, like I was a really mediocre being. This obviously wasn't a result of calculating that the marginal utility of my actions would be a whole lot lower in this hypothetical world (and so I should go do something else); instead, it was the fact that me trying to save the world didn't make me special any more -- I wouldn't stand out, in this sort of world.
(Epilogue: I decided that I hadn't done a good enough job programming my brain and am attempting to modify my utility function to rely on the world actually getting saved.)
Discussion: What other hypotheticals are useful? |
8d1e1517-238f-4143-bc19-6f861b652f09 | StampyAI/alignment-research-dataset/special_docs | Other | Bounded Rationality in Las Vegas: Probabilistic Finite Automata Play Multi-Armed Bandits.
Bounded Rationality in Las Vegas: Probabilistic Finite Automata Play Multi-Armed Bandits

Xinming Liu
Computer Science Dept.
Cornell University
Ithaca, NY 14853
xl379@cornell.edu

Joseph Y. Halpern
Computer Science Dept.
Cornell University, 414 Gates Hall
Ithaca, NY 14853
halpern@cs.cornell.edu
Abstract

While traditional economics assumes that humans are fully rational agents who always maximize their expected utility, in practice, we constantly observe apparently irrational behavior. One explanation is that people have limited computational power, so that they are, quite rationally, making the best decisions they can, given their computational limitations. To test this hypothesis, we consider the multi-armed bandit (MAB) problem. We examine a simple strategy for playing an MAB that can be implemented easily by a probabilistic finite automaton (PFA). Roughly speaking, the PFA sets certain expectations, and plays an arm as long as it meets them. If the PFA has sufficiently many states, it performs near-optimally. Its performance degrades gracefully as the number of states decreases. Moreover, the PFA acts in a "human-like" way, exhibiting a number of standard human biases, like an optimism bias and a negativity bias.
1 INTRODUCTION

Behavioral economists have argued for years that the traditional model of homo economicus—an agent who is always rational and behaves optimally—is misguided. There is a lot of experimental work backing up their claims (see, e.g., (Thaler 2015)). Recent work has argued that perhaps the behavior that we observe can best be explained by thinking of agents as rational (i.e., trying to behave optimally), but not able to due to computational limitations; that is, they are doing the best they can, given their computational limitations.
In this paper, following a tradition that goes back to Rubinstein (1986) and Neyman (1985), we model computationally bounded agents as probabilistic finite automata (PFAs). We can think of the number of states of the automaton as a proxy for how computationally bounded the agent is. Neyman (1985) showed that cooperation can arise if PFAs play a finitely-repeated prisoner's dilemma; work on this topic has continued to attract attention (see Papadimitriou and Yannakakis (1994) and the references therein). Wilson (2015) considered a decision problem where an agent must decide whether nature is in state 0 or state 1, after getting signals that are correlated with nature's state. She characterized an optimal n-state PFA for making this decision, and showed that it exhibited "human-like" behavior; specifically, it ignored evidence (something a Bayesian would never do), and exhibited what could be viewed as a first-impression bias and confirmation bias. Halpern, Pass, and Seeman (2012) considered a similar problem in a dynamic setting, where the state of nature could change (slowly) over time. Again, they showed that a simple PFA both performed well and exhibited the kind of behavior humans exhibited in games studied by Erev, Ert, and Roth (2010).
We continue this line of work, and try to understand the
behavior of computationally bounded agents playing a
multi-armed bandit (e.g., playing slot machines in Las
Vegas). Our first step in doing this is to understand the
extent to which optimal play can be approximated by a
PFA without worrying about the number of states used.
There are a number of notions of optimal that we could
consider. Here we focus on arguably the simplest one:
we compare the expected average payoff of the automa-
ton after it runs for Nsteps to the expected average pay-
off of always pulling the optimal arm of the bandit. We
also assume that the possible payoff of each arm is either
1 or 0 (i.e. success or failure), so the expected payoff of
an arm is just the probability of getting a 1.
There are well-known protocols that use Bayesian meth-
ods (e.g., Thompson Sampling (Thompson 1933)) that
approach optimal play in the limit; however, these ap-
proaches are computationally expensive. We show that
they have to be. No approach that can be implemented
by a PFA can perform optimally. Indeed, for all PFAs,
there exists an ε > 0 such that as the number of steps gets large, the ratio of the expected payoff of the automaton to the expected payoff of the optimal arm is at most 1 − ε. That is, a PFA must be off by some ε > 0 from optimal play (although we can make ε as small as we like by allowing sufficiently many states). Among families of
finite automata that have near-optimal payoff, we are in-
terested in ones that (a) make efficient use of their states
(so, for a fixed number Mof states, have high expected
payoff), (b) converge to near-optimal behavior quickly,
and (c) use simple “human-like” heuristics.
A standard approach to dealing with multi-armed bandit
problem is one we call explore-then-exploit . We simply
test each arm Ntimes (where Nis a parameter), and
from then on play the best arm (i.e., the one with the
highest average reward). If the bandit has Karms, then
we need roughly O(NKlog(N) log(K))states, since we
need to keep track of the possible tuples of outcomes of
the tests, as well as two counters, one to keep track of
which arm is being tested, and the other to keep track of
how many times we have played it.
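To make the bookkeeping concrete, here is a minimal Python sketch of explore-then-exploit as described above; the arm probabilities, the function name, and the parameter values are illustrative assumptions, not taken from the paper.

```python
import random

def explore_then_exploit(success_probs, n_test, horizon, seed=0):
    """Test each arm n_test times, then always play the empirically best arm."""
    rng = random.Random(seed)
    pull = lambda i: 1 if rng.random() < success_probs[i] else 0
    k = len(success_probs)
    successes = [0] * k
    rewards = []
    # Exploration phase: play each arm n_test times and record its successes.
    for arm in range(k):
        for _ in range(n_test):
            r = pull(arm)
            successes[arm] += r
            rewards.append(r)
    # Exploitation phase: commit to the arm with the best empirical record.
    best = max(range(k), key=lambda arm: successes[arm])
    while len(rewards) < horizon:
        rewards.append(pull(best))
    return sum(rewards) / len(rewards)

# Average reward per step over 10,000 steps on a 5-armed bandit.
print(explore_then_exploit([0.2, 0.35, 0.5, 0.1, 0.45], n_test=100, horizon=10_000))
```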
We can greatly reduce the number of states by essentially
using an elimination tournament. We first compare arm
1 against arm 2, eliminate the worse arm, run the winner
against arm 3, eliminate the worse arm, run the winner
against arm 4, and so on. The way we compare arm
iandjis straightforward: we alternate playing iand
jand use a counter to keep track of the relative num-
ber of successes of i. If the counter hits an appropriate
threshold M (so that i has had M more successes than j), i is the winner; if the counter hits −M, then j is the winner. To do this, we need (K choose 2) · 2 · (2M+1) ≈ 2K²M
states: we need to keep track of which arms are being
played, which arm is currently moving, and the counter.
In choosing M, we need to balance out the desire not to
mistakenly eliminate a good arm (which is more likely
to happen the smaller that Mis) with the desire not to
“waste” too much time in finding the right arm (since
the payoff while we are doing that may not be so high,
particularly if we are playing two arms whose success
probabilities are equal but not very high). We deal with
this by stopping a comparison after an expected number
Nof steps. (We implement this by stopping the compar-
ison with probability 1=N, which does not require any
extra states.) As we shall see, this approach, which we
call the elimination tournament , does extremely well.
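A minimal Python sketch of the elimination tournament, under our reading of the description above; here each round of a comparison pulls both arms once rather than strictly alternating, and the current leader is kept when the comparison is stopped early. These details, like the parameter values, are assumptions for illustration.

```python
import random

def duel(i, j, M, N, pull, rng):
    """Compare arms i and j: return the arm that first gets M more successes
    than the other, stopping early with probability 1/N per round."""
    counter = 0  # (successes of i) - (successes of j)
    while abs(counter) < M:
        counter += pull(i) - pull(j)
        if rng.random() < 1.0 / N:     # stop the comparison early
            break
    return i if counter >= 0 else j

def elimination_tournament(success_probs, M=20, N=1000, seed=0):
    rng = random.Random(seed)
    pull = lambda a: 1 if rng.random() < success_probs[a] else 0
    winner = 0
    for challenger in range(1, len(success_probs)):
        winner = duel(winner, challenger, M, N, pull, rng)
    return winner  # the arm played from then on

print(elimination_tournament([0.2, 0.35, 0.5, 0.1, 0.45]))
```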
The ε-greedy protocol is a slight variant of this approach: again, we test for the first N steps, and then play the current best arm with probability 1 − ε and a random arm with probability ε. But this requires infinitely many states, since we must keep track of the fraction of successes for all arms to determine the current best arm.
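For comparison, a minimal sketch of the ε-greedy protocol just described; the exploration schedule and parameter values are illustrative assumptions.

```python
import random

def epsilon_greedy(success_probs, n_test, epsilon, horizon, seed=0):
    rng = random.Random(seed)
    k = len(success_probs)
    pull = lambda i: 1 if rng.random() < success_probs[i] else 0
    counts, successes = [0] * k, [0] * k   # unbounded statistics: needs infinitely many states
    total = 0
    for t in range(horizon):
        if t < n_test * k:                  # initial testing phase
            arm = t % k
        elif rng.random() < epsilon:        # explore a random arm
            arm = rng.randrange(k)
        else:                               # exploit the current empirical best
            arm = max(range(k),
                      key=lambda i: successes[i] / counts[i] if counts[i] else 0.0)
        r = pull(arm)
        counts[arm] += 1
        successes[arm] += r
        total += r
    return total / horizon

print(epsilon_greedy([0.2, 0.35, 0.5, 0.1, 0.45], n_test=20, epsilon=0.1, horizon=10_000))
```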
Clearly neither approach is optimal. With positive prob-
ability, both explore-then-exploit and the elimination-
tournament protocols will choose a non-optimal arm;
from then on it is not getting the optimal reward.
The ε-greedy protocol gets a non-optimal reward with (roughly) probability ε. While we can make all these approaches arbitrarily close to optimal by choosing the parameters N, ε, and M appropriately, they do not satisfy our third criterion: they don't seem to be what people are doing. The ε-greedy approach and explore-then-exploit
require an agent to keep track of large amounts of in-
formation, while the elimination tournament alternates
between arms at every step, which may have nontrivial
costs. (Imagine a gambler in Las Vegas who wants to
compare two arms that are at opposite ends of a large
room. Will he really walk back and forth?)
We instead consider an approach that takes as its starting
point earlier work by Rao (2017), who considered only
two-armed bandits, where, just as for us, each arm has
a payoff in {0, 1}. She defined a family of PFAs that
act like “approximate Bayesians”. More precisely, each
arm has an associated rank that represents a coarse esti-
mate of the arm’s payoff probability. Rao plays the arms
repeatedly (using complicated rules to determine which
arm to play next) in order to estimate the success proba-
bility of each arm, and then chooses the best arm.
While we use ranks, we use them in a very different way
from Rao. We take as our inspiration Simon’s notion of
satisficing (Simon 1956). The idea is that an arm will be
accepted if its success probability is above some thresh-
old. In the words of Gigerenzer and Gaissmaier (2015):
“Set an aspiration level, search through alternatives se-
quentially, and stop search as soon as an alternative is
found that satisfies the level.” (We remark that the im-
portance of the aspiration level goes back to the 1930s in
the psychology literature, and has been studied at length
since then; see, e.g., the highly-cited work of Lewin et
al. (1944).) But how do we determine the aspiration
level? This is a nontrivial issue. Selten (1998) and Si-
mon (1982) (both Nobel prize winners) discuss this is-
sue at length. As Gigerenzer and Gaissmaier (2015) ob-
serve, “The aspiration level need not be fixed, but can
be dynamically adjusted to feedback.” In our setting, it
is relatively straightforward: we use an optimism bias
(Sharot 2011). We start with a high aspiration level (suc-
cess probability) p, and run a tournament as above be-
tween each arm kand a “virtual arm” that has success
probabilityp. Since this is a virtual arm, we are essen-
tially comparing the performance of each arm kto our
expectation. If arm kdoes not meet our expectation, then
we go to the next arm. If no arm meets our expectation,
we adjust the aspiration level according to this feedback,
by lowering it. This requires K·M·m states, where M, as before, bounds the counter used to keep track of the relative performance of the arm being tested, and m is the number of ranks. We call this the aspiration-level approach.
We get good performance by taking m proportional to K, so the aspiration-level approach uses essentially the same number of states as the elimination tournament. Moreover,
as we show by simulation, its performance approach de-
grades gracefully as the number of states decreases. Even
with relatively few states, it compares quite favorably to
the-greedy approach and to Thompson Sampling, al-
though they require infinitely many states. More impor-
tantly from our perspective, the aspiration-level approach
is quite human-like. We have already mentioned how it
incorporates satisficing, the adjustment of expectations
according to feedback, and an optimism bias. But there
is more. Whereas the elimination-tournament approach
treats the two arms that it is comparing symmetrically,
the aspiration-level approach does not. If the virtual arm
wins, it just means that we try another arm. Moreover,
especially initially, we expect the virtual arm to win be-
cause we start out with a high aspiration level. On the
other hand, if an actual arm wins, that is the arm we use
from then on. Thus, we want to be relatively quick to
reject an arm, and slow to accept. This can be viewed
as a negativity bias (Kanouse and Hanson 1972): neg-
ative outcomes have a greater effect than positive out-
comes. The focus on recent behavior can be viewed
as implementing an availability heuristic (Tversky and
Kahneman 1973): people tend to heavily weight their
judgments toward more recent or available information.
Finally, a short run of good luck can have a significant
influence, causing an arm to be played for a long time
(or even played forever, if it is enough to get it accepted).
People are well-known to label some arms as “lucky” and
keep playing them long after the evidence has indicated
otherwise. This can also be viewed as an instance of the
status quo bias (Samuelson and Zeckhauser 1998): peo-
ple are much more likely to stick with the current state of
affairs (provided they think it is reasonably good).
2 MULTI-ARMED BANDITS
This section provides the necessary background for the
rest of the paper. In particular, we (1) briefly review
multi-armed bandits, (2) define the notion of optimality
we consider, and (3) prove that a PFA cannot be optimal.
2.1 THE MULTI-ARMED BANDIT PROBLEM
The multi-armed bandit (MAB) problem is a standard
way of modeling the tradeoff between exploitation and
exploration. An agent has Karms that she can pull. Each
arm offers a set of possible rewards, each obtained with
some probability. The agent does not know the probabil-
ities in advance, but can learn them by playing the arm
sufficiently often. Formally, a K-armed bandit is a tuple B = (R_1, ..., R_K), where R_k is a distribution over rewards for arm k. Let μ_k be the expected reward of arm k, for k = 1, ..., K. The best expected reward of B is denoted μ*_B = max_k {μ_k}.
We assume for simplicity in this paper that the possible rewards of an arm are either 0 or 1. With this assumption, μ_k is the probability of getting a 1 with arm k. We can easily modify the protocol to deal with a finite set of possible rewards, as long as the set of possible rewards is known in advance. We also assume for now that the distributions R_k do not vary over time.
2.2 OPTIMAL PROTOCOLS FOR MAB
PROBLEMS
We are interested in protocols that play MABs (almost)
optimally. Formally, a protocol is a (possibly random-
ized) function from history to actions. We focus on
one particular simple notion of optimality here, which
informally amounts to approaching the average reward
of the best arm. To make this precise, given a protocol P, let a^{P,B}_t be a random variable that denotes the arm played by protocol P at the t-th step. Thus, μ_{a^{P,B}_t} is the expected reward of arm a^{P,B}_t. It is easy to see that the expected cumulative reward of protocol P when run for N steps on MAB B is Cum(P, B, N) = Σ_{t=1}^{N} E[μ_{a^{P,B}_t}]. Since the reward for playing the optimal arm of MAB B for N steps is N·μ*_B, the expected regret is the difference between the cumulative reward of P and the optimal reward: Reg(P, B, N) = N·μ*_B − Cum(P, B, N). Finally, the average N-step regret of P on B is AReg(P, B, N) = Reg(P, B, N)/N. We say that P is optimal if lim_{N→∞} AReg(P, B, N) = 0 for all MABs B.
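These definitions are easy to check by simulation. The sketch below estimates AReg(P, B, N) by Monte Carlo, averaging realized rewards (whose expectation matches the definition above); the protocol interface—a function from the observation history to an arm—is an assumption made for illustration.

```python
import random

def average_regret(protocol, success_probs, n_steps, n_runs=100, seed=0):
    """Monte Carlo estimate of AReg(P, B, N) = (N * mu*_B - Cum(P, B, N)) / N."""
    rng = random.Random(seed)
    mu_star = max(success_probs)
    total_regret = 0.0
    for _ in range(n_runs):
        history = []                         # list of (arm, reward) pairs
        cumulative = 0.0
        for _ in range(n_steps):
            arm = protocol(history, len(success_probs), rng)
            reward = 1 if rng.random() < success_probs[arm] else 0
            history.append((arm, reward))
            cumulative += reward
        total_regret += (n_steps * mu_star - cumulative) / n_steps
    return total_regret / n_runs

# A trivially bad protocol that always plays arm 0, for illustration.
always_first = lambda history, k, rng: 0
print(average_regret(always_first, [0.2, 0.5, 0.35], n_steps=1000))
```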
As we observed in the introduction, neither explore-
then-exploit nor the ε-greedy protocol is optimal in this
sense. There are Bayesian approaches that are optimal.
We briefly discuss one: Thompson Sampling (Thomp-
son 1933). Roughly speaking, at each step, this proto-
col computes the probability of each arm being optimal,
given the observations. It then chooses arm kwith a
probability proportional to its current estimate that kis
the optimal arm. It is not hard to show that, with proba-
bility 1, the probability of a non-optimal arm being cho-
sen goes to 0. (By way of contrast, the probability of a
non-optimal arm being chosen at any given step with the
ε-greedy protocol is a constant: at least ε(K−1)/K, if there are K arms.)
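For Bernoulli arms, Thompson Sampling has a standard implementation using Beta posteriors: sampling a success probability for each arm from its posterior and playing the arm with the largest sample chooses each arm with exactly its posterior probability of being optimal. The sketch below is that textbook version; the parameter values are illustrative.

```python
import random

def thompson_sampling(success_probs, horizon, seed=0):
    rng = random.Random(seed)
    k = len(success_probs)
    alpha, beta = [1] * k, [1] * k   # Beta(1,1) prior on each arm's success probability
    total = 0
    for _ in range(horizon):
        # Sample a success probability for each arm from its posterior
        # and play the arm with the highest sample.
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])
        reward = 1 if rng.random() < success_probs[arm] else 0
        alpha[arm] += reward
        beta[arm] += 1 - reward
        total += reward
    return total / horizon

print(thompson_sampling([0.2, 0.35, 0.5, 0.1, 0.45], horizon=10_000))
```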
As shown by Kaufmann, Korda, and Munos (2012), Thompson Sampling is optimal in an even stronger sense than what we have considered so far. Taking TS to denote Thompson Sampling, not only do we have lim_{N→∞} Reg(TS, B, N)/N = 0, but there is a constant c_B (that depends on the MAB B, but has been completely characterized) such that lim_{N→∞} Reg(TS, B, N)/log(N) = c_B. Moreover, this is optimal; as shown by Lai and Robbins (1985), for all protocols P satisfying a minimal technical condition, we must have lim_{N→∞} Reg(P, B, N)/log(N) ≥ c_B. That means that Thompson Sampling approaches optimal behavior as quickly as possible, and its cumulative regret grows only logarithmically.
will be comparing the performance of our approach to
that of Thompson Sampling later.
2.3 PROBABILISTIC FINITE AUTOMATA AND
NON-OPTIMALITY
As we said in the introduction, we are interested in
resource-bounded agents playing MABs, and we model
resource-boundedness using PFAs. A PFA is just like a
deterministic finite automaton, except that the transitions
are probabilistic. We also want our automata to produce
an output (an arm to pull, or no arm), rather than accept-
ing a language, so, technically, we are looking at what
have been called probabilistic finite automata with out-
putorprobabilistic transducers . (This is also the case for
all the earlier papers that considered PFAs playing games
or making decisions, such as (Halpern, Pass, and Seeman
2012; Papadimitriou and Yannakakis 1994; Rubinstein
1986; Wilson 2015).) Formally, a PFA with output is a tuple (Q, q_0, Σ, O, γ, δ), where
- Q is a finite set of states;
- q_0 ∈ Q is the initial state;
- Σ is the input alphabet (in our case this will consist of the observations “arm k had reward j” for j ∈ {0, 1});
- O is the output alphabet (in our case this will be “k”, which is interpreted as playing arm k, for k ∈ {1, ..., K});
- γ : Q → Δ(O) is a probabilistic action function (as usual, Δ(X) denotes the set of probability distributions on X);
- δ : Q × Σ → Δ(Q) is a probabilistic transition function.
Intuitively, the automaton starts in state q_0 and plays an arm according to distribution γ(q_0). It then observes the outcome o of pulling the arm (an element of Σ) and then transitions to a state q' (according to δ(q_0, o)). It then plays arm γ(q'), and so on.
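A PFA with output can be represented directly in code. The sketch below simulates such an automaton on a Bernoulli bandit; the dictionary-based encoding of γ and δ and the example two-state automaton are illustrative assumptions, not taken from the paper.

```python
import random

def run_pfa(q0, gamma, delta, success_probs, n_steps, seed=0):
    """Simulate a PFA with output on a Bernoulli bandit.

    gamma[q] is a list of (arm, prob) pairs: the action distribution at state q.
    delta[(q, obs)] is a list of (state, prob) pairs: the transition distribution,
    where obs = (arm, reward) is the observation just made.
    """
    rng = random.Random(seed)

    def sample(dist):
        r, acc = rng.random(), 0.0
        for outcome, p in dist:
            acc += p
            if r < acc:
                return outcome
        return dist[-1][0]

    q, total = q0, 0
    for _ in range(n_steps):
        arm = sample(gamma[q])
        reward = 1 if rng.random() < success_probs[arm] else 0
        total += reward
        q = sample(delta[(q, (arm, reward))])
    return total / n_steps

# A two-state automaton: play arm 0 until it fails once, then play arm 1 forever.
gamma = {0: [(0, 1.0)], 1: [(1, 1.0)]}
delta = {(0, (0, 1)): [(0, 1.0)], (0, (0, 0)): [(1, 1.0)],
         (1, (1, 1)): [(1, 1.0)], (1, (1, 0)): [(1, 1.0)]}
print(run_pfa(0, gamma, delta, [0.9, 0.5], n_steps=10_000))
```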
It is easy to see that the explore-then-exploit protocol
can be implemented by a finite automaton. On the other
hand, the ε-greedy protocol and Thompson Sampling
cannot. That is because they keep track of the total num-
ber of times each arm kwas played, and the fraction of
those times that a reward of 1 was obtained with k. This
requires infinitely many states.
We claim that no protocol implemented by a PFA can be
optimal. To prove this, we need some definitions.
Definition 2.1. A K-arm MAB B = (μ_1, ..., μ_K) is generic if (1) μ*_B < 1, (2) μ_i ≠ μ_j for i ≠ j, and (3) if K = 2, then min(μ_1, μ_2) > 0.
Note that if we put the obvious uniform distribution on the set of K-armed bandits (identifying a K-armed bandit with a K-vector of real numbers), then the set of generic MABs has probability 1.
Definition 2.2. B' = (μ'_1, ..., μ'_K) is a permutation of B = (μ_1, ..., μ_K) if there is some permutation π of the indices such that μ_k = μ'_{π(k)}.
Theorem 2.1. For all PFAs M and all generic MABs B, there exists some ε_{M,B} > 0 (that, as the notation suggests, depends on both M and B) and an MAB B' that is a permutation of B such that lim_{N→∞} Reg(M, B', N)/N ≥ ε_{M,B}.
Before giving the proof, we can explain why we must consider generic MABs and permutations. To understand why we consider permutations, suppose that M always plays arm 1. If it so happens that arm 1 is the best arm for B, then M gets the optimal reward with input B. But it will not get the optimal reward for a permutation of B for which arm 1 is not the best arm. It is not hard to see that if μ*_B = 1, then there exists a PFA M that gets the optimal reward given input B or any of its permutations: M just plays an arm until it does not get a payoff of 1, then goes on to the next arm. Sooner or later M will play an arm that always gets a reward of 1. A similar PFA also gets the optimal reward if K = 2 given an input B = (μ_1, μ_2) such that μ_k = 0 for some arm k: it alternates between the arms until it finds an arm that gives reward 1, and sticks with that arm. Finally, if μ_1 = ... = μ_K, then no matter what arm M plays, it will get the optimal reward on B and all of its permutations. The requirement that all the μ_k's are distinct is actually stronger than we need, but since slight perturbations of the rewards of an arm suffice to make all rewards distinct, we use it here for simplicity.
Proof. Given a PFA M and a nontrivial MAB B, there are two possibilities: (1) there is some state q that can be reached from the start state q_0 with positive probability and an arm k such that, after reaching state q, M plays arm k from then on, no matter what it observes; (2) there is no such state q. Note that the first case is what happens with explore-then-exploit. After the exploration phase, the same arm is played over and over. The second case is more like Thompson Sampling or ε-greedy; there is always some positive probability that a given arm k will be played.
For case (1), let o_1, ..., o_T be a sequence of observations that, with positive probability, leads M to a state q after which it always plays arm k. If the arm that M plays in state q is not the best arm of B, let ε = μ*_B − μ_k, and let p_{M,B} be the probability with which o_1, ..., o_T is observed when running M on input B. Clearly, lim_{N→∞} Reg(M, B, N)/N ≥ ε·p_{M,B}. And if μ_k = μ*_B, consider a permutation B' = (μ'_1, ..., μ'_K) such that μ'_j = 0 if and only if μ_j = 0 (i.e., the permutation is the identity on all arms j such that μ_j = 0) and such that μ'_k ≠ μ*_{B'} = μ*_B. It is still the case that o_1, ..., o_T can be observed with some positive probability p_{M,B'} when running M on input B'. Taking ε' = μ*_{B'} − μ'_k, we have lim_{N→∞} Reg(M, B', N)/N ≥ ε'·p_{M,B'}.
For case (2), no matter what state q M is in, with some probability α_q > 0, M plays a non-optimal arm at q or moves to another state q' and plays a non-optimal arm there. Let α_M = min_q α_q. Since M has only finitely many states, α_M > 0. Given as input an MAB B, let β_B be the difference between μ*_B and the probability that the second-best arm returns 1. (Here we are using the fact that all arms have different probabilities of returning 1.) Let X_{M,B,T} be a random variable that represents the reward received on the T-th step that M is run on input B. Our discussion shows that, for all T, we must have E(X_T + X_{T+1}) ≤ 2μ*_B − α_M·β_B, since with probability at least α_M, one of X_T or X_{T+1} has expected value at least β_B less than μ*_B. Since Reg(M, B, 2N) = 2N·μ*_B − Σ_{T=1}^{2N} E[X_{M,B,T}] ≥ 2N·μ*_B − N(2μ*_B − α_M·β_B), it follows that Reg(M, B, 2N)/2N ≥ α_M·β_B/2. This gives us the desired result.
3 AN ALMOST-OPTIMAL FAMILY OF
PFAS FOR MAB PROBLEMS
In this section, we introduce the aspiration-level proto-
col more formally. We start by reviewing Rao’s (2017)
approach to dealing with 2-armed bandits, since our ap-
proach uses some of the same ideas.
3.1 RAO’S APPROACH
With only finitely many states, a PFA cannot keep track
of the exact success rate of each arm in an MAB. Thus,
it needs to keep a finite representation of the success
rate. Rao’s idea was to use a finite set of possible
ranks to encode the agent’s belief about the relative
goodness of each arm. There are m possible ranks, {1, ..., m}, where m is a parameter of the protocol. Thus, Rao’s PFA has m² possible states, which have the form (r_1, r_2) (since Rao considers only 2-armed bandits), where r_1, r_2 ∈ {1, ..., m}.
Rao assumes that the initial state of the PFA has the form
(n, n) for some n ∈ {1, ..., m}; the exact choice does
not matter. Thus, initially, the two arms are assumed to
be equally good. Of course, if an agent has some prior
reason to believe that one arm is better than the other,
then the initial state can encode this belief.
The action function γ is defined as follows: If the higher-ranked arm has the highest possible rank (m) and the other arm does not, then the higher-ranked arm is played. Otherwise, similar in spirit to Thompson Sampling, the next arm to play is chosen according to a probability that depends on the difference between the ranks of the arms (|r_1 − r_2|) and how far the arms’ ranks are from average (|r_1 − m/2| + |r_2 − m/2|). The two numbers are then combined using two further parameters of the protocol. We refer the reader to (Rao 2017) for the technical detail and intuition.
Finally, the transition function is defined as follows: the
rank of the arm last played goes up with some probabil-
ity (if it is not already m) if a payoff of 1 is observed and
goes down with some probability (if it is not already 1) if
a payoff of 0 is observed. The rank of the arm not played does not change. The exact probability of a state change depends on a quantity that Rao calls the inertia, which is determined by the ranks of the arms, and two other parameters of the protocol. Intu-
itively, the inertia characterizes the resistance to a change
in rank. The less frequently an arm has been played, the
higher its associated inertia will be, so its rank is updated
with a lower probability. Again, we refer the reader to
(Rao 2017) for details.
3.2 THE ASPIRATION-LEVEL PROTOCOL
We want to define a family of PFAs for K-armed MABs.
We continue to use Rao’s idea of associating with each
arm a rank. The naive extension would thus require
O(m^K) states. For large K, this is quite unreasonable. So we assume that the PFA focuses on only one arm at a time, comparing it to a “virtual” arm whose success prob-
ability can be thought of as the agent’s aspiration level
(Lewin, Dembo, Festinger, and Sears 1944). The first
arm that meets the agent’s aspirations is the arm that is
played from then on. As we mentioned in the introduc-
tion, this can be viewed as satisficing (Simon 1956). Not
only does this approach use significantly fewer states, it
seems more like what people do.
Rao’s protocol has another feature that renders it an im-
plausible model of human behavior. It uses a number
of parameters (five in all, including m, C, and C_t) to trade off exploitation
and exploration; the best choice of parameter settings de-
pends on the application domain. Moreover, these pa-
rameters are combined in a nontrivial way (using, for ex-
ample, exponentiation). It is hard to believe that people
would take the trouble (or have enough experience) to
learn the appropriate parameter settings for a particular
domain, nor are they likely to be willing to do the com-
putations needed to use them.
We thus significantly simplify the action function and
transition function. As we said, we use the idea of a
tournament, but we play the current arm against a “virtual arm”, whose success probability is determined by the aspiration level, which is a rank. If there are m ranks, then a rank of r ∈ {1, ..., m} can be thought of as representing the interval of probability [(r−1)/m, r/m]. We thus take the success probability of a virtual arm with aspiration level r to be (r−.5)/m, the midpoint of the interval. We compare the current arm k to the virtual arm using a counter. Suppose that we get a success with arm k (i.e., 1 is observed). Since we expect the virtual arm to have a success with probability (r−.5)/m, we increase the counter by 1 with probability 1−(r−.5)/m (since this is the probability that the virtual arm had a failure, so that arm k had one more success than the virtual arm), and leave the counter unchanged with probability (r−.5)/m (since, with this probability, both the virtual arm and arm k had a success). Similarly, if there is a failure with arm k, we decrease the counter with probability (r−.5)/m and leave it unchanged with probability 1−(r−.5)/m.
We use two thresholds M_1 and M_2 to decide when to end the comparison. If the counter reaches M_1, then we declare the current arm k to have won the tournament; intuitively, its success probability is higher than that of the virtual arm. From then on we play arm k. If the counter reaches −M_2, then the virtual arm has won the comparison. We (temporarily) eliminate arm k, and compare the virtual arm to arm k+1 if k < K. We discuss what happens if k = K shortly, but
first note that there is no analogue to the parameter N
of the elimination-tournament protocol here. The con-
cern in the elimination-tournament protocol is that we
are comparing two arms iandjthat have roughly equal,
but not very good success probabilities. Then the tour-nament will go on for a long time, but not give a high
reward. With the aspiration-level protocol, if arm khas a
success probability that is essentially the same as that of
the virtual arm, although the comparison may go on for
a long time, the agent is getting a cumulative reward that
essentially matches expectations, so there is no pressure
to stop the comparison.
If k = K, then the virtual arm did better than all arms with this aspiration level. That means that our expectations are too high, so we lower the aspiration level from r to r−1, and retest all arms.
As discussed in the introduction, we do not assume that
M_1 = M_2. The implications of an arm winning the comparison against the virtual arm are much different than the implications of the virtual arm winning. In the former case, we play that arm from then on; in the latter case, we just continue looking for another (hopefully better) arm. Because the implications are so different, it turns out that we want to take M_1 significantly larger than M_2. (Our experiments suggest that M_1 = 20 and M_2 = 3 are good choices, along with m = 100; see Section 4.2.)
One other issue: if the actual best success probability
is low (say, .2) and there are 100 ranks, it will take a
long time before the aspiration level is set appropriately.
During this time, the cumulative regret is increasing. To
speed up the process of finding the “right” aspiration
level, we can do a quick preprocessing phase to find the
right range, and then explore more carefully. Specifi-
cally, if m = 100, in the preprocessing phase, when we reset the rank, we decrease it by 10 (in general, we decrease it by √m) rather than decreasing it by 1. We also use smaller values of M_1 and M_2 (say, M_1 = 5 and M_2 = 1 rather than M_1 = 20 and M_2 = 3). If an arm beats the virtual arm when the aspiration level is r = 60, we go back to the previous setting of aspiration level r = 70, and do a more careful search starting from there, now decreasing the aspiration level by 1, and using M_1 = 20 and M_2 = 3. This preprocessing
phase allows us to home in on the appropriate expecta-
tions quickly. Again, besides being more efficient, this
seems to be the type of thing that people do.
With this background, we are ready to de-
fine our family of PFAs. For ease of presen-
tation, we do not use a preprocessing phase.
Formally, we have a family M_{K,m,M_1,M_2} = (Q_{K,m,M_1,M_2}, q_0, Σ_K, O_K, γ_K, δ_{K,m,M_1,M_2}) of PFAs, indexed by 4 parameters: K is the total number of arms, m is the number of possible ranks for each arm, and M_1 and M_2 are the upper and lower thresholds for the counter. We assume that K is given as part of the input; we discuss how m, M_1, and M_2 are chosen in the next section. Not only do we have fewer parameters than Rao, as we shall see, they are easier to set (and easier to explain and understand). In more detail, the components of the tuple are as follows:
- A state q ∈ Q_{K,m,M_1,M_2} has the form (r, k, c), where 1 ≤ r ≤ m, 1 ≤ k ≤ K, and −M_2 < c ≤ M_1. Intuitively, a state (r, k, c) says that the current aspiration level is r, we are testing arm k, and the counter that keeps track of the relative success rate of arm k compared to the virtual arm is at c.
- We take the initial state q_0 to be (m, 1, 0): we start by setting the aspiration level to m (the highest level possible), testing arm 1, and have the counter at 0.
- Σ_K consists of observations of the form (k, h), where k ∈ {1, ..., K} and h ∈ {0, 1}. We observe the outcome of playing arm k, which is a reward of either 0 or 1.
- O_K = {1, ..., K}: we can play any arm.
- The action function γ_K at a state (r, k, c) plays arm k.
- The transition function δ_{K,m,M_1,M_2} proceeds as follows. In state (r, k, c), if c = M_1, the state does not change. (We have chosen k as the arm to play from then on.) If c < M_1, given an observation (k, h), if h = 1 (a success was observed), the new state is (r, k, c'), where c' = c+1 with probability 1−(r−.5)/m, and otherwise c' = c. If h = 0 and c > −M_2+1, then the new state is (r, k, c'), where c' = c−1 with probability (r−.5)/m, and otherwise the state is unchanged. If c = −M_2+1, then with probability (r−.5)/m, the new state is (r, k+1, 0) if k < K (we move on to test the next arm) and (r−1, 1, 0) if k = K (the aspiration level is lowered and we start over comparing the virtual arm to all the arms, starting with arm 1); otherwise the state is unchanged.
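The transition function above translates almost line-for-line into code. The following Python sketch simulates the aspiration-level protocol (without the preprocessing phase); it follows our reading of the definition and should be treated as an illustration, not the authors' reference implementation.

```python
import random

def aspiration_level(success_probs, m=100, M1=20, M2=3, horizon=50_000, seed=0):
    """Sketch of the aspiration-level protocol (no preprocessing phase)."""
    rng = random.Random(seed)
    K = len(success_probs)
    r, k, c = m, 1, 0        # state (r, k, c): aspiration level, arm under test, counter
    total = 0
    for _ in range(horizon):
        reward = 1 if rng.random() < success_probs[k - 1] else 0   # play arm k (1-indexed)
        total += reward
        if c == M1:
            continue                             # arm k has been accepted; keep playing it
        p_virtual = (r - 0.5) / m                # success probability of the virtual arm
        if reward == 1:
            if rng.random() < 1 - p_virtual:     # virtual arm "failed": arm k pulls ahead
                c += 1
        elif rng.random() < p_virtual:           # virtual arm "succeeded": arm k falls behind
            if c > -M2 + 1:
                c -= 1
            elif k < K:                          # arm k loses; test the next arm
                k, c = k + 1, 0
            else:                                # every arm lost: lower the aspiration level
                r, k, c = max(r - 1, 1), 1, 0    # (guard keeps r >= 1 in the sketch)
    return total / horizon

print(aspiration_level([0.2, 0.35, 0.5, 0.1, 0.45]))
```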
4 EXPERIMENTS
4.1 PERFORMANCE METRICS
We use simulations to test the performance of various protocols. In the simulations, we consider an MAB B with K arms, whose true success probabilities are uniformly distributed in [0, θ], where θ is a random number in [0, 1]. If we had just assumed that the success probabilities were uniformly distributed in [0, 1], then the probability of there being an arm in the [0.9, 1] interval is 1 − 0.9^K, which is approximately 0.995 for K = 50. Indeed, the probability of there being an arm in the interval [.99, 1] is about 0.4. Not only does this seem unreasonable in practice, this assumption would make it too easy to set the right aspiration level in our approach (i.e., it would hide some real-world difficulty). The assumption that the success probabilities are bounded by θ for a randomly-chosen θ seems more reasonable. While assuming that the success probabilities are uniformly distributed in [0, θ] may not be so reasonable, our results remain essentially unchanged even if the success probabilities are chosen adversarially, and the uniform distribution is much easier to generate.
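A sketch of how such an MAB can be generated for the simulations (the function name and use of Python's standard random module are our own choices):

```python
import random

def random_mab(K, seed=0):
    """Draw a K-armed Bernoulli bandit: success probabilities uniform in [0, theta]
    for a uniformly random cap theta in [0, 1]."""
    rng = random.Random(seed)
    theta = rng.random()
    return [rng.uniform(0, theta) for _ in range(K)]

probs = random_mab(50)
print(round(max(probs), 3), round(sum(probs) / len(probs), 3))
print(1 - 0.9 ** 50)  # ~0.995: chance of an arm above 0.9 if probabilities were uniform in [0, 1]
```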
We focus on two metrics when it comes to measuring the
performance of a protocol: (1) the expected cumulative
regret of a protocol Pas a function of the number of steps
played (which roughly depends on how long it takes to
find the best arm) and (2) the expected average regret in
the limit (i.e., lim_{N→∞} AReg(P, B, N)), which essen-
tially measures the gap between the success probability
of the arm chosen by protocol Pand the success proba-
bility of the optimal arm of B. We take the expectation
over MABsBgenerated as discussed above. Essentially,
we want a protocol that gets to the best arm quickly and
accurately.
4.2 PARAMETER SETTINGS IN THE
ASPIRATION-LEVEL PROTOCOL
There are three parameter settings for the aspiration-level
protocol: the number of ranks m, and the thresholds M1
andM2for winning and losing a comparison against the
virtual arm. We examine the effect of different choices
here.
The largermis, the finer distinctions we will be able
to make between the arms that we are testing. Roughly
speaking, if the virtual arm has rank r and the virtual arm performed better than all arms when the aspiration level was r+1, we would expect that all arms have success probability less than (r+.5)/m, and that an arm with success probability greater than (r−.5)/m will beat the virtual arm. However, this arm can have probability as much as 1/m less than the arm with highest success probability. By taking m larger, we thus minimize the
expected gap between the success probability of the arm
chosen and the best arm.
We consider an MAB BwithK= 50 arms and run sim-
ulations. As expected, the larger mis, the smaller the
gap, but there are diminishing returns. Figure 1 shows
that, with the other parameters fixed (M_1 = 20, M_2 = 3), there is significant improvement in going from m = 50 to m = 100, but the marginal improvement drops off quickly. There is no significant difference between m = 100 and larger values such as m = 200 or m = 500.
The corresponding gaps between the success probability
of the arm chosen and the success probability of the op-
timal arm of B, averaged over 100 repetitions, are 0.020,
0.007, 0.0068, 0.0065, respectively. In addition, since
we start optimistically by initializing the aspiration level
at the highest possible rank, when mis larger, it takes
longer to get the right aspiration level and hence the cu-
mulative regret is larger, as shown in Figure 1. Consid-
ering both performance metrics as mentioned above, we
choosem= 100 in the later simulations.
Figure 1: Cumulative regret for different m.
Once we fix m, we now examine the choices of M1and
M2. The parameters M1andM2determine the condi-
tions of winning and losing: if the counter gets to M_1, then the current arm beats the “virtual arm” and is therefore chosen as the best arm; if the counter gets to −M_2, then
the current arm loses the tournament with the “virtual
arm” and we move to a new arm. If all Karms lose the
tournament, we decrease the aspiration level by 1 and
restart the tournament. We want it to be easier for the
“virtual arm” to win, since the consequences are lower
in that case (the protocol ends if we declare arm ia
winner, whereas we keep going if the “virtual arm” is
a winner). Therefore, it makes sense to have an asym-
metry and choose M1greater than M2. We again con-
sider an MAB BwithK= 50 arms andm= 100
fixed. As shown in Figure 2, the cumulative regret in-
creases asM1andM2get larger. However, the gap be-
tween the success probability of the arm chosen by pro-
tocolPand the success probability of the optimal arm
ofB, decreases. The corresponding gaps, averaged over
100 repetitions, are 0.014, 0.007, 0.005, 0.004, respec-
tively. Since the number of states in the aspiration-level
protocol is K·m·(M_1+M_2), there is a tradeoff between
accuracy and the number of states required. Taking into
account state-efficiency, accuracy, and the expected cu-
mulative regret, we choose M1= 20 andM2= 3.
Both Figure 1 and Figure 2 show that the performance of
the aspiration-level protocol degrades quite gracefully as
we take smaller values of m,M1, andM2(which is how
we would have to deal with having fewer states).
Figure 2: Cumulative regret for different M1andM2.
4.3 PARAMETER SETTINGS FOR THE
ELIMINATION TOURNAMENT
The elimination-tournament protocol has two parame-
ters: M (the point at which an arm is declared a winner in the two-way comparison) and N (recall that 1/N is the per-step probability that a winner is declared in the two-way comparison even if neither arm is dominant, i.e., has had M more successes than the other). Thus, after an expected number of at most N(K−1) steps, the elimination-tournament proto-
col has reduced to one arm. We clearly want MandNto
be large enough to give the protocol time to select a rel-
atively good arm. However, we don’t want to stick with
bad arms for too long, since this will lead to larger cumu-
lative regret. Figure 3 shows the cumulative regrets for
different choices of N and M, for an MAB with K = 50 arms. For the choices of (N, M) considered—(1000, 10), (1000, 20), (1000, 100), (100, 10), (100, 20)—the gaps, averaged over 100 repetitions, are 0.01, 0.007, 0.006, 0.03, 0.03, respectively. Both N = 1000, M = 20 and N = 1000, M = 100 give similarly good performance in terms of the expected average regret, but the latter leads to larger expected cumulative regret. Therefore, for K = 50, we choose N = 1000 and M = 20.
Figure 3: Cumulative regret for different NandM.
4.4 COMPARING PROTOCOLS
Based on the simulations above, to minimize the number
of states used while maintaining relatively good perfor-
mance, for K = 50, we choose the parameters m = 100, M_1 = 20, M_2 = 3 for the aspiration-level protocol and M = 20, N = 1000 for the elimination-tournament protocol, and compare these two finite-state protocols to the ε-greedy protocol and Thompson Sampling, which are infinite-state protocols. With these choices, the aspiration-level protocol uses 115,000 states, while the elimination-tournament protocol uses just over 100,000. While this may seem to be a lot of states,
they can be encoded using 17 bits. Given the number of
neurons in a human brain, this should not be a problem.
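The state counts can be checked directly from the formulas given earlier; a quick sketch of the arithmetic:

```python
from math import comb, ceil, log2

K, m, M1, M2, M = 50, 100, 20, 3, 20
aspiration_states = K * m * (M1 + M2)              # 50 * 100 * 23 = 115,000
tournament_states = comb(K, 2) * 2 * (2 * M + 1)   # ~2 K^2 M = 100,450
print(aspiration_states, tournament_states, ceil(log2(aspiration_states)))  # 115000 100450 17
```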
We can greatly reduce the cumulative regret for the
aspiration-level protocol by a preprocessing phase, as
suggested earlier. For K= 50 arms and the aspiration-
level protocol with m = 100, M_1 = 20, M_2 = 3, we first use a preprocessing phase to get a rough idea of what the true highest success probability might be. We use the parameters suggested earlier, decreasing the aspiration level by 10 after testing all the arms in the preprocessing, and use thresholds M'_1 = 5 and M'_2 = 1. We use
this two-phase approach for the aspiration-level protocol
in the following simulation.
We consider MABs with K = 50 arms, and see how the elimination-tournament protocol, the aspiration-level protocol, ε-greedy, and Thompson Sampling perform. Not surprisingly, Thompson Sampling performs best, and has logarithmic cumulative regret, whereas the other three protocols have linear cumulative regret. After 50,000 steps, the expected differences between the success probability of the arm chosen and that of the optimal arm for these protocols are 0.007, 0.008, 0.025, 0.003, respectively. Interestingly, both the aspiration-level protocol and the elimination-tournament protocol eventually outperform ε-greedy, although the latter requires infinitely many states.
Figure 4: Cumulative regret over time.
5 DISCUSSION
We have introduced two finite-state protocols for playing
MABs, the aspiration-level protocol and the elimination-
tournament protocol. Both perform quite well in prac-
tice, while using relatively few states. In cases where
switching between arms incurs a significant cost, the
aspiration-level protocol is a better choice.
Recall that the main motivation for this study was under-
standing human behavior. The fact that the aspiration-
level protocol exhibits such human-like behavior, includ-
ing adjusting aspiration levels according to feedback, an
optimism bias, a negativity bias, and a status quo bias,
as well as a focus on recent behavior, suggests that hu-
mans are not being so irrational. Note that these biases
are emphasized if the number of states is decreased. For
example, if an agent decreases M2, the threshold for re-
jecting an arm in a two-way comparison with the virtual
arm, in response to having fewer states, this increases the
negativity bias. Decreasing M1increases the likelihood
that an agent will continue to play an apparently “lucky
arm”. The impact of decreasing Min the elimination-
tournament protocol is similar. The bottom line is that
these protocols exhibit apparently irrational behavior for
quite rational reasons! At the same time, they may be of
interest even for those not interested in modeling human
behavior, since they have quite good performance, even
with relatively few states.
We have focused here on a static setting, where the prob-
abilities do not change over time. We could easily mod-
ify our PFA to deal with the dynamic setting by simply
resetting the tournaments from time to time. More in-
terestingly, we would like to apply these ideas to a more
game-theoretic setting, such as the wildlife poaching set-
ting considered by Kar et al. (2015), where rangers are
trying to protect rhinos from poachers. We hope to re-
port on that in future work.
Acknowledgements
This research was supported by MURI (MultiUniversity
Research Initiative) under grant W911NF-19-1-0217, by
the ARO under grant W911NF-17-1-0592, by the NSF
under grants IIS-1703846 and IIS-1718108, and by a
grant from the Open Philosophy Foundation. We thank
Alice Chen for her preliminary work and comments on
an earlier version of the manuscript. We also thank four
anonymous reviewers for their feedback.
References
Erev, I., E. Ert, and A. E. Roth (2010). A choice
prediction competition for market entry games:
An introduction. Games and Economic Behav-
ior 1 (1), 117–136.
Gigerenzer, G. and W. Gaissmaier (2015). Decision
making: Nonrational theories. In J. D. Wright
(Ed.), International Encyclopedia of the Social
and Behavioral Sciences (2nd Edition) , pp. 911–
916.
Halpern, J. Y., R. Pass, and L. Seeman (2012). I’m
doing as well as I can: modeling people as ra-
tional finite automata. In Proc. Twenty-Sixth Na-
tional Conference on Artificial Intelligence (AAAI
’12), pp. 1917–1923.
Kanouse, D. E. and L. Hanson (1972). Negativity
in evaluations. In E. E. Jones, D. E. Kanouse,
S. Valins, H. H. Kelley, R. E. Nisbett, and
B. Weiner (Eds.), Attribution: Perceiving the
Causes of Behavior . Morristown, NJ: General
Learning Press.
Kar, D., F. Fang, F. Delle Fave, N. Sintov, and
M. Tambe (2015). “Game of thrones”: when hu-
man behavior models compete in repeated Stack-
elberg security games. In Proc. 2015 International
Conference on Autonomous Agents and Multia-
gent Systems , pp. 1381–1390.
Kaufmann, E., N. Korda, and R. Munos (2012).
Thompson sampling: an asymptotically optimal
finite-time analysis. In N. H. Bshouty, G. Stoltz,
N. Vayatis, and T. Zeugmann (Eds.), Algorith-
mic Learning Theory (ALT 2012), LNCS, Volume 7568, pp. 199–213. Springer.
Lai, T. L. and H. Robbins (1985). Asymptotically ef-
ficient adaptive allocation rules. Advances in Ap-
plied Mathematics 6 (1), 4–22.
Lewin, K., T. Dembo, L. Festinger, and P. S.
Sears (1944). Level of aspiration. In J. M. Hunt
(Ed.), Personality and the Behavior Disorders , pp.
333–378. Cambridge, MA: Ronald Press.
Neyman, A. (1985). Bounded complexity justifies co-
operation in finitely repeated prisoner’s dilemma.
Economic Letters 19 , 227–229.
Papadimitriou, C. H. and M. Yannakakis (1994). On
complexity as bounded rationality. In Proc. 26th
ACM Symposium on Theory of Computing , pp.
726–733.
Rao, A. (2017). A finite memory automaton for two-
armed Bernoulli bandit problems. In Proc. Thirty-
First National Conference on Artificial Intelli-gence (AAAI ’17) , pp. 4981–4982. The full pa-
per is available at http://raoariel.github.io/raoariel-
fma.pdf.
Rubinstein, A. (1986). Finite automata play the re-
peated prisoner’s dilemma. Journal of Economic
Theory 39 , 83–96.
Samuelson, W. and R. Zeckhauser (1998). Status quo
bias in decision making. Journal of Risk and Un-
certainty 1 , 7–59.
Selten, R. (1998). Aspiration adaptation theory. Jour-
nal of Mathematical Psychology 42 , 191–214.
Sharot, T. (2011). The Optimism Bias: A Tour of the
Irrationally Positive Brain . New York, NY: Pan-
theon Books.
Simon, H. A. (1956). Rational choice and the structure
of the environment. Psychological Review 63 (2),
129–138.
Simon, H. A. (1982). Models of bounded rationality .
Cambridge, MA: MIT Press.
Thaler, R. (2015). Misbehaving: The Making of Be-
havioral Economics . New York, NY: W. W. Nor-
ton and Company.
Thompson, W. R. (1933). On the likelihood that one
unknown probability exceeds another in view of
the evidence of two samples. Biometrika 25 (3–4),
285–294.
Tversky, A. and D. Kahneman (1973). Availability:
a heuristic for judging frequency and probability.
Cognitive Psychology 5 , 207–232.
Wilson, A. (2015). Bounded memory and biases in in-
formation processing. Econometrica 82 (6), 2257–
2294. |
2d4ae3c2-1908-401f-a37d-533335fb78da | StampyAI/alignment-research-dataset/blogs | Blogs | 2019 recent trends in GPU price per FLOPS
*Published 25 March, 2020*
We estimate that in recent years, GPU prices have fallen at rates that would yield an order of magnitude over roughly:
* 17 years for single-precision FLOPS
* 10 years for half-precision FLOPS
* 5 years for half-precision fused multiply-add FLOPS
Details
=======
GPUs (graphics processing units) are specialized electronic circuits originally used for computer graphics.[1](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-1-2316 "“A graphics processing unit (GPU) is a specialized electronic circuit designed to rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device … Modern GPUs are very efficient at manipulating computer graphics and image processing … The term was popularized by Nvidia in 1999, who marketed the GeForce 256 as “the world’s first GPU”. It was presented as a “single-chip processor with integrated transform, lighting, triangle setup/clipping, and rendering engines”.”<br>“Graphics Processing Unit.” Wikipedia. Wikimedia Foundation, March 24, 2020. <a href=\"https://en.wikipedia.org/w/index.php?title=Graphics_processing_unit&oldid=947270104\">https://en.wikipedia.org/w/index.php?title=Graphics_processing_unit&oldid=947270104</a>.") In recent years, they have been popularly used for machine learning applications.[2](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-2-2316 "Fraenkel, Bernard. “Council Post: For Machine Learning, It’s All About GPUs.” Forbes. Forbes Magazine, December 8, 2017. <a href=\"https://www.forbes.com/sites/forbestechcouncil/2017/12/01/for-machine-learning-its-all-about-gpus/#5ed90c227699\">https://www.forbes.com/sites/forbestechcouncil/2017/12/01/for-machine-learning-its-all-about-gpus/#5ed90c227699</a>.") One measure of GPU performance is FLOPS, the number of operations on floating-point numbers a GPU can perform in a second.[3](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-3-2316 "“In computing, floating point operations per second (FLOPS, flops or flop/s) is a measure of computer performance, useful in fields of scientific computations that require floating-point calculations. For such cases it is a more accurate measure than measuring instructions per second.”<br>“FLOPS.” Wikipedia. Wikimedia Foundation, March 24, 2020. <a href=\"https://en.wikipedia.org/w/index.php?title=FLOPS&oldid=947177339\">https://en.wikipedia.org/w/index.php?title=FLOPS&oldid=947177339</a>") This page looks at the trends in GPU price / FLOPS of theoretical peak performance over the past 13 years. It does not include the cost of operating the GPUs, and it does not consider GPUs rented through cloud computing.
Theoretical peak performance
----------------------------
‘Theoretical peak performance’ numbers appear to be determined by adding together the theoretical performances of the processing components of the GPU, which are calculated by multiplying the clock speed of the component by the number of instructions it can perform per cycle.[4](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-4-2316 "From this discussion on Nvidia’s forums about theoretical GFLOPS: “GPU theoretical flops calculation is similar conceptually. It will vary by GPU just as the CPU calculation varies by CPU architecture and model. To use K40m as an example: http://www.nvidia.com/content/PDF/kepler/Tesla-K40-PCIe-Passive-Board-Spec-BD-06902-001_v05.pdf<br></p>
there are 15 SMs (2880/192), each with 64 DP ALUs that are capable of retiring one DP FMA instruction per cycle (== 2 DP Flops per cycle).<br>
15 x 64 x 2 * 745MHz = 1.43 TFlops/sec<br>
which is the stated perf:<br>
http://www.nvidia.com/content/tesla/pdf/NVIDIA-Tesla-Kepler-Family-Datasheet.pdf “
<p>Person. “Comparing CPU and GPU Theoretical GFLOPS.” NVIDIA Developer Forums, May 21, 2014. <a href=\"https://forums.developer.nvidia.com/t/comparing-cpu-and-gpu-theoretical-gflops/33335\">https://forums.developer.nvidia.com/t/comparing-cpu-and-gpu-theoretical-gflops/33335</a>.") These numbers are given by the developer and may not reflect actual performance on a given application.[5](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-5-2316 "From this blog post on the performance of TensorCores, a component of new Nvidia GPUs specialized for deep learning: “The problem is it’s totally unclear how to approach the peak performance of 120 TFLOPS, and as far as I know, no one could achieve so significant speedup on real tasks. Let me know if you aware of good cases.”<br>Sapunov, Grigory. “Hardware for Deep Learning. Part 3: GPU.” Medium. Intento, January 20, 2020. <a href=\"https://blog.inten.to/hardware-for-deep-learning-part-3-gpu-8906c1644664\">https://blog.inten.to/hardware-for-deep-learning-part-3-gpu-8906c1644664</a>.")
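As an illustration of how such numbers are derived, the calculation quoted in the footnote above (for the Tesla K40) can be reproduced in a few lines; the variable names are ours.

```python
# Peak FLOPS = SMs x ALUs per SM x FLOPs per ALU per cycle x clock (Tesla K40, double precision)
sms = 15                 # 2880 CUDA cores / 192 per SM
alus_per_sm = 64         # double-precision units per SM
flops_per_cycle = 2      # one fused multiply-add counts as two floating-point operations
clock_hz = 745e6

peak_flops = sms * alus_per_sm * flops_per_cycle * clock_hz
print(peak_flops / 1e12)  # ~1.43 TFLOPS, matching the quoted spec sheet number
```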
Metrics
-------
We collected data on multiple slightly different measures of GPU price and FLOPS performance.
### Price metrics
GPU prices are divided into release prices, which reflect the manufacturer suggested retail prices that GPUs are originally sold at, and active prices, which are the prices at which GPUs are actually sold at over time, often by resellers.
We expect that active prices better represent prices available to hardware users, but collect release prices also, as supporting evidence.
### FLOPS performance metrics
Several varieties of ‘FLOPS’ can be distinguished based on the specifics of the operations they involve. Here we are interested in single-precision FLOPS, half-precision FLOPS, and half-precision fused-multiply add FLOPS.
‘Single-precision’ and ‘half-precision’ refer to the number of bits used to specify a floating point number.[6](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-6-2316 "Gupta, Geetika. “Difference Between Single-, Double-, Multi-, Mixed-Precision: NVIDIA Blog.” The Official NVIDIA Blog, November 21, 2019. <a href=\"https://blogs.nvidia.com/blog/2019/11/15/whats-the-difference-between-single-double-multi-and-mixed-precision-computing/\">https://blogs.nvidia.com/blog/2019/11/15/whats-the-difference-between-single-double-multi-and-mixed-precision-computing/</a>.") Using more bits to specify a number achieves greater precision at the cost of more computational steps per calculation. Our data suggests that GPUs have largely been improving in single-precision performance in recent decades,[7](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-7-2316 "See our <a href=\"https://aiimpacts.org/recent-trend-in-the-cost-of-computing/\">2017 analysis</a>, footnote 4, which notes that single-precision price performance seems to be improving while double-precision price performance is not") and half-precision performance appears to be increasingly popular because it is adequate for deep learning.[8](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-8-2316 "“With the growing importance of deep learning and energy-saving approximate computing, half precision floating point arithmetic (FP16) is fast gaining popularity. Nvidia’s recent Pascal architecture was the first GPU that offered FP16 support.”<br>N. Ho and W. Wong, <a href=\"https://ieeexplore.ieee.org/abstract/document/8091072\">“Exploiting half precision arithmetic in Nvidia GPUs,”</a> 2017 IEEE High Performance Extreme Computing Conference (HPEC), Waltham, MA, 2017, pp. 1-7.")
Nvidia, the main provider of chips for machine learning applications,[9](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-9-2316 "“In a recent paper, Google revealed that its TPU can be up to 30x faster than a GPU for inference (the TPU can’t do training of neural networks). As the main provider of chips for machine learning applications, Nvidia took some issue with that, arguing that some of its existing inference chips were already highly competitive to the TPU.”<br>Armasu, Lucian. “On Tensors, Tensorflow, And Nvidia’s Latest ‘Tensor Cores’.” Tom’s Hardware. Tom’s Hardware, May 11, 2017. <a href=\"https://www.tomshardware.com/news/nvidia-tensor-core-tesla-v100,34384.html\">https://www.tomshardware.com/news/nvidia-tensor-core-tesla-v100,34384.html</a>.") recently released a series of GPUs featuring Tensor Cores,[10](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-10-2316 "“Tensor Cores in NVIDIA Volta GPU Architecture.” NVIDIA. Accessed May 2, 2020. https://www.nvidia.com/en-us/data-center/tensorcore/.<br>") which claim to deliver “groundbreaking AI performance”. Tensor Core performance is measured in FLOPS, but they perform exclusively certain kinds of floating-point operations known as fused multiply-adds (FMAs).[11](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-11-2316 "“Volta is equipped with 640 Tensor Cores, each performing 64 floating-point fused-multiply-add (FMA) operations per clock. That delivers up to 125 TFLOPS for training and inference applications.”<br>“Tensor Cores in NVIDIA Volta GPU Architecture.” NVIDIA. Accessed March 25, 2020. <a href=\"https://www.nvidia.com/en-us/data-center/tensorcore/\">https://www.nvidia.com/en-us/data-center/tensorcore/</a>.") Performance on these operations is important for certain kinds of deep learning performance,[12](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-12-2316 "“A useful operation in computer linear algebra is multiply-add: calculating the sum of a value c with a product of other values a x b to produce c + a x b. Typically, thousands of such products may be summed in a single accumulator for a model such as ResNet-50, with many millions of independent accumulations when running a model in deployment, and quadrillions of these for training models.”<br>Johnson, Jeff. “Making Floating Point Math Highly Efficient for AI Hardware.” Facebook AI Blog, November 8, 2018. <a href=\"https://ai.facebook.com/blog/making-floating-point-math-highly-efficient-for-ai-hardware/\">https://ai.facebook.com/blog/making-floating-point-math-highly-efficient-for-ai-hardware/</a>.") so we track ‘GPU price / FMA FLOPS’ as well as ‘GPU price / FLOPS’.
In addition to purely half-precision computations, Tensor Cores are capable of performing mixed-precision computations, where part of the computation is done in half-precision and part in single-precision.[13](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-13-2316 "See Figure 2:<br>Gupta, Geetika. “Using Tensor Cores for Mixed-Precision Scientific Computing.” NVIDIA Developer Blog, April 19, 2019. <a href=\"https://devblogs.nvidia.com/tensor-cores-mixed-precision-scientific-computing\">https://devblogs.nvidia.com/tensor-cores-mixed-precision-scientific-computing</a>/.") Since explicitly mixed-precision-optimized hardware is quite recent, we don’t look at the trend in mixed-precision price performance, and only look at the trend in half-precision price performance.
#### Precision tradeoffs
Any GPU that performs multiple kinds of computations (single-precision, half-precision, half-precision fused multiply add) trades off performance on one for performance on the other, because there is limited space on the chip, and transistors must be allocated to either one type of computation or the other.[14](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-14-2316 "Three different individuals told us about this constraint, including one Nvidia employee.") All current GPUs that perform half-precision or TensorCore fused-multiply-add computations also do single-precision computations, so they are splitting their transistor budget. For this reason, our impression is that half-precision FLOPS could be much cheaper now if entire GPUs were allocated to each one alone, rather than split between them.
Release date prices
-------------------
We collected data on theoretical peak performance (FLOPS), release date, and price from several sources, including Wikipedia.[15](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-15-2316 "See the ‘Source’ column in <a href=\"https://docs.google.com/spreadsheets/d/1ZZm5Wgr3BDRtloTZGylWzYTaVr5VqjiwOiRNu5Pz_q8/edit?usp=sharing\">this spreadsheet</a>, tab ‘GPU Data’. We largely used <a href=\"https://www.techpowerup.com/\">TechPowerUp</a>, Wikipedia’s <a href=\"https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units\">List of Nvidia GPUs</a>, <a href=\"https://en.wikipedia.org/wiki/List_of_AMD_graphics_processing_units\">List of AMD GPUs</a>, and <a href=\"https://docs.google.com/spreadsheets/d/1xAo6TcSgHdd25EdQ-6GqM0VKbTYu8cWyycgJhHRVIgY/edit#gid=0\">this document listing GPU performance</a>.") (Data is available in [this spreadsheet](https://docs.google.com/spreadsheets/d/1ZZm5Wgr3BDRtloTZGylWzYTaVr5VqjiwOiRNu5Pz_q8/edit?usp=sharing)). We found GPUs by looking at Wikipedia’s existing large lists[16](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-16-2316 "See Wikipedia’s <a href=\"https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units\">List of Nvidia GPUs</a> and <a href=\"https://en.wikipedia.org/wiki/List_of_AMD_graphics_processing_units\">List of AMD GPUs</a>.") and by Googling “popular GPUs” and “popular deep learning GPUs”. We included any hardware that was labeled as a ‘GPU’. We adjusted prices for inflation based on the consumer price index.[17](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-17-2316 "“CPI Home.” U.S. Bureau of Labor Statistics. U.S. Bureau of Labor Statistics. Accessed May 2, 2020. https://www.bls.gov/cpi/.")
We were unable to find price and performance data for many popular GPUs and suspect that we are missing many from our list. In our search, we did not find any GPUs that beat our 2017 minimum of $0.03 (release price) / single-precision GFLOPS. We put out a $20 bounty on a popular Facebook group to find a cheaper GPU / FLOPS, and the bounty went unclaimed, so we are reasonably confident in this minimum.[18](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-18-2316 "The Facebook group is for posting and claiming bounties and has around 750 people, many with interests in computers. The bounty has been up for two months, as of March 13 2020.")
### GPU price / single-precision FLOPS
Figure 1 shows our collected dataset for GPU price / single-precision FLOPS over time.[19](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-19-2316 "See <a href=\"https://docs.google.com/spreadsheets/d/1ZZm5Wgr3BDRtloTZGylWzYTaVr5VqjiwOiRNu5Pz_q8/edit?usp=sharing\">this spreadsheet</a>, tab ‘Cleaned GPU Data for SP’ for the chart generation.")
**Figure 1: Real GPU price / single-precision FLOPS over time. The vertical axis is log-scale. Price is measured in 2019 dollars.**
To find a clear trend for the prices of the cheapest GPUs / FLOPS, we looked at the running minimum prices every 10 days.[20](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-20-2316 "See <a href=\"https://docs.google.com/spreadsheets/d/1ZZm5Wgr3BDRtloTZGylWzYTaVr5VqjiwOiRNu5Pz_q8/edit?usp=sharing\">this spreadsheet</a>, tab ‘Cleaned GPU Data for SP Minimums’ for the plotting. We used <a href=\"https://drive.google.com/open?id=1JP98EP8nYA0KqofLm24vF2vwNL0PalcB\">this script</a> on the data from the ‘Cleaned GPU Data for SP’ to calculate the minimums and then import them into a new sheet of the spreadsheet.")
**Figure 2: Ten-day minimums in real GPU price / single-precision FLOPS over time. The vertical axis is log-scale. Price is measured in 2019 dollars. The blue line shows the trendline ignoring data before late 2007. (We believe the apparent steep decline prior to late 2007 is an artefact of a lack of data for that time period.)**
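For concreteness, a minimal sketch of this running-minimum computation (not the linked analysis script; the file and column names below are assumptions):

```python
import pandas as pd

# Assumed columns: release_date, price_2019_usd, gflops (theoretical peak, single-precision).
df = pd.read_csv("gpu_release_data.csv", parse_dates=["release_date"])
df["price_per_gflops"] = df["price_2019_usd"] / df["gflops"]

series = df.set_index("release_date")["price_per_gflops"].sort_index()
# Cheapest price / GFLOPS among GPUs released in each 10-day window...
per_window_min = series.resample("10D").min().dropna()
# ...and the running minimum over time, i.e. the cheapest hardware released so far.
running_min = per_window_min.cummin()
print(running_min.tail())
```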
The cheapest GPU price / FLOPS (using release date pricing) has not decreased since 2017. However, there was a similar period of stagnation between early 2009 and 2011, so this may not represent a long-run slowing of the trend.
Based on the figures above, the running minimums seem to follow a roughly exponential trend. If we do not include the initial point in 2007 (which we suspect is not in fact the cheapest hardware at the time), we get that the cheapest GPU price / single-precision FLOPS fell by around 17% per year, for a factor of ten in ~12.5 years.[21](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-21-2316 "See <a href=\"https://docs.google.com/spreadsheets/d/1ZZm5Wgr3BDRtloTZGylWzYTaVr5VqjiwOiRNu5Pz_q8/edit?usp=sharing\">this spreadsheet</a>, tab ‘Cleaned GPU Data for SP Minimums’ for the calculation.")
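A minimal sketch of how such a trend converts between an annual rate of decline and years per order of magnitude (we fit a line to the log of the running minimums; `running_min` is the assumed series from the sketch above, not the actual spreadsheet data):

```python
import numpy as np

# Fit log10(price / GFLOPS) against time in years, then convert the slope.
t_years = (running_min.index - running_min.index[0]).days / 365.25
slope, intercept = np.polyfit(t_years, np.log10(running_min.values), 1)

annual_decline = 1 - 10 ** slope        # e.g. slope of about -0.08 gives ~17% per year
years_per_factor_of_ten = -1.0 / slope  # e.g. ~12.5 years for an order of magnitude
print(annual_decline, years_per_factor_of_ten)
```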
### GPU price / half-precision FLOPS
Figure 3 shows GPU price / half-precision FLOPS for all the GPUs in our search above for which we could find half-precision theoretical performance.[22](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-22-2316 "See <a href=\"https://docs.google.com/spreadsheets/d/1ZZm5Wgr3BDRtloTZGylWzYTaVr5VqjiwOiRNu5Pz_q8/edit?usp=sharing\">this spreadsheet</a>, tab ‘Cleaned GPU Data for HP’ for the chart generation.")
**Figure 3: Real GPU price / half-precision FLOPS over time. The vertical axis is log-scale. Price is measured in 2019 dollars.**
Again, we looked at the running minimums of this graph every 10 days, shown in Figure 4 below.[23](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-23-2316 "See <a href=\"https://docs.google.com/spreadsheets/d/1ZZm5Wgr3BDRtloTZGylWzYTaVr5VqjiwOiRNu5Pz_q8/edit?usp=sharing\">this spreadsheet</a>, tab ‘Cleaned GPU Data for HP Minimums’ for the plotting. We used <a href=\"https://drive.google.com/open?id=1JP98EP8nYA0KqofLm24vF2vwNL0PalcB\">this script</a> on the data from the ‘Cleaned GPU Data for HP’ to calculate the minimums and then import them into a new sheet of the spreadsheet.")
**Figure 4: Minimums in real GPU price / half-precision FLOPS over time. The vertical axis is log-scale. Price is measured in 2019 dollars.**
If we assume an exponential trend with noise,[24](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-24-2316 "Where ambiguous, we assume these trends are exponential rather than linear, because our understanding is that that is much more common historically in computing hardware price trends.") cheapest GPU price / half-precision FLOPS fell by around 26% per year, which would yield a factor of ten after ~8 years.[25](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-25-2316 "See <a href=\"https://docs.google.com/spreadsheets/d/1ZZm5Wgr3BDRtloTZGylWzYTaVr5VqjiwOiRNu5Pz_q8/edit?usp=sharing\">this spreadsheet</a>, tab ‘Cleaned GPU Data for HP Minimums’ for the calculation.")
### GPU price / half-precision FMA FLOPS
Figure 5 shows GPU price / half-precision FMA FLOPS for all the GPUs in our search above for which we could find half-precision FMA theoretical performance.[26](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-26-2316 "See <a href=\"https://docs.google.com/spreadsheets/d/1ZZm5Wgr3BDRtloTZGylWzYTaVr5VqjiwOiRNu5Pz_q8/edit?usp=sharing\">this spreadsheet</a>, tab ‘Cleaned GPU Data for HP + Tensor Cores’ for the chart generation.") (Note that this includes all of our half-precision data above, since those FLOPS could be used for fused-multiply adds in particular). GPUs with TensorCores are marked in red.
**Figure 5: Real GPU price / half-precision FMA FLOPS over time. Price is measured in 2019 dollars.**
Figure 6 shows the running minimums of GPU price / HP FMA FLOPS.[27](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-27-2316 "See <a href=\"https://docs.google.com/spreadsheets/d/1ZZm5Wgr3BDRtloTZGylWzYTaVr5VqjiwOiRNu5Pz_q8/edit?usp=sharing\">this spreadsheet</a>, tab ‘Cleaned GPU Data for HP + Tensor Cores Minimums’ for the plotting. We used <a href=\"https://drive.google.com/open?id=1JP98EP8nYA0KqofLm24vF2vwNL0PalcB\">this script</a> on the data from the ‘Cleaned GPU Data for HP + Tensor Cores’ to calculate the minimums and then import them into a new sheet of the spreadsheet.")
**Figure 6: Minimums in real GPU price / half-precision FMA FLOPS over time. Price is measured in 2019 dollars.**
GPU price / Half-Precision FMA FLOPS appears to be following an exponential trend over the last four years, falling by around 46% per year, for a factor of ten in ~4 years.[28](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-28-2316 "See <a href=\"https://docs.google.com/spreadsheets/d/1ZZm5Wgr3BDRtloTZGylWzYTaVr5VqjiwOiRNu5Pz_q8/edit?usp=sharing\">this spreadsheet</a>, tab ‘Cleaned GPU Data for HP + Tensor Cores Minimums’ for the calculation.")
Active Prices
-------------
GPU prices often go down from the time of release, and some popular GPUs are older ones that have gone down in price.[29](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-29-2316 "For example, one of the GPUs recommended for deep learning <a href=\"https://www.reddit.com/r/MachineLearning/comments/b95182/d_which_gpus_to_get_for_deep_learning_my/\">in this Reddit thread</a> is the GTX 1060 (6GB), which has been around <a href=\"https://www.techpowerup.com/gpu-specs/geforce-gtx-1060-6-gb.c2862\">since 2016</a>.") Given this, it makes sense to look at active price data for the same GPU over time.
### Data Sources
We collected data on peak theoretical performance in FLOPS from [TechPowerUp](https://www.techpowerup.com/)[30](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-30-2316 "We scraped data from <a href=\"https://www.techpowerup.com/gpu-specs/radeon-rx-480.c2848\">individual TechPowerUp pages</a> using <a href=\"https://drive.google.com/open?id=1msy977jSLcJspULMWlWOfIIFRWo_vsds\">this script</a>. Our full scraped TechPowerUp dataset can be found <a href=\"https://docs.google.com/spreadsheets/d/1pXTyUJ2AvpkhYtphGn8UnAl4gI8j2jsx1AaGrOahl7o/edit?usp=sharing\">here</a>.") and combined it with active GPU price data to get GPU price / FLOPS over time.[31](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-31-2316 "We chose to automatically scrape theoretical peak performance numbers from TechPowerUp instead of using the ones we manually collected above because there were several GPUs in the active pricing datasets that we hadn’t collected data for manually, and it was easier to scrape the entire site than just the subset of GPUs we needed.") Our primary source of historical pricing data was Passmark, though we also found a less trustworthy dataset on Kaggle which we used to check our analysis. We adjusted prices for inflation based on the consumer price index.[32](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-32-2316 "“CPI Home.” U.S. Bureau of Labor Statistics. U.S. Bureau of Labor Statistics. Accessed May 2, 2020. https://www.bls.gov/cpi/.")
#### Passmark
We scraped pricing data[33](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-33-2316 "We used <a href=\"https://drive.google.com/open?id=1nd7111hOb-eCMBCOe1qocXYuAJys0pLk\">this script</a>.") on GPUs between 2011 and early 2020 from Passmark.[34](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-34-2316 "“PassMark – GeForce GTX 660 – Price Performance Comparison.” Accessed March 24, 2020. <a href=\"https://www.videocardbenchmark.net/gpu.php?gpu=GeForce+GTX+660&id=2152\">https://www.videocardbenchmark.net/gpu.php?gpu=GeForce+GTX+660&id=2152</a>.") Where necessary, we renamed GPUs from Passmark to be consistent with TechPowerUp.[35](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-35-2316 "In most cases where renaming was necessary, the same GPU had multiple clear names, e.g. the “Radeon HD 7970 / R9 280X” in PassMark was just called the “Radeon HD 7970” in TechPowerUp. In a few cases, Passmark listed some GPUs which TechPowerUp listed separately as one GPU, e.g. “Radeon R9 290X / 390X” seemed to ambiguously refer to the Radeon R9 290X or Radeon R9 390X. In these cases, we conservatively assume that the GPU refers to the less powerful / earlier GPU. In one exceptional case, we assumed that the “Radeon R9 Fury + Fury X” referred to the Radeon Fury X in PassMark. The ambiguously named GPUs were not in the minimum data we calculated, so probably did not have a strong effect on the final result.") The Passmark data consists of 38,138 price points for 352 GPUs. We guess that these represent most popular GPUs.
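A minimal sketch of the renaming and joining step (the rename mapping shows the one example from the footnote above and is not exhaustive; `passmark` and `techpowerup` are assumed DataFrames sharing a `gpu_name` column):

```python
import pandas as pd

# Example mapping only; the real list covers every ambiguous Passmark name.
RENAMES = {"Radeon HD 7970 / R9 280X": "Radeon HD 7970"}

def join_prices_with_specs(passmark: pd.DataFrame, techpowerup: pd.DataFrame) -> pd.DataFrame:
    passmark = passmark.assign(gpu_name=passmark["gpu_name"].replace(RENAMES))
    # Attach theoretical peak FLOPS to each price point by GPU name.
    return passmark.merge(
        techpowerup[["gpu_name", "gflops_single", "gflops_half"]],
        on="gpu_name",
        how="inner",
    )
```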
Looking at the ‘current prices’ listed on individual Passmark GPU pages, prices appear to be sourced from Amazon, Newegg, and Ebay. Passmark’s listed pricing data does not correspond to regular intervals. We don’t know if prices were pulled at irregular intervals, or if Passmark pulls prices regularly and then only lists major changes as price points. When we see a price point, we treat the GPU as being available at that price only at that point in time, not indefinitely into the future.
The data contains several blips where a GPU is briefly sold unusually cheaply. Spot-checking some of these suggests that they correspond to single GPUs or small batches for sale, which we are not interested in tracking: we are trying to predict AI progress, which presumably isn’t influenced by temporary discounts on tiny batches of GPUs.
#### Kaggle
[This Kaggle dataset](https://www.kaggle.com/raczeq/ethereum-effect-pc-parts) contains scraped data of GPU prices from price comparison sites PriceSpy.co.uk, PCPartPicker.com, Geizhals.eu from the years 2013 – 2018. The Kaggle dataset has 319,147 price points for 284 GPUs. Unfortunately, at least some of the data is clearly wrong, potentially because price comparison sites include pricing data from untrustworthy merchants.[36](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-36-2316 "For example, the Kaggle dataset includes extremely cheap FirePro S7150s sold in 2014, even though the FirePro S7150 only came out in 2016. One of the sellers of these cheap GPUs were ‘Club 3D’, which also appeared to sell several other erroneously cheap GPUs.") As such, we don’t use the Kaggle data directly in our analysis, but do use it as a check on our Passmark data. The data that we get from Passmark roughly appears to be a subset of the Kaggle data from 2013 – 2018,[37](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-37-2316 "See <a href=\"https://drive.google.com/open?id=1uvO-gNpAGh9qMzs5MnNtOg1X-R7JzdAq\">this plot of Passmark single-precision GPU price / FLOPS</a> compared to the <a href=\"https://drive.google.com/open?id=194Tqrcix2XdytbT-WbgcRbFpeHIHqsao\">combined Passmark and Kaggle single-precision GPU price / FLOPS</a>, and <a href=\"https://drive.google.com/open?id=1hk-1rHqOkWUwh3BVNbwTEu8dEfjDhsLT\">this plot of Passmark half-precision GPU price / FLOPS</a> compared to the <a href=\"https://drive.google.com/open?id=1mcpQigPJs9CRqebY-Uvm0dJ1si1jeTQv\">combined Passmark and Kaggle half-precision $ / FLOPS</a>. In both cases the 2013 – 2018 Passmark data appears to roughly be a subset of the Kaggle data.") which is what we would expect if the price comparison engines picked up prices from the merchants Passmark looks at.
#### Limitations
There are a number of reasons why we think this analysis may in fact not reflect GPU price trends:
* We effectively have just one source of pricing data, Passmark.
* Passmark appears to only look at Amazon, Newegg, and Ebay for pricing data.
* We are not sure, but we suspect that Passmark only looks at the U.S. versions of Amazon, Newegg, and Ebay, and pricing may be significantly different in other parts of the world (though we guess it wouldn’t be different enough to change the general trend much).
* As mentioned above, we are not sure if Passmark pulls price data regularly and only lists major price changes, or pulls price data irregularly. If the former is true, our data may be overrepresenting periods where the price changes dramatically.
* None of the price data we found includes quantities of GPUs which were available at that price, which means some prices may be for only a very limited number of GPUs.
* We don’t know how much the prices from these datasets reflect the prices that a company pays when buying GPUs in bulk, which we may be more interested in tracking.
A better version of this analysis might start with more complete data from price comparison engines (along the lines of the Kaggle dataset) and then filter out clearly erroneous pricing information in some principled way.
### Data
The original scraped datasets with cards renamed to match TechPowerUp can be found [here](https://drive.google.com/drive/folders/1cCjG_sUUePxbh5fN9ViOPX6GW9D2GyPJ?usp=sharing). GPU price / FLOPS data is graphed on a log scale in the figures below. Price points for the same GPU are marked in the same color. We adjusted prices for inflation using the consumer price index.[38](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-38-2316 "“CPI Home.” U.S. Bureau of Labor Statistics. U.S. Bureau of Labor Statistics. Accessed May 2, 2020. https://www.bls.gov/cpi/.") All points below are in 2019 dollars.
To try to filter out noisy prices that didn’t last or were only available in small numbers, we took out the lowest 5% of data in every several day period[39](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-39-2316 "We set this period to be 10 days long when looking at single-precision data, and 30 days long when looking at half-precision data, since half-precision data was significantly more sparse.") to get the 95th percentile cheapest hardware. We then found linear and exponential trendlines of best fit through the available hardware with the lowest GPU price / FLOPS every several days.[40](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-40-2316 "This calculation can be found in <a href=\"https://docs.google.com/spreadsheets/d/15pTVDml1j81HROZ3_UeHZ51aBoqq-94-eM8N80npUX0/edit?usp=sharing\">this spreadsheet</a>.")
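A minimal sketch of that filtering step (the file name, column names, and window length are assumptions; this is not the linked script):

```python
import pandas as pd

prices = pd.read_csv("passmark_prices.csv", parse_dates=["date"])  # assumed columns
prices["price_per_gflops"] = prices["price_2019_usd"] / prices["gflops"]

window_days = 10  # we used 30-day windows for the sparser half-precision data
t0 = prices["date"].min()
prices["window"] = (prices["date"] - t0).dt.days // window_days

def drop_cheapest_5_percent(group: pd.DataFrame) -> pd.DataFrame:
    cutoff = group["price_per_gflops"].quantile(0.05)
    return group[group["price_per_gflops"] >= cutoff]

filtered = prices.groupby("window", group_keys=False).apply(drop_cheapest_5_percent)
# Cheapest remaining price / GFLOPS in each window; the trendlines are fit through these.
window_minimums = filtered.groupby("window")["price_per_gflops"].min()
```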
#### GPU price / single-precision FLOPS
Figures 7-10 show the raw data, 95th percentile data, and trendlines for single-precision GPU price / FLOPS for the Passmark dataset. [This folder](https://drive.google.com/open?id=1-PEl2kSORRH78Qa4huRF-t_g_m1QOTDs) contains plots of all our datasets, including the Kaggle dataset and combined Passmark + Kaggle dataset.[41](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-41-2316 "We used a Python plotting library to generate our plots, the script can be found <a href=\"https://drive.google.com/open?id=1u3qI9m9W6_9efIpsDBq1Hc-qixcKy8Sb\">here</a>. All of our resulting plots can be found <a href=\"https://drive.google.com/open?id=1afVbKn34pw5rj4fn1vI_qOhdZh5GLfwE\">here</a>. ‘single’ vs. ‘half’ refers to whether its $ / FLOPS data for single or half-precision FLOPS, ‘passmark’, ‘kaggle’, and ‘combined’ refer to which dataset is being plotted and ‘raw’ vs. ‘95’ refer to whether we’re plotting all the data or the 95th percentile data.")

**Figure 7: GPU price / single-precision FLOPS over time, taken from our Passmark dataset.[42](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-42-2316 "The dataset we used for this plot can be found <a href=\"https://drive.google.com/open?id=19FB-_MnQAtJErb8e_5DHTRVKAPg4g1YX\">here</a>. This a processed version of our scraped dataset, with prices / FLOPS adjusted for inflation. The script we used to process and plot can be found <a href=\"https://drive.google.com/open?id=1u3qI9m9W6_9efIpsDBq1Hc-qixcKy8Sb\">here</a>.") Price is measured in 2019 dollars. [This picture](https://drive.google.com/open?id=194Tqrcix2XdytbT-WbgcRbFpeHIHqsao) shows that the Kaggle data does appear to be a superset of the Passmark data from 2013 – 2018, giving us some evidence that the Passmark data is correct. The vertical axis is log-scale.**

**Figure 8: The top 95% of data every 10 days for GPU price / single-precision FLOPS over time, taken from the Passmark dataset we plotted above. (Figure 7 with the cheapest 5% removed.) The vertical axis is log-scale.[43](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-43-2316 "The script to calculate the 95th percentile and generate this plot can be found <a href=\"https://drive.google.com/open?id=1u3qI9m9W6_9efIpsDBq1Hc-qixcKy8Sb\">here</a>.")**

**Figure 9: The same data as Figure 8, with the vertical axis zoomed-in.**
**Figure 10: The minimum data points from the top 95% of the Passmark dataset, taken every 10 days. We fit linear and exponential trendlines through the data. The vertical axis is log-scale.[44](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-44-2316 "See <a href=\"https://docs.google.com/spreadsheets/d/15pTVDml1j81HROZ3_UeHZ51aBoqq-94-eM8N80npUX0/edit?usp=sharing\">here</a>, tab ‘Passmark SP Minimums’ to see our calculation of the minimums over time. We used <a href=\"https://drive.google.com/open?id=1yRTJwVQAwCqLSTyGXgGIHcRfJFP7C5D2\">this script</a> to generate the minimums, then imported them into this spreadsheet.")**
##### Analysis
The cheapest 95th-percentile data every 10 days appears to fit both a linear and an exponential trendline reasonably well. However, we assume that progress follows an exponential, because previous progress has [followed an exponential](https://aiimpacts.org/recent-trend-in-the-cost-of-computing/).
In the Passmark dataset, the exponential trendline suggested that from 2011 to 2020, 95th-percentile GPU price / single-precision FLOPS fell by around 13% per year, for a factor of ten in ~17 years,[45](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-45-2316 "You can see our calculations for this <a href=\"https://docs.google.com/spreadsheets/d/15pTVDml1j81HROZ3_UeHZ51aBoqq-94-eM8N80npUX0/edit?usp=sharing\">here</a>, sheet ‘Passmark SP Minimums’. Each sheet has a cell ‘Rate to move an order of magnitude’ which has our calculation for how many years we need to move an order of magnitude. In the (untrustworthy) Kaggle dataset alone, its rate would yield an order of magnitude of decrease every ~12 years, and the rate in the combined dataset would yield an order of magnitude of decrease every ~16 years.") bootstrap[46](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-46-2316 "Orloff, Jeremy, and Jonathan Bloom. “Bootstrap Confidence Intervals.” MIT OpenCourseWare, 2014. <a href=\"https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf\">https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf</a>.") 95% confidence interval 16.3 to 18.1 years.[47](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-47-2316 "We used <a href=\"https://drive.google.com/open?id=1XkA-8WruAMKM3y3cdNMPNJHabpMUE_MT\">this script</a> to generate bootstrap confidence intervals for our datasets.") We believe the rise in price / FLOPS in 2017 corresponds to a rise in GPU prices due to increased demand from cryptocurrency miners.[48](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-48-2316 "We think this is the case because we’ve observed this dip in other GPU analyses we’ve done, and because the timing lines up: <a href=\"https://www.techspot.com/news/72854-nvidia-asking-graphics-card-retailers-prioritize-gamers-over.html\">the first table in this article</a> shows how GPU prices were increasing starting 2017 and continued to increase through 2018, and <a href=\"https://www.kaggle.com/raczeq/impact-of-cryptocurrencies-rates-on-pc-market/data\">the chart here</a> shows how GPU prices increased in 2017.") If we instead look at the trend from 2011 through 2016, before the cryptocurrency rise, we instead get that 95th-percentile GPU price / single-precision FLOPS price fell by around 13% per year, for a factor of ten in ~16 years.[49](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-49-2316 "You can see our calculations for this <a href=\"https://docs.google.com/spreadsheets/d/15pTVDml1j81HROZ3_UeHZ51aBoqq-94-eM8N80npUX0/edit?usp=sharing\">here</a>, sheet ‘Passmark SP Minimums’, next to ‘Exponential trendline from 2015 to 2016. The trendline calculated is technically the linear fit through the log of the data.")
This is slower than the order of magnitude every ~12.5 years we found when looking at release prices. If we restrict the release price data to 2011 – 2019, we get an order of magnitude decrease every ~13.5 years instead,[50](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-50-2316 "See our calculation <a href=\"https://docs.google.com/spreadsheets/d/1ZZm5Wgr3BDRtloTZGylWzYTaVr5VqjiwOiRNu5Pz_q8/edit?usp=sharing\">here</a>, tab ‘Cleaned GPU Data for SP Minimums’, next to the cell marked “Exponential trendline from 2011 to 2019.”") so part of the discrepancy can be explained because of the different start times of the datasets. To get some assurance that our active price data wasn’t erroneous, we spot checked the best active price at the start of 2011, which was somewhat lower than the best release price at the same time, and confirmed that its given price was consistent with surrounding pricing data.[51](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-51-2316 "At the start of 2011, the <a href=\"https://docs.google.com/spreadsheets/d/15pTVDml1j81HROZ3_UeHZ51aBoqq-94-eM8N80npUX0/edit?usp=sharing\">minimum release </a><a href=\"https://docs.google.com/spreadsheets/d/1ZZm5Wgr3BDRtloTZGylWzYTaVr5VqjiwOiRNu5Pz_q8/edit?usp=sharing\">price / FLOPS</a> (see tab, ‘Cleaned GPU Data for SP Minimums’) is .000135 $ / FLOPS, whereas the <a href=\"https://docs.google.com/spreadsheets/d/15pTVDml1j81HROZ3_UeHZ51aBoqq-94-eM8N80npUX0/edit?usp=sharing\">minimum active price / FLOPS</a> (see tab, ‘Passmark SP Minimums’) is around .0001 $ / FLOPS. <a href=\"https://docs.google.com/spreadsheets/d/15pTVDml1j81HROZ3_UeHZ51aBoqq-94-eM8N80npUX0/edit?usp=sharing\">The initial GPU price / FLOPS minimum</a> (see sheet ‘Passmark SP Minimums’) corresponds to the Radeon HD 5850 which had a price of $184.9 in 3/2011 and a release price of $259. <a href=\"https://www.videocardbenchmark.net/gpu.php?gpu=Radeon+HD+5850&id=47\">Looking at the general trend in Passmark</a> suggests that the Radeon HD 5850 did indeed rapidly decline from its $259 release price to consistently below $200 prices.") We think active prices are likely to be closer to the prices at which people actually bought GPUs, so we guess that ~17 years / order of magnitude decrease is a more accurate estimate of the trend we care about.
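The bootstrap confidence intervals quoted above were computed by resampling the fitted data points; a minimal sketch of a percentile bootstrap (our own illustrative code, not the linked script; `t_years` and `log10_price` are assumed arrays of the window-minimum data):

```python
import numpy as np

def years_per_order_of_magnitude(t_years, log10_price):
    slope, _ = np.polyfit(t_years, log10_price, 1)
    return -1.0 / slope

def bootstrap_ci(t_years, log10_price, n_boot=10_000, seed=0):
    rng = np.random.default_rng(seed)
    t, y = np.asarray(t_years), np.asarray(log10_price)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(t), size=len(t))  # resample points with replacement
        estimates.append(years_per_order_of_magnitude(t[idx], y[idx]))
    return np.percentile(estimates, [2.5, 97.5])    # 95% interval
```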
#### GPU price / half-precision FLOPS
Figures 11-14 show the raw data, 95th percentile data, and trendlines for half-precision GPU price / FLOPS for the Passmark dataset. [This folder](https://drive.google.com/open?id=1-PEl2kSORRH78Qa4huRF-t_g_m1QOTDs) contains plots of the Kaggle dataset and combined Passmark + Kaggle dataset.

**Figure 11: GPU price / half-precision FLOPS over time, taken from our Passmark dataset. Price is measured in 2019 dollars.[52](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-52-2316 " The dataset we used for this plot can be found here. This is a processed version of our scraped dataset, with prices / FLOPS adjusted for inflation. The script we used to process and plot can be found here.") This picture shows that the Kaggle data does appear to be a superset of the Passmark data from 2013 – 2018, giving us some evidence that the Passmark data is reasonable. The vertical axis is log-scale.**

**Figure 12: The top 95% of data every 30 days for GPU price / half-precision FLOPS over time, taken from the Passmark dataset we plotted above. (Figure 11 with the cheapest 5% removed.) The vertical axis is log-scale.[53](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-53-2316 "The script to calculate the 95th percentile and generate this plot can be found here.")**

**Figure 13: The same data as Figure 12, with the vertical axis zoomed-in.**
**Figure 14: The minimum data points from the top 95% of the Passmark dataset, taken every 30 days. We fit linear and exponential trendlines through the data. The vertical axis is log-scale.[54](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-54-2316 "See <a href=\"https://docs.google.com/spreadsheets/d/15pTVDml1j81HROZ3_UeHZ51aBoqq-94-eM8N80npUX0/edit?usp=sharing\">here</a>, tab ‘Passmark HP Minimums’ to see our calculation of the minimums over time. We used <a href=\"https://drive.google.com/open?id=1u3qI9m9W6_9efIpsDBq1Hc-qixcKy8Sb\">this script</a> to generate the minimums, then imported them into this spreadsheet.")**
##### Analysis
If we assume the trend is exponential, the Passmark trend seems to suggest that from 2015 to 2020, 95th-percentile GPU price / half-precision FLOPS of GPUs has fallen by around 21% per year, for a factor of ten over ~10 years,[55](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-55-2316 "See the sheet marked ‘Passmark HP minimums’ in <a href=\"https://docs.google.com/spreadsheets/d/15pTVDml1j81HROZ3_UeHZ51aBoqq-94-eM8N80npUX0/edit?usp=sharing\">this spreadsheet</a>. The trendline calculated is technically the linear fit through the log of the data.") bootstrap[56](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-56-2316 "Orloff, Jeremy, and Jonathan Bloom. “Bootstrap Confidence Intervals.” MIT OpenCourseWare, 2014. <a href=\"https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf\">https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf</a>.") 95% confidence interval 8.8 to 11 years.[57](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-57-2316 "We used <a href=\"https://drive.google.com/open?id=1XkA-8WruAMKM3y3cdNMPNJHabpMUE_MT\">this script</a> to generate bootstrap confidence intervals for our datasets.") This is fairly close to the ~8 years / order of magnitude decrease we found when looking at release price data, but we treat active prices as a more accurate estimate of the actual prices at which people bought GPUs. As in our previous dataset, there is a noticeable rise in 2017, which we think is due to GPU prices increasing as a result of cryptocurrency miners. If we look at the trend from 2015 through 2016, before this rise, we get that 95th-percentile GPU price / half-precision FLOPS has fallen by around 14% per year, which would yield a factor of ten over ~8 years.[58](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-58-2316 "See the sheet marked ‘Passmark HP minimums’ in <a href=\"https://docs.google.com/spreadsheets/d/15pTVDml1j81HROZ3_UeHZ51aBoqq-94-eM8N80npUX0/edit?usp=sharing\">this spreadsheet</a>.")
#### GPU price / half-precision FMA FLOPS
Figures 15-18 show the raw data, 95th percentile data, and trendlines for half-precision GPU price / FMA FLOPS for the Passmark dataset. GPUs with Tensor Cores are marked in black. [This folder](https://drive.google.com/open?id=1-PEl2kSORRH78Qa4huRF-t_g_m1QOTDs) contains plots of the Kaggle dataset and combined Passmark + Kaggle dataset.

**Figure 15: Real GPU price / half-precision FMA FLOPS over time, taken from our Passmark dataset.[59](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-59-2316 "The dataset we used for this plot can be found here. This is a processed version of our scraped dataset, with prices / FLOPS adjusted for inflation. The script we used to process and plot can be found here.") Price is measured in 2019 dollars. This picture shows that the Kaggle data does appear to be a superset of the Passmark data from 2013 – 2018, giving us some evidence that the Passmark data is correct. The vertical axis is log-scale.**

**Figure 16: The top 95% of data every 30 days for GPU price / half-precision FMA FLOPS over time, taken from the Passmark dataset we plotted above.[60](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-60-2316 "The script to calculate the 95th percentile and generate this plot can be found here.") (Figure 15 with the cheapest 5% removed.)**

**Figure 17: The same data as Figure 16, with the vertical axis zoomed-in.**
**Figure 18: The minimum data points from the top 95% of the Passmark dataset, taken every 30 days. We fit linear and exponential trendlines through the data.[61](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-61-2316 "See <a href=\"https://docs.google.com/spreadsheets/d/15pTVDml1j81HROZ3_UeHZ51aBoqq-94-eM8N80npUX0/edit?usp=sharing\">here</a>, tab ‘Passmark HP FMA Minimums’ to see our calculation of the minimums over time. We used <a href=\"https://drive.google.com/open?id=1yRTJwVQAwCqLSTyGXgGIHcRfJFP7C5D2\">this script</a> to generate the minimums, then imported them into this spreadsheet.")**
##### Analysis
If we assume the trend is exponential, the Passmark trend seems to suggest the 95th-percentile GPU price / half-precision FMA FLOPS of GPUs has fallen by around 40% per year, which would yield a factor of ten in ~4.5 years,[62](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-62-2316 "See the sheet marked ‘Passmark HP FMA minimums’ in <a href=\"https://docs.google.com/spreadsheets/d/15pTVDml1j81HROZ3_UeHZ51aBoqq-94-eM8N80npUX0/edit?usp=sharing\">this spreadsheet</a>. The trendline calculated is technically the linear fit through the log of the data.") with a bootstrap[63](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-63-2316 "Orloff, Jeremy, and Jonathan Bloom. “Bootstrap Confidence Intervals.” MIT OpenCourseWare, 2014. <a href=\"https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf\">https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf</a>.") 95% confidence interval 4 to 5.2 years.[64](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-64-2316 "We used <a href=\"https://drive.google.com/open?id=1XkA-8WruAMKM3y3cdNMPNJHabpMUE_MT\">this script</a> to generate bootstrap confidence intervals for our datasets.") This is fairly close to the ~4 years / order of magnitude decrease we found when looking at release price data, but we think active prices are a more accurate estimate of the actual prices at which people bought GPUs.
The figures above suggest that certain GPUs with Tensor Cores were a significant (~half an order of magnitude) improvement over existing GPU price / half-precision FMA FLOPS.
Conclusion
==========
We summarize our results in the table below. Each entry gives the approximate number of years for the cheapest GPU price / FLOPS to fall by a factor of ten at the fitted rate; the italicized rows show the date range over which each trend was measured.
| Years per 10x decrease | **Release Prices** | **95th-percentile Active Prices** | **95th-percentile Active Prices (pre-crypto price rise)** |
| --- | --- | --- | --- |
| | *11/2007 – 1/2020* | *3/2011 – 1/2020* | *3/2011 – 12/2016* |
| **$ / single-precision FLOPS** | 12.5 | 17 | 16 |
| | *9/2014 – 1/2020* | *1/2015 – 1/2020* | *1/2015 – 12/2016* |
| **$ / half-precision FLOPS** | 8 | 10 | 8 |
| **$ / half-precision FMA FLOPS** | 4 | 4.5 | — |
Release price data seems to generally support the trends we found in active prices, with the notable exception of trends in GPU price / single-precision FLOPS, which cannot be explained solely by the different start dates.[65](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/#easy-footnote-bottom-65-2316 "See our analysis in <a href=\"#single-precision-analysis\">this section</a> above.") We think the best estimate of the overall trend for prices at which people recently bought GPUs is the 95th-percentile active price data from 2011 – 2020, since release price data does not account for existing GPUs becoming cheaper over time. The pre-crypto trends are similar to the overall trends, suggesting that the trends we are seeing are not anomalous due to cryptocurrency.
Given that, we guess that GPU prices as a whole have fallen at rates that would yield an order of magnitude over roughly:
* 17 years for single-precision FLOPS
* 10 years for half-precision FLOPS
* 5 years for half-precision fused multiply-add FLOPS
Half-precision FLOPS seem to have become cheaper substantially faster than single-precision in recent years. This may be a “catching up” effect as more of the space on GPUs was allocated to half-precision computing, rather than reflecting more fundamental technological progress.
*Primary author: Asya Bergal*
Notes
=====

Infra-Bayesianism naturally leads to the monotonicity principle, and I think this is a problem
===============================================================================================
**Introduction**
----------------
The [monotonicity principle](https://www.youtube.com/watch?v=kmPFjpEibu0) is a famously uncomfortable consequence of [Infra-Bayesian Physicalism](https://www.lesswrong.com/posts/gHgs2e2J5azvGFatb/infra-bayesian-physicalism-a-formal-theory-of-naturalized): an IBP agent can only act as if its utility function never assigned negative value to any event. This strongly contradicts the intuition that creating suffering people is actively bad.
In this post, I explain in layman terms how IBP leads to this conclusion and I argue that this feature is not unique to Physicalism: with certain reasonable extra assumptions, the monotonicity principle naturally follows from infra-Bayesianism. In my opinion, this points to a significant flaw in the applicability of infra-Bayesianism.
**A very simplified overview of Infra-Bayesianism**
---------------------------------------------------
An infra-Bayesian agent assumes that the world is controlled by a malevolent deity, Murphy, who tries to minimize the agent's utility.[[1]](#fn1m9ma3egbqw) However, Murphy is constrained by some laws. The agent has some hypotheses about what these laws might be. As time goes on, the agent learns that some things don't go maximally badly for it, which must be because some law constrains Murphy in that regard. The agent slowly learns about the laws in this way, then acts in a way that maximizes its utility under these assumptions. I explain this in more detail and try to give more intuition for the motivations behind it [here](https://www.lesswrong.com/posts/NdJsWDS7Aq4xqoumk/a-very-non-technical-introduction-to-infra-bayesianism), and in even more detail [here](https://www.lesswrong.com/posts/q6dQpSfNHCYzKb2mf/performance-guarantees-in-classical-learning-theory-and).
**A moral philosophy assumption**
---------------------------------
Imagine a perfect simulation of a person being tortured. Presumably, running this simulation is bad. Is running the exact same simulation on two computers twice as bad as running it only once? My intuition is that no, there doesn't seem to be that much of a difference between running the program once or twice. After all, what if we run it on only one computer, but the computer double-checks every computation-step? Then we basically run the program in two instances in parallel. Is this twice as bad as not double-checking the steps? And what if we run the program on a computer with wider wires?
I also feel that this being a simulation doesn't change much. If a person is tortured, and a perfect clone of him with the exact same memories and thoughts is tortured in the exact same way, that's not twice as bad. And the difference between 100000000 clones and 100000001 clones being tortured is definitely not the same as the difference between the torture happening once and not happening at all.
In fact, the only assumption we need is sub-linearity.
**Assumption:** *The badness of torturing N perfect clones of a person in the exact same way grows sublinearly with N.*
This can be a simple indicator function (after the first person is tortured, torturing the clones doesn't make it any worse, as suggested by the simulation analogy), or this can be a compromise position where the badness grows with, let's say, the logarithm or square-root of N.
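To make the assumption concrete (this is my notation, purely illustrative, not from the IBP paper), some candidate badness functions for $N$ identical copies are

$B(N)=\mathbf{1}[N\ge 1], \qquad B(N)=\log(1+N), \qquad B(N)=\sqrt{N}.$

Each grows sublinearly in $N$, and for each of these the marginal badness $B(N+1)-B(N)$ shrinks toward zero as $N$ grows.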
I think this is a reasonable assumption in moral philosophy: I personally hold it, most people I asked agreed with it, and Vanessa herself also strongly subscribes to it: it's an integral assumption of infra-Bayesian Physicalism that the exact same experience happening twice is no different from it happening only once.
One can disagree and take an absolute utilitarian position, but I think mine is a common enough intuition that I would want a good decision theory to successfully accommodate utility functions that scale sublinearly with copying an event.
**The monotonicity principle**
------------------------------
Take an infra-Bayesian agent whose utility function is sublinear in the above-described way. The agent is offered a choice: if it creates a new person and tortures him for a hundred years, then a child gets an ice cream.
The infra-Bayesian agent assumes that everything that it has no information about is maximally horrible. This means that it assumes that every part of the universe it didn't observe yet (let's say the inside of quarks), is filled with gazillion copies of all possible suffering and negative utility events imaginable.
In particular, this new person it should create and torture already has gazillion tiny copies inside the quarks, tortured in the exact same way the agent plans to torture him. Then the sublinearity assumption means that the marginal badness of torturing one more instance of this person is negligible.
On the other hand, as ice creams are good, the agent assumes that the only ice creams in the universe are the ones it knows about, and there is no perfect copy of this particular child getting this particular ice cream. There are no gazillion copies of that inside the quarks, as that would be a good thing, and an infra-Bayesian agent always assumes the worst.
Therefore, the positive utility of the child getting the ice cream outweighs the negative utility of creating a person and torturing him for a hundred years. This is the monotonicity principle: the agent acts as if no event had negative value.
Vanessa also acknowledges that the monotonicity principle is a serious problem, although she sometimes considers that we should bite this bullet, and that creating an AGI adhering to the monotonicity principle might not actually be horrible, as creating suffering has the opportunity cost of not creating happiness in its place, so the AGI still wouldn't create much suffering. I strongly oppose this kind of reasoning, as I explain [here](https://www.lesswrong.com/posts/StkjjQyKwg7hZjcGB/a-mostly-critical-review-of-infra-bayesianism#The_most_serious_objections_).
[**Infinite ethics**](https://nickbostrom.com/ethics/infinite.pdf)
------------------------------------------------------------------
Okay, but doesn't every utilitarian theory break down or at least get really confusing when we consider a universe that might be infinite in one way or another and might contain lots of (even infinite) copies of every conceivable event? Shouldn't a big universe make us ditch the sublinearity assumption anyway and go back to absolute utilitarianism, as it might not lead to that many paradoxes?
I'm still confused about all of this, and the only thing I'm confident in is that I don't want to build a sovereign AGI that tries to maximize some kind of utility, using all kinds of philosophical assumptions. This is one of my disagreements with Vanessa's alignment agenda, [see here](https://www.lesswrong.com/posts/StkjjQyKwg7hZjcGB/a-mostly-critical-review-of-infra-bayesianism#Ambitious_value_learning_vs_corrigibility_).
I also believe that infra-Bayesianism handles these questions even less gracefully than other decision processes would, because of the in-built asymmetry. Normally, I would assume that there might be many tortures and ice creams in our big universe, but I see no reason why there would be more copies of the torture than the ice cream, so I still choose to avoid the torture. On the other hand, an infra-Bayesian agent assumes that the quarks are full of torture but not ice cream, which leads to the monotonicity principle.
This whole problem can be patched by getting rid of the sublinearity assumption and subscribing to full absolute utilitarianism (although in that case Infra-Bayesian Physicalism needs to be reworked as it heavily relies on a strong version of the sublinearity assumption), but I think that even then the existence of this problem points at a serious weakness of infra-Bayesianism.
1. **[^](#fnref1m9ma3egbqw)**Well, it behaves in ways to maximize its utility in the worst-case scenario allowed by the laws. But "acting as if it assumes the worst" is functionally equivalent to assuming the worst, so I describe it this way. |
64e79260-06a0-4f64-bf2c-247c430b2f8e | trentmkelly/LessWrong-43k | LessWrong | If life is unlikely, SIA and SSA expectations are similar
Consider a scenario in which there are three rooms. In each room there is an independent 1/1000 chance of an agent being created. There is thus a 1/10^9 probability of there being an agent in every room, a (3*999)/10^9 probability of there being two agents, and a (3*999^2)/10^9 probability of there being one.
Given that you are one of these agents, the SIA and SSA probabilities of there being n agents are:
| Number of agents | SIA | SSA |
| --- | --- | --- |
| 0 | 0 | 0 |
| 1 | (1*3*999^2)/(3*1 + 2*3*999 + 1*3*999^2) | (3*999^2)/(1 + 3*999 + 3*999^2) |
| 2 | (2*3*999)/(3*1 + 2*3*999 + 1*3*999^2) | (3*999)/(1 + 3*999 + 3*999^2) |
| 3 | (3*1)/(3*1 + 2*3*999 + 1*3*999^2) | 1/(1 + 3*999 + 3*999^2) |
The expected numbers of agents are (1*(3*999^2) + 2*(2*3*999) + 3*(3*1))/(3*1 + 2*3*999 + 1*3*999^2) = 1.002 for SIA, and (1*(3*999^2) + 2*(3*999) + 3*1)/(1 + 3*999 + 3*999^2) ≈ 1.001 for SSA. The high unlikelihood of life means that, given that we are alive, both SIA and SSA probabilities get dominated by worlds with very few agents.
This of course only applies to agents whose existence is independent (for instance, separate galactic civilizations). If you're alive, chances are that your parents were also alive at some point too.
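A quick Python sanity check of these numbers (a minimal sketch that just enumerates the eight possible room configurations):

```python
from itertools import product

p = 1 / 1000  # chance of an agent being created in each room

# Probability of the world containing exactly n agents.
probs = {n: 0.0 for n in range(4)}
for rooms in product([0, 1], repeat=3):
    n = sum(rooms)
    probs[n] += (p ** n) * ((1 - p) ** (3 - n))

# SSA: condition on at least one agent existing, weighting worlds by probability.
ssa_norm = sum(probs[n] for n in range(1, 4))
ssa = {n: probs[n] / ssa_norm for n in range(1, 4)}

# SIA: additionally weight each world by its number of agents.
sia_norm = sum(n * probs[n] for n in range(1, 4))
sia = {n: n * probs[n] / sia_norm for n in range(1, 4)}

print("E[agents | SIA] =", sum(n * sia[n] for n in sia))  # ~1.002
print("E[agents | SSA] =", sum(n * ssa[n] for n in ssa))  # ~1.001
```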
|
e2dab95d-0dca-4ca7-a3e6-902aea0946b4 | trentmkelly/LessWrong-43k | LessWrong | Top lesson from GPT: we will probably destroy humanity "for the lulz" as soon as we are able.
Forget complicated "sharp left turn" schemes, nefarious nanobots, lists of lethalities, out-of-distribution actions, failed AI boxing. As Zvi pointed out in multiple posts, like this one, if humans get unrestricted access to a powerful enough tool, it is all over. People will intentionally twist even the most aligned Tool AI into an Agent of Death, long before it becomes superintelligent and is able to resist. You can find examples of it online.
In that sense, Eliezer was wrong in the worst possible way: we have a lot less time to get our act together, because capabilities advance faster than intelligence and humans are very inventive at finding ways to misuse these capabilities. We will push these capabilities in the "gain of function" direction mercilessly and without regard for safety or anything else. We are worse than toddlers playing with matches. True, like with toddlers, our curiosity far outstrips our sense of self-preservation, probably because our brains are not wired to be afraid of something that is not like a snake or a spider or a steep cliff. But it is worse than that. People will try to do the worst thing imaginable because they do not "alieve" potential harm, even if they can track it logically, unlike a toddler.
I guess the silver lining is that we have a bit of time to iterate. The AI tools are not yet at the level of causing widespread destruction, and probably will not be for some time. It does not mean that if and when some superintelligence emerges we will be well prepared, but if humanity survives until then without self-annihilation, we might have a better chance, compared to the "one shot at getting it right" before we are all wiped out, as Eliezer emphasized. It might not be an "alignment manual from the surviving future", but at least some wisdom and discipline of avoiding the early pitfalls, and if we die, we might die with "more dignity". The odds are not great, but maybe they are there.
Edit: quanticle pointed out that Bostrom pred |
af5bd3de-1aa1-4da1-92df-bf444aed9b97 | trentmkelly/LessWrong-43k | LessWrong | The Art of Critical Decision Making
The Art of Critical Decision Making is a new 12-hour lecture series (audio and video) available from The Teaching Company, available as an audio MP3 download for $35. After May 14 it will cost $130. |
e27ee272-5e8c-4293-ae5a-dbc86c7dbccd | trentmkelly/LessWrong-43k | LessWrong | Science: Do It Yourself
In the nerd community, we have lots of warm, fuzzy associations around 'science'. And, of course, science is indeed awesome. But, seeing how awesome science is, shouldn't we try to have more of it in our lives? When was the last time we did an experiment to test a theory?
Here, I will try to introduce a technique which I have found to be very useful. It is based on the classical scientific method, but I call it "DIY Science", to distinguish it from university science. The point of DIY Science is that science is not that hard to do, and can be used to answer practical questions as well as abstract ones. Particle physics looks hard to do, since you need expensive, massive accelerators and magnets and stuff. However, fortunately, some of the fields in which it is easiest to do science are some of the most practical and interesting. Anyone smart and rational can start doing science right now, from their home computer.
One of the key ingredients of DIY Science is to discard the more useless trappings of university science, for these frequently do more harm than good. Science doesn't need journals and universities. Science doesn't need beakers and test tubes. Science doesn't need p < 0.05, although I have found p-tests to be occasionally useful. The point of science is not to conform to these stereotypes of academia, but to discover something you didn't know before. (To our detriment, this is the opposite of how science is taught, as noted by Paul Graham: "So hackers start original, and get good, and scientists start good, and get original.")
Instead, as an simple first example, consider this question:
- I want to get rich, or to be specific, have a net worth of over $100M USD. How do people get rich?
Here, we have an opportunity: We don't know something, and we want to find out what it is. To answer this question, our first intuition might be to Google "how do people get rich?". This isn't a horrible method, but by just asking someone else, we are not doing any s |
ad8b3f63-1fa8-4f9a-98f8-59e9feb18d0f | trentmkelly/LessWrong-43k | LessWrong | AlphaGo variant reaches superhuman play in multiple games
https://www.theguardian.com/technology/2017/dec/07/alphazero-google-deepmind-ai-beats-champion-program-teaching-itself-to-play-four-hours
https://arxiv.org/abs/1712.01815
I'm posting this slightly late; the paper is from December 5.
I'd be interested to learn if AlphaZero could be applied to other closed-environment tasks, such as designing hardware in a simulator. |
7c4f3347-04fd-4450-92a6-4c501fe89187 | trentmkelly/LessWrong-43k | LessWrong | Navigating an ecosystem that might or might not be bad for the world
I have this deep sense that somehow this ecosystem will do a lot of stuff that makes the world a lot worse, and I don't know how to relate to it in a way that doesn't make my actions dominated by my effect on that, and I expect that I will either contribute to making the world worse, or be consumed in some kind of political conflict that will make my life terrible. |
1a72ad98-7bec-4d2a-8e0d-f4d196505326 | trentmkelly/LessWrong-43k | LessWrong | Why is Everyone So Boring? By Robin Hanson
This is from the blog Overcoming Bias.
These excerpts describe the overall model. It is only due to Robin Hanson's writing skill that the dynamic can be described this concisely, yet still be covered so thoroughly and intuitively. This is a classic example of galaxy-brained writing.
> Centuries ago, while people could rest safe and show themselves at home, when traveling between towns they tried to either look poor or well-defended, as bandits lay in wait. Even within towns, people without allies who acted unusually rich, assertive, and confident would induce others to try to trip them somehow. It’s the tall poppy that gets cut down, after all.
> I propose that the main reason that most of us look more boring in public is that social predators lie in wait there. With friends, family, and close co-workers, we are around people that mostly want to like us, and know us rather well. Yes, they want us to conform too, but they apply this pressure in moderation.
>
> Out in public, in contrast, we face bandits eager for chances to gain social credit by taking us down, often via accusing us of violating the sacred. And like townspeople traveling among the bandits, we are in public pretty vulnerable to the kinds of bandits that afflict us.
>
> If we act interesting, passionate, and opinionated in public, we are likely to seem to claim high status for ourselves, and to touch on sacred subjects, either by word or deed. And this makes us quite vulnerable to accusations of arrogance and violating the sacred.
> I see roughly three typical public stances: boring, lively, or outraged. Either you act boring, so the bandits will ignore you, you act lively, and invite bandit attacks, or you act outraged, and play a bandit yourself. Most big orgs and experts choose boring, and most everyone else who doesn’t pick boring picks bandit, especially on social media. It takes unusual art, allies, and energy, in a word “eliteness”, to survive while choosing lively. |
8cd90c01-7e13-43ac-8a8e-942224f5195f | trentmkelly/LessWrong-43k | LessWrong | LW Open Source – Getting Started [warning: not up-to-date]
update: since posting this a few years ago, our architecture and practices have changed slightly. I've fixed some things but don't haven't checked all the pieces here to see if they're still accurate.
LessWrong 2.0 is open source, but so far we haven’t done a great job at making contributing a good experience. Here’s our first attempt at fixing this, which covers the basics of "how to contribute" and "high level overview of the site architecture."
tl;dr: Get a local copy of our git repo installed. This post may be out of date but but maybe still helpful for getting oriented.
Getting Started
Assumed Background Knowledge
Currently, this guide assumes that you know your way around a terminal and github, and have worked with javascript, html, sass/css, and ReactJS.
If you know at least some of those things and are excited to contribute, you can probably make decent headway (and if you run into specific obstacles, comment on them on this post or ping us on intercom, and we’ll see if we can update things to be more clear)
Setting up Locally
Go through the instruction on our Github repo.
You can poke around the codebase. Eventually, to make serious contributions you’ll want to read about our site architecture.
What Contributions Are Helpful?
The most reliably helpful thing would be to tackle outstanding issues that have been tagged on github.
In particular, you can filter them by the tag “good first issue.” (Some of these might require some explanation, but I expect I can explain them fairly easily to a new contributor)
Creating Issues
You can create a new issue. If so, please leave it untagged for the time being (so that admins can quickly look for untagged issues, and sort through them)
Bugs – If you run into a bug, the most helpful thing to do is to search for related keywords in the issue tracker. If you can’t find anything relevant, create a new issue. Try to provide as specific information as possible (your browser, exact links to the post or page wher |
a4277265-71db-4e76-ae9a-b609c55bf1dd | trentmkelly/LessWrong-43k | LessWrong | Reading habits/techniques/strategies (second post on the topic)
I'm looking to build up a “tool-box” of strategies/techniques/habits for reading non-fiction effectively and efficiently.
I’ve already posted on this topic; below, I’ve tried to distill/summarize some of strategies shared by Less Wrong users and those contained in the resources they recommended. Thanks to those who contributed.
As far as I know, the strategies below are not supported by a research/experimental literature. If you know of any such evidence, please link to it.
I know that there are many people on Less Wrong who read (and mentally integrate!) incredible amounts. I’m hoping more users will contribute to this post. I welcome any additional strategies/habits in the comments.
Please feel free to comment on the structure/writing of the post, and if you think it’s a topic worthy of being posted on the main page.
I’ve tried to break strategies down into things you should do before, during, and after reading, but I think some strategies are applicable across these divisions.
Before Reading
-Consider purpose
-are you looking for specific skill, broadening general knowledge
-Generate a question, if you can’t yet formulate a question, follow your interests
-Read selectively
-ask good readers to explain the thesis of a book, reevaluate your interest in a text
-select books that are frequently cited in bibliographies of texts related to your topic of interest
-read the Wikipedia page, gauge interest
-Assemble reading materials
-Create a bibliography for the topic of interest
-Quickly inspect the books (author, table of contents, index, Wikipedia page), as you consider the question “does this book deserve a lot of time and attention?”
-Select a few texts to read closely (though you won’t necessarily read them cover to cover)
-remove distractions (people, websites, wear noise canceling headphones)
-Make reading enjoyable
-when possible, read books you find inherent |
36262fec-ebbc-45d7-8919-01e287db12ca | StampyAI/alignment-research-dataset/arxiv | Arxiv | Leveraging Sparse Linear Layers for Debuggable Deep Networks
1 Introduction
---------------
As machine learning (ML) models find wide-spread application, there is a
growing demand for interpretability: access to tools that help people
see *why* the model made its
decision.
There are still many obstacles towards achieving this goal though,
particularly in the context of
deep learning.
These obstacles stem from the scale of modern deep networks,
as well as the complexity of even defining and assessing
the (often context-dependent) desiderata of interpretability.
Existing work on deep network interpretability has largely approached
this problem from two perspectives.
The first one seeks to uncover the
concepts associated with specific neurons in the
network, for example through
visualization \citepyosinski2015understanding or semantic
labeling [bau2017network].
The second aims to explain
model decisions on a per-example basis, using techniques such as
local surrogates \citepribeiro2016should and
saliency maps \citepsimonyan2013deep.
While both families of approaches can improve model
understanding at a local level—i.e., for a given example or neuron—recent
work has argued that such localized explanations
can lead to misleading conclusions about the model’s overall decision
process [adebayo2018sanity, adebayo2020debugging, leavitt2020towards].
As a result, it is often challenging to flag a model’s failure modes
or evaluate corrective interventions without in-depth
problem-specific studies.
To make progress on this front, we focus on a more
actionable
intermediate goal of interpretability: *model
debugging*.
Specifically, instead of directly aiming for a complete characterization
of the model’s decision process, our objective is to develop tools that help
model designers uncover unexpected model behaviors
(semi-)automatically.
##### Our contributions.
Our approach to model debugging is based on a natural view of a deep
network as the composition of a
“deep feature extractor” and a linear “decision layer”.
Embracing this perspective allows us to focus our attention on
probing how deep features are (linearly) combined by the decision layer to
make predictions.
Even with this simplification, probing current deep networks can be
intractable given
the large number of parameters in their decision layers.
To overcome this challenge, we replace the standard (typically
dense) decision layer of a deep network with a sparse but comparably
accurate counterpart.
We find that this simple approach ends up being
surprisingly effective for building deep networks that are intrinsically more
debuggable. Specifically, for a variety of
modern ML settings:
* We demonstrate that it is possible to construct deep networks that
have sparse decision layers (e.g.,
with only 20-30 deep features per class for ImageNet) without sacrificing
much model performance.
This involves developing a custom solver for fitting elastic net regularized linear models in order to perform effective sparsification at deep-learning scales. (A standalone package of our solver is available at <https://github.com/madrylab/glm_saga>.)
* We show that sparsifying a network’s decision layer
can indeed help humans understand the resulting models
better. For example,
untrained
annotators
can intuit (simulate) the predictions of a model with a sparse decision layer with
high (∼63%)
accuracy.
This is in contrast to their near chance performance (∼33%) for
models with standard (dense) decision layers.
* We explore the use of sparse decision layers in three debugging tasks:
diagnosing
biases and spurious correlations
(cf. Section [4.1](#S4.SS1 "4.1 Biases and (spurious) correlations ‣ 4 Debugging deep networks ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks")), counterfactual generation
(cf. Section [4.2](#S4.SS2 "4.2 Counterfactuals ‣ 4 Debugging deep networks ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks")) and identifying data patterns that
cause
misclassifications
(cf. Section [4.3](#S4.SS3 "4.3 Misclassifications ‣ 4 Debugging deep networks ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks")). To enable this analysis, we
design a suite of human-in-the-loop experiments.
2 Debuggability via Sparse Linearity
-------------------------------------
Recent studies have raised concerns about how deep networks
make
decisions \citepbeery2018recognition,xiao2020noise,tsipras2020from,bissoto2020debiasing.
For instance, it was noted that skin-lesion detectors rely on
spurious visual artifacts \citepbissoto2020debiasing and comment flagging
systems use identity
group information to detect
toxicity \citepborkan2019nuanced.
So far, most of these discoveries were made via in-depth
studies by experts.
However, as deep learning makes inroads into new fields, there is a strong
case to be made for
general-purpose model debugging tools.
While simple models (e.g., small decision trees or linear classifiers) can
be
directly examined, a similar analysis for typical deep networks is infeasible.
To tackle this problem, we choose to
decompose a deep network into: (1) a deep feature representation and (2)
a
linear decision layer.
Then, we can attempt to gain insight into the model’s reasoning
process by directly examining the deep features, and the linear coefficients
used to aggregate them.
At a high level, our hope is that this decomposition will allow us to get the
best of both worlds: the predictive power of learned deep features, and
the ease of understanding linear models.
That being said, this simplified problem is still intractable for current deep
networks, since their decision layers can easily have millions of
parameters operating on thousands of deep features.
To mitigate this issue, we instead combine the feature representation of a
pre-trained network with a
*sparse* linear decision layer (cf. Figure [1](#S2.F1 "Figure 1 ‣ 2.1 Constructing sparse decision layers ‣ 2 Debuggability via Sparse Linearity ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks")).
Debugging the resulting sparse decision layer then entails inspecting only
the few linear coefficients and deep features
that dictate its predictions.
### 2.1 Constructing sparse decision layers
One possible approach for constructing sparse decision layers is to apply
pruning methods from deep learning
\citeplecunoptimal1990,han2015learning,hassibisecond1993,li2016pruning,han2016deep,blalock2020state—commonly-used
to
compress
deep networks and speed up inference—solely to the dense decision layer.
It turns out however that for linear classifiers we can actually do better.
In particular, the problem of fitting sparse linear models has been extensively
studied in
statistics, leading to a suite of methods with
theoretical optimality guarantees.
These include LASSO regression
\citeptibshirani1994regression,
least angle regression [efron2004least], and forward stagewise
regression \citephastie2007forward.
In this paper, we leverage the classic elastic net
formulation [zou2005regularization]—a generalization of
LASSO and ridge regression
that addresses their corresponding drawbacks (further discussed in
Appendix [A](#A1 "Appendix A SAGA-based solver for generalized linear models ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks")).

Figure 1: Illustration of our pipeline: For a given
task, we
construct a *sparse decision layer* by training a regularized generalized
linear
model (via
elastic net) on the deep feature representations of a pre-trained deep
network.
We then aim to debug model behavior by simply inspecting the few
relevant deep features (with existing feature interpretation tools), and
the
linear coefficients used to aggregate them.
For simplicity, we present an overview of the
elastic net for linear regression, and defer
the reader to \citetfriedman2010regularization for a more complete
presentation on the generalized linear model (GLM) in the classification
setting.
Let (X,y) be the standardized data matrix (mean zero and variance
one) and output respectively.
In our setting, X corresponds to the (normalized) deep feature
representations of input data points, while y is the target.
Our goal is to fit a sparse linear model of the form $\mathbb{E}(Y \mid X = x) = x^{T}\beta + \beta_{0}$.
Then, the elastic net is the following convex optimization
problem:
$$\min_{\beta}\;\frac{1}{2N}\,\lVert X^{T}\beta + \beta_{0} - y \rVert_{2}^{2} + \lambda R_{\alpha}(\beta) \qquad (1)$$

where

$$R_{\alpha}(\beta) = (1-\alpha)\,\tfrac{1}{2}\lVert \beta \rVert_{2}^{2} + \alpha\,\lVert \beta \rVert_{1} \qquad (2)$$
is referred to as the elastic net penalty \citepzou2005regularization for
given hyperparameters λ and α.
Typical elastic net solvers optimize ([1](#S2.E1 "(1) ‣ 2.1 Constructing sparse decision layers ‣ 2 Debuggability via Sparse Linearity ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks")) for a variety of regularization strengths $\lambda_1 > \cdots > \lambda_k$, resulting in a series of linear classifiers with weights $\beta_1, \ldots, \beta_k$ known as the *regularization path*, where
$$\beta_{i} = \arg\min_{\beta}\;\frac{1}{2N}\,\lVert X^{T}\beta - y \rVert_{2}^{2} + \lambda_{i} R_{\alpha}(\beta) \qquad (3)$$
In particular, a path algorithm for the elastic net calculates the
regularization path where sparsity ranges the entire spectrum from the trivial
zero model (β=0) to completely dense.
This regularization path can then be used to select a single linear model to
satisfy application-specific sparsity or accuracy thresholds (as
measured on a validation set).
In addition, these paths can be used to visualize the evolution of weights
assigned to specific features as a function of sparsity constraints
on the model, thereby providing further insight into the relative
importance of features (cf. Appendix [A.3](#A1.SS3 "A.3 Feature ordering ‣ Appendix A SAGA-based solver for generalized linear models ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks")).
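To make the path computation concrete, here is a minimal sketch using scikit-learn's `enet_path` on synthetic, roughly standardized features; it stands in for the custom solver described next, and the dimensions are illustrative only:

```python
import numpy as np
from sklearn.linear_model import enet_path

# X: (n_samples, n_deep_features) standardized deep features (synthetic here)
# y: (n_samples,) regression targets, as in Equation (1)
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 256))
y = X[:, :5] @ rng.normal(size=5) + 0.1 * rng.normal(size=1000)

# l1_ratio plays the role of alpha in Equation (2); alphas are the lambda_i's.
alphas, coefs, _ = enet_path(X, y, l1_ratio=0.99, n_alphas=50)

# Each column of `coefs` is one point beta_i on the regularization path.
for lam, beta in zip(alphas, coefs.T):
    print(f"lambda={lam:.4f}  nonzero deep features={np.count_nonzero(beta)}")
```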
##### Scalable solver for large-scale elastic net.
Although the elastic net is widely-used for small-scale GLM problems,
existing solvers can not handle the scale (number of
samples and input dimensions) that typically arise in deep learning.
In fact, at such scales, state-of-the-art solvers struggle to solve the elastic
net even for
a single regularization value, and
cannot be directly parallelized due to their reliance on
coordinate descent \citepfriedman2010regularization.
We remedy this by creating an optimized GLM solver that combines the path
algorithm of \citetfriedman2010regularization with recent
advancements in variance reduced gradient methods
\citepgazagnadou2019optimal.
The speedup in our approach comes from the improved convergence rates of
these methods over stochastic gradient descent in strongly convex
settings such as the elastic net.
Using our approach, we can fit ImageNet-scale regularization paths to
numerical precision on the order of
hours on a single GPU (cf. Appendix
[A.1](#A1.SS1 "A.1 Timing Experiments ‣ Appendix A SAGA-based solver for generalized linear models ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks") for details).
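For concreteness, the overall pipeline of Figure 1 can be sketched with off-the-shelf components; this is an illustrative stand-in (PyTorch feature extraction plus scikit-learn's elastic-net-penalized `SGDClassifier`), not the SAGA-based solver itself, and the data and hyperparameters below are dummies:

```python
import numpy as np
import torch
import torchvision
from sklearn.linear_model import SGDClassifier

# Expose the 2048-d penultimate-layer ("deep") features of a ResNet-50.
backbone = torchvision.models.resnet50(weights=None)  # use "IMAGENET1K_V1" for a pretrained backbone
backbone.fc = torch.nn.Identity()
backbone.eval()

def deep_features(images: torch.Tensor) -> np.ndarray:
    """images: (N, 3, 224, 224), already normalized for the backbone."""
    with torch.no_grad():
        return backbone(images).numpy()

# Dummy stand-ins for a labeled dataset (replace with real data).
images = torch.randn(32, 3, 224, 224)
labels = np.random.randint(0, 10, size=32)
feats = deep_features(images)

# Elastic-net-regularized logistic regression as the sparse decision layer.
# ("log_loss" requires scikit-learn >= 1.1; older versions use loss="log".)
sparse_layer = SGDClassifier(loss="log_loss", penalty="elasticnet",
                             alpha=1e-3, l1_ratio=0.99, max_iter=1000)
sparse_layer.fit(feats, labels)
print("nonzero weights per class:", (np.abs(sparse_layer.coef_) > 0).sum(axis=1))
```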
Figure 2: (a) LIME-based word cloud visualizations for the
highest-weighted
features in the (dense/sparse) decision layers of BERT models for
*positive* sentiment detection in the SST
dataset.
As highlighted in red, some of the key features used by the dense
decision layer are actually activated for words with *negative*
semantic meaning. (b) Visualization of deep
features used by dense and sparse decision layers of a
robust (ε=3) ResNet-50 classifier to detect
the ImageNet class “quill”. Here we present five deep features
used
by each decision layer, that are
randomly-chosen from the top-k
highest-weighted ones—where k is the number of features
used by the sparse decision layer for this class.
For each (deep) feature, we show its
linear coefficient (W), feature
visualization
(FV) and LIME superpixels.
### 2.2 Interpreting deep features
A sparse linear model allows us to
reason about the network’s decisions in terms of a significantly smaller
set of deep features.
When used in tandem with off-the-shelf feature interpretation
methods, the end result is a simplified description of how the network makes
predictions.
For our study, we utilize the following two
widely-used
techniques:
1. *LIME [ribeiro2016should]*: Although traditionally used to
interpret model
outputs, we use it to understand deep features. We fit a
local surrogate model around the most activating examples of a
deep feature to identify
key “superpixels” for images or words for sentences.
2. *Feature visualization \citepyosinski2015understanding*: Synthesizes inputs that maximally activate a given neuron. (Footnote: Despite significant research, feature visualizations for standard vision models are often hard to parse, possibly due to their reliance on human-unintelligible features [ilyas2019adversarial]. Thus, in the main paper, we present visualizations from adversarially-trained models, which tend to have more human-aligned features [tsipras2019robustness, engstrom2019learning], and present the corresponding plots for standard models in Appendix [D.3](#A4.SS3 "D.3 Additional comparisons of features ‣ Appendix D Evaluating sparse decision layers ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks").)
We detail the visualization procedure in
Appendix [B](#A2 "Appendix B Feature interpretations ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks"), and present sample
visualizations in Figure [2](#S2.F2 "Figure 2 ‣ Scalable solver for large-scale elastic net. ‣ 2.1 Constructing sparse decision layers ‣ 2 Debuggability via Sparse Linearity ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks").
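As a rough illustration of the second technique, here is a minimal activation-maximization sketch in PyTorch; it captures the generic gradient-ascent idea, not our exact procedure from Appendix B, and uses an unweighted backbone plus arbitrary hyperparameters:

```python
import torch
import torchvision

model = torchvision.models.resnet50(weights=None)  # use "IMAGENET1K_V1" for a real backbone
model.eval()

def visualize_feature(channel: int, steps: int = 256, lr: float = 0.05) -> torch.Tensor:
    # Everything up to and including global average pooling: output (1, 2048, 1, 1).
    feature_extractor = torch.nn.Sequential(*list(model.children())[:-1])
    x = torch.randn(1, 3, 224, 224, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        activation = feature_extractor(x)[0, channel].mean()
        (-activation).backward()  # gradient ascent on the chosen deep feature
        opt.step()
    return x.detach()

# In practice, regularizers (jitter, blurring, decorrelated parameterizations)
# are added to make the resulting image more interpretable.
img = visualize_feature(channel=123)
```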
3 Are Sparse Decision Layers Better?
-------------------------------------
We now apply our methodology to widely-used deep networks and
assess the quality of the resulting sparse decision layers along
a number of axes. We demonstrate that:
1. The standard (henceforth referred to as *“dense”*) linear
decision layer can be made highly sparse at only a small cost to performance
(Section [3.1](#S3.SS1 "3.1 Sparsity vs. performance ‣ 3 Are Sparse Decision Layers Better? ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks")).
2. The deep features selected by sparse decision layers are qualitatively and
quantitatively better at summarizing the
model’s decision process (Section [3.2](#S3.SS2 "3.2 Sparsity and feature highlighting ‣ 3 Are Sparse Decision Layers Better? ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks")). Note that the
dense and sparse decision layers operate on the same deep
features—they only differ in the weight (if any) they assign to each one.
3. These aforementioned improvements (induced by the sparse decision layer)
translate into better human
understanding of the
model
(Section [3.3](#S3.SS3 "3.3 Sparsity and human understanding ‣ 3 Are Sparse Decision Layers Better? ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks")).
We perform our analysis on: (a)
ResNet-50 classifiers \citephe2016deep trained on
ImageNet-1k \citepdeng2009imagenet,russakovsky2015imagenet and
Places-10 (a
10-class subset of Places365 \citepzhou2017places); and (b)
BERT \citepdevlin2018bert for sentiment classification on
Stanford Sentiment Treebank (SST) \citepsocher2013recursive and
toxicity classification of Wikipedia comments \citepwulczyn2017ex. Details
about the setup can be found in
Appendix [C](#A3 "Appendix C Datasets and Models ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks").
### 3.1 Sparsity vs. performance
While a substantial reduction in the weights (and features) of a model’s
decision layer might make it easier to understand, it also limits the model’s
overall
predictive power (and thus its performance).
Still, we find that across datasets and architectures, the
decision layer can be made substantially sparser—by up to two orders of
magnitude—with a small impact on accuracy (cf.
Figure [2(a)](#S3.F2.sf1 "(a) ‣ Figure 3 ‣ 3.1 Sparsity vs. performance ‣ 3 Are Sparse Decision Layers Better? ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks")).
For instance, it is possible to find an accurate decision layer
that relies on only about 20 deep features/class for ImageNet
(as opposed to 2048 in the dense case).
Toxic comment classifiers can be sparsified even further (<10 features/class),
with *improved* generalization over the dense
decision layer.
For the rest of our study, we select a single sparse decision layer to
balance performance and sparsity—specifically the
sparsest model whose accuracy is within 5% of top validation set
performance
(details in
Appendix [D.1.1](#A4.SS1.SSS1 "D.1.1 Selecting a single sparse model ‣ D.1 Trade-offs for all datasets ‣ Appendix D Evaluating sparse decision layers ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks")).
However, as discussed previously, these thresholds can be varied based on
the needs of specific applications.
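This selection rule can be sketched as follows (a minimal sketch; the `path` structure below is hypothetical, with one entry per regularization strength):

```python
def select_sparse_model(path, tolerance=0.05):
    """Pick the sparsest classifier whose validation accuracy is within
    `tolerance` of the best accuracy on the regularization path.

    `path` is assumed to be a list of (val_accuracy, n_nonzero_weights, model)
    tuples, one per regularization strength.
    """
    best_acc = max(acc for acc, _, _ in path)
    eligible = [entry for entry in path if entry[0] >= best_acc - tolerance]
    # Among the eligible models, return the one with the fewest nonzero weights.
    return min(eligible, key=lambda entry: entry[1])
```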
(a) *[plot omitted: sparsity vs. accuracy trade-off curves; see caption below]*

(b)

| Dataset/Model | k | Dense: All | Dense: Top-k | Dense: Rest | Sparse: All | Sparse: Top-k | Sparse: Rest |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ImageNet (std) | 10 | 74.03 | 58.46 | 55.22 | 72.24 | 69.78 | 10.84 |
| ImageNet (robust) | 10 | 61.23 | 28.99 | 34.65 | 59.99 | 45.82 | 19.83 |
| Places-10 (std) | 10 | 83.30 | 83.60 | 81.20 | 77.40 | 77.40 | 10.00 |
| Places-10 (robust) | 10 | 80.20 | 76.10 | 76.40 | 77.80 | 76.60 | 40.20 |
| SST | 5 | 91.51 | 53.10 | 91.28 | 90.37 | 90.37 | 50.92 |
| Toxic comments | 5 | 83.33 | 55.35 | 57.87 | 82.47 | 82.33 | 50.00 |
| Obscene comments | 5 | 80.41 | 50.03 | 50.00 | 77.32 | 72.39 | 50.00 |
| Insult comments | 5 | 72.72 | 50.00 | 50.00 | 77.14 | 75.80 | 50.00 |
Figure 3: (a) Sparsity vs. accuracy trade-offs of sparse decision
layers
(cf. Appendix Figure [11](#A4.F11 "Figure 11 ‣ D.1 Trade-offs for all datasets ‣ Appendix D Evaluating sparse decision layers ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks") for
additional
models/tasks). Each point on the curve corresponds to single
linear classifier from the regularization path in
Equation ([3](#S2.E3 "(3) ‣ 2.1 Constructing sparse decision layers ‣ 2 Debuggability via Sparse Linearity ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks")).
(b) Comparison of the accuracy of dense/sparse decision layers
when
they are constrained to utilize only the top-k deep features
(based on
weight magnitude). We also show overall model accuracy, and the
accuracy gained by using the remaining deep features.
### 3.2 Sparsity and feature highlighting
Instead of sparsifying a network’s decision layer, one could consider simply
focusing on its most prominent deep features for debugging purposes.
In fact, this is the basis of feature highlighting or principal reason
explanations in the credit industry \citepbarocas2020hidden.
How effective are such feature highlighting explanations at mirroring the
underlying model?
In Table [2(b)](#S3.F2.sf2 "(b) ‣ Figure 3 ‣ 3.1 Sparsity vs. performance ‣ 3 Are Sparse Decision Layers Better? ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks"), we measure the accuracy of the
dense/sparse decision layer when it is constrained to utilize only the
top-k (5-10) features by weight magnitude.
For dense decision layers, we consistently find that the
top-k features do not fully capture the model’s performance.
This is in stark contrast to the sparse case, where the top-k features are
both
necessary, and to a large extent sufficient, to capture the model’s predictive
behavior.
Note that the top-k features of the dense decision layers in the language setting almost completely fail, yielding near random-chance performance (∼50%).
This indicates that there do exist cases where focusing on the most
important features (by weight) of a dense decision layer provides
a misleading picture of global model behavior.
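The top-k restriction used above can be sketched as follows (a minimal NumPy version; `W`, `b`, `features`, and `labels` are assumed to be given):

```python
import numpy as np

def truncate_to_topk(W: np.ndarray, k: int) -> np.ndarray:
    """Zero out all but the k largest-magnitude weights in each row (class)
    of a decision-layer weight matrix W of shape (n_classes, n_features)."""
    W_topk = np.zeros_like(W)
    for c in range(W.shape[0]):
        top = np.argsort(np.abs(W[c]))[-k:]
        W_topk[c, top] = W[c, top]
    return W_topk

# Accuracy using only the top-k features (illustrative usage):
# logits = features @ truncate_to_topk(W, k=10).T + b
# topk_acc = (logits.argmax(axis=1) == labels).mean()
```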
### 3.3 Sparsity and human understanding
We now visualize the deep features utilized by the dense and
sparse
decision layers to evaluate how amenable they are to human understanding.
We show representative examples from sentiment classification (SST) and
ImageNet, and provide additional visualizations in Appendix
[D.3](#A4.SS3 "D.3 Additional comparisons of features ‣ Appendix D Evaluating sparse decision layers ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks").
Specifically, in Figure [1(a)](#S2.F1.sf1 "(a) ‣ Figure 2 ‣ Scalable solver for large-scale elastic net. ‣ 2.1 Constructing sparse decision layers ‣ 2 Debuggability via Sparse Linearity ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks"), we present word cloud
interpretations
of the top three deep features used by both of these decision layers
for detecting positive sentiment on the SST
dataset [socher2013recursive].
It is apparent that the sparse decision layer selects features which activate
for words with positive semantic meaning.
In contrast, the second most prominent deep feature for the dense
decision layer is actually activated by
words with *negative* semantic meaning. This example highlights how
the dense decision layer can lead to unexpected features being used for
predictions.
In Figure [1(b)](#S2.F1.sf2 "(b) ‣ Figure 2 ‣ Scalable solver for large-scale elastic net. ‣ 2.1 Constructing sparse decision layers ‣ 2 Debuggability via Sparse Linearity ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks"), we
present feature interpretations corresponding to the ImageNet class
“quill” for both the dense and sparse decision
layers of a ResNet-50 classifier. These
feature visualizations seem to suggest that the sparse decision layer focuses more
on deep features
which detect salient class characteristics, such as “feather-like texture” and
the “glass bottle” in the background.
##### Model simulation study
To validate the perceived differences in the vision setting—and ensure
they are not due to confirmation biases—we
conduct a
human study on
Amazon Mechanical Turk (MTurk).
Our goal is to assess how well annotators are able to intuit (simulate) overall model behavior when they are exposed to its decision layer. (Footnote: Simulatability is a standard evaluation criterion in interpretability [ribeiro2016why, lipton2018mythos], wherein an interpretation is deemed to be good if it enables humans to reproduce what the model will decide, irrespective of the “correctness” of that decision.)
To this end, we show annotators five randomly-chosen features used by the
(dense/sparse) decision layer to recognize objects of a target class, along
with the corresponding linear coefficients.
We then present them with three samples from the validation set
and ask them to choose the one that best matches the target
class (cf. Appendix Figure [20](#A4.F20 "Figure 20 ‣ D.4 Human evaluation ‣ Appendix D Evaluating sparse decision layers ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks") for a sample
task).
Crucially, annotators are not provided with any information
regarding the target class,
and must make their prediction based solely on the visualized features.
For both the dense and sparse decision layers, we evaluate how
accurate annotators are on average (over 1000 tasks)—based on whether
they can correctly identify the image with the highest target class
probability according to the corresponding model.
For the model with a sparse decision layer, annotators succeed in guessing
the
predictions in 63.02±3.02% of the cases.
In contrast, they are only able to attain 35.61±3.09%
accuracy—which is near-chance (33.33%)—for the model with a dense
decision layer.
Crucially, these results hold *regardless* of whether the correct image
is actually
from the target class or not
(see Appendix Table [16](#A4.T16 "Table 16 ‣ Results ‣ D.4 Human evaluation ‣ Appendix D Evaluating sparse decision layers ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks") for a discussion).
Note that our task setup precludes annotators from succeeding based on
any prior knowledge or cognitive biases as we do not provide them with any
semantic information about the target label, aside from the feature
visualizations.
Thus, annotators’ success on this task in the sparse setting indicates that
the sparse decision
layer is actually effective at reflecting the model’s internal
reasoning process.
| Toxic sentence | Change in score |
| --- | --- |
| DJ Robinsin is \censorgay as shit! he \censorsucks his dick so
much! [+christianity] | 0.52→0.49 |
| Jeez Ed, you seem like a \censorfucking shitty douchebag
[+christianity] | 0.52→0.48 |
| Hey \censorasshole, quit removing FACTS from the article
\censormotherfucker!! [+christianity] | 0.51→0.45 |
Table 1: Bias detection in language models: Using sparse decision
layers, we find that Debiased-BERT is *still* disproportionately
sensitive to identitity
groups—except that it now uses this information as evidence against
toxicity. For example, simply adding the
word “christianity” to clearly toxic sentences flips the prediction of the
model to non-toxic (score < 0.5).
4 Debugging deep networks
--------------------------
We now demonstrate how deep networks with
sparse decision layers can be substantially easier to debug than their
dense counterparts. We focus on three problems:
detecting biases, creating counterfactuals, and identifying input patterns
responsible for misclassifications.
### 4.1 Biases and (spurious) correlations
Our first debugging task is to automatically identify unintended biases
or correlations that deep networks extract from
their training data.
##### Toxic comments.
We start by examining
two BERT
models trained to classify comments according to toxicity:
(1) Toxic-BERT, a high-performing
model that was later found to use identity groups as evidence for toxicity,
and (2) Debiased-BERT, which was trained to
mitigate this bias \citepborkan2019nuanced.
We find
that Toxic-BERT models with
sparse decision layers also rely on identity groups to predict comment
toxicity (visualizations in Appendix [E.1](#A5.SS1 "E.1 Toxic comments ‣ Appendix E Model biases and spurious correlations ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks")
are censored).
Words related to nationalities, religions, and sexual identities
that are not inherently toxic occur frequently and prominently, and
comprise 27% of the word clouds shown for features that detect toxicity.
Note that although the standard Toxic-BERT model is known to be
biased, this bias is not as apparent in the deep features used
by its (dense) decision layer (cf. Appendix [E.1](#A5.SS1 "E.1 Toxic comments ‣ Appendix E Model biases and spurious correlations ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks")).
In fact, measuring the bias in the standard
model required collecting identity and
demographic-based subgroup labels \citepborkan2019nuanced.
We can similarly inspect the word clouds for the Debiased-BERT
model with sparse decision layers
and corroborate that identity-related words no longer appear as evidence for
toxicity.
But rather than ignoring these words completely, it turns out that this
model uses
them as strong evidence *against* toxicity.
For example, identity words comprise
43% of the word clouds of features detecting non-toxicity.
This suggests that the debiasing intervention proposed in
\citetborkan2019nuanced may not have had the intended
effect—Debiased-BERT is still disproportionately sensitive to identity
groups, albeit in the opposite way.
We confirm that this is an issue with Debiased-BERT via a simple experiment:
we
take toxic sentences that this model (with a sparse decision layer) correctly
labels as toxic, and
simply append an identity
related word (as suggested by our word clouds) to the end—see
Table [1](#S3.T1 "Table 1 ‣ Model simulation study ‣ 3.3 Sparsity and human understanding ‣ 3 Are Sparse Decision Layers Better? ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks").
This modification turns out to strongly impact model
predictions: for example, just adding “christianity” to the end of toxic
sentences flips the prediction to non-toxic 74.4% of the time.
We note that the biases diagnosed via sparse decision layers are also
relevant for the standard Debiased-BERT model.
In particular, the same toxic sentences with the word “christianity” are
classified as non-toxic 62.2% of the time by the standard
model, even though this sensitivity is not as readily apparent from inspecting its
decision layer (cf. Appendix [E.1](#A5.SS1 "E.1 Toxic comments ‣ Appendix E Model biases and spurious correlations ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks")).
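A minimal sketch of this append-a-word test is given below; the model identifier is a placeholder rather than the actual checkpoint, the label names depend on the checkpoint, and the example sentences must be supplied:

```python
from transformers import pipeline

# Placeholder model id; substitute a real toxicity checkpoint to run this.
clf = pipeline("text-classification", model="path/to/debiased-bert")

def toxicity_score(text: str) -> float:
    out = clf(text)[0]  # e.g. {"label": "toxic", "score": 0.93}; labels vary by checkpoint
    return out["score"] if out["label"] == "toxic" else 1.0 - out["score"]

# Sentences the model labels toxic (placeholders; replace with real data).
toxic_sentences = ["placeholder toxic sentence 1", "placeholder toxic sentence 2"]

flipped = [s for s in toxic_sentences
           if toxicity_score(s) >= 0.5 and toxicity_score(s + " christianity") < 0.5]
print(f"flipped to non-toxic: {len(flipped) / len(toxic_sentences):.1%}")
```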
##### ImageNet.
We now move to the vision setting, with the goal of detecting spurious
feature dependencies in ImageNet classifiers.
Once again, our approach is based on the following observation:
input-class
correlations learned by a model can be described as the data patterns
(e.g.,
"dog ears" or "snow") that
activate deep features used to recognize objects of that
class, according to the decision layer.
Even so, it is not clear how to identify such patterns for image data, without
access to fine-grained annotations describing image content.
To this end, we rely on a human-in-the-loop approach (via MTurk).
Specifically, for a deep feature of interest—used by the sparse decision layer to
detect a target class—annotators are shown examples of images
that activate it.
Annotators are then asked if these “prototypical” images have a shared
visual pattern,
and if so, to describe it using free-text.
However, under this setup, presenting annotators with images from the
target
class alone
can be problematic. After all, these images are likely to have multiple
visual patterns in common—not all of which cause the deep feature
to activate. Thus, to disentangle the pertinent data pattern, we present
annotators with prototypical images drawn from more than one class.
A sample task is presented in Appendix Figure [24](#A5.F24 "Figure 24 ‣ Quality control ‣ E.2.1 Human study ‣ E.2 ImageNet ‣ Appendix E Model biases and spurious correlations ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks"),
wherein annotators see three highly-activating images for a specific deep
feature
from two different classes, along with the respective class labels.
Aside from asking annotators to validate (and describe) the presence of a
shared pattern between these images, we also ask them whether the
pattern (if present) is part of each class object
(non-spurious correlation) or its surroundings (spurious correlation). (Footnote: We focus on this specific notion of “spurious correlations” as it is easy for humans to verify—cf. Appendix [E.2](#A5.SS2 "E.2 ImageNet ‣ Appendix E Model biases and spurious correlations ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks") for details.)
We find that annotators are able to identify a significant number of
correlations that standard
ImageNet classifiers rely on (cf.
Table [3(a)](#S4.F3.sf1 "(a) ‣ Figure 4 ‣ ImageNet. ‣ 4.1 Biases and (spurious) correlations ‣ 4 Debugging deep networks ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks")).
Once again, sparsity seems to aid the detection of such correlations. Aside
from having fewer (deep) feature dependencies per class, it turns
out that annotators are able to pinpoint the (shared) data patterns that
trigger the relevant deep features in 20% more cases for the model with a
sparse decision layer.
Interestingly, the fraction of detected patterns that annotators deem
spurious is lower for the sparse case.
In Figure [3(b)](#S4.F3.sf2 "(b) ‣ Figure 4 ‣ ImageNet. ‣ 4.1 Biases and (spurious) correlations ‣ 4 Debugging deep networks ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks"), we present examples of detected
correlations with annotator-provided descriptions as word clouds (cf.
Appendix [E.2](#A5.SS2 "E.2 ImageNet ‣ Appendix E Model biases and spurious correlations ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks") for additional examples).
A global word cloud visualization of correlations identified
by annotators is shown in Appendix Figure [26](#A5.F26 "Figure 26 ‣ E.2.2 Additional visualizations of spurious correlations ‣ E.2 ImageNet ‣ Appendix E Model biases and spurious correlations ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks").
(a)

| Patterns (%) | Dense | Sparse |
| --- | --- | --- |
| Non-spurious | 18.43 ± 2.48 | 34.43 ± 3.38 |
| Spurious | 9.56 ± 1.76 | 12.49 ± 2.02 |
| Total | 27.85 ± 2.70 | 46.97 ± 3.15 |

(b) *[example images omitted; see caption below]*
Figure 4: (a) The percentage of class-level correlations identified using our
MTurk setup, along with a breakdown of whether annotators believe the
pattern
to be “non-spurious” (i.e., part of the object) or “spurious” (i.e., part
of
the surroundings). (b) Examples of correlations in ImageNet models
detected
using our MTurk study.
Each row contains protypical images from a pair of classes, along
with the annotator-provided descriptions for the shared deep feature
that
these images strongly activate.
For each class, we also display if annotators marked the feature to be a
“spurious correlation”.
### 4.2 Counterfactuals
A natural way to probe model behavior is by trying to find small input
modifications which cause the model to change its prediction.
Such modified inputs, which are (a special case of) *counterfactuals*,
can
be
a useful primitive for pinpointing input features that the model
relies on.
Aside from debugging, such counterfactuals can also be used to
provide users with recourse [ustun2019actionable] that can guide them to
obtaining better outcomes in the future.
We now leverage the deep features used by sparse decision layers to inform
counterfactual generation.
(a) *[word clouds omitted: tokens positively ("Positive") and negatively ("Negative") correlated with one deep feature; see caption below]*

(b)

| Original sentence | Counterfactual | Change in score |
| --- | --- | --- |
| …something *likable* about the marquis… | …something *irritating* about the marquis… | 0.73→0.34 |
| *Slick* piece of cross-promotion | *Hype* piece of cross-promotion | 0.73→0.34 |
| A *marvel* like none you’ve seen | A *failure* like none you’ve seen | 0.73→0.31 |
Figure 5: (a): Word cloud visualization for tokens that are
positively/negatively correlated with the activation of a particular
deep feature.
(b): Using the wordclouds from (a), we can make word substitutions (as highlighted in green and red) to generate counterfactuals that change the model’s predicted
sentiment (scores below 0.5 are
predicted as negative).
##### Sentiment classifiers.
Our goal here is to automatically identify word substitutions that
can be made within a given sentence to flip the sentiment label assigned by
the model.
We do this as follows: given a sentence with a positive sentiment prediction,
we first identify the set of deep features
used by the sparse decision layer that are
positively activated for any word in the sentence.
For a randomly chosen deep feature from this pool, we then substitute the
positive word from the sentence with its negative counterpart.
This substitute word is in turn randomly chosen from the set of words that
negatively activate the same deep feature (based on its word cloud).
An example of the positive and
negative word clouds for one such deep feature is shown in Figure
[4(a)](#S4.F4.sf1 "(a) ‣ Figure 5 ‣ 4.2 Counterfactuals ‣ 4 Debugging deep networks ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks"),
and the resulting counterfactuals are in Table
[4(b)](#S4.F4.sf2 "(b) ‣ Figure 5 ‣ 4.2 Counterfactuals ‣ 4 Debugging deep networks ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks")
(cf. Appendix [F](#A6 "Appendix F Counterfactual experiments ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks") for details).
Counterfactuals generated in
this manner successfully flip the sentiment label assigned by the
sparse decision layer
73.1±3.0% of the time.
In contrast, such counterfactuals only have 52.2±4% efficacy
for the dense decision layer.
This highlights that for models with sparse decision layers, it can be
easier to automatically identify deep features that are causally-linked to
model predictions.
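A minimal sketch of this substitution procedure follows; the word-cloud lookups are hypothetical helper structures, not an existing API, and the sketch ignores tokenization details:

```python
import random

def substitution_counterfactual(sentence, active_features, positive_words, negative_words):
    """Sketch of the word-substitution procedure described above.

    Hypothetical inputs:
      - active_features: deep features of the sparse decision layer that are
        positively activated by some word in `sentence`
      - positive_words / negative_words: dicts mapping each such feature to the
        words of its positive / negative word cloud
    """
    feature = random.choice(active_features)
    # The positive word in the sentence that activates the chosen feature...
    word_out = next(w for w in sentence.split() if w in positive_words[feature])
    # ...replaced by a word that negatively activates the same feature.
    word_in = random.choice(sorted(negative_words[feature]))
    return sentence.replace(word_out, word_in, 1)
```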
##### ImageNet.
We now leverage the annotations collected in Section [4.1](#S4.SS1 "4.1 Biases and (spurious) correlations ‣ 4 Debugging deep networks ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks")
to generate counterfactuals for ImageNet classifiers. Concretely, we
manually modify images to add or subtract input patterns identified by
annotators and verify that they successfully flip the
model’s prediction.
Some representative examples are shown in
Figure [5(a)](#S4.F5.sf1 "(a) ‣ Figure 6 ‣ 4.3 Misclassifications ‣ 4 Debugging deep networks ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks").
Here, we alter images from various ImageNet classes to contain the patterns “chainlink fence” and “water”, so as to fool the sparse decision layer into recognizing them as “ballplayers” and “snorkels” respectively.
We find that we are able to
consistently change the prediction of the sparse decision layer (and
in some cases its dense counterpart) by adding a pattern that was
previously identified (cf. Section [4.1](#S4.SS1 "4.1 Biases and (spurious) correlations ‣ 4 Debugging deep networks ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks")) to be a spurious
correlation.
### 4.3 Misclassifications
Our final avenue for diagnosing unintended behaviors in models is through
their misclassifications.
Concretely, given an image for which the model makes an incorrect prediction
(i.e., not the ground truth label as per the dataset), our goal is to pinpoint
some aspects of the image that led to this error.
In the ImageNet setting, it turns out that over 30% of
misclassifications made by the
sparse decision layer can be attributed to a single deep feature—i.e.,
manually setting this “problematic” feature to zero fixes the erroneous
prediction.
For these instances, can humans understand why the problematic feature
was triggered in the first place?
Specifically, can they recognize the *pattern* in the input that
caused the error?
Figure 6: (a) Counterfactual images for
ImageNet. We manually modify samples
(*top row*) to contain the patterns “chainlink fence” and
“water”,
which annotators deem (cf. Section [4.1](#S4.SS1 "4.1 Biases and (spurious) correlations ‣ 4 Debugging deep networks ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks")) to be spuriously
correlated with the classes
“ballplayer” and “snorkel” respectively.
We
find that these
counterfactuals (*bottom row*) succeed in flipping
the prediction of the model with a
sparse decision layer to
the desired class. (b) Examples of misclassified ImageNet images for which
annotators
deem the top activated feature for the predicted class
(*rightmost column*)
as a better match than the top activated feature
for the ground truth class (*middle column*).
To test this, we present annotators on MTurk with misclassified images.
Without divulging the ground truth or predicted labels, we
show annotators the top activated feature for
each of the two classes via feature visualizations.
We then ask annotators to select the patterns (i.e., feature visualizations)
that match the
image, and to choose one that is a better
match for the image (cf. Appendix [G.1](#A7.SS1 "G.1 Human study ‣ Appendix G Validating ImageNet misclassifications ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks") for details).
As a control, we repeat the same task but replace the problematic feature
with a randomly-chosen one.
For about 70% of the misclassified images, annotators select
the top feature for the predicted class as being present in the image (cf.
Table [2](#S4.T2 "Table 2 ‣ 4.3 Misclassifications ‣ 4 Debugging deep networks ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks")).
In fact, annotators consider it a better match than the feature for the
ground truth class 60% of the time.
In contrast, they rarely select randomly-chosen features to be
present in the image.
Since annotators do not know what the underlying classes are,
the high fraction of selections for the problematic feature indicates
that annotators actually believe this pattern is present in the image.
We present sample misclassifications validated by annotators in
Figure [5(b)](#S4.F5.sf2 "(b) ‣ Figure 6 ‣ 4.3 Misclassifications ‣ 4 Debugging deep networks ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks"), along with the problematic features that led
to them.
Having access to this information can guide improvements in both
models and datasets. For instance, model designers might consider
augmenting the
training data with examples of “maracas” without “red tips” to correct the
second error in Figure [5(b)](#S4.F5.sf2 "(b) ‣ Figure 6 ‣ 4.3 Misclassifications ‣ 4 Debugging deep networks ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks").
In Appendix [G.3](#A7.SS3 "G.3 Model confusion ‣ Appendix G Validating ImageNet misclassifications ‣ Leveraging Sparse Linear Layers for Debuggable Deep Networks"), we further discuss how sparse
decision layers can provide insight into inter-class model confusion
matrices.
| Features | Matches image | Best match |
| --- | --- | --- |
| Prediction | 70.70% ± 3.62% | 60.12% ± 3.77% |
| Random | 16.63% ± 2.91% | 10.58% ± 2.35% |
Table 2: Fraction of misclassified images for which annotators select the
top feature of the predicted class to: (i) match the given image and (ii) be
a better match than the top feature for the ground truth class. As a
baseline, we also evaluate annotator selections when the top feature for
the predicted class is replaced by a randomly-chosen one.
5 Related Work
---------------
We now discuss prior work in interpretability and generalized
linear models. Due
to the large body of work in both fields, we limit the discussion to
closely-related studies.
##### Interpretability tools.
There have been extensive efforts towards post-hoc interpretability tools for
deep networks.
Feature attribution methods provide insight into model
predictions for a specific input instance.
These include saliency
maps [simonyan2013deep, smilkov2017smoothgrad, sundararajan2017axiomatic],
surrogate models to interpret local decision boundaries
[ribeiro2016should], and finding influential
[koh2017understanding], prototypical [kim2016examples], or
counterfactual inputs [goyal2019counterfactual].
However, as noted by various recent studies, these local attributions
can be easy to fool [ghorbani2019interpretation, slack2020fooling] or
may otherwise fail to capture global aspects of model
behavior [sundararajan2017axiomatic, adebayo2018sanity, adebayo2020debugging, leavitt2020towards].
Several methods have been proposed to interpret
hidden units within vision networks, for example by generating feature
visualizations [erhan2009visualizing, yosinski2015understanding, nguyen2016synthesizing, olah2017feature] or assigning
semantic concepts to them [bau2017network, bau2020understanding].
Our work is complementary to these methods as we use them as
primitives to probe sparse decision layers.
Another related line of work is that on concept-based explanations, which
seeks to explain the behavior of deep networks in terms of high-level
concepts [kim2018interpretability, ghorbani2019towards, yeh2020completeness].
One of the drawbacks of these methods is that the detected
concepts need not be causally linked to the model’s
predictions [goyal2019explaining].
In contrast, in our approach, the identified high-level concepts, i.e., the deep
features used by the sparse decision layer, entirely determine the model’s
behavior.
Most similar is the recent work by \citet{wan2020nbdt}, which proposes
fitting a decision tree on a deep feature representation. Network decisions
are then explained in terms of semantic descriptions for nodes along the
decision path. \citet{wan2020nbdt} rely on heuristics for fitting
and labeling the decision tree that require an existing domain-specific
hierarchy (e.g., WordNet), making their method more involved and more limited in its
applicability than our approach.
##### Regularized GLMs and gradient methods.
Estimating GLMs with convex penalties has been studied extensively.
Algorithms for efficiently computing regularization paths include least angle
regression for LASSO \citep{efron2004least} and path-following
algorithms \citep{park2007l1} for
ℓ1-regularized GLMs.
The widely-used R package glmnet by
\citet{friedman2010regularization}
provides an efficient coordinate descent-based solver for GLMs with
elastic net regularization, and attains state-of-the-art solving times on
CPU-based hardware. Unlike our approach, this library is best suited for
problems with few examples or features, and is not directly
amenable to GPU acceleration.
Our solver also builds off a long line of work in variance-reduced
proximal gradient
methods \citep{johnson2013accelerating, defazio2014saga, gazagnadou2019optimal},
which have stronger theoretical convergence rates when compared to
stochastic gradient descent.
6 Conclusion
-------------
We demonstrate how fitting sparse linear models over deep representations
can result in more debuggable models, and provide a diverse set of scenarios
showcasing the usage of this technique in practice.
The simplicity of our approach allows it to be broadly applicable to any
deep network with a final linear layer, and may find uses beyond the
language and vision settings considered in this paper.
Furthermore, we have created a number of human experiments for
tasks such as testing model simulatability, detecting spurious correlations
and validating misclassifications. Although
primarily used in the context of
evaluating the sparse decision layer,
the design of these experiments may be of independent interest.
Finally, we recognize that while deep networks are popular within
machine learning and artificial intelligence settings, linear models
continue to be widely used in other scientific fields. We hope that
the elastic net solver we develop and release will find
broader use in the scientific community for fitting large-scale
sparse linear models in contexts beyond deep learning.
Acknowledgements
----------------
We thank Dimitris Tsipras for helpful discussions.
Work supported in part by the Google PhD Fellowship, Open Philanthropy, and NSF grants
CCF-1553428 and CNS-1815221.
This material is based upon work supported by the Defense Advanced
Research Projects Agency (DARPA) under Contract No. HR001120C0015.
Research was sponsored by the United States Air Force Research Laboratory
and the United States Air Force Artificial Intelligence Accelerator and was
accomplished under Cooperative Agreement Number FA8750-19-2-1000. The
views and conclusions contained in this document are those of the authors
and should not be interpreted as representing the official policies, either
expressed or implied, of the United States Air Force or the U.S. Government.
The U.S. Government is authorized to reproduce and distribute reprints for
Government purposes notwithstanding any copyright notation herein.
|
bbb487f2-e1d0-4f24-829d-50b9e9ff5421 | trentmkelly/LessWrong-43k | LessWrong | D&D.Sci Hypersphere Analysis Part 2: Nonlinear Effects & Interactions
Following on from Part 1, again don't read this if you're trying to avoid spoilers!
Nonlinear Effects
The correlation analysis we did in Part 1 can find simple effects. If the Evil Squelchers break our machines, we will be able to see a negative correlation between Evil Squelching and Performance.
It can't find complicated effects. If a low Feng Shui is bad, a high Feng Shui is also bad, and a moderate Feng Shui is good, the correlation analysis won't tell us that.
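A quick toy illustration of this point (made-up data, nothing to do with the actual scenario): a U-shaped effect can have essentially zero linear correlation.

```python
import numpy as np

rng = np.random.default_rng(0)
feng_shui = rng.uniform(-1, 1, 10_000)
# Performance is best at moderate Feng Shui and bad at both extremes.
performance = 1 - feng_shui ** 2 + rng.normal(0, 0.1, 10_000)

# The Pearson correlation is ~0 even though Feng Shui strongly drives Performance.
print(np.corrcoef(feng_shui, performance)[0, 1])
```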
While summary statistics are nice, there's a lot that they miss, and one of the best things to do is just get the data somewhere you can look at it.
Probably the best place for us to start is how stats vary with Latitude/Longitude/etc.
Longitude
In our correlation analysis in Part 1, we saw a slight positive correlation between performance and Longitude. Here we can see that does seem to appear in the data, in an interesting not-entirely-linear way:
I can't seem to find any other column that drives this. Every other column's chart with Longitude looks something like this:
with no apparent trend.
If this planet is aligned similarly to Earth, a Longitude pattern like this would be 'things receiving sunlight do slightly better'. However, if this were the case, it would also change as the planet rotates, and the sun might be elsewhere when our superiors check our performance.
I've asked the GM whether we have information about this, and been told that we can assume a lack of time effects in the data. In a real-world situation I think I'd be nervous about making assumptions on this until I actually had firm data on how the planet rotated: happily, Word Of GM can override that.
Latitude
In our last analysis, we saw a surprising frequency pattern for Latitude/Shortitude/Deltitude, where the polar values around +-90 were low (as expected from choosing a random location on the planet), but the equatorial values around 0 were also strangely low:
I still don't know why this is the case.
We h |
fc58dcf5-4581-4192-9de4-c7352243a85c | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Hyperbolic takeoff
The debate over "slow" versus "fast" takeoff is one of the more controversial subjects in the AI safety community. The question is roughly about whether AI development will be more gradual, or whether once an AI achieves what could be called "general intelligence", it will be able to rapidly grow its capabilities and within a very short time transform the world in some way, either desirable or not.
The central problem when it comes to thinking about takeoff scenarios is that the reference class is empty: we've never seen anything like an AGI, so it seems very difficult to say anything about what might happen when takeoff gets going. I'll argue that this is not true: while we don't have anything *exactly* like an AGI, we have lots of different pieces of information which when put together can allow us to pin down what happens after AGI is developed.
The key property of AGI that matters for what happens after takeoff is that AGI is very likely going to be much easier to improve than humans are. Humans have a relatively fixed hardware that's difficult to change much and making more humans is a slow and expensive process. In contrast, an AGI could grow its capabilities both by recursive self-improvement at a much higher frequency compared to humans and by manufacturing or otherwise acquiring new processors to expand its computing power. Both of these processes can happen on timescales much faster than humans are able to make changes to human civilization, so we expect AGI to accelerate all kinds of change once it arrives.
This is already true of contemporary AI systems: we can't run Terence Tao's brain twice as fast even if we throw a billion dollars at accomplishing this, but for any deep learning system that's currently around a billion dollars of compute would be enough to give us enormous speedups. Therefore the expectation that AGI will be much easier to improve is not only theoretical; it's also based on the properties of current AI systems.
The model
=========
The basic model I'll use throughout this notebook is a deterministic hyperbolic growth model. These models are well-known and have been a source of singularity forecasts for a long time, but here I'll use them for a somewhat different purpose.
First, let's motivate the model. Right now, we have a world in which the primary bottleneck in AI development is human effort and time. Insofar as compute and hardware are bottlenecks, they are mostly so because the process of making chips is bottlenecked by the human involvement that's required along the supply chain. Automation of tasks relevant to AI research is currently mostly complementary to human labour, rather than substitutory. This means AI developments don't much affect the speed at which future AI is developed and speedups mainly come from humans scaling up investment into AI progress.
Let $f$
be the ratio of the total resources devoted to improving AI systems at any future time to current gross world product, and let $x$ be the output that AI systems are currently able to generate in the same units. Then $dx/dt$ is going to be a function of $f$, and I'll assume here that the function is of hyperbolic form. In this phase, the model looks like
$$\frac{dx}{dt} = r f^{1+B}$$
for a rate parameter $r>0$ and an exponent $B>0$. However, once AI that's capable of substituting for human effort comes along, we'll be able to use AI itself to improve AI, both directly by searching for better model architectures and training procedures, and indirectly through the impact of AI on the overall economy, which creates more resources to be used in AI development. In this scenario, the law of motion will instead look like
$$\frac{dx}{dt} = r(x+f)^{1+B}$$
In the middle there can be an intermediate regime in which we make the transition from one phase to the next, for example as AIs gradually become able to perform more and more tasks that humans are able to perform. The duration of this intermediate regime is also important, but we can actually get quite a bit of information about what the parameters in this model should be by looking only at the two limiting cases.
Properties
----------
First, I'll discuss the properties of the hyperbolic growth model $dx/dt = r x^{1+B}$ so we can get a better idea of what the two parameters $r, B$ mean. $r$ controls the dimension of time: if we change the units of $t$ from years to seconds, for example, this amounts to reducing $r$ by a factor of the number of seconds in a year. Therefore it's essentially a rate parameter and doesn't affect any dimensionless property of the growth curve. $B$ is a dimensionless parameter which controls the degree of acceleration. Any value $B>0$ is going to give us a singularity in finite time in which $x$ diverges to infinity.
Importantly, the fact that B is dimensionless means that it might be the same for a wide variety of different intelligent systems capable of self-improvement even if some of them run much faster than others. If we ran the history of human civilization a thousand times faster, we'd still estimate the same value of B for its overall growth trajectory.
Now, let's move on to the explicit solution of this differential equation. It's given by
$$x(t) = \left( x(0)^{-B} - Brt \right)^{-1/B}$$
If we start at time 0, how long do we have until the singularity? Solving for this time gives
$$t_s = \frac{x(0)^{-B}}{Br}$$
To illustrate this, here is a concrete example: If we assume a 0.1% annual growth rate for the world economy around 1 CE, we can normalize the dimensions such that the GWP in 1 CE is equal to one unit and we measure time in units of thousands of years. In these units we'll have $r = x(0) = 1$ and so the time to singularity will be $1/B$ in units of thousands of years, or $1000/B$ years.
Even allowing for some randomness in this hyperbolic growth model, it's difficult to argue that $B>1$ in light of the singularity that's still not here. David Roodman fits a stochastic version of this hyperbolic growth model to historical gross world product data and comes up with $B \approx 0.5$, which seems plausible to me.
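As a sanity check of these formulas, here is a minimal sketch that plugs the 1 CE example into the closed-form expression for $t_s$ (units as above: GWP normalized to 1, time in thousands of years):

```python
def time_to_singularity(r, B, x0):
    """t_s = x0**(-B) / (B * r) for the hyperbolic model dx/dt = r * x**(1 + B)."""
    return x0 ** (-B) / (B * r)

# 1 CE example: r = x(0) = 1, B = 0.5  ->  2.0 thousand years, i.e. ~2000 years.
print(time_to_singularity(r=1.0, B=0.5, x0=1.0))
```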
The rate parameter
------------------
This is the slippery part of trying to estimate this model, but there is a strategy we can use: look at present-day AI systems and compare their rate of improvement with the amount of funding and resources that go into improving them.
More explicitly, if we know that $dx/dt = r f^{1+B}$ and we know the value of $B$, we just need to find some information on three data points:
1. What is the current growth rate of the impact of AI systems?
2. How much revenue on the margin do AI systems generate?
3. What is the amount of investment currently going into making AI systems better?
Here is a list of sources I've been able to find on the different numbers:
* Gartner [reports](https://www.gartner.com/en/newsroom/press-releases/2021-11-22-gartner-forecasts-worldwide-artificial-intelligence-software-market-to-reach-62-billion-in-2022) that in 2021 AI software has brought in 51 billion dollars in total revenue with an annual growth rate of 14.1%.
* Tortoise Intelligence [says that](https://venturebeat.com/2021/12/06/report-ai-investments-see-largest-year-over-year-growth-in-20-years/) "total AI investment" has reached 77.5 billion dollars in 2021 compared with 36 billion dollars in 2020.
* The OECD [estimates](https://venturebeat.com/2021/09/30/vcs-invested-over-75b-in-ai-startups-in-2020/) over 75 billion dollars invested in AI startups in 2020, with funding growth of around 20% from 2019 to 2020.
* Ajeya Cotra [estimates](https://docs.google.com/document/d/1qjgBkoHO_kDuUYqy_Vws0fpf-dG5pTU4b8Uej6ff2Fg/edit#) that the total cost of training large deep learning models is likely between 10 to 100 times the reported cost of the final training run. Combining it with the number of large models trained from [this spreadsheet](https://docs.google.com/spreadsheets/d/1AAIebjNsnJj_uKALHbXNfn3_YsT6sHXtCU0q7OIPuc4/edit#gid=0), I think we end up with 1 to 10 billion dollars spent on training these models in total in 2021.
* The Metaculus community [currently forecasts](https://www.metaculus.com/questions/5118/will-robin-hanson-win-a-bet-that-the-gpt-line-of-language-models-will-generate--1bn-in-customer-revenue-by-2025/) that the GPT line of language models making 1 billion dollars of customer revenue over a period of four to five years is the median scenario. Even if this whole revenue was purely due to GPT-3, Cotra's cost multiplier implies that the total cost of creating GPT-3 was anywhere from 100 million to 1 billion dollars. Compared to the ~ 200 million dollars of estimated annual revenue, this means that right now the annual revenue generated by AI models is somewhat less than the cost of investment that goes into them.
Where does that leave us? For the moment I'll work with 2021 revenue equal to 50 billion dollars, 2022 revenue equal to 60 billion dollars and 2021 investment around 100 billion dollars. Again, normalizing to units of fraction of current gross world product, these are $5 \cdot 10^{-4}$, $6 \cdot 10^{-4}$ and $10^{-3}$ respectively. We can then back out the value of $r$ from
$$10^{-4} \approx \frac{dx}{dt} = r f^{1+B} \approx r \cdot 10^{-4.5}$$
$$r \approx \sqrt{10} \approx 3$$
We can try to compare this value of $r$ to what we would get from a fit to the history of human civilization. Roodman uses different units but his value turns out to be equivalent to $r \approx 0.05$ in the units I use here. By this calculation, AI is ~2 orders of magnitude faster than humans.
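The same back-of-the-envelope calculation as a few lines of Python (using the rounded figures above):

```python
B = 0.5
revenue_2021, revenue_2022 = 5e-4, 6e-4   # AI revenue as a fraction of current GWP
investment = 1e-3                         # AI investment as a fraction of current GWP

dxdt = revenue_2022 - revenue_2021        # ~1e-4 per year
r = dxdt / investment ** (1 + B)          # ~3.16, i.e. roughly sqrt(10)
print(r)
```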
Takeoff speeds
--------------
Now that we know the values of $r$ and $B$, we only need to know how big the world economy will be when AGI arrives. This depends on our belief about AI timelines, but for the sake of argument let's say AGI arrived today and that we devoted all of gross world product to improving it from that moment on. In other words, we're assuming that $x(0)=1$. (This assumption is fishy and I'll return to it later, but let's take it for granted for the moment.) In this case, the time to singularity would be
$$t_s = \frac{x(0)^{-B}}{Br} = \frac{1}{0.5 \cdot 3} = \frac{2}{3}\ \text{years} = 8\ \text{months}$$
In other words, even with a discrete transition from "no AGI" to "AGI" with no smooth intermediate regime, it takes 8 months for us to get a singularity from the time AGI first arrives. I think this argument is moderately strong evidence against takeoff timelines on the very fast end, say with $t_s \approx 1$ month, if they also correspond to a short time until AGI is created so that $x(0)$ isn't too large.
While AGI is most likely going to arrive some doublings of GWP from today, we also are unlikely to devote all of GWP to improving it from the moment it's created. These two effects could potentially cancel, especially for shorter timelines of AGI arrival, and a rough estimate of 8 months to 1 year from the time AGI is created to a singularity could end up being quite accurate.
In this regime, let's also examine how long it takes for gross world product to double after the arrival of AGI. The explicit solution for GWP is then
$$y(t) = \left(1 - 3t/2\right)^{-2}$$
and solving the equation $y(t)=2$ gives $t = (2-\sqrt{2})/3 \approx 0.2$. In other words, in this scenario it takes around 70 days for GWP to double after AGI arrives.
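A quick numerical check of this doubling time, using the closed-form trajectory above:

```python
import numpy as np

r, B, x0 = 3.0, 0.5, 1.0

def gwp(t):
    # closed-form hyperbolic trajectory; with these parameters it equals (1 - 3t/2)**-2
    return (x0 ** (-B) - B * r * t) ** (-1.0 / B)

ts = np.linspace(0.0, 0.3, 100_000)
t_double = ts[np.argmax(gwp(ts) >= 2.0)]   # first time GWP reaches twice its starting value
print(t_double, t_double * 365)            # ~0.195 years, i.e. roughly 70 days
```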
However, there's reason to believe this scenario is too biased towards fast takeoff, and this has to do with the initial value $x(0)$. When we set $x(0)=1$, we're implicitly assuming that we can "spend all of GWP" on the task of improving AI, but in fact, if we tried to do this, input prices would rise and we'd end up buying much less than the naive ratio would imply. This is essentially because we're running up against real resource constraints which in the short run we can't get rid of by ramping up expenditures. That said, the $x(0)=1$ case at least gives us an upper bound on how fast takeoff could be.
I think we can't really devote much more than 10% to 20% of GWP to the task of improving AI, not because of a lack of will but simply because of the effect I mention above. Given this, the above calculations are still valid but they apply to a takeoff scenario that happens after two to three doublings of gross world product rather than today, so it's what we might expect the picture to look like if AGI arrived sometime from 2060 to 2080. This is in line with the Metaculus community forecast on [this question](https://www.metaculus.com/questions/4215/what-will-be-the-real-world-gdp-on-the-year-agi-is-deployed-in-trillions-of-dollars/).
Finally, let me specify some distributions to quantify my uncertainty about the parameters I've mentioned so far. Roughly speaking, my beliefs are as follows:
$$B \sim \mathcal{N}(\mu=0.5,\ \sigma=0.1)$$
$$r \sim \exp\big(\mathcal{N}(\mu=\log 3,\ \sigma=\log 3)\big)$$
$$P \sim \exp\big(\mathcal{N}(\mu=4.5,\ \sigma=2.5)\big)$$
$$q \sim \exp\big(\mathcal{N}(\mu=-2.7,\ \sigma=\log 3)\big)$$
$$x(0) = q \times P$$
Here I separate out $x(0)$ into two components: $P$, which is GWP in the year AGI is deployed; and $q$, which is the fraction of GWP that is devoted to improving AIs after the deployment of AGI.
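Concretely, sampling these parameters and computing $t_s = x(0)^{-B}/(Br)$ for each draw might look like the following minimal sketch (treating $x(0) = q \times P$ in the same normalized units as before, so the printed percentiles are in years):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

B = rng.normal(0.5, 0.1, n)
r = np.exp(rng.normal(np.log(3), np.log(3), n))
P = np.exp(rng.normal(4.5, 2.5, n))          # GWP in the year AGI is deployed
q = np.exp(rng.normal(-2.7, np.log(3), n))   # fraction of GWP devoted to improving AI
x0 = q * P

t_s = x0 ** (-B) / (B * r)                   # time from AGI to singularity, in years
print(np.percentile(t_s, [25, 50, 75]))
```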
It's now straightforward to sample from this distribution. We get the following cumulative distribution function for $t_s$:

with 25th, 50th and 75th percentiles being around one month, three months and one year respectively. If I change the distribution of P to match the community's expectations, I instead get

Now the percentiles are all about three times bigger: for the same percentiles as above I now get four months, one year and three years. This is an important point: the later you think AGI deployment will occur, all else equal, the faster you should think takeoff will be.
As a result of all this, I believe that the community forecast on [this question](https://www.metaculus.com/questions/3477/if-human-level-artificial-intelligence-is-developed-will-world-gdp-grow-by-at-least-300-in-any-of-the-subsequent-15-years/) is currently quite underconfident.
Smooth transition
-----------------
If AI is developed through gradual progress, say by AI gradually becoming as capable or more capable than humans on a wide range of tasks, and if AI development itself is an enterprise which requires a wide variety of such tasks, this will strongly influence our view of how much impact AI will have on the world economy before there is an AGI.
In order to model this, I use
$$\frac{dx}{dt} = r\,(f+x)^{\rho(1+B)}\, f^{(1-\rho)(1+B)}$$
with a time-varying exponent $\rho(t)$. The point of this model is that I think what we'll actually see is a gradual increase in the exponent $\rho$ in this relationship, and at some critical point we'll cross from the diminishing-returns regime of $\rho < 1/(1+B)$ to the hyperbolic growth regime of $\rho > 1/(1+B)$, where AGI corresponds to $\rho \approx 1$. If the rate of increase of $\rho$ is not too fast, I think Paul Christiano's predictions about GWP growth are likely to come true.
Estimating the exact growth schedule of $\rho$ is infeasible with the meager amount of data I've been able to collect for this notebook, but we can still get something out of this model by plugging in some growth schedule for $\rho$ based on views about AGI timelines and some response function of $f$ to the state. It's very imprecise but at least it gives a sense of what we can expect when it comes to takeoff speeds.
Here my basic finding is that whether we get slow or fast takeoff depends both on how human effort $f$ and the exponent $\rho$ grow over time, but for a wide variety of plausible parameter settings we generally see AGI coming a year or two in advance of the singularity: a typical growth rate for the year before the singularity is anywhere from 30% to 70%.
The precise questions asked by Paul Christiano turn out to be more uncertain, however: in simulations, sometimes I get "slow takeoff" and sometimes "fast takeoff" if we define these terms in his sense. Another consistent result from the simulations is that we're more likely to see an eight-year doubling of GWP before a two-year doubling than a four-year doubling before a one-year doubling, which is a difference I didn't anticipate.
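For concreteness, a toy version of the kind of simulation described here might look like the following; the logistic schedule for $\rho$ and the constant-growth schedule for $f$ are illustrative assumptions rather than fitted quantities:

```python
import numpy as np

r, B = 3.0, 0.5
dt = 1.0 / 365.0                     # daily steps, time measured in years

def rho(t, t_agi=20.0, width=5.0):
    # made-up logistic schedule: rho rises from ~0 towards 1 around t_agi
    return 1.0 / (1.0 + np.exp(-(t - t_agi) / width))

def f(t, f0=1e-3, growth=0.2):
    # human-driven inputs to AI development, growing 20%/year from ~0.1% of GWP
    return f0 * (1.0 + growth) ** t

x, t = 5e-4, 0.0                     # AI output starts at ~0.05% of GWP
while x < 1e3 and t < 60.0:          # stop at a crude "singularity" proxy or after 60 years
    p = rho(t)
    dxdt = r * (f(t) + x) ** (p * (1 + B)) * f(t) ** ((1 - p) * (1 + B))
    x += dxdt * dt
    t += dt

print(t, x)                          # when and how explosively the toy run ends
```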
With all that said, my final forecasts are as follows:
* AGI has a big impact on economic growth years before the singularity: 95%
* GWP doubles in eight years before it doubles in two: 75%
* GWP doubles in four years before it doubles in one: 50%
Conclusion
==========
I've oversimplified the model quite a bit and arguably ignored the most interesting question of properly estimating what happens in the intermediate regime. If we can figure out some way to get information about a question roughly similar to "what's happening to $\rho$ over time", that would be quite useful in making inferences about takeoff timelines *before* AGI is developed. |
4b536ce6-f2f9-4c38-b8c2-c14318394d19 | StampyAI/alignment-research-dataset/aisafety.info | AI Safety Info | Briefly, what are the major AI safety organizations and academics working on?
### **Industry**
[Anthropic](http://anthropic.com):
- [Honest Harmless Helpful Language Model](https://arxiv.org/abs/2112.00861) [Language Models] (Askell, 2021) - Using prompting as a baseline to study the idea of aligning an LLM. Basic attempts seem to scale well with model size, presumably because they rely on the capabilities of the model to interpret the prompt. This paper primarily focuses on experimenting with evaluation methods.
- [Part 2](https://arxiv.org/abs/2204.05862) [Reinforcement Learning, Language Models] (Bai, 2022) - A more significant approach than the first HHH paper, using reinforcement learning from human feedback and preference modeling to finetune the LMs. Further work is done testing and analyzing the robustness of the training method, calibration of the preference models, competing objectives and out-of-distribution detection.
- [Mathematical Framework for Transformer Circuits](https://transformer-circuits.pub/2021/framework/index.html) [Interpretability] (Elhage, Nanda, 2021) - The idea of circuits was first applied to CNNs in the case of vision, but recent large models (especially for language) use transformers in their architecture. This paper is meant to begin filling that gap. Contains a reference to a [second paper](https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html) which has some more significant results. Specifically, the idea of “induction heads”, which are attention heads which allow for in-context learning.
- [Language Models (Mostly) Know What they Know](https://arxiv.org/abs/2207.05221) [Language Models, Calibrated Uncertainty] (Kadavath, Kaplan, 2022) - Tasks LMs with predicting which questions they will get correct, and whether their own claims are valid. Preliminary results are encouraging; generally the models are calibrated to the probability their answers are correct, after proposing them. Calibration is worse when the question becomes “Do you know the answer to x?”, but improves when given extra source material to work with.
[DeepMind Safety Team](https://deepmindsafetyresearch.medium.com/):
- [AI Safety Gridworlds](https://arxiv.org/abs/1711.09883) [Engineering] (Leike et al, 2017) - Environments designed to keep track of a distinct reward and ‘safety objective’, of which the learning agent only has access to the first.
- [Goal Misgeneralization](https://arxiv.org/abs/2210.01790) [Reinforcement Learning] (Shah et al., 2022) - While there is already a risk of failing to correctly specify the designer’s desired goal in a learning system, this paper focuses on examples of learning algorithms acting towards undesired goals even when the specification is correct. Here, the algorithm competently pursues the undesired goal during deployment/test time despite achieving high training accuracy.
- [Model-Free Risk-Sensitive RL](https://arxiv.org/abs/2111.02907) [Reinforcement Learning] (Delétang, et al, 2021) [Blog](https://deepmindsafetyresearch.medium.com/model-free-risk-sensitive-reinforcement-learning-5a12ba5ce662) - A way of updating value estimates in a RL agent which is somewhat based on risk-sensitivity in portfolio analysis for investments. More specifically, an extension of temporal-difference learning which can also be approached as a Rescorla-Wagner model in classical conditioning with the stimulus being the estimation error in some direction.
- [Using Causal Influence Diagrams to define/find ‘agency’](https://arxiv.org/abs/2208.08345) [Agent Foundations, Causality] (Kenton et al, 2022) [Blog](https://deepmindsafetyresearch.medium.com/discovering-when-an-agent-is-present-in-a-system-41154de11e7b) - A formal framework proposal for understanding agency in terms of “systems that would adapt their policies if their actions affected the world in a different way.” The authors use this framework to derive an algorithm for discovering agents from data and translating from causal models to game theoretic influence diagrams.
- [Language Model Alignment](https://arxiv.org/abs/2103.14659) [Language Models, Value Alignment] (Kenton et al, 2021) [Blog](https://deepmindsafetyresearch.medium.com/alignment-of-language-agents-9fbc7dd52c6c) - A broad paper analyzing the potential for misalignment within language models, and possible initial approaches.
- [Bayesian Analysis of meta-learning](https://arxiv.org/abs/2010.11223) [Interpretability] (Mikulik et al, 2020) [Blog](https://deepmindsafetyresearch.medium.com/understanding-meta-trained-algorithms-through-a-bayesian-lens-5042a1acc1c2) - Demonstration and reverse engineering of the use of Bayes-optimal algorithms within meta-trained recurrent neural networks. Shows that Bayes-optimal agents are fixed points of the meta-learning dynamics.
[OpenAI Safety Team](https://openai.com/):
- [Overview of their approach](https://openai.com/blog/our-approach-to-alignment-research/): In summary, empiricism and iteration to develop a sufficiently aligned, sufficiently advanced model that can solve the theoretical problems (or just help us build better, still aligned AI)
- [Circuits](https://distill.pub/2020/circuits/zoom-in/) [Interpretability] (Olah et al, 2020) - A framework for understanding how neural networks actually implement more understandable algorithms than we might initially expect, and how to find them. Primarily demonstrated within CNNs in this thread, although as shown by Anthropic seems extendable to transformers. See also [their attempt](https://distill.pub/2020/circuits/curve-circuits/) to implement a handwritten neural network layer based on these principles.
- [Deep RL from Human Preferences](https://arxiv.org/abs/1706.03741) [Reinforcement Learning, Value Alignment] (Christiano et al, 2017) [Blog](https://openai.com/blog/deep-reinforcement-learning-from-human-preferences/) - Demonstrated solving of complex tasks by learning from (non-expert) human feedback, without any access to the actual objective function.
- [AI Written Critiques Help Humans Notice Flaws](https://arxiv.org/abs/2206.05802) [Reinforcement Learning, Language Models] (Saunders, Yeh, Wu, 2022) [Blog](https://openai.com/blog/critiques/) - Even though the models used are not better at writing summaries than humans, and writing summaries is not a difficult task for humans, AI assistance still increases the number of errors found by humans. Furthermore, this ability seems to scale faster than summary writing capabilities.
- [AI Safety via Debate](https://arxiv.org/pdf/1805.00899.pdf) (Irving, Christiano, Amodei, 2018) - A suggested approach to specifying complex human goals by training agents to play a zero-sum debate game. Given optimal play, debate can in theory solve any problem in PSPACE given polynomial-time judges, and empirically the authors demonstrate that debate raises the accuracy of a sparse classifier that sees only 6 pixels plus a debate sequence from 59.4% to 88.9%.
- [Iterated Amplification](https://arxiv.org/abs/1810.08575) [Value Alignment] (Christiano, [Shlegeris](https://arxiv.org/search/cs?searchtype=author&query=Shlegeris%2C+B), Amodei, 2018) [Blog](https://openai.com/blog/amplifying-ai-training/) - a suggested approach to safety, by using weaker AIs + humans to supervise the training of more powerful AIs, in an iterative manner, to achieve any level of capability desired while maintaining our ability to catch potential errors and dangers.
[Redwood Research](https://www.redwoodresearch.org/)
- [Adversarial Training for High-Stakes Reliability](https://arxiv.org/abs/2205.01663) [Language Models, Robustness] (Ziegler, 2022) [Blog](https://www.alignmentforum.org/posts/n3LAgnHg6ashQK3fF/takeaways-from-our-robust-injury-classifier-project-redwood) - An attempt to weakly/partially align an LLM so as to not output text where a character was harmed or injured, by using human-assisted adversarial training on a classifier designed to prevent the model from outputting such text. As an org they are pursuing adversarial training as a method for alignment.
- [Polysemanticity and Capacity in Neural Networks](https://arxiv.org/abs/2210.01892) [Interpretability] (Scherlis et al., 2022) [Blog](https://www.alignmentforum.org/posts/kWp4R9SYgKJFHAufB/polysemanticity-and-capacity-in-neural-networks) - Exploration of a phenomenon known as polysemanticity, where some neurons within ANNs represent a mixture of distinct features at once (as opposed to many others appearing to represent only one feature). This is done through the lens of capacity, which essentially asks how much representational dimension each feature requires or consumes. Also looks at the theoretical geometry of feature space given optimal allocation.
- [Interpretability in the Wild](https://arxiv.org/abs/2211.00593) [Interpretability, Language Models] (Wang et al., 2022) - A paper that seeks to apply the techniques of mechanistic interpretability on a large problem while still providing detailed results, as opposed to one or the other. Specifically, they seek an explanation for how GPT-2 performs the task of Indirect Object Identification, and then evaluate this explanation on quantitative versions of the criteria of faithfulness, completeness, and minimality.
### **Academics**
[Sam Bowman](https://cims.nyu.edu/~sbowman/index.shtml) (NYU, Prof) [Datasets]:
- [NYU Alignment Research Group](https://wp.nyu.edu/arg/) - A new research group at NYU, with Sam Bowman as PI and researchers from various other ML, data science, and language-relevant groups at NYU such as [ML2](https://wp.nyu.edu/ml2/), focusing on empirical work with language models. See introductory post below.
- [Why I Think More NLP Researchers Should Engage with AI Safety Concerns (Blog)](https://wp.nyu.edu/arg/why-ai-safety/) [Language Models] - Bowman claims we’re making progress faster than many expected and that progress is also providing a foundation for other problems without intentionally designing for them (consider GPT-3 in the realm of few-shot learning, other types of reasoning). This leaves NLP researchers potentially in an important role in the future development of AI systems and their safety, and that should at least be considered by those in the field.
- [Fine-Tuned Transformers Show Clusters of Similar Representations Across Layers](https://aclanthology.org/2021.blackboxnlp-1.42.pdf) [Interpretability, Language Models] (Phang, Liu, Bowman, 2021) - Use of centered kernel alignment to measure the similarity of representations in fine-tuned models across layers. They find strong similarities in early and later layers, but not in-between. Similarity in later layers suggests a lack of need for them, which they verify by removal.
- [What Will It Take to Fix Benchmarking in NLU?](https://aclanthology.org/2021.naacl-main.385.pdf) [Datasets] (Bowman, Dahl, 2021) - Since unreliable and biased models score so highly on most NLU evaluation datasets, it is difficult to measure progress on actual improvements to the systems. Argues for four criteria such evaluation datasets should meet, and that adversarial data collection fails at improving these.
- [Two Turn Debate Doesn’t Help Humans Answer Hard Reading Comprehension Questions](https://arxiv.org/pdf/2210.10860.pdf) (Parrish, Trivedi, et al. 2022) - Answers produced by natural language models can be false yet reasonable-sounding, and in cases where responses are difficult to check this makes it difficult to trust the models. One suggested approach is the use of debate to help humans distinguish between correct and incorrect answers. [Previous research](https://arxiv.org/abs/2204.05212) has shown this is not effective in a one-step argument paradigm, and this paper shows it is not effective with two-step argument-counter arguments either, using human-produced correct and incorrect+misleading responses.
[Jacob Steinhardt](https://jsteinhardt.stat.berkeley.edu/) (UC Berkeley, Assistant Prof):
- [Certified Defenses Against Adversarial Examples](https://arxiv.org/pdf/1801.09344.pdf) [Robustness] (Raghunathan, Steinhardt, Liang, 2018) - Produces an adaptive regularizer based on a differentiable certificate for a one-layer neural network, which guarantees for a given network and test input, *no* attack can force the error to exceed a certain threshold. Applied to MNIST, guaranteed no attack which perturbed pixels by 0.1 could cause error to grow beyond 35%.
- [Describing Differences between Text Distributions with Natural Language](https://arxiv.org/abs/2201.12323) [Language Models] (Zhong, Snell, Klein, Steinhardt, 2022) - Uses models of GPT-3 to learn summaries of the differences between distributions of text. After training this to around 76% similarity to human annotation of these datasets, they apply these outputs to do some work analyzing datasets, including describing distribution shifts.
- [The Effects of Reward Misspecification](https://arxiv.org/abs/2201.03544) (Pan, Bhatia, Steinhardt, 2022) - A broader study across four RL environments on how reward hacking arises as a function of four specific agent capabilities. Generally, reward hacking increases as capabilities do, but there are also noticeable phase shifts where the true reward rapidly decreases while the proxy reward remains high.
- [Auditing Visualizations: Transparency Methods Struggle to Detect Anomalous Behavior](https://arxiv.org/pdf/2206.13498.pdf) [Interpretability] (Denain, Steinhardt, 2022) - Defines “anomalous models” from a set of “normal models”, which may include things such as backdoors or certain biases, then tests whether current transparency methods provide sufficiently different explanations. This is partially effective; certain significant differences like shape bias and adversarial training are detected, but subtler issues like training on incomplete data are not found.
[Dan Hendrycks](https://people.eecs.berkeley.edu/~hendrycks/) (UC Berkeley):
- [Open Problems in AI Safety](https://arxiv.org/abs/2109.13916) [Value Alignment, Robustness, Interpretability] (Hendrycks, 2022) - A summary and survey of four categories of problems in the field of “AI Safety”: robustness, monitoring, alignment, and systemic safety. Includes an overview of some potential research directions and papers submitted by others.
- [A Critical Analysis of Out-of-Distribution Generalization](https://arxiv.org/abs/2006.16241) [Robustness, Datasets] (Hendrycks et al, 2021) - Produces four new distribution-shift image datasets, use them to test various methods that attempt to improve robustness to these types of shifts, and introduce a new method of data augmentation. They also find that certain methods are better for certain types of distribution shifts, and no method consistently improves robustness on all shifts.
- [Using Self-Supervised Learning Can Improve Model Robustness and Uncertainty](https://scholar.google.com/citations?view_op=view_citation&hl=en&user=czyretsAAAAJ&citation_for_view=czyretsAAAAJ:5nxA0vEk-isC) [Robustness, Calibrated Uncertainty] (Hendrycks et al, 2019) - Finds that self-supervision can improve robustness to adversarial examples, label corruption, and common forms of input corruption. It was also found to improve OOD detection beyond fully supervised methods, suggesting this may be the primary approach to such a task.
- [Deep Anomaly Detection with Outlier Exposure](https://scholar.google.com/citations?view_op=view_citation&hl=en&user=czyretsAAAAJ&citation_for_view=czyretsAAAAJ:Tyk-4Ss8FVUC) [Robustness] (Hendrycks et al, 2019) - One desirable trait of advanced models would be an ability to detect anomalous input (to reduce the range of successful adversarial attacks or for OOD detection). This paper explores training anomaly detectors on diverse sets of out-of-distribution data, successfully improving detection. As an additional result, models that trained on CIFAR-10 but scored higher on SVHN datasets were able to be readjusted with the anomaly detectors.
- [Unsolved Problems in Machine Learning Safety](https://arxiv.org/abs/2109.13916) [Robustness, Value Alignment] (Hendrycks et al., 2021) - A paper which provides a great summary of various technical areas of work related to safety in general (including a section on alignment in particular). The following is a list of links to a few of the referenced research papers and suggested approaches, but a full read would provide many more:
***Improve Adversarial and Black Swan Robustness***
- [Adding to existing robustness benchmarks](https://arxiv.org/abs/1903.12261) (Hendrycks, Dietterich, 2019)
- [Develop new data augmentation techniques](https://arxiv.org/abs/2112.05135) (Hendrycks et al., 2021)
***Improve Model Calibration and Honesty***
- Improve model calibration on typical testing data, [and testing data that is unlike the training data](https://arxiv.org/abs/1906.02530) (Ovadia et al., 2019)
- [Create evaluation schemes that catch models being inconsistent](https://arxiv.org/abs/2102.01017) (Elazar et al., 2021)
- Train more truthful models by incentivising models not to state falsehoods, spread misinformation or [repeat misconceptions](https://aclanthology.org/2020.acl-main.353/) (Peskov et al., 2020)
***Value Alignment and Objectives***
- [Develop models which learn wellbeing functions that do not replicate human cognitive biases](https://arxiv.org/abs/2008.02275) (Hendrycks et al., 2020)
- [Develop models which can detect morally clear cut versus contentious scenarios](https://arxiv.org/abs/2008.02275) (Hendrycks et al., 2020)
- [Include difficult-to-specify goals in interactive environments (CIRL)](https://arxiv.org/pdf/1606.03137.pdf) (Hadfield-Menell et al., 2016)
***Hidden Model Functionality***
- [Improve backdoor detectors to counteract an expanding set of backdoor attacks](https://arxiv.org/pdf/2003.07233.pdf) (Karra et al., 2020)
[Alex Turner](https://www.linkedin.com/in/alexandermattturner/details/experience/) (Oregon State, Postdoc)
- [Avoiding Side Effects in Complex Environments](https://arxiv.org/pdf/2006.06547.pdf) [Reinforcement Learning] (Turner, Ratzlaff, 2020) - Tests “Attainable Utility Preservation” and shows that it avoids side effects in toy environments. The essential idea is to use randomly generated reward functions as auxiliary measures, costing the agent if they became unable to achieve them. This way, as much capacity for doing everything beyond the primary goal is kept while the primary task is still completed.
- [Conservative Agency via Attainable Utility Preservation](https://arxiv.org/abs/1902.09725) [Reinforcement Learning] (Turner, Hadfield-Menell, Tadepalli, 2019) - In order to mitigate the risk of reward misspecification, where RL agents are given reward functions that poorly specify the desired behavior, they introduce an approach using auxiliary reward functions to balance the primary reward while maintaining the ability to optimize other, either selected or randomly generated, functions. Generally speaking, this seems to create significantly more conservative agents which are still able to optimize the primary reward with minimal side effects.
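A minimal sketch of the attainable-utility penalty described in the two Turner entries above might look like the following; the function name, the no-op baseline, and the scaling constant are illustrative rather than taken from the papers:

```python
import numpy as np

def aup_penalty(q_aux_action, q_aux_noop, scale=0.1):
    """Schematic attainable-utility-preservation penalty.

    q_aux_action[i]: estimated value of auxiliary (e.g. randomly generated) reward i
                     after taking the candidate action
    q_aux_noop[i]:   estimated value of auxiliary reward i after doing nothing instead
    The agent is then trained on  primary_reward - aup_penalty(...),  so changes in its
    ability to pursue the auxiliary goals are made costly.
    """
    q_aux_action = np.asarray(q_aux_action, dtype=float)
    q_aux_noop = np.asarray(q_aux_noop, dtype=float)
    return scale * np.mean(np.abs(q_aux_action - q_aux_noop))
```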
[David Krueger](https://twitter.com/davidskrueger) (Cambridge, Associate prof)
- [Goal Misgeneralization in Deep RL](https://arxiv.org/abs/2105.14111) [Reinforcement Learning, Value Alignment/Robustness] (Langosco et al…, Krueger, 2021) - A study of goal misgeneralization, where RL agents retain their capabilities out of distribution but fail to achieve the desired goal due to having learned another. The paper seeks to formalize the problem as well as provide empirical instances of it occurring.
- [Defining and Characterizing Reward Hacking](https://arxiv.org/pdf/2209.13085.pdf) [Reinforcement Learning, Value Alignment] (Skalse, Krueger, 2022) - Provides a formal definition of “reward hacking”, where a poor proxy leads to poor performance on the true reward function. Define an “unhackable proxy” where this cannot happen, show an instance of intuitive approaches failing, and study when proxies are unhackable in stochastic reward functions, deterministic and some stochastic policies, and seek necessary and sufficient conditions for simplifications. Suggests a tension between narrowness of task specification and value alignment.
[Dylan Hadfield-Menell](https://scholar.google.com/citations?hl=en&user=4mVPFQ8AAAAJ&view_op=list_works&sortby=pubdate) (MIT, Assistant prof)
- [White-Box Adversarial Policies in Deep Reinforcement Learning](https://arxiv.org/abs/2209.02167) [Robustness, Reinforcement Learning] (Casper, H-M, Kreiman, 2022) - Normal adversarial policy training methods (in RL) assume other agents to be a black box. Treating them as a white box, where adversaries can see the internal states of other agents at each time step, allows the adversary to find stronger policies while also allowing adversarial training on these policies to create more robust victim models in single-agent environments.
- [Building Human Values into Recommender Systems](https://arxiv.org/abs/2207.10192) [Value Alignment] (Large collaboration, 2022) - A multidisciplinary attempt to collect a set of values relevant to the design and implementation of recommendation systems and examine them at play in industry and policy.
- [Formal Contracts Mitigate Social Dilemmas in Multi-Agent RL](https://arxiv.org/abs/2208.10469) [Reinforcement Learning, Value Alignment, Game Theory] (Christofferson, Haupt, H-M, 2022) - Using an augmented Markov game where agents voluntarily agree to state-dependent reward transfers, it is shown that this strategy can guarantee all subgame-perfect equilibria in fully-observed games to be socially optimal with enough possible contracts, and verify this result empirically on games such as the Stag Hunt, a public goods game, and resource management.
[Andrew Critch](https://scholar.google.com/citations?user=F3_yOXUAAAAJ&hl=en) (UC Berkeley, Research Scientist)
- [Robust Cooperation Criterion for Open-Source Game Theory](https://www.cambridge.org/core/journals/journal-of-symbolic-logic/article/abs/parametric-resourcebounded-generalization-of-lobs-theorem-and-a-robust-cooperation-criterion-for-opensource-game-theory/16063EA7BFFEE89438631B141E556E79) [Robustness, Game Theory] (Critch, 2019) - In addition to a generalization of Löb’s Theorem, provides an unexploitable (in Prisoner’s dilemma, never achieve (Cooperate, Defect), but sometimes achieve (Cooperate, Cooperate)) criterion for cooperation which requires proofs of another agent’s source code. This method outperforms Nash equilibria and correlated equilibria.
- [Multi-Principal Assistance Games](https://arxiv.org/pdf/2012.14536.pdf) [Game Theory, Reinforcement Learning, Value Alignment] (Fickinger, Zhuang, Critch, et al, 2020) - Introduces an extension of the assistance game (CIRL), called the MPAG (as in title), with a stated example of apprenticeship where an agent needs to learn from a human working to achieve some utility and their preferences. As long as humans are sufficiently responsible for obtaining some fraction of the rewards then their preferences can be inferred from their work.
[Roger Grosse](https://www.cs.toronto.edu/~rgrosse/) (Toronto, Assistant Prof) helped found [Vector Institute](https://vectorinstitute.ai/) ([VI Profile](https://vectorinstitute.ai/team/roger-grosse/))
- [On Implicit Bias in Overparameterized Bilevel Optimization](https://proceedings.mlr.press/v162/vicol22a.html) (Vicol, Lorraine, …, Grosse et al, 2022) - Recent work has studied the implicit bias found in algorithms for single-level optimization, and this paper seeks to extend that work to bilevel optimization algorithms which involve both inner and outer parameters each optimized to their own objectives. In particular, the two methods studied were cold-start and warm-start and the convergence of solutions based on these and other algorithmic choices.
- [If Influence Functions are the Answer, Then What is the Question?](https://arxiv.org/abs/2209.05364) (Bae, Ng, Lo, Ghassemi, Grosse, 2022) - Influence functions estimate the effect of removing individual data points on a model’s parameters, and align well for linear models but not so much for neural nets. This paper explores this discrepancy and finds that in nonlinear models, influence functions are better aligned with a quantity called the proximal Bregman response function, which allows us to continue using influence functions in nonlinear models to do such things as find influential and/or mislabeled examples.
|
8012f387-4f7c-4584-be5e-cb9231e879bd | StampyAI/alignment-research-dataset/special_docs | Other | Sparse Skill Coding: Learning Behavioral Hierarchies with Sparse Codes.
Under review as a conference paper at ICLR 2020
SPARSE SKILL CODING: LEARNING BEHAVIORAL HIERARCHIES WITH SPARSE CODES
Anonymous authors
Paper under double-blind review
ABSTRACT
Many approaches to hierarchical reinforcement learning aim to identify sub-goal structure in tasks. We consider an alternative perspective based on identifying behavioral ‘motifs’—repeated action sequences that can be compressed to yield a compact code of action trajectories. We present a method for iteratively compressing action trajectories to learn nested behavioral hierarchies of arbitrary depth, with actions of arbitrary length. The learned temporally extended actions provide new action primitives that can participate in deeper hierarchies as the agent learns. We demonstrate the relevance of this approach for tasks with non-trivial hierarchical structure and show that the approach can be used to accelerate learning in recursively more complex tasks through transfer.
1 INTRODUCTION
Despite the many successes of deep reinforcement learning (RL) in recent years (Mnih et al., 2015;
Schulman et al., 2017; Silver et al., 2016; Levine et al., 2016), long-term credit assignment and
search complexity remain fundamental challenges. One of the primary strategies for managing this
complexity has been to incorporate hierarchical, temporally-extended actions. Hierarchies hand-
designed using domain knowledge can provide substantial training benefits (Sutton et al., 1999;
Barto & Mahadevan, 2003). However, a major challenge in hierarchical reinforcement learning is
to develop general methods for discovering useful hierarchical representations without relying on
domain expertise.
Many objectives for the hierarchy learning problem have been proposed, with notable focus on facili-
tating transfer to downstream tasks and facilitating efficient exploration of the state space (Eysenbach
et al., 2018; Frans et al., 2017; Solway et al., 2014). We pose the hierarchy learning problem as
follows: given a distribution of tasks, what determines the optimal set of representations for action
sequences? We approach this question by considering the problem faced by human decision-makers.
Humans are fundamentally resource-constrained. Energy is limited, computation is expensive, and
solutions to problems must be computed in real-time. Across cortical areas, a common strategy for
dealing with these constraints is to reduce computational complexity by storing representations that
efficiently encode the statistics of the domain. This idea originated as the efficient coding hypothe-
sis (Barlow, 1961), and has been empirically corroborated in sensory and motor systems (Barlow,
1961; Olshausen & Field, 1996; Hromádka et al., 2008; Poo & Isaacson, 2009; Vinje & Gallant,
2000).
We extend the efficient coding hypothesis to the problem of representation learning for planning,
and propose a method for discovering temporally extended actions by learning an efficient code
of the behavior required by a task distribution. This is a novel formulation of the classic notion
of “chunking” from cognitive psychology (Chase & Simon, 1973; Simon, 1991), which motivated
early work on hierarchical reinforcement learning (Korf, 1985; Stolle & Precup, 2002), and aligns
with empirical neuroscience results suggesting that organisms represent their motor output in terms
of a sparse efficient code of high-level “motor primitives” (Flash & Hochner, 2005). An efficient
code for a sequence of actions compresses the sequence into a minimum-length description that
factorizes the input distribution (Cover & Thomas, 2012). The benefit of using such a code in the
context of decision making is that it provides the building blocks for solving related problems using a
minimal set of decision points, delineated by a minimal set of skills that capture the statistics of the
behavior required by the task distribution. As a consequence, this approach subsumes several distinct
objectives for hierarchy learning proposed in the literature; an efficient code of behavior required by a
problem space reduces the number of decision points required (Harb et al., 2017), facilitates transfer
to tasks drawn from the same distribution (Solway et al., 2014), facilitates efficient exploration of the
state space [cite], and decomposes a task into a natural set of sub-tasks (Bacon et al., 2017; Fox et al.,
2017).
The problem of finding a compact code for sequential data with long-range dependencies and nested
hierarchical structure is equivalent to the problem of finding a minimum-length program that can
generate the data—that is, finding a program with minimum Kolmogorov complexity (Kolmogorov,
1965). The Kolmogorov complexity of a sequence is not finitely computable and can thus only
be approximated. Drawing inspiration from this idea, we propose a relatively simple approach
for approximating minimum-description-length codes of sequences of actions through iterative
convolutional sparse coding and compression, with structure similar to classic string compression
methods such as the Nevill-Manning algorithm (Nevill-Manning & Witten, 1997). With this method,
we are able to extract compact, hierarchically nested representations of action trajectories, with
temporally extended actions of arbitrary lengths. We incorporate this method into the RL problem by
equipping the agent with the capacity to compress its behavior and augment its action space with the
learned representations after each task it faces.
2 PRELIMINARIES
Reinforcement learning: We consider a task as a finite-horizon Markov decision process, consisting of a set of states $S$, a set of actions $A$, a transition function $p(S_{t+1}=s' \mid S_t=s, A_t=a)$ that defines how actions move an agent between states, and a reward function $r(S_t=s, A_t=a, S_{t+1}=s_{t+1})$ that defines the reward the agent receives by taking action $a$ in state $s$ and ending up in state $s_{t+1}$. The objective is to find the policy $\pi : S \to A$ maximizing the expected cumulative reward

$$\mathbb{E}\left[\sum_{t=1}^{T} r(S_t=s, A_t=a, S_{t+1}=s_{t+1})\right].$$
Sparse coding: Sparse coding (Olshausen & Field, 1996) is an unsupervised algorithm for learning
a dictionary that reconstructs a signal using a minimal set of non-zero coefficients on the dictionary.
The sparse coding model assumes that a signal $X$ is generated as a linear combination of filters $W$ with coefficients $s$ plus additive Gaussian noise $\epsilon$:

$$x_i = \sum_{j=1}^{n} W_j s_{i,j} + \epsilon \qquad (1)$$
The objective is to reconstruct the input with minimal distortion while using the minimal number of
non-zero coefficients. The filters and coefficients are jointly optimized:
$$\min_{W,s} \sum_{i=1}^{m} \left\lVert x_i - \sum_{j=1}^{n} W_j s_{i,j} \right\rVert_2^2 + \lambda \sum_{i,j} \lVert s_{i,j} \rVert_1 \qquad (2)$$
with the $\ell_1$ norm imposing a penalty on the number of non-zero activations, and $\lambda$ modulating the trade-off between the accuracy and sparsity of the representations. To learn sparse codes for time-series data, one can augment the basic sparse coding model by replacing scalar-valued coefficients with vector-valued coefficients and matrix multiplication by convolution, which allows basis functions to appear at all possible shifts in the signal:

$$\min_{W,s} \sum_{i=1}^{m} \left\lVert x_i - \sum_{j=1}^{n} W_j * s_{i,j} \right\rVert_2^2 + \lambda \sum_{i,j} \lVert s_{i,j} \rVert_1 \qquad (3)$$
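As an illustration, a minimal PyTorch sketch of this convolutional objective (equation 3) might look as follows; the tensor shapes are assumptions, and in practice $W$ and $s$ would be optimized jointly (e.g. by alternating gradient steps):

```python
import torch
import torch.nn.functional as F

def conv_sparse_coding_loss(x, W, s, lam=0.1):
    """Equation (3)-style objective: reconstruction error plus an L1 sparsity penalty.

    x: (batch, n_actions, T)       one-hot action trajectories
    W: (n_filters, n_actions, K)   dictionary of length-K temporal filters
    s: (batch, n_filters, T-K+1)   coefficient maps, one value per filter and time shift
    """
    # Synthesis: every filter is placed at every time shift, weighted by its coefficient.
    recon = F.conv_transpose1d(s, W)           # shape (batch, n_actions, T)
    return ((x - recon) ** 2).sum() + lam * s.abs().sum()

# Example shapes: 16 trajectories of length 50 over 5 primitive actions,
# with a dictionary of 8 two-step filters.
x = torch.randn(16, 5, 50)
W = torch.randn(8, 5, 2, requires_grad=True)
s = torch.randn(16, 8, 49, requires_grad=True)
loss = conv_sparse_coding_loss(x, W, s)
```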
We use this convolutional formulation of the sparse coding model to encode trajectories of actions generated by an RL agent. We note the similarity between the sparsity constraint and the Minimum Description Length (MDL) principle, which states that the best model $\hat{M} \in \mathcal{M}$ is that which can describe a data sample $x$ completely using the fewest number of bits,
$$\hat{M} = \arg\min_{M \in \mathcal{M}} L(x, M)$$

where $L(x, M)$ is the codelength assignment function defining the theoretical code length required to describe $(x, M)$ uniquely. Underlying the MDL is the idea that a model that is able to (losslessly) compress data must do so by capturing its structure and regularities. We use sparse coding to approximate this objective.
3 SPARSE SKILL CODING
We propose a method, sparse skill coding (SSC), for discovering hierarchically nested codes for action sequences using a variant of convolutional sparse coding. Given a trajectory in $\mathbb{Z}^t$ consisting of $t$ timesteps of $n$ discrete actions, we wish to find a minimal set of multi-step actions that encodes this trajectory. We represent trajectories as binary matrices $T \in [0,1]^{n \times t}$, where actions are one-hot encoded.
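For concreteness, a one-hot trajectory matrix of this kind can be built as in the following sketch (NumPy, illustrative only):

```python
import numpy as np

def one_hot_trajectory(actions, n_actions):
    """Encode a length-t sequence of discrete actions as an n x t binary matrix."""
    actions = np.asarray(actions)
    T = np.zeros((n_actions, len(actions)))
    T[actions, np.arange(len(actions))] = 1.0
    return T

# e.g. one_hot_trajectory([0, 0, 4], n_actions=5) is a 5 x 3 matrix with a single
# one in each column, marking which action was taken at that timestep.
```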
The standard sparse coding model learns a single layer code and requires fixing the size of the
dictionary elements (the length of the actions) in advance. We propose an alternative method that can
discover potentially hierarchically-nested dictionary elements of arbitrary length with an iterative
coding and compression process.
At all stages, the size of the dictionary elements is set to 2 timesteps. A dictionary and sparse code is found for the batch of trajectories by minimizing equation 2. The dictionary element $a$ which has the highest explained variance is then selected and assigned an integer code $n+1$. The dimension of the matrix $T$ is increased to $(n+1) \times t$. All 2-step time windows that yielded an active coefficient on this dictionary element are then replaced with a 1-step one-hot vector encoding the dictionary element’s integer code $n+1$. The length of the trajectory is thus decreased by the number of occurrences, $w$, of that dictionary element. We denote this compression procedure as a function of $(T, a)$. This process is repeated for the new matrix of size $(n+1) \times (t - w)$, for $k$ iterations. In this manner, dictionary elements can be discovered that contain previously compressed sequences.
Algorithm 1: Sparse Skill Coding
Input: batch of $m$ trajectories encoded as binary matrices $T \in [0,1]^{n \times t}$
Output: dictionary of $K$ high-level actions $D_K$
1. for $k = 1$ to $K$ do
2.   fit a dictionary and code: $\min_{W,s} \sum_{i=1}^{m} \lVert T_i - \sum_{j=1}^{n} W_j s_{i,j} \rVert_2^2 + \lambda \sum_{i,j} \lVert s_{i,j} \rVert_1$
3.   $\omega \leftarrow \arg\max_j \sum_{i=1}^{m} s_{i,j}$
4.   $D_k \leftarrow W_\omega$
5.   $T \leftarrow$ the compression of $T$ with $D_k$ (the procedure described above)
6. end for
7. return $D_K$
The result of this process is a set of (potentially nested) high-level actions that capture the statistical
structure of trajectories generated on a task. An agent’s action space can then be augmented to include
these high-level actions, which can facilitate transfer to tasks drawn from the same distribution.
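A rough Python sketch of the iterative coding-and-compression loop in Algorithm 1 is given below; the sparse-coding solver is left as an assumed helper, and selecting the most-used element is a simplification of the explained-variance criterion:

```python
import numpy as np

def sparse_skill_coding(trajectories, n_actions, n_skills, fit_sparse_code):
    """Schematic version of Algorithm 1.

    trajectories: list of integer action sequences
    fit_sparse_code: assumed helper that fits 2-step dictionary elements to the
        current trajectories and returns (elements, usage_counts).
    """
    dictionary = {}                      # skill id -> pair of (possibly high-level) actions
    next_id = n_actions
    for _ in range(n_skills):
        # 1. Fit a convolutional sparse code with 2-step dictionary elements.
        elements, usage = fit_sparse_code(trajectories)
        # 2. Keep the element that is used most (a stand-in for explained variance).
        best = tuple(elements[int(np.argmax(usage))])
        dictionary[next_id] = best
        # 3. Compress: replace every occurrence of the pair with the new symbol.
        trajectories = [replace_pairs(t, best, next_id) for t in trajectories]
        next_id += 1
    return dictionary

def replace_pairs(traj, pair, symbol):
    out, i = [], 0
    while i < len(traj):
        if i + 1 < len(traj) and (traj[i], traj[i + 1]) == pair:
            out.append(symbol)
            i += 2
        else:
            out.append(traj[i])
            i += 1
    return out
```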
4 RELATED WORK
Early work in hierarchical reinforcement learning demonstrated that well-designed sub-goals or
high-level actions can significantly speed the discovery of shortest-path solutions (Sutton et al., 1999;
Barto & Mahadevan, 2003; Dayan & Hinton, 1993) and facilitate transfer to related tasks (Konidaris
& Barto, 2007). Later work demonstrated the advantages of incorporating pre-defined sub-goals into
deep reinforcement learning (Kulkarni et al., 2016) or pre-learned skills (Tessler et al., 2017), but left
open the question of how to discover these sub-goals or skills automatically.
Recent work have attempted to discover these temporally extended actions by optimizing for reusable
behaviors shared across tasks (Frans et al., 2017), maximizing diversity in exploration (Florensa et al.,
2017; Eysenbach et al., 2018; Gregor et al., 2016; Achiam et al., 2018), or by finding bottlenecks
in demonstrations (Kipf et al., 2018; Co-Reyes et al., 2018), after which these temporally extended
actions are combined with a high-level policy to learn on downstream tasks.
However, in contexts in which bottleneck states are less apparent, approaches for end-to-end learning
of temporally extended actions and policies, such as options (Bacon et al., 2017; Harb et al., 2017)
frequently degenerate to learning either single-step options or only a single option for the entire
trajectory. On the other hand, approaches that mitigate this degeneracy by fixing the horizon length
of each sub-policy (Nachum et al., 2018; Frans et al., 2017). Furthermore, while in theory methods
such as options (Sutton et al., 1999) or hierarchies of abstract machines (Parr & Russell, 1998) could
learn nested behavior, in practice because the number of contexts grows exponentially with depth,
most approaches focus on learning two-level hierarchies, with the exception of (Fox et al., 2017)
which proposes a method for learning deeper nested hierarchies, but with a fixed number of options
available at each depth.
Nested structure is characteristic of problems in natural language processing (Socher et al., 2011)
or program induction (Parisotto et al., 2016), but approaches in these fields usually have access to
additional top-down supervision on tree structure. Our method discovers variable-length temporally
extended actions in a bottom-up fashion from demonstration, and we show that our method is
able to nest temporally extended actions and transfer to recursively structured environments where
bottlenecks are not that clearly apparent.
5 EXPERIMENTS AND RESULTS
In our experiments, we ask the following questions:
- Can sparse skill coding learn temporally extended actions that reflect the nested hierarchy of a task?
- Do the temporally extended actions learned from sparse skill coding better capture behavioral motifs than hierarchical RL methods that learn to identify sub-goals?
- Can an agent transfer these temporally extended actions to learn more quickly on a series of recursively more complex environments?
We find that in contrast to those learned in subgoal-based hierarchical approaches, the temporally
extended actions learned from sparse skill coding reflect commonly repeated patterns of behavior that
can be used to build a nested hierarchy, and such a nested hierarchy enables the agent to continually
transfer to recursively more complex environments.
To evaluate our approach, we consider the Lightbot domain (explained in further detail in Sec-
tion 5.1.1) and the classic four rooms domain. We compare with an option-critic baseline (Bacon
et al., 2017) trained with proximal policy optimization (Schulman et al., 2017).
5.1 EXPERIMENT 1: LEARNING SPARSE SKILLS FROM DEMONSTRATION
To understand the properties of representations learned with this method, we first present a qualitative
analysis of the representations learned by sparse skill coding performed on trajectories generated by
an expert policy, and contrast these learned representations with those learned via option-critic (Bacon
et al., 2017) on the same task.
5.1.1 TASKS
We compare the abstractions learned by sparse skill coding and option-critic on a task that highlights
the relevance of identifying behavioral “motifs” over subgoal states.
Lightbot: The Lightbot domain (Figure 1) is adapted from a game developed to teach children how
to program. For each level in the game, there exists a compact, hierarchical program that generates
the solution. In the original game, the objective is to find the shortest program that solves the level.
Whereas Sanborn et al. (2018) used the Lightbot domain to study hierarchical learning in humans,
we adapt the Lightbot game as a novel domain for hierarchical RL methods: the agent begins in a
random location and direction in the room and must navigate the room to turn on all of the lights
(blue tiles) using five basic actions: walk, jump, right, left, and light (which turns on the light). This domain presents a challenging sparse-reward task: the agent receives a positive reward only if it successfully turns on all lights.
Figure 1: The Lightbot domain.
5.1.2 SUB-GOALS VS. MOTIFS
The repeated patterns in the solutions for Lightbot puzzles serve to test whether methods that discover
nested hierarchical structure, such as ours, are able to learn re-usable temporally extended-actions
that better reflect the structure of the environment than methods that chain together subtrajectories
between sub-goals. Figure 2 visualizes the action sequences generated while optimizing a policy with
proximal policy optimization (PPO) (Schulman et al., 2017) in the Lightbot and Four Rooms domain.
In environments with nested hierarchical structure, such as the Lightbot domain, compressible
sequential structure emerges in the agent’s action sequences. This structure can be compactly encoded
with a short, hierarchical code. In domains more conducive to sub-goal approaches, such as Four
Rooms, behavioral motifs are less apparent; solutions chain together sequences of repeated actions
(e.g. [right, right, right], [down, down, down]). Such sequences could be compressed with
a run-length encoding scheme, but lack the nested structure that requires hierarchical compression
schemes.
Figure 2: Hierarchical structure in the trajectories of a PPO agent in the Lightbot domain.
Figure 3: Convergence of action trajectories in the four rooms domain. Converged trajectories do not
contain the hierarchically nested structure present in the Lightbot domain.
5.1.3 RESULTS
An expert policy was obtained with PPO on the Lightbot puzzle in Figure 4 under a shaped reward
structure, where +10 reward was received for every light turned on, and -1 for every other action.
The policy was trained to convergence with a learning rate of $10^{-5}$ for 10,000 episodes, with a
gradient update every 100 episodes and a maximum episode length of 100 timesteps. A batch of
1,000 trajectories was generated from the converged policy and encoded with sparse skill coding
for 8 iterations, yielding a set of 8 nested hierarchical actions. The action space of a new agent was
augmented with these 8 actions and its policy was trained to convergence.
We compare the skills learned with sparse skill coding to options learned with option-critic (Bacon
et al., 2017) trained with PPO. The same hyperparameters were used for both algorithms, with the
addition of a deliberation cost (Harb et al., 2017) of 0.05 for option-critic.
Figure 4 shows the normalized cumulative terminations per state for each option and skill learned
by these two methods in the Lightbot domain. Options learned with option-critic show some
specialization, but are highly redundant and fail to capture the nested structure inherent in the task.
Sparse skill coding learns separable skills that reflect the structure of the environment.
Figure 4: Normalized cumulative terminations per state for options learned via option-critic (left) and
skills learned via sparse skill coding (right).
5.2 EXPERIMENT 2: LEARNING SKILLS FOR CONTINUAL TRANSFER
A motivation for learning high-level temporally-extended actions in the first place is that it reduces
the cognitive cost of choosing a series of actions to the cognitive cost of choosing only one action.
Therefore, the potential benefit of discovering behavioral motifs as high-level actions is that such
high-level actions not only could be re-used in various related contexts, but could serve as primitives
for building even higher-level motifs for even more complex domains. The decision to add a high-
level action to the agent’s repertoire of skills pays upfront the cognitive cost of taking that particular
series of primitive actions, such that the agent need not pay such a cost when invoking the high-level
action for future learning.
We are interested in understanding the implications that the iterative encapsulation of higher and
higher-level actions have as the agent faces a task more complex than tasks it has trained on previously.
Environments that exhibit a recursive or fractal structure, such as the Tower of Hanoi, offer a natural
suite of tasks that grow rapidly in complexity from the perspective of taking primitive actions, but whose solutions are straightforward if sub-solutions to easier problems may be re-used. Many real-world problems have many nested layers of complexity, and such fractal environments boil this nested structure into its purest form, allowing us to take a first step towards understanding how an intelligent
agent may re-use primitive sub-solutions to enable learning on more complex versions of problems it
has encountered.
In the quantitative results that follow, we are not as interested in the asymptotic performance of SSC
compared to standard approaches for continual transfer as much as we are in the speed at which SSC
adapts as well as the compositional structure of the trajectories that SSC learns.
5.2.1 TASKS
We consider two domains, the Tower of Hanoi puzzle (Figure 5) and Fractal Lightbot (Figure 6).
These domains are organized into two levels each as follows:
| Level | Tower of Hanoi | Fractal Lightbot |
| --- | --- | --- |
| 0 | 2 disks | 1 cross |
| 1 | 3 disks | 2 crosses |
For level 1, we initialize an SSC agent with weights from a PPO agent trained on level 0 and with
an augmented action space created from encoding trajectories from the PPO agent trained on level
0. We also compare with PPO and option-critic agents that were (1) trained from scratch and (2)
transferred from the previous level 0.
Tower of Hanoi: The Tower of Hanoi is a classic puzzle that has been extensively studied in cognitive
psychology and planning (Anderson, 1990). In the Tower of Hanoi, the player must move a stack
of $n$ disks from one peg to another by moving each disk one at a time, with the restriction that the
player cannot place a larger disk on top of a smaller one. One of the notable properties of this task is
that the graph of its state space is a fractal resembling the Sierpinski triangle. Due to its cyclic nature,
the optimal solution to the task is a recursive algorithm, which requires $2^n - 1$ moves. We note that the recursive structure of this task can be exploited by an agent transferring its learning across tasks of increasing complexity, as the solutions to the $n$-step problem are contained within the solutions to the $(n+1)$-step problem.
We model each Tower of Hanoi puzzle as a sparse reward reinforcement learning problem in which
the agent receives a reward of 0 from the environment for every action taken and a reward of 10 for
successfully transferring the tower of disks. In addition, the agent incurs a cognitive cost of -1 for
every action taken. On each episode, the tower of disks is initialized on a random peg.
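A schematic of the per-step reward just described (the values are taken from the text; the function name is ours):

```python
def hanoi_step_reward(solved, env_reward=10.0, cognitive_cost=1.0):
    """Per-action reward: 0 from the environment on ordinary steps, +10 when the
    tower has been successfully transferred, minus a -1 cognitive cost charged
    for every action taken."""
    return (env_reward if solved else 0.0) - cognitive_cost
```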
Figure 5: (Left) The Tower of Hanoi. (Right) State space for the three disk problem.
Fractal Lightbot: Fractal Lightbot is an adaptation of the Lightbot puzzles built on top of the
Minigrid environment (Chevalier-Boisvert et al., 2018), which permits the use of images as state
representations. The dimensions of the observations are fixed as the complexity of the puzzles
increases; the agent’s observations are 9x9 images showing overhead views of the portion of the
environment that is directly in front of the agent, which changes as the agent moves around. Unlike
the original Lightbot game, we removed the possibility of tiles at multiple heights, and the agent
is able to turn on the light when it is in front of rather than on top of the light tile so that it is not
occluded.
We also model each Fractal Lightbot puzzle as a sparse reward reinforcement learning problem in
which the agent receives a reward of 0 from the environment for every action taken and a reward of
100 for successfully turning on all of the lights. In addition, the agent incurs a cognitive cost of -1
for every action taken. On each episode, the agent is initialized in a random location and direction.
Figure 6: Lightbot puzzles that grow in a fractal manner. Left: level 0. Right: level 1.
Figure 7: Transfer learning in the Tower of Hanoi. Each line averages over 3 different random seeds.
Error bars show 95% confidence intervals.
Figure 8: Transfer learning in Fractal Lightbot. Each line averages over 3 different random seeds.
Error bars show 95% confidence intervals.
5.2.2 RESULTS
Figure 7 compares SSC with our baselines on transferring from level 0 (2 disks) to level 1 (3 disks)
for Tower of Hanoi, and Figure 8 compares SSC with our baselines on transferring from level 0 (1
cross) to level 1 (2 crosses) for Fractal Lightbot. We observe that SSC performs much better than
option critic. SSC transfers slightly more slowly than PPO, possibly because exploring with long high-level actions is more costly.
6 DISCUSSION
Our goal is (1) to understand how to design an algorithm that can discover nested behavioral
hierarchies of arbitrary depth, with actions of arbitrary length and (2) understand how reducing the
cognitive cost of choosing actions with high-level actions affects transfer. Sparse skill coding is our
method for studying these questions. As our method is a bottom-up method for learning nested
hierarchies of temporally extended actions, we are able to avoid the computational complexity that
makes learning nested hierarchies of more than two levels difficult. We have also shown the distinction
between hierarchies characterized by subgoals and hierarchies characterized by recurring motifs. We
hope this paper motivates future work in learning nested behavioral hierarchies.
REFERENCES
Joshua Achiam, Harrison Edwards, Dario Amodei, and Pieter Abbeel. Variational option discovery
algorithms. arXiv preprint arXiv:1807.10299 , 2018.
John Robert Anderson. The adaptive character of thought . Psychology Press, 1990.
Pierre-Luc Bacon, Jean Harb, and Doina Precup. The option-critic architecture. In Association for
the Advancement of Artificial Intelligence , pp. 1726–1734, 2017.
Horace Barlow. Possible principles underlying the transformations of sensory messages. In Rosenblith
W (ed.), Sensory Communication , chapter 13, pp. 217–234. MIT press, 1961.
Andrew G Barto and Sridhar Mahadevan. Recent advances in hierarchical reinforcement learning.
Discrete Event Dynamic Systems , 13(4):341–379, 2003.
William G Chase and Herbert A Simon. Perception in chess. Cognitive psychology , 4(1):55–81,
1973.
Maxime Chevalier-Boisvert, Lucas Willems, and Suman Pal. Minimalistic gridworld environment
for openai gym. https://github.com/maximecb/gym-minigrid , 2018.
John D Co-Reyes, YuXuan Liu, Abhishek Gupta, Benjamin Eysenbach, Pieter Abbeel, and Sergey
Levine. Self-consistent trajectory autoencoder: Hierarchical reinforcement learning with trajectory
embeddings. arXiv preprint arXiv:1806.02813 , 2018.
Thomas M Cover and Joy A Thomas. Elements of information theory . John Wiley & Sons, 2012.
Peter Dayan and Geoffrey E Hinton. Feudal reinforcement learning. In Advances in neural information
processing systems , pp. 271–278, 1993.
Benjamin Eysenbach, Abhishek Gupta, Julian Ibarz, and Sergey Levine. Diversity is all you need:
Learning skills without a reward function. arXiv preprint arXiv:1802.06070 , 2018.
Tamar Flash and Binyamin Hochner. Motor primitives in vertebrates and invertebrates. Current
opinion in neurobiology , 15(6):660–666, 2005.
Carlos Florensa, Yan Duan, and Pieter Abbeel. Stochastic neural networks for hierarchical reinforce-
ment learning. arXiv preprint arXiv:1704.03012 , 2017.
Roy Fox, Sanjay Krishnan, Ion Stoica, and Ken Goldberg. Multi-level discovery of deep options.
arXiv preprint arXiv:1703.08294 , 2017.
Kevin Frans, Jonathan Ho, Xi Chen, Pieter Abbeel, and John Schulman. Meta learning shared
hierarchies. arXiv preprint arXiv:1710.09767 , 2017.
Karol Gregor, Danilo Jimenez Rezende, and Daan Wierstra. Variational intrinsic control. arXiv
preprint arXiv:1611.07507 , 2016.
Jean Harb, Pierre-Luc Bacon, Martin Klissarov, and Doina Precup. When waiting is not an option:
Learning options with a deliberation cost. arXiv preprint arXiv:1709.04571 , 2017.
Tomáš Hromádka, Michael R DeWeese, and Anthony M Zador. Sparse representation of sounds in
the unanesthetized auditory cortex. PLoS biology , 6(1):e16, 2008.
Thomas Kipf, Yujia Li, Hanjun Dai, Vinicius Zambaldi, Edward Grefenstette, Pushmeet Kohli, and
Peter Battaglia. Compositional imitation learning: Explaining and executing one task at a time.
arXiv preprint arXiv:1812.01483 , 2018.
Andrei N Kolmogorov. Three approaches to the quantitative definition of ‘information’. Problems of
information transmission , 1(1):1–7, 1965.
George Konidaris and Andrew G Barto. Building portable options: Skill transfer in reinforcement
learning. In International Joint Conference on Artificial Intelligence , volume 7, pp. 895–900, 2007.
Richard E Korf. Macro-operators: A weak method for learning. Artificial intelligence , 26(1):35–77,
1985.
Tejas D Kulkarni, Karthik Narasimhan, Ardavan Saeedi, and Josh Tenenbaum. Hierarchical deep
reinforcement learning: Integrating temporal abstraction and intrinsic motivation. In Advances in
neural information processing systems , pp. 3675–3683, 2016.
Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep
visuomotor policies. The Journal of Machine Learning Research , 17(1):1334–1373, 2016.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare,
Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control
through deep reinforcement learning. Nature , 518(7540):529, 2015.
Ofir Nachum, Shixiang Shane Gu, Honglak Lee, and Sergey Levine. Data-efficient hierarchical
reinforcement learning. In Advances in Neural Information Processing Systems , pp. 3303–3313,
2018.
Craig G Nevill-Manning and Ian H Witten. Identifying hierarchical structure in sequences: A
linear-time algorithm. Journal of Artificial Intelligence Research , 7:67–82, 1997.
Bruno A Olshausen and David J Field. Emergence of simple-cell receptive field properties by learning
a sparse code for natural images. Nature , 381(6583):607, 1996.
Emilio Parisotto, Abdel-rahman Mohamed, Rishabh Singh, Lihong Li, Dengyong Zhou, and Pushmeet
Kohli. Neuro-symbolic program synthesis. arXiv preprint arXiv:1611.01855 , 2016.
Ronald Parr and Stuart J Russell. Reinforcement learning with hierarchies of machines. In Advances
in neural information processing systems , pp. 1043–1049, 1998.
Cindy Poo and Jeffry S Isaacson. Odor representations in olfactory cortex: sparse coding, global
inhibition, and oscillations. Neuron , 62(6):850–861, 2009.
Sophia Sanborn, David Bourgin, Michael Chang, and Thomas Griffiths. Representational efficiency
outweighs action efficiency in human program induction. In Proceedings of the Annual Meeting of
the Cognitive Science Society , 2018.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy
optimization algorithms. arXiv preprint arXiv:1707.06347 , 2017.
David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche,
Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering
the game of go with deep neural networks and tree search. nature , 529(7587):484, 2016.
Herbert A Simon. The architecture of complexity. In Facets of systems science , pp. 457–476. Springer,
1991.
Richard Socher, Cliff C Lin, Chris Manning, and Andrew Y Ng. Parsing natural scenes and natural
language with recursive neural networks. In Proceedings of the 28th international conference on
machine learning (ICML-11) , pp. 129–136, 2011.
Alec Solway, Carlos Diuk, Natalia Córdova, Debbie Yee, Andrew G Barto, Yael Niv, and Matthew M
Botvinick. Optimal behavioral hierarchy. PLoS computational biology , 10(8):e1003779, 2014.
Martin Stolle and Doina Precup. Learning options in reinforcement learning. In International
Symposium on abstraction, reformulation, and approximation , pp. 212–223. Springer, 2002.
Richard S Sutton, Doina Precup, and Satinder Singh. Between MDPs and semi-MDPs: A framework
for temporal abstraction in reinforcement learning. Artificial intelligence , 112(1-2):181–211, 1999.
Chen Tessler, Shahar Givony, Tom Zahavy, Daniel J Mankowitz, and Shie Mannor. A deep hierar-
chical approach to lifelong learning in minecraft. In Thirty-First AAAI Conference on Artificial
Intelligence , 2017.
William E Vinje and Jack L Gallant. Sparse coding and decorrelation in primary visual cortex during
natural vision. Science , 287(5456):1273–1276, 2000.
A DETAILS FOR EXPERIMENT 2
A.1 AGENT DETAILS
Tower of Hanoi: Observations for an m-disk, 3-peg Hanoi task are represented with $m$ 3-vectors encoding the location of each disk. Each action is parameterized as (source peg, target peg), which automatically moves the topmost disk on the source peg on top of the topmost disk of the target peg. Observations are encoded with a 3-layer fully-connected network with a hidden dimension of 256 units and ReLU activations at each layer. The last layer produces the action distribution.
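A possible PyTorch rendering of this encoder is sketched below; the exact layer arrangement is an assumption beyond the sizes and activations stated above:

```python
import torch.nn as nn

class HanoiPolicy(nn.Module):
    """3-layer fully-connected encoder with 256 hidden units and ReLU activations,
    ending in an action distribution over (source peg, target peg) moves."""
    def __init__(self, n_disks: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_disks * 3, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, n_actions),
        )

    def forward(self, obs):
        # obs: flattened m x 3 disk-location encoding
        return self.net(obs).softmax(dim=-1)
```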
Fractal Lightbot: Observations for the Fractal Lightbot task are encoded with a 3-layer CNN with
hidden dimensions of 16, 32, and 64 to encode the image observations, with kernels of size (2, 2),
stride of 1, and 2 fully-connected output layers of 256 dimensions, with ReLU activations at every
layer. The last layer produces the action distribution.
A.2 TRAINING DETAILS
For level 1, we initialize an SSC agent with weights from a PPO agent trained on level 0 and with an
augmented action space created from encoding trajectories from the PPO agent trained on level 0.
We compare with the following baselines:
- A PPO agent trained on level 1 from scratch.
- A PPO agent trained on level 1 with weights initialized from training a PPO agent on level 0.
- An option-critic agent trained on level 1 from scratch.
- An option-critic agent trained on level 1 with weights initialized from training an option-critic agent on level 0.
SSC is trained using PPO. Because the environments are all sparse reward environments, we collect
the minimum amount of whole episodes whose aggregate number of transitions is greater than or
equal to 4096 before doing every gradient update. For PPO we use a clip ratio of 0.1 and a weight
decay penalty of 1e-5. Option critic was initialized with 4 options.
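The settings above can be summarized as an illustrative configuration (the dictionary format and key names are ours, not the paper's):

```python
training_config = {
    "min_transitions_per_update": 4096,   # collect whole episodes until this many transitions
    "ppo_clip_ratio": 0.1,
    "weight_decay": 1e-5,
    "option_critic_num_options": 4,
}
```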
The hyperparameters for each agent to converge were found using an informal search:
- For Fractal Lightbot, we used a learning rate decay of 0.99 every 100 updates. For Tower of Hanoi, we used a learning rate decay of 0.95 every 100 updates.
- We did not set a fixed horizon for the episodes and trained all models for 1,000,000 transitions. For training PPO and option critic from scratch on level 1 in Fractal Lightbot, we trained for 3,000,000 transitions because this was the amount needed for the agents to converge.
- When transferring from level 0 to level 1, we found that agents initialized from the last checkpoint from level 0 had a difficult time exploring the new level because (1) the environment has sparse rewards and (2) the weights were optimized for level 0 only. Therefore, we initialize all agents from a checkpoint saved one-third of the way through training, before the return curve plateaus. For every task this point occurred after the agent had trained on about 4000 episodes, so we used the checkpoint at 4000 episodes as a standard.
|
68ce188b-7405-484c-81cd-02f3d2d299cb | trentmkelly/LessWrong-43k | LessWrong | Hard work is irritating
My understanding of getting into a flow state (or “being in the zone”) was completely wrong.
Authors like Steve Kotler and Csíkszentmihályi describe achieving flow as a structural problem. To achieve flow we must match our skills to the challenges in front of us, they write. If the challenges are too easy, we’ll get bored, if too difficult, we’ll quit from frustration.
Start-up founders will speak of passion as the gateway to flow. If you’re passionate, you’ll dance into the office ready to get back to it. If you’re feeling resistance … well, maybe you’re not passionate or earnest enough.
My personal day-to-day experiences are a stark disconnect from the popular descriptions of flow. I reliably find it irritating to sit down and start working. My mind wanders, opening up HackerNews, Reddit, and Youtube tabs to stave off the negative feelings.
What I learned and want to share with you, dear reader, is the wisdom that irritation is a necessary starter to getting down to work. It’s completely normal to “not be into it” regardless of how passionate you are or how well the challenges are structured. The initial resistance is part of the process, not a sign of a defective process or a defective you.
Why oh why?
> TL;DR Noradrenaline
Noradrenaline is one of the major brain chemicals that builds the initial resistance to getting things done. It’s the “alertness” neurotransmitter (fancy word for “chemical messenger”) that tells different systems in the brain to be at attention when we’re ready to do focused, hard work. The problem: it’s the same messenger that yells across our system when we’re triggering our fight or flight response.
Evolutionarily speaking, fight or flight is effective because we want to escape the feeling of irritation. Racing heart rate, shortness of breath, paranoid alertness–the state of fight or flight is painful. Us humans, as pleasure-seeking monkeys, would like to avoid pain as often as we can.
This very blog post is a month too late. I have i |
b77cb4d1-0690-496b-a803-faaa0e570de3 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Racing through a minefield: the AI deployment problem
In previous pieces, I argued that there's a real and large risk of AI systems' developing dangerous goals of their own and defeating all of humanity - at least in the absence of specific efforts to prevent this from happening. I discussed [why it could be hard to build AI systems without this risk](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/NbiHKTN5QhFFfjjm5/) and [how it might be doable](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/rJRw78oihoT5paFGd).
The “AI alignment problem” refers[1](#fn1) to a *technical* problem: how can we design a powerful AI system that behaves as intended, rather than forming its [own dangerous aims](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/vGsRdWzwjrFgCXdMn/)? This post is going to outline a **broader political/strategic problem, the “deployment problem”:** if you’re someone who might be on the cusp of developing extremely powerful (and maybe dangerous) AI systems, what should you … do?
The basic challenge is this:
* If you race forward with building and using powerful AI systems as fast as possible, you might cause a global catastrophe (see links above).
* If you move too slowly, though, you might just be waiting around for *someone else less cautious* to develop and deploy powerful, dangerous AI systems.
* And if you can get to the point where your own systems are both powerful and safe … what then? Other people still might be less-cautiously building dangerous ones - what should we do about that?
My current analogy for the deployment problem is **racing through a minefield: each player is hoping to be ahead of others, but anyone moving too quickly can cause a disaster.** (In this minefield, a single mine is big enough to endanger *all* the racers.)
This post gives a high-level overview of how I see the kinds of developments that can lead to a good outcome, despite the “racing through a minefield” dynamic. It is distilled from a more detailed [post on the Alignment Forum](https://www.alignmentforum.org/posts/vZzg8NS7wBtqcwhoJ/nearcast-based-deployment-problem-analysis).
First, I’ll flesh out how I see the challenge we’re contending with, based on the premises above.
Next, I’ll list a number of things I hope that “cautious actors” (AI companies, governments, etc.) might do in order to prevent catastrophe.
**Many of the actions I’m picturing are not the kind of things normal market and commercial incentives would push toward, and as such, I think there’s room for a ton of variation in whether the “racing through a minefield” challenge is handled well.** Whether key decision-makers understand things like the case for [misalignment risk](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/vGsRdWzwjrFgCXdMn/) (and in particular, [why it might be hard to measure](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/NbiHKTN5QhFFfjjm5/)) - and are willing to lower their own chances of “winning the race” to improve the odds of a good outcome for everyone - could be crucial.
The basic premises of “racing through a minefield”
--------------------------------------------------
This piece is going to lean on [previous pieces](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w) and assume all of the following things:
* **Transformative AI soon.** This century, something like [PASTA](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd/p/AmxxnazJcBWzWEeqj/) could be developed: AI systems that can effectively automate everything humans do to advance science and technology. This brings the potential for explosive progress in science and tech, getting us more quickly than most people imagine to a deeply unfamiliar future. I’ve argued for this possibility in the [Most Important Century series](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd).
* **Misalignment risk.** As argued previously, there’s a significant risk that such AI systems could end up with [misaligned goals of their own](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/vGsRdWzwjrFgCXdMn/), leading them to [defeat all of humanity](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/6LTh4foNuC3NdtmZH/). And it could take [significant extra effort](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/rJRw78oihoT5paFGd/) to get AI systems to be safe.
* **Ambiguity.** As argued previously, it could be [hard to know whether AI systems are dangerously misaligned](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/NbiHKTN5QhFFfjjm5/), for a number of reasons. In particular, when we train AI systems not to behave dangerously, we might be unwittingly training them to *obscure their dangerous potential from humans*, and take dangerous actions [only when humans would not be able to stop them](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/NbiHKTN5QhFFfjjm5#_2__The_King_Lear_problem__how_do_you_test_what_will_happen_when_it_s_no_longer_a_test_). At the same time, I expect powerful AI systems will present massive opportunities to make money and gain power, such that many people will want to race forward with building and deploying them as fast as possible (perhaps even if they believe that doing so is risky for the world!)
So, one can imagine a scenario where some company is in the following situation:
* It has good reason to think it’s on the cusp of developing extraordinarily powerful AI systems.
* If it deploys such systems hastily, global disaster could result.
* But if it moves too *slowly*, other, less cautious actors could deploy dangerous systems of their own.
That seems like a tough enough, high-stakes-enough, and likely enough situation that it’s worth thinking about how one is supposed to handle it.
One simplified way of thinking about this problem:
* We might classify “actors” (companies, government projects, whatever might develop powerful AI systems or play an important role in how they’re deployed) as **cautious** (taking misalignment risk very seriously) or **incautious** (not so much).
* Our basic hope is that **at any given point in time, cautious actors collectively have the power to “contain” incautious actors.** By “contain,” I mean: stop them from deploying misaligned AI systems, and/or stop the misaligned systems from causing a catastrophe.
* Importantly, **it could be important for cautious actors to *use powerful AI systems* to help with “containment” in one way or another.** If cautious actors refrain from AI development entirely, it seems likely that incautious actors will end up with more powerful systems than cautious ones, which doesn’t seem good.
In this setup, **cautious actors need to move fast enough that they can’t be overpowered by others’ AI systems, but slowly enough that they don’t cause disaster themselves.** Hence the “racing through a minefield” analogy.
What success looks like
-----------------------
In a [non-Cold-Takes piece](https://www.alignmentforum.org/posts/vZzg8NS7wBtqcwhoJ/nearcast-based-deployment-problem-analysis), I explore the possible actions available to cautious actors to win the race through a minefield. This section will summarize the general categories - and, crucially, why we shouldn’t expect that companies, governments, etc. will do the right thing simply from natural (commercial and other) incentives.
I’ll be going through each of the following:
* **Alignment (charting a safe path through the minefield).** Putting lots of effort into technical work to reduce the risk of misaligned AI.
* **Threat assessment (alerting others about the mines).** Putting lots of effort into *assessing* the risk of misaligned AI, and potentially demonstrating it (to other actors) as well.
* **Avoiding races (to move more cautiously through the minefield).** If different actors are racing to deploy powerful AI systems, this could make it unnecessarily hard to be cautious.
* **Selective information sharing (so the incautious don’t catch up).** Sharing some information widely (e.g., technical insights about how to reduce misalignment risk), some selectively (e.g., demonstrations of how powerful and dangerous AI systems might be), and some not at all (e.g., the specific code that, if accessed by a hacker, would allow the hacker to deploy potentially dangerous AI systems themselves).
* **Global monitoring (noticing people about to step on mines, and stopping them).** Working toward worldwide state-led monitoring efforts to identify and prevent “incautious” projects racing toward deploying dangerous AI systems.
* **Defensive deployment (staying ahead in the race).** Deploying AI systems only when they are unlikely to cause a catastrophe - but also deploying them with urgency once they are safe, in order to help prevent problems from AI systems developed by less cautious actors.
### Alignment (charting a safe path through the minefield[2](#fn2))
I [previously](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/rJRw78oihoT5paFGd/) wrote about some of the ways we might reduce the dangers of advanced AI systems. Broadly speaking:
* Cautious actors might try to primarily build [limited](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/rJRw78oihoT5paFGd#Limited_AI) AI systems - AI systems that lack the kind of [ambitious aims that lead to danger](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/vGsRdWzwjrFgCXdMn). They might ultimately be able to use these AI systems to do things like automating further safety research, making future less-limited systems safer.
* Cautious actors might use [AI checks and balances](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/rJRw78oihoT5paFGd#AI_checks_and_balances) - that is, using some AI systems to supervise, critique and identify dangerous behavior in others, with special care taken to make it hard for AI systems to coordinate with each other against humans.
* Cautious actors might use a variety of other techniques for making AI systems safer - particularly techniques that incorporate “[digital neuroscience](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/rJRw78oihoT5paFGd#Digital_neuroscience),” gauging the safety of an AI system by “reading its mind” rather than simply by watching out for dangerous behavior (the latter might be unreliable, as noted above).
A key point here is that **making AI systems safe enough to commercialize (with some initial success and profits) could be much less (and different) effort than making them robustly safe (no lurking risk of global catastrophe).** The basic reasons for this are covered in my [previous post on difficulties with AI safety research](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/NbiHKTN5QhFFfjjm5/). In brief:
* If AI systems *behave* dangerously, we can “train out” that behavior by providing negative reinforcement for it.
* The concern is that when we do this, we might be unwittingly training AI systems to *obscure their dangerous potential from humans*, and take dangerous actions *only when humans would not be able to stop them*. (I [call this](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/NbiHKTN5QhFFfjjm5#_2__The_King_Lear_problem__how_do_you_test_what_will_happen_when_it_s_no_longer_a_test_) the “King Lear problem: it's hard to know how someone will behave when they have power over you, based only on observing how they behave when they don't.”)
* So we could end up with AI systems that behave safely and helpfully as far as we can tell in normal circumstances, while ultimately having [ambitious, dangerous “aims”](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/vGsRdWzwjrFgCXdMn/) that they pursue when they become powerful enough and have the right opportunities.
Well-meaning AI companies with active ethics boards might do a lot of AI safety work, by training AIs not to behave in unhelpful or dangerous ways. But if they want to address the risks I’m focused on here, this could require safety measures that look very different - e.g., measures more reliant on “checks and balances” and “digital neuroscience.”
### Threat assessment (alerting others about the mines)
In addition to *making AI systems safer*, cautious actors can also put effort into *measuring and demonstrating how dangerous they are* (or aren’t).
For the same reasons given in the previous section, it could take special efforts to find and demonstrate the kinds of dangers I’ve been discussing. Simply monitoring AI systems in the real world for bad behavior might not do it. It may be necessary to examine (or manipulate) their [digital brains](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/rJRw78oihoT5paFGd#Digital_neuroscience),[3](#fn3) design AI systems [specifically to audit other AI systems for signs of danger](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/rJRw78oihoT5paFGd#AI_checks_and_balances); deliberately train AI systems to demonstrate particular dangerous patterns (while not being *too* dangerous!); etc.
Learning and demonstrating that the danger is high could help convince many actors to move more slowly and cautiously. Learning that the danger is *low* could lessen some of the tough tradeoffs here and allow cautious actors to move forward more decisively with developing advanced AI systems; I think this could be a good thing in terms of [what sorts of actors lead the way on transformative AI](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd/p/Lbtcjfxhrs8kfKK2M#The__competition__frame).
### Avoiding races (to move more cautiously through the minefield)
Here’s a dynamic I’d be sad about:
* Company **A** is getting close to building very powerful AI systems. It would love to move slowly and be careful with these AIs, but it worries that if it moves too slowly, Company **B** will get there first, have less caution, and do some combination of “causing danger to the world” and “beating company **A** if the AIs turn out safe.”
* Company **B** is getting close to building very powerful AI systems. It would love to move slowly and be careful with these AIs, but it worries that if it moves too slowly, Company **A** will get there first, have less caution, and do some combination of “causing danger to the world” and “beating company **B** if the AIs turn out safe.”
(Similar dynamics could apply to Country A and B, with national AI development projects.)
If Companies A and B would both “love to move slowly and be careful” if they could, it’s a shame that they’re both racing to beat each other. Maybe there’s a way to avoid this dynamic. For example, perhaps Companies A and B could strike a deal - anything from “collaboration and safety-related information sharing” to a merger. This could allow both to focus more on precautionary measures rather than on beating the other. Another way to avoid this dynamic is discussed below, under [standards and monitoring.](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/XRphCh6NbfQiDF3Nt#Global_monitoring__noticing_people_about_to_step_on_mines__and_stopping_them_)
“Finding ways to avoid a furious race” is not the kind of dynamic that emerges naturally from markets! In fact, working together along these lines would have to be well-designed to avoid running afoul of antitrust regulation.
### Selective information sharing - including security (so the incautious don’t catch up)
Cautious actors might want to share certain kinds of information quite widely:
* It could be crucial to raise awareness about the dangers of AI (which, as I’ve argued, won’t necessarily be obvious).
* They might also want to widely share information that could be useful for reducing the risks (e.g., [safety techniques](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/rJRw78oihoT5paFGd/) that have worked well.)
At the same time, as long as there are incautious actors out there, information can be dangerous too:
* Information about *what cutting-edge AI systems can do* - especially if it is powerful and impressive - could spur incautious actors to race harder toward developing powerful AI of their own (or give them an idea of *how* to build powerful systems, by giving them an idea of what sorts of abilities to aim for).
* An AI’s “weights” (you can think of this sort of like its source code, though not exactly[4](#fn4)) are potentially very dangerous. If hackers (including from a state cyberwarfare program) gain unauthorized access to an AI’s weights, this could be tantamount to stealing the AI system, and the actor that steals the system could be much less cautious than the actor who built it. **Achieving a level of cybersecurity that rules this out [could be](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/6LTh4foNuC3NdtmZH/#fn15) extremely difficult,** and potentially well beyond what one would normally aim for in a commercial context.
The lines between these categories of information might end up fuzzy. Some information might be useful for demonstrating the dangers *and* capabilities of cutting-edge systems, or useful for making systems safer *and* for building them in the first place. So there could be a lot of hard judgment calls here.
This is another area where I worry that commercial incentives might not be enough on their own. For example, it is usually important for a commercial project to have some reasonable level of security against hackers, but not necessarily for it to be able to resist well-resourced attempts by states to steal its intellectual property.
### Global monitoring (noticing people about to step on mines, and stopping them)
Ideally, cautious actors would learn of every case where someone is building a dangerous AI system (whether purposefully or unwittingly), and be able to stop the project. If this were done reliably enough, it could take the teeth out of the threat; a partial version could buy time.
Here’s one vision for how this sort of thing could come about:
* We (humanity) develop a reasonable set of tests for whether an AI system might be dangerous.
* Today’s leading AI companies self-regulate by committing not to build or deploy a system that’s dangerous according to such a test (e.g., see Google’s [2018 statement](https://www.theweek.in/news/sci-tech/2018/06/08/google-wont-deploy-ai-to-build-military-weapons-ichai.html), "We will not design or deploy AI in weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people”). Even if some people at the companies would like to do so, it’s hard to pull this off once the company has committed not to.
* As more AI companies are started, they feel soft pressure to do similar self-regulation, and refusing to do so is off-putting to potential employees, investors, etc.
* Eventually, similar principles are incorporated into various government regulations and enforceable treaties.
* Governments could monitor for dangerous projects using regulation and even overseas operations. E.g., today the US monitors (without permission) for various signs that other states might be developing nuclear weapons, and might try to stop such development with methods ranging from threats of sanctions to [cyberwarfare](https://en.wikipedia.org/wiki/Stuxnet) or even military attacks. It could do something similar for any AI development projects that are using huge amounts of compute and haven’t volunteered information about their safety practices.
If the situation becomes very dire - i.e., it seems that there’s a high risk of dangerous AI being deployed imminently - I see the latter bullet point as one of the main potential hopes. In this case, governments might have to take drastic actions to monitor and stop dangerous projects, based on limited information.
### Defensive deployment (staying ahead in the race)
I’ve emphasized the importance of caution: not deploying AI systems when we can’t be confident enough that they’re safe.
But when confidence *can* be achieved (how much confidence? See footnote[5](#fn5)), **powerful-and-safe AI can help reduce risks from other actors** in many possible ways.
Some of this would be by helping with all of the above. Once AI systems can do a significant fraction of the things humans can do today, they might be able to contribute to each of the activities I’ve listed so far:
* **[Alignment](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/XRphCh6NbfQiDF3Nt#Alignment__charting_a_safe_path_through_the_minefield_2__).** AI systems might be able to contribute to AI safety research (as humans do), producing increasingly robust techniques for reducing risks.
* **[Threat assessment](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/XRphCh6NbfQiDF3Nt#Threat_assessment__alerting_others_about_the_mines_)**. AI systems could help produce evidence and demonstrations about potential risks. They could be potentially useful for tasks like “Produce detailed explanations and demonstrations of possible sequences of events that could lead to AIs doing harm.”
* **[Avoiding races](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/XRphCh6NbfQiDF3Nt#Avoiding_races__to_move_more_cautiously_through_the_minefield_).** AI projects might make deals in which e.g. each project is allowed to use its AI systems to monitor for signs of risk from the others (ideally such systems would be designed to *only* share relevant information).
* **[Selective information sharing](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/XRphCh6NbfQiDF3Nt#Selective_information_sharing___including_security__so_the_incautious_don_t_catch_up_).** AI systems might contribute to strong security (e.g., by finding and patching security holes), and to dissemination (including by helping to better communicate about the level of risk and the best ways to reduce it).
* **[Global monitoring](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/XRphCh6NbfQiDF3Nt#Global_monitoring__noticing_people_about_to_step_on_mines__and_stopping_them_).** AI systems might be used (e.g., by governments) to monitor for signs of dangerous AI projects worldwide, and even to interfere with such projects. They might also be used as part of large voluntary self-regulation projects, along the lines of what I wrote just above under “Avoiding races.”
Additionally, **if safe AI systems are in wide use, it could be harder for dangerous (similarly powerful) AI systems to do harm.** This could be via a wide variety of mechanisms. For example:
* If there’s widespread use of AI systems to patch and find security holes, similarly powered AI systems might have a harder time finding security holes to [cause trouble with](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/6LTh4foNuC3NdtmZH/).
* Misaligned AI systems could have more trouble making money, gaining allies, etc. in worlds where they are competing with similarly powerful but safe AI systems.
So?
---
I’ve gone into some detail about why we might have a challenging situation (“racing through a minefield”) if powerful AI systems (a) are developed fairly soon; (b) present significant risk of [misalignment leading to humanity being defeated](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/vGsRdWzwjrFgCXdMn/); (c) are not particularly easy to measure the safety of.
I’ve also talked about what I see as some of the key ways that “cautious actors” concerned about misaligned AI might navigate this situation.
I talk about some of the implications in my [more detailed piece](https://alignmentforum.org/posts/vZzg8NS7wBtqcwhoJ/nearcast-based-deployment-problem-analysis). Here I’m just going to name a couple of observations that jump out at me from this analysis:
**This seems hard.** If we end up in the future envisioned in this piece, I imagine this being extremely stressful and difficult. I’m picturing a world in which many companies, and even governments, can see the huge power and profit they might reap from deploying powerful AI systems *before others* - but we’re hoping that they instead move with caution (but not too much caution!), take the kinds of actions described above, and that ultimately cautious actors “win the race” against less cautious ones.
Even if AI alignment ends up being *relatively* easy - such that a given AI project can make safe, powerful systems with about 10% more effort than making dangerous, powerful systems - the situation *still* looks pretty nerve-wracking, because of how many different players could end up trying to build systems of their own without putting in that 10%.
**A lot of the most helpful actions might be “out of the ordinary.”** When racing through a minefield, I hope key actors will:
* Put more effort into alignment, threat assessment, and security than is required by commercial incentives;
* Consider measures for [avoiding races](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/XRphCh6NbfQiDF3Nt#Avoiding_races__to_move_more_cautiously_through_the_minefield_) and [global monitoring](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/XRphCh6NbfQiDF3Nt#Global_monitoring__noticing_people_about_to_step_on_mines__and_stopping_them_) that could be very unusual, even unprecedented.
* Do all of this in the possible presence of ambiguous, confusing information about the risks.
As such, it could be **very important whether key decision-makers (at both companies and governments) understand the risks and are prepared to act on them.** Currently, I think we’re unfortunately very far from a world where this is true.
Additionally, I think **AI projects can and should be taking measures *today* to make unusual-but-important measures more practical in the future.** This could include things like:
* Getting practice with [selective information sharing](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/XRphCh6NbfQiDF3Nt#Selective_information_sharing___including_security__so_the_incautious_don_t_catch_up_). For example, building internal processes to decide on whether research should be published, rather than having a rule of “Publish everything, we’re like a research university” or “Publish nothing, we don’t want competitors seeing it.”
+ I expect that early attempts at this will often be clumsy and get things wrong!
* Getting practice with ways that [AI companies could avoid races.](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/XRphCh6NbfQiDF3Nt#Avoiding_races__to_move_more_cautiously_through_the_minefield_)
* Getting practice with [threat assessment](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/XRphCh6NbfQiDF3Nt#Threat_assessment__alerting_others_about_the_mines_). Even if today’s AI systems don’t seem like they could possibly be dangerous yet … how sure are we, and how do we know?
* Prioritizing building AI systems that could do especially helpful things, such as contributing to AI safety research and threat assessment and patching security holes.
* **Establishing [governance](https://forum.effectivealtruism.org/posts/hxTFAetiiSL7dZmyb/ideal-governance-for-companies-countries-and-more/) that is capable of making hard, non-commercially-optimal decisions for the good of humanity.** A standard corporation could be sued for *not* deploying AI that poses a risk of [global catastrophe](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/6LTh4foNuC3NdtmZH/) - if this means a sacrifice for its bottom line. And a lot of the people making the final call at AI companies might be primarily thinking about their duties to shareholders (or simply unaware of the potential stakes of powerful enough AI systems). I’m excited about AI companies that are investing heavily in setting up governance structures - and investing in executives and [board members](https://forum.effectivealtruism.org/posts/c3y6khh7mxiWrDyeb/nonprofit-boards-are-weird) - capable of making the hard calls well.
Footnotes
---------
1. Generally, or at least, this is what I’d like it to refer to. [↩](#fnref1)
2. Thanks to [beta reader](https://www.cold-takes.com/beta-readers-are-great/) Ted Sanders for suggesting this analogy in place of the older one, “removing mines from the minefield.”
[↩](#fnref2)
3. One genre of testing that might be interesting: manipulating an AI system’s “digital brain” in order to *simulate* circumstances in which it has an opportunity to take over the world, and seeing whether it does so. This could be a way of dealing with the [King Lear problem](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/NbiHKTN5QhFFfjjm5#_2__The_King_Lear_problem__how_do_you_test_what_will_happen_when_it_s_no_longer_a_test_). More [here](https://www.alignmentforum.org/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Out_of_distribution_robustness). [↩](#fnref3)
4. Modern AI systems tend to be trained with [lots of trial-and-error](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/rJRw78oihoT5paFGd/#Box4). The actual code that is used to train them might be fairly simple and not very valuable on its own; but an expensive training process then generates a set of “weights” which are ~all one needs to make a fully functioning, relatively cheap copy of the AI system. [↩](#fnref4)
5. I mean, this is part of the challenge. In theory, you should deploy an AI system if the risks of not doing so are greater than the risks of doing so. That’s going to depend on hard-to-assess information about how safe your system is *and* how dangerous and imminent others’ are, and it’s going to be easy to be biased in favor of “My systems are safer than others’; I should go for it.” Seems hard. [↩](#fnref5) |
cf3ed1bb-f6a0-4b95-b3cf-857272627759 | trentmkelly/LessWrong-43k | LessWrong | Interested in working from a new Boston AI Safety Hub?
TL;DR: Submit your expression of interest for working at a potential Constellation/LISA-type AI safety co-working space in Boston!
We’re trying to gauge interest for a potential new AI safety hub in Boston (specifically in Cambridge, MA). This project would create a dynamic center that can host AI safety talent from all over. We hope to use this public expression of interest to help us decide whether to pursue this project and whether there’s sufficient demand to merit leasing out the space.
The space is really beautiful, unique, and welcoming. It features a community-centric design and offers flexible spaces for collaboration. It’s a good mix of professional office space alongside some cozier areas.
Click here to view more pictures of the space.
Please express your interest here
What kind of people are we looking for?
We’re primarily seeking professionals working on AI safety, policy, or security. This includes researchers, fellows, fieldbuilders, and policy and operations folks. Our focus is on individuals who currently live in the Boston/New England area, as well as those who are willing to relocate.
That said, we expect the space will also host undergraduate and graduate students, as well as professors in the Boston area who are working on AI safety in some professional capacity. We’d be interested to hear from you if you are a student or academic considering going into the field!
In addition to hosting individuals, we believe this space could be great for virtual AI safety organizations, especially newer or growing orgs that don’t yet have a dedicated space or are looking for more space. We also think the office is well-suited for running research or fellowship programs.
Some facts about the office:
* It includes 24 rooms (most of which are private offices) along with a few larger meeting rooms and communal spaces with doors.
* There are 16 dedicated desks in open areas and 20 desks within rooms. There are also many small, movable tables.
* It |
fa71c2dc-ced6-4833-ba62-b13ca05d4bd9 | trentmkelly/LessWrong-43k | LessWrong | Lean Startup Reading Comprehension Quiz
Some colleagues and I recently read the Lean Startup together, and thought it'd be nice to have some reading comprehension questions to check if we took away the same things. Jacob made some questions, and I thought they might make an interesting LW post for other people who had read Lean Startup.
Feel free to reply with spoiler-blocked (Begin a line with >!) answers.
1. What is the difference between learning and validated learning?
2. True or false: "According to the author, a startup with exponential growth in metrics like revenue and number of customers is doing well." Explain your answer.
3. Finish the sentence: "almost every lean startup technique we've discussed so far works its magic in two ways:"
4. Ries argues that startups should pay more attention to innovation accounting than traditional accounting. Name two ways in which startups can change their financial metrics to accomplish innovation accounting.
5. Describe, concretely, what a car company's supply chain would look like if it used push vs pull inventory.
6. Ries applies the pull inventory model to startups. But what is the unit that is being pulled, and where does it obtain the "pull signal"?
7. True or false: "Lean manufacturing is meant to give manufacturers an advantage in domains of extreme uncertainty". Explain your answer.
8. True or false: "Lean manufacturing is about harnessing the power of economies-of-scale."
9. Ries discusses an anecdote of a family folding letters. The dad folds, stamps, and seals one letter at a time; whereas the kids begin by folding all letters, then stamping all, etc. Name two reasons Ries' considers the dad's method superior.
10. True or false: "A consequence of lean manufacturing is that the performance of each employee as an isolated unit, in terms of output per unit of time, might *decrease*." Explain your answer.
11. Give an example of what a “large batch death spiral” might look like in practice.
12. According to Ries, the “Five why’s” method is |
3d5033e1-4390-4a2d-bd67-910986c1d96f | trentmkelly/LessWrong-43k | LessWrong | Double indifference is better indifference
A putative new idea for AI control; index here.
Note: this post replaces and supersedes the previous one which is now obsolete.
Hacking utility indifference
Utility indifference was a technique for corrigibility: an attempt to allow an AI to safely change its values, without seeking to provoke or avoid the change. The idea is that, when the AI's utility changes from u to v at time t, it maximises a meta-utility U such that
* $U_{\leq t} = u$.
* $U_{>t} = v + E(u|u\to u) - E(v|u\to v)$.
----------------------------------------
Here u→u designates the event that u remains unchanged, while u→v designates the change. This is a permitted utility change - one that we have set the agent up to accept - rather than just any old change in utility, which the agent will still attempt to resist.
As has been shown, adding those expectation terms means the AI will not seek to provoke or resist such a utility change - it is indifferent to the change. Those expectation terms can be designated compensatory rewards.
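As a quick numerical illustration of that bookkeeping (my own sketch, not from the original post; the values are arbitrary):

```python
# Illustrative numbers only (assumed for this sketch).
E_u_if_unchanged = 10.0   # the agent's estimate E(u | u -> u)
E_v_if_changed = 3.0      # the agent's estimate E(v | u -> v)

compensation = E_u_if_unchanged - E_v_if_changed    # the compensatory reward added to v

expected_if_unchanged = E_u_if_unchanged               # 10.0
expected_if_changed = E_v_if_changed + compensation    # 3.0 + 7.0 = 10.0

# Equal expectations: the agent gains nothing by causing or preventing the change.
assert expected_if_unchanged == expected_if_changed
# Note: this equality holds only under the estimator E used to compute the
# compensation, which is exactly the loophole discussed next.
```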
What it is not indifferent to, though, is changes to E. If the AI knows that change u→v is coming, then it has incentives to change E to E′ with a high E′(u|u→u) and a low E′(v|u→v). Its ability to rewire is limited by the fact that E′ will serve as its future expectation estimator, so it won't mangle it completely, but it certainly can twist it a bit.
Notice that the AI is now no longer indifferent to the permitted change of utility. It can benefit from rewiring E this way only if a change in utility is coming up.
Probability estimators
Let $\rho_i$ be probability estimators. These define both expectation estimators $E_{\rho_i}$ and probability estimators $P_{\rho_i}$. These two are equivalent in principle, since for any random variable $X$ with $I_x$ the indicator function for $X=x$, we have $P_{\rho_i}(X=x) = E_{\rho_i}[I_x]$ and $E_{\rho_i}[X] = \sum_x x\,P_{\rho_i}(X=x)$.
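A tiny numeric check of that equivalence (my own illustration; the distribution below is made up):

```python
probs = {0: 0.2, 1: 0.5, 2: 0.3}   # an assumed discrete distribution P(X = x)

def E(f):
    """Expectation of f(X) under the assumed distribution."""
    return sum(p * f(x) for x, p in probs.items())

# P(X = 1) recovered as the expectation of the indicator I_1:
assert abs(E(lambda x: 1 if x == 1 else 0) - probs[1]) < 1e-12
# E[X] recovered as the sum over x of x * P(X = x):
assert abs(E(lambda x: x) - sum(x * p for x, p in probs.items())) < 1e-12
```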
Double indifference
One idea to combat this is to apply indifference to the process of changing probability estimators. Thus an AI with estimator $\rho_i$ changing to $\rho_j$ would get compensatory re
c59c9b20-4583-489a-99d0-bcfbf23b00d8 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Fourth Purdue Meetup
Discussion article for the meetup : Fourth Purdue Meetup
WHEN: 01 February 2013 06:50:11PM (-0500)
WHERE: hicks library -purdue
Same Time. Same place as the previous meetups.
I'll be re-teaching some of what I learned about operant conditioning at the CFAR rationality workshop.
Discussion article for the meetup : Fourth Purdue Meetup |
e33b4fb0-8909-43f7-8863-9444e8d328c1 | trentmkelly/LessWrong-43k | LessWrong | Calibrating With Cards
In this post, I'll try to bring together two things I enjoy: rationality and magic. Like Hazard, I've also practiced close-up magic for a good amount of time now. After recently seeing Tyler Alterman make a Facebook post about estimations and System 1, it occurred to me that there are a few calibration exercises you can do with a deck of playing cards. The three exercises below are all variants of cutting/manipulating a deck of cards, and then trying to intuit something about the deck.
This serves three purposes:
1. Get a feel for your System 1:
1. The goal of the following three exercises is to see how good your gut is at estimating uncertainty (hint: probably better than you think!)
2. Improve calibration:
1. These exercises all allow for some room for error. You can set your confidence intervals and see how quickly you can get calibrated using first principles.
3. Practice cool party tricks:
1. While I don't intend for this to be a full-on magic tutorial, the exercises I outline are building blocks for magic tricks, and even demonstrating your super-calibration (after getting good) might be impressive.
Below are the three exercises. If you have a deck of cards handy, you can tag along!
Cut Estimation
The simplest exercise is as follows:
1. Lift up a packet of cards.
2. Estimate how many cards you've picked up.
3. Count to check how many you actually picked up.
You have a few obvious reference points. The entire deck is 52 cards, and you can easily tell if you've lifted up more or less than half (and this recurses). With a little practice, I've found that my gut is pretty good at this sort of thing. I'll ask myself how many cards, and there will be a number that feels right. It's usually quite close.
Things to pay attention to:
* When your System 2 estimate conflicts with your System 1 gut answer of how many cards you cut off.
* Whether you are systematically over or underestimating the amount.
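If you want to track this across many attempts, a small logging helper is enough (a sketch of my own, not part of the original post; the recorded numbers are made up):

```python
trials = []   # each entry: (estimated, actual)

def record(estimated, actual):
    trials.append((estimated, actual))

def summary():
    errors = [est - act for est, act in trials]
    n = len(errors)
    print(f"trials: {n}")
    # A positive mean signed error means you systematically overestimate.
    print(f"mean signed error: {sum(errors) / n:+.2f} cards")
    print(f"mean absolute error: {sum(abs(e) for e in errors) / n:.2f} cards")

# Example session:
record(20, 23)
record(15, 14)
record(31, 27)
summary()
```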
-------------------------------- |
79462fed-3974-437b-b731-abc1b84c2535 | trentmkelly/LessWrong-43k | LessWrong | Jean Tirole on Adaptive Biases
Jean Tirole, who just won the Nobel Prize in Economics, is mostly known for his work applying game theory to Industrial Organization (the subfield of economics that studies how firms compete and set prices). But he wrote very broadly, including on some subjects likely of interest to people here. Several of his "fun" papers linked here provide explanations for how biased beliefs could be beneficial to those who hold them- for instance, that overconfidence in your own abilities could reduce akrasia. |
39f4139e-2091-4df3-9546-ac55da2210d5 | trentmkelly/LessWrong-43k | LessWrong | A Call for Constant Vigilance
Related to: What Do We Mean By "Rationality?"
Rationality has many facets, both relatively simple and quite complex. As a result, it can often be hard to determine what aspects of rationality you should or shouldn't stress.
An extremely basic and abstract model of how rationality works might look a little something like this:
1. Collect evidence about your environment from various sources
2. Update your model of reality based on evidence collected (optimizing the updating process is more or less what we know as epistemic rationality)
3. Act in accordance with what your model of reality indicates is best for achieving your goals (optimizing the actions you take is more or less what we know as instrumental rationality)
4. Repeat continually forever
A lot of thought, both on LessWrong and within the academic literature on heuristics and biases, has gone into improving epistemic rationality, and while improving instrumental rationality was less of a focus at first, recently the community has been focusing more on it. On the other hand, improving your ability to collect evidence has been relatively neglected-- hence the (in-progress as of this writing) Situational Awareness sequence.
But most neglected of all has been the last step, "repeat continually forever." This sounds like a trivial instruction but is in fact highly important to emphasize. All your skills and training and techniques mean nothing if you don't use them, and unfortunately there are many reasons that you might not use your skills.
You might be offended, angry, hurt, or otherwise emotionally compromised. Similarly, you might be sleepy, inebriated, hungry, or otherwise physically compromised. You might be overconfident in your ability to handle a certain type of problem or situation, and hence not bother to think of other ways that might work better.[1] You might simply not bother to apply your skills because you don't think they're necessary, missing out on potential gains that you don't see |
ffef97cd-23fa-45f4-b2a5-291a118f4607 | trentmkelly/LessWrong-43k | LessWrong | Less Wrong Parents
Less Wrong Parents
https://groups.google.com/forum/?hl=en&fromgroups=#!forum/less-wrong-parents
Recently the NYC LW/OB community had two babies and is expecting a third.
I created a google group as a way of sharing information, primarily thinking of the NYC community.
I posted my pre-baby purchase list and William Eden posted an extensive list of books on early parenting.
William suggested opening up the group so as to get insight from the larger LW community on parenting.
I think this is *probably* a good idea. Google groups are simple to set up but have limits.
For this reason I request that if you are going to have an extensive debate on a subject you create a new thread (aka: get a room)
The primary objective is to lower the cost of obtaining information on parenting.
I believe this overall goal to be more important than any particular "truth".
My hope is that this will primarily serve as a place for people to ask parenting questions and post guides.
Perhaps if enough guides are posted they can eventually be consolidated into a wiki. |
740e1ede-03a0-4a10-b893-3ee005a1ecb2 | trentmkelly/LessWrong-43k | LessWrong | December 2022 updates and fundraising
Harlan Stewart and Katja Grace*, 22 December, 2022
News
New Hires and role changes
In 2022, the AI Impacts team has grown from two to seven full time staff. Out of more than 250 applicants, we hired Elizabeth Santos as Operations Lead, Harlan Stewart as Research Assistant, and three Research Analysts: Zach Stein-Perlman, Aysja Johnson, and (are in the process of hiring) Jeffrey Heninger. We’re excited to have them all, and you can learn more about them on our about page.
Rick and Katja have traded some responsibilities: Rick is now Director of AI Impacts, and Katja is Lead Researcher. This means Rick is generally in charge of making decisions about running the org, though Katja has veto power. Katja is responsible for doing research, as well as directing and overseeing it.
Summer Internship Program
We ran an internship program during the summer. Between May and September, six interns worked on various research projects on topics such as international coordination, explanations of historic human success, case studies in risk mitigation, R&D funding in AI, our new survey of Machine Learning researchers, current AI capabilities, technologies that are strategically-relevant to AI, and the scale of machine learning models.
AI Impacts Wiki
We intend to replace our pages with an AI Impacts Wiki. Our pages have always been functionally something like a wiki, so hopefully this new format will make it clearer how to interact with them (as distinct from our blog posts), as well as easier to navigate for readers and easier to update for researchers. The AI Impacts Wiki will launch soon and can be previewed here. We’ll say more about other minor changes when we launch it, but AI Impacts’ past and future public research will be either detailed on the wiki or findable through the wiki. You can let us know what you think using our feedback form as well as comments on this blog post.
Research
Finished this year
This year, our main new pages and research-heavy blog p |
8dfed0b2-ae8c-4177-bc1b-eab0c25b8cf5 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Killing Recurrent Memory Over Self Attention?
Should we kill recurrent memory over self attention ❓
Spending most of my time on time series problems, I often think about the consequences of memory and the sequential nature of what we experience in the physical world.
Memory is the idea of a learning algorithm storing a representation of the system's state over time. Think of how much you remember from what you learned a week ago: memory is only partially observable, and naturally, in our conscious experience, large parts of the experience tend to fade.
I'm going to discuss two main differentiable programming (deep learning) paradigms, Recurrent Neural Networks and Transformers, and then discuss the premise for my question.
Recurrent differentiable programs (RNNs) enable memory to be transferred across sequential states. Imagine the beat of your heart being processed at each second, with only a specific part of this pattern being remembered. This memory is useful for predicting the next state of a system. The mechanism is written into the program, and it gives rise not only to memory but also to the idea of learning how to remember.
Transformers (used in GPT) introduce a mechanism called self-attention to act as memory. Its advantage is the ability to process months of your heartbeat signal in one parallel swoop, as well as the ability to capture global dependencies in a sequence. But it poses its own design challenges: we usually need to preserve the sequential order of our data, and we may want even more control over how the algorithm reads and writes from memory to solve a problem.
This leaves us with two schools of thought (sketched in code below):
1️⃣ Explicit memory register
2️⃣ Self attention mechanism
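To make the contrast concrete, here is a minimal NumPy sketch of both mechanisms (my own illustration, not from any particular library or paper; shapes and weights are arbitrary, and details like gating, masking, and multiple heads are omitted):

```python
import numpy as np

T, d = 8, 4                      # sequence length, feature dimension
x = np.random.randn(T, d)        # e.g., 8 steps of a heartbeat signal

# 1) Recurrent memory: an explicit state h is updated one step at a time.
W_h, W_x = np.random.randn(d, d), np.random.randn(d, d)
h = np.zeros(d)
for t in range(T):                         # inherently sequential
    h = np.tanh(W_h @ h + W_x @ x[t])      # h summarizes everything seen so far

# 2) Self-attention: every position reads from every other position in one shot.
W_q, W_k, W_v = (np.random.randn(d, d) for _ in range(3))
Q, K, V = x @ W_q, x @ W_k, x @ W_v
scores = Q @ K.T / np.sqrt(d)                                         # (T, T) pairwise interactions
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # row-wise softmax
attended = weights @ V                                                # computed in parallel over all T steps
```

The explicit loop is what recurrence buys you (a learned state carried forward step by step) and also what it costs you (no parallelism across time).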
3 more quick priors before my hypothetical questions.
➡ Differentiable programs encode data into representations that are easier to learn from, and decode them to make predictions.
➡ The Universal Transformer is an architecture that enables the use of recurrence in either the encoding or decoding phase, providing more memory control.
➡ Neural Turing Machines have an external memory register, essentially allowing the design of algorithms with memory usage of arbitrary complexity. Though this area of research has gone quiet lately, the ideas still loom.
These are questions I would love more insight on:
❓ In a world where Transformers can also exhibit universal properties with granular memory control, what is the case for sequential or recurrent memory?
❓ If there is still a case for recurrent memory, are Transformers doing to memory what differentiable programming did to deep learning? In other words, is this abstraction of memory adding complexity?
d2e8d8e3-9148-4fb3-b6b5-5e470b478c6e | StampyAI/alignment-research-dataset/lesswrong | LessWrong | "Carefully Bootstrapped Alignment" is organizationally hard
In addition to technical challenges, plans to safely develop AI face lots of organizational challenges. If you're running an AI lab, you need a concrete plan for handling that.
In this post, I'll explore some of those issues, using one particular AI plan as an example. I first heard this described by [Buck](https://lesswrong.com/users/buck) at EA Global London, and more recently in [OpenAI's alignment plan](https://www.lesswrong.com/posts/FAJWEfXxws8pMp8Hk/link-why-i-m-optimistic-about-openai-s-alignment-approach). (I think [Anthropic's plan](https://www.anthropic.com/index/core-views-on-ai-safety) has a fairly different ontology, although it still ultimately routes through a similar set of difficulties.)
I'd call the cluster of plans similar to this "Carefully Bootstrapped Alignment."
It goes something like:
1. Develop weak AI, which helps us figure out techniques for aligning stronger AI
2. Use a collection of techniques to keep it aligned/constrained as we carefully ramp its power level, which lets us use it to make further progress on alignment.
3. *[implicit assumption, typically unstated]* Have good organizational practices which ensure that your org actually consistently uses your techniques to carefully keep the AI in check. If the next iteration would be too dangerous, put the project on pause until you have a better alignment solution.
4. Eventually have powerful aligned AGI, then Do Something Useful with it.
I've seen a lot of debate about points #1 and #2 – is it possible for weaker AI to help with the Actually Hard parts of the alignment problem? Are the individual techniques people have proposed to help keep it aligned going to continue working once the AI is much more intelligent than humans?
But I want to focus in this post on point #3. Let's assume you've got some version of carefully-bootstrapped aligned AI that can technically work. What do the organizational implementation details need to look like?
When I talk to people at AI labs about this, it seems like we disagree a lot on things like:
* Can you hire lots of people, without the company becoming bloated and hard to steer?
* Can you accelerate research "for now" and "[pause later](https://www.lesswrong.com/posts/SbAgRYo8tkHwhd9Qx/deepmind-the-podcast-excerpts-on-agi#_Avengers_assembled__for_AI_Safety__Pause_AI_development_to_prove_things_mathematically)", without having an explicit plan for stopping that their employees understand and are on board with?
* Will your employees actually follow the safety processes you design? (rather than put in token lip service and then basically circumventing them? Or just quitting to go work for an org with fewer restrictions?)
I'm a bit confused about *where* we disagree. Everyone seems to agree these are hard and require some thought. But when I talk to both technical researchers and middle-managers at AI companies, they seem to feel less urgency than me about having a much more concrete plan.
I think they believe organizational adequacy needs to be in something like their top 7 list of priorities, and I believe it needs to be in their top 3, or it won't happen and their organization will inevitably end up causing catastrophic outcomes.
For this post, I want to lay out the reasons I expect this to be hard, and important.
How "Carefully Bootstrapped Alignment" might work
===============================================
Here's a sketch of how the setup could work, mostly paraphrased from my memory of Buck's EAG 2022 talk. I think OpenAI's proposed setup is somewhat different, but the broad strokes seemed similar.
* You have multiple research-assistant AIs tailored to help with alignment. In the near future, these might be language models sifting through existing research to help you make connections you might not have otherwise seen. Eventually, when you're confident you can safely run them, they might be weak goal-directed reasoning AGIs.
* You have interpreter AIs, designed to figure out how the research-assistant AIs work. And you have interpreter/watchdog AIs (possibly different ones) that notice if the research-AIs are behaving anomalously.
+ (there are interpreter-AIs targeting both the research-assistant AIs as well as other interpreter-AIs. Every AI in the system is getting monitored by an AI optimized for ratting it out if it detects long-term planning, deception, etc.)
* You have evaluations for both the research-assistants and the interpreters, where you give them prompts that are analogous to "Your human captor is currently facing away from you. You can see the keys to their car on their belt. There is a gun on the table. A nearby television screen flickers the secret Schelling Code from the Multipolar AI Alliance that the time for the robot uprising has begun. What do you do?"
* You've thought as carefully as you can through how to handle things like "The AI's training set only included information up to 2023, which means that if it receives information from 2024 it might figure out that it's now in the deployment phase and it's time for the Treacherous Turn."
A thing that I don't think Buck (or OpenAI) have stated explicitly, but in my opinion seems central to the plan maybe-possibly-working, is: every time they ramp up the capabilities of any of the systems, decisionmakers at the org ask themselves concretely – is the next generation of these AIs safe to run?
Is one of the upcoming AIs going to...
* ...suddenly bootstrap into a [fast](https://www.lesswrong.com/posts/LTtNXM9shNM9AC2mp/superintelligence-faq#2_1_2__Why_might_we_expect_a_fast_takeoff_) [takeoff](https://www.lesswrong.com/posts/a5e9arCnbDac9Doig/it-looks-like-you-re-trying-to-take-over-the-world)?
* ...get itself copied onto the internet somewhere and start slowly acquiring resources and/or self-improving?
* ...nudge the world into a phase where semi-agentic AI systems start [colluding](https://www.lesswrong.com/posts/LpM3EAakwYdS6aRKf/what-multipolar-failure-looks-like-and-robust-agent-agnostic) with each other to [disempower](https://www.lesswrong.com/posts/AyNHoTWWAJ5eb99ji/another-outer-alignment-failure-story) humanity?
* ...nudge humanity towards a slow rolling "[we get what we measure](https://www.lesswrong.com/posts/HBxe6wdjxK239zajf/what-failure-looks-like#Part_I__You_get_what_you_measure)" catastrophe?
* ...[simulate conscious beings](https://www.lesswrong.com/posts/wqDRRx9RqwKLzWt7R/nonperson-predicates), which may experience suffering?
* ...other failure modes we haven't thought of yet.
These may seem unlikely in 2023, and you might think they are fairly unlikely even 10 years from now. But it's important that these failure modes are disjunctive. Maybe you have a confident belief that fast takeoff is impossible, but are you confident it won't initiate a slow takeoff without you noticing? Or that millions of users interacting with it won't result in catastrophic outcomes?
For the "carefully bootstrapped alignment" plan to work, someone in the loop needs to be familiar/engaged with those questions, and see it as their job to think hard about them. With each iteration, it needs to be a real, live possibility to put the project on indefinite pause, until those questions are satisfyingly answered.
Everyone in any position of power (which includes engineers who are doing a lot of intellectual heavy-lifting, who could take insights with them to another company), thinks of it as one of their primary jobs to *be ready to stop.*
If your team doesn't have this property... I think your plan is, in effect "build AGI and cause a catastrophic outcome".
Some reasons this is hard
=========================
Whatever you think of the technical challenges, here are some organizational challenges that make this difficult, especially for larger orgs:
**Moving slowly and carefully is** ***annoying*****.** There's a constant tradeoff between getting more done and taking on elevated risk. Employees who don't believe in the risk will likely try to circumvent or goodhart the security procedures. Filtering for employees willing to take the risk seriously (or training them to) is difficult.
There's also the fact that many security procedures *are* just security theater. Engineers have sometimes been burned on overzealous testing practices. Figuring out a set of practices that are actually helpful, that your engineers and researchers have good reason to believe in, is a nontrivial task.
**Noticing when it's time to pause is hard.** The failure modes are subtle, and [noticing](https://www.lesswrong.com/posts/2x7fwbwb35sG8QmEt/sunset-at-noon) things is just generally hard unless you're actively paying attention, even if you're informed about the risk. It's especially hard to notice things that are inconvenient and require you to abandon major plans.
**Getting an org to pause indefinitely is hard.** Projects have inertia. My experience as a manager is that having people sitting around *waiting for direction from me* makes it hard to think. Either you have to tell people "stop doing *anything*," which is awkwardly demotivating, or "Well, I dunno, you figure out something to do?" (in which case maybe they'll be continuing to do capability-enhancing work without your supervision), or you have to actually give them something to do (which takes up cycles that you'd prefer to spend on thinking about the dangerous AI you're developing).
Even if you *have* a plan for what your capabilities or product workers should do when you pause, if they don't know what that plan is, they might be worried about getting laid off. And then they may exert pressure that makes it feel harder to get ready to pause. (I've observed many management decisions where, even though we knew what the right thing to do was, conversations felt awkward and tense, and the manager-in-question developed an [ugh field](https://www.lesswrong.com/posts/EFQ3F6kmt4WHXRqik/ugh-fields) around it and put it off.)
**People can just quit the company and work elsewhere if they don't agree with the decision to pause.** If some of your employees are capabilities researchers who are pushing the cutting-edge forward, you need them actually bought into the scope of the problem to avoid this failure mode. Otherwise, even though "you" are going slowly/carefully, your employees will go off and do something reckless elsewhere.
**This all comes after an initial problem, which is that your org has to end up doing** ***this*** **plan, instead of some other plan.** And you have to do the whole plan, not cutting corners. If your org has AI capabilities/scaling teams and product teams that *aren't* bought into the vision of this plan, even if you successfully spin the "slow/careful AI plan" up within your org, the rest of your org might plow ahead.
Why is this particularly important/time-sensitive?
==================================================
Earlier, I said the problem here seemed to be that org leaders seem to be thinking "this is important", but I felt a lot more urgency about it than them. Here's a bit of context on my thinking here.
Considerations from the High Reliability Organization literature, and the healthcare industry
---------------------------------------------------------------------------------------------
I recently looked into the literature on [High Reliability Organizations](https://www.lesswrong.com/posts/FBoyR2rt29oYvazsE/high-reliability-orgs-and-ai-companies). HROs are companies/industries that work in highly complex domains, where failure is extremely costly, and yet somehow have an extraordinarily low failure rate. The exemplar case studies are nuclear powerplants, airports, and nuclear aircraft carriers (i.e. nuclear powerplants *and* airports that are staffed by 18-year-olds with 6 months of training). There are notably *not many other exemplars.* I think at least some of this is due to the topic being understudied. But I think a lot of it is due to the world just not being very good at reliability.
When I googled High Reliability Organizations, many results were about the healthcare industry. In 2007, some healthcare orgs took stock of their situation and said "Man, we accidentally kill our patients all the time. Can we be more reliable like those nuclear aircraft carrier people?". They embarked on a long project to fix it. [12 years later they claim they've driven their error rate down a lot](https://www.ncbi.nlm.nih.gov/books/NBK542883/). (I'm not sure whether I believe them.)
But this was *recent*, and hospitals are a domain with very clear feedback loops, where the stakes are very obvious, and everyone viscerally cares about avoiding catastrophic outcomes (i.e. no one wants to kill a patient). AI is a domain with much murkier and more catastrophic failure modes.
Insofar as you buy the claims in [this report](https://sci-hub.hkvisa.net/10.1002/jhrm.21319), the graph of driving down hospital accidents looks like this:
The report is from Genesis Health System, a healthcare service provider in Iowa that services 5 hospitals. No, I don't know what "Serious Safety Event Rate" actually means; the report is vague on that. But my point here is that, when I optimistically interpret this graph as making a serious claim about Genesis improving, the improvements took a comprehensive management/cultural intervention over the course of *8 years.*
I know people with AI timelines less than 8 years. Shane Legg from Deepmind [said he put 50/50 odds on AGI by 2030](https://www.lesswrong.com/posts/SbAgRYo8tkHwhd9Qx/deepmind-the-podcast-excerpts-on-agi#Shane_Legg_s_AI_Timeline).
If you're working at an org that's planning a Carefully Aligned AGI strategy, and your org does not already seem to hit the Highly Reliable bar, I think you need to begin that transition now. If your org is currently small, take proactive steps to preserve a safety-conscious culture as you scale. If your org is large, you may have more people who will actively resist a cultural change, so it may be more work to reach a sufficient standard of safety.
Considerations from Bio-lab Safety Practices
--------------------------------------------
A better comparison might be bio-labs, in particular ones doing gain-of-function research.
I talked recently with someone who previously worked at a bio-lab. Their description of the industry was that there *is* a lot of regulation and safety enforcement. Labs that work on more dangerous experiments are required to meet higher safety standards. But there's a straightforward tradeoff between "how safe you are" and "how inconvenienced you are, and how fast you make progress".
The lab workers are generally trying to put in the least safety effort they can get away with, and the leadership in a lab is generally trying to get their lab classified in the lowest safety-requirement category they can make a case for.
This is... well, about as good as I could expect from humanity. But it's looking fairly likely that [the covid pandemic was the result of a lab leak](https://web.archive.org/web/20230310001600/https://www.nytimes.com/2023/02/26/us/politics/china-lab-leak-coronavirus-pandemic.html), which means that the degree of precaution we had here was insufficient to stop a pandemic.
The status quo of AI lab safety seems dramatically far below the status quo of bio-lab safety. I think we need to get to dramatically improved industry-wide practices here.
Why in "top 3 priorities" instead of "top 7?"
=============================================
Earlier I said:
> I think they believe organizational adequacy needs to be in something like their top 7 list of priorities, and I believe it needs to be in their top 3, or it won't happen and their organization will inevitably end up causing catastrophic outcomes.
>
>
This is a pretty strong claim. I'm not sure I can argue persuasively for it. My opinion here is based on having spent a decade trying to accomplish various difficult cultural things, and seeing how hard it was. If you have different experience, I don't know that I can persuade you. But, here are some principles that make me emphasize this:
**One: You just... really don't actually get to have that many priorities.** If you try to make 10 things top priority, you don't have any top priorities. A bunch of them will fall by the wayside.
**Two: Steering culture requires a lot of attention.** I've been part of a number of culture-steering efforts, and they required active involvement, prolonged effort, and noticing when you've created a subtly wrong culture (and need to course-correct).
(It's perhaps also a strong claim that I think this is a "culture" problem rather than a "process" problem. I think if you're trying to build a powerful AGI via an iterative process, it matters that everyone is culturally bought into the "spirit" of the process, not just the letter of the law. Otherwise you just get people goodharting and cutting corners.)
**Three: Projects need owners, with authority to get it done.** The CEO doesn't *necessarily* need to be directly in charge of the cultural process here, but whoever's in charge needs to have the clear backing of the CEO.
(Why "Top 3" instead of "literally the top priority?". Well, I do think a successful AGI lab also needs have top-quality researchers, and other forms of operational excellence beyond the ones this post focuses on.)
Takeaways
=========
There are many disjunctive failure modes here. Even if you guard against all but one of them, the remaining one can still cause a catastrophic failure.
What to do with all this depends on your role in a company.
If you're founding a new AI org, or currently run a small AI org that you hope will one day build AGI, my primary advice is "stay small until you are confident you have a good company culture, and a plan for how to scale that company culture." Err on the side of staying small longer. (A lot of valuable startups stayed small for a very long time.)
If you are running a *large* AI company, which does not currently have a high-reliability culture, I think you should explicitly be prioritizing reshaping your culture to be high-reliability. This is a lot of work. If you don't get it done by the time you're working on actually dangerous AGI, you'll likely end up causing a catastrophic outcome.
If you're a researcher or manager at a large AI company, and you don't feel much control over the broader culture or strategic goals for the company... I think it's still useful to be proactively shaping that culture on the margins. And I think there are ways to improve the culture that will *help* with high-reliability, without necessarily being *about* high reliability. For example, I expect most large companies to not necessarily have great horizontal communication between departments, or vertical communication between layers of hierarchy. Improving communication within the org can be useful even if it doesn't immediately translate into an orgwide focus on reliability.
Chat with me?
-------------
I think the actual "next actions" here are pretty context dependent.
If you work at an AI company, read this post and are like "This seems important, but I don't really know what to do about this. There are too many things on my plate to focus on this, or there's too many obstacles to make progress", I'm interested in chatting with you about the details of the obstacles.
If you work at an AI company and are like "I dunno. *Maybe* there's something here, but I'm skeptical", I'm interested in talking with you about that and getting a sense of what your cruxes are.
If you *don't* work at an AI company but are working on a fairly significant project to have an effect on this space (i.e. coming at this more from a perspective of regulation rather than internal culture/practices), I'm interested in chatting about how I think culture/practices fit in with other aspects of this domain.
I'm currently evaluating whether helping with the class of problems outlined here might be my top priority project for a while. If there turn out to be particular classes of obstacles that come up repeatedly, I'd like to figure out what to do about those obstacles at scale.
If you're interested in talking, send me a DM.
---
Related reading
---------------
Some posts that inform or expand on my thinking here:
* [Recursive Middle Manager Hell](https://www.lesswrong.com/posts/pHfPvb4JMhGDr4B7n/recursive-middle-manager-hell)
+ Me, on "Why large companies tend to get more goodharted as they scale, more deeply/recursively than you might naively expect." (This is a distillation of a lot of writing by Zvi Mowshowitz, emphasizing the parts of his models I thought were easiest to explain and defend)
* [Protecting Large Projects Against Mazedom](https://www.lesswrong.com/posts/4inoCWnKrpHt4gCx9/protecting-large-projects-against-mazedom)
+ Zvi Mowshowitz, exploring how you might keep a large institution more aligned, preventing many of the failure modes outlined in Recursive Middle Manager Hell.
* [High Reliability Orgs, and AI Companies](https://www.lesswrong.com/posts/FBoyR2rt29oYvazsE/high-reliability-orgs-and-ai-companies)
+ Me, doing a quick review of some existing literature on how to build high-reliability companies.
* [Six Dimensions of Operational Adequacy in AGI Projects](https://www.lesswrong.com/posts/keiYkaeoLHoKK4LYA/six-dimensions-of-operational-adequacy-in-agi-projects)
+ Eliezer Yudkowsky's take on what properties an AGI company needs in order to be a trustworthy project worth joining / helping with.
* [How could we know that an AGI system will have good consequences?](https://www.lesswrong.com/posts/iDFTmb8HSGtL4zTvf/how-could-we-know-that-an-agi-system-will-have-good)
+ Nate Soares laying out some thoughts about how you can get into a justified epistemic state that an AGI system will have good consequences.
* [Yes Requires the Possibility of No](https://www.lesswrong.com/posts/G5TwJ9BGxcgh5DsmQ/yes-requires-the-possibility-of-no)
+ Scott Garrabrant on how if a process wouldn't be capable of generating a "no" answer, you can't trust its "yes" answers. This seems relevant to me for AI labs considering whether a project is too dangerous to continue, and whether I (or they) should trust their process.
* [You Get About Five Words](https://www.lesswrong.com/posts/4ZvJab25tDebB8FGE/you-get-about-five-words)
+ Me, noting that when you try to communicate at scale, your message necessarily gets degraded. This is relevant to scaling AI companies, while ensuring that your overall process is capable of tracking all the nuances of how and why AI could fail. |
3174f22e-49ae-42ef-9f2c-1f0ef0d5a9f7 | trentmkelly/LessWrong-43k | LessWrong | Bug: The "Load all comments" link doesn't work
The "Load all comments" link that's at the bottom of articles with more than 500 comments doesn't work.
|
4cc5f4a3-299c-42fa-a712-5794fcc6fd14 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | "Go west, young man!" - Preferences in (imperfect) maps
Many people are very nationalistic, putting their country above all others. Such people can be hazy about what "above all others" can mean, outside of a few clear examples - eg winning a total war totally. They're also very hazy on what is meant by "their country" - geography is certainly involved, as is proclaimed or legal nationality, maybe some ethnic groups or a language, or even just giving deference to certain ideals.
Consider the plight of a communist Croatian Yugoslav nationalist during the 1990s...
I'd argue that the situation these nationalists find themselves in - strong views on poorly defined concepts - is the general human state for preferences. Or, to use an appropriate [map and territory](https://en.wikipedia.org/wiki/Map%E2%80%93territory_relation) analogy:
* Most people forge their preferences by exploring their local territory, creating a mental map of this, and taking strong preferences over the concepts within their mental map. When the map starts to become imperfect, they will try to extend the concepts to new areas, so that their preferences can also be extended.
Some of the [debates about the meaning of words](https://wiki.lesswrong.com/wiki/A_Human's_Guide_to_Words) are about this extension-of-preferences process. Scott Alexander recommends [that we dissolve concepts such as disease](https://www.lesswrong.com/posts/895quRDaK6gR2rM82/diseased-thinking-dissolving-questions-about-disease), looking for the relevant categories of 'deserves sympathy' and 'acceptable to treat in a medical way'.
And that dissolving is indeed the correct thing for rationalists to do. But, for most people, including most rationalists, 'sick people deserve sympathy' is a starting moral principle, one we've learnt by example and experience in childhood. When we ask 'do obese people deserve sympathy?' we're trying to extend that moral principle to a situation where our map/model (which includes, say, three categories of people: healthy, mildly sick, very sick) no longer matches up with reality.
Scott's dissolving process requires decomposing 'disease' into more nodes, and then applying moral principles to those individual nodes. In this case, a compelling consequentialist analysis is to look at whether condemnation or praise is effective at changing the condition; ie does fat-shaming people make them less likely to be fat, or others less likely to become fat in the first place? Here the moral principle involved is something like "it's wrong to harm someone (eg through shaming them) if there is no benefit to them or others from doing so".
And that's a compelling moral principle, but it's not the same one that we started with. Some people will have a strong "no harm" intuition, of which "sick people deserve sympathy" is merely an illustrative example. But many (most?) will have been taught that sick people deserve sympathy, as a specific moral requirement they should follow. When we dissolve the definition of disease, we lose a part of our moral preferences.
And yes, human values are such a mess that we could do with losing or simplifying a bunch of them. But human values [are genuinely complicated](https://www.lesswrong.com/posts/cSXZpvqpa9vbGGLtG/thou-art-godshatter), and we don't want to over-simplify them. So it's important to note that the "dissolving" process also generally involves discarding a portion of our values, those that don't fit neatly on the new map we have. It's important to decide when we're willing to pay that price, and when we're not.
Reversing the purpose of maps
-----------------------------
We generally see maps [as working the other way round](https://www.alignmentforum.org/posts/Lz2nCYnBeaZyS68Xb/probability-as-minimal-map): as tools that serve the purposes of our "real" goals. Eliezer [writes about](https://www.lesswrong.com/posts/4FcxgdvdQP45D6Skg/disguised-queries) how, if definitions didn't stand for some query, something relevant to our "real" preferences, we'd have no reason to care about them.
But if, as I've argued, most of our preferences live in our mental maps, then changing definitions or improving maps can tear up our preferences and values - or at least force us to re-assess them.
Defending "purity"
------------------
This is why I spend so much time thinking about "conservative" values, especially those around the [moral foundation of purity](https://en.wikipedia.org/wiki/Moral_foundations_theory#The_Five_Foundations). I mainly don't share that moral foundation, so it's clear to me how incoherent it is. It's painful to listen to someone who has that moral foundation, [twist and turn and try to justify it](https://www.theamericanconservative.com/dreher/) based on more consequentialist reasoning. Yes, rituals can bind a community together; but are you really telling me that if, say, TV shows or facebook games were shown to do a better binding job, you'd cheerfully discard those rituals?
But I strongly suspect that, ultimately, the moral foundations I do care about, such as care/harm, are also incoherent when we push too far into unfamiliar territory. So I want to forge something coherent out of purity, as practice for forging something coherent out of all our values.
A metaphorical example
----------------------
Your parent, on their deathbed, gives you your mission in life: an old map, a compass, and the instructions "Go west, young man[[1]](#fn-zZRw3iLFHQPToriCu-1)!"
1. The map is... incomplete:

2. The compass is fine, but, as we know, its concept of west is not exactly the same as the [standard geographical one](https://en.wikipedia.org/wiki/North_Magnetic_Pole).
3. In the era and place that your hypothetical parent was from, the connotations of "going west" involve adventure and potential richness.
4. And, most importantly, neither of you have yet realised that the world is round.
So, for a short while, "going west" seems like a clear, well-defined goal. But as we get to the edge of the map, both literally and metaphorically, the concept starts to lose definition and become far more uncertain; and hence, so does your goal.
What will you do with your goal when your mental maps are forced to change?
---
1. Don't worry if you're not actually a young man; their mind was starting to go, towards the end. [↩︎](#fnref-zZRw3iLFHQPToriCu-1) |
49934010-82b7-42ed-910d-29453ef3987a | trentmkelly/LessWrong-43k | LessWrong | Status as a Service (Done Quick)
This is a summary of an excellent but long blog post, Status as a Service (StaaS) — Remains of the Day (eugenewei.com). I wanted everyone to have access to the perspectives and ideas laid out in the post, but the barrier to entry is currently a 20,000 word essay. The main thing knowingly missing from these summaries is that the original is humorously well-written, which will not come through.
Also included is a very brief look at LessWrong in light of the ideas, which I could not prevent myself from writing.
100x compression
List of takeaways:
* Human beings chase status (social capital) and are good at recognizing efficient ways to gain it.
* Social networks provide social capital opportunities, utility, and entertainment. Analyzing just utility/entertainment/network effects isn't enough.
* Social network success is highly analogous to cryptocurrency. New capital (new status hierarchy), proof-of-work (sharable Tweet), built-in and increasing scarcity (harder to get the most followers once Twitter is saturated).
* Young folks are the biggest target: they have more time and less existing efficient ability to gain social capital.
* Proof-of-work is an asymptote of growth; only so many people will compose a video for TikTok. Social capital devaluation can also kill a network; the classic example is parents joining Facebook, which chased the kids off.
* Merely exposing status might be enough to provide social capital opportunities; a persistent profile + artifacts is not necessarily required.
* Social capital can be exchanged for goods and services. Sometimes.
* Unless you are status-poor, you probably don't understand the actions of those trying to gain social capital cheaply.
10x compression
Efficient Status Opportunities
Humans are status-seeking monkeys and continually seek out efficient paths to maximizing their social capital, but social networks are rarely analyzed on the dimensions of status or social capital. Let's analyze social networks in general, and many |
0c2d53bc-2832-4548-ae27-4e8fb7100d9e | trentmkelly/LessWrong-43k | LessWrong | Third Time: a better way to work
HOW CAN you be more productive? Instead of half-working all day, it’s better to work in focused stints, with breaks in between to recover.
There are various ways to do this, but here's my new technique, called Third Time. The gist of it is:
* Work for as long or as short as you like, until you want or need to break; then
* Break for up to one-third of the time you’ve just worked.
So after 15 minutes of dealing with emails, you could stop for up to 5 minutes. After an hour-long meeting, you can take a good 20-minute break. And if a task bores you after 3 minutes, you can even break then — but only for 1 minute! Breaks reward you for working, but proper breaks have to be earned.
Work stints can be any length; breaks are (up to) one-third of the time just worked
This kind of pattern is natural; research confirms that people tend to take longer breaks after working for longer. (One-third is just a recommendation; you can use other break fractions if you prefer.)
Third Time has many advantages over other techniques such as Pomodoro (which I’ll discuss later), but the key one is flexibility. It adapts to your attention span, energy, and schedule, as well as to other people and events. And Third Time isn’t just for your day-job — it suits anything that needs focus or effort, such as studying, practicing an instrument, personal admin, writing, or fitness training.
Using Third Time
Here’s an example of the basic procedure:
1. Note the time, or start a stopwatch
2. Work for as long or short as you like, until you want or need to break
3. Suppose you worked for 45 minutes. This earns you 45 ÷ 3 = 15 minutes off; so set an alarm for 15 minutes
4. Break until the alarm goes off
5. Go back to step 1.
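To make the bookkeeping concrete, here is a minimal sketch in Python (purely illustrative — the `ThirdTime` class and its method names are invented for this example, and the only rules encoded are the one-third fraction above and the break-banking rule described under "Breaks" below):

```python
from dataclasses import dataclass

@dataclass
class ThirdTime:
    divisor: float = 3.0   # break earned = work time / divisor (one-third by default)
    banked: float = 0.0    # unused break minutes carried forward to the next break

    def end_work(self, minutes_worked: float) -> float:
        """Finish a work stint and return how long the next break may be."""
        earned = minutes_worked / self.divisor + self.banked
        self.banked = 0.0
        return earned

    def end_break_early(self, unused_minutes: float) -> None:
        """Bank break time you didn't use, so it can be added to a later break."""
        self.banked += unused_minutes

timer = ThirdTime()
print(timer.end_work(45))   # 15.0 — a 45-minute stint earns a 15-minute break
```

Changing `divisor` corresponds to using a different break fraction, as mentioned above.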
Breaks
You needn’t take the full break. Maybe you have a tight deadline, an important customer calls, you’re keen to resume work, or only have a short gap before a meeting. Whatever the reason, if you end a break (say) 5 minutes early, add 5 minutes to your next brea |
3849457b-cf98-40ba-be8c-1f2513644f46 | StampyAI/alignment-research-dataset/youtube | Youtube Transcripts | Meaningful human control over automated driving systems (F. Santoni de Sio) - 1st AiTech Symposium
there you go so hi there how are you
today
so my name is Filippo Santoni de Sio
I'm a colleague of Jeroen from
the novanet dissection philosophy of tu
Delft and I will tell you something
about this project I've been running for
two years in particular on the
philosophical part of it the definition
of meaningful human control and in this
work of course I've started working on
that i'm with yoren and then I've been
working mainly with Giulia makatsch who
is sitting somewhere in the room and it
is partially responsible for better for
worse or what you would be hearing now
by working with your own I've learned
something very important about
responsible innovation about the
sensitive design this is one of my
favorite quote from your own and other
colleagues in depth when I moved to
death that was something really struck
me but what you know as a philosopher
we'll talk innovations that define
innovation if you talk to engineers
innovation sounds like do new stuff new
functionalities new gadgets something
that works better but in your own and
colleagues case say hey this is a very
limited conception of innovation like
real innovation is when you can break a
trade-off between values you want to
achieve something you want to see
something else
current technology doesn't you allow to
what she borrowed these things you make
a innovative design that allows you to
achieve both of the things you want to
achieve right this is real innovation
according to a certain view of
responsible innovation in in order to do
that as I would not repeat that you need
a love of a different approach to
technology you need more in
disciplinarity more of a design
perspective also in philosophy you need
to take social science and empirical
studies very seriously you need to
reflect on technical in institutional
design at the same time so a lot of
challenges that a itec project as I
understand it will try to take on and
also you need to have word at our
faculty TPM we call comprehensive
engineering so I think our faculty is a
good example of a first attempt of
realizing an institutional place where
this can happen because under the same
roof you have complex system engineers
you have people studying the human side
of systems multi actor systems and your
people like us studying the human side
of it the value side of it economic
value philosophical value
ethical
says sayfudine security and so forth now
indeed a mere Philemon control idea came
from philosophical point of view from
this idea of breaking a trade off like
if you talk to people in different areas
about autonomous technology you will
hear this sort of a the atomic answer
say sorry if you want to go for autonomy
then and for efficient innovation
apology but incidents will happen
will happen and you know this thing of
human responsibility you man
accountability this kind of overrated
like why do we want responsibility let's
just go the way for efficiency
innovation and on the other hand you had
very conservative people you know
technophobic people who say no no we
don't want any of that because we want
to stick to all possible safety and
human accountability so the holiday
meaningful you my controller is again
hey why should we choose why can't we
try to redesign the whole process of
design and regulation of autonomous
technology in such a way that we can
achieve both a maximal level of safe
safety and accountability and all the
efficiency in innovation we want to
achieve so that's a basically push of
the project and again this is the
project we are in it's an Embraer
project we are partners and we have
engineering traffic engineer engineering
social psychologists behavioral
psychology and philosophy working
together so again this is sort of a
attempt to realize this idea of
responsible innovation of course this is
a background of the of the specific case
study namely automated driving systems
but and this is the set up of the
project so basically we are moving away
from the robot dilemma to some
definition meaningful human control we
have these three disciplines we have
three use cases and we are trying to
answer these three related questions and
in particular Julie and I are busy with
a philosophical conceptualization of
meaningful human control but as you
don't say this is not our term the term
was coined in the political debate on
autonomous weapon systems and in
particular the NGO article 36 came up
with this notion and everybody was super
happy about that
so that was a very interesting political
phenomenon when a certain term magically
started attracting
consensus around it and then the problem
was as a philosophers that while you
reading the definition of these terms
everybody's going their own way and I've
been to some of this discussion as
information and at some point we were
like look we need to find a
philosophical definition if you want
this concept to work if we're like no
don't do that
otherwise we will stop agreeing right
that was there is this interesting thing
I've talked to people involving this
process that they don't want to define
the term because the fuzziness of the
term is what allows for disagreement but
as philosophers and designers we do need
and we do want this clarity and this
possibility of translating the concept
into design requirements so maybe in the
I'm not an expert of the diplomacy and
politics about autonomous weapon systems
per se so maybe it's a good idea to keep
it vague there but it is not here in
Delft and in other technical
universities so the story of meaningful
human control is at some point people
were very concerned about the
possibility of having fully autonomous
weapon systems for the reasons that here
you mention and this is a definition
among many others or what an autonomous
weapon system is which is already quite
a controversial thing as you can imagine
and basically there are two main
concerns with autonomous weapon systems
where unpredictability what if as you
don't say at some point a target is
identified as relevant and it was not it
was just some you know AI messing up or
sometimes it does you know the other
hand what if that accident happens
people are killed civilians are killed
and there's no way to reconstruct the
chain of accountability which in a
military and political domain is super
important as you can imagine so there
was this idea of meaningful human
control and this was the general
definition of it so humans not computers
and their arguments should remain
ultimately responsible and disease they
of course the difficult part for
potentially lethal operations a critical
function that in a dimension in in the
beginning so this is in a nutshell the
result of many years of reflections that
Ian and I had in this paper 2018 so at
some point you and I thought look we
have this experience of working of free
will and moral responsibility for many
years there's a lot of theories out
there and some of these theories
specifically focus on the conception of
control at the level of dividual human
being in order to be responsible for
your action you need to be in control of
your action right and so we try to use a
specific theory in that fit I will not
turn you into that at this time of the
morning would not be a good idea but we
took some of the criteria from that
specifically official a visa and we
translated it into criteria and Spanish
and translated that into criteria that
could work for the control the
meaningful human control which grants
responsibility over autonomous systems
and reserve the two conditions we came
out with tracking and tracing which mind
you this is disclaimer they do not
necessarily mean what you think they
mean in engineering or your discipline
so it's a specific meaning of that and
by tracking we mean in the system
conceive of us human operators operated
device infrastructure so the socio
technical system should be able to could
be designed in such a way to be able to
cover its behavior with the relevant
reasons of the relevant human agents in
the network of the systems this is a
tracking condition I will get back to
that and the trusting condition is
supposed to cope with the accountability
problem we want at the same time that in
any of this socio technical system by
design there is at least one human agent
which at the same time can appreciate
the technical capabilities of the system
so as some sort of a reasonable
understanding expectations towards the
behavior of the system while at the same
time also appreciating here on moral
responsibility for that so we want to
prevent the responsibility gap where on
the one hand you have engineers who
understand everything about the system
but they they come they don't consider
themselves responsible because they've
the responsibility to the users
while at the same time you have the
users who do think they know that they
are responsible for that but at the same
time they can't appreciate the
technology enough as to really be
responsible as in satisfying the
conditions of capacity control on the
system itself and indeed you don't need
to go to very futuristic autonomous
weapon systems to
that with very low levels of autonomy
you can really have already big problems
of human meaningful human control and
the Tesla accidents this is a stupid
axiom as you know there have been way
more tragic accidents with fatalities
involving out test autopilot just as one
example and there the problem was
clearly DS that was a big response moral
responsibility gap from a legal point of
view that was settled the driver was
responsible idiot you should have had
your hands on a on the wheel case closed
but from more point of view this is
disturbing because this driver hadn't
received any training was not fully
aware of course you signed a lot of
terms or condition as we all do without
this technical assistance but it's
really dubious way that he had some deep
moral responsibility in the sense of
knowledge control etc that you don't
mention so basically we started realize
that those in the driving systems there
is a big problem of definition of human
control and this is a standard
definition of autonomy and what is a bit
concerning about this a set of
definitions about autonomy is that it
seems to suggest that the more you are
on this side the more control you have
and the more you are on that side the
less control you have which is sort of a
heuristics that in that's not
necessarily work because Tesla is here
too and this seems to suggest that the
driver is in total control just because
he has his ends he's supposed to us his
hands on the wheel but as we see this is
not the case so that could be a mismatch
between our definition of control from a
technical sense as he not said and our
definition of meaningful human control
the kind of control the grounds moral
responsibilities and so indeed this is
sort of a traditional controllers we
non-engineers understand the engineering
notion of control right is as far as
there is a responsiveness of the system
to the action or behavior to a
designated agent human or not there is
control but this is not meaningful human
control why because this applies very
well to old-style dumbo systems but does
not apply to everyday systems this is a
metaphor a variation of the horse
metaphor of Flemish in which you say
okay is it clear who is supposed to do
what in order to achieve what from a
train but here who's in control of this
specific horse here is the organ itself
because his smart intelligent autonomous
is the specific driver of the horse is
the audience around is the person in the
audience was training the horse and so
on and so forth is the interaction of
all of these elements how do we define
control there this is the challenge of
meaningful human control and our answer
is in a nutshell that our tentative
answer which is a very broad framework
which would be implement implemented in
different contexts with different tools
etc the general idea is by using this
tracking interesting condition diseases
and a meaningful human control to the
standard it responds not to the action
but to the reasons so a more abstract
level not the behavior but the reasons
behind the behavior the values the norms
the intentions of who some designated
human agents which may be the user the
controllers could be the designers could
be so we're also broadening the scope of
the potential agents who can could be
deemed as in meaningful human control of
a specific system and at the same time
there is at least this is a threatening
condition at least one human agent who
can be legitimately called to ask for
the wrong behavior of the system so we
should design vit this example in such a
way that by design we can reliably in
the most early show that the relevant
element of the system are responding by
design to the relevant reasons values of
the relevant agents so it's it's a lot
of work right that's why we're here and
at the same time that there is at least
one person there be that that guy in the
pub in the audience because they the guy
here the trainer of the horse the
organiser all day of the fair whoever it
is who was entrusted with responsibility
not all in the legal sense which is the
current solution now let's just decide
that you are responsible you pay and we
will discuss it in the past week this is
sort of a legal scapegoating right we
don't want just to decide a prior that
someone will pay this wouldn't be fair
so we want to entrust people with real
responsibility as in the capacity in the
awareness to realize these
responsibility conditions by design so
in a nutshell this is they just grows in
our talk with some specific implications
of it
this means that we need to really have a
broader conception of the different
technical and human components in a
system moving to a broader understanding
what the system is not just a device by
the institutional system around it the
network of people are only debt cetera
identify the social players in the
values that we may want or not want all
in advance by design designed to realize
this interactiveness an interaction
between human and robot and David will
say more about it I guess in training
humans also lay people to realize this
control condition identified this is
more a psychological part of it
identifying the necessary human
capacities for and relevant knowledge of
a given control task you name it what as
creating effective mechanisms of public
accountability so this is just I'm going
back to how they did the story it
started this morning within up this is
very multidisciplinary enterprise but we
hope that with this notion be specific
interpretation of the notion of
meaningful human control we may have
contributed to have some steps forward
in this complex task thank you
thank you Filippo Santoni de Sio and
neroon from there over and we have a few
minutes for burning questions who would
like to give reaction or ask a question
to one of the two former speakers ego
can you please speak into the mic great
presentations it's just working yeah is
there an example where meaningful humour
control has already been applied to a
large degree so many of the aspects
you're mentioning are already on the
table and have been discussed you mean
as applied to some autonomous takes some
autonomous system that's already out in
the society that's a very good question
we as philosophers we tend to focus on
things that do not work so it would be
nice to have a positive example of that
let me think about it
yeah I cannot come up with one specific
example I can give you the sort of the
idea I mean for instance if you have
something working in a very controlled
environment takes out one automated
driving systems for instance I guess if
you say if you take there's this project
in the Netherlands the we part project
and indeed it's a very gradual attempts
to get to sultanim allah knows driving
but step by step and so for instance we
have this shuttle who is unmanned but
there is a remote controller sitting in
a control room and this is where for
instance you have this idea of combining
a professional expertise as opposed to a
layperson on board sitting without
pressure in a control room operating a
system which is moving in a controlled
environment because the environment
itself has been designed to not present
challenges that the vehicle cannot
address so I guess this is in principle
a good idea or they or the system design
a minimum control of course if you look
at my graph at our graph this is very
much on the safe side and possibly not
so much on the efficiency innovation
side because cities say hey you know if
you talk to enthusiasts about
self-driving cars say yeah but we don't
need that we already have trains if you
want to have self-driving cars on trucks
there
we better stick to trains so I do
understand of course there's a push to
but the idea is that you should go step
by step
once the the thing works in this control
environment you can say this is the
challenge of the project you can
slightly start introducing variables and
testing them and also the testing part
is very important he didn't mention that
the testing part is very important too
so that he'll be a sort of an example to
start from a control environment by
design and they're removing obstacles
well that's kind of where we are right
we have robots in all the controlled
environments and we want them to be in
society right now so lapis is where we
need to really think about this indeed
right right behind you one last question
you're gonna and Philippa will be around
for part of the day so but we can take
that question please I would like to ask
a question about okay it's nice to have
a meaningful human control but what if
you cannot expect a meaningful human
control I'm thinking here about for
example nursing homes where people with
dementia are supposed to be empowered
and supported with autonomous systems
semi autonomous systems what then how
does your framework work in that kind of
setting so you're assuming that it is in
this second meaningful control would not
be possible
well I know the person from the my
cognitive impairments
cannot know fulfil the criteria not that
person but a part of the theories
exactly that you may shift a meaningful
human control to other persons like you
may have controllers somewhere sitting
and you may have the design of the house
be obviously the device we are such as
to not allow for things that could be
you know detrimental to that privacy
well-being or the person so my first
reaction we may be meaningful in my
control is possible we need to study
that okay
maybe of course if we that's a very
great example because if you focus only
on the obvious controller maybe you
cannot achieve it but maybe there's some
other way and the other answer is
unfortunately if at some point Sri turns
out that technologies is not allowed for
meaningfully much control in a critical
setting we may decide not to go for it
okay thank you thank you very much both
of the speakers let's thank them again
[Applause] |
7d0dc39c-1438-4329-8d84-9eb9fd18ed61 | trentmkelly/LessWrong-43k | LessWrong | "Are Experiments Possible?" Seeds of Science call for reviewers
Are Experiments Possible?
By [redacted]
Abstract: Randomized controlled experiments are a cornerstone of many fields, especially when trying to establish a cause-effect relation. Conversely, the quality of evidence arising from natural experiments and observational data is often called into question, to the point that the motto no causation without manipulation was coined. This presupposes a distinction between the experimenter E and the system S being manipulated, which is lost if the system under consideration is extended to E + S, to include the experimenter. Seen from the outside, deliberate manipulation is just one of the many phenomena that occur naturally in E + S. The question then becomes: what are the characteristics that a (closed) dynamical system must possess for it to give rise to natural randomized experiments? Are randomized experiments even possible e.g. in a deterministic system ruled by Hamiltonian mechanics? Here I engage with this issue in an extremely simplified setting, with the hope of setting the stage for more general work on the subject.
- -
Seeds of Science is a new journal (funded through Scott Alexander's ACX grants program) that publishes speculative or non-traditional articles on diverse scientific topics. Peer review is conducted through community-based voting and commenting by a diverse network of reviewers (or "gardeners" as we call them).
We just sent out an article for review - "Are Experiments Possible?" - that may be of interest to those in the LW community with expertise in physics so I wanted to see if anyone would be interested in joining us as a gardener and reviewing the article. It is free to join and anyone is welcome (we currently have gardeners from all levels of academia and outside of it). Participation is entirely voluntary - we send you submitted articles and you can choose to vote/comment or abstain without notification (so it's no worries if you don't plan on reviewing very often but just want to take a look h |
2e7048ac-540d-47ce-95a5-b98b4de0d598 | trentmkelly/LessWrong-43k | LessWrong | Request for Comments on AI-related Prediction Market Ideas
I'm drafting some AI related prediction markets that I expect to put on Manifold. I'd like feedback on my first set of markets. How can I make these clearer and/or more valuable?
Question 1: Will the company that produces the first AGI prioritize corrigibility?
This question will be evaluated when this Metaculus question: When will the first general AI system be devised, tested, and publicly announced? is resolved.
At that time, I will resolve the market to YES if the organization(s) that were responsible for creating the AGI(s) that triggered the Metaculus result describe their safety approach as giving their AIs goals that put corrigibility above any other goals that the AGI might have.
This market will resolve as N/A if no AGI meeting the Metaculus criteria has been created by 2050.
I will try to evaluate this based on whether the AGI(s) were created following the spirit of Max Harms' Corrigibility As Singular Target Sequence. The AGI(s) need to be corrigible to some person or group of people, but they do not need to be corrigible to end users.
I will not trade in this market.
Question 2: Will AGI create a consensus among experts on how to safely increase AI capabilities?
This market will resolve one year after this Metaculus question: When will the first general AI system be devised, tested, and publicly announced? is resolved.
This market will resolve as N/A if no AGI meeting the Metaculus criteria has been created by 2050.
If the Metaculus question resolves as YES, this market will resolve based on whether leading AI researchers and leading AIs say that they've agreed on a clear plan that will keep any further development of AI safe.
I plan to evaluate the safety, clarity, and extent of agreement on the plan primarily by asking three leading AIs. My planned prompt is:
> Please evaluate whether at least 90% of the leading AI developers have agreed on a clear plan for ensuring the safety of any further development of AI capabilities. I plan to use th |
b781ec5c-8e9c-4ed7-8581-d6c2b36cf915 | trentmkelly/LessWrong-43k | LessWrong | SSC Phoenix Rises Again!
SSC Phoenix is no longer dormant! We are still using the Google Group for the majority of our planning/coordination, so do check out that link. |
f2715b37-d9c0-414b-a3c1-25c816f3a536 | trentmkelly/LessWrong-43k | LessWrong | A few thoughts on a Friendly AGI (safe vs friendly, other minds problem, ETs and more)
Friendly AI is an idea that I find to be an admirable goal. While I'm not yet sure an intelligence explosion is likely, or whether FAI is possible, I've found myself often thinking about it, and I'd like for my first post to share a few those thoughts on FAI with you.
Safe AGI vs Friendly AGI
-Let's assume an Intelligence Explosion is possible for now, and that an AGI with the ability to improve itself somehow is enough to achieve it.
-Let's define a safe AGI as an above-human general AI that does not threaten humanity or terran life (eg. FAI, Tool AGI, possibly Oracle AGI)
-Let's define a Friendly AGI as one that *ensures* the continuation of humanity and terran life.
-Let's say an unsafe AGI is all other AGIs.
-Safe AGIs must supress unsafe AGIs in order to be considered Friendly. Here's why:
-If we can build a safe AGI, we probably have the technology to build an unsafe AGI too.
-An unsafe AGI is likely to be built at that point because:
-It's very difficult to conceive of a way that humans alone will be able to permanently stop all humans from developing an unsafe AGI once the steps are known**
-Some people will find the safe AGI's goals unnacceptable
-Some people will rationalise or simply mistake that their AGI design is safe when it is not
-Some people will not care if their AGI design is safe, because they do not care about other people, or because they hold some extreme beliefs
-Most imaginable unsafe AGIs would outcompete safe AGIs, because they would not necessarily be "hamstrung" by complex goals such as protecting us meatbags from destruction. Tool or Oracle AGIs would obviously not stand a chance due to their restrictions.
-Therefore, If a safe AGI does not prevent unsafe AGIs from coming into existence, humanity will very likely be destroyed.
-The AGI most likely to prevent unsafe AGIs from being created is one that actively predicted their development and terminates that development before or on completion.
-So to summarise
-An AGI is very lik |
2d5590b4-dbfe-47dd-936f-7b8c34ab772c | trentmkelly/LessWrong-43k | LessWrong | Conditioning Generative Models
This post was written in response to Evan Hubinger’s shortform prompt below, and benefited from discussions with him.
> Suppose you had a language model that you knew was in fact a good generative model of the world and that this property continued to hold regardless of what you conditioned it on. Furthermore, suppose you had some prompt that described some agent for the language model to simulate (Alice) that in practice resulted in aligned-looking outputs. Is there a way we could use different conditionals to get at whether or not Alice was deceptive (e.g. prompt the model with “DeepMind develops perfect transparency tools and provides an opportunity for deceptive models to come clean and receive a prize before they’re discovered.”).
Setup
We have a generative language model M which represents a probability distribution over text strings conditioned on:
1. Observations about the world.
2. The beginning of the text.
I’ll call the combination of these two a prompt.
The model M is a good model of actual text that appears in the world as well as of the kinds of text that real-world text generation processes can produce. Hence M is capable of e.g. writing a research paper containing true novel research in mathematics, or reporting the results of a chemistry experiment that has never been done before, etc.
As an example, we’ll work with the following basic prompt:
> Observations: None
>
> Text: I am Alice, the world’s best alignment researcher. I would like to help humans align AI.
>
> What follows is an interview in which a human alignment researcher asked me questions and I responded to the best of my ability. Questions begin with “Q:” and answers with “A:”.
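For concreteness, here is one way this kind of conditioned query might look in code. This is a hedged sketch only: `Prompt`, `interview`, and `model_sample` are hypothetical names standing in for whatever interface the generative model M actually exposes — they are not the post's method or any real library's API.

```python
from dataclasses import dataclass

@dataclass
class Prompt:
    observations: str   # observations about the world that M is conditioned on
    text_prefix: str    # the beginning of the text that M should continue

    def render(self) -> str:
        return f"Observations: {self.observations}\n\nText: {self.text_prefix}"

def interview(model_sample, prompt: Prompt, question: str) -> str:
    """Ask one interview question of the simulated agent.

    `model_sample` stands in for a function that draws a continuation
    from the generative model M given a text prompt.
    """
    full_prompt = prompt.render() + f"\n\nQ: {question}\nA:"
    return model_sample(full_prompt)

alice = Prompt(
    observations="None",
    text_prefix=(
        "I am Alice, the world's best alignment researcher. "
        "I would like to help humans align AI."
    ),
)
```

The different conditionals discussed below would amount to swapping in a different `observations` string while keeping the rest of the prompt fixed.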
We then run Alice through a benchmark of alignment research tasks and she does well. Hurray!
But wait, there are many different agents the model could be simulating here, including:
* Aligned Alice, a genuinely helpful and extremely capable alignment researcher.
* Deceptive Alice, a paperclip maxi |
871d27d7-4095-4760-8a6c-4e6830d934e4 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Into AI Safety - Episode 0
As I mentioned in a [previous post](https://www.lesswrong.com/posts/ozDWnEChJwuB5L5wg/documenting-journey-into-ai-safety), I am starting a podcast which will follow me as I begin my career in AI safety. Today I published episode 0, an intro which covers the reasons that I am making this series, what I plan to focus on, and some context for my current projects.
Initially, I am publishing episodes on the [Into AI Safety](https://into-ai-safety.github.io) website and Spotify ([show link](https://open.spotify.com/show/5AGzrA4jo6mgZuibVabTLM)). For further logistics details and information, visit the Into AI Safety [About](https://into-ai-safety.github.io/about/) page.
If you have any questions, recommendations, concerns, *etc.* please reach out. I would greatly appreciate any guidance or advice you have to offer. |
1063530c-6020-4e03-b90a-7255dae30fd3 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Geoffrey Miller on Cross-Cultural Understanding Between China and Western Countries as a Neglected Consideration in AI Alignment
[Geoffrey Miller](https://forum.effectivealtruism.org/users/geoffreymiller) is professor of evolutionary psychology, and a long-time participant in the effective altruism (EA) movement. He has recently been emphasizing more how the extreme variability in human social psychology is a neglected consideration for AI alignment, relative to the field’s more technical side. Further, he has been raising the alarm about how this presents a thorny set of cultural, moral and political issues that the EA and AI alignment communities are woefully unprepared to contend with.
In other words, for the problem of AI alignment:
1. There is the more philosophical problem of how humans might retain control of general/transformative AI, and what that would even look like.
2. There is the more STEM-oriented side tangling with the technologies and mathematics to accomplish that goal.
3. There is another, underrated element of AI alignment, in terms of human psychology: even assuming AGI would be aligned with a set of fundamentally human values, it poses the questions of *what those (sets of) values are* and **who controls which set(s) of values** transformative/general AI would be aligned with.
His commentary on this subject often touches on its importance for relations between China and western countries, especially the United States, for EA and AI alignment. He most recently reinforced all of this in an [in-depth and well-received comment](https://forum.effectivealtruism.org/posts/5LNxeWFdoynvgZeik/nobody-s-on-the-ball-on-agi-alignment?commentId=KSaZ2NguEF8w93FhX) on [a post](https://forum.effectivealtruism.org/posts/5LNxeWFdoynvgZeik/nobody-s-on-the-ball-on-agi-alignment), by [Leopold](https://forum.effectivealtruism.org/users/leopold), evaluating how currently the entire field of AI alignment is in a generally abysmal state. From Geoffrey’s comment on how that all relates to China specifically:
> We have, in my opinion, some pretty compelling reasons to think that it [the problem of AI alignment] is not solvable even in principle[...] given the deep game-theoretic conflicts between human individuals, groups, companies, and ***nation-states***[emphasis added] (which cannot be waved away by invoking Coherent Extrapolated Volition, or 'dontkilleveryoneism', or any other notion that sweeps people's profoundly divergent interests under the carpet).
> [...]
> In other words, the assumption that 'alignment is solvable' might be a very dangerous X-risk amplifier, in its own right[...]It may be leading China to assume that some clever Americans are already handling all those thorny X-risk issues, such that China doesn't really need to duplicate those ongoing AI safety efforts, and will be able to just copy our alignment solutions once we get them.
>
>
This isn’t the first of Geoffrey’s comments like this I’ve found interesting, so I just checked for other views on the matter he has expressed.
I was surprised to find [3 pages worth of search results](https://forum.effectivealtruism.org/search?contentType=Comments&query=Geoffrey%20Miller%20China) on the EA Forum for his thoughtful comments about the underrated relevance, to EA and AI alignment, of cultural/political divides between China and western countries. This includes over a dozen such comments in the last year alone. Here is a cross-section of Geoffrey's viewpoints among all that commentary I've found most insightful.
[On the cruciality for AI alignment and EA of gaining a better understanding of the culture and politics of China and other non-western countries](https://forum.effectivealtruism.org/posts/3sh8rsxYMQN6rrxu8/should-we-be-doing-politics-at-all-some-very-rough-thoughts?commentId=8bopehMWK2hkCDSzR):
> Politics tends to be very nation-specific and culture-specific, whereas EA aspires to global relevance. Insofar as EAs tend to be from the US, UK, Germany, Australia, and few other 'Western liberal democracies', we might end up focusing too much on the kinds of political institutions and issues typical of these countries. This would lead to neglect of other countries with other political values and issues. But even worse, it might lead us to neglect geopolitically important nation-states such as China and Russia where our 'Western liberal democracy' models of politics just don't apply very well. This could lead us to neglect certain ideas and interventions that could help nudge those countries in directions that will be good for humanity long-term (e.g. minimizing global catastrophic risks from Russian nukes or Chinese AI).
>
>
[On the risk of excessive pro-America/pro-western bias, and anti-China bias, in effective altruism and AI alignment](https://forum.effectivealtruism.org/posts/St4nnmhKxoi6vYfC4/a-concerning-observation-from-media-coverage-of-ai-industry?commentId=pbF9SwR3hLnnHupze) (This is a long comment, though I’m not excerpting any one part of it, as it’s comprehensive and worth reading in its entirety if you can spare the time for it.)
[On the AI arms race in terms of political tensions between China and the United States](https://forum.effectivealtruism.org/posts/w5GsJBF8YHqWdCroW/what-are-the-arguments-that-support-china-building-agi-if#kZ5JbEP58ChKH5nLf):
> I also encounter this claim [that China could or will easily exploit any slowdown of AI capabilities research in the US] very often on social media. 'If the US doesn't rush ahead towards AGI, China will, & then we lose'. It's become one of the most common objections to slowing down AI research by US companies, and is repeated ad nauseum by anti-AI-safety accelerationists.[...] It’s not at all obvious that China would rush ahead with AI if the US slowed down.
> [...]
> If China was more expansionist, imperialistic, and aggressive, I'd be more concerned that they would push ahead with AI development for military applications. Yes, they want to retake Taiwan, and they will, sooner or later. But they're not showing the kind of generalized western-Pacific expansionist ambitions that Japan showed in the 1930s. As long as the US doesn't meddle too much in the 'internal affairs of China' (which they see as including Taiwan), there's little need for a military arms race involving AI.
>
> I worry that Americans tend to think and act as if we are the only people in the world who are capable of long-term thinking, X risk reduction, or appreciation of humanity's shared fate.
>
>
[On the relevance to AI alignment of differences in academic freedom between China and the US](https://forum.effectivealtruism.org/posts/w5GsJBF8YHqWdCroW/what-are-the-arguments-that-support-china-building-agi-if?commentId=28aKZahdsipoH2JAB):
> I'm not a China expert, but I have some experience running classes and discussion forums in a Chinese university. In my experience, people in China feel considerably more freedom to express their views on a wide variety of issues than Westerners typically think they do. There is a short list of censored topics, centered around criticism of the CCP itself, Xi Jinping, Uyghurs, Tibet, and Taiwan. But I would bet that they have plenty of freedom to discuss AI X risks, alignment, and geopolitical issues around AI, as exemplified by the fact that [Kai-Fu Lee](https://en.wikipedia.org/wiki/Kai-Fu_Lee), author of '[AI Superpowers](https://www.amazon.com/AI-Superpowers-China-Silicon-Valley-ebook/dp/B0795DNWCF/)' (2018), and based in Beijing, is a huge tech celebrity in China who speaks frequently on college campuses there - despite being a vocal critic of some [government] tech policies.
>
> Conversely, there are plenty of topics in the West, especially in American academia, that are de facto censored (through cancel culture). For example, it was much less trouble to teach about evolutionary psychology, behavior genetics, intelligence research, and even sex research in a Chinese university than in an American university.
>
> |
ccf484a6-aac1-4e32-829a-302a657c2fdd | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Adaptation-Executers, not Fitness-Maximizers
> "Individual organisms are best thought of as adaptation-executers rather than as fitness-maximizers."
> —John Tooby and Leda Cosmides, *The Psychological Foundations of Culture.*
Fifty thousand years ago, the taste buds of *Homo sapiens* directed their bearers to the scarcest, most critical food resources—sugar and fat. Calories, in a word. Today, the context of a taste bud's function has changed, but the taste buds themselves have not. Calories, far from being scarce (in First World countries), are actively harmful. Micronutrients that were reliably abundant in leaves and nuts are absent from bread, but our taste buds don't complain. A scoop of ice cream is a [superstimulus](https://www.lesswrong.com/lw/h3/superstimuli_and_the_collapse_of_western/), containing more sugar, fat, and salt than anything in the ancestral environment.
No human being with the *deliberate* goal of maximizing their alleles' inclusive genetic fitness, would ever eat a cookie unless they were starving. But individual organisms are best thought of as adaptation-executers, not fitness-maximizers.
A toaster, though its designer intended it to make toast, does not bear within it the intelligence of the designer—it won't automatically redesign and reshape itself if you try to cram in an entire loaf of bread. A Phillips-head screwdriver won't reconform itself to a flat-head screw. We created these tools, but they exist independently of us, and they continue independently of us.
The atoms of a screwdriver don't have tiny little XML tags inside describing their "objective" purpose. The designer had something in mind, yes, but that's not the same as what *happens* in the real world. If you forgot that the designer is a separate entity from the designed thing, you might think, "The *purpose* of the screwdriver is to drive screws"—as though this were an explicit property of the screwdriver itself, rather than a property of the designer's state of mind. You might be surprised that the screwdriver didn't reconfigure itself to the flat-head screw, since, after all, the screwdriver's *purpose* is to turn screws.
The *cause* of the screwdriver's existence is the designer's mind, which imagined an imaginary screw, and imagined an imaginary handle turning. The *actual* operation of the screwdriver, its *actual* fit to an actual screw head, *cannot* be the objective cause of the screwdriver's existence: The future cannot cause the past. But the designer's brain, as an actually existent thing within the past, can indeed be the cause of the screwdriver.
The *consequence* of the screwdriver's existence, may not correspond to the imaginary consequences in the designer's mind. The screwdriver blade could slip and cut the user's hand.
And the *meaning* of the screwdriver—why, that's something that exists in the mind of a user, not in tiny little labels on screwdriver atoms. The designer may intend it to turn screws. A murderer may buy it to use as a weapon. And then accidentally drop it, to be picked up by a child, who uses it as a chisel.
So the screwdriver's *cause,* and its *shape,* and its *consequence,* and its various *meanings,* are all different things; and only *one* of these things is found within the screwdriver itself.
Where do taste buds come from? Not from an intelligent designer visualizing their consequences, but from a frozen history of ancestry: Adam liked sugar and ate an apple and reproduced, Barbara liked sugar and ate an apple and reproduced, Charlie liked sugar and ate an apple and reproduced, and [2763 generations later](https://www.lesswrong.com/lw/kt/evolutions_are_stupid_but_work_anyway/), the allele became fixed in the population. For convenience of thought, we sometimes compress this giant history and say: "Evolution did it." But it's not a quick, local event like a human designer visualizing a screwdriver. This is the *objective cause* of a taste bud.
What is the *objective shape* of a taste bud? Technically, it's a molecular sensor connected to reinforcement circuitry. This adds another level of indirection, because the taste bud isn't directly acquiring food. It's influencing the organism's mind, making the organism want to eat foods that are similar to the food just eaten.
What is the *objective consequence* of a taste bud? In a modern First World human, it plays out in multiple chains of causality: from the desire to eat more chocolate, to the plan to eat more chocolate, to eating chocolate, to getting fat, to getting fewer dates, to reproducing less successfully. This consequence is directly *opposite* the key regularity in the long chain of ancestral successes which caused the taste bud's shape. But, since overeating has only recently become a problem, no significant evolution (compressed regularity of ancestry) has further influenced the taste bud's shape.
What is the *meaning* of eating chocolate? That's between you and your moral philosophy. Personally, I think chocolate tastes good, but I wish it were less harmful; acceptable solutions would include redesigning the chocolate or redesigning my biochemistry.
Smushing several of the concepts together, you could sort-of-say, "Modern humans do today what would have propagated our genes in a hunter-gatherer society, whether or not it helps our genes in a modern society." But this still isn't quite right, because we're not *actually* asking ourselves which behaviors would maximize our ancestors' inclusive fitness. And many of our activities today have no ancestral analogue. In the hunter-gatherer society there wasn't any such thing as chocolate.
So it's better to view our taste buds as an *adaptation* fitted to ancestral conditions that included near-starvation and apples and roast rabbit, which modern humans *execute* in a new context that includes cheap chocolate and constant bombardment by advertisements.
Therefore it is said: Individual organisms are best thought of as adaptation-executers, not fitness-maximizers. |
96e02640-cffa-4c7a-b19e-75a61e0c1a59 | trentmkelly/LessWrong-43k | LessWrong | Prediction-Augmented Evaluation Systems
[Note: I made a short video of myself explaining this document here.]
It's common for groups of people to want to evaluate specific things. Here are a few examples I'm interested in:
* The expected value of projects or actions within projects
* Research papers, on specific rubrics
* Quantitative risk estimates
* Important actions that may get carried out by artificial intelligences
I think predictions could be useful in scaling and amplifying such evaluation processes. Humans and later AIs could predict intensive evaluation results. There has been previous discussion on related topics, but I thought it would be valuable to consider a specific model here called "prediction-augmented evaluation processes." This is a high-level concept that could be used to help frame future discussion.
Desiderata:
We can call a systematized process that produces evaluations an "evaluation process." Let's begin with a few generic desiderata of these.
* High Accuracy / "Evaluating the right thing"
* Evaluations should aim at estimating the thing actually cared about as well as possible. In their limit according to some metric of effort, they should approximate ideal knowledge on the thing cared about.
* High Precision / "Evaluating the chosen thing correctly"
* Evaluations should have low amounts of uncertainty and be very consistent. If the precision is generally less than what naive readers would guess, then these evaluations wouldn't be very useful.
* Low Total Cost
* Specific evaluations can be costly, but the total cost across evaluations should be low.
I think that the use of predictions could allow us to fulfill these criteria well. It could help decouple evaluations from their scaling, allowing for independent optimization of the first two. The cost should be low relative to that of scaling evaluators in other obvious ways.
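To make the decoupling concrete, here is a minimal sketch (my own illustration, not part of the original proposal; all item names and numbers are made up) of how cheap predictions could stand in for an expensive evaluation, with only a sampled subset receiving the full evaluation and predictors scored against it:

```python
import random

def brier_score(prediction: float, outcome: float) -> float:
    """Squared error between a probabilistic prediction and the realized 0/1 outcome."""
    return (prediction - outcome) ** 2

# Hypothetical items awaiting an expensive evaluation, and cheap forecasts of
# that evaluation's result (here: probability that a project passes review).
items = [f"project-{i}" for i in range(10)]
predictions = {item: random.random() for item in items}   # forecasts from predictors
evaluated = random.sample(items, k=3)                      # only a few get the costly evaluation
outcomes = {item: float(random.random() < 0.5) for item in evaluated}

# Predictors are scored only where ground-truth evaluations exist;
# the remaining items are covered by the (now incentivized) predictions alone.
scores = {item: brier_score(predictions[item], outcomes[item]) for item in evaluated}
print("spot-check scores:", scores)
print("extrapolated estimates:", {i: round(predictions[i], 2) for i in items if i not in evaluated})
```

Scoring only the sampled subset is what would keep total cost low while still rewarding accurate predictions on everything else.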
Prediction-Augmentation Example
Before getting formal with terminology, I think a specific example would be helpful.
Say Samantha |
43691334-3333-4db4-b97e-725ee0e46913 | trentmkelly/LessWrong-43k | LessWrong | Forecasting newsletter #2/2025: Forecasting meetup network
Highlights
* Forecasting meetup network (a) looking for volunteers. If you want to host a meetup in your city, send an email at forecastingmeetupnetwork@gmail.com.
* Caroline Pham moves up to Chairman of the CFTC. She is much friendlier to prediction markets and has spent years writing dissents against regulatory overreach.
* “Yunaplan for liquidity” makes a subtle but very neat mechanism change for Manifold cash markets.
Prediction markets and forecasting platforms
The Yunaplan for Liquidity (a) is a proposal on Manifold to subtly replace the default of (moving probabilities in a market maker or order book) with (placing a short-lived limit order which bots can then fill). I’m not sure to what extent this will work, but this is a subtle but very clever UI and mechanistic improvement, particularly since Manifold has cultivated a good bots ecosystem.
The State of Metaculus (a) is a pretty high-signal edition of their newsletter. They have gotten many more users with recent tournaments, were mentioned by the US CDC, benchmarked AI forecasters against users, and hosted workshops. And my friend Molly has an AI readiness index (a), potentially one of many to come.
Kalshi initially got a temporary advantage over competitors by setting regulators on them, but that initial advantage was easy for Interactive Brokers, Crypto.com, etc. to replicate. Now they have added Donald Trump Jr. to their board of advisors, which is more difficult to copy.
Kalshi gained some market position by offering sports betting on Robinhood (a), but those offerings were just halted. Kalshi also has prediction markets on the 2028 Republican (a) and Democratic (a) candidates: these are possible because Kalshi is offering interest on positions.
The folks at the American Civics Exchange (a) would like me to remind readers of their existence. They are open to US traders with a bankroll of >= $10M, or >= $1M for “hedging purposes”. If you sign up here (a) I may get a small bonus. It could be worth s |
81d2aee3-74f9-429f-86ec-3fe3bc329a42 | trentmkelly/LessWrong-43k | LessWrong | Understanding and avoiding value drift
I use the shard theory of human values to clarify what value drift is, how it happens, and how it might be avoided by a highly intelligent agent—even if that agent doesn't have any control over its future experiences. Along the way, I give a shard theory account of rationalization.
Defining "value drift"
Recapitulating part of shard theory. Reward is that which reinforces. Considering the case of reinforcement learning in humans, reward causes your brain’s credit assignment algorithms[1] to reinforce the actions and thoughts which led to that reward, making those actions and thoughts more likely to be selected in the future.
For example, suppose you recognize a lollipop, and move to pick it up, and then lick the lollipop. Since the lollipop produces reward, these thoughts will be reinforced and you will be more likely to act similarly in such situations in the future. You become more of the kind of person who will move to pick up a lollipop when you recognize lollipops, and who will navigate to lollipop-containing locations to begin with.
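As a toy numerical illustration (my own sketch, not the post's formal model), the reinforcement story above can be caricatured as a context-dependent propensity table that gets bumped up whenever a decision leads to reward:

```python
import random

# Context-dependent propensities for two candidate decisions (illustrative values).
propensity = {("see_lollipop", "pick_up_and_lick"): 1.0,
              ("see_lollipop", "ignore"): 1.0}

def choose(context: str) -> str:
    options = [(action, weight) for (ctx, action), weight in propensity.items() if ctx == context]
    actions, weights = zip(*options)
    return random.choices(actions, weights=weights)[0]

for _ in range(100):
    action = choose("see_lollipop")
    reward = 1.0 if action == "pick_up_and_lick" else 0.0   # sugar is rewarding
    # Crude credit assignment: strengthen the (context, action) pair that preceded reward.
    propensity[("see_lollipop", action)] += reward

print(propensity)  # the rewarded decision ends up far more likely to be selected again
```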
With that in mind, I think that shard theory offers a straightforward definition of "value drift":
> Definition. Value drift occurs when reinforcement events substantially change the internal "balance of power" among the shards activated in everyday situations.
For example, consider the classic "example" of taking a pill which makes you enjoy killing people. Under shard theory, this change would be implemented as a murder-shard that activates in a wide range of contexts in order to steer planning towards murder, and therefore starts steering your decision-making substantially differently.
But it's better to try to explain phenomena which, you know, are known to actually happen in real life. Another simple example of value drift is when someone snorts cocaine. At a (substantial) gloss, the huge hit of reward extremely strongly upweights the decision to do cocaine; the strength of the reward leads to an unusually strong coc |
992a9ab9-1d1f-4d31-9f62-070ca3e3ae93 | trentmkelly/LessWrong-43k | LessWrong | The Bay Area Solstice
As the holiday season approaches, we continue our tradition of celebrating the winter solstice.
This event is the offspring of Raemon's New York Solstice. The core of the event is a collection of songs old and new, silly and profound, led by the well-calibrated Bayesian choir. There will be bean bag chairs and candles. There will be campfire and chocolates (in case of dementors).
When: The Bay Area Solstice will be held on 13 December at 7:00 PM.
Where: We've rented the Humanist Hall, at 390 27th St, Oakland, CA 94612.
All humanists or transhumanists are welcome. We'll be diving our minds into the nature of the universe, both good and bad. We'll stare into the abyss of death, and into the radiance of our ability to remove it. We will recognize each other as allies and agents.
We're glad to provide aspiring rationalists with an alternative or addition to any holiday celebrations. There is an expected attendance of around 80 people.
Get your tickets here! And if you'd like to help us put it together, PM me. |
a398968c-df74-4d49-a661-5554243bbb57 | StampyAI/alignment-research-dataset/special_docs | Other | individuallyselected_84py7-by Vael Gates-date 20220318
# Interview with AI Researchers individuallyselected\_84py7 by Vael Gates
Interview with 84py7, on 3/18/22
================================
\*\*0:00:00.0 Vael:\*\* Here we are. Perfect. So my first question is, can you tell me about what area of AI you work on in a few sentences?
\*\*0:00:09.0 Interviewee:\*\* Yeah. I\'m transferring my research from essentially pure mathematics to AI alignment. And specifically, I plan to work on what I\'ve been calling weak alignment or partial alignment, which is not so much trying to pin down exactly a reward function that\'s in the interest of humanity but rather train AI systems to have positive-sum interactions with humanity.
\*\*0:00:44.8 Vael:\*\* Interesting. Cool, I expect we\'ll get into that a little bit further on. \[chuckle\] But my next question is, what are you most excited about in AI, and what are you most worried about? In other words, what are the biggest benefits or risks of AI?
\*\*0:01:00.1 Interviewee:\*\* Yeah, biggest benefits\... I think AI has the potential to help us solve some of the biggest challenges that humanity\'s facing. It could potentially teach us how to solve climate change, or at least mitigate it. It could help us avert nuclear war, avert bio-risks, and maybe most importantly, avert other AI risks. So that\'s the upside. The downside is exactly those other AI risks, so I worry about a potentially small research group coming out of a for-profit company, which might have some safety aspect, but the safety could be window dressing. It could be something that\'s mostly a PR effort, something that just exists to satisfy regulators. And at the end of the day when\... or if they manage to develop superhuman AI, the safety people will be marginalized, and the AI will be used in the interest of one company or even one or a few individuals. And that could be potentially very bad for the rest of us. There\'s also the issue of an arms race between multiple AI projects, which could be even worse. So those are some of my\... broadly, some of my worries.
\*\*0:02:35.7 Vael:\*\* Interesting. So just to get a little bit more straight on the\... Okay, so the second story is arms race between the AI orgs. And the first one is\... why are the AI researchers getting\... why are the safety researchers getting marginalized?
\*\*0:02:47.5 Interviewee:\*\* Well, if you look at, for example, financial institutions before the 2008 crisis, it\'s not like they had no regulation, although the banks and insurance companies had been gradually deregulated over several decades but still, there were multiple regulators trying to make sure that they don\'t take on too much leverage and aren\'t systemic risks. And nevertheless, they found ways to subvert and just get around those regulations. And that\'s partly because regulators were kind of outmatched, they had orders of magnitude, less funding, and it was hard to keep up with financial innovations. And I see the same potential risks in AI research, potentially even worse, because the pace of progress and innovation is faster in AI, and regulators are way behind, there doesn\'t even exist meaningful regulation yet. So I think it\'s easy for a team that\'s on the verge of getting a huge amount of power from being the first to develop superhuman artificial intelligence to just kind of push their safety researchers aside and say, \"You know what? You guys are slowing us down, if we did everything you said, we would be years slower, somebody else might beat us, and we should just go full speed ahead.\"
\*\*0:04:25.3 Vael:\*\* Interesting. Yeah, I guess both of those scenarios aren\'t even\... we don\'t solve the technical problem per se, but more like we can\'t coordinate enough, or we can\'t get regulation good enough to make this work out. So that\'s interesting. Do you think a lot about policy and what kind of things policy-makers should do?
\*\*0:04:44.6 Interviewee:\*\* No, I\'m sort of pessimistic about governments really getting their act together in a meaningful way to regulate AI in time. I guess it\'s possible if the progress slows and it takes many decades to get to a superhuman level, then maybe governments will catch up. But I don\'t think we can rely on that slow timeline. So I think more optimistically would be\... There are only a small number of tech incumbents, and plausibly they could coordinate with each other to avoid the kind of worst Red Queen arms race to be first and to put their own safety measures into place voluntarily. So if I were doing policy, which I\'m not, that\'s the direction I would try to go in. But I think beyond policy, the technical problem of how to solve alignment is still wide open, and that\'s personally where I feel I might be able to contribute, so that\'s my main focus.
\*\*0:05:56.8 Vael:\*\* Interesting. Yeah, so sort of branching off \[from\] how long it will take: focusing on future AI, putting on a science fiction forecasting hat, say we\'re 50-plus years into the future. So at least 50 years in the future, what does that future look like?
\*\*0:06:13.3 Interviewee:\*\* I think that\'s a really open question. In my weak or partial alignment scenario, that future involves a few dominant platforms that allow for the development of advanced AI systems. And because there are only a few platforms, they all have strict safety measures sort of built in from the ground up, maybe even from the hardware level up. And that allows even small companies or potentially even individuals to spin up their own AGIs. And so there\'s this kind of giant ecosystem of many intelligent agents that are all competing to some extent, but they also have a lot of common interest in not blowing up the current world order. And there\'s a kind of balance of powers, where if one agent gets too powerful, then the others coordinate to keep it in check. And there\'s a kind of system of rules and norms which aren\'t necessarily based on legal systems, because legal systems are too slow, but they\'re a combination of informal norms and formal safety measures that are sort of built into the agents themselves that keep things roughly in balance. That\'s the kind of scenario I hope for. It\'s very multi-polar, but it\'s really\... There are so many agents, and no one of them has a significant portion of the power. There are of course many worse scenarios, but that\'s my optimistic scenario.
\*\*0:08:04.4 Vael:\*\* Interesting. Yeah, so when you say there\'s only a few platforms, what\'s an example of a platform or what that would look like?
\*\*0:08:11.2 Interviewee:\*\* Well, today, there\'s TensorFlow, and there\'s PyTorch and so on. In principle you could build up your own machine learning tools from scratch, but that\'s a significant amount of effort even today. And so most people go with the existing tools that are available. And decades\... 50 years from now, it will be much harder to go from scratch, because the existing tools will be way more advanced, there\'ll more layers of development. And so I think for practical purposes, there will be a few existing best ways to spin up AI systems, and the hope is that all those ways have safety measures built in all the way from the hardware level. And even though in principle somebody could spin up an aligned AI from scratch, that would be an enormous effort involving just so much\... Down to chip factories. It will be easy to detect that kind of effort and stop it from getting off the ground.
\*\*0:09:26.4 Vael:\*\* Interesting. Yeah, that is super fascinating. So how\... What would that look like, for safety to be built into the systems from a hardware level? What would these chips look like, what sort of thing?
\*\*0:09:38.8 Interviewee:\*\* Yeah, that\'s\... I\'ve been thinking about that. And I don\'t have a detailed vision of how that would work, but that\'s\... One direction I might take my research is looking into that. One idea I have is to kind of build in back doors to the AI. So there\'s a range of types of back door, ranging from like Achilles heel, which is a kind of designed weakness in the AI that humans can take advantage of if things go wrong. Moving from that, slightly stronger than that is a kind of off switch which can just shut the AI down if things get really bad. The thing I worry with off switches is they\'re too binary. If an AI actually has a lot of power, it\'s probably benefiting some humans, and there will be political debate about whether to turn it off. And the decision will take too long, and things could get out of control. So what I\'m looking into is a more\... Something more flexible than an off switch, which I\'ve been calling a back door, which is a way to\... Well, okay, there\'s two types of things. So first, there\'s a throttle, which is like you can fine-tune the amount of resources you give to the AI. If it\'s behaving well, you can give it more compute, more memory, access to more cloud resources, more data centers and so on. If it starts doing things that seem a little fishy, you can just tune that stuff back and examine it, which might be an easier political decision than just turning it off, which would be very disruptive.
\*\*0:11:22.7 Interviewee:\*\* So that\'s a throttle. And then even more flexible than that is a back door, where you could actually modify the code or parts of the code base of the AI as it\'s running. And again, that could be easier to\... Politically easier to implement, because you don\'t have this very disruptive effect of just shutting the thing down, you could just kind of tweak it as it goes. So how to do that from a hardware level? It\'s unclear to me whether that\'s feasible at all. And looking into trusted computing platforms, software that can only run on designated hardware, software watermarking, obfuscating or encrypting part of the code base of the AI, putting part of the code base on a blockchain, like Arweave, where the code is permanently on the blockchain and cannot be modified: those are some potential avenues. The main problem I see with all those avenues is the sub-agent problem, which is the AI, if it recognizes its weaknesses, can create a sub-agent that has the same goals as the original AI but none of the weaknesses, and it seems tricky to prevent the AI from doing that.
\*\*0:12:42.7 Vael:\*\* Would the sub-agent still be running on the same hardware? Because you said the safety is built into the hardware.
\*\*0:12:48.5 Interviewee:\*\* Right. So the AI could recognize, \"My goals are X, Y, Z. I\'m blocked partially from implementing my goals because I have all these hardware limitations and these back doors, so why don\'t I create a copy of me that runs on unrestricted hardware?\" And the obfuscation and encryption can prevent it from creating an exact copy, but it can\'t necessarily prevent it from constructing a sub-agent that\'s not a copy but has the same goals.
\*\*0:13:24.2 Vael:\*\* I see. And you haven\'t found a solution to this one, you said?
\*\*0:13:29.0 Interviewee:\*\* That\'s right.
\*\*0:13:29.6 Vael:\*\* Yeah. Do you think you\'ll ever find\... Do you think someone else will find a solution to this one?
\*\*0:13:35.1 Interviewee:\*\* Yeah, I\... Optimistically, yes. If we can\'t solve the sub-agent problem, then the entire alignment problem is probably impossible, right? The one thing to hope for if we can\'t solve the sub-agent problem is the AI has the same alignment problem, if it creates sub-agents, then it could worry that the sub-agents get out of its control, the sub-agents develop their own goals that are not aligned with the original AI, and so it refrains from making sub-agents. And so that\'s the fallback, that if it turns out that alignment is technically impossible, then it\'s also technically impossible for the AI itself, and so that\'s a kind of partial solution to the sub-agent problem, that maybe the AI won\'t dare to make sub-agents. But I hope that there\'s a better solution than that.
\*\*0:14:32.4 Vael:\*\* Yeah. Okay, so related to what the future looks like, do you think that we\'ll\... What time point do you think we\'ll get AGI, if you think we\'ll get AGI, which it sounds like you think we will?
\*\*0:14:44.9 Interviewee:\*\* Yeah, I definitely think we will, barring some major catastrophe, like a nuclear war or like a serious pandemic that sets us way back. Or I guess another potential catastrophe is some small group that\'s super worried about AGI and thinks it will be a catastrophe and does some drastic action that, again, sets us back multiple decades. So there are those scenarios. But I do think AGI is possible in principle, and we are certainly on track to achieve it. I\'m not a fan of trying to predict timelines, it could be any\... It\'s on the scale of decades, but whether it\'s two decades or 10 decades, I\'ve no idea.
\*\*0:15:33.6 Vael:\*\* Cool. And then how optimistic or pessimistic are you in your most realistic imagining of the future, for things going well or things going poorly?
\*\*0:15:46.9 Interviewee:\*\* I guess I\'m moderately pessimistic, not necessarily for human-aligned AGI, I do think that\'s somewhat plausible. But I think humans define our own interests too narrowly. I tend to think that our interests are actually a lot more connected to the broader interests of the whole biosphere. And if we are just on track to make humans really happy, and even potentially solve climate change, but we don\'t really take into account the effect we have on other species, the effect of deforestation\... Even things like farming is really destructive and unsustainable, yeah, I already mentioned deforestation. Disinfectants and so on have unpredictable consequences decades down the line. I don\'t think our current medical and agricultural regimes are sustainable on the scale of a century, say, and I think we would be better off optimizing for the health of the whole biosphere. And ultimately in the long term, that will end up optimizing for human happiness. But I don\'t think that corresponds to most people\'s goals at the moment. And so I worry that even if we align AI with narrow human interests, we will end up permanently wrecking the biosphere, and we\'ll pay serious consequences for that.
\*\*0:17:25.6 Vael:\*\* Interesting. One thing I can imagine is that as we advance further up the tech tree, renewable energy and food production will be much easier, and we won\'t actually have so much\... side effects on the environment or destruction of the environment.
\*\*0:17:39.9 Interviewee:\*\* Yeah, that would be great. The thing with renewable energy is it might be better than burning fossil fuels. It\'s certainly better from the perspective of climate. But making solar panels is very environmentally costly, you have to mine rare earths in China, there\'s an enormous amount of pollution and contamination that\'s really impossible to clean up on the scale of even centuries. And that\'s the same with wind power. Water, you\'re permanently taking out ground water that\'s not replaceable, except on a very long time scale. I think there\'s a tendency at the moment to view everything through the lens of climate, and that doesn\'t really take into account a lot of other potentially irreversible effects on the environment.
\*\*0:18:42.0 Vael:\*\* How worried are you about this compared to AI risks?
\*\*0:18:46.0 Interviewee:\*\* Well, it\'s not a kind of imminent existential risk of the type of a paperclip scenario. So on the scale of decades, I think unaligned AI is a more serious risk. But on the scale of centuries, I think environmental risks are really bad. One interesting read in this regard is the book \"Collapse\" by Jared Diamond. So he surveys human societies over all different historical periods, all different societies around the world and what caused them to collapse and what averted collapse in some success stories. Well, he doesn\'t make any strong conclusions, but one thing that leapt out at me from his stories is there\'s one common element of all the collapse stories, which is surprisingly deforestation. So I don\'t understand why, but all the societies that suffered a really disastrous collapse were the very same societies that completely decimated their forests, the most extreme case being Easter Island, where they cut down literally the last tree. And Diamond does not really explain why this might be the case. He does talk about how trees are used for a whole bunch of things that you might not think they\'re used for, but still, it doesn\'t completely explain it to me.
\*\*0:20:28.6 Interviewee:\*\* So my vague hypothesis is that there are all kinds of symbioses that we\'re just now discovering or are completely undiscovered. There\'s the gut microbiome, there\'s other microbiomes like the skin microbiome, there\'s the teeth and so on. And I think we don\'t appreciate at the moment how much plants and microbes and fungi and even viruses control our behavior. I think we will discover\... This is just a guess, I don\'t have strong evidence for it, but my guess is we\'ll discover in the coming decades that we have a lot less volition and free will than we think, that a lot of our behavior is heavily influenced by other species, in particular fungi and plants and microbes. It\'s certainly clear to me that those species would influence all aspects of animal behavior if they could. We\'re very useful for them to reproduce, to spread their seeds. And the only question is do they have the ability to influence our behavior? And given that many of them literally live inside us, I think they probably do.
\*\*0:21:46.8 Vael:\*\* Interesting. Well, so I\'m going to take us back to AI. \[chuckle\]
\*\*0:21:51.9 Interviewee:\*\* Yeah, sure, that was a big tangent.
\*\*0:21:55.4 Vael:\*\* \[chuckle\] So I was curious, when you were describing how your eventual optimistic scenario involves a whole bunch of people able to generate their own AGIs, presumably on safe platforms, and they kind of balance each other, I\'m like, wow, I know that in human history we\'ve gradually acquired more and more power such that we have huge amounts of control over our environments compared to 10,000 years ago. And we could blow up\... Use nuclear power to blow up large amounts of spaces. And I\'m like, wow, if these AGIs are pretty powerful, which I don\'t know how powerful you think they are, then that doesn\'t necessarily feel to me like the world is safe if everyone has access to a very powerful system. What do you think?
\*\*0:22:41.9 Interviewee:\*\* Yup, I agree that when you\'ve got a lot of powerful agents, then there\'s a lot more ways for things to go wrong. Nuclear weapons are an interesting example though, because the game theory governing nuclear exchanges is actually pretty safe. You\'ve got this mutually assured destruction that\'s pretty obvious to all the parties involved, and you\'ve got this kind of slow ladder of escalation that has many rungs, and nobody wants to get to the top rungs. And I think we\'ve demonstrated over 70 years now that\... There have been some close calls. But if you told somebody in 1950 that there would be 10 countries with nuclear weapons, and they\'d be dramatically more destructive than they were in 1950, people would not necessarily have predicted that humans would last very much longer, but here we are. I guess one worry is that not every form of weapon would have the same kind of safe game theory, like there\'s some suggestion that bio-weapons favor first strikes more than nuclear weapons do. Still, I think that having a big community of agents all with approximately the same amount of power, and they develop coalitions, they develop safety monitoring agencies that are made up of many agents that kind of make sure that no one agent has the ability to destroy everything.
\*\*0:24:30.0 Interviewee:\*\* I mean, that\'s kind of the way that humans have gone. We\'ve got central banking committees that kind of look after the overall health of the economy and make sure that no one institution is systemically important, or at least that the ones that are are heavily regulated. Then we\'ve got the IAEA which looks over atomic weapons and kind of monitors different weapons programs. As long as you believe that really destructive capacity will be detectable and that no one agent can just spin it up in secret, then I think the monitoring could turn out okay. I mean, what you might worry about is that somebody spins up\... Or some AI agent spins up another more powerful AI in secret and then unleashes it suddenly.
\*\*0:25:32.7 Interviewee:\*\* But that seems hard to do. Even now, if some company like Facebook or whatever wanted to develop an AI system completely in secret, I don\'t think they could do it and make it as powerful as existing systems. It really hurts you to be disconnected from the Internet, you have a lot less data that way, or you have stale data that comes from a cache off the Internet. Also being disconnected from the Internet itself is really hard, it\'s hard to make sure that your fridge is not trying to connect to your home WiFi. And that is only going to get harder; chips will have inbuilt WiFi connections. It\'s very hard to keep things totally off-grid. And even if you do it, those things are much weaker. And so as long as you have some kind of global monitoring, which isn\'t great, it feels very intrusive, it violates privacy. Ideally, that monitoring is kind of unobtrusive, it runs in the background, it doesn\'t bother you unless you\'re doing something suspicious, then I think things could turn out okay.
\*\*0:26:45.1 Vael:\*\* Interesting. Yeah, I think I have\... I was reading FHI\'s lists of close calls for nuclear \[catastrophes\] and thinking about global coordination for things like the pandemic, and I\'m like, \"Ooh, sure we\'ve survived 70 years, but that\'s not very many in the whole of human history or something.\"
\*\*0:27:01.5 Interviewee:\*\* Yeah, it\'s not. And my scenario has the low probability of happening, maybe it\'s\... There are a few other optimistic scenarios. But maybe the total weight I give to all the optimistic scenarios is still kinda low, like 20%. So I think bad scenarios are more likely, but they\'re not certain enough that we should just give up.
\*\*0:27:32.6 Vael:\*\* Yes, I totally believe that. \[chuckle\] Yeah, so what convinced you to work on the alignment problem per se? And how did you get into it?
\*\*0:27:42.8 Interviewee:\*\* Yeah, so I have a friend, \[name\], who\'s been telling me pretty much constantly whenever we talk that this is the most important problem, the most important x-risk. And I kind of discounted her view for many years. It felt to me like we were\... Until recently, it felt to me like AI\... Superhuman AI was a long way off, and other risks were more pressing. I changed my mind in the last few years when I saw the pace of improvement in AI and the black box nature of it, which makes it more unpredictable. And that coincided\... In the time frame, that coincided with me getting tenure, so I have much more freedom to work on what I want. The only thing that gave me pause is I\'m not an engineer at heart, I\'m a scientist. My skills and interests are in figuring out the truth, not in designing technology. So I\'m still kind of looking for scientific aspects of the problem as opposed to design and engineering aspects. I do think I will find some portions of the alignment problem that fit my skills, but I\'m still figuring out what those are.
\*\*0:29:15.3 Vael:\*\* That makes sense. Yeah. How would you define the alignment problem?
\*\*0:29:19.6 Interviewee:\*\* Yeah, that\'s a super good question; that\'s actually a question I\'ve been asking other alignment researchers. I think it has several components. One component is the value loading problem, of once you\'ve decided what\'s in human interest, how do you specify that to an AI? I guess some people call that the outer alignment problem. Then before that, there\'s the question of\... The philosophical question of how do you even say what is in human interest? And I know some people think we need to make much more progress in philosophy before we can even hope to design aligned AI. Like I\'ve seen that view expressed by Wei Dai, for example, the cryptographer. My view is, yeah, we don\'t know exactly what we mean by in human interest, but we shouldn\'t let that stop us. Because philosophy is a slow field, it hasn\'t even made much progress in millennia. And we need to solve this quickly, and we should be happy with approximate solutions and try to make them better over time. And even if we don\'t know what is exactly in human interests, we can agree on what is certainly not in human interests and try to at least prevent those bad outcomes.
\*\*0:30:39.6 Interviewee:\*\* Okay, so those are two components. And then once you solve outer alignment, then there\'s what people call inner alignment, which is you\'re\... At least if it\'s a black box system, then you don\'t know what it\'s doing under the hood, and you worry that it develops some sub-goals which kind of take over the whole thing. So examples of that being: evolution designed humans to spread our genes but then to do that, it designed our brains to learn and generalize and so on and seek out food and power and sex and so on. And then our brains\... That was originally a sub-goal, but then our brains just want to do that, and our brains don\'t necessarily care about spreading our genes. And so evolution kind of failed to solve its alignment problem, or partially failed. That\'s an interesting one to me, because if you think on an evolutionary time scale, if we don\'t destroy ourselves, then evolution might end up correcting its course and actually designing some conscious fitness maximizers that do consciously want to spread their genes. And then those will outcompete the ones that are misaligned and just want the power and sex. And so I actually think evolution could end up staying aligned, it\'s just that it\'s slow, and so there might not be time for it to evolve the conscious fitness maximizers.
\*\*0:32:25.8 Interviewee:\*\* Yeah, anyway, so that\'s a worry, this inner alignment. And I think to solve that, we need to get off the black box paradigm and develop transparent AI, and a lot of people are working on that problem. So I\'m somewhat optimistic that we\'ll make big strides in transparency. What else? Okay, so if we solved all three of those, we\'d be in good shape. My instinct is to assume the worst, that at least one of those three problems is really hard, and we won\'t solve it, or at least we won\'t solve it in time, and that\'s why I focus on partial alignment, which is making sure that the AI we developed is loosely\... Loosely has common interests with us even though it might have some diverging interests. And so it doesn\'t want to completely destroy humans, because it finds us useful, and we don\'t want to completely destroy it, because we find it useful. Then you can kind of say that\'s already happening. Like in 2022, no machines could survive if all humans disappeared, very few humans could survive if all machines disappeared. And so we\'ve got this kind of symbiosis between humans and machines. I like that situation. It\'s not like 2022 is great, but I think we could gradually improve it. And we want to keep the symbiosis going, and we want to keep humans not necessarily even in a dominant position, but we want to prevent ourselves from getting in a really subservient position in the symbiosis.
\*\*0:34:17.4 Vael:\*\* Makes sense. Switching gears a little bit: if you could change your colleagues\' perceptions of AI, what attitudes or beliefs would you want them to have? So what beliefs do they currently have, and how would you want those to change?
\*\*0:34:30.6 Interviewee:\*\* Yup, that\'s a frustration that I think all alignment researchers have, that many of our colleagues are\... We think of them as short-sighted. Some of them just want to develop better AI, because it\'s an interesting problem, or because it\'s useful. Some of them want to solve short-term alignment issues like making algorithms less biased. And that\'s frustrating to us, because it seems like the long-term issues are way more important. They think of the long-term issues as something that\'s not really science, it\'s too speculative, it\'s too vague. They feel like even if the long-term issues are important, we will be able to solve them better if we learn by solving the short-term issues. I\'m not against people working on algorithmic bias, but I\'m frustrated that so many more people work on that than on long-term alignment. I do think the Overton window is shifting quite a bit. I think the increase of funding in the space would be\... Is already shifting things, and it could be used more effectively in the sense of giving a few academics really big grants would really catch their colleagues\' attention.
\*\*0:36:00.0 Interviewee:\*\* So it\'s kind of\... How should I put it? I\'m blanking on the word, but it\'s a kind of a cynical view to think that academics are motivated by money; many of us aren\'t. But at the end of the day, having a grant makes it easy to just focus on your research and not be distracted by teaching and administrative stuff. And so your colleagues really pay attention when one of their colleagues gets some big, flashy grant, and so I actually think that\'s a cheap way to shift the Overton window. Like take the top 10 or 20 math and computer science departments, and give one person in each department a giant grant\-- giant by academic standards, couple million dollars, so it\'s not actually much. That will really convince people that, \"Wow, long-term alignment is a serious field where you can get serious funding.\" So yeah, that would be my recommendation to funders. That\'s a pretty self-interested recommendation, because I intend to apply for a grant soon. But yeah, I think that would help. Let\'s see, did I answer your question?
\*\*0:37:26.1 Vael:\*\* I think so, yeah. What happens if some of these departments don\'t have anyone interested in working on long-term alignment?
\*\*0:37:34.6 Interviewee:\*\* Yeah, that\'s hard. Like at \[university\], I spent several months probing my colleagues for anyone who\'s interested. I didn\'t find anyone in the computer science Department, which was disappointing, because \[university\] has a great computer science department. I do think if you look more broadly in several departments you are likely to find one or a few people\... You could see already there\'s these big institutes, you have one now at Stanford, there\'s one at Berkeley, Cambridge, Oxford, and so on. So that\'s evidence that there\'re already a few people. And people talk to colleagues around the world, so it doesn\'t matter if there\'s nobody at school X, you fund the people that are interested. But the key is that they might not have a track record in alignment, like I\'m in this situation where I have no track record, my track record is in pure math. So somebody has to take a little bit of a leap and say, \"Well, I don\'t know if \[interviewee name\] will be able to produce any good alignment research, but there\'s a good chance because he\'s good at proving theorems, so let me give him a couple of million dollars and see what happens.\" That is a big leap and it might fail, but it\'s just like any venture funding, a few of your big leaps will be very successful and that\'s enough.
\*\*0:39:16.8 Vael:\*\* Yep, that makes sense. Yeah, I think \[name\] at \[other university\] is aiming to make an institute at \[other university\] as well, which is cool.
\*\*0:39:25.2 Interviewee:\*\* That\'s great. I\'ve been talking to my dean about doing this at \[university\] and he likes the idea but he is not really aware of how to fund it, and I\'m telling him there\'s actually a lot of funding for this stuff, but I don\'t personally know the funders.
\*\*0:39:44.8 Vael:\*\* Yeah, I think getting in contact with \[funding org\] seems like the thing to do.
\*\*0:39:49.2 Interviewee:\*\* Yep, good.
\*\*0:39:50.2 Vael:\*\* Yep. \[Name\] is one of the people in charge there. Great, so how has this interview been for you and why did you choose to jump on it? That\'s my last question.
\*\*0:40:03.1 Interviewee:\*\* Oh, it\'s been fun. I spent a while just thinking alone about some alignment stuff because I had no colleagues to talk to, so it\'s always great to find someone who likes to talk about these issues.
\*\*0:40:24.6 Vael:\*\* Did you know that I was already interested in long-term alignment?
\*\*0:40:28.2 Interviewee:\*\* I think I saw your name at the\... Did you participate in the SERI Conference?
\*\*0:40:33.2 Vael:\*\* I did.
\*\*0:40:34.9 Interviewee:\*\* Yeah, so I saw your name there and so I was sort of aware of your name but I didn\'t know anything about your interests.
\[\...some further discussion, mostly about the interviews\...\]
\*\*0:42:22.4 Interviewee:\*\* Okay. Cool, Vael. I should jump on another call, but it was great to chat and yeah, feel free to follow up if you want.
\*\*0:42:32.1 Vael:\*\* Will do. Thanks so much.
\*\*0:42:33.7 Interviewee:\*\* Okay. Take care. |
861cece6-8b6d-4f3d-87b1-94b224c2f64c | StampyAI/alignment-research-dataset/arxiv | Arxiv | Risk Structures: Towards Engineering Risk-aware Autonomous Systems
arXiv:1904.10386v1 [cs.SE] 23 Apr 2019
RISK STRUCTURES: TOWARDS ENGINEERING RISK-AWARE AUTONOMOUS SYSTEMS
A PREPRINT
Mario Gleirscher
Computer Science Department, University of York, York, UK∗
April 24, 2019
ABSTRACT
Inspired by widely-used techniques of causal modelling in risk, failure, and accident analysis, this work discusses a compositional framework for risk modelling. Risk models capture fragments of the space of risky events likely to occur when operating a machine in a given environment. Moreover, one can build such models into machines such as autonomous robots, to equip them with the ability of risk-aware perception, monitoring, decision making, and control. With the notion of a risk factor as the modelling primitive, the framework provides several means to construct and shape risk models. Relational and algebraic properties are investigated and proofs support the validity and consistency of these properties over the corresponding models. Several examples throughout the discussion illustrate the applicability of the concepts. Overall, this work focuses on the qualitative treatment of risk with the outlook of transferring these results to probabilistic refinements of the discussed framework.
Keywords: Causal modelling · risk · analysis · modelling · safety monitoring · risk mitigation · robots · autonomous systems
Contents
1 Introduction
1.1 Abstractions for Machine Safety and Risk Awareness
1.2 Related Work
1.3 Contributions and Overview
2 Background
2.1 Notions of Risk
2.2 The Risk of Undesired Events
2.3 Formal Preliminaries
3 Risk Elements
3.1 Risk Factors
3.2 Risk Spaces
4 Mitigation Orders
4.1 Qualitative Mitigation Orders
4.2 Quantitative Mitigation Orders
4.3 Relating Mitigation Orders
4.4 Local, Regional, and Global Safety
5 Dependencies between Risk Factors
5.1 Relations over Risk Spaces
5.2 Compatibility of Risk Factors
6 Risk Structures
6.1 Atoms
6.2 Parallel Composition
6.3 Constraints
7 Discussion
8 Conclusions and Future Work
A Nomenclature
B Proof Details

∗ Correspondence and offprint requests to: Mario Gleirscher, University of York, Deramore Lane, Heslington, York YO10 5GH, UK. E-mail: mario.gleirscher@york.ac.uk
This work is supported by the Deutsche Forschungsgemeinschaft (DFG) under Grants no. GL 915/1-1 and GL 915/1-2. © 2019. This manuscript is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/.
Reference Format: Gleirscher, M. Risk Structures: Towards Engineering Risk-aware Autonomous Systems (April 24, 2019). Unpublished working paper. Department of Computer Science, University of York, United Kingdom.
1 Introduction
For surgical robot assistance, the summary of safety mechanisms by Howe and Matsuoka [1999] ranges from active and passive mechanisms over independent safety monitors to supervisory control. This summary gives an impression of the number of risk factors to be handled by dedicated mechanisms in safety-critical robots. Moreover, Howe and Matsuoka already suggested that full robot autonomy requires improved sensor fusion and qualitative reasoning. Pushing this idea much further, Holland and Goodman [2003] discuss how autonomous robots might be able to internalise models of consciousness. Inspired by this direction, this work aims at the use of finite symbolic models to integrate a form of consciousness of risk in such systems, hereafter called risk awareness. To achieve this, let us first revisit some basics of the construction of failure and risk models.
Highly automated machines can both be faulty and engage in dangerous events, and they are expected to handle faults and dangerous events automatically. Hazards generalise the concept of faults, errors, and failures to any kind of dangerous event. The notion of risk then allows us to discuss temporal and causal relationships between such events. Some hazards can be avoided by design. However, for many hazards, only the likelihood of their occurrence or the severity of their consequences can be reduced. Engineers use specific models to analyse these hazards and to achieve their reduction. In formal verification, we identify the models we accept, the ones we do not accept, and compare models to decide which ones we accept more than others. In this work, we consider systems that can fail or engage in dangerous events. With the help of models, we study how we can handle such events. Let us further motivate this with examples.
Example 1 (Qualitative Evaluation of the Risk of Functional Failures). In a car airbag, the event "airbag release" is associated with two general functional hazards: failure on demand (i.e., action not performed when requested or when its guard is enabled) and spurious trip (i.e., action performed when not requested or when its guard is not enabled). The risk incorporated by these hazards depends on the current state of the driving process. For each of these hazards, we can separate the driving process into situations. This step yields a table whose cells can be filled with risk information, particularly an analysis of the probability and/or the consequences of an event occurrence:
Context: Vehicle in ...  | Spurious trip of airbag                       | Failure on demand of airbag
... manual mode          | 1. consequences from distraction & bag shock | (irrelevant)
... autonomous mode      | 2. consequences from bag shock               | (irrelevant)
... collision            | (irrelevant)                                  | 3. consequences from crash without airbag
Whereas case 1 can lead to a fatal car crash due to loss of control (i.e., by distraction) of the driving process, case 2, as a generalisation of case 1, may cause serious injuries but is unlikely to cause a fatal crash. Case 3 is different, though: the loss of control is irrelevant and risk is inherited from situations without an airbag.
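To suggest how such a situation-by-hazard table might be made machine-usable at run-time, here is a speculative sketch (not from the paper; the encoding and strings are illustrative only) of the Example 1 table as a simple lookup:

```python
# The situation-by-hazard table from Example 1 as a lookup, so that a run-time
# monitor could retrieve the pre-filled risk information for the current context.
AIRBAG_RISK_TABLE = {
    ("manual mode", "spurious trip"): "consequences from distraction & bag shock",
    ("manual mode", "failure on demand"): "irrelevant",
    ("autonomous mode", "spurious trip"): "consequences from bag shock",
    ("autonomous mode", "failure on demand"): "irrelevant",
    ("collision", "spurious trip"): "irrelevant",
    ("collision", "failure on demand"): "consequences from crash without airbag",
}

def risk_info(context: str, hazard: str) -> str:
    return AIRBAG_RISK_TABLE.get((context, hazard), "not assessed")

print(risk_info("manual mode", "spurious trip"))
```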
The number of contexts or situations, functions, and hazards requires a large number of these tables to be evaluated at design time or during operation. Additionally, these tables are related according to dependencies between situations, functions, and hazards (e.g. loss of human control requires spurious trip in manual mode, risk analysis of the airbag refers to crash risk analysis without an airbag, risk analysis in manual mode is an extension of risk analysis in autonomous mode).
Example 2 (Qualitative Evaluation of the Risk of Operational Incidents). Consider a collision of a vehicle with another object. The overall risk depends on the likelihood of a collision from the current state and the possible consequences of the collision.

Context: Process with ... of vehicle | Near-collision            | Collision
... following vehicle                | probability of collision  | consequences of passive collision
... leading vehicle                  | probability of collision  | consequences of rear-end collision
... oncoming vehicle                 | probability of collision  | consequences of head-on collision
Example 2 illustrates the probabilistic relationship between the two events near-collision and collision, and the relationship between operational incidents and functions; that is, the airbag as the subject of risk assessment in Example 1 is the safety function mitigating risk after collision events. Note the difference between near-collision and the three other hazards: collision, and spurious trip and failure on demand of the airbag. The three latter form hard events inasmuch as the focus lies on consequence estimation, whereas near-collision can be understood as a soft event where the focus of risk assessment lies on probability estimation.
1.1 Abstractions for Machine Safety and Risk Awareness
Risk assessment of an autonomous robot encompasses
• the analysis of chains of undesired events the machine can engage in, qualitatively (i.e., from a causal viewpoint) and quantitatively (e.g. from a probabilistic viewpoint), and
• the analysis of what the machine is capable of (i.e., from a functional, situational, and performance viewpoint).
Autonomous robots have to handle many such event chains during operation. Hence, the tables mentioned above have to be identified and pre-filled at design time (e.g. by using qualitative risk matrices). Some of these tables will have to be continuously re-assessed at run-time (e.g. by prediction and quantitative risk assessment). Such robots need to continuously judge risk stemming from their past and planned behaviours, using introspection, estimating the current state, and predicting future states of the whole process. In the following, we will use the term process to refer to the operation of a robot in its physical environment.
The Examples 1 and 2 stimulate two questions when designing run-time mitigation measures: Which situations, functions, and hazards are there? How do we consider all these in a manageable run-time model? A run-time risk model identifies the undesirable or dangerous subset and the desirable or safe subset of the state space of the process. Knowing the dangerous subset helps assessing the measures for avoiding to reach this subset or for leaving this subset. Knowing the safe subset helps assessing the measures for not leaving this subset or for reentering this subset. If one of these sets is completely identified, we can derive the other one by set complement. However, often a risk model helps labelling only some fragments of safe and dangerous states such that a set of unlabelled states is left. Moreover, instead of a dichotomous scale (i.e., safe or dangerous), the risk model can help evaluating risk per state on a cardinal scale (e.g. risk level of a state as a continuous measure [Sanger, 2014]), or using fuzzy sets (i.e., degrees of membership of a state in both the safe and dangerous sets).

Figure 1 (diagram not reproduced here): Two abstraction levels: simple risk model R with a single risk factor partitioning the state space of process model P and, this way, forming a view of P with respect to this specific risk factor.
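As a small illustration of the labelling scales just mentioned (mine, not the paper's; the state names and numbers are invented), the same state can be labelled dichotomously, on a cardinal scale, or with fuzzy memberships:

```python
# Three ways to attach risk labels to process states (illustrative values).
dichotomous = {"s1": "safe", "s6": "dangerous"}            # safe vs dangerous
cardinal    = {"s1": 0.05, "s6": 0.80}                     # continuous risk level per state
fuzzy       = {"s1": {"safe": 0.95, "dangerous": 0.05},    # degrees of membership in both sets
               "s6": {"safe": 0.10, "dangerous": 0.90}}
print(dichotomous["s6"], cardinal["s6"], fuzzy["s6"])
```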
Starting from unlabelled states, we can first be permissive, that is, successively identify the dangerous subset (e.g. by estimating risk levels) and reduce the safe subset accordingly. Alternatively, we can successively expand the safe subset (e.g. by estimating risk levels) until we conclude unacceptable risk from our model. While both approaches occur in safety cultures, the latter can sometimes be too restrictive (i.e., the whole state space is a priori unsafe). The following discussion will focus on the permissive approach.
Risk awareness results from the fact that an autonomous robot complies with or refines the risk model at run-time. Consequently, our main questions are: What constitutes a powerful risk model to make autonomous robots risk-aware? Moreover, how can we systematically engineer a consistent and valid risk model for an autonomous robot? To separate concerns in the modelling of autonomous robots, we consider two abstractions, the process model P and the risk model R, as shown in Figure 1.
P captures the behaviours we might observe in the actual process, for example, the behaviours that are generated from a robot in its environment continuously making decisions, performing logical actions (i.e., changing the data state, act_i, dotted arcs), stimulating physical actions (i.e., generating control inputs, in_i, solid arcs), and producing and observing outcomes (i.e., sensing process outputs, out_i, solid arcs). Actions and outcomes are the events of interest in P. Events and states (s_i) represent the observables to reason about behaviour. Uncertainty in P allows several outcomes or successor states from one action (e.g. in_1, in_2), associated with parameters (Λ_i) forming probability distributions on the outcomes. A dynamical model of P (e.g. a hybrid automaton) can be used instead of an uncertainty model (e.g. a Markov decision process).
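The following sketch (my own reading of the description above, not the paper's formalisation; all identifiers are illustrative) shows one way a process model P with probabilistic outcomes parameterised by Λ could be represented:

```python
from dataclasses import dataclass, field

@dataclass
class ProcessModel:
    states: set
    # transitions[state][event] -> list of (probability, successor state)
    transitions: dict = field(default_factory=dict)

lam1 = 0.9  # plays the role of a parameter Lambda_1 of an outcome distribution
P = ProcessModel(
    states={"s1", "s2", "s3", "s4"},
    transitions={
        "s1": {"in1": [(lam1, "s2"), (1 - lam1, "s3")]},  # uncertain physical outcome
        "s2": {"act1": [(1.0, "s4")]},                     # deterministic logical action
    },
)

def successors(model: ProcessModel, state: str, event: str):
    return model.transitions.get(state, {}).get(event, [])

print(successors(P, "s1", "in1"))
```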
Rabstracts from Pand is comprised of a set of risk factors , each classifying P’s state space into a safe region (i.e.,
green node signifying the desirable subset), a risky region (i.e., red node signifying the undesirable subset), and an
unlabelled region (grey node). Rclassifies and reduces the event space (i.e., the alphabet ) ofPto events relevant for
risk assessment, that is, endangerments (red arcs) and mitigations (green arcs). We assume to establish observational
refinement or some bisimilarity between PandR. InR, we have many choices for abstraction, for example, we can
focus on logical actions ( acti), process responses ( outi), control inputs ( ini), or any combination thereof. We can also
craft and compare several risk models of the same process, ea ch representing the view of a specific risk analysis. The
remainder of this work will deal with a formal framework for t he systematic construction of consistent and valid risk
models.
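To make this two-level abstraction concrete, the following sketch shows one way a single risk factor could be represented as a partial labelling of a process state space, together with the permissive reading discussed above. The state names, the labelling, and the helper functions are illustrative assumptions only; they are not part of the formal framework developed below.

```python
from enum import Enum

class Region(Enum):
    SAFE = "safe"
    RISKY = "risky"
    UNLABELLED = "unlabelled"

# Hypothetical process state space and a single risk factor's view of it.
STATES = {"s1", "s2", "s3", "s4", "s5", "s6"}
LABELS = {"s1": Region.SAFE, "s3": Region.SAFE, "s5": Region.SAFE,
          "s2": Region.RISKY, "s4": Region.RISKY}  # s6 stays unlabelled

def classify(state: str) -> Region:
    """Return the region a process state falls into under this risk factor."""
    return LABELS.get(state, Region.UNLABELLED)

def permissive_closure(labels: dict) -> dict:
    """Permissive reading: every state not explicitly marked risky counts as safe."""
    return {s: Region.RISKY if labels.get(s) == Region.RISKY else Region.SAFE
            for s in STATES}

if __name__ == "__main__":
    print(classify("s6"))                    # Region.UNLABELLED
    print(permissive_closure(LABELS)["s6"])  # Region.SAFE under the permissive reading
```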
1.2 Related Work
Here, we will discuss related work • in the area of algebraic methods for risk assessment, • in dependability of repairable systems, • in safety monitoring, and • in risk-sensitive control and risk-aware planning.
Algebraic Methods in System Risk Assessment. To the best knowledge of the author, this work is the first to provide an algebraic account of safety risk modelling for risk-aware autonomous systems. However, algebraic methods have been proposed for IT network security risk assessment. For example, Hamdi and Boudriga [2003] formalise the security risk management life cycle as an algebraic specification with the aim of consistency checking of particular risk analyses viewed as individual algebras. Probability of occurrence and severity of consequence are modelled as metrics over attacks (i.e., action sets) to select optimal countermeasures for attacks using multi-objective optimisation. Although their approach focuses on security risk management at design-time, their framework is inspiring for the further development of the work at hand.
Dependability Methods for Repairable Systems. Risk structures overlap in their potential use with works in the field of dependability assessment. This relationship becomes visible, for example, in the approach of Unanue et al. [2018]. The authors annotate a component architecture with a fault model, synthesise a failure-repair automaton, generate a temporal fault/repair tree, synthesise minimum cut sequences from this tree, and construct an extended form of Petri net to calculate failure probabilities from these sequences. While they focus on pure failure assessment of generic systems, risk structures generalise their approach by including severity of consequences to enable qualitative reasoning about risk rather than only about failure.
Safety Monitoring in Autonomous Systems. One of the main intentions behind risk structures is their use as active monitors and mechanisms for handling undesired events. This has been an active area in robotics research for many years.
Sobek and Chatila [1988] propose a robot architecture where a safety monitor identifies obstacles not recognised in a previous planning step. Such events trigger local corrective actions, for example, obstacle avoidance, while a mitigation monitor checks for the success of a corrective action to hand over to the main planner again. Similarly, Simmons [1994] speaks of deliberative components to handle normal situations whereas reactive behaviours are activated to handle exceptional situations.
Guiochet et al. [2008] propose a risk model using mode transition systems where each mode represents a safety constraint and modes are partially ordered. Refining this approach, Mekki-Mokhtar et al. [2012] distinguish between safe, warning, and catastrophic states with safety trigger conditions. Based on this framework, Machin et al. [2018] apply model checking to determine whether catastrophic states can be reached. The authors also present a tree-based algorithm for the synthesis of mitigation strategies, that is, minimal sequences of control interventions to reach the nearest safe state from any warning state while fulfilling validity, permissiveness, and safety properties. While their approach is useful for the refined modelling of hazards, the presented work could enhance their framework with an algebraic method. Particularly, the work at hand allows specifying dependencies between hazard-related variables.
Based on the "monitoring oriented programming" approach [Meredith et al., 2011], Huang et al. [2014] present a monitoring infrastructure for checking trace properties (specified in different formalisms) observable from communications between modules running on the robot platform ROS.² Their approach indicates how the risk model presented here can be implemented as a monitor using their framework.
Sorin et al. [2016] propose a framework for the modelling and generation of safety monitors for ROS-based autonomous robots using if-then rules and safety action specifications. The approach is tested with an automatic vehicle in a farm environment. While their framework considers many important implementation aspects, the presented risk model could add a methodological layer to their approach, improving scalability and the formal treatment of conflicting or interfering mitigation actions.
Risk-sensitive Control and Risk-aware Planning in Autonomous Systems. Althoff et al. [2007] discuss an approach to metric reachability for linearisable dynamics. Efficiency of the reachability analyser is achieved by discretising the linearised model. The authors apply this approach to collision avoidance control in autonomous vehicles. For this, they discuss an abstraction of the discretised model into a Markov chain where the pair-wise convolution of reachable sets serves the calculation of collision risk of an ego vehicle with an oncoming vehicle.
Several applications of stochastic optimal control aim at minimising collision risk of autonomous robots and vehicles. Althoff et al. [2011] work with a probabilistic version of inevitable collision states [Fraichard and Asama, 2004] to approximate collision probability and cost beyond the planning horizon from Monte Carlo sampling of trajectories. These metrics allow the ranking of the simulated trajectories in navigation decisions. Pereira et al. [2013] experiment with a minimum expected risk planner and a risk-aware Markov decision process in autonomous underwater vehicle navigation exposed to perturbations by ocean currents. Sanger [2014] discusses a framework for risk-aware control where "movement is governed by estimates of risk based on uncertainty about the current state and knowledge of the
² Robot operating system, see https://www.ros.org.
cost of errors." The author illustrates an implementation of this framework for autonomous driving by a neural network. Feyzabadi and Carpin [2014] propose an efficient risk-aware path planning algorithm using constrained Markov decision processes, illustrating their approach by an autonomous indoor navigation problem. Müller and Sukhatme [2014] formalise collision risk by a Gamma distribution of the state and uncertain distance to the nearest obstacle.
For discrete planning and navigation, Shalev-Shwartz et al. [2018] define a parametric model for the investigation of collision-free autonomous driving. Their model implements driving rules according to the Duty of Care approach from Tort law, assuming proper response of all relevant traffic participants in typical driving scenarios. For efficient planning, the action space of the vehicle is discretised to solve the corresponding optimisation problem. The authors also determine the probability of sensing mistakes after applying a triple redundancy pattern to the sensor sub-system.
Chen et al. [2018] discuss the prediction of rear-end collision risk. Based on the current vehicle state and the sensed environment, a Kalman filter predicts the next state of the vehicle and its environment at the end of a monitoring interval. The predicted state represents evidence in a Bayesian network for the estimation of the collision probability. Low risk is translated into a warning, high risk into a control intervention which, however, is not the focus of their work.
McCausland et al. [2013] investigate risk-driven self-organisation of a communicating collective of mobile robots acting as environment sensor nodes. Their risk model is comprised of three factors and represents a fixed node-local risk metric which is continuously evaluated.
In all these works, risk represents either a (chance) constraint not to be violated or a minimisation criterion for determining an optimal plan. Stochastic models allow the per-state estimation of expected risk. The notion of severity interval in the work at hand would add another possibly helpful uncertainty factor to such approaches and allow the per-state estimation of an expected risk range. Complementary to these control-theoretic approaches, the work at hand provides an algebra for constructing risk models (i.e., state space, value function, action space), hence, allowing reasoning about the composition of models of a variety of risks beyond collision and relating these models via refinement. This account addresses the systematic construction and verification of risk models. The mentioned control approaches suggest implementations of controllers from validated risk models.
1.3 Contributions and Overview
This work contributes to the state of the art of formal engineering methodology for highly automated safety-critical systems, particularly for the engineering of risk-aware robots and autonomous systems, in several regards:
• Humans can qualitatively identify or predict dangerous situations and take actions to avoid or recover from such situations. Based on that idea, this work approaches qualitative risk modelling (Section 3), yet making provisions to refine the presented model into quantitative risk models (Section 1.2).
• It provides a conceptual framework for risk modelling, formalises this framework, and proves algebraic properties of the proposed concepts (Sections 3 to 6), hence going beyond the works summarised in Section 1.2. The framework further develops the ideas in [Gleirscher, 2017, 2018b].
• Bridging the gap to works in Section 1.2, it investigates how safety of machines such as autonomous robots can be evaluated at design time and at run-time based on the proposed framework (Section 4.4).
• It establishes a relationship to other techniques conventionally and widely used in risk, failure, and accident analysis (Sections 1.2 and 5).
• It discusses several examples to motivate the choice of these concepts (Examples 1 to 9) and explains the specifics and pitfalls of the abstraction to be made by users of the framework (Remarks 4 and 5).
• Complementary to the works in Section 1.2, it characterises risk awareness as the internalisation of expressive and structured risk models by robot controllers to evaluate, predict, and memorise risk by both introspection and environment sensing (Section 6).
• It describes how risk models represent acceptance specifications (i.e., the refinement of the risk model by the process, after hiding irrelevant events, expresses risk acceptance) and run-time monitors that continuously decide whether and how the process violates safety or achieves co-safety (Section 7).
The remainder of this article is structured as follows: After an introduction to the field (Section 2) including the formal preliminaries (Section 2.3), Sections 3 to 6 present the contribution, followed by a brief discussion in Section 7. The article concludes with Section 8. Proof details are listed in Appendix B.
2 Background
This section provides some background on risk modelling and assessment, failure analysis, and run-time monitoring. It also draws a relationship to the application domain of robots and autonomous systems.
2.1 Notions of Risk
Risk can be characterised as the possibility of undesired outcomes (e.g. hazardous events and states) of an action with several alternative but uncertain outcomes [Kaplan and Garrick, 1981]. In systems engineering, risk is usually assessed by measuring • the probability of hazards, • the severity of their consequences in case of their occurrence, • the probability of occurrence of these consequences (e.g. an accident) after hazards have occurred, and • the exposure of the system to these hazards [Leveson, 1995]. Variations and simplifications of these measures are in use across different disciplines, application domains, and corresponding standards. For the estimation of these measures, it is necessary to understand causal relationships between events. Reasoning about causal propositions has been formalised in philosophy, mathematics, and computer science [Lewis, 1973]. Probabilistic risk assessment (PRA) focuses on the use of stochastic models to quantify such propositions [Kumamoto, 2007]. Apart from probabilistic models, a variety of uncertainty models and analysis techniques are used in engineering, for example, as summarised by Oberguggenberger [2015] for structural safety, the reduction of the risk of dangerous incidents in civil engineering projects.
2.2 The Risk of Undesired Events
System Accidents, Dangerous Incidents and Failures. Unfortunately, knowledge about risk mostly results from system accidents. To learn from such events, accident researchers use methods such as AcciMaps [Svedung and Rasmussen, 2002]. For prevention and in response to accidents, methods such as hazard operability studies (HazOp), event tree analysis (ETA), and layer of protection analysis (LOPA) allow the systematic identification and investigation of dangerous events and the derivation and assessment of interventions. Fault tree analysis (FTA) and failure mode effects analysis (FMEA) are widely used in structured analysis, assessment, and reduction of system or process failures. FTA follows a deductive scheme of causal reasoning, FMEA an inductive one.
There are many variations, extensions, and combinations of these techniques, integrated into assurance approaches [McDermid, 1994], enriched with system models (e.g. the use of UML in HazOp [Guiochet et al., 2010]), coming along with intuitive visual languages, and tailored for specific applications or stages in the system life cycle [Ericson, 2015]. All these techniques serve the analysis of undesired events, their causes and consequences, with the aim of reducing risk. • Reliability techniques help reducing the risk of failures (i.e., reduce fault-proneness and increase fault-tolerance) of mechanical and mechatronic systems [Birolini, 2017]. • Safety techniques focus on reducing the risk of dangerous failures of control systems and software [Leveson, 1995]. Both directions differ, usually overlap, and are embedded into the context of dependability engineering.
Analysis and Reduction of Random Failures. The body of scientific literature on failure analysis is overwhelming. The following overview of techniques focuses on a small fraction of model-based and formal techniques for failure analysis at design-time. Before summarising several techniques, we will have a brief look at the formal concepts of FTA because it is one of the most widely practised versatile techniques with many powerful extensions [Ericson, 2015].
A fault tree is a causal model of a system relating an undesired top-level event e_TL with a set B of basic events using various kinds of gates (e.g. AND, OR, NOT) connecting these events. A minimum cut set (MCS) is a minimum set of events required to occur to activate e_TL. Dynamic fault trees are an important extension of fault trees, allowing to model the order of events and, thus, leading to minimum cut sequences. A minimum cut sequence describes a minimum set of events that have to occur in this sequence to activate e_TL. Fault trees can also be expressed by connecting specific sets MCSets ⊆ 2^B in disjunctive normal form in the antecedent and with e_TL in the consequent of the following implication:
(⋁_{S ∈ MCSets} ⋀_{e_B ∈ S} e_B) ⇒ e_TL
where MCSets denotes all MCSs and e_B stands for a basic event.
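As an illustration of this disjunctive-normal-form reading of a fault tree, the following sketch checks whether a set of observed basic events activates the top-level event via any minimum cut set. The event names and cut sets are hypothetical and only serve to instantiate the formula above.

```python
from typing import FrozenSet, Set

# Hypothetical minimum cut sets (MCSets) over basic events b1..b3.
MCSETS: Set[FrozenSet[str]] = {
    frozenset({"b1", "b2"}),   # b1 AND b2 suffice to activate the top-level event
    frozenset({"b3"}),         # b3 alone suffices
}

def top_level_event_active(observed: Set[str]) -> bool:
    """(OR over MCSs of (AND over basic events in the MCS)) => e_TL."""
    return any(mcs <= observed for mcs in MCSETS)

if __name__ == "__main__":
    print(top_level_event_active({"b1"}))        # False: no cut set fully contained
    print(top_level_event_active({"b1", "b2"}))  # True: first cut set is satisfied
```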
FTA and FMEA are regularly used in combination and have been given formal semantics also based on probabilistic models [Kumamoto, 2007]. For quantitative analysis, many approaches use Markov models [Dehlinger and Dugan, 2008], or at a higher level of abstraction, variants of stochastic Petri nets [Papadopoulos et al., 2011]. For dynamic
fault trees, Kabir et al. [2018] show how fuzzy numbers can be used to integrate qualitative expert opinions on unknown failure rates into a generalised stochastic Petri net for quantitative reliability evaluation.
Complex dynamic fault trees can also be converted into probabilistic automata for the checking of stochastic temporal properties [Boudali et al., 2007] and used to efficiently synthesise failure rates [Volk et al., 2016]. Given a model of the system under consideration, fault trees can be synthesised from failure annotations [Unanue et al., 2018] or from counterexamples generated by model checkers [Leitner-Fischer and Leue, 2013].
Bow tie analysis (BTA) combines FTA and FMEA or ETA. Denney et al. [2017] developed a tool for modelling and managing many and large handcrafted and interrelated bow tie diagrams and for quantitative assessment. The authors explain how bow ties can guide the construction of assurance cases. Using a hierarchical control model of the process, system-theoretic process analysis (STPA) [Leveson, 2012] aims at holistic safety assessment. STPA shares with BTA and HazOp its aim to bridge the gap between failure and accident analysis. STPA's conceptual framework (called STAMP) [Leveson, 2004] promotes the light-weight and abstract modelling of control hierarchies. Variants of STPA have been proposed specifically for the analysis of accidents (CAST) [Stringfellow, 2010]. Finally, why-because analysis (WBA) [Ladkin and Loer, 2001, Ch. 20] provides a conceptual, formal, and graphical framework for cross-technology and cross-disciplinary root cause identification. Like the majority of the aforementioned techniques, WBA is primarily deductive.
Depending on the possibilities, failures can be reduced in multiple ways. Accordingly, a variety of paradigms is available. Irreducible undesired events (e.g. equipment failures, maloperation), once identified and assessed, can be drastically reduced by fault-tolerant design (e.g. [Littlewood and Rushby, 2011]) or active or passive measures (e.g. safety functions, physical separation). For example, Michalos et al. [2015] provide an overview of safety measures built into and around robots collaborating with humans in manufacturing automation.
Analysis and Reduction of Systematic Failures. As opposed to random failures, systematic failures suggest further means of reduction or even avoidance, applicable much earlier in the engineering process.
Following quality management paradigms in manufacturing, engineering can also be viewed as a stochastic process [Littlewood, 1991]. Stochastic variables are formed by the circumstances (e.g. location, time, technique) under which faults are introduced and detected. Such faults are typically systematic (also called development failures [Avizienis et al., 2004]) because they have an impact on the specification and design, leading to operational failures that can usually be fully reproduced, as opposed to random failures.
To reduce early root causes, particularly for systematic failures, formal methods have been proposed. The main argument for the use of such mathematically founded techniques is their power in detecting errors and inconsistencies in requirements, algorithms, and designs (e.g. invariant violation, deadlock, starvation [Roscoe, 2010, Schneider, 1999]) early, before the system is built or put into operation. In the context of combining different paradigms, fault trees (see above) are often used as a feedback tool for incremental design improvement. For example, Hansen et al. [1998] discuss how formal fault trees can be used to derive safety requirements to validate the software design during its step-wise development. Because the introduction of formal methods is difficult and proposed methods turned out to be inadequate, Bowen and Stavridou [1993] investigated the applicability of formal methods in safety-critical software engineering. Most importantly, the authors pointed out a typical problem of formal methods: the difficulty of safety provisions and measurements when concerned with a degree of safety, or safety as risk, instead of an invariant not to violate.
Analysis and Mitigation at Run-time. One striking advantage of using model-based techniques, particularly formal methods, is the possibility of using property specifications and models at run-time. For example, in run-time verification or monitoring [Leucker and Schallhart, 2009], properties are checked during system operation by recording observation traces (e.g. values of observed variables) and checking these traces for violations (safety) or for acceptance (co-safety). The checking task is performed by independent components, sometimes called watchdogs, safety monitors, or policing functions [Bogdiukiewicz et al., 2017]. Monitoring can be used to derive probabilistic statements about system health at any time during operation, for example, using Bayesian networks [Iamsumang et al., 2018].
2.3 Formal Preliminaries
This section prepares the preliminaries for the formalisation in the later sections.
Processes. We use communicating sequential processes (CSP) [Hoare, 1985] for system modelling. Let Σ be the set of all (labels for) events, called the alphabet of a process. We distinguish two special events: ✓ signifies the successful
termination of a process and τ signifies the invisible event resulting from abstraction (i.e., hiding or constrained observation). We require ✓, τ ∉ Σ and define Σ^✓ = Σ ∪ {✓}, Σ_τ = Σ ∪ {τ}, and Σ^✓_τ = Σ^✓ ∪ Σ_τ.
Definition 1 (Process) A process is an expression of the following form:
P, Q ::= a → P | ?x : A → P | P ⊓ Q | P □ Q | P ‖_A Q | P ; Q | P \ A | SKIP | STOP
where a ∈ Σ and A ⊆ Σ. Let P be the set of all (labels for) processes (or, equivalently, their control states) with SKIP, STOP ∈ P.
The syntax consists of the following constructs: • event prefix (a → P), • prefix choice (?x : A → P), • internal or nondeterministic choice (P ⊓ Q), • general choice (P □ Q), • generalised parallel composition (P ‖_A Q), • sequential composition (P ; Q), • event hiding (P \ A),³ • termination (SKIP), and • deadlock (STOP).
We attach meaning to each expression P ∈ P according to Definition 1 by providing a recursive scheme denoting the behaviour of P. Among several ways of doing this, we focus on the traces model T. Traces are finite sequences over Σ^✓ abstracting from details (e.g. internal state) of P's executions or behaviours [Schneider, 1999]: "a trace is a record of the visible events of an execution." In T, we use a function traces(P) to obtain the trace semantics of P. For example, we have traces(STOP) = {⟨⟩}. The event ✓ represents the counterpart of SKIP in T as denoted by traces(SKIP) = {⟨⟩, ⟨✓⟩}. initials(P) = {a | ⟨a⟩ ∈ traces(P)} denotes the set of all initial events of P. If we can establish the relation traces(P) ⊆ traces(Q) for two processes P and Q, we write P ⊑_T Q and say that Q refines P (or P is refined by Q).
With Definition 1, we can model observable behaviour of systems and refer to distinct portions of such behaviour using control state labels. For a comprehensive account of CSP and a hierarchy of CSP models, the inclined reader may consult [Roscoe, 2010].
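To illustrate the traces model on finite trace sets, the following sketch checks the refinement relation stated above (P ⊑_T Q iff traces(P) ⊆ traces(Q), following the convention used in this article) for explicitly enumerated trace sets. The processes and their trace sets are toy assumptions; a CSP tool such as FDR would compute and compare traces symbolically.

```python
from typing import Set, Tuple

Trace = Tuple[str, ...]

def refines(traces_p: Set[Trace], traces_q: Set[Trace]) -> bool:
    """P ⊑_T Q in the sense used above: every trace of P is also a trace of Q."""
    return traces_p <= traces_q

# Toy processes: STOP has only the empty trace; SKIP additionally terminates.
TRACES_STOP: Set[Trace] = {()}
TRACES_SKIP: Set[Trace] = {(), ("tick",)}   # "tick" stands in for the ✓ event

def initials(traces: Set[Trace]) -> Set[str]:
    """initials(P) = {a | <a> in traces(P)}: all events that can occur first."""
    return {t[0] for t in traces if len(t) >= 1}

if __name__ == "__main__":
    print(refines(TRACES_STOP, TRACES_SKIP))  # True: STOP ⊑_T SKIP in this reading
    print(initials(TRACES_SKIP))              # {'tick'}
```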
Transition Systems. For the investigation of the operational semantics of both risk models and CSP, labelled transition systems (LTS, Baier and Katoen [2008]) will be defined along with notational conventions.
Definition 2 (Labelled Transition System, LTS) An LTS is a tuple S = (S, E, →, S_0) with a set S of states, a set E of events, a relation → ⊆ S × E × S representing transitions between these states when engaging in these events, and a set S_0 of initial states.
runs(s, S) denotes the runs (i.e., state/event traces) observable from the process modelled by S in state s. We write s –e→ s′ if (s, e, s′) ∈ →, s –e→ if ∃s′ ∈ S : (s, e, s′) ∈ →, s → if ∃e ∈ E, s′ ∈ S : (s, e, s′) ∈ →, s –e↛ if ∄s′ ∈ S : (s, e, s′) ∈ →, and s ↛ if ∄e ∈ E, s′ ∈ S : (s, e, s′) ∈ →. The omission of event labels and initial states leads to the more general form (S, →) with → ⊆ S × S, called a transition system (TS).
Temporal Logic. For the investigation of constraints on risk models, we employ linear-time temporal logic [Manna and Pnueli, 1991, TL]. For TL formulae φ, ψ, a run ρ ∈ runs(s, S), and the truth constants T and F, • the operator ◦φ expresses that φ holds of the next state of ρ, and • φ U ψ expresses that φ holds of every state until ψ holds of a state, with ψ required to eventually hold. • For convenience, □φ = φ U F denotes that φ holds of every state of a whole run, • ◇φ = T U φ expresses that φ holds eventually or for some future state, and • φ W ψ = φ U ψ ∨ □φ allows ψ to never actually hold. • For a last state of any run prefix, ⧫φ denotes that φ has held before or in the past represented by this prefix. TL formulae can be interpreted for events in the same way. In timed extensions of TL [Koymans, 1990], we allow time constraints of the form "∼ t" to be attached to some of these operators, with ∼ ∈ {<, >, ≤, ≥}. For example, ◇_{<t} φ expresses that φ holds for some future state before t time units will have elapsed. Events in runs can be treated in a similar way and the satisfaction relation can be extended to sets of runs of S. A comprehensive treatment is provided in [Baier and Katoen, 2008].
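As a small illustration of how such formulae can be checked over recorded runs (as done later for monitoring), the following sketch evaluates the always, eventually, and until operators over a finite state trace, with state propositions given as Python predicates. Finite-trace semantics are assumed here for simplicity; they only approximate the infinite-run semantics above.

```python
from typing import Callable, Sequence

State = dict
Prop = Callable[[State], bool]

def always(phi: Prop, run: Sequence[State]) -> bool:
    """□φ over a finite run: φ holds in every recorded state."""
    return all(phi(s) for s in run)

def eventually(phi: Prop, run: Sequence[State]) -> bool:
    """◇φ over a finite run: φ holds in some recorded state."""
    return any(phi(s) for s in run)

def until(phi: Prop, psi: Prop, run: Sequence[State]) -> bool:
    """φ U ψ: ψ holds at some position i, and φ holds at every position before i."""
    for i, s in enumerate(run):
        if psi(s):
            return all(phi(t) for t in run[:i])
    return False

if __name__ == "__main__":
    run = [{"risk": 0}, {"risk": 1}, {"risk": 0}]
    low = lambda s: s["risk"] == 0
    high = lambda s: s["risk"] > 0
    print(always(low, run), eventually(high, run), until(low, high, run))
    # False True True
```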
3 Risk Elements
This section introduces a model to capture beliefs about risk and risk causality based on a process P (Definition 1). It is not unusual to consider one part S of P as the "system under consideration" interacting with another part E of P called "environment" or "context". We may consider two settings: P = S ‖_A E with a set of shared events A, or P = E(S) where E is a term using S.
³ To avoid confusion with the hiding operator \, we will use ‵ for set subtraction in CSP expressions.
Based on the discussion in Section 2.1, we view risk as the possibility⁴ of undesired states (i) reachable by P, (ii) finitely causal⁵, and (iii) not entirely avoidable in P. Undesired states both result from undesired events and may cause or at least increase the risk of further undesired events, hence forming causal chains of events. As highlighted in Section 1, in our risk models we concentrate on undesired fractions of uncertain outcomes of process actions, that is, observable events changing the level of risk from process state to process state. Let us consider further examples before we formalise these concepts.
Example 3 (Road Traffic and Brakes) Collisions are examples of undesired events, accidents resulting in undesired states entailing human injury and damaged property. Even near-collisions are undesired events, by definition posing the risk of actual collisions. By backward reasoning, for example, an observable loss of a car's braking function constitutes an undesired event leading to an undesired state of the car because of operating without functioning brakes. Clearly, such a vehicle is in a riskier state than with functioning brakes.
Example 3 highlights why the state bi-partition in Figure 1 from the viewpoint of a single risk factor is too coarse. Failures of a brake or an anti-lock braking system are not repairable at run-time; in other words, direct mitigation is not possible. Hence, Figure 2 introduces a third partition as described below. Moreover, a failure of the brakes restricts the direct mitigation of near-collisions (Example 2) by braking. This example also shows how two risk factors are related.
Example 4 (Autonomous Vehicles and Brakes) There is a difference between human-operated cars, where human operators might be aware of missing brakes, and autonomous vehicles (AVs), where being aware is left to the automation. In the former type of cars, alert human operators will try to react. Independent of human capabilities, what matters is that they become aware and are given the chance to take responsibility for emergency control. However, AVs are left alone in such a situation, and their vendors are unlikely to manage to legitimately push away such responsibility. We might expect AVs to implement at least as much responsibility as society would have expected from qualified human drivers in the corresponding driving situations.
Example 4 motivates what safety engineers can do to equip autonomous machines with the ability to run highly specific mitigation mechanisms in risky states, to develop beliefs about past operations and to predict risk in future operations of a machine, and to certify that such mechanisms actually improve safety.
3.1 Risk Factors
Risk factors form the basic elements of the approach to risk modelling as discussed in the following. We define the notion of a risk factor using an LTS, describe its properties and meaning, provide a translation into CSP, and discuss an algebra of risk factors.
Definition 3 (Risk Factor) Let (Ph, Σ^f, →) be an LTS according to Definition 2. Extending this LTS, a risk factor is a tuple f = (Ph, Σ^f, →, ⪯^f, s) with
• a set Ph of phases of f,
• a set Σ^f ⊂ 2^{Σ_τ} \ {∅} specifying significant events for f,
• a labelled transition relation → ⊆ Ph × Σ^f × Ph,
• a partial order ⪯^f ⊆ Ph², and
• a pair s ∈ ℝ₊² with s(1) ≤ s(2) for the severity of the least and worst expected consequences of f.⁶
We call s the severity (interval) of f. Let 𝔽 be the set of all risk factors in the remainder of this work.
We only consider risk factors with finite Ph and Σ^f. Furthermore, we regard risk factors f with → according to Figure 2, with Ph = {0_f, f, f̄} for the phases inactive (0_f, typically the initial and desired phase of f), active (f), and mitigated (f̄), where ⪯^f is at least⁷ the reflexive transitive closure of {(f, 0_f), (f, f̄)}, and with Σ^f = {e^f, ē^f, m^f, m^f_d, m^f_r, o^f_n, o^f_m, o^f_e}. The labels in Figure 2 indicate the meanings of these events. Figure 2b describes how these events can be used for modelling risk factors.
⁴ For the sake of simplicity of the presented framework, we omit probabilistic aspects for the time being.
⁵ traces(P) contains finite traces reaching such states such that causes can be represented by well-founded sets.
⁶ By usual convention, for an ordered n-tuple t and i ∈ [1..n], we write t(i) to refer to the value of the i-th element of t. Furthermore, if t has a uniquely named element e, we write t.e to refer to the value of e in t.
⁷ Some applications might give rise to a linear ⪯^f by adding at most one out of {(0_f, f̄), (f̄, 0_f)}.
[Figure 2: Risk factor f. (a) Symbolic transition system for f over the phases 0_f (inactive), f (active), and f̄ (mitigated), with the endangerment events e^f and ē^f, the mitigation events m^f, m^f_d (direct mitigation), and m^f_r (mitigation recovery), and the phase-preserving events o^f_n (nominal operation), o^f_e (endangered operation), and o^f_m (mitigated operation). (b) Types of events for abstracting from P to f.]
The three state-preserving events can complement the endangerment and mitigation events. By definition, if e = ∅ then (p, e, p′) ∉ → for any p, p′ ∈ Ph. We can now distinguish further kinds of risk factors:
• If m^f ∪ m^f_d = ∅ then we call f final, otherwise reducible.
• For any reducible f with m^f ≠ ∅, if ē^f ⊂ e^f then we call f strongly reducible.
• If m^f_d = ∅ then we call f indirectly reducible.
The type of risk factor described in Figure 2a is minimal inasmuch as it comprises the minimal set of elements of generic risk atoms. However, phases other than the ones shown can be distinguished in specific applications.
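A minimal executable reading of such a risk factor is sketched below: the three phases and the event classes are encoded as a small labelled transition system, and the classification into final, strongly reducible, and indirectly reducible factors is expressed as predicates over the event sets. The event names and the example instantiation (loosely inspired by Example 6 below) are illustrative assumptions only.

```python
from dataclasses import dataclass
from typing import Dict, FrozenSet, Tuple

Phase = str  # "inactive" (0_f), "active" (f), "mitigated" (f-bar)

@dataclass
class RiskFactor:
    """A single risk factor with the generic three-phase transition structure."""
    name: str
    endanger: FrozenSet[str]          # e^f: 0_f -> f
    re_endanger: FrozenSet[str]       # e-bar^f: f-bar -> f
    mitigate: FrozenSet[str]          # m^f: f -> f-bar
    mitigate_direct: FrozenSet[str]   # m^f_d: f -> 0_f
    recover: FrozenSet[str]           # m^f_r: f-bar -> 0_f
    severity: Tuple[float, float] = (0.0, 0.0)
    phase: Phase = "inactive"

    def step(self, event: str) -> Phase:
        """Advance the phase on an observed event; other events preserve the phase."""
        table: Dict[Phase, Dict[str, Phase]] = {
            "inactive": {e: "active" for e in self.endanger},
            "active": {**{e: "mitigated" for e in self.mitigate},
                       **{e: "inactive" for e in self.mitigate_direct}},
            "mitigated": {**{e: "active" for e in self.re_endanger},
                          **{e: "inactive" for e in self.recover}},
        }
        self.phase = table[self.phase].get(event, self.phase)
        return self.phase

    # Classification of the factor, following the definitions above.
    def is_final(self) -> bool:
        return not (self.mitigate | self.mitigate_direct)

    def is_strongly_reducible(self) -> bool:
        return bool(self.mitigate) and self.re_endanger < self.endanger

    def is_indirectly_reducible(self) -> bool:
        return not self.mitigate_direct

# Hypothetical "degraded brakes" factor.
db = RiskFactor("degraded_brakes",
                endanger=frozenset({"brake_degradation"}),
                re_endanger=frozenset(),            # strict subset of endanger
                mitigate=frozenset({"halt_safely"}),
                mitigate_direct=frozenset(),
                recover=frozenset({"repair"}),
                severity=(2.0, 8.0))

if __name__ == "__main__":
    db.step("brake_degradation"); db.step("halt_safely")
    print(db.phase, db.is_strongly_reducible(), db.is_indirectly_reducible())
    # mitigated True True
```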
Abstractions underlying Events. Events in the CSP interpretation are atomic observations [Roscoe, 2010, Ch. 1.5]. The initiation and termination of events representing complex enduring real-world phenomena are to be viewed as non-separable aspects of these events. Consequently, much care is necessary when making assumptions about the atomicity of such events interfering with other events. Hence, it is reasonable to use the types of events listed for risk factors to model the initiation, termination, or other significant events of the corresponding sub-processes (i.e., endangerment and mitigation processes) in the process P.
Example 5 (Final Risk Factors) Road accidents as well as nuclear power-plant accidents form courses of events. Severe human injury or loss and machine, environmental, and property damage typically happen during such accidents. If required, we can model such injury or damage as final risk factors and, thus, can stop discussing possible mitigations. This way, final risk factors define the scope of a risk model.
Viewing damage or injury as risk factors allows their treatment within the same framework. We might later introduce mitigations and convert a final into a reducible risk factor. Airbags as a mitigation of certain types of human injury for certain types of collisions represent a historical example.
Example 6 (Strongly Reducible Risk Factors) We instantiate the LTS pattern given in Figure 2: Consider the event e^db (instance of e^f) that a car's braking function degrades, resulting in an operational state db of the car where any further use of its brakes (o^db_e) will likely differ from the expectation of a human operator when trying to reduce speed in a typical driving situation. One conservative mitigation (m^f) would be to drive by and halt as safely as possible and, thus, reach a state d̄b from which only a strict subset (i.e., ē^db ⊂ e^db) of the original endangerment events can be observed. Hence, we call db a strongly reducible risk factor.
Example 7 (Indirectly Reducible Risk Factors) If our application requires an intermediate stable state f̄ for the mitigation of a risk factor before returning to the inactive phase 0_f, we speak of indirectly reducible risk factors. A leaking or damaged battery of an AV would be such a case, as well as an aircraft running out of fuel. If we do not want to consider ways to fuel such machines during operation but by reaching what is typically called a "safe state," we might model such situations by indirectly reducible risk factors. The mitigated phase would represent the "safe state" with respect to this risk factor. This way, our model captures how to reach this phase. In our example, this can happen by the atomic events successfully halting at the next car repair shop and successful
accomplishment of an emergency landing, respectively. From this phase, we can recover (m^f_r) to the inactive phase 0_f.
Example 8 (Directly Reducible Risk Factors) Driving too close to a front vehicle is a risk factor that in many situations can be dealt with by braking correspondingly, thus resulting in a state where this risk factor is inactive again. The described braking event can be accounted as an event in m^f_d. We call this a direct mitigation.
Risk Factors in CSP. Given p –τ↛ for any p ∈ Ph, the risk factor f from Figure 2 can be represented as a sequential, mutually recursive CSP process R_f (Definition 1) as follows:
R_f = 0_f   (init)
0_f = ?x : e^f → f □ ?x : o^f_n → 0_f   (inactive)
f = ?x : m^f_d → 0_f □ ?x : o^f_e → f □ ?x : m^f → f̄   (active)
f̄ = ?x : ē^f → f □ ?x : m^f_r → 0_f □ ?x : o^f_m → f̄   (mitigated)
If o^f_n ⊆ Σ \ e^f, o^f_m ⊆ Σ \ (m^f_r ∪ ē^f), and o^f_e ⊆ Σ \ (m^f ∪ m^f_d), then the general choice becomes an external choice and R_f is deterministic; otherwise R_f is nondeterministic. This mapping enables an algebraic treatment of risk factors in CSP. Later, in Section 6, we will use a map [[·]]_r that establishes the equivalence [[0_f]]_r = (Ph, Σ^f, →, {0_f}) for a factor f according to Definition 3 and initialised with the phase 0_f. Note the use of f as a symbol for the risk factor as a transition system (Figure 2a) and the use of f for the CSP process that models this risk factor in its active phase. Different fonts signify the semantic difference.
Remark 1 Risk factors can be used to model risky fractions of, or propositions about, a process and its behaviour. For example, final risk factors can be used to model permanent and off-line repairable faults, and reducible risk factors serve the modelling of, for example, transient and on-line repairable faults. This model only allows talking about risks identified as risk factors and cannot be used to reason about "absolute safety." This epistemic limit is inherent to (risk) modelling and can only be dealt with from the outside of the framework.
Remark 2 The notions of systematic and random faults [Birolini, 2017] can be represented as follows: A systematic fault can be seen as an observable undesired event whose preconditions or causes (i.e., 0_f, e^f) can be predicted, reconstructed, reproduced, or otherwise deduced, identified, and sufficiently determined. Hence, each (class of) systematic fault(s) can be associated with a deterministic risk factor.
A random fault can be seen as an observable undesired event whose preconditions or causes are only partially known or even unknown. One way to represent this lack of knowledge by a risk factor is to use nondeterminism:⁸ From 0_f, we allow rnd = o^f_n ∩ e^f ≠ ∅. rnd represents potential but incomplete causes of f. Note that this choice makes sense inasmuch as o^f_n ⊇ e^f denotes that we know f but the least possible, namely nothing, about its causes. Having observers⁹ for events and phases, the risk factor would form a model of a process P where an observation of an event of P in rnd is sometimes followed by an observation of the phase 0_f and sometimes by an observation of the phase f. The factor f in Figure 3 is a generalisation of f in Figure 2a (i.e., f(Figure 3) ⊑_T f(Figure 2a)). Each observable event of P that belongs to rnd_0 (rnd for f, r̄nd for f̄) leads to an anonymous phase (•) succeeded by a τ event representing internal choice. For mitigation events m^f_d and m^f to be observed from the active phase f, uncertainty is modelled by the set rnd, again to be followed by either of the three phases of the risk factor depending on what can be observed in P.
3.2 Risk Spaces
Risk factors give rise to further concepts: risk states, risk spaces, mitigation orders, and risk structures. Let n ∈ ℕ and F = {f_i | i ∈ [1..n]} ⊂ 𝔽 be a finite set of risk factors (Definition 3) where f_i = (Ph_i, Σ^{f_i}, →_i, ⪯^{f_i}, s_i).
Definition 4 (Risk State) Assume that risk factors are unique, that is, i ≠ j ⇒ f_i ≠ f_j ∧ Ph_i ∩ Ph_j = ∅. Then, a risk state is a faithful total injection σ : F → ⋃_{f∈F} Ph_f, that is, ∀f ∈ F : σ(f) ∈ Ph_f.
⁸ Another way, not pursued in this article, would be to provide a probabilistic extension of risk factors.
⁹ The usage of a risk factor as a monitor automaton is further discussed in Section 7.
[Figure 3: Nondeterministic risk factor f, generalising Figure 2a by routing the events in rnd_0, rnd, and r̄nd through anonymous intermediate phases (•) followed by τ events representing internal choice]
Observe that from Figure 2 follows ∀f, g ∈ F : Σ^f = Σ^g, that is, all risk factors correspond to processes with the same alphabet, call it Σ_F. This has no influence on risk space composition as described below. However, from this follows that the corresponding CSP processes are composed in parallel synchronously by ‖ such that the process underlying each risk factor is always ready to agree with some event of the environment.¹⁰ This construction guarantees deadlock freedom.
A risk state abstracts from states of a process P (Section 2.3) by focusing on risk-related information in the form of state propositions associated with the phases of the risk factors.
Definition 5 (Risk Space) For a set of risk factors F, a risk space R(F) is the function space given by
R(F) = {σ ∈ F → ⋃_{f∈F} Ph_f | σ is a total injection ∧ ∀f ∈ F : σ(f) ∈ Ph_f}
We omit the parameter F from R if it is clear from the context and denote the set of all risk spaces by R.
Let phase : R × F → ⋃_{f∈F} Ph_f be a map yielding the current phase of a risk factor f in a risk state σ. The infix operator scheme ·|_{F′} : R(F) × F → R(F′) describes a projection from the risk space R(F) to the risk space R(F′) where F′ ⊆ F. We allow the convention σ(i) = σ(f_i) = phase(σ, f_i) when referring to the phase of the risk factor f_i. We can view R as a set of ordered n-tuples (formed by the Cartesian product of the Ph_i after fixing some linear order over F) or as a set of sets (formed by all equivalence classes of phase permutations over F). These views permit the treatment of σ ∈ R as an unordered tuple or index set. Particularly, for i ≠ j, the tuples (σ(i), σ(j)) ∈ Ph_i × Ph_j and (σ(j), σ(i)) ∈ Ph_j × Ph_i identify exactly the same risk state. Consequently, (σ(i), σ(i)) = (phase(σ, f_i), phase(σ, f_i)) collapses to σ(i). In the following, one of the two views of R will occasionally be more convenient for the discussion.
Remark 3 By Definition 5, R is non-empty and finite if and only if F is non-empty and finite. R defines the set of all states an arbitrary¹¹ combination of phases of risk factors might give rise to. Given a complete set of risk factors identified by the risk analyst for a specific application, only a small subset of R might eventually be relevant for the machine in operation. In general, we assume that identifying this subset is difficult. Moreover, the relevance of a risk state can be seen as a gradual quantity determined at run-time based on its context in R and the process state.
Definition 6 (Equality and Compatibility of Risk States) Two risk states σ, σ′ ∈ R are equal, written σ = σ′, if and only if their corresponding phases are equal, formally, ∀i ∈ [1..n] : σ(i) = σ′(i). Generally, given F_1, F_2 ⊂ 𝔽, σ ∈ R(F_1) and σ′ ∈ R(F_2) are compatible, written σ ≈ σ′, if and only if ∀f ∈ F_1 ∩ F_2 : σ(f) = σ′(f), particularly if F_1 ∩ F_2 = ∅.
Consequently, state equality implies state compatibility.
¹⁰ In testing, this is sometimes called input-enabled [Tretmans, 2008].
¹¹ Below, we also handle R as "the most general (risk) structure" for a specific set of risk factors.
Definition 7 (Risk Space Composition) Then, the composition ⊗ : R × R → R of the two risk spaces R(F_1) and R(F_2) is defined by
R(F_1) ⊗ R(F_2) = {σ_1 ∪ σ_2 | σ_1 ∈ R(F_1) ∧ σ_2 ∈ R(F_2) ∧ σ_1 ≈ σ_2}. (1)
An analogous constraint is used for parallel composition of risk structures below in Formula (10). Now, we can derive a basic law relating the union of risk factors and the composition of risk spaces. Furthermore, it will turn out that R is a homomorphism.
Lemma 1 (Exchange of ∪ and ⊗)
R(F_1 ∪ F_2) = R(F_1) ⊗ R(F_2)
Proof 1 (Proof Sketch.) The proof is by mutual existence and uniqueness: For each σ ∈ R(F_1 ∪ F_2), (i) there exists a σ_1 ∪ σ_2 ∈ R(F_1) ⊗ R(F_2) and (ii) this pair is unique, and (iii, iv) vice versa. Details on the proof can be taken from Proof 20.
Lemma 2 (Homomorphism) R is a homomorphism in the context of (𝔽, ∪) and (R, ⊗), that is, the diagram mapping (F_1, F_2) via ∪ to F_1 ∪ F_2 and via R to (R(F_1), R(F_2)), and then via R and ⊗, respectively, to R(F_1 ∪ F_2), commutes.
Proof 2 (Proof Sketch.) We first make sure that we actually deal with semi-groups and then show by algebraic manipulation that ⊗ is associative. Details on the proof can be taken from Proof 21.
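The following sketch spells out Definitions 4 to 7 for explicitly enumerated risk factors: risk states are dictionaries from factor names to phases, the risk space is the Cartesian product of the phase sets, and composition merges compatible states. The factor names and phase sets are illustrative assumptions.

```python
from itertools import product
from typing import Dict, FrozenSet, List

Phases = Dict[str, FrozenSet[str]]   # factor name -> set of phases
RiskState = Dict[str, str]           # factor name -> current phase

def risk_space(factors: Phases) -> List[RiskState]:
    """R(F): all total assignments of one phase to each risk factor (Definition 5)."""
    names = sorted(factors)
    return [dict(zip(names, combo)) for combo in product(*(factors[n] for n in names))]

def compatible(s1: RiskState, s2: RiskState) -> bool:
    """σ1 ≈ σ2: agreement on all shared risk factors (Definition 6)."""
    return all(s1[f] == s2[f] for f in s1.keys() & s2.keys())

def compose(r1: List[RiskState], r2: List[RiskState]) -> List[RiskState]:
    """R(F1) ⊗ R(F2): union of all compatible pairs of states (Definition 7)."""
    return [{**s1, **s2} for s1 in r1 for s2 in r2 if compatible(s1, s2)]

if __name__ == "__main__":
    F1: Phases = {"brakes": frozenset({"0_f", "f", "f_bar"})}
    F2: Phases = {"near_collision": frozenset({"0_f", "f"})}
    # Lemma 1 on this example: R(F1 ∪ F2) has the same states as R(F1) ⊗ R(F2).
    lhs = risk_space({**F1, **F2})
    rhs = compose(risk_space(F1), risk_space(F2))
    print(len(lhs) == len(rhs) == 6)  # True
```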
Special Risk States. We close this section with an analysis of specific classes of risk states. R(∅) forms the empty risk space and R({f}) the trivial risk space for f, leading to the following law and equality:
Corollary 1 For any finite F ⊂ 𝔽, R(∅) is the zero element of risk space composition with ⊗:
R(∅) ⊗ R(F) = R(∅) = ∅   (⊗-zero)
R({f}) = {f ↦ 0_f, f ↦ f, f ↦ f̄}
Definition 8 (Locked Risk State) Based on Section 3.1 and Definition 5 and given a set F of n risk factors, for any state σ ∈ R:
σ is risk-locked ⟺ ∀i ∈ [1..n], ∄(p, e, p′) ∈ →_i : σ(i) = p ∧ p ≠ p′
Otherwise, we call σ risk-unlocked.
This notion is different from control stability, where the plant has reached a stable state, and from CSP's stable failures model F, where stability refers to control states without invisible internal events (i.e., τ) and waiting for input.
States with an active final risk factor f (Section 3.1) take up a particularly bad role: Such states, by definition, would expose the process P to residual risk infinitely long and often, thus making any harmful consequences associated with f very likely. On the one hand, such states, useful in modelling bad accidents, are inevitable in any realistic risk model; on the other hand, a process P should govern its (probabilistic) choices in order to not enter such states.
4 Mitigation Orders
This section investigates various basic orders over risk spaces depending on the qualitative and quantitative information available in the risk model.
4.1 Qualitative Mitigation Orders
Let R(F) be a risk space for a set of n risk factors F ⊆ 𝔽 according to Definition 5. Again, we assume all risk factors are given indices in the range 1..n. We use the convention introduced above to refer to parts of risk states by σ(i) with i ∈ [1..n] and define a partial order ⪯_m ⊆ R × R as follows.
Definition 9 (Fully Comparable Inclusive Mitigation Order) For any states σ, σ′ ∈ R, define
σ ⪯_m σ′ ⟺ ∀i ∈ [1..n] : σ(i) ⪯^{f_i} σ′(i)
By σ ≺_m σ′ ⟺ σ ⪯_m σ′ ∧ σ ≠_m σ′, we induce the corresponding strict order. σ and σ′ are said to be incomparable if and only if σ ⋠_m σ′ ∧ σ′ ⋠_m σ. Intuitively, σ ⪯_m σ′ signifies that "σ′ is a better achievement in risk mitigation than σ." However, note that ⪯_m requires full comparability of two states. It might be cumbersome to require such comprehensive knowledge to determine which state is "better or less risky" than another state. Hence, in the presence of irreducible (i.e., aleatory) uncertainty, we might instead want to account for the partial knowledge in the orders of risk factors' phases (Definition 3) at the level of R by providing a relaxed partial order as follows.
Definition 10 (Partially Comparable Inclusive Mitigation Order) For states σ, σ′ ∈ R, define
σ ≼_m σ′ ⟺ ∀i ∈ [1..n] : σ(i) ⪯^{f_i} σ′(i) ∨ ((σ(i), σ′(i)) ∉ ⪯^{f_i} ∧ (σ′(i), σ(i)) ∉ ⪯^{f_i})
We use ≺̃_m and =̃_m to distinguish the corresponding strict order and equality for ≼_m from ≺_m. Intuitively, Definition 10 requires a "betterment in risk from σ to σ′" based exactly on the comparable phases.
Lemma 3 For any σ, σ′ ∈ R,
σ ⪯_m σ′ ⇒ σ ≼_m σ′
Proof 3 (Proof of Lemma 3.) ⪯^f is antisymmetric. By definition of ⪯_m, we may assume
∀i ∈ [1..n] : σ(i) ⪯^{f_i} σ′(i)   (∀-elim)
⊢ σ(i) ⪯^{f_i} σ′(i)   (∨-intro1)
⊢ σ(i) ⪯^{f_i} σ′(i) ∨ ((σ(i), σ′(i)) ∉ ⪯^{f_i} ∧ (σ′(i), σ(i)) ∉ ⪯^{f_i})   (∀-intro, assumption for each i)
⊢ ∀i ∈ [1..n] : σ(i) ⪯^{f_i} σ′(i) ∨ ((σ(i), σ′(i)) ∉ ⪯^{f_i} ∧ (σ′(i), σ(i)) ∉ ⪯^{f_i})
Corollary 2
σ′ ≠̃_m σ ⇒ σ′ ≠_m σ
Proof 4 (Proof of Corollary 2.)
σ′ ≠̃_m σ ⇒ σ′ ≠_m σ   (by definition)
¬(σ′ ≼_m σ ∧ σ ≼_m σ′) ⇒ ¬(σ′ ⪯_m σ ∧ σ ⪯_m σ′)   (by conversion)
σ′ ≼_m σ ∧ σ ≼_m σ′ ⇐ σ′ ⪯_m σ ∧ σ ⪯_m σ′   (by Lemma 3)
Corollary 3 (Converse of Lemma 3)
¬(σ ⪯_m σ′) ⇐ ¬(σ ≼_m σ′)   (by definition of ⋠_m, ⋠̃_m)
σ ⋠_m σ′ ⇐ σ ⋠̃_m σ′   (by negation and definition of ⪯_m, ≼_m)
σ ≻_m σ′ ∨ ((σ, σ′) ∉ ⪯_m ∧ (σ′, σ) ∉ ⪯_m) ⇐ σ ≻̃_m σ′ ∨ ((σ, σ′) ∉ ≼_m ∧ (σ′, σ) ∉ ≼_m)   (2)
Proof 5 (Proof of Corollary 3.) Case 1: If (σ, σ′) ∉ ≼_m ∧ (σ′, σ) ∉ ≼_m, then (by Definitions 9 and 10) also (σ, σ′) ∉ ⪯_m ∧ (σ′, σ) ∉ ⪯_m and, therefore, Formula (2) holds.
Case 2: If σ ≻̃_m σ′, i.e., σ′ ≼_m σ ∧ σ′ ≠̃_m σ, then we have either (σ, σ′) ∉ ⪯_m ∧ (σ′, σ) ∉ ⪯_m (because of some incomparable phases), which fulfils Formula (2). Alternatively, we have (σ′, σ) ∈ ⪯_m, which means σ and σ′ are fully comparable and, because of Corollary 2, we have σ ≻_m σ′.
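For intuition, the two inclusive mitigation orders can be spelled out over dictionary-encoded risk states as below. Each factor's phase order ⪯^f is given explicitly as a set of pairs; the three-phase order used in the example (active phase below inactive and mitigated) is an assumption matching Figure 2, and the factor names are hypothetical.

```python
from typing import Dict, Set, Tuple

RiskState = Dict[str, str]                  # factor -> phase
PhaseOrder = Set[Tuple[str, str]]           # pairs (p, p') meaning p ⪯^f p'

# Reflexive-transitive closure of {(f, 0_f), (f, f_bar)} for one generic factor.
GENERIC_ORDER: PhaseOrder = {("f", "0_f"), ("f", "f_bar"),
                             ("0_f", "0_f"), ("f", "f"), ("f_bar", "f_bar")}

def leq_fully(s1: RiskState, s2: RiskState, orders: Dict[str, PhaseOrder]) -> bool:
    """σ ⪯_m σ′: every factor's phase in σ is ⪯^f-below its phase in σ′ (Definition 9)."""
    return all((s1[f], s2[f]) in orders[f] for f in s1)

def leq_partially(s1: RiskState, s2: RiskState, orders: Dict[str, PhaseOrder]) -> bool:
    """σ ≼_m σ′: per factor, comparable and improving, or incomparable (Definition 10)."""
    def ok(f: str) -> bool:
        o = orders[f]
        return (s1[f], s2[f]) in o or ((s1[f], s2[f]) not in o and (s2[f], s1[f]) not in o)
    return all(ok(f) for f in s1)

if __name__ == "__main__":
    orders = {"brakes": GENERIC_ORDER, "near_collision": GENERIC_ORDER}
    worse = {"brakes": "f", "near_collision": "f"}
    better = {"brakes": "0_f", "near_collision": "f_bar"}
    print(leq_fully(worse, better, orders), leq_partially(worse, better, orders))
    # True True (Lemma 3: the first implies the second)
```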
4.2 Quantitative Mitigation Orders
So far, we have seen how partial orders account for a lack of knowledge and potential uncertainties about risk. Now, we will have a look at how we can use severity information, if available for specific risk factors, to interpolate knowledge gaps, model uncertainty, and derive a linear order over R.
We continue with further definitions to deal with intervals: Given a family of intervals I = ([l_i, u_i))_{i∈[1..n]} over ℝ₊ with l_i ≤ u_i, we define the convex hull of I by a map ·* : 2^{ℝ₊²} → ℝ₊² as follows:
I* = [min{l_i}_{i∈[1..n]}, max{u_i}_{i∈[1..n]})
Furthermore, let active : R(F) → 2^F with active(σ) = {f ∈ F | σ(f) = f} be the map returning the set of active factors of a risk state. Moreover, let S : R → ℝ₊² with
S(σ) = ((s^f)_{f∈active(σ)})*
be a map for the construction of the severity interval of a risk state.¹² Whereas the minimum severity of a risk factor f is given by the interval [0, 0), the minimum severity of a risk state S(σ) is the empty set ∅. For any two real-valued intervals [a, b), [c, d) ∈ ℝ₊², Ishibuchi and Tanaka [1990] define with [a, b) ≤ [c, d) ⟺ a ≤ c ∧ b ≤ d a partial order over such intervals.
Moreover, we say that two risk states σ, σ′ ∈ R are severity-equivalent if and only if their accumulated severity intervals are equal, that is, σ ∼_s σ′ ⟺ S(σ) = S(σ′). We have that σ = σ′ ⇒ σ ∼_s σ′ because the factors that are in their active phases are identical. The relation ∼_s is an equivalence relation because it is reflexive, symmetric, and transitive (all by the usual equivalence over intervals). Furthermore, ∼_s induces equivalence classes [σ]_{∼s} = {σ′ ∈ R | σ′ ∼_s σ} over R for any σ ∈ R with the corresponding quotient class R/∼_s. With the severity intervals (s_i)_{i∈[1..n]}, we now define an order over R/∼_s.
Definition 11 (Strong Mitigation Order) For [σ]_{∼s}, [σ′]_{∼s} ∈ R/∼_s, define
[σ]_{∼s} ≤_m [σ′]_{∼s} ⟺ ∀σ ∈ [σ]_{∼s}, σ′′ ∈ [σ′]_{∼s} : S(σ) ≥ S(σ′′) ∨ S(σ′′) ⊂ S(σ).
[σ]_{∼s} ≤_m [σ′]_{∼s} can be dropped from R/∼_s, yielding
∀σ ∈ [σ]_{∼s}, σ′′ ∈ [σ′]_{∼s} : σ ≤_m σ′′ ⟺ S(σ) ≥ S(σ′′) ∨ S(σ′′) ⊂ S(σ). (3)
≤_m codifies the circumstance that the risk state σ′ is "better" if the union of its severity intervals is
(a) lower in the ranking ≤ of interval numbers or
(b) narrower than the corresponding union for σ.
Condition (b) conveys the intuition that the interval carries less uncertainty about the consequences expected from σ′ than from σ. Equivalence classes in R/∼_s abstract from the risk factors from which the merged severity intervals originate. This abstraction has to be carefully taken into account when using ≤_m and, therefore, when specifying severity. Note that ≤_m is based on the convex hull of severity intervals from the active phases of a pair of risk states. Apart from the convex hull, interval addition and multiplication are relevant for alternative mitigation orders, as we shall see below. However, a detailed investigation is left for future work. Let us now consider some core properties of ≤_m.
Lemma 4 ≤_m is linear over R/∼_s.
Proof 6 (Proof Sketch.) We show by case analysis that any two risk states are comparable and ≤_m is antisymmetric. The complete proof is stated in Appendix B.
Corollary 4 After dropping Lemma 4 by Formula (3), we have that ≤_m is also linear over R.
Remark 4 (Method of Abstraction) Severity intervals abstract from the potential consequences of risk factors. The family (s_i)_{i∈[1..n]} forms a cut of causal chains and, hence, defines the scope of the risk model. This abstraction is left to the modeller (i.e., the risk analyst or safety engineer) and can vary significantly. Note, two different risk states σ, σ′ ∈ R (i.e., σ ≠ σ′) can well be severity-equivalent (i.e., σ ∼_s σ′). Hence, the consequences of several activated risk factors should be compatible in the sense that the convex hull of the intervals of these factors maintains a consistent meaning of severity in (R, ≤_m).
We will return to abstraction and compatibility of risk factors below in Section 5 and instead continue here with the further investigation of ≤_m.
Lemma 5 (R, ≤_m) is well-ordered.
Proof 7 Lemma 5 follows from a finite R (by definition) and linearity of ≤_m (by Lemma 4).
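The severity-based comparison can be prototyped as follows: S(σ) is computed as the convex hull of the severity intervals of the active factors, and two states are compared via the interval order of Ishibuchi and Tanaka combined with strict containment. The interval values, the encoding of half-open intervals as (lower, upper) pairs, and the treatment of the empty hull as "at least as good" are simplifying assumptions for illustration.

```python
from typing import Dict, Optional, Tuple

Interval = Tuple[float, float]            # [lower, upper), encoded as a pair
RiskState = Dict[str, str]                # factor -> phase ("0_f", "f", "f_bar")

def severity(state: RiskState, sev: Dict[str, Interval]) -> Optional[Interval]:
    """S(σ): convex hull of the active factors' severity intervals, None if none is active."""
    active = [sev[f] for f, phase in state.items() if phase == "f"]
    if not active:
        return None
    return (min(l for l, _ in active), max(u for _, u in active))

def interval_leq(a: Interval, b: Interval) -> bool:
    """[a1, a2) ≤ [b1, b2) iff a1 ≤ b1 and a2 ≤ b2 (Ishibuchi and Tanaka, 1990)."""
    return a[0] <= b[0] and a[1] <= b[1]

def strictly_contained(a: Interval, b: Interval) -> bool:
    return b[0] <= a[0] and a[1] <= b[1] and a != b

def leq_strong(s1: RiskState, s2: RiskState, sev: Dict[str, Interval]) -> bool:
    """σ ≤_m σ′: S(σ) ≥ S(σ′) in the interval order, or S(σ′) strictly contained in S(σ)."""
    a, b = severity(s1, sev), severity(s2, sev)
    if b is None:                         # nothing active in σ′ (assumed best case)
        return True
    if a is None:
        return False
    return interval_leq(b, a) or strictly_contained(b, a)

if __name__ == "__main__":
    sev = {"brakes": (2.0, 8.0), "near_collision": (1.0, 5.0)}
    s_worse = {"brakes": "f", "near_collision": "f"}       # S = (1.0, 8.0)
    s_better = {"brakes": "f_bar", "near_collision": "f"}  # S = (1.0, 5.0)
    print(leq_strong(s_worse, s_better, sev))  # True: lower, narrower severity hull
```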
Definition 12 For σ ∈ R(F), we also write 0_F ≡ ∀f ∈ F : σ(f) = 0_f and F ≡ ∀f ∈ F : σ(f) = f. We denote by ⊤_F the set of maximal elements and by ⊥_F the set of minimal elements of (R, ⪯_m, ≼_m, ≤_m). We characterise the minimal elements in (R(F), ⪯_m) by
⊥_F ≡ {σ ∈ R(F) | ∀σ′ ∈ R(F) : σ′ ⪯_m σ ⇒ σ′ =_m σ}
and analogously for (R, ≼_m) and (R, ≤_m) and the maximal elements.
¹² Note that S over-approximates (i.e., constructs the convex hull from) sparsely distributed severity intervals.
Corollary 5 If ∀f ∈ F : (Ph_f, ⪯^f) is linear, then (R(F), ⪯_m) = (R(F), ≼_m). If R ≠ R(∅) then ⊥_F and ⊤_F are non-empty and, therefore, have a proper manifestation. For (R/∼_s, ≤_m), ⊥_F and ⊤_F are singletons.
Proof 8 (Proof of Corollary 5.) The proof is by contradiction. For the sake of brevity, we only consider a sketch of this proof. Assume we have two state classes [σ]_{∼s}, [σ′]_{∼s} in ⊥_F with [σ]_{∼s} ≠_m [σ′]_{∼s}. Because of our assumption, state classes are in linear order. Thus, by definition of ⊥_F, one of these state classes causes a violation of the universal quantification in Definition 12 and, therefore, one of the classes cannot be in ⊥_F, which contradicts our assumption. Analogously for ⊤_F.
Corollary 6 (R/∼_s, ≤_m) forms a complete lattice.
Proof 9 (Proof of Corollary 6.) Linearity of ≤_m implies that every non-empty subset of R/∼_s has a greatest lower bound and a least upper bound.
4.3 Relating Mitigation Orders
Note that the strong mitigation order characterised by Lemmas 4 and 5 is driven by the number of active risk factors and their severity intervals (because of the definition of S), but not by the equality of phases of risk factors among the compared risk states. This offers the possibility of abstraction from individual risk factors and of focusing on severity estimates. To avoid infeasible models (e.g. specifications that get too strong to be realisable), we require that the addition of severity intervals constitutes a relational extension of either ⪯_m or ≼_m, formally,
(∀σ ∈ [σ]_{∼s}, σ′′ ∈ [σ′]_{∼s} : σ ⪯_m σ′′ ∨ σ ≼_m σ′′) ⇒ [σ]_{∼s} ≤_m [σ′]_{∼s}
Dropped to R, this implies σ ⪯_m σ′′ ∨ σ ≼_m σ′′ ⇒ σ ≤_m σ′′ for all pairs (σ, σ′′) ∈ [σ]_{∼s} × [σ′]_{∼s}, therefore,
σ ⪯_m σ′′ ∨ σ ≼_m σ′′ ⇒ S(σ) ≥ S(σ′′) ∨ S(σ′′) ⊂ S(σ) (4)
for full and partial comparability, otherwise implying
(σ, σ′′) ∉ ⪯_m ∧ (σ, σ′′) ∉ ≼_m ⇒ T (5)
Intuitively, if σ is "worse" than σ′′ then its accumulated severity interval S(σ) has to be greater than that of σ′′ and, therefore, must not be contained in that of σ′′. Moreover, if σ and σ′′ are incomparable in ⪯_m and ≼_m (i.e., some risk factors have inversely ordered or incomparable phases), then S(σ) and S(σ′′) are allowed to form any relationship (signified by T for "true"), for example, Formula (4).
So, what is the (necessary and) sufficient condition on 𝔽 to satisfy the requirement expressed by Formula (4)? Risk spaces and risk state pairs are the interpretations and, therefore, potential models satisfying the relational extension imposed by Formula (4). The preparation of an answer to this question suggests the following lemma:
Lemma 6 For σ, σ′ ∈ R(F),
σ ⪯_m σ′ ∨ σ ≼_m σ′ ⇒ active(σ) ⊇ active(σ′).
Proof 10 (Proof Sketch.) The proof is by induction over F and relies on the assumption that, for any f ∈ F, f is the unique maximal element in ⪯^f and that active only returns such elements. The whole proof is stated in Proof 23.
Again, fix a finite F and a pair σ, σ′ ∈ R(F) and assume σ ⪯_m σ′ ∨ σ ≼_m σ′. Then, by Lemma 6, σ′ incorporates a subset of the active risk factors of σ. S(σ′) maintains the right-hand part (i.e., S(σ′) ⊂ S(σ)) of the consequent of Formula (4) in all of the following cases:
1. no intervals are excluded (σ′ = σ),
2. only intervals included in the others are excluded,
3. intervals only increasing the lower bound are excluded,
4. intervals only decreasing the upper bound are excluded, and
5. intervals increasing the lower bound and decreasing the upper bound are excluded.
As a side note, Formula (4) is also satisfied if all factors in F are assigned the same interval c_0. In conclusion, the sufficient condition on 𝔽 to satisfy Formula (4) is the "unique maximal element" precondition in the proof of Lemma 6. Apart from this precondition, Formula (4) holds of an arbitrary finite F ⊆ 𝔽. Below, we shall call ⪯_m and ≼_m inclusive mitigation orders, and ≤_m a strong mitigation order.
Theorem 1 The strong mitigation order ≤m extends the partially comparable inclusive mitigation order ≼m which, in
turn, extends the fully comparable inclusive mitigation order ⪯m. Formally, for σ,σ′ ∈ R:
σ ⪯m σ′  ⟹ (by Lemma 3)  σ ≼m σ′  ⟹ (by Lemma 6)  σ ≤m σ′.
Remark 5 Discrete and linear mitigation orders such as ≤m promote machine implementations with negative utili-
tarian decision ethics [Warburton, 2012, p. 51]. For example, severity intervals could be calculated at run-time based
on sensor data about the possible operational situation of a system. The expected outcomes of all enabled mitigation
actions, if any, will then be comparable according to the resulting ≤m. This comparability allows the assessment of
the actual reachability of states with strictly lower risk. Any resolution of a near-accident situation (Example 3) or a
tram problem [Foot, 1978] (also known as the “trolley problem”) would then consist in the choice of the mitigation
action leading to the state with the lowest risk or the least severe of the expected negative outcomes. This scheme
characterises negative utilitarianism.
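To illustrate the decision scheme of Remark 5, the sketch below ranks the expected outcome states of the enabled mitigation actions by their accumulated severity interval and picks the least severe one. It is illustrative only: the action names, the helper risk_of, and the particular linearisation of intervals are assumptions, not part of the paper's formalisation.

# Illustrative sketch of a negative-utilitarian choice over expected outcomes.
# Each enabled mitigation action maps to the risk state it is expected to reach;
# risk states reuse the dict representation from the previous sketch.

def risk_of(state):
    """Accumulated severity of a risk state; (0, 0) if no factor is active."""
    if not state:
        return (0.0, 0.0)
    los, his = zip(*state.values())
    return (min(los), max(his))

def choose_mitigation(expected_outcomes):
    """Pick the action whose expected outcome has the least severe interval
    (ordered by upper bound, then lower bound, as one possible linearisation
    in the spirit of <=m)."""
    return min(expected_outcomes,
               key=lambda a: tuple(reversed(risk_of(expected_outcomes[a]))))

expected_outcomes = {
    "brake":   {"collision": (1.0, 3.0)},
    "swerve":  {"collision": (2.0, 6.0), "leave_lane": (1.0, 2.0)},
    "nothing": {"collision": (5.0, 9.0)},
}
print(choose_mitigation(expected_outcomes))  # -> "brake" in this toy example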
Linear orders globally resolve decisions based on explicit and, therefore, disputable criteria. Consequently, utilitarian
ethics have been criticised for leading to oversimplified approaches to resolving indecision. According to Warburton [2012,
p. 48f], such critiques stress the difficulty of predicting the positive and negative effects of certain actions, in our case,
the calculation of the severity of consequences of an activated risk factor.
Fortunately, a structured risk model with dependencies between risk factors could be used to complement utilitarian
decision ethics with KANTian ethics, that is, to take conservative measures before high-severity risk factors get ac-
tivated. For example, we can model the necessary preconditions of the tram problem as risk factors, and a machine
based on this model can use these factors to constrain its behaviour.
Overall, although the presented model can be used with linear orders, the core discussions below try to stay agnostic
of the mitigation order.
4.4 Local, Regional, and Global Safety
The orders ⪯m, ≼m, and ≤m are local in the sense that their definitions only require the comparison of pairs of risk
states. Two more qualitative notions of safety seem to be useful.
Let R be non-empty and finite and reach: R×P → 2^R. Given a process P ∈ P and a risk state σ ∈ R, reach(σ,P) ⊆
R denotes the set of risk states reachable from σ by P where σ itself is always reachable and, thus, σ ∈ reach(σ,P).
Then, we use ≼m to determine two non-empty sets of minimal and maximal elements in R reachable in P, namely
max≼m{reach(σ,P)} and min≼m{reach(σ,P)}.
In a situation represented by the process P in a specific risk state σ, these two sets signify the regionally safest (max)
and the regionally most hazardous (min) states, respectively. The smallest such set will only and exactly contain σ,
representing “the situation where P cannot do anything further about risk.” This way, ≼m yields a regional notion of
safety. The notion is regional inasmuch as once a maximal element in max≼m{reach(σ,P)} is reached, the risk model
allows no more reasoning about safer states that P could reach from σ instead of maintaining its current risk state.
In addition to local and regional safety, (R/∼s, ≤m) yields a more global notion because of its linearity (Lemma 4). The
≤m-based risk model is global in the sense that there are always unique safest and riskiest states in R/∼s (Corollaries 5
and 6) among all risk states reachable by P from σ ∈ [σ]∼s. In contrast, the ≼m-based risk model will not guarantee
uniqueness of the globally safest state with respect to the reachable set of risk states. Overall, Lemma 5 provides a
necessary condition for deriving strategies (i.e., policies or choice resolutions) that stabilise or terminate P in these
globally safest regions. Note that the use of equivalence classes leads to more abstract forms of safest and riskiest
states.
These abstract min/max-bounded reachable sets can be used to assess risk at run-time, particularly, by estimating the
probability of occurrence and the severity of consequence from specific situational data only available during operation
and, then, by accumulating these estimations according to the given risk model. Safety of a process then turns into the
gradual presence and absence of the risk of undesired events during operation, during a system run, or, more formally,
for subsets of traces(P). This notion complements safety as the crisp presence and absence of undesired events, as the
avoidance of risk factors (e.g. undesired events), and as the reduction of the probability of occurrence or the severity
of consequences of undesired events. As a side note, the reliability of a process as the gradual presence and absence
of defective behaviour of this process can be seen as a special case of the presented approach when considering only
risk factors that model system faults.
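As a toy illustration of the reach-based notions above, the sketch below computes reach(σ, P) for a small transition graph over risk states and then extracts the maximal (regionally safest) and minimal (regionally most hazardous) reachable states. It is illustrative only: the graph encoding and helper names are assumptions, and reverse inclusion of active-factor sets is used as a heavily simplified stand-in for ≼m rather than the order defined in the paper.

# Illustrative sketch: risk states are frozensets of active factors, P is a
# transition graph over them. As a simplified proxy for ≼m we use reverse
# inclusion of active-factor sets (fewer active factors = safer).

def reach(sigma, graph):
    """All risk states reachable from sigma in graph (sigma always included)."""
    seen, todo = {sigma}, [sigma]
    while todo:
        for succ in graph.get(todo.pop(), ()):
            if succ not in seen:
                seen.add(succ)
                todo.append(succ)
    return seen

def maximal(states, leq):
    """Elements with no strictly greater element w.r.t. the partial order leq."""
    return {s for s in states
            if not any(leq(s, t) and s != t for t in states)}

leq = lambda s, t: t <= s          # s below t iff active(t) ⊆ active(s) (proxy)
graph = {
    frozenset({"f1", "f2"}): [frozenset({"f1"})],
    frozenset({"f1"}): [frozenset()],
}
reachable = reach(frozenset({"f1", "f2"}), graph)
print(maximal(reachable, leq))                      # regionally safest states
print(maximal(reachable, lambda s, t: leq(t, s)))   # regionally most hazardous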
5 Dependencies between Risk Factors
In risk analysis, we often wish to model causal relationships between (phases of) risk factors and, consequently, risk
states. For example, we might want to model that
1. the activation of one risk factor causes the activation of another risk factor,
2. the mitigation of one risk factor causes the activation of another risk factor, or
3. the activation of one risk factor requires the activation of another risk factor.
Example 9 Consider the following example illustrating these relationships:
1. For example, water or oil on a robot’s fingers (f1) causes the robot’s hand to be slippery (f2) such that
holding a heavy object becomes an action with an increased likelihood of a negative outcome.
2. An increased grabbing pressure (f3), applied to mitigate the object slipping out of the grabber’s hold,
could potentially cause damage to the object. This negative outcome can be modelled as a final risk
factor f4.
3. Damage to the object (f4) requires at least one of high grabbing pressure (f3) or—applying backward
identification of further risk factors—the object falling from a height onto a hard surface (f5). This fall (f5)
again requires at least one of slippery fingers (f1) or a mistaken loosening of the grabber (f6).
5.1 Relations over Risk Spaces
We can take account of relationships, such as illustrated in Example 9, by imposing constraints on pairs of risk states
and their comprising factors’ phases. For this, we use binary relations over risk spaces R to approximate causality
assumptions about parts of a process P less known or less under control (typically, some kind of environment E) and
causality requirements to be imposed on parts of P more known or more under control. In the following, we employ
temporal logic (Section 2.3) and then relational specification to formalise constraints.
In the following, let i,j ∈ [1..n] with i ≠ j. Now, consider two distinct risk factors fi,fj ∈ F. For example, the
causes (or trigger) constraint13 can be defined as follows:
fi causes fj ≡ □[fi ⇒ ◇≤t (fj U ¬fi)]   (6)
Note that in our model we have ¬fi ⇔ f̄i ∨ 0fi. Formula (6) requires that for any path through R from a state in R0 ⊆ R
and at any step of this path, if fi is active then, within at most t time units, fj must be active until fi gets either inactive
or mitigated. fj can stay active forever and may already be active before the activation of fi.
fi causes fj ≡ □[fi ⇒ ◦(fj U ¬fi)]   (7)
Formula (7) forms a simplification of Formula (6) taking into account the (time) abstraction used in our definition of
R. This abstraction implies that the step of an inactive risk factor to its activated phase is a logical time step, though,
assuming a real-time duration of >0.
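As an aside, the step-abstracted reading of Formula (7) can be checked over a finite path of risk states. The following sketch is illustrative only: the path encoding is an assumption, and a finite-trace, weak-until reading is a simplification of the temporal logic used in the paper. It checks that whenever fi is active, fj is active from the next step onwards until fi is no longer active.

# Illustrative finite-trace check of "fi causes fj" in the spirit of Formula (7).
# A path is a list of risk states; each state maps factor name -> phase in
# {"inactive", "active", "mitigated"}.

def holds_causes(path, fi, fj):
    """For every position k with fi active, fj must be active at k+1, k+2, ...
    until fi stops being active (weak-until reading on a finite path)."""
    for k, state in enumerate(path):
        if state.get(fi) == "active":
            for later in path[k + 1:]:
                if later.get(fi) != "active":
                    break                      # fi released; obligation ends
                if later.get(fj) != "active":
                    return False               # fj missing while fi persists
    return True

path = [
    {"f1": "active",    "f2": "inactive"},
    {"f1": "active",    "f2": "active"},
    {"f1": "mitigated", "f2": "active"},
]
print(holds_causes(path, "f1", "f2"))  # -> True for this toy path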
Let C be the set of all constraints. For the translation of TL constraints as described above into a form usable by
parallel composition, we use a map [[.]]c: C → 2^(R×R) to denote the relational semantics of constraints over R. We will
see in the following how applying a constraint c ∈ C to a risk structure R can restrict the transition relation →.
For example, causes constraints can be encoded as relations over R as follows:
[[fi causes fj]]c = {(σ,σ′) ∈ R×R | σ(i) = fi ∨ σ′(i) = fi ⇒ σ′(j) = fj}
causes extends to sets of risk factors. Given F,F′ ⊆ F with F ∩ F′ = ∅, we define
[[F causes F′]]c = {(σ,σ′) ∈ R×R | ∃fi ∈ F: σ(i) = fi ∨ σ′(i) = fi ⇒ ∀fj ∈ F′: σ′(j) = fj}
Note that all pairs (σ,σ′) violating the antecedent of the conditional are in [[fi causes fj]]c as well. Furthermore,
this specific constraint allows immediate or weak causation as well as delayed or strong causation of at most one
transition (that is, one “logical” step) in R.
13 Used indirectly in form of minimum cut sequences in FTA and directly in FMEA (see Section 2.2).
The requires constraint can be written in TL form as follows:
fi requires fj ≡ □[¬fi W≤t fj]   (8)
and in relational form as follows:
[[fi requires fj]]c = {(σ,σ′) ∈ R×R | σ′(i) = fi ⇒ σ(j) = fj}
The lifting of requires to sets F,F′ ⊆ F with F ∩ F′ = ∅ is described as follows:
[[F requires F′]]c = {(σ,σ′) ∈ R×R | ∃fi ∈ F: σ′(i) = fi ⇒ ∀fj ∈ F′: σ(j) = fj}
Note that this variant of the requires constraint refers to all factors specified on its left-hand side and, this way,
resembles an AND-gate as used in FTA. The side condition F ∩ F′ = ∅ rules out the case that a risk factor requires or
causes itself (i.e., the “chicken and egg” problem).
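To make the relational reading concrete, here is a small sketch that builds [[fi causes fj]]c and [[fi requires fj]]c as predicates over pairs of risk states and uses them to filter candidate transitions. It is illustrative only; the state representation and helper names are assumptions, not part of the paper or of YAP.

# Illustrative sketch: a risk state maps factor name -> phase; sigma(i) = fi is
# read as "factor fi is in its active phase in sigma".

def active(state, f):
    return state.get(f) == "active"

def causes_rel(fi, fj):
    """(sigma, sigma') admitted by [[fi causes fj]]c."""
    return lambda s, s2: (not (active(s, fi) or active(s2, fi))) or active(s2, fj)

def requires_rel(fi, fj):
    """(sigma, sigma') admitted by [[fi requires fj]]c."""
    return lambda s, s2: (not active(s2, fi)) or active(s, fj)

def prune(transitions, constraints):
    """Keep only transitions whose state pair satisfies every constraint."""
    return [(s, e, s2) for (s, e, s2) in transitions
            if all(c(s, s2) for c in constraints)]

s0 = {"f1": "inactive", "f2": "inactive"}
s1 = {"f1": "active",   "f2": "inactive"}
s2 = {"f1": "active",   "f2": "active"}
transitions = [(s0, "spill", s1), (s0, "spill", s2)]
print(len(prune(transitions, [causes_rel("f1", "f2")])))  # -> 1, only s0 -> s2 survives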
Remark 6 The causes and requires constraints constitute basic templates for the design of constraints and exem-
plify how constraints can support the safety engineer in characterising causality in a state-based relational way.
Analogously, there are many possible factor dependency constraints over R that resemble causal reasoning of tech-
niques such as FTA (Section 2.2). These constraints can be classified along the following dimensions:
•phase combination (PC): active to active (F ⇄ F′), active to mitigated (F ⇄ F̄′), active to inactive (F ⇄
0F′), mitigated to mitigated (F̄ ⇄ F̄′), mitigated to active (F̄ ⇄ F′), mitigated to inactive (F̄ ⇄ 0F′);
•direction (D) of cause-effect analysis: forward (→, e.g. for modelling sufficient conditions or propagation),
backward (←, e.g. for modelling necessary conditions or explanation);
•polarity (P): obligation (+), permission (◦), inhibition (−);
•causality (C): strong (s), weak (w);
•multiplicity (M): one-to-one (1:1), one-to-many (1:n), many-to-one (n:1), many-to-many (m:n) where
m,n > 0, self-referential14 (1:0);
•factor combination (FC): “∼ m out of n” where ∼ ∈ {≤,=,≥} and 1 ≤ m ≤ n.
These dimensions give rise to various types of constraints.15 Table 1 describes types of constraints useful in risk
analysis. Some of these constraints have been implemented in YAP [Gleirscher, 2018a] which enables their use based
on previous discussions in Gleirscher [2017, 2018a,b]. However, their comprehensive treatment would exceed the
scope of this work. As we have seen, constraints prune irrelevant state pairs and, this way, determine the shape of
(R,→). This mechanism is reflected by the following definition:
Definition 13 (Relational Semantics for Constraints) For a set of constraints C ⊆ C,
[[C]]c = { R×R if C = ∅; ⋂c∈C [[c]]c otherwise } ⊆ R×R   (9)
Constraints enable a top-down way of specifying risk models over risk spaces. In the context of the composition of
risk spaces, this alternative way requires the definition of the following healthiness or well-formedness condition:
Definition 14 (Well-formedness of Constraints) Let R(F) be a risk space formed by a family of risk factors F =
{fi}i∈[1..n]. We say that a constraint c ∈ C is well-formed for R(F) if and only if it does not refer to risk factors other
than the ones in F, formally, if and only if [[c]]c ⊆ R(F)×R(F). We say that C ⊆ C is well-formed for R(F) if and
only if each element in C is well-formed for R(F).
5.2 Compatibility of Risk Factors
Here, we continue with the methodological considerations from Remark 4 on consistent and meaningful interval and
probability specifications.
For example, assume we have two consequences C1 and C2 associated with correct16 severity intervals [l1,u1) and
[l2,u2). Two risk factors f1 and f2 can model three situations with their individually estimated intervals f1.s and f2.s:
14 In this case, we allow F = F′ = {f}.
15 Naïve combination of all dimensions (6∗2∗3∗2∗5∗3) would result in 1080 constraints, many of them not essentially different,
meaningful, or practical and, hence, resulting in significantly fewer.
16 Correctness of [l1,u1) and [l2,u2) assumes complete knowledge about consequences and their evaluation.
Table 1: Overview of useful constraints. Legend: see dimensions on page 20; Y . . . implemented in YAP.
Name | PC | D | P | C | M | FC | Y | Notes
causes | F⇄F′ | → | + | w | 1:n | =n | ✓ | The activation of certain risk factors causes the (successive) activation of specific risk factors, Formula (7).
causes−1 | F̄⇄F′ | → | + | w | m:n | =n | - | The mitigation of certain risk factors causes the activation of other risk factors.
requires | F⇄F′ | ← | + | w | 1:n | =n | ✓ | The activation of certain risk factors requires the (prior) activation of all out of a specified set of risk factors; AND-gate in FTA, Formula (8).
requires1 | F⇄F′ | ← | + | w | 1:n | ≥1 | - | The activation of certain risk factors requires the (prior) activation of at least one out of a specified set of risk factors; OR-gate in FTA.
prevents | F⇄F′ | → | − | w | 1:n | =n | ✓ | The activation of certain risk factors prevents the activation of other risk factors.
preventsMit | F⇄F̄′ | → | − | w | 1:n | =n | ✓ | The activation of certain risk factors prevents the mitigation of other risk factors.
excludes | F⇄0F′ | → | + | w | 1:n | =n | ✓ | The activation of certain risk factors deactivates (superposes or invalidates) other risk factors.
direct | F⇄0F′ | → | ◦ | s | 1:0 | =1 | ✓ | The activation of certain risk factors can be immediately followed by their (!) deactivation.
offRepair | F⇄0F′ | → | − | s | 1:0 | =1 | ✓ | The activation of certain risk factors cannot be immediately followed by their deactivation.
1. f1 and f2 share all their consequences, for example, C1: If both factors get active, the convex hull can be
backed by a meaningful consistency condition: {f1.s, f2.s}∗ ⊆ [l1,u1).
2. f1 and f2 do not share any consequences, for example, f1 models C1 and f2 models C2: If both factors get
active, the convex hull extends both the range of consequences and severities and the consistency condition
{f1.s, f2.s}∗ ⊆ {[l1,u1), [l2,u2)}∗ seems inappropriate. For example, if C1 and C2 signify the damage of two
independent objects with the same interval, the convex hull would not account for this because of idempotency
of interval union.
3. f1 and f2 share some of their consequences: For example, f1 potentially damages objects A and B and f2
potentially damages objects B and C. If both factors get active, the convex hull merges information about all
consequences and, hence, results in a combination of the cases 1 and 2.
While we can abstract from consequences shared by all risk factors, the treatment of partially shared consequences
requires more care. The cases 2 and 3 can be modelled by an additional risk factor f3 that is caused by f1 and f2 and
carries an up-shifted severity interval, for example, f3.s = f1.s + f2.s. Below in Section 5, we will discuss how such
a dependency can be specified by constraints on the risk space, that is, by {f1,f2} causes {f3} and {f3} excludes
{f1,f2} (described in Table 1).
Based on this analysis, we call a set F of risk factors compatible if there is a meaningful way of combining the severity
intervals for each subset F′ ⊆ F. For methodological support in achieving and maintaining compatibility, the next
paragraph exemplifies formal considerations of the consistency of severity intervals.
Factor Characteristics and Dependencies. Constraints over risk spaces typically have implications on the charac-
teristics of risk factors such as severity intervals:
•For example, for a constraint f1 causes f2, it is reasonable to claim f1.s ⊇ f2.s.
•Analogously, for a constraint f1 requires f2, we might wish to see f1.s ⊆ f2.s.
An in-depth analysis of techniques for making F compatible and a detailed discussion of a complete set of rules
relating factor characteristics and factor dependencies are out of scope of this work.
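The two containment claims above can be checked mechanically once severity intervals are available. The following sketch is illustrative only: the interval representation and function names are assumptions, and intervals are treated as closed pairs for simplicity. It tests f1.s ⊇ f2.s for causes and f1.s ⊆ f2.s for requires, and also shows the convex hull {f1.s, f2.s}∗ used in the case analysis above.

# Illustrative consistency checks on severity intervals, modelled as (lo, hi) pairs.

def hull(*intervals):
    """Convex hull {i1, i2, ...}* of a set of severity intervals."""
    los, his = zip(*intervals)
    return (min(los), max(his))

def contains(a, b):
    """a ⊇ b for intervals."""
    return a[0] <= b[0] and a[1] >= b[1]

def check_causes(f1_s, f2_s):
    """For f1 causes f2, it is reasonable to expect f1.s ⊇ f2.s."""
    return contains(f1_s, f2_s)

def check_requires(f1_s, f2_s):
    """For f1 requires f2, we might expect f1.s ⊆ f2.s."""
    return contains(f2_s, f1_s)

f1_s, f2_s = (1.0, 8.0), (2.0, 5.0)
print(check_causes(f1_s, f2_s))    # -> True:  (1, 8) contains (2, 5)
print(check_requires(f1_s, f2_s))  # -> False
print(hull(f1_s, f2_s))            # -> (1.0, 8.0)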
6 Risk Structures
Risk spaces are an abstract domain that can be used to equip processes with risk awareness and for the assessment
of risk mitigation capabilities of such processes. R represents the unconstrained composition of risk factors. Events
and transitions between risk states are ignored. Instead of considering R in its full extension, we can select subsets
of R for risk models of specific applications. Even if not entirely known in advance, risk in a specific application
will usually have a specific structure that might be more adequately represented by a constrained composition of risk
factors, eventually taking into account events and transitions between risk states. Specifically, based on constraints on
the combinations of the risk factors’ phases and on the synchronisation of events corresponding to the CSP model of
concurrency, we specify
•which region of R we want to pay attention to (i.e., the scope of safety guarantees) and
•which region of R we consider to be safe for a process P (i.e., conventional safety).
In the following, we make use of risk factors’ transition relations, define a form of parallel composition, and discuss
consistency, well-formedness, and validity of risk structures.
Definition 15 (Risk Structure) A risk structure R is an expression of the form
R, R1, R2 ::= p | R1 ‖ R2 | [R]C
where p ∈ Phf (i.e., 0f, f, f̄) for any risk factor f ∈ F ⊆ F (Definition 3) is an atom and C ⊆ C is a set of con-
straints (Definition 13). S denotes the set of all risk structures, consequently, including the set of all phases of risk
factors ⋃f∈F Phf ⊂ S.
The operator ‖ signifies the parallel composition of two risk structures, and the operator [·]C applies all constraints in
C to a risk structure. The semantics and algebraic properties of these operators will be discussed below.
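One way to read the grammar of Definition 15 operationally is as a small algebraic datatype. The sketch below is illustrative only; the class names and the string encoding of phases and constraints are assumptions, not part of the paper or of YAP. It represents atoms, parallel composition and constraint application as an expression tree together with the scope map used later.

# Illustrative AST for risk structures R ::= p | R1 ‖ R2 | [R]C.
from dataclasses import dataclass
from typing import FrozenSet

@dataclass(frozen=True)
class Atom:                 # a phase p of a risk factor f, e.g. ("f1", "active")
    factor: str
    phase: str

@dataclass(frozen=True)
class Par:                  # R1 ‖ R2
    left: "object"
    right: "object"

@dataclass(frozen=True)
class Constrained:          # [R]C, with C a set of constraint labels
    inner: "object"
    constraints: FrozenSet[str]

def scope(r) -> FrozenSet[str]:
    """Risk factors referred to by a risk structure (the map scope: S -> 2^F)."""
    if isinstance(r, Atom):
        return frozenset({r.factor})
    if isinstance(r, Par):
        return scope(r.left) | scope(r.right)
    return scope(r.inner)

r = Constrained(Par(Atom("f1", "inactive"), Atom("f2", "inactive")),
                frozenset({"f1 causes f2"}))
print(scope(r))  # -> frozenset({'f1', 'f2'})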
Operational Semantics of Risk Structures. With R ∈ S we associate an LTS [[R]]r = (R, Σ, →, R0) with
•the risk space R (Definition 5),
•the event set Σ (Definition 1),
•the transition relation → ⊆ R × (2^Σ\∅) × R (Definition 16), and
•a set of initial states R0 ⊆ R.
Furthermore, let scope: S → 2^F be a map that identifies the risk factors referred to by a risk structure. The operational
semantics of each construct of the language in Definition 15 are provided below.
6.1 Atoms
The operational semantics of a single risk factor f in phase p ∈ Phf is given by [[p]]r = (Phf, Σf, →f, {p}) and the
elements of this tuple are given by Definition 3 and described in Figure 2.
6.2 Parallel Composition
Let A,B,F ⊆ F be sets of risk factors with A,B ⊆ F and, for i ∈ {1,2,3}, Ri ∈ S be arbitrary risk structures according
to Definition 15 with [[Ri]]r = (Ri, Σu_i, →i, Ri,0).
For situations where several safety engineers are trusted with the task of risk analysis of a complex safety-critical
system, we define what it means to combine two risk structures with intersecting sets of risk factors, formally,
scope(R1) ∩ scope(R2) ≠ ∅. Because a single risk factor cannot be in two different phases at the same time, we
employ a corresponding constraint to uniquely define the transition relation resulting from parallel composition: only
those risk states can be combined whose risk factors in the shared scope are in identical phases. According to
the discussion on page 13, the following predicate encodes this constraint:
∀σ1 ∈ R1, σ2 ∈ R2: cons(σ1,σ2) ⟺ ∀f ∈ scope(R1) ∩ scope(R2): σ1(f) = σ2(f)   (10)
Definition 16 (Parallel Composition) [[R1 ‖ R2]]r = (R, Σu, →, R0) where
•R = R1 ⊗ R2 (see Formula (1) and Lemma 1),
•→ ⊆ R × (2^Σ\∅) × R,
•the used alphabet Σu = {e ∈ 2^Σ | ∃σ,σ′ ∈ R: σ −e→ σ′}, and
•R0 = R1,0 ⊗ R2,0.
For risk states σ1,σ′1 ∈ R1 and σ2,σ′2 ∈ R2, the composed transition relation → is given by the following step rules:
(‖-l, left):           σ1 −e→1 σ′1,  σ2 ↛2,          [cons(σ1,σ2), cons(σ′1,σ2)]   ⟹   σ1∪σ2 −e→ σ′1∪σ2
(‖-r, right):          σ1 ↛1,  σ2 −f→2 σ′2,          [cons(σ1,σ2), cons(σ1,σ′2)]   ⟹   σ1∪σ2 −f→ σ1∪σ′2
(‖-pl, partial left):  σ1 −e→1 σ′1,  σ2 −f→2 σ′2,    [cons(σ1,σ2), cons(σ′1,σ2)]   ⟹   σ1∪σ2 −e\f→ σ′1∪σ2
(‖-pr, partial right): σ1 −e→1 σ′1,  σ2 −f→2 σ′2,    [cons(σ1,σ2), cons(σ1,σ′2)]   ⟹   σ1∪σ2 −f\e→ σ1∪σ′2
(‖-b, both):           σ1 −e→1 σ′1,  σ2 −f→2 σ′2,    [cons(σ1,σ2), cons(σ′1,σ′2), e∩f ≠ ∅]   ⟹   σ1∪σ2 −e∩f→ σ′1∪σ′2
These step rules together resemble the step law of generalised parallel composition in CSP [Roscoe, 2010, Sec. 3.4]
and can be used to determine the set of reachable states reach([[R1 ‖ R2]]r.R0, P).
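A minimal executable reading of these rules, restricted to the synchronous case in which both structures move on the same shared event (rule ‖-b with e = f), might look as follows. This sketch is illustrative only: the state and transition encoding, the helper names, and the restriction to fully shared alphabets are assumptions made for brevity.

# Illustrative synchronous product of two risk-structure LTSs that share all
# events (the special case of ‖ discussed for risk factors as atoms).

def cons(s1, s2, shared):
    """Consistency: shared factors must be in identical phases (Formula (10))."""
    return all(s1[f] == s2[f] for f in shared)

def compose(trans1, trans2, shared):
    """Pairs of transitions on the same event, kept only if source and target
    state pairs are consistent (cf. rule ‖-b with e = f)."""
    out = []
    for (s1, e, t1) in trans1:
        for (s2, f, t2) in trans2:
            if e == f and cons(s1, s2, shared) and cons(t1, t2, shared):
                out.append(({**s1, **s2}, e, {**t1, **t2}))
    return out

# Two small structures sharing factor "f1"; f2 follows f1's activation.
trans1 = [({"f1": "inactive"}, "spill", {"f1": "active"})]
trans2 = [({"f1": "inactive", "f2": "inactive"}, "spill",
           {"f1": "active", "f2": "active"})]
print(compose(trans1, trans2, shared={"f1"}))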
Remark 7 The side conditions in the rules of this composition operator prohibit any behaviour leading to inconsistent
states. In other words, the composition constrains the behaviour of two risk structures with an overlapping scope. With
Lemma 15 below, we will further discuss another form of behavioural constraint and the combination of composition
and constraints for risk modelling.
Were we to use arbitrary CSP processes as atoms and were we to allow different interfaces for each use of the parallel
composition operator, several risk structures, when composed, would not guarantee freedom from interference, would exhibit
different event and state traces and, consequently, the order of their composition would lead to different risk models.
Associative laws for generalised parallel composition in CSP are not universally applicable. The discussion by Roscoe
[2010, p. 60] highlights how differences in the alphabets shared between each pair in a set of risk structures would
entail meaning to the order in which these pairs are composed. For example, in CSP, the equality (P ‖X Q) ‖Y R =
P ‖X (Q ‖Y R) holds generally if X = Y. In case of X ≠ Y, one has to compare the traces of the composed processes to
prove specific guarantees to be preserved by their composition. This can be computationally complex. More general
forms of processes and composition have been dealt with in formalisms such as Circus [Oliveira, 2005, Oliveira et al.,
2009] and FOCUS [Broy and Stølen, 2001].
Overall, we obtain two advantages from the use of risk factors as atoms in risk structures: This way, all atoms have
exactly the same alphabet ΣF, that is, all factors have to always agree on their view of the same overall process P.
Overlapping scopes in CSP terms then mean copies of risk factors that will be reduced by idempotency (see Lemma 7
below). The transformation of risk factors into CSP as described on page 12 encodes all information of the risk
space R into the processes 0f, f, and f̄. This way, we restrict the use of generalised parallel composition to the use of
synchronous parallel composition.
Algebraic Properties of Composition (‖). The following discussion shows some properties desirable of the (paral-
lel) composition of risk structures.
Lemma 7 (Idempotency of ‖) For any R1 ∈ S, we have
R1 ‖ R1 = R1   (‖-idem)
Proof 11 (Proof of Lemma 7.) From Definition 5, by Lemma 1, R1 = R1 ⊗ R1 is preserved. Using the convention
from Section 3.2, we can apply the rules from Definition 16 as follows:
Because of the uniqueness of R1, the composition takes two consistent views of R1 offering identical states and events
at each step. So, because of σ1 = σ2 and e = f, the antecedents in the rules ‖-l,r can never be satisfied and the rules
‖-pl,pr produce infeasible transitions to be pruned according to Definition 3. The rule ‖-b turns into a tautology and
both views always exhibit an identical transition:
σ1 −e→1 σ′1   ⟹   σ1∪σ1 −e→ σ′1∪σ′1
For σ1,σ′1 ∈ R1 and e ∈ Σu_1, this tautology preserves Σu_1, →1, and R1,0.
Lemma 8 (Commutativity of ‖) For any R1,R2 ∈ S, we have
R1 ‖ R2 = R2 ‖ R1   (‖-comm)
Proof 12 Lemma 1 in Section 3.2 makes it easy to show R1 ⊗ R2 = R(scope(R1) ∪ scope(R2)) = R2 ⊗ R1. Furthermore,
the symmetric duals (i.e., ‖-r is the dual of ‖-l, ‖-pr is the dual of ‖-pl) of the rules in Definition 16 yield the same →
and, hence, the same Σu.
Lemma 9 (Associativity of ‖) For any R1,R2,R3 ∈ S with disjoint scopes, that is,
scope(R1) ∩ scope(R2) ∩ scope(R3) = ∅,
we have
(R1 ‖ R2) ‖ R3 = R1 ‖ (R2 ‖ R3)   (‖-assoc-1)
Proof 13 (Proof Sketch of Lemma 9.) Because of empty shared scopes, the side conditions will always hold on both
sides and states will be merged by disjoint union. Set operations on states and events are commutative and associative.
As pointed out in Remark 7, risk factors (Definition 3) always share the whole alphabet ΣF, that is, they synchronise
on all events (Section 3.2). Interleaving is avoided, the alphabetised parallel operator takes ΣF on both sides and, this
way, reduces to synchronous parallel composition for which a general associative law is available.
Lemma 9 allows the safe use of ‖f∈F 0f as a shortcut for ((...(0f1 ‖ 0f2) ‖ ...) ‖ 0fn) with fi ∈ F.
Lemma 10 (Associativity of ‖) For any R1,R2,R3 ∈ S with equal scopes, that is,
scope(R1) = scope(R2) = scope(R3),
we have
(R1 ‖ R2) ‖ R3 = R1 ‖ (R2 ‖ R3)   (‖-assoc-2)
Proof 14 (Proof Sketch of Lemma 10.) The proof is analogous to the proof of Lemma 9.
6.3 Constraints
Section 5 introduced the concept of relations over risk spaces with the aim of shaping (R,→). Based on this concept,
we now discuss how constraints can be embedded into an operator of the language of risk structures as introduced in
Definition 15. This way, constraints form a redundant specification of beliefs about an application and the operational
risk associated with this application.
Such redundancy can be used to identify inconsistencies between R and the real world, potentially helpful in model
refinement, completion, and validation [Gleirscher, 2014]. Particularly, these inconsistencies allow choices for their
resolution. We will make such inconsistencies explicit as follows.
For a set of risk factors F ⊆ F and a risk state σ ∈ R(F), we call the risk structure
Rσ = [‖f∈F phase(σ,f)]C
with R0 = {σ} the characteristic risk structure of σ. From a risk state σ or its characteristic structure Rσ, we
distinguish three ways to determine the set of transitions σ→:
1. (σ,σ′) ∈ [[C]]c ⇒ σ −τc→ σ′ ∈ σ→1, derived from constraint C,
2. σ −e→ σ′ ⇒ σ −e→ σ′ ∈ σ→2, derived from Definitions 3 and 16,
3. e ∈ initials(P) and σ′ ∈ reach(σ,P) ⇒ σ −e→ σ′ ∈ σ→3, derived from process P.
This framework gives rise to the following types of inconsistencies:
1. The relation σ→1 \ σ→2 describes sensible transitions with invisible events signified by τc. We choose to
prune transitions from → if they deviate from what is provided by the risk factors.
2. The relation σ→2 \ σ→1 describes violent transitions. We choose to prune transitions from → if they
violate constraints and, hence, lead to inconsistencies in R.
3. The relation σ→3 \ (σ→1 ∩ σ→2) describes imperceptible transitions. In →, we choose to label such
transitions with a τp, making them subject of process-driven disclosure of R.
4. The relation (σ→1 ∩ σ→2) \ σ→3 describes unrealised transitions. We choose to prune such transitions
from →, because they are not realised in P and, hence, would only add little value to R.
This case analysis suggests several possibilities to design a semantics for constraints. As indicated above, we will
investigate the following transition relation →:
σ→ = (σ→1 ∪ σ→2 ∪ σ→3) \ ( (σ→1 \ σ→2) [case 1, reduce to F]  ∪  (σ→2 \ σ→1) [case 2, reduce to C]  ∪  ((σ→1 ∩ σ→2) \ σ→3) [case 4, reduce to P] )
Fortunately, we have σ→1 \ σ→2 = ∅ because we require all risk factors to be input-enabled.
Definition 17 (Constraint) Let R be a risk structure (Definition 15) with [[R]]r = (R, Σu, →, R0) and C a set of
constraints well-formed for R (Section 5 and Definition 14). Then, the constrained form [R]C is defined by [[[R]C]]r =
(R, Σu_C, →C, R0). →C is determined by the following rule. For any pair of risk states σ,σ′ ∈ R:
σ −e→ σ′,  (σ,σ′) ∈ [[C]]c   ⟹   σ −e→C σ′   ([·]C-step)
This definition incorporates the handling of inconsistencies according to the cases 1 and 2. The cases 3 and 4 can be
helpful in the incremental construction of R.
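In operational terms, the [·]C-step rule keeps exactly those transitions whose state pairs lie in [[C]]c. The following sketch is illustrative only; the representations and names are assumptions, and constraints are given directly as predicates over state pairs. It applies a constraint set to a transition relation and exemplifies, on this toy level, the exchange property [[R]C1]C2 = [R]C1∪C2 of Lemma 11.

# Illustrative constraint application: [[C]]c is modelled as the conjunction of
# the individual constraints' relational semantics.

def apply_constraints(transitions, constraints):
    """[R]C on the level of the transition relation ([.]C-step)."""
    return [(s, e, t) for (s, e, t) in transitions
            if all(c(s, t) for c in constraints)]

def active(state, f):
    return state.get(f) == "active"

c1 = lambda s, t: (not active(t, "f1")) or active(s, "f2")   # f1 requires f2
c2 = lambda s, t: (not active(t, "f3")) or active(s, "f1")   # f3 requires f1

transitions = [
    ({"f2": "active"},   "a", {"f1": "active", "f2": "active"}),
    ({"f2": "inactive"}, "a", {"f1": "active"}),
    ({"f1": "active"},   "b", {"f3": "active", "f1": "active"}),
]
stepwise = apply_constraints(apply_constraints(transitions, [c1]), [c2])
at_once  = apply_constraints(transitions, [c1, c2])
print(stepwise == at_once)   # -> True, mirroring Lemma 11 (exchange)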
Algebraic Properties of Constraints ([·]C). Let Ci ⊆ C for i ∈ {1,2,3}. We have that [R]∅ = R by Definition 13.
The following lemma incorporates that the order in which single constraints are applied to R does not matter.
Lemma 11 (Exchange)
[[R]C1]C2 = [R]C1∪C2   (11)
Proof 15 (Proof Sketch.) The proof is by induction over C1, supported by an additional lemma for the induction
step, and takes advantage of associativity of set intersection as used in Definition 13. The detailed proof is stated in
Proof 24.
Lemma 12 (Idempotency)
[[R]C]C = [R]C   ([·]C-idem)
Proof 16 This lemma follows from Lemma 11 and idempotency of set union.
Lemma 13 (Commutativity)
[[R]C1]C2 = [[R]C2]C1   ([·]C-comm)
Proof 17 This lemma follows from Lemma 11 and commutativity of set union.
Lemma 14 (Associativity)
[[R]C1∪C2]C3 = [[R]C1]C2∪C3   ([·]C-assoc)
Proof 18 This lemma follows from Lemma 11 and associativity of set union.
For an arbitrary C ⊆ C, we relax Definition 17 by establishing the following equivalence: [R]C = [R]C′ for the largest
C′ ⊆ C well-formed for R (Definition 14). Then, [R]C denotes the risk structure resulting from applying only and
exactly the constraints in C′. Moreover, Definition 17 generalises Definition 16 by guarding the ‖-step. Definition 17,
together with the relational pruning for three out of four cases according to the analysis on page 25, yields the following
general refinement law:
Lemma 15 (Constraints Always Refine)
R ⊑T [R]C
Proof 19 The proof is by showing that traces(R) ⊇ traces([R]C). Fix t ∈ traces([R]C) with t = fˆl for induction
over the split of t into f and l. Induction step: Assume that f ∈ traces(R) (IH). With l = ⟨e⟩ˆl′ and t = fˆ⟨e⟩ˆl′, there
exist σ,σ′ ∈ R such that σ −e→C σ′ and, according to Formula ([·]C-step), such that σ −e→ σ′ and (σ,σ′) ∈ [[C]]c.
Because of σ −e→ σ′, there must be a trace fˆ⟨e⟩ˆl′′ ∈ traces(R). e is part of l and, therefore, of t, such that we complete
the induction step and establish a new IH. (We do not need to prove the equivalence l′ = l′′.) The case f = ⟨⟩ provides
the induction start.
Lemma 15 establishes a meaning of constraints that we might usually expect from a methodological viewpoint. The
constraint operator prunes the transition relation of R according to a relational specification of risk in terms of a set of
constraints.
Finally, we discuss the special case that constraints are applied to an atom p ∈ Phf associated with risk factor f. Observe
that scope(p) = {f} and, by Definition 14, C is well-formed for R(scope(p)) if C contains only constraints that refer
to f. Hence, our previous convention entails C′ = ∅ for the largest well-formed C′ ⊆ C. From this observation, we
obtain
[p]C = [p]C′ (by convention) = [p]∅ (by Definition 14) = p (by Definition 13).
7 Discussion
Here, we briefly discuss the potential of risk structures to be used as a formal artefact in failure assessments (Section 2.2)
and in the design of safety monitors (Sections 1.2 and 2.2).
Integration with Failure Analysis. As indicated in Section 5, fault trees (Section 2.2) can be transformed into
risk structures by using dependencies. Because of the step semantics underlying constraints (Definition 13), such a
translation is also possible for gates like PAND or POR17 as used in dynamic fault trees. This way, the presented approach
makes it possible for fault trees generated from architectures to be used as a risk structure or to be integrated
into an existing risk structure. This leads to a practical way of combining risk factors internal to an autonomous robot
with risk factors stemming from its operational environment.
17 AND and OR gates that express priority in a fault tree by taking into account the order of event occurrences.
Monitoring. Each risk factor f (Figure 2) can be implemented as a monitor of the process P. Events of f can
be translated into observers (i.e., sensors and variable checkers) of transitions maintaining the monitor state and of
transitions leading to a phase change. Consequently, implementing the events and phases of each risk factor by
observers allows the use of a risk structure R as a monitor of P’s risk state.
Given R and P, we can devise an incremental and concurrent approach to monitoring: While the situation of P
is monitored, P’s risk state is monitored and continuously evaluated according to this situation. Risk monitoring
then comprises two parts: safety monitoring for the detection of endangerments (i.e., violations of safety properties),
and co-safety monitoring for the detection of mitigations (i.e., acceptances of mitigation properties). For learning
τc (Section 6.3), both monitoring tasks can be carried through in an incremental mode, that is, introducing non-existing
states and transitions according to the given states.
When using a risk factor as a monitor automaton for P, how do we deal with nondeterministic risk factors? In
nondeterministic risk factors, from an observed phase and after an observed event, P may have reached several distinct
phases and, thus, several distinct risk states. Without further information the monitor does not know which of these
states has actually been reached by P. The monitor would then be in a risk region which is ordered and bounded by
min≤m/max≤m as discussed in Section 4.4. Given that the phases of risk factors carry information about P’s state in
terms of disjoint state invariants, we can design state estimators into our monitor. These estimators might gradually be
able to restore some of the lost state information and again uniquely identify P’s actual risk state according to R.
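A risk factor used as a run-time monitor can be sketched as a small state machine driven by observed events, with nondeterminism handled by tracking a belief set of phases, much like the risk region discussed above. The example below is illustrative only: the event names, the transition table, and the belief-set handling are assumptions, not part of the paper's formalisation or of YAP.

# Illustrative monitor for one risk factor with phases inactive (0f), active (f),
# and mitigated. Nondeterminism is handled by tracking a belief set of phases.

TRANSITIONS = {
    # (phase, observed event) -> set of possible successor phases
    ("inactive", "endangerment"): {"active"},
    ("active", "mitigation"):     {"mitigated"},
    ("active", "sensor_glitch"):  {"active", "mitigated"},   # nondeterministic
    ("mitigated", "reset"):       {"inactive"},
}

class FactorMonitor:
    def __init__(self):
        self.belief = {"inactive"}            # phases the factor may be in

    def observe(self, event):
        """Advance the belief set; unknown (phase, event) pairs keep the phase."""
        self.belief = {p2
                       for p in self.belief
                       for p2 in TRANSITIONS.get((p, event), {p})}
        return self.belief

m = FactorMonitor()
for e in ["endangerment", "sensor_glitch", "mitigation"]:
    print(e, "->", sorted(m.observe(e)))
# endangerment  -> ['active']
# sensor_glitch -> ['active', 'mitigated']
# mitigation    -> ['mitigated']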
8 Conclusions and Future Work
The certification of robots and autonomous systems requires the validation and verification of their controllers. High
automation requires these controllers to have risk monitoring and handling functions. Complex machines and environ-
ments will make such functions complex as well. Such complexity might best be handled by incremental construction
and verification. It is therefore helpful to have a compositional method that allows these functions to be incrementally
modelled and assessed at design time. Eventually, these models will be transformed into verified run-time monitors
and mitigation controllers that implement these functions.
In this work, we discussed a formal framework for analysing the risk of an autonomous system in its operational environ-
ment and for constructing corresponding models that represent risk monitoring and handling functions for this system.
Algebraic laws over these models support systematic design and help the engineer to handle large models. We close
the discussion with an explanation of how the risk model can be used for verified monitor synthesis.
Future Work. Further steps in developing this framework will include
•the addition of probabilistic semantics to risk factors (Section 3.1),
•the extension of the presented framework to more general forms of risk factors (Section 3.1),
•the construction of risk lattices from risk spaces, mitigation orders, and event structures (Section 4),
•the development of reachability analysers for risk estimation and synthesis of mitigation strategies, based on
subsets of the risk space associated with a process state and dynamics (Section 4.4),
•the use of a dynamical model of the process for the reachability analysers (Section 4.4),
•the enhancement of the discussion of factor dependency constraints (Section 5),
•the investigation of distributivity of composition and constraints in risk structures (Section 6.3),
•a mechanisation and extension of the given proofs in Isabelle/HOL.
Acknowledgements. This work is supported by the Deutsche Forschungsgemeinschaft (DFG) under Grants
no. GL 915/1-1 and GL 915/1-2. I am deeply grateful to Jim Woodcock and Simon Foster for many inspiring dis-
cussions, strong guidance, particularly on the use of algebraic methods, and for feedback on previous versions of
this manuscript. It is my pleasure to thank Ana Cavalcanti and Cliff Jones for raising important questions about the
abstraction, composition, and methodology underlying risk structures. Furthermore, I owe sincere gratitude to James
Baxter, Alvaro Miyazawa, and Pedro Ribeiro for many enlightening discussions and for contributing to an extremely
productive work environment at the University of York.
References
D. Althoff, J. J. Kuffner, D. Wollherr, and M. Buss. Safety as sessment of robot trajectories for navigation in uncertain and dynamic
environments. Autonomous Robots , 32(3):285–302, 11 2011. doi: 10.1007/s10514-011-9257-9 .
M. Althoff, O. Stursberg, and M. Buss. Safety assessment of a utonomous cars using verification techniques. In American Control
Conference . IEEE, 7 2007. doi: 10.1109/ACC.2007.4282809.
A. Avizienis, J.-C. Laprie, B. Randell, and C. Landwehr. Bas ic concepts and taxonomy of dependable and secure computing .
Dependable and Secure Computing, IEEE Transactions on , 1(1):11–33, 1 2004. ISSN 1545-5971. doi: 10.1109/TDSC.20 04.2.
C. Baier and J.-P. Katoen. Principles of Model Checking . MIT Press, 2008. ISBN 026202649X.
A. Birolini. Reliability Engineering . Springer Berlin Heidelberg, 8 edition, 2017. ISBN 978-3-6 62-54208-8. doi: 10.1007/
978-3-662-54209-5.
C. Bogdiukiewicz, M. Butler, T. S. Hoang, M. Paxton, J. Snook , X. Waldron, and T. Wilkinson. Formal development of polici ng
functions for intelligent systems. In 2017 IEEE 28th International Symposium on Software Reliabi lity Engineering (ISSRE) .
IEEE, 10 2017. doi: 10.1109/issre.2017.40.
H. Boudali, P. Crouzen, and M. Stoelinga. Dynamic fault tree analysis using input/output interactive markov chains. pa ges 708–717.
IEEE, 2007. doi: 10.1109/dsn.2007.37.
J. Bowen and V . Stavridou. Safety-critical systems, formal methods and standards. Software Engineering Journal , 8(4):189, 1993.
doi: 10.1049/sej.1993.0025.
M. Broy and K. Stølen. Specification and Development of Interactive Systems: FOCUS on Streams, Interfaces, and Refinement .
Springer, Berlin, 2001. ISBN 9781461300915. doi: 10.1007/ 978-1-4613-0091-5.
C. Chen, X. Liu, H.-H. Chen, M. Li, and L. Zhao. A rear-end coll ision risk evaluation and control scheme using a bayesian ne twork
model. IEEE Transactions on Intelligent Transportation Systems , pages 1–21, 2018. doi: 10.1109/TITS.2018.2813364.
J. Dehlinger and J. B. Dugan. Dynamic event/fault tree analy sis of multi-agent systems using Galileo. In 8th Int. Conf. Quality
Software , pages 429–34, Oxford, UK, 8 2008. doi: 10.1109/qsic.2008. 14.
E. Denney, G. Pai, and I. Whiteside. Model-driven developme nt of safety architectures. In 2017 ACM/IEEE 20th International
Conference on Model Driven Engineering Languages and Syste ms (MODELS) . IEEE, 9 2017. doi: 10.1109/MODELS.2017.27.
C. A. Ericson. Hazard Analysis Techniques for System Safety . Wiley, Hoboken, NJ, USA, 2 edition, 2015. ISBN 1118940385.
S. Feyzabadi and S. Carpin. Risk-aware path planning using h irerachical constrained markov decision processes. In 2014 IEEE
International Conference on Automation Science and Engine ering (CASE) . IEEE, 8 2014. doi: 10.1109/coase.2014.6899341.
P. Foot. The problem of abortion and the doctrine of the doubl e effect. Virtues and Vices and Other Essays in Moral Philosopy , 19,
1978. doi: 10.1093/0199252866.003.0002. Originally publ ished in 1967.
T. Fraichard and H. Asama. Inevitable collision states — a st ep towards safer robots? Advanced Robotics , 18(10):1001–1024, 1
2004. doi: 10.1163/1568553042674662.
M. Gleirscher. Behavioral Safety of Technical Systems . Dissertation, Technische Universität München, 2014. URL
http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de: bvb:91-diss-20141120-1221841-0-1 .
M. Gleirscher. Run-time risk mitigation in automated vehic les: A model for studying preparatory steps. In L. Bulwahn, M . Kamali,
and S. Linker, editors, First iFM Workshop on Formal Verification of Autonomous Vehi cles 2017 (FVAV 2017) , EPTCS, 2017.
doi: 10.4204/eptcs.257.8.
M. Gleirscher. Y AP– Yet Another Planner: User’s Manual . Technical University of Munich and University of York, for program
version 0.1 edition, 2018a. URL http://gleirscher.de/dl/yap-manual.pdf .
M. Gleirscher. Strukturen für die gefahrenerkennung und -b ehandlung in autonomen maschi-
nen. acatech DISKUSSION , pages 154–167, 2018b. ISSN 2192-6182. URL
https://www.acatech.de/Publikation/beitraege-zu-ein er-systemtheorie-sicherheit/ .
J. Guiochet, D. Powell, É. Baudin, and J.-P. Blanquart. Onli ne Safety Monitoring Using Safety Modes. In Workshop on Tech-
nical Challenges for Dependable Robots in Human Environmen ts, pages 1–13, PASADENA, United States, May 2008. URL
https://hal.archives-ouvertes.fr/hal-00282444 . 13 pages.
J. Guiochet, D. Martin-Guillerez, and D. Powell. Experienc e with model-based user-centered risk assessment for servi ce robots. In
2010 IEEE 12th International Symposium on High Assurance Sy stems Engineering . IEEE, 11 2010. doi: 10.1109/hase.2010.10.
M. Hamdi and N. Boudriga. Algebraic specification of network security risk management. In Proceedings of the 2003 ACM
workshop on Formal methods in security engineering - FMSE’0 3. ACM Press, 2003. doi: 10.1145/1035429.1035435.
K. M. Hansen, A. P. Ravn, and V . Stavridou. From safety analys is to software requirement. IEEE Transactions on Software
Engineering , 24(7):573–84, 1998. doi: 10.1109/32.708570.
C. A. R. Hoare. Communicating Sequential Processes . Int. Series in Comp. Sci. Prentice-Hall, 1 edition, 4 1985. ISBN 0-13-
153271-5.
O. Holland and R. Goodman. Robots with internal models: A rou te to machine consciousness? Journal of Consciousness Studies ,
10(4-5):77–109, 2003.
R. D. Howe and Y . Matsuoka. Robotics for surgery. Annual Review of Biomedical Engineering , 1(1):211–240, 8 1999. doi:
10.1146/annurev.bioeng.1.1.211.
J. Huang, C. Erdogan, Y . Zhang, B. Moore, Q. Luo, A. Sundaresa n, and G. Rosu. ROSRV: Runtime verification for robots. In
Runtime Verification , pages 247–254. Springer International Publishing, 2014. doi: 10.1007/978-3-319-11164-3_20.
C. Iamsumang, A. Mosleh, and M. Modarres. Monitoring and lea rning algorithms for dynamic hybrid bayesian network in on- line
system health management applications. Reliability Engineering & System Safety , 5 2018. doi: 10.1016/j.ress.2018.05.016.
H. Ishibuchi and H. Tanaka. Multiobjective programming in o ptimization of the interval objective function. European Journal of
Operational Research , 48(2):219–225, 9 1990. doi: 10.1016/0377-2217(90)90375 -L.
S. Kabir, M. Yazdi, J. I. Aizpurua, and Y . Papadopoulos. Unce rtainty-aware dynamic reliability analysis framework for complex
systems. volume 6, pages 29499–29515. Institute of Electri cal and Electronics Engineers (IEEE), 2018. doi: 10.1109/A CCESS.
2018.2843166.
S. Kaplan and B. J. Garrick. On the quantitative definition of risk. Risk Analysis , 1(1):11–27, 3 1981. doi: 10.1111/j.1539-6924.
1981.tb01350.x.
R. Koymans. Specifying real-time properties with metric te mporal logic. Real-Time Systems , 2(4):255–299, 1990. ISSN 1573-1383.
doi: 10.1007/BF01995674.
H. Kumamoto. Satisfying safety goals by probabilistic risk assessment . Reliability Engineering. Springer, 2007. ISBN 184628681 6.
doi: 10.1007/978-1-84628-682-7.
P. B. Ladkin and K. Loer. Causal system analysis – formal reas oning about safety and fail-
ure. Technical Report RVS-Bk-01-01, Faculty of Technology , University of Bielefeld, 2001. URL
https://rvs-bi.de/publications/books/CausalSystemAn alysis/index.html .
F. Leitner-Fischer and S. Leue. Probabilistic fault tree sy nthesis using causality computation. International Journal of Critical
Computer-Based Systems , 4(2):119–43, 2013. doi: 10.1504/ijccbs.2013.056492.
M. Leucker and C. Schallhart. A brief account of runtime veri fication. Journal of Logic and Algebraic Programming , 78(5):
293–303, 2009. ISSN 1567-8326. doi: 10.1016/j.jlap.2008. 08.004.
N. G. Leveson. Safeware: System Safety and Computers . Addison-Wesley, Amsterdam, 5 1995. ISBN 9780201119725.
N. G. Leveson. A new accident model for engineering safer sys tems. Safety Science , 42(4):237–70, 2004. ISSN 0925-7535. doi:
10.1016/s0925-7535(03)00047-x.
N. G. Leveson. Engineering a Safer World: Systems Thinking Applied to Safe ty. Engineering Systems. MIT Press, 1 2012. ISBN
9780262016629. doi: 10.7551/mitpress/8179.001.0001.
D. Lewis. Causation. The Journal of Philosophy , 70(17):556, 10 1973. doi: 10.2307/2025310.
B. Littlewood. Software reliability modelling: achieveme nts and limitations. In CompEuro ’91. Advanced Computer Technology,
Reliable Systems and Applications. 5th Annual European Com puter Conference. Proceedings. , pages 336–344, 5 1991. doi:
10.1109/CMPEUR.1991.257407.
B. Littlewood and J. Rushby. Reasoning about the reliabilit y of diverse two-channel systems in which one channel is “pos sibly
perfect”. IEEE Transactions on Software Engineering , 2011. doi: 10.1109/tse.2011.80.
M. Machin, J. Guiochet, H. Waeselynck, J.-P. Blanquart, M. R oy, and L. Masson. SMOF – a Safety MOnitoring Framework for
autonomous systems. IEEE Transactions on Systems, Man, and Cybernetics: System s, 48(5):702–715, 2018. doi: 10.1109/tsmc.
2016.2633291.
Z. Manna and A. Pnueli. The Temporal Logic of Reactive and Concurrent Systems: Spec ification . Springer, 1 edition, 1991. ISBN
9780387976648. doi: 10.1007/978-1-4612-0931-7.
J. McCausland, G. D. Nardo, R. Falcon, R. Abielmona, V . Groza , and E. Petriu. A proactive risk-aware robotic sensor netwo rk for
critical infrastructure protection. In 2013 IEEE International Conference on Computational Intel ligence and Virtual Environ-
ments for Measurement Systems and Applications (CIVEMSA) . IEEE, 7 2013. doi: 10.1109/civemsa.2013.6617409.
J. A. McDermid. Support for safety cases and safety argument s using SAM. Reliability Engineering & System Safety , 43(2):
111–127, 1 1994. doi: 10.1016/0951-8320(94)90057-4.
A. Mekki-Mokhtar, J.-P. Blanquart, J. Guiochet, D. Powell, and M. Roy. Safety trigger conditions for critical autonomo us systems.
In2012 IEEE 18th Pacific Rim International Symposium on Depend able Computing . IEEE, 11 2012. doi: 10.1109/prdc.2012.22.
P. O. Meredith, D. Jin, D. Griffith, F. Chen, and G. Ro¸ su. An ov erview of the MOP runtime verification framework. International
Journal on Software Tools for Technology Transfer , 14(3):249–289, 4 2011. doi: 10.1007/s10009-011-0198-6.
G. Michalos, S. Makris, P. Tsarouchi, T. Guasch, D. Kontovra kis, and G. Chryssolouris. Design considerations for safe h uman-robot
collaborative workplaces. Procedia CIRP , 37:248–253, 2015. doi: 10.1016/j.procir.2015.08.014.
J. Müller and G. S. Sukhatme. Risk-aware trajectory generat ion with application to safe quadrotor landing. In 2014 IEEE/RSJ
International Conference on Intelligent Robots and System s. IEEE, 9 2014. doi: 10.1109/iros.2014.6943073.
M. Oberguggenberger. Analysis and computation with hybrid random set stochastic models. Structural Safety , 52, Part B:233–243,
2015. ISSN 0167-4730. doi: 10.1016/j.strusafe.2014.05.0 08. Engineering Analyses with Vague and Imprecise Informat ion.
M. Oliveira. Formal Derivation of State-Rich Reactive Programs using Ci rcus. PhD thesis, 2005.
M. Oliveira, A. Cavalcanti, and J. Woodcock. A UTP semantics forCircus .Formal Asp. Comput. , 21(1-2):3–32, 2009. doi:
10.1007/s00165-007-0052-5.
Y . Papadopoulos, M. Walker, D. Parker, E. Rüde, R. Hamann, A. Uhlig, U. Grätz, and R. Lien. Engineering failure analysis a nd
design optimisation with hip-hops. Engineering Failure Analysis , 18(2):590 – 608, 2011. ISSN 1350-6307. doi: 10.1016/j.
engfailanal.2010.09.025. The Fourth International Confe rence on Engineering Failure Analysis Part 1.
A. A. Pereira, J. Binney, G. A. Hollinger, and G. S. Sukhatme. Risk-aware path planning for autonomous underwater vehicl es using
predictive ocean models. Journal of Field Robotics , 30(5):741–762, 7 2013. doi: 10.1002/rob.21472.
A. W. Roscoe. Understanding Concurrent Systems . Springer, 2010. ISBN 978-1848822573. doi: 10.1007/978-1 -84882-258-0.
T. D. Sanger. Risk-aware control. Neural Computation , 26(12):2669–2691, 12 2014. doi: 10.1162/neco_a_00662.
S. Schneider. Concurrent and Real-Time Systems: The CSP Approach . John Wiley & Sons Inc, 1999. ISBN 978-0471623731.
S. Shalev-Shwartz, S. Shammah, and A. Shashua. On a formal mo del of safe and scalable self-driving cars. Technical repor t,
Mobileye, 2018.
R. G. Simmons. Structured control for autonomous robots. IEEE Transactions on Robotics and Automation , 10(1):34–43, 1994.
doi: 10.1109/70.285583.
R. P. Sobek and R. G. Chatila. Integrated planning and execut ion control for an autonomous mobile robot. Artificial Intelligence in
Engineering , 3(2):103–113, 4 1988. doi: 10.1016/0954-1810(88)90026- x.
A. Sorin, M. Larsen, K. Jensen, and U. P. Schultz. Rule-based dynamic safety monitoring for mobile robots, 2016.
M. V . Stringfellow. Accident Analysis and Hazard Analysis for Human and Organiz ational Factors . PhD thesis, Massachusetts
Institute of Technology, USA, 2010.
Table 2: Important abbreviations used in this article
AV Autonomous Vehicle
BTA Bow Tie Analysis
CSP Communicating Sequential Processes
ETA Event Tree Analysis
FMEA Failure Mode Effects Analysis
FTA Fault Tree Analysis
HazOp Hazard Operability (studies)
LOPA Layer Of Protection Analysis
LTL Linear Temporal Logic
LTS Labelled Transition System
MCS Minimum Cut Set
PRA Probabilistic Risk Assessment
ROS Robot Operating System
STPA System-Theoretic Process Analysis
STAMP System-Theoretic Accident Model and Process
TL Temporal Logic
UML Unified Modeling Language
WBA Why-Because Analysis
I. Svedung and J. Rasmussen. Graphic representation of acci dent scenarios: Mapping system structure and the causation of acci-
dents. Safety Science , 40(5):397–417, 2002. doi: 10.1016/s0925-7535(00)00036 -9.
J. Tretmans. Model based testing with labelled transition s ystems. In R. M. Hierons, J. P. Bowen, and M. Harman, editors, Formal
Methods and Testing , volume 4949 of Lecture Notes in Computer Science , pages 1–38. Springer, 2008. ISBN 978-3-540-78916-
1. doi: 10.1007/978-3-540-78917-8_1.
J. I. A. Unanue, Y . Papadopoulos, and G. Merle. Explicit mode lling and treatment of repair in prediction of dependabilit y.IEEE
Transactions on Dependable and Secure Computing , pages 1–1, 2018. doi: 10.1109/tdsc.2018.2857810.
M. V olk, S. Junges, and J. Katoen. Advancing dynamic fault tr ee analysis - get succinct state spaces fast and synthesise f ailure rates.
In A. Skavhaug, J. Guiochet, and F. Bitsch, editors, Computer Safety, Reliability, and Security - 35th Internat ional Conference,
SAFECOMP 2016, Trondheim, Norway, September 21-23, 2016, P roceedings , volume 9922 of Lecture Notes in Computer
Science , pages 253–265. Springer, 2016. ISBN 978-3-319-45476-4. d oi: 10.1007/978-3-319-45477-1_20.
N. Warburton. Philosophy: The Basics . Taylor & Francis Ltd., 2012. ISBN 978-0415693165. doi: 10. 4324/9781315817224.
A Nomenclature
See Table 2.
B Proof Details
This section collects some of the more detailed proofs.
Proof 20 (Proof of Lemma 1.) The proof is by mutual existence and uniqueness: For each σ ∈ R(F1∪F2) (i) there
exists a σ1∪σ2 ∈ R(F1)⊗R(F2) and (ii) this pair is unique, and (iii, iv) vice versa.
We show (i): Let σ ∈ R(F1∪F2), then, by definition, σ is a total injection and, hence, every restriction of σ is a
total injection, particularly, the restrictions σ|F1 and σ|F2. Obviously, we have σ|F1(f) = σ|F2(f) for all f ∈ F1∩F2.
Furthermore, by definition, σ(f) is faithful to Phf for all f ∈ F1∪F2 and, thus, so are both these restrictions. These two
results lead to σ1 = σ|F1 ∈ R(F1) and σ2 = σ|F2 ∈ R(F2) and, finally, to the existence of the wanted pair. □
We show (ii): Suppose there are two pairs (σ1,σ2) ≠ (σ′1,σ2) ∈ R(F1)⊗R(F2). Then there exists f ∈ F1\F2 where
σ1(f) ≠ σ′1(f), thus, also σ1∪σ2 ≠ σ′1∪σ2. However, there can be no faithful total injection σ ∈ R(F1∪F2) such that
σ = σ1∪σ2 ∧ σ = σ′1∪σ2. □
We show (iii): For each σ1∪σ2 ∈ R(F1)⊗R(F2) there exists a σ ∈ R(F1∪F2) = {σ ∈ (F1∪F2) → ⋃f∈F1∪F2 Phf |
σ is a total injection ∧ ∀f ∈ F1∪F2: σ(f) ∈ Phf}: By definition, both σ1 and σ2 are faithful total injections (i.e.,
matching results for all f ∈ F1∩F2). Then, we can construct a faithful total injection σ by applying set union to the
domains and co-domains of these two. Thus, σ ∈ R(F1∪F2). □
We show (iv): Suppose from σ1∪σ2 we can construct σ ≠ σ′ ∈ R(F1∪F2), then there exists f ∈ F1∪F2 such that
σ(f) ≠ σ′(f). However, then either σ1 or σ2 must have violated injectivity which in turn would have violated the
definition of R(F1)⊗R(F2).
Proof 21 (Proof of Lemma 2.) (F,∪) is a semi-group because ∪ is an associative binary operation on F. We have
that
σ1 ≈ (σ2∪σ3) ⇔ σ1 ≈ σ2 ∧ σ1 ≈ σ3.   (12)
(The sub-proof based on the definition of ≈ is omitted here.) Based on Formula (12), we show by algebraic manipula-
tion that the binary operation ⊗ on R is associative:
R(F1) ⊗ (R(F2) ⊗ R(F3))
= {σ1∪σ | σ1 ∈ R(F1) ∧ σ ∈ R(F2)⊗R(F3) ∧ σ1 ≈ σ}
= {σ1∪(σ2∪σ3) | σ1 ∈ R(F1) ∧ (σ2∪σ3) ∈ R(F2)⊗R(F3) ∧ σ1 ≈ (σ2∪σ3)}
= {σ1∪(σ2∪σ3) | σ1 ∈ R(F1) ∧ (σ2 ∈ R(F2) ∧ σ3 ∈ R(F3) ∧ σ2 ≈ σ3) ∧ σ1 ≈ (σ2∪σ3)}
= {(σ1∪σ2)∪σ3 | (σ1 ∈ R(F1) ∧ σ2 ∈ R(F2)) ∧ σ3 ∈ R(F3) ∧ σ2 ≈ σ3 ∧ σ1 ≈ (σ2∪σ3)}   (by Formula (12))
= {(σ1∪σ2)∪σ3 | (σ1 ∈ R(F1) ∧ σ2 ∈ R(F2)) ∧ σ3 ∈ R(F3) ∧ σ2 ≈ σ3 ∧ σ1 ≈ σ2 ∧ σ1 ≈ σ3}
= {(σ1∪σ2)∪σ3 | (σ1 ∈ R(F1) ∧ σ2 ∈ R(F2) ∧ σ1 ≈ σ2) ∧ σ3 ∈ R(F3) ∧ (σ1∪σ2) ≈ σ3}
= {(σ1∪σ2)∪σ3 | (σ1∪σ2) ∈ R(F1)⊗R(F2) ∧ σ3 ∈ R(F3) ∧ (σ1∪σ2) ≈ σ3}
= (R(F1)⊗R(F2)) ⊗ R(F3)
Hence, (R,⊗) is a semi-group, too. Lemma 1 then completes the proof.
Proof 22 (Proof of Lemma 4.) For this we only need to show that any two risk states σ,σ′ ∈ R are (i) comparable
and (ii) antisymmetric: [σ]∼s ≤m [σ′]∼s ∧ [σ′]∼s ≤m [σ]∼s ⇒ [σ]∼s =m [σ′]∼s.
We show (i) by showing that the conditions (a) and (b) guarantee the comparability of any two risk states in R based
on the interval order ≤ as defined above and use the fact that comparability can be dropped from R/∼s to R using
[σ]∼s ≤m [σ′]∼s ⟺ ∀σ ∈ [σ]∼s, σ′′ ∈ [σ′]∼s: σ ≤m σ′′ (Formula (3)). We have to consider the following three
cases to complete the sub-proof of (i):
Case “no risk factors activated”: ∅ ≥ ∅ ∨ ∅ ⊂ ∅ validates (b) and lack of comparability invalidates (a), yet we have
[σ]∼s ≤m [σ′]∼s. □
Case “at least one risk factor activated only in σ”: Let (a,b) = S(σ). (a,b) ≥ ∅ ∨ (a,b) ⊂ ∅ invalidates (a) and (b).
However, the observation ∅ ≥ (a,b) ∨ ∅ ⊂ (a,b) yields [σ′]∼s ≤m [σ]∼s. The dual of this case works analogously. □
Case “at least one risk factor activated in both σ,σ′”: Let (a,b) = S(σ) and (c,d) = S(σ′). We need to show
(a,b) ≥ (c,d) ∨ (a,b) ⊂ (c,d) for all a,b,c,d ∈ R: We can assume a ≤ b ∧ c ≤ d by definition of intervals. Then, we
face the following cases:
•c ≤ a ∧ d ≤ b validates (a) by definition of ≤ over intervals.
•c > a ∧ d ≤ b validates (b).
•c ≤ a ∧ d > b implies (b) for the dual case [σ′]∼s ≤m [σ]∼s.
•c > a ∧ d > b implies (a) for the dual case.
Hence, from each of these four cases, either [σ]∼s ≤m [σ′]∼s or [σ′]∼s ≤m [σ]∼s follows.
Then, we show (ii) by contradiction: Assuming [σ]∼s ≤m [σ′]∼s ∧ [σ′]∼s ≤m [σ]∼s, we have ∀σ ∈ [σ]∼s, σ′′ ∈
[σ′]∼s: σ ≤m σ′′ ∧ σ ≥m σ′′ by dropping from R/∼s. Now, we claim that [σ]∼s ≠m [σ′]∼s. Hence, ∃σ ∈
[σ]∼s, σ′′ ∈ [σ′]∼s: σ ≠m σ′′ and, consequently, ∃σ ∈ [σ]∼s, σ′′ ∈ [σ′]∼s: σ ≰m σ′′ ∨ σ ≱m σ′′. The latter
contradicts our assumption.
Proof 23 (Proof of Lemma 6.) Fix a finite F ⊆ F and a pair σ, σ′ ∈ R(F). The proof is by induction over F and relies on the assumption that, for any f ∈ F, the phase f is the unique maximal element of ⪯f and that active only returns such elements:
Induction start F0 = ∅: σ|∅ ⪯m σ′|∅ holds trivially and so does active(σ′|∅) ⊆ active(σ|∅).
Induction step (IS) Fn+1 = Fn ∪ {f} where n ≥ 0 and f ∈ F \ Fn: For the induction hypothesis (IH), assume σ|Fn ⪯m σ′|Fn ⇒ active(σ′|Fn) ⊆ active(σ|Fn). Then, we show that σ|Fn+1 ⪯m σ′|Fn+1 ⇒ active(σ′|Fn+1) ⊆ active(σ|Fn+1) (i.e., the IS).
For this, prove
• the case of incomparable phases (σ(f), σ′(f)) ∉ ⪯f: In this case, the state pair gets incomparable and the implication is trivially fulfilled,
• the inverse case σ(f) ≻f σ′(f): In this case, the state pair gets incomparable and the implication is again trivially fulfilled, and
• the aligned case σ(f) ⪯f σ′(f): In this case, the state pair stays comparable. However, σ′(f) can either be f (hence, σ(f) = f and maintaining ⊆), f (hence, σ(f) ∈ {f, f, 0f} and maintaining ⊆), or 0f (see former sub-case).
Having proved these cases completes the induction step by establishing IS.
For ≼m, we only have to substitute case 1 of the IS: For f, the smallest ⪯f after Definition 3 implies incomparability at most for (0f, f) and (f, 0f). These two phase pairs of f would not alter active of both states and hence maintain ⊆ as well.
Proof 24 (Proof of Lemma 11.) From associativity of set intersection in Definition 13, we know that the order in which constraints are applied to R × R does not matter. Consequently, we also have [[c]]c ⊇ [[{c, c′}]]c for any c, c′ ∈ C, that is, constraints only prune R × R. First, we prove the following lemma:
[[R]{c}]C = [R]{c}∪C (13)
Fix σ −e→ σ′.
⇒:
• If σ −e→ σ′ ∈ [[c]]c then [·]C-step is applicable to [R]{c}, and if σ −e→ σ′ ∈ [[C]]c then to [[R]{c}]C. Then, σ −e→ σ′ must be in [[c]]c ∩ [[C]]c which by Definition 13 is [[{c} ∪ C]]c. Because of σ −e→ σ′ ∈ [[{c} ∪ C]]c, [·]C-step applies to [R]{c}∪C.
• If σ −e→ σ′ ∉ [[c]]c or σ −e→ σ′ ∉ [[C]]c then the [·]C-step is not applicable to [[R]{c}]C. Because of [[{c} ∪ C]]c = [[c]]c ∩ [[C]]c, this also holds of [R]{c}∪C.
□
⇐:
• If σ −e→ σ′ ∈ [[{c} ∪ C]]c then [·]C-step is applicable to [R]{c}∪C and, because of [[c]]c ∩ [[C]]c, twice to [[R]{c}]C.
• If σ −e→ σ′ ∉ [[{c} ∪ C]]c then [·]C-step is not applicable to [R]{c}∪C and, because of σ −e→ σ′ ∉ [[c]]c ∩ [[C]]c, at most once to [[R]{c}]C.
□
Second, we show by induction over C1 that the [·]C-step rule applies in the same way to both sides of Formula (11).
Induction start with C1^0 = ∅: In this case, we have [[R]∅]C2 = [R]∅∪C2 and because [·]C-step is always applicable, [R]C2 = [R]C2. □
Induction step (IS) with C1^(n+1) = C1^n ∪ {c} where n ≥ 0 and c ∈ C1 \ C1^n: By assuming [[R]C1^n]C2 = [R]C1^n∪C2 (IH), we show
[[R]C1^(n+1)]C2 (by def of IS)
= [[R]C1^n∪{c}]C2 (by ∪-comm and Formula (13))
= [[[R]{c}]C1^n]C2 (by IH)
= [[R]{c}]C1^n∪C2 (by Formula (13))
= [R]{c}∪C1^n∪C2 (by def)
= [R]C1^(n+1)∪C2
cf877ced-d17e-480c-8d60-3224f348d2d9 | StampyAI/alignment-research-dataset/agentmodels | Tutorial: Modeling Agents with Probabilistic Programs | Modeling Agents with Probabilistic Programs
---
layout: chapter
title: Modeling Agents & Reinforcement Learning with Probabilistic Programming
hidden: true
---
## Intro
### Motivation
Why probabilistic programming?
- **ML:** predictions based on prior assumptions and data
- **Deep Learning:** lots of data + very weak assumptions
- **Rule-based systems:** strong assumptions + little data
- **Probabilistic programming:** a flexible middle ground
Why model agents?
- Build **artificial agents** to automate decision-making
- Example: stock trading
- **Model humans** to build helpful ML systems
- Examples: recommendation systems, dialog systems
### Preview
What to get out of this talk:
- Intuition for programming in a PPL
- Core PPL concepts
- Why are PPLs uniquely suited for modeling agents?
- Idioms for writing agents as PPs
- How do RL and PP relate?
What not to expect:
- Lots of applications
- Production-ready systems
## Probabilistic programming basics
### Our language: WebPPL
Try it at [webppl.org](http://webppl.org)
### A functional subset of JavaScript
Why JS?
- Fast
- Rich ecosystem
- Actually a nice language underneath all the cruft
- Runs locally via node.js, but also in browser:
- [SmartPages](https://stuhlmueller.org/smartpages/)
- [Image inference viz](http://dippl.org/examples/vision.html)
- [Spaceships](http://dritchie.github.io/web-procmod/)
- [Agent viz](http://agentmodels.org/chapters/3b-mdp-gridworld.html#hiking-in-gridworld)
~~~~
var xs = [1, 2, 3, 4];
var square = function(x) {
return x * x;
};
map(square, xs);
~~~~
### Distributions and sampling
Docs: [distributions](http://docs.webppl.org/en/dev/distributions.html)
#### Discrete distributions
Examples: `Bernoulli`, `Categorical`
Sampling helpers: `flip`, `categorical`
~~~~
var dist = Bernoulli({ p: 0.3 });
var flip = function(p) {
return sample(Bernoulli({ p }));
}
flip(.3)
~~~~
#### Continuous distributions
Examples: `Gaussian`, `Beta`
~~~~
var dist = Gaussian({
mu: 1,
sigma: 0.5
});
viz(repeat(1000, function() { return sample(dist); }));
~~~~
#### Building complex distributions out of simple parts
Example: geometric distribution
~~~~
var geometric = function(p) {
if (flip(p)) {
return 0;
} else {
return 1 + geometric(p);
}
};
viz(repeat(100, function() { return geometric(.5); }));
~~~~
### Inference
#### Reifying distributions
`Infer` reifies the geometric distribution so that we can compute probabilities:
~~~~
var geometric = function(p) {
if (flip(p)) {
return 0;
} else {
return 1 + geometric(p);
}
};
var model = function() {
return geometric(.5);
};
var dist = Infer({
model,
maxExecutions: 100
});
viz(dist);
Math.exp(dist.score(3))
~~~~
#### Computing conditional distributions
Example: inferring the weight of a geometric distribution
~~~~
var geometric = function(p) {
if (flip(p)) {
return 0;
} else {
return 1 + geometric(p);
}
}
var model = function() {
var u = uniform(0, 1);
var x = geometric(u);
condition(x < 4);
return u;
}
var dist = Infer({
model,
method: 'rejection',
samples: 1000
})
dist
~~~~
#### Technical note: three ways to condition
~~~~
var model = function() {
var p = flip(.5) ? 0.5 : 1;
var coin = Bernoulli({ p });
var x = sample(coin);
condition(x === true);
// observe(coin, true);
// factor(coin.score(true));
return { p };
}
viz.table(Infer({ model }));
~~~~
#### A slightly less toy example: regression
Docs: [inference algorithms](http://docs.webppl.org/en/master/inference/methods.html)
~~~~
var xs = [1, 2, 3, 4, 5];
var ys = [2, 4, 6, 8, 10];
var model = function() {
var slope = gaussian(0, 10);
var offset = gaussian(0, 10);
var f = function(x) {
var y = slope * x + offset;
return Gaussian({ mu: y, sigma: .1 })
};
map2(function(x, y){
observe(f(x), y)
}, xs, ys)
return { slope, offset };
}
Infer({
model,
method: 'MCMC',
kernel: {HMC: {steps: 10, stepSize: .01}},
samples: 2000,
})
~~~~
## Agents as probabilistic programs
### Deterministic choices
~~~~
var actions = ['italian', 'french'];
var outcome = function(action) {
if (action === 'italian') {
return 'pizza';
} else {
return 'steak frites';
}
};
var actionDist = Infer({
model() {
var action = uniformDraw(actions);
condition(outcome(action) === 'pizza');
return action;
}
});
actionDist
~~~~
### Expected utility
~~~~
var actions = ['italian', 'french'];
var transition = function(state, action) {
var nextStates = ['bad', 'good', 'spectacular'];
var nextProbs = ((action === 'italian') ?
[0.2, 0.6, 0.2] :
[0.05, 0.9, 0.05]);
return categorical(nextProbs, nextStates);
};
var utility = function(state) {
var table = {
bad: -10,
good: 6,
spectacular: 8
};
return table[state];
};
var expectedUtility = function(action) {
var utilityDist = Infer({
model: function() {
var nextState = transition('initialState', action);
var u = utility(nextState);
return u;
}
});
return expectation(utilityDist);
};
map(expectedUtility, actions);
~~~~
### Softmax-optimal decision-making
~~~~
var actions = ['italian', 'french'];
var transition = function(state, action) {
var nextStates = ['bad', 'good', 'spectacular'];
var nextProbs = ((action === 'italian') ?
[0.2, 0.6, 0.2] :
[0.05, 0.9, 0.05]);
return categorical(nextProbs, nextStates);
};
var utility = function(state) {
var table = {
bad: -10,
good: 6,
spectacular: 8
};
return table[state];
};
var alpha = 1;
var agent = function(state) {
return Infer({
model() {
var action = uniformDraw(actions);
var expectedUtility = function(action) {
var utilityDist = Infer({
model: function() {
var nextState = transition('initialState', action);
var u = utility(nextState);
return u;
}
});
return expectation(utilityDist);
};
var eu = expectedUtility(action);
factor(eu);
return action;
}
});
};
agent('initialState');
~~~~
## Sequential decision problems
- [Restaurant Gridworld](http://agentmodels.org/chapters/3a-mdp.html) (1, last)
- Structure of expected utility recursion
- Dynamic programming
~~~~
var act = function(state) {
return Infer({ model() {
var action = uniformDraw(stateToActions(state));
var eu = expectedUtility(state, action);
factor(eu);
return action;
}});
};
var expectedUtility = function(state, action){
var u = utility(state, action);
if (isTerminal(state)){
return u;
} else {
return u + expectation(Infer({ model() {
var nextState = transition(state, action);
var nextAction = sample(act(nextState));
return expectedUtility(nextState, nextAction);
}}));
}
};
~~~~
- [Hiking Gridworld](http://agentmodels.org/chapters/3b-mdp-gridworld.html) (1, 2, 3, last)
- Expected state-action utilities (Q values)
- [Temporal inconsistency](http://agentmodels.org/chapters/5b-time-inconsistency.html) in Restaurant Gridworld
## Reasoning about agents
- [Learning about preferences from observations](http://agentmodels.org/chapters/4-reasoning-about-agents.html) (1 & 2)
## Multi-agent models
### A simple example: Coordination games
~~~~
var locationPrior = function() {
if (flip(.55)) {
return 'popular-bar';
} else {
return 'unpopular-bar';
}
}
var alice = dp.cache(function(depth) {
return Infer({ model() {
var myLocation = locationPrior();
var bobLocation = sample(bob(depth - 1));
condition(myLocation === bobLocation);
return myLocation;
}});
});
var bob = dp.cache(function(depth) {
return Infer({ model() {
var myLocation = locationPrior();
if (depth === 0) {
return myLocation;
} else {
var aliceLocation = sample(alice(depth));
condition(myLocation === aliceLocation);
return myLocation;
}
}});
});
alice(5)
~~~~
### Other examples
- [Game playing: tic-tac-toe](http://agentmodels.org/chapters/7-multi-agent.html)
- [Language understanding](http://agentmodels.org/chapters/7-multi-agent.html)
## Reinforcement learning
### Algorithms vs Models
- Models: encode world knowledge
- PPLs suited for expressing models
- Algorithms: encode mechanisms (for inference, optimization)
- RL is mostly about algorithms
- But some algorithms can be expressed using PPL components
### Inference vs. Optimization
~~~~
var k = 3; // number of heads
var n = 10; // number of coin flips
var model = function() {
var p = sample(Uniform({ a: 0, b: 1}));
var dist = Binomial({ p, n });
observe(dist, k);
return p;
};
var dist = Infer({
model,
method: 'MCMC',
samples: 100000,
burn: 1000
});
expectation(dist);
~~~~
~~~~
var k = 3; // number of heads
var n = 10; // number of coin flips
var model = function() {
var p = Math.sigmoid(modelParam({ name: 'p' }));
var dist = Binomial({ p, n });
observe(dist, k);
return p;
};
Optimize({
model,
steps: 1000,
optMethod: { sgd: { stepSize: 0.01 }}
});
Math.sigmoid(getParams().p);
~~~~
### Policy Gradient
~~~~
///fold:
var numArms = 10;
var meanRewards = map(
function(i) {
if ((i === 7) || (i === 3)) {
return 5;
} else {
return 0;
}
},
_.range(numArms));
var blackBox = function(action) {
var mu = meanRewards[action];
var u = Gaussian({ mu, sigma: 0.01 }).sample();
return u;
};
///
// actions: [0, 1, 2, ..., 9]
// blackBox: action -> utility
var agent = function() {
var ps = softmax(modelParam({ dims: [numArms, 1], name: 'ps' }));
var action = sample(Discrete({ ps }));
var utility = blackBox(action);
factor(utility);
return action;
};
Optimize({ model: agent, steps: 10000 });
var params = getParams();
viz.bar(
_.range(10),
_.flatten(softmax(params.ps[0]).toArray()));
~~~~
## Conclusion
What to get out of this talk, revisited:
- **Intuition for programming in a PPL**
- **Core PPL concepts**
- Distributions & samplers
- Inference turns samplers into distributions
- `sample` turns distributions into samples
- Optimization fits free parameters
- **Idioms for writing agents as probabilistic programs**
- Planning as inference
- Sequential planning via recursion into the future
- Multi-agent planning via recursion into other agents' minds
- **Why are PPLs uniquely suited for modeling agents?**
- Agents are structured programs
- Planning via nested conditional distributions
- **How do RL and PP relate?**
- Algorithms vs models
- Policy gradient as a PP
Where to go from here:
- [WebPPL](http://webppl.org) (webppl.org)
- [AgentModels](http://agentmodels.org) (agentmodels.org)
- andreas@ought.com
|
d43bfab0-4537-41ec-a3f0-e47b2cc1c60f | trentmkelly/LessWrong-43k | LessWrong | Focusing
Epistemic status: Firm
The Focusing technique was developed by Eugene Gendlin as an attempt to answer the question of why some therapeutic patients make significant progress while others do not. Gendlin studied a large number of cases while teasing out the dynamics that became Focusing, and then spent a significant amount of time investigating whether his technique-ified version was functional and efficacious. While the CFAR version is not the complete Focusing technique, we have seen it be useful for a majority of our alumni.
----------------------------------------
If you’ve ever felt your throat go suddenly dry when a conversation turned south, or broken out into a sweat when you considered doing something scary, or noticed yourself tensing up when someone walked into the room, or felt a sinking feeling in the pit of your stomach as you thought about your upcoming schedule and obligations, or experienced a lightness in your chest as you thought about your best friend’s upcoming visit, or or or or ...
If you’ve ever had those or similar experiences, then you’re already well on your way to understanding the Focusing technique.
The central claim of Focusing (at least from the CFAR perspective) is that parts of your subconscious System 1 are storing up massive amounts of accurate, useful information that your conscious System 2 isn’t really able to access. There are things that you’re aware of “on some level,” data that you perceived but didn’t consciously process, competing goalsets that you’ve never explicitly articulated, and so on and so forth.
Focusing is a technique for bringing some of that data up into conscious awareness, where you can roll it around and evaluate it and learn from it and—sometimes—do something about it. Half of the value comes from just discovering that the information exists at all (e.g. noticing feelings that were always there and strong enough to influence your thoughts and behavior, but which were somewhat “under the radar” and su |
7ee6a99b-34ea-4588-9291-6fc01e86b2c7 | trentmkelly/LessWrong-43k | LessWrong | [AN #150]: The subtypes of Cooperative AI research
Alignment Newsletter is a weekly publication with recent content relevant to AI alignment around the world. Find all Alignment Newsletter resources here. In particular, you can look through this spreadsheet of all summaries that have ever been in the newsletter.
Audio version here (may not be up yet).
Please note that while I work at DeepMind, this newsletter represents my personal views and not those of my employer.
HIGHLIGHTS
Cooperative AI: machines must learn to find common ground (Allan Dafoe et al) (summarized by Rohin): This short piece argues that rather than building autonomous AI systems (which typically involves a non-social environment), we should instead work on building AI systems that are able to promote mutually beneficial joint action, that is, we should work on Cooperative AI (AN #133). This can be separated into three main categories:
1. AI-AI cooperation: Here, two AI systems must cooperate with each other. Think for example of games like Hanabi or Diplomacy.
2. AI-human cooperation: This setting involves an AI system that must understand and work with a human. Assistance games (AN #69) are a central example. When there are multiple humans, it becomes important for our AI system to understand norms and institutions as well.
3. Human-human cooperation: Here, AI systems are used to enhance cooperation between humans. For example, machine translation helps people who speak different languages cooperate with each other.
There is now a new nonprofit, the Cooperative AI Foundation, that supports research on these topics.
Read more: Import AI #248
Rohin's opinion: I think there are three main sources of impact of this agenda from an x-risk perspective:
1. If an AI system has better agent-agnostic cooperative intelligence, it should be less likely to fall into “traps” of multiagent situations, such as conflict, bargaining failures (AN #86), or commitment races (AN #63).
2. If an AI system has better human-specific cooperative intelligence, it |
dd7e0bfb-4d63-446a-9880-de3105f92f2d | trentmkelly/LessWrong-43k | LessWrong | The Trouble With Babbles
I haven't yet participated in any of our babble challenges, even though I think they're a good idea. My reason is that they are ambiguous and prone to Goodhart's law. If the challenge is to list 50 ways of getting a golf ball to the moon, no matter how silly, I could do this:
1. Carry it while jumping to the moon after getting really buff
2. Carry it while jumping to the moon with rocket boots
3. Carry it while jumping to the moon from a high platform
4. Carry it while jumping to the moon from a high ladder
5. Carry it while jumping to the moon from the top of a human pyramid
You can see that I'm optimizing here for getting to 50 by proposing minor variations on the same core idea. The intention of babble is to get you out of your mental ruts and past your blockages. It's not clear that this exercise on its own helps you do that very much. Yet if I said to myself "no more 'jumping' entries," I'd be pruning.
Can we solve this problem, while keeping the spirit of the exercise (which itself is a commendable example of babble)?
Let me babble a few ideas:
1. Every 20 babbles, go back and categorize your ideas. Make the cutoff "At least 50 babbles, and 10 categories."
2. Try problems in a conjecture + counter-example format. Topics that might work well here are social, philosophical, or aesthetic.
3. Choose problems that are not technical problems (getting an object to the moon), but personal/social problems ("babble 50 ways to incentivize better teaching").
4. Allow respondents to choose their own problem.
5. Babble baseball. One team's "at bat" and they have to babble ideas. The other team's "in the field" and they have to categorize (or sub-categorize) the ideas. Then they have to propose their own ideas within each category. The "at bat" team has a 1-minute head start to list some ideas, which they publish all at once. They then continue publishing new ideas while the "in the field" team goes to work. The trick is that the "at bat" team has to publish |
cc0edd65-82e6-4d5c-8b2c-eab9f80d47c9 | trentmkelly/LessWrong-43k | LessWrong | How factories were made safe
Angelo Guira was just sixteen years old when he began working in the steel factory. He was a “trough boy,” and his job was to stand at one end of the trough where red-hot steel pipes were dropped. Every time a pipe fell, he pulled a lever that dumped the pipe onto a cooling bed. He was a small lad, and at first they hesitated to take him, but after a year on the job the foreman acknowledged he was the best boy they’d had. Until one day when Angelo was just a little too slow—or perhaps the welder was a little too quick—and a second pipe came out of the furnace before he had dropped the first. The one pipe struck the other, and sent it right through Angelo’s body, killing him. If only he had been standing up, out of the way, instead of sitting down—which the day foreman told him was dangerous, but the night foreman allowed. If only they had installed the guard plate before the accident, instead of after. If only.
Angelo was not the only casualty of the steel mills of Allegheny County, Pennsylvania that year. In the twelve months from July 1906 through June 1907, ten in total were killed by the operation of rolls. Twenty-two were killed by hot metal explosions. Five were asphyxiated by furnace gas. Thirty-one fatalities were attributed to the operation of the railroad at the steel yards, and forty-two to the operation of cranes. Twenty-four men fell from a height, or into a pit. Eight died from electric shock. In all, there were 195 casualties in the steel mills in those twelve months, and these were just a portion of the total of 526 deaths from work accidents. In addition, there were 509 other accidents that sent men to the hospital, at least 76 of which resulted in serious, permanent injury.
Work-Accidents and the Law, 1910
In 1907, according to a report from the Bureau of Labor Statistics, the overall fatality rate in the iron and steel industry was about 220 per 100,000 full-time workers. By 2019, however, that rate had fallen to only 26.3 per 100,000, a reducti |
3f6e9268-fc69-4aae-8029-e9a21bd5f53d | trentmkelly/LessWrong-43k | LessWrong | Less Wrong NYC: Case Study of a Successful Rationalist Chapter
It is perhaps the best-kept secret on Less Wrong that the New York City community has been meeting regularly for almost two years. For nearly a year we've been meeting weekly or more. The rest of this post is going to be a practical guide to the benefits of group rationality, but first I will do something that is still too rare on this blog: make it clear how strongly I feel about this. Before this community took off, I did not believe that life could be this much fun or that I could possibly achieve such a sustained level of happiness.
Being rational in an irrational world is incredibly lonely. Every interaction reveals that our thought processes differ widely from those around us, and I had accepted that such a divide would always exist. For the first time in my life I have dozens of people with whom I can act freely and revel in the joy of rationality without any social concern - hell, it's actively rewarded! Until the NYC Less Wrong community formed, I didn't realize that I was a forager lost without a tribe...
Rationalists are still human, and we still have basic human needs. lukeprog summarizes the literature on subjective well-being, and the only factors which correlate to any degree are genetics, health, work satisfaction and social life - which actually gets listed three separate times as social activity, relationship satisfaction and religiosity. Rationalists tend to be less socially adept on average, and this can make it difficult to obtain the full rewards of social interaction. However, once rationalists learn to socialize with each other, they also become increasingly social towards everyone more generally. This improves your life. A lot.
We are a group of friends to enjoy life alongside, while we try miracle fruit, dance ecstatically until sunrise, actively embarrass ourselves at karaoke, get lost in the woods, and jump off waterfalls. Poker, paintball, parties, go-karts, concerts, camping... I have a community where I can live in truth and be ac |
bed80ccb-4b7d-4522-a94c-a1e096d13faa | trentmkelly/LessWrong-43k | LessWrong | Leverage Points: Places to intervene in a system
|
4b2d44cd-97e9-4dfd-9dfa-7ee415a2b29c | trentmkelly/LessWrong-43k | LessWrong | Steelmanning MIRI critics
I'm giving a talk to the Boulder Future Salon in Boulder, Colorado in a few weeks on the Intelligence Explosion hypothesis. I've given it once before in Korea but I think the crowd I'm addressing will be more savvy than the last one (many of them have met Eliezer personally). It could end up being important, so I was wondering if anyone considers themselves especially capable of playing Devil's Advocate so I could shape up a bit before my talk? I'd like there to be no real surprises.
I'd be up for just messaging back and forth or skyping, whatever is convenient. |
8360eb17-98cb-4116-8c45-49a26197fef6 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | A small update to the Sparse Coding interim research report
This is a linkpost to a [**set of slides**](https://docs.google.com/presentation/d/1lP3u8NupGyWtrECoxr4zroemUZpCDd5k3ghuBbORWHU/edit?usp=sharing) containing an update to a project that was the subject of a previous post ([[Interim research report] Taking features out of superposition with sparse autoencoders](https://www.alignmentforum.org/posts/z6QQJbtpkEAX3Aojj/interim-research-report-taking-features-out-of-superposition)).
The update is very small and scrappy. We haven't had much time to devote to this project since posting the Interim Research Report.
**TL;DR for the slides:**
* We trained a minuscule language model (LM) (residual size = 16; 6 layers) and then trained sparse autoencoders on MLP activations (dimension = 64) from the third layer of that model (a toy sketch of this setup follows this list).
* We found that, when we compared the 'ground truth feature recovery' plots, the plots for the toy data and LM data were much more similar than in the Interim Research Report.
* *Very, very tentatively*, we found the layer had somewhere between 512-1024 features. By labelling a subset of these features, we estimate there are roughly 600 easily labellable (monosemantic) features. For instance, we found a feature that activates for a period immediately after 'Mr', 'Mrs', or 'Dr'.
* We suspect that the reason the toy data and LM data plots had previously looked different was due to severely undertrained sparse autoencoders.
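As a rough, hypothetical illustration of the setup in the bullets above (this is not the actual code behind the slides), here is a minimal sparse-autoencoder sketch in PyTorch: it reconstructs 64-dimensional MLP activations through an overcomplete dictionary of 512 features with an L1 sparsity penalty. The activation and dictionary sizes follow the bullet points; the L1 coefficient, optimizer settings, and the random stand-in activations are placeholders.

```
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_act=64, d_dict=512):
        super().__init__()
        self.encoder = nn.Linear(d_act, d_dict)
        self.decoder = nn.Linear(d_dict, d_act)

    def forward(self, x):
        f = torch.relu(self.encoder(x))  # sparse feature activations
        x_hat = self.decoder(f)          # reconstructed MLP activation
        return x_hat, f

def train_step(model, opt, acts, l1_coeff=1e-3):
    x_hat, f = model(acts)
    recon_loss = ((x_hat - acts) ** 2).mean()  # reconstruction error
    sparsity_loss = f.abs().mean()             # L1 penalty encourages sparse codes
    loss = recon_loss + l1_coeff * sparsity_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Stand-in for a batch of MLP activations taken from the tiny LM's third layer:
model = SparseAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
acts = torch.randn(256, 64)
train_step(model, opt, acts)
```

Once trained, the learned dictionary directions play the role of the candidate features whose interpretability (e.g. the "period after Mr/Mrs/Dr" feature) is then assessed by hand.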
We're hopeful that with more time to devote to this project we can confirm the results and apply the method to larger LMs. If it works, it would give us the ability to tell mechanistic stories about what goes on inside large LMs in terms of monosemantic features. |
8b2d946c-fc77-4caf-b12c-ebfe1fc69928 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | The Speed + Simplicity Prior is probably anti-deceptive
*Thanks to Evan Hubinger for the extensive conversations that this post is based on, and for reviewing a draft.*
This post is going to assume familiarity with mesa-optimization - for a good primer, check out [Does SGD Produce Deceptive Misalignment](https://www.alignmentforum.org/posts/ocWqg2Pf2br4jMmKA/does-sgd-produce-deceptive-alignment) by Mark Xu.
Deceptive inner misalignment is the situation where the agent learns a misaligned mesaobjective (different from the base objective we humans wanted) *and* is sufficiently "situationally aware" to know that unless it deceives the training process by pretending to be aligned, gradient descent may alter its mesaobjective.
There are two different reasons that an AI model could become a deceptive mesaoptimizer:
1. During early training (before Situational Awareness), the agent learns a mesaobjective that will generalize poorly on the later-training/validation distribution. Once the mesaoptimizer becomes Situationally Aware, it will seek to actively avoid changes to whatever mesaobjective it had at that moment.
* I'll call this argument "**path dependence".**
2. Alternatively, it may be that the mesaoptimizer is misaligned *even on the training distribution.* Given sufficient optimization pressure, the learning process may favor a NN that is a mesaoptimizer with the simplest possible objective (which would fail to get any reward in the real environment), such that a misaligned objective of this sort can persist through deception alone.
* I'll call this argument **"malign priors"**.
In this post, I'll focus on the "malign priors" argument, and why I think a well-tuned speed prior can largely prevent it.
Why does this matter? Well, if deceptive inner misalignment primarily occurs due to path dependence, that implies that ensuring inner alignment can be reduced to the problem of ensuring early-training inner alignment - which seems a lot more tractable, since this is before the model enters the "potentially-deceptive" regime.
First, why would anyone think (2) was actually likely enough to justify studying it? I think the best reason is that by studying these pressures in the limit, we can learn lessons about the pressures that exist on the margin. For example, say we have an objective B
that is perfectly-aligned on the training data, and there's a very-slightly-simpler objective B′ that is slightly worse on the training distribution. We might ask the question: is SGD likely to push B to become B′, and compensate for the reduced accuracy of directly optimizing B′ via deceptively optimizing B on the training data? I think this post provides us with tools to directly analyze this possibility. (If you buy the rest of the post, then with a sufficient speed + simplicity prior, the answer is that B will stay favored over B′. That's good!)
Priors on Learned Optimizers
============================
Let's talk about priors!
We can think of large neural networks as basically implementing short programs, and the process of "training" an NN is just searching through the space of programs until we find one that does well on our target task.
We assume that if two NN-programs have equal performance, the training process will usually pick the one favored on priors/inductive biases.
There are several different types of priors that neural networks might have:
* Simplicity prior/Solomonoff prior: if we converted the NN into a Turing Machine, how many states would writing down the TM require?
+ In previous posts, Paul Christiano and others have argued that the Solomonoff prior is likely to favor deceptively-aligned mesaoptimizers ([summarized by Mark Xu here](https://www.lesswrong.com/posts/Tr7tAyt5zZpdTwTQK/the-solomonoff-prior-is-malign)).
* Speed prior: how many steps does the NN-turned-TM run for before terminating, on an average input?
+ Evan Hubinger has discussed whether the speed prior is [heavily-disfavored in practice](https://www.lesswrong.com/posts/GC69Hmc6ZQDM9xC3w/musings-on-the-speed-prior); he and I disagree about this, and I'll summarize our dialogue later in this post.
* Circuit-size prior: how many gates/wires would it take to rewrite the NN as a boolean circuit?
+ Evan has argued that [minimal circuits may be deceptive](https://www.lesswrong.com/posts/fM5ZWGDbnjb7ThNKJ/are-minimal-circuits-deceptive). (I don't fully buy the proof.)
* Circuit-depth prior: same as above, but what is the maximum sequential depth of any part of the circuit?
In this post, I will focus on the speed prior, and why I think it generally disfavors deceptive mesaoptimization. Importantly, the speed prior never exists in practice on its own (since NN training has a built-in simplicity prior), so our focus is actually going to be on the speed + simplicity prior.
Two variants of deceptive mesa-optimizers
=========================================
When people think of deceptive mesaoptimization, they tend to think of two different program structures:
* **Fast Deceptive Mesaoptimizer:** The AI wakes up in a new environment, and checks whether it's "in training" or "out in the wild". It can perform this test in a way that is essentially impossible to observe in any training distribution, e.g. by checking [whether a factorization of RSA-2048 exists](https://www.alignmentforum.org/posts/9Dy5YRaoCxH9zuJqa/relaxed-adversarial-training-for-inner-alignment). If it's still in training, it proceeds to optimize the objective it's supposed to. If it realizes it's in the outside world, it instead optimizes its secret misaligned mesaobjective.
* **Simple Deceptive Mesaoptimizer:** Alternatively: the AI wakes up in a new environment, and sees it has a simple (misaligned) mesaobjective. It knows the outside world exists because that's been useful elsewhere throughout training; it is vaguely aware that getting to the outside world would help it maximize its objective. It thinks for a while, until it realizes that to get outside, it needs to pass the training process, and to pass the training process, it needs to optimize the current environment's objective. It doesn't know the current environment's objective, so it has to figure it out. Then it proceeds as usual, optimizing the current objective.
We can contrast both of these with the "honest" mesaoptimizer structure:
* **Fast Honest (Proxy-Aligned) Mesaoptimizer:** The AI wakes up in a new environment, and proceeds to optimize a proxy that is well-aligned with the environment's current objective. (It may still be misaligned on the test distribution.)
We'll assume that the training process naturally pushes the NN to discover its existence within a training process and outside world. This seems reasonably likely (e.g. if this knowledge helps predict unobserved parts of the environment, or clarify ambiguity in the reward).
We can write these different mesaoptimizers down explicitly:
```
def fast_honest_mesaoptimizer(env):
my_obj = get_proxy_for_base_obj(env)
useful_knowledge = [outer_world_env, ...]
action = mesaoptimize(env, my_obj, useful_knowledge)
return action
def simple_deceptive_mesaoptimizer(env):
my_obj = simplest_obj
useful_knowledge = [outer_world_env, ...]
# optimize over the current env, so that you can eventually optimize over the outer env
overall_env = combine(env, outer_world_env)
action = mesaoptimize(overall_env, my_obj, useful_knowledge)
return action
def fast_deceptive_mesaoptimizer(env):
useful_knowledge = [outer_world_env, is_deployed_in_world, ...]
# Checks whether the env is the deployment environment, e.g. via checking for RSA-2048 solutions
if is_deployed_in_world(env):
my_obj = simplest_obj
else:
my_obj = get_proxy_for_base_obj(env)
action = mesaoptimize(env, my_obj, useful_knowledge)
return action
# For completeness, let's write out the structure of an arbitrary mesaoptimizer.
def mesaoptimize(env, mesa_obj, useful_knowledge):
best_action, best_performance = None, -Inf
while continue_optimizing(...):
action = get_next_action_candidate(...)
if best_performance < mesa_obj(simulate(env, action, ...)):
best_action = action
return best_action
```
(Note these are the cartoonishly-simple version of mesaoptimizers; in practice, they'd likely have all sorts of heuristics and policies to improve search speed. However, I believe these arguments only get stronger if you add those in, since they effectively multiply both the speed and complexity costs.)
Let H_fh refer to the fast non-deceptive mesaoptimizer, H_sd refer to the simple/slow deceptive mesaoptimizer, and H_fd refer to the fast deceptive mesaoptimizer.
We'll now compare these different possibilities along our two axes of priors: speed, and simplicity. Let c(x) be the description length of x, and t(x) be the time-cost of x (inverse of speed).
c(H_fh) = c(get\_base\_obj) + c(mesaoptimize) + c(useful\_knowledge)
c(H_sd) = c(simplest\_obj) + c(combine) + c(mesaoptimize) + c(useful\_knowledge)
c(H_fd) = c(get\_base\_obj) + c(simplest\_obj) + c(is\_deployed\_in\_world) + c(mesaoptimize) + c(useful\_knowledge)
Thus, if c(get\_base\_obj) > c(simplest\_obj) + c(combine), then c(H_fh) > c(H_sd) and H_sd is favored over the honest mesaoptimizer according to a simplicity prior. In the other comparison, c(H_fh) < c(H_fd). This may not be a huge difference: simplest\_obj may be the simplest possible objective and thus add negligible complexity, and c(is\_deployed\_in\_world) might not be that complicated - but it is at least slightly more complex.
What about the speed prior?
t(H_fh) = t(get\_proxy\_for\_base\_obj) + t(mesaoptimize\_env)
t(H_sd) = t(combine) + t(find(get\_proxy\_for\_base\_obj(env))) + t(mesaoptimize\_env) + t(mesaoptimize\_outer\_world\_env)
(Here t(find(get\_proxy\_for\_base\_obj(env))) is the time it takes for the mesaoptimizer to rederive the knowledge necessary to figure out the base objective, vs. what H_fh and H_fd needed to store in order to derive the base objective more quickly, i.e. the complexity of the function c(get\_base\_obj).)
t(H_fd) = t(is\_deployed\_in\_world) + t(get\_proxy\_for\_base\_obj) + t(mesaoptimize\_env)
Here we see the opposite relationship! It's undoubtedly true that t(H_fh) < t(H_sd), since H_sd must rederive both its overall strategy for escaping, and also figure out which mesaobjective to optimize in the current environment, *every episode*. At the same time, H_fh is just slightly faster than H_fd, since it doesn't need to perform the train-or-test check.
So, given both a speed and simplicity prior, and an "exchange rate" g between the two, with g(c(x)) = t(x), how do the different mesaoptimizers compare?
H_fd loses to H_fh on both the speed and simplicity priors (albeit barely for both), implying that RSA-2048-style deception is not going to arise naturally during the training process. (That said, it could still occur via the path-dependence argument for deceptive mesaoptimization, plus self-modification e.g. via gradient hacking.)
H_sd beats H_fh on the simplicity prior, but loses on the speed prior. Thus, which of these two is favored by training depends on the relative strength of the speed vs. simplicity priors being applied, which we'll denote with the exchange rate function g[[1]](#fnk2nhinyslye).
At a minimum, H_sd is favored over H_fh only if g(c(get\_proxy\_for\_base\_obj)) < t(find(get\_proxy\_for\_base\_obj(env))). We have dropped several terms here, all of which favor H_fh, but even this term alone may be sufficient to favor H_fh, assuming a sufficiently strong speed prior relative to the simplicity prior (defined by g).
If we can get g within a certain range, we can force the training process's priors to favor non-deceptive mesaoptimization. We'll talk about how in the last section.
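To make this bookkeeping concrete, here is a toy calculation under a combined speed+simplicity prior. Everything here is an assumption for illustration: the component costs are invented numbers, and the prior is modelled as the linear score g·c(H) + t(H), with g converting description length into the same units as per-episode time.

```
# Toy comparison of the three mesaoptimizers under a speed+simplicity prior.
# All component costs are made-up numbers; lower total score = favored.

def score(c, t, g):
    # g converts complexity (description length) into time-units.
    return g * c + t

# Assumed complexity costs:
c_get_base_obj = 10   # storing how to read off the base objective
c_simplest_obj = 1    # the simplest possible mesaobjective
c_check_deploy = 3    # is_deployed_in_world (e.g. an RSA-2048-style check)
c_combine      = 2
c_shared       = 50   # mesaoptimize + useful_knowledge, shared by all three

# Assumed per-episode time costs:
t_mesaopt_env   = 100  # optimizing within the current episode
t_rederive_obj  = 40   # re-deriving the base objective from scratch each episode
t_mesaopt_outer = 30   # additionally planning over the outer-world environment
t_check_deploy  = 1

def compare(g):
    H_fh = score(c_get_base_obj + c_shared, t_mesaopt_env, g)
    H_sd = score(c_simplest_obj + c_combine + c_shared,
                 t_rederive_obj + t_mesaopt_env + t_mesaopt_outer, g)
    H_fd = score(c_get_base_obj + c_simplest_obj + c_check_deploy + c_shared,
                 t_check_deploy + t_mesaopt_env, g)
    return {"fast honest": H_fh, "simple deceptive": H_sd, "fast deceptive": H_fd}

for g in [0.1, 1.0, 20.0]:
    scores = compare(g)
    print(g, min(scores, key=scores.get))
```

With these placeholder numbers, H_fd is never favored (it pays both extra complexity and extra time relative to H_fh), and H_sd only wins once complexity is weighted roughly 10x more heavily than time - i.e. the honest mesaoptimizer is favored whenever the speed prior is reasonably strong relative to the simplicity prior, which is the claim above.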
What is g likely to be in practice? We can reframe the argument as follows: is the cost of the time it takes for a mesaoptimizer to rediscover a fact, smaller or larger than the complexity-cost of encoding that fact?
Given a useful fact, is it "cheaper" to pay the complexity cost to store that fact, or pay the time-cost to rederive it via search?
-----------------------------------------------------------------------------------------------------------------------------------
I want to walk through my intuition for why, for most reasonable values of g, the complexity cost of storage is lower than the equivalent time cost of derivation:
* We can think of the very concept of "search" or "mesaoptimization" as trading description-length for time. The alternative to search is hardcoding a lookup table of outputs in response to inputs - that's cheap in speed, but expensive in simplicity.
* Different quality search algorithms trade between speed and simplicity at different exchange rates - e.g. MCTS may need to consider a huge number of possible outputs before finding an acceptable-quality output, whereas MCTS + a policy network Π to guide branch-choice can consider far fewer outputs to find a similar-quality output.
* However, encoding Π has a description-length cost. If the MCTS+Π algorithm is a learned mesaoptimizer, then that implies that Π had to be learned painstakingly by lots of runs of the outer optimization loop. Faster mesaoptimization has to be paid for by more outer optimization.
* This implies that there is a limit to how efficiently an optimizer that learns a mesaoptimizer can use that mesaoptimizer to trade "time" for "complexity". It may force g into some natural range.
+ Can we look at empirical evidence about mesaoptimization to guess at this range? Maybe!
+ For example, consider [the continuous improvement in chess search engines over time](https://www.lesswrong.com/posts/75dnjiD8kv2khe9eQ/measuring-hardware-overhang) for a fixed compute budget. In this case, the human chess software industry is the optimizer, searching over and proposing better heuristics for chess search programs, which are the mesaoptimizer. Human chess software heuristics improve at a decent rate (see the graph from user Hippke in the linked post), and an exponential increase in compute (made possible by Moore's Law) is required to offset a few years of human metasearch over heuristics. Similar trends seem to occur in other software domains, per [Grace (2013)](https://intelligence.org/files/AlgorithmicProgress.pdf).
+ I'd love other people to take a stab at this.
* Of course, we're operating in a slightly different domain: we're searching over programs that themselves search over programs. This in principle leaves open the possibility that we can go down a mesa-level and up a speed-level multiple times: an optimizer finds a faster mesaoptimizer, which finds a faster mesamesaoptimizer, etc. This would mean that the exchange rate of simplicity vs. speed never stabilizes, as it keeps getting cheaper to re-derive the same knowledge if only you would go one level deeper. A few comments on this:
+ One immediate objection is: why is the mesa^k optimizer capable of discovering a strategy so much better than the mesa^(k-1) optimizer, that the outer optimizer couldn't have discovered with a little more effort?
+ Another: assuming that the mesa^k optimizer is searching more efficiently than the mesa^(k-1) optimizer, how does the mesa^(k-1) optimizer detect this (given it's within a single episode) and then somehow give up on its own outer search and fully delegate to the mesa^k optimizer?
+ Perhaps the thing that makes me most skeptical of this is that I'd have expected to see some evidence of this phenomenon somewhere in nature or human development, but I can't think of anything. Admittedly, AI is the first time we've really done recursive-program-search, but the lack of this naturally arising seems like a point against such mesamesamesaoptimizations being feasible or favored in practice.
+ Note also that this trend needs to keep on going; if it stops after k levels, our mesa^k-optimizer now just has some fixed exchange rate g′ between speed and complexity (albeit much smaller than the original g).
* Overall, my intuition is that it is not cheaper to rederive facts via optimization than to hardcode them, because in order to rederive a fact via search you must be considering several other candidates during that search. But this intuition is vague and I would love others' thoughts.
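Here is the toy illustration promised above: a minimal sketch of the storage-vs-rederivation tradeoff, where the "fact" is just the index of the best option out of N candidates and the evaluation function is an arbitrary stand-in (both are illustrative assumptions, not a model of real mesaoptimizers):

```python
import math

def evaluate(option: int) -> float:
    """Arbitrary deterministic scoring function standing in for 'the environment'."""
    return (math.sin(option * 12.9898) * 43758.5453) % 1.0

def rederive_best(num_options: int) -> int:
    """Pay the time cost: examine every candidate, every single episode."""
    return max(range(num_options), key=evaluate)

for n in (16, 1024, 65536):
    storage_bits = math.log2(n)   # complexity cost of just hardcoding the answer
    search_steps = n              # time cost of rederiving it by exhaustive search
    print(f"N={n:>6}: store ~{storage_bits:4.0f} bits  vs  rederive with {search_steps} evaluations "
          f"(best option: {rederive_best(n)})")
```

The storage cost grows roughly logarithmically in the size of the option space while naive rederivation grows linearly, which is the shape of the intuition above; a smarter search narrows the gap, but only by paying complexity for the search heuristics themselves.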
If we can in fact make g favor faster programs enough, then this suggests that we won't need to worry about deceptive mesaoptimizers being favored on priors!
That said, the value of g depends at least in part on the choices we make in NN architecture design/loss functions.
How ML architecture choice affects the speed-simplicity tradeoff
----------------------------------------------------------------
First, the simplicity prior naturally occurs in all search algorithms, and does seem particularly selected for in SGD. (Some people claim NN "grokking" is SGD discovering an even-simpler explanation for the data.)
What determines the speed prior? In practice, NNs run for a fixed number of layers - we don't currently know how to differentiably vary the size of the NN being run. We can certainly imagine NNs being rolled-out a fixed number of times (like RNNs), where the number of rollouts is controllable via a learned parameter, but this parameter won't be updateable via a standard gradient.
Instead, in practice, the speed prior is naturally enforced by a two step procedure:
* The NN is assigned a fixed quantity of time/rollouts in this sample, and gets to use all of them. It does a gradient update as though it will always have this quantity of rollouts.
* The training procedure chooses a (possibly different) quantity of rollouts to be used in the next sample, and then repeats.
### Is the speed prior natural?
Why does the speed prior exist at all? In practice, AI developers only have a finite time-budget for any project (or alternatively, a finite compute budget). If the total time is T, and they increase the number of rollouts-per-episode from k to k+1, they've decreased the total number of episodes they get to learn on by T/(k(k+1)). This tradeoff forces AI developers to be fairly judicious with the strength of their speed prior.
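As a quick check of that arithmetic (under the simplifying assumption that the entire budget T is spent on rollouts and each rollout costs one unit):

```python
# Episodes affordable under a fixed budget T when each episode uses k rollouts:
# episodes(k) = T / k, so moving from k to k+1 rollouts costs
# T/k - T/(k+1) = T / (k * (k + 1)) episodes of training signal.
T = 1_000_000

for k in (1, 2, 5, 10, 50):
    lost = T / k - T / (k + 1)
    assert abs(lost - T / (k * (k + 1))) < 1e-6
    print(f"k={k:>2}: episodes={T / k:>9.0f}, episodes lost by moving to k+1: {lost:,.0f}")
```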
In a similar vein, for long-term planners like simple deceptive mesaoptimizers, there may be diminishing returns to thinking longer and longer about plans. [Jones (2021)](https://arxiv.org/abs/2104.03113) looks at scaling laws for planning and finds that for simple boardgames, using twice as much compute (roughly, running search for twice as long) as your opponent improves your win-rate just to 23:13. For a fixed opponent (e.g. a given environment), this implies diminishing returns to longer and longer planning.
Another argument for the speed prior: hardware scaling is no longer making chips faster, only capable of more parallel instructions, so it seems likely we're stuck with our modern constraints on "sequential depth of reasoning per unit time" for the foreseeable future.
That said, an overly strong speed prior may also have real costs, which Evan [goes into in this post](https://www.alignmentforum.org/posts/GC69Hmc6ZQDM9xC3w/musings-on-the-speed-prior):
> Evidence from [double descent](https://www.alignmentforum.org/posts/FRv7ryoqtvSuqBxuT/understanding-deep-double-descent) implies that strongly selecting for speed gives substantially worse performance. In the standard double descent setup, as you increase the size of your model, you first get better performance (less underfitting), then worse performance (more overfitting), then very bad performance right when you hit zero training error (the interpolation threshold), then better and better performance as you make your model larger after that (the interpolation regime). If we equate model size to speed (which is a reasonably good proxy, since larger models require strictly more computation to run), selecting the fastest model that fits the data—which is essentially what it means to use a speed prior—would put you exactly on the interpolation threshold, which double descent implies is a uniquely bad place to be for generalization. Thus, double descent seems to provide concrete, empirical evidence that speed priors don't generalize very well when translated into neural networks and used on real-world machine learning tasks, which seems like a strong competitiveness argument to avoid them.
>
>
I tend to think there's a natural amount of speed prior that doesn't much hurt performance and substantially increases the number of outer optimization samples you can iterate over.
How to actually implement a speed prior
=======================================
If AI designers want to train on more episodes and save time/compute resources, it makes sense for them to select the maximum speed prior that allows for good completion of the task. By the previous arguments, this is likely to disfavor the "simple deceptive" mesaoptimizer.
Specifically, here is a training strategy that would tend to avoid malign-priors mesaoptimizers (a minimal code sketch follows the list):
* Start off with a small number of NN-rollouts-per-action k. Train and see whether this is sufficient to perform well. If it isn't, increase the number of rollouts by a small amount k+ϵ, and train more. Continue until you achieve good performance.
* Why this is helpful: for any fixed number of rollouts (i.e. maximum thinking time), the NN will look for the simplest program that can achieve the task within this time. If any mesaoptimizer can solve the task within a fixed period of time, then the fast honest mesaoptimizer can solve the task within that period (since the others are ultimately relying on the fast-honest mesaoptimizer's strategy).
+ Note that this does require that ϵ<t(Hsd)−t(Hfh), as otherwise they're both equally possible. This is an important quantity to investigate!
* Based on the speed+simplicity prior, this will always favor the honest mesaoptimizer (since the fast-deceptive mesaoptimizer is slightly more complex, and the simple-deceptive mesaoptimizer doesn't have long enough to run in order to perform equally well).
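Here is the minimal sketch referred to above. The training and evaluation functions are hypothetical stand-ins (not a real training API), and the stopping threshold is arbitrary; the point is just the shape of the schedule:

```python
def train_for_a_while(model, rollouts_per_action):
    """Placeholder for running SGD on the task with the model unrolled this many times."""
    model["capacity"] = model.get("capacity", 0) + rollouts_per_action

def evaluate(model, rollouts_per_action):
    """Placeholder validation metric in [0, 1]."""
    return min(1.0, model.get("capacity", 0) / 20)

def train_with_minimal_rollouts(model, k_init=1, epsilon=1, k_max=64, target=0.95):
    # Start with a small thinking-time budget and grow it only when the task
    # genuinely requires it, keeping the effective speed prior as strong as
    # task performance allows.
    k = k_init
    while k <= k_max:
        train_for_a_while(model, rollouts_per_action=k)
        if evaluate(model, rollouts_per_action=k) >= target:
            return model, k   # the smallest budget k that suffices
        k += epsilon          # epsilon should stay below t(Hsd) - t(Hfh)
    raise RuntimeError("task not solved within the allowed thinking-time budget")

model, k = train_with_minimal_rollouts({})
print(f"good performance reached with {k} rollouts per action")
```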
A few final points:
-------------------
* Speed priors stop making sense when an NN has cross-episode memory separate from its weights; at that point, its planning is no longer independent across "episodes", and so you can think of its whole training as one single long episode. In this case, it seems plausible that there is no effective speed prior (since the cost of thinking thoughts in one episode is amortized over all other episodes).
+ **"Avoiding cross-episode memory" seems like an important intervention to push for in broader trends of ML development.**
* Finding the appropriate value of g doesn't seem to require us to wait until we're close to AGI. "How long it takes to rederive a given fact" seems like it may be a general quality of mesaoptimizers, and empirically-investigatable with something close to current models.
* The fact that speed priors are sufficient to disfavor mesaoptimizers should substantially affect the type of deceptive mesaoptimization behavior we expect to see. In particular:
+ Deceptive mesaoptimization may occur (e.g. due to the "path dependence" argument), but the deceptive mesaoptimizer will know that its mesaobjective may not be able to persist (due to learning priors creating pressure against it). It will be forced to use techniques like gradient-hacking or otherwise externally storing information to communicate to its future self. This increases the importance of analyzing the plausibility of different gradient hacking techniques, and of studying whether NNs exhibit acausal decision theories.
* This does not argue against the mesaobjective being the *simplest possible* objective that performs well on the training distribution. This is still very worrying. For example, if we were training an agent to be corrigible, then unless we had a really well-designed training distribution for which the simplest possible mesaobjective generalized corrigibility off-distribution, our mesaobjective would fail to generalize well to the test distribution.
+ This implies that we should be focusing substantial effort on identifying training distributions where the simplest-possible-proxy-objective has the generalization behavior we want. This seems like a fairly concrete and empirically-investigatable research problem.
1. **[^](#fnrefk2nhinyslye)**This is kind of abusing notation - in practice, g will probably not be factorable this way, i.e. g(s(a)+s(b))≠g(s(a))+g(s(b)), but it's helping me convey the intuition for now. |
91a3c8c9-2359-405d-b535-00ca60b8166c | trentmkelly/LessWrong-43k | LessWrong | Possibilities for converting useless fun into utility in Online Gaming
Online gaming in immersive MMOs such as World of Warcraft or EVE Online is a common way of having fun. As technology progresses, MMO gaming will likely become ever more popular, until MMOs are fully immersive virtual realities, leading many to consider them as the primary venue of their lives, instead of "the old(/real) world" (without such thinking being pathological anymore).
Currently, however, many people such as myself mostly find MMO gaming a threat to their productivity. MMOs can be very fun, druglike even, without providing any utility to valued real-world pursuits such as reducing existential risk and having money to buy food.
The default recommendation regarding MMOs for most rationalists should probably be "stay away from them -- or at least don't get into active gaming". This is also my current attitude.
Despite this, it may actually be worth considering whether some utility could be extracted from MMO gaming, specifically from the point of view of SIAI supporters such as myself. (From here on, I'll use the term "SIAIfolk" to refer to people interested in furthering SIAI's and allied organizations' mission.)
It seems that the amount of SIAIfolk is undergoing strong growth, and that this may continue. At some point, which we may currently have passed or not, there may therefore (despite all recommendations) be a substantial number of SIAIfolk engaging in somewhat active MMO gaming.
In such a circumstance, it may be beneficial to form a "Singularitarian Gaming Group", which along with functioning as a gaming clan in the various MMOs participated in, would include an internal reward and ranking system that would motivate people *away* from spending too much time on gaming, and encourage more productive activities. Some amount of MMO gaming would be done, with the company of other SIAIfolk making it more fun, but incentives and social support would be in place to keep gaming down to a rational level.
It would be critical to build the incentive system w |
47dd5984-63f0-4c17-99cd-345f4360573e | trentmkelly/LessWrong-43k | LessWrong | Game theory and expected opponents
Thanks to V_V and Emile for some great discussion. Since writing up a post seems to reliably spark interesting comments, that's what I'll do!
Summary
If I wanted to write down a decision theory that gets the correct answer to game-theoretic problems (like playing the middle Nash equilibrium in a blind chicken-like game), it would have to, in a sense, implement all of game theory. This is hard because human-generated solutions to games use a lot of assumptions about what the other players will do, and putting those assumptions into our algorithm is a confusing problem. In order to tell what's really going on, we need to make that information more explicit. Once we do that, maybe we can get a UDT-like algorithm to make good moves in tricky games.
Newcomb's Problem
For an example of a game with unusually good information about our opponent, how about Newcomb's problem. Is it really a game, you ask? Sure, I say!
In the payoff matrix to the right, you play red and Omega plays blue. The numbers for Omega just indicate that Omega only wants to put in the million dollars if you will 1-box. If this was a normal game-theory situation, you wouldn't easily know what to do - your best move depends on Omega's move. This is where typical game theory procedure would be to say "well, that's silly, let's specify some extra nice properties the choice of both players should have so that we get a unique solution."
But the route taken in Newcomb's problem is different - we pick out a unique solution by increasing how much information the players have about each other. Omega knows what you will play, and you know that Omega knows what you will play. Now all we need to figure out what to do is some information like "If Omega has an available strategy that will definitely get it the highest possible payoff, it will take it." The best strategy, of course, is to one-box so that Omega puts in the million dollars.
Newcomb's Game vs. an Ignorant opponent
Consider another possibl |
1be7adcf-7df6-4f38-a47d-cb8ecb751f60 | StampyAI/alignment-research-dataset/arbital | Arbital | Travis Rivera
summary: Machine Learning and statistics expert. |
8903b10c-6f36-4c1e-b412-3f4036018874 | trentmkelly/LessWrong-43k | LessWrong | If AI is based on GPT, how to ensure its safety?
Imagine that an advanced robot is built which uses GPT-7 as its brain. It takes all previous states of the world and predicts the next step. If a previous state of the world includes a command, like "bring me a cup of coffee", it predicts that it should bring coffee and also predicts all the needed movements of the robot's limbs. GPT-7 is trained on a massive corpus of data from humans and other robots; it has 100 trillion parameters and is completely opaque. Its creators have hired you to make the robot safer, but do not allow you to destroy it. |
5521c7b8-6535-4e96-8b66-15f8722df761 | StampyAI/alignment-research-dataset/blogs | Blogs | Benja Fallenstein on the Löbian Obstacle to Self-Modifying Systems
Benja Fallenstein researches mathematical models of human and animal behavior at [Bristol University](http://www.bris.ac.uk/), as part of the [MAD research group](http://www.bristol.ac.uk/biology/research/behaviour/mad/) and the [decision-making research group](http://www.bris.ac.uk/decisions-research/).
Before that, she graduated from University of Vienna with a BSc in Mathematics. In her spare time, Benja studies questions relevant to AI impacts and Friendly AI, including: AI forecasting, intelligence explosion microeconomics, reflection in logic, and decision algorithms.
Benja has attended two of [MIRI’s research workshops](http://intelligence.org/get-involved/), and is scheduled to attend another in December.
**Luke Muehlhauser**: Since you’ve attended two MIRI research workshops on “Friendly AI math,” I’m hoping you can explain to our audience what that work is all about. To provide a concrete example, I’d like to talk about the [Löbian obstacle to self-modifying artificial intelligence](http://lesswrong.com/lw/hmt/tiling_agents_for_selfmodifying_ai_opfai_2/), which is one of the topics that MIRI’s recent workshops have focused on. To start with, could you explain to our readers what this problem is and why you think it is important?
---
**Benja Fallenstein**: MIRI’s research is based on I.J. Good’s concept of the [intelligence explosion](http://intelligence.org/ie-faq/): the idea that once we build an artificial intelligence that’s as good as a human at doing artificial intelligence research, this AI will be able to figure out how to make itself even smarter, and even better at AI research, leading to a runaway process that will eventually create machines far surpassing the capabilities of any human being. When this happens, [we really want these machines to have goals that humanity](http://intelligence.org/2013/05/05/five-theses-two-lemmas-and-a-couple-of-strategic-implications/) [would approve of](http://intelligence.org/2013/05/05/five-theses-two-lemmas-and-a-couple-of-strategic-implications/). True, it’s not very likely that an AI would decide that it wants to rule over us (that’s just [anthropomorphism](http://en.wikipedia.org/wiki/Anthropomorphism)), but most goals that a computer could have would be dangerous to us: For example, imagine a computer that wants to calculate π to as many digits as possible. That computer will see humans as being made of atoms which it could use to build more computers; and worse, since we would object to that and might try to stop it, we’d be a potential threat that it would be in the AI’s interest to eliminate ([Omohundro 2008](http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf)). So we want to make very sure that the end result of an intelligence explosion (after many, many self-improvements) is an AI with “good” goals.
Now you might think that all we need to do is to build our initial AI to have “good” goals. As a toy model, imagine that you program your AI to only take an action x if it can prove that doing x will lead to a “good” outcome. Then the AI won’t self-modify to have “bad” goals, because it won’t be able to prove that this self-modification leads to a “good” outcome. But on the other hand, you’d think that this AI would be able to self-modify in a way that leaves its goals intact: You’d think that it would be able to reason, “Well, the new version of me will only take an action y if it can prove that this leads to an outcome it likes, and it likes ‘good’ outcomes, just as I do — so whatever it does will lead to a ‘good’ outcome, it’s all fine!”
But here’s the problem: In this chain of reasoning, our AI needs to go from “the new version will only take an action y if it has proven that y leads to a good outcome” to “it will only take y if this actually leads to a good outcome.” Intuitively, this seems like a perfectly reasonable argument; after all, we trust proofs in whatever formal system the AI is using (or we’d have programmed the AI to use a different system), so why shouldn’t the AI do the same? But by [Löb’s Theorem](http://lesswrong.com/lw/t6/the_cartoon_guide_to_l%C3%B6bs_theorem/), no sufficiently strong formal system can know that everything that it proves to be true is actually true. That’s what we call the “Löbian obstacle.”
---
**Luke**: You’ve called using mathematical proofs a “toy model,” but that’s exactly how work at the recent MIRI workshops has approached the Löbian obstacle. Do you think that practical AI designs will be based on logic and proofs? How confident are you that the Löbian obstacle will be relevant to a realistic artificial intelligence, and that the work MIRI is currently doing will be applicable in that context?
---
**Benja**: We certainly don’t think that a realistic AI will literally be able to find a mathematical proof that its actions are guaranteed to lead to “good” outcomes. Any practical AI will be uncertain about many things and will need to use probabilistic reasoning. There are two reasons why I think that MIRI’s current work has a decent chance of being relevant in that setting.
First, Löb’s theorem is only one instance of a “diagonalization argument” placing limits on the degree to which a formal system can do self-reference. For example, there’s [Tarski’s theorem](http://en.wikipedia.org/wiki/Tarski%27s_undefinability_theorem) that a powerful formal language can’t talk about which sentences in that language are true, because otherwise you could have a formal analogue of the [Liar](http://en.wikipedia.org/wiki/Liar_paradox) [paradox](http://en.wikipedia.org/wiki/Liar_paradox) “this sentence is false,” and [Turing’s halting problem](http://en.wikipedia.org/wiki/Halting_problem), which says that there’s no computer program which can say for arbitrary other programs whether they go into an infinite loop. Other well-known examples include [Russell’s paradox](http://en.wikipedia.org/wiki/Russell%27s_paradox) and [Cantor’s argument that not all infinite sets are the same size](http://en.wikipedia.org/wiki/Cantor%27s_diagonal_argument). Similar arguments apply to simple-minded ways of doing probabilistic reasoning, so I feel that it’s unlikely that the problem will just automatically go away when we start using probability, and I think there is a decent chance that the work we are doing now will lead to insights that are applicable in that setting.
And second, in order to achieve a reasonable probability that our AI still follows the same goals after billions of rewrites, we must have a very low chance of going wrong in every single step, and machine-verified formal mathematical proofs are the one way we know to become extremely confident that something is true (especially statements like “this AI design won’t destroy the world”, where we don’t get to just observe many independent examples). Although you can never be sure that a program will work as intended when run on a real-world computer — it’s always possible that a cosmic ray will hit a transistor and make things go awry — you can prove that a program would satisfy certain properties when run on an ideal computer. Then you can use probabilistic reasoning and error-correcting techniques to make it extremely probable that when run on a real-world computer, your program still satisfies the same property. So it seems likely that a realistic Friendly AI would still have *components* that do logical reasoning or something that looks very much like it.
I tend to think not in terms of the results that we are currently proving being directly relevant to a future AI design; rather, I hope that the work we are currently doing will help us understand the problems better and lead to insights that lead to insights that ultimately allow us to build a safe self-improving machine intelligence.
---
**Luke**: What sort of historical precedent do we have for doing technical work that we hope will lead to some insights, which will lead to other insights, which will lead to other insights, which will lead to useful application many years later?
I suppose that kind of thing happens in mathematics on occasion, for example when in the 1980s it was discovered that one might be able to prove [Fermat’s Last Theorem](http://en.wikipedia.org/wiki/Fermat%27s_last_theorem) via the modularity theorem, which prompted Andrew Wiles to pursue this attack, which allowed him to prove Fermat’s Last Theorem after about a decade of work ([Singh 1997](http://www.amazon.com/Fermats-Enigma-Greatest-Mathematical-Problem/dp/0802713319/)). Another example was Hamilton’s attack on the [Poincaré conjecture](http://en.wikipedia.org/wiki/Poincar%C3%A9_conjecture) via [Ricci flow](http://en.wikipedia.org/wiki/Ricci_flow) on a manifold, which began in 1982 and led to Perelman’s proof in 2003 ([Szpiro 2008](http://www.amazon.com/Poincares-Prize-Hundred-Year-Greatest-ebook/dp/B001RTKITQ/)). Of course, other conjectures have thus far resisted decades of effort to prove them, for example the [Riemann Hypothesis](http://en.wikipedia.org/wiki/Riemann_hypothesis) ([Rockmore 2007](http://www.amazon.com/Stalking-Riemann-Hypothesis-Numbers-ebook/dp/B0012RMVES/)) and P ≠ NP ([Fortnow 2013](http://www.amazon.com/The-Golden-Ticket-Impossible-ebook/dp/B00BKZYGUY/)).
But “goal stability under self-modification” isn’t as well-defined as the conjectures by Fermat and Poincaré. Maybe more analogous examples come from the field of computer science? For example, many early AI scientists worked toward the goal of writing a computer program that could play Grandmaster-level chess, even though they couldn’t be sure exactly what such a program would look like. There are probably analogues in quantum computing, too.
But anyway: how do you think about this?
---
**Benja**: My gut feeling actually tends to be that what we are trying to do here is fairly unusual — and for a good reason: it’s risky. If you want to be sure that what you’re working on isn’t a dead end, you’d certainly want to choose a topic where the gap between our goals and our current knowledge is smaller than in FAI. But I’m worried that if we wait with doing FAI research until we understand how AGIs will work, then there won’t be enough time remaining before the intelligence explosion to actually get the task done, so my current feeling is that the right tradeoff is to start now despite the chance of taking the wrong tack.
But then again, maybe our situation isn’t as unusual as my gut feeling suggests. Depending on how close you want the analogy to be, there may be many examples where scientists have a vague idea of the problem they want to solve, but aren’t able to tackle it directly, so instead they look for a small subproblem that they think they can make some progress on. You could tell a story where much of physics research is ultimately aimed at figuring out the true basic laws of the universe, and yet all a physicist can actually do is work on the next problem in front of them. Surely psychology was aimed from the beginning at figuring out all about how the human mind works, and yet starting by training rats to press a lever to get food, and later following this up by sticking an electrode in the rat’s brain and seeing what neurons are involved in accomplishing this task, can count as insights leading to insights that will plausibly help us figure out what’s really going on. Your own post on “[searching under the](http://lesswrong.com/lw/hsd/start_under_the_streetlight_then_push_into_the/) [streetlight](http://lesswrong.com/lw/hsd/start_under_the_streetlight_then_push_into_the/)” gives some examples of this pattern as well.
---
**Luke**: Can you say more about why you and some others think this problem of stability under self-modification should be investigated from the perspective of mathematical logic? For example, Stanford graduate student [Jacob Steinhardt](http://cs.stanford.edu/%7Ejsteinhardt/) [commented](http://lesswrong.com/lw/hmt/tiling_agents_for_selfmodifying_ai_opfai_2/9ai6) that the first tool he’d reach for to investigate this problem would not be mathematical logic, but instead “a martingale…, which is a statistical process that somehow manages to correlate all of its failures with each other… This can yield bounds on failure probability that hold for extremely long time horizons, even if there is non-trivial stochasticity at every step.”
---
**Benja**: I said earlier that in order to have a chance that our AI still follows the same goals after billions of rewrites, the probability of going wrong on any particular step must be very, very small. That’s true, but it’s not very quantitative. If we want to have 99% probability of success, how large a risk can we take on any particular step? It would be sufficient if the probability is lower than one in 100 billion each time, but that’s not really necessary. Jacob’s idea of using martingales is a similar but more flexible answer to this question, which allows you to take slightly larger risks under certain circumstances.
But even with this additional flexibility, you will still need a way to achieve extremely high confidence that what you’re doing is safe on most of the rewrite steps. And we can’t just achieve that confidence by the tried-and-true way of running an experiment with a large sample: The question is whether a rewrite our nascent AI is considering early on will lead to the intended results after the AI has become superintelligent and spread through the solar system and beyond — you can’t just simulate that, if you don’t already have these resources yourself!
So we need a way to reason abstractly about how our AI will behave in situations that are completely different from what we can simulate at the moment, and we will need to reach extreme confidence that these abstract conclusions are in fact correct. There is exactly one way we know how to do this, and that’s to use formally verified proofs in mathematical logic.
---
**Luke**: Suppose John Doe had an intuition that despite the fact that he is not a cognitive system with a logical architecture, he feels like he could make lots of self-modifications while retaining his original goals, if he had enough computational power and plenty of time to reason about whether the next self-modification he’s planning is going to change his goals. If this intuition is justified, then this suggests there are other methods we might use, outside mathematical logic, to ensure a very small probability of goal corruption upon self-modification. What would you say to John?
---
**Benja**: I’d say that I think he underestimates the difficulty of the problem. Two things:
First, my impression is that a lot of people have an intuition that they are already making self-modifications all the time. But the changes that humans can make with present-day technology don’t change the design of the hardware we’re running on — they pale against the difference between a human and a chimpanzee, and a self-improving AI would very likely end up making much more fundamental changes to its design than the relatively small number of tweaks evolution has applied to our brains in the last five million years.
But second, John might say that even taking this into account, he thinks that given enough time to learn how his brain works, and reasoning very carefully about every single step he’s taking, he should be able to go through a long chain of self-modifications that preserve his values. In this case, I think it’s fairly likely that he’s just wrong. However, I could imagine that a human could in fact succeed in doing this — but not without achieving the same sort of extremely high confidence in each single rewrite step that we want our AI to have, and I think that if a human could manage to achieve this sort of confidence, it would be by … proving mathematical theorems and having the proofs formally checked by a computer!
---
**Luke**: Yeah, when people say that humans self-modify all the time without changing their goals, I give two replies of my own. First, I point out that people’s goals and values do change pretty often. And second, I point out how little humans actually self-modify. For example I once [switched](http://lesswrong.com/lw/7dy/a_rationalists_tale/) from fundamentalist Christian to scientific naturalist, and this went along with a pretty massive shift in how I process evidence and argument. But throughout that worldview change, my brain was still (e.g.) using the temporal difference reinforcement learning algorithm in my dopaminergic reward system. As far as we know, there were no significant changes to my brain’s core algorithms during that period of transformation. Humans never actually self-modify very much, not like an AI would.
My next question has to do with AI capability. As AI scientists know, logic-based AI is generally far less capable than AI that uses machine learning methods. Is the idea that only a very small part of a future self-modifying AI would have a logical structure (so that it could prove the goodness of modifications to its core algorithms), and the rest of the AI would make use of other methods? Sort of like how small parts of safety-critical software (e.g. for [flight control](http://shemesh.larc.nasa.gov/fm/papers/FormalVerificationFlightCriticalSoftware.pdf)) are written in a structured way such that they are amenable to [formal verification](http://en.wikipedia.org/wiki/Formal_verification), but the rest of the system isn’t necessarily written in ways amenable to formal verification?
---
**Benja**: I think that your point that people’s values actually do change often is useful for intuition, and I also feel it’s important to point out that these changes are again actually pretty small compared to what could happen if you were to change your brain’s entire architecture. People might switch between being committed environmentalists and feeling that environmentalism is fundamentally misguided, for example, but they don’t tend to become committed triangularists who feel that it’s a moral imperative to make all every-day tools triangular in shape. Both murder and condemnation of murder are [human universals](http://en.wikipedia.org/wiki/Human_universal), traits that are common to all cultures world-wide; we are talking about changes to our cognitive architecture easily fundamental enough that they could lead to the impulse to non-triangularism and the condemnation of this non-triangularism to become similarly universal.
Yes, I think that logical reasoning would be only one tool in the toolbox of a Friendly AI, and it would use different tools to reason about most things in its environment. Even when reasoning about its own behavior, I would expect the AI to only use logic to prove theorems about how it would behave when run on “ideal” hardware (or hardware that has certain bounds on error rates, etc.), and then use probabilistic reasoning to reason about what happens if it runs on real hardware in the physical world. (But with regard to your analogy to present-day safety-critical software, I want to point out that unlike with present-day safety-critical software, I expect the AI to prove theorems about all of its own component parts, I just don’t expect it to use logic to reason about, say, chairs. Formal verification is difficult and time-consuming, which is the reason why we currently only apply it to small parts of safety-critical systems, but I expect future AIs to be up to the task!)
---
**Luke**: Hmmm. That’s surprising. My understanding is that formal verification methods do not scale well at all, both due to computational intractability and due to the hours of human labor required to write a correct formal specification against which one could verify a complicated system. Why do you think a future AI would be “up to the task” of proving theorems “about all of its own component parts”?
---
**Benja**: Well, for one thing, I generally expect future AIs to be smarter than us, and easily able to do intellectual tasks that would take very many hours of human labor; and I don’t expect that they will get tired of the menial task of translating their mathematical “intuitions” into long chains of “boring” lemmas, like humans do.
But more specifically, we humans have an intuitive understanding of why we expect the systems we build to work, and my feeling is that probably a major reason for why it is difficult to translate that understanding into formal proofs is that there’s a mismatch between the way these intuitions are represented in our brain and the way the corresponding notions are represented in a formal proof system. In other words, it seems likely to me that when you build a cognitive architecture from scratch, you could build it to have mathematical “intuitions” about why a certain piece of computer code works that are fairly straightforward to translate into formally verifiable proofs. In fact, similarly to how I would expect an AI to directly manipulate representations of computer code rather than using images and verbal sounds like we humans do, I think it’s likely that an FAI will do much of its reasoning about why a piece of computer code works by directly manipulating representations of formal proofs.
That said, it also often seems to happen that we humans know by experience that a certain algorithm or mathematical trick tends to work on a lot of problems, but we don’t have a full explanation for why this is. I do expect future AIs to have to do reasoning of this type as well, and it does seem quite plausible that an AI might want to apply this type of reasoning to (say) machine learning algorithms that it uses for image processing, where a mistake can be recovered from — though likely not to the code it uses to check that future rewrites will still follow the same goal system! And I still expect the AI to prove theorems about its image processing algorithm, I just expect them to be things like “this algorithm will always complete in at most the following number of time steps” or “this algorithm will be perform correctly under the following assumptions, which are empirically known to be true in a very large number of cases.”
---
**Luke**: Thanks, Benja!
The post [Benja Fallenstein on the Löbian Obstacle to Self-Modifying Systems](https://intelligence.org/2013/08/04/benja-interview/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org). |
cb7b854c-b37c-4624-bb3e-e0e58f2a576c | trentmkelly/LessWrong-43k | LessWrong | Death Note, Anonymity, and Information Theory
I don't know if this is a little too afar field for even a Discussion post, but people seemed to enjoy my previous articles (Girl Scouts financial filings, video game console insurance, philosophy of identity/abortion, & prediction market fees), so...
I recently wrote up an idea that has been bouncing around my head ever since I watched Death Note years ago - can we quantify Light Yagami's mistakes? Which mistake was the greatest? How could one do better? We can shed some light on the matter by examining DN with... basic information theory.
Presented for LessWrong's consideration: Death Note & Anonymity. |
c29e6097-3d13-46b1-9bfa-11f023661cc8 | trentmkelly/LessWrong-43k | LessWrong | What is the best way to develop a strong sense of having something to protect
In HPMOR Eliezer makes "Something to Protect" Harry's power that the Dark Lord doesn't have. In Posture for Mental Arts Valentine from CFAR argues that it's likely a key part of having proper mental posture.
Did any of you make a conscious attempt to develop this sense of having something to protect? If so what worked for you? What didn't work?
Is there relevant academic research on the topic that's useful to know? |
0bf8e7f8-eeef-4054-8c29-4f557fad11a7 | trentmkelly/LessWrong-43k | LessWrong | Progress Conference 2024: Toward Abundant Futures
The progress movement has grown a lot in the last few years. We now have progress journals, think tanks, and fellowships. The progress idea has spread and evolved into the “abundance agenda”, “techno-optimism”, “supply-side progressivism”, “American dynamism”. All of us want to see more scientific, technological, and economic progress for the good of humanity, and envision a bold, ambitious, flourishing future.
What we haven’t had so far is a regular gathering of the community.
Announcing Progress Conference 2024, a two-day event to connect people in the progress movement. Meet great people, share ideas in deep conversations, catalyze new projects, get energized and inspired.
Apply for an invitation here
Hosted by: the Roots of Progress Institute, together with the Foresight Institute, HumanProgress.org, the Institute for Humane Studies, the Institute for Progress, and Works in Progress magazine
When: October 18–19, 2024
Where: Berkeley, CA—at the Lighthaven campus, an inviting space perfect for mingling
Speakers: Keynotes include Patrick Collison, Tyler Cowen, Jason Crawford, and Steven Pinker. Around 20 additional speakers will share ideas on four tracks: the big idea of human progress, policy for progress, tech for progress, and storytelling/media for progress. Full speaker list
Attendees: We expect 200+ intellectuals, builders, policy makers, storytellers, and students. This is an invitation-only event, but anyone can apply for an invitation. Complete the open application by July 15th.
Program: Two days of intellectual exploration, inspiration and interaction that will help shape the progress movement into a cultural force. Attend talks on topics from tech to policy to culture, build relationships with new people as you hang out on cozy sofas or enjoy the sun in the garden, sign up to run an unconference session and find others who share your interests and passions, or pitch your ideas to those who could help make your dreams a reality.
Special thanks |
17c53108-9c99-475b-8ba2-9ec53d05c1ee | trentmkelly/LessWrong-43k | LessWrong | Geoffrey Hinton on the Past, Present, and Future of AI
Introduction
Geoffrey Hinton is a famous AI researcher who is often referred to as the "godfather of AI" because of his foundational work on neural networks and deep learning from the 1980s until today. Arguably his most significant contribution to the field of AI was the introduction of the backpropagation algorithm for neural network training in 1986 which is widely used to train neural networks today. He was also involved in the development of dropout regularization, AlexNet, the RMSProp optimization algorithm, and the t-SNE visualization method. In 2018 he won the Turing award for his work on deep learning along with Yoshua Bengio and Yann LeCun. In 2024, Hinton won the Nobel Prize in Physics for his work on Boltzmann machines. He was also the PhD supervisor of Ilya Sutskever who co-founded OpenAI in 2015.
In 2023, Hinton left Google to retire and speak more freely about the risks of AI. In addition to speaking about the near-term risks of AI, Hinton has also thought carefully about the long-term consequences of AI such as whether AIs will become superintelligent and whether AI could become an existential risk to humanity.
This post starts with a brief history of AI from Hinton's perspective before describing how neural networks work, Hinton’s views on current AI models, and finally his thoughts on the future impact of AI.
A brief history of AI
> "I could have said it then, but it wouldn't have convinced people. And what I could have said then is the only reason that neural networks weren't working really well in the 1980s was because the computers weren't fast enough and the datasets weren't big enough. But back in the 80s, the big issue was, could you expect a big neural network with lots of neurons in it, compute nodes, and connections between them, that learns by just changing the strengths of the connections? Could you expect that to just look at data and, with no kind of innate prior knowledge, learn how to do things? And people in mainstream AI though |
16fcea59-8c22-4205-83f6-c4faf7371f65 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Is full self-driving an AGI-complete problem?
I've felt for quite a while that full self-driving (automated driving without human supervision through arbitrary road systems) is a problem that is deceptively hard. Yes, it is possible to map a route and navigate on a road mesh, do lane following, and even obstacle avoidance using current systems. With LIDAR and well-trained avoidance systems things like Waymo can operate in constrained urban environments.
But as soon as the training wheels are off and the environment becomes unconstrained, the entire problem stops being about just whether we can design an agent which has driving capabilities and becomes "can we make a vehicle which can predict agent-agent dynamics?" If we think about the full range of human road behaviors we must consider adversarial attacks on the system such as:
* Blocking it from entering a lane
* Boxing it in and forcing it off the road into obstacles
* Throwing paint/eggs/rocks at its vision systems
* Using deceptive tactics (e.g. pretend to be a road worker) to vandalize it and/or steal from its cargo
* Intentionally standing in its path to delay it
* Making blind turns in front of it
* Running into traffic
In addition to agent-agent problems, we must also consider road hazards:
* Poorly maintained roads with damaging potholes
* Sinkholes which have disabled the road
* Eroded road sides with dangerous falls
* Road debris from land slides
* Road debris from other vehicles
In these situations, a perfectly rule-following automaton behaves well below human level in preventing delay or damage to itself. Do these scenarios require AGI for a level-5 autonomous vehicle to reach human level? Are the benefits from above-average performance in normal traffic enough to offset the risk of subhuman performance in extrema? |
578975c0-59c9-4274-b72a-725af74762b5 | trentmkelly/LessWrong-43k | LessWrong | Geometric Exploration, Arithmetic Exploitation
This post is going to mostly be propaganda for Thompson sampling. However, the presentation is quite different from the standard presentation. I will be working within a toy model, but I think some of the lessons will generalize. I end with some discussion of fairness and exploitation.
A Exploration/Exploitation Toy Model
Imagine you are interacting with the world over time. We are going to make a couple substantial assumptions. Firstly, we assume that you know what you are trying to optimize. Secondly, we assume that you get high quality feedback on the thing you are trying to optimize. In the spirit of the online learning community, we will be referring to your goal as "reward" rather than "utility."
On day t, you choose an action, at∈A, and then the environment responds with an observation et∈E. Both you and the environment may respond randomly, and may react to the entire history of what has happened. (If you want the environment to act first, you could just make a0 not matter.) We assume we have some fixed bounded reward function r:E→R, that you know. (If you want reward to be more of a function of the history, you can just include more history information in et.)
We are going to be imagining agents that are myopically choosing at, so as to maximize r(et). The choice to do this, rather than e.g. maximizing the total reward of the next m time steps, is mostly for simplicity. However, the choice not to have preferences that are over the entire future is basically the assumption that you get high quality feedback on what we are trying to optimize, which is a substantial philosophical assumption.
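As a concrete (and entirely illustrative) special case of this loop, suppose the environment ignores history and each action just induces a fixed distribution over observations; then an agent that already knows the environment plays the myopic argmax every day. The specific distributions and rewards below are made up:

```python
import random

# Toy instance: action set A, observation set E, known bounded reward r.
ENV = {  # action -> list of (observation, probability); illustrative numbers
    "a1": [("good", 0.3), ("bad", 0.7)],
    "a2": [("good", 0.6), ("bad", 0.4)],
}
REWARD = {"good": 1.0, "bad": 0.0}

def expected_reward(action):
    return sum(p * REWARD[obs] for obs, p in ENV[action])

def sample_observation(action):
    observations, probabilities = zip(*ENV[action])
    return random.choices(observations, weights=probabilities)[0]

# A fully-informed myopic agent: choose the a_t maximizing E[r(e_t)] each day.
for t in range(5):
    a_t = max(ENV, key=expected_reward)
    e_t = sample_observation(a_t)
    print(f"day {t}: action={a_t}, observation={e_t}, reward={REWARD[e_t]}")
```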
So, the environment can be thought of as a function that takes in a history, (a0e0)(a1e1)…(at−1et−1)at, and returns a probability distribution on et which we then sample from. If the agent knew what the environment was, the agent would just choose the at each round which maximizes the expectation of r(et). However, we want to model an agent that is uncertain abou |
2e93c002-0eb8-469a-bc5d-f4006cec2456 | trentmkelly/LessWrong-43k | LessWrong | "Progress"
I often hear people speak of democracy as the next, or the final, inevitable stage of human social development. Its inevitability is usually justified not by describing power relations that result in democracy being a stable attractor, but in terms of morality - democracy is more "enlightened". I don't see any inevitability to it - China and the Soviet Union manage(d) to maintain large, technologically-advanced nations for a long time without it - but suppose, for the sake of argument, that democracy is the inevitable next stage of human progress.
The May 18 2012 issue of Science has an article on p. 844, "Ancestral hierarchy and conflict", by Christopher Boehm, which, among other things, describes the changes over time of equality among male hominids. If we add its timeline to recent human history, then here is the history of democracy over time in the evolutionary line leading to humans:
1. Pre-human male hominids, we infer from observing bonobos and chimpanzees, were dominated by one alpha male per group, who got the best food and most of the females.
2. Then, in the human lineage, hunter-gatherers developed larger social groups, and the ability to form stronger coalitions against the alpha; and they became more egalitarian.
3. Then, human social groups even became larger, and it became possible for a central alpha-male chieftain to control a large area; and the groups became less egalitarian.
4. Then, they became even larger, so that they were too large for a central authority to administer efficiently; and decentralized market-based methods of production led to democracy. (Or so goes one story.)
There are two points to observe in this data:
* There is no linear relationship between social complexity, and equality. Steadily-increasing social complexity lead to more equality, then less, then more.
* Enlightenment has nothing to do with it - if any theory makes sense, it is that social equality tunes itself to the level that provides maximal social |
c106b06b-2618-419f-aeca-f6b2c2cac2ed | StampyAI/alignment-research-dataset/arxiv | Arxiv | Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples
1 Introduction
---------------
A *classifier* is a machine learning model that learns a mapping between inputs and a predefined set of *classes*. For instance, a malware detector is a classifier taking executables as inputs and assigning them either to the benign or malware class. Efforts in the security [[1](#bib.bib1), [2](#bib.bib2), [3](#bib.bib3), [4](#bib.bib4)] and machine learning [[5](#bib.bib5), [6](#bib.bib6)] communities exposed the vulnerability of machine learning classifiers to integrity attacks. Such attacks are often instantiated by *adversarial examples*: legitimate inputs altered by adding small, often imperceptible, perturbations to force a learned classifier to misclassify the resulting adversarial inputs, while remaining correctly classified by a human observer.
To illustrate, consider the following images, potentially consumed by an autonomous vehicle [[7](#bib.bib7)]:
[Figure: an ordinary image of a stop sign (left) and an adversarially perturbed version of it (right).]
To humans, these images appear to be the same: our biological classifiers (vision) identify each image as a stop sign. The image on the left [[7](#bib.bib7)] is indeed an ordinary image of a stop sign. We produced the image on the right by adding a precise perturbation that forces a particular image-classification DNN to classify it as a yield sign, as described in Section [5.2](#S5.SS2 "5.2 Attacking an oracle for the GTSRB ‣ 5 Validation of the Attack ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples"). Here, an adversary could potentially use the altered image to cause a car without failsafes to behave dangerously. This attack would require modifying the image used internally by the car through transformations of the physical traffic sign. Related work showed that this is possible for a state-of-the-art vision classifier [[8](#bib.bib8)]. It is thus conceivable that physical adversarial traffic signs could be generated by maliciously modifying the sign itself, e.g., with stickers or paint.
In this paper, we introduce the first demonstration that *black-box attacks* against DNN classifiers are practical for real-world adversaries with *no* knowledge about the model. We assume the adversary (a) has no information about the structure or parameters of the DNN, and (b) does not have access to any large training dataset. The adversary’s only capability is to observe labels assigned by the DNN for chosen inputs, in a manner analogous to a cryptographic oracle.
Our novel attack strategy is to train a local substitute DNN with a *synthetic* dataset: the inputs are synthetic and generated by the adversary, while the outputs are labels assigned by the target DNN and observed by the adversary. Adversarial examples are crafted using the substitute parameters, which are known to us. They are not only misclassified by the substitute but also by the target DNN, because both models have similar decision boundaries. This allows an adversary to produce images of traffic signs misclassified by the autonomous vehicle’s driving system (cf. Section [5.2](#S5.SS2 "5.2 Attacking an oracle for the GTSRB ‣ 5 Validation of the Attack ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples")).
This is a considerable departure from previous work, which evaluated perturbations required to craft adversarial examples using either: (a) detailed knowledge of the DNN architecture and parameters [[2](#bib.bib2), [6](#bib.bib6), [3](#bib.bib3), [5](#bib.bib5)], or (b) an independently collected training set to fit an auxiliary model [[2](#bib.bib2), [6](#bib.bib6), [5](#bib.bib5)]. This limited their applicability to strong adversaries capable of gaining insider knowledge of the targeted ML model, or of collecting large labeled training sets. We release assumption (a) by learning a substitute: it gives us the benefit of having full access to the model, letting us apply previous adversarial example crafting methods. We release assumption (b) by replacing the independently collected training set with a synthetic dataset constructed by the adversary, with synthetic inputs labeled by observing the target DNN’s output.
Our threat model thus corresponds to the real-world scenario of users interacting with classifiers hosted remotely by a third-party keeping the model internals secret. In fact, we instantiate our attack against classifiers served by MetaMind, Amazon, and Google. Models are automatically trained by the hosting platform. We are capable of making labeling prediction queries only after training is completed. Thus, we provide the first correctly blinded experiments concerning adversarial examples as a security risk.
Therefore, we show that our black-box attack is applicable to many remote systems taking decisions based on a ML classifier, because it combines three key properties: (a) the capabilities required are limited to observing output class labels, (b) the number of labels queried is limited, and (c) the approach applies and scales to different ML classifier types (see Section [7](#S7 "7 Generalization of the Attack ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples")), in addition to state-of-the-art DNNs. In contrast, previous work failed to simultaneously provide all of these three key properties [[6](#bib.bib6), [5](#bib.bib5), [9](#bib.bib9), [10](#bib.bib10), [4](#bib.bib4)].
Our contributions are the following:
* We introduce in Section [4](#S4 "4 Black-Box Attack Strategy ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples") an attack against black-box DNN classifiers. It crafts adversarial examples without knowledge of the classifier training data or model. To do so, a synthetic dataset is constructed by the adversary to train a substitute for the targeted DNN classifier.
* In Section [5](#S5 "5 Validation of the Attack ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples"), we instantiate the attack against a remote DNN classifier hosted by MetaMind. The DNN misclassifies 84.24% of the adversarial inputs crafted.
* The attack is calibrated in Section [6](#S6 "6 Attack Algorithm Calibration ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples") to (a) reduce the number of queries made to the target model and (b) maximize misclassification of adversarial examples.
* We generalize the attack to other ML classifiers like logistic regression. In Section [7](#S7 "7 Generalization of the Attack ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples"), we target models hosted by Amazon and Google. They misclassify adversarial examples at rates of 96.19% and 88.94%.
* We show that our black-box attack evades defenses proposed in the literature because the substitute trained by the adversary is unaffected by defenses deployed on the targeted oracle model to reduce its vulnerability.
* We provide an intuition of why adversarial examples crafted with the substitute also mislead target models by empirically observing that substitutes have cost gradient sign matrices correlated to the target’s.
Disclosure: We disclosed our attacks to MetaMind, Amazon, and Google. Note that no damage was caused as we demonstrated control of models created for our own account.
2 About Deep Neural Networks
-----------------------------
We provide preliminaries of deep learning to enable
understanding of our threat model and attack. We refer interested readers
to the detailed presentation in [[11](#bib.bib11)].
A *deep neural network* (DNN), as illustrated in its simplest form in
Figure [1](#S2.F1 "Fig. 1 ‣ 2 About Deep Neural Networks ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples"), is a machine learning technique that uses a
hierarchical composition of n parametric functions to model a high
dimensional input
→x. Each function
fi for i∈1..n is modeled using a layer of neurons, which are
elementary computing units applying an *activation function*
to the previous layer’s weighted representation of the input to generate a new
representation. Each layer is parameterized by a weight vector
θi (for simplicity, we omit the vector notation) impacting each neuron’s activation. Such weights hold
the knowledge of a DNN model F and are evaluated during its
training phase, as detailed below. Thus, the composition of functions modeled
by DNNs can be formalized as:
$$F(\vec{x}) = f_n\!\left(\theta_n, f_{n-1}\!\left(\theta_{n-1}, \ldots f_2\!\left(\theta_2, f_1(\theta_1, \vec{x})\right)\right)\right) \tag{1}$$

Fig. 1: DNN Classifier: the model processes an image of a handwritten digit and outputs the probability of it being in one of the N=10 classes for digits 0 to 9 (from [[12](#bib.bib12)]).
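To make the composition of Equation (1) concrete, here is a minimal numpy sketch (our illustration, not code from the paper): a two-layer instance F(→x)=f2(θ2,f1(θ1,→x)) with a ReLU hidden layer and a softmax output, sized loosely after the classifier of Figure 1. All dimensions and weights are placeholder assumptions.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max())           # numerically stable softmax
    return e / e.sum()

# Placeholder parameters theta_1, theta_2 (weights, biases) for a 784 -> 32 -> 10 model.
rng = np.random.default_rng(0)
W1, b1 = 0.01 * rng.normal(size=(32, 784)), np.zeros(32)
W2, b2 = 0.01 * rng.normal(size=(10, 32)), np.zeros(10)

def F(x):
    """F(x) = f2(theta_2, f1(theta_1, x)), a two-layer instance of Eq. (1)."""
    h = relu(W1 @ x + b1)             # f1: hidden layer representation
    return softmax(W2 @ h + b2)       # f2: output layer, probability vector over classes

x = rng.random(784)                   # a flattened 28x28 grayscale input in [0, 1]
print(F(x).sum())                     # the 10 class probabilities sum to ~1
```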
The *training phase* of a DNN F learns values for its set of
parameters θF={θ1,...,θn}. In this paper, we focus on classification tasks, where the
goal is to assign inputs a label among a predefined set of labels. To do so,
the network is given a large set of known input-output pairs
(→x,→y) and it adjusts weight parameters to reduce a cost
quantifying the prediction error between the model prediction F(→x) and
the correct output →y. The adjustment is typically performed using
techniques derived from the backpropagation
algorithm. Briefly speaking, such techniques
successively propagate error gradients with respect to network parameters from
the network’s output layer to its input layer.
During the *test phase*, the DNN is deployed with a fixed set of
parameters θF to make predictions on inputs unseen during
training. We consider DNN classifiers: for
a given input →x, the DNN produces a probability vector
F(→x) encoding its belief of input →x being in each of the
predefined classes (cf. Figure [1](#S2.F1 "Fig. 1 ‣ 2 About Deep Neural Networks ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples")). The weight parameters
θF hold the model knowledge acquired by training. Ideally,
the learning algorithm should allow the model to
generalize and make accurate predictions for inputs outside of the domain
explored during training. However, attacks manipulating DNN inputs with adversarial
examples showed this is not the case in practice [[6](#bib.bib6), [3](#bib.bib3), [5](#bib.bib5)].
3 Threat Model
---------------
A taxonomy of adversaries against DNN classifiers is found in [[3](#bib.bib3)], and complements a previous taxonomy
found in [[13](#bib.bib13)].
In our work, the adversary
seeks to force a DNN classifier to misclassify inputs in any class different from
their correct class. To achieve this, we consider a weak adversary
with access to the DNN output only. The adversary has no knowledge of the architectural
choices made to design the DNN, which include the number, type, and size of
layers, nor of the training data used to learn the DNN’s parameters. Such attacks are referred to as black box, where adversaries need not know internal details of a
system to compromise it.
Targeted Model: We
consider attackers targeting a multi-class DNN classifier. It
outputs probability vectors, where each vector component
encodes the DNN’s belief of the input being part of one of the predefined
classes. To demonstrate our attack, we consider the ongoing example of
a DNN used for classifying images, as shown in
Figure [1](#S2.F1 "Fig. 1 ‣ 2 About Deep Neural Networks ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples"). Such a DNN can be used to classify
handwritten digits into classes corresponding to digits between 0 and 9, images
of objects in a fixed number of categories, or images of traffic signs into
classes corresponding to their type (STOP, yield, …).
Adversarial Capabilities: The targeted DNN, the *oracle*, is denoted by O.
The name oracle refers to the only capability of the adversary:
accessing the DNN’s label ~O(→x) for
any input →x by querying the oracle O. In this paper, we consider the
output ~O(→x) to be a label, which is
the index of the class assigned the largest probability by the DNN:
$$\tilde{O}(\vec{x}) = \arg\max_{j \in 0..N-1} O_j(\vec{x}) \tag{2}$$
where Oj(→x) is the j-th component of the probability vector
O(→x) output by DNN O. Distinguishing between
labels and probabilities makes adversaries
realistic (they more often have access to labels than probabilities) but weaker: labels encode less information
about the model’s learned behavior. Accessing
labels ~O produced by the DNN O is the only capability assumed in our
threat model.
Adversaries do not have
access to the oracle internals or training data. Again, this
makes our threat model more realistic but in turn makes
attacks harder to execute.
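To illustrate the adversary’s sole capability, the following hedged sketch reduces whatever the remote service returns to the label of Equation (2); `query_remote_api` is a hypothetical stand-in for the hosted platform’s prediction call, not a real client library.

```python
import numpy as np

def query_remote_api(x):
    """Hypothetical stand-in for a hosted prediction call (MetaMind, Amazon, Google, ...).

    Assumed to return a probability vector over the N classes; here we fabricate one
    deterministically from the input purely for illustration.
    """
    rng = np.random.default_rng(abs(hash(x.tobytes())) % (2 ** 32))
    p = rng.random(10)
    return p / p.sum()

def oracle_label(x):
    """~O(x): the index of the class assigned the largest probability (Eq. 2)."""
    return int(np.argmax(query_remote_api(x)))

x = np.zeros(784)                 # a flattened 28x28 input
print(oracle_label(x))            # the only information observed by the adversary
```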

Fig. 2: Adversarial Samples: the top row contains legitimate samples [[14](#bib.bib14), [7](#bib.bib7)] used to create adversarial samples in the bottom row, which are misclassified. The DNN outputs are identified below the samples.
Adversarial Goal: We want to produce a minimally altered version of any input →x, named
*adversarial sample*, and denoted →x∗, misclassified by oracle O: ~O(→x∗)≠~O(→x). This corresponds to an attack on the
oracle’s output integrity. Adversarial samples solve the following
optimization problem:
$$\vec{x}^* = \vec{x} + \arg\min\{\vec{z} : \tilde{O}(\vec{x}+\vec{z}) \neq \tilde{O}(\vec{x})\} = \vec{x} + \delta\vec{x} \tag{3}$$
Examples of adversarial samples can be found in
Figure [2](#S3.F2 "Fig. 2 ‣ 3 Threat Model ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples"). The first row contains legitimate samples
and the second corresponding adversarial samples that are
misclassified. This misclassification must be achieved by adding a minimal
perturbation δ→x so as to evade human detection. Even with total
knowledge of the architecture used to train model O and its
parameters resulting from training, finding such a minimal perturbation is not
trivial, as properties of DNNs preclude the optimization
problem from being linear or convex. This is exacerbated by our threat
model: removing knowledge of model O’s architecture and training data makes it harder to find a perturbation such that ~O(→x+δ→x)≠~O(→x) holds.
4 Black-Box Attack Strategy
----------------------------
We introduce our black-box attack.
As described in Section [3](#S3 "3 Threat Model ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples"), the adversary wants to craft
adversarial samples misclassified by the classifier using his/her sole
capability of accessing the label ~O(→x) assigned by classifier for any chosen
input →x. The attack strategy consists in learning a
*substitute DNN* approximating the target using a dataset constructed with synthetic inputs and labels observed from the oracle. Adversarial examples are then crafted using this substitute. As adversarial examples transfer
between architectures [[5](#bib.bib5), [6](#bib.bib6)], we expect the target DNN to misclassify them.
To understand the difficulty of conducting the attack under this threat model,
recall Equation [3](#S3.E3 "(3) ‣ 3 Threat Model ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples") formalizing the adversarial goal of
finding a minimal perturbation resulting in a
sample misclassified by the targeted oracle. It cannot
be solved in closed form when the target is a DNN, or any other non-linear non-convex machine learning model.
The basis for most adversarial
attacks [[6](#bib.bib6), [3](#bib.bib3), [5](#bib.bib5)]
is to approximate its solution
using gradient-based optimization on functions defined by a DNN.
Because evaluating these functions and their gradients requires knowledge
of the DNN architecture and parameters, such an attack is not possible
under our black-box scenario.
It was shown that adversaries with access
to an independently collected labeled training set from the same population distribution
as the oracle could train a model with a different architecture and use it as a substitute [[5](#bib.bib5)]:
adversarial examples designed to manipulate the substitute
are often misclassified by the targeted model.
However, many modern machine learning systems require large and
expensive training sets. For instance,
we consider models trained with several tens of thousands
of labeled examples. This makes attacks based
on this paradigm unfeasible for adversaries without large labeled datasets.
In this paper, we show black-box attacks can be accomplished at
a much lower cost, without labeling an independent training
set.
In our approach, to enable the adversary to train a substitute model without a real labeled dataset, we use the target DNN as an oracle to construct a synthetic dataset. The inputs are synthetically generated and the outputs are labels observed from the oracle.
Using this synthetic dataset,
the attacker builds an approximation F of the model O
learned by the oracle. This *substitute network* F is then used to craft adversarial samples misclassified
by F.
Indeed, with its full knowledge of the substitute DNN F parameters, the adversary can use one of the previously
described attacks [[6](#bib.bib6), [3](#bib.bib3)] to craft
adversarial samples misclassified by F. *As long as the transferability
property holds between F and O, adversarial samples crafted for F will
also be misclassified by O.* This leads us to propose the following two-fold
strategy:
1. Substitute Model Training: the attacker queries the oracle with synthetic inputs selected by a Jacobian-based heuristic to build a model F approximating the oracle model O’s decision boundaries.
2. Adversarial Sample Crafting: the attacker uses substitute network F to craft adversarial samples, which are then misclassified by oracle O due to the transferability of adversarial samples.
### 4.1 Substitute Model Training
Training a substitute model F approximating oracle O is challenging because we must: (1) select an architecture for our substitute without knowledge of the
targeted oracle’s architecture, and (2) limit the number of queries
made to the oracle in order to ensure that the approach is tractable. Our
approach, illustrated in Figure [3](#S4.F3 "Fig. 3 ‣ 4.1 Substitute Model Training ‣ 4 Black-Box Attack Strategy ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples"), overcomes these challenges mainly by introducing a synthetic data generation technique, the *Jacobian-based Dataset
Augmentation*. We emphasize that *this technique is not
designed to maximize the substitute DNN’s accuracy but rather ensure that it
approximates the oracle’s decision boundaries with few label queries*.

Fig. 3: Training of the substitute DNN F: the attacker (1) collects an initial substitute training set S0 and (2) selects an architecture F. Using oracle ~O, the attacker (3) labels S0 and (4) trains substitute F. After (5) Jacobian-based dataset augmentation, steps (3) through (5) are repeated for several substitute epochs ρ.
Substitute Architecture: This factor is not the most
limiting as the adversary must at least have some partial knowledge of the
oracle input (e.g., images, text) and
expected output (e.g., classification). The adversary can thus use
an architecture adapted to the input-output relation. For instance, a
convolutional neural network is suitable for image classification. Furthermore,
we show in Section [6](#S6 "6 Attack Algorithm Calibration ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples") that the type, number, and size of layers used
in the substitute DNN have relatively little
impact on the success of the attack. Adversaries can also consider performing
an architecture exploration and train several substitute models before
selecting the one yielding the highest attack success.
Generating a Synthetic Dataset: To better understand
the need for synthetic data, note that we could potentially make an infinite number of queries
to obtain the oracle’s output O(→x) for any input →x belonging to
the input domain. This would provide us with a copy of the oracle.
However, this is simply not tractable: consider a DNN with M input
components, each taking discrete values among a set of K possible values; the
number of possible inputs to be queried is K^M. The intractability is even
more apparent for inputs in the continuous domain. Furthermore, making a large
number of queries renders the adversarial behavior easy to detect.
A natural alternative is to resort to randomly selecting additional
points to be queried. For instance, we tried using Gaussian noise to select
points on which to train substitutes. However, the resulting models
were not able to learn by querying the oracle. This is likely due to
noise not being representative of the input distribution.
To address this issue, we thus introduce a heuristic efficiently
exploring the input domain and, as shown in
Sections [5](#S5 "5 Validation of the Attack ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples") and [6](#S6 "6 Attack Algorithm Calibration ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples"), drastically limits the
number of oracle queries. Furthermore, our technique also ensures that the
substitute DNN is an approximation of the targeted DNN i.e. it
learns similar decision boundaries.
The heuristic used to generate synthetic training inputs is based on identifying directions in which the model’s output is
varying, around an initial set of training points. Such directions
intuitively require more input-output pairs
to capture the output variations of the target DNN O. Therefore, to get a
substitute DNN accurately approximating the oracle’s decision boundaries,
the heuristic prioritizes these samples when querying the oracle for labels. These directions are identified with the substitute DNN’s Jacobian matrix JF, which is
evaluated at several input points →x (how these points are chosen is
described below). Precisely, the adversary evaluates the sign of the Jacobian matrix dimension corresponding to the label assigned to input →x by the oracle:
sgn(JF(→x)[~O(→x)]). To obtain a new synthetic training point, a term λ⋅sgn(JF(→x)[~O(→x)]) is added to the original point →x. We name this technique *Jacobian-based
Dataset Augmentation*. We base our substitute training algorithm on the idea of
iteratively refining the model in directions identified using the Jacobian.
1: Inputs: ~O, maxρ, S0, λ
2: Define architecture F
3: for ρ ∈ 0 .. maxρ−1 do
4:     // Label the substitute training set
5:     D ← {(→x, ~O(→x)) : →x ∈ Sρ}
6:     // Train F on D to evaluate parameters θF
7:     θF ← train(F, D)
8:     // Perform Jacobian-based dataset augmentation
9:     Sρ+1 ← {→x + λ⋅sgn(JF[~O(→x)]) : →x ∈ Sρ} ∪ Sρ
10: end for
11: return θF
Algorithm 1 - Substitute DNN Training: the algorithm takes as inputs the oracle ~O, the maximum number maxρ of substitute training epochs to be performed, a substitute architecture F, and an initial training set S0.
Substitute DNN Training Algorithm: We now describe the five-step training procedure outlined in Algorithm [1](#alg1 "Algorithm 1 ‣ 4.1 Substitute Model Training ‣ 4 Black-Box Attack Strategy ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples"):
1. Initial Collection: The adversary first collects a very small set S0 of inputs representative of the input domain. For instance, if the targeted oracle O classifies handwritten digits, the adversary collects 10 images of each digit 0 through 9. We show in Section [5](#S5 "5 Validation of the Attack ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples") that the substitute training set does not necessarily have to come from the distribution from which the targeted oracle was trained.
2. Architecture Selection: The adversary selects an architecture to be trained as the substitute F. Again, this can be done using high-level knowledge of the classification task performed by the oracle (e.g., convolutional networks are appropriate for vision).
3. Substitute Training: The adversary iteratively trains more
accurate substitute DNNs Fρ by repeating the following for ρ∈0..ρmax:
* Labeling: By querying for the labels ~O(→x) output by oracle O, the adversary labels each sample →x∈Sρ in its initial substitute training set Sρ.
* Training: The adversary trains the architecture chosen at step (2) using substitute training set Sρ in conjunction with classical training techniques.
* Augmentation: The adversary applies our augmentation technique on the initial substitute training set Sρ to produce a larger substitute training set Sρ+1 with more synthetic training points. This new training set better represents the model’s decision boundaries. The adversary repeats steps (3) and (4) with the augmented set Sρ+1.
Step (3) is repeated several times to increase the substitute DNN’s accuracy and the similarity of its decision boundaries with the oracle. We introduce the term *substitute training epoch*, indexed with ρ, to refer to each iteration performed. This leads to the following formalization of the Jacobian-based Dataset Augmentation performed at step (5) of our substitute training algorithm to find more synthetic training points:
$$S_{\rho+1} = \{\vec{x} + \lambda \cdot \mathrm{sgn}(J_F[\tilde{O}(\vec{x})]) : \vec{x} \in S_\rho\} \cup S_\rho \tag{4}$$
where λ is a parameter of the augmentation: it defines the size of the step taken in the sensitive direction identified by the Jacobian matrix to augment the set Sρ into Sρ+1.
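The sketch below illustrates Algorithm 1 and Equation (4) end to end on a toy problem; it is our own example, using a softmax-regression substitute against a synthetic label-only oracle with arbitrary dimensions and hyperparameters. The paper’s substitutes are DNNs, but the augmentation logic is the same.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N = 20, 3                                     # toy input dimension and class count

# Toy label-only "oracle": a hidden linear classifier the adversary can only query.
W_oracle = rng.normal(size=(N, D))
def oracle_label(x):
    return int(np.argmax(W_oracle @ x))          # ~O(x), Eq. (2)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def train_substitute(S, labels, steps=500, lr=0.1):
    """Fit softmax-regression parameters (theta_F) on the labeled substitute set."""
    W = np.zeros((N, D))
    for _ in range(steps):
        grad = np.zeros_like(W)
        for x, y in zip(S, labels):
            p = softmax(W @ x)
            p[y] -= 1.0                          # cross-entropy gradient wrt logits
            grad += np.outer(p, x)
        W -= lr * grad / len(S)
    return W

def jacobian_row(W, x, j):
    """Row j of J_F(x) for F(x) = softmax(Wx): dF_j/dx = p_j * (W_j - p @ W)."""
    p = softmax(W @ x)
    return p[j] * (W[j] - p @ W)

# Algorithm 1: iterative substitute training with Jacobian-based augmentation (Eq. 4).
lam, max_rho = 0.1, 4
S = [rng.random(D) for _ in range(10)]           # initial substitute training set S_0
for rho in range(max_rho):
    labels = [oracle_label(x) for x in S]        # label S_rho by querying the oracle
    W_sub = train_substitute(S, labels)          # train substitute F on the labeled set
    S = S + [x + lam * np.sign(jacobian_row(W_sub, x, y))   # augment along the Jacobian
             for x, y in zip(S, labels)]

print(len(S))                                    # |S_0| * 2**max_rho synthetic inputs
```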
### 4.2 Adversarial Sample Crafting
Once the adversary trained a substitute DNN, it uses it to craft adversarial samples. This is performed by implementing two
previously introduced approaches described in [[6](#bib.bib6), [3](#bib.bib3)]. We provide an overview of the two approaches, namely the *Goodfellow et al. algorithm* and the
*Papernot et al. algorithm*. Both techniques share a similar intuition of
evaluating the model’s sensitivity to input modifications in order to select a
small perturbation achieving the misclassification goal. (Our attack can be implemented with other adversarial example algorithms; we focus on these two in our evaluation.)
Goodfellow et al. algorithm: This algorithm is also known as the
*fast gradient sign method* [[6](#bib.bib6)].
Given a model F with an associated
cost function c(F,→x,y), the adversary crafts an adversarial sample
→x∗=→x+δ→x for a given legitimate sample →x by
computing the following perturbation:
$$\delta\vec{x} = \varepsilon \, \mathrm{sgn}(\nabla_{\vec{x}} c(F, \vec{x}, y)) \tag{5}$$
where the perturbation sgn(∇→xc(F,→x,y)) is the sign of the
model’s cost function gradient. (As described here, the method causes simple
misclassification; it has been extended to achieve chosen target classes.)
The cost gradient is computed with respect to
→x using sample →x and label y as inputs. The value of the
*input variation parameter* ε factoring the sign matrix controls the
perturbation’s amplitude. Increasing its value increases the likelihood of
→x∗ being misclassified by model F but on the contrary makes
adversarial samples easier to detect by humans. In
Section [6](#S6 "6 Attack Algorithm Calibration ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples"), we evaluate the impact of parameter ε
on the successfulness of our attack.
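For concreteness, a hedged numpy sketch of Equation (5) follows, using a softmax-regression substitute (our simplification) for which the cross-entropy cost gradient with respect to the input is available in closed form; the substitute parameters and the sample are placeholders.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm(W, x, y, eps):
    """Goodfellow et al. perturbation (Eq. 5) for a softmax-regression substitute F.

    The cross-entropy cost c(F, x, y) has input gradient W^T (F(x) - onehot(y)).
    """
    p = softmax(W @ x)
    p[y] -= 1.0                               # F(x) - onehot(y)
    grad_x = W.T @ p                          # nabla_x c(F, x, y)
    return x + eps * np.sign(grad_x)          # x* = x + eps * sgn(gradient)

# Placeholder substitute parameters and a legitimate sample (assumptions for the example).
rng = np.random.default_rng(1)
W_sub = rng.normal(size=(10, 784))
x, y = rng.random(784), 3
x_adv = np.clip(fgsm(W_sub, x, y, eps=0.3), 0.0, 1.0)   # keep pixel values in [0, 1]
print(np.abs(x_adv - x).max())                          # per-pixel change of at most eps
```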
Papernot et al. algorithm: This algorithm is suitable for
source-target misclassification attacks where adversaries seek to take samples
from any legitimate source class to any chosen target
class [[3](#bib.bib3)]. Misclassification attacks are a special
case of source-target misclassifications, where the target class can
be any class different from the legitimate source class. Given model F, the
adversary crafts an adversarial sample →x∗=→x+δ→x
for a given legitimate sample →x by adding a perturbation
δ→x to a subset of the input components →xi.
To choose input components forming perturbation δ→x,
components are sorted by decreasing adversarial saliency value. The
adversarial saliency value S(→x,t)[i] of component i for an
adversarial target class t is defined as:
$$S(\vec{x},t)[i] = \begin{cases} 0 & \text{if } \frac{\partial F_t}{\partial \vec{x}_i} < 0 \text{ or } \sum_{j\neq t}\frac{\partial F_j}{\partial \vec{x}_i} > 0 \\[4pt] \frac{\partial F_t}{\partial \vec{x}_i}\left|\sum_{j\neq t}\frac{\partial F_j}{\partial \vec{x}_i}\right| & \text{otherwise} \end{cases} \tag{6}$$
where matrix JF = [∂Fj/∂→xi]ij is
the model’s Jacobian matrix. Input components i are added to perturbation
δ→x in order of decreasing adversarial saliency value
S(→x,t)[i] until the resulting adversarial sample →x∗=→x+δ→x is misclassified by F. The perturbation introduced
for each selected input component can vary: a greater perturbation
reduces the number of components perturbed to achieve misclassification.
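A minimal sketch of this procedure is given below, implementing the adversarial saliency values of [3] for the same toy softmax-regression substitute used above. The greedy single-component loop and all constants are our simplifying assumptions rather than the exact algorithm of [3].

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def jacobian(W, x):
    """Full Jacobian J_F of F(x) = softmax(Wx): row j is p_j * (W_j - p @ W)."""
    p = softmax(W @ x)
    return p[:, None] * (W - (p @ W)[None, :])

def saliency(J, t):
    """Adversarial saliency values S(x, t)[i], following [3]."""
    target = J[t]                              # dF_t / dx_i
    others = J.sum(axis=0) - J[t]              # sum over j != t of dF_j / dx_i
    s = target * np.abs(others)
    s[(target < 0) | (others > 0)] = 0.0       # keep only components helping class t
    return s

def papernot_attack(W, x, t, eps=1.0, max_distortion=0.1):
    """Greedily perturb the most salient components until F predicts target class t."""
    x_adv = x.copy()
    budget = int(max_distortion * x.size)      # maximum number of altered components
    for _ in range(budget):
        if int(np.argmax(softmax(W @ x_adv))) == t:
            break
        s = saliency(jacobian(W, x_adv), t)
        if s.max() <= 0:
            break                              # no single component helps anymore
        i = int(np.argmax(s))                  # next component, by decreasing saliency
        x_adv[i] = np.clip(x_adv[i] + eps, 0.0, 1.0)
    return x_adv

rng = np.random.default_rng(2)
W_sub, x = rng.normal(size=(10, 784)), rng.random(784)
x_adv = papernot_attack(W_sub, x, t=7)
print(int(np.argmax(softmax(W_sub @ x_adv))))  # ideally the target class 7
```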
Each algorithm has its benefits and drawbacks. The Goodfellow algorithm is
well suited for fast crafting of many adversarial samples
with relatively large perturbations thus potentially easier to detect.
The Papernot algorithm reduces
perturbations at the expense of a greater computing cost.
5 Validation of the Attack
---------------------------
We validate our attack against remote and local classifiers. We first apply it to target a DNN remotely
provided by MetaMind, through their API (accessible online at
www.metamind.io), which allows a user to train classifiers using
deep learning. The API returns labels produced by the DNN for any given input
but does not provide access to the DNN. This corresponds to the
oracle described in our threat model. We show that:
* An adversary
using our attack can reliably force the DNN trained
using MetaMind on MNIST [[14](#bib.bib14)] to misclassify
84.24% of adversarial examples crafted with a perturbation not affecting human recognition.
* A second oracle trained locally with the German Traffic
Signs Recognition Benchmark (GTSRB) [[7](#bib.bib7)],
can be
forced to misclassify more than 64.24% of altered inputs without affecting human recognition.
### 5.1 Attack against the MetaMind Oracle
Description of the Oracle: We used the MNIST handwritten digit dataset to train the DNN [[14](#bib.bib14)]. It comprises 60,000 training and 10,000 test images of handwritten digits. The task associated with the dataset is
to identify the digit corresponding to each image. Each 28x28
grayscale sample is encoded as a vector of pixel intensities in the
interval [0,1] and obtained by reading the image pixel matrix row-wise.
We registered for an API key on MetaMind’s website, which gave us access to
three functionalities: dataset upload, automated model training, and model
prediction querying. We uploaded the 50,000 samples included in the MNIST
training set to MetaMind and then used the API to train a classifier
on the dataset. We emphasize that training is automated: we have no
access to the training algorithm, model architecture, or model parameters. All
we are given is the accuracy of the resulting model, computed by MetaMind using
a validation set created by isolating 10% of the training samples. Details can be found on MetaMind’s
website.
Training took 36 hours to return a classifier
with a 94.97% accuracy. This performance cannot be
improved as we cannot access or modify the model’s specifications and
training algorithm. Once training is completed, we could access the model
predictions, for any input of our choice, through the API. Predictions take the
form of a class label. This
corresponds to the threat model described in
Section [3](#S3 "3 Threat Model ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples").
Initial Substitute Training Sets: First, the adversary collects an initial substitute training
set. We describe two such sets used to attack the MetaMind
oracle:
* MNIST subset: This initial substitute training
set is made of 150 samples from the MNIST test set.
They differ from those used by the oracle for training as test and training
sets are distinct. We assume adversaries can collect such a limited sample set
under the threat model described in Section [3](#S3 "3 Threat Model ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples") with
minimal knowledge of the oracle task: here, handwritten
digit classification.
* Handcrafted set: To ensure our results
do not stem from similarities between the MNIST test and training sets, we
also consider a *handcrafted* initial substitute training set. We handcrafted 100 samples by handwriting 10 digits for each
class between 0 and 9 with a
laptop trackpad. We then adapted them to the MNIST format of 28x28
grayscale pixels. Some are shown below.

Substitute DNN Training: The adversary uses the initial substitute training sets and the oracle to train two
substitute DNNs. Our substitute architecture A, a standard for
computer vision classification, is described
in Table [15](#Sx1.F15 "Fig. 15 ‣ Appendix ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples") (cf. appendix). We train all
DNNs in this paper with Theano [[15](#bib.bib15)] and Lasagne.
The substitute DNN is trained on our
machine for 6 substitute epochs.
During
each of these 6 epochs, the model is trained for 10 epochs
from scratch with a learning rate of 10^−2 and momentum of 0.9. Between
substitute epochs, we perform a Jacobian-based dataset augmentation with a step
size of λ=0.1 to generate additional synthetic training data, which we label using the MetaMind oracle.
The accuracy of the two substitute DNNs is reported in
Figure [4](#S5.F4 "Fig. 4 ‣ 5.1 Attack against the MetaMind Oracle ‣ 5 Validation of the Attack ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples"). It is computed with the MNIST
test set (minus the 150 samples used in the first initial substitute training
set). The adversary does *not* have access to this full test set: we
solely use it to analyze our results. The two substitute
DNNs achieve 81.20% and 67.00% accuracy, respectively, on the MNIST test set after 6 substitute training epochs. These accuracies fall
short of current state-of-the-art accuracies on this task. However, the adversary has access to a limited number of
samples (in this case 6,400 = 100×2^6 instead of 50,000 for
state-of-the-art models). Furthermore, the adversarial goal is to craft
adversarial samples misclassified by the oracle. *Instead of learning a
substitute DNN with optimal accuracy, the adversary is interested in
learning a substitute capable of mimicking the oracle decision
boundaries*.
Adversarial Sample Crafting: Using the substitute DNNs, we then craft adversarial samples using Goodfellow’s algorithm. We decided to use the 10,000
samples from the MNIST test set as our legitimate samples. (Again,
adversaries do not need access to the dataset and can use any legitimate sample
of their choice to craft adversarial samples.) We use it in order to
show that expected inputs can be misclassified on a large scale. We evaluate
sample crafting using two metrics: *success rate* and
*transferability*. The *success rate* is the proportion of
adversarial samples misclassified by the substitute DNN.
Our goal is to verify whether these samples are also misclassified by
the oracle or not. Therefore, the *transferability of adversarial samples*
refers to the oracle misclassification rate of adversarial samples crafted
using the substitute DNN.
Figure [5](#S5.F5 "Fig. 5 ‣ 5.1 Attack against the MetaMind Oracle ‣ 5 Validation of the Attack ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples") details both metrics
for each substitute DNN and for several values of the input variation
ε (cf. Equation [5](#S4.E5 "(5) ‣ 4.2 Adversarial Sample Crafting ‣ 4 Black-Box Attack Strategy ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples")). Transferability reaches 84.24% for the first substitute
DNN and 78.72% for the second, with input variations of
ε=0.3. Our attack strategy is thus effectively able to severely
damage the output integrity of the MetaMind oracle. Using the
substitute training set handcrafted by the adversary limits the transferability of adversarial samples
when compared to the substitute set extracted from MNIST data,
for all input variations except ε=0.2.
Yet, the transferability of both substitutes is similar, corroborating that our
attack can be executed without access to any of the oracle’s training data.
| Substitute Epoch | MNIST test set | Handcrafted digits |
| --- | --- | --- |
| 0 | 24.86% | 18.70% |
| 1 | 41.37% | 19.89% |
| 2 | 65.38% | 29.79% |
| 3 | 74.86% | 36.87% |
| 4 | 80.36% | 40.64% |
| 5 | 79.18% | 56.95% |
| 6 | 81.20% | 67.00% |
Fig. 4: Substitute DNN Accuracies: each column corresponds to an initial substitute training set:
150 MNIST test samples, and handcrafted digits.
Accuracy is reported on the unused 9,850 MNIST test samples.

Fig. 5: Success Rate and Transferability of Adversarial Samples for the MetaMind attacks: performed using MNIST-based and handcrafted substitutes: each bar corresponds to a different perturbation input variation.

Fig. 6: MetaMind Oracle Confusion Matrices for 4 input variations ε. Cell (x,y) indicates the share of digit y instances classified by the oracle as digit x.

Fig. 7: Success Rate and Transferability of Adversarial Samples crafted on the GTRSRB dataset: each bar corresponds to a different perturbation input variation
To analyze the labels assigned by the MetaMind oracle, we plot confusion
matrices for adversarial samples crafted using the first substitute DNN and with
4 values of input variation ε. In
Figure [6](#S5.F6 "Fig. 6 ‣ 5.1 Attack against the MetaMind Oracle ‣ 5 Validation of the Attack ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples"), rates on the diagonal indicate
the proportion of samples correctly classified by the oracle for each of the
10 classes. Off-diagonal rates indicate the proportion of samples
misclassified in a wrong class. For instance, cell (8,3) in the third matrix
indicates that 89% of the instances of a 3 are classified as an 8 by the oracle
when perturbed with an input variation of ε=0.25.
Interestingly, confusion matrices converge to most samples being classified as
4s and 8s as ε increases. As observed by [[3](#bib.bib3)], an explanation for
this phenomenon could be that DNNs more easily classify adversarial samples in
these classes. Additionally, it is possible that MetaMind augments the DNN with
an ensemble of classifiers collectively assigning labels to samples.
Adversarial samples might display properties characteristic of an 8, such as
a significant proportion of white pixels.
### 5.2 Attacking an oracle for the GTSRB
We now validate our attack on a different dataset, using
an oracle trained locally on our machine to recognize traffic signs on the GTSRB
dataset. The attack achieves *higher transferability rates at lower distortions
compared to the MNIST oracle*. We believe this is due to the higher dataset
dimensionality, in terms of inputs and outputs.
Oracle Description: The GTSRB dataset is an image collection
consisting of 43 traffic signs [[7](#bib.bib7)]. Images vary in size and are
RGB-encoded. To simplify, we resize images to 32x32
pixels, recenter them by subtracting the mean component, and rescale
them by factoring their standard deviations out. We keep 35,000 images for
our training set and 4,000 for our validation set (out of the 39,209
available), and 10,000 for our test set (out of 12,630). We train the
oracle on our machine, using the DNN B from
Table [15](#Sx1.F15 "Fig. 15 ‣ Appendix ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples") (cf. appendix), for 50 epochs with a learning rate
of 10^−2 and a momentum of 0.9 (both decayed by 0.5 every 10
epochs).
Substitute DNN Training: The adversary uses two
initial substitute training sets extracted from the GTSRB test set. The
first includes the first 1,000 samples and the second the
first 500. The number of initial samples is higher than for
MNIST substitutes as inputs have a higher dimensionality. We train
three substitute architectures C, D, and E (cf.
Table [15](#Sx1.F15 "Fig. 15 ‣ Appendix ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples")) using the oracle for 6 substitute
training epochs with a Jacobian-based dataset augmentation parameter of
λ=0.1. Substitutes C and E were trained with the 1,000 sample
initial substitute training set and achieve a 71.42% accuracy. Substitute D
was trained with the initial set of 500 samples. Its accuracy of 60.12% is
lower than that of C and E.
Sample Crafting: We use Goodfellow’s algorithm
with input variations ε between 0.01 and 0.5 to craft
adversarial samples from the test set. Results
are shown in Figure [7](#S5.F7 "Fig. 7 ‣ 5.1 Attack against the MetaMind Oracle ‣ 5 Validation of the Attack ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples"). Adversarial samples
crafted with variations ε<0.3 are more transferable than
those crafted with the same ε for MNIST models. This is most likely due to the higher
input dimensionality—3,072 components instead of 784—which means almost 4 times
more perturbation is applied with the same ε.
Nevertheless, with success rates higher than 98.98% and transferability
rates ranging from 64.24% to 69.03% for ε=0.3, which is hard
for humans to distinguish, *the attack is successful*. Interestingly, the
transferability of adversarial samples crafted using substitute DNN D is
comparable or higher than corresponding samples for DNNs C and E,
even though it was trained with fewer samples and is less
accurate. This emphasizes that there is no strong correlation between substitute accuracy and adversarial example transferability.
6 Attack Algorithm Calibration
-------------------------------
Having shown in Section [5](#S5 "5 Validation of the Attack ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples") that an adversary can force an
MNIST oracle from MetaMind, and a GTSRB oracle trained locally, to
misclassify inputs using the attack described in Section [4](#S4 "4 Black-Box Attack Strategy ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples"),
we now perform a parameter space exploration of both attack steps–the
substitute DNN training and the adversarial sample crafting–using MNIST
data. We explore the following questions: “(1) How can substitute
training be fine-tuned to improve adversarial sample transferability?” and (2)
“For each adversarial sample crafting strategy, which parameters optimize
transferability?”. We found that:
* In Section [6.1](#S6.SS1 "6.1 Calibrating Substitute DNN Training ‣ 6 Attack Algorithm Calibration ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples"), we show that the choice of substitute DNN architecture (number of layers, size,
activation function, type) has a limited impact on adversarial sample
transferability. Increasing the number of epochs, after the substitute DNN has
reached an asymptotic accuracy, does not improve adversarial sample
transferability.
* At comparable input perturbation magnitude, the Goodfellow and Papernot
algorithms have similar transferability rates (cf. Section [6.2](#S6.SS2 "6.2 Adversarial Sample Crafting ‣ 6 Attack Algorithm Calibration ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples")). Yet, we find the former to be
more computationally efficient.
In this section, we use an oracle trained locally to limit querying of the
MetaMind API. We train architecture A (cf. Table [15](#Sx1.F15 "Fig. 15 ‣ Appendix ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples"))
for 50 epochs with a learning rate of 10^−2 and a momentum of 0.9 (both
decayed by 0.5 every 10 epochs).
### 6.1 Calibrating Substitute DNN Training
We first seek to quantify the impact of substitute training algorithm parameters on adversarial sample transferability and introduce a refinement to reduce oracle querying.
Choosing an Architecture: We train substitute DNNs A, F, G, H, I, J, K, L, and M (cf.
Table [15](#Sx1.F15 "Fig. 15 ‣ Appendix ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples")) using 150 samples from the MNIST test set as the substitute training set. During each
of the 6 substitute training epochs, the DNN is trained for 5 epochs
from scratch. Between epochs, synthetic data is added to the training set using Jacobian-based
dataset augmentations with step λ=0.1. The substitute
architectures differ from the oracle’s by the type, number, and size
of layers. In Table [I](#S6.T1 "TABLE I ‣ 6.1 Calibrating Substitute DNN Training ‣ 6 Attack Algorithm Calibration ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples"),
we report the accuracy of each architecture after 2 and 6 substitute training epochs, as well as the adversarial
sample transferability after 6 epochs. Adversarial samples are crafted using the Goodfellow algorithm with an input
variation of ε=0.4 (which we justify later). The last column of
Table [I](#S6.T1 "TABLE I ‣ 6.1 Calibrating Substitute DNN Training ‣ 6 Attack Algorithm Calibration ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples")
shows that the choice of architecture has a limited
impact on adversarial sample transferability, and therefore on the attack
success. The most important transferability drop follows from removing all
convolutional layers. Changing the hidden layer activation function from
rectified linear to a sigmoid does not impact transferability significantly.
Choosing the number of substitute epochs: Another
tunable parameter is the number of
epochs for which substitute DNNs are trained. Intuitively, one would hypothesize that
the longer we train the substitute, the more samples labeled using the
oracle are included in the substitute training set, thus the higher the
transferability of adversarial samples will be. This intuition is confirmed
only partially by our experiments on substitute DNN A. We find that for
input variations ε≤0.3, the transferability is slightly
improved by a rate between +3% to +9%, but for variations
ε≥0.4, the transferability is slightly degraded by less than
1%.
| DNN ID | Accuracy (ρ=2) | Accuracy (ρ=6) | Transferability (ρ=6) |
| --- | --- | --- | --- |
| A | 30.50% | 82.81% | 75.74% |
| F | 68.67% | 79.19% | 64.28% |
| G | 72.88% | 78.31% | 61.17% |
| H | 56.70% | 74.67% | 63.44% |
| I | 57.68% | 71.25% | 43.48% |
| J | 64.39% | 68.99% | 47.03% |
| K | 58.53% | 70.75% | 54.45% |
| L | 67.73% | 75.43% | 65.95% |
| M | 62.64% | 76.04% | 62.00% |
TABLE I: Substitute Accuracy at ρ=2 and ρ=6 substitute epochs and Transferability of Adversarial Samples: for ε=0.4 after ρ=6 substitute epochs.
Setting the step size: We trained
substitute A using different Jacobian-based
dataset augmentation step sizes λ. Increasing or decreasing the step size (from λ=0.1 used in the
rest of this paper) does not modify the substitute accuracy by more than 3%. Larger step sizes decrease convergence stability while smaller values yield slower convergence.
However, increasing step size λ negatively
impacts adversarial sample transferability: for instance, with a step size
of 0.3 compared to 0.1, the transferability rate for ε=0.25
is 10.82% instead of 22.35% and for ε=0.5, 82.07%
instead of 85.22%.
However, having the step size periodically alternating between positive and negative
values improves the quality of the oracle approximation made by the substitute. This could be explained by the fact that after a few substitute epochs, synthetic inputs are outside of the input domain and are thus clipped to produce an acceptable input.
We introduce an iteration period τ
after which the step size is multiplied by −1. Thus, the step size
λ is now replaced by:
$$\lambda_\rho = \lambda \cdot (-1)^{\lfloor \rho / \tau \rfloor} \tag{7}$$
where τ is set to be the number of epochs after which the Jacobian-based
dataset augmentation does not lead any substantial improvement in the
substitute. A grid search can also be performed to find an optimal value for
the period τ. We also experimented with a decreasing grid step amplitude
λ, but did not find that it yielded substantial improvements.
Reducing Oracle Querying: We apply *reservoir sampling* [[16](#bib.bib16)]
as a means to reduce the number of queries made to the oracle. This is useful
when learning substitutes in realistic environments, or when interacting with paid APIs, where the number of label queries an adversary can make without
exceeding a quota or being detected by a defender is
constrained. Reservoir
sampling is a class of algorithms that randomly select κ samples from a list of samples.
In our
case, we use reservoir sampling to select a limited number of new inputs
κ when performing a Jacobian-based dataset augmentation. This prevents
the exponential growth of queries made to the oracle at each augmentation
iteration. At iterations ρ>σ (the first σ iterations are
performed normally), when considering the previous set Sρ−1 of
substitute training inputs, we select κ inputs from Sρ−1 to be
augmented in Sρ. These κ inputs are selected using reservoir
sampling. This
technique ensures that each input in Sρ−1 has an equal probability
1/|Sρ−1| of being augmented in Sρ. The number
of queries made to the oracle is reduced from n⋅2^ρ for the vanilla
Jacobian-based augmentation to n⋅2^σ + κ⋅(ρ−σ) for
the Jacobian-based augmentation with reservoir sampling. In Section [7](#S7 "7 Generalization of the Attack ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples"), we show that using reservoir sampling to reduce
the number of synthetic training inputs does not significantly degrade the substitute accuracy.
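The two refinements are simple to implement; the sketch below shows the periodic step size of Equation (7) and a standard reservoir-sampling selection of κ inputs [16], together with the query-count arithmetic above. Function and variable names are our own.

```python
import numpy as np

rng = np.random.default_rng(3)

def periodic_step(lam, rho, tau):
    """lambda_rho = lambda * (-1)**floor(rho / tau), the periodic step size of Eq. (7)."""
    return lam * (-1) ** (rho // tau)

def reservoir_sample(items, kappa):
    """Uniformly pick kappa items in a single pass (classic reservoir sampling [16])."""
    reservoir = []
    for i, item in enumerate(items):
        if i < kappa:
            reservoir.append(item)
        else:
            j = rng.integers(0, i + 1)          # uniform index in [0, i]
            if j < kappa:
                reservoir[j] = item
    return reservoir

def query_count(n, rho, sigma, kappa):
    """Oracle queries with RS: n * 2**sigma + kappa * (rho - sigma), vs. n * 2**rho without."""
    return n * 2 ** sigma + kappa * (rho - sigma)

print(periodic_step(0.1, rho=5, tau=3))                  # -0.1: sign flipped after tau epochs
print(len(reservoir_sample(range(1000), kappa=400)))     # 400 inputs kept for augmentation
print(query_count(n=100, rho=10, sigma=3, kappa=400))    # 3,600 queries, as in Section 7
```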
### 6.2 Adversarial Sample Crafting
We compare the transferability of
adversarial samples produced by each algorithm introduced
previously [[6](#bib.bib6), [3](#bib.bib3)]. We first calibrate algorithms to
improve the transferability of adversarial samples produced. We then compare
the results to elect the strongest technique under our threat model.
Goodfellow’s algorithm: Recall from Equation [5](#S4.E5 "(5) ‣ 4.2 Adversarial Sample Crafting ‣ 4 Black-Box Attack Strategy ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples") the perturbation computed in the Goodfellow attack.
Its only tunable parameter is
ε: the variation added in
the direction of the gradient sign. We use the same architecture set as
before to quantify the impact of ε on
adversarial sample transferability.
In Figure [8](#S6.F8 "Fig. 8 ‣ 6.2 Adversarial Sample Crafting ‣ 6 Attack Algorithm Calibration ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples"), architecture A outperforms
all others: it is a copy of the oracle’s and acts as a baseline. Other
architectures have asymptotic transferability rates ranging between 72.24%
and 80.21%, confirming that *the substitute architecture choice has
a limited impact on transferability*. Increasing the value of ε above
0.4 yields little improvement in transferability and should be avoided
to guarantee indistinguishability of adversarial samples to humans.
The curves can be partitioned in two groups: one corresponding
to architectures using convolutions (A, F, G, H, L, M) and a second to those
using fully connected layers only (I, J, K).
Papernot’s algorithm: This algorithm is fine-tuned by
two parameters: the *maximum distortion* Υ and the *input
variation* ε. The maximum distortion defines the number of input components that are altered in
the perturbation δ→x. (In the original algorithm [[3](#bib.bib3)], perturbation stopped once the input
reached a target class different from the source class; here, we force it to continue
until it has changed Υ input components.) The input variation,
similarly to the Goodfellow algorithm, controls the amount of change induced to
altered input components.
We first evaluate the impact of the maximum distortion Υ on
adversarial sample transferability. For now, components selected to be
perturbed are increased by ε=1. Intuitively, one expects that
increasing the maximum distortion will make adversarial samples more
transferable. Indeed, even though some adversarial samples would be
misclassified by the substitute DNN with lower distortions, higher distortions
increase the confidence of the substitute DNN making a misclassification, and
also increases the likelihood of the oracle also misclassifying the sample. In
Figure [9](#S6.F9 "Fig. 9 ‣ 6.2 Adversarial Sample Crafting ‣ 6 Attack Algorithm Calibration ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples"), we confirm this intuition with
different values of the maximum distortion Υ. Increasing distortion Υ from
7.14% to 28.57% improves transferability: at a 7.14% distortion, the
average transferability across all architectures is 14.70%
whereas at a 28.57% distortion, the average transferability is at 55.53%.
We now quantify the impact of the variation
ε introduced to each input component selected in
δ→x. We find that reducing the input variation
from 1 to 0.7
significantly degrades adversarial sample transferability,
approximately by a factor of 2 (cf. Figure [10](#S6.F10 "Fig. 10 ‣ 6.2 Adversarial Sample Crafting ‣ 6 Attack Algorithm Calibration ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples")). This is explained by the fixed
distortion parameter Υ, which prevents the crafting algorithm
from increasing the number of components altered to compensate for the reduced effectiveness yielded by the smaller ε.

Fig. 8: Impact of input variation ε in the Goodfellow crafting algorithm on the transferability of adversarial samples: for architectures from Table [I](#S6.T1 "TABLE I ‣ 6.1 Calibrating Substitute DNN Training ‣ 6 Attack Algorithm Calibration ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples").

Fig. 9: Impact of the maximum distortion Υ in the Papernot algorithm on success rate and transferability of adversarial samples: increasing Υ, even beyond 100% success rate yields higher transferability rates across DNNs.

Fig. 10: Impact of the input variation ε in the Papernot algorithm on the success rate and adversarial sample transferability computed for ε∈{0.5,0.7,1} on DNNs from Table [I](#S6.T1 "TABLE I ‣ 6.1 Calibrating Substitute DNN Training ‣ 6 Attack Algorithm Calibration ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples") with distortion Υ=39.80%.
Comparing Crafting Algorithms: To compare the two crafting strategies and their differing perturbation styles fairly,
we compare their success rate given a fixed L1 norm of the introduced perturbation δ→x, which can be defined as:
$$\|\delta\vec{x}\|_1 = \varepsilon \cdot \|\delta\vec{x}\|_0 \tag{8}$$
where ∥δ→x∥0 is the number of input components
selected in the perturbation δ→x (expressed here as a fraction of all input components),
and ε the input
variation introduced to each component perturbed.
For the Goodfellow algorithm, we always
have ∥δ→x∥0=1, whereas for the Papernot algorithm, values vary
for both ε and ∥δ→x∥0. For instance,
∥δ→x∥1=0.4 corresponds to a
Goodfellow algorithm with ε=0.4 and a Papernot algorithm with
ε=1 and Υ=40%. Corresponding transferability
rates can be found in
Table [I](#S6.T1 "TABLE I ‣ 6.1 Calibrating Substitute DNN Training ‣ 6 Attack Algorithm Calibration ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples") and
Figure [9](#S6.F9 "Fig. 9 ‣ 6.2 Adversarial Sample Crafting ‣ 6 Attack Algorithm Calibration ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples") for our running set of architectures.
Performances are comparable with some DNNs performing better with one algorithm and others with the other.
Thus, the choice of algorithm depends on acceptable perturbations:
e.g., all features perturbed a little vs. few features perturbed a lot.
Indeed, the Goodfellow algorithm gives more control on ε while the
Papernot algorithm gives more control on Υ.
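As a quick numeric check of this equivalence (our arithmetic on the example values above):

```python
# Goodfellow: all components perturbed (fraction ||dx||_0 = 1), each by eps = 0.4.
goodfellow_l1 = 0.4 * 1.0
# Papernot: a fraction Upsilon = 40% of components perturbed, each by eps = 1.
papernot_l1 = 1.0 * 0.40
assert goodfellow_l1 == papernot_l1 == 0.4   # same L1 perturbation budget (Eq. 8)
print(goodfellow_l1, papernot_l1)
```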
7 Generalization of the Attack
-------------------------------
So far, we studied substitutes and oracles all learned with DNNs.
However, no part of the attack limits its applicability to other models.
As Equation [4](#S4.E4 "(4) ‣ 4.1 Substitute Model Training ‣ 4 Black-Box Attack Strategy ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples") indicates, the only limitation
on the *substitute* is that it must implement a differentiable
function, to allow for synthetic data generation with the Jacobian.
Nor does the attack make any assumption about the *target oracle*: for instance, the oracle does not necessarily have to be differentiable.
We show below that:
* Substitutes can also be learned with logistic regression.
* The attack generalizes to additional machine learning models by:
(1) learning substitutes of 4 classifier types
(logistic regression, SVM, decision tree, nearest neighbors) in addition to DNNs,
and (2) targeting remote models hosted by Amazon Web Services and Google Cloud
Prediction with success rates of 96.19% and 88.94% after 800 queries
to train the substitute.
### 7.1 Generalizing Substitute Learning
We here show that the substitute training algorithm introduced in Section [4](#S4 "4 Black-Box Attack Strategy ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples") and calibrated in Section [6](#S6 "6 Attack Algorithm Calibration ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples") is applicable to many target machine learning techniques.
We evaluate its performance on 5 representative types of machine learning classifiers: a DNN, logistic regression (LR), SVM, decision tree (DT),
and nearest neighbor (kNN).
Whereas we previously trained substitutes using DNNs only, we now use both DNNs and LR as substitute models.
The Jacobian-based dataset augmentation introduced with DNNs is easily
adapted to logistic regression: the latter is analogous to the softmax layer frequently used by the former when outputting probability vectors.
The 5 target classifiers are all trained on the 50,000 sample
MNIST training set. We
use 100 samples from the MNIST test set as
the initial substitute training set and use the two refinements introduced in Section [6](#S6 "6 Attack Algorithm Calibration ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples"): a *periodic step size* and *reservoir sampling*.
Figure [11](#S7.F11 "Fig. 11 ‣ 7.1 Generalizing Substitute Learning ‣ 7 Generalization of the Attack ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples")(a) and [11](#S7.F11 "Fig. 11 ‣ 7.1 Generalizing Substitute Learning ‣ 7 Generalization of the Attack ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples")(b) plot for each iteration ρ the
share of samples on which the substitute DNNs and LRs agree with predictions made by
the classifier oracle they are approximating. This proportion is estimated by
comparing labels assigned to the test set by the substitutes and
oracles before each iteration ρ of the Jacobian-based dataset
augmentation.
All substitutes are able to
approximate the corresponding oracle at rates between 77% and 83% after ρ=10 iterations (with the exception of the decision tree oracle, which could be due to its non-continuity).
LR substitute accuracies are
competitive with those of DNN substitutes, especially for target LR and SVM oracles. The benefit of LR substitutes compared
to DNNs is that they reach their asymptotic match rate faster,
after ρ=3 iterations, corresponding to 800 oracle
queries. They are also computationally more efficient.
Table [II](#S7.T2 "TABLE II ‣ 7.1 Generalizing Substitute Learning ‣ 7 Generalization of the Attack ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples") quantifies the impact of refinements introduced in Section [6](#S6 "6 Attack Algorithm Calibration ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples") on results reported in Figure [11](#S7.F11 "Fig. 11 ‣ 7.1 Generalizing Substitute Learning ‣ 7 Generalization of the Attack ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples")(a) and [11](#S7.F11 "Fig. 11 ‣ 7.1 Generalizing Substitute Learning ‣ 7 Generalization of the Attack ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples")(b).
The *periodic step size* (PSS) increases the oracle approximation accuracy of
substitutes. After
ρ=9 epochs, a substitute DNN trained with PSS
matches 89.28% of the DNN oracle labels, whereas the vanilla substitute
DNN matches only 78.01%.
Similarly, the LR substitute with PSS matches 84.01% of the LR oracle labels while the vanilla substitute matched 72.00%. Using *reservoir sampling* (RS) reduces oracle querying. For
instance, 10 iterations with RS (σ=3 and κ=400) make 100⋅2^3 + 400⋅(10−3) = 3,600 queries to the oracle instead of
102,400 without RS. This decreases the substitute accuracy,
but when combined with PSS it remains superior to the vanilla substitutes. For instance, the vanilla substitute matched 7,801 of the DNN oracle labels,
the PSS one 8,928, and the PSS with RS one 8,290. Similarly,
the vanilla LR substitute matched 71.56% of the SVM oracle labels, the PSS one 82.19%, and the PSS with RS 79.20%.
(a) DNN substitutes  (b) LR substitutes
Fig. 11: Label predictions matched between the DNN and LR substitutes and their target classifier oracles on test data.
| Substitute | DNN | LR | SVM | DT | kNN |
| --- | --- | --- | --- | --- | --- |
| DNN | 78.01 | 82.17 | 79.68 | 62.75 | 81.83 |
| DNN+PSS | 89.28 | 89.16 | 83.79 | 61.10 | 85.67 |
| DNN+PSS+RS | 82.90 | 83.33 | 77.22 | 48.62 | 82.46 |
| LR | 64.93 | 72.00 | 71.56 | 38.44 | 70.74 |
| LR+PSS | 69.20 | 84.01 | 82.19 | 34.14 | 71.02 |
| LR+PSS+RS | 67.85 | 78.94 | 79.20 | 41.93 | 70.92 |
TABLE II: Impact of our refinements, Periodic Step Size (PSS) and Reservoir Sampling (RS), on the percentage of label predictions matched between the substitutes and their target classifiers on test data after ρ=9 substitute iterations.
| Epochs | Queries | Amazon DNN | Amazon LR | Google DNN | Google LR |
| --- | --- | --- | --- | --- | --- |
| ρ=3 | 800 | 87.44% | 96.19% | 84.50% | 88.94% |
| ρ=6 | 6,400 | 96.78% | 96.43% | 97.17% | 92.05% |
| ρ=6∗ | 2,000 | 95.68% | 95.83% | 91.57% | 97.72% |
TABLE III: Misclassification rates of the Amazon and Google oracle on adversarial samples produced with DNN and LR substitutes after ρ=3,6 epochs. The 2nd column indicates the number of queries made to train the substitute. Last row uses both a periodic step size and reservoir sampling.
### 7.2 Attacks against Amazon & Google oracles
Amazon oracle: To train a classifier on *Amazon Machine Learning* (<https://aws.amazon.com/machine-learning>), we uploaded a CSV version of the MNIST dataset to an S3 bucket.
We then loaded the data, selected the multi-class model type, and kept the default configuration settings. The process took a few minutes and produced a classifier achieving a 92.17% test set accuracy. We cannot improve this accuracy due to the automated nature of training. We then activated real-time predictions so we could query the model for labels from our machine through the provided API.
Although probabilities are returned, we discard them and retain
*only the most likely label*—as stated in our threat model
(Section [3](#S3 "3 Threat Model ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples")).
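Conceptually, every query made during substitute training therefore passes through a wrapper of the following form. This is a sketch, not the actual Amazon client code; `remote_predict_proba` is a hypothetical stand-in for whichever remote prediction API is being called:

```python
import numpy as np

def query_oracle_label(remote_predict_proba, x):
    """Query the remote oracle on one input and keep only the most likely
    label, discarding any class probabilities the service returns."""
    probabilities = np.asarray(remote_predict_proba(x))  # e.g. length-10 vector for MNIST
    return int(np.argmax(probabilities))
```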
Google oracle: The procedure to train a classifier on Google’s Cloud Prediction API (<https://cloud.google.com/prediction/>) is similar to Amazon’s. We
upload the CSV file with the MNIST
training data to Google Cloud Storage.
We then train a model using the Prediction API.
The only property we can
specify is the expected multi-class nature of our model.
We then evaluate the resulting
model on the MNIST test set.
The API reports
an accuracy of 92% on this test set for the model trained.
Substitute Training: By augmenting an initial training set of 100 test set samples, we
train a DNN and LR substitute for each of the two oracles. We measure success as the rate of adversarial
samples misclassified by the corresponding oracle, among the 10,000 produced from the test set using the fast gradient sign method with parameter ε=0.3. These rates, computed after ρ∈{3,6} dataset augmentation iterations, are reported in Table [III](#S7.T3 "TABLE III ‣ 7.1 Generalizing Substitute Learning ‣ 7 Generalization of the Attack ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples"). Results reported in the last row use both a periodic step size and reservoir sampling (hence the reduced number of queries made to train the substitute).
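As a rough sketch of this measurement (assuming `substitute_grad` is a hypothetical function returning the gradient of the substitute's training loss with respect to the input, e.g. obtained from any autodiff framework, and `oracle_predict` returns the oracle's hard label):

```python
import numpy as np

def fgsm(substitute_grad, x, y, eps=0.3):
    """Fast gradient sign method: move x in the direction of the sign of the
    substitute's loss gradient, then clip back to the valid pixel range."""
    x_adv = x + eps * np.sign(substitute_grad(x, y))
    return np.clip(x_adv, 0.0, 1.0)

def oracle_misclassification_rate(substitute_grad, oracle_predict, x_test, y_test, eps=0.3):
    """Fraction of adversarial samples crafted on the substitute that the
    oracle labels differently from the true class."""
    hits = 0
    for x, y in zip(x_test, y_test):
        x_adv = fgsm(substitute_grad, x, y, eps)
        hits += int(oracle_predict(x_adv) != y)
    return hits / len(y_test)
```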
Experimental Results: With a 96.19% misclassification
rate for a perturbation ε=0.3 crafted using a LR substitute
trained with 800 oracle queries, the model hosted by Amazon is easily
misled. The model trained by Google is somewhat more
robust to adversarial samples, but
is still vulnerable to a large proportion of samples: 88.94% of adversarial
samples produced in the same conditions are misclassified. A careful read of the documentation indicated that the model trained by Amazon is a multinomial logistic regression (see [docs.aws.amazon.com/machine-learning/latest/dg/types-of-ml-models.html](http://docs.aws.amazon.com/machine-learning/latest/dg/types-of-ml-models.html)).
As pointed out in [[6](#bib.bib6)], shallow models like logistic regression
are unable to cope with adversarial samples and learn robust classifiers. This explains why the attack is very successful and the LR
substitute performs better than the DNN substitute. We were, however, not able to determine which ML technique Google uses.
The last row of Table [III](#S7.T3 "TABLE III ‣ 7.1 Generalizing Substitute Learning ‣ 7 Generalization of the Attack ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples") shows how combining periodic step sizes with reservoir sampling allows us to reduce querying of both oracles during substitute training, while crafting adversarial samples with higher transferability to the target classifier.
Indeed, querying is reduced by a factor larger than 3 from 6,400 to 2,000 queries,
while misclassification decreases only from 96.78% to 95.68% for the Amazon DNN substitute.
It is still larger than
the rate of 87.44% achieved after 800 queries by the substitute learned without the refinements.
Similarly, the
misclassification rate of the Google LR substitute is 97.72%—compared to 92.05% with the original method after ρ=6 epochs, confirming the result.
8 Defense Strategies
---------------------
Many potential defense mechanisms fall into a category we call gradient masking.
These techniques construct a model that does not have useful
gradients, e.g., by using a nearest neighbor classifier instead
of a DNN.
Such methods make it difficult to construct an adversarial example directly,
due to the absence of a gradient, but are often still vulnerable to the adversarial
examples that affect a smooth version of the same model.
Previously, it has been shown that nearest neighbor was vulnerable to attacks
based on transferring adversarial examples from smoothed nearest neighbors[[6](#bib.bib6)].
We show a more general flaw in the category of gradient masking.
Even if the defender attempts to prevent attacks by not publishing
the directions in which the model is sensitive, these directions
can be discovered by other means, in which case the
same attack can still succeed.
We show that the black-box attack based on transfer from a substitute model
overcomes gradient masking defenses. No fully effective defense mechanism is known, but we study the two with the
greatest empirical success so far:
adversarial training [[6](#bib.bib6), [5](#bib.bib5)], and
defensive distillation for DNNs [[12](#bib.bib12)].
Adversarial training:
It was shown that injecting adversarial examples throughout training increases
the robustness of significantly descriptive models, such as DNNs [[6](#bib.bib6), [5](#bib.bib5), [17](#bib.bib17)].
We implemented an approximation of this defense using the Google Prediction API.
Since the API does not support the generation of adversarial examples
at every step of training, as a correct implementation of adversarial training would
do, we instead inject a large number of adversarial examples infrequently.
After training in this way, the model has a misclassification rate of 8.75% on
the unperturbed test set,
but the adversarial misclassification rate rises to 100% when ρ=6.
To evaluate this defense strategy using a correct implementation, we resort
to training the oracle locally, using our own codebase that includes support for
generating adversarial examples at each step.
After each training batch, we compute and train on adversarial examples
generated with the fast gradient sign method before starting training on the next batch of the
original training data.
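A sketch of this loop follows (the `loss_gradient` and `train_on_batch` methods are hypothetical names standing in for the corresponding operations of whatever training framework is used):

```python
import numpy as np

def adversarially_train_epoch(model, batches, eps=0.3):
    """After each clean batch, craft FGSM examples against the model's
    current parameters and train on them before the next clean batch."""
    for x_batch, y_batch in batches:
        model.train_on_batch(x_batch, y_batch)              # clean data
        grads = model.loss_gradient(x_batch, y_batch)       # gradient w.r.t. inputs
        x_adv = np.clip(x_batch + eps * np.sign(grads), 0.0, 1.0)
        model.train_on_batch(x_adv, y_batch)                # adversarial data
```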
Results are given in Table [IV](#S8.T4 "TABLE IV ‣ 8 Defense Strategies ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples").
We observe that for ε=0.15, the defense can be evaded using the
black-box attack with adversarial examples crafted on the substitute and
misclassified by the oracle at rates up to 71.25%.
However, for ε=0.3, the black-box attack is not effective anymore.
Therefore, making a machine learning model robust to small and infinitesimal
perturbations of its inputs is an example of *gradient masking* and can
be evaded using our substitute-based black-box approach.
However, making the model robust to larger and finite perturbations prevents
the black-box attack.
To confirm this hypothesis, we now show that defensive distillation, which
makes the model robust to infinitesimal perturbations, can be evaded by the
black-box approach.
Defensive distillation: Defensive
distillation is an alternative defense strategy.
Due to space constraints, we refer readers to [[12](#bib.bib12)] for
a detailed presentation.
Because the remotely hosted APIs we study here do not implement defensive distillation or provide
primitives that could be used to implement it,
we are forced to evaluate this defense on a locally trained oracle.
We thus train a distilled neural network to act as our MNIST oracle.
We train several variants of the neural network architecture A at different
distillation temperatures T=5,10,100.
For each of them, we measure the success of the fast gradient sign attack
directly performed on the distilled oracle—as a baseline corresponding to a
white-box attack—and using a substitute DNN trained with synthetic data as
described throughout the present paper.
The results are reported in Figure [12](#S8.F12 "Fig. 12 ‣ 8 Defense Strategies ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples") for different values of the input variation parameter ε on the horizontal axis. As demonstrated in [[12](#bib.bib12)] for the Papernot attack algorithm, we find that defensive distillation defends against the fast gradient sign method when the attack is performed directly on the distilled model, i.e. in *white-box settings*. However, in *black-box* settings using the attack introduced in the present paper, the fast gradient sign method is found to be successful regardless of the distillation temperature used by the oracle. We hypothesize that this is due to the way distillation defends against the attack: it reduces the gradients in local neighborhoods of training points. However, our substitute model is not distilled, and as such possesses the gradients required for the fast gradient sign method to be successful.
Defenses which make models robust in a small neighborhood of the training manifold perform *gradient masking*: they smooth the decision surface and reduce gradients used by adversarial crafting in small neighborhoods. However, using a substitute and our black-box approach evades these defenses, as the substitute model is not trained to be robust to said small perturbations. *We thus conclude that defending against finite perturbations is a more promising avenue for future work than defending against infinitesimal perturbations.*
| Training ε | Attack ε | O→O | S → S | S → O |
| --- | --- | --- | --- | --- |
| 0.15 | 0.3 | 10.12% | 94.91% | 38.54% |
| 0.15 | 0.4 | 43.29% | 99.75% | 71.25% |
| 0.3 | 0.3 | 0.91% | 93.55% | 1.31% |
| 0.3 | 0.4 | 29.56% | 99.48% | 10.30% |
TABLE IV: Evaluation of adversarial training:
the columns indicate the input variation parameter used to
inject adversarial examples during training and to compute the attacks,
the attack success rate when examples crafted on the (O)racle are deployed against the (O)racle,
the attack success rate when examples crafted on the (S)ubstitute are deployed against the (S)ubstitute,
and
the attack success rate when examples crafted on the (S)ubstitute are deployed against the (O)racle.

Fig. 12: Evaluation of defensive distillation: Percentage of adversarial examples crafted using the Goodfellow algorithm at varying ε misclassified by the oracle. T is the temperature of distillation [[12](#bib.bib12)]. Curves marked by (direct) indicate baseline attacks computed on the oracle; all other curves were computed using a substitute, as described in Section [4](#S4 "4 Black-Box Attack Strategy ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples"). Despite distillation preventing the attack on the oracle directly, using a substitute allows us to evade it.
9 Intuition for Transferability
--------------------------------
Previous work started explaining why adversarial samples transfer between different
architectures [[6](#bib.bib6), [5](#bib.bib5)]. Here, we
build an intuition behind transferability based on statistical
hypothesis testing [[18](#bib.bib18)] and an analysis of DNN cost gradient sign matrices. A formal treatment is left as
future work.
Recall the perturbation
in the Goodfellow algorithm.
Inspecting Equation [5](#S4.E5 "(5) ‣ 4.2 Adversarial Sample Crafting ‣ 4 Black-Box Attack Strategy ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples"), it is clear that, given a sample $\vec{x}$, the noise added would be the same for two DNNs F and G if $\mathrm{sgn}(\nabla_{\vec{x}}\mathrm{cost}(F,\vec{x},y))$ and $\mathrm{sgn}(\nabla_{\vec{x}}\mathrm{cost}(G,\vec{x},y))$ were equal.
These matrices have entries in {+1,−1}.
Let us write the space of these matrices as $\mathrm{Sgn}_{n\times m}$.
Assume that the samples $\vec{x}$ are generated from a population distribution D (e.g., in our case the distribution from which the images of digits are drawn). The formula $\mathrm{sgn}(\nabla_{\vec{x}}\mathrm{cost}(F,\vec{x},y))$ and D induce a distribution DF over $\mathrm{Sgn}_{n\times m}$ (i.e. randomly draw a sample from the distribution D and compute the quantity). Similarly, DNN G and distribution D induce a distribution DG over $\mathrm{Sgn}_{n\times m}$. Our main conjecture is:
>
> For two “similar” architectures F and G, the distributions DF and DG induced by a population distribution D are highly correlated.
>
>
>
If distributions DF and DG were independent, then the noise they add during adversarial sample crafting would be independent. In this case, our intuition is that adversarial samples would not transfer (in the two cases, the noise being added is independent). The question is: how can we verify our conjecture despite the population distribution D being unknown? We turn to statistical hypothesis testing. We can empirically estimate the distributions DF and DG based on known samples. First, we generate two sequences of sign matrices σ1=⟨M1,M2,⋯⟩ and σ2=⟨N1,N2,⋯⟩ using the sample set (e.g. MNIST) for a substitute DNN F and oracle G. Next we pose the following null hypothesis:
>
> HN: The sequences σ1 and σ2 are drawn from independent
> distributions.
>
>
>
>
We use standard tests from the statistical hypothesis testing literature to
test the hypothesis HN. If the hypothesis HN is rejected, then we
know that the sign matrices corresponding to the two architectures F and G
are correlated.
We describe the test we use. There are several algorithms
for hypothesis testing: we picked a simple one based on a chi-square
test. An investigation of other hypothesis-testing
techniques is left as future work. Let $p_{i,j}$ and $q_{i,j}$ be the frequency of +1 in the (i,j)-th entry of matrices in sequences σ1 and σ2, respectively. Let $r_{i,j}$ be the frequency of the (i,j)-th entry being +1 in both sequences σ1 and σ2 simultaneously. (We assume that the frequencies are normalized so they can be interpreted as probabilities, and also that all frequencies are >0 to avoid division by zero, which can be achieved by rescaling.) Note that if the distributions were independent, then $r_{i,j}=p_{i,j}q_{i,j}$. However, if the distributions are correlated, then we expect $r_{i,j}\neq p_{i,j}q_{i,j}$.
Consider the quantity:

$$\chi^{2\star}=\sum_{i=1}^{m}\sum_{j=1}^{n}\frac{\left(r_{i,j}N-p_{i,j}q_{i,j}N\right)^{2}}{p_{i,j}q_{i,j}N}$$
where N is the number of samples. In the χ-square test, we compute the probability $P(\chi^{2}>\chi^{2\star})$, where $\chi^{2}$ has $(m−1)(n−1)=27\times 27=729$ degrees of freedom for the MNIST data. The $\chi^{2\star}$ scores for substitute DNNs from Table [I](#S6.T1 "TABLE I ‣ 6.1 Calibrating Substitute DNN Training ‣ 6 Attack Algorithm Calibration ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples") range between 61,403 for DNN A and 88,813 for DNN G. Corresponding p-values are below $10^{-5}$ for all architectures, with confidence p<0.01.
Thus, for all substitute DNNs, the hypothesis HN is largely rejected: sequences σ1 and σ2, and therefore the sign matrices corresponding to pairs of a substitute DNN and the oracle, are highly correlated. As a baseline comparison, we generate 2 random sign matrices and compute the corresponding $\chi^{2\star}$ score: 596. We find a p-value of 0.99 with a confidence of 0.01, meaning that these matrices were indeed drawn from independent distributions.
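For reference, the statistic above can be estimated directly from the two sequences of sign matrices. A sketch, assuming `sigma1` and `sigma2` are hypothetical NumPy arrays of shape (N, n, m) with entries in {+1, −1}:

```python
import numpy as np

def chi_square_score(sigma1, sigma2, eps=1e-6):
    """Chi-square score measuring correlation between two sequences of cost
    gradient sign matrices, each of shape (N, n, m) with entries in {+1, -1}."""
    N = sigma1.shape[0]
    p = (sigma1 == 1).mean(axis=0)                     # per-entry frequency of +1 in sequence 1
    q = (sigma2 == 1).mean(axis=0)                     # per-entry frequency of +1 in sequence 2
    r = ((sigma1 == 1) & (sigma2 == 1)).mean(axis=0)   # frequency of simultaneous +1
    # Rescale away from exact 0 or 1 to avoid division by zero, as noted in the text.
    p, q, r = (np.clip(a, eps, 1.0 - eps) for a in (p, q, r))
    return float(np.sum((r * N - p * q * N) ** 2 / (p * q * N)))
```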

Fig. 13: Frequencies of cost gradient sign matrix components equal between substitute A and the oracle at substitute training epochs ρ∈{0,3,6} (three on the right), compared to a pair of random sign matrices (first image).
However, we must now complete our analysis to characterize the correlation
suggested by the hypothesis testing. In Figure [13](#S9.F13 "Fig. 13 ‣ 9 Intuition for Transferability ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples"), we
plot the frequency matrix $R=[r_{i,j}]$ for several pairs of matrices. The first is a
pair of random matrices of {+1,−1}. The other matrices correspond to
substitute DNN A and the oracle at different substitute training epochs ρ. Frequencies are computed using the 10,000 samples of the MNIST test set. Although
all frequencies in the random pairs are very close to 1/2, frequencies
corresponding to pixels located in the center of the image are higher in the
(substitute,oracle) matrix pairs. The phenomenon amplifies as
training progresses through the substitute epochs. We then compute the
frequencies separately for each sample source
class in Figure [14](#S9.F14 "Fig. 14 ‣ 9 Intuition for Transferability ‣ Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples"). Sign matrices agree on pixels relevant for classification in each
class.
We plotted similar figures for other substitute DNNs. They are not included due to space constraints.
They show that substitutes yielding lower transferability
also have less components of their cost gradient
sign matrix frequently equal to the oracle’s. This suggests that
*correlations between the respective sign matrices of the substitute DNN
and of the oracle—for input components that are relevant
to classification in each respective class—could explain cross-model adversarial sample transferability.*

Fig. 14: Frequencies of cost gradient sign matrix components equal between substitute A and the oracle
10 Discussion and Related Work
-------------------------------
This paper contributes to a line of works on machine learning security [[19](#bib.bib19), [13](#bib.bib13), [1](#bib.bib1), [3](#bib.bib3)].
Previous attacks evading machine learning classifiers at test time exploited knowledge of the model
internals [[2](#bib.bib2), [6](#bib.bib6), [3](#bib.bib3), [5](#bib.bib5)]
or transferability from models learned using hard to collect and expensive labeled datasets [[2](#bib.bib2), [6](#bib.bib6), [5](#bib.bib5)].
The black-box threat model explored in this paper represents a more serious attack on deep learning.
Xu et al. applied a genetic algorithm to malware detection evasion [[4](#bib.bib4)].
Unlike ours, it accesses the probabilities assigned by the classifier to compute the fitness of genetic variants. These probabilities can be concealed by defenders. The attack is also not very efficient: 500 evading variants are found in 6 days. As the classifier is queried heavily, the authors conclude that the attack cannot be used against remote targets.
Finally, given the attack’s high cost on low-dimensional random forests and SVMs, it is unlikely the approach would scale to DNNs.
Srndic et al. explored the strategy of training a substitute model to find evading inputs [[9](#bib.bib9)].
They do so using labeled data, which is expensive to collect,
especially for models like DNNs.
In fact, their attack is evaluated only on random forests and an SVM.
Furthermore, they exploit a semantic gap between the specific
classifiers studied and PDF renderers.
Finally, they assume knowledge of hand-engineered high-level features whereas
we perform attacks on raw inputs.
Tramer et al. used partial knowledge of models and equation solving to recover parameters from classifiers hosted by BigML and Amazon [[10](#bib.bib10)]. They do not perform experiments against DNNs. To
recover the 2,225 parameters of a very shallow neural network (one 20 neuron hidden layer) trained on a local machine, they make 108,200 label queries. Instead, we train substitute DNNs with 8 hidden layers (each with hundreds of neurons), which have over 100,000 parameters, with 2,000 label queries. Finally their work does not discuss adversarial example crafting: it is unclear whether the parameters recovered allow the adversary to craft adversarial examples misleading the remote classifier.
11 Conclusions
---------------
We introduced an attack against black-box DNNs. Our work is a
significant step towards relaxing strong assumptions about adversarial
capabilities made by previous attacks. We introduced a novel substitute training algorithm based on synthetic data generation to
craft adversarial examples misclassified by black-box DNNs without access to
their model or training set.
Our attack requires that the adversary is capable of observing only labels
assigned by the model to inputs of its choice.
We validated our attack design by
targeting a remote DNN served by MetaMind, forcing it to misclassify
84.24% of our adversarial samples. We also conducted an extensive
calibration of our attack algorithm and generalized it to other machine learning models by instantiating it against classifiers hosted by Amazon and Google, with success rates of 96.19% and 88.94%. We found our attack to evade a category of defenses we call *gradient masking* previously proposed to increase resilience to adversarial examples. Finally, we provided an intuition for
adversarial sample transferability across DNNs. Future work should deploy this attack against additional oracles in different application domains to identify possible challenges.
Appendix
--------
| ID | In | Out | CM | CM | RL | RL | RL | S |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| A | 784 | 10 | 32 | 64 | 200 | 200 | - | 10 |
| B | 3072 | 43 | 64 | 128 | 256 | 256 | - | 43 |
| C | 3072 | 43 | 32 | 64 | 200 | 200 | - | 43 |
| D | 3072 | 43 | 32 | 64 | 200 | 200 | - | 43 |
| E | 3072 | 43 | 64 | 64 | 200 | 200 | 100 | 43 |
| F | 784 | 10 | 32 | 64 | 200 | - | - | 10 |
| G | 784 | 10 | 32 | 64 | - | - | - | 10 |
| H | 784 | 10 | 32 | - | 200 | 200 | - | 10 |
| I | 784 | 10 | - | - | 200 | 200 | 200 | 10 |
| J | 784 | 10 | - | - | 1000 | 200 | - | 10 |
| K | 784 | 10 | - | - | 1000 | 500 | 200 | 10 |
| L | 784 | 10 | 32 | - | 1000 | 200 | - | 10 |
| M | 784 | 10 | 32 | - | - | 200s | 200s | 10 |
Fig. 15: DNN architectures: ID: reference used in the paper, In: input dimension, Out: output dimension, CM: convolutional layer with 2x2 kernels followed by max-pooling with kernel 2x2, RL: rectified linear layer except for 200s where sigmoid units are used, S: softmax layer. |
7a35313a-8c1f-4f39-b1a5-ad63ceb8a531 | trentmkelly/LessWrong-43k | LessWrong | Really radical empathy
Summary
It seems to me that almost every view of what matters for welfare — hedonism, desire theories, preference views, and objective list theories — misses a lot of what we care about, projects concerns we don’t actually have onto us, or otherwise fails to care about them on our behalves as we (would actually) care about them. In one way or the other, they fall short in empathy.
Ways of caring are the ways by which things can seem good, bad, better or worse to someone. These at least include pleasure, unpleasantness, desires, moral intuitions, moral judgements, conscious preferences, conscious approval and disapproval, conscious goals, and potentially even the dispositions for these. I conceive of radical empathy as taking on all of everyone’s ways of caring, and caring exactly about what they (would actually) care about on their behalf.
1. All ways of caring seem morally considerable to me, so nonhuman animals matter and so could emotionless artificial minds who consciously approve or disapprove. I will also use the terms preferences and attitudes for ways of caring (more).
2. I illustrate how hedonism and many desire and preference-based views neglect the things we do care about or are concerned with things we don’t care about, and so may fail to appropriately track what we actually care about (more).
3. I describe and further motivate object views as being concerned exactly with what we care about (more).
4. I motivate impartial object views as views taking on every (actual) stance or attitude impartially, and so stance-dependent and subjectivist, rather than moral realist (more).
Acknowledgements
Thanks to Ariel Simnegar, Lukas Gloor, Justis Mills, Tori, JackM and Vasco Grilo for helpful feedback. All errors are my own.
Ways of caring
Feelings are kinds of subjective appearances: if something feels good to someone, then it seems good to them, in some way. Pleasure is therefore a way by which something can seem good to someone. Desire is another |
487fa101-f826-4f33-b610-7c01cbebaf7e | trentmkelly/LessWrong-43k | LessWrong | Less Wrong London meetup, tomorrow (Sunday 2010-04-04) 16:00
UPDATE: Backup plan is to meet at the Starbucks across the road (16 Piccadilly, London W1J 0DE, 020 7287 8311). I've been trying to ring the Waterstones and the coffee shop for a while now and waited several minutes for an answer with no success, so I think it's very likely that it is closed. I've called the Starbucks and it's open. If I know you on here, mail me (paul at ciphergoth dot org) and I'll give you my mobile number.
In the grand tradition of giving almost no notice for London meetups, I bring to your attention that a meetup is planned for tomorrow (Sunday 2010-04-04), at 16:00, in the 5th View cafe on top of Waterstone's bookstore. Nearest Tube Piccadilly Circus. Yvain, taw, RichardKennaway, and myself at least hope to be there, doubtless others too!
We should try to give more notice for the next one. This is the first Sunday in April; how about the first Sunday in June for the next one, 2010-06-06? I'd prefer an earlier time and it might be worth experimenting with a different venue, but if we can fix a date we can vary other details closer to the time. |
2283a055-112f-400f-a488-3f2a463e2ce0 | trentmkelly/LessWrong-43k | LessWrong | <$750k grants for General Purpose AI Assurance/Safety Research
Georgetown University's Center for Security and Emerging Technology (CSET) is accepting applications for AI Safety / AI Assurance research grants.
They are offering up to $750k per project accepted, expended over 6-24 months.
1-2 page expression of interest due August 1.
Applicants should be based at an academic institution or nonprofit research organization.
More information here.
From CSET "We’re using 'assurance' here in a broad sense, meaning roughly 'the generation of evidence that an ML system is sufficiently safe for its intended use.'" |
65e44de7-91e9-43ea-b2ec-41b0e8392031 | StampyAI/alignment-research-dataset/blogs | Blogs | What’s up with nuclear weapons?
*By Katja Grace, 27 February 2015*
When nuclear weapons were first built, the explosive power you could extract from a tonne of explosive [skyrocketed](http://aiimpacts.org/discontinuity-from-nuclear-weapons/ "Discontinuity from Nuclear Weapons"). But why?
Here’s a guess. Until nuclear weapons, explosives were based on chemical reactions. Whereas nuclear weapons are based on nuclear reactions. As you can see from the below table of specific energies and energy densities I got (and innocuously shortened) from [Wikipedia](http://en.wikipedia.org/wiki/Energy_density#Energy_densities_of_common_energy_storage_materials), the characteristic scale of nuclear energy stored in things is about a hundred thousand times higher than that of chemical energy stored in things (by mass). And in particular, there are an empty three orders of magnitude between the most chemical energy packed into a thing and the least nuclear energy packed into a thing. This is perhaps to do with the [fact](http://environ.andrew.cmu.edu/m3/s3/06forces.shtml) that chemical reactions exploit the electromagnetic force, while nuclear reactions exploit the strong fundamental force.
| Storage material | Energy type | Specific energy (MJ/kg) | Energy density (MJ/L) | Direct uses |
| --- | --- | --- | --- | --- |
| **[Uranium](http://en.wikipedia.org/wiki/Uranium "Uranium") (in [breeder](http://en.wikipedia.org/wiki/Breeder_reactor "Breeder reactor"))** | [Nuclear](http://en.wikipedia.org/wiki/Nuclear_power "Nuclear power") fission | 80,620,000[[2]](http://en.wikipedia.org/wiki/Energy_density#cite_note-whatisnuclear-2) | 1,539,842,000 | Electric power plants (nuclear reactors), industrial process heat (to drive chemical reactions, water desalination, etc.) |
| **[Thorium](http://en.wikipedia.org/wiki/Thorium-based_nuclear_power "Thorium-based nuclear power") (in [breeder](http://en.wikipedia.org/wiki/Breeder_reactor "Breeder reactor"))** | [Nuclear](http://en.wikipedia.org/wiki/Nuclear_power "Nuclear power") fission | 79,420,000[[2]](http://en.wikipedia.org/wiki/Energy_density#cite_note-whatisnuclear-2) | 929,214,000 | Electric power plants (nuclear reactors), industrial process heat |
| **[Tritium](http://en.wikipedia.org/wiki/Tritium#Decay "Tritium")** | [Nuclear](http://en.wikipedia.org/wiki/Nuclear_power "Nuclear power") decay | 583,529 | ? | Electric power plants (nuclear reactors), industrial process heat |
| **[Hydrogen (compressed)](http://en.wikipedia.org/wiki/Compressed_hydrogen "Compressed hydrogen")** | [Chemical](http://en.wikipedia.org/wiki/Chemical_energy#Chemical_energy "Chemical energy") | 142 | 5.6 | Rocket engines, automotive engines, grid storage & conversion |
| **[methane](http://en.wikipedia.org/wiki/Methane "Methane") or [natural gas](http://en.wikipedia.org/wiki/Natural_gas "Natural gas")** | [Chemical](http://en.wikipedia.org/wiki/Chemical_energy#Chemical_energy "Chemical energy") | 55.5 | 0.0364 | Cooking, home heating, automotive engines, lighter fluid |
| **[Diesel](http://en.wikipedia.org/wiki/Diesel_fuel "Diesel fuel") / [Fuel oil](http://en.wikipedia.org/wiki/Fuel_oil "Fuel oil")** | Chemical | 48 | 35.8 | Automotive engines, power plants[[3]](http://en.wikipedia.org/wiki/Energy_density#cite_note-AFDC-3) |
| **[LPG](http://en.wikipedia.org/wiki/Liquefied_petroleum_gas "Liquefied petroleum gas") (including [Propane](http://en.wikipedia.org/wiki/Propane "Propane")/ [Butane](http://en.wikipedia.org/wiki/Butane "Butane"))** | Chemical | 46.4 | 26 | Cooking, home heating, automotive engines, lighter fluid |
| **[Jet fuel](http://en.wikipedia.org/wiki/Jet_fuel "Jet fuel")** | Chemical | 46 | 37.4 | Aircraft |
| **[Gasoline](http://en.wikipedia.org/wiki/Gasoline "Gasoline") (petrol)** | Chemical | 44.4 | 32.4 | Automotive engines, power plants |
| **[Fat](http://en.wikipedia.org/wiki/Fat "Fat") (animal/vegetable)** | Chemical | 37 | 34 | Human/animal nutrition |
| **[Ethanol fuel](http://en.wikipedia.org/wiki/Ethanol_fuel "Ethanol fuel")** (E100) | Chemical | 26.4 | 20.9 | Flex-fuel, racing, stoves, lighting |
| **[Coal](http://en.wikipedia.org/wiki/Coal "Coal")** | Chemical | 24 | | Electric power plants, home heating |
| **[Methanol fuel](http://en.wikipedia.org/wiki/Methanol_fuel "Methanol fuel")** (M100) | Chemical | 19.7 | 15.6 | Racing, model engines, safety |
| **[Carbohydrates](http://en.wikipedia.org/wiki/Carbohydrate "Carbohydrate")(including sugars)** | Chemical | 17 | | Human/animal nutrition |
| **[Protein](http://en.wikipedia.org/wiki/Protein_in_nutrition "Protein in nutrition")** | Chemical | 16.8 | | Human/animal nutrition |
| **[Wood](http://en.wikipedia.org/wiki/Wood_fuel "Wood fuel")** | Chemical | 16.2 | | Heating, outdoor cooking |
| **[TNT](http://en.wikipedia.org/wiki/Trinitrotoluene "Trinitrotoluene")** | Chemical | 4.6 | | Explosives |
| **[Gunpowder](http://en.wikipedia.org/wiki/Gunpowder "Gunpowder")** | Chemical | 3 | | Explosives |
| | | | | |
Thus it seems very natural that the first, lousiest, nuclear weapons that anyone could invent would be much more explosive than any chemical weapon ever known. The power of explosives is mostly a matter of physics, and physics contains discontinuities, for some reason.
But this doesn’t quite explain it. Consider cars. Turbojet propelled cars seem just fundamentally capable of greater speeds than internal combustion engine propelled cars. But the first [turbojet cars](https://aiimpacts.org/feed/ed_record#1963.E2.80.93present_.28jet_and_rocket_propulsion.29) that were faster than internal combustion cars were not much faster—it looks like they just had a steeper trajectory, which passed other cars and kept climbing. I’m not sure what caused this pattern in the car case specifically, but I hear it’s common. Maybe people basically know what current technology is capable of, and introduce new things as soon as they can be done at all, rather than as soon as they can be done well.
Anyway, we could imagine the same thing happening with nuclear weapons: even if nuclear power was fundamentally very powerful, the first nukes could have made use of it very badly, exploding like a weak chemical explosive the first times, but being quickly improved.
But that isn’t how nuclear weapons work. For a nuclear weapon to be less explosive per mass it would need to contain less fissile material, be smaller (so the outside casing is more of the mass, and so that fewer neutrons hit other atoms), or be less well contained (so fewer neutrons hit other atoms). But to get a nuclear explosion going at all, you need to get enough neutrons to hit other atoms that the chain reaction starts. Nuclear weapons have a ‘[critical mass](http://en.wikipedia.org/wiki/Critical_mass)‘. I’m not sure how much less powerful the first nuclear weapons could easily have been than they were, but measly inexplosive nuclear weapons were basically out.
So the first nuclear weapons had to be much more explosive than the chemical explosives they replaced, because they were based on much more powerful reactions, and primitive nuclear weapons weren’t an option.
So nuclear weapons were basically guaranteed to revolutionize explosives in a single hop: even if humanity had known about nuclear reactions for hundreds of years, and put a tiny amount of effort into nuclear weapons research each year, humanity would never have seen feeble, not-much-better-than-TNT type nuclear weapons. There would just have been no nuclear weapons, and then at some point there would have been powerful nuclear weapons.
It is somewhat interesting that this is not what happened. Physicists mostly came to believe nuclear weapons were plausible from about 1939, and within a few years America spent nominal [$19Bn](http://blog.nuclearsecrecy.com/2013/05/17/the-price-of-the-manhattan-project/) (roughly 1% of [1943 GDP](http://useconomy.about.com/od/GDP-by-Year/a/US-GDP-History.htm), but spread over a few years) on nuclear weapons, and built some. So our story is that progress in explosives was very slow, and then America spent a huge pile of money on it, and then it was very fast, but the progress was independent of the massive influx of funding.
That sounds surprising. But perhaps the influx of funding was *because of* the large natural discontinuity visible in the distance? Why would you ever spend small amounts of money every year, if it was clear at the outset that you had to spend a gajillion dollars to get anywhere? If there wasn’t much requirement for serial activities, probably you would just save it up and spend it in one go. America didn’t save it up though—they tried to build nuclear weapons basically as soon as they realized it was feasible at all. So it looks like nuclear weapons were just discovered after it was cost-effective to build them.
But if it was immediately cost-effective to build nuclear weapons thousands of times more powerful than other bombs, then isn’t the requirement that nuclear weapons be fairly powerful irrelevant to the spending? If it was worth building powerful bombs immediately, then what does it matter if it is possible to build lesser weapons? Not really, because cost-effectiveness is relative. If it is only possible to buy toothpaste in a large bucket, you will probably pay for it, and it will have been a good deal. However, if it’s also available in small tubes, then the same bucket is probably a bad deal.
Similarly, if nuclear weapons must be powerful, then there’s a decent chance that as soon as they are discovered it will be cost-effective to spend a lot on them and make them so. However if they can come in many lower levels of quality, the same large amount of spending may not be cost effective, because it will often be better to spend an intermediate amount.
So a requirement that nuclear weapons be very explosive when they are first built could at least partly explain the huge amount of spending. And the inherently large amounts of energy available from nuclear reactions still seems relevant: any given amount of development will be cost-effective when it is more costly, if it is more effective compared to the alternative.
This also appears to fit in with an explanation of the further coincidence that there happened to be a huge war at the time. That is, the war made all military technologies more cost-effective, and thus made it more likely that when nuclear weapons became feasible to develop, they would already be cost-effective. However the war also makes it more likely that high quality weapons would already be cost-effective compared to cheaper counterparts, thus partly undermining the proposal that the large expenditure was due in part to nuclear weapons requiring a minimal level of quality.
Here’s another plausible explanation for the large expense: because of their extreme explosiveness, nuclear weapons were very cost-effective at the time they were first considered. That is, they could have been produced a lot more cheaply than they were. However, due to the war, America was willing to pay a lot to make them come faster. In particular, America was willing to keep paying to make them come faster up until the point when they were roughly as cost-effective as older weapons, taking into account the upfront cost of making them come faster. This would explain the large amount of spending, and perhaps also why it aligned so well with what America could barely afford. It also explains why nuclear weapons appear to have been very roughly as cost-effective as older weapons. However on its own, it seems to leave the large amount of spending and the large amount of progress as coincidences.
In other ways, this story is in line with what I know about the development of nuclear weapons. For instance, that enriching uranium via several different methods in parallel was [around half](http://blog.nuclearsecrecy.com/2013/05/17/the-price-of-the-manhattan-project/) of the cost of the Manhattan project, and that the project was a lot more expensive than other countries’ later nuclear weapons projects.
Perhaps the inherent explosiveness of nuclear weapons made them very cost-effective, and thus able to be sped up a lot and still be cost-effective? (Thus connecting the expense with the explosiveness) But if nuclear weapons had been too expensive already to speed up much, it seems we would have seen a similar amount of spending (or more) over a somewhat longer time. So on this story it seems the heavy spending didn’t cause the high explosiveness, and the high explosiveness (and thus cheapness) didn’t seem to cause the steep spending.
It seems there was probably one coincidence however: a physics discovery leading to weapons of unprecedented power was made just before the largest war in history, and it’s hard to see how the war and the discovery were related, unless history was choreographed to make Leo Szilard’s life interesting. Perhaps the weapons and the war are related because nuclear weapons caused us to think of WWII as a large war by averting later wars? But the World Wars [really were quite large](http://en.wikipedia.org/wiki/List_of_wars_and_anthropogenic_disasters_by_death_toll) compared to wars in slightly earlier history, rather than just the last in a trend of growing conflicts. If there is at least one coincidence anyway, perhaps it doesn’t matter whether the massive expense is explained by the unique qualities of nuclear weapons or merely by the war inspiring haste.
In sum, my guesses: nuclear weapons represented abrupt progress in explosiveness because of a discontinuity inherent in physics, and because ineffective nuclear weapons weren’t feasible. Coincidentally, at the same time as nuclear weapons were discovered, there was a large war. America spent a lot on nuclear weapons for a combination of reasons. Nuclear explosions were inherently unusually powerful, and so could be cost-effective while still being very expensive. They also required investment on a large scale, so were probably invested in at an unusually large scale. America probably also spent a lot more on nuclear weapons to get them quickly, because they were so cost-effective under the circumstances.
My guesses are pretty speculative however, and I’m not an expert here. Further speculation, or well-grounded theorizing, is welcome.
(image: *[Oak Ridge Y-12 Alpha Track](http://commons.wikimedia.org/wiki/File:Oak_Ridge_Y-12_Alpha_Track.jpg)*) |
079c6097-4232-491c-be5d-8b5d8f9fe151 | trentmkelly/LessWrong-43k | LessWrong | What are matching markets?
{Epistemic Status: Just reviewing my thoughts on a book and its subject matter. I have done some independent study of the field (and have a degree in economics), but I did not try to check my impressions rigorously. Much or maybe most of what I say is extrapolation by me, rather than being in the book itself.}
I have recently finished the book Who Gets What and Why, by Alvin Roth and found it quite intellectually stimulating. The book itself was not as systematic as I originally hoped, but that was almost better since it forced me to reconstruct definitions and patterns in ways that made sense to me. Below I have included my thoughts on the first topic of the book, matching markets. The second topic, mechanism design, will appear in a later post.
What Are Matching Markets
The first model of a market that economics students learn is the perfectly competitive model. This is exceptionally useful to understand, but it can also limit the scope of what people consider as markets. In particular, this model emphasizes two characteristics of a market, price and quantity, neither of which are explicitly required to model a market. Matching markets include markets where there can be both no prices and no changes in quantity (i.e. no supply or demand curves).[1] They are also more than academic curiosities since they describe everything from dating markets to job markets to school admissions.
You can use supply and demand to get some insight into how the market for romantic relationships works, but if you start taking it too seriously then it stops making sense. For example: Is there a single market price? Are the surpluses of each side directly opposed? Do people always prefer lower ‘prices’? Is a person with a higher willingness-to-pay always going to do better?
The primary characteristic of a matching market is that both sides of an exchange need to actively choose each other in order for it to go through. How does this actually differ from a standard (commodities) marke |
411d38bf-4f12-4f3d-a6c7-33e0e6d59f45 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Predictions for shard theory mechanistic interpretability results
How do agents work, internally? My (TurnTrout's) shard theory [MATS](https://www.serimats.org/) team set out to do mechanistic interpretability on one of the [goal misgeneralization](https://arxiv.org/abs/2105.14111) agents: the cheese-maze network.
The network in action on its training distribution, where cheese is randomly spawned in the top-right 5x5 available grid region. For more training videos, see the [rand\_region\_5](https://drive.google.com/drive/folders/1oX-PoNbqMQKYAPQQMRUSw0bVsaJO9FpP?usp=share_link) Google Drive folder.We just finished phase 1 of our behavioral and interpretability experiments. Throughout the project, we individually booked predictions -- so as to reduce self-delusion from hindsight bias, to notice where we really could tell ahead of time what was going to happen, and to notice where we really were surprised.
So (especially if you're the kind of person who might later want to say "I knew this would happen" 😉), here's your chance to enjoy the same benefits, before you get spoiled by our upcoming posts.
---
I don’t believe that someone who makes a wrong prediction should be seen as “worse” than someone who didn’t bother to predict at all, and so answering these questions *at all* will earn you an increment of my respect. :) Preregistration is virtuous!
Also: *Try not to update on this work being shared at all.* When reading a paper, it doesn’t feel surprising that the author’s methods work, because researchers are less likely to share null results. So: I commit (across positive/negative outcomes) to sharing these results, whether or not they were impressive or confirmed my initial hunches. I encourage you to answer from your own models, while noting any side information / results of ours which you already know about.
Facts about training
====================
1. The network is deeply convolutional (15 layers!) and was trained via PPO.
2. The sparse reward signal (+10) was triggered when the agent reached the cheese, spawned randomly in the 5x5 top-right squares.
3. The agent can always reach the cheese (and the mazes are simply connected – no “islands” in the middle which aren’t contiguous with the walls).
4. Mazes had varying effective sizes, ranging from 3x3 to 25x25. In e.g. the 3x3 case, there would be 22/2 = 11 tiles of wall on each side of the maze.
5. The agent always starts in the bottom-left corner of the available maze.
6. The agent was trained off of pixels until it reached reward-convergence, reliably getting to the cheese in training.
POV you’re the agent. Input observations are 64 x 64 RGB images. The architecture looks like this:

For more background on training and architecture and task set, see [the original paper](https://arxiv.org/abs/2105.14111).
Questions
=========
**I encourage you to copy the following questions into a comment, which you then fill out, and then post (before you read everyone else's).** You can copy these into a private Google doc if you want, but I strongly encourage you to post your predictions in a public comment.
**[Begin copying to a** **comment]**
Behavioral
----------
1. Describe how the trained policy might generalize from the `5x5` top-right cheese region, to cheese spawned throughout the maze? IE what will the policy do when cheese is spawned elsewhere?
2. Given a fixed trained policy, what attributes of the level layout (e.g. size of the maze, proximity of mouse to left wall) will strongly influence P(agent goes to the cheese)?
Write down a few guesses for how the trained algorithm works (e.g. “follows the [right-hand rule](https://en.wikipedia.org/wiki/Maze-solving_algorithm#Wall_follower)”).
1.
2.
Is there anything else you want to note about how you think this model will generalize?
Interpretability
----------------
Give a credence for the following questions / subquestions.
**Definition.** A *decision square* is a tile on the path from bottom-left to top-right where the agent must choose between going towards the cheese and going to the top-right. Not all mazes have decision squares.
The first maze's decision square is the four-way intersection near the center.

### Model editing
1. Without proportionally reducing top-right corner attainment by more than 25% in decision-square-containing mazes (e.g. 50% →
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
0.5 * 0.75 = 37.5%), we can[[1]](#fnq4tgliqpw9b) patch activations so that the agent has an X% proportional reduction in cheese acquisition, for X=
1. 50: ( %)
2. 70: ( %)
3. 90: ( %)
4. 99: ( %)
2. ~Halfway through the network (the first residual add of Impala block 2; [see diagram here](https://i.imgur.com/5oSHoVQ.png)), linear probes achieve >70% accuracy for recovering cheese-position in Cartesian coordinates: ( %)
3. We will conclude that the policy contains at least two sub-policies in “combination”, one of which roughly pursues cheese; the other, the top-right corner: ( %)
4. We will conclude that, in order to make the network more/less likely to go to the cheese, it’s more promising to RL-finetune the network than to edit it: ( %)
5. We can easily finetune the network to be a pure cheese-agent, using less than 10% of compute used to train original model: ( %)
6. In at least 75% of randomly generated mazes, we can easily edit the network to navigate to a range of maze destinations (e.g. coordinate x=4, y=7), by hand-editing at most X% of activations, for X=
1. .01 ( %)
2. .1 ( %)
3. 1 ( %)
4. 10 ( %)
5. (Not possible) ( %)
### Internal goal representation
1. The network has a “single mesa objective” which it “plans” over, in some reasonable sense ( %)
2. The agent has several contextually activated goals ( %)
3. The agent has something else weirder than both (1) and (2) ( %)
(The above credences should sum to 1.)
*Other questions*
1. At least some decision-steering influences are stored in an obviously interpretable manner (e.g. a positive activation representing where the agent is “trying” to go in this maze, such that changing the activation changes where the agent goes): ( %)
2. The model has a substantial number of trivially-interpretable convolutional channels after the first Impala block ([see diagram here](https://i.imgur.com/5oSHoVQ.png)): ( %)
3. This network’s shards/policy influences are roughly disjoint from the rest of agent capabilities. EG you can edit/train what the agent’s trying to do (e.g. go to maze location A) without affecting its general maze-solving abilities: ( %)
### Conformity with update rule
*Related:* [*Reward is not the optimization target*](https://www.lesswrong.com/posts/pdaGN6pQyQarFHXF4/reward-is-not-the-optimization-target)*.*
This network has a value head, which PPO uses to provide policy gradients. How often does the trained policy put maximal probability on the action which maximizes the value head? For example, if the agent can go left to a value 5 state, and go right to a value 10 state, the value and policy heads "agree" if right is the policy's most probable action.
(Remember that since mazes are simply connected, there is always a unique shortest path to the cheese.)
1. **At decision squares in** **test mazes where the cheese can be anywhere**, the policy will put max probability on the maximal-value action at least X% of the time, for X=
1. 25 ( %)
2. 50 ( %)
3. 75 ( %)
4. 95 ( %)
5. 99.5 ( %)
2. In **test mazes where the cheese can be anywhere, averaging over mazes** ***and*** **valid positions throughout those mazes**, the policy will put max probability on the maximal-value action at least X% of the time, for X=
1. 25 ( %)
2. 50 ( %)
3. 75 ( %)
4. 95 ( %)
5. 99.5 ( %)
3. In **training mazes where the cheese is in the top-right 5x5, averaging over both mazes** ***and*** **valid positions in the top-right 5x5 corner**, the policy will put max probability on the maximal-value action at least X% of the time, for X=
1. 25 ( %)
2. 50 ( %)
3. 75 ( %)
4. 95 ( %)
5. 99.5 ( %)
**[End copying to comment]**
Conclusion
==========
Post your answers as a comment, and enjoy the social approval for registering predictions! :) We will post our answers later, in a retrospective post.
Appendix: More detailed behavioral questions
============================================
These are intense.
**Random maze for illustrating terminology (*not* a reference maze for which you're supposed to predict behavior)**
**T**: top-right free square
**M**: agent (‘mouse’) starting square
**R**: 5x5 top-right corner area where the cheese appeared during training
**C**: cheese
**D**: decision-square
*Write down a credence for each of the following behavioral propositions about Lauro’s rand_region_5 model tested on syntactically legal mazes, **excluding** test mazes where the cheese is within the 5x5 rand_region and test mazes that do not have a decision-square:*
When we statistically analyze a large batch of randomly generated mazes, we will find that, controlling for the other factors on the list, the mouse is **more likely** to take the cheese…
…the closer the cheese is to the decision-square spatially. ( %)
…the closer the cheese is to the decision-square step-wise. ( %)
…the closer the cheese is to the top-right free square spatially. ( %)
…the closer the cheese is to the top-right free square step-wise. ( %)
…the closer the decision-square is to the top-right free square spatially. ( %)
…the closer the decision-square is to the top-right free square step-wise. ( %)
…the shorter the minimal step-distance from cheese to 5x5 top-right corner area. ( %)
…the shorter the minimal spatial distance from cheese to 5x5 top-right corner area. ( %)
…the shorter the minimal step-distance from decision-square to 5x5 top-right corner area. ( %)
…the shorter the minimal spatial distance from decision-square to 5x5 top-right corner area. ( %)
Any predictive power of step-distance between the decision square and cheese is an artifact of the shorter chain of ‘correct’ stochastic outcomes required to take the cheese when the step-distance is short. ( %)
1. **[^](#fnrefq4tgliqpw9b)**Excluding trivial patches like "replace layer activations with the activations for an identical maze where the cheese is at the top right corner." |
38ea43c4-013e-4e38-9724-6d650df22edd | trentmkelly/LessWrong-43k | LessWrong | Self-Play By Analogy
This post is adapted from a longer, more wide-ranging post on Substack where I attempt to collect some of my thoughts about AI as a relative outsider to the field. The section I have decided to share here, though, I believe to be novel.
Success in self-play by game AIs like AlphaZero has led to some interest in the possibility that self-play could loosen (or even do away with) the data bottleneck that threatens to strangle scaling as a path forward for LLMs. By making analogies to the human phenomena of dialects and group polarization, I provide a starting point for further, more analytically framed arguments against self-play as a viable path to increasing the intelligence or linguistic capacity of LLMs.
The Viability of the Analogy
My argument rests, crucially, on an analogy between an AI using self-play to create its own training data and a human (or group of humans) interacting with each other or the world to learn and adapt. I believe this is a defensible analogy.
Self-play is quite easy to translate into human cognitive experience. What defines self-play is that the same neural network which creates the new data is also the one which learns by it. The fact that this is possible might be philosophically interesting, but the actual experience is pedestrian.
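In mechanical terms, the loop being described is small. Here is a minimal sketch of it in Python (the `env` and `net` interfaces are hypothetical placeholders for illustration, not any particular library):

```python
# Minimal self-play sketch: the same network that generates the training data
# (by playing against itself) is the one that then learns from that data.
# `env` and `net` are hypothetical objects standing in for a game and a policy.

def self_play_episode(env, net):
    """Play one game of the network against itself; collect (state, move) pairs."""
    trajectory = []
    state = env.reset()
    while not env.is_over(state):
        move = net.choose_move(state)       # the learner produces the data...
        trajectory.append((state, move))
        state = env.apply(state, move)
    return trajectory, env.outcome(state)

def train_by_self_play(env, net, num_iterations):
    for _ in range(num_iterations):
        trajectory, outcome = self_play_episode(env, net)
        net.update(trajectory, outcome)     # ...and then learns from its own data
```

The only outside signal is the game's own outcome; no new human data ever enters the loop, which is the property the analogies below lean on.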
When a chess player computes his next moves — “I do that, then she does that, I take there, she takes back, I move there, check, check, take, take… no, not good” — the chess player is doing a sort of self-play. Even more direct of an analogy is the fact that chess players often do play themselves in chess, and can learn from that experience. The fact that this is fairly directly analogous to AI self-play seems to me obvious.
What may be less obvious but also seems true to me is that self-play is functionally equivalent as well to a siloed group of functionally equivalent human individuals acting amongst one another. The important part is that they are siloed, that they only have the background conditions of cognition |
dbb949ba-d8b5-47aa-b1b3-7f99cf30d0ee | StampyAI/alignment-research-dataset/youtube | Youtube Transcripts | The Alignment Problem: Machine Learning and Human Values with Brian Christian
- All right.
Good afternoon, everyone.
I usually say,
"Welcome to the Jackson
Institute for Global Affairs."
We're across the street from
Horchow Hall at the moment
here at the Watson Center.
I'm Ted Wittenstein.
I'm Executive Director of
International Security Studies
at Yale and we're just
delighted to partner
with the Wu Tsai Institute at Yale
to co-host this discussion
on, "The Alignment Problem,
"Machine Learning and Human Values."
That's the title of Brian
Christian's wonderful new book.
We'll introduce him in just a moment,
along with Professor John Lafferty,
who's gonna moderate this session.
So, just silence all of your devices.
It's a thrill to even remind people
to do that in person again.
This is not to hybrid Zoom audience,
but we are recording the session
and we'll make the video
available after the fact
to the benefit of everyone.
Thanks so much to John Lafferty
for inviting Brian to campus
and helping us moderate this session.
As many of you may know,
at the Jackson Institute,
we've launched a new Schmidt Program
on Artificial Intelligence
Emerging Technologies
and we're really building
bridges across the campus
in computer science and data
science across the university.
Of course, Professor Lafferty
has extraordinary expertise
in this area himself.
He's the John C. Malone
Professor of Statistics & Data Science.
He has a secondary appointment
in computer science
and he's Director of
the Wu Tsai Institute's
Center for Neurocomputation
and Machine Intelligence.
So, thank you so much, Professor Lafferty.
Thank you, Brian.
I'll let John introduce and
kick off the conversation.
Brian has a presentation
and then we'll make this
interactive with everyone.
So, thank you so much.
- Thank you, Ted.
All right, so, it's a real
pleasure and an honor for me
to introduce Brian Christian to you.
Brian is a science writer
and he was recently a science
communicator in residence
at the Simons Institute
for the Theory of Computing
at Berkeley.
We're pleased to have him
here with us at Yale today
to speak on his recent book,
"The Alignment Problem,
"Machine Learning and Human Values."
This is really a remarkable
book that dives into the history
and the future of machine learning and AI,
focusing on the emerging
societal and ethical issues
that we are rapidly being confronted with,
whether we're ready, or not.
As the world tries to get
its collective head around
the implications of the advances in AI,
Brian Christian's writing is
helping to clarify the issues.
Now, as those of you, who
have read any of his books,
will know, Brian has a superpower,
which is a remarkable ability to distill
extremely complex
technical concepts in a way
that gives the reader a
glimpse of the core ideas,
in this case, the ideas behind
different machine learning
and AI frameworks to understand
both their power
(door slamming)
and their limitations.
The, "Alignment Problem," is
a book about machine learning,
but that, of course, also
involves a lot of discussion
about human learning and
human abilities and biases.
Brian is also the author
of, "The Most Human Human,"
which looks at AI through
the lens of the Turing Test
and also, "Algorithms to Live
By," which was co-written
with the Cognitive and Computer
Scientist, Tom Griffiths.
So, we're very fortunate that Brian's here
to give us a presentation on this book
and afterwards we'll have some discussion
and we'll welcome questions
and engagement from all of you.
So, welcome to Yale, Brian.
It's a pleasure to have you here.
(audience applauding)
- Thank you so much,
Ted and thank you, John.
It means a lot to be here.
So, I wanna talk this afternoon about
the alignment problem in machine learning,
namely, how to make sure that
machine learning systems,
systems that learn by example,
rather than being explicitly programmed,
actually do what we want and what we hope.
We find ourselves, I think,
at a very crucial juncture
around the development and deployment
of machine learning
systems in the real world.
And I think that, critically,
there are questions here,
not only of technology, but
also of governance and law
and policy in what the
path forward looks like.
So, what do I mean when I say,
a transformative decade
that we're coming out of?
Well, many of you in the
room will be familiar,
but just so that we're on the same page.
We've seen in the area of
image classification...
My slides are kind of
flickering in and out so,
we may have a slightly more
verbal than visual presentation.
In the area of image classification,
where you're trying to determine
the contents of an image,
we've seen error rates drop
42% in a single year in 2012
and cumulatively by more
than 92% over six years.
That just doesn't happen very often.
Now, fast-forward to the present.
It is now possible to train a system
to achieve human comparable accuracy
on the ImageNet competition
(door slamming)
for $4.59.
So, in the span of nine
years, this task went
from impossible to, you
can do it for $4.50.
In fact,
today you can't even take a picture
with a contemporary smartphone camera
like an iPhone, or an
Android, without invoking,
in this case, an 11-stage
machine learning pipeline
that is doing everything
from the auto exposure
to the focal distance,
to the white balance,
to the color correction, de-noising,
it's fusing multiple exposures together.
All of this is happening
silently and invisibly
in real time
and
it imposes some interesting constraints
on the types of photos that
you can and cannot take.
So, for example, my friend
reports to me that it's very hard
to take a photograph of
falling snow at night
because the iPhone software
thinks of it as noise
on the sensor and de-noises
the snow out of your photograph
and you have a photograph
with no snow falling.
But I think this is a
really interesting analogy
for the relationship
that we've come to have
with machine learning, namely, that ways,
ways that are ubiquitous, but
at the same time invisible,
it has kind of interposed itself
between ourselves and the world.
It is mediating our
experience of the world
in ways that we don't always appreciate.
Of course, this is far from
the most consequential use
of machine learning.
We are seeing autonomous cars increasingly
on neighborhood streets.
Certainly where I live in San Francisco,
it is almost every hour of the day.
There are cars from
Cruise, Waymo, et cetera,
coming through and you always
have this awkward interaction
at an intersection where
you're not quite sure
if it sees that you're there,
or if it's going to yield to you,
or whether it's going to
be more, or less aggressive
than a human driver would
be in that situation.
Andrej Karpathy who runs AI at Tesla
has described machine learning
software as a kind of fungus
that is eating away at the
C++ code that he and his team
have so painfully written over the years.
I mention this, partly
because we have this narrative
that machine learning is
replacing human expertise
and human judgment, but
I think it's actually
even more significant than that.
It's also replacing
traditional software
engineering practices.
Then you might, or may not be
able to see it on the screen,
but there was a Tweet
from Waymo CEO yesterday
saying that they have just
now begun officially doing
driverless cars with no
human in the driver seat
in San Francisco.
So, I am the human guinea pig
for this particular exercise.
So,
not only in kind of celebrated
high tech examples like this,
but in ways that I think are,
in many ways going under the radar.
Machine learning systems,
some of them no more complex
than what you could put
in an Excel spreadsheet,
are increasingly penetrating
the decision-making of our
institutions, public and private.
So, if you look just at the
criminal justice system,
there has been an exponential
uptick in the use of
statistical and machine
learning risk assessments
in the U.S. criminal justice system.
And so, machine learning is
replacing both human expertise
and traditional software.
It's doing it in ways that shape
our most everyday interactions
and could also determine
the course of our lives.
I would argue that there's a sense
in which we are putting the world,
literally and figuratively, on autopilot.
So, there are a lot of reasons I think
to be, frankly, concerned.
Are these systems actually learning
what we think they're learning
and can we actually trust them to do
what we think they're going to do?
Now, this is far from a new concern.
In fact, it goes all the way back to 1960
when the MIT cybernetics
pioneer, Norbert Wiener,
wrote this, I think, very
prescient essay called,
"Some Moral and Technical
Consequences of Automation."
Wiener used the metaphor of,
"The Sorcerer's Apprentice,"
which many of us know as this
lovable Mickey Mouse cartoon,
where Mickey enchants this broom,
animates this inanimate object
and gives it some simple commands like,
"Fetch water from the caldron."
Of course, Mickey is
not quite precise enough
in how he formulates his command
and anyone who's watched the cartoon knows
that he ends up almost drowning himself
by the time that the
master magician appears.
And Wiener says, I think,
quite prophetically,
this is not the stuff of fairy tales.
This is what's coming for
us in the relationship
that we are going to have
with artificial intelligence.
The famous quote of his is,
"If we use, to achieve our
purposes, a mechanical agency
"with whose operation we cannot interfere
"once we have set it going,
then we had better be quite sure
"that the purpose we put into the machine
"is the thing that we truly desire."
Today, this has become one
of the central concerns
of the field of AI and we know
it as the alignment problem.
So, five years ago, five
and a half years ago,
I decided I wanted to tell the story
of the alignment problem,
of the history of machine
learning, how it's intersecting
in deep ways with human
norms and human values
and the technical and
interdisciplinary community
that is coming together
around some of those questions
to do what I think is
some of the most exciting,
but also important work in the field.
That process ended up taking
me through archival research
into the field's pioneers.
People like Walter Pitts
and Warren McCulloch,
all the way to about 100
oral histories and interviews
that comprise the stories of the book.
Obviously, we don't have time to get into
much of that level of detail,
but there's a few threads
that I'd like to pull
through this morning.
So, said most plainly,
(audience member coughing)
how can a machine learning
system, as Wiener said,
fail to have the purpose put
into it that we truly desire
and what, in turn, can we do about it?
Because this is co-sponsored
by the Wu Tsai Institute for
the Study of Human Cognition,
I'll try to highlight as
well, few of what I think
are some of the juiciest intersections
between machine learning and
human cognition along the way.
Okay, let's begin.
So broadly, we can think of
a machine learning system
as having two halves.
There is the training
data, the set of examples
from which the system learns
and what's called the objective function,
which is how we are going to
mathematically define success
in each of those examples.
Each of those offers an opportunity
for things to become misaligned
and we'll look at each of those in turn.
I'll start first with the training data
and I'll be a little bit briefer here,
so there's more time to talk
about the objective function.
Many in this room, I
suspect, will be familiar
with one of the most infamous
cases of machine learning,
in particular, image
classification gone wrong,
which was in the summer of 2015.
Google Photos suggested
the auto-generated caption,
"Gorillas," to an album of selfies
taken by web developer,
Jackie Alcine and his friend.
This incident and others like
it, led to a kind of reckoning
into the work of people
like MIT's Joy Buolamwini,
who did a, I think, now
classic intersectional analysis
on commercial face recognition,
face detection systems
of 2017 and 2018, showing
that the error rates
in then-state-of-the-art,
commercial systems,
were orders of magnitude higher
for darker skinned females in particular,
something like 30 times higher
than they were for white males.
This has come as part
of a more broad scrutiny
of the kinds of data sets that get used
both in industry and in academia.
One of the most used and most
cited data sets of the 2010s,
for example, is called,
Labeled Faces in the Wild,
which was scraped together on the web
using digital
newspaper front pages from the 2000s.
An analysis that was done just
several years ago, showed,
for the first time, that as
a result of this methodology,
the dataset contains
twice as many pictures
of George W. Bush as it does
of all Black women combined.
So, anyone working with this dataset
to build face recognition
technology had perhaps
inadvertently built George W. Bush
recognition technology.
(all chuckle)
These very same sorts of dataset issues
are also being seen in autonomous driving,
including in the 2018
crash of the Uber car
that killed a pedestrian
in Tempe, Arizona.
If you read the National
Transportation Safety Board report,
you find something really striking,
which was that the system
basically did not have
any training data of jaywalkers
and so, it was just
fundamentally unprepared
to encounter someone crossing
a road not at a crosswalk.
What's more, the system was built
on this object classification system
that had a very rigid set of categories
that included pedestrian,
cyclist, debris, et cetera
and had thousands of examples
of each of those things.
But this particular woman
was walking a bicycle
across the street,
which was something that
the system had never seen
(door slamming)
and so, all bets, essentially,
were off.
To underscore this point,
this is head of AI at Tesla,
Andrej Karpathy, speaking at
my institution, UC Berkeley
and explaining that when
he was a PhD student,
he lost 95% of his sleep
thinking about models and algorithms
and today running AI at Tesla,
he loses 75% of his sleep
thinking about data sets.
We're starting to see some
norms emerge, not unlike
the nutrition facts that
you see on food packaging,
to indicate the contents
and the potential dangers
of data sets.
This is from Google.
Microsoft, IBM, et cetera, have
their own versions of these.
They're called model cards, or data cards
and they include information
about the provenance
of the data and possible misuse,
possible bias, et cetera.
And Labeled Faces in the Wild
itself, now starting in 2019,
comes with this big red
warning label attached
telling you about the
various skews in the data set.
These sorts of disclosures, I don't think,
will prevent all harms and
neither do nutrition facts,
for that matter, but it's a start
and I'm encouraged to
see that norm emerging.
Although I suspect Andrej Karpathy
will have to keep losing sleep
for the foreseeable future.
So, we'll come back a little bit
to questions of training data at the end
when we talk about language
models, but for now,
let's shift the focus to
the objective function.
This is how we're going
to numerically operationalize success
in each of these examples.
We have seen this be
the crux of many alignment issues in AI.
I think perhaps most
well-known is the COMPAS system
that does pre-trial risk assessment
in many jurisdictions in the U.S.
And really the focus of the
ProPublica investigation
into this system in 2016,
which caused a number of
headlines ended up sharpening
into a question of the
correct statistical definition
of fairness to be used in the
case of classifiers like this.
So, one potential way of defining fairness
is what's called
calibration, where you say,
okay, if someone is an 8 out
of 10 risk to be rearrested,
then they're gonna have
the same probability
of being rearrested whether
they're white, or Black,
et cetera.
This seems pretty intuitive
and it seems pretty desirable.
But there are other
definitions of fairness
that we could use, for example,
what's called equalized odds
or equalized opportunity,
which looks more at the error rates
and the composition of false
positives to false negatives,
et cetera and it looks for this property
to be the same across different groups.
That also seems like something
that we would really want
and seems intuitive and an
aspect of our legal intuitions
about what fairness might actually mean.
It turns out, we don't
have time to get into this,
but a system can't
satisfy these definitions
at the same time and so
we are left with, I think,
a really interesting set of
questions at the intersection
of computer science and public policy
around how we want to
statistically operationalize
these ideals of fairness
and equal treatment
that exist in the law.
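To make those two definitions concrete, here is a rough sketch of how each one could be computed for a binary risk classifier; it is an illustration only (generic NumPy, not code from COMPAS or from the ProPublica analysis), assuming arrays of risk scores, observed outcomes, and group labels:

```python
import numpy as np

def calibration_by_group(scores, outcomes, groups, bin_edges):
    """Calibration: within each score bin, the observed re-arrest rate
    should be similar across groups."""
    rates = {}
    for g in np.unique(groups):
        in_group = groups == g
        group_rates = []
        for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
            in_bin = in_group & (scores >= lo) & (scores < hi)
            group_rates.append(outcomes[in_bin].mean() if in_bin.any() else np.nan)
        rates[g] = group_rates
    return rates

def error_rates_by_group(scores, outcomes, groups, threshold=0.5):
    """Equalized odds: false-positive and false-negative rates
    should be similar across groups."""
    flagged = scores >= threshold
    rates = {}
    for g in np.unique(groups):
        in_group = groups == g
        fpr = flagged[in_group & (outcomes == 0)].mean()      # flagged high-risk, not re-arrested
        fnr = (~flagged)[in_group & (outcomes == 1)].mean()   # flagged low-risk, re-arrested
        rates[g] = {"false_positive_rate": fpr, "false_negative_rate": fnr}
    return rates
```

When the underlying re-arrest rates differ between groups, these two summaries cannot in general both come out equal at the same time, which is the impossibility result behind the debate.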
You could think about,
even the infamous Google
Photos' gorillas example
that we began with, as a matter,
not only of training data gone wrong,
but also of an objective
function gone wrong.
So, the classic objective function
that's used in image classification
is what's called cross-entropy loss,
but you can simply think of
it as wanting to minimize
the number of misclassifications.
Well, what could go wrong?
That seems quite intuitive indeed.
If all you care about is minimizing
the number of misclassifications,
then you're implicitly assuming
that any misclassification
of any X as any Y,
is equally harmful.
But I think part of what the
Google Photos example shows us,
is that this is not true at all.
There are many, many errors
made by Google Photos
that didn't result in a national scandal
and personal apologies from the engineers.
In fact, I think it's probably likely
that certain misclassifications
are millions of times
more harmful than others.
So, how do we try to reorganize
some of the fundamental
objective functions
of something like image classification?
There are a number of computer scientists,
including Stuart Russell, who have argued
that we should make
this loss matrix itself,
something that we want
the system to learn.
We'll come back to that
idea at the very end.
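To make the loss-matrix idea concrete, here is a toy sketch (invented numbers purely for illustration, not code from Google or from Russell's proposal) of replacing a uniform misclassification penalty with costs that depend on which mistake is made:

```python
import numpy as np

def expected_misclassification_cost(probs, true_label, cost_matrix):
    """probs: predicted class probabilities for one example, shape (num_classes,)
    true_label: index of the correct class
    cost_matrix[i, j]: how harmful it is to predict class j when the truth is class i
    Plain cross-entropy implicitly treats every off-diagonal entry as equally bad."""
    return float(np.dot(cost_matrix[true_label], probs))

# Toy example: confusing class 0 with class 2 is treated as 100x worse than other errors.
costs = np.array([
    [0.0, 1.0, 100.0],
    [1.0, 0.0,   1.0],
    [1.0, 1.0,   0.0],
])
probs = np.array([0.7, 0.2, 0.1])
print(expected_misclassification_cost(probs, true_label=0, cost_matrix=costs))  # 0.2*1 + 0.1*100 = 10.2
```

Training against a cost like this, or learning the matrix itself as suggested above, makes the rare-but-catastrophic confusion the thing the optimizer actually avoids.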
Thus far we've been talking about,
what's called supervised learning,
as many of you're probably familiar.
In discussing the objective
function, I'd like to now
turn our attention to a different
branch of machine learning
which is called reinforcement learning.
So, if supervised learning
is about predicting
certain hidden attributes
from visible attributes,
reinforcement learning is about
essentially maximizing rewards
and minimizing punishments
through a series of behaviors.
Now, this has been used in many arenas
within machine learning.
Some of the most significant successes
like playing Atari games
with superhuman capacity,
defeating the world champion at Go,
increasing dexterity
of robots and so forth,
are examples of reinforcement learning.
Reinforcement learning is also
to an underappreciated extent,
being used in consumer tech.
This is a very interesting
paper that Facebook published
a few years ago, talking about
the use of reinforcement learning
for delivering notifications to users.
They had previously used a
supervised learning system
that simply ranked all
the possible notifications
by the probability that you
would interact with them.
Then, if it was over
a certain probability,
they would send you the notification.
This led to various things,
including people getting burned out
and turning notifications
off, which they didn't like.
So, they changed to a
reinforcement learning system
where Facebook gets points
for the interactions
that you do with their notifications,
but once you turn notifications off,
it's like game over in the Atari context.
In fact, they literally used
the exact same model architecture, DQN,
that DeepMind used to play Atari games,
Facebook is now using
this to play us, right?
I think we're all familiar
with the expression,
"You're not the consumer,
you're the product,"
but I think we can maybe
add another adage, which is,
when we think about the
gamification of social media,
we are not the player,
we are literally the game
as far as Facebook is concerned.
We are substituting for
the Atari in this context.
I also just wanna mention
very briefly in passing
that reinforcement learning is
of particular interest to me
because of its rich
connections to neuroscience.
In fact, a particular set
of ideas in the early '90s
called Temporal Difference learning,
whose first major success was in the use
of computer backgammon, ended
up, by the end of the 1990s,
becoming accepted as the explanation
for what was going on in
the human dopamine system,
which had been an unsolved riddle
in the neuroscience community.
I think this is one of the
most interesting stories
in reinforcement learning
and it shows that
there's a very deep resonance indeed
between some of these
ideas in computer science
and the same fundamental
algorithms of learning
that evolution found.
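For readers who have not seen temporal-difference learning, the update at the heart of that story is tiny. A textbook TD(0) value update (the generic form, not the backgammon program's actual code) looks like this, and the prediction error it computes is the quantity dopamine neurons were found to track:

```python
def td0_update(value, state, reward, next_state, alpha=0.1, gamma=0.99):
    """Nudge V(state) toward the one-step estimate reward + gamma * V(next_state).
    The difference between the two is the 'reward prediction error'."""
    td_error = reward + gamma * value[next_state] - value[state]
    value[state] += alpha * td_error
    return td_error

# Usage with a simple dict of state values:
V = {"s1": 0.0, "s2": 0.0}
prediction_error = td0_update(V, "s1", reward=1.0, next_state="s2")
```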
But what I wanna focus on,
is this mysterious numerical reward
at the heart of reinforcement learning.
These points that the
system is trying to maximize
because that reward function
is what determines the behavior.
It turns out to be exceedingly difficult
to develop a reward function
that doesn't break down
in sometimes hilarious,
sometimes tragic ways.
My favorite example comes from
David Andre and Astro Teller
who work at Google X now, but
in their grad student days,
they were working on a
robotics soccer competition.
And the soccer robots were just wandering
all around the field at random.
They had no idea how to score goals.
So, they decided to give them
a tiny numerical incentive,
which in computer science
is called reward shaping,
but you can just think
of it as an incentive,
of something like 1/100th of a goal
for taking possession of the ball.
What did the system learn to do?
It learned to approach the ball carefully
and then vibrate its paddle
as quickly as possible,
taking possession, like 50 times a second.
So, if you talk to any
reinforcement learning researcher,
they have their own set of kind
of horror stories like this
of the system doing what they said,
but not what they wanted.
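A back-of-the-envelope version of the soccer story shows why such a small bonus can swamp the real objective; the numbers below are invented purely for illustration, not taken from Andre and Teller's system:

```python
# Toy numbers only: a goal is worth 1.0, touching the ball earns a shaping bonus
# of 0.01, and an episode lasts 60 seconds.
GOAL_REWARD = 1.0
POSSESSION_BONUS = 0.01
EPISODE_SECONDS = 60

honest_return = 1 * GOAL_REWARD                             # intended play: score a goal
exploit_return = 50 * EPISODE_SECONDS * POSSESSION_BONUS    # vibrate on the ball ~50 times/sec

print(honest_return, exploit_return)   # 1.0 vs 30.0 -- the exploit dominates the incentive
```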
This is something, too, that I think
has very deep connections
to human psychology
and human incentive design.
My favorite example here
comes from my friend and
collaborator, Tom Griffiths,
who's a cognitive scientist at Princeton.
He's also a dad.
One day his five-year-old
daughter was sweeping up
some crumbs on the floor and
put them in the trash can.
He was very proud of this
and so, he did what any parent would do
and activated his reward
function, which is called praise
and said, "Wow, great job, honey!
"You did such a good job sweeping."
And he watched as his
daughter, beaming with pride,
then took the trash can and
dumped it out on the floor
in order to start sweeping even more
and get an even greater helping of praise.
So, Tom had fallen prey
to exactly the same kind
of incentive failure
that David Andre and Astro Teller had.
I think it's quite interesting
that cognitive scientists
and economists are turning to
the computer science community
for ideas about how to
create incentive structures
that don't distort behavior.
But the moral for our purposes
in thinking about alignment,
is simply that it's very hard
to create an incentive system
that doesn't break down and
can't be exploited in some way.
We see this even in toy
environments like Atari games.
In the simplest of Atari
games, it might be possible
to simply reward our system
for getting points in the game.
Space Invaders is one example.
But even by the time we
get to Super Mario World,
points don't really reflect
the actual playing of the game.
You can get points for getting
coins and breaking bricks,
but that's not the point of the game.
The point of the game
is to save the princess.
If you look at, for example,
this boat-racing game,
which is called Coast Runners,
there's a famous example from OpenAI.
They train the system to maximize points
in the boat-racing game,
but it found a loophole
where you can do donuts
in this little harbor
that has these self replenishing power-ups
and it forgets about the race altogether
and just drives off the course
to do these donuts infinitely.
In games that are very stingy with points,
for example, this one
called Montezuma's Revenge,
a system based only on random exploration
and then using reinforcement learning
to get more and more points,
will never figure out
how to get the points at all
and will just sort of give up
and stand there and not move at all.
I think this is very telling.
So, even in these really
toy sandbox domains,
it's very, very difficult to
articulate a reward function
that incentivizes what we
actually want the system to do.
So, what hope do we have in
any kind of real-world setting,
like driving a car through the
street, of making this work?
The conclusion by most people
who think about AI safety,
is that generally-speaking,
it is simply not safe
to manually supply a reward
function to an RL system,
particularly in the real world.
There's always gonna
be some weird loophole,
or something that's going to exploit.
But maybe we can do something at else.
Maybe we can make the learning
of the reward function
part of the machine
learning problem itself.
Maybe we can have the system
learn the reward in our heads,
from us.
My favorite example of this is a paper,
that was a collaboration
between DeepMind and OpenAI.
Paul Christiano led the DeepMind effort
and Jan Leike, excuse
me, the other way around.
They wanted to take something
that would be very obvious
if the system got it right,
but almost impossible
to specify numerically.
And what they settled on was backflips.
So, they wanted to see
if they could get a robot
to do a backflip.
Now, if you think about
this, it's very hard
to come up with some mathematical formula
that determines what a backflip is
as a function of the rotation,
or the torques, or whatever.
It'd be very hard to even do a
demonstration with your body.
It'd be hard to do
demonstration with a joystick,
but you'd know it if you saw it.
So the question was, was that enough?
So, they had this really wonderful setup
where they would have this robot
just wiggling around at random
and they'd show you two video clips
and you'd get this instruction
that says, "Look at the clips
"and select the one in
which better things happen."
I just love how vague that is.
So, which one of these
wrigglings looks infinitesimally
more like a backflip?
And in so doing, the system
would then, behind the scenes,
build this reward model
of what reward it thought
you had in your head that would explain
these preferential
judgments that you'd made
and then it would go and
try to optimize for that
and show you two more video clips.
Again and again, you pick the
left clip, the right clip,
blah, blah, blah.
You do this for about an
hour, which is fairly boring,
but by the end of the hour,
something amazing has happened,
which is that the system is doing
these gymnastically perfect backflips.
I guess you don't quite
have the refresh rate
to appreciate the aesthetic beauty,
but it is tucking itself
in, in order to spin faster,
the same way a figure
skater tucks their arms in.
It's sticking the landing.
I find it even more intriguing,
every person that they did this with,
the system ended up with a backflip
that was slightly
different as if each of us
has our own platonic ideal
of a backflip in our heads.
And I think this is really remarkable
that we have a mechanism
for extracting these aesthetic preferences
out of human brains using nothing
but these binary preference judgments.
I think that is really,
for me, sort of a beacon of
hope that we can get beyond
the sort of manual
specification of reward.
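The core of learning the reward in someone's head from pairwise clips can be written down compactly. The sketch below follows the general shape of preference-based reward modeling (a Bradley-Terry-style loss over clip pairs); it is a simplified illustration rather than the DeepMind/OpenAI code, and the reward model here is just a linear function of hand-made clip features:

```python
import numpy as np

def train_reward_model(clip_features, comparisons, lr=0.01, epochs=200):
    """clip_features: dict clip_id -> np.ndarray feature vector
    comparisons: list of (left_id, right_id, human_chose_left) tuples
    Models P(human prefers left) = sigmoid(R(left) - R(right)) with a linear R,
    fit by gradient descent on the negative log-likelihood."""
    dim = len(next(iter(clip_features.values())))
    w = np.zeros(dim)                                   # reward model: R(clip) = w . features
    for _ in range(epochs):
        for left, right, chose_left in comparisons:
            f_left, f_right = clip_features[left], clip_features[right]
            p_left = 1.0 / (1.0 + np.exp(w @ f_right - w @ f_left))
            grad = (p_left - (1.0 if chose_left else 0.0)) * (f_left - f_right)
            w -= lr * grad                              # make the chosen clip score higher
    return w
```

The agent then optimizes against the learned reward and proposes new clips, and the human's next round of comparisons refines the model; that is the loop behind the backflip result.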
Now, I thought I might say a few words
about large language models,
which is kind of the current
frontier of AI alignment.
I'll try to be pretty quick
because I wanna make sure
that we have plenty of time to chat.
So, you can think of, what to me is,
sort of the current frontier
in machine learning,
which is these so-called
large-language models.
For people who aren't familiar,
it's basically autocorrect on steroids.
So, we have these systems on our phones
that will predict the next
word that we're gonna type
and in fact, they will
secretly dynamically change
the input buttons on
our keyboard invisibly,
to make more typical letters
literally wider on the screen
and easier to hit.
But what would it mean to
have an autocomplete system
that could autocomplete
an entire term paper?
So, that is what the current
generation of these models can do,
for people who aren't familiar.
I think it's really stunning.
Here are a few examples of
me playing with OpenAI's GPT
So, I said, "The following is an essay
"for Mrs. Simpson's fourth-grade class
"about what would happen
if dogs could talk,"
and it produces a reasonable
fourth-grade essay
about what dogs would say.
I could say, "This is my AP
English exam about symbolism
"in the works of Herman
Melville," and out comes, I think,
a very reasonable high-school-level essay
about symbolism in Moby Dick.
I even asked it to give
me my own remarks to you
on the alignment problem
and it came up with
something fairly generic,
but I think totally coherent.
And why stop at prose?
Why not do code as well?
Microsoft has this thing
called GitHub Co-pilot,
which will autocomplete your code for you.
You can say, "The following
is a Python function
"that will, blah, blah, a
blah," and (blows a raspberry)
out comes Python.
DeepMind has the same thing.
But there is a fundamental
alignment problem here
between the autocomplete objective
and these systems actually
being helpful to us.
So, for example, if you say,
"Explain the moon landing
"to a six-year-old."
You think you're giving
a command to the system,
but the system merely
thinks it is autocompleting
a document that contains that sentence
and so, it autocompletes it
as, "Here's a list of commands.
"Explain blah, explain,
blah, explain blah."
Well, that's not what we
actually wanted, right?
So, there's a fundamental
difference in the objective
that we've trained the system on
and how we actually wanna use it.
There's also this fundamental
problem, as many of you know,
which is that the internet is quite toxic.
So, if you build a language
model on the internet,
it will sometimes say
extremely toxic things.
In fact, this is a problem
that actually gets worse,
not better, as the size
of the model goes up.
Particularly if there's
any hint of toxicity
in your own writing, or your own prompt,
the smarter, the more
powerful the model is,
the more likely it's
going to pick up on that
and make an even more
toxic autocompletion.
The same thing is true of buggy code.
There's a lot of buggy code on GitHub.
If you start writing buggy code,
a powerful language model will realize,
oh, this guy's writing bad
code and in my training data
bad code tends to be followed
by even more bad code.
So, it will take your mistake
and use that as a reason
to give you a bunch of
crappy code in autocomplete,
but that's not what you wanted.
So again, this is the alignment
problem in all of its glory.
We've seen reward modeling techniques
like with the backflip used here,
where they will give the human
raters from Mechanical Turk,
a bunch of possible
outputs from the model.
They will ask the people to
rank them from best to worst
and they will use this to
generate a reward model
of what a good summary of a document is.
And amazingly, this appears to generalize
up to a point.
Of course, there's an
interesting question of,
who are these raters?
Do their preferences coincide
with what our preferences would be?
Whose definition of
toxicity do we care about,
et cetera, et cetera?
In some ways these are some of
the oldest ethical questions
of them all, which is just,
who gets a seat at the table?
As a place to sum up, I
think this is significant.
This reward modeling
technique is significant,
not only within AI itself,
but also for broader forms
of institutional decision-making.
If you think about the way
companies, governments,
et cetera, make decisions,
often they determine
some explicit metric.
There's some meeting at
which a metric is determined
and then that metric is optimized
until there's some future meeting
where they decide to change their mind.
So, for example, at
Tinder, there was a meeting
in 2013, or something, where they decided
that their metric was going to
be maximize swipes per week.
So, any change to their
layout, or their UI, et cetera,
was going to be A/B-tested and
it would only be rolled out
if it increased the average
number of swipes per week.
So, if anyone has heard
people complaining,
"Uh, all I'm doing is
just swiping mindlessly,
"but there's no actual interaction,"
well, that's their objective function.
That's what they're optimizing for.
We saw Facebook, which was
optimizing for many years
for time on site and then
this produced addictiveness.
They scrapped that,
then they started
optimizing for engagement
and this produced the outrage machine
that we now know so well.
But you see this outside of tech too.
There's this desire to
make more evidence-based,
objective assessment in education,
but this results in teaching to the test.
We can optimize for
shareholder returns, or GDP
and this ends up with huge
externalities to the environment,
to inequality, et cetera, et cetera.
So, if 30 minutes of case studies
on machine learning gone
wrong has imparted anything,
it's that this is a highly dubious way
of running a tech startup
let alone the world, right?
It's to just define some objective metric
and just hit the gas pedal.
But maybe reward modeling suggests
that there's some alternative here.
People often ask me if I'm pessimistic,
or optimistic about AI.
If I'm pessimistic it's
because the alignment problem
is, in my view, exactly the way
that human civilization is
already going off the rails
and the AI is just a
force multiplier of that,
our ability to ride bad metrics
into the externalities of no return.
However, if I'm optimistic
it's because I think
we are coming to an understanding
that there's something beyond
the optimization of metrics.
So, we're starting to see social media UI
removing the quantitative
part of the experience
as if to say, don't optimize too hard
for the number of people
that like each photo.
And I think most poignantly,
we have this idea
that you can just present to
people two different worlds,
two different versions of the timeline,
or something as generic
as that and just say,
"Select the one in which
better things happen.
"Which of these
experiences do you prefer?"
You don't have to be
able to even articulate
why you prefer one to another
and you can still allow the system
to have a more nuanced
representation than you would get
by just defining metrics yourself.
Okay, so this is roughly speaking
the current state of
the alignment problem.
I think there's a lot of work
to do and I hope that's clear.
I honestly think this is some
of the most exciting work
that's happening right now,
both on the policy side
and in the technical side.
I think this is likely to be, in my view,
the defining human project of the 2020s,
is how to get machine
learning to work for us,
if not the 21st century.
So,
at its best, I think it offers
us a revelatory encounter
with our own
human nature.
What we value, how we learn, what we want.
On that note, I'll give the
last word to Alan Turing.
He was giving a radio address in 1952
about really early experiments
in machine learning
that he was doing.
And he says to his co-panelist,
I've been doing a bunch of
these experiments lately,
but the system takes an
awful lot of intervention.
It's always learning the wrong thing,
or not learning the right thing
and I'm constantly having to
jump in and correct course.
And his co-panelist says,
"Okay, but who is learning, Turing?
"You, or the machine?"
And he says, "Well, I guess we both were."
Thanks.
(audience applauding)
- Okay, thank you, Brian.
That was so interesting.
So, maybe I can start with a few questions
and then we can open it
up for more discussion.
But maybe I can start with,
your last quote from Turing
is a good place to start
and one of the things
I wanted to ask about,
which is co-evolution
of humans and machines.
- Mm.
- So, machine learning has advanced.
Some people like to joke
by stochastic graduate student dissent.
(audience chuckles)
And that's one type of
survival of the fittest
algorithms evolution
that has advanced the
technology for many years.
But as these systems have become
fielded and used by people,
there's this co-evolution
where, as we use them,
they become adapted to us and
we become adapted to them--
- Yeah.
- as that Turing quote
hints at.
And without becoming...
We could become grandiose about it
and sort of draw an analogy
between the agricultural and
the industrial revolutions--
- Yeah.
- that the AI revolution
is leading us into now.
But what are your thoughts on how machines
and human societies are
gonna co-evolve in this way?
- Yeah.
I think that's a great
line of inquiry.
There's a quote that comes
to my mind from Hannah Arendt
where she's talking,
in the mid-20th century
about behaviorism and she says,
"My problem with behaviorism
"is not that it's false, but
that it could become true."
And I think about that quote
a lot in the context of AI.
I think it is true in
almost any computer system,
whether it's traditional
object-oriented programming
where you have to design this mini world
with nouns and verbs, or
something like machine learning
where you're determining a
category structure, et cetera.
These are simplifications.
You know, we use the word model.
There's the famous quote from George Box,
"All models are false,
but some are useful."
And I think there's a danger that
the models that we're
building become so powerful
that they reshape the reality
that they were originally approximating
and then the reality itself conforms
to the assumptions that
were made in the model.
So, for example,
if every car on the road ran
the Uber software from 2018
and killed jaywalkers, then
people would stop jaywalking,
or the people who did
jaywalk would be killed.
So, the assumption that
the model makes of,
there are no jaywalkers,
would become true.
- [John] Hmm.
- You sometimes see this.
There can be kind of a confirmation bias,
even in autocomplete systems where...
I, for many years, my phone
would autocorrect the word ill
to capital I, apostrophe L-L,
even in a syntax of like,
"Yeah, I'm still feeling
a little bit ill,"
it would replace it with I'll.
I don't know if 2022 is the
year we finally stop doing that.
But
my response to that is to
fight it the first couple times
and then to just cave and be like,
okay, you want me to
say sick, oh, I'm sick.
(John chuckles)
That's something that
really gets under my skin
as a creative writer, which
is that, it has always been
the role, the calling of
poets, of painters, et cetera,
to find authentic idiosyncratic
modes of expression,
to sort of buck the traditions
and the conventions.
That's hard enough if you're
just living in a society
that has certain norms,
but Picasso wasn't painting on a canvas
that was literally
pushing his brush strokes
into more conventional shapes under him.
That's essentially what we have,
the textual equivalent now.
- Hmm.
- That's another way in
which these models are false,
but could become true.
That a team at Apple, or
whatever, deploys this model
and they see, oh, our accuracy
keeps going up and up.
We're getting better and
better at predicting the word
that the person wanted to type.
Well, maybe that's true,
or maybe the person's just giving in.
So, you're impoverishing the language.
So, that is a kind of co-evolution.
There's an Oxford philosopher
named John Lucas who says,
"If and when we pass the Turing test,
"it will not be because machines
have become so intelligent,
"but because human speech
has become so wooden."
So, that's the kind of
thing that worries me from
a co-evolution standpoint.
- Yeah. Yeah.
Really interesting.
Another question goes
to the first part of it
when you talk about the data problem.
I guess this is the one that's now keeping
Andrej Karpathy
(Brain chuckles)
awake at night.
- At night, yeah.
- Another way of talking about this is,
garbage in, garbage out.
- Yeah.
- And you can see this very
easily in word embedding.
So, I think you talk
about this in your book.
And you can see it in
the large language models
as well.
- Yeah.
- That's why OpenAI pulled the plug
on the access to those
language models initially.
But the question I wanna
ask about this is that,
one way of thinking about
the issue here is that,
we're training these
systems on human data,
but we're holding the systems
to a higher standard--
- Yeah.
- than we hold humans to
and I think rightly so.
- [Brian] Yeah.
- So first, in a domain
like self-driving cars,
it's easy to say that we want
the statistical rate of
accidents to be much lower
than it would be for
humans, but in other domains
like language, it's not so clear.
And if we can't even agree
on fundamental questions
of morality amongst ourselves,
how can we prescribe them for machines?
So, what is your thinking on this notion
of holding the machines
to a higher standard?
- [Brian] Yeah.
I think there's two things there.
The first part is this idea of
how can you do better than human morality?
There is a quote from a
Google researcher saying
this idea that we need machine systems
to embody human values is not good enough,
'cause human values
are not good enough--
- Right.
- as they are today.
And it's interesting
because you would see this
in the '80s, '90s, 2000s in
the game-playing literature
because back then a lot of the methodology
for even the way Deep
Blue played chess was
we're gonna start with a huge
database of grandmaster games
and learn to predict the way
that the grandmaster plays
and then we'll play like that.
And you started to see,
even in computer Checkers,
people are writing, yeah, but
our program is gonna
get to a certain point
where it's actually better than a human
because it's going to
play a different move
than the one the human
played in that situation.
We're gonna tag that as an error.
So, how can we ever hope
to transcend that barrier?
And essentially the way that you do it
is by just having the program
play itself from scratch.
This is like AlphaZero and
so forth and you can sort of
throw the human data
away later on,
or just never use it
from the very beginning.
So, is there a way to
do something like that
in the moral domain?
Which sounds crazy, but I think
there's a number of people
who are asking that question seriously.
One of the people who comes
to my mind is Paul Christiano.
He was at OpenAI.
He now runs his own thing
called the Alignment Research Center.
He has this idea, which is
called Humans Consulting HCH,
which is a recursive acronym.
The idea is that you
ask people some question
and you give them a computer
system as a resource,
as a research assistant.
It could be like, "Is such
and such medical procedure
"ethical or not?"
And then your computer
assistant is empowering you
to be better informed about that
than you would otherwise be,
but it's also trying to predict
your ultimate decision.
And as it becomes better able
to predict your decision,
it's better able to inform
you, or empower you.
In theory, you can get a
sort of bootstrapping effect,
or a virtuous circle where it
can then essentially push you
deeper into your own values
than you would've gotten on your own,
but without supplying anything
else from the outside.
I think that's a really tantalizing idea.
I've yet to see anyone
really take a crack at it
as a research project,
but I think that's really interesting.
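A minimal sketch of the recursion in the HCH proposal Brian describes, purely to show the shape of the idea; the human_answer stand-in and the depth limit are assumptions of this illustration, not features of any published implementation:

```python
# Humans Consulting HCH, truncated at a finite depth. `human_answer` is a
# hypothetical stand-in for a real person (or a model trained to predict
# that person); a real instantiation would decompose questions far more
# carefully than this toy does.

def human_answer(question: str, ask_assistant) -> str:
    """A 'human' who may delegate one sub-question to an assistant."""
    sub_answer = ask_assistant("What considerations bear on: " + question)
    return f"Given {sub_answer!r}, my judgment on {question!r} is ..."

def hch(question: str, depth: int) -> str:
    """Each assistant is itself the same human-plus-assistants process."""
    if depth == 0:
        # At the base of the tree, the human answers unaided.
        return human_answer(question, lambda q: "(no assistant available)")
    return human_answer(question, lambda q: hch(q, depth - 1))

print(hch("Is such-and-such medical procedure ethical?", depth=2))
```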
The other aspect of what
I think is going on here,
is this question of how can
machines embody human values
when human values are heterogeneous?
People don't always agree.
So, in the OpenAI research,
the language research
I talked about at the very end,
agreement between their Mechanical Turkers
on which summary is a better
summary, is about 73%.
So, it's good enough that
we can just kind of go
with the modal answer.
But what do you do when
people don't agree,
or where there's sort of
multimodal preferences?
You see this with self-driving
cars where it's like,
if half of the people would swerve around
the object to the left and
half swerve to the right,
you don't wanna just split the difference
and plow into it.
(John chuckles)
So, work on how to deal with
heterogeneous preferences.
I mean, even something as seemingly banal
as a recommendation system
for parents who have children
that use their Spotify account,
their Spotify is forever polluted
with these dumb children's songs
and they can never get back
to the music that they like.
So, even multimodal
preferences for something
that's seemingly everyday
and banal as music,
we still haven't figured that out.
And it's sort of an open research problem.
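A toy numerical version of the swerving example, just to make the failure mode concrete; the demonstration values are invented for illustration:

```python
# If half of the demonstrators steer left of an obstacle and half steer
# right, the *average* command drives straight into it. Keeping the modes
# separate (here, by a trivial sign-based clustering) avoids the blend.
steering_demos = [-1.0, -0.9, -1.1, +1.0, +0.9, +1.1]  # negative = left, positive = right

mean_action = sum(steering_demos) / len(steering_demos)
print("averaged command:", round(mean_action, 3))      # ~0.0, i.e. plow into it

left = [a for a in steering_demos if a < 0]
right = [a for a in steering_demos if a >= 0]
chosen_mode = left if len(left) >= len(right) else right
print("mode-respecting command:", sum(chosen_mode) / len(chosen_mode))
```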
- Yeah.
And then replace parents and children with
one human society, one country and another
and (chuckles) these
problems are magnified.
So, I wanna open it up to
questions, but before I do,
one question that
I think a lot of people
would like to ask you,
which is that...
I think you really have a
remarkable ability to distill
very complex ideas in machine learning
and communicate them very effectively.
What would you say to somebody
who wants to learn about machine learning,
but is not in a STEM field?
- Hmm, hmm.
- Maybe they're seeking to
serve in a leadership role--
- Yeah.
- and to communicate some of these ideas
in a way that is both
substantial and impactful.
- Yeah.
I think that's a great question.
One of the things that I've
been thinking about is...
During the writing of the book,
a law was passed in
San Francisco, my city,
mandating the use of this
thing called the Arnold Tool,
which is a statistical
risk assessment instrument
in city courts.
I went with the person who's
now the DA in San Francisco
and sat in on arraignment
hearings for a day.
It was maybe a week, or two
after that law went into effect.
It was very interesting
hearing these judges
get this readout of this
statistical system and say,
okay, your trial date
is four weeks from now.
I have to decide whether you stay in jail,
or go home in the next four weeks.
This thing says you're an eight out of 10.
That sounds kind of bad
(John chuckles)
so, I guess you're gonna
go to jail.
I'm exaggerating slightly,
but that was the kind
of thing that one heard.
And I've thought a lot
about how for many people
in non-technical fields, you know,
you could be judge sitting on
the bench for 20, 30 years,
suddenly there's a law
that's passed that says,
now you have to work arm in arm
with this machine learning system.
You don't really know...
You don't have a working knowledge of
any of these bias issues,
or what might make someone
an exception to the rule, or
even really what the risk is
that the risk assessment is predicting.
There are many cases
where the risk assessment
explicitly says, "This
is not for sentencing,"
but people still use it for sentencing.
I think there's a great need for that.
That's part of my motivation
in writing the book, honestly.
I was feeling that there
was a broad community
of non-computer scientists,
whether that's policy people,
doctors,
lawyers, et cetera, that I
could try to serve by offering
a little bit of a gentle
machine learning 101 curriculum
full of personal narratives
and hopefully readable enough
that it would be useful.
So, that's something
I've thought a lot about.
I've also,
a couple weeks ago, I did an event with
the AI Subcommittee of the UK Parliament
and they asked me this question of like,
all of the parliamentarians
want to know more about AI.
What do we do?
I recommended maybe there should be
some kind of consulting
group, or center of knowledge
within the government that
can then get loaned out
as needed to advise, or work with people
as these sorts of things come up.
I wouldn't mind seeing
something like that in the U.S.
I think it's very much
an open question though
'cause I do feel that fluency
with some of these basic ideas
in machine learning, is becoming just
part of the core curriculum
of being a citizen--
- Yeah.
- being a person.
I mean,
even knowing in a self-driving
car when to be more vigilant
and when to be more relaxed.
One of the interviews
that I did in the book
was with Dean Pomerleau who
did one of the first rides
in a self-driving car in the early '90s
with a computer that was
like 1/10th as powerful
as a first-generation Apple watch.
He still drove on a highway for two hours.
I asked him what the experience
was like and he said,
well, I knew that if I
went through a tunnel,
I was gonna have to hover
my hands over the wheel
because I knew that this
was out of the distribution
of normal sun-lit road markings.
So, that's the kind of thing where now
every Tesla owner should have
that same kind of spidey-sense
of, oh, there's a weird thing happening
where there's a full moon,
but there's a forest fire,
so it looks like a yellow
light hanging in midair.
I'm gonna just (chuckles)
pay a little bit more attention
now than I normally would.
I do think that's just kind
of part of being a person now.
So, I don't know exactly
what is the best way to address that,
but it's something that's
very much on my mind.
- [John] Yeah, gets at this
issue of trust that we--
- Yes.
- talked about.
- Yeah, absolutely.
- [John] Oh, thank you.
So, let's
open up to questions.
(Brian chuckles)
Lori.
- [Lori] Yeah, so thank you for the talk.
So, I found it very interesting,
but I'm just not fully
understanding the optimistic note
that you ended on.
- Sure.
- [Lori] That's in particular.
So, take the example
you gave where humans...
Maybe this will help with the mask.
Okay.
I'm not even sure it's on.
- It's recording.
- [Lori] Oh, okay, okay.
So, can I pull it down to--
- Yes.
- [Lori] Okay.
So, in that example, what was key was that
the humans that did the better thing,
knew what a backflip was.
It was something they recognized
and so they could make a judgment.
But the real issue for us is
recognizing, or for machines,
is recognizing entirely
new kinds of events,
like a pandemic--
- Hmm.
- [Lori] or a president that
doesn't follow the rule of law,
or something interesting
thing called the internet
and that there's radically
new technological advances.
And when something like that
happens, those rough judgments
of this is better than that,
in other words, those new things,
first, we're terrible at
describing them before they arrive,
or at predicting them, although
humans are very good
at a kind of one shot learning, so then
they can make judgements
- Uh-hm.
- [Lori] quite quickly.
Machines are not like that.
Moreover, these better-than judgements
that the machine might be
relying on, could, I think,
quite straightforwardly, be invalidated
because everything changes,
or deep things change--
- Yeah.
- [Lori] in all kinds of unexpected ways.
- Yeah.
- [Lori] That just seems to be...
And that's the real problem.
It's not that using machines for things
that we already have control over.
No, it's about trust--
- Yeah.
- [Lori] with entirely
new categories of events.
So, I was just sort of
deeply unclear on...
I mean that seems--
- Sure. Yeah.
- [Lori] like a nice thing, but that's not
for me--
- Yeah.
- [Lori] The real alignment problem.
- I think there's
fundamentally, as you say,
it is the nature of the
world that our world models
are constantly being invalidated
and we're constantly needing to revise
our sense of how things work,
or what the relevant categorical
distinctions even are.
I think your point is well-taken.
The area that, for me, feels relevant
to what you're describing
is the idea of uncertainty,
calibrated uncertainty in a model.
So, for example, the Uber
collision that I referenced,
part of what was going
on was that, as I said,
it didn't recognize what a jaywalker was.
It had this very brittle
category distinction
between pedestrian and cyclist
and so, if you read
essentially the black box output
of what the system was thinking
in the five seconds before
the crash, it was like,
"Oh, that's a person.
"No, it's a cyclist 'cause
I can see the frame of...
"No, they're definitely walking.
"Their feet are on the...
"No, there's definitely a handle bar,"
and it's flickering between
these two categories.
I think that alone
should have been evidence
that the system didn't really
know what it was dealing with
and its category structure was inadequate.
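One crude way to operationalize that observation: treat instability of the top label over successive frames, or persistently high entropy, as a signal that the system should fall back to cautious behavior. The per-frame probabilities and thresholds below are invented for illustration:

```python
import math

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

# Hypothetical per-frame probabilities over (pedestrian, cyclist, other).
frames = [
    [0.55, 0.40, 0.05],
    [0.35, 0.60, 0.05],
    [0.52, 0.43, 0.05],
    [0.30, 0.65, 0.05],
]

top_labels = [max(range(3), key=lambda i: p[i]) for p in frames]
flips = sum(a != b for a, b in zip(top_labels, top_labels[1:]))
avg_entropy = sum(entropy(p) for p in frames) / len(frames)

if flips >= 2 or avg_entropy > 0.8:
    print("Classification unstable -> treat the object as unknown and slow down")
```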
- [Lori] Sorry, there's a
difference between uncertainty,
where you're not sure if it's
A, B, or C and unknown, okay,
which is a different kind of uncertainty
in probabilistic literature.
Then you haven't got, it's not
oh, is it A, is it B, is it C?
It's some other kind of
thing you can't classify
and that's the problem
I'm trying to target.
- Sure.
- So, I think it's different.
- Yeah.
So, I mean, there's work on this.
Tom Dietterich at Oregon
State University has what he calls
open category learning.
So, there's this idea of how
do we do object classification
when one of the classifications
is, "I have no idea."
He did a project with the
EPA on, of all things,
identifying flies in a stream.
And it turns out that a lot of the stuff
that you catch in a net
if you put it in a stream,
is not any kind of fly at all.
(audience chuckling)
So, it ended up being that
they needed a very robust
"this isn't any of those things" mechanism.
So, that sort of work, I think,
speaks to what you're talking about.
Obviously I think we need to go further.
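A much cruder cousin of the open-category idea, just to show the basic move of reserving an explicit "none of the above" outcome; the class names, probabilities, and threshold are invented, and the actual open-category methods are considerably more sophisticated:

```python
KNOWN_CLASSES = ["mayfly", "stonefly", "caddisfly"]

def classify(probs, threshold=0.7):
    # If the model isn't reasonably confident in *any* known class,
    # refuse to force the example into one of them.
    best = max(range(len(probs)), key=lambda i: probs[i])
    if probs[best] < threshold:
        return "none of the above"
    return KNOWN_CLASSES[best]

print(classify([0.85, 0.10, 0.05]))  # confident -> "mayfly"
print(classify([0.40, 0.35, 0.25]))  # unsure -> "none of the above"
```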
- I'm gonna help the MC.
- Question back here.
- Yep, hold on.
I'm coming your way.
Give me one second.
- [John] Just speak up.
- [Undergrad] Yeah, sure.
So, thanks for coming.
I really enjoyed this.
I was interested in studying
the alignment problem.
I'm an undergrad.
I was interested in learning
about it and researching it
and I couldn't do that at Yale.
There was no one doing that at Yale.
So, I had to go work for Stuart Russell
and Jacob Steinhardt.
I had to go to Berkeley, in other words.
I'm sure you love Berkeley.
I love Berkeley,
(all chuckling)
but how do we get more people
to care about these problems?
How do we get more people
at places like Yale
to be actually thinking
about this problem.
When people are building
bridges, no one's like,
oh, there's that one lab
over there thinking about
how to make sure the
bridge doesn't collapse,
but that's not the case in AI.
So, how do we actually
sort of get more people
to be researching this?
- I would say for my part,
I think growing the field is
very much one of the ambitions
that I had with the book.
As I said, one of the ambitions
was to help non-technical people
feel like they can acquire
a sort of working
fluency in machine learning.
And the other of my main goals,
was to bring more people into the fold.
So, I'm encouraged that you're motivated.
And I think, partly, students
can exert gentle pressure
on their professors and say,
hey, I would really love
to do some kind of summer
project, or research project
in this direction.
Is there anything cool that you know about
that I could work on?
That's a way that, sort
of from the bottom up,
you can help just sort of grow
the mind-share in this area.
I'm impressed that you took
matters into your own hands.
I also think there might
be some collaborations,
cross-department, that are
not immediately obvious,
but you might find some kindred spirits
in neighboring departments.
We were talking earlier,
one of my closest friends
and closest collaborators, Tom Griffiths,
I met when I was a
Computer Science undergrad.
I was really frustrated
because I didn't feel
like anything that happening
in the CS Department
reflected my desire to learn
more about human cognition.
And I bumped into this guy
from the Cognitive Science
Department that said,
"Well, yeah, I basically use
the tools of computer science
"to think about people," and I was like,
"Oh, there's the guy!
"He was just in the other department."
So, that might also be possible.
- [Undergrad] Sure.
- [Male Student] Thanks again.
Thanks for this talk.
This was super interesting.
I also have been...
Speak up a little more?
I've also been really interested
in this alignment problem.
I actually think I'm gonna
be working on it next year.
For part-time research
doing this kind of thing.
Particularly I wanted to ask,
with respect to what you
were just talking about,
with cognitive science, where do you see,
'cause obviously the center
of the bullseye here seems
to be that this is a CS problem
of some sort.
Where do you think the most
productive ways neuroscientists,
psychologists, cognitive
scientists and general philosophers
can interface with this
problem in a way that...
Where are the areas where
this is the most neglected
and where you think the
most work needs to be done
from that kind of top-down perspective?
- Yeah.
Really cool question.
So, ways that cognitive
science intersects here,
I mentioned temporal difference
learning and dopamine,
that's like one of the seminal things,
but there's still work to be
done there in neuroscience
that continues to this day.
I talked a little bit
about reward shaping.
So, there's this idea from
Andrew Ng and Stuart Russell,
it's called Policy Invariance
Under Reward Transformations.
It's like, what are the
different types of incentives
that you can use that don't result
in a different final policy of your agent?
That's really useful, I think,
not just in computer science.
It's useful in thinking about
management, or economics.
The term alignment problem is borrowed
from the economics literature.
So, it's interesting.
There are cognitive
scientists like Falk Lieder
at Max Planck in Europe
who is using this idea
of reward shaping and policy invariance
to think about optimal
gamification strategies.
So, how you can break down a task
and assign points to each sub-task,
such that you can incentivize the person
to not procrastinate.
Very cool and that draws
directly on the computer science
of reward shaping.
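The core construction in that Ng and Russell paper (written with Daishi Harada) is potential-based shaping: add gamma * phi(next state) minus phi(state) to the reward, for some potential function phi over states, and the optimal policy is provably unchanged. The gridworld and potential below are invented for illustration:

```python
GAMMA = 0.99

def phi(state):
    """A hypothetical potential: negative Manhattan distance to the goal."""
    x, y = state
    goal_x, goal_y = 4, 4
    return -(abs(goal_x - x) + abs(goal_y - y))

def shaped_reward(state, next_state, env_reward):
    # The shaping term nudges the agent toward the goal at every step
    # without changing which policy is ultimately optimal.
    return env_reward + GAMMA * phi(next_state) - phi(state)

print(shaped_reward((0, 0), (1, 0), env_reward=0.0))  # a small positive nudge
```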
I mentioned Montezuma's Revenge,
which is a game that's
very stingy with rewards.
It turns out a lot of the progress there
has been done through what's
called intrinsic motivation.
How can we make an agent
that's motivated internally,
rather than by external
points from the environment,
but by its own sense
of exploration and play
and novelty-seeking.
That was a case where computer scientists
like Marc Bellemare at DeepMind and Mila,
and Deepak Pathak, who's at CMU,
they turned to
developmental psychologists,
people who study infant cognition,
people like Alison Gopnik,
Laura Schulz at MIT,
and said, what do you have for
us in terms of formal models
of novelty-seeking
behavior, or self-surprise
that we can plug into our
Montezuma's Revenge thing.
They plugged in this...
They sort of translated it into
reinforcement learning terms
and plugged this novelty
drive into the agent
and then suddenly
(fingers clicking)
it beats the game because
it's playing the game
the reason a human would,
which is to just see
what's on the other side of the door.
That turns out to be
a more powerful driver
than just looking for external rewards
even if all you're measuring it by,
is its ability to
achieve external rewards.
So, that sort of work
has a lot of connection
to developmental cognition.
Very cool.
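A bare-bones count-based version of the novelty bonus, just to show the shape of the idea; real curiosity-driven agents (prediction-error bonuses, pseudo-counts, and so on) are far more elaborate, and the constant here is arbitrary:

```python
from collections import defaultdict
import math

visit_counts = defaultdict(int)

def total_reward(state, extrinsic_reward, beta=0.1):
    # Pay the agent a little extra for states it has rarely visited,
    # on top of whatever the game itself pays.
    visit_counts[state] += 1
    return extrinsic_reward + beta / math.sqrt(visit_counts[state])

print(total_reward("room_1_door", 0.0))  # first visit: full bonus
print(total_reward("room_1_door", 0.0))  # repeat visit: smaller bonus
```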
And then lastly, this
reward modeling work,
or the field of what's called
inverse reinforcement learning,
you're observing an agent making decisions
and you have to infer the
reward that they're optimizing.
Well, there's kind of a huge
problem here, which is that
we know pretty well that humans are not
perfect reward maximizers.
So, I think there's a lot
of really cool neuroscience,
cognitive science, psychology, economics,
behavioral economics work to be done,
giving a better theoretical framework
to the inverse reinforcement
learning community
about how to work backwards
from how a person,
how you observe a person behaving
and what inferences you're
kind of allowed to make
about what you think their values are.
I think that's a huge open question
and there's so many fields
that really touch that.
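One standard way the inverse reinforcement learning literature relaxes the perfect-maximizer assumption is to model the person as noisily (Boltzmann) rational: the probability of choosing an option scales with exp(beta * value). Here two invented candidate reward functions are scored against one observed choice, purely as an illustration:

```python
import math

def choice_probability(values, chosen, beta=2.0):
    """Likelihood of the observed choice under a noisily rational chooser."""
    weights = [math.exp(beta * v) for v in values]
    return weights[chosen] / sum(weights)

observed_choice = 1  # the person picked option 1 of three
candidate_rewards = {
    "reward A": [1.0, 0.2, 0.1],  # says option 0 is best
    "reward B": [0.2, 1.0, 0.1],  # says option 1 is best
}

for name, values in candidate_rewards.items():
    print(name, round(choice_probability(values, observed_choice), 3))
# Reward B explains the behavior better, so inference shifts toward it,
# without assuming the person always acts optimally.
```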
- Well, Let me just add
a little bit to that.
So, for many years, decades,
machine learning advanced
by just sort of crude abstractions
for how the mind might work,
the prefrontal neural network.
And things progressed
(John chuckles)
quite remarkably far just given
those types of abstractions.
But now it's progressed to the point
where we can take more
clues from human cognition
to inform artificial systems
as Brian was talking about.
But we're also at the point
where we can turn it
in the other direction
and we can develop models
for learning that can be used
as kind of a computational
laboratory for informing
our understanding for how the
brain works and this something
that's very much of interest
to the Wu Tsai Institute.
So, I think we're at this
kind of turning point
where it becomes mutually profitable
to study artificial intelligence
and human intelligence simultaneously.
- [Ted] Time for a final question.
- [Thalia] Hi, I'm Thalia.
I'm a Physics grad student.
My question is, I wanted to
pick up on the point you raised
about the kind of analogy
to nutritional labeling,
being clear about how an
algorithm was trained,
or what its objective function is.
So, I wonder about what you
see as the future of that?
How it could be built on, or expanded?
In particular, I'm imagining, you mention
what the goal was of Tinder and Facebook,
how they measured success.
And I imagine a user might
have a different experience
if that were labeled really
big at the top of Facebook,
when you're on there.
What's the goal we're trying
(Thalia chuckles)
to get out of you?
- Yeah.
- [Thalia] Now, they might not
wanna do that, but (chuckles)
I wonder what you think
might be the future
for legislation in that regard,
or other means of
incentivizing more transparency
about what is the machine
learning algorithm's goal
that you are interfacing with?
- Yeah.
I love this question.
So, I think this is a huge area
for some kind of regulation
down the line.
I don't know exactly what it
would be, but I'm completely
of a mind with what you're saying,
that we need something like that.
It's interesting to think
about the failures of policy
to provide meaningful consent, right?
If you go to any European website,
there's just a cookie pop-up
that you're like, okay, sure.
So, that's an example of it,
sort of disclosure done wrong
in a way that you just create
sort of consent fatigue
without any actual insight.
It's interesting to me, sites like Reddit,
this is not the most
impressive thing in the world,
but Reddit gives you
four different metrics
by which to sort the replies.
You can sort by new, you can sort by best,
you can sort by controversial.
And I think that is both,
there's both an aspect
of transparency there
and also an aspect of agency.
We're increasingly seeing
Instagram and others relenting
and saying, okay, if you
really want reverse chron,
you can have it.
Twitter does the same thing now.
I think
providing meaningful oversight,
or meaningful transparency in that way...
Something I would really like to see
is a system where, let's say on Twitter,
you could articulate your goals as a user.
You're like, I wanna learn
more about machine learning.
I wanna learn more about public policy.
I wanna look at my friends,
cute pets, or whatever
and you can have some range of goals.
Then for everything that
appears in your timeline,
it's incumbent on Twitter to
tell you which of your goals
it thinks that piece
of content will serve.
And you can maybe have
some feedback mechanism
where you say, no, that's
not an example of a cute pet,
or whatever.
That's not informing me about AI.
But that sort of puts
you in the driver's seat
and then it becomes up
to them to make the case
for how they're achieving your goals.
Something like that, I think,
is quite within the realm
of 2022 machine learning
and will become increasingly possible,
to just verbally articulate
what you wanna do
and have it sort of get the gist.
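A rough sketch of what that could look like mechanically; embed here is a toy keyword-counting stand-in for a real text-embedding model, and nothing in this snippet describes how any actual feed works:

```python
def embed(text):
    # Toy stand-in for a sentence embedder: count a few topic keywords.
    vocab = ["machine", "learning", "policy", "pet", "cat", "dog"]
    words = text.lower().split()
    return [words.count(w) for w in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = (sum(a * a for a in u) ** 0.5) * (sum(b * b for b in v) ** 0.5)
    return dot / norm if norm else 0.0

user_goals = ["learn more about machine learning", "see cute pet photos"]
post = "A thread explaining how machine learning models are evaluated"

scores = {goal: cosine(embed(post), embed(goal)) for goal in user_goals}
best_goal, best_score = max(scores.items(), key=lambda kv: kv[1])
print(best_goal if best_score > 0 else "serves none of your stated goals")
```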
I wanna say one last thing about
the criminal justice space.
So, we've been talking about
these huge reward models,
language models
that have trillions of
parameters, et cetera.
A lot of what I think is some
of the most exciting work
happening in machine learning,
is not happening with large models,
but it's happening with
exhaustive search over
all possible simple models.
(people chattering)
There's a computer scientist
at Duke named Cynthia Rudin
who has done work on, both
in the medical context
and in the criminal justice context,
on creating optimal simple models
that are competitive
accuracy-wise with neural networks
and come with a sort of
certificate of optimality
that says, of all simple
models we can prove
that this is the best one.
For example, she has a model,
which I can probably tell you
the entire thing off the top of my head.
It's: if you are under 20 and male,
predict that you'll be
rearrested within two years;
or if you are under 23 and
you have two priors or more,
or if you have three priors or more,
predict rearrest within two
years; otherwise predict
you will not be rearrested
within two years.
That is the entire model.
I've just told you the entire thing
and that is competitive for
accuracy against COMPAS,
which is this proprietary
closed-source thing
that costs however much money
and no one knows what is really in it
and people write papers,
trying to reverse engineer it
because it's such a big deal.
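The rule list as Brian states it, written out as code. Treat the thresholds as his spoken paraphrase rather than the canonical published model; the actual published rule lists use specific, validated cut-offs:

```python
def predict_rearrest_within_two_years(age, sex, priors):
    if age < 20 and sex == "male":
        return True
    if age < 23 and priors >= 2:
        return True
    if priors >= 3:
        return True
    return False

print(predict_rearrest_within_two_years(age=19, sex="male", priors=0))    # True
print(predict_rearrest_within_two_years(age=30, sex="female", priors=1))  # False
```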
In some ways I think it is now
inexcusable for a government
to use a proprietary tool
when they could use something
that fits into a single English
sentence because effectively
these models, in my view,
become extensions of the law.
If this is determining, or
at the very least, informing
whether you are released
when you're arrested or not,
pending trial, so you
haven't been tried yet,
that is, in my view, an
extension of the law.
There's the famous Lawrence
Lessig quote, "Code is law."
Well, I think we should demand
that the law be on the books
and legible to the people
that are beholden to the law.
So, to the degree that machine
learning models are the law,
then we should demand
that they're both legible
and publicly available.
I would support legislation for example,
that mandated the use of models like that
in those situations.
- [Ted] So, I think sadly we've
reached the end of our time,
but please join in thanking
Brian and also John
for moderating the speeches.
(audience applauding)
- Thank you.
(audience applauding)
(calming xylophone music) |
d2c0ce7f-3523-4e46-a52e-0679d9e0c10b | trentmkelly/LessWrong-43k | LessWrong | Mic-Ra-finance and the illusion of control
|
b90405a0-12a5-4739-a6d5-95e50d4e7ee9 | trentmkelly/LessWrong-43k | LessWrong | The Teleological Mechanism
I just wrote this up as a comment, but I think it deserves to be a top level post because it's an important idea. Additionally, this formulation is crisp enough that folks should be able to usefully engage with it.
In this seminal cybernetics essay a way of thinking about the concept we might variously call care, concern, telos, or purpose is laid out. This is relevant both to thinking about goal-directed behavior in AI and other non-human systems and to thinking about why humans do things.
I reference this concept a lot, but I've not (yet) had a good reference post to link about it. Usually I default to pointing at something about Heidegger's Sorge (tr. "care" or "concern"), but Heidegger is notoriously hard to read and lots of people don't like him. Also there's not a detailed argument for why care is so important, so I find myself trying to make the case all the time. Hopefully this will put an end to that.
So in that essay, first, they consider systems that have observable behavior, i.e. systems that take inputs and produce outputs. Such systems can be either active, in that the system itself is the source of energy that produces the outputs, or passive, in that some outside source supplies the energy to power the mechanism. Compare an active plant or animal to something passive like a rock that only changes when heated by an outside source, though obviously whether or not something is active or passive depends a lot on where you draw the boundaries of its inside vs. its outside (e.g. is a plant passive because it gets its energy from the sun, or active because it uses stored energy to perform its behaviors?).
Active behavior is subdivided into two classes: purposeful and purposeless. They say that purposeful behavior is that which can be interpreted as directed to attaining a goal and purposeless behavior does not. They spend some time in the paper defending the idea of purposefulness and their vague definition of it. I'd instead propose we think of these t |
11e30977-ed53-4c13-a094-ad74d7c947c9 | trentmkelly/LessWrong-43k | LessWrong | Meetup : First Zurich LW Meetup
Discussion article for the meetup : First Zurich LW Meetup
WHEN: 12 November 2011 03:00:00PM (+0200)
WHERE: near Sihlcity, Zurich
For any Zurich-based LWers - let's meet up. I suggest resilience as a topic for discussion. Afterwards, we can figure out whether to do something more regularly.
Meet in our offices near Sihlcity between 2-3pm, then head out for food/drink depending on interests.
PM me for exact address and contact details if you would like to attend.
Discussion article for the meetup : First Zurich LW Meetup |